PRESIDENT'S COUNCIL

ON INTEGRITY AND

EFFICIENCY



REVIEW

of

APPLICATION

SOFTWARE MAINTENANCE

in

FEDERAL AGENCIES









September 26, 1996



THE PRESIDENT'S COUNCIL ON

INTEGRITY AND EFFICIENCY'S REVIEW OF

APPLICATION SOFTWARE MAINTENANCE IN FEDERAL AGENCIES

EXECUTIVE SUMMARY

In September 1986, the President's Council on Integrity and Efficiency (PCIE) initiated the Computer Systems Integrity Project (CSIP). The project is a multi-task effort focusing on controls, security, and other integrity issues related to the entire data processing systems life cycle. The objectives of the overall project are to assess the integrity of Federal computer systems and develop recommendations for Governmentwide improvements in standards, procedures, documentation, and operations affecting computer systems integrity.

To date, four tasks have been completed. Task 1, Survey of Agency Implementation of Computer Systems Integrity Requirements, was completed with the June 1988 issuance of a summary PCIE report. Task 2A, Review of General Controls in Federal Computer Systems, was completed in October 1988 when a summary PCIE report was issued. Task 2B, Review of Application Controls in Federal Contract Tracking Systems, was completed with the April 1991 issuance of a summary PCIE report. The lead agency for Task 3, Followup Audit on the Implementation of the PCIE CSIP Task 1 and Task 2A Audit Report Recommendations, concluded that the issuance of a consolidated summary PCIE report would produce few benefits, and therefore such a report was not prepared. For a detailed description of these tasks, see Appendix A, page 33.

In May 1992, the Committee approved sponsorship of CSIP Task 4, Review of Application Software Maintenance in Federal Agencies. Software maintenance was selected as an area for review for two primary reasons. First and foremost, controls over application software modifications are vital to maintaining the reliability and integrity of sensitive and mission-critical application systems of Federal agencies. Failure to adequately control application software maintenance exposes an organization to corruption of system information--which in turn can lead to erroneous management decisions and/or the inability to meet organizational missions. The rapidly growing inventory of Federal application software systems is increasing the need for a strong, disciplined, clearly defined software maintenance approach which will guarantee the usefulness and integrity of the data maintained by those systems. Second, Federal managers have historically undervalued the economic importance of a sound, comprehensive software maintenance program.

The primary objectives of Task 4 were to identify common software maintenance problems across Federal Agencies and to identify Governmentwide recommendations to oversight agencies. Specifically, agencies determined the (1) adequacy and completeness of agency software maintenance policies and procedures, (2) effectiveness of controls over software changes, (3) extent to which software maintenance is budgeted and tracked, and (4) effectiveness of agency management of software maintenance contractors. The Environmental Protection Agency Office of Inspector General had overall responsibility for coordinating this task. See Appendix E, page 51, for a description of the audit methodology.

Seven Inspectors General offices (listed in Appendix C, page 41) participated in this PCIE project task. The Inspectors General reviewed the management of the software maintenance process (e.g., policies and procedures, contract management, etc.) for major mission-support/administrative applications within their agencies. Based on the results of field work conducted from November 1992 through November 1995, six individual agency reports were prepared and issued (see Appendix D, page 49). The remaining participant, the Social Security Administration, made no recommendations and issued a close-out memorandum in lieu of a report.

This review identified three areas that need to be addressed Governmentwide in order to strengthen the management and implementation of agencies' software maintenance programs. These areas include: (1) improper identification and accounting for software maintenance costs; (2) ineffective controls and oversight over software maintenance contracts and contractors; and (3) inadequate management of the software change control processes. These issues are presented in the Software Maintenance Weaknesses section of this report, including comprehensive Governmentwide recommendations, and are summarized in the following paragraphs.

-- Federal Departments and Agencies do not properly identify and account for software maintenance costs. For example, agencies do not consistently include the costs of administrative and clerical salaries, materials, computer usage, telecommunications, overhead costs, and Federal employee salaries in software maintenance costs. In addition, agencies reported cost-benefit analyses are not consistently prepared, updated, and/or maintained for application systems. As a result, agencies are not in a position to make informed budgeting and planning decisions regarding systems operations and maintenance. In addition, software maintenance expenses are being inaccurately reported to the Office of Management and Budget (OMB). These weaknesses occurred primarily because agencies are not defining software maintenance consistently, and Federal accounting requirements are not being followed.

-- The contracts used by agencies for software maintenance do not adequately protect the Government's interests. Agencies are not consistently awarding contracts that motivate contractors to perform at optimal levels. In addition, the monitoring of these contracts presents an unstructured, poorly controlled approach to the management of maintenance for critical Government applications. Consequently, agencies lack control over software maintenance activities and rely heavily on contractors. These weaknesses are primarily due to the use of non-performance-based contracting methods and agencies not specifying performance measures in software maintenance contracts. Furthermore, Federal employees lacked the technical expertise required to adequately oversee maintenance contractors.

-- Federal Departments and Agencies are not adequately managing the software change control and configuration management(1) processes. Specifically, Federal Departments and Agencies cited weaknesses with the change request process, change review and approval, and testing. As a result, agencies lack assurance that (1) applications will perform as intended and (2) management controls will adequately safeguard the integrity of the applications. These weaknesses resulted from agencies not following software maintenance policies, standards, and procedures for requesting, approving, and testing changes.

The actions prescribed for OMB, together with the agency-specific actions recommended by the respective Inspectors General, should substantially strengthen application software maintenance Governmentwide.


TABLE OF CONTENTS

EXECUTIVE SUMMARY i

INTRODUCTION

Background 1

Overview of Governmentwide Requirements 3

Scope 4

Definition of Software Maintenance 5

SOFTWARE MAINTENANCE WEAKNESSES

Software Maintenance Costs 9

Recommendations 16

Software Maintenance Contracts And Contractor Oversight 17

Recommendations 23

Change Control Management 25

Recommendation 30

APPENDICES

A - Background of the PCIE CSIP Completed Tasks 33

Task 1 33

Task 2A 33

Task 2B 34

Task 3 35

B - Federal Software Maintenance Criteria and Guidance 37

Public Laws 37

Office of Management and Budget 38

Federal Information Processing Standards Publications 39

National Bureau of Standards Special Publications 40

C - Profile of Agency Missions and Applications Reviewed 41

Department of Housing and Urban Development 41

Department of State 42

Environmental Protection Agency 43

National Aeronautics and Space Administration 44

National Science Foundation 45

Railroad Retirement Board 46

Social Security Administration 47

D - Individual Agency Reports Issued for Task 4 49

E - Audit Methodology 51

Policies, Procedures, and Standards 51

Application Software Maintenance Lifecycle Management 51

Contract Management 51

Cost Management 52

IRM Staff Qualifications 52

Internal Control Issues 52

F - Acronyms 53

INTRODUCTION



Background

Federal agencies continue to rely heavily on information technology. Accordingly, over the years, agencies have continued to invest significantly in information technology. In fiscal year 1996, executive agencies expect to obligate more than $26 billion for information technology (IT) investments and operations. This IT spending represents a critical investment of public tax dollars affecting virtually every government function.

Information technology continues to play a vital role in supporting the operations and missions of Federal agencies. This growing use of, and reliance on, information systems introduces opportunities for fraud, waste, and abuse in Federal programs. This, in turn, raises concern for maintaining an adequate level of computer system integrity and security. Thus, cost-effective internal controls are essential to properly manage and safeguard computer operations and sensitive agency data.

From this perspective, in September 1986, the President's Council on Integrity and Efficiency (PCIE) Computer Committee initiated the Computer Systems Integrity Project (CSIP). The Computer Committee's name was changed to the Information Technology Committee in 1990. The project is a multi-task effort focusing on controls, security, and other integrity issues related to the entire data processing system life cycle. The objectives of the overall project are to assess the integrity of Federal computer systems and develop recommendations for Governmentwide improvements in standards, procedures, documentation, and operations affecting computer systems integrity.

To date, four tasks have been completed. Task 1, Survey of Agency Implementation of Computer Systems Integrity Requirements, was completed with the June 1988 issuance of a summary PCIE report. Task 2A, Review of General Controls in Federal Computer Systems, was completed in October 1988 when a summary PCIE report was issued. Task 2B, Review of Application Controls in Federal Contract Tracking Systems, was completed with the April 1991 issuance of a summary PCIE report. The lead agency for Task 3, Followup Audit on the Implementation of the PCIE CSIP Task 1 and Task 2A Audit Report Recommendations, concluded that the issuance of a consolidated summary PCIE report would produce few benefits, and therefore such a report was not prepared. For a detailed description of these tasks, see Appendix A, page 33.

In May 1992, the Committee approved sponsorship of CSIP Task 4, Review of Application Software Maintenance in Federal Agencies. Software maintenance was selected as an area for review for two primary reasons. First and foremost, controls over application software modifications are vital to maintaining the reliability and integrity of sensitive and mission-critical application systems of Federal agencies. Failure to adequately control application software maintenance exposes an organization to corruption of system information--which in turn can lead to erroneous management decisions and/or the inability to meet organizational missions. The rapidly growing inventory of Federal application software systems is increasing the need for a strong, disciplined, clearly defined software maintenance approach which will guarantee the usefulness and integrity of the data maintained by those systems. Second, Federal managers have historically undervalued the economic importance of a sound, comprehensive software maintenance program.

The General Accounting Office's (GAO) February 1981 report entitled "Federal Agencies' Maintenance Of Computer Programs: Expensive And Undermanaged" (GAO/AFMD-81-25) stated software maintenance in the Government was largely undefined, unquantified, and undermanaged. Specifically, Federal agencies spend millions of dollars annually on computer software (program) maintenance, but little is done to manage it. GAO studied 15 Federal computer sites in detail, and received completed questionnaires from hundreds of others. All reported large maintenance efforts, but few had good maintenance records and very few managed software maintenance as a function. GAO concluded that improvements can and should be made both in reducing maintenance on existing software and in constructing new software to reduce its eventual maintenance costs. The report stated that published estimates of programmer time spent on the maintenance function range from 50 to 80 percent. Perhaps even more significant, some estimates say that up to 60 percent of all Automatic Data Processing (ADP) dollars will be spent on software maintenance. The report recommended that the National Bureau of Standards issue a standard definition and specific technical guidelines for software maintenance, and that heads of Federal agencies require their automatic data processing managers to manage software maintenance as a discrete function.

GAO's July 1995 report entitled "Information Technology Investment: A Governmentwide Overview" (Report No. GAO/AIMD-95-208) stated the total amount of annual federal spending for IT is unknown because the Office of Management and Budget (OMB) does not collect comprehensive IT-related budget data on a Governmentwide basis. For example, agencies with annual IT-related obligations under $2 million prior to FY 1996 and under $50 million for FY 1996 and beyond, as well as the legislative and judicial branches of the federal government, are not required to report IT obligation data to OMB. In addition, computers that are embedded in weapon systems are not included in the reporting to OMB. The Department of Defense estimates the costs of these computers to be $24 billion to $32 billion annually. Agencies do not break out IT obligations as separate line items in their budget documents, but rather include this information within program or administrative costs. Software maintenance costs are included in the "Support Services" category along with maintenance costs for equipment, telecommunications, data entry, training, planning, studies, facilities management, custom software development, system analysis and design, and computer performance evaluation and capacity management.

Overview of Governmentwide Requirements

Governmentwide policies and procedures for software maintenance have been prescribed in general terms by a variety of Federal sources, including Congress and OMB. For example, Public Law (P.L.) 96-511, Paperwork Reduction Act of 1980 requires Departments and Agencies to ensure (1) ADP and communications technologies are acquired and used in a manner which improves service delivery and program management; and (2) the collection, maintenance, use and dissemination of information by the Federal Government is consistent with applicable laws relating to confidentiality, including the Privacy Act. Furthermore, OMB Circular A-123, Internal Control Systems, requires agencies to establish and maintain a system of internal controls to provide reasonable assurance that Government resources, including information resources, are protected from fraud, waste, unauthorized use, and misappropriation.

The National Institute of Standards and Technology (NIST) also provides guidance on software maintenance through Federal Information Processing Standard Publication (FIPS Pub.) 106, Guideline on Software Maintenance. This Guideline is intended for use by both managers and maintainers. It addresses the need for a strong, disciplined, clearly defined approach to software maintenance. It emphasizes that the maintainability of the software must be taken into consideration throughout the lifecycle of a software system. Software must be planned, developed, used, and maintained with future software maintenance in mind. The techniques discussed in this Guideline are recommended for use in all Federal ADP organizations. Specifically, this publication includes information on:

For more information on Federal criteria pertaining to software maintenance issues see Appendix B, page 37.

Scope

Seven Inspectors General (IG) Offices participated in Task 4: the Departments of Health and Human Services(2) (HHS), Housing and Urban Development (HUD), and State (DOS); Environmental Protection Agency (EPA); National Aeronautics and Space Administration (NASA); National Science Foundation (NSF); and the Railroad Retirement Board (RRB). The Inspectors General reviewed the management of the software maintenance process (e.g., policies and procedures, contract management, etc.) for major mission-support/administrative applications within their agencies. Total Government obligations for Support Services, which includes software maintenance, totaled almost $8.4 billion for fiscal year 1995. A profile of the missions of each agency and the applications reviewed during this task is included in Appendix C, page 41.

Audit work at each participating agency was conducted in accordance with the U.S. GAO's "Government Auditing Standards." Based on the results of field work conducted from November 1992 through November 1995, six individual agency reports were prepared and issued (see Appendix D, page 49). The remaining participant, SSA, made no recommendations and issued a close-out memorandum in lieu of a report. The results of audit work at one agency indicated that the quality and quantity of software maintenance cost information were adequate, management of contractor performance was effective, and overall management of the software maintenance lifecycle process was effective. Therefore, this report consolidates the findings of the five individual agency assessments that reported weaknesses in the aforementioned areas, and presents recommendations addressing Governmentwide issues.

The primary objectives of Task 4 were to identify common software maintenance problems across Federal Agencies and to identify Governmentwide recommendations to oversight agencies. Specifically, agencies determined the (1) adequacy and completeness of agency software maintenance policies and procedures, (2) effectiveness of controls over software changes, (3) extent to which software maintenance is budgeted and tracked, and (4) effectiveness of agency management of software maintenance contractors. The EPA/OIG had overall responsibility for coordinating this task. As the task leader, EPA/OIG staff developed a Task 4 survey guide, audit guide, and data collection instrument which participants used in performing this audit work. See Appendix E, page 51, for a description of the audit methodology.

Definition of Software Maintenance

In accordance with FIPS Pub. 106, "Guideline on Software Maintenance," software maintenance is the set of activities which result in changes to the originally accepted (baseline) product set. These changes consist of corrections, insertions, deletions, extensions, and enhancements to the baseline system. Generally, these changes are made in order to keep the system functioning in an evolving, expanding user and operational environment.

Functionally, software maintenance activities can be divided into three categories: perfective, adaptive, and corrective.

Perfective maintenance includes all changes, insertions, deletions, modifications, extensions, and enhancements which are made to a system to meet the evolving and/or expanding needs of the user. Perfective maintenance refers to enhancements made to improve software performance, maintainability, or understandability. It is generally performed as a result of new or changing requirements, or in an attempt to augment or fine tune the software. Activities designed to make the code easier to understand and to work with, such as restructuring or documentation updates (often referred to as 'preventive' maintenance) and optimization of code to make it run faster or use storage more efficiently are also included in the perfective category. Perfective maintenance comprises approximately 60 percent of all software maintenance.

Adaptive maintenance consists of any effort which is initiated as a result of changes in the environment in which a software system must operate. These environmental changes are normally beyond the control of the software maintainer and consist primarily of changes to the: (1) rules, laws, and regulations that affect the system; (2) hardware configurations, e.g., new terminals, local printers, etc.; (3) data formats, file structures; and (4) system software, e.g., operating systems, compilers, utilities, etc. Approximately 20 percent of software maintenance falls into the adaptive category.

Corrective maintenance refers to changes necessitated by actual errors (induced or residual "bugs") in a system. Corrective maintenance consists of activities normally considered to be error correction required to keep the system operational. By its nature, corrective maintenance is usually a reactive process where an error must be fixed immediately. Not all corrective maintenance is performed in this immediate response mode; but all corrective maintenance is related to the system not performing as originally intended. The three main causes of corrective maintenance are: (1) design errors, (2) logic errors, and (3) coding errors. Corrective maintenance accounts for approximately 20 percent of all software maintenance.
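The FIPS Pub. 106 taxonomy above can be restated in a small illustrative sketch. The Python below is not part of the Guideline; the category names and approximate shares simply mirror the percentages cited in the preceding paragraphs.

    from enum import Enum

    class MaintenanceCategory(Enum):
        """FIPS Pub. 106 functional categories of software maintenance."""
        PERFECTIVE = "enhancements for evolving user needs, performance, and maintainability"
        ADAPTIVE = "changes driven by the operating environment (laws, hardware, data formats, system software)"
        CORRECTIVE = "fixes for design, logic, and coding errors"

    # Approximate share of all maintenance effort, as cited above.
    APPROXIMATE_SHARE = {
        MaintenanceCategory.PERFECTIVE: 0.60,
        MaintenanceCategory.ADAPTIVE: 0.20,
        MaintenanceCategory.CORRECTIVE: 0.20,
    }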

SOFTWARE MAINTENANCE WEAKNESSES


SOFTWARE MAINTENANCE COSTS





Federal Departments and Agencies do not properly identify and account for software maintenance costs. For example, agencies do not consistently include the costs of administrative and clerical salaries, materials, computer usage, telecommunications, overhead costs, and Federal employee salaries in software maintenance costs. In addition, agencies reported cost-benefit analyses are not consistently prepared, updated, and/or maintained for application systems. As a result, agencies are not in a position to make informed budgeting and planning decisions regarding systems operations and maintenance. In addition, software maintenance expenses are being inaccurately reported to OMB. These weaknesses occurred primarily because agencies are not defining software maintenance consistently, and Federal accounting requirements are not being followed.

OMB Budget Criteria

OMB Circular A-11, dated June 6, 1995, entitled "Preparation and Submission of Budget Estimates" requires agencies that obligate more than $50 million annually for information technology activities to submit a report on obligations for the following categories:





computer performance evaluation and capacity management.

Also, agencies are required to submit an annual report on obligations and full-time equivalent (FTE)(3) employment data for financial systems. FTE employment should include Federal employees identified as directly designing, operating, or maintaining financial systems.

OMB Circular A-109, dated April 5, 1976, entitled "Major Systems Acquisitions," states each agency acquiring major systems should maintain a capability to: (1) predict, review, assess, negotiate, and monitor lifecycle cost(4); (2) assess acquisition cost, schedule and performance experience against predictions, and provide such assessments for consideration by the agency head at key decision points; (3) make new assessments where significant cost, schedule, or performance variances occur; (4) estimate lifecycle costs during system design concept evaluation and selection, full-scale development, facility conversion, and production, to ensure appropriate trade-offs among investment costs, ownership costs, schedules, and performance; and (5) use independent cost estimates, where feasible, for comparison purposes.

OMB Circular A-130, dated February 8, 1996 and entitled "Management of Federal Information Resources" requires Federal agencies to account for the full costs of operating information technology facilities and recover these costs from the users. This Circular also requires Federal agencies to implement a system to distribute the full cost of providing services to the user. The term "full cost" is comprised of all direct, indirect, and general and administrative costs incurred in the operation of the facility. These costs include personnel, equipment, software, supplies, contracted services, space occupancy, intra-agency services, and other services. In addition, agencies are required to conduct a cost-benefit analysis for information systems to be used as budget justification material, as well as become part of the ongoing management oversight process to ensure prudent allocation of scarce resources. The cost-benefit analysis should be updated over the information system life cycle because of such factors as significant changes in projected costs and benefits, significant changes in information technology capabilities, and major changes in requirements (including legislative or regulatory changes).
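For illustration only, the A-130 "full cost" requirement described above can be sketched as a simple cost roll-up and distribution. The cost elements follow the Circular's list; the dollar amounts, user names, and usage shares are hypothetical examples, not prescribed values.

    # Hypothetical full-cost roll-up for an information technology facility,
    # following the cost elements listed in OMB Circular A-130.
    facility_costs = {
        "personnel": 1_200_000,
        "equipment": 450_000,
        "software": 300_000,
        "supplies": 40_000,
        "contracted services": 800_000,
        "space occupancy": 150_000,
        "intra-agency services": 60_000,
        "other services": 25_000,
    }

    full_cost = sum(facility_costs.values())

    # Distribute the full cost to user organizations in proportion to usage,
    # so that costs are recovered from the users as the Circular requires.
    usage_share = {"Program Office A": 0.55, "Program Office B": 0.30, "Program Office C": 0.15}
    recovery = {user: round(full_cost * share) for user, share in usage_share.items()}
    print(full_cost, recovery)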

Improper Identification of Software Maintenance Costs

Four of the five agencies are not properly identifying software maintenance costs. For example, agencies do not consistently include the costs of administrative and clerical salaries, materials, computer usage, telecommunications, overhead costs, and Federal employee salaries in software maintenance costs. One agency reported $4.7 million of system engineering software maintenance costs were categorized as development instead of maintenance. Another agency reported most system managers were only able to provide the contract dollars spent in support of their systems. In addition, agencies do not, and in some cases cannot, accurately distinguish software maintenance costs from development and operations. Agencies reported ADP support service contracts often cover many ADP-related activities, therefore making it difficult, if not impossible, to separate the cost of maintenance from other ADP development and operational activities. Without accurate software maintenance cost information, management cannot make informed decisions regarding system enhancements or replacements. According to National Bureau of Standards Special Publication 500-106 entitled "Guidance on Software Maintenance", federal managers responsible for software application systems estimate that 60 to 70 percent of the total application software resources are spent on software maintenance. Using that basis(5), it can be estimated that approximately $2.7 to $3.1 billion was spent on software maintenance efforts Governmentwide in fiscal year 1995. These figures are conservative because they do not include related software costs in the services, support services, and supplies categories. If we include 25 percent of the dollars in these categories, our Governmentwide software maintenance cost estimates would increase by $2 to $2.5 billion.
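The arithmetic behind these estimates can be restated in a short sketch. The 60 to 70 percent rule of thumb and the 25 percent adjustment come from the text above; the dollar bases are assumptions chosen only to illustrate the calculation and approximate the report's figures, since the actual basis is defined in the report's footnote.

    def maintenance_estimate(base_billions, low_share=0.60, high_share=0.70):
        """Apply the NBS SP 500-106 rule of thumb that 60 to 70 percent of
        application software resources are spent on maintenance."""
        return base_billions * low_share, base_billions * high_share

    def related_category_adjustment(category_total_billions, share=0.25):
        """Add 25 percent of the services, support services, and supplies
        categories, as the report does for its conservative caveat."""
        return category_total_billions * share

    low, high = maintenance_estimate(4.5)          # assumed software base of roughly $4.5 billion
    extra_low = related_category_adjustment(8.0)   # assumed related categories of $8 to $10 billion
    extra_high = related_category_adjustment(10.0)
    print(f"Core estimate: ${low:.1f} to ${high:.1f} billion")
    print(f"Additional related costs: ${extra_low:.1f} to ${extra_high:.1f} billion")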

GAO reported in its July 1995 information technology report (see page 3) that the need to achieve high returns on information technology investments and reduce systems development risks has never been greater, given the public's demand for a government that works better and costs less. Increasingly, Federal agencies' ability to improve their performance and reduce costs depends on automated data processing information needed to make good decisions, hold down costs, and improve services to the public.

Managers cannot effectively evaluate the efficiency of software maintenance efforts without knowing the extent of resources devoted to maintenance. Attempts to reduce maintenance may fail because of lack of information. Costs associated with the software maintenance function should be monitored and recorded. Segregating the costs of the different work functions involved in data processing is a prerequisite for effective management of ADP costs. Some specific reasons for accumulating such data are:

-



-



System modification requests should be reviewed and evaluated before any actual work is performed on the system. The evaluation should consider, among other things, the costs and benefits of the change. In particular, perfective maintenance changes must be thoroughly analyzed, since they are optional in the sense that failure to implement them will not adversely affect system performance. Changes should be approved only if the benefits outweigh the costs. Because corrective and adaptive maintenance are not optional, cost-benefit analysis is most appropriately used to determine the best option for applying the required changes.
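As a simple illustration of the decision rule just described, the sketch below approves an optional (perfective) change only when its benefits outweigh its costs, while treating corrective and adaptive changes as required work for which the analysis instead guides how the change is applied. The function and dollar values are hypothetical.

    def approve_change(benefits: float, costs: float, mandatory: bool) -> bool:
        """Perfective changes are optional: approve only if benefits exceed costs.
        Corrective and adaptive changes are not optional, so they proceed; the
        cost-benefit analysis then guides the choice among implementation options."""
        return True if mandatory else benefits > costs

    print(approve_change(benefits=120_000, costs=80_000, mandatory=False))  # True
    print(approve_change(benefits=30_000, costs=80_000, mandatory=False))   # False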

Cost-Benefit Analysis

Three agencies reported cost-benefit analyses are not consistently prepared, updated, and/or maintained for application systems. Two agencies reported that none of the system owners were able to provide a current updated cost-benefit analysis. System managers at another agency relied on contractor estimates to determine the level of effort and costs involved with a proposed modification. For many systems, these estimates, if performed at all, were solicited after the requested modification had been approved by the change control committee. These estimates did not weigh the benefits to be derived by implementation of the proposed software change as opposed to the costs involved in the modification process. Further, several system managers responded that decisions to implement a change were based on whether sufficient funds were available to do the job and whether system functionality was affected. Software maintenance was performed until available funds ran out; the benefits to be realized were a secondary consideration.

As a result of these weaknesses, system managers are not in a position to make informed and effective short- and long-term decisions, such as choosing the best enhancement, upgrade, or improvement based on accurate cost information or cost-benefit analysis. In particular, agencies cannot determine when systems should be evaluated for redesign or replacement because of excessive maintenance costs. A current cost-benefit analysis serves a valuable purpose, especially when allocating scarce budgetary resources. These analyses help ensure that the most cost-effective alternative that satisfies system requirements is chosen. Without them, scarce budgetary resources may be misallocated. Non-cost-effective changes to the software could be approved by management and implemented into the production environment, despite the marginal benefits which would be realized from the modification. This could result in a lack of funds for those proposed software changes which are either mandatory in nature or necessary for proper application functioning.

Inaccurate Reporting

Software maintenance expenses are being inaccurately reported to OMB. For example, one agency reported it did not capitalize at least $38.1 million of software costs, which distorted the accuracy of its current financial statements. This agency reported that all costs associated with the purchase, development, and enhancement of the ten systems it reviewed were treated as annual expenses. Another agency stated its annual information systems submission, "Report on Obligations for Major Information Technology Systems," is inaccurate because of improper maintenance cost classifications. Problems such as failing to include overhead charges, ignoring the cost of Federal personnel, commingling maintenance costs with other ADP-related expenses, and failing to capitalize and depreciate maintenance costs limit agencies' ability to report costs accurately. Additionally, agencies cannot prepare accurate financial statements and therefore are unable to meet the requirements of the Chief Financial Officers Act(6).

Reasons for Software Maintenance Cost Deficiencies

These weaknesses occurred primarily because agencies are not defining software maintenance consistently, and Federal accounting requirements are not being followed. Two of the participants stated their agencies do not have a standard definition of software maintenance. Three participants reported maintenance is defined by the types of software maintenance (i.e., perfective, adaptive, and corrective), while two others adopted the FIPS Pub. 106 definition. The remaining participant stated maintenance is distinguished from development by the length of time it takes to complete the task (e.g., a maintenance effort is generally performed within a shorter timeframe than a software development effort). This resulted in system managers not interpreting and applying a consistent definition of software maintenance. Specifically, the same types of costs (e.g., administrative and clerical salaries, materials, computer usage, telecommunications, overhead costs, and Federal employee salaries) are not consistently accounted for as maintenance costs. One participant reported the absence of a formal, consistently applied definition of software maintenance had created a significant understatement of maintenance costs. This participant requested various IT managers to review the project charges of seven systems in their tracking system over an 18 month period. The managers reclassified the charges based on the definition and functional classifications in FIPS Pub. 106. This review disclosed that maintenance costs were understated by $4.7 million in the tracking system. The primary reason for this understatement is that projects are categorized as either maintenance or development based on criteria and classifications that are not functionally segregated or consistent with FIPS Pub. 106. Also, it was noted the classifications used in both the project tracking and customer billing and allocation processes were inconsistent.

Further, information managers are not complying with Federal accounting requirements. These requirements dictate that information managers establish and implement mechanisms to track, differentiate, and capitalize maintenance costs. However, five agencies reported they lack a comprehensive process or system to accumulate costs. One agency stated it has not implemented a cost accumulation process for major information systems. The account code structure in its financial system does not provide for cost tracking by information systems, and a cost accounting process has not been established to properly allocate cost by system. Another agency reported its cost allocation and billing process is complex, disjointed, requires much manual intervention, and does not interface with the central accounting system. In addition, the project cost chart of accounts lacks specific accounts for activities necessary for cost measurement purposes.

Recommendations:

We recommend that OMB:

-- Emphasize to Federal agencies the need to accurately identify and account for all software maintenance costs.

-- Reiterate to Federal agencies the OMB Circular A-130 requirements for conducting and updating cost-benefit analyses so that system managers are in a better position to make informed and effective decisions when allocating budgetary resources.

-- Reiterate to Federal agencies the FIPS Pub. 106 definition of software maintenance.

SOFTWARE MAINTENANCE CONTRACTS AND CONTRACTOR OVERSIGHT





The contracts used by agencies for software maintenance do not adequately protect the Government's interests. Agencies are not consistently awarding contracts that motivate contractors to perform at optimal levels. In addition, the monitoring of these contracts presents an unstructured, poorly controlled approach to the management of maintenance for critical Government applications. Consequently, agencies lack control over software maintenance activities and rely heavily on contractors. These weaknesses are primarily due to the use of non-performance-based contracting methods and agencies not specifying performance measures in software maintenance contracts. Furthermore, Federal employees lacked the technical expertise required to adequately oversee maintenance contractors.

Office of Federal Procurement Policy Criteria

The Office of Federal Procurement Policy (OFPP) Letter 91-2 entitled "Service Contracting," dated April 9, 1991, states it is the policy of the Federal Government that agencies (1) use performance-based contracting methods to the maximum extent practicable when acquiring services, and (2) carefully select acquisition and contract administration strategies, methods, and techniques that best accommodate the requirements. Furthermore, contract types most likely to motivate contractors to perform at optimal levels shall be chosen. Fixed price contracts are appropriate for services that can be objectively defined and for which risk of performance is manageable. For such acquisitions, performance-based statements of work and measurable performance standards and surveillance plans shall be developed and fixed price contracts shall be preferred over cost reimbursement contracts. To the maximum extent practicable, contracts shall include incentive provisions to ensure that contractors are rewarded for good performance and quality assurance deduction schedules to discourage unsatisfactory performance. These provisions shall be based on measurement against predetermined performance standards and surveillance plans. In addition, agencies must document in the contract files the reasons for those instances where performance-based contracting methods are not used.

OFPP Pamphlet #4, entitled "A Guide for Writing and Administering Performance Statements of Work for Service Contracts," dated October 1980, provides guidelines for writing and administering performance statements of work for service contracts. It describes a systematic means to develop statements of work and quality assurance surveillance plans in order for agencies to define and measure the quality of contractors' performance.

Government's Interests Inadequately Protected

Three of the five agencies identified weaknesses with the type of contracts being used in the performance of software maintenance and/or the monitoring of these contractors. The contracts used by agencies for software maintenance do not adequately protect the Government's interests. Agencies reported they are not awarding performance-based contracts on a regular basis for software maintenance work. Instead, they are using labor hour contracts. One agency reported it almost exclusively awards labor hour (i.e., Cost Plus Fixed Fee (CPFF)) contracts for software development and maintenance efforts when other, cost-effective and performance-oriented contract types could be used. A key factor in determining if a CPFF contract should be used is the procuring agency's inability to sufficiently describe the work requirements. Consequently, CPFF contracts place the Government at a higher risk than other contract types because of this lack of identifiable work requirements. The use of CPFF contracts as the sole contracting vehicle minimizes the contractor's incentive to perform well and control costs. CPFF contracts should be used only after other performance-oriented contract types have been considered. While generally appropriate to accomplish the type of maintenance support an agency needs when maintaining its software, labor hour contracts require constant monitoring because the total cost of the project is not fixed. In a contract based on labor hours, the contractor is reimbursed strictly on the basis of hours worked at the fixed labor rates specified in the contract. Since a fixed price is not established by the contractor to deliver a final product, close monitoring of these contracts is necessary. Procurement officials stated it is acceptable, and quite common, to use labor hour contracts for software maintenance and enhancement projects; however, they acknowledged that labor hour contracts provide no incentive for the contractor to control costs or improve labor efficiency. Contract types that motivate contractors to perform at optimal levels include:



In addition, the monitoring of labor hour contracts presents an unstructured, poorly controlled approach to the management of maintenance for critical Government applications. One agency reported Contracting Officer Representatives (COR) were generally not aware of the extent of their contract monitoring responsibilities(7). Some CORs had not received appropriate training and therefore were not aware of what their official COR duties entailed. Consequently, the CORs did not always fulfill the role that was intended, and some critical monitoring of contractor performance went undone. Some CORs failed to adequately monitor the technical performance of the contractor they were to oversee, and others did not specifically know how they could assure the Government received what it needed through its software maintenance contracts. A COR lacking the day-to-day knowledge of a contract, a schedule of the work required, and adequate technical knowledge may not have enough information to sufficiently monitor software maintenance contractors. Reviews of contract documents provided strong evidence that CORs routinely certify contract invoices for payment without supporting documentation. Even when contractors provided documentation in support of their bills, CORs frequently certified invoices for payment without evaluating/verifying the support. For example, one agency received two invoices that charged over $17,000 for 372 hours of work recorded by one contract employee during a two-week period. The COR approved payment for this total. However, agency auditors' work revealed that the billing was actually for a 2-month period. The COR stated that he relies on the Contracting Officer (CO) to question invoice charges that appear to be inappropriate or too high. However, the CO said he is not in a position to question the contractor's charges because he does not routinely receive copies of contractor invoices. One agency identified that COR duties were sometimes informally transferred to others, usually lower-grade employees. Transferring monitoring duties is problematic for two reasons. First, the officially designated COR may not have any assurance that the monitoring duties appropriate for the contract are fully and competently carried out. Second, the person to whom the COR duties are redelegated frequently does not have (1) day-to-day knowledge of the contract employees' activities, (2) overall knowledge of the terms of the contract, and (3) access to either the contract invoices or the contract terms and conditions.

Consequently, agencies lack control over software maintenance activities and rely heavily on contractor personnel to perform maintenance activities on critical Federal systems. One agency reported Federal management involvement focused on the initial review and approval of proposed changes, and chose to rely on contractor personnel to perform and oversee those reviews and controls which were built into the final stages of software modification. The assigned program office review board, or similarly responsible management personnel, had little or no interaction with the modification process once the software change had been sent to the contractor for work. Contractor personnel performed the actual design, coding, and testing of software changes for most application systems reviewed. Although independent peer reviews and unit tests were often part of the contractor's procedures, in many cases the originating program office did not participate in an oversight capacity. Auditors concluded the extent and frequency of program office interface during the final stages of change control processing was often minimal and, in some cases, limited to administratively routing the modified code to the data center for implementation. Another agency concluded the extent of software maintenance that is routinely performed by contractors increases the government's vulnerability to excessive contractor costs and limits its ability to control software maintenance activity. The heavy use of contractors has had negative effects on operations. For example, a help desk at one agency is staffed entirely with contract employees because of a shortage of knowledgeable Federal employees to help users with their problems. This situation raises management control issues because contractors may not report user complaints or may initiate unnecessary software changes thereby increasing the need for their services. In addition, if a contract fails to specify required contractor personnel qualifications, agencies' options are limited if the contractor does not supply qualified personnel.

Performance-Based Contracting Methods Should Be Utilized

These weaknesses are primarily due to the use of non-performance-based contracting methods, and agencies not specifying performance measures in software maintenance contracts(8). One agency reported that none of the contract files reviewed contained the required documentation to justify why performance-based contracting methods were not used. They also noted that all of the contracts contained identical "boilerplate" Statements of Work (SOW) except for the staffing requirements section. Agency officials stated "boilerplate" SOWs are used because they allow flexibility in responding to time-sensitive legislative and administrative requirements. However, SOWs for time-sensitive tasks require specific, not "boilerplate," performance and quality requirements to ensure successful timely completion of tasks with minimal risk of rework. Another reason given for using "boilerplate" SOWs is that program officials cannot always specify software requirements. However, the SOW is the element of a contract that should be used to clearly communicate to the contractor the Government's expectations and serve as a basis for evaluating the contractor's performance. To avoid developing vague and imprecise SOWs, measurable performance standards and acceptable quality levels must be identified within the SOW for the individual contracts. This gives agencies the necessary leverage to ensure that contractors' performance levels are of an acceptable quality and the Government only pays for products and services that meet contracting standards.

In addition, agencies are issuing contracts that do not contain appropriate measures to ensure adequate contractor performance. Agencies are issuing contracts that are vague, do not specify deliverables, lack product delivery dates, and lack a description of the government's responsibility for reviewing and approving contract deliverables. For example, a contract employee who helped maintain an application at one agency was never asked to provide performance and progress reports. The non-personal services contract under which he performed did not specify, other than in general terms, what was expected. While the contractor's work was supposed to be governed by the contract, he performed a variety of assignments not specified in any agreement. He was not required to submit invoices to document the hours worked. In fact, the contractor did not formally provide any products to management as an outcome of his contractual relationship, but was paid every two weeks based on an agreed-upon annual salary. Officials at another agency indicated they rely on their personal and professional expertise, as well as user acceptance, in determining the acceptability of deliverables and for measuring contractor performance. However, without establishing and incorporating measurable performance standards(9), acceptable quality levels(10), and quality assurance documents in the SOW, agencies cannot hold the contractor accountable for the quality of products or services provided. For example, in the ADP solicitation reviewed at one agency, the performance measurement entitled "Quality of Project Performance" indicated the contractor "will be evaluated to the extent the work is error free." However, a more appropriate measurable performance standard could be that 100 percent of the software changes moved into the production environment do not require a correction release. The associated Acceptable Quality Level would indicate that deviation of more than 5 percent of releases will result in a deduction. The performance indicator would be whether the change operates in the production environment without error for two or more production cycles. Without performance measurements, agencies cannot determine whether the millions of dollars paid each year for contractors to develop and maintain application systems are spent wisely. Also, if performance data is not collected and analyzed, agencies cannot prevent serious operational problems caused by deficient contractor performance. Furthermore, Federal employees lacked the technical expertise required to adequately oversee maintenance contractors. One agency reported that most of its application systems relied on contractor personnel to perform software maintenance activities because of the absence of qualified full-time employees. The lack of technical expertise among Federal employees available to properly perform software maintenance and monitor technical contractors has hindered the Government's ability to reduce its reliance on contractors.
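For illustration, the performance standard and Acceptable Quality Level described above could be applied to a surveillance period as in the sketch below. The 5 percent threshold mirrors the example in the text; the release counts and everything else are hypothetical, not contract language.

    def correction_release_rate(total_releases: int, releases_needing_correction: int) -> float:
        """Fraction of production releases that later required a correction release."""
        return releases_needing_correction / total_releases

    def deduction_applies(rate: float, acceptable_quality_level: float = 0.05) -> bool:
        """A deduction applies when the deviation from the performance standard
        (no correction releases) exceeds the Acceptable Quality Level."""
        return rate > acceptable_quality_level

    # Hypothetical surveillance period: 40 releases, 3 of which needed corrections.
    rate = correction_release_rate(40, 3)
    print(f"Correction-release rate: {rate:.1%}; deduction applies: {deduction_applies(rate)}")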

Recommendations:

We recommend that OMB:

-- Reiterate to Federal agencies the requirements in OFPP Letter 91-2 with particular emphasis on:

-- Emphasize to Federal agencies the importance of effectively monitoring the performance of software maintenance contractors to ensure the maintenance needs of the Federal Government are met in a timely and cost-effective manner.

-- Require Federal agencies to ensure that officials responsible for monitoring software maintenance contractors are sufficiently trained in the areas of software maintenance and configuration management.


CHANGE CONTROL MANAGEMENT

Federal Departments and Agencies are not adequately managing the software change control and configuration management(11) processes. Specifically, Federal Departments and Agencies cited weaknesses with the change request process, change review and approval, and testing. As a result, agencies lack assurance that (1) applications will perform as intended and (2) management controls will adequately safeguard the integrity of the applications. These weaknesses resulted from agencies not following software maintenance policies, standards, and procedures for requesting, approving, and testing changes.

NIST Guidance

FIPS Pub. 106, dated June 15, 1984, and entitled "Guideline on Software Maintenance," prescribes guidelines for achieving a strong, disciplined, and clearly defined approach to system software maintenance. As the publication explains, the primary purpose of change control is to assure smooth operational continuity and orderly evolution of the system. Effective change control is necessary to ensure that all system software installation and maintenance requirements are performed in a structured and controlled manner, and to provide management with a chronological history of all software modifications. An effective change control process includes the following control points: (1) a formal change request, (2) centralized review and approval of change requests, and (3) testing of changes. Compliance with such change control procedures further helps to ensure adherence to established standards and performance criteria for system software, and facilitates communication between software maintenance personnel and data center management.

Standard Change Request Forms Not Utilized

Three of the five agencies identified weaknesses with the method used to request application system changes. Specifically, agencies were not using standardized change control request forms, or used no form at all. In fact, one agency report stated that even though numerous change control forms exist, the user community prefers to initiate changes by telephone call or e-mail. All changes considered for a system should be formally requested in writing. These requests may be initiated by the user or maintainer in response to discovered errors, new requirements, or changing needs. Change requests should be submitted on forms which contain the following information: name of requestor, date of request, purpose for request, name of program(s) affected, name of document(s) affected, name of data file(s) affected, date request satisfactorily completed, date new version operational, name of maintainer, date of review, name of reviewer, and review decision. Procedures may vary regarding the format of a change request, but it is imperative that each request be fully documented, in writing, so it can be formally reviewed. Change requests should be carefully evaluated by the project manager or a change review board, and decisions to proceed should be based on all the pertinent areas of consideration (probable effects on the system, actual need, resource requirements vs. resource availability, budgetary considerations, priority, etc.). The decision as to whether or not to accept the change, and reasons for that decision, should be recorded and included in the permanent documentation of the system. One agency reported that in several cases, the anticipated level of effort was the key factor in determining the exact submission channel and what data was required for submitting a change.
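To make the recommended data elements concrete, a change request record might be captured as in the sketch below. The field names are a direct mapping of the items listed above; they are illustrative only, not a prescribed Federal form.

    from dataclasses import dataclass
    from datetime import date
    from typing import List, Optional

    @dataclass
    class ChangeRequest:
        """Change request record carrying the data elements recommended above."""
        requestor: str
        request_date: date
        purpose: str
        programs_affected: List[str]
        documents_affected: List[str]
        data_files_affected: List[str]
        maintainer: Optional[str] = None
        reviewer: Optional[str] = None
        review_date: Optional[date] = None
        review_decision: Optional[str] = None       # e.g., "approved" or "rejected", with reasons
        date_completed: Optional[date] = None       # date request satisfactorily completed
        new_version_operational: Optional[date] = None

    # Example: a user-initiated request awaiting review board action (hypothetical data).
    request = ChangeRequest(
        requestor="J. Analyst",
        request_date=date(1995, 3, 1),
        purpose="Adapt payment edits to a revised regulation",
        programs_affected=["PAYEDIT"],
        documents_affected=["User Guide, Chapter 4"],
        data_files_affected=["PAYMENT-MASTER"],
    )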

Without a standardized change request form, sufficient data may not be available to adequately evaluate the nature of a proposed software change or its resulting benefits, cost, or impact on the application system. In addition, approved changes could be misclassified or ranked inappropriately due to insufficient information. It is also possible that a review board may disapprove an important and necessary change due to a lack of information. Finally, valuable time and resources could be wasted by management in trying to discern the nature of the requested change or returning a request for more information.

Inadequate Review and Approval of Changes Prior To Implementation

Four of the five agencies identified weaknesses with the review and approval process for software changes. In fact, one agency report stated that 70 percent of the application systems it reviewed had minimal evidence to support that software changes were formally reviewed, approved, or "closed" by either the change control board or responsible originating management officials. Another agency report stated that none of the application systems reviewed had a centralized change review process. Change requests should be carefully reviewed and evaluated before any actual work is performed on the system.

The evaluation should take into consideration, among other things, the staff resources available versus the estimated workload of the request; the estimated additional computing resources which will be required for the design, test, debug and operation of the modified system; and the time and cost of updating the documentation. The approval point should be centralized for all software maintenance projects, so that one person, or group of persons, has the knowledge of all requested and actual work being performed on the system. In practice, the review process ranges from the review and sign-off by the project manager or user, to the convening of a change review board which formally approves or rejects changes. The purpose of this process is to ensure: (1) all of the requirements of the change request have been met; (2) the system will perform according to specifications; (3) the changes will not adversely impact the rest of the system or other users; (4) all procedures are followed and rules and guidelines adhered to; and (5) the change is indeed ready for installation in the production system.

Furthermore, offices are not adhering to the priority level assigned to each application change. For example, a report stated that in 66 percent of the applications reviewed, significant problems in applying the requested changes could be traced to not prioritizing all change requests. Specifically, the project listed as the top priority was frequently ignored, and technical staff and supervisors performed maintenance changes that were not prioritized. A priority level should be assigned to the requested work. All review actions and findings should be added to the system documentation folder. This process will reduce the amount of unnecessary and/or unjustified work which is often performed on a system.

Without a centralized review and approval point for application system changes, similar enhancements to the system may be processed individually, thereby wasting limited resources; or, software changes could be implemented without regard to the cost-effectiveness of the proposed modifications. In addition, proposed software changes may be implemented without sufficient impact analysis and, therefore, interact negatively with other application software components. For example, a major software interface problem was created at one agency which led to severe voucher payment backlogs, totaling over $1.25 million, and lasting from 1991 through 1993. Finally, proposed changes of lesser importance may be implemented ahead of more significant and necessary modifications. In fact, one agency reported that a project scheduled for implementation in six to eight months has taken more than two years, because it was never added to the priority listing schedule.

Inadequate Testing of Application Software Changes

Four of the five agencies reported weaknesses with the testing of application software changes. Specifically, test plans were not developed, test results were not analyzed, and proper test methodologies were not followed. For example, one agency reported that 93 percent of the maintenance projects reviewed did not have a written test plan. Another agency stated that over 70 percent of the project leaders were unable to provide a Test Plan and a Test Analysis and Results Report. Testing is a critical component of software maintenance. Testing standards and procedures should define the degree and depth of testing to be performed and the disposition of test materials upon successful completion of the testing. Whenever possible, the test procedures and test data should be developed by someone other than the person who performed the actual maintenance on the system. During the testing stage, the software and its related documentation should be evaluated in terms of readiness for implementation. The goal of testing is to find errors and, therefore, a test plan should (1) define the degree and depth of testing to be performed; (2) describe the expected output; and (3) test for valid, invalid, expected, and unexpected cases. The format and content of test plans and test analysis reports should emphasize the importance of (1) identifying and segregating the various functions of the program to be tested; (2) describing the strategy and limitations of the testing; and (3) describing the input data and expected output data for each planned test. A test analysis report should be prepared which summarizes and documents the test results and findings. The analysis summary should present the software capabilities, deficiencies, and recommendations.
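
The sketch below (Python; the structure and example values are hypothetical) illustrates the kind of test plan record and test analysis summary described above:

    from dataclasses import dataclass

    @dataclass
    class TestCase:
        description: str       # the program function or condition being exercised
        category: str          # "valid", "invalid", "expected", or "unexpected"
        input_data: dict
        expected_output: object

    def run_test_plan(test_cases, program_under_test):
        """Execute each planned case and summarize the results for the
        test analysis report (capabilities, deficiencies, recommendations)."""
        results = []
        for case in test_cases:
            actual = program_under_test(**case.input_data)
            results.append((case.description, case.category,
                            actual == case.expected_output))
        failed = [r for r in results if not r[2]]
        return {"planned": len(results), "failed": len(failed), "details": results}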

Also, the functional user communities at two agencies were not given the opportunity to perform user acceptance testing. Acceptance testing is commonly performed by the functional users after system testing has been completed. Acceptance testing is considered the 'last line of defense' for the end user. The end users perform functional tests on the modified software using live data, test data, or a combination of data. The test should examine whether or not the program is doing what it is supposed to do. One agency report stated that functional and technical requirements for pending maintenance changes were poorly defined, not defined at all, or poorly communicated to technical personnel responsible for making the changes.

As a result of the above weaknesses, changes to software may not be sufficiently tested to account for all valid, invalid, expected, and unexpected cases. Insufficient testing and analysis of test results could result in source code which fails when introduced to the production environment, due to unforeseen transaction conditions, interfaces, or user input. In addition, management cannot be certain the application will perform as intended. For example, during an eight-month period at one agency, 26 releases(12) of software changes required 44 re-releases to correct one or more problems with the previous release. Furthermore, when an application lacks evidence that functional and technical requirements have been determined, reviewed, and formally approved in advance, it is difficult to ensure that all the necessary steps were completed. Without a functional acceptance test, program managers have no assurance that perfective or corrective changes will satisfy user needs when placed into production. This increases the risk that implementing the change will waste time and resources. In fact, at one agency, 16 percent of all releases created new problems that proved very costly in both information technology and program office staff time. Without an effective testing process, there is no assurance that management controls exist to safeguard an application's integrity.

Agencies Do Not Follow Software Maintenance Policies, Standards, and Procedures

These change control weaknesses resulted from agencies not following software maintenance policies, standards, and procedures for requesting, approving, and testing changes. Departments and Agencies did not consistently apply standard procedures to ensure sound software maintenance practices. Furthermore, offices within the agencies determined their own maintenance procedures for the applications they develop. Due to this decentralized control, instances were found where maintenance changes were not centrally received, prioritized, and tracked.

A software maintenance policy should address the (1) need and justification for changes; (2) responsibility for making changes; (3) change controls and testing procedures; and (4) use of modern programming practices, techniques, and tools. Implementation of the policy has the effect of enforcing adherence to rules regarding the operating software and documentation from initiation through completion of the requested changes. The key to controlling changes to a system is the centralization of change approval and the formal requesting of changes.

Recommendation:

We recommend that OMB:

-- Encourage Federal agencies to consistently follow Federal and agency software maintenance policies, standards, and procedures for requesting, approving, and testing software changes.

APPENDICES




Background of the PCIE CSIP Completed Tasks

Task 1--Survey of Agency Implementation of Computer Systems Integrity Requirements

Task 1 focused on the compliance of eight agencies with mandated policies and other requirements dealing with computer security and controls. The participating IG offices evaluated their agencies' implementation of OMB Circulars A-123, A-127, and A-130 requirements relative to the following computer integrity functions: information resource management, internal controls, computer security, and quality assurance. Each IG office issued a report describing the implementation deficiencies found at their respective agencies.

The June 1988 consolidated PCIE report for Task 1 identified five common obstacles which limited the effectiveness of agency compliance activities. The obstacles involved (1) varying terminology and specificity of requirements; (2) lack of emphasis on systems quality; (3) delayed sharing of Triennial Information Resource Management Review results; (4) lack of a budget mechanism to identify and justify systems integrity requirements; and (5) nonstandardized computer systems integrity training. Accordingly, the report made five recommendations for overcoming these obstacles and strengthening agencies' implementation capabilities Governmentwide. Implementing these recommendations required action by OMB, General Services Administration (GSA), Office of Personnel Management (OPM), and NIST.

Task 2A--Review of General Controls in Federal Computer Systems

Task 2A was aimed at assessing management controls over system software(13) in MVS-based computer systems at ten Federal computer centers. Work on this task focused on two key system software controls subareas: (1) operating system software controls and (2) access (security) software controls. This Task also included an evaluation of management practices employed in the utilization of disk and tape storage resources, since the data pertaining to these resources was available as a byproduct of system software controls work. Each IG office issued one or more reports (a total of 20 in all) describing the system software internal control weaknesses and disk and tape management deficiencies found at their respective agencies.

The October 1988 Task 2A consolidated report described serious operating system and security software control deficiencies in all of the agency computer centers reviewed. By exploiting the operating system integrity exposures identified, a knowledgeable perpetrator would have been able to access, modify, and/or destroy an agency's computer data, programs, and other resources without leaving an audit trail. These exposures resulted from (1) inadequate controls over enhancements to the operating system; (2) inadequate administration of the Authorized Program Facility(14); (3) improper maintenance of operating system software; and (4) a lack of policies, standards, and procedures pertaining to system software management. In addition, improper technical implementation of security software features and inadequate administrative controls over security software further increased the risks to operational continuity as well as the integrity of critical applications which support agency missions. Finally, as described in the report, an estimated $17 million in inefficiently used disk storage resources could have been recovered and made available for reuse through the application of generally accepted disk storage management techniques--thereby reducing the need for future additional disk storage procurements. Agencies had a similar opportunity to save substantial computer resources when processing magnetic tape files by applying generally accepted tape storage management techniques. The report contained eight Governmentwide recommendations for strengthening computer center management of operating system and security software, and four Governmentwide recommendations for strengthening disk and tape storage at Federal computer centers. Implementing these recommendations required action by OMB, NIST, NSA, and GSA.

Task 2B--Review of Application Controls in Federal Contract Tracking Systems

Task 2B, Review of Application Controls, was aimed at assessing the data integrity of a common administrative application system (the centralized contract tracking system). Work on this task focused on identifying application controls which needed strengthening, and determining system development efforts at seven Federal computer centers. Each IG office issued a report describing the agency's assessment of the centralized contracting systems.

The April 1991 Task 2B consolidated report stated that the centralized contract tracking systems of three of the seven agencies reviewed had generally accurate data and relatively good application controls; however, the remaining four agencies had unreliable data. The identified data integrity deficiencies resulted from weaknesses in data preparation, data entry, computer processing, and management oversight controls (including quality assurance), which allowed erroneous or unreported contract amounts to remain undetected or uncorrected. Agencies with multiple local procurement management/contract systems experienced the greatest data integrity problems. Conversely, those agencies with a single, comprehensive agencywide procurement management system generally had better management and internal controls and more accurate data. The integration of procurement tracking and financial accounting/reporting systems was the most effective internal control identified. The report made eight recommendations to OMB and GSA to advocate better controls over centralized contract systems.

Task 3--Followup Audit on the Implementation of the PCIE CSIP Task 1 and Task 2A Audit Report Recommendations

Task 3, Followup Audit on the Implementation of the PCIE CSIP Task 1 and Task 2A Audit Report Recommendations, was aimed at determining (1) what corrective actions were taken in response to the (a) recommendations made in the individual agency OIG reports issued under CSIP Task 1 and Task 2A; and (b) Governmentwide recommendations made to OMB, GSA, and NIST in the PCIE Task 1 and Task 2A summary reports; and (2) whether those actions adequately addressed the recommendations. In addition, task participants assessed how well their individual agencies complied with OMB's November 28, 1988 Directive M-89-06(15) to correct identified deficiencies, both in the specific systems reviewed in Task 2A and in other agency systems with similar system software.

In following up on their prior reports, the participating OIGs found that collectively, nearly half of their previous recommendations had not been fully implemented or the corrective actions taken did not fully satisfy the intent of the recommendations. Weaknesses identified in the prior tasks that continued to present integrity and security problems included (1) lack of emphasis on system quality; (2) inadequate administrative controls over security software; and (3) lack of policies, standards, and procedures pertaining to system software management. In addition, the participating OIGs found their agencies were either unaware of, or had not sufficiently complied with, OMB Directive M-89-06. The audit results at individual agencies were formally presented in 14 audit reports collectively containing 206 recommendations to those agencies.

The followup work on the Governmentwide recommendations, contained in the consolidated summary PCIE reports Task 1 and Task 2A, produced two groups of proposed new Governmentwide recommendations associated with Task 2A issues only. One group called for actions by OMB to spur Federal agencies to correct the continuing problems identified during the followup audit work. The correction of these problems was also the specific focus of the recommendations contained in the 14 audit reports issued to individual agencies. Accordingly, this group of proposed PCIE recommendations was aimed primarily at ensuring the specific corrective actions called for in the individual reports would be taken promptly. The other group of proposed Governmentwide recommendations called for the development and issuance of additional Governmentwide guidance. This thrust, however, was contrary to the decentralization and empowerment-related initiatives outlined in the Vice President's National Performance Review report. Finally, uncertainty existed regarding the appropriateness, applicability, and potential impact of the proposed recommendations in those Federal agencies where major changes in the technological environment had recently occurred or were in process. For these reasons, the Department of Transportation OIG (the task leader) concluded that issuance of a consolidated summary PCIE report for Task 3 would produce few benefits, and such a report was thus not issued.

Federal Software Maintenance Criteria and Guidance

Public Laws

P.L. 89-306, Automatic Data Processing Act. (October 30, 1965) This Act provides for the economic and efficient purchase, lease, maintenance, operation, and utilization of automatic data processing equipment by Federal departments and agencies.

P.L. 96-511, Paperwork Reduction Act of 1980. (December 11, 1980) This Act requires Departments and Agencies to ensure (1) ADP and communications technologies are acquired and used in a manner which improves service delivery and program management; and (2) the collection, maintenance, use and dissemination of information by the Federal Government is consistent with applicable laws relating to confidentiality, including the Privacy Act.

P.L. 99-591, Paperwork Reduction Reauthorization Act of 1986, which amended the 1980 Paperwork Reduction Act. (October 30, 1986) This Act requires that Federal agencies periodically evaluate and, as needed, improve the accuracy, completeness, and reliability of data and records contained in Federal information systems.

P.L. 103-62, Government Performance and Results Act of 1993. (August 3, 1993) The purpose of the Act is to improve the confidence of the American people in the Federal government; initiate program performance reform including measuring performance against program goals; improve Federal program effectiveness; help Federal managers improve service delivery; improve congressional decision making; and improve internal management of the Federal Government. Specifically, each agency must prepare an annual performance plan covering each program activity with objective, quantifiable, and measurable goals.

P.L. 103-355, Federal Acquisition Streamlining Act of 1994. (October 13, 1994) Section 5052 of the Act states that results-oriented acquisition process guidelines will be developed that include the identification of quantitative measures and standards. These standards will be used for determining the extent to which an acquisition of items, other than commercial items, by a Federal agency satisfies the needs for which the items are being acquired.

P.L. 104-106, Division E--Information Technology Management Reform Act. (February 10, 1996) This Act seeks to improve Federal information management, and to facilitate Federal Government acquisition of state-of-the-art information technology that is critical for improving the efficiency and effectiveness of Federal Government operations.

Office of Management and Budget

OMB Circular A-11, Preparation and Submission of Budget Estimates. (June 6, 1995) This directive provides detailed instructions and guidance on the preparation and submission of annual budgets and associated materials. This Circular requires agencies that obligate more than $50 million in a year for information technology activities to submit a report on obligations for information technology for the agency as a whole. The report will provide information on workyears and obligations for information technology activities. It will include obligations for: planning, including requirements, feasibility, and benefit-cost studies; system design, development, and acquisition; and voice and data telecommunications requirements, regardless of whether or not they are associated with an information system's installation, operations, maintenance, and support.

OMB Circular A-76, Performance of Commercial Activities. (August 4, 1983) This directive establishes Federal policy regarding the performance of commercial activities. The supplement to the circular sets forth procedures for determining whether commercial activities should be performed under contract with commercial sources or in-house using Government facilities and personnel.

OMB Circular A-109, Major System Acquisition. (April 5, 1976) This directive establishes policies to be followed by executive branch agencies in the acquisition of major systems. Specifically, OMB Circular A-109 requires that each Agency acquiring major systems maintain the capability to: (1) predict, review, assess, negotiate, and monitor lifecycle costs; (2) assess acquisition cost, schedule, and performance experience against predictions, and provide such assessments for consideration by the agency head at key decision points; (3) make new assessments where significant cost, schedule, or performance variances occur; (4) estimate lifecycle costs during system design, concept, evaluation, selection, full-scale development, facility conversion, and production, to ensure appropriate trade-offs among investment costs, ownership costs, schedules, and performance; and (5) use independent cost estimates, where feasible, for comparison purposes.

OMB Circular A-123, Internal Control Systems. (June 21, 1995) This directive requires agencies to establish and maintain a system of internal controls to provide reasonable assurance that Government resources, including information resources, are protected from fraud, waste, unauthorized use, and misappropriation.

OMB Circular A-130, Management of Federal Information Resources. (February 8, 1996) This Circular requires agency officials who administer a program supported by an information system to be responsible and accountable for the management of that information system throughout its lifecycle. Under Circular A-130, agencies are required to account for the full costs of operating information processing organizations. In addition, it requires agencies to prepare a cost-benefit analysis for each information system and update it as necessary throughout the information system lifecycle. The cost-benefit analysis must be (1) at a level of detail appropriate to the size of the investment, and (2) based on systematic measures of system performance which include: (a) effectiveness of program delivery; (b) efficiency of program administration; and (c) reduction in burden.

Office of Federal Procurement Policy Letter #91-2, Service Contracting. (April 9, 1991) This letter defines performance-based contracting as structuring all aspects of an acquisition around the purpose of the work to be performed, as opposed to either the manner by which the work is to be performed or a broad and imprecise statement of work. This approach provides the means to ensure the appropriate performance quality level is achieved, and that payment is made only for services that meet contract standards. This policy emphasizes the use of performance requirements and quality standards in defining contract requirements, source selection, and quality assurance. It requires agencies to: (1) use performance-based methods when developing SOWs; (2) develop formal, measurable performance standards and surveillance plans for assessing contractor performance; and (3) use contract types that motivate contractors to perform at optimal levels.

Office of Federal Procurement Policy Pamphlet #4, A Guide for Writing and Administering Performance Statements of Work for Service Contracts. (October 1980) This pamphlet provides guidelines for writing and administering performance Statements of Work for service contracts. It describes a systematic means to develop Statements of Work and quality assurance surveillance plans in order for agencies to define and measure the quality of contractors' performance.

Federal Information Processing Standards Publications

FIPS PUB. 64, Guidelines for Documentation of Computer Programs and Automated Data Systems for the Initiation Phase. (August 1, 1977) This publication provides a basis for determining the content and extent of documentation for the initiation phase of the software lifecycle--including project request documentation, feasibility study, and cost-benefit analysis.

FIPS PUB. 101, Guideline for Lifecycle Validation, Verification, and Testing of Computer Software. (June 6, 1983) This publication presents an integrated approach to validation, verification, and testing (VV&T) that should be used throughout the software lifecycle. The Guideline presents information on selection and use of VV&T techniques to meet project requirements and explains how to develop a VV&T plan to fulfill a specific project's VV&T requirements. The Guideline is intended for use by software developers, managers, verifiers, maintainers, and end users.

FIPS PUB. 106, Guideline on Software Maintenance. (June 15, 1984) This publication presents information on techniques, procedures, and methodologies to employ throughout the lifecycle of a software system to improve the maintainability of that system. The publication emphasizes the importance of the consideration of software maintenance throughout the lifecycle of a software system and stresses the need to plan, develop, use, and maintain a software system with future software maintenance in mind. It also presents guidance for controlling and improving the software maintenance process and includes suggested criteria for deciding whether continued maintenance of a software system is justified.

National Bureau of Standards Special Publications

NBS Special Publication 500-87, Management Guide for Software Documentation. (January 1982) This document assists in the establishment of policies and procedures for effective preparation, distribution, control, and maintenance of documentation which will aid in re-use, transfer, conversion, correction, and enhancement of computer programs. Such documentation, together with the computer programs themselves, will provide software product packages which can be transferred and used by people other than the originators of the programs.

NBS Special Publication 500-88, Software Development Tools. (March 1982) As part of the program to provide information to Federal agencies on the availability, capabilities, limitations, and applications of software development tools, a database of information about existing tools was collected over a three-year period. This document presents an analysis of the information contained in this database. In addition, abstracts of each tool are presented in an appendix.

NBS Special Publication 500-106, Guidance on Software Maintenance. (December 1983) This document addresses issues and problems of software maintenance and suggests actions and procedures which can help software maintenance organizations meet the growing demands of maintaining existing systems.

NBS Special Publication 500-129, Software Maintenance Management. (October 1985) This document focuses on the management and maintenance of software, and provides guidance to Federal government personnel to assist them in performing and controlling software maintenance. It presents an overview of the various aspects of software maintenance including the problems and issues identified during the Institute for Computer Sciences and Technology sponsored survey of Government and private industry maintenance organizations.

Profile of Agency Missions and Applications Reviewed

Department of Housing and Urban Development

HUD is the principal agency responsible for Federal housing programs, enforcing fair housing, and improving and developing the Nation's communities. The Department's major functions follow. HUD (1) insures mortgages for single family and multifamily dwellings and loans for home improvement and the purchase of manufactured homes; (2) makes capital grants for construction or rehabilitation of housing developments for the elderly and disabled; (3) channels funds from investors into the mortgage industry through the Government National Mortgage Association; (4) provides Federal housing subsidies for low and moderate income families; (5) provides grants to states and communities for community development activities; (6) promotes and enforces fair housing and equal housing opportunity; and (7) promotes empowerment of residents through Family Self Sufficiency and Homeownership for People Everywhere.

HUD examined seven application systems for this review. A brief description of each application system follows.

Department of State

The Department is responsible for overall direction, coordination, and supervision of U.S. Government activities overseas, except for certain military activities. It provides interdepartmental direction and leadership to other U.S. Government Foreign Affairs agencies. Through the Secretary of State, the Department serves as the President's principal advisor in the determination and execution of U.S. foreign policy. The Department supports the Secretary of State in the fulfillment of these duties and takes the lead with respect to such matters as international educational and cultural affairs, information activities, foreign assistance, food for peace, arms control and disarmament, supervision of programs authorized by the Peace Corps Act, social science research, immigration, and refugee assistance.

The Department has other major missions that are heavily dependent on automated systems. These missions include consular services for U.S. citizens overseas and providing both administrative and financial support to over 50 other agencies representing U.S. interests abroad.

DOS selected three financial systems for review. A brief description follows:

Environmental Protection Agency

EPA was established in December 1970 as an independent agency to execute the Federal laws for protecting the environment. The agency currently administers nine comprehensive environmental protection laws, such as the Clean Air Act; the Clean Water Act; the Resource Conservation and Recovery Act; and the Comprehensive Environmental Response, Compensation, and Liability Act (or "Superfund"). EPA performs its mission by coordinating effective Government action in reducing and controlling pollution through integration of a variety of research, monitoring, standard setting, and enforcement activities. EPA also coordinates and supports research and pollution prevention activities by state and local governments, private groups, individuals, and educational institutions. In total, EPA is designed to serve as the public's advocate for a livable environment.

EPA reviewed ten application systems. A brief description of each is below:

National Aeronautics and Space Administration

NASA's mission is to (1) explore, use, and enable the development of space for human enterprises; (2) advance scientific knowledge and understanding of the Earth, Solar System, and the Universe, and use the environment of space for research; and (3) research, develop, verify, and transfer advanced aeronautics, space, and related technologies. NASA administers programs of a research and development nature that are designed to contribute to a number of national goals, including preeminence of the nation in the science and technology of aeronautics and space.

NASA selected three NASA-wide administrative application systems for review. A brief description follows:

National Science Foundation

NSF is an independent agency in the government's executive branch and is governed by a presidentially appointed 24-member Board and a Director. NSF provides financial and other support for research, education, and related activities in science, mathematics, and engineering. NSF does not conduct research itself, but provides grants to academic institutions, private research firms, industrial labs, and major research facilities and centers.

NSF was established by the National Science Foundation Act of 1950, which gave NSF its original standards and policies. NSF derives its current direction from changes to this Act and the standards established by government monitoring organizations and agencies, combined with internal NSF policies and procedures. NSF developed internal issuances (i.e., bulletins, manuals, etc.) to further define how it will conduct its information management and technology activities.

NSF reviewed eight systems as part of this audit. A brief description follows.

Railroad Retirement Board

The primary mission of the RRB is to administer the Railroad Retirement and Railroad Unemployment Insurance Acts, and to assist in the administration of the Social Security Act and the Internal Revenue Code. In carrying out this mission, the RRB will pay benefits to the right people, in the right amounts, in a timely manner; treat every person who comes into contact with the agency with courtesy and concern; and respond to all inquiries promptly and clearly.

The RRB reviewed seven application systems. A brief description of the systems reviewed follows:

Social Security Administration

On March 31, 1995, SSA became an independent agency under section 101 of the Social Security Independence and Program Improvements Act of 1994. The Agency's record-keeping activities cover everyone issued a Social Security Number, as well as the thousands of employers who report the earnings of these individuals.

In its Strategic Plan, Information Systems Plan, and other documents, SSA defines its role with the following statement: "It is the mission of the Social Security Administration to administer national Social Security programs as prescribed by legislation, in an equitable, efficient, and caring manner."

The SSA's data processing operations are highly centralized and integrated. Application software at SSA is either programmatic or administrative(16). The programmatic functions supported by application software are: (1) Enumeration; (2) Earnings; (3) Retirement, Survivors' and Disability Insurance; and (4) Supplemental Security Income. Each of these programmatic systems involves hundreds of software programs. These systems are all mainframe-based, batch processing operations with some modernized, on-line input capability. The major systems comprising the administrative structure are: (1) The Financial Accounting System; (2) The Human Resources Management Information System; (3) The Time and Attendance Processing System; (4) Retirement, Survivors' and Disability Insurance and Supplemental Security Income Quality Assurance System; (5) Security and Audit Trail System; (6) Control and Audit Test Facility; (7) The Commissioner's Correspondence Control System; (8) The Processor for the Analysis of Statistical Surveys; (9) Management Information Systems; (10) Debt Management System; and (11) Earnings Modernization. These administrative systems vary from small, localized, microcomputer-based programs to large, widely used mainframe-based applications.

Because it is difficult, or in some cases impossible, to divide the agency's operations into discrete information systems, SSA treated the systems supporting each of the four major programmatic areas and the administrative area as the programmatic areas selected for this review. The application systems for this review are:



Individual Agency Reports Issued for Task 4
Agency / Product Title / Report Type and Number / Date Issued

Department of Housing and Urban Development
  Controls Over Software Maintenance Must Be Significantly Strengthened
  Audit Report 96-DP-166-0001, March 1996

Department of State
  Management of Software Maintenance
  Audit Report 6-IM-003, October 1995

Environmental Protection Agency
  Management of Application Software Maintenance at EPA
  Audit Report E1NMF3-15-0072-5100240, March 1995

National Aeronautics and Space Administration
  Computer Systems Integrity Project Management of Software Maintenance (PCIE Task 4)
  Audit Report HQ-95-004, June 1995

National Science Foundation
  Review of NSF's Management of Application Software Maintenance
  Audit Report OIG 94-2109, September 1994

Railroad Retirement Board
  Review of the Agency's Management of the Software Maintenance Process
  Audit Report 94-24, September 1994

Social Security Administration
  Close-Out of Our Review on the PCIE--Computer Security and Integrity Task 4A-Management of Application Software Maintenance
  Close-Out Memorandum A-13-93-00423, June 1995



Audit Methodology

The PCIE Task 4 review of software maintenance management at Federal agencies was divided into six areas: (1) policies, procedures, and standards; (2) application software maintenance lifecycle management; (3) contract management; (4) cost management; (5) IRM staff qualifications; and (6) internal control issues regarding the management of application software maintenance.

Policies, Procedures, and Standards

Agencies should have well-established policies, procedures, and standards for efficiently and effectively maintaining agency software. Policies, procedures, and standards serve as a basis for management actions, and provide criteria upon which to evaluate the activities resulting from those actions. This set of audit steps involved determining whether agencies have (1) incorporated the software maintenance standards promulgated by higher monitoring authorities into their policies, procedures, and standards; (2) established policies promulgated by agency senior management which define the relationship between standards and agency implementation; and (3) developed procedures for implementing software maintenance policies.

Application Software Maintenance Lifecycle Management

Software maintenance is a critical element of an application system's lifecycle. Management of the system's lifecycle must not conclude with the introduction of the system into the production environment. The audit steps to evaluate the lifecycle management of an application system in production included a review of (1) the IRM strategic planning process; (2) the software maintenance initiation request process; (3) change control methodology; (4) the process by which changes are tested and accepted; (5) quality assurance controls; and (6) general controls affecting maintenance projects (e.g., separation of duties during maintenance).

Contract Management

A significant percentage of Governmentwide software maintenance work is performed by contractors. Inadequate contract management practices increase an agency's vulnerability to waste, fraud, and abuse. The audit steps for this section included reviewing a sample of software maintenance-related procurement documents (e.g., contracts, interagency agreements, cooperative agreements, cost-sharing, etc.) to determine whether (1) maintenance services were clearly specified in the scope of work; (2) adequate performance standards or criteria for acceptance or rejection of deliverables from the maintenance services were specified; and (3) test plans and test results were required as deliverables. In addition, participants determined if maintenance work was performed in accordance with the contract and whether user needs were met.

Cost Management

Software maintenance cost represents a significant percentage of the total cost of IRM in the Federal Government (estimates range from 20 to 70 percent). In order for IRM resources to be properly utilized, software maintenance costs must be properly accumulated and accurately reported. Both labor and computer costs should be maintained for each type of maintenance effort. The audit steps for cost management included determining (1) how agencies are tracking and maintaining software maintenance costs; (2) what types of costs are being maintained; and (3) if software maintenance costs are being capitalized or expensed.
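
As an illustration only, the sketch below (Python; the effort categories and record layout are assumptions, not prescribed by OMB) accumulates labor and computer costs by type of maintenance effort so they can be reported separately:

    from collections import defaultdict

    # Maintenance effort types commonly cited in the software maintenance literature.
    EFFORT_TYPES = ("corrective", "adaptive", "perfective")

    def accumulate_costs(cost_records):
        """cost_records: iterable of (effort_type, labor_cost, computer_cost).
        Returns total labor and computer costs per effort type."""
        totals = defaultdict(lambda: {"labor": 0.0, "computer": 0.0})
        for effort_type, labor, computer in cost_records:
            totals[effort_type]["labor"] += labor
            totals[effort_type]["computer"] += computer
        return dict(totals)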

IRM Staff Qualifications

Cost-effective software maintenance of application systems depends heavily on having adequately qualified personnel. Accountability should also be established to ensure the tasks are effectively performed. To ensure that personnel are adequately qualified and can be held accountable, position descriptions should accurately reflect software maintenance responsibilities. In addition, performance standards must include specific criteria for evaluating employees' performance in the software maintenance process. The audit steps for this section involved reviewing the position descriptions and performance requirements of employees responsible for performing software maintenance to determine if these documents reflected this aspect of their jobs.

Internal Control Issues

In providing for implementation of the Federal Managers' Financial Integrity Act of 1982, OMB Circular A-123 requires agencies to establish and maintain a cost-effective system of internal controls to provide management with reasonable assurance that assets are safeguarded against waste, loss, and unauthorized use. This set of audit steps included reviewing agencies' Federal Managers' Financial Integrity Act reports to the President and Congress in order to determine if any material internal control weaknesses related to software maintenance were reported. In addition, agencies were to determine if software maintenance was categorized as a separate assessable unit and if any software maintenance weaknesses identified during this review met OMB's materiality criteria.

Acronyms

ADP Automatic Data Processing

AQLs Acceptable Quality Levels

CO Contracting Officer

COR Contracting Officer Representative

CPFF Cost-Plus-Fixed Fee

CSIP Computer Systems Integrity Project

DOS Department of State

EPA Environmental Protection Agency

FIPS Federal Information Processing Standards

GAO General Accounting Office

GSA General Services Administration

HHS Department of Health and Human Services

HUD Department of Housing and Urban Development

IRM Information Resources Management

IT Information Technology

NASA National Aeronautics and Space Administration

NBS National Bureau of Standards

NIST National Institute of Standards and Technology

NSF National Science Foundation

OFPP Office of Federal Procurement Policy

OIG Office of Inspector General

OMB Office of Management and Budget

OPM Office of Personnel Management

PCIE President's Council on Integrity and Efficiency

PRS Performance Requirement Summary

QASP Quality Assurance Surveillance Plan

RRB Railroad Retirement Board

SOW Statement of Work

SSA Social Security Administration

VV&T Validation, Verification, and Testing

1. Software configuration is defined as an arrangement of software parts, including all elements necessary for the software to work. Configuration management refers to the process of identifying and documenting the software configuration and then systematically controlling changes to it to maintain its integrity and to trace configuration changes.

2. HHS committed to the assignment; however, the Social Security Administration (SSA) performed the work for this audit. On March 31, 1995, SSA became an independent agency under section 101 of the Social Security Independence and Program Improvements Act of 1994.

3. FTE employment should include Federal employees identified as directly designing, developing, operating, or maintaining the system.

4. The Circular defines lifecycle costs as the sum total of the direct, indirect, recurring, nonrecurring, and other related costs incurred, or estimated to be incurred, in the design, development, production, operation, maintenance and support of a major system over its anticipated useful life span.

5. EPA Auditors included the costs of software, lease of software, and 50 percent of personnel costs in compiling total application software resources.

6. This Act provides for the production of complete, reliable, timely, and consistent financial information for use by the executive branch of the Government and the Congress in the financing, management, and evaluation of Federal programs.

7. Typical COR monitoring duties may include ensuring that hours billed by a contractor are reasonably accurate and appropriate, that the required work is performed, and other charges billed are in agreement with the terms of the contract.

8. Both the OFPP Letter 91-2 and OFPP Pamphlet #4 identify the Quality Assurance Surveillance Plan (QASP) and the Performance Requirement Summary (PRS) as the two primary documents agencies can use for planning, measuring, and monitoring contractor performance. A QASP is a well-defined written document used to ensure that systematic quality assurance methods are used. The PRS summarizes the performance requirements against which the contractor's performance is evaluated.

9. Performance standards are characterized as an acknowledged measure for comparison of quantitative and/or qualitative values. It is important to select performance standards that pertain to the deliverables, products, or services being acquired.

10. Acceptable Quality Levels (AQLs) are the maximum percentage of allowable variance from the norm before a deliverable, product, or service is rejected. The AQLs recognize that defective performance sometimes happens unintentionally.

11. Software configuration is defined as an arrangement of software parts, including all elements necessary for the software to work. Configuration management refers to the process of identifying and documenting the software configuration and then systematically controlling changes to it to maintain its integrity and to trace configuration changes.

12. A release is a set of software changes.

13. System software refers to the computer programs that manage the processing workload and control user access to the various resources of the computer system.

14. A Multiple Virtual Storage operating system mechanism for identifying and specifically authorizing programs which are to process in an unrestricted or privileged instruction mode.

15. This Directive instructed Federal departments and agencies to take immediate action to address the deficiencies identified in both the specific systems reviewed in Task 2A and in other agency systems with similar system software. In addition, agencies were urged to pay special attention to the requirements of the Computer Security Act.

16. Applications are programmatic if they directly support workload functions involving client services dictated by law or regulation; they are considered administrative if they do not.