Quality Assurance Assessment & Benchmarking Framework

Quality assurance assessment & benchmarking provides guidance for the quality assurance, benchmarking, and assessment of systems in order to improve IT services to the business. Quality assurance (QA) is the responsibility of all system stakeholders and business process owners at three levels of system support: internal, institutional, and external. QA comprises the administrative and procedural activities implemented in a quality management system (QMS) to ensure that the requirements and goals of a system or software solution are fulfilled. It is the systematic measurement and comparison against a standard, the monitoring of processes, and the associated feedback loop that together drive performance improvement and error prevention. QA rests on two principles: “fit for purpose” (the system should be suitable for its intended purpose) and “right first time” (mistakes should be eliminated). QA covers management of the quality of the system in its support of business processes, including review of the system or software solution and its documentation to confirm that the system meets its requirements and specifications. The objectives of conducting quality assurance and benchmarking are the following:

•  To monitor the software development process and the quality of the final software product.

•  To verify that the software project implements the standards and procedures set by management.

•  To inform groups and individuals of software quality activities and their results.

•  To ensure that risks and unresolved issues pertaining to the system or software development are escalated to management.

•  To identify gaps and deficiencies in the system or software development and compare them against industry benchmarking standards.

•  To increase user confidence and ensure that the system or software development meets standards and reflects system specifications.

•  To support system audits and reviews of project milestones throughout the system development life cycle.

Benchmarking is a systematic process of measuring performance against recognized industry benchmarks and/or best practices that, when adapted and applied, lead to superior system performance. To be successful, benchmarking should be implemented as a structured, systematic process; applied randomly, it will fail. Benchmarking must be best-practice-oriented and should be part of a continuous improvement program that incorporates a feedback process. It requires an understanding of what is important to the organization (sometimes called critical success factors) and then measuring performance for these factors, especially for decision support systems. The gap between actual and preferred performance is analyzed to identify opportunities for improvement. Root cause analysis typically follows to determine why system performance is unsatisfactory, and best practices may then be applied to address the performance problems.

IT Architects provides a service to complete a quality assurance assessment & benchmarking process based on the approach introduced by Robert Camp at Xerox. Camp pioneered much of the early work in quality assurance and benchmarking, and is credited with introducing the term “benchmarking”. IT Architects applies a 10-step Quality Assurance Assessment & Benchmarking Framework to conduct the quality assessment process (a minimal sketch of the gap-analysis steps follows the list):

    1.  Define what to benchmark (critical success factors)
    2.  Define the metrics
    3.  Develop data collection methodology
    4.  Collect data
    5.  Identify performance & practice use gap
    6.  Identify reasons for deficiencies (i.e. root cause for gap)
    7.  Develop action plan (i.e. select practices to narrow gap)
    8.  Integrate best practices into project delivery processes
    9.  Institutionalize as part of continuous improvement program
    10.  Repeat based on revised benchmarks
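
As an illustration of steps 4 and 5, the following minimal Python sketch compares measured values for a handful of critical success factors against their benchmark targets and flags deficiencies. The factor names, targets, and measured values are hypothetical placeholders, not recommended metrics:

    # Minimal gap-analysis sketch for steps 4-5 of the framework.
    # All factor names, benchmark targets, and measured values are
    # hypothetical; substitute the organization's own CSF data.

    benchmarks = {
        # factor: (benchmark target, measured value, higher_is_better)
        "availability_pct":      (99.9, 99.2, True),
        "mean_response_time_ms": (250.0, 410.0, False),
        "defects_per_release":   (5.0, 12.0, False),
    }

    def gap_pct(target, actual, higher_is_better):
        """Shortfall relative to the benchmark; positive means a deficiency."""
        delta = (target - actual) if higher_is_better else (actual - target)
        return delta / target * 100.0

    for factor, (target, actual, better) in benchmarks.items():
        gap = gap_pct(target, actual, better)
        status = "DEFICIENT" if gap > 0 else "meets benchmark"
        print(f"{factor}: target={target} actual={actual} "
              f"gap={gap:+.1f}% ({status})")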

Furthermore, IT Architects has a qualified system test team to certify a system or software product against four levels of industry test standards:

Level 1 − Code Walk-through. The source code is examined offline for violations of the official coding rules. The walk-through focuses on examining the documentation and the level of in-code comments.
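
One slice of a Level 1 walk-through that can be automated is checking the level of in-code comments. The following sketch (an illustration only, not a certification tool) reports the comment density of a Python source file against an assumed 20% house threshold; coding-rule violations and documentation still require manual review:

    # Comment-density check: one automatable slice of a Level 1
    # code walk-through. The 20% threshold is an assumed house
    # rule, not an industry standard.
    import sys

    def comment_density(path):
        """Fraction of non-blank lines that are comments."""
        with open(path, encoding="utf-8") as f:
            lines = [ln.strip() for ln in f if ln.strip()]
        comments = sum(1 for ln in lines if ln.startswith("#"))
        return comments / len(lines) if lines else 0.0

    if __name__ == "__main__":
        density = comment_density(sys.argv[1])
        print(f"comment density: {density:.0%}")
        if density < 0.20:
            print("WARNING: below the assumed 20% threshold")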

Level 2 − Compilation and Linking. Extensive compilation and linking is performed to verify that the software compiles and links on all official platforms and operating systems.
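
A minimal sketch of such a check, assuming a C code base and a build matrix of two common toolchains (gcc and clang; substitute the project's official platforms and cross-compilers), might look as follows:

    # Compile-and-link check across a small build matrix.
    # Toolchains, flags, and sources are assumptions; extend the
    # matrix with the project's official platforms.
    import subprocess

    SOURCES = ["main.c"]  # hypothetical source list
    MATRIX = [
        ("gcc",   ["-std=c11", "-Wall", "-Werror"]),
        ("clang", ["-std=c11", "-Wall", "-Werror"]),
    ]

    for compiler, flags in MATRIX:
        cmd = [compiler, *flags, *SOURCES, "-o", f"app_{compiler}"]
        result = subprocess.run(cmd, capture_output=True, text=True)
        print(f"{compiler}: {'OK' if result.returncode == 0 else 'FAILED'}")
        if result.returncode != 0:
            print(result.stderr)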

Level 3 − Routine Running. Extensive software execution is performed to ensure that the software runs properly under a variety of conditions, such as varying numbers of events (small and large) and complex business scenarios.
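
A Level 3 run could be scripted along the following lines; the ./app executable and its --events option are hypothetical stand-ins for the system under test:

    # Level 3 routine-running sketch: execute the system under
    # small and large event counts and confirm a clean exit.
    # "./app" and its "--events" option are hypothetical.
    import subprocess

    for events in (1, 100, 10_000, 1_000_000):
        result = subprocess.run(
            ["./app", "--events", str(events)],
            capture_output=True, text=True, timeout=600,
        )
        status = "OK" if result.returncode == 0 else "FAILED"
        print(f"events={events}: {status}")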

Level 4 − Performance Test. Extensive performance testing is conducted to ensure that the software satisfies the specified performance levels and conditions. Stress testing is conducted to determine “breaking points” with respect to the number of users, transactions, and data throughput (specifically to exercise system interfaces and integrations).
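
A minimal stress-test sketch using only the Python standard library is shown below. It ramps up concurrent requests against a hypothetical health endpoint until the error rate passes an assumed 5% threshold, approximating the breaking point; a real stress test would also drive the system's interfaces and integrations:

    # Stress-test sketch: ramp concurrent requests against a
    # hypothetical endpoint until the error rate passes an assumed
    # 5% threshold, approximating the "breaking point".
    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "http://localhost:8080/health"  # hypothetical endpoint

    def call(_):
        try:
            with urlopen(URL, timeout=5) as resp:
                return resp.status == 200
        except Exception:
            return False

    for users in (10, 50, 100, 200, 400):
        start = time.time()
        with ThreadPoolExecutor(max_workers=users) as pool:
            results = list(pool.map(call, range(users)))
        elapsed = time.time() - start
        error_rate = 1 - sum(results) / len(results)
        print(f"users={users}: {users / elapsed:.0f} req/s, "
              f"errors={error_rate:.0%}")
        if error_rate > 0.05:
            print(f"breaking point near {users} concurrent users")
            break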