Classes of Attack
Authentication
The Authentication section covers attacks that target a web site's method of validating the identity of a user, service or application. Authentication is performed using at least one of three mechanisms: "something you have", "something you know" or "something you are". This section will discuss the attacks used to circumvent or exploit the authentication process of a web site.
* Brute Force
A Brute Force attack is an automated process of trial and error used to guess a person's username, password, credit-card number or cryptographic key.
* Insufficient Authentication
Insufficient Authentication occurs when a web site permits an attacker to access sensitive content or functionality without having to properly authenticate.
* Weak Password Recovery Validation
Weak Password Recovery Validation is when a web site permits an attacker to illegally obtain, change or recover another user's password.
Authorization
The Authorization section covers attacks that target a web site's method of determining if a user, service, or application has the necessary permissions to perform a requested action. For example, many web sites should only allow certain users to access specific content or functionality. Other times a user's access to other resources might be restricted. Using various techniques, an attacker can fool a web site into increasing their privileges to protected areas.
* Credential/Session Prediction
Credential/Session Prediction is a method of hijacking or impersonating a web site user.
* Insufficient Authorization
Insufficient Authorization is when a web site permits access to sensitive content or functionality that should require increased access control restrictions.
* Insufficient Session Expiration
Insufficient Session Expiration is when a web site permits an attacker to reuse old session credentials or session IDs for authorization.
* Session Fixation
Session Fixation is an attack technique that forces a user's session ID to an explicit value.
Client-side Attacks
The Client-side Attacks section focuses on the abuse or exploitation of a web site's users. When a user visits a web site, trust is established between the two parties both technologically and psychologically. A user expects web sites they visit to deliver valid content. A user also expects the web site not to attack them during their stay. By leveraging these trust relationship expectations, an attacker may employ several techniques to exploit the user.
* Content Spoofing
Content Spoofing is an attack technique used to trick a user into believing that certain content appearing on a web site is legitimate and not from an external source.
* Cross-site Scripting
Cross-site Scripting (XSS) is an attack technique that forces a web site to echo attacker-supplied executable code, which loads in a user's browser.
Command Execution
The Command Execution section covers attacks designed to execute remote commands on the web site. All web sites utilize user-supplied input to fulfill requests. Often these user-supplied data are used to construct commands resulting in dynamic web page content. If this process is done insecurely, an attacker could alter command execution.
* Buffer Overflow
Buffer Overflow exploits are attacks that alter the flow of an application by overwriting parts of memory.
* Format String Attack
Format String Attacks alter the flow of an application by using string formatting library features to access other memory space.
* LDAP Injection
LDAP Injection is an attack technique used to exploit web sites that construct LDAP statements from user-supplied input.
* OS Commanding
OS Commanding is an attack technique used to exploit web sites by executing Operating System commands through manipulation of application input.
* SQL Injection
SQL Injection is an attack technique used to exploit web sites that construct SQL statements from user-supplied input.
* SSI Injection
SSI Injection (Server-side Include) is a server-side exploit technique that allows an attacker to send code into a web application, which will later be executed locally by the web server.
* XPath Injection
XPath Injection is an attack technique used to exploit web sites that construct XPath queries from user-supplied input.
Information Disclosure
The Information Disclosure section covers attacks designed to acquire system-specific information about a web site, such as the software distribution, version numbers, and patch levels, or the location of backup and temporary files. In most cases, divulging this information is not required to fulfill the needs of the user. Most web sites reveal a certain amount of data, but it is best to limit it whenever possible: the more an attacker learns about a web site, the easier the system becomes to compromise.
* Directory Indexing
Automatic directory listing/indexing is a web server function that lists all of the files within a requested directory if the normal base file is not present.
* Information Leakage
Information Leakage is when a web site reveals sensitive data, such as developer comments or error messages, which may aid an attacker in exploiting the system.
* Path Traversal
The Path Traversal attack technique forces access to files, directories, and commands that potentially reside outside the web document root directory.
* Predictable Resource Location
Predictable Resource Location is an attack technique used to uncover hidden web site content and functionality.
Logical Attacks
The Logical Attacks section focuses on the abuse or exploitation of a web application's logic flow. Application logic is the expected procedural flow used in order to perform a certain action. Password recovery, account registration, auction bidding, and eCommerce purchases are all examples of application logic. A web site may require a user to correctly perform a specific multi-step process to complete a particular action. An attacker may be able to circumvent or misuse these features to harm a web site and its users.
* Abuse of Functionality
Abuse of Functionality is an attack technique that uses a web site's own features and functionality to consume resources, defraud users, or circumvent access control mechanisms.
* Denial of Service
Denial of Service (DoS) is an attack technique with the intent of preventing a web site from serving normal user activity.
* Insufficient Anti-automation
Insufficient Anti-automation is when a web site permits an attacker to automate a process that should only be performed manually.
* Insufficient Process Validation
Insufficient Process Validation is when a web site permits an attacker to bypass or circumvent the intended flow control of an application.
Tuesday, December 9, 2008
SQL Injection
SQL Injection is an attack technique used to exploit web sites that construct SQL statements from user-supplied input.
Wikipedia Definition:
SQL injection is a technique that exploits a security vulnerability occurring in the database layer of an application. The vulnerability is present when user input is either incorrectly filtered for string literal escape characters embedded in SQL statements or user input is not strongly typed and thereby unexpectedly executed. It is in fact an instance of a more general class of vulnerabilities that can occur whenever one programming or scripting language is embedded inside another.
Structured Query Language (SQL) is a specialized programming language for sending queries to databases. Most small and industrial-strength database applications can be accessed using SQL statements. SQL is both an ANSI and an ISO standard. However, many database products supporting SQL do so with proprietary extensions to the standard language. Web applications may use user-supplied input to create custom SQL statements for dynamic web page requests.
When a web application fails to properly sanitize user-supplied input, it is possible for an attacker to alter the construction of backend SQL statements. When an attacker is able to modify a SQL statement, the process will run with the same permissions as the component that executed the command. (e.g. Database server, Web application server, Web server, etc.). The impact of this attack can allow attackers to gain total control of the database or even execute commands on the system.
The same advanced exploitation techniques available in LDAP Injection can also be similarly applied to SQL Injection.
Example
A web-based authentication form might have code that looks like the following:

SQLQuery = "SELECT Username FROM Users WHERE Username = '" & strUsername & "' AND Password = '" & strPassword & "'"
strAuthCheck = GetQueryResult(SQLQuery)
In this code, the developer is taking the user-input from the form and embedding it directly into an SQL query.
Suppose an attacker submits a login and password that look like the following:

Login: ' OR ''='
Password: ' OR ''='

This will cause the resulting SQL query to become:

SELECT Username FROM Users WHERE Username = '' OR ''='' AND Password = '' OR ''=''
Instead of comparing the user-supplied data with entries in the Users table, the query compares '' (empty string) to '' (empty string). This will return a True result and the attacker will then be logged in as the first user in the Users table.
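The bypass, and the standard defense against it, can be reproduced with a short sketch. This is a hypothetical login check against an in-memory SQLite database; the table, the user "alice", and the helper names are invented for illustration:

```python
import sqlite3

# Throwaway in-memory database with a single user.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE Users (Username TEXT, Password TEXT)")
db.execute("INSERT INTO Users VALUES ('alice', 's3cret')")

def login_vulnerable(username, password):
    # Mirrors the concatenated query above: attacker input becomes SQL syntax.
    query = ("SELECT Username FROM Users WHERE Username = '" + username +
             "' AND Password = '" + password + "'")
    return db.execute(query).fetchone()

def login_safe(username, password):
    # Placeholders keep the input as data, never as SQL syntax.
    query = "SELECT Username FROM Users WHERE Username = ? AND Password = ?"
    return db.execute(query, (username, password)).fetchone()

payload = "' OR ''='"
print(login_vulnerable(payload, payload))  # ('alice',) -- logged in as the first user
print(login_safe(payload, payload))        # None -- no matching credentials
```

Parameterized queries (placeholders) keep user input as data rather than executable SQL, which is the primary defense against this class of attack.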
There are two commonly known methods of SQL injection: Normal SQL Injection and Blind SQL Injection. The first is vanilla SQL Injection in which the attacker can format his query to match the developer's by using the information contained in the error messages that are returned in the response.
Normal SQL Injection
By appending a union select statement to the parameter, the attacker can test to see if he can gain access to the database:
http://example/article.asp?ID=2+union+all+select+name+from+sysobjects
The SQL server might then return an error similar to this:

Microsoft OLE DB Provider for ODBC Drivers error '80040e14'
[Microsoft][ODBC SQL Server Driver][SQL Server]All queries in an SQL statement containing a UNION operator must have an equal number of expressions in their target lists.
This tells the attacker that he must now guess the correct number of columns for his SQL statement to work.
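The guessing is easy to automate. The sketch below simulates a hypothetical vulnerable endpoint with an in-memory SQLite database (the articles table and the fetch function are invented for illustration) and appends UNION ALL SELECT with an increasing number of NULLs until the column-mismatch error disappears:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE articles (id INTEGER, title TEXT, body TEXT)")
db.execute("INSERT INTO articles VALUES (2, 'Hello', 'World')")

def fetch_article(raw_id):
    # Hypothetical vulnerable endpoint: the parameter is pasted into the SQL.
    return db.execute(
        "SELECT title, body FROM articles WHERE id = " + raw_id).fetchall()

def guess_column_count(max_cols=10):
    # Try UNION ALL SELECT with 1, 2, 3, ... NULLs; the first attempt that
    # does not raise a column-mismatch error reveals the column count.
    for n in range(1, max_cols + 1):
        nulls = ", ".join(["NULL"] * n)
        try:
            fetch_article("2 UNION ALL SELECT " + nulls)
            return n  # query succeeded: the target list has n columns
        except sqlite3.OperationalError:
            continue  # mismatched number of result columns; keep trying
    return None

print(guess_column_count())  # 2
```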
Blind SQL Injection
In Blind SQL Injection, instead of returning a database error, the server returns a customer-friendly error page informing the user that a mistake has been made. In this instance, SQL Injection is still possible, but not as easy to detect. A common way to detect Blind SQL Injection is to put a false and true statement into the parameter value.
Executing the following request to a web site:
http://example/article.asp?ID=2+and+1=1
should return the same web page as:
http://example/article.asp?ID=2
because the SQL statement 'and 1=1' is always true.
Executing the following request to a web site:
http://example/article.asp?ID=2+and+1=0
would then cause the web site to return a friendly error or no page at all. This is because the SQL statement "and 1=0" is always false.
Once the attacker discovers that a site is susceptible to Blind SQL Injection, he can exploit this vulnerability more easily, in some cases, than by using normal SQL Injection.
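The true/false probe described above can be sketched as follows, using a hypothetical page that swallows database errors and returns the same friendly message for any failure (again simulated with an in-memory SQLite database; the table and function names are invented for illustration):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE articles (id INTEGER, title TEXT)")
db.execute("INSERT INTO articles VALUES (2, 'Hello')")

def render_page(raw_id):
    # Blind-injectable page: database errors and empty results both yield
    # the same friendly page, so no error detail ever leaks to the attacker.
    try:
        rows = db.execute(
            "SELECT title FROM articles WHERE id = " + raw_id).fetchall()
        return rows[0][0] if rows else "Sorry, something went wrong."
    except sqlite3.Error:
        return "Sorry, something went wrong."

baseline = render_page("2")
true_case = render_page("2 AND 1=1")   # always-true clause: same page
false_case = render_page("2 AND 1=0")  # always-false clause: friendly error

print(baseline == true_case)   # True  -> the parameter is injectable
print(baseline == false_case)  # False
```

When the always-true probe matches the baseline page and the always-false probe does not, the attacker has confirmed that the parameter is evaluated as SQL, without ever seeing a database error.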
Monday, December 1, 2008
ISO 9000
ISO 9000 is a family of standards for quality management systems. ISO 9000 is maintained by ISO, the International Organization for Standardization, and is administered by accreditation and certification bodies. Some of the requirements in ISO 9001 (which is one of the standards in the ISO 9000 family) include:
* a set of procedures that cover all key processes in the business;
* monitoring processes to ensure they are effective;
* keeping adequate records;
* checking output for defects, with appropriate corrective action where necessary;
* regularly reviewing individual processes and the quality system itself for effectiveness; and
* facilitating continual improvement.
A company or organization that has been independently audited and certified to be in conformance with ISO 9001 may publicly state that it is "ISO 9001 certified" or "ISO 9001 registered". Certification to an ISO 9000 standard does not guarantee any quality of end products and services; rather, it certifies that formalized business processes are being applied. Indeed, some companies enter the ISO 9001 certification as a marketing tool.
Although the standards originated in manufacturing, they are now employed across several types of organization. A "product", in ISO vocabulary, can mean a physical object, services, or software. In fact, according to ISO in 2004, "service sectors now account by far for the highest number of ISO 9001:2000 certificates - about 31% of the total."
ISO 9000 family
ISO 9000 includes standards:
* ISO 9000:2000, Quality management systems – Fundamentals and vocabulary. Covers the basics of what quality management systems are and contains the core language of the ISO 9000 series of standards. It is a guidance document, not used for certification purposes, but an important reference for understanding the terms and vocabulary of quality management systems. A revised edition, ISO 9000:2005, was published in 2005, and it is now advisable to refer to that version.
* ISO 9001:2000 Quality management systems – Requirements is intended for use in any organization which designs, develops, manufactures, installs and/or services any product or provides any form of service. It provides a number of requirements which an organization needs to fulfill if it is to achieve customer satisfaction through consistent products and services which meet customer expectations. It includes a requirement for the continual (i.e. planned) improvement of the Quality Management System, for which ISO 9004:2000 provides many hints.
This is the only implementation for which third-party auditors may grant certification. Note that certification is not described as one of the 'needs' of an organization as a driver for using ISO 9001 (see ISO 9001:2000 section 1 'Scope'), although the standard does recognize that it may be used for such a purpose (see ISO 9001:2000 section 0.1 'Introduction').
* ISO 9004:2000 Quality management systems – Guidelines for performance improvements covers continual improvement. It gives advice on what you could do to enhance a mature system. This standard very specifically states that it is not intended as a guide to implementation.
There are many more standards in the ISO 9001 family (see "List of ISO 9000 standards" from ISO), many of them not even carrying "ISO 900x" numbers. For example, some standards in the 10,000 range are considered part of the 9000 family: ISO 10007:1995 discusses Configuration management, which for most organizations is just one element of a complete management system. ISO notes: "The emphasis on certification tends to overshadow the fact that there is an entire family of ISO 9000 standards ... Organizations stand to obtain the greatest value when the standards in the new core series are used in an integrated manner, both with each other and with the other standards making up the ISO 9000 family as a whole".
Note that the previous members of the ISO 9000 family, 9001, 9002 and 9003, have all been integrated into 9001. In most cases, an organization claiming to be "ISO 9000 registered" is referring to ISO 9001.
Contents of ISO 9001
ISO 9001:2000 Quality management systems — Requirements is a document of approximately 30 pages which is available from the national standards organization in each country. Outline contents are as follows:
* Page iv: Foreword
* Pages v to vii: Section 0 Introduction
* Pages 1 to 14: Requirements
o Section 1: Scope
o Section 2: Normative Reference
o Section 3: Terms and definitions (specific to ISO 9001, not specified in ISO 9000)
* Pages 2 to 14
o Section 4: Quality Management System
o Section 5: Management Responsibility
o Section 6: Resource Management
o Section 7: Product Realization
o Section 8: Measurement, analysis and improvement
In effect, users need to address all sections 1 to 8, but only 4 to 8 need implementing within a QMS.
* Pages 15 to 22: Tables of Correspondence between ISO 9001 and other standards
* Page 23: Bibliography
The standard specifies six compulsory documents:
* Control of Documents (4.2.3)
* Control of Records (4.2.4)
* Internal Audits (8.2.2)
* Control of Nonconforming Product / Service (8.3)
* Corrective Action (8.5.2)
* Preventive Action (8.5.3)
In addition to these, ISO 9001:2000 requires a Quality Policy and Quality Manual (which may or may not include the above documents).
Summary of ISO 9001:2000 in informal language
* The quality policy is a formal statement from management, closely linked to the business and marketing plan and to customer needs. The quality policy is understood and followed at all levels and by all employees. Each employee needs measurable objectives to work towards.
* Decisions about the quality system are made based on recorded data and the system is regularly audited and evaluated for conformance and effectiveness.
* Records should show how and where raw materials and products were processed, to allow products and problems to be traced to the source.
* You need a documented procedure to control quality documents in your company. Everyone must have access to up-to-date documents and be aware of how to use them.
* To maintain the quality system and produce conforming product, you need to provide suitable infrastructure, resources, information, equipment, measuring and monitoring devices, and environmental conditions.
* You need to map out all key processes in your company; control them by monitoring, measurement and analysis; and ensure that product quality objectives are met. If you can’t monitor a process by measurement, then make sure the process is well enough defined that you can make adjustments if the product does not meet user needs.
* For each product your company makes, you need to establish quality objectives; plan processes; and document and measure results to use as a tool for improvement. For each process, determine what kind of procedural documentation is required (note: a “product” is hardware, software, services, processed materials, or a combination of these).
* You need to determine key points where each process requires monitoring and measurement, and ensure that all monitoring and measuring devices are properly maintained and calibrated.
* You need to have clear requirements for purchased product.
* You need to determine customer requirements and create systems for communicating with customers about product information, inquiries, contracts, orders, feedback and complaints.
* When developing new products, you need to plan the stages of development, with appropriate testing at each stage. You need to test and document whether the product meets design requirements, regulatory requirements and user needs.
* You need to regularly review performance through internal audits and meetings. Determine whether the quality system is working and what improvements can be made. Deal with past problems and potential problems. Keep records of these activities and the resulting decisions, and monitor their effectiveness (note: you need a documented procedure for internal audits).
* You need documented procedures for dealing with actual and potential nonconformances (problems involving suppliers or customers, or internal problems). Make sure no one uses bad product, determine what to do with bad product, deal with the root cause of the problem and keep records to use as a tool to improve the system.
1987 version
ISO 9000:1987 had the same structure as the UK Standard BS 5750, with three 'models' for quality management systems, the selection of which was based on the scope of activities of the organization:
* ISO 9001:1987 Model for quality assurance in design, development, production, installation, and servicing was for companies and organizations whose activities included the creation of new products.
* ISO 9002:1987 Model for quality assurance in production, installation, and servicing had basically the same material as ISO 9001 but without covering the creation of new products.
* ISO 9003:1987 Model for quality assurance in final inspection and test covered only the final inspection of finished product, with no concern for how the product was produced.
ISO 9000:1987 was also influenced by existing U.S. and other Defense Standards ("MIL SPECS"), and so was well-suited to manufacturing. The emphasis tended to be placed on conformance with procedures rather than the overall process of management—which was likely the actual intent.
1994 version
ISO 9000:1994 emphasized quality assurance via preventive actions, instead of just checking final product, and continued to require evidence of compliance with documented procedures. As with the first edition, the down-side was that companies tended to implement its requirements by creating shelf-loads of procedure manuals, and becoming burdened with an ISO bureaucracy. In some companies, adapting and improving processes could actually be impeded by the quality system.
2000 version
ISO 9001:2000 combines the three standards 9001, 9002, and 9003 into one, called 9001. Design and development procedures are required only if a company does in fact engage in the creation of new products. The 2000 version sought to make a radical change in thinking by actually placing the concept of process management front and center ("Process management" was the monitoring and optimizing of a company's tasks and activities, instead of just inspecting the final product). The 2000 version also demands involvement by upper executives, in order to integrate quality into the business system and avoid delegation of quality functions to junior administrators. Another goal is to improve effectiveness via process performance metrics — numerical measurement of the effectiveness of tasks and activities. Expectations of continual process improvement and tracking customer satisfaction were made explicit.
The ISO 9000 standard is continually being revised by standing technical committees and advisory groups, who receive feedback from those professionals who are implementing the standard.
2008 version
ISO 9001:2008 only introduces clarifications to the existing requirements of ISO 9001:2000 and some changes intended to improve consistency with ISO14001:2004. There are no new requirements. A quality management system being upgraded just needs to be checked to see if it is following the clarifications introduced in the amended version.
Certification
ISO does not itself certify organizations. Many countries have formed accreditation bodies to authorize certification bodies, which audit organizations applying for ISO 9001 compliance certification. Although commonly referred to as ISO 9000:2000 certification, the actual standard to which an organization's quality management can be certified is ISO 9001:2000. Both the accreditation bodies and the certification bodies charge fees for their services. The various accreditation bodies have mutual agreements with each other to ensure that certificates issued by one of the Accredited Certification Bodies (CB) are accepted worldwide.
The applying organization is assessed based on an extensive sample of its sites, functions, products, services and processes; a list of problems ("action requests" or "non-compliances") is made known to the management. If there are no major problems on this list, the certification body will issue an ISO 9001 certificate for each geographical site it has visited, once it receives a satisfactory improvement plan from the management showing how any problems will be resolved.
An ISO certificate is not a once-and-for-all award, but must be renewed at regular intervals recommended by the certification body, usually around three years. In contrast to the Capability Maturity Model there are no grades of competence within ISO 9001.
Auditing
Two types of auditing are required to become registered to the standard: auditing by an external certification body (external audit) and audits by internal staff trained for this process (internal audits). The aim is a continual process of review and assessment, to verify that the system is working as it's supposed to, find out where it can improve and to correct or prevent problems identified. It is considered healthier for internal auditors to audit outside their usual management line, so as to bring a degree of independence to their judgments.
Under the 1994 standard, the auditing process could be adequately addressed by performing "compliance auditing":
* Tell me what you do (describe the business process)
* Show me where it says that (reference the procedure manuals)
* Prove that that is what happened (exhibit evidence in documented records)
How this led to preventive actions was not clear.
The 2000 standard uses the process approach. While auditors perform similar functions, they are expected to go beyond mere auditing for rote "compliance" by focusing on risk, status and importance. This means they are expected to make more judgments on what is effective, rather than merely adhering to what is formally prescribed. The difference from the previous standard can be explained thus:
Under the 1994 version, the question was broadly "Are you doing what the manual says you should be doing?", whereas under the 2000 version, the question is more "Will this process help you achieve your stated objectives? Is it a good process or is there a way to do it better?".
The ISO 19011 standard for auditing applies to ISO 9001 as well as to other management systems such as EMS (ISO 14001) and FSMS (ISO 22000).
Industry-specific interpretations
The ISO 9001 standard is generalized and abstract. Its parts must be carefully interpreted to make sense within a particular organization. Developing software is not like making cheese or offering counseling services; yet the ISO 9001 guidelines, because they are business management guidelines, can be applied to each of these. Diverse organizations—police departments (US), professional soccer teams (Mexico) and city councils (UK)—have successfully implemented ISO 9001:2000 systems.
Over time, various industry sectors have wanted to standardize their interpretations of the guidelines within their own marketplace. This is partly to ensure that their versions of ISO 9000 have their specific requirements, but also to try and ensure that more appropriately trained and experienced auditors are sent to assess them.
* The TickIT guidelines are an interpretation of ISO 9000 produced by the UK Board of Trade to suit the processes of the information technology industry, especially software development.
* AS 9000 is the Aerospace Basic Quality System Standard, an interpretation developed by major aerospace manufacturers. Those major manufacturers include AlliedSignal, Allison Engine, Boeing, General Electric Aircraft Engines, Lockheed-Martin, McDonnell Douglas, Northrop Grumman, Pratt & Whitney, Rockwell-Collins, Sikorsky Aircraft, and Sundstrand. The current version is AS 9100.
* PS 9000 is an application of the standard for Pharmaceutical Packaging Materials. The Pharmaceutical Quality Group (PQG) of the Institute of Quality Assurance (IQA) has developed PS 9000:2001. It aims to provide a widely accepted baseline GMP framework of best practice within the pharmaceutical packaging supply industry. It applies ISO 9001:2000 to pharmaceutical printed and contact packaging materials.
* QS 9000 is an interpretation agreed upon by major automotive manufacturers (GM, Ford, Chrysler). It includes techniques such as FMEA and APQP. QS 9000 is now replaced by ISO/TS 16949.
* ISO/TS 16949:2002 is an interpretation agreed upon by major automotive manufacturers (American and European manufacturers); the latest version is based on ISO 9001:2000. The emphasis on a process approach is stronger than in ISO 9001:2000. ISO/TS 16949:2002 contains the full text of ISO 9001:2000 and automotive industry-specific requirements.
* TL 9000 is the Telecom Quality Management and Measurement System Standard, an interpretation developed by the telecom consortium, QuEST Forum. The current version is 4.0 and unlike ISO 9001 or the above sector standards, TL 9000 includes standardized product measurements that can be benchmarked. In 1998 QuEST Forum developed the TL 9000 Quality Management System to meet the supply chain quality requirements of the worldwide telecommunications industry.
* ISO 13485:2003 is the medical industry's equivalent of ISO 9001:2000. Whereas the standards it replaces were interpretations of how to apply ISO 9001 and ISO 9002 to medical devices, ISO 13485:2003 is a stand-alone standard. Compliance with ISO 13485 does not necessarily mean compliance with ISO 9001:2000.
Debate on the effectiveness of ISO 9000
The debate on the effectiveness of ISO 9000 commonly centers on the following questions:
1. Are the quality principles in ISO 9001:2000 of value? (Note that the version date is important: in the 2000 version ISO attempted to address many concerns and criticisms of ISO 9000:1994).
2. Does it help to implement an ISO 9001:2000 compliant quality management system?
3. Does it help to obtain ISO 9001:2000 certification?
Advantages
It is widely acknowledged that proper quality management improves business, often having a positive effect on investment, market share, sales growth, sales margins, competitive advantage, and avoidance of litigation.[3][4] The quality principles in ISO 9000:2000 are also sound, according to Wade,[5] and Barnes, [4] who says "ISO 9000 guidelines provide a comprehensive model for quality management systems that can make any company competitive." Barnes also cites a survey by Lloyd's Register Quality Assurance which indicated that ISO 9000 increased net profit, and another by Deloitte-Touche which reported that the costs of registration were recovered in three years. According to the Providence Business News [6], implementing ISO often gives the following advantages:
1. Create a more efficient, effective operation
2. Increase customer satisfaction and retention
3. Reduce audits
4. Enhance marketing
5. Improve employee motivation, awareness, and morale
6. Promote international trade
7. Increases profit
8. Reduce waste and increases productivity
However, a broad statistical study of 800 Spanish companies [7] found that ISO 9000 registration in itself creates little improvement because companies interested in it have usually already made some type of commitment to quality management and were performing just as well before registration.[3]
In today's service-sector driven economy, more and more companies are using ISO 9000 as a business tool. Through the use of properly stated quality objectives, customer satisfaction surveys and a well-defined continual improvement program companies are using ISO 9000 processes to increase their efficiency and profitability.
Problems
A common criticism of ISO 9001 is the amount of money, time and paperwork required for registration.[8] According to Barnes, "Opponents claim that it is only for documentation. Proponents believe that if a company has documented its quality systems, then most of the paperwork has already been completed."[4]
According to Seddon, ISO 9001 promotes specification, control, and procedures rather than understanding and improvement. [9] [10] Wade argues that ISO 9000 is effective as a guideline, but that promoting it as a standard "helps to mislead companies into thinking that certification means better quality, ... [undermining] the need for an organization to set its own quality standards." [5] Paraphrased, Wade's argument is that total, blind reliance on the specifications of ISO 9001 does not guarantee a successful quality system.
The standard is seen as especially prone to failure when a company is interested in certification before quality.[9] Certifications are in fact often based on customer contractual requirements rather than a desire to actually improve quality.[4][11] "If you just want the certificate on the wall, chances are, you will create a paper system that doesn't have much to do with the way you actually run your business," said ISO's Roger Frost.[11] Certification by an independent auditor is often seen as the problem area, and according to Barnes, "has become a vehicle to increase consulting services." [4] In fact, ISO itself advises that ISO 9001 can be implemented without certification, simply for the quality benefits that can be achieved. [12]
Another problem reported is the competition among the numerous certifying bodies, leading to a softer approach to the defects noticed in the operation of the Quality System of a firm.
Summary
A good overview for effective use of ISO 9000 is provided by Barnes: [4]
"Good business judgment is needed to determine its proper role for a company... Is certification itself important to the marketing plans of the company? If not, do not rush to certification... Even without certification, companies should utilize the ISO 9000 model as a benchmark to assess the adequacy of its quality programs."
Among other things, ISO 9001 requires:
* a set of procedures that cover all key processes in the business;
* monitoring processes to ensure they are effective;
* keeping adequate records;
* checking output for defects, with appropriate corrective action where necessary;
* regularly reviewing individual processes and the quality system itself for effectiveness; and
* facilitating continual improvement
A company or organization that has been independently audited and certified to be in conformance with ISO 9001 may publicly state that it is "ISO 9001 certified" or "ISO 9001 registered". Certification to an ISO 9000 standard does not guarantee any quality of end products and services; rather, it certifies that formalized business processes are being applied. Indeed, some companies pursue ISO 9001 certification primarily as a marketing tool.
Although the standards originated in manufacturing, they are now employed across several types of organization. A "product", in ISO vocabulary, can mean a physical object, services, or software. In fact, according to ISO in 2004, "service sectors now account by far for the highest number of ISO 9001:2000 certificates - about 31% of the total."
ISO 9000 family
ISO 9000 includes standards:
* ISO 9000:2000, Quality management systems – Fundamentals and vocabulary. Covers the basics of what quality management systems are and contains the core language of the ISO 9000 series of standards. It is a guidance document, not used for certification purposes, but an important reference for understanding the terms and vocabulary related to quality management systems. A revised edition, ISO 9000:2005, was published in 2005, and it is now advisable to refer to that version.
* ISO 9001:2000 Quality management systems – Requirements is intended for use in any organization which designs, develops, manufactures, installs and/or services any product or provides any form of service. It provides a number of requirements which an organization needs to fulfill if it is to achieve customer satisfaction through consistent products and services which meet customer expectations. It includes a requirement for the continual (i.e. planned) improvement of the Quality Management System, for which ISO 9004:2000 provides many hints.
This is the only implementation for which third-party auditors may grant certification. Certification is not described as one of the 'needs' of an organization driving the use of ISO 9001 (see ISO 9001:2000 section 1, 'Scope'), but the standard does recognize that it may be used for such a purpose (see section 0.1, 'Introduction').
* ISO 9004:2000 Quality management systems – Guidelines for performance improvements covers continual improvement, giving advice on what you could do to enhance a mature system. This standard states very specifically that it is not intended as a guide to implementation.
There are many more standards in the ISO 9000 family (see "List of ISO 9000 standards" from ISO), many of them not even carrying "ISO 900x" numbers. For example, some standards in the 10,000 range are considered part of the 9000 family: ISO 10007:1995 discusses Configuration management, which for most organizations is just one element of a complete management system. ISO notes: "The emphasis on certification tends to overshadow the fact that there is an entire family of ISO 9000 standards ... Organizations stand to obtain the greatest value when the standards in the new core series are used in an integrated manner, both with each other and with the other standards making up the ISO 9000 family as a whole".
Note that the previous members of the ISO 9000 family, 9001, 9002 and 9003, have all been integrated into 9001. In most cases, an organization claiming to be "ISO 9000 registered" is referring to ISO 9001.
Contents of ISO 9001
ISO 9001:2000 Quality management systems — Requirements is a document of approximately 30 pages which is available from the national standards organization in each country. Outline contents are as follows:
* Page iv: Foreword
* Pages v to vii: Section 0 Introduction
* Pages 1 to 14: Requirements
o Section 1: Scope
o Section 2: Normative Reference
o Section 3: Terms and definitions (specific to ISO 9001, not specified in ISO 9000)
* Pages 2 to 14
o Section 4: Quality Management System
o Section 5: Management Responsibility
o Section 6: Resource Management
o Section 7: Product Realization
o Section 8: Measurement, analysis and improvement
In effect, users need to address all of sections 1 to 8, but only sections 4 to 8 need to be implemented within a QMS.
* Pages 15 to 22: Tables of Correspondence between ISO 9001 and other standards
* Page 23: Bibliography
The standard specifies six compulsory documents:
* Control of Documents (4.2.3)
* Control of Records (4.2.4)
* Internal Audits (8.2.2)
* Control of Nonconforming Product / Service (8.3)
* Corrective Action (8.5.2)
* Preventive Action (8.5.3)
In addition to these, ISO 9001:2000 requires a Quality Policy and Quality Manual (which may or may not include the above documents).
Summary of ISO 9001:2000 in informal language
* The quality policy is a formal statement from management, closely linked to the business and marketing plan and to customer needs. The quality policy is understood and followed at all levels and by all employees. Each employee needs measurable objectives to work towards.
* Decisions about the quality system are made based on recorded data and the system is regularly audited and evaluated for conformance and effectiveness.
* Records should show how and where raw materials and products were processed, to allow products and problems to be traced to the source.
* You need a documented procedure to control quality documents in your company. Everyone must have access to up-to-date documents and be aware of how to use them.
* To maintain the quality system and produce conforming product, you need to provide suitable infrastructure, resources, information, equipment, measuring and monitoring devices, and environmental conditions.
* You need to map out all key processes in your company; control them by monitoring, measurement and analysis; and ensure that product quality objectives are met. If you can’t monitor a process by measurement, then make sure the process is well enough defined that you can make adjustments if the product does not meet user needs.
* For each product your company makes, you need to establish quality objectives; plan processes; and document and measure results to use as a tool for improvement. For each process, determine what kind of procedural documentation is required (note: a “product” is hardware, software, services, processed materials, or a combination of these).
* You need to determine key points where each process requires monitoring and measurement, and ensure that all monitoring and measuring devices are properly maintained and calibrated.
* You need to have clear requirements for purchased product.
* You need to determine customer requirements and create systems for communicating with customers about product information, inquiries, contracts, orders, feedback and complaints.
* When developing new products, you need to plan the stages of development, with appropriate testing at each stage. You need to test and document whether the product meets design requirements, regulatory requirements and user needs.
* You need to regularly review performance through internal audits and meetings. Determine whether the quality system is working and what improvements can be made. Deal with past problems and potential problems. Keep records of these activities and the resulting decisions, and monitor their effectiveness (note: you need a documented procedure for internal audits).
* You need documented procedures for dealing with actual and potential nonconformances (problems involving suppliers or customers, or internal problems). Make sure no one uses bad product, determine what to do with bad product, deal with the root cause of the problem and keep records to use as a tool to improve the system.
1987 version
ISO 9000:1987 had the same structure as the UK Standard BS 5750, with three 'models' for quality management systems, the selection of which was based on the scope of activities of the organization:
* ISO 9001:1987 Model for quality assurance in design, development, production, installation, and servicing was for companies and organizations whose activities included the creation of new products.
* ISO 9002:1987 Model for quality assurance in production, installation, and servicing had basically the same material as ISO 9001 but without covering the creation of new products.
* ISO 9003:1987 Model for quality assurance in final inspection and test covered only the final inspection of finished product, with no concern for how the product was produced.
ISO 9000:1987 was also influenced by existing U.S. and other Defense Standards ("MIL SPECS"), and so was well-suited to manufacturing. The emphasis tended to be placed on conformance with procedures rather than the overall process of management—which was likely the actual intent.
1994 version
ISO 9000:1994 emphasized quality assurance via preventive actions, instead of just checking final product, and continued to require evidence of compliance with documented procedures. As with the first edition, the down-side was that companies tended to implement its requirements by creating shelf-loads of procedure manuals, and becoming burdened with an ISO bureaucracy. In some companies, adapting and improving processes could actually be impeded by the quality system.
2000 version
ISO 9001:2000 combines the three standards 9001, 9002, and 9003 into one, called 9001. Design and development procedures are required only if a company does in fact engage in the creation of new products. The 2000 version sought to make a radical change in thinking by actually placing the concept of process management front and center ("Process management" was the monitoring and optimizing of a company's tasks and activities, instead of just inspecting the final product). The 2000 version also demands involvement by upper executives, in order to integrate quality into the business system and avoid delegation of quality functions to junior administrators. Another goal is to improve effectiveness via process performance metrics — numerical measurement of the effectiveness of tasks and activities. Expectations of continual process improvement and tracking customer satisfaction were made explicit.
The ISO 9000 standard is continually being revised by standing technical committees and advisory groups, who receive feedback from those professionals who are implementing the standard.
2008 version
ISO 9001:2008 introduces only clarifications to the existing requirements of ISO 9001:2000 and some changes intended to improve consistency with ISO 14001:2004. There are no new requirements; a quality management system being upgraded simply needs to be checked against the clarifications introduced in the amended version.
Certification
ISO does not itself certify organizations. Many countries have formed accreditation bodies to authorize certification bodies, which audit organizations applying for ISO 9001 compliance certification. Although commonly referred to as ISO 9000:2000 certification, the actual standard to which an organization's quality management can be certified is ISO 9001:2000. Both the accreditation bodies and the certification bodies charge fees for their services. The various accreditation bodies have mutual agreements with each other to ensure that certificates issued by one of the Accredited Certification Bodies (CB) are accepted worldwide.
The applying organization is assessed based on an extensive sample of its sites, functions, products, services and processes; a list of problems ("action requests" or "non-compliances") is made known to the management. If there are no major problems on this list, the certification body will issue an ISO 9001 certificate for each geographical site it has visited, once it receives a satisfactory improvement plan from the management showing how any problems will be resolved.
An ISO certificate is not a once-and-for-all award, but must be renewed at regular intervals recommended by the certification body, usually around three years. In contrast to the Capability Maturity Model there are no grades of competence within ISO 9001.
Auditing
Two types of auditing are required to become registered to the standard: auditing by an external certification body (external audit) and audits by internal staff trained for this process (internal audits). The aim is a continual process of review and assessment to verify that the system is working as intended, to find out where it can improve, and to correct or prevent problems identified. It is considered healthier for internal auditors to audit outside their usual management line, so as to bring a degree of independence to their judgments.
Under the 1994 standard, the auditing process could be adequately addressed by performing "compliance auditing":
* Tell me what you do (describe the business process)
* Show me where it says that (reference the procedure manuals)
* Prove that that is what happened (exhibit evidence in documented records)
How this led to preventive actions was not clear.
The 2000 standard uses the process approach. While auditors perform similar functions, they are expected to go beyond mere auditing for rote "compliance" by focusing on risk, status and importance. This means they are expected to make more judgments on what is effective, rather than merely adhering to what is formally prescribed. The difference from the previous standard can be explained thus:
Under the 1994 version, the question was broadly "Are you doing what the manual says you should be doing?", whereas under the 2000 version, the question is more "Will this process help you achieve your stated objectives? Is it a good process or is there a way to do it better?".
The ISO 19011 standard for auditing applies to ISO 9001 as well as to other management systems, such as EMS (ISO 14001) and FSMS (ISO 22000).
Industry-specific interpretations
The ISO 9001 standard is generalized and abstract. Its parts must be carefully interpreted to make sense within a particular organization. Developing software is not like making cheese or offering counseling services; yet the ISO 9001 guidelines, because they are business management guidelines, can be applied to each of these. Diverse organizations—police departments (US), professional soccer teams (Mexico) and city councils (UK)—have successfully implemented ISO 9001:2000 systems.
Over time, various industry sectors have wanted to standardize their interpretations of the guidelines within their own marketplace. This is partly to ensure that their versions of ISO 9000 reflect their sector-specific requirements, but also to ensure that appropriately trained and experienced auditors are sent to assess them.
* The TickIT guidelines are an interpretation of ISO 9000 produced by the UK Board of Trade to suit the processes of the information technology industry, especially software development.
* AS 9000 is the Aerospace Basic Quality System Standard, an interpretation developed by major aerospace manufacturers. Those major manufacturers include AlliedSignal, Allison Engine, Boeing, General Electric Aircraft Engines, Lockheed-Martin, McDonnell Douglas, Northrop Grumman, Pratt & Whitney, Rockwell-Collins, Sikorsky Aircraft, and Sundstrand. The current version is AS 9100.
* PS 9000 is an application of the standard for Pharmaceutical Packaging Materials. The Pharmaceutical Quality Group (PQG) of the Institute of Quality Assurance (IQA) has developed PS 9000:2001. It aims to provide a widely accepted baseline GMP framework of best practice within the pharmaceutical packaging supply industry. It applies ISO 9001:2000 to pharmaceutical printed and contact packaging materials.
* QS 9000 is an interpretation agreed upon by major automotive manufacturers (GM, Ford, Chrysler). It includes techniques such as FMEA and APQP. QS 9000 has now been replaced by ISO/TS 16949.
* ISO/TS 16949:2002 is an interpretation agreed upon by major automotive manufacturers (American and European manufacturers); the latest version is based on ISO 9001:2000. The emphasis on a process approach is stronger than in ISO 9001:2000. ISO/TS 16949:2002 contains the full text of ISO 9001:2000 and automotive industry-specific requirements.
* TL 9000 is the Telecom Quality Management and Measurement System Standard, an interpretation developed in 1998 by the telecom consortium QuEST Forum to meet the supply chain quality requirements of the worldwide telecommunications industry. The current version is 4.0. Unlike ISO 9001 or the above sector standards, TL 9000 includes standardized product measurements that can be benchmarked.
* ISO 13485:2003 is the medical industry's equivalent of ISO 9001:2000. Whereas the standards it replaces were interpretations of how to apply ISO 9001 and ISO 9002 to medical devices, ISO 13485:2003 is a stand-alone standard. Compliance with ISO 13485 does not necessarily mean compliance with ISO 9001:2000.
Debate on the effectiveness of ISO 9000
The debate on the effectiveness of ISO 9000 commonly centers on the following questions:
1. Are the quality principles in ISO 9001:2000 of value? (Note that the version date is important: in the 2000 version ISO attempted to address many concerns and criticisms of ISO 9000:1994).
2. Does it help to implement an ISO 9001:2000 compliant quality management system?
3. Does it help to obtain ISO 9001:2000 certification?
Advantages
It is widely acknowledged that proper quality management improves business, often having a positive effect on investment, market share, sales growth, sales margins, competitive advantage, and avoidance of litigation.[3][4] The quality principles in ISO 9000:2000 are also sound, according to Wade[5] and Barnes,[4] who says "ISO 9000 guidelines provide a comprehensive model for quality management systems that can make any company competitive." Barnes also cites a survey by Lloyd's Register Quality Assurance which indicated that ISO 9000 increased net profit, and another by Deloitte-Touche which reported that the costs of registration were recovered in three years. According to the Providence Business News,[6] implementing ISO often gives the following advantages:
1. Creates a more efficient, effective operation
2. Increases customer satisfaction and retention
3. Reduces audits
4. Enhances marketing
5. Improves employee motivation, awareness, and morale
6. Promotes international trade
7. Increases profit
8. Reduces waste and increases productivity
However, a broad statistical study of 800 Spanish companies [7] found that ISO 9000 registration in itself creates little improvement because companies interested in it have usually already made some type of commitment to quality management and were performing just as well before registration.[3]
In today's service-sector-driven economy, more and more companies are using ISO 9000 as a business tool. Through properly stated quality objectives, customer satisfaction surveys, and a well-defined continual improvement program, companies are using ISO 9000 processes to increase their efficiency and profitability.
Problems
A common criticism of ISO 9001 is the amount of money, time and paperwork required for registration.[8] According to Barnes, "Opponents claim that it is only for documentation. Proponents believe that if a company has documented its quality systems, then most of the paperwork has already been completed."[4]
According to Seddon, ISO 9001 promotes specification, control, and procedures rather than understanding and improvement.[9][10] Wade argues that ISO 9000 is effective as a guideline, but that promoting it as a standard "helps to mislead companies into thinking that certification means better quality, ... [undermining] the need for an organization to set its own quality standards."[5] Paraphrased, Wade's argument is that total, blind reliance on the specifications of ISO 9001 does not guarantee a successful quality system.
The standard is seen as especially prone to failure when a company is interested in certification before quality.[9] Certifications are in fact often based on customer contractual requirements rather than a desire to actually improve quality.[4][11] "If you just want the certificate on the wall, chances are, you will create a paper system that doesn't have much to do with the way you actually run your business," said ISO's Roger Frost.[11] Certification by an independent auditor is often seen as the problem area, and according to Barnes, "has become a vehicle to increase consulting services."[4] In fact, ISO itself advises that ISO 9001 can be implemented without certification, simply for the quality benefits that can be achieved.[12]
Another reported problem is competition among the numerous certifying bodies, which can lead to a softer approach to the defects noticed in the operation of a firm's quality system.
Summary
A good overview of the effective use of ISO 9000 is provided by Barnes:[4]
"Good business judgment is needed to determine its proper role for a company... Is certification itself important to the marketing plans of the company? If not, do not rush to certification... Even without certification, companies should utilize the ISO 9000 model as a benchmark to assess the adequacy of its quality programs."
Friday, November 28, 2008
Credit Card Testing
Credit Card Type             Credit Card Number
American Express             378282246310005
American Express             371449635398431
American Express Corporate   378734493671000
Australian BankCard          5610591081018250
Diners Club                  30569309025904
Diners Club                  38520000023237
Discover                     6011111111111117
Discover                     6011000990139424
JCB                          3530111333300000
JCB                          3566002020360505
MasterCard                   5555555555554444
MasterCard                   5105105105105100
Visa                         4111111111111111
Visa                         4012888888881881
Visa                         4222222222222
Note: even though the last Visa number has a different digit count than the other test numbers, it is a correct and functional test number.
Processor-specific Cards
Dankort (PBS)                76009244561
Dankort (PBS)                5019717010103742
Switch/Solo (Paymentech)     6331101999990016
Here is a list of dummy credit card numbers that can be used while testing applications involving credit card transactions:
Visa:       4111-1111-1111-1111
MasterCard: 5431-1111-1111-1111
Amex:       341-1111-1111-1111
Discover:   6011-6011-6011-6611
Credit Card Prefix Numbers:
Visa:       13 or 16 digits starting with 4
MasterCard: 16 digits starting with 5
Discover:   16 digits starting with 6011
AMEX:       15 digits starting with 34 or 37
American Express 378282246310005
________________________________________________________________________________
American Express 371449635398431
________________________________________________________________________________
American Express Corporate 378734493671000
________________________________________________________________________________
Australian BankCard 5610591081018250
________________________________________________________________________________
Diners Club 30569309025904
________________________________________________________________________________
Diners Club 38520000023237
________________________________________________________________________________
Discover 6011111111111117
________________________________________________________________________________
Discover 6011000990139424
_________________________________________________________________________________
JCB 3530111333300000
_________________________________________________________________________________
JCB 3566002020360505
_________________________________________________________________________________
MasterCard 5555555555554444
_________________________________________________________________________________
MasterCard 5105105105105100
_________________________________________________________________________________
Visa 4111111111111111
_________________________________________________________________________________
Visa 4012888888881881
_________________________________________________________________________________
Visa 4222222222222
_________________________________________________________________________________
Note : Even though this number has a different character count than the other test numbers, it is the correct and functional number.
Processor-specific Cards
Dankort (PBS) 76009244561
________________________________________________________________________________
Dankort (PBS) 5019717010103742
________________________________________________________________________________
Switch/Solo (Paymentech) 6331101999990016
__________________________________________________________________________________
Here is a list of dummy credit card number that can be used while testing your applications involving credit card transactions:
Visa: 4111-1111-1111-1111
_________________________________________________________________________________
MasterCard: 5431-1111-1111-1111
__________________________________________________________________________________
Amex: 341-1111-1111-1111
__________________________________________________________________________________
Discover: 6011-6011-6011-6611
_________________________________________________________________________________
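Whether a candidate card number is well-formed can be checked with the standard Luhn (mod 10) checksum, which the branded test numbers above satisfy. A minimal sketch in Python:

```python
def luhn_valid(card_number: str) -> bool:
    """Return True if the number passes the Luhn (mod 10) checksum."""
    digits = [int(d) for d in card_number if d.isdigit()]
    checksum = 0
    # From the rightmost digit, double every second digit;
    # if doubling yields a two-digit value, subtract 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

print(luhn_valid("4111111111111111"))  # True
print(luhn_valid("378282246310005"))   # True
print(luhn_valid("4111111111111112"))  # False (last digit altered)
```

Note that the Luhn check only confirms the number is well-formed; it says nothing about whether an account actually exists.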
Saturday, October 18, 2008
ACID Testing
ACID stands for:
Atomicity
Consistency
Isolation
Durability
For OLTP databases, and whenever databases handle mission-critical business transactions and critical information, the ACID properties are essential for reliability and stability. For customers that demand high-quality databases to maintain the confidentiality of their information, the ACID properties are what you need.
Atomicity:-
Atomicity describes database operations that are grouped into a single transaction with "all or nothing" semantics: if any single operation fails, the whole transaction fails. This ensures that the database is in a valid state at all times.
For instance, a transaction to transfer money from one bank account to another involves a withdrawal from one account and a deposit to another. If the deposit fails, the withdrawal must also be reversed, or money will be lost. These operations should therefore be grouped into a single atomic transaction to ensure data integrity. Databases provide atomicity in two ways: 1. a rollback feature, and 2. the two-phase commit protocol.
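The rollback approach can be sketched concretely with SQLite via Python's sqlite3 module, where the connection's context manager commits on success and rolls back on an exception. The accounts table and ids below are illustrative, not from any real schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

def transfer(conn, src, dst, amount):
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                         (amount, src))
            cur = conn.execute("SELECT balance FROM accounts WHERE id = ?", (src,))
            if cur.fetchone()[0] < 0:
                raise ValueError("insufficient funds")  # forces a rollback
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                         (amount, dst))
    except ValueError:
        pass  # the withdrawal was rolled back; no money is lost

transfer(conn, "alice", "bob", 30)   # succeeds
transfer(conn, "alice", "bob", 500)  # fails; both accounts are left untouched
print(conn.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall())
# [('alice', 70), ('bob', 80)]
```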
Consistency:-
Consistency of data is maintained by allowing only valid transactions that comply with preset database constraints. Some constraints are:
1. Unique constraint on a column. For example, only one SSN can be allowed per person record; if a duplicate SSN appears, that transaction should be rejected.
2. Rules: some rules check whether data is in a specified format. For example, a salary should always be numeric, and alphabetic input should be rejected.
Many other features of database help ensure data consistency.
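The unique-constraint rule above can be observed directly; the person table and SSN values here are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (name TEXT, ssn TEXT UNIQUE)")
conn.execute("INSERT INTO person VALUES ('Alice', '123-45-6789')")

try:
    # A second record with the same SSN violates the UNIQUE constraint.
    conn.execute("INSERT INTO person VALUES ('Mallory', '123-45-6789')")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True

print(rejected)  # True: the duplicate transaction was rejected
```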
Isolation:-
Isolation means that concurrent transactions do not overlap each other: every transaction appears to occur before or after all other transactions. If a huge number of concurrent transactions is expected, a robust database system with guaranteed isolation is needed. Databases use locking mechanisms to implement isolation: a row, a table, or the entire database can be locked while a particular transaction takes place. Although this ensures isolation, it poses issues with response times.
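The locking idea can be sketched outside a database as well: serializing each read-modify-write prevents concurrent updates from interleaving and losing each other's work. This is a toy illustration, not how a real DBMS implements row locks:

```python
import threading

balance = 0
lock = threading.Lock()

def deposit(amount, times):
    global balance
    for _ in range(times):
        with lock:                      # acquire the "row lock"
            current = balance           # read
            balance = current + amount  # write; no other thread can interleave

threads = [threading.Thread(target=deposit, args=(1, 10000)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)  # 40000 -- without the lock, some updates could be lost
```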
Durability:-
A committed or completed transaction should persist regardless of failures of any kind. For example, a hard disk crash or network unavailability should not affect data already saved in the database. Durability is often achieved via:
1. Database replication, or
2. Load-balanced servers, where two servers run simultaneously on separate machines. Backups alone cannot provide durability for all transactions.
Monday, September 22, 2008
Linear Code Sequence And Jump (LCSAJ)
LCSAJ stands for Linear Code Sequence And Jump. An LCSAJ consists of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.
LCSAJ testing is a test case design technique for a component in which test cases are designed to execute LCSAJs.
An LCSAJ is defined by an unbroken linear sequence of statements:
(a) which begins at either the start of the program or a point to which control flow may jump,
(b) which ends at either the end of the program or a point from which control flow may jump,
(c) and the point to which a jump is made following the sequence.
LCSAJ coverage = I/L
Where: I = Number of LCSAJs exercised at least once.
L = Total number of LCSAJs.
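Once the LCSAJs of a component have been enumerated, the coverage ratio can be computed mechanically. The triples below (start line, end line, jump target) are made-up identifiers, not derived from real code:

```python
# All LCSAJs identified in a component, as (start, end, jump_target) triples.
lcsajs = [(1, 4, 8), (1, 6, 2), (5, 8, 1), (8, 10, -1)]

# The subset actually exercised by the test suite.
exercised = {(1, 4, 8), (8, 10, -1)}

def lcsaj_coverage(all_lcsajs, exercised):
    """Coverage = I / L, where I = LCSAJs exercised at least once."""
    hit = sum(1 for l in all_lcsajs if l in exercised)
    return hit / len(all_lcsajs)

print(lcsaj_coverage(lcsajs, exercised))  # 0.5
```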
LCSAJs depend on the topology of a module's design and not just its semantics, so they do not map onto code structures such as branches and loops. LCSAJs are not easily identifiable from design documentation; they can be identified only once the code has been written. LCSAJs are consequently not easily comprehensible.
Definitions:
LCSAJ: A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.
LCSAJ coverage: The percentage of LCSAJs of a component which are exercised by a test case suite.
LCSAJ testing: A test case design technique for a component in which test cases are designed to execute LCSAJs.
Thursday, September 18, 2008
Traceability Matrix
Definition
A traceability matrix is a table that correlates any two baselined documents that require a many-to-many relationship, to determine the completeness of that relationship. It is often used to map high-level requirements (sometimes known as marketing requirements) and detailed requirements of the software product to the matching parts of the high-level design, detailed design, test plan, and test cases.
Common usage is to take the identifier for each of the items of one document and place them in the left column, while the identifiers for the other document are placed across the top row. When an item in the left column is related to an item across the top, a mark is placed in the intersecting cell. The number of relationships is added up for each row and each column; this value indicates the strength of the mapping between the two items. Zero values indicate that no relationship exists, and it must be determined whether one should be created. Large values imply that the item is too complex and should be simplified.
To ease the creation of traceability matrices, it is advisable to add the relationships to the source documents for both backward traceability and forward traceability. In other words, when an item is changed in one baselined document, it is easy to see what needs to be changed in the other.
Description
A table that traces the requirements to the system deliverable component for that stage that responds to the requirement.
Size and Format
For each requirement, identify the component in the current stage that responds to the requirement. The requirement may be mapped to such items as a hardware component, an application unit, or a section of a design specification.
Requirements of Traceability Matrix
Traceability matrices can be established using a variety of tools including requirements management software, databases, spreadsheets, or even with tables or hyperlinks in a word processor.
A traceability matrix is created by associating requirements with the work products that satisfy them. Tests are associated with the requirements on which they are based and the product tested to meet the requirement.
A simple traceability matrix can include more than is shown here. In traceability, the relationship of driver to satisfier can be one-to-one, one-to-many, many-to-one, or many-to-many.
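Such a matrix can be represented as a simple mapping from drivers to satisfiers, which makes gaps easy to detect. All requirement and test identifiers below are made up:

```python
# Traceability matrix: each user requirement (driver) maps to the
# system requirements (satisfiers) that cover it.
matrix = {
    "U1": ["S1", "S2"],  # one-to-many
    "U2": ["S3"],        # one-to-one
    "U3": [],            # zero relationships: a potential gap
}

# Rows with no marks flag requirements that nothing satisfies.
uncovered = [req for req, satisfiers in matrix.items() if not satisfiers]
print(uncovered)  # ['U3']
```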
Traceability requires unique identifiers for each requirement and product. Numbers for products are established in a configuration management (CM) plan.
Traceability ensures completeness, that all lower level requirements come from higher level requirements, and that all higher level requirements are allocated to lower level requirements. Traceability is also used to manage change and provides the basis for test planning.
Baseline Traceability Matrix
Description
A table that documents the requirements of the system for use in subsequent stages to confirm that all requirements have been met.
Size and Format
Document each requirement to be traced. The requirement may be mapped to such things as a hardware component, an application unit, or a section of a design specification.
Building a Traceability Matrix
Use a Traceability Matrix to:
• verify and validate system specifications
• ensure that all final deliverable documents are included in the system specification, such as process models and data models
• improve the quality of a system by identifying requirements that are not addressed by configuration items during design and code reviews and by identifying extra configuration items that are not required. Examples of configuration items are software modules and hardware devices.
• provide input to change requests and future project plans when missing requirements are identified.
• provide a guide for system and acceptance test plans of what needs to be tested.
Useful Traceability Matrices
Various traceability matrices may be utilized throughout the system life cycle. Useful ones include:
• Functional specification to requirements document: It shows that each requirement (obtained from a preliminary requirements statement provided by the customer or produced in the Concept Definition stage) has been covered in an appropriate section of the functional specification.
• Top level configuration item to functional specification: For example, a top level configuration item, Workstation, may be one of the configuration items that satisfies the function Input Order Information. On the matrix, each configuration item would be written down the left hand column and each function would be written across the top.
• Low level configuration item to top level configuration item: For example, the top level configuration item, Workstation, may contain the low level configuration items Monitor, CPU, keyboard, and network interface card.
• Design specification to functional specification verifies that each function has been covered in the design.
• System test plan to functional specification ensures you have identified a test case or test scenario for each process and each requirement in the functional specification.
Although the construction and maintenance of traceability matrices may be time-consuming, they are a quick reference during verification and validation tasks.
Sample Traceability Matrix
A traceability matrix is a report from the requirements database or repository. What information the report contains depends on your need. Information requirements determine the associated information that you store with the requirements. Requirements management tools capture associated information or provide the capability to add it.
The examples show forward and backward tracing between user and system requirements. User requirement identifiers begin with "U" and system requirements with "S." Tracing S12 to its source makes it clear this requirement is erroneous: it must be eliminated, rewritten, or the traceability corrected.
For requirements tracing and resulting reports to work, the requirements must be of good quality. Requirements of poor quality transfer work to subsequent phases of the SDLC, increasing cost and schedule and creating disputes with the customer.
A variety of reports are necessary to manage requirements. Reporting needs should be determined at the start of the effort and documented in the requirements management plan.
Wednesday, September 10, 2008
IEEE 829: Standard For Software Test Documentation
IEEE 829-1998, also known as the 829 Standard for Software Test Documentation, is an IEEE standard that specifies the form of a set of documents for use in eight defined stages of software testing, each stage potentially producing its own separate type of document. The standard specifies the format of these documents but does not stipulate whether they all must be produced, nor does it include any criteria regarding adequate content for these documents. These are matters of judgment outside the purview of the standard. The documents are:
• Test Plan: a management planning document that shows:
• How the testing will be done (including SUT configurations).
• Who will do it
• What will be tested
• How long it will take (although this may vary, depending upon resource availability).
• What the test coverage will be, i.e. what quality level is required
• Test Design Specification: detailing test conditions and the expected results as well as test pass criteria.
• Test Case Specification: specifying the test data for use in running the test conditions identified in the Test Design Specification
• Test Procedure Specification: detailing how to run each test, including any set-up preconditions and the steps that need to be followed
• Test Item Transmittal Report: reporting on when tested software components have progressed from one stage of testing to the next
• Test Log: recording which test cases were run, who ran them, in what order, and whether each test passed or failed
• Test Incident Report: detailing, for any test that failed, the actual versus expected result, and other information intended to throw light on why a test has failed. This document is deliberately named as an incident report, and not a fault report. The reason is that a discrepancy between expected and actual results can occur for a number of reasons other than a fault in the system. These include the expected results being wrong, the test being run wrongly, or inconsistency in the requirements meaning that more than one interpretation could be made. The report consists of all details of the incident such as actual and expected results, when it failed, and any supporting evidence that will help in its resolution. The report will also include, if possible, an assessment of the impact upon testing of an incident.
• Test Summary Report: A management report providing any important information uncovered by the tests accomplished, and including assessments of the quality of the testing effort, the quality of the software system under test, and statistics derived from Incident Reports. The report also records what testing was done and how long it took, in order to improve any future test planning. This final document is used to indicate whether the software system under test is fit for purpose according to whether or not it has met acceptance criteria defined by project stakeholders.
Relationship with other standards
Other standards that may be referred to when documenting according to IEEE 829 include:
• IEEE 1008, a standard for unit testing
• IEEE 1012, a standard for Software Verification and Validation
• IEEE 1028, a standard for software inspections
• IEEE 1044, a standard for the classification of software anomalies
• IEEE 1044-1, a guide to the classification of software anomalies
• IEEE 1233, a guide for developing system requirements specifications
• IEEE 730, a standard for software quality assurance plans
• IEEE 1061, a standard for software quality metrics and methodology
• IEEE 12207, a standard for software life cycle processes and life cycle data
• BSS 7925-1, a vocabulary of terms used in software testing
• BSS 7925-2, a standard for software component testing
Use of IEEE 829
The standard forms part of the training syllabus of the ISEB Foundation and Practitioner Certificates in Software Testing promoted by the British Computer Society. ISTQB, following the formation of its own syllabus based on ISEB's and Germany's ASQF syllabi, also adopted IEEE 829 as the reference standard for software testing documentation.
Monday, August 11, 2008
Bug Tracking
-Locating and repairing software bugs is an essential part of software development.
-Bugs can be detected and reported by engineers, testers, and end-users in all phases of the testing process.
-Information about bugs must be detailed and organized in order to schedule bug fixes and determine software release dates.
Bug Tracking involves two main stages: reporting and tracking.
1.Report Bugs
Once you execute the manual and automated tests in a cycle, you report the bugs (or defects) that you detected. The bugs are stored in a database so that you can manage them and analyze the status of your application.
When you report a bug, you record all the information necessary to reproduce and fix it. You also make sure that the QA and development personnel involved in fixing the bug are notified.
2.Track and Analyze Bugs
The lifecycle of a bug begins when it is reported and ends when it is fixed, verified, and closed.
-First you report New bugs to the database, and provide all necessary information to reproduce, fix, and follow up the bug.
-The Quality Assurance manager or Project manager periodically reviews all New bugs and decides which should be fixed. These bugs are given the status Open and are assigned to a member of the development team.
-Software developers fix the Open bugs and assign them the status Fixed.
-QA personnel test a new build of the application. If a bug does not reoccur, it is Closed. If a bug is detected again, it is reopened.
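The lifecycle above can be sketched as a small state machine. The transition table mirrors the statuses in the text; the class and method names are illustrative:

```python
# Legal status transitions for a bug, as described above.
TRANSITIONS = {
    "New": {"Open"},              # manager reviews the bug and assigns it
    "Open": {"Fixed"},            # a developer fixes it
    "Fixed": {"Closed", "Open"},  # QA verifies it, or reopens it on recurrence
    "Closed": {"Open"},           # a closed bug may be reopened later
}

class Bug:
    def __init__(self, summary):
        self.summary = summary
        self.status = "New"

    def move(self, new_status):
        if new_status not in TRANSITIONS[self.status]:
            raise ValueError(f"illegal transition {self.status} -> {new_status}")
        self.status = new_status

bug = Bug("letter accepted in a number field")
for status in ("Open", "Fixed", "Closed"):
    bug.move(status)
print(bug.status)  # Closed
```

Enforcing the transitions keeps the database consistent with the process, e.g. a New bug cannot jump straight to Fixed without being reviewed.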
Communication is an essential part of bug tracking; all members of the development and quality assurance teams must be well informed to ensure that bug information is up to date and that the most important problems are addressed.
The number of open or fixed bugs is a good indicator of the quality status of your application. You can use data analysis tools such as reports and graphs to interpret bug data.
Change Request
1. Initiating a Change Request
A user or developer wants to suggest a modification that would improve an existing application, notices a problem with an application, or wants to recommend an enhancement. Any major or minor request is considered a problem with an application and will be entered as a change request.
2. Type of Change Request
Bug: the application works incorrectly or provides incorrect information (for example, a letter is allowed to be entered in a number field).
Change: a modification of the existing application (for example, sorting the files alphabetically by the second field rather than numerically by the first field makes them easier to find).
Enhancement: new functionality or an item added to the application (for example, a new report, a new field, or a new button).
3. Priority for the Request
Low: the application works, but this change would make the function easier or more user friendly.
High: the application works, but this change is necessary to perform a job.
Critical: the application does not work, job functions are impaired, and there is no workaround. This also applies to any Section 508 infraction.
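The request types and priorities above map naturally onto enumerations. The names and the toy triage rule below are our own illustration of the definitions, not part of any standard or real policy:

```python
from enum import Enum

# Sketch of the change-request categories and priorities described above.

class RequestType(Enum):
    BUG = "application works incorrectly or gives wrong information"
    CHANGE = "modification of the existing application"
    ENHANCEMENT = "new functionality or item added"

class Priority(Enum):
    LOW = 1       # works, but the change would make the function more user friendly
    HIGH = 2      # works, but the change is necessary to perform a job
    CRITICAL = 3  # does not work and there is no workaround (incl. Section 508)

def triage(request_type, works, workaround_exists):
    """Toy triage rule derived from the priority definitions above."""
    if not works and not workaround_exists:
        return Priority.CRITICAL
    if request_type is RequestType.BUG:
        return Priority.HIGH
    return Priority.LOW

p = triage(RequestType.BUG, works=False, workaround_exists=False)
```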
Test Execution
Test Execution is the heart of the testing process. Each time your application changes, you will want to execute the relevant parts of your test plan in order to locate defects and assess quality.
1. Create Test Cycles
During this stage you decide which subset of tests from your test database you want to execute.
Usually you do not run all the tests at once. At different stages of the quality assurance process, you need to execute different tests in order to address specific goals. A related group of tests is called a test cycle, and can include both manual and automated tests.
Example: You can create a cycle containing basic tests that run on each build of the application throughout development. You can run the cycle each time a new build is ready, to determine the application's stability before beginning more rigorous testing.
Example: You can create another set of tests for a particular module in your application. This test cycle includes tests that check that module in depth.
To decide which test cycles to build, refer to the testing goals you defined at the beginning of the process. Also consider issues such as the current state of the application and whether new functions have been added or modified.
Following are examples of some general categories of test cycles to consider:
• sanity cycle checks the entire system at a basic level (breadth, rather than depth) to see that it is functional and stable. This cycle should include basic-level tests containing mostly positive checks.
• normal cycle tests the system a little more in depth than the sanity cycle. This cycle can group medium-level tests, containing both positive and negative checks.
• advanced cycle tests both breadth and depth. This cycle can be run when more time is available for testing. The tests in the cycle cover the entire application (breadth), and also test advanced options in the application (depth).
• regression cycle tests maintenance builds. The goal of this type of cycle is to verify that a change to one part of the software did not break the rest of the application. A regression cycle includes sanity-level tests for testing the entire software, as well as in-depth tests for the specific area of the application that was modified.
2. Run Test Cycles (Automated & Manual Tests)
Once you have created cycles that cover your testing objectives, you begin executing the tests in the cycle. You perform manual tests using the test steps. Testing Tools executes automated tests for you. A test cycle is complete only when all tests, automated and manual, have been run.
With Manual Test Execution you follow the instructions in the test steps of each test. You use the application, enter input, compare the application output with the expected output, and log the results. For each test step you assign either pass or fail status.
During Automated Test Execution you create a batch of tests and launch the entire batch at once. Testing Tools runs the tests one at a time. It then imports results, providing outcome summaries for each test.
3. Analyze Test Results
After every test run, you analyze and validate the test results. You must identify all the failed steps in the tests and determine whether a bug has been detected or the expected result needs to be updated.
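Steps 1 through 3 above can be sketched in a few lines: a cycle is a named group of tests, the run step records pass/fail per test, and the analysis step picks out the failures. All names here are illustrative:

```python
# A "cycle" is a group of tests (here, plain callables); run_cycle records
# a pass/fail outcome per test, mirroring steps 2 and 3 above.

def run_cycle(tests):
    results = {}
    for test_name, test_fn in tests.items():
        try:
            test_fn()
            results[test_name] = "pass"
        except AssertionError:
            results[test_name] = "fail"
    return results

def arithmetic_ok():   # stand-in for a real manual or automated check
    assert 1 + 1 == 2

def known_defect():    # deliberately failing check
    assert 1 + 1 == 3

sanity_cycle = {"arithmetic_ok": arithmetic_ok, "known_defect": known_defect}
results = run_cycle(sanity_cycle)

# Analysis step: each failure is either a bug or a stale expected result.
failed = [name for name, outcome in results.items() if outcome == "fail"]
```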
Tuesday, July 15, 2008
Heuristics of Software Testing
Testability
Software testability is how easily, completely and conveniently a computer program can be tested.
Software engineers design a computer product, system, or program with testability in mind. Good programmers are willing to do things that help the testing process, and a checklist of possible design points, features, and so on can be useful in negotiating with them.
Here are the two main heuristics of software testing.
1. Visibility
2. Control
1. Visibility
Visibility is our ability to observe the states and outputs of the software under test. Features that improve visibility are:
• Access to Code
Developers must provide full access (source code, infrastructure, etc.) to testers. The code, change records, and design documents should be provided to the testing team, and the testing team should read and understand the code.
• Event logging
The events to log include user events, system milestones, error handling, and completed transactions. The logs may be stored in files, ring buffers in memory, and/or serial ports. Things to log include a description of the event, a timestamp, the subsystem, resource usage, and the severity of the event. Logging should be adjustable by subsystem and type. Log files report internal errors, help in isolating defects, and give useful information about context, tests, customer usage, and test coverage.
The more readable the Log Reports are, the easier it becomes to identify the defect cause and work towards corrective measures.
• Error detection mechanisms
Data integrity checking and system-level error detection (e.g. Microsoft Appviewer) are useful here. In addition, assertions and probes with the following features are really helpful:
Code is added to detect internal errors.
Assertions abort on error.
Probes log errors.
Design by Contract: this technique requires that assertions be defined for functions. Preconditions apply to inputs, and violations implicate calling functions; postconditions apply to outputs, and violations implicate called functions. This effectively solves the oracle problem for testing.
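A minimal sketch of this contract idea in Python (the decorator is our own, not a standard library feature): a violated precondition blames the caller, while a violated postcondition blames the function itself, giving tests a built-in oracle:

```python
# Hand-rolled contract decorator, illustrating Design by Contract as
# described above. Names are ours, not a library API.

def contract(pre=None, post=None):
    def wrap(fn):
        def inner(*args):
            if pre is not None and not pre(*args):
                # bad input: the *calling* code broke the contract
                raise ValueError(f"precondition violated calling {fn.__name__}")
            result = fn(*args)
            if post is not None and not post(result, *args):
                # bad output: the *called* function broke the contract
                raise AssertionError(f"postcondition violated in {fn.__name__}")
            return result
        return inner
    return wrap

@contract(pre=lambda x: x >= 0, post=lambda r, x: abs(r * r - x) < 1e-9)
def square_root(x):
    return x ** 0.5

assert square_root(9.0) == 3.0
```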
• Resource Monitoring
Memory usage should be monitored to find memory leaks. States of running methods, threads or processes should be watched (Profiling interfaces may be used for this.). In addition, the configuration values should be dumped. Resource monitoring is of particular concern in applications where the load on the application in real time is estimated to be considerable.
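One concrete way to watch memory usage between two points in a run is Python's built-in tracemalloc module. This is just one possible tool, shown as a sketch; a real leak hunt would repeat the workload and watch the trend across many snapshots:

```python
import tracemalloc

# Compare heap snapshots taken before and after a workload to spot growth,
# attributed to the source line that allocated it.

tracemalloc.start()
before = tracemalloc.take_snapshot()

retained = [bytearray(1024) for _ in range(1000)]  # simulated growing usage

after = tracemalloc.take_snapshot()
top = after.compare_to(before, "lineno")[0]  # biggest change, by source line
grew = top.size_diff > 0                     # bytes gained on that line
tracemalloc.stop()
```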
2. Control
Control refers to our ability to provide inputs and reach states in the software under test.
The features to improve controllability are:
• Test Points
Allow data to be inspected, inserted or modified at points in the software. It is especially useful for dataflow applications. In addition, a pipe and filters architecture provides many opportunities for test points.
• Custom User Interface controls
Custom UI controls often raise serious testability problems with GUI test drivers. Ensuring testability usually requires:
Adding methods to report necessary information
Customizing test tools to make use of these methods
Getting a tool expert to advise developers on testability and to build the required support.
Asking third party control vendors regarding support by test tools.
• Test Interfaces
Interfaces may be provided specifically for testing e.g. Excel and Xconq etc.
Existing interfaces may be able to support significant testing, e.g. InstallShield, AutoCAD, Tivoli, etc.
• Fault injection
Error seeding, i.e. instrumenting low-level I/O code to simulate errors, makes it much easier to test error handling. It can be applied at both the system and the application level.
• Installation and setup
Testers should be notified when installation has completed successfully. They should be able to verify installation, programmatically create sample records and run multiple clients, daemons or servers on a single machine.
A BROADER VIEW
Below is a broader set of characteristics (usually known as the James Bach heuristics) that lead to testable software.
Categories of Heuristics of software testing
• Operability
The better it works, the more efficiently it can be tested.
The system should have few bugs, no bugs should block the execution of tests and the product should evolve in functional stages (simultaneous development and testing).
• Observability
What we see is what we test.
Distinct output should be generated for each input.
Current and past system states and variables should be visible during testing.
All factors affecting the output should be visible.
Incorrect output should be easily identified.
Source code should be easily accessible.
Internal errors should be automatically detected (through self-testing mechanisms) and reported.
• Controllability
The better we control the software, the more the testing process can be automated and optimized.
Check that
All outputs can be generated and code can be executed through some combination of input.
Software and hardware states can be controlled directly by the test engineer.
Inputs and output formats are consistent and structured.
Tests can be conveniently specified, automated, and reproduced.
• Decomposability
By controlling the scope of testing, we can quickly isolate problems and perform effective and efficient testing.
The software system should be built from independent modules which can be tested independently.
• Simplicity
The less there is to test, the more quickly we can test it.
The points to consider in this regard are functional (e.g. minimum set of features), structural (e.g. architecture is modularized) and code (e.g. a coding standard is adopted) simplicity.
• Stability
The fewer the changes, the fewer are the disruptions to testing.
Changes to the software should be infrequent, controlled, and should not invalidate existing tests. The software should be able to recover well from failures.
• Understandability
The more information we have, the smarter we test.
The testers should be able to understand well the design, changes to the design and the dependencies between internal, external and shared components.
Technical documentation should be instantly accessible, accurate, well organized, specific and detailed.
• Suitability
The more we know about the intended use of the software, the better we can organize our testing to find important bugs.
The above heuristics can be used by a software engineer to develop a software configuration (i.e. program, data and documentation) that is convenient to test and verify.
Monday, July 14, 2008
Testing artifacts
Software testing process can produce several artifacts.
Test case
A test case is a software testing document which consists of an event, action, input, output, expected result, and actual result. Clinically defined, a test case is an input and an expected result. This can be as pragmatic as 'for condition x your derived result is y', whereas other test cases describe the input scenario and the expected results in more detail. A test case can occasionally be a series of steps (but often the steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) with one expected result or expected outcome. Optional fields are a test case ID, test step or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database, or other common repository. In a database system, you may also be able to see past test results, who generated the results, and the system configuration used to generate those results. These past results would usually be stored in a separate table.
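A subset of these fields can be sketched as a simple record type. The field names below are illustrative, not a standard schema; real tools each define their own:

```python
from dataclasses import dataclass

# Sketch of a test-case record covering some of the fields named above.

@dataclass
class TestCase:
    case_id: str
    requirement: str
    steps: list            # often kept in a separate, shared test procedure
    input_data: str
    expected_result: str
    actual_result: str = ""   # filled in during execution
    automated: bool = False

tc = TestCase(
    case_id="TC-101",
    requirement="reject letters in number fields",
    steps=["open the form", "type 'abc' in the quantity field", "press Save"],
    input_data="abc",
    expected_result="validation error shown",
)
tc.actual_result = "validation error shown"
passed = tc.actual_result == tc.expected_result
```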
Test script
The test script is the combination of a test case, test procedure, and test data. Initially the term was derived from the product of work created by automated regression test tools. Today, test scripts can be manual, automated, or a combination of both.
Test suite
The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases. It definitely contains a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.
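As a concrete sketch, Python's standard unittest module composes test cases into a suite in exactly this spirit (the test classes here are trivial stand-ins for real checks):

```python
import unittest

# Two minimal test case classes, collected into one suite and run together.

class LoginTests(unittest.TestCase):
    def test_password_field_masks_input(self):
        self.assertTrue(True)  # placeholder for a real check

class ReportTests(unittest.TestCase):
    def test_totals_add_up(self):
        self.assertEqual(2 + 2, 4)

loader = unittest.defaultTestLoader
suite = unittest.TestSuite()
suite.addTests(loader.loadTestsFromTestCase(LoginTests))
suite.addTests(loader.loadTestsFromTestCase(ReportTests))

result = unittest.TextTestRunner(verbosity=0).run(suite)
```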
Test plan
A test specification is called a test plan. The developers know in advance which test plans will be executed, since this information is made available to them. This makes the developers more cautious when developing their code and ensures that their code is not put through any surprise test case or test plan.
Test harness
The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness.
In software testing, a test harness or automated test framework is a collection of software and test data configured to test a program unit by running it under varying conditions and monitoring its behavior and outputs. It has two main parts: the test execution engine and the test script repository.
Test harnesses allow for the automation of tests. They can call functions with supplied parameters and print out and compare the results to the desired value. The test harness is a hook to the developed code, which can be tested using an automation framework.
A test harness should allow specific tests to run (this helps in optimising), orchestrate a runtime environment, and provide a capability to analyse results.
The typical objectives of a test harness are to:
* Automate the testing process.
* Execute test suites of test cases.
* Generate associated test reports.
A test harness typically provides the following benefits:
* Increased productivity due to automation of the testing process.
* Increased probability that regression testing will occur.
* Increased quality of software components and application.
Test Report
A Test Report is a document that is prepared once the testing of a software product is complete and the delivery is to be made to the customer. This document would contain a summary of the entire project and would have to be presented in a way that any person who has not worked on the project would also get a good overview of the testing effort.
Contents of a Test Report
The contents of a test report are as follows:
Executive Summary
Overview
Application Overview
Testing Scope
Test Details
Test Approach
Types of testing conducted
Test Environment
Tools Used
Metrics
Test Results
Test Deliverables
Recommendations
These sections are explained as follows:
Executive Summary
This section would comprise general information regarding the project, the client, the application, the tools, and the people involved, in such a way that it can be taken as a summary of the Test Report itself (i.e. all the topics mentioned here would be elaborated in the various sections of the report).
1. Overview
This comprises two sections: Application Overview and Testing Scope.
Application Overview – This would include detailed information on the application under test, the end users and a brief outline of the functionality as well.
Testing Scope – This would clearly outline the areas of the application that would / would not be tested by the QA team. This is done so that there would not be any misunderstandings between customer and QA as regards what needs to be tested and what does not need to be tested.
This section would also contain information of Operating System / Browser combinations if Compatibility testing is included in the testing effort.
2. Test Details
This section would contain the Test Approach, Types of Testing conducted, Test Environment and Tools Used.
Test Approach – This would discuss the strategy followed for executing the project. This could include information on how coordination was achieved between Onsite and Offshore teams, any innovative methods used for automation or for reducing repetitive workload on the testers, how information and daily / weekly deliverables were delivered to the client etc.
Types of testing conducted – This section would mention any specific types of testing performed (i.e.) Functional, Compatibility, Performance, Usability etc along with related specifications.
Test Environment – This would contain information on the Hardware and Software requirements for the project (i.e.) server configuration, client machine configuration, specific software installations required etc.
Tools used – This section would include information on any tools that were used for testing the project. They could be functional or performance testing automation tools, defect management tools, project tracking tools or any other tools which made the testing work easier.
3. Metrics
This section includes details such as the total number of test cases executed in the course of the project, the number of defects found etc. Calculations like defects found per test case or the number of test cases executed per day per person would also be entered in this section. These figures can be used to calculate the efficiency of the testing effort.
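The calculations mentioned above can be sketched as follows (all figures are hypothetical, chosen only to show the arithmetic):

```python
# Illustrative efficiency metrics for a test report (numbers are made up).
total_test_cases = 480   # test cases executed over the project
defects_found = 96       # defects logged by the QA team
days = 20                # working days of test execution
testers = 4              # people executing test cases

defects_per_test_case = defects_found / total_test_cases
test_cases_per_day_per_person = total_test_cases / (days * testers)

print(defects_per_test_case)            # 0.2 defects per test case
print(test_cases_per_day_per_person)    # 6.0 test cases/day/person
```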
4. Test Results
This section is similar to the Metrics section, but is more for showcasing the salient features of the testing effort. If many defects have been logged for the project, graphs can be generated accordingly and depicted in this section. The graphs can be for defects per build, defects based on severity, defects based on status (i.e.) how many were fixed and how many rejected etc.
5. Test Deliverables
This section would include links to the various documents prepared in the course of the testing project (i.e.) Test Plan, Test Procedures, Test Logs, Release Report etc.
6. Recommendations
This section would include any recommendations from the QA team to the client on the product tested. It could also mention the list of known defects which have been logged by QA but not yet fixed by the development team so that they can be taken care of in the next release of the application.
TEST PLAN
1. What is a Test Plan?
A Test Plan can be defined as a document that describes the scope, approach, resources and schedule of intended test activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.
The main purpose of preparing a Test Plan is to ensure that everyone concerned with the project is in sync with regards to the scope, responsibilities, deadlines and deliverables for the project. It is in this respect that reviews and a sign-off are very important, since they mean that everyone is in agreement on the contents of the test plan; this also helps in case of any dispute during the course of the project (especially between the developers and the testers).
Purpose of preparing a Test Plan
A Test Plan is a useful way to think through the efforts needed to validate the acceptability of a software product.
The completed document will help people outside the test group understand the 'why' and 'how' of product validation.
It should be thorough enough to be useful but not so thorough that no one outside the test group will read it.
Contents of a Test Plan
1. Purpose
2. Scope
3. Test Approach
4. Entry Criteria
5. Resources
6. Tasks / Responsibilities
7. Exit Criteria
8. Schedules / Milestones
9. Hardware / Software Requirements
10. Risks & Mitigation Plans
11. Tools to be used
12. Deliverables
13. References
a. Procedures
b. Templates
c. Standards/Guidelines
14. Annexure
15. Sign-Off
2. Contents (in detail)
Purpose
This section should contain the purpose of preparing the test plan.
Scope
This section should talk about the areas of the application which are to be tested by the QA team and specify those areas which are definitely out of scope (screens, database, mainframe processes etc).
Test Approach
This would contain details on how the testing is to be performed and whether any specific strategy is to be followed (including configuration management).
Entry Criteria
This section explains the various steps to be performed before the start of a test (i.e.) pre-requisites. For example: Timely environment set up, starting the web server / app server, successful implementation of the latest build etc.
Resources
This section should list out the people who would be involved in the project and their designation etc.
Tasks / Responsibilities
This section talks about the tasks to be performed and the responsibilities assigned to the various members in the project.
Exit Criteria
This section contains tasks like bringing down the system / server, restoring the system to the pre-test environment, database refresh etc.
Schedules / Milestones
This section deals with the final delivery date and the various milestone dates to be met in the course of the project.
Hardware / Software Requirements
This section would contain the details of the PCs / servers required (with their configuration) to install the application or perform the testing; specific software that needs to be installed on the systems to get the application running or to connect to the database; connectivity-related issues etc.
Risks & Mitigation Plans
This section should list out all the possible risks that can arise during the testing and the mitigation plans that the QA team plans to implement in case a risk actually turns into a reality.
Tools to be used
This would list out the testing tools or utilities (if any) that are to be used in the project (e.g.) WinRunner, Test Director, PCOM, WinSQL.
Deliverables
This section contains the various deliverables that are due to the client at various points of time (i.e.) daily, weekly, start of the project, end of the project etc. These could include Test Plans, Test Procedure, Test Matrices, Status Reports, Test Scripts etc. Templates for all these could also be attached.
References
Procedures
Templates (Client Specific or otherwise)
Standards / Guidelines (e.g.) QView
Project related documents (RSD, ADD, FSD etc)
Annexure
This could contain embedded documents or links to documents which have been / will be used in the course of testing (e.g.) templates used for reports, test cases etc. Referenced documents can also be attached here.
Sign-Off
This should contain the mutual agreement between the client and the QA team with both leads / managers signing off their agreement on the Test Plan.
Tuesday, July 8, 2008
Specific Field Tests
1. Date Field Checks
1. Assure that leap years are validated correctly & do not cause errors/miscalculations.
2. Assure that month code 00 and 13 are validated correctly & do not cause errors/miscalculations.
3. Assure that 00 and 13 are reported as errors.
4. Assure that day values 00 and 32 are validated correctly & do not cause errors/miscalculations.
5. Assure that Feb. 28, 29, 30 are validated correctly & do not cause errors/ miscalculations.
6. Assure that Feb. 30 is reported as an error.
7. Assure that century change is validated correctly & does not cause errors/ miscalculations.
8. Assure that out of cycle dates are validated correctly & do not cause errors/miscalculations.
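The date checks above can be automated with a small helper; this sketch leans on Python's datetime module, which rejects impossible dates such as Feb. 30, month 13 or day 00 (a real form would wrap a check like this around its input field):

```python
# Sketch of automated date-field checks using datetime's own validation.
from datetime import date

def is_valid_date(year, month, day):
    # date() raises ValueError for any impossible year/month/day combination.
    try:
        date(year, month, day)
        return True
    except ValueError:
        return False

assert is_valid_date(2008, 2, 29)        # leap year: Feb. 29 accepted
assert not is_valid_date(2007, 2, 29)    # non-leap year: Feb. 29 rejected
assert not is_valid_date(2008, 2, 30)    # Feb. 30 reported as an error
assert not is_valid_date(2008, 13, 1)    # month code 13 rejected
assert not is_valid_date(2008, 0, 1)     # month code 00 rejected
assert not is_valid_date(2008, 1, 32)    # day value 32 rejected
assert not is_valid_date(2008, 1, 0)     # day value 00 rejected
```

The century-change check maps onto the same helper: the year 2000 was a leap year while 1900 was not, and both cases should validate correctly.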
2. Numeric Fields
1. Assure that lowest and highest values are handled correctly.
2. Assure that invalid values are logged and reported.
3. Assure that valid values are handled by the correct procedure.
4. Assure that numeric fields with a blank in position 1 are processed or reported as an error.
5. Assure that fields with a blank in the last position are processed or reported as an error.
6. Assure that both + and - values are correctly processed.
7. Assure that division by zero does not occur.
8. Include value zero in all calculations.
9. Include at least one in-range value.
10. Include maximum and minimum range values.
11. Include out of range values above the maximum and below the minimum.
12. Assure that upper and lower values in ranges are handled correctly.
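A boundary-value sketch of the numeric checks above, assuming a hypothetical field that accepts 1–100 (substitute the field's real limits): it exercises the minimum, the maximum, an in-range value, out-of-range values on both sides and a negative value.

```python
# Boundary-value checks for a hypothetical numeric field with range 1-100.
def validate_quantity(value, low=1, high=100):
    # A valid value lies between the lower and upper boundary, inclusive.
    return low <= value <= high

cases = {
    0: False,      # below the minimum
    1: True,       # minimum range value
    50: True,      # in-range value
    100: True,     # maximum range value
    101: False,    # above the maximum
    -5: False,     # negative value, out of range here
}
for value, expected in cases.items():
    assert validate_quantity(value) is expected, value
```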
3. Alpha Field Checks
1. Use blank and non-blank data.
2. Include lowest and highest values.
3. Include invalid characters & symbols.
4. Include valid characters.
5. Include data items with first position blank.
6. Include data items with last position blank.
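The alpha-field checks can be sketched the same way. The name field and its rules here are hypothetical: it must be non-blank, alphabetic, and must not begin or end with a blank.

```python
# Sketch of alpha-field checks for a hypothetical "name" field.
def validate_name(text):
    # Non-blank, no leading/trailing blank, letters (and internal spaces) only.
    return bool(text) and text == text.strip() and text.replace(" ", "").isalpha()

assert validate_name("Alice")           # valid characters accepted
assert not validate_name("")            # blank data rejected
assert not validate_name(" Alice")      # first position blank
assert not validate_name("Alice ")      # last position blank
assert not validate_name("Al1ce!")      # invalid characters & symbols
```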
Screen Validation Checklist
1. Aesthetic Conditions:
1. Is the general screen background the correct color?
2. Are the field prompts the correct color?
3. Are the field backgrounds the correct color?
4. In read-only mode, are the field prompts the correct color?
5. In read-only mode, are the field backgrounds the correct color?
6. Are all the screen prompts specified in the correct screen font?
7. Is the text in all fields specified in the correct screen font?
8. Are all the field prompts aligned perfectly on the screen?
9. Are all the field edit boxes aligned perfectly on the screen?
10. Are all group boxes aligned correctly on the screen?
11. Should the screen be resizable?
12. Should the screen be allowed to minimize?
13. Are all the field prompts spelt correctly?
14. Are all character or alphanumeric fields left justified? This is the default unless otherwise specified.
15. Are all numeric fields right justified? This is the default unless otherwise specified.
16. Is all the micro-help text spelt correctly on this screen?
17. Is all the error message text spelt correctly on this screen?
18. Is all user input captured in UPPER case or lowercase consistently?
19. Where the database requires a value (other than null) then this should be defaulted into fields. The user must either enter an alternative valid value or leave the default value intact.
20. Assure that all windows have a consistent look and feel.
21. Assure that all dialog boxes have a consistent look and feel.
2. Validation Conditions:
1. Does a failure of validation on every field cause a sensible user error message?
2. Is the user required to fix entries, which have failed validation tests?
3. Have any fields got multiple validation rules and if so are all rules being applied?
4. If the user enters an invalid value and clicks on the OK button (i.e. does not TAB off the field) is the invalid entry identified and highlighted correctly with an error message?
5. Is validation consistently applied at screen level unless specifically required at field level?
6. For all numeric fields check whether negative numbers can and should be able to be entered.
7. For all numeric fields check the minimum and maximum values and also some mid-range values allowable?
8. For all character/alphanumeric fields check the field to ensure that there is a character limit specified and that this limit is exactly correct for the specified database size?
9. Do all mandatory fields require user input?
10. If any of the database columns don't allow null values then the corresponding screen fields must be mandatory. (If any field, which initially was mandatory, has become optional then check whether null values are allowed in this field.)
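Condition 3 above (fields with multiple validation rules, all of which must be applied) can be sketched as a rule list where every rule runs and every resulting message is collected, rather than stopping at the first failure. The rules themselves are hypothetical:

```python
# Multiple validation rules applied to one field; all rules are evaluated.
# Each rule returns an error message, or None when the value passes.
rules = [
    lambda v: None if v else "Field is mandatory",
    lambda v: None if len(v) <= 30 else "Maximum 30 characters",
    lambda v: None if v.isalnum() or not v else "Alphanumeric characters only",
]

def validate(value):
    # Collect every failure message so the user sees all problems at once.
    return [msg for msg in (rule(value) for rule in rules) if msg is not None]

assert validate("Order42") == []                          # passes all rules
assert validate("") == ["Field is mandatory"]             # mandatory rule fires
assert "Alphanumeric characters only" in validate("bad value!")
```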
3. Navigation Conditions:
1. Can the screen be accessed correctly from the menu?
2. Can the screen be accessed correctly from the toolbar?
3. Can the screen be accessed correctly by double clicking on a list control on the previous screen?
4. Can all screens accessible via buttons on this screen be accessed correctly?
5. Can all screens accessible by double clicking on a list control be accessed correctly?
6. Is the screen modal? (i.e.) Is the user prevented from accessing other functions when this screen is active and is this correct?
7. Can a number of instances of this screen be opened at the same time and is this correct?
4. Usability Conditions:
1. Are all the dropdowns on this screen sorted correctly? Alphabetic sorting is the default unless otherwise specified.
2. Is all date entry required in the correct format?
3. Have all pushbuttons on the screen been given appropriate Shortcut keys?
4. Do the Shortcut keys work correctly?
5. Have the menu options that apply to your screen got fast keys associated and should they have?
6. Does the Tab Order specified on the screen go in sequence from Top Left to bottom right? This is the default unless otherwise specified.
7. Are all read-only fields avoided in the TAB sequence?
8. Are all disabled fields avoided in the TAB sequence?
9. Can the cursor be placed in the microhelp text box by clicking on the text box with the mouse?
10. Can the cursor be placed in read-only fields by clicking in the field with the mouse?
11. Is the cursor positioned in the first input field or control when the screen is opened?
12. Is there a default button specified on the screen?
13. Does the default button work correctly?
14. When an error message occurs does the focus return to the field in error when the user cancels it?
15. When the user Alt+Tab's to another application does this have any impact on the screen upon return to the application?
16. Do all the field edit boxes indicate the number of characters they will hold by their length? e.g. a 30-character field should be noticeably longer than a 10-character one.
5. Data Integrity Conditions:
1. Is the data saved when the window is closed by double clicking on the close box?
2. Check the maximum field lengths to ensure that there are no truncated characters?
3. Where the database requires a value (other than null) then this should be defaulted into fields. The user must either enter an alternative valid value or leave the default value intact.
4. Check maximum and minimum field values for numeric fields?
5. If numeric fields accept negative values can these be stored correctly on the database and does it make sense for the field to accept negative numbers?
6. If a set of radio buttons represents a fixed set of values such as A, B and C then what happens if a blank value is retrieved from the database? (In some situations rows can be created on the database by other functions, which are not screen based, and thus the required initial values can be incorrect.)
7. If a particular set of data is saved to the database check that each value gets saved fully to the database. (i.e.) Beware of truncation (of strings) and rounding of numeric values.
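Condition 7 (beware of truncation of strings) can be checked by comparing what was entered with what a save-and-reload round trip returns. The 10-character column below is a hypothetical stand-in for a real database write:

```python
# Truncation check: simulate a database column that silently truncates.
DB_COLUMN_WIDTH = 10

def save_to_db(text):
    # Stand-in for a real save; a too-narrow column cuts the string short.
    return text[:DB_COLUMN_WIDTH]

entered = "ABCDEFGHIJKLMN"          # 14 characters entered on screen
stored = save_to_db(entered)

assert len(stored) == DB_COLUMN_WIDTH
assert stored != entered            # truncation detected: the test should flag this
```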
6. Modes (Editable / Read-only) Conditions:
1. Are the screen and field colors adjusted correctly for read-only mode?
2. Should a read-only mode be provided for this screen?
3. Are all fields and controls disabled in read-only mode?
4. Can the screen be accessed from the previous screen/menu/toolbar in read-only mode?
5. Can all screens available from this screen be accessed in read-only mode?
6. Check that no validation is performed in read-only mode.
7. General Conditions:
1. Assure the existence of the "Help" menu.
2. Assure that the proper commands and options are in each menu.
3. Assure that all buttons on all toolbars have a corresponding key command.
4. Assure that each menu command has an alternative (hot-key) key sequence, which will invoke it where appropriate.
5. In drop down list boxes, ensure that the names are not abbreviations / cut short
6. In drop down list boxes, assure that the list and each entry in the list can be accessed via appropriate key / hot key combinations.
7. Ensure that duplicate hot keys do not exist on each screen
8. Ensure the proper usage of the escape key (which is to undo any changes that have been made) and that it generates a caution message "Changes will be lost - Continue yes/no".
9. Assure that the cancel button functions the same as the escape key.
10. Assure that the Cancel button operates, as a Close button when changes have been made that cannot be undone.
11. Assure that only command buttons which are used by a particular window, or in a particular dialog box, are present – (i.e.) make sure they don't act on the screen behind the current screen.
12. When a command button is used sometimes and not at other times, assure that it is grayed out when it should not be used.
13. Assure that OK and Cancel buttons are grouped separately from other command buttons.
14. Assure that command button names are not abbreviations.
15. Assure that all field labels/names are not technical labels, but rather are names meaningful to system users.
16. Assure that command buttons are all of similar size and shape, and same font & font size.
17. Assure that each command button can be accessed via a hot key combination.
18. Assure that command buttons in the same window/dialog box do not have duplicate hot keys.
19. Assure that each window/dialog box has a clearly marked default value (command button, or other object) which is invoked when the Enter key is pressed - and NOT the Cancel or Close button.
20. Assure that focus is set to an object/button, which makes sense according to the function of the window/dialog box.
21. Assure that all option buttons (and radio buttons) names are not abbreviations.
22. Assure that option button names are not technical labels, but rather are names meaningful to system users.
23. If hot keys are used to access option buttons, assure that duplicate hot keys do not exist in the same window/dialog box.
24. Assure that option box names are not abbreviations.
25. Assure that option boxes, option buttons, and command buttons are logically grouped together in clearly demarcated areas ("group boxes").
26. Assure that the Tab key sequence, which traverses the screens, does so in a logical way.
27. Assure consistency of mouse actions across windows.
28. Assure that the color red is not used to highlight active objects (many individuals are red-green color blind).
29. Assure that the user will have control of the desktop with respect to general color and highlighting (the application should not dictate the desktop background characteristics).
30. Assure that the screen/window does not have a cluttered appearance
31. Ctrl + F6 opens next tab within tabbed window
32. Shift + Ctrl + F6 opens previous tab within tabbed window
33. Tabbing will open next tab within tabbed window if on last field of current tab
34. Tabbing will go onto the 'Continue' button if on last field of last tab within tabbed window
35. Tabbing will go onto the next editable field in the window
36. Banner style & size & display exact same as existing windows
37. If 8 or less options in a list box, display all options on open of list box - should be no need to scroll
38. Errors on continue will cause the user to be returned to the tab, and the focus should be on the field causing the error (i.e. the tab is opened, highlighting the field with the error on it).
39. Pressing continue while on the first tab of a tabbed window (assuming all fields filled correctly) will not open all the tabs.
40. On open of tab focus will be on first editable field
41. All fonts to be the same
42. Alt+F4 will close the tabbed window and return you to main screen or previous screen (as appropriate), generating "changes will be lost" message if necessary.
43. Microhelp text for every enabled field & button
44. Ensure all fields are disabled in read-only mode
45. Progress messages on load of tabbed screens
46. Return operates continue
47. If retrieve on load of tabbed window fails window should not open
4. Check maximum and minimum field values for numeric fields?
5. If numeric fields accept negative values can these be stored correctly on the database and does it make sense for the field to accept negative numbers?
6. If a set of radio buttons represents a fixed set of values such as A, B and C then what happens if a blank value is retrieved from the database? (In some situations rows can be created on the database by other functions, which are not screen based, and thus the required initial values can be incorrect.)
7. If a particular set of data is saved to the database check that each value gets saved fully to the database. (i.e.) Beware of truncation (of strings) and rounding of numeric values.
6.Modes (Editable Read-only) Conditions:
1. Are the screen and field colors adjusted correctly for read-only mode?
2. Should a read-only mode be provided for this screen?
3. Are all fields and controls disabled in read-only mode?
4. Can the screen be accessed from the previous screen/menu/toolbar in read-only mode?
5. Can all screens available from this screen be accessed in read-only mode?
6. Check that no validation is performed in read-only mode.
7.General Conditions:
1. Assure the existence of the "Help" menu.
2. Assure that the proper commands and options are in each menu.
3. Assure that all buttons on all tool bars have a corresponding key commands.
4. Assure that each menu command has an alternative (hot-key) key sequence, which will invoke it where appropriate.
5. In drop down list boxes, ensure that the names are not abbreviations / cut short
6. In drop down list boxes, assure that the list and each entry in the list can be accessed via appropriate key / hot key combinations.
7. Ensure that duplicate hot keys do not exist on each screen
8. Ensure the proper usage of the escape key (which is to undo any changes that have been made) and generates a caution message "Changes will be lost - Continue yes/no"
9. Assure that the cancel button functions the same as the escape key.
10. Assure that the Cancel button operates, as a Close button when changes have been made that cannot be undone.
11. Assure that only command buttons, which are used by a particular window, or in a particular dialog box, are present. – (i.e) make sure they don't work on the screen behind the current screen.
12. When a command button is used sometimes and not at other times, assures that it is grayed out when it should not be used.
13. Assure that OK and Cancel buttons are grouped separately from other command buttons.
14. Assure that command button names are not abbreviations.
15. Assure that all field labels/names are not technical labels, but rather are names meaningful to system users.
16. Assure that command buttons are all of similar size and shape, and same font & font size.
17. Assure that each command button can be accessed via a hot key combination.
18. Assure that command buttons in the same window/dialog box do not have duplicate hot keys.
19. Assure that each window/dialog box has a clearly marked default value (command button, or other object) which is invoked when the Enter key is pressed - and NOT the Cancel or Close button
20. Assure that focus is set to an object/button, which makes sense according to the function of the window/dialog box.
21. Assure that all option buttons (and radio buttons) names are not abbreviations.
22. Assure that option button names are not technical labels, but rather are names meaningful to system users.
23. If hot keys are used to access option buttons, assure that duplicate hot keys do not exist in the same window/dialog box.
24. Assure that option box names are not abbreviations.
25. Assure that option boxes, option buttons, and command buttons are logically grouped together in clearly demarcated areas "Group Box"
26. Assure that the Tab key sequence, which traverses the screens, does so in a logical way.
27. Assure consistency of mouse actions across windows.
28. Assure that the color red is not used to highlight active objects (many individuals are red-green color blind).
29. Assure that the user will have control of the desktop with respect to general color and highlighting (the application should not dictate the desktop background characteristics).
30. Assure that the screen/window does not have a cluttered appearance
31. Ctrl + F6 opens next tab within tabbed window
32. Shift + Ctrl + F6 opens previous tab within tabbed window
33. Tabbing will open next tab within tabbed window if on last field of current tab
34. Tabbing will go onto the 'Continue' button if on last field of last tab within tabbed window
35. Tabbing will go onto the next editable field in the window
36. Banner style & size & display exact same as existing windows
37. If 8 or less options in a list box, display all options on open of list box - should be no need to scroll
38. Errors on continue will cause user to be returned to the tab and the focus should be on the field causing the error. (i.e the tab is opened, highlighting the field with the error on it)
39. Pressing continue while on the first tab of a tabbed window (assuming all fields filled correctly) will not open all the tabs.
40. On open of tab focus will be on first editable field
41. All fonts to be the same
42. Alt+F4 will close the tabbed window and return you to main screen or previous screen (as appropriate), generating "changes will be lost" message if necessary.
43. Microhelp text for every enabled field & button
44. Ensure all fields are disabled in read-only mode
45. Progress messages on load of tabbed screens
46. Return operates continue
47. If retrieve on load of tabbed window fails window should not open
Subscribe to:
Posts (Atom)