Test Case – 1: Test Case for ATM
1) Insertion of ATM card with success.
2) Incorrect ATM Card insertion – leading to unsuccessful operation; the card should not be accepted.
3) ATM Card of an invalid account – leading to unsuccessful operation; the card should not be accepted.
4) Successful feeding of ATM PIN Number.
5) Incorrect ATM PIN Number entered 3 times - leading to unsuccessful operation; the card is blocked for the whole day and cannot be operated until the next day.
6) Selection of language of operation, with success.
7) Selection of Type of Bank Account with success.
8) Incorrect Bank Account type selection with respect to the type of ATM Card inserted - Leading to unsuccessful operation.
9) Selection of withdrawal option with success.
10) Selection of Amount to be withdrawn with success.
11) Incorrect Currency denominations - Leading to unsuccessful operation. Denominations like Rs.50 or Rs.250 should not be accepted; the customer should be asked to enter a proper denomination.
12) Successful completion of withdrawal of money.
13) Amount to be withdrawn in excess of the available Balance - Leading to unsuccessful operation. The withdrawal should not be allowed, and the customer should be asked to re-enter a correct value, with a message like 'Entered amount is greater than the available balance; please enter an amount less than or equal to the available balance.'
14) Shortage of Currency Notes in ATM - Leading to unsuccessful operation. A message should be displayed that only some denominations are available, e.g. only Rs.500 notes can be dispensed.
15) Amount to be withdrawn in excess of the daily withdrawal limit - Leading to unsuccessful operation. A message should be displayed like 'Daily withdrawal limit is XYZ; please enter an amount less than or equal to XYZ'.
16) ATM link to the Bank Server not available at the moment - Leading to unsuccessful operation.
17) Clicking of the Cancel button after inserting the ATM card - Leading to unsuccessful operation.
18) Clicking of the Cancel button after feeding the ATM PIN Number - Leading to unsuccessful operation.
19) Clicking of the Cancel button after selection of language of operation - Leading to unsuccessful operation.
20) Clicking of the Cancel button after selection of Type of Bank Account - Leading to unsuccessful operation.
21) Clicking of the Cancel button after selection of Amount of withdrawal - Leading to unsuccessful operation.
22) Clicking of the Cancel button after feeding the amount to be withdrawn - Leading to unsuccessful operation.
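The withdrawal validations in test cases 11, 13 and 15 above can be sketched as a single checking function. This is a minimal illustration against a hypothetical ATM model; the function name, denominations and limits are assumptions, not a real ATM API.

```python
# Sketch of the withdrawal validations from test cases 11, 13 and 15.
# The denominations and messages are illustrative assumptions.

def validate_withdrawal(amount, balance, daily_limit, note_denominations=(100, 500)):
    """Return an error message for an invalid request, or None if it is valid."""
    # Test case 11: amount must be payable in the available note denominations.
    smallest = min(note_denominations)
    if amount % smallest != 0:
        return "Please enter amount in valid denominations"
    # Test case 13: amount must not exceed the available balance.
    if amount > balance:
        return "Entered amount is greater than available balance"
    # Test case 15: amount must not exceed the daily withdrawal limit.
    if amount > daily_limit:
        return "Daily withdrawal limit is %d" % daily_limit
    return None

print(validate_withdrawal(250, 10000, 20000))   # invalid denomination (Rs.250)
print(validate_withdrawal(5000, 1000, 20000))   # exceeds available balance
print(validate_withdrawal(500, 10000, 20000))   # valid request -> None
```

Each negative test case above then reduces to asserting that the expected message is returned.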
Test Case – 2: Test Case for a Cell Phone
1) Check the correct insertion of the Battery in the cell phone.
2) Check the proper operation of Switch ON and Switch OFF functions of the cell phone.
3) Check the correct insertion of the SIM Card in the cell phone.
4) Check the correct insertion of one contact name and phone number in the Address book.
5) Check the successful operation of the Incoming call.
6) Check the successful operation of the outgoing call.
7) Check the successful operation of sending and receiving of Short Messages.
8) Check the correct selection & display of all Numbers and special characters.
9) Check the successful deletion of contact name and phone number from the Address book.
10) Check the successful capturing of the home Network from the service provider.
11) Check the successful connectivity of the GPRS facility – if supported on the cell phone.
12) Check the successful connectivity of the EDGE facility – if supported on the cell phone.
Test Case – 3: Test Case for a Traffic Signal
1) Check the presence of three lights like Green, Yellow & Red on the traffic light post.
2) Check the switching sequence of the lights.
3) Check the defined time delay between the switching of lights of defined colors.
4) Check the possibility and accuracy of adjustment in defining the time delay between the switching of various lights depending upon the traffic density.
5) Check the switching ON of light of one color at one particular time.
6) Check the switching of lights from some type of sensor.
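The switching behaviour checked in test cases 2, 3 and 5 above can be modelled as a small cyclic state machine. This is only a sketch under assumed colour order and delay values; real signal controllers vary.

```python
# Sketch of the switching sequence (test case 2) and per-colour delays (test case 3).
# The order and delay values below are assumptions for illustration.

SEQUENCE = ["Green", "Yellow", "Red"]
DELAYS = {"Green": 30, "Yellow": 5, "Red": 25}  # seconds; adjustable per traffic density

def next_light(current):
    """Return the colour that follows `current` in the fixed switching sequence."""
    i = SEQUENCE.index(current)
    return SEQUENCE[(i + 1) % len(SEQUENCE)]

# Exactly one light is ON at a time (test case 5): the state *is* the lit colour.
print(next_light("Green"))   # Yellow
print(next_light("Red"))     # Green
```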
Test Case – 4: Test Case for an Elevator
1) Check the capability of Upward & Downward movement.
2) Check the proper stopping at each and every floor.
3) Check the stoppage exactly at the floor whose corresponding number is pressed.
4) Check the automatic upward movement when called by someone from some floor at higher level.
5) Check the automatic downward movement when called by someone from some floor at lower level.
6) Check the proper functioning of the wait function till the Close button is pressed.
7) Check the automatic opening of the door in the event of someone trying to step in while the closing of the door is in progress.
8) Check the motion of the elevator without any jerks.
9) Check the load limit prescribed for the elevator – Warn if the load limit is exceeded.
10) Check the presence & proper functioning of the auto descent facility in case of power failure.
11) Check the presence & proper functioning of the communication system in case of power failure.
12) Check the presence & proper functioning of the ventilation system provided.
13) Check the presence & proper functioning of the fire fighting system in case of emergency.
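The call-handling checks in test cases 4 and 5 above (move up when called from a higher floor, down from a lower one) can be sketched as a tiny decision function; the names and return values are illustrative assumptions.

```python
# Sketch of the direction decision from elevator test cases 4 and 5.

def movement(current_floor, called_from):
    """Decide how the car responds to a call from `called_from`."""
    if called_from > current_floor:
        return "up"       # test case 4: called from a higher floor
    if called_from < current_floor:
        return "down"     # test case 5: called from a lower floor
    return "open doors"   # called from the current floor

print(movement(3, 7))  # up
print(movement(3, 1))  # down
```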
Wednesday, August 25, 2010
When to stop Testing
Almost every tester comes across this question in an interview. We all know that testing is never complete; it merely stops at a particular stage. This does not mean that the system is bug free, but it gives confidence that the system can work efficiently at that stage.
There are many criteria or assumptions based on which testing is stopped, for example:
* Stop the testing when the committed / planned testing deadlines are about to expire.
* Stop the testing when we are not able to detect any more errors even after execution of all the planned test Cases.
We can see that neither of the above statements carries much meaning on its own: the first can be satisfied even by doing nothing, while the second is equally weak since it cannot ensure the quality of our test cases.
Pinpointing the time when to stop testing is difficult. Many modern software applications are so complex and run in such an interdependent environment that complete testing can never be done.
Most common factors helpful in deciding when to stop the testing are:
* Stop the Testing when deadlines like release deadlines or testing deadlines have been reached.
* Stop the Testing when the test cases have been completed with some prescribed pass percentage.
* Stop the Testing when the testing budget comes to its end.
* Stop the Testing when the code coverage and functionality requirements come to a desired level.
* Stop the Testing when bug rate drops below a prescribed level
* Stop the Testing when the period of beta testing / alpha testing gets over.
Keeping Track of the Progress of Testing:
Testing metrics can help the testers to take better and accurate decisions; like when to stop testing or when the application is ready for release, how to track testing progress & how to measure the quality of a product at a certain point in the testing cycle.
The best way is to have a fixed number of test cases ready well before the beginning of the test execution cycle. Subsequently, measure the testing progress by recording the total number of test cases executed, using the following metrics, which are quite helpful in measuring the quality of the software product:
1) Percentage Completion: (Number of executed test cases) / (Total number of test cases)
2) Percentage Test cases Passed: Defined as (Number of passed test cases) / (Number of executed test cases)
3) Percentage Test cases Failed: Defined as (Number of failed test cases) / (Number of executed test cases)
A test case is declared Failed even when just one bug is found while executing it; otherwise it is considered Passed.
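The three progress metrics above are simple ratios, and can be sketched as one helper function; the function name and sample counts are illustrative.

```python
# Sketch of the three progress metrics defined above, expressed as percentages.

def progress_metrics(total, executed, passed):
    """Compute completion, pass and fail percentages from test-case counts."""
    failed = executed - passed  # a case with even one bug counts as Failed
    return {
        "completion_pct": 100.0 * executed / total,
        "passed_pct": 100.0 * passed / executed,
        "failed_pct": 100.0 * failed / executed,
    }

m = progress_metrics(total=200, executed=150, passed=120)
print(m)  # completion 75.0, passed 80.0, failed 20.0
```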
Scientific Methods to decide when to stop testing:
1) Decision based upon Number of Pass / Fail test Cases:
a) Preparation of a predefined number of test cases before the test execution cycle.
b) Execution of all test cases in every testing cycle.
c) Stopping the testing process when all the test cases pass.
d) Alternatively, testing can be stopped when the percentage of failures in the last testing cycle is observed to be extremely low.
2) Decision based upon Metrics:
a) Mean Time Between Failure (MTBF): by recording the average operational time before the system failure.
b) Coverage metrics: by recording the percentage of instructions executed during tests.
c) Defect density: by recording the defects related to size of software like "defects per 1000 lines of code" or the number of open bugs and their severity levels.
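Two of the metrics above are straightforward to compute: MTBF as average operational time between failures, and defect density as defects per 1000 lines of code (KLOC). The function names and sample figures below are illustrative.

```python
# Sketches of two stop-testing metrics described above.

def mtbf(operational_hours, failures):
    """Mean Time Between Failure: average operational time per failure."""
    return operational_hours / failures

def defect_density(defects, lines_of_code):
    """Defects per 1000 lines of code (KLOC)."""
    return defects / (lines_of_code / 1000.0)

print(mtbf(500.0, 4))              # 125.0 hours between failures
print(defect_density(18, 36000))   # 0.5 defects per KLOC
```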
Finally How to Decide:
Stop the testing if:
1) Coverage of the code is good
2) Mean time between failure is quite large
3) Defect density is very low
4) Number of high severity Open Bugs is very low.
Here 'Good', 'Large', 'Low' and 'High' are subjective terms and depend on the type of product being tested. Ultimately, the risk associated with moving the application into production, as well as the risk of not moving forward, must be taken into consideration.
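The four exit criteria above can be combined into a single hedged decision sketch. Since 'good', 'large' and 'low' are subjective, the thresholds below are pure assumptions for illustration, to be tuned per product and risk appetite.

```python
# Sketch of a combined stop-testing decision; all thresholds are assumed values.

def should_stop_testing(coverage_pct, mtbf_hours, defect_density_kloc, open_high_severity):
    """Return True when all four exit criteria are met (thresholds are illustrative)."""
    return (coverage_pct >= 90               # code coverage is "good"
            and mtbf_hours >= 100            # mean time between failure is "large"
            and defect_density_kloc <= 1.0   # defect density is "low"
            and open_high_severity <= 2)     # few high-severity open bugs

print(should_stop_testing(95, 150, 0.4, 1))   # True
print(should_stop_testing(95, 150, 0.4, 10))  # False: too many open severe bugs
```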
A broad / universal statement to define the time to stop testing is: when all the test cases derived from equivalence partitioning, cause-effect analysis & boundary-value analysis are executed without detecting errors.