Wednesday 16 January 2013

Test Plan Template:

(Name of the Product)
Prepared by:
(Names of Preparers)
(Date)
TABLE OF CONTENTS
1.0 INTRODUCTION
2.0 OBJECTIVES AND TASKS
2.1 Objectives
2.2 Tasks
3.0 SCOPE
4.0 Testing Strategy
4.1 Alpha Testing (Unit Testing)
4.2 System and Integration Testing
4.3 Performance and Stress Testing
4.4 User Acceptance Testing
4.5 Batch Testing
4.6 Automated Regression Testing
4.7 Beta Testing
5.0 Hardware Requirements
6.0 Environment Requirements
6.1 Mainframe
6.2 Workstation
7.0 Test Schedule
8.0 Control Procedures
9.0 Features to Be Tested
10.0 Features Not to Be Tested
11.0 Resources/Roles & Responsibilities
12.0 Schedules
13.0 Significantly Impacted Departments (SIDs)
14.0 Dependencies
15.0 Risks/Assumptions
16.0 Tools
17.0 Approvals
1.0 INTRODUCTION
A brief summary of the product being tested. Outline all the functions at a high level.
2.0 OBJECTIVES AND TASKS
2.1 Objectives
Describe the objectives supported by the Master Test Plan, e.g., defining tasks and responsibilities, providing a vehicle for communication, serving as a document to be used as a service-level agreement, etc.
2.2 Tasks
List all tasks identified by this Test Plan, i.e., testing, post-testing, problem reporting, etc.
3.0 SCOPE
General
This section describes what is being tested, such as all the functions of a specific product, its existing interfaces, integration of all functions.
Tactics
List here how you will accomplish the items that you have listed in the “Scope” section. For example, if you have mentioned that you will be testing the existing interfaces, describe the procedures you will follow to notify the key people representing their respective areas and to allot time in their schedules for assisting you in accomplishing this activity.
4.0 TESTING STRATEGY
Describe the overall approach to testing. For each major group of features or feature combinations, specify the approach which will ensure that these feature groups are adequately tested. Specify the major activities, techniques, and tools which are used to test the designated groups of features.
The approach should be described in sufficient detail to permit identification of the major testing tasks and estimation of the time required to do each one.
4.1 Unit Testing
Definition:
Specify the minimum degree of comprehensiveness desired. Identify the techniques which will be used to judge the comprehensiveness of the testing effort (for example, determining which statements have been executed at least once). Specify any additional completion criteria (for example, error frequency). The techniques to be used to trace requirements should be specified.
Participants:
List the names of individuals/departments who would be responsible for Unit Testing.
Methodology:
Describe how unit testing will be conducted. Who will write the test scripts for the unit testing, what will be the sequence of events of Unit Testing, and how will the testing activity take place?
4.2 System and Integration Testing
Definition:
Describe your understanding of System and Integration Testing for your project.
Participants:
Who will be conducting System and Integration Testing on your project? List the individuals that will be responsible for this activity.
Methodology:
Describe how System & Integration testing will be conducted. Who will write the test scripts for this testing, what will be the sequence of events of System & Integration Testing, and how will the testing activity take place?
4.3 Performance and Stress Testing
Definition:
Describe your understanding of Performance and Stress Testing for your project.
Participants:
Who will be conducting Stress Testing on your project? List the individuals that will be responsible for this activity.
Methodology:
Describe how Performance & Stress testing will be conducted. Who will write the test scripts for the testing, what will be the sequence of events of Performance & Stress Testing, and how will the testing activity take place?
4.4 User Acceptance Testing
Definition:
The purpose of acceptance testing is to confirm that the system is ready for operational use. During acceptance testing, end-users (customers) of the system compare the system to its initial requirements.
Participants:
Who will be responsible for User Acceptance Testing? List the individuals’ names and responsibility.
Methodology:
Describe how the User Acceptance testing will be conducted. Who will write the test scripts for the testing, what will be the sequence of events of User Acceptance Testing, and how will the testing activity take place?
4.5 Batch Testing
4.6 Automated Regression Testing
Definition:
Regression testing is the selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still works as specified in the requirements.
Participants:
Methodology:
4.7 Beta Testing
Participants:
Methodology:
5.0 HARDWARE REQUIREMENTS
Computers
Modems
6.0 ENVIRONMENT REQUIREMENTS
6.1 Mainframe
Specify both the necessary and desired properties of the test environment. The specification should contain the physical characteristics of the facilities, including the hardware, the communications and system software, the mode of usage (for example, stand-alone), and any other software or supplies needed to support the test. Also specify the level of security which must be provided for the test facility, system software, and proprietary components such as software, data, and hardware.
Identify special test tools needed. Identify any other testing needs (for example, publications or office space). Identify the source of all needs which are not currently available to your group.
6.2 Workstation
7.0 TEST SCHEDULE
Include test milestones identified in the Software Project Schedule as well as all item transmittal events.
Define any additional test milestones needed. Estimate the time required to do each testing task. Specify the schedule for each testing task and test milestone. For each testing resource (that is, facilities, tools, and staff), specify its periods of use.
8.0 CONTROL PROCEDURES
Problem Reporting
Document the procedures to follow when an incident is encountered during the testing process. If a standard form is going to be used, attach a blank copy as an “Appendix” to the Test Plan. In the event you are using an automated incident logging system, write those procedures in this section.
Change Requests
Document the process of modifications to the software. Identify who will sign off on the changes and what would be the criteria for including the changes to the current product. If the changes will affect existing programs, these modules need to be identified.
9.0 FEATURES TO BE TESTED
Identify all software features and combinations of software features that will be tested.
10.0 FEATURES NOT TO BE TESTED
Identify all features and significant combinations of features which will not be tested and the reasons.
11.0 RESOURCES/ROLES & RESPONSIBILITIES
Specify the staff members who are involved in the test project and what their roles are going to be (for example, Mary Brown (User) compiles Test Cases for Acceptance Testing). Identify the groups responsible for managing, designing, preparing, executing, and resolving the test activities, as well as related issues. Also identify the groups responsible for providing the test environment. These groups may include developers, testers, operations staff, testing services, etc.
12.0 SCHEDULES
Major Deliverables
Identify the deliverable documents. You can list the following documents:
- Test Plan
- Test Cases
- Test Incident Reports
- Test Summary Reports
13.0 SIGNIFICANTLY IMPACTED DEPARTMENTS (SIDs)
Department/Business Area | Business Manager | Tester(s)
14.0 DEPENDENCIES
Identify significant constraints on testing, such as test-item availability, testing-resource availability, and deadlines.
15.0 RISKS/ASSUMPTIONS
Identify the high-risk assumptions of the test plan. Specify contingency plans for each (for example, delay in delivery of test items might require increased night shift scheduling to meet the delivery date).
16.0 TOOLS
List the automation tools you are going to use, as well as the bug-tracking tool.
17.0 APPROVALS
Specify the names and titles of all persons who must approve this plan. Provide space for the signatures and dates.
Name (In Capital Letters) | Signature | Date
1.
2.
3.
4.
Here is a detailed look at what testing is carried out at each step of the software quality and testing life cycle specified by IEEE and ISO standards:
- Review of the software requirement specifications.
- Objectives are set for the major releases.
- Target dates are planned for the releases.
- A detailed project plan is built; this includes the decision on design specifications.
- A test plan is developed based on the design specifications. The test plan includes the objectives, the methodology adopted while testing, features to be tested and not to be tested, risk criteria, the testing schedule, multi-platform support, and the resource allocation for testing.
- Test specifications: this document includes the technical details (software requirements) required prior to testing.
- Writing of test cases:
  - Smoke (BVT) test cases
  - Sanity test cases
  - Regression test cases
  - Negative test cases
  - Extended test cases
- Development: modules are developed one by one.
- Installer binding: installers are built around the individual product.
- Build procedure: a build includes installers of the available products for multiple platforms.
- Testing:
  - Smoke test (BVT): a basic application test used to decide on further testing.
  - Testing of new features.
  - Cross-platform testing.
  - Stress testing and memory leakage testing.
- Bug reporting: a bug report is created.
- Development: code freeze; no more new features are added at this point.
- Testing: builds and regression testing.
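To make the smoke (BVT) stage above concrete, here is a minimal sketch using Python's unittest. The myapp module and its launch_app/login calls are hypothetical placeholders standing in for whatever entry points your product actually exposes:

import unittest

from myapp import launch_app  # hypothetical application interface

class SmokeTest(unittest.TestCase):
    """Build Verification Test: if any of these checks fail, the build
    is rejected and no further testing is performed on it."""

    def test_application_starts(self):
        app = launch_app()
        self.assertTrue(app.is_running())

    def test_valid_user_can_log_in(self):
        app = launch_app()
        self.assertTrue(app.login("testuser", "testpass"))

if __name__ == "__main__":
    unittest.main()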
Black Box Testing: Types and techniques of BBT
Black box testing treats the system as a “black box”, so it doesn’t explicitly use knowledge of the internal structure or code. In other words, the test engineer need not know the internal workings of the “black box” or application.
The main focus in black box testing is on the functionality of the system as a whole. The term ‘behavioral testing’ is also used for black box testing, and white box testing is also sometimes called ‘structural testing’. Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn’t strictly forbidden, but it’s still discouraged.
Each testing method has its own advantages and disadvantages. There are some bugs that cannot be found using only black box or only white box testing. The majority of applications are tested by the black box method, and we need to cover the majority of test cases so that most of the bugs will be discovered by it.
Black box testing occurs throughout the software development and testing life cycle, i.e. in the unit, integration, system, acceptance, and regression testing stages.
Tools used for Black Box testing:
Black box testing tools are mainly record-and-playback tools. These tools are used for regression testing, to check whether a new build has introduced any bugs into previously working application functionality. These record-and-playback tools record test cases in the form of scripts such as TSL, VBScript, JavaScript, or Perl.
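To show the same black box idea directly in code, here is a minimal sketch using Python's unittest. The discount_price function is a hypothetical example; the point is that the test cases are derived purely from its specification (inputs and expected outputs), never from its internal structure:

import unittest

def discount_price(price, percent):
    # Implementation under test; to the black box tester, only its
    # specification matters: reduce price by percent, and reject
    # percents outside 0..100.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class DiscountBlackBoxTest(unittest.TestCase):
    def test_typical_value(self):
        # Specified behavior: 25% off 200 is 150.
        self.assertEqual(discount_price(200, 25), 150)

    def test_out_of_range_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            discount_price(200, 101)

if __name__ == "__main__":
    unittest.main()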
Advantages of Black Box Testing
- The tester can be non-technical.
- Used to identify contradictions between the actual system and the specifications.
- Test cases can be designed as soon as the functional specifications are complete.
Disadvantages of Black Box Testing
- The test inputs need to come from a large sample space.
- It is difficult to identify all possible inputs in limited testing time, so writing test cases is slow and difficult.
- There is a chance of leaving application paths unidentified during this testing.
Methods of Black box Testing:
Graph Based Testing Methods:
Each and every application is built up of some objects. All such objects are identified and a graph is prepared. From this object graph, each object relationship is identified, and test cases are written accordingly to discover the errors.
Error Guessing:
This is purely based on the previous experience and judgment of the tester. Error guessing is the art of guessing where errors may be hidden. There are no specific tools for this technique; the tester writes test cases to cover the application paths where errors are most likely to hide.
Boundary Value Analysis:
Many systems have a tendency to fail at the boundaries, so testing the boundary values of an application is important. Boundary Value Analysis (BVA) is a functional testing technique in which the extreme boundary values are chosen. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values.
- Extends equivalence partitioning.
- Test both sides of each boundary.
- Look at output boundaries for test cases too.
- Test min, min-1, max, max+1, and typical values.
BVA techniques:
1. Number of variables: for n variables, BVA yields 4n + 1 test cases.
2. Kinds of ranges: generalizing ranges depends on the nature or type of the variables.
Advantages of Boundary Value Analysis
1. Robustness testing: Boundary Value Analysis plus values that go beyond the limits, i.e. Min-1, Min, Min+1, Nom, Max-1, Max, Max+1.
2. Forces attention to exception handling.
Limitations of Boundary Value Analysis
Boundary value testing is efficient only for variables with fixed boundaries, i.e. defined value ranges.
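As a sketch of the 4n + 1 rule mentioned above, assuming the usual single-fault model where one variable at a time is pushed to a boundary while the others stay at their nominal values:

def bva_test_cases(ranges):
    """Single-fault Boundary Value Analysis.
    ranges maps each variable name to its (min, max) pair. Yields
    4n + 1 test cases for n variables: one all-nominal case, plus
    min, min+1, max-1, and max for each variable in turn."""
    nominal = {name: (lo + hi) // 2 for name, (lo, hi) in ranges.items()}
    yield dict(nominal)                        # the all-nominal case (the +1)
    for name, (lo, hi) in ranges.items():      # 4 cases per variable
        for value in (lo, lo + 1, hi - 1, hi):
            case = dict(nominal)
            case[name] = value
            yield case

# Example: two variables give 4*2 + 1 = 9 test cases.
for case in bva_test_cases({"day": (1, 31), "month": (1, 12)}):
    print(case)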
Equivalence Partitioning:
Equivalence partitioning is a black box testing method that divides the input domain of a program into classes of data from which test cases can be derived.
How is this partitioning performed while testing:
1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class is defined.
4. If an input condition is Boolean, one valid and one invalid class is defined.
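Here is a small sketch of how rule 1 translates into concrete test inputs, using a hypothetical age field that must lie in the range 18 to 60:

def accepts_age(age):
    # Hypothetical validator under test: ages 18..60 are valid.
    return 18 <= age <= 60

# Rule 1 (range): one valid and two invalid equivalence classes.
valid_representatives = [35]       # any value inside 18..60
invalid_below = [17]               # class below the range
invalid_above = [61]               # class above the range

for age in valid_representatives:
    assert accepts_age(age)
for age in invalid_below + invalid_above:
    assert not accepts_age(age)
print("equivalence class checks passed")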
Comparison Testing:
In this method, different independent versions of the same software are compared to each other during testing.
Software Installation/Uninstallation Testing
Have you performed software installation testing? How was the experience? Installation testing (implementation testing) is quite an interesting part of the software testing life cycle.
Installation testing is like introducing a guest into your home. The new guest should be properly introduced to all the family members to make them feel comfortable. Installing new software is quite like this example.
If your installation is successful on the new system then the customer will definitely be happy, but what if things go completely the opposite way? If installation fails then our program will not work on that system; not only that, it can leave the user’s system badly damaged. The user might need to reinstall the full operating system.
In the above case will you make any impression on the user? Definitely not! Your first chance to make a loyal customer is ruined by incomplete installation testing. What do you need to do to make a good first impression? Test the installer thoroughly with a combination of both manual and automated processes on different machines with different configurations. The major concern in installation testing is time! It requires a lot of time to execute even a single test case. If you are going to test a big application installer, think about the time required to perform so many test cases on different configurations.
We will see different methods to perform manual installer testing and some basic guideline for automating the installation process.
To start installation testing, first decide on how many different system configurations you want to test the installation. Prepare one basic hard disk drive. Format this HDD with the most common or default file system, and install the most common operating system (Windows) on it. Install some basic required components on this HDD. Create images of this base HDD, and you can then create other configurations on top of this base drive. Make one set of each configuration, such as operating system and file format, to be used for further testing.
How can we use automation in this process? Make some systems dedicated to creating basic images of the base configuration (use software like Norton Ghost for quickly creating exact images of an operating system). This will save you tremendous time on each test case. For example, if the time to install one OS with the basic configuration is, say, 1 hour, then each test case on a fresh OS will require 1+ hours. But creating an image of the OS will hardly require 5 to 10 minutes, and you will save approximately 40 to 50 minutes!
You can use one operating system for multiple installation attempts of the installer, each time uninstalling the application and preparing the base state for the next test case. Be careful here: your uninstallation program should have been tested beforehand and should be working fine.
Installation testing tips with some broad test cases:
1) Use flow diagrams to perform installation testing. Flow diagrams simplify our task. See example flow diagram for basic installation testing test case.
Add some more test cases to this basic flow chart; for example, if your application is not the first release, try adding different logical installation paths.
2) If you have previously installed a compact basic version of the application, then in the next test case install the full application version on the same path used for the compact version.
3) If you are using a flow diagram to test the different files to be written to disk during installation, then use the same flow diagram in reverse order to test the uninstallation of all the installed files.
4) Use flow diagrams to automate the testing efforts. It will be very easy to convert diagrams into automated scripts.
5) Test the installer scripts used for checking the required disk space. If the installer reports 1 MB of required disk space, then make sure exactly 1 MB is used, or flag it as an error if more disk space is utilized during installation.
6) Test the disk space requirement on different file system formats; for example, FAT16 will require more space than the more efficient NTFS or FAT32 file systems.
7) If possible, set up a dedicated system used only for creating disk images. As said above, this will save you testing time.
8) Use a distributed testing environment to carry out installation testing. A distributed environment simply saves you time, and you can effectively manage all the different test cases from a single machine. A good approach is to create a master machine which drives different slave machines on the network. You can start installation simultaneously on different machines from the master system.
9) Try to automate the routine that checks the files to be written to disk. You can maintain the list of files to be written to disk in an Excel sheet and give this list as input to an automated script that verifies each and every path to confirm correct installation (see the sketch after this list).
10) Use software available freely in the market to verify registry changes on successful installation. Verify the registry changes against your expected change list after installation.
11) Forcefully break the installation process midway. Observe the behavior of the system and check whether the system recovers to its original state without any issues. You can test this “break of installation” at every installation step.
12) Disk space checking: this is a crucial check in the installation-testing scenario. You can choose different manual and automated methods to do this checking. With manual methods, check the free disk space available on the drive before installation and the disk space reported by the installer script, to verify whether the installer is calculating and reporting disk space accurately. Check the disk space after installation to verify accurate use of installation disk space. Run various combinations of disk space availability by using tools that automatically fill the disk during installation. Check system behavior under low disk space conditions during installation.
13) As you check installation, you can test uninstallation as well. Before each new iteration of installation, make sure that all the files written to disk are removed after uninstallation. Sometimes the uninstallation routine removes files only from the last upgraded installation, keeping the older version's files untouched. Also check the reboot option after uninstallation, both manually and by forcing the system not to reboot.
I have addressed many areas of manual as well as automated installation testing. Still, there are many areas you need to focus on depending on the complexity of your software under installation. These important tasks not addressed here include installation over the network, online installation, patch installation, database checking on installation, shared DLL installation and uninstallation, etc.
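Picking up tip 9 above, here is a minimal sketch that verifies an installation against an expected file manifest. The install directory and manifest file name are purely illustrative; in practice the expected list would be exported from your build's file inventory:

import os

def verify_installation(install_dir, manifest_path):
    """Compare the files actually present under install_dir against the
    expected file list (one relative path per line in the manifest)."""
    with open(manifest_path) as f:
        expected = {line.strip() for line in f if line.strip()}
    actual = set()
    for root, _dirs, files in os.walk(install_dir):
        for name in files:
            actual.add(os.path.relpath(os.path.join(root, name), install_dir))
    return expected - actual, actual - expected   # (missing, unexpected)

# Illustrative paths only.
missing, unexpected = verify_installation(r"C:\Program Files\MyApp", "manifest.txt")
print("missing files:", sorted(missing))
print("unexpected files:", sorted(unexpected))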
Bug life cycle
What is Bug/Defect?
Simple Wikipedia definition of Bug is: “A computer bug is an error, flaw, mistake, failure, or fault in a computer program that prevents it from working correctly or produces an incorrect result. Bugs arise from mistakes and errors, made by people, in either a program’s source code or its design.”
Other definitions can be:
An unwanted and unintended property of a program or piece of hardware, especially one that causes it to malfunction.
or
A fault in a program, which causes the program to perform in an unintended or unanticipated manner.
Lastly the general definition of bug is: “failure to conform to specifications”.
If you want to detect and resolve defects in the early development stages, defect tracking and the software development phases should start simultaneously.
We will discuss more on Writing effective bug report in another article. Let’s concentrate here on bug/defect life cycle.
Life cycle of Bug:
1) Log new defect
When a tester logs any new bug, the mandatory fields are:
Build version, Submit On, Product, Module, Severity, Synopsis, and Description (steps to reproduce).
To the above list you can add some optional fields if you are using a manual bug submission template.
These optional fields are: customer name, browser, operating system, file attachments or screenshots.
The following fields remain either specified or blank: if you have the authority to set the bug Status, Priority, and ‘Assigned to’ fields, then you can specify them. Otherwise the test manager will set the status and bug priority and assign the bug to the respective module owner.
Look at the following Bug life cycle:
(Figure: the Bugzilla bug life cycle diagram.)
The figure is quite complicated, but when you consider the significant steps in the bug life cycle you will quickly get an idea of the life of a bug.
On successful logging, the bug is reviewed by the development or test manager. The test manager can set the bug status to Open, can assign the bug to a developer, or the bug may be deferred until the next release.
Once the bug is assigned, the developer can start working on it. The developer can set the bug status to Won’t Fix, Could Not Reproduce, Need More Information, or Fixed.
If the bug status set by the developer is either ‘Need more info’ or ‘Fixed’, then QA responds with a specific action. If the bug is fixed, then QA verifies it and can set the bug status to Verified Closed or Reopen.
Bug status description:
These are various stages of bug life cycle. The status caption may vary depending on the bug tracking system you are using.
1) New: when QA files a new bug.
2) Deferred: if the bug is not related to the current build, cannot be fixed in this release, or is not important enough to fix immediately, then the project manager can set the bug status to Deferred.
3) Assigned: the ‘Assigned to’ field is set by the project lead or manager, who assigns the bug to a developer.
4) Resolved/Fixed: when the developer makes the necessary code changes and verifies the changes, he/she can set the bug status to ‘Fixed’, and the bug is passed to the testing team.
5) Could not reproduce: if the developer is not able to reproduce the bug by the steps given in the bug report by QA, the developer can mark the bug as ‘CNR’. QA then needs to check whether the bug is reproducible and can reassign it to the developer with detailed reproduction steps.
6) Need more information: if the developer is not clear about the reproduction steps provided by QA, he/she can mark the bug as ‘Need more information’. In this case QA needs to add detailed reproduction steps and assign the bug back to the developer for a fix.
7) Reopen: if QA is not satisfied with the fix, or if the bug is still reproducible even after the fix, QA can mark it as ‘Reopen’ so that the developer can take appropriate action.
8) Closed: if the bug is verified by the QA team, the fix is okay, and the problem is solved, then QA can mark the bug as ‘Closed’.
9) Rejected/Invalid: sometimes the developer or team lead can mark the bug as Rejected or Invalid if the system is working according to specifications and the bug is just due to some misinterpretation.
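The allowed transitions described above can be captured in a small state table. Here is a sketch; the status names follow this article rather than any particular bug tracking tool:

# Allowed status transitions, following the stages described above.
TRANSITIONS = {
    "New": {"Assigned", "Deferred", "Rejected"},
    "Deferred": {"Assigned"},                  # picked up again later
    "Assigned": {"Fixed", "Won't Fix", "Could not reproduce",
                 "Need more information"},
    "Could not reproduce": {"Assigned"},       # QA adds detailed steps
    "Need more information": {"Assigned"},     # QA clarifies and reassigns
    "Fixed": {"Closed", "Reopen"},             # QA verifies the fix
    "Reopen": {"Assigned"},
    "Won't Fix": set(),
    "Rejected": set(),
    "Closed": set(),
}

def move(current, new):
    """Validate a status change against the life cycle table."""
    if new not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current} -> {new}")
    return new

status = "New"
for step in ("Assigned", "Fixed", "Reopen", "Assigned", "Fixed", "Closed"):
    status = move(status, step)
print("final status:", status)   # final status: Closed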
How to write a good bug report? Tips and Tricks
Why good Bug report?
If your bug report is effective, the chances are higher that it will get fixed. So fixing a bug depends on how effectively you report it. Reporting a bug is nothing but a skill, and I will tell you how to achieve it.
“The point of writing a problem report (bug report) is to get bugs fixed” - Cem Kaner. If a tester does not report a bug correctly, the programmer will most likely reject it as irreproducible. This can hurt the tester’s morale, and sometimes the ego as well. (I suggest not keeping any type of ego: egos like “I reported the bug correctly”, “I can reproduce it”, “Why was the bug rejected?”, “It’s not my fault”, etc.)
What are the qualities of a good software bug report?
Anyone can write a bug report. But not everyone can write an effective bug report. You should be able to distinguish between an average bug report and a good one. How do you distinguish a good bug report from a bad one? It’s simple: apply the following characteristics and techniques to report a bug.
1) Having clearly specified bug number:
Always assign a unique number to each bug report. This will help to identify the bug record. If you are using any automated bug-reporting tool, this unique number will be generated automatically each time you report a bug. Note the number and a brief description of each bug you report.
2) Reproducible:
If your bug is not reproducible it will never get fixed. You should clearly mention the steps to reproduce the bug. Do not assume or skip any reproduction step. A bug described step by step is easy to reproduce and fix.
3) Be Specific:
Do not write an essay about the problem. Be specific and to the point. Try to summarize the problem in a minimum of words yet in an effective way. Do not combine multiple problems even if they seem similar. Write a different report for each problem.
How to Report a Bug?
Use following simple Bug report template:
This is a simple bug report format. It may vary depending on the bug report tool you are using. If you are writing the bug report manually, then some fields, like the bug number, need to be mentioned specifically and assigned manually.
Reporter: Your name and email address.
Product: The product in which you found this bug.
Version: The product version, if any.
Component: The major sub-module of the product in which the bug was found.
Platform: Mention the hardware platform where you found this bug; the various platforms include ‘PC’, ‘MAC’, ‘HP’, ‘Sun’, etc.
Operating system: Mention all operating systems where you found the bug: Windows, Linux, Unix, SunOS, Mac OS, etc. Also mention the different OS versions, if applicable, like Windows NT, Windows 2000, Windows XP, etc.
Priority:
When should the bug be fixed? Priority is generally set from P1 to P5, with P1 meaning “fix the bug with the highest priority” and P5 meaning “fix when time permits”.
Severity:
This describes the impact of the bug.
Types of Severity:
  • Blocker: No further testing work can be done.
  • Critical: Application crash, Loss of data.
  • Major: Major loss of function.
  • Minor: Minor loss of function.
  • Trivial: Some UI enhancements.
  • Enhancement: Request for new feature or some enhancement in existing one.
Status:
When you log the bug in any bug tracking system, the bug status is ‘New’ by default.
Later on the bug goes through various stages like Fixed, Verified, Reopen, Won’t Fix, etc.
See the bug life cycle section earlier in this article for more detail.
Assign To:
If you know which developer is responsible for the particular module in which the bug occurred, then you can specify the email address of that developer. Otherwise keep it blank; this will assign the bug to the module owner, or the manager will assign the bug to a developer. Possibly add the manager’s email address to the CC list.
URL:
The page url on which bug occurred.
Summary:
A brief summary of the bug, ideally in 60 words or fewer. Make sure your summary reflects what the problem is and where it is.
Description:
A detailed description of the bug. Use the following fields in the description:
  • Reproduce steps: Clearly mention the steps to reproduce the bug.
  • Expected result: How the application should behave for the above-mentioned steps.
  • Actual result: What actually happens when the above steps are run, i.e. the bug behavior.
These are the important parts of a bug report. You can also add “Report type” as one more field to describe the bug type (a structured sketch of the full template follows the list below).
The report types are typically:
1) Coding error
2) Design error
3) New suggestion
4) Documentation issue
5) Hardware problem
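The fields above map naturally onto a simple record. Here is a minimal sketch; the field names follow this article's template rather than any specific tool, and the sample values are invented for illustration:

from dataclasses import dataclass, field

@dataclass
class BugReport:
    bug_id: int                  # unique, assigned manually or by the tool
    reporter: str                # your name and email address
    product: str
    version: str
    component: str
    platform: str
    operating_system: str
    priority: str                # P1 (highest) .. P5 (fix when time permits)
    severity: str                # Blocker, Critical, Major, Minor, Trivial, Enhancement
    summary: str                 # 60 words or fewer
    report_type: str = "Coding error"
    reproduce_steps: list = field(default_factory=list)
    expected_result: str = ""
    actual_result: str = ""
    status: str = "New"          # default status when logging
    assigned_to: str = ""        # developer email, if known

report = BugReport(
    bug_id=101, reporter="tester@example.com", product="MyApp",
    version="1.2", component="Sign-up", platform="PC",
    operating_system="Windows XP", priority="P2", severity="Major",
    summary="Sign-up page crashes when username contains a single quote",
    reproduce_steps=["Open the sign-up page",
                     "Enter Vijay's as the username",
                     "Click Submit"],
    expected_result="A validation message is shown",
    actual_result="The application crashes with a server error",
)
print(report.summary)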
Some Bonus tips to write a good bug report:
1) Report the problem immediately: if you find a bug while testing, do not wait to write a detailed bug report later. Instead, write the bug report immediately. This will ensure a good and reproducible bug report. If you decide to write the bug report later, the chances are high that you will miss important steps in your report.
2) Reproduce the bug three times before writing the bug report: your bug should be reproducible. Make sure your steps are robust enough to reproduce the bug without any ambiguity. If your bug is not reproducible every time, you can still file it, mentioning the periodic nature of the bug.
3) Test the same bug occurrence on other similar module:
Sometimes developers use the same code for different, similar modules. So the chances are high that a bug in one module will occur in other similar modules as well. You can even try to find a more severe version of the bug you found.
4) Write a good bug summary:
A bug summary will help developers quickly analyze the nature of the bug. A poor-quality report will unnecessarily increase development and testing time. Communicate well through your bug report summary. Keep in mind that the bug summary is used as a reference to search for the bug in the bug inventory.
5) Read bug report before hitting Submit button:
Read all the sentences, wording, and steps used in the bug report. See if any sentence creates ambiguity that could lead to misinterpretation. Misleading words or sentences should be avoided in order to have a clear bug report.
6) Do not use Abusive language:
It’s nice that you did good work and found a bug, but do not use this credit for criticizing the developer or attacking any individual.
Conclusion:
No doubt your bug report should be a high-quality document. Focus on writing good bug reports and spend some time on this task, because this is the main communication point between tester, developer, and manager. Managers should make their teams aware that writing a good bug report is the primary responsibility of any tester. Your efforts toward writing good bug reports will not only save company resources but also create a good relationship between you and the developers.
For better productivity write a better bug report.
Test your Software Testing knowledge. Take this mock test
If you are preparing for the CSTE testing certification exam, or thinking of taking the exam in the coming days, then this question series will help you prepare. Here I have included some questions from CSTE objective-type question papers.
Software testing/quality assurance professionals can also take this exam to test their testing knowledge.
You can either take a printout and mark the answers, or note your answers serially on paper for all 25 questions. Verify your answers against the answer page provided at the bottom of this test page.
1. Verification is:
a. Checking that we are building the right system
b. Checking that we are building the system right
c. Performed by an independent test team
d. Making sure that it is what the user really wants
2. A regression test:
a. Will always be automated
b. Will help ensure unchanged areas of the software have not been affected
c. Will help ensure changed areas of the software have not been affected
d. Can only be run during user acceptance testing
3. If an expected result is not specified then:
a. We cannot run the test
b. It may be difficult to repeat the test
c. It may be difficult to determine if the test has passed or failed
d. We cannot automate the user inputs
4. Which of the following could be a reason for a failure?
1) Testing fault
2) Software fault
3) Design fault
4) Environment Fault
5) Documentation Fault
a. 2 is a valid reason; 1,3,4 & 5 are not
b. 1,2,3,4 are valid reasons; 5 is not
c. 1,2,3 are valid reasons; 4 & 5 are not
d. All of them are valid reasons for failure
5. Tests are prioritized so that:
a. You shorten the time required for testing
b. You do the best testing in the time available
c. You do more effective testing
d. You find more faults
6. Which of the following is not a static testing technique
a. Error guessing
b. Walkthrough
c. Data flow analysis
d. Inspections
7. Which of the following statements about component testing is not true?
a. Component testing should be performed by development
b. Component testing is also known as isolation or module testing
c. Component testing should have completion criteria planned
d. Component testing does not involve regression testing
8. During which test activity could faults be found most cost effectively?
a. Execution
b. Design
c. Planning
d. Check Exit criteria completion
9. Which, in general, is the least required skill of a good tester?
a. Being diplomatic
b. Able to write software
c. Having good attention to detail
d. Able to be relied on
10. The purpose of requirement phase is
a. To freeze requirements
b. To understand user needs
c. To define the scope of testing
d. All of the above
11. The process starting with the terminal modules is called -
a. Top-down integration
b. Bottom-up integration
c. None of the above
d. Module integration
12. The inputs for developing a test plan are taken from
a. Project plan
b. Business plan
c. Support plan
d. None of the above
13. Function/Test matrix is a type of
a. Interim Test report
b. Final test report
c. Project status report
d. Management report
14. Defect Management process does not include
a. Defect prevention
b. Deliverable base-lining
c. Management reporting
d. None of the above
15. What is the difference between testing software developed by contractor outside your country, versus testing software developed by a contractor within your country?
a. Does not meet people needs
b. Cultural difference
c. Loss of control over reallocation of resources
d. Relinquishments of control
16. Software testing accounts for what percent of software development costs?
a. 10-20
b. 40-50
c. 70-80
d. 5-10
17. A reliable system will be one that:
a. Is unlikely to be completed on schedule
b. Is unlikely to cause a failure
c. Is likely to be fault-free
d. Is likely to be liked by the users
18. How much testing is enough?
a. This question is impossible to answer
b. The answer depends on the risks for your industry, contract and special requirements
c. The answer depends on the maturity of your developers
d. The answer should be standardized for the software development industry
19. Which of the following is not a characteristic for Testability?
a. Operability
b. Observability
c. Simplicity
d. Robustness
20. The Cyclomatic Complexity method comes under which testing method?
a. White box
b. Black box
c. Green box
d. Yellow box
21. Which of these can be successfully tested using Loop Testing methodology?
a. Simple Loops
b. Nested Loops
c. Concatenated Loops
d. All of the above
22. To test a function, the programmer has to write a ______, which calls the function and passes it test data.
a. Stub
b. Driver
c. Proxy
d. None of the above
23. Equivalence partitioning is:
a. A black box testing technique used only by developers
b. A black box testing technique than can only be used during system testing
c. A black box testing technique appropriate to all levels of testing
d. A white box testing technique appropriate for component testing
24. When a new testing tool is purchased, it should be used first by:
a. A small team to establish the best way to use the tool
b. Everyone who may eventually have some use for the tool
c. The independent testing team
d. The vendor contractor to write the initial scripts
25. Inspections can find all the following except
a. Variables not defined in the code
b. Spelling and grammar faults in the documents
c. Requirements that have been omitted from the design documents
d. How much of the code has been covered
10 Tips you should read before automating your testing work
I was getting too many questions on when and how to automate the testing process. Instead of answering them individually, I thought it would be better to have some discussion here. I will put down my thoughts about when to automate, how to automate, and whether we should automate our testing work at all. I know some of our readers are smarter than me, so it is always a good idea to start a meaningful discussion on such a vast topic to get in-depth ideas and thoughts from experts in different areas with experience in automation testing.
Why Automation testing?
1) You have some new releases and bug fixes in a working module. How will you ensure that the new bug fixes have not introduced any new bugs into previously working functionality? You need to test the previous functionality as well. Will you manually test all the module functionality every time you have bug fixes or new functionality added? You might do it manually, but then you are not testing effectively: effective in terms of company cost, resources, time, etc. Here comes the need for automation.
- So automate your testing procedure when you have a lot of regression work.
2) You are testing a web application where thousands of users might be interacting with your application simultaneously. How will you test such a web application? How will you create that many users manually and simultaneously? A very difficult task if done manually.
- Automate your load testing work to create virtual users and check the load capacity of your application.
3) You are testing an application where the code changes frequently. The GUI stays almost the same, but the functional changes are many, so testing rework is high.
- Automate your testing work when your GUI is almost frozen but you have a lot of frequent functional changes.
What are the Risks associated in Automation Testing?
There are some distinct situations where you can think of automating your testing work, and I have covered some of the risks of automation testing here. If you have made the decision to automate, or are going to soon, think about the following scenarios first.
1) Do you have skilled resources?
For automation you need people with some programming knowledge. Think of your resources. Do they have sufficient programming knowledge for automation testing? If not, do they have the technical capability or programming background to adapt easily to the new technologies? Are you going to invest money to build a good automation team? If your answer is yes, only then think of automating your work.
2) Initial cost for Automation is very high:
I agree that manual testing also has high costs, associated with hiring skilled manual testers. But if you are thinking automation will be the solution for you, think twice. The initial automation cost is very high: the cost associated with purchasing the automation tool, training, and maintenance of the test scripts.
There are many unsatisfied customers regretting their decision to automate their work. If you are spending too much and getting merely some good-looking testing tools and some basic automation scripts, then what is the use of automation?
3) Do not think to automate your UI if it is not fixed:
Beware before automating the user interface. If the user interface is changing extensively, the cost associated with script maintenance will be very high. Basic UI automation is sufficient in such cases.
4) Is your application stable enough to automate further testing work?
It would be a bad idea to automate testing work early in the development cycle (unless it is an agile environment). Script maintenance cost will be very high in such cases.
5) Are you thinking of 100% automation?
Please stop dreaming. You cannot automate 100% of your testing work. Certainly there are areas like performance testing, regression testing, and load/stress testing where you have a chance of reaching close to 100% automation. But areas like user interface, documentation, installation, compatibility, and recovery are where testing must be done manually.
6) Do not automate tests that run once:
Identify application areas and test cases that might be run only once and are not included in regression. Avoid automating such modules or test cases.
7) Will your automation suite be having long lifetime?
Every automation script suite should have a long enough lifetime that its building cost is definitely less than the manual execution cost. It is a bit difficult to analyze the effective cost of each automation script suite. Approximately, your automation suite should be used or run at least 15 to 20 times on separate builds (a general assumption; it depends on the specific application complexity) to have a good ROI, as the sketch below illustrates.
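A back-of-the-envelope sketch of the break-even calculation implied by tip 7, with all the numbers purely illustrative:

def break_even_runs(script_cost, upkeep_per_run, manual_cost_per_run):
    """How many runs before automation becomes cheaper than repeated
    manual execution, under a simple linear cost model."""
    saving_per_run = manual_cost_per_run - upkeep_per_run
    if saving_per_run <= 0:
        return None   # automation never pays off under these numbers
    return script_cost / saving_per_run

# Illustrative numbers: 40 hours to build the suite, 1 hour of upkeep
# per run, 4 hours to execute the same tests manually.
runs = break_even_runs(script_cost=40, upkeep_per_run=1, manual_cost_per_run=4)
print(f"break-even after about {runs:.0f} runs")   # about 13 runs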
Here is the conclusion:
Automation testing is the best way to accomplish most testing goals and make effective use of resources and time. But you should be cautious before choosing an automation tool. Be sure to have skilled staff before deciding to automate your testing work; otherwise your tool will remain on the shelf, giving you no ROI. Handing expensive automation tools to unskilled staff will lead to frustration. Before purchasing an automation tool, make sure the tool is a good fit for your requirements. You cannot have a tool that matches your requirements 100%, so find out the limitations of the tool that best matches your requirements and then use manual testing techniques to overcome those limitations. An open source tool is also a good option to start with automation. To know more about choosing automation tools, read my previous posts here and here.
Instead of relying 100% on either manual or automation, use the best combination of manual and automation testing. This is (I think) the best solution for every project. An automation suite will not find all the bugs and cannot be a replacement for real testers. Ad-hoc testing is also necessary in many cases.
Software Testing questions and Answers 1
Shivika asks:
“I have been given the assignment to test a UI based application page. They want me to break the functionality in any way. The first page is Sign up page containing fields like username password, email, url address field and some check box selection options . I have tried all the ways in which I can test the page. Can you also please suggest that what can be possible ways in which we can test the page?”
I will cover some major negative test cases to try to break the sign-up page (a few of them are turned into a small data-generation sketch after this list):
1) Check the limit of the username field, i.e. the data type of this field in the DB and the field size. Try adding more characters to this field than the field size limit allows, and see how the application responds.
2) Repeat the above case for number fields. Insert a number beyond the field storage capacity. This is a typical boundary test.
3) For the username field, try adding numbers and special characters in various combinations (characters like !@#$%^&*()_+}{”:?><,./;’[]). If these are not allowed, a specific message should be displayed to the user.
4) Try the above special character combinations in all the input fields on your sign-up page that have validations, like the email address field, URL field validations, etc.
5) Many applications crash for input fields containing ‘ (single quote) and ” (double quote), for example a field value like “Vijay’s web”. Try it in all the input fields one by one.
6) Try adding only numbers to input fields validated to accept only characters, and vice versa.
7) If URL validation is present, review the different rules for URL validation and add URLs that do not fit the rules, to observe the system behavior.
Example urls like: vijay.com/?q=vijay’s!@#$%^&*()_+}{”:?><,./;’[]web_page
Also add urls containing http:// and https:// while inserting into url input box.
8) If your sign-up page consists of steps like step 1, step 2, etc., then try changing parameter values directly in the browser address bar. Many times URLs are formatted with parameters to maintain the proper user steps. Try altering all those parameters directly without actually doing anything on the sign-up page.
9) Do some monkey testing, manually or automated (i.e. insert whatever comes to mind, or type randomly on the keyboard), and you will come up with some observations.
10) See if any page shows a JavaScript error, either at the browser's bottom-left corner, or enable the browser setting that displays a popup message for any JavaScript error.
These are all negative test cases. I assume that you have already tested the same sign-up page with all the valid cases to check that the application works as per the requirements.
If the above cases do not break the application page, then don’t forget to praise the developer!
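As promised above, here is a small sketch that generates hostile inputs covering cases 1, 3, 4, 5, and 6 for each field; the field names and length limits are illustrative assumptions:

# Illustrative field length limits for the sign-up form.
FIELD_LIMITS = {"username": 20, "email": 50, "url": 100}

SPECIAL_CHARACTERS = "!@#$%^&*()_+}{\":?><,./;'[]"

def negative_inputs(max_len):
    """Yield hostile values for one input field."""
    yield "x" * (max_len + 1)        # one past the size limit (case 1)
    yield SPECIAL_CHARACTERS         # special characters (cases 3 and 4)
    yield "Vijay's \"web\""          # single and double quotes (case 5)
    yield "123456"                   # numbers where characters are expected (case 6)

for field, limit in FIELD_LIMITS.items():
    for value in negative_inputs(limit):
        print(f"{field}: {value!r}")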
If you have some killer test cases to break such applications that you learned from your experience, you can specify them in comments below.
Jayant asks:
“Normally freshers pass out have a state of their mind as “we are freshers”, recently pass outs from college and expect that the companies to recruit them should consider the knowledge base they have and further should impact them training. In true terms what is meant by fresher for an industry?”
Good question. When I was a fresher I was thinking along similar lines. But think from the employer’s point of view. An employer will think, “Why should we hire candidates having a small knowledge base and little experience, who need training first before being assigned any work?” Well, fortunately not all employers think like this, and that’s why freshers are getting jobs and training on board. Thanks to the booming IT industry, demand will continue for freshers having a good educational background and appropriate problem-solving skills.
Tremendous growth in the number of engineering colleges has resulted in a significant increase in the number of graduates passing out each year. And the gap is also increasing between the skills of graduates and the expectations of companies.
Now I will focus on what industry looks for specifically in fresh graduates. Typically it includes:
  • Problem solving and Analytical skill
  • Technical skills
  • Communication and interpersonal skill
  • Leadership skill
  • Extra activities like foreign languages, organization skills etc.
So it is always better to try to gain some experience or skill before applying for any graduate job. You will then be one step ahead of the freshers who have no experience at all.
This work experience typically includes:
Internship -
Internship work done in any company during or after graduation. It may be a free or a paid internship.
Sandwich courses -
In some courses, industrial training is included in the curriculum itself. It is typically 6 months to 1 year in most universities. You can include this project training in your resume.
Special skill achievements through classes or companies:
Training taken from an institute or company can be included in your work experience.
Projects:
Projects accomplished for commercial or research purposes. These are the paid or certification projects accomplished for companies during the graduation years.
All the above-mentioned work will definitely count as experience, as it gives you an actual idea of a company, teamwork, and company working culture. Find out your skill areas and what you can offer an employer before hunting for jobs. Companies always look for well-rounded candidates who can effectively bring to their projects the skills gained from university, experience, and extra activities.
A lot has been written and discussed about website testing to date. But still, website testing is probably one of the most commonly confused topics among testers! Need evidence? Spend a little time searching and you can see tons of queries flooding the Internet (online forums, Usenet groups, Orkut communities, tech corners, etc.) regarding website testing: how a website should be tested, what should be tested, which things should be given priority while testing, what should not be given much importance while testing, and so on. I might well be the trillionth person on this planet to write an article on website testing here! But I am writing this because I am really tired of replying to emails asking me to write an article on website testing in my blog.

However, this article is not an attempt to show you how you should test your website. You should understand that there is no universal rule of testing that can fit all kinds of similar testing assignments. The testing approach that follows consists of a checklist of items that might be tested while testing a website. This does not mean that following this checklist can essentially guarantee you success while testing *any and every* kind of website. On the contrary, this post is meant to be used as a website-testing cheat sheet, a kind of checklist of items to remember while testing a website! While following this checklist, the chance of success (or failure) greatly depends on your own particular context! With that said, here is the cheat sheet for website testing:

[A] Functionality Testing:
While testing the functionality of the websites the following areas should be tested.

a) Links/URL Testing: There are mainly 4 types of links in most websites.
» Internal links [Test the links that point to the pages of the same website.]
» External links [Test the links that point to external websites.]
» Mail links [Test if the email links open the default email client with the recipient email ID already filled in the “To” field.]
» Broken links [Test if any of those links are broken or dead! Free tools like Link Valet (which is convenient for checking a few pages) or Xenulink (convenient for checking a whole site) can be helpful in testing broken links.]
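For the broken-link check above, here is a minimal standard-library sketch that reports dead links from a list of URLs; a real run would first crawl the site to collect the links, and the example URLs are placeholders:

import urllib.request
import urllib.error

def check_links(urls):
    """Report every URL that errors out or answers with status >= 400."""
    broken = []
    for url in urls:
        try:
            request = urllib.request.Request(url, method="HEAD")
            with urllib.request.urlopen(request, timeout=10) as response:
                if response.status >= 400:
                    broken.append((url, response.status))
        except (urllib.error.URLError, OSError) as err:
            broken.append((url, str(err)))
    return broken

for url, reason in check_links(["https://example.com/",
                                "https://example.com/missing-page"]):
    print("BROKEN:", url, reason)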

b) Forms Testing: The web forms should be consistent and should contain all the required input and output controls. Test the integrity of the web forms and the consistency of the variables.

c) Validation Testing:
» You can use tools like W3C validator to test and make sure that you have valid HTML (or XHTML).
» Most of the modern day websites use CSS (Cascading Style Sheet). You can use tools like W3C CSS validator to test and validate the CSS used in your site.
» Test the different fields for field-level validation. Test and validate user inputs like TextBox inputs, ComboBox inputs, DropDownBox selections, KeyDown, KeyPress, KeyUp, etc.

d) Test the Error messages: Error messages are an integral part of any well-developed website, and they guide the user whenever any wrong/unexpected input is submitted. Testing the error messages is very important, as a badly designed error message can misguide the end user about the actual impact of the error! On testing error messages, Ben Simo has a nice article that you might find interesting.

e) Testing optional and mandatory fields: Test if the web forms handle the optional and mandatory fields efficiently. Ideally, the application should not allow you to proceed unless you have filled in ALL the mandatory fields and should not restrict you from proceeding if you have left ANY of those optional fields unfilled!

f) Database Testing: Most modern websites come with a backend database (unless the site consists of purely static web pages). So testing the database for its integrity becomes essential to make sure the website is able to handle data processing effectively.

g) Cookies Testing: A cookie is information that a website (server side) puts on your hard disk (client side) so that it can remember something about you at a later time. If your website sets cookies on the client machines, then test to check that cookies and other session tokens are created in a secure and unpredictable way. Poor handling of cookies can result in security holes and vulnerabilities that can be taken advantage of by malicious users and hackers (see the sketch after this paragraph). Here is a nice article on Testing for Cookie and Session Token Manipulation.
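As a sketch of the cookie check described in item (g), the following standard-library snippet fetches one page and reports whether each Set-Cookie header carries the Secure and HttpOnly flags; the host name is a placeholder:

import http.client

def report_cookie_flags(host, path="/"):
    """Fetch one page over HTTPS and print the security flags present
    on every Set-Cookie header in the response."""
    conn = http.client.HTTPSConnection(host, timeout=10)
    conn.request("GET", path)
    response = conn.getresponse()
    for header in response.headers.get_all("Set-Cookie") or []:
        attributes = [part.strip().lower() for part in header.split(";")]
        cookie_name = header.split("=", 1)[0]
        print(cookie_name,
              "Secure" if "secure" in attributes else "MISSING Secure",
              "HttpOnly" if "httponly" in attributes else "MISSING HttpOnly")
    conn.close()

report_cookie_flags("example.com")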

h) Client-side Testing: Test the temporary Internet files on the client-side system to check whether any sensitive data (like passwords or credit card numbers) is being stored on the client system unencrypted or in an unsecured way.

[B] Performance Testing:
IEEE defines Performance testing as the testing conducted to evaluate the compliance of a system or component with specified performance requirements. The goal of performance testing is not to find bugs, but to eliminate bottlenecks and establish a baseline for future load/stress testing. Performance testing can be applied to understand the website’s scalability, any loopholes in the load balancing and to test the response time between a request (from the client) and the reply (from the server) and the amount of load/stress the site is able to sustain. Scott Barber’s PerfTestPlus and Chris Loosley’s Web Performance Matters are two great resources for more exhaustive information on Performance Testing.
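As a minimal illustration of establishing a baseline, here is a sketch that times a full GET of a single page; a real load test would issue such requests concurrently for many virtual users, and the URL is a placeholder:

import time
import urllib.request

def average_response_time(url, samples=5):
    """Average wall-clock time to fetch the complete page body."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=30) as response:
            response.read()
        total += time.perf_counter() - start
    return total / samples

print(f"average response time: {average_response_time('https://example.com/'):.3f} s")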

[C] Connection Speed Testing:
Test the website over various networks: a dial-up connection with a 56-kbps modem (hard to believe, but there are a lot of web users who still use a modem), ISDN, cable connection, broadband connection with different download speeds, DSL connection, satellite Internet, etc. With a slower connection speed, if your website exhibits slow performance or partial loading of web pages, then it should be a cause for concern. As testers, we act as representatives of the end users. If we find that even a part of our intended end-user community is going to have a problem with the application we are testing, that should be sufficient to raise the issue as a defect!

[D] Web Usability Testing:
If you develop a website which is not user-friendly and is difficult to learn, understand, and navigate, then it won’t be of much use to the user. For example, if your website relies far too much on JavaScript for navigation, a browser with JavaScript disabled can render the site unusable. Some of the criteria to keep in mind while testing the usability of a website are:
» Ease of learning (How intuitive and self-explanatory the site is).
» Navigation.
» User satisfaction.
» Web accessibility testing (If all the content and parts of the site are accessible).
» General appearance (Look and feel).

[E] Testing Client-Server Interface:
In web testing, the server-side interface is tested to verify that communication is properly established with the client. In the case of n-tier architecture based web applications, the middle-tier business logic APIs should also be tested (even if they are third-party shrink-wrapped software) for any communication/interface malfunctioning!

[F] Compatibility Testing:
Test your website to verify that its pages render adequately in different browsers (e.g. IE 5, IE 6, IE 7, Firefox 2, Opera, Safari, etc.) using different operating systems (Win XP, Vista, Linux, Mac, etc.) on different hardware platforms. Different versions, configurations, display resolutions, and Internet connection speeds can all impact the behavior of the web pages and can introduce embarrassing (and often costly) bugs. However, an important thing to remember while testing web compatibility is that we should first identify the major customer base for the website and then decide the main browsers and OSes to test for. Some typical compatibility tests include testing your application:
» Using various browsers.
» Using various window sizes.
» Using various font sizes.
» Using browsers with CSS, JavaScript turned OFF!
» Using browsers with pop-up blockers!
» On various Operating Systems.
» With different screen resolutions and color depths.
» On various client hardware configurations.
» Using different memory sizes and hard drive space.
» In different network environments.
» Ability to take printouts with different printers (Printer-friendly Versions)

[G] Web Security Testing:
Web security testing is often also referred to as penetration testing. The primary objective of testing the security of a website is to identify potential vulnerabilities/security holes and to patch/repair them. For example, if your website allows files to be uploaded, your web server should have proper automated antivirus checking in place to detect and block any attempt to upload a virus from the client side. Some of the major aspects of web security testing (one small automated check is sketched after the list below) are:
» Network Scanning.
» Vulnerability Scanning.
» Password Cracking.
» Log Review.
» Integrity Checkers.
» Virus Detection.
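Real penetration testing relies on dedicated scanners and manual analysis, but as one small automated slice (a sketch with a placeholder URL; the header list reflects common hardening advice, not a complete standard), you can verify that the server sends basic security-related response headers:

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class SecurityHeaderCheck {
        public static void main(String[] args) throws Exception {
            // Common defensive headers -- presence is a baseline check, not proof of security.
            String[] expected = { "X-Frame-Options", "X-Content-Type-Options",
                                  "Strict-Transport-Security" };

            URL url = new URL("https://www.example.com/"); // placeholder
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.connect();

            for (String header : expected) {
                String value = conn.getHeaderField(header);
                System.out.println(header + ": " + (value != null ? value : "MISSING"));
            }
            conn.disconnect();
        }
    }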

Hope this article helps in giving rough coverage of the areas that need attention while testing a website. Did I miss something here that should have been included in the checklist? Feel free to let me know by commenting and I will update this post along with proper credit to you.


What is Software Testing...?


1. What is Software Testing?
Software testing is more than just error detection.
Testing software is operating the software under controlled conditions, to (1) verify that it behaves “as specified”, (2) detect errors, and (3) validate that what has been specified is what the user actually wanted.
  1. Verification is the checking or testing of items, including software, for conformance and consistency by evaluating the results against pre-specified requirements. [Verification: Are we building the system right?]
  2. Error detection: testing should intentionally attempt to make things go wrong, to determine whether things happen when they shouldn’t or things don’t happen when they should.
  3. Validation looks at system correctness, i.e. it is the process of checking that what has been specified is what the user actually wanted. [Validation: Are we building the right system?]
In other words, validation checks to see if we are building what the customer wants/needs, and verification checks to see if we are building that system correctly. Both verification and validation are necessary, but different components of any testing activity.
The definition of testing according to the ANSI/IEEE 1059 standard is that testing is the process of analysing a software item to detect the differences between existing and required conditions (that is defects/errors/bugs) and to evaluate the features of the software item.
Remember: The purpose of testing is verification, validation and error detection in order to find problems – and the purpose of finding those problems is to get them fixed.
2. Why Testing CANNOT Ensure Quality
Testing in itself cannot ensure the quality of software. All testing can do is give you a certain level of assurance (confidence) in the software. On its own, the only thing that testing proves is that under specific controlled conditions, the software functioned as expected by the test cases executed.
3. What is Software “Quality”?
Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable.
However, quality is a subjective term. It will depend on who the ‘customer’ is and their overall influence in the scheme of things. A wide-angle view of the ‘customers’ of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organisation’s management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine reviewers, etc. Each type of ‘customer’ will have their own view on ‘quality’ - the accounting department might define quality in terms of profits while an end-user might define quality as user-friendly and bug-free.
4. What is “Quality Assurance”?
“Quality Assurance” measures the quality of processes used to create a quality product.
Software Quality Assurance (‘SQA’ or ‘QA’) is the process of monitoring and improving all activities associated with software development, from requirements gathering, design and reviews to coding, testing and implementation.
It involves the entire software development process - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with, at the earliest possible stage. Unlike testing, which is mainly a ‘detection’ process, QA is ‘preventative’ in that it aims to ensure quality in the methods & processes – and therefore reduce the prevalence of errors in the software.
Organisations vary considerably in how they assign responsibility for QA and testing. Sometimes they’re the combined responsibility of one group or individual. Also common are project teams that include a mix of testers and developers who work closely together, with overall QA processes monitored by project managers or quality managers.
5. Quality Assurance and Software Development
Quality Assurance and development of a product are parallel activities. Complete QA includes reviews of the development methods and standards, and reviews of all the documentation (not just for standardisation, but also to verify the clarity and correctness of the contents). Overall Quality Assurance processes also include code validation.
A note about quality assurance: The role of quality assurance is a superset of testing. Its mission is to help minimise the risk of project failure. QA people aim to understand the causes of project failure (which includes software errors as an aspect) and help the team prevent, detect, and correct the problems. Often test teams are referred to as QA Teams, perhaps acknowledging that testers should consider broader QA issues as well as testing.
6. What’s the difference between QA and testing?
Simply put:
§ TESTING means “Quality Control”; and
§ QUALITY CONTROL measures the quality of a product; while
§ QUALITY ASSURANCE measures the quality of processes used to create a quality product.
In well-run projects, the mission of the test team is not merely to perform testing, but to help minimise the risk of product failure. Testers look for manifest problems in the product, potential problems, and the absence of problems. They explore, assess, track, and report product quality, so that others in the project can make informed decisions about product development. It's important to recognise that testers are not out to "break the code." We are not out to embarrass or complain, just to inform. We are human meters of product quality.
Localization Testing Features
· Object-based Recording: Does not record based on coordinates, so localization testing is not affected by position changes caused by shorter or longer localized strings.
· Centralized Object Repository: Localization testing is not affected by textual changes, as only logical names are placed in the scripts.
· Unicode Support: Full Unicode support allows you to record in any language.
· Automatic Resource Generator: Extracts all the string literals and stores them in a resource file.
· I18N Editor: Allows you to assign localized terms for the contents of the resource file.
· Powerful Library: Provides powerful libraries to get and set locales at runtime, and to change the date format to suit the locale being tested.
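As an illustration of what "getting and setting locales at runtime" looks like in plain Java (a generic sketch using only the standard library, not any particular test tool's API), date formatting is driven entirely by the active Locale:

    import java.text.DateFormat;
    import java.util.Date;
    import java.util.Locale;

    public class LocaleDateDemo {
        public static void main(String[] args) {
            Date now = new Date();
            // The same instant rendered under different locales -- exactly the kind
            // of variation a localization test must verify on every screen.
            for (Locale locale : new Locale[] { Locale.US, Locale.GERMANY, Locale.JAPAN }) {
                DateFormat df = DateFormat.getDateInstance(DateFormat.LONG, locale);
                System.out.println(locale + ": " + df.format(now));
            }
            // The locale can also be switched globally at runtime:
            Locale.setDefault(Locale.GERMANY);
            System.out.println("Default now: " + Locale.getDefault());
        }
    }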
What is Localization?
Localization (L10N) is the process of customizing a software application that was originally designed for a domestic market so that it can be released in foreign markets. This process involves translating all native-language strings to the target language and customizing the GUI so that it is appropriate for the target market. Depending on the size and complexity of the software, localization can range from a simple process involving a small team of translators, linguists, desktop publishers and engineers, to a complex process requiring a Localization Project Manager directing a team of a hundred specialists. Localization is usually done using some combination of in-house resources, independent contractors and the full-scope services of a localization company.
What is the purpose of Localization Testing?
Products that are localized for international markets often face domestic competition, which makes it critical for the localized product to blend seamlessly into the native language and cultural landscape. The cost of a localization effort can be significant. Once you have the strings translated and the GUI updated, localization testing should be used to help ensure that the product is successfully migrated to the target market. In addition to verifying successful translation, basic functional testing should be performed, since functional issues often arise as a result of localizing software. Don't risk the time and effort spent localizing by failing to perform adequate Quality Assurance.
Black Box Web Testing with HttpUnit
White or Black?
Automated software tests are crucial for IT projects. They enable continuous modifications to an existing code base without the fear of damaging existing functionality. They are executed at will and don't carry the costs and inconsistencies associated with manual tests.
There are two fundamental approaches for automated software tests:
  • White Box, where an application is tested from the inside using its internal application programmatic interface (API).
  • Black Box, where an application is tested using its outward-facing interface (usually a graphical user interface).
I consider White Box tests as a nice-to-have feature, while Black Box tests are a necessity. White Box tests suffer from the following shortcomings:
· Creating a false sense of stability and quality. An application can often pass all unit tests and still appear unavailable to users due to a non-functioning component (a database, connection pool, etc.). This is a critical problem, since it makes the IT application team look incompetent.
· Inability to validate use-case completeness because they use a different interface to the application. White Box tests focus on pleasing the wrong crowd -- developers. The right crowd to please in the real world is application users, because they fund IT projects. It is more important to understand how a user views an application than how a developer does. Developers and users use different interfaces to interact with an application; developers use a programmatic API, while users use a GUI. Application users care mostly about the inputs and outputs on their GUI screen. The only technique for testing these is interacting with the application via the same interface used by the users.
In general, I am no great supporter of White Box tests, although sometimes they are sufficient: White Box unit tests are well suited to testing programmatic interfaces (APIs). For example, a unit test is sufficient to test a square root function.
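For instance, a White Box unit test of the square root case just mentioned (a minimal JUnit 3 sketch; the tolerance value is arbitrary) exercises the API directly, with no user-facing interface involved:

    import junit.framework.TestCase;

    public class SqrtTest extends TestCase {
        public void testPerfectSquare() {
            assertEquals(3.0, Math.sqrt(9.0), 1e-9); // exact for perfect squares
        }
        public void testNegativeInputIsNaN() {
            assertTrue(Double.isNaN(Math.sqrt(-1.0))); // Java's defined behaviour
        }
    }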
Black Box tests are useful in evaluating the overall health of the application and its ability to provide value to users. Most Black Box tests emulate the sequence of interactions between a user to an application in its exact order. Failure of a Black Box test indicates that a user received insufficient value from the application.
The focus of this article is Black Box testing of web applications through the browser interface, perhaps the most common user interface in today's IT environment. The article demonstrates a simple yet effective way to perform a Black Box test of a web site using an open source package called HttpUnit.

Figure 1. Where White Box and Black Box tests interface to an application
Emulating User Activity via Black Box Tests
Black Box tests emulate user interaction with an application. Keeping the tests closely aligned with the manner in which users interact with the application is the key for quality tests.
Historically, writing Black Box tests was difficult. Applications used different GUI technologies and communication protocols; thus, the potential for tool reuse across applications was low. Commercial testing-tool packages relied on capturing mouse clicks and keystrokes, which proved brittle in the face of change. The combination of wide usage of HTTP/HTML standards and an emerging number of open source projects makes Black Box testing an easier task. Emulating user activity on web sites can be done in several ways:
  • Automating Internet Explorer via the SHDocVw COM interface.
  • Using a commercial testing tool.
  • Using an open source Java package like HttpUnit or jWebUnit.
HttpUnit is the focus of this article for several reasons:
  • HttpUnit covers the useful spectrum of HTTP/HTML standards functionality, including forms, frames, JavaScript, SSL, cookies, and headers. HttpUnit can be used to test a web site written in any programming language.
  • HttpUnit is written in Java, so the test cases can use the entire spectrum of Java functionality.
  • HttpUnit executes within a JVM as a standard Java library and can be integrated easily with Ant scripts and daemon test agents.
  • HttpUnit is an open source project.
Swing and AWT applications are not suitable for HttpUnit tests. Those can be tested with sibling tools like jfcUnit.
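Here is a minimal end-to-end HttpUnit test in the spirit described above. The URL, link text, form name, field names, and expected text are all placeholders invented for illustration; the HttpUnit calls themselves (WebConversation, getResponse, getLinkWith, getFormWithName, setParameter, submit) are the package's standard API:

    import com.meterware.httpunit.GetMethodWebRequest;
    import com.meterware.httpunit.WebConversation;
    import com.meterware.httpunit.WebForm;
    import com.meterware.httpunit.WebLink;
    import com.meterware.httpunit.WebResponse;

    public class LoginBlackBoxTest {
        public static void main(String[] args) throws Exception {
            WebConversation wc = new WebConversation();

            // Step 1: open the home page, exactly as a user typing the URL would.
            WebResponse home = wc.getResponse(
                    new GetMethodWebRequest("http://www.example.com/")); // placeholder URL

            // Step 2: follow the "Login" link, as a user clicking it would.
            WebLink loginLink = home.getLinkWith("Login"); // placeholder link text
            WebResponse loginPage = loginLink.click();

            // Step 3: fill in and submit the login form.
            WebForm form = loginPage.getFormWithName("loginForm"); // placeholder form name
            form.setParameter("username", "testuser");             // placeholder fields
            form.setParameter("password", "secret");
            WebResponse result = form.submit();

            // Step 4: assert on what the user would actually see.
            if (result.getText().contains("Welcome")) {            // placeholder expectation
                System.out.println("PASS: user saw the welcome page");
            } else {
                System.out.println("FAIL: login flow broken from the user's viewpoint");
            }
        }
    }

Because the test drives the application through the same HTTP/HTML interface a browser uses, a failure here means a real user would have been blocked, which is precisely the property that makes Black Box tests valuable.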
· Test Suites. Typically you may have dozens, hundreds, or even thousands of tests, and you may wish to run them in a variety of modes (a minimal grouping sketch follows this list):
    • Unattended Testing. Individual and/or groups of tests should be executable singly or in parallel from one or many workstations.
    • Background Testing. Tests should be executable from multiple browsers running "in the background" on an appropriately equipped workstation.
    • Distributed Testing. Independent parts of a test suite should be executable from separate workstations without conflict.
    • Performance Testing. Timing in performance tests should be resolved to the millisecond; this gives a strong basis for averaging data.
    • Random Testing. There should be a capability for randomizing certain parts of tests.
    • Error Recovery. While browser failure due to user inputs is rare, test suites should have the capability of resynchronizing after an error.
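As a minimal sketch of grouping tests for unattended runs (assuming JUnit 3, which HttpUnit tests traditionally build on; the stand-in test case is hypothetical), a TestSuite lets you run individual tests or whole groups from one entry point:

    import junit.framework.Test;
    import junit.framework.TestCase;
    import junit.framework.TestSuite;

    public class SiteTestSuite {
        // A trivial stand-in test; in practice each TestCase would wrap an
        // HttpUnit scenario like the login flow shown earlier.
        public static class SmokeTest extends TestCase {
            public void testHomePageReachable() {
                // ... HttpUnit conversation would go here ...
                assertTrue(true); // placeholder assertion
            }
        }

        public static Test suite() {
            TestSuite suite = new TestSuite("Web site Black Box tests");
            suite.addTestSuite(SmokeTest.class);
            // Additional TestCase classes can be added here as the suite grows.
            return suite;
        }

        public static void main(String[] args) {
            // Run unattended from the command line or an Ant <junit> task.
            junit.textui.TestRunner.run(suite());
        }
    }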
