Saturday, June 23, 2007

Testing

Testing is a process used to help identify the correctness, completeness, and quality of developed computer software. Even so, testing can never completely establish the correctness of computer software.
There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation, not merely a matter of creating and following rote procedure. One definition of testing is "the process of questioning a product in order to evaluate it", where the "questions" are things the tester tries to do with the product, and the product answers with its behavior in reaction to the probing of the tester. Although most of the intellectual processes of testing are nearly identical to those of review or inspection, the word testing connotes the dynamic analysis of the product: putting the product through its paces.
The quality of an application can, and normally does, vary widely from system to system, but some of the common quality attributes include reliability, stability, portability, maintainability, and usability. Refer to ISO 9126 for a more complete list of attributes and criteria.
Testing helps in verifying and validating that the software works as it is intended to work. This involves using static and dynamic methodologies to test the application.
Because of the fallibility of its human designers and its own abstract, complex nature, software development must be accompanied by quality assurance activities. It is not unusual for developers to spend 40% of the total project time on testing. For life-critical software (e.g. flight control, reactor monitoring), testing can cost 3 to 5 times as much as all other activities combined. The destructive nature of testing requires that the developer discard preconceived notions of the correctness of his/her developed software.
Software Testing Fundamentals

Testing objectives include

1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as yet undiscovered error.
3. A successful test is one that uncovers an as yet undiscovered error.

Testing should systematically uncover different classes of errors in a minimum amount of time and with a minimum amount of effort. A secondary benefit of testing is that it demonstrates that the software appears to be working as stated in the specifications. The data collected through testing can also provide an indication of the software's reliability and quality. But testing cannot show the absence of defects; it can only show that software defects are present.
When Testing should start:
Testing early in the life cycle reduces the errors. Test deliverables are associated with every phase of development. The goal of a software tester is to find bugs, find them as early as possible, and make sure they get fixed.
The number one cause of software bugs is the specification. There are several reasons specifications are the largest bug producer.
In many instances a spec simply isn't written. Other reasons may be that the spec isn't thorough enough, it's constantly changing, or it's not communicated well to the entire team. Planning software is vitally important; if it's not done correctly, bugs will be created.
The next largest source of bugs is the design. That's where the programmers lay the plan for their software. Compare it to an architect creating the blueprint for a building. Bugs occur here for the same reasons they occur in the specification: the design is rushed, changed, or not well communicated.

Coding errors may be more familiar to you if you are a programmer. Typically these can be traced to software complexity, poor documentation, schedule pressure, or just plain dumb mistakes. It's important to note that many bugs that appear on the surface to be programming errors can really be traced to the specification. It's quite common to hear a programmer say, "Oh, so that's what it's supposed to do. If someone had told me that, I wouldn't have written the code that way."
The other category is the catch-all for what is left. Some bugs can be blamed on false positives, conditions that were thought to be bugs but really weren't. There may be duplicate bugs, multiple reports that resulted from the same root cause. Some bugs can also be traced to testing errors.
Costs: The costs are logarithmic; that is, they increase tenfold as time increases. A bug found and fixed during the early stages, when the specification is being written, might cost next to nothing, or 10 cents in our example. The same bug, if not found until the software is coded and tested, might cost $1 to $10. If a customer finds it, the cost could easily top $100.
When to Stop Testing
This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. "When to stop testing" is one of the most difficult questions for a test engineer. Common factors in deciding when to stop are:
• Deadlines (release deadlines, testing deadlines)
• Test cases completed with certain percentages passed
• Test budget depleted
• Coverage of code/functionality/requirements reaches a specified point
• The rate at which new bugs are found is too small
• Beta or Alpha Testing period ends
• The risk in the project is under the acceptable limit.
In practice, the decision to stop testing is based on the level of risk acceptable to management. Since testing is a never-ending process, we can never assume that 100% testing has been done; we can only minimize the risk of shipping the product to the client with X amount of testing done. The risk can be measured by formal risk analysis, but for a small-duration / low-budget / low-resource project, the risk can be deduced simply by:
• Measuring Test Coverage.
• Number of test cycles.
• Number of high priority bugs.
Test Strategy:

How we plan to cover the product so as to develop an adequate assessment of quality.

A good test strategy is:

Specific
Practical
Justified

The purpose of a test strategy is to clarify the major tasks and challenges of the test project.

Test Approach and Test Architecture are other terms commonly used to describe what I’m calling test strategy.

Example of a poorly stated (and probably poorly conceived) test strategy:
"We will use black box testing, cause-effect graphing, boundary testing, and white box testing to test this product against its specification."
A test strategy covers: the type of project, the type of software, when testing will occur, critical success factors, and trade-offs.

Test Plan - Why
• Identify Risks and Assumptions up front to reduce surprises later.
• Communicate objectives to all team members.
• Foundation for Test Spec, Test Cases, and ultimately the Bugs we find.
Failing to plan = planning to fail.
Test Plan - What
• Derived from Test Approach, Requirements, Project Plan, Functional Spec., and Design Spec.
• Details the project-specific Test Approach.
• Lists general (high-level) Test Case areas.
• Includes a testing Risk Assessment.
• Includes a preliminary Test Schedule.
• Lists resource requirements.
Test Plan
The test strategy identifies the multiple test levels that are going to be performed for the project. Activities at each level must be planned well in advance and formally documented. The individual test levels are carried out only on the basis of these individual plans.
Entry means the entry criteria for that phase. For example, for unit testing, the coding must be complete; only then can unit testing start. Task is the activity that is performed. Validation is the way in which progress, correctness, and compliance are verified for that phase. Exit gives the completion criteria of that phase, after the validation is done. For example, the exit criterion for unit testing is that all unit test cases must pass.
Unit Test Plan {UTP}
The unit test plan is the overall plan to carry out the unit test activities. The lead tester prepares it, and it is distributed to the individual testers. It contains the following sections.
What is to be tested?
The unit test plan must clearly specify the scope of unit testing. Normally, the basic input/output of the units along with their basic functionality will be tested. In this case, mostly the input units will be tested for format, alignment, accuracy, and totals. The UTP will clearly give the rules of what data types are present in the system, their format, and their boundary conditions. This list may not be exhaustive, but it is better to have a complete list of these details.
Sequence of Testing
The sequence of test activities to be carried out in this phase is listed in this section. This includes whether to execute positive test cases first or negative test cases first, whether to execute test cases based on priority, whether to execute test cases based on test groups, etc. Positive test cases prove that the system performs what it is supposed to do; negative test cases prove that the system does not perform what it is not supposed to do. Testing of the screens, files, database, etc. is to be given in proper sequence.
Basic Functionality of Units
This section describes how the independent functionality of each unit is tested, excluding any communication between the unit and other units. The interface part is out of scope at this test level. Apart from the above sections, the following sections are addressed, very specific to unit testing.
• Unit Testing Tools
• Priority of Program units
• Naming convention for test cases
• Status reporting mechanism
• Regression test approach
• ETVX criteria
Integration Test Plan
The integration test plan is the overall plan for carrying out the activities at the integration test level. It contains the following sections.
What is to be tested?
This section clearly specifies which kinds of interfaces fall under the scope of testing: internal and external interfaces, with their requests and responses, are to be explained. This need not go deep into technical details, but the general approach to how the interfaces are triggered is explained.
Sequence of Integration
When there are multiple modules present in an application, the sequence in which they are to be integrated is specified in this section. Here, the dependencies between the modules play a vital role. If unit B has to be executed, it may need data fed by unit A and unit X; in that case, units A and X have to be integrated first, and then, using that data, unit B has to be tested. This has to be stated for the whole set of units in the program. Given this correctly, the testing activities slowly build up the product, unit by unit, and then integrate the units.
System Test Plan {STP}
The system test plan is the overall plan for carrying out the system test level activities. In the system test, apart from testing the functional aspects of the system, some special testing activities are carried out, such as stress testing. The following sections are normally present in a system test plan.
What is to be tested?
This section defines the scope of system testing, very specific to the project. Normally, system testing is based on the requirements: all requirements are to be verified in the scope of system testing. This covers the functionality of the product. Apart from this, any special testing performed is also stated here.
Functional Groups and the Sequence
The requirements can be grouped in terms of functionality, and based on this there may be priorities among the functional groups. For example, in a banking application, anything related to customer accounts can be grouped into one area, and anything related to inter-branch transactions into another. In the same way, for the product being tested, these areas are to be mentioned here, and the suggested sequence of testing of these areas, based on the priorities, is to be described.
Acceptance Test Plan {ATP}
The client performs the acceptance testing at their site. It will be very similar to the system test performed by the software development unit. Since the client decides the format and testing methods as part of acceptance testing, there is no way to know in advance exactly how they will carry out the testing, but it will not differ much from the system testing. Assume that all the rules that apply to the system test can be applied to acceptance testing as well.
Since this is just one level of testing done by the client for the overall product, its test cases may include unit and integration test level details as well.
A sample Test Plan Outline along with their description is as shown below:
Test Plan Outline
1. BACKGROUND – This item summarizes the functions of the application system and the tests to be performed.
2. INTRODUCTION
3. ASSUMPTIONS – Indicates any anticipated assumptions which will be made while testing the application.
4. TEST ITEMS - List each of the items (programs) to be tested.
5. FEATURES TO BE TESTED - List each of the features (functions or requirements) which will be tested or demonstrated by the test.
6. FEATURES NOT TO BE TESTED - Explicitly lists each feature, function, or requirement which won't be tested, and why not.
7. APPROACH - Describes the data flows and test philosophy, e.g. simulation or live execution. This section also mentions all the approaches to be followed at the various stages of test execution.
8. ITEM PASS/FAIL CRITERIA - Blanket statement and an itemized list of expected outputs and tolerances.
9. SUSPENSION/RESUMPTION CRITERIA - Must the test run from start to completion?
Under what circumstances may it be resumed in the middle?
Establish check-points in long tests.
10. TEST DELIVERABLES - What, besides software, will be delivered?
Test report
Test software
11. TESTING TASKS - Functional tasks (e.g., equipment setup)
Administrative tasks
12. ENVIRONMENTAL NEEDS
Security clearance
Office space & equipment
Hardware/software requirements
13. RESPONSIBILITIES
Who does the tasks in Section 11 (Testing Tasks)?
What does the user do?
14. STAFFING & TRAINING
15. SCHEDULE
16. RESOURCES
17. RISKS & CONTINGENCIES
18. APPROVALS
The schedule details of the various test passes, such as unit tests, integration tests, and system tests, should be clearly mentioned, along with the estimated efforts.


Risk Analysis:
A risk is the potential for loss or damage to an organization from materialized threats. Risk analysis attempts to identify all the risks and then quantify their severity. A threat, as we have seen, is a possible damaging event; if it occurs, it exploits a vulnerability in the security of a computer-based system.
Risk Identification:

1. Software Risks: Knowledge of the most common risks associated with Software development, and the platform you are working on.

2. Business Risks: Most common risks associated with the business using the software.

3. Testing Risks: Knowledge of the most common risks associated with Software Testing for the platform you are working on, tools being used, and test methods being applied.

4. Premature Release Risk: Ability to determine the risk associated with releasing unsatisfactory or untested software products.

5. Risk Methods: Strategies and approaches for identifying risks or problems associated with implementing and operating information technology, products, and processes; assessing their likelihood; and initiating strategies to test those risks.
Traceability Matrix:
Traceability means that you would like to be able to trace back and forth how and where any work product fulfills the directions of the preceding (source) product. The matrix deals with the where; the how you have to work out yourself, once you know the where.
Take, for example, the requirement of user-friendliness (UF). Since UF is a complex concept, it is not solved by just one design solution and it is not solved by one line of code. Many partial design solutions may contribute to this requirement, and many groups of lines of code may contribute to it.
A requirements-design traceability matrix puts on one side (e.g. the left) the sub-requirements that together are supposed to solve the UF requirement, along with the other (sub-)requirements. On the other side (e.g. the top) you specify all design solutions. Now you can mark, at the cross-points of the matrix, which design solutions solve (more or less) which requirement. If a design solution does not solve any requirement, it should be deleted, as it is of no value.
Having this matrix, you can check whether any requirement has at least one design solution and by checking the solution(s) you may see whether the requirement is sufficiently solved by this (or the set of) connected design(s).
If you have to change any requirement, you can see which designs are affected. And if you change any design, you can check which requirements may be affected and see what the impact is.
In a design-code traceability matrix you can do the same to keep track of which code solves a particular design element and of how changes in design or code affect each other.
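To make the idea concrete, here is a minimal sketch of such a matrix checked mechanically; the requirement and design names are hypothetical, and Python is used purely for illustration:

# A minimal requirements-design traceability matrix (hypothetical names).
# Rows are sub-requirements; the set attached to each row holds the design
# solutions that (partly) solve it.
matrix = {
    "UF-1 consistent menus": {"D-1 menu framework"},
    "UF-2 undo everywhere":  {"D-2 command stack"},
    "UF-3 online help":      set(),        # no design yet -> a gap
}
all_designs = {"D-1 menu framework", "D-2 command stack", "D-3 splash screen"}

# Check 1: every requirement needs at least one design solution.
for req, designs in matrix.items():
    if not designs:
        print("Uncovered requirement:", req)

# Check 2: every design must trace back to some requirement,
# else it is of no value and should be deleted.
used = set().union(*matrix.values())
for d in sorted(all_designs - used):
    print("Design with no requirement:", d)

Run as-is, this reports the uncovered requirement UF-3 and the unused design D-3, which are exactly the two checks described above.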
A traceability matrix:
• Demonstrates that the implemented system meets the user requirements.
• Serves as a single source for tracking purposes.
• Identifies gaps in the design and testing.
• Prevents delays in the project timeline which can be brought about by having to backtrack to fill the gaps.
Software Testing Life Cycle:
The test development life cycle contains the following components:
Requirements
Use Case Document
Test Plan
Test Case
Test Case execution
Report Analysis
Bug Analysis
Bug Reporting
A use case is a typical interaction scenario from a user's perspective, used for system requirements studies or testing; in other words, "an actual or realistic example scenario". A use case describes the use of a system from start to finish. Use cases focus attention on aspects of a system useful to people outside of the system itself.
• Users of a program are called users or clients.
• Users of an enterprise are called customers, suppliers, etc.
Use Case:
A collection of possible scenarios between the system under discussion and external actors, characterized by the goal the primary actor has toward the system's declared responsibilities, showing how the primary actor's goal might be delivered or might fail.
Use cases are goals (use cases and goals are used interchangeably) that are made up of scenarios. Scenarios consist of a sequence of steps to achieve the goal; each step in a scenario is a sub (or mini) goal of the use case. As such, each sub-goal represents either another use case (a subordinate use case) or an autonomous action that is at the lowest level desired by our use case decomposition.
This hierarchical relationship is needed to properly model the requirements of a system being developed. A complete use case analysis requires several levels. In addition to the level at which a use case operates, it is important to understand the scope it is addressing. The level and scope are important to assure that the language and granularity of scenario steps remain consistent within the use case.
There are two scopes that use cases are written from: Strategic and System. There are also three levels: Summary, User and Sub-function.
Scopes: Strategic and System

Strategic Scope:
The goal (use case) is a strategic goal with respect to the system. These goals are goals of value to the organization. The use case shows how the system is used to benefit the organization. These strategic use cases will eventually use some of the same lower-level (subordinate) use cases.

System Scope:
Use cases at system scope are bounded by the system under development. The goals represent specific functionality required of the system. The majority of the use cases are at system scope. These use cases are often steps in strategic-level use cases.
Levels: Summary Goal, User Goal, and Sub-function.

Sub-function Level Use Case:
A sub goal or step is below the main level of interest to the user. Examples are "logging in" and "locate a device in a DB". Always at System Scope.
User Level Use Case:
This is the level of greatest interest. It represents a user task or elementary business process. A user level goal addresses the question "Does your job performance depend on how many of these you do in a day?" For example, "Create Site View" or "Create New Device" would be user level goals, but "Log In to System" would not. Always at System Scope.
Summary Level Use Case:
Written for either strategic or system scope, these represent collections of user level goals. For example, the summary goal "Configure Data Base" might include, as a step, the user level goal "Add Device to database". Either at System or Strategic Scope.
Test Documentation
Test documentation is a required tool for managing and maintaining the testing process. Documents produced by testers should answer the following questions:
• What to test? Test Plan
• How to test? Test Specification
• What are the results? Test Results Analysis Report
Bug Life cycle:
In entomology (the study of real, living bugs), the term life cycle refers to the various stages that an insect assumes over its life. If you think back to your high school biology class, you will remember that the life cycle stages for most insects are the egg, larvae, pupae, and adult. It seems appropriate, given that software problems are also called bugs, that a similar life cycle system is used to identify their stages of life. The simplest, and most optimal, software bug life cycle is described below.

This example shows that when a bug is found by a software tester, it is logged and assigned to a programmer to be fixed. This state is called the open state. Once the programmer fixes the code, he assigns it back to the tester, and the bug enters the resolved state. The tester then performs a regression test to confirm that the bug is indeed fixed and, if so, closes it out. The bug then enters its final state, the closed state.
In some situations though, the life cycle gets a bit more complicated.
In this case the life cycle starts out the same, with the tester opening the bug and assigning it to the programmer, but the programmer doesn't fix it. He doesn't think it's bad enough to fix and assigns it to the project manager to decide. The project manager agrees with the programmer and places the bug in the resolved state as a "won't-fix" bug. The tester disagrees, looks for and finds a more obvious and general case that demonstrates the bug, reopens it, and assigns it to the programmer to fix. The programmer fixes the bug, resolves it as fixed, and assigns it to the tester. The tester confirms the fix and closes the bug.

You can see that a bug might undergo numerous changes and iterations over its life, sometimes looping back and starting the life cycle all over again. The generic life cycle described below takes the simple model above and adds to it the possible decisions, approvals, and looping that can occur in most projects. Of course, every software company and project will have its own system, but this model is fairly generic and should cover almost any bug life cycle that you'll encounter.

The generic life cycle has two additional states and extra connecting lines. The review state is where the project manager or a committee, sometimes called a Change Control Board, decides whether the bug should be fixed. In some projects all bugs go through the review state before they're assigned to the programmer for fixing. In other projects, this may not occur until near the end of the project, or not at all. Notice that the review state can also go directly to the closed state. This happens if the review decides that the bug shouldn't be fixed: it could be too minor, really not a problem, or a testing error. The other new state is deferred. The review may determine that the bug should be considered for fixing at some time in the future, but not for this release of the software.
The additional line from the resolved state back to the open state covers the situation where the tester finds that the bug hasn't been fixed. It gets reopened, and the bug's life cycle repeats.
The two dotted lines that loop from the closed and deferred states back to the open state rarely occur but are important enough to mention. Since a tester never gives up, it's possible that a bug that was thought to be fixed, tested, and closed could reappear. Such bugs are often called regressions. It's also possible that a deferred bug could later be proven serious enough to fix immediately. If either of these occurs, the bug is reopened and started through the process again. Most project teams adopt rules for who can change the state of a bug or assign it to someone else. For example, maybe only the project manager can decide to defer a bug, or only a tester is permitted to close a bug. What's important is that once you log a bug, you follow it through its life cycle, don't lose track of it, and provide the necessary information to drive it to being fixed and closed.
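As a sketch of the generic life cycle just described, the states and allowed transitions can be encoded so that a tracking tool rejects illegal moves. The state names follow the text; the exact transition table is an assumption for illustration:

# States and transitions from the generic bug life cycle described above.
ALLOWED = {
    "open":     {"review", "resolved"},
    "review":   {"open", "closed", "deferred"},  # review may reject or defer
    "resolved": {"closed", "open"},              # reopened if fix fails retest
    "closed":   {"open"},                        # a regression reappears
    "deferred": {"open"},                        # a deferred bug becomes urgent
}

class Bug:
    def __init__(self, summary):
        self.summary = summary
        self.state = "open"
        self.history = ["open"]

    def move(self, new_state):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.history.append(new_state)

bug = Bug("Crash on empty input")
bug.move("resolved")   # programmer fixes it
bug.move("open")       # tester finds it is not fixed; reopened
bug.move("resolved")   # fixed again
bug.move("closed")     # retest passes; bug closed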
Bug Report - Why
• Communicate bug for reproducibility, resolution, and regression.
• Track bug status (open, resolved, closed).
• Ensure bug is not forgotten, lost or ignored.

• Used to back-create a test case where none existed before.


Static Testing
The verification activities fall into the category of static testing. During static testing, you have a checklist to check whether the work you are doing is going as per the set standards of the organization. These standards can be for coding, integrating, and deployment. Reviews, inspections, and walkthroughs are static testing methodologies.
Dynamic Testing
Dynamic testing involves working with the software, giving input values and checking if the output is as expected. These are the validation activities. Unit tests, integration tests, system tests, and acceptance tests are a few of the dynamic testing methodologies. As we go further, let us understand the various test life cycles and get to know the testing terminologies. To understand more of software testing, its various methodologies, tools, and techniques, you can download the Software Testing Guide Book.
Difference Between Static and Dynamic Testing: Please refer to the definitions of static and dynamic testing above to observe the difference between them.
Blackbox Testing
Black box testing attempts to derive sets of inputs that will fully exercise all the functional requirements of a system. It is not an alternative to white box testing. This type of testing attempts to find errors in the following categories:
1. incorrect or missing functions,
2. interface errors,
3. errors in data structures or external database access,
4. performance errors, and
5. initialization and termination errors.
Tests are designed to answer the following questions:
1. How is the function's validity tested?
2. What classes of input will make good test cases?
3. Is the system particularly sensitive to certain input values?
4. How are the boundaries of a data class isolated?
5. What data rates and data volume can the system tolerate?
6. What effect will specific combinations of data have on system operation?
White box testing should be performed early in the testing process, while black box testing tends to be applied during later stages. Test cases should be derived which
1. reduce the number of additional test cases that must be designed to achieve reasonable testing, and
2. tell us something about the presence or absence of classes of errors, rather than an error associated only with the specific test at hand.
Equivalence Partitioning
This method divides the input domain of a program into classes of data from which test cases can be derived. Equivalence partitioning strives to define a test case that uncovers classes of errors and thereby reduces the number of test cases needed. It is based on an evaluation of equivalence classes for an input condition. An equivalence class represents a set of valid or invalid states for input conditions.

Equivalence classes may be defined according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, then one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, then one valid and one invalid equivalence class are defined.
4. If an input condition is boolean, then one valid and one invalid equivalence class are defined.
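As a small illustration, assume a hypothetical input field that accepts an integer age from 18 to 65. Guideline 1 gives one valid and two invalid classes, and one representative value per class stands in for the whole input domain:

# Equivalence classes for a hypothetical "age in 18..65" input (guideline 1):
# one valid class and two invalid classes, one representative value each.
LOW, HIGH = 18, 65

classes = {
    "valid (18..65)": 40,    # any value inside the range
    "invalid (< 18)": 10,
    "invalid (> 65)": 90,
}

def accepts_age(age):
    """Stand-in for the unit under test."""
    return LOW <= age <= HIGH

for name, value in classes.items():
    expected = name.startswith("valid")
    assert accepts_age(value) == expected, f"class {name} failed"
    print(f"{name:15} value={value:3} -> accepted={accepts_age(value)}")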
Boundary Value Analysis
This method leads to a selection of test cases that exercise boundary values. It complements equivalence partitioning since it selects test cases at the edges of a class. Rather than focusing on input conditions solely, BVA derives test cases from the output domain also. BVA guidelines include:
1. For input ranges bounded by a and b, test cases should include values a and b and just above and just below a and b respectively.
2. If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers and values just above and below these limits.
3. Apply guidelines 1 and 2 to the output.
4. If internal data structures have prescribed boundaries, a test case should be designed to exercise the data structure at its boundary.
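Continuing the hypothetical 18-to-65 age range from the previous sketch, guideline 1 yields six test values: the boundaries a and b themselves plus the values just below and just above each of them:

# BVA test values for the hypothetical range a=18, b=65 (guideline 1).
a, b = 18, 65
bva_values = [a - 1, a, a + 1, b - 1, b, b + 1]   # 17, 18, 19, 64, 65, 66

for v in bva_values:
    expected = a <= v <= b        # accepted only inside the range
    print(f"age={v:2} -> expect accepted={expected}")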
Cause-Effect Graphing Techniques
Cause-effect graphing is a technique that provides a concise representation of logical conditions and corresponding actions. There are four steps:
1. Causes (input conditions) and effects (actions) are listed for a module and an identifier is assigned to each.
2. A cause-effect graph is developed.
3. The graph is converted to a decision table.
4. Decision table rules are converted to test cases.
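A small sketch of steps 3 and 4, assuming a hypothetical withdrawal module with two causes and two effects; each rule of the decision table becomes one executable test case:

# Causes: C1 "account valid", C2 "funds sufficient".
# Effects: "approve" or "reject". One decision-table rule per row below.
decision_table = [
    # (C1,   C2,    expected effect)
    (True,  True,  "approve"),
    (True,  False, "reject"),
    (False, True,  "reject"),
    (False, False, "reject"),
]

def withdraw(account_valid, funds_sufficient):
    """Stand-in for the module under test."""
    return "approve" if account_valid and funds_sufficient else "reject"

# Step 4: each decision-table rule is converted into a test case and executed.
for c1, c2, expected in decision_table:
    assert withdraw(c1, c2) == expected
    print(f"C1={c1!s:5} C2={c2!s:5} -> {expected}")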
White Box Testing
White box testing is a test case design method that uses the control structure of the procedural design to derive test cases. Test cases can be derived that
1. guarantee that all independent paths within a module have been exercised at least once,
2. exercise all logical decisions on their true and false sides,
3. execute all loops at their boundaries and within their operational bounds, and
4. exercise internal data structures to ensure their validity.

The Nature of Software Defects
Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed. General processing tends to be well understood while special case processing tends to be prone to errors.
We often believe that a logical path is not likely to be executed when it may be executed on a regular basis. Our unconscious assumptions about control flow and data lead to design errors that can only be detected by path testing.
Typographical errors are random.
Basis Path Testing
This method enables the designer to derive a logical complexity measure of a procedural design and use it as a guide for defining a basis set of execution paths. Test cases that exercise the basis set are guaranteed to execute every statement in the program at least once during testing.

Flow Graphs
Flow graphs can be used to represent control flow in a program and can help in the derivation of the basis set. Each flow graph node represents one or more procedural statements. The edges between nodes represent flow of control. An edge must terminate at a node, even if the node does not represent any useful procedural statements. A region in a flow graph is an area bounded by edges and nodes. Each node that contains a condition is called a predicate node. Cyclomatic complexity is a metric that provides a quantitative measure of the logical complexity of a program. It defines the number of independent paths in the basis set and thus provides an upper bound for the number of tests that must be performed.


The Basis Set

An independent path is any path through the program that introduces at least one new set of processing statements (it must move along at least one new edge in the path). The basis set is not unique; any number of different basis sets can be derived for a given procedural design. Cyclomatic complexity, V(G), for a flow graph G is equal to any of the following:

1. The number of regions in the flow graph.
2. V(G) = E - N + 2 where E is the number of edges and N is the number of nodes.
3. V(G) = P + 1 where P is the number of predicate nodes.
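All three formulas agree on the same graph. Here is a quick sketch that computes V(G) for a hypothetical flow graph written as an edge list:

# V(G) = E - N + 2 for a hypothetical flow graph given as an edge list.
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 2), (5, 6)]

nodes = {n for edge in edges for n in edge}
E, N = len(edges), len(nodes)
print("E - N + 2 =", E - N + 2)           # 7 - 6 + 2 = 3 independent paths

# Cross-check with V(G) = P + 1: nodes 2 and 5 each have two outgoing
# edges, so they are the predicate nodes and P = 2.
out_degree = {}
for src, _ in edges:
    out_degree[src] = out_degree.get(src, 0) + 1
P = sum(1 for d in out_degree.values() if d > 1)
print("P + 1     =", P + 1)               # also 3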

Deriving Test Cases

1. From the design or source code, derive a flow graph.
2. Determine the cyclomatic complexity of this flow graph. Even without a flow graph, V(G) can be determined by counting the number of conditional statements in the code.
3. Determine a basis set of linearly independent paths. Predicate nodes are useful for determining the necessary paths.
4. Prepare test cases that will force execution of each path in the basis set. Each test case is executed and compared to the expected results.
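As a compact sketch of step 4, take a hypothetical unit with two conditional statements: V(G) = 2 + 1 = 3, so the basis set needs three test cases, each forcing a different independent path:

# A tiny unit with two predicate nodes, so V(G) = 2 + 1 = 3; the basis set
# contains three paths, each forced by one test case.
def classify(x):
    if x < 0:             # predicate 1
        return "negative"
    if x == 0:            # predicate 2
        return "zero"
    return "positive"

# One test case per basis path; each is executed and compared to the
# expected result.
basis_cases = [(-5, "negative"), (0, "zero"), (7, "positive")]
for value, expected in basis_cases:
    assert classify(value) == expected
print("all basis paths executed")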

Automating Basis Set Derivation

The derivation of the flow graph and the set of basis paths is amenable to automation. A software tool to do this can be developed using a data structure called a graph matrix. A graph matrix is a square matrix whose size is equivalent to the number of nodes in the flow graph. Each row and column correspond to a particular node and the matrix corresponds to the connections (edges) between nodes. By adding a link weight to each matrix entry, more information about the control flow can be captured. In its simplest form, the link weight is 1 if an edge exists and 0 if it does not. But other types of link weights can be represented:

• the probability that an edge will be executed,
• the processing time expended during link traversal,
• the memory required during link traversal, or
• the resources required during link traversal.

Graph theory algorithms can be applied to these graph matrices to help in the analysis necessary to produce the basis set.
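A sketch of the simplest form, reusing the hypothetical flow graph from the cyclomatic-complexity example above; with a link weight of 1, summing the row entries in excess of one recovers V(G):

# Graph matrix: a square matrix with one row/column per node; entry 1
# where an edge connects the pair (the simplest link weight).
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 2), (5, 6)]
nodes = sorted({n for e in edges for n in e})
index = {n: i for i, n in enumerate(nodes)}

size = len(nodes)
matrix = [[0] * size for _ in range(size)]
for src, dst in edges:
    matrix[index[src]][index[dst]] = 1

# Rows with two or more entries belong to predicate nodes; summing the
# entries in excess of one, plus 1, yields the cyclomatic complexity.
connections = sum(max(sum(row) - 1, 0) for row in matrix)
print("V(G) from the graph matrix:", connections + 1)   # -> 3

Richer link weights (execution probability, processing time, memory, or other resources) can replace the 1s without changing the structure of the matrix.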

Loop Testing

This white box technique focuses exclusively on the validity of loop constructs. Four different classes of loops can be defined:

1. simple loops,
2. nested loops,
3. concatenated loops, and
4. unstructured loops.

Simple Loops

The following tests should be applied to simple loops where n is the maximum number of allowable passes through the loop:

1. skip the loop entirely,
2. only one pass through the loop,
3. m passes through the loop where m < n, and
4. n - 1, n, and n + 1 passes through the loop.
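A sketch of these simple-loop tests against a hypothetical unit that allows at most n = 10 passes; each case drives the loop a different number of times:

# Simple-loop tests for a hypothetical unit that handles at most n items:
# skip the loop, one pass, m < n passes, and n-1, n, n+1 attempted passes.
def process(items, n=10):
    """Stand-in loop under test: processes at most n items."""
    count = 0
    for _ in items[:n]:
        count += 1
    return count

n = 10
for passes in (0, 1, 4, n - 1, n, n + 1):
    handled = process(list(range(passes)), n)
    assert handled == min(passes, n)     # n+1 attempts are capped at n
    print(f"{passes:2} attempted passes -> {handled} handled")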
Requirements Testing
Objective:
• Correctness is maintained over a considerable period of time.
• Processing of the application complies with the organization's policies and procedures.
• Secondary users' needs are fulfilled:
• Security officer
• DBA
• Internal auditors
• Record retention
• Comptroller
How to Use

Test conditions created:
• These test conditions are generalized ones, which become test cases as the SDLC progresses, until the system is fully operational.
• Test conditions are more effective when created from the user's requirements.
• If test conditions are created from documents, then any errors in those documents will be incorporated into the test conditions, and testing will not be able to find those errors.
• If test conditions are created from sources other than documents, error trapping is effective.
• A functional checklist is created.
When to Use
• Every application should be requirements tested.
• Testing should start at the requirements phase and progress through to the operations and maintenance phase.
• The method used to carry out requirements testing, and the extent of it, are important.
Example
• Creating a test matrix to prove that the system requirements as documented are the requirements desired by the user.
• Creating a checklist to verify that the application complies with the organizational policies and procedures.
Regression Testing
Usage:
• All aspects of the system remain functional after testing.
• A change in one segment does not change the functionality of another segment.
Objective:
• Determine that system documents remain current.
• Determine that system test data and test conditions remain current.
• Determine that previously tested system functions perform properly, unaffected by changes made in other segments of the application system.
How to Use
• Test cases that were used previously for the already tested segment are re-run to ensure that the results of the segment tested now match the results of the same segment tested earlier.
• Test automation is needed to carry out the test transactions (test condition execution); otherwise the process is very time-consuming and tedious.
• The cost/benefit of this testing should be carefully evaluated, or the effort spent on testing will be high and the payback minimal.
When to Use
• When there is a high risk that new changes may affect unchanged areas of the application system.
• In the development process: regression testing should be carried out after the pre-determined changes are incorporated in the application system.
• In the maintenance phase: regression testing should be carried out if there is a high risk that loss may occur when changes are made to the system.
Example
• Re-running previously conducted tests to ensure that the unchanged portion of the system functions properly.
• Reviewing previously prepared system documents (manuals) to ensure that they are not affected by changes made to the application system.
Disadvantage
• Time-consuming and tedious if test automation is not done.
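As the bullets above suggest, automation makes regression re-runs practical. A minimal sketch, with a hypothetical function standing in for the already tested segment and a hand-recorded baseline standing in for the earlier results:

# Re-run previously used test cases and compare against recorded results.
baseline = {            # input -> result recorded from the earlier test run
    (100, 5): 105,
    (0, 0):   0,
    (-3, 3):  0,
}

def add(a, b):          # stand-in for the unchanged segment under retest
    return a + b

failures = [(args, expected, add(*args))
            for args, expected in baseline.items()
            if add(*args) != expected]
print("regressions:", failures or "none")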

Error Handling Testing
Usage:
• It determines the ability of the application system to process incorrect transactions properly.
• Errors encompass all unexpected conditions.
• In some systems, approximately 50% of the programming effort is devoted to handling error conditions.
Objective:
• Determine that the application system recognizes all expected error conditions.
• Determine that accountability for processing errors has been assigned and that procedures provide a high probability that errors will be properly corrected.
• Determine that reasonable control is maintained over errors during the correction process.
How to Use
• A group of knowledgeable people is required to anticipate what can go wrong in the application system.
• All the application-knowledgeable people need to assemble to integrate their knowledge of the user area, auditing, and error tracking.
• Logical test error conditions should then be created based on this assimilated information.
When to Use
• Throughout the SDLC.
• The impact of errors should be identified, and corrections made to reduce errors to an acceptable level.
• Used to assist in the error management process of system development and maintenance.
Example
• Create a set of erroneous transactions and enter them into the application system, then find out whether the system is able to identify the problems.
• Use iterative testing: enter transactions and trap the errors, correct them, then enter transactions with errors that were not present in the system earlier.
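A minimal sketch of the first example; the transaction format and the validator are hypothetical stand-ins, and each erroneous transaction must be recognized rather than silently processed:

# Deliberately erroneous transactions fed to a stand-in validator.
def validate(txn):
    errors = []
    if txn.get("amount", 0) <= 0:
        errors.append("non-positive amount")
    if not txn.get("account"):
        errors.append("missing account")
    return errors

bad_transactions = [
    {"account": "A-1", "amount": -50},   # negative amount
    {"account": "",    "amount": 100},   # missing account
    {"amount": 0},                       # both errors at once
]

for txn in bad_transactions:
    errs = validate(txn)
    assert errs, f"error not recognized: {txn}"   # must be caught
    print(txn, "->", errs)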
Manual Support Testing
Usage:
• It involves testing all of the functions performed by people while preparing data for, and using data from, the automated system.
Objective:
• Verify manual support documents and procedures are correct.
• Determine that manual support responsibility is correctly assigned.
• Determine that manual support people are adequately trained.
• Determine that the manual support and the automated segment are properly interfaced.
How to Use
• The process is evaluated in all segments of the SDLC.
• Execution of this test can be done in conjunction with normal system testing.
• Instead of preparing, executing, and entering actual test transactions, the clerical and supervisory personnel can use the results of processing from the application system.
• Testing the people requires testing the interface between the people and the application system.
When to Use
• Verification that manual systems function properly should be conducted throughout the SDLC.
• Should not be left to the later stages of the SDLC.
• It is best done well before the installation stage, so that the clerical people are not still learning the actual system just before it goes into production.
Example
• Provide input personnel with the type of information they would normally receive from their customers and then have them transcribe that information and enter it in the computer.
• Users can be provided a series of test conditions and then asked to respond to those conditions. Conducted in this manner, manual support testing is like an examination in which the users are asked to obtain the answer from the procedures and manuals available to them.
Intersystem Testing
Usage:
• To ensure that the interconnection between applications functions correctly.
Objective:
• Determine that proper parameters and data are correctly passed between the applications.
• Ensure that documentation for the involved systems is correct and accurate.
• Ensure that proper timing and coordination of functions exist between the application systems.
How to Use
• The operation of multiple systems is tested.
• Multiple systems are run, the output of one feeding the next, to check that the data passed between them is acceptable and processed properly.
When to Use
• When there is a change in parameters between application systems.
• The risk associated with erroneous parameters determines the extent and type of testing to perform.
• Intersystem parameters should be checked / verified after a change or a new application is placed in production.
Example
• Develop a test transaction set in one application and pass it to another system to verify the processing.
• Enter test transactions in the live production environment and then use the integrated test facility to check the processing from one system to another.
• Verify that new changes to the parameters in the systems being tested are reflected correctly in the documentation.
Disadvantage
• Time-consuming and tedious if test automation is not done.
• Cost may be high if the system has to be run several times iteratively.
Control Testing
Usage:
• Control is a management tool to ensure that processing is performed in accordance with the intents of management.
Objective:
• Accurate and complete data
• Authorized transactions
• Maintenance of adequate audit trail of information.
• Efficient, effective and economical process.
• Process meeting the needs of the user.
How to Use
• To test controls risks must be identified.
• Testers should take a negative approach, i.e., determine or anticipate what can go wrong in the application system.
• Develop a risk matrix that identifies the risks, the controls, and the segment within the application system in which each control resides.
When to Use
• Should be tested with other system tests.
Example
• File reconciliation procedures work.
• Manual controls are in place.
Parallel Testing
Usage:
• To ensure that the processing of the new application (new version) is consistent with the processing of the previous version.
Objective:
• Conducting redundant processing to ensure that the new version or application performs correctly.
• Demonstrating consistency, or exposing inconsistency, between two versions of the application.
How to Use
• The same input data should be run through two versions of the same application system.
• Parallel testing can be done with the whole system or a part of the system (a segment).
When to Use
• When there is uncertainty regarding the correctness of processing in the new application, and the new and old versions are similar.
• In financial applications like banking, where there are many similar applications, the processing can be verified for the old and new versions through parallel testing.
Example
• Operating the new and old versions of a payroll system to determine that the paychecks from both systems are reconcilable.
• Running the old version of the application to ensure that the functions of the old system work correctly with respect to the problems encountered in the new system.
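A minimal sketch of the payroll example; both versions are hypothetical stand-ins, the same inputs go through each, and any difference is reported for reconciliation:

# Run the same input through two versions of a payroll calculation and
# reconcile the outputs.
def net_pay_old(gross):
    return round(gross * 0.78, 2)        # old version's rules

def net_pay_new(gross):
    return round(gross * 0.78, 2)        # new version under comparison

payroll_inputs = [1000.00, 2500.50, 0.0, 99999.99]
mismatches = [(g, net_pay_old(g), net_pay_new(g))
              for g in payroll_inputs
              if net_pay_old(g) != net_pay_new(g)]
print("mismatches:", mismatches or "none")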
Volume Testing
Whichever title you choose (for us, volume test), here we are talking about realistically exercising an application in order to measure the service delivered to users at different levels of usage. We are particularly interested in its behaviour when the maximum number of users is concurrently active and when the database contains the greatest data volume.
The creation of a volume test environment requires considerable effort. It is essential that the correct level of complexity exists in terms of the data within the database and the range of transactions and data used by the scripted users, if the tests are to reliably reflect the intended production environment. Once the test environment is built, it must be fully utilised. Volume tests offer much more than simple service delivery measurement. The exercise should seek to answer the following questions:
What service level can be guaranteed? How can it be specified and monitored?
Are changes in user behaviour likely? What impact will such changes have on resource consumption and service delivery?
Which transactions/processes are resource hungry in relation to their tasks?
What are the resource bottlenecks? Can they be addressed?
How much spare capacity is there?
The purpose of volume testing is to find weaknesses in the system with respect to its handling of large amounts of data during extended time periods.
Stress Testing
The purpose of stress testing is to find defects in the system's capacity for handling large numbers of transactions during peak periods. For example, a script might require users to log in and proceed with their daily activities while, at the same time, a series of workstations emulating a large number of other systems run recorded scripts that add, update, or delete from the database.
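A rough sketch of that idea, with threads standing in for the emulated workstations; the thread count, the transactions per user, and the sleep standing in for a real database operation are all assumptions for illustration:

# Many concurrent "users" hammer a stand-in transaction while elapsed
# time is measured.
import threading, time

def transaction():
    time.sleep(0.01)     # stand-in for an add/update/delete against the system

def user(n_transactions=50):
    for _ in range(n_transactions):
        transaction()

start = time.time()
users = [threading.Thread(target=user) for _ in range(20)]
for t in users:
    t.start()
for t in users:
    t.join()
elapsed = time.time() - start
print(f"20 users x 50 transactions in {elapsed:.1f}s "
      f"({20 * 50 / elapsed:.0f} tx/s)")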
Performance Testing
System performance is generally assessed in terms of response time and throughput rates under differing processing and configuration conditions. To attack performance problems, several questions should be asked first:
• How much application logic should be remotely executed?
• How much updating should be done to the database server over the network from the client workstation?
• How much data should be sent to each workstation in each transaction?
According to Hamilton [10], performance problems are most often the result of the client or the server being configured inappropriately.
The best strategy for improving client-server performance is a three-step process [11]. First, execute controlled performance tests that collect data about volume, stress, and loading. Second, analyze the collected data. Third, examine and tune the database queries and, if necessary, provide temporary data storage on the client while the application is executing.

testblog

This is my first test blog.

Wednesday, May 30, 2007