Monday, March 29, 2010

Test Metrics - Description and Definition

Severity-wise Defect distribution %:
Description: This report shows the severity-wise defect distribution
Definition: Severity 1 defect % = (Number of Severity 1 defects / Total number of defects) * 100, and so on for the other severities
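As a minimal sketch of this computation (in Python, with a hypothetical input shape of one severity label per defect; the labels and counts are illustrative):

```python
from collections import Counter

def severity_distribution(defects):
    """Percentage of defects at each severity level.

    `defects` is a list of severity labels, one per defect --
    an assumed input shape chosen for illustration.
    """
    counts = Counter(defects)
    total = len(defects)
    return {sev: round(n * 100 / total, 1) for sev, n in counts.items()}

# Example: 10 defects, 2 of them Severity 1
dist = severity_distribution(["Sev1"] * 2 + ["Sev2"] * 3 + ["Sev3"] * 5)
print(dist["Sev1"])  # → 20.0
```

The same pattern applies to any categorical distribution metric in this list (status-wise, testing-type-wise, root cause): count per category, divide by the total, multiply by 100.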

Status-wise Defect distribution %:
Description: This report shows the status-wise defect distribution
Definition: Closed defect % = (Number of closed defects / Total number of defects) * 100, and so on for the other defect statuses

Regression Defect %:
Description: This shows the ability to keep the product stable while fixing defects
Definition: Regression % = (Number of regression bugs / Total number of bugs) * 100

Reopened defect %:
Description: Percentage of reopened bugs against total bugs
Definition: Reopened bugs % = (Number of reopened bugs / Total number of bugs) * 100

Invalid / Duplicate defect %:
Description: Percentage of invalid and duplicate bugs against total bugs
Definition: Invalid & duplicate bugs % = (Invalid bugs + Duplicate bugs) * 100 / Total number of bugs

Defect distribution %:
Description: This metric shows the defect distribution across different kinds of testing, e.g. Functional testing, Regression testing, Usability testing, Documentation, Installation/Uninstallation testing, etc.
Definition: % of defects from functional testing = (Number of defects found by functional testing / Total number of defects) * 100

Root Cause Analysis on Customer found Defect and % distribution:
Description: This report shows a root cause analysis and percentage distribution of the customer found defects
Definition: Distribute the root causes percentage-wise into categories such as Test case missing, Platform not supported, Documentation not clear, Requirement not clear, Platform not covered, etc.

Cumulative defects fixed per week:
Description: This shows the cumulative volume of defects fixed on a weekly basis. An input to the testing team to plan ahead for the validation effort
Definition: Defects fixed in the current week + cumulative defects fixed up to the previous week
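The week-over-week accumulation can be sketched as follows (the weekly counts are illustrative):

```python
from itertools import accumulate

# Defects fixed in each week (illustrative numbers)
fixed_per_week = [5, 8, 12, 7]

# Cumulative total through each week:
# week N total = fixes in week N + total through week N-1
cumulative = list(accumulate(fixed_per_week))
print(cumulative)  # → [5, 13, 25, 32]
```

The same running sum gives the "cumulative defects closed per week" and "cumulative test execution per week" metrics further down, with closed defects or executed test cases as the weekly input.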

Defect Validation %:
Description: This report shows validated (Closed + Reopened) defects vs fixed defects
Definition: (Number of validated (closed + reopened) defects / Number of fixed defects) * 100

Defect Validation Rate:
Description: Number of defects validated per week. This is a measure of QA productivity in validating the fixed defects
Definition: Defects validated per person per week = (Total number of validated defects) * 40 / (Total hours spent validating the defects * Number of resources)
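A minimal sketch of this normalization, assuming the 40-hour work week implied by the definition above (function and parameter names are illustrative):

```python
def validation_rate(validated, hours_spent, resources, week_hours=40):
    """Defects validated per person-week, normalized to a
    `week_hours`-hour week (40 assumed, per the definition above)."""
    return validated * week_hours / (hours_spent * resources)

# 120 defects validated in 60 hours of effort by 2 people:
rate = validation_rate(validated=120, hours_spent=60, resources=2)
print(rate)  # → 40.0
```

The test-design, test-execution, and automation rate metrics later in this list use the identical *40-hour normalization, so the same helper applies with TCs written, executed, or automated in place of validated defects.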

Cumulative defects closed per week:
Description: This shows the cumulative volume of defects closed on a weekly basis.
Definition: Defects closed in the current week + cumulative defects closed up to the previous week

Bug Trend and Forecasting:
Description: This shows the trend of bug discovery based on past data, extrapolated to forecast future values
Definition: E.g. a moving average of the weekly number of bugs over the past 3 weeks
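The 3-week moving-average forecast named above can be sketched as follows (the weekly bug history is illustrative):

```python
def moving_average_forecast(weekly_bugs, window=3):
    """Forecast next week's bug count as the mean of the last
    `window` weeks (the 3-week moving average described above)."""
    recent = weekly_bugs[-window:]
    return sum(recent) / len(recent)

# Bugs found in each of the last five weeks (illustrative):
history = [14, 11, 9, 12, 6]
print(moving_average_forecast(history))  # → 9.0
```

A wider window smooths out weekly noise at the cost of reacting more slowly to a genuine change in the defect arrival rate.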

Defect ageing report:
Description: This shows how many bugs are still open across different time-period slots.
Definition: Number of open bugs aged less than 3 months, more than 3 months, etc., as applicable
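A bucketing sketch for the ageing report, with illustrative bucket edges of 30 and 90 days (any set of edges matching the project's reporting slots would work):

```python
from datetime import date

def ageing_buckets(open_dates, today, edges=(30, 90)):
    """Count open bugs per age bucket: here <30 days, 30-90 days,
    and >90 days (the bucket edges are an illustrative choice)."""
    buckets = [0] * (len(edges) + 1)
    for opened in open_dates:
        age = (today - opened).days
        for i, edge in enumerate(edges):
            if age < edge:
                buckets[i] += 1
                break
        else:
            buckets[-1] += 1  # older than the last edge
    return buckets

today = date(2010, 3, 29)
opened = [date(2010, 3, 20), date(2010, 2, 1), date(2009, 11, 5)]
print(ageing_buckets(opened, today))  # → [1, 1, 1]
```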

Defect Density:
Description: This shows the number of bugs found per person-month of development effort
Definition: (Total number of bugs) * 160 / (Total person-hours of planned dev effort)
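Applying the 160-hour person-month assumption from the definition above (the bug and effort figures are illustrative):

```python
def defect_density(total_bugs, dev_hours, month_hours=160):
    """Bugs per person-month of development effort; 160 hours
    per person-month assumed, matching the definition above."""
    return total_bugs * month_hours / dev_hours

# 48 bugs against 1,920 planned dev hours (12 person-months):
print(defect_density(48, 1920))  # → 4.0
```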

Defect Removal Efficiency (DRE):
Description: This shows the efficiency of QA in finding bugs relative to the bugs found by customers
Definition: DRE = (Pre-release Sev 1 & 2 bugs) * 100 / (Pre-release Sev 1 & 2 bugs + Post-release Sev 1 & 2 bugs)
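Plugging illustrative numbers into the DRE formula:

```python
def dre(pre_release_bugs, post_release_bugs):
    """Defect Removal Efficiency for Sev 1 & 2 bugs: the share
    found before release out of all found before and after."""
    total = pre_release_bugs + post_release_bugs
    return pre_release_bugs * 100 / total

# QA found 90 Sev 1/2 bugs pre-release; customers found 10 more:
print(dre(90, 10))  # → 90.0
```

Note that DRE can only be finalized some time after release, once the inflow of customer-found Sev 1/2 bugs has settled.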

Test Coverage %:
Description: This report shows the extent to which testing covers the product’s complete functionality
Definition: (Total number of test cases executed / Total number of test cases available across all platforms) * 100

Test design per person week:
Description: Number of test cases written per week, classified into size- or complexity-based categories. This is a measure of QA productivity during the test-case definition stage.
Definition: TCs created per person per week = (Total number of TCs created) * 40 / (Total hours spent writing TCs * Number of resources)

Test Case effectiveness:
Description: This metric gives an indication of the effectiveness of the test cases.
Definition: (Number of test cases that produced defects / Total number of test cases) * 100

Status wise Test Case execution distribution:
Description: This metric gives an indication of the status of test-case execution
Definition: Passed test cases % = (Number of passed test cases / Total number of test cases) * 100, and so on for Failed, Blocked, and Not Yet Executed test cases

Cumulative test execution per week:
Description: This shows the cumulative number of test cases executed on a weekly basis.
Definition: Test cases executed in the current week + cumulative test cases executed up to the previous week

Test execution per person week:
Description: Number of test cases or test steps executed per week. This is a measure of QA productivity during the testing phases.
Definition: TCs executed per person per week = (Total number of TCs executed) * 40 / (Total hours spent executing TCs * Number of resources)

Review Comments on Test Plan:
Description: This shows the quality of the Test Plan
Definition: Number of review comments received per Test Plan; alternatively, comments can be distributed as High, Medium, Low

Review Comments on Test cases:
Description: This shows the quality of the Test Cases
Definition: Number of review comments received per Test Case; alternatively, comments can be distributed as High, Medium, Low

Automated Test Cases %:
Description: This report shows automated vs total test cases
Definition: (Number of automated test cases / Total number of test cases to be automated) * 100

Automation of TC rate:
Description: Number of test cases automated per week. This is a measure of QA productivity in automating the test cases
Definition: TCs automated per person per week = (Total number of TCs automated) * 40 / (Total hours spent automating TCs * Number of resources)

Resource Utilization:
Description: This shows how the time spent by the QA resources has been utilized.
Definition: (Time reported on productive QA tasks) / (Standard time available for the period)

Effort variance:
Description: This is the deviation of the actual effort from the estimated planned effort, expressed as a percentage.
Definition: ((Actual effort hours - Planned effort hours) / Planned effort hours) * 100

Schedule variance:
Description: This is the deviation of the actual completion date from the planned completion date as captured in the project schedule, expressed as a percentage.
Definition: ((Actual duration - Planned duration) / Planned duration) * 100
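Both variances follow the same pattern, deviation over plan expressed as a percentage (the planned and actual figures below are illustrative):

```python
def effort_variance(actual_hours, planned_hours):
    """Deviation of actual effort from plan, as a percentage."""
    return (actual_hours - planned_hours) * 100 / planned_hours

def schedule_variance(actual_days, planned_days):
    """Deviation of actual duration from plan, as a percentage."""
    return (actual_days - planned_days) * 100 / planned_days

# Planned 200 hours over 20 days; actually took 230 hours over 25 days:
print(effort_variance(230, 200))   # → 15.0
print(schedule_variance(25, 20))   # → 25.0
```

A positive value means overrun; a negative value means the work came in under the estimate.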

Thursday, March 25, 2010

Qualities for a Software Tester Part - 2

Continued from Qualities for a Software Tester Part - 1 .....

Here I have compiled some of the qualities required for a software tester. (Part 2)

C. Domain Knowledge
1. Understanding the domain and using it for TS/TC writing
Beginner - Basic understanding of domain related to own area(module/feature) of work
Intermediate - Ability to write test scenarios in line with domain
Expert -
Understanding of the entire domain and expertise in using it to write test scenarios /Test Cases
Ability to review test scenarios in line with domain
Ability to guide others on the domain knowledge and help them write test scenario/TC
Ability to judge the quality of a defect and impact from the domain knowledge

D. Product knowledge
1. Understanding of different product lines
Beginner -
Basic understanding of the product/application flow
Intermediate -
Ability to write test cases aligned to product requirements
Ability to prioritize the test cases
Ability to review TC in line with product flow
Ability to identify the stable and unstable area of the product
Ability to identify the area which needs more coverage
Expert -
Ability to provide technical expertise on specific products
Understanding of end to end product line
Ability to judge the quality of a defect and impact from the product knowledge
Ability to ensure adequacy of test coverage with respect to functionality of product
Ability to develop test strategy based on impact analysis due to defect fixes
Ability to demonstrate the product feature to customer
Ability to propose new feature

E. Automation
Beginner -
Ability to use Test Automation tools to create test scripts, based on manual test steps
Ability to execute automated test cases
Intermediate -
Ability to automate new test cases, maintaining/executing existing regression suites
Ability to analyze automation execution result
Expert -
Ability to use Test Automation tools to create a framework for automated testing
Ability to identify the stable areas for automation
Ability to sequence test scripts for dependency
Ability to troubleshoot issues related to automation suites
Ability to prepare Automation strategy
Ability to evaluate and recommend automation tool for use

F. Tools and technology
1. ClearQuest, Quality Center, VersionOne, Subversion, WatiN and NUnit, concepts of web applications, Agile methodology, RUP concepts, etc.

G. Communication
1. with peers, with cross team members, with onsite team members, with project stakeholders
Beginner -
Ability to attend calls
Ability to provide timely and accurate analysis of findings
Ability to discuss doubts with other team
Intermediate -
Ability to work within a team environment with good communication skills
Ability to report any problems as quickly and accurately as possible
Ability to co-ordinate with onsite for issue resolution.
Ability to attend the regular client call and discuss the weekly status with the client.
Expert -
Ability to attend appropriate development project meetings with developers and end users, to gather information for future testing projects and to fully understand the scope of the project.
Ability to communicate effectively with developers and other stakeholders in the organization.
Ability to participate in status meetings with the offshore and onsite teams and update the status to the concerned parties on time
Ability to communicate the test project status (detailed, and measurable) to the team members, to the customer, and to management
Ability to understand the business objectives and provide potential solutions



Thursday, March 11, 2010

Test Deliverable(s)

Here is the list of Test Deliverables for an Existing Project/New Project/Support Project

Existing Projects – Projects that have been delivered to the market but are still undergoing new features and fixes.
Following are the deliverables from the QA team as applicable to a particular project
1. Test plans (new feature)
2. Test cases
3. Modification of the existing Test cases
4. Test case execution result
5. Reports on Bugs
6. Automated test suite
7. Modification of existing automated tests
8. Automated Test case execution result
9. Back up of Test Environment
10. Test Metrics
11. Post Mortem Report and Lesson Learned document

Interim Deliverables:
1. Status Report
2. Review of Test Plan and Test Cases

The QA team should be involved right from the requirement review discussion through the functional design document review meeting, to get a better understanding of the product functionality. The Test Plan and Test Cases would be reviewed by the internal QA team and the concerned Dev team

Broad testing tasks would include: Task updates in the iteration/release backlog -> Test Plan preparation -> Test case preparation -> Modification of the existing test cases -> Environment setup -> Automation of the test suite -> Modification of existing automated tests -> Test case execution -> Bug reporting -> Demonstration to the client at the end of each iteration -> Post Mortem report and Lessons Learned document

New Projects – Any application that has not been delivered to the market place yet.

Following are the deliverables from the QA team as applicable to a particular project
1. Test plans (new feature)
2. Test cases
3. Modification of the Existing Test cases
4. Test case execution result
5. Reports on Bugs
6. Automated test suite
7. Modification of existing automated tests
8. Automated Test case execution result
9. Back up of Test Environment
10. Test Metrics
11. Post Mortem Report and Lesson Learned document

Interim Deliverables:
1. Status Report
2. Review of Test Plan and Test Cases

The QA team should be involved right from the requirement review discussion through the functional design document review meeting, to get a better understanding of the product functionality. The Test Plan and Test Cases would be reviewed by the internal QA team and the concerned Dev team

Broad testing tasks would include: Task updates in the iteration/release backlog -> Test Plan preparation -> Test case preparation -> Modification of the existing test cases -> Environment setup -> Automation of the test suite -> Modification of existing automated tests -> Test case execution -> Bug reporting -> Demonstration to the client at the end of each iteration -> Post Mortem report


Support – Applications that are in a production environment but are only changing based on reported defects.

Following are the deliverables from the QA team as applicable to a particular project

1. Modification of the Existing Test cases / Add test cases
2. Test case execution result
3. Modification of existing automated tests

Broad testing tasks would include: Test Plan preparation -> Test case preparation -> Modification of the existing test cases -> Environment setup -> Defect reproduction -> Automation of the test suite -> Modification of existing automated tests -> Test case execution -> Bug reporting -> Adding the test cases to the correct repository

Testing Activities in a Test Process following RUP + Agile methodology

Here is a list of Test Activities to be carried out in a Test Process following RUP+Agile Methodology.

Background: A new testing team will carry out the testing of a product, say ABC 6.0, whereas the previous release, ABC 5.0, was tested by another team, and only very basic information has been passed from the previous team to the current team.

Inception Phase
• High level Test Strategy will be prepared
• Level-0 estimate from QE side will be provided

Elaboration Phase
• Test Plan will be prepared
• Detailed estimate for Elaboration phase from QE side will be provided
• Base test cases for ABC 5.0 will be made ready
• Base test automation scripts for ABC 5.0 will be made ready
• Functional Testing
o High-level functional test scenarios will be created with reference to the Use Case document. Test scenarios based on workflows, covering one or more requirements, will be created
o Test cases will be derived from the test scenarios iteration-wise, as and when the design document and UI screenshots for the particular feature become available
o Base setup of the test environment for ABC 5.0 will be made ready
• Performance Testing
To ensure valid comparisons of ABC 6.0 performance against its baseline, the performance testing methodology would either be executed on ABC 5.0 to refresh past performance test results (since the application is expected to function with the client's user volume of data), or match the testing methodology used for ABC 5.0 so that those tests can be used as a baseline
o Analysis of the ABC 6.0 requirements will be carried out
o Performance Test Plan will be prepared
o Test environment for ABC 5.0 will be prepared
o Test Scripts will be prepared
o Base Line testing for ABC 5.0 will be carried out and parameters will be defined and finalized for comparison.

Construction Phase
• Test Plan will be updated/modified with execution plan.
• Detailed estimate for Construction phase from QE side will be provided
• Functional Testing
Assuming that all the new requirements that are part of ABC 6.0 would be delivered to QE in a number of iterations, the following activities would be carried out in each iteration, Iteration 1 to N, as defined by the team for the short releases.
o Test Case writing will be completed. Test cases would be organized as Smoke / Priority 1 / Regression test cases
o Traceability matrix from Requirement to Use Case to Test Cases will be created
o In each iteration, all the test cases authored will be sent to the stakeholders or their representatives for review
o Setting up of the test environment during iteration-wise testing will be carried out by deploying the build provided by the Development team on top of ABC 5.0
o Functional test cases would be executed to validate whether each requirement has been implemented correctly
o Test Scripts for Automation would be prepared. Focus will be given on continuous creation of automated tests covering all Functional and Regression test cases.
o Automated test scripts will be executed
o After testing the new features, regression testing is planned at the end of each iteration to ensure that defects fixed during the current iteration did not regress the product
o Defects will be reported as part of Functional and Regression Testing
o All the fixed defects will be verified in the new build before moving on to new feature testing. Sev 1 and Sev 2 defects must be planned to be fixed in the current iteration.
o Test cases and test scripts will be modified as required.
o Test Plan will be reviewed at the end of each iteration to investigate incomplete tasks and re-estimate the timelines, if required
o Document validation with respect to the feature: User Manual, Online Help, Installation Guides, Configuration, Read Me file, Labeling and Packaging will be validated.
o A project meeting will be conducted every day, where the entire team sits together and discusses the progress of the project and any issues blocking progress.
o At the end of each iteration, produce an Iteration Summary report with metrics to measure progress, such as:
• No. of Test cases run at each iteration
• The running and tested feature metrics
• Effort Planned Vs Actual
• No. of Defects Vs functionality
• Schedule Variance
• Total No. of Test cases executed / Passed / Failed/ Blocked / Not Run
o There will be a Post Iteration Review after each iteration, and the feedback will be taken as input for the next iteration plan.
• Performance Testing
Performance testing will be carried out on an iterative basis, as and when functional testing for a particular feature is completed, to ensure that system performance does not degrade.
o Test environment setup for ABC 6.0 and deployment of the build
o Performance testing execution
o Analysis, monitoring, and comparison reports on performance parameters will be produced in each iteration

Transition Phase
• Test Plan will be updated/modified with further details
• Detailed estimate for Transition phase from QE side will be provided
• Functional Testing
Hardening/regression iterations are planned to make sure the system is stable and the fixes do not have any adverse effect on the system
o In the final iteration, the QE team would test all new requirements and one complete flow as part of regression testing.
o During the final iteration, Regression test cases will be executed on test environment. Automated Regression test suite will be run as part of this.
o Validation of the defects which will be fixed in this phase will be carried out.
o The release note will be validated. The release note mentions the requirement items coded, defects fixed, known issues, and any other critical information required for testing
o At the end of the iteration, produce an Iteration Summary report with metrics to measure progress, such as:
• No. of Regression Test cases run at each iteration
• Effort Planned Vs Actual
• No. of Defects Vs functionality
• Schedule Variance
• Total No. of Test cases executed / Passed / Failed/ Blocked / Not Run
• Integration Testing
Integration testing will be carried out in the integration lab to ensure the proper integration of all components of the product.
• Performance Testing
Performance testing will be carried out after the regression testing is completed.
o Clean test environment setup for ABC 6.0
o Performance testing execution
o Analysis, monitoring, and comparison reports on performance parameters will be produced at the end of the testing

Monday, March 8, 2010

Qualities for a Software Tester Part - 1

Here I have compiled some of the qualities required for a software tester. (Part 1)

A. QA Concepts
1. Requirement Analysis
Beginner- Ability to analyze business requirement and understand it
Intermediate - Ability to analyze the Business requirement and derive test requirements
Expert - Ability to seek further information to fill up the gaps. Ability to understand the business objectives

2. Test Strategy
Beginner - Ability to understand and adhere to test strategy while testing
Intermediate - Ability to understand the Test Strategy and the objective behind it. Ability to give some inputs towards test strategy derivation
Expert - Ability to write Test Strategy

3. Test Planning
Beginner - Ability to understand the Test Plan and work accordingly
Intermediate - Ability to fill up a few sections in the Test Plan. Ability to derive a Test Plan using past project data
Expert - Ability to write Test Plan without any heuristic data

4. Test Scenario
Beginner - Ability to understand the Test scenario and derive test cases
Ability to document the Test Scenario if the Test Scenario has been described orally
Intermediate - Ability to derive test scenario from Use case/Business requirement
Expert - Ability to derive test scenarios with maximum coverage of the features. Ability to support, guide, and provide direction to the testing team to generate quality test case documents

5. Test Case
Beginner -
Ability to write test cases from Test Scenario/Use Cases
Ability to follow written instructions to execute test cases on specific areas of product functionality and check outcomes against expected results
Intermediate -
Ability to write test cases with complete details, aligned with the Test Strategy
Ability to prioritize the test cases
Ability to create test cases for various features/modules of the product
Expert -
Ability to write test cases including many aspects of the feature which can produce more bugs
Ability to optimize the number of test cases
Ability to identify the areas which need more test coverage
Ability to write test cases in line with a specification to test a piece of functionality, identify and investigate ambiguities in tests and gain agreement from Product management team on proposed solutions
Support, guide and provide directions to the testing team to generate quality test case documents

6. Review Test Scenario/Test Case
Beginner - Ability to review individual test case/test scenario
Intermediate - Ability to review test cases/test scenarios from the feature/module coverage perspective
Expert - Ability to review test cases/test scenarios from the product and domain knowledge perspective. Ability to ensure proper reviews are done on the documents from the team

7. Traceability and Test Coverage
Beginner - Ability to map Req to TC with guidance
Intermediate - Ability to map Req to TC and derive reports
Expert -
Ability to map Req to TC and find out the gaps and identify the weak areas for coverage
Ability to ensure that the test cases have optimal coverage on the requirements – perform a test coverage analysis

8. Doc Review
Beginner - Ability to review the application flow of the user guide
Intermediate - Ability of reviewing technical and user documentation
Expert - Ability to review from the end user perspective

9. Defect raising and Defect Tracking
Beginner -
Ability to identify a defect
Ability to find defect
Ability to replicate a defect
Ability to report the defect clearly
Ability to validate a defect
Intermediate -
Ability to prepare bug report
Ability to identify a blocker defect
Ability to attend bug review meetings
Expert -
Ability to participate in bug scrub
Ability to utilize the analytical skills in determining the root cause of problems
Ability to analyze the bug reports for decision making
Ability to analyze the likely causes of, and trends in, defects during testing to identify problem areas, reduce errors, and suggest further improvements in the software development process.

10. Different kinds of testing (Functional, Regression, Smoke, Sanity, Integration, Performance, Load, User Interface, Database, Security, Black-box and White-box testing)
Beginner -
Basic knowledge of the different kinds of testing
Ability to execute planned testing tasks with guidance
Ability to report test result
Intermediate -
Ability to execute various kinds of testing
Ability to monitor test execution keeping the deadline in mind
Expert -
Ability to execute test cases to test a complex application or component
Ability to prioritize the test cases while executing
Ability to guide others on testing concepts

11. Test Env Setup
Beginner -
Ability to set up the test env with guidance
Ability to deploy the build
Intermediate -
Ability to set up the Test Env with minimal guidance
Ability to troubleshoot on one's own and identify the problem
Expert -
Ability to setup Test Env from scratch
Ability to derive the Test Env creation requirements
Ability to identify hardware and operating system requirements from functional specifications

B. Test Management
1. Scope of work, Scope of Deliverables
Beginner - Ability to understand the scope of work
Intermediate -
Ability to negotiate the SOW
Ability to derive the deliverables
Expert -
Ability to cope with changing requirements
Ability to adapt to the changing needs of a project
Ability to handle out-of-scope requests

2. Test Effort Estimation, Task identification, Scheduling, Resource Allocation/Identification, Setting Milestone
Beginner -
Ability to come up with Test effort estimation with prior project data
Intermediate -
Ability to allocate work to the team based on the expertise
Ability to prioritize and work within tight timescales in order to meet deadlines
Expert -
Ability to come up with effort estimation without prior project data
Ability to transfer the estimation into a schedule
Ability to use MS Project to create and track test schedules
Ability to identify the resource needs and the associated skills
Ability to deliver the different testing milestones on time.

3. Tracking progress
Beginner -
Basic knowledge of tracking tool
Intermediate -
Ability to track deliverables against plan (using Microsoft Project/version one) through observations, weekly meetings, reports
Expert -
Ability to review project plans and determine short and medium-term work plans

4. Issue Tracker, Risk and Mitigation
Beginner -
Ability to identify the risk related to own task
Knowledge of the escalation method
Ability to escalate issues in a timely manner
Intermediate -
Ability to communicate the issues faced effectively
Ability to perform risk analysis when required
Expert -
Ability to escalate issues about project requirements (Software, Hardware, Resources) to higher management.
Ability to identify the risks across test phases

5. Applying corrective action
Beginner -
Ability to understand the feedback mechanism/review mechanism in place and act accordingly
Intermediate -
Ability to ensure proper review and feedback mechanism is in place
Expert -
Ability to meet deadlines by constantly applying corrective action
Ability to take ownership of projects, planning and driving them from inception to completion
6. Test metrics, Test Reporting/Status report, Test Closure - exit of project, Conducting Post mortem
Beginner -
Basic knowledge of tracking tool
Ability to produce reports, and collate reports from other testers
Ability to derive the metrics
Intermediate -
Ability to track overall progress
Ability to determine the applicable metrics that can be extracted and used to define, measure, and track quality goals for the project
Expert -
Ability to find gaps in coverage, opportunities and risks
Ability to make efficiency improvements and reliably manage projects
Ability to communicate the state of software; what has been successfully tested, what has not passed, what hasn't been tested at all (this can be due to time constraints, functionality unavailability etc)
Ability to say "no" when quality is not acceptable for release, supported by facts
Ability to provide weekly status reports, and plan own time and that of other testers to ensure completion of objectives so that the schedule is delivered to specified project timescales
Ability to prepare/update the metrics dashboard at the end of a phase or at the completion of the project.
Ability to conduct the post-mortem meeting and turn the feedback received there into action items

7. Resource Utilization, Team development, Developing backup resource, Training need
Beginner -
Ability to organize brainstorm sessions on a regular basis
Ability to explain any decision taken so that the team understands the ideas that led to it
Ability to continuously monitor and mentor Testing team members
Ability to coach other Test Engineers on any technical issues within own area of understanding
Intermediate -
Ability to adopt the best practices
Ability to organize trainings (external as well as internal)
Ability to work in group to share competencies
Ability to mentor and prepare induction plans for new members joining the team
Ability to provide direction to the team members.
Expert -
Ability to discuss with QA engineers and help them derive the idea instead of presenting it to them directly
Ability to look forward to the products in development and planned for development, to foresee what new expertise and training is needed by the group
Ability to hire people based on current and future testing needs
Ability to assign tasks to all Testing Team members and ensure that all of them have sufficient work in the project.

8. Team Building
Beginner -
Ability to promote cooperation between Software and Test/QA Engineers,
Ability to run meetings and keep them focused
Intermediate -
Assign roles and responsibilities to test team members
Expert -
Ability to build a testing team of professionals with appropriate skills, attitudes and motivation
Ability to maintain the enthusiasm of the team and promote a positive atmosphere
Ability to promote teamwork to increase productivity

9. Knowledge of configuration management - build, branching, development process, scope of using any tool for effectiveness, Building knowledge management
Beginner -
Basic Knowledge of CM procedures
Basic knowledge of the software development process
Intermediate -
Ability to take decision to procure Software Testing tools for the organization
Ability in leveraging capabilities and efficiently using third party tools to optimize the environment for testing
Expert -
Ability to conduct studies on various testing methodologies and tools and provide recommendations on the same
Ability to ensure the content and structure of all testing documents/artifacts are documented and maintained.

10. Implementing test process
Beginner -
Ability to adhere to the company's Quality processes and procedures
Intermediate -
Ability to ensure effective test process improvement feedback by providing information back to process improvement groups
Ability to promote improvements in QA processes
Expert -
Ability to identify and implement process improvements for test efficiency
Ability to document, implement, monitor, and enforce all processes for testing as per standards defined by the organization.

11. Recruitment
Beginner -
Basic judgment skills to hire the right people
Intermediate -
Basic judgment skills to hire the right people
Expert -
Support the recruitment team for interviews

In Part 2, I will cover Domain knowledge, Product knowledge, Automation, Tools and Technology, and Communication.