Wednesday, October 10, 2012

Sample 1: How to Fill Out an Appraisal Form (by a Test Engineer)




Communication:

Communicated clearly with team members and the manager.
Provided status to the manager without fail, with no follow-up needed from the manager.
Asked the manager for guidance clearly whenever needed.
Communicated well with cross-team members, the lead, and the manager.
Communicated via email/conference calls with onsite team members, leads, and managers.


Productivity:

I exceeded the target for test case execution: executed x test cases/week against a target of y test cases/week.
Although my responsibilities also include report generation, follow-up, and helping team members, I was still able to meet the target as an individual contributor alongside lead activities.
Always try to find new ways to accomplish the work and thus be more productive (e.g., give a few examples).


Process Compliance:

Always followed the process.
After writing the Test Plan document for the WU module, sent it for review (internal: Dev team; external: onsite team).
Followed the TeamTrack process.
Followed the TD process, e.g., updating the status of test cases to Pass/Fail/Blocked/NA accordingly.
Always made sure to record the defect ID against failed test cases. For cross-reference, please check Test Director.
Created user stories/tasks in XPlanner.
Regularly updated tasks in XPlanner.


Task Effectiveness:

Escalated project-specific issues to the manager, Dev team, and onsite QE counterpart in a timely manner.
Toward the end of the AU release, module testing arrived that had not been planned earlier. Still maintained productivity and delivered high quality while testing modules as part of the AU release.
A few mails are attached as proof.


Test Execution:

This year I worked on the following projects and always met the target dates.
Core Components:
AU
MA with AU 7.0
Linux support for AU 7.0
License Reporting AU 7.0 SP1
Modules:
O release - WU as individual contributor and guiding OU module


Meeting Deadline:

Always met the timeline.
For AU 7.0 SP1, finished the assigned tasks before the scheduled date and took on some tasks initially assigned to the onsite QE team.


Quality of Defects raised:

Raised defects with all the details. Zero complaints from the offshore and onsite teams.
I filed 69 issues this year (Excel sheet attached). Two of these were not bugs, so the actual count is 67.
Severity wise distribution is as follows.
Sev1 - 3
Sev2 - 24
Sev3 - 24
Sev4 - 16
QQ = 3x4 + 24x3 + 24x2 + 16x1 = 148
Two bugs were marked invalid, i.e., 2/69 = 2.9%, which is well below the allowed limit.
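The quality quotient above can be reproduced as a quick calculation. A minimal sketch in Python, assuming the severity weights 4/3/2/1 implied by the formula:

```python
# Severity-weighted quality quotient (QQ): each defect counts with a
# weight of 4 (Sev1), 3 (Sev2), 2 (Sev3), or 1 (Sev4).
weights = {"Sev1": 4, "Sev2": 3, "Sev3": 2, "Sev4": 1}
counts = {"Sev1": 3, "Sev2": 24, "Sev3": 24, "Sev4": 16}

qq = sum(weights[sev] * n for sev, n in counts.items())
print(qq)  # 12 + 72 + 48 + 16 = 148

# Invalid-bug rate: 2 invalid out of 69 filed
print(f"{2 / 69:.1%}")  # 2.9%
```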


Verification of Defects:

Verified the pending_validation issues on time.
Verified and closed 44 issues this year. Severity-wise distribution is as follows (Excel sheet attached for details).
Sev1 - 3
Sev2 - 8
Sev3 - 17
Sev4 - 16


Test Plan and Test Case Development:

Regularly updated the test cases in Test Director.
Added test cases wherever a gap was found.
Passed the relevant test cases after verifying the issue.
License Reporting, AU 7.0 - created the test cases.
For cross-reference, Test Director can be checked.
Reviewed the Test Plan created by F and gave comments. (mail attached)
Reviewed the Test Plans created by other team members. (mail attached)
Reviewed the PRD and gave comments to the PM. (mail attached)
Wrote test cases covering 100% of the requirements.
Test cases are detailed enough to be understood by someone new to the product.
No major issues were found in the created Test Plan and test cases.
Always adhered to the dates for submitting the Test Plan and test cases for review.


Bonding with Team Members:

I worked with the following people this year.
Zero complaints from team members working with me
Zero complaints from other team members in the MA QE team
Zero complaints from the Dev team
Zero complaints from the onsite QE and Dev teams
[January to March] - [RR] - [Krishna, P, A and K]
[April to May] - [MA], [Linux Support] - [K, P, De, R]
[June] - [License Reporting Testing] - [Krishna, and Dev by A]
[July to September] - [AU 7.0 SP1 and support of 6 modules] - [K, A, S]
[October to December] - [O Release] - [K, A, D, R]


Knowledge Sharing:

Delivered training to H (XPlanner) and B (Test Director, TeamTrack) by the scheduled date.
Gave inputs on AU test cases to D and R, who joined the team.
Shared the environment setup with the Dev team to help them fix the issue. [Feedback from K can be taken]
Helped the Doc team set up the environment from scratch for reference. [Feedback from V can be taken]


Extra Achievement:

Received the MVP (Most Valuable Player) award twice in the QE team.
1. Got appreciation from the onsite QE manager J. [Mails can be produced as proof]
Comment:
Krishna,
You and your team did a really good job on the MA 7.0.1 release. ..............
Regards, J
"You are making great strides and I applauded your efforts and your results up to this point. Keep up the good work." - J
2. Got good feedback from L for project reporting and planning.
"Great report K!" - L
3. Conducted interviews for QE positions.
4. Got good feedback from the manager about understanding requirements and replying to the point. [Attached email]
"Thanks K. This is perfect. You understand requirements correctly and reply to the point. Please keep up this strength."
P

Sunday, October 7, 2012

Sample : Award Nomination




K has done a great job in testing and managing the testing team. Not only has she tested this product, she has gained the knowledge required to improve the product's quality in the long run. Because of the change in test strategy K suggested and her digging into the information in Test Director, the product AU is in much better shape compared to the previous version. K has also taken good care of her team and has always exceeded productivity expectations.


P has done a great job in testing the AUS module. She has raised many issues and thus stabilized this area, which had never been tested thoroughly before. She has always worked beyond expectations and takes complete ownership of her tasks. She has also contributed to the M release and delivered a quality product. She always comes up with her own strategy to execute the test cases faster, and her productivity is much higher than expected. She also contributed to license testing, which was a completely new area to explore.


A has done a great job in testing the SI modules. SI, being a complex and high-business-volume module, needed special attention, and A has performed beyond expectations. Because of the effort and hard work put in by A, the product was delivered well ahead of schedule to the customer.
(Appreciation email attached. Customer appreciation snippets can be mentioned here.)





Wednesday, October 3, 2012

1st day of Testing - Breadth vs Depth - Which is important?


Or: Smoke Testing vs Bug Finding - Which is important?



In this article I try to show how important it is, on the first day of testing after receiving a build (or at the beginning of a test cycle), to execute the core functional test cases across the application under test (breadth testing) versus finding a larger number of defects through detailed testing (depth testing).

Scenario 1:
---------------------

BACKGROUND:
---------------------------
You are part of the testing team. You have completed test case writing, prepared the test bed, and are eagerly waiting for the build from the Dev team.

You were expecting the build on Monday morning, but the build from the Dev team got delayed by two days (as happens most of the time! Especially when the Dev team takes it for granted that the testing team will absorb the delay so the final delivery date to the customer is still met, or when the test manager finds it easier to pass the pressure on to the test team instead of raising a red flag to the project manager about the delay in the build).

[A case of the onsite/offshore model] The onsite team has handed the new release of application XYZ 3.0 to offshore, and there is a lot of pressure on the offshore team to prove themselves. Moreover, the onsite team has promised to pass further product lines to offshore for development and testing based on the team's performance on the delivery of XYZ 3.0. Everybody in the project team is looking forward to seeing the new release and getting first-hand feedback from the testing team.

The test manager has motivated the team, and in each team meeting has hinted at how many bugs the team might find; he is also very enthusiastic about presenting charts and graphs of defect metrics. Hence he is waiting and hoping that the testing team will find enough bugs to make his charts and graphs look impressive when he presents them to higher management/the project manager.

Sounds like a familiar scenario...

[Assumption: you don't have an automated set of smoke test cases in place.]

On a fine Wednesday morning you finally receive the build. After downloading it from the build location, transferring it to your testing machine, and overcoming the hiccups of installing the software, it is lunchtime. After lunch you start testing, and you are left with 4 hours before you have to give your first-hand feedback to your test manager, who in turn will present it to higher management/the project manager.


ACTIONS:
--------------------
It is the natural tendency of a tester to find as many bugs as possible after receiving the build. Even though QC contains a good number of test cases written by the testing team and reviewed by the team/product manager, how many of you can resist the urge to do ad-hoc testing as soon as you have access to the application, instead of following the test case execution written in QC one by one? And while doing ad-hoc testing it is very easy to get carried away: you explore more and more, dive deep into the application, poke into every nook and corner, and find bugs in corner cases. Moreover, as far as productivity is concerned, it feels more worthwhile to report the maximum number of bugs to the team than to say that x test cases passed for the day.

[[[Note:

If you have the expertise to do exploratory testing, well and good. In that case you can do that: as the definition goes, while you ensure the core functional test cases you simultaneously observe the behavior of the application to learn where a defect might be lurking.
Exploratory testing: "a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project."]]]


A very simple example to understand ad-hoc testing:

Case 1:

Open Page 1
Click Button 1
Page 2 opens up
Do some functional testing and see that it works

Case 2:

Open Page 1
Click Button 2
Page 3 opens up [You did nothing on this page; you changed your mind and wanted to go to Page 2 to do some functional testing.] So you clicked the button to go back to Page 1
Page 1 opens up
Click Button 1
Page 2 opens up
Do some functional testing.
You observe that the functionality is not working.

[The cause could be: your developer friend's code flipped a boolean variable from true to false when you visited Page 3, and because of that variable Page 2 is not working...]

But I can bet that while Case 1 is part of your test cases, Case 2 will not be.
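The state-dependent behavior in Case 2 can be sketched in a few lines of Python. The class, method, and flag names here are purely illustrative, not from any real application:

```python
# Hypothetical sketch of the Case 2 bug: visiting Page 3 has a side
# effect that silently breaks the functionality on Page 2.

class App:
    def __init__(self):
        self.feature_enabled = True  # flag that Page 2's feature depends on

    def open_page3(self):
        # Buggy side effect: merely visiting Page 3 disables the flag.
        self.feature_enabled = False

    def page2_works(self):
        return self.feature_enabled

# Case 1: go straight to Page 2 -> functionality works.
app = App()
assert app.page2_works() is True

# Case 2: detour via Page 3 first -> Page 2 is now broken.
app = App()
app.open_page3()
assert app.page2_works() is False
```

A scripted test suite that only covers Case 1 will pass every time; only the Case 2 path, which the scripted cases rarely include, exposes the defect.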


So you could have utilized your 4 hours on the first day of testing by executing the basic core functional test cases, or instead by doing exploratory testing, trying to isolate bugs, raising your bug count, and spending time executing some uncommon scenario (one you think your brain was intelligent enough to come up with, a scenario you believe never occurred to the developer and therefore was never handled in the code).

Or, knowing the appraisal is around the corner, this urge to find more bugs could intensify. After all, this is a very easy way to sell your testing capability, and you feel you can convince your test manager why you deserve a bigger hike! Or perhaps there is already an objective about how many bugs a test engineer should raise, set by your test manager during goal setting; that objective is very active in your head right now, and you feel this is a good opportunity to uncover bugs on the first day of testing before your tester friend on the team finds them.
You feel the developer has done some basic functionality testing, so the chances of finding a bug in the core functional test cases are low; bugs can only be found through exhaustive test cases and particular combinations...

You stretched your working hours, and everybody in the team is staying late for an end-of-day meeting scheduled at 7:30 PM (sounds familiar again!).

At 6:45 PM your test manager asks you how many test cases were executed, how many passed, and how many failed. [He is more interested in the number of failed test cases and how many defects have been raised in the bug tracking system, as he is going to give the first-day report to management, where everyone is eager to hear the first-hand feedback from the testing team.]

But you know you have not followed the test cases as written in QC. You were busy finding bugs and raising your bug count...

So now what do you do? You ask for another 10 minutes to wrap up before giving him your counts and status report. At this point you reverse-engineer: you check how many test cases match what you actually tested. You find that you went deep into one particular module to uncover more bugs rather than concentrating on core functionality across all the modules. As a result, say, you have found 5 cosmetic bugs.

When the bugs were analyzed, it turned out that all 5 were in one particular combination/scenario. So the developers conclude there is no major gap.


Now here is a comparison case.
------------------------

Tester 1 -
Testing approach:
Tested all the core functionality and found 0 bugs.

Tester 2 -
Testing approach:
Raised 5 bugs but did not verify the core functionality of the application. Dived deep into one particular module and exercised the rarest combinations/scenarios.

Whom would you prefer in your team?

I would hire Tester 1, who is more reliable in giving the correct picture. At least no major damage will happen in this case.

In the case of Tester 2, how will you soothe the customer if he finds a major bug tomorrow? Will he be happy to see the 5 bugs you uncovered? A big question to ponder...

You are on the safer side if the basic functionality works well, but think about the consequences if a major bug is found in the main functionality later on.


Yes, I admit that as testers we should uncover as many bugs as possible, but first of all we must ensure that all the happy paths work as expected. Your image will take a toss if somebody else (god forbid that somebody is the customer!) finds a core functional bug in your tested application. Think about the consequences.

So should we not do more exploratory testing?
Yes, we should, to uncover more bugs. But concentrate on the crucial functional test cases first; once those are completed, you have the liberty, you are a free bird, to explore and try to break the application.



Tips of the day ----

So I think more emphasis should be put on breadth testing: cover the entire application, each feature and functionality.


Do the smoke testing first... Even if you don't find any bugs, that is far more important than raising your bug count... nobody can save you if something is missed in the basic functionality of the application.

Example: you have features A, B, C, D, E, F, and G, and each feature has 3-4 sub-features. It is wise to touch all the features first and only then dig deep into the application, down into the sub-features.
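The breadth-first ordering suggested above can be sketched as a small Python snippet. The feature and sub-feature names are the hypothetical ones from the example, shortened to three features for brevity:

```python
# Breadth-first test ordering: smoke-test every feature's core flow
# first, and only then descend into sub-features.

features = {
    "A": ["A1", "A2", "A3"],
    "B": ["B1", "B2", "B3"],
    "C": ["C1", "C2", "C3", "C4"],
}

def breadth_first_order(features):
    """Return a test execution order that touches every top-level
    feature before any sub-feature is explored."""
    order = list(features)              # pass 1: one core check per feature
    for subs in features.values():      # pass 2: the sub-features
        order.extend(subs)
    return order

print(breadth_first_order(features))
# Every top-level feature appears before any sub-feature in the order.
```

Contrast this with a depth-first order (A, A1, A2, A3, B, ...), where a fatal problem in feature G's core flow would not be discovered until the very end of the cycle.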

I think it gives more assurance to know the health of the application from a broader perspective, even with no bugs to report, than to have 5 bugs in one module with the rest of the features completely untouched.


The bug count is not always what matters; certification is more important.

Testers are there not only to raise bugs but also to certify that the application is working right, which is equally important.


[Note: In subsequent posts I will cover few more scenarios.]