Wednesday, January 28, 2015

Selenium Tips: CSS Selectors

I: Simple

Direct child

A direct child in XPath is defined using "/", while in CSS it's defined using ">"
Examples:
//div/a

css=div > a

Child or subchild

If an element could be inside another element or one of its children, it's defined in XPath using "//" and in CSS just by a whitespace.
Examples:
//div//a

css=div a

Id

An element's id in XPath is written "[@id='example']" and in CSS using "#"
Examples:
//div[@id='example']//a

css=div#example a

Class

For class, things are pretty similar in XPath: "[@class='example']", while in CSS it's just "."
Examples:
//div[@class='example']//a

css=div.example a

II: Advanced

Next sibling

This is useful for navigating lists of elements, such as forms or ul items. The next sibling will tell selenium to find the next adjacent element on the page that’s inside the same parent. Let’s show an example using a form to select the field after username.
<form class="login-form">
  <input class="username" type="text" name="username"/>
  <input class="alias" type="text" name="alias"/>
</form>
Let’s write a css selector that will choose the input field after “username”. This will select the “alias” input, or will select a different element if the form is reordered.
css=form input.username + input

Attribute values

If you don’t care about the ordering of child elements, you can use an attribute selector in selenium to choose elements based on any attribute value. A good example would be choosing the ‘username’ element of the form without adding a class.
<form class="login-form">
  <input type="text" name="username"/>
  <input type="password" name="password"/>
  <input type="checkbox" name="remember"/>
  <input type="button" name="continue"/>
</form>
We can easily select the username element without adding a class or an id to the element.
css=form input[name='username']
We can even chain filters to be more specific with our selections.
css=input[name='continue'][type='button']
Here Selenium will act on the input field with name=”continue” and type=”button”

Choosing a specific match

CSS selectors in Selenium allow us to navigate lists with more finesse than the above methods. If we have a ul and we want to select its fourth li element without regard to any other elements, we can use nth-child or nth-of-type.
<ul id="recordlist">
  <p>Heading</p>
  <li>Cat</li>
  <li>Dog</li>
  <li>Car</li>
  <li>Goat</li>
</ul>
If we want to select the fourth li element (Goat) in this list, we can use the nth-of-type, which will find the fourth li in the list.
css=ul#recordlist li:nth-of-type(4)
On the other hand, if we want to get the fourth element only if it is a li element, we can use a filtered nth-child which will select (Car) in this case.
css=ul#recordlist li:nth-child(4)
Note: if you don't specify a child type for nth-child, it will select the fourth child regardless of type. This may be useful when testing CSS layout in Selenium.
css=ul#recordlist *:nth-child(4)
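The difference between the two selectors can be sketched with a toy Python model of the list above (the helper functions are illustrative, not part of any Selenium API):

```python
# Toy model of nth-of-type vs. nth-child over the ul#recordlist example.
# Real browsers do this matching natively; this only illustrates the rule.
children = ["p", "li", "li", "li", "li"]           # tag of each child, in order
labels = ["Heading", "Cat", "Dog", "Car", "Goat"]  # text of each child

def nth_of_type(tag, n):
    """Index of the n-th child (1-based) with the given tag, else None."""
    matches = [i for i, t in enumerate(children) if t == tag]
    return matches[n - 1] if len(matches) >= n else None

def nth_child(n, tag=None):
    """Index of the n-th child (1-based), only if it matches tag; else None."""
    if n > len(children) or (tag is not None and children[n - 1] != tag):
        return None
    return n - 1

print(labels[nth_of_type("li", 4)])  # li:nth-of-type(4) -> "Goat"
print(labels[nth_child(4, "li")])    # li:nth-child(4)   -> "Car"
print(labels[nth_child(4)])          # *:nth-child(4)    -> "Car"
```
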

Sub-string matches

CSS in Selenium has an interesting feature of allowing partial string matches using ^=, $=, or *=. I’ll define them, then show an example of each:
^= Match a prefix
css=a[id^='id_prefix_']
A link with an “id” that starts with the text “id_prefix_”
$= Match a suffix
css=a[id$='_id_suffix']
A link with an "id" that ends with the text "_id_suffix"
*= Match a substring
css=a[id*='id_pattern']
A link with an “id” that contains the text “id_pattern”
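The three operators map directly onto simple string predicates; here is a toy matcher in Python (the function is hypothetical, purely to show the semantics):

```python
# Toy illustration of the CSS attribute substring operators ^=, $=, *=.
def attr_matches(value, op, pattern):
    """Return True if an attribute value satisfies the given operator."""
    if op == "^=":                 # prefix match
        return value.startswith(pattern)
    if op == "$=":                 # suffix match
        return value.endswith(pattern)
    if op == "*=":                 # substring match
        return pattern in value
    raise ValueError(f"unknown operator: {op}")

print(attr_matches("id_prefix_login", "^=", "id_prefix_"))  # True
print(attr_matches("login_id_suffix", "$=", "_id_suffix"))  # True
print(attr_matches("an_id_pattern_x", "*=", "id_pattern"))  # True
```
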

Matching by inner text

And last, one of the more useful pseudo-classes: :contains() will match elements containing the desired text. (Note that :contains() is not standard CSS; it is supported by Selenium RC/IDE's Sizzle engine, but not by Selenium WebDriver's native CSS matching.)
css=a:contains('Log Out')
This will find the log out button on your page no matter where it’s located. This is by far my favorite CSS selector and I find it greatly simplifies a lot of my test code.

Thursday, January 22, 2015

Push Your App to GitHub

Things you need before you get started

Git & GitHub

  • Check if Git is installed
    • In the terminal type git --version (1.8 or higher preferred)
  • If not, download Git from http://git-scm.com/downloads. Then, set up your local Git profile - In the terminal:
    • Type git config --global user.name "your-name"
    • Type git config --global user.email "your-email"
    • To check if Git is already configured you can type git config --list
  • Create a free GitHub account or login if you already have one
COACH: Talk a little about git, version control, and open source

Push your app to GitHub using the command line

On your GitHub profile click "New repo", give it a name (example: rails-girls), add a brief description, choose the "public" repo option, and click "create repository".
In the command line, make sure you cd into your railsgirls folder, and type:
git init
This initializes a git repository in your project
Note: If you’ve already done the Heroku guide, then you’ve already initialized a git repository & you can move on to the next steps.
Next, check whether a file called README.rdoc exists in your railsgirls directory. On Windows type:
dir README.rdoc
On other operating systems type:
ls README.rdoc
If the file doesn't exist, create it by typing:
touch README.rdoc
COACH: Talk a little about README.rdoc
Then type:
git status
This will list the new and changed files in your working directory.
COACH: Talk about some of your favorite git commands
Then type:
git add .
This adds in all of your files & changes so far to a staging area.
Then type:
git commit -m "first commit"
This commits all of your files, adding the message “first commit”
Next type:
git remote add origin https://github.com/username/rails-girls.git
Your GitHub Repository page will list the repository URL, so feel free to copy and paste from there, rather than typing it in manually. You can copy and paste the link from your GitHub repository page by clicking the clipboard icon next to the URL.
This creates a remote, or connection, named “origin” pointing at the GitHub repository you just created.
Then type:
git push -u origin master
This sends your commits in your “master” branch to GitHub
Congratulations, your app is on GitHub! Go check it out by visiting the same URL you used above: https://github.com/username/rails-girls (without the .git part)
If you want to continue making changes and pushing them to GitHub you’ll just need to use the following three commands:
git add .
git commit -m "type your commit message here"
git push origin master

What’s next?

Be a Part of the Open Source Community

  • Follow your fellow Rails Girls & coaches on GitHub
  • Star or watch their projects
  • Fork a repo, then clone and push changes to your fork. Share the changes with the originator by sending them a pull request!
  • Create an issue on a project when you find a bug
  • Explore other open source projects - search by programming language or keyword

Learn more Git


Wednesday, January 21, 2015

Adding an existing project to GitHub using the command line

  1. Create a new repository on GitHub. To avoid errors, do not initialize the new repository with README, license, or gitignore files. You can add these files after your project has been pushed to GitHub.
  2. In the Command prompt, change the current working directory to your local project.
  3. Initialize the local directory as a Git repository.
    git init
    
  4. Add the files in your new local repository. This stages them for the first commit.
    git add .
    # Adds the files in the local repository and stages them for commit
    
  5. Commit the files that you've staged in your local repository.
    git commit -m 'First commit'
    # Commits the tracked changes and prepares them to be pushed to a remote repository
    
  6. At the top of your GitHub repository's Quick Setup page, click the clipboard icon to copy the remote repository URL.
  7. In the Command prompt, add the URL for the remote repository where your local repository will be pushed.
    git remote add origin <remote-repository-URL>
    # Sets the new remote
    git remote -v
    # Verifies the new remote URL
    
    Note: GitHub for Windows users should use the command git remote set-url origin instead of git remote add origin here.
  8. Push the changes in your local repository to GitHub.
    git push origin master
    # Pushes the changes in your local repository up to the remote repository you specified as the origin

What is Client side performance testing in Client Server Application?

In this article we are going to learn about client-side performance testing, and look at client-side analysis practices and helper tools.

Eight to ten years ago, the web was totally different from what it is now. Back then there were fewer types of client and less client-side processing. Nowadays, especially after the Web 2.0 boom, clients have become much smarter, with more functionality and constant innovation through new technology. Hardware has also become cheaper, so clients have started using local hardware instead of depending fully on the server as in the old days. In a performance engineering context, then, doing performance testing and monitoring only on the server does not make sense: we would miss a lot of performance issues rooted in client functionality. To measure the overall performance situation, it has become necessary to test both server and client in any client-server application. So,

What is Client side performance testing?

When we say "client side," we mean everything involving client-side activity. Performance testing of an application based on its client activity is client-side performance testing. For example, for a web application, client-side performance includes the server execution time plus client-side browser rendering, JS/AJAX calls, socket responses, service data population, and so on. It can therefore differ based on operating system, browser version, environment settings, firewall and antivirus software, higher-priority processes running at the same time, and, of course, user activity.

So, the main targets of client-side performance testing are:

1. Measuring actual client timing for particular scenarios. Timings can be grouped as business transactions or measured per request for a single user.

2. Measuring single-user time under different load and stress scenarios. This is really part of usability, but it is included as a performance test activity.

3. Observing application behavior when a server is down. Especially under stress, when one or more servers go down, what is the situation? This can be critical for data integrity. I have also tested server recovery time after going down.
This particular type is driven entirely by requirements. For example, in our current project we run this test every day to track progress: we run regression test scripts (critical items only) so we can see where our business transaction timing is going.
As you know from my previous post on types of performance testing, this leads us to two basic parts: performance measurement, and performance monitoring or debugging.

Client-side performance measurement:

This part is tricky. In the performance world, when we say "performance tools" we usually mean server-side measurement tools like LoadRunner or JMeter. So what about the client side? Since it was not popular before, it was mostly done manually, and it is still a good practice to sit with a stopwatch, exercise the application's critical functionality, and measure it. I remember doing that back in 2008. It is handy: no automation needed, and not much technical knowledge required. But manual time measurement has error, because humans are not as precise as machines at measuring time. So there should be a tool.

Before the JMeter plug-ins, there was no notable tool for client-side web application performance testing. Now we can use the JMeter WebDriver plug-in to perform the same actions a human does and measure the time accurately. We can also do the same steps programmatically using browser simulation, for example:

1. Selenium WebDriver running in Java/C#/Python/Ruby/Node.js with any supported test runner that measures time.
2. Ruby + Watir + Gherkin/Cucumber
3. Java Robot simulation
4. Java/C# Robot Framework
5. Native action simulation tools/scripts (AutoIt, Sikuli)
6. Robotium/MonkeyRunner/Appium for mobile client performance measurement.

So, since JMeter has this WebDriver sampler as a plug-in, we can use it. I will provide a separate post with a WebDriver Sampler example.
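Whatever the driver, the measurement itself is just wall-clock timing around a scripted user action. A minimal sketch (the transaction here is a stand-in `sleep`; in practice it would be a WebDriver step):

```python
# Minimal client-timing harness: run a "transaction" several times and
# collect basic statistics, the way a stopwatch tester would by hand.
import time
import statistics

def measure(transaction, runs=5):
    samples_ms = []
    for _ in range(runs):
        start = time.perf_counter()
        transaction()                                  # the user action
        samples_ms.append((time.perf_counter() - start) * 1000)
    return {"min": min(samples_ms),
            "avg": statistics.mean(samples_ms),
            "max": max(samples_ms)}

# Stand-in transaction: simulates a step that takes at least 10 ms.
stats = measure(lambda: time.sleep(0.01))
print(stats["min"] >= 10)  # True: sleep guarantees at least 10 ms per run
```
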

Client-side performance monitoring:

This means we have to monitor our application as well as client resources.
Every operating system, Windows or Linux, has its own internal tools for monitoring resources. And, as an open source JMeter consultant, I should say we can use the PerfMon JMeter plug-in to monitor the client side (you might say localhost).

Now, client-side application monitoring really depends on the type of client. If it is a TCP client, you have to use a TCP monitoring tool on the port your application uses.

Let's see some monitoring and analysis tools for the web application protocols, HTTP(S).

1. Browser-based tools: Most modern browsers have built-in tools; in IE or Chrome, press F12 to see them (they follow the W3C standard on Navigation Timing).
->I like YSlow with Firebug in Firefox (install Firebug first, then YSlow).
->Most popular: PageSpeed by Google.
->Tools from Chrome extensions, such as Easy Website Optimizer and the developer tools; for REST web services, REST Console or Advanced REST Client, etc.

2. 3rd-party websites:
->GTmetrix is one of my favorites
->WebPageTest is very useful

3. Proxies: For traffic monitoring I use
->YATT with WinPcap
->MaxQ (not focused on monitoring, but you can use it for that)
->IEWatch

4. 3rd-party tools:
->DynaTrace
->SolarWinds
->AppDynamics
->Nagios (free)
->MS Message Analyzer
->For web service testing, SoapUI

And more and more... :)
Paid tools are good, but I guess you can use a skilled person with a set of other tools rather than paying for them... :)
Helper tools: In different web architectures, data reaches the client in different formats. So you should have:
1. A Base64 decoder that supports different character sets
2. A URL decoder
3. A decompression tool
4. File/character format converters
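The first three helpers are covered by the Python standard library alone; a quick sketch (the sample payloads are made up):

```python
# Decoding helpers a client-side tester commonly needs, using only the
# standard library: Base64, URL-encoding, and gzip decompression.
import base64
import gzip
from urllib.parse import unquote

print(base64.b64decode("aGVsbG8=").decode("utf-8"))         # hello
print(unquote("name%3Dr%C3%A9sum%C3%A9"))                   # name=résumé
print(gzip.decompress(gzip.compress(b"payload")).decode())  # payload
```
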

Here we have the tools, but before using them we need to define what to monitor. Usually, for an application, we monitor:
1. Application rendering time
2. Specific JS/AJAX request/response/processing times
3. User-dependent request times
4. Client-side validation time
5. Loading time for scripts/styles/dynamic content
6. Total and request-specific data received and sent
7. How requests are queued and processed (the behavior)
8. Any exception-based (server/client) function or behavior
9. Business transaction or particular request times

And for client resources:
1. CPU/cache/memory/disk/IO/bandwidth occupied by the browser and the application
2. If the application interacts with any other service or application, we need to monitor that too.

For example, our application once used an export function that opened data in MS Excel, and at one point Excel crashed because our application occupied so much memory that Excel did not have enough left to load the large data set.

Test plan for client-side execution:
Usually a separate thread or user is used to run the client-side performance test to get the timing. It is not like a server-side script that runs thousands of users in parallel; it is specifically made to capture single-user execution time for specific scenarios.
So, in the next post we are going to test a sample application using the JMeter WebDriver sampler and measure the time.

What is Performance Reporting? Example with web application.

In this post we are going to learn what needs to be included in a report of performance test results and analysis. A report is essentially an integration of the performance results, and it communicates with the different types of people interested in performance testing.

What is a performance report?
After a successful performance test execution, we testers have to provide a report based on the performance results. I mean a report, not the raw results: a report should be fully formatted based on the requirements and targeted at a specific person or group of people.
First of all, we need to know what reporting means. I have seen lots of performance reports full of technical terms and lots of numbers. Yes, that is what we performance testers do.

But what I have found is that not everyone is good with those numbers. Actually, I think those numbers do not mean anything without context. So how do we get context? It's not that hard. Performance testing matters to certain types of people in a group, and a good performance engineer should add value to the results by analyzing them against the different goals and requirements. I have a separate blog post on goals and requirements; please read it before this one.

So, after getting the requirements, we have to make the reports. I will give an example of a web application (financial domain) for better understanding. Let's discuss the steps.

Step 1: Analyze the results:

This is very fundamental: we have to analyze the results. I believe analysis is as important as the performance test run itself; it is what gives the test execution real value and can pinpoint problems. A performance tester should have that capability, otherwise where is the skill? He has to analyze the results and identify the issues that may or may not be there.
So, based on the goals and requirements, we should gather information and categorize it.
Example: we have financial software that processes transactions (debit, credit) with a lot of approval steps, and a lot of people are interested in those transaction results (business, marketing, and software teams). So we categorize the test results into those groups and show each report only to the related group.

Next, we match the requirements with the grouped data:
-What were the goals for this kind of people?
-What were the primary requirements and targets?
-What is the actual value, and how far are we from the expected one?
-What are the impacts? Impact on revenue, client feedback, the company's reputation, interaction with other systems, etc.
-What are the causes? Architectural problems, database problems, application problems, human resource skill problems, process problems, resource problems, deployment problems, etc.
-What is the evidence for those causes? What are the related values? How much can the project tolerate, and how much can it not? We might need to use profilers along with the performance test tools, like ANTS Memory Profiler, YourKit Profiler, or even the framework's built-in profiler (on the language platform you are using).
-What can be done to resolve them?
(This part is tricky; a new performance tester may not come up with it, but he can talk to his system architect or lead to find a solution. That is what I am trying to do in my current project: I propose possible solutions, discuss them with the dev lead, and we sit together and experiment to suggest the best solution that matches the existing architecture. By the way, there can be issues with the architecture itself. I saw this in 2009, when I was involved with a product that could only support a certain number of users (avoiding specifics); to scale up the software, our team had to change the full architecture of the application to support almost 400% more.)
So, we have done some analysis, and we could do a lot more. Usually, if you are analyzing the performance results of a product for the first time, you will find a lot of issues and need a lot of time to do it. But as time goes by, the issues get solved.

Step 2: Group data:

After analysis, we need to select which data should go to each group. This is kind of tricky if you do not know the interested groups, so the first step is to know those people and have some idea of how to communicate with them. I think a performance report is just a communication format for your performance results, so you have to be very careful here, and it should be based on the goals. Let me give an example from a project (a web app). We had three kinds of people interested in performance testing.

1. Business users or clients, the real users who interact with the system. They do the trading themselves, so for them the high priority is how fast we can complete a business transaction. A transaction includes multiple steps and approvals, so: how fast can we do all of that? If we add anything like throughput, request size, or bandwidth measures, it will not get as much attention as the total time of each business transaction. And, since they are paying, we have to ensure that performance does not degrade after each build/release; if it does, there should be a proper explanation from the development team (a new feature, a DB migration, added security, etc.).

2. Product stakeholders (CxOs, BAs, and managers): These people are not like business users; they know the basics of the inner system components, but most of the time I find they want to avoid technical details. They are interested in the same values as the users, but beyond the timing values they also want to know more detail about what is causing them. And if you include how to resolve the issues at minimal cost (with a cost estimate), believe me, you will be appreciated; if you add those work values and your findings, these people will take much more interest in performance reports.

3. Development team (devs and QAs): For the development team, things are a little different, and we used to attach the raw results to their report. We start with the problems, explain them, provide evidence in a reproducible way (even teaching them how to use the performance tools), and give guidance on how to solve them: best practices, code samples, and tricks people have used so far. As graphs, we give them detailed timings (throughput, size, hits to the server, server processing time, DB request time, individual POST/GET request times) and resource timings with the expected measurements.

Step 3: Arrange reports (shared example on Drive)

Like all other reports, a report typically contains (I am listing the parts common to all groups):

->A first page with a heading naming the product and the performance testing
->A page for the table of contents
->An introduction: this keeps people in context; add the objective of the report in two or three sentences.
->Summary: the final outcomes; pass/fail/improvement areas.
->Test objectives: why are we testing? This should contain the requirements in bullet format.
->Test strategy: what the plan was, which tools were used for testing, and which for analysis or debugging.
->Test overview: how it was tested; what the conditions were during testing, what was monitored, and what was observed.
->Test scenarios: the scenarios involved in the test execution (possibly divided by group of users).
->Test conditions: conditions based on tool, environment, application settings, and configuration, including delays.
->Load profile: how users generated load during the test. JMeter, LoadRunner, and most other tools provide this; you can take a screenshot of the graph and add it here, e.g., 100 users for 1 hour, 500 users for 3 hours, with graphs.
->KPI (optional): the Key Performance Indicator. Based on the requirements, each group needs a value that indicates the performance situation of the product; it usually drives future investment and activity. I will provide a separate post on how to build a KPI from user requirements in case you don't have any measurement.
->Results: tabular results, common to every tool. JMeter provides the Summary Report and the Synthesis Report. Sometimes this section is optional, to hide detailed results from end users/business users; we used to hide them.
->Result graphs: all graphs based on the tabular results. We should be very careful in this area and put only related graphs here: look at the goals and requirements, then decide, putting the context with each graph. Ask yourself why you are using each graph.
For example, in our project we included only the transaction comparison graph for business users.
For stakeholders, we added throughput/min, hits/sec, error % over time and users, etc.
And for developers, we included almost every type of graph the JMeter listeners provide, with graphs referencing raw requests rather than business transactions so that each step can be shown.

Note:
->Sometimes we might have to change the unit of the results for a better graph, e.g., throughput per second to per minute; base it on the range of your values.
->Please be careful to add at least one line describing each graph before putting it in the report.

->Product analysis: this part should be shown only to the dev/QA team and, if they are interested, to stakeholders. It is a very important part from the project's point of view. Put everything relevant from your analysis here, specifying issues with evidence.
This might include a separate, more detailed report.
This might have detailed screenshots from different tools.

->Suggestions: this part should be per group. Suggestions in the business user report should be generic, at best referencing the UI. For stakeholders, they can reference product modules or resources (like the DB server or app server). But for the dev team, they should pinpoint solutions or best practices. This whole area depends on the project context, so use it sensibly, and try to be more generic and non-technical in language (I have learned this the hard way...).

->Conclusion: this section should contain three or four sentences defining the performance status and things to consider for testing in the future.

->Appendix: this section should give detailed definitions of the terms used in the report. Throughput, hits per second, transaction, average, min, median, 90% line, etc. should be defined here.
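Those appendix terms map to simple statistics; a sketch of how the summary numbers can be derived from raw sample times (the numbers are invented, and the 90% line follows JMeter's convention of the value below which 90% of samples fall):

```python
# Deriving report summary numbers (min, median, average, 90% line)
# from raw response-time samples in milliseconds.
import math
import statistics

samples = [120, 150, 180, 200, 210, 230, 250, 300, 400, 900]

def percentile_line(data, pct):
    """JMeter-style percentile: the value below which pct% of samples fall."""
    ordered = sorted(data)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

print(min(samples))                  # 120
print(statistics.median(samples))    # 220.0
print(statistics.mean(samples))      # 294.0
print(percentile_line(samples, 90))  # 400
```
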

Step 4: Report format:

Performance reports can be provided in PDF, DOC, Excel, or even PowerPoint format. It really depends on your company or working group; if you don't have any standards, just follow any other report of your project. The format is not that important unless your group maintains a system that reads the report and shows it to other people, in which case you have to be specific about the file specification. I personally prefer PDF.

Notes:
->Sometimes we need a section with a summary of the report document.
->Some reports might have a section listing who will see the report.
->Some reports may have fewer sections than this example; just make sure yours follows your requirements.

So, build your performance report with context, and give it priority within the performance test activity.

What are Performance requirements?

In this article we are going to see the details of performance requirements: what they are and how to deal with them.

What are performance requirements?
Like all testing, we need requirements before any performance activity. So what is a performance requirement, what does it look like, and how do we deal with it?
As we know, performance testing is all about time and resources, so performance requirements will be full of time and resource requirements for the application. Basically, they relate to the following questions:
How many users?
How fast?

And based on that, we need to see:
What are the bottlenecks?
How can we solve them?

Before going deeper into requirements, please see the performance goals in this post; it is necessary for context.

As we have seen in my previous post about performance test types, we get requirements based on the application and the infrastructure, and we need them expressed in terms of time. Usually, time is measured in milliseconds and size in bytes. Based on that, performance requirements fall into the following types.

Application requirements:
Server side:
->Number of users the application can handle concurrently
->Maximum request/transaction processing time
->Maximum memory/CPU/disk space usage per request/transaction
->Minimum required memory/storage/disk space for running the supported number of users
->Application (request/transaction) behavior in case of errors or extreme peaks

Client side:
->Request/transaction processing time. This involves the particular request time, browser rendering time (for browser-based applications), native app rendering time (for mobile/desktop applications), or even environment rendering time (if it runs under a custom environment/OS).
->Maximum memory/CPU/disk space usage per request/transaction. This usually depends on where the application runs; there should be a specification for the environment, e.g., for a web app, which OS and which browser version, with what settings and under what conditions (antivirus or any monitoring tools).
->Minimum client environment specification and its verification
->Application (request/transaction) behavior under errors or extreme peaks

We also need application monitoring for all of these, on both the server side and the client side. This may involve application monitoring and debugging tools as well as framework-based and environment-based tools (OS tools, browser tools, network tools).

And before the project starts, please define those monitoring requirements; they are also based on the goals. For example, if you are targeting a server update, have special monitoring on the server. I once had a chance to work with performance testing requirements that involved monitoring activity: those performance tests were designed to monitor not only the application but also the infrastructure, with specifications for network/bandwidth consumption and resource consumption on the server and the client (PC/tablet/mobile). Example:

Server performance:
->Maximum CPU/memory/storage/virtual memory usage during certain scenarios
->Fault/error recovery time (including reboot or initialization time)
->Resource temperature monitoring during extreme conditions (full load, and rising)
->GC and thread conditions at extreme peak
->Maximum log size for app/DB/web server logs

Client performance:
->Maximum CPU/memory/storage/virtual memory usage during certain scenarios
->For web applications, browser behavior/response times (apps for mobile)
->Device temperature monitoring during heavy data/complex operations (for mobile/tablet devices)
->GC and thread conditions at extreme peak
->Maximum client log size and limit
->For web apps, network and application resource (file/request) monitoring, especially the time and resources needed on the client
->Web applications nowadays do a lot of processing on the client side, so depending on the application's architecture, we need to trace that in the client environment

Network performance:
->Maximum/minimum bandwidth per application/transaction/request
->Behavior during extreme peaks
->Fault or error recovery time and behavior
->Since network devices are involved, tracing their behavior is usually part of the specification. (For example, we had a hub and needed to test its temperature during multiple requests to the server connected to it. This usually gets complex when we send large data over the network through the lower network layers.)

Moreover, depending on the architecture, there should be architecture-specific performance requirements. For example, we had an ASP.NET application that followed the default view state and callback policy, which became the main cause of application slowness. We had to test the default timing for a particular transaction, then test it with different browsers to verify we were going in the right direction, and we also tested the resources and time needed on the client for a particular request. Then a new version was released with a modified view state implementation, and we did the whole thing again to prove the application performed well and used fewer client resources.

Example: We had a small SOAP service that takes an XML document and, based on its data, processes it and returns XML. The client sold that service to other vendors, so our requirements looked like:
->Maximum server processing time for a particular XML type (we had several types): 750ms.
->The server should serve 120 concurrent requests, and at least 2,000 users in a one-hour block, for a particular XML type.
->The client should never see more than 1,200ms of response time (request/response time = network time + server processing time).
->Find the maximum number of users the server can support.
->Beyond the maximum number of users, under extreme conditions, the server should not die; it should return a "server busy" message as XML.
->During high or extreme load, clients should not get any 4xx or 5xx responses.
Note: Since our server requires authentication tokens, we should not have any security issues during high or extreme load.
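Requirements like these can be checked automatically once timings are collected. Here is a minimal sketch, assuming the two thresholds from the example above; the sample latency lists are hypothetical stand-ins for real measurements.

```python
# Sketch: checking the example SOAP-service requirements against measured
# timings. Thresholds come from the requirements above; the measurement
# lists below are made-up examples.

MAX_SERVER_MS = 750       # max server processing time per XML type
MAX_END_TO_END_MS = 1200  # max client-observed response time

def check_requirements(server_ms, end_to_end_ms):
    """Return a list of human-readable requirement violations."""
    violations = []
    worst_server = max(server_ms)
    if worst_server > MAX_SERVER_MS:
        violations.append(f"server processing peaked at {worst_server}ms")
    worst_total = max(end_to_end_ms)
    if worst_total > MAX_END_TO_END_MS:
        violations.append(f"end-to-end response peaked at {worst_total}ms")
    return violations

# Hypothetical measurements from a test run:
server_times = [420, 610, 730, 748]
total_times = [900, 1150, 1199, 1250]
print(check_requirements(server_times, total_times))
```

In a real run, the lists would come from the load tool's result log, and you would likely check percentiles rather than only the maximum.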

Analysis & Reporting Requirements:
In real testing, some requirements relate to analysis and reporting. These are mainly goal-based performance requirements. I think goal-based reporting requirements are mandatory, so that we can find bugs during the analysis phase.

Analysis should be based entirely on the performance test goals, so that we can pinpoint issues, bugs, or improvements and present them in well-formatted reports for different types of stakeholders. This matters because performance test results are not easy for everyone to understand; we need to format the results and issues for stakeholders, developers, other QA members, business people, the CTO/CEO/COO, and so on.

User Stories: In agile projects, the performance tester should express performance scenarios as user stories. This makes them easy for other team members to understand, and they are easy to write: just follow the standard user story rules. From my example, one story could be: a client (mobile/web) should be able to process a certain type of XML within 1,200ms while the server is processing 80 other requests at 70% resource load.

And if you don't have requirements, use my previous post on goals together with this one to derive requirements of your own.

So, in summary (I won't go deeper into client/server here, just the principles):

The MindMup link (open with MindMup)

Why Performance Test? Performance test goals.

In this article we are going to see why we do performance tests, and what the main performance test goals are.

It is really depressing how few performance test projects start with goals (as far as I have seen); the goals get defined gradually as the project moves forward. Having goals up front saves time: performance testing involves a lot of parameters to take care of (I will write a separate post on this), so we should have goals. So,

What are performance test goals?
A performance goal states exactly what we want to accomplish from the performance testing activity: what we want to measure or see. These measurements have different perspectives, so let me give some real examples, which are easier to understand.
Different performance test goals usually come from different groups of people on the team, such as:

Business Goals (business people are involved):
->We want to grow revenue, so what can we do to improve our performance? (A major source of customer complaints.)
->We have an upcoming sales event/cycle. How many target clients can we handle? What point should we not exceed?
->We might get good publicity from important people in the media on a particular date. Which areas do we need to address to scale up the application?
->Specifically for an online shopping application: with Black Friday or Cyber Monday coming, which areas need attention to keep the site up and running for selling?
->Similar concerns apply to ticket booking applications during holiday seasons and major events (concerts, sports).
->Marketing might need to know the application's tolerance: how big an offer can they make to customers, and when?
->How much budget do we need to scale our application for a certain number of users?
->How quickly are we back in business (functional) after a crash?
 
Technical Goals (devs & QAs):
->How fast is our application? How much data are we using?
->We want performance testing to measure our application's resource consumption and timings.
->We are getting lots of user/QA feedback on certain functionality/transactions, so we need performance tests on those to debug bottlenecks.
->How good is our application's recovery process?
->We want a performance review to see how the application behaves during major request processing.
->How scalable is our application? Where is it failing?
->We are migrating or changing our application architecture, and we want benchmarks.
->We are adopting a new language/framework/platform to make the application faster. Are we really faster than before?
->We ship a release every week with continuous delivery; performance-wise, where are we? Improving or degrading?
->In Scrum: let's have a performance cycle, and before that, a performance test.
 
Usability Goals (QAs, BAs, UXs):
->How many users are we supporting under usual behavior?
->What is the behavior under high or extreme load?
->We want to see how our customers feel while using the application.
->What critical issues may occur at the peak point?
->What prevention plans are needed when users face such bottlenecks? (Risk planning.)
->How well does the application handle error conditions under high load? How is recovery?
 
Operational Goals (admins, net admins, ops):
->Does our infrastructure support the scalability the application provides?
->We want performance testing to measure the application's resource consumption and timings from an operations/admin perspective.
->We are moving our whole physical resource system (a DB/app server change, or a move to the cloud); what is the benchmark, and are we really improving?
->We are adding more resources to the system; does the application's performance improve when we add them, or is it a waste of money?
->What is the application quietly doing to our resources while running? (I faced this in practice with DB logs that grew so big while the app was running that they overflowed the C drive and the application crashed.)
->What should the recovery plans be? Which areas need attention?

What is Stress Testing? How to do it

In this article we are going to get a basic idea of what stress testing is, and some basic steps for doing it. In my tool-specific posts on stress testing, I will explain how to implement it.
Here I am just describing my understanding; it may vary from others'. If you have any comments, they are very welcome.

In the software world, stress testing is the process of determining a system's ability to perform its functionality under unfavorable conditions. In a word, stress testing observes how software works under stressful conditions. It is one kind of performance testing.

The main idea is to stress a system up to the breaking point in order to find bugs that make the system unstable. The system is usually not expected to stay fully functional, but it should behave in an accepted manner.

Why do we do stress testing? Primary goals:
-To determine the system's robustness, availability, and reliability under extreme conditions
-To identify application issues (working capability/bugs) that arise only under extreme conditions
-To find synchronization bugs (data post/get)
-To find timing bugs (slow responses)
-To find interlock problems
-To find priority problems (slowness in determining task priority)
-To find resource loss (data/instructions)
-To find the recoverability of the system under failure conditions
-To find vulnerabilities under stressful conditions (e.g., stress testing on authentication)

When do we need stress testing?
Not every project needs stress testing. We have to perform it whenever we need:
-To get the breaking point of the system
-To get the maximum capability for the specification
-To define the behavior/functionality at the maximum specified capability
-To estimate the lifetime of the software in the market (optional, for a web application that interacts with users)

When do we perform stress testing?
Usually we do stress testing after the alpha release, that is, before going to users. While still in the development phase, stress testing can surface interesting bugs such as:
-Data is lost or corrupted
-Resource usage remains unacceptably high
-Application components fail to respond
-Unhandled exceptions are presented to the end user

What do stress testing scenarios look like?
Stress scenarios should usually contain the following:
-Heavy loads (number of users)
-High concurrency (maximum data handling or transfer)
-Limited computational resources (limited memory/processor/bandwidth)

Sample scenarios:
-A DoS (denial of service) attack, or a situation where a widely viewed news item is opened by a large number of users on a website during a two-minute period (an excessive volume of either users or data).
-Resource reduction, such as a disk drive failure, memory failure, or busy processor; a situation where resources (processor/memory) are occupied by other processes on the server.
-Unexpected sequencing: a situation where operations happen in an order that is costly (more time- and resource-hungry).
-Unexpected outages and outage recovery: a situation where the application reaches its maximum supported capability and must still perform its defined tasks within its defined scope.

For example, deploy the application on a server that is already running a processor-intensive application. The application is immediately "starved" of processor resources and must compete with the other application for processor cycles. We can also stress-test a single web page, or even a single item such as a stored procedure, a class, or a particular method (function).

Scenarios are usually divided into the following types:

Application stress: This type of test typically focuses on more than one transaction on the system under stress (without isolating components).
Target: finding defects related to
-Data locking and blocking
-Network congestion
-Performance bottlenecks (in different components or methods across the entire application). Because the test scope is a single application, this is the most common type of stress testing.
When:
-After a robust application load-testing effort
-As a last test phase for capacity planning
-Generally when we need to find defects related to race conditions or general memory leaks from shared code or components

Transactional stress: This usually works at the transactional level with load volumes that go beyond production operations. These tests focus on validating behavior under stressful conditions (e.g., high load with the same resource constraints as the entire application).
Target:
-To isolate an individual transaction, or a group of transactions
-To gain a specific understanding of throughput capacities and other characteristics of individual components
When:
-We need to tune or optimize the application
-We have to find error conditions at a specific component level

Systemic stress: Also known as integration stress testing or consolidation stress testing. This test generates stress/extreme load on the targeted application while multiple applications run on the same system, pushing the boundaries of the applications' expected capabilities to an extreme.
Target:
-To find defects in situations where different applications block one another and compete for system resources (memory/processor/disk space/bandwidth)
When:
-We need to specify the application's behavior under extreme conditions
-We need to specify our application's impact on the system under extreme conditions
Note:
-Large-scale efforts usually stress-test all of the applications together in the same consolidated environment.
-Some organizations choose to perform this type of testing in a larger test lab facility, or with a hardware or software vendor's assistance.

What are the basic steps of a stress test?
1. Gather information:
-Application usage characteristics (scenarios)
-Concerns about those scenarios under extreme conditions
-Workload profile characteristics
-Current peak load capacity (obtained from load testing; not mandatory in all cases)
-Hardware and network architecture and data (for better scenario generation)
-Disaster-risk assessment (the application's expected behavior on failure)
-Results from previous stress tests (for comparison or benchmarking)

2. Define goals: We have to identify the objectives of stress testing (why are we testing?):
-Finding the ways the system can fail suddenly in production
-Getting helpful information (number of users, volume of data, resource use, timings, etc.) to the development team so they can build defenses against catastrophic failures
-Measuring how the application behaves (user interactions, basic functionality, data correctness, etc.) when system resources are depleted
-Ensuring functionality does not break under stress (proofing the system)

3. Define scenarios: We have to identify the scenarios/cases that need to be stress-tested. They should be tied to versions so that we can compare at any point in the software development life cycle. When defining scenarios, keep the following in mind:
-How critical they are to overall application performance
-Their effects on the system (intensive locking and synchronization, long transactions, disk-intensive (I/O) operations)
-Affected areas based on load testing reports
Examples: updating inventory in an order processing scenario; user activity and interest-specific search results; memory-overflowing queries such as fetching an entire table.

4. Define workload: We have to define the workload for a particular scenario based on the workload profile and peak load capacity inputs. The key is to test systematically with various workloads until we create a significant failure. The steps are:
Define work distribution: Define the work to be done (the steps) in each key scenario. This is usually based on the number and type of users in the scenario during the test.
Define peak load: Define the maximum expected number of users at peak load, and the percentage of that user load for each key scenario.
[There is another way to express this: we can use the inverse percentage. That is, we define the percentage of users free from a key scenario, so those free users can be assigned to other scenarios. This is helpful in calculations.]
Note: The workload must use accurate and realistic test data (e.g., type and volume, different user logins, product IDs, product categories) so that we can simulate the important failures (deadlocks, resource consumption).
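The peak-load split described above is simple arithmetic; here is a minimal sketch. The scenario names and percentages are made up for illustration.

```python
# Sketch: distributing a peak user load across key scenarios by percentage,
# as described in step 4. Scenario names and percentages are illustrative.

def distribute_load(peak_users, scenario_pct):
    """Split peak_users across scenarios; percentages must total 100."""
    assert sum(scenario_pct.values()) == 100, "percentages must total 100"
    return {name: peak_users * pct // 100
            for name, pct in scenario_pct.items()}

profile = {"log_in": 20, "search": 50, "checkout": 30}
print(distribute_load(1000, profile))
# e.g. log_in gets 200 of the 1,000 peak users
```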

5. Make metrics: We have to define metrics for data collection on the application's performance, based on the scenarios (potential problems) identified in the goals section. The metrics focus on:
-How well (or poorly) the application is performing compared to our performance objectives
-Defining problem areas and bottlenecks within the application
When building the metrics, if we consider the following bold items (parts of our scenario to measure), we should also include the indented items as sub-items in the measurement procedure.
Processor:
-Processor utilization
-Processor responses
Memory :
-Memory available
-Memory utilization
Disk :
-Disk utilization
-Disk responses
Network:
-Network utilization
-Network Bandwidth
Process:
-Memory consumption
-Processor utilization
-Process recycles
Transactions/business metrics:
-Transactions/sec
-Transactions succeeded
-Transactions failed
Threading:
-Contentions per second
-Deadlocks
-Thread allocation
Response times:
-Transaction times
Note: Metrics should relate to the performance and throughput goals, providing information about potential problems.
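A run's metrics can be gathered into one record, with derived figures such as transactions per second computed from the raw counts. This is a minimal sketch; the field names mirror a few of the categories above, and the sample values are invented.

```python
# Sketch: a minimal container for some of the metric categories listed
# above, plus a derived transactions-per-second figure. All values are
# illustrative.
from dataclasses import dataclass

@dataclass
class StressMetrics:
    duration_s: float          # test duration
    transactions_ok: int       # transactions succeeded
    transactions_failed: int   # transactions failed
    cpu_utilization_pct: float # processor utilization
    memory_available_mb: float # memory available
    deadlocks: int             # threading: deadlocks

    @property
    def transactions_per_sec(self) -> float:
        total = self.transactions_ok + self.transactions_failed
        return total / self.duration_s

m = StressMetrics(duration_s=60, transactions_ok=1140,
                  transactions_failed=60, cpu_utilization_pct=88.5,
                  memory_available_mb=512, deadlocks=0)
print(m.transactions_per_sec)  # 20.0
```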

6. Create test cases: We need to create test cases that define the steps for running a single test. Each test case should state the expected results and/or the key data of interest to be collected (for analysis/reporting), in such a way that each test case can be marked "pass," "fail," or "inconclusive" after execution. Example:
Title: Stress on successful log-in
Load: 1,000 simultaneous users (how many users will hit the site in a time unit)
Time: 5 seconds (time to ramp up the simulated user load; usually 1-10 seconds)
Duration: 5 hours (how long the test will run)
   
Expected results:
-The application process should not recycle because of deadlock or resource consumption.
-Throughput should not fall below 20 RPS (requests per second). [Depends on your application requirements.]
-Response time should be less than 2,000ms for at least 90% of total transactions completed.
-"Server busy" errors/HTTP errors should not exceed 10% of total responses.
-Log-in should not fail during test execution; the log-in session count should match the successful log-in count.
-After log-in, information should be handled properly by the website/application (data integrity).
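Expected results written this way translate directly into an automated verdict. Here is a minimal sketch using the thresholds from the example test case; the result dictionary is a hypothetical run summary.

```python
# Sketch: turning the expected results above into a pass/fail verdict.
# Thresholds are the ones from the example test case; the `run` dict is
# an invented run summary.

def verdict(results):
    checks = [
        results["throughput_rps"] >= 20,                  # min 20 RPS
        results["p90_response_ms"] < 2000,                # 90th percentile
        results["error_responses"] <= 0.10 * results["total_responses"],
        results["login_failures"] == 0,
    ]
    return "pass" if all(checks) else "fail"

run = {"throughput_rps": 24.5, "p90_response_ms": 1850,
       "error_responses": 31, "total_responses": 4000,
       "login_failures": 0}
print(verdict(run))  # pass
```

A real report would also record which check failed, so the bottleneck is visible, not just the verdict.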

7. Simulate load: We have to use tools to generate load for each test case and capture the metric data. Before starting:
-Validate that the test environment matches the configuration the test was designed for.
-Ensure that both the test and the test environment are configured correctly for metrics data collection.
Note:
-We may perform a quick "smoke test" to ensure the test script and remote performance counters work correctly.
-We may reset the system (unless the scenario requires otherwise) and then start the formal stress test execution.
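Tools like those listed below do this at scale, but the core of load simulation can be sketched with the standard library: fire requests concurrently and record per-request latency. `fake_request` is a placeholder for a real HTTP/SOAP call, and the worker and request counts are illustrative.

```python
# Sketch: generating concurrent load and capturing per-request latency.
# `fake_request` stands in for a real network call; swap in your client
# code for an actual test.
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(i):
    start = time.perf_counter()
    time.sleep(0.01)  # placeholder for the real request/response round trip
    return time.perf_counter() - start

def run_load(n_requests=50, concurrency=10):
    """Run n_requests across `concurrency` workers; return summary stats."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(fake_request, range(n_requests)))
    latencies.sort()
    p90 = latencies[int(0.9 * len(latencies)) - 1]  # 90th percentile
    return {"count": len(latencies), "p90_s": p90}

print(run_load())
```

Real tools add ramp-up periods, think times, and distributed load agents on top of this basic pattern.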

8. Analyze results: We have to analyze the metric data captured during the test against the expected levels. For a failed scenario we may have to do:
-A design/architecture review
-A code/unit test review
-A re-run of the failed stress tests under a debugger

9. Deliver reports: A stress testing effort typically requires reports after testing, presented visually (with different types of charts). The reports may include comparison charts against previous test results. A standard delivery report should identify the bottlenecks as well as the failed scenarios (bugs).

Some stress test tools:
-JMeter
-WAPT/WAPT Pro
-LoadUI
-LoadRunner
-Visual Studio Load Test
-Solex
-TestComplete
-WebLOAD
-NeoLoad
-WCF Load Test (services only)
-SOASTA
-The Grinder

What is Load Testing? How to do it

In this article we are going to see what load testing is and how to do it. I will discuss the strategy for performing load testing; in my tool-specific posts, I will explain how to implement a load test.

Load testing is the process of putting demand on a system or device and measuring its response. For software, the demand is the set of functions the system has to perform. For example, a load test will define and measure a computer's, peripheral's, server's, network's, or application's maximum level of work within its specifications.

The primary goals of load testing are:
-To define the maximum amount of work a system can handle without significant performance degradation
-To compare a system's capabilities and accuracy with other systems (under controlled lab conditions)
-To get an idea of how the system functions in real life
-To get different types of reports for an overall picture of a new or legacy system

There are two main types of load testing:

A. Longevity testing: Measures the system's ability to keep performing standard operations (standard load) over time. The main focus is how long the system stays stable and consistent.

B. Volume testing: Measures the system's stability under heavy operations (heavy load) within a standard time. The main focus is the volume of operations (load) the system can handle; the load may be generated by multiple users.

Both pinpoint bottlenecks, bugs, and component limitations. For example, a mobile device may have a fast processor but a limited amount of RAM. Load testing can give us a general idea of:
-How many applications or processes can run simultaneously while maintaining the rated level of performance
-How long the device will stay stable while running standard applications
-How many users can use a certain function of the application (if it is a mobile app using a web service)

To perform a load test, you may go through the following steps:
-Define the functions to perform during the load test.
-Define the time for performing those functions.
-Establish a baseline of standard system behavior for defining bugs/issues (e.g., set a target time for an operation, or set some operations to run for a certain period).
-Define the measurement parameters (what to monitor).
-Choose suitable reports (grid results, charts, graphs, etc.) to monitor the results.
-Choose alert/marking types to determine alarm points (defining bugs).
-Choose tools that meet the above requirements.
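The baseline-and-alarm idea in the steps above can be sketched very simply: time a fixed workload, then flag a run that exceeds the baseline by too much. The workload function and the 2x-baseline alarm rule here are illustrative choices, not a standard.

```python
# Sketch of the steps above: run a fixed workload, compare against a
# baseline "bottom line", and flag an alarm when the run exceeds it.
import time

def timed_workload(fn, repetitions):
    """Run fn `repetitions` times and return the elapsed wall time."""
    start = time.perf_counter()
    for _ in range(repetitions):
        fn()
    return time.perf_counter() - start

def check_against_baseline(elapsed_s, baseline_s, tolerance=2.0):
    """Alarm when the run takes more than `tolerance` times the baseline."""
    return "ok" if elapsed_s <= tolerance * baseline_s else "alarm"

elapsed = timed_workload(lambda: sum(range(1000)), repetitions=100)
print(check_against_baseline(elapsed, baseline_s=1.0))
```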

The measurement parameters differ by scenario.
For a desktop/mobile application (not using a data network):
-Use of processor/memory/time for certain functions.
For a web application, service, data server, or any application using the internet (desktop/mobile):
-Amount of time for certain functions.
-Number of users that can concurrently use certain functions.
-Number of functions that can be used in a certain time.

Some load test tools:
a. JMeter
b. LoadUI
c. LoadRunner
d. Visual Studio Load Test
e. Eclipse load test tooling
f. SOASTA

Some typical load testing areas:
-Performing compute-intensive functions (complex logic operations, tree operations)
-Writing data to and reading data from a hard disk continuously
-Downloading a series of large files from the internet
-Logging in to the same system with a large number of users
-Exchanging a large number of large emails through a mail server
-Running multiple applications on a computer or server simultaneously
-Assigning many jobs to a printer queue
-Handling a large number of concurrent permission decisions (accepted/rejected/filtered)

Performance Testing in a Nutshell


In this article we are going to see performance testing activities at a glance: the basics of performance testing.
The goal of this post is to give a basic idea of how the activities start and follow on from one another.

So, here is a small flow-chart-style description (not strictly following flow chart rules) illustrating the basic steps of performance testing from my project experience.

Viewers can add their comments and their experiences too.

Performance Testing

Test closure activity depends entirely on which development methodology you are using. I have added the small agile practice that I have worked with.