
Compression, Decompression, Mobile Performance and LoadRunner

4 Feb

Recently I inherited some LR scripts from one of my colleagues. They were all about building the JSON calls for stressing the backend Spring Security framework, which was the first layer of entry into the mobile infrastructure. The scripts were simple, built using web_custom_request with a JSON string as the body. One of the things that really surprised me during this effort was that web_custom_request itself was taking close to 100ms to 300ms just to decompress the server response during load testing.

First, some background: the servers were configured to send responses compressed in gzip format, with the Content-Encoding header set to gzip. The functionality in scope had an SLA of 1 second maximum, and quite a few functions also had SLAs of less than 500ms. Quite challenging SLAs, I would say. But then again, these functions were meant to be accessed from a mobile device, so the lower the response time, the better it is for users.

Most of the responses coming from the server were served as chunked bytes. What this means is that the server first sends some bytes of the response in compressed gzip format, LR decompresses those bytes in 5 to 10ms, then the server sends the next range of bytes as a chunked gzip response, LR spends another 5 to 10ms decompressing those, and the process continues until the final set of bytes arrives. All of this happens over a single connection, and the connection to the server is never closed. If you also have some server response validation in place, expect it to add another 10ms or so.

I measured all these times in a single iteration in VuGen; they grow considerably when the load test runs in the Controller or Performance Center, and this overhead of decoding the gzip content becomes quite an issue when the response time SLAs are in milliseconds.

Here is how it looks in LR VuGen with decompression turned on in the script. You can see that it takes 5ms to decode 154 bytes of response. Now imagine that a normal web page is about 2MB of gzipped data, and you can see the impact of this decoding as the page size grows, especially when the response comes as chunked bytes with no fixed Content-Length from the server.

[Screenshot: VuGen replay log showing about 5ms spent decoding a 154-byte gzip response]

 

I think the HP LR team is probably aware of this behavior, and that is likely why they provide a function to disable it. Use web_set_option with the decode content flag turned off if your scripts do not require validation and have response time SLAs in milliseconds. The drawback of disabling this feature is that all your correlations and other checks on the server response will fail, since the response will show up as binary content, like below.

[Screenshot: VuGen replay log showing the still-compressed response as binary content]

 

I would suggest disabling this feature if you can and doing the response validation with other techniques, such as verifying server logs. By disabling it you gain close to a 15 to 20% reduction in the response time reported by LR.
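For illustration, here is a minimal sketch of how the script change could look, assuming the "DecodeContent" flag of web_set_option described in the VuGen function reference (check the exact option name and values for your LR version); the URL and parameter names below are made up:

Action()
{
    // Turn off automatic gzip/deflate decoding of server responses.
    // With this off, web_reg_find / web_reg_save_param on the body will no
    // longer work, so validate via server logs or other external checks.
    web_set_option("DecodeContent", "No", LAST);

    lr_start_transaction("login_json");

    // Simplified stand-in for the original JSON custom request.
    web_custom_request("login",
        "URL=https://mobile-gateway.example.com/auth/login",
        "Method=POST",
        "EncType=application/json",
        "Body={\"user\":\"{pUser}\",\"password\":\"{pPassword}\"}",
        LAST);

    lr_end_transaction("login_json", LR_AUTO);
    return 0;
}

With decoding off, the transaction time reflects only the network and server time, which is usually what matters when the SLA is in milliseconds.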

Is this expected LoadRunner behavior? I think they have to do it: unless they decode the response, none of the other functions like web_reg_save_param or web_reg_find will work, and those are core LoadRunner functions. Probably the right approach would be for LR not to include this decompression time inside the transaction markers, since it really pollutes the results, especially for web applications; or they could speed up the decompression library they use in LoadRunner.


How to Identify a Slow Running SQL Query in MySQL 5.5.x

1 Oct

For the past couple of days I have been playing around with a MySQL 5.5.x database, writing queries and creating tables, indexes, and routines here and there for one of my projects. The MySQL database seems fairly easy to understand and provides almost all the features offered by MSSQL or Oracle (of course there are some differences in how they are designed and in the SQL dialect they use).

As soon as someone reports that the application is slow, or we find slowness during a test, the first thing we need to do is identify the cause of the slowness. (Most people skip this step and become defensive; at times even I have exhibited this behavior, it is a very human reaction.) There can be many ways to identify the cause and many possible reasons for the slowness. For the sake of this post, let's assume we have identified the MySQL database as the source of the slowness and have ruled out the other causes.

To identify the slow running MySQL query, run the command below in MySQL Workbench or via the MySQL client and see what is going on in the MySQL box:

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show full processlist\G
*************************** 1. row ***************************
     Id: 1
   User: root
   Host: localhost:51606
     db: mydb
Command: Sleep
   Time: 372
  State:
   Info: NULL
*************************** 2. row ***************************
     Id: 2
   User: root
   Host: localhost:51607
     db: mydb
Command: Query
   Time: 58
  State: Query
   Info: SELECT * FROM MYTABLE WHERE auto_id = 46102

 

As you can see above, the SELECT statement by itself has been running for around 58 seconds. In addition, the SHOW PROCESSLIST command can be used to get insight into which threads are running in the MySQL server, and it is quite often used to debug connection issues. This link provides more information about the command.
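If you would rather have MySQL record slow statements for you instead of catching them live, the slow query log can do that. A minimal sketch using the standard slow query log variables (the log file path is just an example):

-- Log every statement that takes longer than 1 second.
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow-query.log';

-- Verify the current settings.
SHOW VARIABLES LIKE 'slow_query%';
SHOW VARIABLES LIKE 'long_query_time';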

Once we know which SQL is taking more time, the next task is to replicate the issue outside the application, using the same data and the same statement, but via the MySQL client. Only when we can replicate the issue outside the application can we say the problem is with the SQL query and not with some other element of the environment or application. In almost all cases the issue reproduces successfully. (Do watch out for the smart, excellent-communicator DBA who shares a screen with the business to show that, despite querying more rows of data, the issue cannot be reproduced and the query executes in the blink of an eye. In such cases make sure you use the same data set the application used at the time you saw the slowness, capture the before and after row counts for the table, and keep all other conditions the same.)

Moving on, once you are able to replicate the issue, the next step is to look at the query plan generated for the query. In MySQL this can be done using the EXPLAIN statement:

mysql> EXPLAIN SELECT * FROM MYTABLE WHERE auto_id = 46102\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: MYTABLE
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 47890
        Extra: Using where

In the execution plan above, any query that does not use an index (signified by the key field being NULL) can be considered a poorly tuned SQL query. The number of rows read while evaluating the statement, shown in the rows field, gives an indication of how much data is scanned and correlates directly with the time required to execute the query. The type field with a value of ALL, meaning a full table scan, is also an indicator of a problem.

Adding indexes to the table might help in these cases, but it also depends a lot on the structure of the table. So before applying any fix, it makes sense to understand the table structure and the amount of data the table holds.

The command below gives you information about the table structure:

SHOW CREATE TABLE `MYTABLE`;

The statement above gives you the table definition along with all the column information. Once we understand the structure of the table it becomes quite easy to apply and try out various fixes. The command below gives you the data length and other table-level information:

SHOW TABLE STATUS LIKE 'MYTABLE';

Both of the above commands give us very interesting information, and it can also help with sizing of the databases and capacity planning.

Once we have all this information, we can start applying fixes. Maybe after I fix some of my tables, I can write about some more interesting things to do.
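As an illustration of the most common fix for a plan like the one above, here is a minimal sketch; MYTABLE and auto_id come from the earlier EXPLAIN output, while the index name is made up:

-- Add an index on the column used in the WHERE clause.
ALTER TABLE MYTABLE ADD INDEX idx_auto_id (auto_id);

-- Re-check the plan: key should now show idx_auto_id, type should change
-- from ALL to ref (or const), and rows should drop sharply.
EXPLAIN SELECT * FROM MYTABLE WHERE auto_id = 46102\G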

 

 

 
 

Short note on using Indexes – MS SQL 2005

18 Aug

Most often, whenever a performance engineer detects an issue with response time, the first fix the application team thinks of is to check and add some more indexes to the columns in the table. Adding indexes is the low hanging fruit that every development team tries, and should try, before reviewing their code base for a fix. I certainly agree and go along with most development teams on this. However, the concept is now misused, and people often add indexes as a band-aid rather than really fixing an insufficient design. I recently had one such case, and to everyone's surprise the indexes failed to help and the team had to rework their queries. Don't get me wrong: designs often get outdated as technologies move on; they don't stay in pace with the growth of technology. Today I may use Model 2 MVC, but three years down the line Model 2 MVC will be outdated. That, however, should not affect the low level basics used in the design. If I am storing zip codes, then I should probably be using char or int with a limited byte range, depending on the zip code format. That, to me, is a low level basic, and the same applies to the basics of indexes.

So this post is all about indexes and my thoughts on them.

Indexes help reduce I/O when used efficiently. They work much like the index in a book. Say you have a book of around 1000 pages and you need to find the definition of a single word; you know for sure the definition is somewhere in the book, but not on which page. How are you going to find it? Just go to the index page of the book, look up the word you need, and then go to that page number. No need to flip through a thousand pages to get the information. That is the power of an index when correctly implemented. Now consider what happens when there is no index on any column involved in a query: the MSSQL database engine has to read every row of the table to get the information, which results in a full table scan. So whenever you have a large number of rows and a full table scan happens, the first thing that gets impacted is the response time of the application, and to some extent the resource usage of the database server. Your database box is going to be very busy even to fetch a single row.

Indexes are categorized as clustered and non-clustered. Clustered indexes usually give the better results and are the default index type created whenever we add a primary key constraint to a table in an MSSQL database. Clustered indexes are typically created before non-clustered indexes, since the row locators of the non-clustered indexes can then point to the index keys of the clustered index. Clustered indexes also help when the data needs to be sorted and when a large range of rows needs to be retrieved. A non-clustered index is useful when we are retrieving a very small number of rows.
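As a small illustration in T-SQL (the table and index names are hypothetical): the primary key constraint creates the clustered index, and a separate non-clustered index covers selective lookups.

-- The PRIMARY KEY constraint creates the clustered index by default.
CREATE TABLE dbo.Customers (
    CustomerId INT IDENTITY(1,1) NOT NULL,
    ZipCode    CHAR(5)     NOT NULL,
    LastName   VARCHAR(50) NOT NULL,
    CONSTRAINT PK_Customers PRIMARY KEY CLUSTERED (CustomerId)
);

-- A non-clustered index for selective lookups on ZipCode.
CREATE NONCLUSTERED INDEX IX_Customers_ZipCode
    ON dbo.Customers (ZipCode);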

There are also situations where adding indexes actually degrades the performance of the application. Indexes add overhead in terms of extra storage and memory consumption and can also impact the response time of the application: INSERT, UPDATE, and DELETE statements often take longer when the tables they touch carry indexes. Take the earlier example: if I delete 25 pages from a 1000-page book, the ordering of all the remaining pages changes, and depending on how frequently that happens it can lead to serious response time issues. This is why we should be cautious about using indexes for use cases with a high volume of these DML operations.

Some of the main factors to be considered while designing indexes are:

  • WHERE clauses and joins used in queries: the fewer rows a query returns, the more benefit we get from an index. Keep in mind that the whole purpose of the WHERE clause is to get exactly the required information from the table, and if the WHERE clause column is already indexed the benefit of the index is roughly doubled.
  • Column data types: columns of the string family (char, varchar, nchar, etc.) gain less from indexes. The fewer bytes a column occupies, the more gain we can get from indexing it.
  • Columns with a large number of duplicate values are a bad choice for indexes, for example boolean-style columns that store a flag with only two values, Y and N. The reason these are bad is that they defeat the selectivity of the WHERE clause. Columns with a large number of unique values gain the most from indexes.

Information about indexes and their associated costs can also be found using the dynamic management views (DMVs) of MS SQL Server. The DMV sys.dm_db_index_operational_stats can show us low-level activity, such as locks and I/O, on an index that is in use. The DMV sys.dm_db_index_usage_stats gives us counts of the various index operations that have occurred against an index over time.
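For example, a sketch of a usage query against sys.dm_db_index_usage_stats (the column names follow the standard DMV definitions; adjust the database filter to your needs):

-- How often each index in the current database is read versus maintained.
SELECT  OBJECT_NAME(s.object_id) AS table_name,
        i.name                   AS index_name,
        s.user_seeks,
        s.user_scans,
        s.user_lookups,
        s.user_updates           -- maintenance cost paid on every write
FROM    sys.dm_db_index_usage_stats AS s
JOIN    sys.indexes AS i
        ON  i.object_id = s.object_id
        AND i.index_id  = s.index_id
WHERE   s.database_id = DB_ID()
ORDER BY s.user_updates DESC;

An index with high user_updates but almost no seeks, scans, or lookups is paying maintenance cost without giving any read benefit.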

These are the key factors to consider while designing indexes on a table. Indexes often help fix performance issues, but if the design of the table or the query is bad, adding indexes may worsen the problem rather than resolve it.

Performance Testing Web based Ajax Applications–Then Read this

23 Jun

Over the past couple of days, a lot of people have asked me about Ajax, the Ajax protocol, and how to load test Ajaxified applications. So I thought I would write some short code and try to explain my thoughts on this. Ajax is often used in web sites for quick, short interactions with the server, implementing some if-then condition behind a requirement. Ajax helps a lot in bringing interactivity to a site; I would say it creates the wow experience. Of course I am not a hardcore front end engineer, but having written a couple of lines of CSS/HTML/JavaScript, I can visualize and make sense of how to write front end code that brings some interactivity to a site.

For this post I have used the jQuery library, which has rich methods for making Ajax calls. It also has excellent methods for interacting with CSS and HTML markup elements. With jQuery we can easily write a callback function and write the response data to the DOM on the fly based on some condition. A few drawbacks I noticed with the jQuery library are that it makes you lazy (if you really want to become a good front end engineer, a good grasp of raw JavaScript is a must) and that just to implement one or two pieces of functionality I have to import ten thousand lines of code. But then again, with jQuery the benefits outweigh the costs.

An implementation of Ajax with jQuery looks something like this:

$(document).ready(function () {
    $("form").submit(function (event) {
        event.preventDefault();       // stop the normal full-page form post

        var ae = $("#idtxt1").val();  // email entered by the user
        var ap = $("#idtxt2").val();  // password entered by the user
        alert(ae);                    // debug alerts, remove in real code
        alert(ap);

        var request = $.ajax({
            url: "TestAjax.do",       // backend program that handles the call
            type: "POST",
            data: {
                email: ae,
                password: ap
            },
            cache: false,
            ifModified: false,
            beforeSend: function () {
                $("#idspan").text("");          // clear any previous message
            },
            success: function (data) {
                if (data == "Success") {        // redirect on success...
                    window.location = "/mysite/mypage.php";
                } else {                        // ...otherwise report the failure inline
                    $("#idspan").append(data);
                }
            },
            error: function (jqXHR) {
                $("#idspan").append("Request failed");
            }
        });
    });
});

 

The granular details of the jQuery Ajax function can be found here. So you might be wondering why I have written this piece of code when it is already available. This post is not about jQuery or how to use jQuery; it is about understanding Ajax and how it is implemented, and based on that coming up with a proper solution for load testing Ajax based web applications.

If you look at the code above closely, you can see that it makes a POST request on the form's submit event. As soon as the user clicks the submit button, it makes the call to the backend program. This is how browsers operate: they bring interactivity to a site using event driven methods. This understanding is the key to knowing how Ajax calls are made: every Ajax call is associated with some user driven event on an HTML element. Events can be click, submit, hover, mouse in, mouse out, focus in, focus out, key up, and key down. These are some of the events associated with HTML elements, and JavaScript is used to bring interactivity to the site when these events occur. So every Ajax call has some event associated with it. Note that there are many, many events associated with each HTML element, and a complete reference can be found in the ECMAScript guide or the web standards documents.

Continuing further, the code above makes an Ajax POST call to the TestAjax.do function, which resides on the backend server. In the Ajax call I am collecting the values of the two text fields, #idtxt1 and #idtxt2, and passing those values in the POST body to my backend J2EE program. Most Ajax calls, irrespective of the library used, do these things in a similar way (almost 99.99% of the time): they capture the user input with JavaScript methods and then post the data to the backend program residing on some application server. The backend program tells the Ajax call whether the request failed or succeeded. If the request succeeds, for example in the code above if my backend program sends me the data "Success", I redirect the request to mypage.php, and if the request fails I write some error text into the span element on the HTML page.
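From a load testing point of view, all the script has to reproduce is that HTTP POST. Below is a minimal sketch of what the recorded call might look like in a LoadRunner-style web script; the host name and the {pEmail}/{pPassword} parameters are made up for illustration:

// The Ajax call is just an HTTP POST; replay it like any other request.
web_add_header("X-Requested-With", "XMLHttpRequest");   // header jQuery adds to Ajax calls

web_custom_request("TestAjax",
    "URL=http://www.example.com/mysite/TestAjax.do",
    "Method=POST",
    "EncType=application/x-www-form-urlencoded",
    "Body=email={pEmail}&password={pPassword}",
    LAST);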

Browser developer tools also help in understanding how Ajax works:

[Screenshot: developer tools network trace showing the request initiated by XMLHttpRequest]

If you capture the network trace with the browser's developer tools, the trace looks something like the one above; in the trace you can clearly see that the initiator of the call was XMLHttpRequest. This is nothing but the core method implementing Ajax calls in browsers. Since the request above was of type GET, the values were appended to the URL of the request and sent to the backend program. In addition, these tools also give a lot of other information, such as the HTTP status code, which event initiated the request, and so on. However, I would not suggest measuring the response time of Ajax calls with these tools, as I believe they give a somewhat incomplete picture of response time.

The response data received from the server can also be viewed via the browser developer tools, and in my case it looked something like this:

[Screenshot: developer tools response view showing the server reply "Username not available"]

This response data ("Username not available") is later appended to the page elements viewed by the end user.

So, coming back to the original purpose of this post: I keep hearing from various performance engineers that the existing Web HTTP protocol is insufficient for testing Ajax based web applications. Having now implemented real Ajax with fat data sets, I believe there isn't much of a challenge in load testing Ajax based applications. We just need to keep some key things in mind while working with them:

  • Understand the functionality of your application from a technical perspective.
  • Ask the developers explicitly which functionality in the application uses Ajax calls, and whether each Ajax call is synchronous or asynchronous. With synchronous calls the user cannot proceed until the data for the Ajax call comes back from the server; with asynchronous calls the user can work on other parts of the page (technically, with async calls the browser does not need to wait for the server response to build the DOM tree, while with sync calls it has to wait).
  • If you believe some events in your application are not getting recorded via the regular HTTP protocol, then you are probably not triggering the Ajax call at all. Remember, to trigger an Ajax call you need to trigger an event on the HTML element; it could be that you need to tab out of the element, bring focus into it, and so on. Ask your developer how to trigger the Ajax call for the business process in scope.
  • If you believe you cannot record an Ajax call, your Ajax request is probably cached. Ajax resources are heavily cached by browsers, since they involve the JS/CSS files. Clear the browser cache and cookies and try again. Use the developer tools to debug such cases, and make sure you get a 200 status for all your resources.
  • Ajax calls, irrespective of library or technology, use regular GET/POST requests, which are nothing but HTTP calls. HTTP calls should and must be recorded if the tool claims to support the HTTP protocol.
  • If you see some unique values getting generated on the client side that are not present in the server response, don't get scared or nervous; they might be Unix style timestamps or Microsoft tick timestamps. (If you get ticks, you have solid reason to worry, and it is pure good luck if your application is not using the full power of ticks; if it uses them heavily, you may need to go to a temple and pray, because most of the current set of tools don't go beyond Unix style timestamps, and the tick format is much finer grained than a Unix timestamp.) These values are generated by the JS library to force the browser not to cache the key JS files. However, a lot depends on the headers as well.
  • Thick client web applications often use chained JS calls to build different parts of the page. All you need to do in these cases is make sure you follow the right steps, trigger the events that chain the other events during recording, and then do your regular steps.
  • Remember that for load testing an Ajax based application the goal is still to capture the network traffic going from the application to the server and to stress the server with those calls. If your Ajax calls are slow, it can give the impression that the front end is taking more time, but most often that is not true. A spinning wheel that keeps spinning for minutes or seconds indicates a server bottleneck, not a client side bottleneck.
  • Remember that client side performance metrics and techniques are different from server side ones; they require different skill sets and tools. Having a lot of Ajax calls does not necessarily mean you have client side performance issues. It does mean you do a lot of DOM work, and the browser constantly has to work out where, and how much, space to make for the response data, so repaints and reflows of the DOM happen quite often.

So finally you must be wondering: if Ajax can be tested with the regular HTTP protocol, why are companies like HP coming up with new Ajax based protocols for load testing?

The answer is simple: they want to save some scripting time, and time is money in the corporate world. But then again, having money does not always mean you will save time.

How much time does it save?

Again, I cannot say, as I have never used these protocols myself. But I really doubt whether they can successfully emulate the calls for all events; there are a great many ways to trigger Ajax calls. If you use an Ajax protocol without understanding the fundamentals of Ajax, then I would say you probably have miles to go before you become a performance engineer. There is a high risk that your text checks will always succeed no matter what, since most tools do not interact with the DOM and therefore cannot read or write to the DOM on the fly. With Ajax, most of the time error validation is done without a browser refresh, so you have to be extra cautious here.

If you still believe we cannot test Ajax based applications with the regular HTTP protocol, then I would like to hear from you about such cases and would appreciate your feedback with a sample test use case.


Know your Default Initial and Max heap size of JVM

7 Jun

At times it becomes necessary to know the default heap size allocated to the JVM in order to debug certain issues. For those cases, I suggest running the command below on the command line of the server box to get this information:

java -XX:+PrintCommandLineFlags -version

On my machine, where I have a Tomcat server installed, the output looks something like this:

C:\Users\kiran>java -XX:+PrintCommandLineFlags -version
-XX:InitialHeapSize=16777216 -XX:MaxHeapSize=268435456 -XX:+PrintCommandLineFlags -XX:-UseLargePagesIndividualAllocation
java version "1.6.0_32"
Java(TM) SE Runtime Environment (build 1.6.0_32-b05)
Java HotSpot(TM) Client VM (build 20.7-b02, mixed mode, sharing)
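The sizes are reported in bytes, so the machine above defaults to a 16 MB initial heap and a 256 MB maximum heap. If the defaults are not what you want, the usual way to override them is with the -Xms and -Xmx flags when starting the JVM; a hypothetical example (myapp.jar is just a placeholder):

java -Xms256m -Xmx1024m -XX:+PrintCommandLineFlags -jar myapp.jar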

 

Performance, Dirty Reads, Lost Updates and Locks

25 May

Recently I came across a very interesting issue which I feel I should write about. During load testing of a web based application, I found that the response time for a couple of the business processes was far exceeding the SLAs given to us. The test was almost clean, with less than 0.1% errors and no failed transactions. Server resources also looked good, with no abnormal behavior in the JVM, CPU, memory, and so on. Still, the transaction response time was very high for those couple of business processes.

For the sake of this post, let's call those business processes A, B, and C. Business process A adds (inserts) a new record into the database, process B edits a record in the same table (we call an edit an update in DB language), and process C downloads the latest information, which includes the results of both process A and process B. We ran these three business processes for about an hour with one user each. All the data sets used in load testing for the three processes were unique. The processes were fairly simple to script and the web navigation was also fairly easy. Our job was to test them with one user each so as to achieve close to 100 transactions per process in an hour. A pretty easy task, and we ran plenty of tests with these processes; in all of them the response time was almost the same, with very little variance. The results were a puzzle. Thankfully we had HP Diagnostics installed in that environment, with some instrumentation done for this application. So I thought, let's see what the application threads are doing while I am running the test. HP Diagnostics has a very good feature showing the live thread count along with a thread trace of what each thread is doing at that point in time. So I took a thread dump of all the threads running in this JVM. Believe me, taking a thread dump with HP Diagnostics is as easy as clicking a link.

Wow: for the first 10 minutes most of the threads were in the runnable state, and as users slowly ramped up, a few of the threads went into the waiting state and a few were oscillating between runnable and waiting. Another 10 minutes into the test run, I could see a lot of threads in the waiting state. After seeing the threads waiting, I could understand why I was seeing very high response times for the simplest business processes, which involved nothing but simple insert and update operations. Response time was high because the threads were waiting for something, and that something was nothing but the DB execute update calls.

The next puzzle for me was why on earth these threads were waiting at the DB execute calls, and that too for a cool 10 seconds and with so little load (I had close to 17 scripts with one user each per script). Given the limited access we had to the database, and after discussing my findings with the application development team, we decided to engage a DBA to help us find the root cause of this wait. At this point I must appreciate the honesty of the application developer who agreed to the idea of engaging the DBA. There are very rare cases where developers agree with performance engineers, and that is the reason I must thank this guy.

So we engaged the DBA and ran a couple of tests while he collected the database stats. He did some analysis and identified a couple of stored procedures and dynamic queries that needed tuning. In addition, he came back and said that the response time for these three business processes was high because row level locking was escalating to a table level lock after a certain duration, then after some more time some transactions were getting dirty reads, and still a couple of minutes later the table would lock and not allow any insert or update at all. Fair enough analysis.

I was happy that we finally knew why we were seeing high response times. Our next step was discussing the findings with the key stakeholders. A lot of things were discussed in the meeting, but the key question I liked in that hour-long meeting was the DBA asking, "Can this application tolerate dirty reads?" There was a long pause, and finally after some time the developer said this application cannot tolerate dirty reads. There was another pause of about 30 seconds, and then another voice said that these three transactions are executed fewer than 100 times an hour and there is very little chance of seeing this concurrency in production. There was silence again, and later everyone moved on to the next topic.

The reason I feel this is an interesting case and deserves some writing is that dirty reads, lost updates, and phantom reads can be found only through proper load testing, and there is a very high probability that if we ignore them we are sending incorrect information to the application users in production under load; that incorrect information could at times even be used by those users to sue the company. These are typical cases where data integrity takes precedence over performance, so if you cannot redesign the queries, I would suggest sacrificing some performance. These types of issues should become high priority fixes, at least when we are dealing with financial applications. I also feel these are at times critical bugs, given that a table level lock impacts all users if one user locks the table for any reason.
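To make the term concrete, here is a small sketch of what a dirty read looks like at the SQL level; the Accounts table and the two-session layout are hypothetical, and the syntax is SQL Server style, since isolation level handling differs between databases:

-- Session 1: update inside an open transaction (not yet committed).
BEGIN TRANSACTION;
UPDATE Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
-- ... transaction still open, no COMMIT yet ...

-- Session 2: running at READ UNCOMMITTED sees the uncommitted balance.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT Balance FROM Accounts WHERE AccountId = 1;   -- dirty read

-- If session 1 now issues ROLLBACK, session 2 has already acted on data
-- that never logically existed.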

Maybe later I will write another short post on dirty reads and lost updates. These are interesting cases where only proper load testing reproduces the issue, and strong foundational performance engineering skills help identify the root cause.
