October 18, 2022

Prevent Slowdowns With Shift Left Performance Testing

Mobile Application Development
DevOps

Nobody likes a slow UX. It is consistently cited as one of the biggest points of frustration for users, and it can quickly become a deterrent for potential customers engaging with your company. According to one study, UX slowdowns can ultimately have twice the negative impact on an organization’s revenues as outages. 

That is why, in an effort to root out the issues causing these slowdowns, teams are turning to shift left performance testing. By shifting your DevOps cycle left, you can surface and resolve obstacles sooner, resulting in a better final product delivered faster. 

There is a misconception that testing performance at the end of the development cycle is sufficient. But what if the application’s performance is not up to the mark? In that case, tracing back the origin of the issue adds to the debugging time and efforts, thus reducing the overall development velocity and increasing the cost. 

Performance testing should no longer be considered an afterthought — rather, it should be considered a priority from the very onset of the project. Tracing and fixing performance issues just before deployment is an expensive exercise. With shorter delivery cycles, it is prudent to check every deliverable, however small, for performance. Integrating performance tests in the continuous testing process is a great way of ensuring that every deliverable is tested thoroughly for functionality as well as performance. 

In this blog, we will take a deep dive into shift left performance testing and how to implement it with a web timing approach.

[Image: An overview of shift left performance testing]

How to Shift Left: Web Timing

Shifting left is not a small undertaking. It is a fundamental change to the entire development process. When considering how to shift left, there are two important questions to ask: 

  • What new insight can I gain earlier? 
  • How easy is it to implement? 

Here is how leveraging web timing can help you achieve this change. 

Web Page Timing

Web page timers are not new, but they remain helpful for optimizing content across different pages and browsers. The data is extremely detailed and readily available for analysis. Additionally, almost all browsers support the Navigation Timing API, so no special setup is needed to collect and report these metrics.

[Image: Chart displaying web page timing]

Grabbing the page timers is fairly easy. For example, the below code could be added to a basic Java Selenium test: 

// Grab the Navigation Timing entry for the current page
Map<String, Object> pageTimers = (Map<String, Object>) ((JavascriptExecutor) driver)
        .executeScript("return performance.getEntriesByType('navigation')[0];");

Here’s an example of the timers resulting from a single page load: 

[Image: List of results]

Processing the timers can be done as follows: 

Double loadEventEnd = convertDouble(data.get("loadEventEnd")); 
Double connectEnd = convertDouble(data.get("connectEnd")); 
Double requestStart = convertDouble(data.get("requestStart")); 
Double responseStart = convertDouble(data.get("responseStart")); 
Double responseEnd = convertDouble(data.get("responseEnd")); 
Double domLoaded = convertDouble(data.get("domContentLoadedEventStart")); 
 
this.duration = convertDouble(data.get("duration"));  // total page load time
this.networkTime = connectEnd;                        // DNS lookup + TCP connect
this.httpRequest = responseStart - requestStart;      // request sent until first byte
this.httpResponse = responseEnd - responseStart;      // response download
this.buildDOM = domLoaded - responseEnd;              // DOM parsing and construction
this.render = loadEventEnd - domLoaded;               // rendering until the load event
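The snippets above rely on a convertDouble helper whose implementation is not shown. A defensive version might look like the sketch below (the implementation is an assumption), since executeScript can hand back a Long, Double, or String depending on the browser:

```java
// Sketch of the convertDouble helper assumed above: normalize whatever
// numeric representation the browser returns into a Double.
public class TimerUtils {
    public static Double convertDouble(Object value) {
        if (value == null) {
            return 0.0; // missing timer entries default to zero
        }
        if (value instanceof Number) {
            return ((Number) value).doubleValue();
        }
        return Double.parseDouble(value.toString());
    }
}
```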

Now that we’ve got the page-level timers, we can store them and drive some offline analysis: 
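One lightweight way to store the timers between runs is to append each page load as a row in a CSV file. This is only a sketch; the file layout and column order are assumptions, not the blog's actual storage format:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class TimerLog {
    // Append one row of page timers to a CSV for offline analysis.
    static void append(Path csv, String page, double duration, double networkTime,
                       double httpRequest, double httpResponse, double buildDOM,
                       double render) throws IOException {
        String row = String.join(",", page,
                Double.toString(duration), Double.toString(networkTime),
                Double.toString(httpRequest), Double.toString(httpResponse),
                Double.toString(buildDOM), Double.toString(render))
                + System.lineSeparator();
        Files.write(csv, row.getBytes(),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}
```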

[Image: Chart containing timing results]

 

Within the test, you can examine the current page load time or size and determine a pass or fail based on that: 

// compare current page load time vs. what's been recorded in past runs 
public Map<String, String> comparePagePerformance(int KPI, CompareMethod method, WebPageTimersClass reference, Double min, Double max, Double avg) { 

        Map<String, String> returnMap = new HashMap<String, String>(); 

        returnMap.put("CurrentPageDuration", duration.toString()); 
        returnMap.put("BaseReference", reference.duration.toString()); 
        returnMap.put("BaseAvgDuration", avg.toString()); 
        returnMap.put("BaseMaxDuration", max.toString()); 
        returnMap.put("BaseMinDuration", min.toString()); 
        returnMap.put("BrowserName", browserName.toString()); 
        returnMap.put("PlatformName", OSName.toString()); 

        switch (method) { 
            case VS_BASE: 
                System.out.println("comparing current: " + duration + " against base reference: " + reference.duration); 
                returnMap.put("TestConditionResult", String.valueOf((duration > reference.duration) || (duration > KPI))); 
                returnMap.put("ComparisonMethod", "comparing current: " + duration + " against base reference: " + reference.duration); 
                return returnMap; 
            case VS_AVG: 
                System.out.println("comparing current: " + duration + " against avg: " + avg); 
                returnMap.put("TestConditionResult", String.valueOf((duration > avg) || (duration > KPI))); 
                returnMap.put("ComparisonMethod", "comparing current: " + duration + " against avg: " + avg); 
                return returnMap; 
            case VS_MAX: 
                System.out.println("comparing current: " + duration + " against max: " + max); 
                returnMap.put("TestConditionResult", String.valueOf((duration - max) > KPI)); 
                returnMap.put("ComparisonMethod", "comparing current: " + duration + " against max: " + max); 
                return returnMap; 
            case VS_MIN: 
                System.out.println("comparing current: " + duration + " against min: " + min); 
                returnMap.put("TestConditionResult", String.valueOf((duration - min) > KPI)); 
                returnMap.put("ComparisonMethod", "comparing current: " + duration + " against min: " + min); 
                return returnMap; 
            default: 
                System.out.println("comparison method not defined; returning false for current: " + duration); 
                returnMap.put("TestConditionResult", "false"); 
                return returnMap; 
        } 
    } 
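As a standalone illustration of the gating rule, the sketch below isolates the VS_AVG condition from the method above: the check fails when the current duration exceeds either the historical average or the KPI threshold. The specific KPI value is an assumption for the example:

```java
// Standalone sketch of the VS_AVG pass/fail rule used above.
public class PerfGate {
    // Fail when the current duration exceeds the average OR the KPI budget.
    static boolean failsGate(double durationMs, double avgMs, int kpiMs) {
        return durationMs > avgMs || durationMs > kpiMs;
    }

    public static void main(String[] args) {
        double current = 6912.6;   // current page duration in ms
        double baselineAvg = 3314; // average duration from past runs
        int kpi = 5000;            // assumed KPI budget in ms
        System.out.println("TestConditionResult: " + failsGate(current, baselineAvg, kpi));
    }
}
```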

Web Page Resource Timing

Up to this point, we have been discussing the page-level timing for shift left performance testing. The data gleaned from this stage is helpful — you can detect latency in page performance across any page and any browser. This lets you get direction on whether the issue relates to DNS discovery, content lookup, download, or something else. 

However, when conducting this phase in the test cycle, the big changes will come from the content that is being downloaded; think of large images downloaded to small screens over cellular networks, downloads of non-compressed content, repeated downloads of JS or CSS, etc.  

So, how can developers get immediate actionable insight to optimize page performance? By using the resource timing API. This will offer insight into every object that the browser requests: the server, timing, size, type, and more. 

To obtain access to the resource timing object, all that needs to be done is: 

// Grab the Resource Timing entries for every object the browser requested
ArrayList<Map<String, Object>> resourceTimers = (ArrayList<Map<String, Object>>) ((JavascriptExecutor) driver)
        .executeScript("return window.performance.getEntriesByType('resource');");

Here is an example of the data that is available: 

[Image: Available data]

Each page would have a long list of resources like above. You could summarize all the objects into types and produce a summary of totals and some distribution stats: 
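Grouping on each entry's initiatorType is one way to build that summary. The field names below follow the Resource Timing spec, but the aggregation itself is a sketch rather than the blog's actual code:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ResourceSummary {
    // Group resource-timing entries by initiatorType and total up
    // count, transferSize (bytes), and duration (ms) for each type.
    public static Map<String, double[]> summarize(List<Map<String, Object>> resources) {
        Map<String, double[]> byType = new HashMap<>();
        for (Map<String, Object> r : resources) {
            String type = String.valueOf(r.getOrDefault("initiatorType", "other"));
            double size = ((Number) r.getOrDefault("transferSize", 0)).doubleValue();
            double duration = ((Number) r.getOrDefault("duration", 0)).doubleValue();
            double[] totals = byType.computeIfAbsent(type, k -> new double[3]);
            totals[0] += 1;        // count of resources of this type
            totals[1] += size;     // total bytes transferred
            totals[2] += duration; // total load time in ms
        }
        return byType;
    }
}
```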

[Image: Summary of stats]

Note the variation in the total number of resources between different browsers for the same page. For example, you can summarize the resources by type for each execution: 

[Image: Summary of resources by type]

Execution Time Comparison & Benchmarking

At the beginning of this blog, we defined shift left performance testing as delivering insight early and easily. Now that we have accessed the raw data and conducted some analysis, we can take it a step further. 

Choose a webpage and set a performance baseline — think of it as the objective you are aspiring to. Then, with every execution, measure responsiveness, produce a pass/fail, and generate a full comparison of the current page data vs. the baseline. 
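Keeping that baseline between runs can be as simple as a small properties file. The file format and key name below are assumptions for this sketch; Perfecto's reporting handles this kind of bookkeeping for you at scale:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class Baseline {
    // Persist the baseline page duration so later runs can compare against it.
    static void save(Path file, double durationMs) throws IOException {
        Properties p = new Properties();
        p.setProperty("duration", Double.toString(durationMs));
        try (OutputStream out = Files.newOutputStream(file)) {
            p.store(out, "performance baseline");
        }
    }

    static double load(Path file) throws IOException {
        Properties p = new Properties();
        try (InputStream in = Files.newInputStream(file)) {
            p.load(in);
        }
        return Double.parseDouble(p.getProperty("duration"));
    }
}
```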

With a little code magic, we can make that happen. Here is the top-level, page-level summary of the current vs. baseline run: 

[Image: Comparing current page vs. baseline]

There is not a major difference in the number of items, but you can see the page load time is almost 3.6 seconds longer. At first glance, the increase in DOM build time is driving the longer load time. 

           duration   network   http request   http response   build DOM   render   total resources   total resources size   total resources duration
Current    6912.6     226.7     416.3          295.7           5706.3      246.6    291               2931697                295590
Baseline   3314       913       534            279             1506        68       300               2971487                102422
Diff       3598.6     -686.3    -117.7         16.7            4200.3      178.6    -9                -39790                 193168

Here is the comparison between the type summaries: 

[Image: Comparison summary of current page and baseline]

This table compares the total items, size, and duration by type against the baseline. Unsurprisingly, no new types of content were introduced on this page, nor are there massive changes in the number of elements per type, given that the last run was just a few days earlier. 

Still, even though there is only one additional image, it appears images drive the most latency in loading the page. To take a closer look, here are the images with the largest load time.

[Image: Load times of the images tested]

 

As you can see, we are just scratching the surface of the depth of analysis we could achieve. 


How to Analyze Your Shift Left Performance Testing With Perfecto 

We just went through an example of individual shift left performance testing. But what about test analysis on a larger scale? Knowing how to shift left and executing it at scale are two different things, but Perfecto's Smart Reporting makes it straightforward. 

Perfecto Insights 

Perfecto Insights is designed to help you jump-start your root cause analysis efforts, improve your testing success over time, and accelerate the identification of real bugs in the tested application. Insights can help you constantly improve your success rate and efficiently find the bug in the tested application.  

The latter goal may be hard to achieve when many tests are failing for various reasons. Which error should you focus on first? Which test is the one that deserves immediate attention? The Perfecto Insights documentation has you covered. 

Perfecto Heatmap 

The Perfecto Heatmap presents an overview of the test results as color-coded cross-sections, where each block represents a group of tests. It helps users navigate the mountain of test results confidently by slicing the data with groups and filters. Grouping the failure reasons helps testers, architects, and managers isolate issues and prioritize fixes. 

With Perfecto Smart Reporting, users can discover how to shift left with both functional and non-functional tests (performance and security testing). The failed tests can be categorized as: 

  • Script Failures – Tests failed due to script errors like JSON Error, Invalid Arguments, etc. 
  • Non-Functional Failures – Tests failed due to non-compliance with Non-Functional requirements such as slower page load, security vulnerabilities, accessibility violations, etc. 
  • Functional Failures – Tests failed due to functionality issues including Element Not Found, login error, and invalid popup. 
  • Lab Errors – Tests failed/blocked due to infrastructure issues like invalid capabilities, device already in use, device in error state, etc. 

Bottom Line 

Preparing to implement shift left performance testing in your development cycle can be daunting, but it does not have to be. A helpful tactic for shifting left is leveraging web timing, and it works for all sorts of tests: smoke, regression, and even production. It is all there at your fingertips, and Perfecto will be with you every step of the way as you undertake this fundamental change. 

Test smarter. Test faster. Streamline your development process to save valuable time and money with the only one-stop testing shop on the market. Give us a try for free today and see what you have been missing. 

