TABLE OF CONTENTS
- WHAT IS PERFORMANCE TESTING?
- WHY DO PERFORMANCE TESTING?
- COMMON PERFORMANCE PROBLEMS
- BEST PRACTICES TO IMPROVE PERFORMANCE
- TOOLS FOR MEASURING PERFORMANCE
- IMPORTANT WEB APPLICATION PERFORMANCE METRICS
- PERFORMANCE TOOL – SILK PERFORMER
What is Performance testing?
In software engineering, performance testing is, in general, a testing practice performed to determine how a system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.
Performance testing, a subset of performance engineering, is a computer science practice which strives to build performance standards into the implementation, design and architecture of a system.
Source: https://en.wikipedia.org/wiki/Software_performance_testing
Why do Performance testing?
Performance testing is done to provide stakeholders with information about their application regarding speed, stability and scalability. More importantly, performance testing uncovers what needs to be improved before the product goes to market. Without performance testing, software is likely to suffer from issues such as running slowly while several users use it simultaneously, inconsistencies across different operating systems, and poor usability. Performance testing determines whether the software meets speed, scalability and stability requirements under expected workloads.
Common Performance Problems
Most performance problems revolve around speed, response time, load time and poor scalability. Speed is often one of the most important attributes of an application. A slow-running application will lose potential users. Performance testing is done to make sure an app runs fast enough to keep a user’s attention and interest. The following are some common performance problems:
- Long Load time – Load time is normally the initial time it takes an application to start. This should generally be kept to a minimum.
- Poor response time – Response time is the time it takes from when a user inputs data into the application until the application outputs a response to that input. Generally this should be very quick.
- Poor scalability – A software product suffers from poor scalability when it cannot handle the expected number of users or when it does not accommodate a wide enough range of users.
- Bottlenecking – Bottlenecks are obstructions in a system that degrade overall performance. Bottlenecking occurs when either coding errors or hardware issues cause a decrease in throughput under certain loads, and it is often caused by one faulty section of code. The key to fixing a bottleneck is to find the section of code that is causing the slowdown and fix it there. Bottlenecking is generally resolved either by fixing poorly performing processes or by adding hardware.
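As an illustrative sketch (not tied to any particular tool), Python’s built-in cProfile module can locate the section of code responsible for a slowdown. Here, slow_lookup and handle_requests are hypothetical stand-ins for application code:

```python
import cProfile
import io
import pstats

def slow_lookup(items, target):
    # Deliberate bottleneck: a linear scan repeated for every request (O(n^2) overall).
    return [x for x in items if x == target]

def handle_requests(n=200):
    # Simulates handling n requests, each triggering the slow lookup.
    data = list(range(n))
    hits = 0
    for i in range(n):
        hits += len(slow_lookup(data, i))
    return hits

# Profile the workload and print the five costliest functions by cumulative time.
profiler = cProfile.Profile()
result = profiler.runcall(handle_requests)

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print("slow_lookup" in report)  # prints True: the profile names the hot function
```

The profiler’s report points directly at the function consuming the most time, which is exactly the “find the section of code causing the slowdown” step described above.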
Best Practices to Improve Performance
Improving web application performance is more critical than ever. More than 5% of the developed world’s economy is now on the Internet. And our always-on, hyper-connected modern world means that user expectations are higher than ever. If our site does not respond instantly, or if our app does not work without delay, users quickly move on to our competitors.
Wanting to improve performance is easy, but actually seeing results is difficult. Here are some of the best practices to improve performance by as much as 10x.
- Accelerate with Reverse Proxy Server – A reverse proxy server is a type of proxy server that typically sits behind the firewall in a private network and directs client requests to the appropriate backend server. A reverse proxy provides an additional level of abstraction and control to ensure the smooth flow of network traffic between clients and servers. Using a reverse proxy server frees the application server from having to wait for users to interact with the web app and lets it concentrate on building pages for the reverse proxy server to send across the Internet. The application server, which no longer has to wait for client responses, can run at speeds close to those achieved in optimized benchmarks.
- Adding a Load Balancer – A load balancer runs on a reverse proxy server to share traffic evenly across a number of application servers. With a load balancer in place, we can add application servers without changing our application at all. Adding a load balancer is a relatively easy change which can create a dramatic improvement in the performance and security of our site. Instead of making a core web server bigger and more powerful, we can use a load balancer to distribute traffic across a number of servers. Even if an application is poorly written, or has problems with scaling, a load balancer can improve the user experience without any other changes.
- Cache Static and Dynamic Content – Caching improves web application performance by delivering content to clients faster. Caching can involve several strategies: preprocessing content for fast delivery when needed, storing content on faster devices, storing content closer to the client, or a combination. There are two different types of caching to consider:
- Caching of static content – Infrequently changing files, such as image files (JPEG, PNG) and code files (CSS, JavaScript), can be stored on an edge server for fast retrieval from memory or disk.
- Caching of dynamic content – Many web applications generate fresh HTML for each page request. By caching one copy of the generated HTML for a brief period, we can dramatically reduce the total number of pages that have to be generated while still delivering content that’s fresh enough to meet our requirements.
- Compress Data – Using media compression formats such as JPEG for photos, PNG for graphics, MPEG-4 for movies, and MP3 for music can greatly improve performance. Once these are in use, compressing text data (code and HTML) can improve initial page-load times by a factor of two.
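The dynamic-content caching strategy above can be sketched in a few lines of Python. TTLCache and render_page are hypothetical illustrations of the idea, not any framework’s API:

```python
import time

class TTLCache:
    """Caches generated pages for a short time-to-live (TTL)."""

    def __init__(self, ttl_seconds=1.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry_time)

    def get_or_render(self, key, render):
        entry = self._store.get(key)
        now = time.monotonic()
        if entry and entry[1] > now:
            return entry[0]  # cache hit: skip re-rendering entirely
        value = render()  # cache miss (or expired): generate fresh HTML
        self._store[key] = (value, now + self.ttl)
        return value

renders = 0
def render_page():
    # Stand-in for an expensive page-generation step.
    global renders
    renders += 1
    return "<html>fresh content</html>"

cache = TTLCache(ttl_seconds=60)
for _ in range(100):
    cache.get_or_render("/home", render_page)
print(renders)  # prints 1: the page was generated once for 100 requests
```

Even a very short TTL collapses bursts of identical requests into a single render, which is why briefly caching dynamic content pays off under load.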
Tools for Measuring Performance
- JMeter (server-side performance) – Use Apache JMeter to test performance both on static and dynamic resources (files, servlets, Perl scripts, Java objects, databases and queries, FTP servers, and more). You can also use it to simulate a heavy load on a server, network, or object to test its strength or analyze overall performance under different load types. Consider using it to make a graphical analysis of performance or to test your server/script/object behavior under a heavy concurrent load.
- Google PageSpeed Insights (client-side performance) – It’s a service that analyzes the content of a web page and generates suggestions to make your pages load faster. Reducing page load times reduces bounce rates and increases conversion rates.
- Sitespeed.io (client-side performance) – This open source tool analyzes your website’s speed and performance based on performance best practices and timing metrics. You can analyze one site, analyze and compare multiple sites, or let your continuous integration server break your build when you have exceeded your performance budget.
- WebPagetest.org (client-side performance) – provides deep insights into the performance of the client side in a variety of real browsers. This utility will test a web page in any browser, from any location, over any network condition and it’s free.
- Silk Performer – is a software performance testing tool for web, mobile and enterprise applications. Silk Performer supports major Web 2.0 environments such as Adobe Flash/Flex, Microsoft Silverlight, and HTML/AJAX, and it also supports load testing web applications at the protocol level (HTTP). It helps ensure that application and server uptime is maintained during peak customer usage.
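To make the idea of simulating a heavy concurrent load concrete, here is a minimal sketch using only Python’s standard library. fake_request is a hypothetical stand-in for a real HTTP call; this is an illustration of the concept, not a substitute for tools like JMeter:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(i):
    """Stand-in for a real HTTP request; sleeps to simulate server work."""
    start = time.perf_counter()
    time.sleep(0.01)
    return time.perf_counter() - start

# 20 worker threads act as 20 concurrent "users" issuing 100 requests total.
with ThreadPoolExecutor(max_workers=20) as pool:
    wall_start = time.perf_counter()
    latencies = list(pool.map(fake_request, range(100)))
    wall_elapsed = time.perf_counter() - wall_start

print(f"requests completed: {len(latencies)}")
print(f"requests/sec: {len(latencies) / wall_elapsed:.0f}")
```

Because the requests overlap, the wall-clock time is far less than the sum of the individual latencies, which is the effect a load-testing tool measures at much larger scale.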
Important Web Application Performance Metrics
- Average Application Response Time – In simple terms, the average application response time is the amount of time an application takes to respond to a user’s request. An application should be tested under different circumstances (e.g. number of concurrent users, number of transactions requested). Typically, this metric is measured from the start of the request to the time the last byte is sent.
- Peak Response Time – While the average response time gives you a sense of performance from the user’s perspective, the peak response time metric will help identify areas where performance could be improved. For example, it is possible for the average response time to be one second, but within that average there could be an element taking 10 seconds to load (the peak response time) while other elements are transferring in less than a second. The peak response time metric pinpoints slow elements within the application that should be investigated and corrected.
- Error Rate – The error rate is a calculation that measures the percentage of problem requests (errors) relative to all requests. Error rates should be measured under different loads. An acceptable error rate may differ from company to company, but this metric helps businesses pinpoint when the application is likely to fail.
- Requests Per Second – Requests per second measures how many actions are being sent to the target server every second. Any resource on the page (HTML pages, images, multimedia files, dynamic resources from databases, etc.) is considered a request. Requests per second will vary greatly depending on the type of resource requested and how that request is processed.
- Throughput – Throughput is how many units of information a system can process in a set period of time. It’s a measurement of how much bandwidth is required to handle a load (concurrent users and requests). Higher throughput means the application can handle a larger number of concurrent users.
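As a simple illustration of how these metrics relate, the figures below are computed from a small set of made-up response-time samples (all numbers are hypothetical):

```python
# Each sample is (response_time_seconds, succeeded) for one request,
# collected over an assumed 10-second measurement window.
samples = [(0.2, True), (0.4, True), (10.0, True), (0.3, False), (0.1, True)]
test_duration_s = 10.0

times = [t for t, _ in samples]
average_response = sum(times) / len(times)                          # average response time
peak_response = max(times)                                          # peak response time
error_rate = sum(1 for _, ok in samples if not ok) / len(samples)   # error rate
requests_per_second = len(samples) / test_duration_s                # requests per second

print(f"average response time: {average_response:.2f} s")  # 2.20 s
print(f"peak response time:    {peak_response:.2f} s")     # 10.00 s
print(f"error rate:            {error_rate:.0%}")          # 20%
print(f"requests per second:   {requests_per_second:.1f}") # 0.5
```

Note how the one 10-second outlier barely moves the average but dominates the peak, which is exactly why peak response time is tracked separately from the average.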