Can You Handle The Traffic? - Part 2

Last month we talked about the importance of handling web traffic. This month we'll take a look at the importance of testing and how poor testing can hurt you in the long run.

Contents
- Poor Testing Practices
- Binary Evolution's Testing Practices
- Prepare for the Worst
- Conclusion
- References

Poor Testing Practices

In order to verify that no page on the web site takes more than 8 seconds to load, and to ensure the reliability of the site, appropriate testing must be performed. Each web site has its own method for performing quality assurance. The goal of such testing is clear: to simulate how the web site will behave when put into production. Several of the testing techniques that have been used at major web sites do not accurately predict the web site's quality and reliability.
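The "8 second rule" mentioned above is easy to check mechanically. The following sketch is not part of any tool discussed in this article; it simply times a handful of pages and flags any that exceed the threshold. The URLs shown are placeholders to be replaced with the pages of the site under test.

```python
#!/usr/bin/env python3
"""Minimal sketch: time a set of pages against the "8 second rule"."""
import time
import urllib.request

PAGES = [
    "https://www.example.com/",           # hypothetical URLs; replace with
    "https://www.example.com/catalog",    # the pages of the site under test
]
LIMIT = 8.0  # the "8 second rule" threshold, in seconds

for url in PAGES:
    start = time.time()
    try:
        with urllib.request.urlopen(url, timeout=30) as resp:
            resp.read()                   # include the full download time
        elapsed = time.time() - start
        status = "OK" if elapsed <= LIMIT else "TOO SLOW"
        print(f"{url}: {elapsed:.2f}s [{status}]")
    except Exception as exc:              # network error, timeout, etc.
        print(f"{url}: FAILED ({exc})")
```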
Binary Evolution's Testing Practices

Well-designed web site testing detects problems on the development computer before they appear on the live web site, reducing damages significantly. Before Binary Evolution begins any testing, we work with the client to establish a few basic requirements and principles. Web site operators must adequately test the application they have written for defects, performance and reliability before placing the web site into production. Web site testing requires at least two separate computers: one for development and a second for the live operation of the web site. The development computer is used to create the initial web site and to add new features, and any update to the live web site must first be tested on the development machine. Web site testing is performed using simulation techniques that reflect expected usage and real-world load as closely as possible. We verify that the web site can handle double the maximum expected traffic during peak usage. When testing for a client that has no existing online presence, we create scenarios that reflect how the site will be used under real-world conditions.

At Binary Evolution, we use specialized software called a proxy server, which captures all information sent between the client browser and the web server: the requested URL, arguments, POST data, and cookies. Once the browser has been configured to use the proxy, several sample web sessions are generated. After capturing is complete, the data in the proxy log is separated into several files, one for each web session. Our load testing tool, VeloMeter Pro (https://www.velometer.com), then reads the captured sessions and uses them to simulate users.

[Figure 5. VeloMeter's User Friendly Interface]

VeloMeter has been awarded the WebTechniques trophy for Best Web Site Management Tool. Our Java-based tool simulates multiple users using Java's threading capabilities. POST, GET and cookie protocols are supported, and Secure Socket Layer (SSL) support is available when the tool is used with an SSL-enabled proxy server. During performance testing, web usage can be amplified by specifying the number of concurrent users to simulate (Figure 5) in order to predict how much load a web site can handle before it exceeds the "8 second rule". Results can be viewed graphically (Figure 6) or exported to an Excel spreadsheet for further analysis.

[Figure 6. VeloMeter's Graphical Results]

After simulating user load with VeloMeter, we verify that the traffic the web site can handle is double the maximum expected. If so, the web site is given our seal of approval. For existing sites, the load testing procedure is slightly different: we take advantage of VeloMeter's ability to read the site's daily access log. If the web site uses the POST or cookie protocols, additional sample sessions are generated using the proxy server method described above.

During and after testing, we try to capture as much information about the web site's usage as possible. Ideally, all requests and associated data should be logged so that web usage can be fully reproduced. Unfortunately, today's web servers do not provide a mechanism for capturing all data associated with an HTTP request, so at Binary Evolution we run custom software that listens to and records all web traffic. Recording web usage helps facilitate the simulation of real-world conditions, and such data also greatly helps to recreate the conditions leading up to a web site crash.
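As an illustration of the capture step described above, the sketch below is a minimal logging proxy written for this article; it is not Binary Evolution's actual software. It assumes the browser's HTTP proxy setting points at localhost port 8080, and it appends each request's method, URL, Cookie header and POST body to a hypothetical capture.log file. It handles plain HTTP only; as noted above, capturing SSL traffic requires an SSL-enabled proxy, and upstream errors are simply reported as 502.

```python
#!/usr/bin/env python3
"""Minimal sketch of a capturing proxy (illustrative only)."""
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

LOG_FILE = "capture.log"   # hypothetical file name

class CaptureProxy(BaseHTTPRequestHandler):
    def _relay(self, body=None):
        # Record what the browser sent: method, full URL, cookies, body.
        with open(LOG_FILE, "a") as log:
            log.write(f"{self.command} {self.path} "
                      f"COOKIE={self.headers.get('Cookie', '')} "
                      f"BODY={body.decode('latin-1') if body else ''}\n")
        # Forward the request to the real server and relay the reply.
        req = urllib.request.Request(self.path, data=body,
                                     method=self.command)
        if "Cookie" in self.headers:
            req.add_header("Cookie", self.headers["Cookie"])
        try:
            with urllib.request.urlopen(req, timeout=30) as resp:
                data = resp.read()
                self.send_response(resp.status)
                self.send_header("Content-Type",
                                 resp.headers.get("Content-Type", "text/html"))
                self.send_header("Content-Length", str(len(data)))
                self.end_headers()
                self.wfile.write(data)
        except Exception:
            self.send_error(502)   # upstream failure; fine for a sketch

    def do_GET(self):
        self._relay()

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        self._relay(self.rfile.read(length))

if __name__ == "__main__":
    ThreadingHTTPServer(("localhost", 8080), CaptureProxy).serve_forever()
```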
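The replay step can be sketched in the same spirit. The script below is a simplified stand-in for a tool like VeloMeter, not the tool itself: it starts a number of threads, each acting as one simulated user walking through a captured list of URLs, and reports how many requests failed or broke the "8 second rule". The URL list and user count are placeholder values; the URLs could equally be taken from the GET lines of the site's access log.

```python
#!/usr/bin/env python3
"""Minimal sketch of replaying captured sessions under concurrent load."""
import threading
import time
import urllib.request

URLS = [
    "https://www.example.com/",                  # hypothetical captured session
    "https://www.example.com/search?q=linux",
]
CONCURRENT_USERS = 50   # if the expected peak is 25 users, test double that
LIMIT = 8.0             # the "8 second rule"

results = []            # list of (url, seconds or None)
lock = threading.Lock()

def simulated_user():
    """One simulated user makes a single pass through the session."""
    for url in URLS:
        start = time.time()
        try:
            with urllib.request.urlopen(url, timeout=60) as resp:
                resp.read()
            elapsed = time.time() - start
        except Exception:
            elapsed = None                       # request failed
        with lock:
            results.append((url, elapsed))

threads = [threading.Thread(target=simulated_user)
           for _ in range(CONCURRENT_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

failed = sum(1 for _, s in results if s is None)
slow = sum(1 for _, s in results if s is not None and s > LIMIT)
print(f"{len(results)} requests, {failed} failed, "
      f"{slow} slower than {LIMIT} seconds")
```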
Prepare for the Worst

Even after a company has adopted good web testing practices, there is always the possibility of a server crash. The more complex the web application, the more likely some unforeseen combination of parameters will take down the site. For example, one of our clients had a small mistake in their production web server's configuration file. The mistake triggered a bug in the web server software. Unfortunately, the bug did not present itself until a few HEAD requests (a part of the HTTP protocol commonly used by search engines) were made for a CGI script. Days later, after processing hundreds of thousands of requests, the web server eventually crashed. Only after careful analysis of the access log and further testing was the problem resolved.

No software is bug free. A plan of action for the worst-case scenario should be written up before the web site goes live, and the steps in that plan should correspond to the amount of damage a crash would cause. Depending on the number of potential dollars lost, a monitoring service can be used to ensure that the web site is running (see the sketch at the end of this section). It is also advisable to choose an ISP with 24/7 support and to subscribe to a monthly performance consulting service. When updating an existing web site, the plan of action should also include steps to quickly fall back to the previous working revision.

Be prepared to record all information about the web server machine when a crash happens. The more information that can be gathered, the easier it will be to recreate the events that led up to the crash. The most valuable clues will be found in the web server's access log, which lists the requests that caused the web application to fail. Ideally, all HTTP traffic to the web site should be recorded: POST, GET, HEAD, SSL, and so on.
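The sketch referred to above is a minimal illustration of the kind of availability check a monitoring service automates: poll the home page at a fixed interval and record the result with a timestamp. The site URL, interval and log file name are placeholders, and a real plan of action would add alerting (pager or e-mail) on repeated failures.

```python
#!/usr/bin/env python3
"""Minimal sketch of a periodic availability check (illustrative only)."""
import time
import urllib.request
from datetime import datetime

SITE = "https://www.example.com/"   # hypothetical site to watch
INTERVAL = 60                       # seconds between checks
LOG_FILE = "uptime.log"             # hypothetical log file

def check_once():
    """Request the home page once and report UP (with timing) or DOWN."""
    start = time.time()
    try:
        with urllib.request.urlopen(SITE, timeout=30) as resp:
            resp.read()
        return f"UP {time.time() - start:.2f}s"
    except Exception as exc:
        return f"DOWN ({exc})"

if __name__ == "__main__":
    while True:
        line = f"{datetime.now().isoformat()} {check_once()}"
        print(line)
        with open(LOG_FILE, "a") as log:
            log.write(line + "\n")
        time.sleep(INTERVAL)
```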
Conclusion

Over the last couple of years, Binary Evolution has developed a number of methods to ensure web site reliability and performance. If we set aside the technical specifics, our web site design and testing strategy boils down to a few key principles: test on a development machine before changes reach the live site, simulate real-world usage at double the expected peak load, record as much web traffic as possible, and prepare a plan of action for the worst case. By following these principles, you will learn from the mistakes made by other e-commerce companies and increase your chance of becoming the next big dot-com success story.

References

Binary Evolution, Inc., info@binaryevolution.com, https://www.binaryevolution.com
Connie Guglielmo, acmewriter@aol.com, "Crash and Get Burned", Inter@ctive Week, September 6, 1999, https://www4.zdnet.com/intweek/stories/news/0,4164,2327453,00.html
Forrester Research, Inc., info@forrester.com, https://www.forrester.com
Keynote Systems, info@keynote.com, https://www.keynote.com
Tim Wilson, tbwilson@cmp.com, "The Cost of Downtime", Internet Week, July 30, 1999, https://www.internetwk.com/lead/lead073099.htm
Zona Research, Inc., info@zonaresearch.com, "The Economic Impact of Unacceptable Web Site Download Speeds", November 1999, https://www.zonaresearch.com
Zona Research, Inc., info@zonaresearch.com, "The Need for Speed", June 1999, https://www.zonaresearch.com

Other Articles by Alex Shah
Can You Handle The Traffic? - Part 1