NEAL NELSON BENCHMARK LABORATORY

Server Throughput - Test Methodology

A server's transaction processing throughput is tested by first building a relational database on the system and then measuring the maximum transaction processing rate achieved as the system is subjected to various levels of simultaneous user requests. MySQL is used as the RDBMS, and the test transactions simulate credit card purchases at a service station. The transactions are submitted as HTML requests from 32 client machines to the Apache2 web server software. There is one transaction per web request page.
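
For illustration only, the sketch below shows the shape of one such request in Python, assuming a hypothetical /purchase page on the Apache2 server that performs a single MySQL transaction per page request. The host name, path and field names are placeholders, not the actual test harness.

    # One simulated client request: one web page request equals one
    # database transaction on the server side. Host, path and field
    # names are illustrative assumptions.
    import urllib.parse
    import urllib.request

    SERVER = "http://server-under-test"   # assumed host name

    def submit_purchase(card_number: str, amount_cents: int, station_id: int) -> str:
        """Submit one simulated credit card purchase as a single HTML page request."""
        params = urllib.parse.urlencode({
            "card": card_number,
            "amount": amount_cents,
            "station": station_id,
        })
        with urllib.request.urlopen(f"{SERVER}/purchase?{params}", timeout=30) as resp:
            return resp.read().decode()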

All servers are loaded with the same operating system, SUSE Linux Enterprise Server version 10, and all machines are configured with exactly the same 'tunables'.
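
One way such consistency might be confirmed, sketched below for illustration, is to fingerprint each machine's kernel settings and compare the results. The use of sysctl -a here is an illustrative choice, not a description of the laboratory's procedure.

    # Hash each server's kernel settings so two machines can be compared
    # quickly. In practice machine-specific keys (host name, boot id)
    # would need to be filtered out before comparing.
    import hashlib
    import subprocess

    def tunables_fingerprint() -> str:
        """Return a hash of the sorted `sysctl -a` output for this machine."""
        out = subprocess.run(["sysctl", "-a"], capture_output=True, text=True).stdout
        settings = sorted(line for line in out.splitlines() if "=" in line)
        return hashlib.sha256("\n".join(settings).encode()).hexdigest()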

Tests are run at steady state for 60 to 100 minutes, and the reported throughput is measured during the last 30 minutes of the test interval.
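
As an illustration, the reported figure can be derived from per-transaction completion timestamps roughly as follows; the function and variable names are placeholders, not the laboratory's tooling.

    # Count only transactions that completed during the final 30 minutes
    # of the run and express the result as transactions per second.
    def reported_throughput(completion_times: list[float], run_end: float,
                            window_secs: float = 30 * 60) -> float:
        """Transactions per second over the last window_secs of the interval."""
        window_start = run_end - window_secs
        completed_in_window = sum(1 for t in completion_times if t >= window_start)
        return completed_in_window / window_secs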

A measurement sequence includes tests with 100, 150, 200, 250, 300, 350, 400, 450 and 500 simultaneous users. In all cases the transactions are submitted with no "think delay": as soon as one transaction has finished processing, the simulated web client immediately submits another transaction to the server.
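
A minimal closed-loop driver along these lines might look like the following sketch, which reuses the hypothetical submit_purchase request function shown earlier; the threading approach and names are illustrative only.

    # Each simulated user issues the next request the moment the previous
    # one returns (no think delay), and each user-count level is held at
    # steady state for a fixed duration.
    import threading
    from typing import Callable

    USER_COUNTS = [100, 150, 200, 250, 300, 350, 400, 450, 500]

    def user_loop(stop: threading.Event, submit: Callable[[], object]) -> None:
        """One simulated user with no think delay between transactions."""
        while not stop.is_set():
            submit()   # the next transaction starts as soon as this one finishes

    def run_level(users: int, duration_secs: float,
                  submit: Callable[[], object]) -> None:
        """Run one user-count level at steady state for duration_secs."""
        stop = threading.Event()
        threads = [threading.Thread(target=user_loop, args=(stop, submit))
                   for _ in range(users)]
        for t in threads:
            t.start()
        stop.wait(duration_secs)   # hold the load for the whole interval
        stop.set()                 # then tell every simulated user to finish
        for t in threads:
            t.join()

A full run would then step through USER_COUNTS, calling run_level once per level with the request function and the 60 to 100 minute duration described above.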

Cached disk I/O presents a problem for many benchmarks. Users with larger memory sizes and smaller disk working sets will experience fast "disk I/O" because virtually all of the disk I/O requests will be satisfied from cache. Users with larger disk working sets or smaller memory sizes will experience slower disk I/O because there will be a large amount of actual physical disk I/O.

A benchmark with a fixed-size test database can give the wrong answer about disk I/O speed. If the test database fits in the disk cache, the benchmark will report a fast result. If the test database overflows the memory cache, the benchmark may report a slow result.

We address this problem by changing the size of the test database as the benchmark progresses. At lower user counts the size of the test database working set is small. As the user count increases the size of the test database working set gets larger. Thus our test results show the speed of the server with cached I/O at the lower user counts and the speed with physical disk I/O at the higher user counts.
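
The sizing rule below is purely illustrative: rows_per_user, row_bytes and the 4 GB cache figure are assumptions chosen to show how a low user count fits in cache while a high user count overflows it, not the actual database sizes used in the benchmark.

    # Working set grows in proportion to the user count, so low levels fit
    # in the disk buffer cache and high levels force physical disk I/O.
    def working_set_bytes(users: int, rows_per_user: int = 50_000,
                          row_bytes: int = 512) -> int:
        """Approximate bytes touched by one measurement level."""
        return users * rows_per_user * row_bytes

    CACHE_BYTES = 4 * 1024**3   # assume a 4 GB disk buffer cache

    for users in (100, 300, 500):
        ws = working_set_bytes(users)
        print(users, "users:", ws // 1024**2, "MB,",
              "cached" if ws <= CACHE_BYTES else "physical disk I/O")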

The size of the server's disk buffer cache is also reflected in our test results, since machines with larger caches will continue to show the faster cached transaction rates at higher user counts.

White papers that describe this procedure in more detail are available on this site.


Copyright © 2006-2008
Neal Nelson & Associates
Trademarks which may be mentioned on this site are the property of their owners.