DHCP Performance Guide 2012 Internet Systems Consortium, Inc. ("ISC") Tomasz Mrugalski BIND 10 is a framework that features Domain Name System (DNS) and Dynamic Host Configuration Protocol (DHCP) software with development managed by Internet Systems Consortium (ISC). This document describes various aspects of DHCP performance, measurements and tuning. It covers BIND 10 DHCP (codename Kea), existing ISC DHCP4 software, perfdhcp (a DHCP performance measurement tool) and other related topics. This is a companion document for BIND 10 version &__VERSION__;. Preface
Acknowledgements ISC would like to acknowledge the generous support for the development of the BIND 10 DHCPv4 and DHCPv6 components provided by Comcast.
Introduction This document is in the early stages of development. It is expected to grow significantly in the near future. It will cover topics such as database backend performance measurements, tools, and the pros and cons of various optimization techniques. ISC DHCP 4.x TODO: Write something about ISC DHCP4 here. Kea
Backend performance evaluation Kea will support several different database backends, using both popular databases (such as MySQL or SQLite) and custom-developed solutions (such as an in-memory database). To aid in the choice of backend, the BIND 10 source code features a set of performance microbenchmarks. Written in C/C++, these are small tools that simulate expected DHCP server behaviour and evaluate the performance of the databases under consideration. As implemented, the benchmarks do not really simulate DHCP operation, but rather exercise a set of primitives that can be used by a real server; for this reason they are called micro-benchmarks. Although there are many operations and data types that a server could store in a database, the most frequently used data type is lease information. The information held for IPv4 and IPv6 leases differs slightly, but the performance difference between IPv4 and IPv6 lease operations is expected to be minimal. Therefore each test uses the lease4 table (in which IPv4 leases are stored) for performance measurements. All benchmarks are implemented as single-threaded applications that use a single database connection. The benchmarks are stored in the tests/tools/dhcp-ubench directory of the BIND 10 source tree. This directory contains simplified prototypes for the various database backends that are planned or considered as possibilities for BIND 10 DHCP. Although trivial now, the benchmarks are expected to evolve into useful tools that will allow users to measure performance in their specific environments. Currently the following benchmarks are implemented: in-memory + flat file, SQLite, and MySQL. As the benchmarks require additional (sometimes heavy) dependencies, they are not built by default; indeed, their build system is completely separate from that of the rest of BIND 10. It is anticipated that they will eventually be merged into the rest of BIND 10, but that is a low priority for now.
All benchmarks follow the same pattern:

1. Prepare operation (connect to a database, create a file, etc.)
2. Measure timestamp 0
3. Commit a new lease4 record (repeated N times)
4. Measure timestamp 1
5. Search for a random lease4 record (repeated N times)
6. Measure timestamp 2
7. Update an existing lease4 record (repeated N times)
8. Measure timestamp 3
9. Delete an existing lease4 record (repeated N times)
10. Measure timestamp 4
11. Print out statistics, based on N and the measured timestamps.

Although this approach does not attempt to simulate actual DHCP server operation, which involves a mix of all these steps, it answers questions about basic database strengths and weak points. In particular, it can show the impact of specific database optimizations, such as changing the engine or optimizing for writes or reads. The framework attempts to do the same amount of work for every backend, thus allowing a fair comparison between them.
MySQL backend The MySQL backend requires the MySQL client development libraries. It uses the mysql_config tool (similar to pkg-config) to discover the required compilation and linking options. To install the required packages on Ubuntu, use the following command: $ sudo apt-get install mysql-client mysql-server libmysqlclient-dev Make sure that the MySQL server is running and that your setup includes a user that is able to modify the database used for the tests. Before running the tests, you need to initialize your database. You can use the mysql.schema script for that purpose. WARNING: It will drop the existing Kea database. Do not run this on your production server. Assuming your MySQL user is "kea", you can initialize your test database with: $ mysql -u kea -p < mysql.schema After the database is initialized, you are ready to run the test: $ ./mysql_ubench or $ ./mysql_ubench > results-mysql.txt Redirecting output to a file is important, because a single character is printed for each operation to show progress. If you have a slow terminal, this may considerably affect test performance. On the other hand, printing something after each operation is necessary, as poor database settings may slow operations down to around 20 per second. (The observant user is expected to note that the initial dots are printed too slowly and abort the test.) Currently all default parameters are hardcoded, but the default values can be overridden using command-line switches. Although all benchmarks take the same list of parameters, some of them are specific to a given backend. To get a list of supported parameters, run the benchmark with the "-h" option: $ ./mysql_ubench -h This is a benchmark designed to measure expected performance of several backends.
This particular version identifies itself as follows: MySQL client version is 5.5.24 Possible command-line parameters:

-h - help (you are reading this)
-m hostname - specifies the MySQL server to connect to (MySQL backend only)
-u username - specifies the MySQL user name (MySQL backend only)
-p password - specifies the MySQL password (MySQL backend only)
-f name - database or file name (MySQL, SQLite and memfile)
-n integer - number of test repetitions (MySQL, SQLite and memfile)
-s yes|no - synchronous/asynchronous operation (MySQL, SQLite and memfile)
-v yes|no - verbose mode (MySQL, SQLite and memfile)
-c yes|no - should compiled statements be used (MySQL only)

Synchronous operation requires the database backend to physically store changes to disk before proceeding. This property ensures that no data is lost in the case of server failure. Unfortunately, it slows operation considerably. Asynchronous mode allows the database to write data at a later time (usually controlled by the database engine or the OS disk buffering mechanism).
MySQL tweaks One parameter that has a huge impact on performance is the choice of backend engine. You can get a list of engines supported by your MySQL installation by issuing > show engines; in your mysql client. Two notable engines are MyISAM and InnoDB. mysql_ubench uses MyISAM for synchronous mode and InnoDB for asynchronous mode. Use '-s yes|no' to choose between synchronous and asynchronous operation. Another parameter that affects performance is the use of precompiled statements. In the basic approach, the actual SQL query is passed as a text string that is then parsed by the database engine. The alternative is a so-called precompiled statement: the SQL query is compiled once and specific values are bound to it. In each subsequent iteration the query remains the same and only the bound values change (e.g. searching for a different address). The use of basic or precompiled statements is controlled with '-c no|yes'.
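For illustration, both tweaks can be exercised directly from the mysql client (the lease4 table name comes from mysql.schema; the addr column name is an assumption made for this example). Note that the benchmark itself uses the client library's prepared-statement API rather than these SQL-level commands; the snippet only illustrates the concept.

```sql
-- Switch the storage engine used by the lease4 table
-- (mysql_ubench effectively chooses MyISAM or InnoDB based on -s):
ALTER TABLE lease4 ENGINE = InnoDB;

-- A server-side prepared statement: parsed once, then executed
-- repeatedly with different bound values.
PREPARE get_lease FROM 'SELECT * FROM lease4 WHERE addr = ?';
SET @addr = 3232235777;  -- an IPv4 address as a 32-bit integer
EXECUTE get_lease USING @addr;
DEALLOCATE PREPARE get_lease;
```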
SQLite-ubench The SQLite backend requires both the sqlite3 development and run-time packages. Their names may vary from system to system, but on Ubuntu 12.04 they are called sqlite3 and libsqlite3-dev. To install them, use the following command: > sudo apt-get install sqlite3 libsqlite3-dev Before running the test, the database has to be created. Use the following command for that: > cat sqlite.schema | sqlite3 sqlite.db A new database called sqlite.db will be created. That is the default name used by the sqlite_ubench test. If you prefer a different name, make sure you update sqlite_ubench.cc accordingly. Once the database is created, you can run the tests: > ./sqlite_ubench or > ./sqlite_ubench > results-sqlite.txt
SQLite tweaks To modify the default sqlite_ubench parameters, command-line switches can be used. The currently supported switches are (default values given in brackets):

-f filename - name of the database file ("sqlite.db")
-n num - number of iterations (100)
-s yes|no - should the operations be performed in a synchronous (yes) or asynchronous (no) manner (yes)
-v yes|no - verbose mode. Should the test print out progress? (yes)
-c yes|no - precompiled statements. Should the SQL statements be precompiled?

SQLite can run in asynchronous or synchronous mode, controlled by the "synchronous" parameter. It is set using the SQLite command: PRAGMA synchronous = ON|OFF Another tweakable feature is the journal mode, which can be set to one of several modes of operation. Its value can be modified in SQLite_uBenchmark::connect(). See http://www.sqlite.org/pragma.html#pragma_journal_mode for a detailed explanation. sqlite_ubench supports precompiled statements. Use '-c no|yes' to select which should be used: basic SQL queries (no) or precompiled statements (yes).
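As a sketch, the two pragmas mentioned above can be issued against the database like this (the values chosen here are just examples):

```sql
PRAGMA synchronous = OFF;    -- asynchronous mode, as selected with '-s no'
PRAGMA journal_mode = WAL;   -- one of the several available journal modes
```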
memfile-ubench The memfile backend is a custom backend that somewhat mimics the operation of ISC DHCP4. It implements in-memory storage using standard C++ and Boost mechanisms (std::map and boost::shared_ptr<>). All database changes are also written to a lease file, which is strictly write-only. This approach takes advantage of the fact that appending to a file is much faster than modifying data in the middle of it, which often requires rewriting large parts of the whole file, not just the changed fragment.
memfile tweaks To modify the default memfile_ubench parameters, command-line switches can be used. The currently supported switches are (default values given in brackets):

-f filename - name of the database file ("dhcpd.leases")
-n num - number of iterations (100)
-s yes|no - should the operations be performed in a synchronous (yes) or asynchronous (no) manner (yes)
-v yes|no - verbose mode. Should the test print out progress? (yes)

memfile can run in asynchronous or synchronous mode, controlled by the sync parameter. In synchronous mode it uses fflush() and fsync() to make sure that data is not buffered and is physically stored on disk.
Basic performance measurements This section contains sample results of the backend performance measurements, taken using the microbenchmarks. Tests were conducted on a reasonably powerful machine:

CPU: Quad-core Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz (8 logical cores)
HDD: 1.5TB Seagate Barracuda ST31500341AS 7200rpm, ext4 partition
OS: Ubuntu 12.04, running kernel 3.2.0-26-generic SMP x86_64
compiler: g++ (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3
MySQL version: 5.5.24
SQLite version: 3.7.9 (source id 2011-11-01 00:52:41 c7c6050ef060877ebe77b41d959e9df13f8c9b5e)

The benchmarks were run without using precompiled statements. The code was compiled with the -O0 flag (no code optimizations). Each run was executed once. Two series of measurements were made, synchronous and asynchronous. As these modes offer radically different performance, synchronous mode was run for one thousand repetitions and asynchronous mode for one hundred thousand repetitions.

Synchronous results (basic)

Backend   Operations  Create [s]  Search [s]  Update [s]  Delete [s]  Average [s]
MySQL     1,000       31.604      0.117       27.964      27.695      21.845
SQLite    1,000       61.421      0.033       59.477      56.034      44.241
memfile   1,000       38.224      0.001       38.041      38.017      28.571
The following parameters were measured for asynchronous mode. MySQL and SQLite were run with one hundred thousand repetitions; memfile was run for one million repetitions due to its much higher performance.

Asynchronous results (basic)

Backend   Operations  Create [s]  Search [s]  Update [s]  Delete [s]  Average [s]
MySQL     100,000     10.585      10.386      10.062      8.890       9.981
SQLite    100,000     3.710       3.159       2.865       2.439       3.044
memfile   1,000,000   1.300       0.039       1.307       1.278       0.981
The presented performance results can be converted into operations-per-second metrics. It should be noted that due to the large differences between various operations (sometimes over three orders of magnitude), it is difficult to create a simple, readable chart with that data.

Estimated basic performance

Backend          Create [oper/s]  Search [oper/s]  Update [oper/s]  Delete [oper/s]  Average [oper/s]
MySQL (async)    9447.47          9627.97          9938.00          11248.34         10065.45
SQLite (async)   26951.59         31654.29         34899.70         40993.59         33624.79
memfile (async)  76944.27         2542588.35       76504.54         78269.25         693576.60
MySQL (sync)     31.64            8575.45          35.76            36.11            2169.74
SQLite (sync)    16.28            20045.37         16.81            17.85            7524.08
memfile (sync)   26.16            1223990.21       26.29            26.30            306017.24
Basic performance measurements Graphical representation of the basic performance results presented in the table above.
Optimized performance measurements This section contains sample results of the backend performance measurements, taken using the microbenchmarks. Tests were conducted on a reasonably powerful machine:

CPU: Quad-core Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz (8 logical cores)
HDD: 1.5TB Seagate Barracuda ST31500341AS 7200rpm, ext4 partition
OS: Ubuntu 12.04, running kernel 3.2.0-26-generic SMP x86_64
compiler: g++ (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3
MySQL version: 5.5.24
SQLite version: 3.7.9 (source id 2011-11-01 00:52:41 c7c6050ef060877ebe77b41d959e9df13f8c9b5e)

The benchmarks were run with precompiled statements enabled. The code was compiled with the -Ofast flag (optimize compilation for speed). Each run was repeated three times and the measured values were averaged. Again the benchmarks were run in two series, synchronous and asynchronous. As these modes offer radically different performance, synchronous mode was run for one thousand repetitions and asynchronous mode for one hundred thousand repetitions.

Synchronous results (optimized)

Backend   Operations  Create [s]  Search [s]  Update [s]  Delete [s]  Average [s]
MySQL     1,000       27.887      0.106       28.223      27.696      20.978
SQLite    1,000       61.299      0.015       59.648      61.098      45.626
memfile   1,000       39.564      0.001       39.543      39.326      29.608
The following parameters were measured for asynchronous mode. MySQL and SQLite were run with one hundred thousand repetitions; memfile was run for one million repetitions due to its much higher performance.

Asynchronous results (optimized)

Backend   Operations  Create [s]  Search [s]  Update [s]  Delete [s]  Average [s]
MySQL     100,000     8.507       9.698       7.785       8.326       8.579
SQLite    100,000     1.562       0.949       1.513       1.502       1.382
memfile   1,000,000   1.302       0.038       1.306       1.263       0.977
The presented performance results can be converted into operations-per-second metrics. It should be noted that due to the large differences between various operations (sometimes over three orders of magnitude), it is difficult to create a simple, readable chart with the data.

Estimated optimized performance

Backend          Create [oper/s]  Search [oper/s]  Update [oper/s]  Delete [oper/s]  Average [oper/s]
MySQL (async)    11754.84         10311.34         12845.35         12010.24         11730.44
SQLite (async)   64005.90         105391.29        66075.51         66566.43         75509.78
memfile (async)  76832.16         2636018.56       76542.50         79188.81         717145.51
MySQL (sync)     35.86            9461.10          35.43            36.11            2392.12
SQLite (sync)    16.31            67036.11         16.76            16.37            16771.39
memfile (sync)   25.28            3460207.61       25.29            25.43            865070.90
Optimized performance measurements Graphical representation of the optimized performance results presented in the table above.
Conclusions The improvement gained by introducing support for precompiled statements in MySQL is somewhat disappointing - between 6 and 29%. On the other hand, the improvement in SQLite is surprisingly high - its efficiency more than doubles. Compiled statements do not have any measurable impact on synchronous operations. That is as expected, because there the major bottleneck is disk performance. Compilation flags yield surprisingly high improvements for C++ STL code: the memfile backend is almost twice as fast in some operations. If synchronous operation is required, the current performance results are likely to be deemed inadequate. The limiting factor here is disk access time. Even migrating to a high-performance 15,000rpm disk is expected to only roughly double the number of leases per second compared to the current results. The reason is that writing a file to disk requires at least two writes: the new content and the modification of the file's i-node. The easiest way to boost synchronous performance is to switch to SSD disks. Memory-backed RAM disks are also a viable solution, although care should be taken to engineer a proper backup strategy for them. While the custom-made backend (memfile) provides the best performance, it carries over all the limitations of the ISC DHCP4 code: there are no external tools to query or change the database, maintenance requires deep knowledge, etc. These flaws do not apply to a proper database backend, such as MySQL or SQLite: both offer third-party tools for administrative tasks and are well documented and maintained. However, SQLite's support for concurrent access is limiting in certain cases. Since all three investigated backends more than meet the expected performance requirements, it is recommended to use MySQL as the first concrete database backend. Should this choice be rejected for any reason, the second recommended choice is SQLite.
It should be emphasized that the obtained measurements indicate only database performance; they cannot be directly translated into the leases-per-second or queries-per-second performance expected of an actual server. The DHCP server must do much more than just query the database to properly process a client's message. The provided results should be considered only rough estimates. They can also be used for relative comparisons between backends.
Possible further optimizations For the basic measurements, the code was compiled with the -g -O0 flags. For the optimized measurements, the benchmarking code was compiled with -Ofast (optimize for speed). In both cases, the same backend (MySQL or SQLite) library was used. It may be useful to recompile the libraries (or the whole server, in the case of MySQL) with -Ofast. There are many MySQL parameters that various sources recommend changing to improve performance; these were not investigated further. Currently all operations are conducted on a one-by-one basis, with each operation treated as a separate transaction. Grouping N operations together will potentially bring an almost N-fold increase in synchronous operations. Such a feature is present in ISC DHCP4, where it is called cache-threshold. Extending the benchmark in this regard should be considered. This affects only write operations (insert, update and delete); read operations (search) are expected to be barely affected. A multi-threaded or multi-process benchmark may be considered in the future. That may be somewhat difficult, as only some backends support concurrent access.
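The transaction-grouping idea could be sketched in SQL as follows (the addr column name is an assumption for the example; in the benchmark code this would wrap the inner loop of a stage):

```sql
BEGIN TRANSACTION;
INSERT INTO lease4 (addr) VALUES (3232235777);
INSERT INTO lease4 (addr) VALUES (3232235778);
-- ... all N grouped operations share a single synchronous commit ...
COMMIT;
```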
perfdhcp TODO: Write something about perfdhcp here.