DHCP micro-benchmarks
-----------------------

Copyright (C) 2012 Internet Systems Consortium, Inc. ("ISC")

by Tomasz Mrugalski

This directory contains simplified prototypes for various DB back-ends
that are planned or considered as a backend engine for BIND10 DHCP.
Although trivial now, they are expected to evolve into useful tools
that will allow users to measure performance in their specific environments.

Currently the following benchmarks are planned:
- in memory+flat file
- SQLite
- MySQL

As they require additional (sometimes heavy) dependencies, they are not
built by default. In fact, their build system is completely separate.
It will eventually be merged with the main BIND10 makefile system, but
that is a low priority for now.

All planned benchmarks follow the same pattern (a rough code sketch
follows the list below):

1. Prepare operation (connect to a database, create a file, etc.)
2. Measure timestamp 0
3. commit new lease4 (step repeated X times)
4. Measure timestamp 1
5. search for random lease4 (step repeated X times)
6. Measure timestamp 2
7. update existing lease4 (step repeated X times)
8. Measure timestamp 3
9. delete existing lease4 (step repeated X times)
10. Measure timestamp 4
11. Print out statistics, based on X and measured timestamps.
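
Expressed as C++, the pattern boils down to roughly the following sketch
(the names are illustrative only; the actual interface lives in benchmark.h
and benchmark.cc in this directory):

#include <sys/time.h>

void runBenchmark(int num) {
    struct timeval ts[5];

    // 1. prepare operation (connect to a database, create a file etc.)
    gettimeofday(&ts[0], NULL);                  // timestamp 0
    for (int i = 0; i < num; ++i) { /* commit new lease4 */ }
    gettimeofday(&ts[1], NULL);                  // timestamp 1
    for (int i = 0; i < num; ++i) { /* search for random lease4 */ }
    gettimeofday(&ts[2], NULL);                  // timestamp 2
    for (int i = 0; i < num; ++i) { /* update existing lease4 */ }
    gettimeofday(&ts[3], NULL);                  // timestamp 3
    for (int i = 0; i < num; ++i) { /* delete existing lease4 */ }
    gettimeofday(&ts[4], NULL);                  // timestamp 4

    // 11. print statistics: phase n took (ts[n+1] - ts[n]) for num operations
}

The average cost of a single operation of a given type is then simply the
difference between two consecutive timestamps divided by X.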

Although this approach does not attempt to simulate actual DHCP server
operation, which has a mix of all these steps interleaved, it answers
questions about the basic strengths and weak points of each database. In
particular, it can show the impact of specific DB optimizations, like
changing the storage engine or optimizing for writes or reads.

The framework attempts to perform the same number of operations for every
backend, thus allowing a fair comparison between them.

MySQL backend
---------------

The MySQL backend requires the MySQL client development libraries. It uses
the mysql_config tool (which works similarly to pkg-config) to discover the
required compilation and linking options. To install the required packages
on Ubuntu, use the following command:

sudo apt-get install mysql-client mysql-server libmysqlclient-dev
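
If you want to compile the MySQL benchmark by hand, outside of the provided
Makefile, the options discovered by mysql_config are used roughly like this
(file names and exact flags may differ from the actual Makefile):

g++ -o mysql_ubench benchmark.cc mysql_ubench.cc `mysql_config --cflags` `mysql_config --libs`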

A running MySQL server is required. Make sure that your setup is
configured so that there is a user able to create databases.
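
If no suitable user exists yet, something along these lines, run as the
MySQL root user, is sufficient (the user name and password below are just
examples):

> create user 'kea'@'localhost' identified by 'secret';
> grant all privileges on *.* to 'kea'@'localhost';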

Before running the tests, you need to initialize your database. You can
use the mysql.schema script for that purpose.

WARNING: It will drop the existing Kea database. Do not run this on
your production server. Assuming your MySQL user is kea, you can
initialize your test database with:

mysql -u kea -p < mysql.schema

After that step, you are ready to run the test:

./mysql_ubench

or

./mysql_ubench > results-mysql.txt

Redirecting output to a file is important, because a single character is
printed for each operation to show progress. If you have a slow terminal,
this may considerably affect test performance. On the other hand, printing
something after each operation is required, as poor DB settings may slow
down operations to around 20 per second. An observant user is expected to
notice that the initial dots are printed too slowly and abort the test.
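
The progress indicator itself is just a single character written and
flushed after each operation, along the lines of:

std::cout << "." << std::flush; // one dot per completed operation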

Currently all parameters are hardcoded. To modify them, one needs to
modify the source code and recompile. Fortunately, that is quite easy.
To modify MySQL parameters, see the main() function at the end of the
mysql_ubench.cc file. That is the place where one can modify the MySQL
connection parameters (MySQL server hostname, user, password and database
name).
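
The fragment to edit looks more or less as follows (the values shown here
are placeholders, not necessarily the compiled-in defaults):

const char* hostname = "localhost"; // MySQL server to connect to
const char* user     = "kea";       // MySQL user name
const char* passwd   = "secret";    // password for that user
const char* dbname   = "kea";       // database created from mysql.schema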

One parameter that has a huge impact on performance is the storage engine.
You can get a list of engines supported by your MySQL installation by using

> show engines;

in your mysql client. Two notable engines are MyISAM and InnoDB. You
can switch between them using one of the following commands in mysql:

> alter table lease4 engine=MyISAM;

or

> alter table lease4 engine=InnoDB;

SQLite backend
----------------
The SQLite backend requires both the sqlite3 development and run-time
packages. Their names may vary from system to system, but on Ubuntu 12.04
they are called sqlite3 and libsqlite3-dev. To install them, use the
following command:

sudo apt-get install sqlite3 libsqlite3-dev

To run SQLite3 tests, first create the database:

cat sqlite.schema | sqlite3 sqlite.db

A new database called sqlite.db will be created. That is the default name
used by the sqlite_ubench test. If you prefer another name, make sure you
update sqlite_ubench.cc accordingly. Once the database is created, you can
run the tests:

./sqlite_ubench > results-sqlite.txt

Make sure to tweak the number of iterations in sqlite_ubench.cc. See the
num variable in the main() function near the end of the sqlite_ubench.cc
file.
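
As with the MySQL benchmark, the relevant fragment of main() looks roughly
like this (the values are placeholders):

const char* filename = "sqlite.db"; // database created from sqlite.schema
int num = 1000;                     // how many times each step is repeated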

To improve performance, asynchronous mode is used (PRAGMA synchronous = OFF).
If you do not like it, modify SQLite_uBenchmark::connect() by commenting that
pragma out and suffer the pain of having around 10 inserts per second.
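
The pragma is issued through the standard SQLite C API; the call in
connect() looks more or less like this:

// db is the sqlite3* handle obtained from sqlite3_open().
// Trades durability for speed: writes are not fsync'ed to disk.
if (sqlite3_exec(db, "PRAGMA synchronous = OFF;", NULL, NULL, NULL) != SQLITE_OK) {
    // handle the failure (the real code may abort or throw here)
}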

Performance optimizations
---------------------------

The following section is a collection of loose notes, ideas and comments
that could be investigated to possibly improve (or thrash) performance.
Please consider them as a developer's scratchpad only. However, if you are
an experienced programmer or a DB admin, you are welcome to try some of
them. ISC engineers are more than welcome to discuss the pros and cons of
various approaches on the bind10-dev@lists.isc.org list.

- SQLite: excellent articles that discuss performance optimizations:
  - http://stackoverflow.com/questions/1711631/how-do-i-improve-the-performance-of-sqlite
  - http://blog.quibb.org/2010/08/fast-bulk-inserts-into-sqlite/
- use prepared statements (see the sketch below)
- use journal mode in memory (or disable it completely)
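
For the prepared statements idea, a minimal sketch using the SQLite C API
is shown below; the SQL text and column names are examples only, not the
actual lease4 schema, and db, num, addr and hwaddr are assumed to be
defined elsewhere:

// Prepare the statement once, outside of the benchmark loop.
sqlite3_stmt* stmt = NULL;
sqlite3_prepare_v2(db, "INSERT INTO lease4 (addr, hwaddr) VALUES (?1, ?2);",
                   -1, &stmt, NULL);

// Bind, execute and reset inside the loop; SQLite skips re-parsing the SQL.
for (int i = 0; i < num; ++i) {
    sqlite3_bind_int(stmt, 1, addr + i);
    sqlite3_bind_text(stmt, 2, hwaddr, -1, SQLITE_STATIC);
    sqlite3_step(stmt);
    sqlite3_reset(stmt);
}
sqlite3_finalize(stmt);

// The journal mode can be changed in the same way as the synchronous pragma:
sqlite3_exec(db, "PRAGMA journal_mode = MEMORY;", NULL, NULL, NULL);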