
[2143] Editorial changes to layout and language

Stephen Morris 12 years ago
parent
commit
7a60b5784c
2 changed files with 274 additions and 264 deletions
  1. 92 90
      tests/tools/dhcp-ubench/dhcp-perf-guide.html
  2. 182 174
      tests/tools/dhcp-ubench/dhcp-perf-guide.xml

File diff suppressed because it is too large
+ 92 - 90
tests/tools/dhcp-ubench/dhcp-perf-guide.html


+ 182 - 174
tests/tools/dhcp-ubench/dhcp-perf-guide.xml

@@ -40,8 +40,8 @@
 
     <abstract>
       <para>BIND 10 is a framework that features Domain Name System
-      (DNS) suite and Dynamic Host Configuration Protocol (DHCP)
-      servers with development managed by Internet Systems Consortium (ISC).
+      (DNS) and Dynamic Host Configuration Protocol (DHCP)
+      software with development managed by Internet Systems Consortium (ISC).
       This document describes various aspects of DHCP performance,
       measurements and tuning. It covers BIND 10 DHCP (codename Kea),
       existing ISC DHCP4 software, perfdhcp (a DHCP performance
@@ -70,11 +70,10 @@
   <chapter id="intro">
     <title>Introduction</title>
     <para>
-      This document is in its early stages of development. It is
-      expected to grow significantly in a near future. It will
+      This document is in the early stages of development. It is
+      expected to grow significantly in the near future. It will
      cover topics such as database backend performance measurements,
-      pros an cons of various optimization techniques and
-      tools.
+      tools, and the pros and cons of various optimization techniques.
     </para>
 
   </chapter>
@@ -97,22 +96,24 @@
       <para>
         Kea will support several different database backends, using
         both popular databases (like MySQL or SQLite) and
-        custom-developed solutions (like in-memory database).  BIND 10
-        source code features set of performance microbenchmarks.
-        These are small tools written in C/C++ that simulate expected
+        custom-developed solutions (such as an in-memory database).
+        To aid in the choice of backend, the BIND 10
+        source code features a set of performance microbenchmarks.
+        Written in C/C++, these are small tools that simulate expected
         DHCP server behaviour and evaluate the performance of
-        considered databases. As implemented benchmarks are not really
+        the databases under consideration. As implemented, the benchmarks do not really
        simulate DHCP operation, but rather use a set of primitives
-        that can be used by a real server, they are called
+        that can be used by a real server.  For this reason, they are called
         micro-benchmarks.
       </para>
 
       <para>Although there are many operations and data types that
      a server could store in a database, the most frequently used data
-      type is lease information. Although lease information for IPv4
-      and IPv6 differs slightly, it is expected that the performance
+      type is lease information. Although the information held for IPv4
+      and IPv6 leases differs slightly, it is expected that the performance
       differences will be minimal between IPv4 and IPv6 lease operations.
-      Therefore each test uses lease4 table for performance measurements.
+      Therefore each test uses the lease4 table (in which IPv4 leases are stored)
+      for performance measurements.
       </para>
 
      <para>All benchmarks are implemented as single-threaded applications
@@ -120,9 +121,9 @@
 
       <para>
        Those benchmarks are stored in the tests/tools/dhcp-ubench
-        directory. This directory contains simplified prototypes for
-        various DB back-ends that are planned or considered as a
-        backend engine for BIND10 DHCP.  Athough trivial now, they are
+        directory of the BIND 10 source tree. This directory contains simplified prototypes for
+        the various database back-ends that are planned or under consideration
+        for BIND 10 DHCP.  Although trivial now, the benchmarks are
         expected to evolve into useful tools that will allow users to
         measure performance in their specific environment.
       </para>
@@ -130,52 +131,53 @@
     <para>
       Currently the following benchmarks are implemented:
       <itemizedlist>
-        <listitem><para>in memory+flat file</para></listitem>
+        <listitem><para>In memory + flat file</para></listitem>
         <listitem><para>SQLite</para></listitem>
         <listitem><para>MySQL</para></listitem>
       </itemizedlist>
     </para>
 
     <para>
-      As they require additional (sometimes heavy) dependencies, they are not
-      built by default. Actually, their build system is completely separated.
-      It will be eventually merged with the main BIND10 makefile system, but
+      As the benchmarks require additional (sometimes heavy) dependencies, they are not
+      built by default. In fact, their build system is completely separate from that
+      of the rest of BIND 10.
+      It is anticipated that they will eventually be merged into the rest of BIND 10, but
       that is a low priority for now.
     </para>
 
     <para>
       All benchmarks will follow the same pattern:
       <orderedlist>
-        <listitem><para>prepare operation (connect to a database, create a file etc.)</para></listitem>
+        <listitem><para>Prepare operation (connect to a database, create a file etc.)</para></listitem>
         <listitem><para>Measure timestamp 0</para></listitem>
-        <listitem><para>Commit new lease4 (repeated X times)</para></listitem>
+        <listitem><para>Commit new lease4 record (repeated N times)</para></listitem>
         <listitem><para>Measure timestamp 1</para></listitem>
-        <listitem><para>Search for random lease4 (repeated X times)</para></listitem>
+        <listitem><para>Search for random lease4 record (repeated N times)</para></listitem>
         <listitem><para>Measure timestamp 2</para></listitem>
-        <listitem><para>Update existing lease4 (repeated X times)</para></listitem>
+        <listitem><para>Update existing lease4 record (repeated N times)</para></listitem>
         <listitem><para>Measure timestamp 3</para></listitem>
-        <listitem><para>Delete existing lease4 (repeated X times)</para></listitem>
+        <listitem><para>Delete existing lease4 record (repeated N times)</para></listitem>
         <listitem><para>Measure timestamp 4</para></listitem>
-        <listitem><para>Print out statistics, based on X and measured timestamps.</para></listitem>
+        <listitem><para>Print out statistics, based on N and measured timestamps.</para></listitem>
       </orderedlist>
 
       Although this approach does not attempt to simulate actual DHCP server
-      operation that has mix of all steps intervening, it answers the
-      questions about basic database strenghts and weak points. In particular
-      it can show what is the impact of specific DB optimizations, like
+      operation, in which all of these steps are interleaved, it answers
+      questions about basic database strengths and weak points. In particular,
+      it can show the impact of specific database optimizations, such as
      changing the storage engine or optimizing for writes or reads. (A sketch
      of this measurement loop is shown below.)
     </para>
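+    <para>
+      The following sketch illustrates that measurement loop. It is an
+      illustration only: the Lease4 structure, the Backend class and its
+      method names are assumptions made for this example and are not taken
+      from the actual benchmark sources:
+
+      <screen><![CDATA[
+// Illustrative sketch of the common benchmark pattern; the Lease4 structure
+// and the Backend class are assumptions made for this example.
+#include <sys/time.h>
+#include <cstdio>
+#include <cstdlib>
+
+struct Lease4 { unsigned addr_; /* other lease fields */ };
+
+// Stand-in for a real backend (MySQL, SQLite, memfile, ...).
+struct Backend {
+    void createLease4(const Lease4&) { /* INSERT */ }
+    void searchLease4(unsigned)      { /* SELECT */ }
+    void updateLease4(const Lease4&) { /* UPDATE */ }
+    void deleteLease4(unsigned)      { /* DELETE */ }
+};
+
+// Current time in seconds, with microsecond resolution.
+static double now() {
+    struct timeval tv;
+    gettimeofday(&tv, NULL);
+    return (tv.tv_sec + tv.tv_usec / 1e6);
+}
+
+int main() {
+    const unsigned num = 1000;   // N, the number of repetitions
+    Backend db;                  // 1. prepare (connect, create a file, ...)
+    double ts[5];
+    Lease4 lease;
+
+    ts[0] = now();                                    // timestamp 0
+    for (unsigned i = 0; i < num; ++i) {              // commit N lease4 records
+        lease.addr_ = i;
+        db.createLease4(lease);
+    }
+    ts[1] = now();                                    // timestamp 1
+    for (unsigned i = 0; i < num; ++i) {              // search N random records
+        db.searchLease4(rand() % num);
+    }
+    ts[2] = now();                                    // timestamp 2
+    for (unsigned i = 0; i < num; ++i) {              // update N records
+        lease.addr_ = i;
+        db.updateLease4(lease);
+    }
+    ts[3] = now();                                    // timestamp 3
+    for (unsigned i = 0; i < num; ++i) {              // delete N records
+        db.deleteLease4(i);
+    }
+    ts[4] = now();                                    // timestamp 4
+
+    // Print statistics, based on N and the measured timestamps.
+    const char* names[] = { "create", "search", "update", "delete" };
+    for (int i = 0; i < 4; ++i) {
+        double sec = ts[i + 1] - ts[i];
+        printf("%s: %f s total, %f ops/s\n", names[i], sec, num / sec);
+    }
+    return 0;
+}
+]]></screen>
+    </para>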
 
     <para>
-      The framework attempts to do the same amount of operations for every
+      The framework attempts to do the same amount of work for every
      backend, thus allowing a fair comparison between them.
     </para>
     </section>
 
     <section id="mysql-backend">
       <title>MySQL backend</title>
-      <para>MySQL backend requires MySQL client development libraries. It uses
-      mysql_config tool (that works similar to pkg-config) to discover required
+      <para>The MySQL backend requires the MySQL client development libraries. It uses
+      the mysql_config tool (similar to pkg-config) to discover required
      compilation and linking options. To install the required packages on Ubuntu,
       use the following command:
 
@@ -185,31 +187,35 @@
      configured so that there is a user able to modify the database being used.</para>
 
      <para>Before running the tests, you need to initialize your database. You can
-      use mysql.schema script for that purpose. WARNING: It will drop existing
-      Kea database. Do not run this on your production server. Assuming your
-      MySQL user is kea, you can initialize your test database by:
+      use the mysql.schema script for that purpose.</para>
+      
+      <para><emphasis>WARNING: This will drop the existing
+      Kea database. Do not run this on your production server.</emphasis></para>
+      
+      <para>Assuming your
+      MySQL user is "kea", you can initialize your test database by:
 
       <screen>$ <userinput>mysql -u kea -p &lt; mysql.schema</userinput></screen>
       </para>
 
-      <para>After database is initialized, you are ready to run the test:
+      <para>After the database is initialized, you are ready to run the test:
       <screen>$ <userinput>./mysql_ubench</userinput></screen>
 
       or
 
-      <screen>$ <userinput>./mysql_ubench &gt; results->mysql.txt</userinput></screen>
+      <screen>$ <userinput>./mysql_ubench &gt; results-mysql.txt</userinput></screen>
 
       Redirecting output to a file is important, because for each operation
       there is a single character printed to show progress. If you have a slow
-      terminal, this may considerably affect test perfromance. On the other hand,
-      printing something after each operation is required, as poor DB setting
-      may slow down operations to around 20 per second. Observant user is expected
-      to note that initial dots are printed too slowly and abort the test.</para>
+      terminal, this may considerably affect test performance. On the other hand,
+      printing something after each operation is required, as poor database settings
+      may slow down operations to around 20 per second. (An observant user will
+      notice that the initial dots are being printed too slowly and can abort the test.)</para>
 
       <para>Currently all default parameters are hardcoded. Default values can be
-      overwritten using command line switches. Although all benchmarks take
-      the same list of parameters, some of them are specific to a given backend
-      type. To get a list of supported parameters, run your benchmark with -h option:
+      overridden using command line switches. Although all benchmarks take
+      the same list of parameters, some of them are specific to a given backend.
+      To get a list of supported parameters, run the benchmark with the "-h" option:
 
       <screen>$ <userinput>./mysql_ubench -h</userinput>
 This is a benchmark designed to measure expected performance
@@ -234,14 +240,14 @@ Possible command-line parameters:
       <section>
         <title>MySQL tweaks</title>
 
-        <para>One parameter that has huge impact on performance is a a backend engine.
+        <para>One parameter that has a huge impact on performance is the choice of backend engine.
        You can get a list of the engines supported by your MySQL implementation by using
 
         <screen>&gt; <userinput>show engines;</userinput></screen>
 
         in your mysql client. Two notable engines are MyISAM and InnoDB. mysql_ubench uses
        MyISAM for synchronous mode and InnoDB for asynchronous mode. Please use
-        '-s 0|1' to choose whether you want synchronous or asynchronous operations.</para>
+        '-s yes|no' to choose whether you want synchronous or asynchronous operations.</para>
 
        <para>Another parameter that affects performance is the use of precompiled statements.
         In a basic approach, the actual SQL query is passed as a text string that is
@@ -249,14 +255,14 @@ Possible command-line parameters:
        statement. In this approach the SQL query is compiled and specific values are then
        bound to it. In the next iteration the query remains the same; only the bound values
        change (e.g. searching for a different address). Usage of basic or precompiled
-        statements is controlled with '-c 0|1'.</para>
+        statements is controlled with '-c no|yes'.</para>
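+
+        <para>The following sketch shows the precompiled-statement approach using
+        the MySQL C API. It is illustrative only: the single-column INSERT and the
+        surrounding connection handling are assumptions made for this example and
+        are not taken from the mysql_ubench code:
+
+        <screen><![CDATA[
+// Sketch: a precompiled (prepared) statement with the MySQL C API.
+#include <mysql/mysql.h>
+#include <cstdio>
+#include <cstring>
+
+void insertLeases(MYSQL* conn, unsigned num) {
+    const char* query = "INSERT INTO lease4(addr) VALUES (?)";
+
+    MYSQL_STMT* stmt = mysql_stmt_init(conn);
+    // The SQL text is parsed and compiled once...
+    if (mysql_stmt_prepare(stmt, query, strlen(query)) != 0) {
+        fprintf(stderr, "prepare failed: %s\n", mysql_stmt_error(stmt));
+        return;
+    }
+
+    for (unsigned i = 0; i < num; ++i) {
+        unsigned addr = i;
+
+        // ...and in each iteration only the bound value changes.
+        MYSQL_BIND bind[1];
+        memset(bind, 0, sizeof(bind));
+        bind[0].buffer_type = MYSQL_TYPE_LONG;
+        bind[0].buffer = &addr;
+        bind[0].is_unsigned = 1;
+
+        mysql_stmt_bind_param(stmt, bind);
+        if (mysql_stmt_execute(stmt) != 0) {
+            fprintf(stderr, "execute failed: %s\n", mysql_stmt_error(stmt));
+        }
+    }
+    mysql_stmt_close(stmt);
+}
+]]></screen>
+        </para>
+
+        <para>With the basic ('-c no') setting, the equivalent query string
+        would instead be rebuilt as text and passed to mysql_query() in every
+        iteration.</para>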
     </section>
     </section>
 
 
     <section id="sqlite-ubench">
       <title>SQLite-ubench</title>
-      <para>SQLite backend requires both sqlite3 development and run-time package. Their
+      <para>The SQLite backend requires both the sqlite3 development and run-time packages. Their
       names may vary from system to system, but on Ubuntu 12.04 they are called
      sqlite3 and libsqlite3-dev. To install them, use the following command:
 
@@ -278,12 +284,12 @@ Possible command-line parameters:
       <section id="sqlite-tweaks">
         <title>SQLite tweaks</title>
         <para>To modify default sqlite_ubench parameters, command line
-        switches can be used. Currently supported parameters are
+        switches can be used. The currently supported switches are
         (default values specified in brackets):
         <orderedlist>
           <listitem><para>-f filename - name of the database file ("sqlite.db")</para></listitem>
           <listitem><para>-n num - number of iterations (100)</para></listitem>
-          <listitem><para>-s yes|no - should the operations be performend in synchronous (yes)
+          <listitem><para>-s yes|no - should the operations be performed in a synchronous (yes)
           or asynchronous (no) manner (yes)</para></listitem>
           <listitem><para>-v yes|no - verbose mode. Should the test print out progress? (yes)</para></listitem>
           <listitem><para>-c yes|no - precompiled statements. Should the SQL statements be precompiled?</para></listitem>
@@ -291,8 +297,10 @@ Possible command-line parameters:
         </para>
 
         <para>SQLite can run in asynchronous or synchronous mode. This
-        mode can be controlled by using sync parameter. It is set
-        using (PRAGMA synchronous = ON or OFF).</para>
+        mode can be controlled by using the "synchronous" parameter. It is set
+        using the SQLite command:</para>
+        
+        <para><command>PRAGMA synchronous = ON|OFF</command></para>
 
        <para>Another tweakable feature is the journal mode. It can be
        set to one of several modes of operation. Its value can be
@@ -301,30 +309,30 @@ Possible command-line parameters:
        detailed explanation.</para>
 
        <para>sqlite_ubench supports precompiled statements. Please use
-        '-c 0|1' to define which should be used: basic SQL query (0) or
-        precompiled statement (1).</para>
+        '-c no|yes' to define which should be used: basic SQL query (no) or
+        precompiled statement (yes).</para>
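+
+        <para>The sketch below illustrates both tweaks using the SQLite C API:
+        the synchronous pragma and a precompiled statement that is bound,
+        stepped and reset in each iteration. The database file name and the
+        single-column INSERT are assumptions made for this example, not an
+        extract from sqlite_ubench:
+
+        <screen><![CDATA[
+// Sketch: synchronous mode and a precompiled statement with the SQLite C API.
+#include <sqlite3.h>
+#include <cstdio>
+
+int main() {
+    sqlite3* db = NULL;
+    if (sqlite3_open("sqlite.db", &db) != SQLITE_OK) {
+        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
+        return 1;
+    }
+
+    // Equivalent of '-s no' (asynchronous operation).
+    sqlite3_exec(db, "PRAGMA synchronous = OFF", NULL, NULL, NULL);
+
+    // '-c yes': the SQL is compiled once...
+    sqlite3_stmt* stmt = NULL;
+    sqlite3_prepare_v2(db, "INSERT INTO lease4(addr) VALUES (?1)", -1,
+                       &stmt, NULL);
+
+    for (int i = 0; i < 1000; ++i) {
+        sqlite3_bind_int(stmt, 1, i);            // ...only the bound value changes
+        if (sqlite3_step(stmt) != SQLITE_DONE) {
+            fprintf(stderr, "insert failed: %s\n", sqlite3_errmsg(db));
+        }
+        sqlite3_reset(stmt);                     // reuse the compiled statement
+    }
+
+    sqlite3_finalize(stmt);
+    sqlite3_close(db);
+    return 0;
+}
+]]></screen>
+        </para>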
       </section>
     </section>
 
     <section id="memfile-ubench">
       <title>memfile-ubench</title>
-      <para>Memfile backend is custom developed prototype backend that
-      somewhat mimics operation of ISC DHCP4. It uses in-memory
+      <para>The memfile backend is a custom backend that
+      somewhat mimics the operation of ISC DHCP4. It implements in-memory
       storage using standard C++ and boost mechanisms (std::map and
       boost::shared_ptr&lt;&gt;). All database changes are also
-      written to a lease file. That file is strictly write-only. This
+      written to a lease file, which is strictly write-only. This
      approach takes advantage of the fact that a simple append is faster
      than editing the file, which may require rewriting the whole file.</para>
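+
+      <para>A simplified sketch of this idea is shown below. The Lease4
+      structure and the text format of the lease file are assumptions made
+      for this illustration and do not reflect the actual memfile_ubench code:
+
+      <screen><![CDATA[
+// Sketch: in-memory map of leases plus an append-only lease file.
+#include <boost/shared_ptr.hpp>
+#include <stdint.h>
+#include <fstream>
+#include <map>
+#include <string>
+
+struct Lease4 {
+    uint32_t addr_;         // IPv4 address, used as the key
+    std::string hwaddr_;    // client hardware address
+    // other lease fields ...
+};
+
+typedef boost::shared_ptr<Lease4> Lease4Ptr;
+
+class MemfileBackend {
+public:
+    MemfileBackend(const std::string& name)
+        : file_(name.c_str(), std::ios::app) {   // strictly write-only, append mode
+    }
+
+    void createLease4(const Lease4Ptr& lease) {
+        leases_[lease->addr_] = lease;           // in-memory storage
+        file_ << "lease " << lease->addr_ << " " // record is appended, never edited
+              << lease->hwaddr_ << "\n";
+        file_.flush();                           // synchronous mode would also fsync()
+    }
+
+    Lease4Ptr searchLease4(uint32_t addr) const {
+        std::map<uint32_t, Lease4Ptr>::const_iterator it = leases_.find(addr);
+        return (it == leases_.end()) ? Lease4Ptr() : it->second;
+    }
+
+private:
+    std::map<uint32_t, Lease4Ptr> leases_;
+    std::ofstream file_;
+};
+]]></screen>
+      </para>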
 
       <section id="memfile-tweaks">
         <title>memfile tweaks</title>
         <para>To modify default memfile_ubench parameters, command line
-        switches can be used. Currently supported parameters are
+        switches can be used. The currently supported switches are
         (default values specified in brackets):
         <orderedlist>
           <listitem><para>-f filename - name of the database file ("dhcpd.leases")</para></listitem>
           <listitem><para>-n num - number of iterations (100)</para></listitem>
-          <listitem><para>-s yes|no - should the operations be performend in synchronous (yes)
+          <listitem><para>-s yes|no - should the operations be performed in a synchronous (yes)
           or asynchronous (no) manner (yes)</para></listitem>
           <listitem><para>-v yes|no - verbose mode. Should the test print out progress? (yes)</para></listitem>
         </orderedlist>
@@ -350,15 +358,15 @@ MySQL version: 5.5.24
 SQLite version: 3.7.9sourceid version is 2011-11-01 00:52:41 c7c6050ef060877ebe77b41d959e9df13f8c9b5e</screen>
       </para>
 
-      <para>Benchmarks were run without using precompiled statements.
-      The code was compiled wit -O0 flag (no code optimizations).
+      <para>The benchmarks were run without using precompiled statements.
+      The code was compiled with the -O0 flag (no code optimizations).
       Each run was executed once.</para>
 
-      <para>Benchmarks were run in two series: synchronous and
+      <para>Two series of measurements were made: synchronous and
       asynchronous. As those modes offer radically different
-      performances, synchronous mode was conducted for 1000 (one
-      thousand) repetitions and asynchronous mode was conducted for
-      100000 (hundred thousand) repetitions.</para>
+      performance, synchronous mode was conducted for one
+      thousand repetitions and asynchronous mode was conducted for
+      one hundred thousand repetitions.</para>
 
       <!-- raw results sync -->
       <table><title>Synchronous results (basic)</title>
@@ -374,51 +382,51 @@ SQLite version: 3.7.9sourceid version is 2011-11-01 00:52:41 c7c6050ef060877ebe7
           <row>
             <entry>Backend</entry>
             <entry>Operations</entry>
-            <entry>Create</entry>
-            <entry>Search</entry>
-            <entry>Update</entry>
-            <entry>Delete</entry>
-            <entry>Average</entry>
+            <entry>Create [s]</entry>
+            <entry>Search [s]</entry>
+            <entry>Update [s]</entry>
+            <entry>Delete [s]</entry>
+            <entry>Average [s]</entry>
           </row>
         </thead>
         <tbody>
           <row>
             <entry>MySQL</entry>
-            <entry>1000</entry>
-            <entry>31.603978s</entry>
-            <entry> 0.116612s</entry>
-            <entry>27.964191s</entry>
-            <entry>27.695209s</entry>
-            <entry>21.844998s</entry>
+            <entry>1,000</entry>
+            <entry>31.603978</entry>
+            <entry> 0.116612</entry>
+            <entry>27.964191</entry>
+            <entry>27.695209</entry>
+            <entry>21.844998</entry>
           </row>
 
           <row>
             <entry>SQLite</entry>
-            <entry>1000</entry>
-            <entry>61.421356s</entry>
-            <entry> 0.033283s</entry>
-            <entry>59.476638s</entry>
-            <entry>56.034150s</entry>
-            <entry>44.241357s</entry>
+            <entry>1,000</entry>
+            <entry>61.421356</entry>
+            <entry> 0.033283</entry>
+            <entry>59.476638</entry>
+            <entry>56.034150</entry>
+            <entry>44.241357</entry>
           </row>
 
           <row>
             <entry>memfile</entry>
-            <entry>1000</entry>
-            <entry>38.223757s</entry>
-            <entry> 0.000817s</entry>
-            <entry>38.041153s</entry>
-            <entry>38.017293s</entry>
-            <entry>28.570755s</entry>
+            <entry>1,000</entry>
+            <entry>38.223757</entry>
+            <entry> 0.000817</entry>
+            <entry>38.041153</entry>
+            <entry>38.017293</entry>
+            <entry>28.570755</entry>
           </row>
 
         </tbody>
       </tgroup>
       </table>
 
-      <para>Following parameters were measured for asynchronous mode.
-      MySQL and SQLite were run with 100 thousand repetitions. Memfile
-      was run for 1 million repetitions due to much larger performance.</para>
+      <para>The following parameters were measured for asynchronous mode.
+      MySQL and SQLite were run with one hundred thousand repetitions. Memfile
+      was run for one million repetitions due to its much higher performance.</para>
 
       <!-- raw results async -->
       <table><title>Asynchronous results (basic)</title>
@@ -444,41 +452,41 @@ SQLite version: 3.7.9sourceid version is 2011-11-01 00:52:41 c7c6050ef060877ebe7
         <tbody>
           <row>
             <entry>MySQL</entry>
-            <entry>100000</entry>
-            <entry>10.584842s</entry>
-            <entry>10.386402s</entry>
-            <entry>10.062384s</entry>
-            <entry> 8.890197s</entry>
-            <entry> 9.980956s</entry>
+            <entry>100,000</entry>
+            <entry>10.584842</entry>
+            <entry>10.386402</entry>
+            <entry>10.062384</entry>
+            <entry> 8.890197</entry>
+            <entry> 9.980956</entry>
           </row>
 
           <row>
             <entry>SQLite</entry>
-            <entry>100000</entry>
-            <entry> 3.710356s</entry>
-            <entry> 3.159129s</entry>
-            <entry> 2.865354s</entry>
-            <entry> 2.439406s</entry>
-            <entry> 3.043561s</entry>
+            <entry>100,000</entry>
+            <entry> 3.710356</entry>
+            <entry> 3.159129</entry>
+            <entry> 2.865354</entry>
+            <entry> 2.439406</entry>
+            <entry> 3.043561</entry>
           </row>
 
           <row>
             <entry>memfile</entry>
-            <entry>100000</entry>
-            <entry> 1.299642s</entry>
-            <entry> 0.039330s</entry>
-            <entry> 1.307112s</entry>
-            <entry> 1.277641s</entry>
-            <entry> 0.980931s</entry>
+            <entry>1,000,000</entry>
+            <entry> 1.299642</entry>
+            <entry> 0.039330</entry>
+            <entry> 1.307112</entry>
+            <entry> 1.277641</entry>
+            <entry> 0.980931</entry>
           </row>
 
         </tbody>
       </tgroup>
       </table>
 
-      <para>Presented performance results can be computed into operations per second metrics.
-      It should be noted that due to large differences between various operations (sometime
-      over 3 orders of magnitude), it is difficult to create a simple, readable chart with
+      <para>The presented performance results can be converted into operations per second metrics.
+      It should be noted that due to large differences between various operations (sometimes
+      over three orders of magnitude), it is difficult to create a simple, readable chart with
       that data.</para>
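+
+      <para>As an example of the conversion, the synchronous MySQL create time
+      of 31.603978 s for 1,000 operations corresponds to roughly 32 creates per
+      second (1,000 / 31.603978), while the memfile search time of 0.000817 s
+      for the same number of operations corresponds to over a million searches
+      per second (1,000 / 0.000817). The estimates are collected in the table
+      below.</para>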
 
       <table id="tbl-basic-perf-results"><title>Estimated basic performance</title>
@@ -587,15 +595,15 @@ MySQL version: 5.5.24
 SQLite version: 3.7.9sourceid version is 2011-11-01 00:52:41 c7c6050ef060877ebe77b41d959e9df13f8c9b5e</screen>
       </para>
 
-      <para>Benchmarks were run with precompiled statements enabled.
-      The code was compiled wit -Ofast flag (optimize compilation for speed).
-      Each run was repeated 3 times and measured values were averaged.</para>
+      <para>The benchmarks were run with precompiled statements enabled.
+      The code was compiled with the -Ofast flag (optimize compilation for speed).
+      Each run was repeated three times and measured values were averaged.</para>
 
-      <para>Benchmarks were run in two series: synchronous and
+      <para>Again, the benchmarks were run in two series: synchronous and
       asynchronous. As those modes offer radically different
-      performances, synchronous mode was conducted for 1000 (one
-      thousand) repetitions and asynchronous mode was conducted for
-      100000 (hundred thousand) repetitions.</para>
+      performance, synchronous mode was conducted for one
+      thousand repetitions and asynchronous mode was conducted for
+      one hundred thousand repetitions.</para>
 
       <!-- raw results sync -->
       <table><title>Synchronous results (optimized)</title>
@@ -611,51 +619,51 @@ SQLite version: 3.7.9sourceid version is 2011-11-01 00:52:41 c7c6050ef060877ebe7
           <row>
             <entry>Backend</entry>
             <entry>Operations</entry>
-            <entry>Create</entry>
-            <entry>Search</entry>
-            <entry>Update</entry>
-            <entry>Delete</entry>
-            <entry>Average</entry>
+            <entry>Create [s]</entry>
+            <entry>Search [s]</entry>
+            <entry>Update [s]</entry>
+            <entry>Delete [s]</entry>
+            <entry>Average [s]</entry>
           </row>
         </thead>
         <tbody>
           <row>
             <entry>MySQL</entry>
-            <entry>1000</entry>
-            <entry>27.887s</entry>
-            <entry> 0.106s</entry>
-            <entry>28.223s</entry>
-            <entry>27.696s</entry>
-            <entry>20.978s</entry>
+            <entry>1,000</entry>
+            <entry>27.887</entry>
+            <entry> 0.106</entry>
+            <entry>28.223</entry>
+            <entry>27.696</entry>
+            <entry>20.978</entry>
           </row>
 
           <row>
             <entry>SQLite</entry>
-            <entry>1000</entry>
-            <entry>61.299s</entry>
-            <entry> 0.015s</entry>
-            <entry>59.648s</entry>
-            <entry>61.098s</entry>
-            <entry>45.626s</entry>
+            <entry>1,000</entry>
+            <entry>61.299</entry>
+            <entry> 0.015</entry>
+            <entry>59.648</entry>
+            <entry>61.098</entry>
+            <entry>45.626</entry>
           </row>
 
           <row>
             <entry>memfile</entry>
-            <entry>1000</entry>
-            <entry>39.564s</entry>
-            <entry> 0.000724s</entry>
-            <entry>39.543s</entry>
-            <entry>39.326w</entry>
-            <entry>29.608s</entry>
+            <entry>1,000</entry>
+            <entry>39.564</entry>
+            <entry> 0.000724</entry>
+            <entry>39.543</entry>
+            <entry>39.326</entry>
+            <entry>29.608</entry>
           </row>
 
         </tbody>
       </tgroup>
       </table>
 
-      <para>Following parameters were measured for asynchronous mode.
-      MySQL and SQLite were run with 100 thousand repetitions. Memfile
-      was run for 1 million repetitions due to much larger performance.</para>
+      <para>The following parameters were measured for asynchronous mode.
+      MySQL and SQLite were run with one hundred thousand repetitions. Memfile
+      was run for one million repetitions due to its much higher performance.</para>
 
       <!-- raw results async -->
       <table><title>Asynchronous results (optimized)</title>
@@ -681,42 +689,42 @@ SQLite version: 3.7.9sourceid version is 2011-11-01 00:52:41 c7c6050ef060877ebe7
         <tbody>
           <row>
             <entry>MySQL</entry>
-            <entry>100000</entry>
-            <entry>8.507s</entry>
-            <entry>9.698s</entry>
-            <entry>7.785s</entry>
-            <entry>8.326s</entry>
-            <entry>8.579s</entry>
+            <entry>100,000</entry>
+            <entry>8.507</entry>
+            <entry>9.698</entry>
+            <entry>7.785</entry>
+            <entry>8.326</entry>
+            <entry>8.579</entry>
           </row>
 
           <row>
             <entry>SQLite</entry>
-            <entry>100000</entry>
-            <entry> 1.562s</entry>
-            <entry> 0.949s</entry>
-            <entry> 1.513s</entry>
-            <entry> 1.502s</entry>
-            <entry> 1.382s</entry>
+            <entry>100,000</entry>
+            <entry> 1.562</entry>
+            <entry> 0.949</entry>
+            <entry> 1.513</entry>
+            <entry> 1.502</entry>
+            <entry> 1.382</entry>
           </row>
 
           <row>
             <entry>memfile</entry>
-            <entry>100000</entry>
-            <entry>1.302s</entry>
-            <entry>0.038s</entry>
-            <entry>1.306s</entry>
-            <entry>1.263s</entry>
-            <entry>0.977s</entry>
+            <entry>1,000,000</entry>
+            <entry>1.302</entry>
+            <entry>0.038</entry>
+            <entry>1.306</entry>
+            <entry>1.263</entry>
+            <entry>0.977</entry>
           </row>
 
         </tbody>
       </tgroup>
       </table>
 
-      <para>Presented performance results can be computed into operations per second metrics.
+      <para>The presented performance results can be converted into operations per second metrics.
      It should be noted that due to large differences between various operations (sometimes
-      over 3 orders of magnitude), it is difficult to create a simple, readable chart with
-      that data.</para>
+      over three orders of magnitude), it is difficult to create a simple, readable chart with
+      the data.</para>
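+
+      <para>As an example, the asynchronous memfile create time of 1.302 s for
+      1,000,000 operations corresponds to roughly 768,000 creates per second
+      (1,000,000 / 1.302), whereas the MySQL create time of 8.507 s for 100,000
+      operations corresponds to about 11,750 creates per second
+      (100,000 / 8.507). The estimates are collected in the table below.</para>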
 
       <table id="tbl-optim-perf-results"><title>Estimated optimized performance</title>
       <tgroup cols='6' align='center' colsep='1' rowsep='1'>
@@ -832,15 +840,15 @@ SQLite version: 3.7.9sourceid version is 2011-11-01 00:52:41 c7c6050ef060877ebe7
       </para>
 
       <para>
-        If synchronous operation is required the current performance
+        If synchronous operation is required, the current performance
         results are likely to be deemed inadequate. The limiting
        factor here is disk access time. Even migrating to a high-
-        performance 15.000rpm disk is expected to only roughly double
+        performance 15,000 rpm disk is expected to only roughly double
        the number of leases per second, compared to the current results.
-        The reason is that to write a file to disk, at lease 2 writes
+        The reason is that to write a file to disk, at least two writes
        are required: one for the new content and one for the i-node of the
         file. The easiest way to boost synchronous performance is to
-        switch to SSD disks. Memory-backed RAM disks are also viable
+        switch to SSD disks. Memory-backed RAM disks are also a viable
         solution. However, care should be taken to properly engineer
        a backup strategy for RAM disks.
       </para>
@@ -867,7 +875,7 @@ SQLite version: 3.7.9sourceid version is 2011-11-01 00:52:41 c7c6050ef060877ebe7
         translated to expected leases per second or queries per second
         performance by an actual server. The DHCP server must do much
        more than just query the database to properly process a client's
-        message. Provided results should be considered as only rough
+        message. The provided results should be considered as only rough
         estimates. They can also be used for relative comparisons
         between backends.
       </para>
@@ -891,8 +899,8 @@ SQLite version: 3.7.9sourceid version is 2011-11-01 00:52:41 c7c6050ef060877ebe7
       <para>
        Currently all operations are conducted on a one-by-one
         basis. Each operation is treated as a separate
-        transaction. Grouping X operations together will potentially
-        bring almost X fold increase in synchronous operations. Such a
+        transaction. Grouping N operations together will potentially
+        bring an almost N-fold increase in synchronous operations. Such a
         feature is present in ISC DHCP4 and is called cache-threshold.
        Extending the benchmarks in this regard should be
        considered.  That would affect only write operations (insert,