[3601] Addressed review comments, added ability to downgrade

Several minor cleanup items based on review comments.  Implemented
support for downgrading files from newer schema versions:

doc/guide/admin.xml
    amended text on upgrading memfile to discuss downgrading

src/lib/dhcpsrv/dhcpsrv_messages.mes
src/lib/dhcpsrv/lease_file_loader.h
    revamped log messages to accommodate downgrading

src/lib/dhcpsrv/memfile_lease_mgr.cc
src/lib/dhcpsrv/memfile_lease_mgr.h
    added commentary to MemfileLeaseMgr ctor
    automatic conversion logic accommodates both upgrading and downgrading

src/lib/dhcpsrv/tests/csv_lease_file4_unittest.cc
src/lib/dhcpsrv/tests/csv_lease_file6_unittest.cc
    replaced tooManyHeaderColumns test with downGrade test

src/lib/util/csv_file.h
src/lib/util/csv_file.cc
    added CSVRow::trim()

src/lib/util/tests/csv_file_unittest.cc
    added CSVRow.trim test

src/lib/util/tests/versioned_csv_file_unittest.cc
    added VersionedCSVFileTest.currentSchemaTest test
    replaced tooManyHeaderColumns test with downGrading test
    revamped tests to check getInputSchemaState() and needsConversion()

src/lib/util/versioned_csv_file.h
    Updated commentary to describe downgrade support

src/lib/util/versioned_csv_file.cc
    enum InputSchemaState
    input_schema_state_
    input_header_count_
    getInputHeaderCount()
    getInputSchemaState()
    needsConversion()

    next(CSVRow& row) - now supports downgrading rows
    validateHeader() - now throws if called when no schema has been defined,
    and supports downgrading rows
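
For reviewers unfamiliar with the API, here is a minimal sketch (not part of
the commit) of how the conversion detection added here can be driven from
caller code.  The file name, column set, and printed messages are invented for
illustration, and error handling is omitted:

    #include <util/csv_file.h>
    #include <util/versioned_csv_file.h>
    #include <iostream>

    using namespace isc::util;

    int main() {
        // Define a two-column schema; "color" was added in version 2.0 and
        // carries the default value used when upgrading older rows.
        VersionedCSVFile csv("animals.csv");
        csv.addColumn("animal", "1.0", "");
        csv.addColumn("color", "2.0", "blue");

        // open() validates the input header against the defined schema and
        // records the input schema state.
        csv.open();

        if (csv.needsConversion()) {
            if (csv.getInputSchemaState() == VersionedCSVFile::NEEDS_UPGRADE) {
                std::cout << "upgrading rows from schema "
                          << csv.getInputSchemaVersion() << " to "
                          << csv.getSchemaVersion() << std::endl;
            } else {
                std::cout << "downgrading rows; extra input columns are dropped"
                          << std::endl;
            }
        }

        // next() hands back rows already converted to the defined schema.
        CSVRow row;
        while (csv.next(row)) {
            std::cout << row << std::endl;
        }
        return (0);
    }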
Thomas Markwalder, 9 years ago
commit 91a4978e30

+ 15 - 9
doc/guide/admin.xml

@@ -154,15 +154,21 @@
         <title>Upgrading Memfile Lease Files from an Earlier Version of Kea</title>
         <para>
         There are no special steps required to upgrade memfile lease files
-        from  earlier version of Kea to a new version of Kea.  During startup,
-        Kea's DHCP servers will automatically detect memfile lease files that
-        need upgrading and will launch an invocation of the LFC process to 
-        convert them.  This should only occur the first time the files are 
-        encountered.
-
-        If you wish to convert the files manually, prior to starting the 
-        servers you may do so by running the LFC process yourself, 
-        see <xref linkend="kea-lfc"/> for more information.
+        from an earlier version of Kea to a new version of Kea.
+
+        During startup the servers will check the schema version of the lease
+        files against their own.  If there is a mismatch, the servers will
+        automatically launch the LFC process to convert the files to the
+        server's schema version.  While this mechanism is primarily meant to
+        ease the process of upgrading to newer versions of Kea, it can also
+        be used for downgrading should the need arise.  When upgrading, any
+        values not present in the original lease files will be assigned
+        appropriate default values.  When downgrading, any data present in
+        the files but not in the server's schema will be dropped.
+
+        If you wish to convert the files manually, prior to starting the
+        servers you may do so by running the LFC process yourself.
+        See <xref linkend="kea-lfc"/> for more information.
         </para>
       </section>
       <!-- @todo: document lease file upgrades once they are implemented in kea-admin -->
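
To make the upgrade/downgrade behaviour above concrete, here is an
illustration based on the v4 unit test added in this commit (FUTURE_COL is a
made-up column standing in for a newer schema): the extra column is ignored
while the file is read, and the LFC pass then rewrites the file using the
server's own header:

    lease file written by a newer Kea (extra FUTURE_COL column):
        address,hwaddr,client_id,valid_lifetime,expire,subnet_id,fqdn_fwd,fqdn_rev,hostname,state,FUTURE_COL
        192.0.2.3,06:07:08:09:3a:bc,,200,200,8,1,1,three.example.com,2,BOGUS

    the same lease after conversion to this server's schema:
        address,hwaddr,client_id,valid_lifetime,expire,subnet_id,fqdn_fwd,fqdn_rev,hostname,state
        192.0.2.3,06:07:08:09:3a:bc,,200,200,8,1,1,three.example.com,2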

+ 20 - 11
src/lib/dhcpsrv/dhcpsrv_messages.mes

@@ -220,6 +220,13 @@ with the specified address to the memory file backend database.
 The code has issued a commit call.  For the memory file database, this is
 a no-op.
 
+% DHCPRSV_MEMFILE_CONVERTING_LEASE_FILES running LFC now to convert lease files to the current schema: %1.%2
+A warning message issued when the server has detected lease files that need
+to be either upgraded or downgraded to match the server's schema, and that
+the server is automatically running the LFC process to perform the conversion.
+This should only occur the first time the server is launched following a Kea
+installation upgrade (or downgrade).
+
 % DHCPSRV_MEMFILE_DB opening memory file lease database: %1
 This informational message is logged when a DHCP server (either V4 or
 V6) is about to open a memory file lease database.  The parameters of
@@ -352,12 +359,20 @@ timer used for lease file cleanup scheduling. This is highly unlikely
 and indicates programming error. The message include the reason for this
 error.
 
-% DHCPSRV_MEMFILE_NEEDS_UPGRADING Lease file: %1 is schema version %2, it needs to be upgraded to current schema version, %3.
+% DHCPSRV_MEMFILE_NEEDS_DOWNGRADING lease file: %1 is beyond version %2, it needs to be downgraded to current schema version, %3.
+A warning message issued when the schema of the lease file loaded by the server
+is newer than the memfile schema of the server.  The server converts the lease
+data from newer schemas to its schema as it is read, therefore the lease
+information in use by the server will be correct. Note though, that any data
+stored in newer schema fields will be dropped.  What remains is for the
+file itself to be rewritten using the current schema.
+
+% DHCPSRV_MEMFILE_NEEDS_UPGRADING lease file: %1 is schema version %2, it needs to be upgraded to current schema version, %3.
 A warning message issued when the schema of the lease file loaded by the server
-is pre-dates the current Memfile schema.  Note that the server converts the lease
-data from older schemas to the current schema as it is read, therefore the lease
-information in use by the server will be correct.  What remains is for the file
-itself to be rewritten using the current schema.
+pre-dates the memfile schema of the server.  Note that the server converts the
+lease data from older schemas to the current schema as it is read, therefore
+the lease information in use by the server will be correct.  What remains is
+for the file itself to be rewritten using the current schema.
 
 % DHCPSRV_MEMFILE_NO_STORAGE running in non-persistent mode, leases will be lost after restart
 A warning message issued when writes of leases to disk have been disabled
@@ -384,12 +399,6 @@ lease from the memory file database for the specified address.
 A debug message issued when the server is attempting to update IPv6
 lease from the memory file database for the specified address.
 
-% DHCPRSV_MEMFILE_UPGRADING_LEASE_FILES Running LFC now, to upgrade lease files to current schema: %1.%2
-A warning message when the server has detected lease files that need to be upgraded,
-and is automatically running the LFC process to perform the upgrade.  This should
-only occur the first time the server is launched following a Kea upgrade in which
-the Memfile schema was updated.
-
 % DHCPSRV_MULTIPLE_RAW_SOCKETS_PER_IFACE current configuration will result in opening multiple brodcast capable sockets on some interfaces and some DHCP messages may be duplicated
 A warning message issued when the current configuration indicates that multiple
 sockets, capable of receiving brodcast traffic, will be opened on some of the

+ 8 - 4
src/lib/dhcpsrv/lease_file_loader.h

@@ -17,7 +17,7 @@
 
 #include <dhcpsrv/dhcpsrv_log.h>
 #include <dhcpsrv/memfile_lease_storage.h>
-#include <util/csv_file.h>
+#include <util/versioned_csv_file.h>
 
 #include <boost/shared_ptr.hpp>
 
@@ -154,10 +154,14 @@ public:
             }
         }
 
-        if (lease_file.needsUpgrading()) {
-            LOG_WARN(dhcpsrv_logger, DHCPSRV_MEMFILE_NEEDS_UPGRADING)
+        if (lease_file.needsConversion()) {
+            LOG_WARN(dhcpsrv_logger,
+                     (lease_file.getInputSchemaState()
+                      == util::VersionedCSVFile::NEEDS_UPGRADE
+                      ?  DHCPSRV_MEMFILE_NEEDS_UPGRADING
+                      : DHCPSRV_MEMFILE_NEEDS_DOWNGRADING))
                      .arg(lease_file.getFilename())
-                     .arg(lease_file.getInputSchemaVersion())
+                     .arg(lease_file.getInputSchemaState())
                      .arg(lease_file.getSchemaVersion());
         }
 

+ 16 - 16
src/lib/dhcpsrv/memfile_lease_mgr.cc

@@ -163,7 +163,7 @@ LFCSetup::setup(const uint32_t lfc_interval,
                 bool run_once_now) {
 
     // If to nothing to do, punt
-    if (lfc_interval == 0 && run_once_now == false) {
+    if (lfc_interval == 0 && !run_once_now) {
         return;
     }
 
@@ -266,14 +266,14 @@ const int Memfile_LeaseMgr::MINOR_VERSION;
 Memfile_LeaseMgr::Memfile_LeaseMgr(const DatabaseConnection::ParameterMap& parameters)
     : LeaseMgr(), lfc_setup_(), conn_(parameters)
     {
-    bool upgrade_needed = false;
+    bool conversion_needed = false;
 
     // Check the universe and use v4 file or v6 file.
     std::string universe = conn_.getParameter("universe");
     if (universe == "4") {
         std::string file4 = initLeaseFilePath(V4);
         if (!file4.empty()) {
-            upgrade_needed = loadLeasesFromFiles<Lease4,
+            conversion_needed = loadLeasesFromFiles<Lease4,
                                                  CSVLeaseFile4>(file4,
                                                                 lease_file4_,
                                                                 storage4_);
@@ -281,7 +281,7 @@ Memfile_LeaseMgr::Memfile_LeaseMgr(const DatabaseConnection::ParameterMap& param
     } else {
         std::string file6 = initLeaseFilePath(V6);
         if (!file6.empty()) {
-            upgrade_needed = loadLeasesFromFiles<Lease6,
+            conversion_needed = loadLeasesFromFiles<Lease6,
                                                  CSVLeaseFile6>(file6,
                                                                 lease_file6_,
                                                                 storage6_);
@@ -295,11 +295,11 @@ Memfile_LeaseMgr::Memfile_LeaseMgr(const DatabaseConnection::ParameterMap& param
    if (!persistLeases(V4) && !persistLeases(V6)) {
         LOG_WARN(dhcpsrv_logger, DHCPSRV_MEMFILE_NO_STORAGE);
     } else  {
-        if (upgrade_needed) {
-            LOG_WARN(dhcpsrv_logger, DHCPRSV_MEMFILE_UPGRADING_LEASE_FILES)
+        if (conversion_needed) {
+            LOG_WARN(dhcpsrv_logger, DHCPRSV_MEMFILE_CONVERTING_LEASE_FILES)
                     .arg(MAJOR_VERSION).arg(MINOR_VERSION);
         }
-        lfcSetup(upgrade_needed);
+        lfcSetup(conversion_needed);
     }
 }
 
@@ -907,12 +907,12 @@ bool Memfile_LeaseMgr::loadLeasesFromFiles(const std::string& filename,
     storage.clear();
 
     // Load the leasefile.completed, if exists.
-    bool upgrade_needed = false;
+    bool conversion_needed = false;
     lease_file.reset(new LeaseFileType(std::string(filename + ".completed")));
     if (lease_file->exists()) {
         LeaseFileLoader::load<LeaseObjectType>(*lease_file, storage,
                                                MAX_LEASE_ERRORS);
-        upgrade_needed |= lease_file->needsUpgrading();
+        conversion_needed = conversion_needed || lease_file->needsConversion();
     } else {
         // If the leasefile.completed doesn't exist, let's load the leases
         // from leasefile.2 and leasefile.1, if they exist.
@@ -920,14 +920,14 @@ bool Memfile_LeaseMgr::loadLeasesFromFiles(const std::string& filename,
         if (lease_file->exists()) {
             LeaseFileLoader::load<LeaseObjectType>(*lease_file, storage,
                                                    MAX_LEASE_ERRORS);
-            upgrade_needed |= lease_file->needsUpgrading();
+            conversion_needed =  conversion_needed || lease_file->needsConversion();
         }
 
         lease_file.reset(new LeaseFileType(appendSuffix(filename, FILE_INPUT)));
         if (lease_file->exists()) {
             LeaseFileLoader::load<LeaseObjectType>(*lease_file, storage,
                                                    MAX_LEASE_ERRORS);
-            upgrade_needed |= lease_file->needsUpgrading();
+            conversion_needed =  conversion_needed || lease_file->needsConversion();
         }
     }
 
@@ -940,9 +940,9 @@ bool Memfile_LeaseMgr::loadLeasesFromFiles(const std::string& filename,
     lease_file.reset(new LeaseFileType(filename));
     LeaseFileLoader::load<LeaseObjectType>(*lease_file, storage,
                                            MAX_LEASE_ERRORS, false);
-    upgrade_needed |= lease_file->needsUpgrading();
+    conversion_needed =  conversion_needed || lease_file->needsConversion();
 
-    return (upgrade_needed);
+    return (conversion_needed);
 }
 
 
@@ -970,7 +970,7 @@ Memfile_LeaseMgr::lfcCallback() {
 }
 
 void
-Memfile_LeaseMgr::lfcSetup(bool upgrade_needed) {
+Memfile_LeaseMgr::lfcSetup(bool conversion_needed) {
     std::string lfc_interval_str = "0";
     try {
         lfc_interval_str = conn_.getParameter("lfc-interval");
@@ -986,9 +986,9 @@ Memfile_LeaseMgr::lfcSetup(bool upgrade_needed) {
                   << lfc_interval_str << " specified");
     }
 
-    if (lfc_interval > 0 || upgrade_needed) {
+    if (lfc_interval > 0 || conversion_needed) {
         lfc_setup_.reset(new LFCSetup(boost::bind(&Memfile_LeaseMgr::lfcCallback, this)));
-        lfc_setup_->setup(lfc_interval, lease_file4_, lease_file6_, upgrade_needed);
+        lfc_setup_->setup(lfc_interval, lease_file4_, lease_file6_, conversion_needed);
     }
 }
 

+ 20 - 4
src/lib/dhcpsrv/memfile_lease_mgr.h

@@ -118,6 +118,18 @@ public:
 
     /// @brief The sole lease manager constructor
     ///
+    /// This method:
+    /// - Initializes the new instance based on the parameters given
+    /// - Loads (or creates) the appropriate lease file(s)
+    /// - Initiates the periodic scheduling of the LFC (if enabled)
+    ///
+    /// If any of the files loaded require conversion to the current schema
+    /// (upgrade or downgrade), @c lfcSetup() will be invoked with its
+    /// @c run_once_now parameter set to true.  This causes lfcSetup() to
+    /// invoke the LFC process immediately regardless of whether LFC is
+    /// enabled. This ensures that any files which need conversion are
+    /// converted automatically.
+    ///
     /// dbconfig is a generic way of passing parameters. Parameters
     /// are passed in the "name=value" format, separated by spaces.
     /// Values may be enclosed in double quotes, if needed.
@@ -549,6 +561,9 @@ private:
     /// @tparam LeaseFileType @c CSVLeaseFile4 or @c CSVLeaseFile6.
     /// @tparam StorageType @c Lease4Storage or @c Lease6Storage.
     ///
+    /// @return Returns true if any of the files loaded need conversion from
+    /// an older or newer schema.
+    ///
     /// @throw CSVFileError when parsing any of the lease files fails.
     /// @throw DbOpenError when it is found that the LFC is in progress.
     template<typename LeaseObjectType, typename LeaseFileType,
@@ -626,10 +641,11 @@ private:
     /// Kea build directory, the @c KEA_LFC_EXECUTABLE environmental
     /// variable should be set to hold an absolute path to the kea-lfc
     /// excutable.
-    /// @param upgrade_needed flag that indicates input lease file(s) are
-    /// from an earlier schema version and need conversion.  This value is
-    /// passed through to LFCSetup::setup() via its run_once_now parameter.
-    void lfcSetup(bool upgrade_needed = false);
+    /// @param conversion_needed flag that indicates the input lease file(s)
+    /// schema does not match the current schema (older or newer) and needs
+    /// conversion. This value is passed through to LFCSetup::setup() via its
+    /// run_once_now parameter.
+    void lfcSetup(bool conversion_needed = false);
 
     /// @brief Performs a lease file cleanup for DHCPv4 or DHCPv6.
     ///

+ 32 - 6
src/lib/dhcpsrv/tests/csv_lease_file4_unittest.cc

@@ -359,15 +359,41 @@ TEST_F(CSVLeaseFile4Test, invalidHeaderColumn) {
 }
 
 // Verifies that a lease file with more header columns than defined
-// columns will not open.
-TEST_F(CSVLeaseFile4Test, tooManyHeaderColumns) {
-    // Create 1.0 file
+// columns will downgrade.
+TEST_F(CSVLeaseFile4Test, downGrade) {
+    // Create 2.0 PLUS a column file
     io_.writeFile("address,hwaddr,client_id,valid_lifetime,expire,subnet_id,"
     io_.writeFile("address,hwaddr,client_id,valid_lifetime,expire,subnet_id,"
-                  "fqdn_fwd,fqdn_rev,state,FUTRE_COL\n");
+                  "fqdn_fwd,fqdn_rev,hostname,state,FUTURE_COL\n"
 
-    // Open the lease file.
+                  "192.0.2.3,06:07:08:09:3a:bc,,200,200,8,1,1,"
+                  "three.example.com,2,BOGUS\n");
+
+    // Lease file should open and report as needing downgrade.
     boost::scoped_ptr<CSVLeaseFile4> lf(new CSVLeaseFile4(filename_));
-    ASSERT_THROW(lf->open(), CSVFileError);
+    ASSERT_NO_THROW(lf->open());
+    EXPECT_TRUE(lf->needsConversion());
+    EXPECT_EQ(util::VersionedCSVFile::NEEDS_DOWNGRADE,
+              lf->getInputSchemaState());
+    Lease4Ptr lease;
+
+    {
+    SCOPED_TRACE("First lease valid");
+    EXPECT_TRUE(lf->next(lease));
+    ASSERT_TRUE(lease);
+
+    // Verify that the lease is correct.
+    EXPECT_EQ("192.0.2.3", lease->addr_.toText());
+    HWAddr hwaddr1(*lease->hwaddr_);
+    EXPECT_EQ("06:07:08:09:3a:bc", hwaddr1.toText(false));
+    EXPECT_FALSE(lease->client_id_);
+    EXPECT_EQ(200, lease->valid_lft_);
+    EXPECT_EQ(0, lease->cltt_);
+    EXPECT_EQ(8, lease->subnet_id_);
+    EXPECT_TRUE(lease->fqdn_fwd_);
+    EXPECT_TRUE(lease->fqdn_rev_);
+    EXPECT_EQ("three.example.com", lease->hostname_);
+    EXPECT_EQ(Lease::STATE_EXPIRED_RECLAIMED, lease->state_);
+    }
 }
 
 

+ 43 - 6
src/lib/dhcpsrv/tests/csv_lease_file6_unittest.cc

@@ -413,15 +413,52 @@ TEST_F(CSVLeaseFile6Test, invalidHeaderColumn) {
 }
 
 // Verifies that a lease file with more header columns than defined
-// columns will not open.
-TEST_F(CSVLeaseFile6Test, tooManyHeaderColumns) {
-    io_.writeFile("address,duid,valid_lifetime,expire,subnet_id,pref_lifetime,"
+// columns will open as needing a downgrade.
+TEST_F(CSVLeaseFile6Test, downGrade) {
+    // Create a mixed schema file
+    io_.writeFile(
+             // schema 1.0 header
+              "address,duid,valid_lifetime,expire,subnet_id,pref_lifetime,"
               "lease_type,iaid,prefix_len,fqdn_fwd,fqdn_rev,hostname,"
               "lease_type,iaid,prefix_len,fqdn_fwd,fqdn_rev,hostname,"
-              "hwaddr,state,FUTURE_COL\n");
+              "hwaddr,state,FUTURE_COL\n"
 
-    // Open should fail.
+              // schema 3.0 record - has hwaddr and state
+              "2001:db8:1::3,00:01:02:03:04:05:06:0a:0b:0c:0d:0e:03,"
+              "200,200,8,100,0,7,0,1,1,three.example.com,0a:0b:0c:0d:0e,1,"
+              "BOGUS\n");
+
+    // Open should succeed in the event someone is downgrading.
     boost::scoped_ptr<CSVLeaseFile6> lf(new CSVLeaseFile6(filename_));
-    ASSERT_THROW(lf->open(), CSVFileError);
+    ASSERT_NO_THROW(lf->open());
+    EXPECT_TRUE(lf->needsConversion());
+    EXPECT_EQ(util::VersionedCSVFile::NEEDS_DOWNGRADE,
+              lf->getInputSchemaState());
+
+
+    Lease6Ptr lease;
+    {
+    SCOPED_TRACE("First lease valid");
+    EXPECT_TRUE(lf->next(lease));
+    ASSERT_TRUE(lease);
+
+    // Verify that the lease attributes are correct.
+    EXPECT_EQ("2001:db8:1::3", lease->addr_.toText());
+    ASSERT_TRUE(lease->duid_);
+    EXPECT_EQ("00:01:02:03:04:05:06:0a:0b:0c:0d:0e:03", lease->duid_->toText());
+    EXPECT_EQ(200, lease->valid_lft_);
+    EXPECT_EQ(0, lease->cltt_);
+    EXPECT_EQ(8, lease->subnet_id_);
+    EXPECT_EQ(100, lease->preferred_lft_);
+    EXPECT_EQ(Lease::TYPE_NA, lease->type_);
+    EXPECT_EQ(7, lease->iaid_);
+    EXPECT_EQ(0, lease->prefixlen_);
+    EXPECT_TRUE(lease->fqdn_fwd_);
+    EXPECT_TRUE(lease->fqdn_rev_);
+    EXPECT_EQ("three.example.com", lease->hostname_);
+    ASSERT_TRUE(lease->hwaddr_);
+    EXPECT_EQ("0a:0b:0c:0d:0e", lease->hwaddr_->toText(false));
+    EXPECT_EQ(Lease::STATE_DECLINED, lease->state_);
+    }
 }
 
 

+ 6 - 0
src/lib/util/csv_file.cc

@@ -65,6 +65,12 @@ CSVRow::writeAt(const size_t at, const char* value) {
     values_[at] = value;
 }
 
+void
+CSVRow::trim(const size_t count) {
+    checkIndex(count);
+    values_.erase(values_.end() - count, values_.end());
+}
+
 std::ostream& operator<<(std::ostream& os, const CSVRow& row) {
     os << row.render();
     return (os);

+ 8 - 0
src/lib/util/csv_file.h

@@ -117,6 +117,14 @@ public:
     /// @c CSVRow::getValuesCount.
     std::string readAt(const size_t at) const;
 
+    /// @brief Trims a given number of elements from the end of a row
+    ///
+    /// @param count number of elements to trim
+    ///
+    /// @throw CSVFileError if the number to trim is larger than
+    /// the number of elements
+    void trim(const size_t count);
+
     /// @brief Retrieves a value from the internal container.
     ///
     /// This method is reads a value from the internal container and converts
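
A short usage sketch for the new trim() call (using the CSVRow string
constructor exercised by the unit test below; the values are made up):

    #include <util/csv_file.h>
    #include <iostream>

    using namespace isc::util;

    int main() {
        // A row carrying one value more than a two-column schema expects.
        CSVRow row("cat,red,EXTRA");

        // Drop the trailing value; trim() throws CSVFileError if asked to
        // remove more values than the row holds.
        row.trim(1);

        std::cout << row << std::endl;   // prints: cat,red
        return (0);
    }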

+ 29 - 0
src/lib/util/tests/csv_file_unittest.cc

@@ -109,6 +109,35 @@ TEST(CSVRow, append) {
     EXPECT_EQ("alpha,beta,gamma,delta,epsilon", text);
     EXPECT_EQ("alpha,beta,gamma,delta,epsilon", text);
 }
 }
 
 
+// This test checks that a row can be trimmed of
+// a given number of elements
+TEST(CSVRow, trim) {
+    CSVRow row("zero,one,two,three,four");
+    ASSERT_EQ(5, row.getValuesCount());
+    EXPECT_EQ("zero", row.readAt(0));
+    EXPECT_EQ("one", row.readAt(1));
+    EXPECT_EQ("two", row.readAt(2));
+    EXPECT_EQ("three", row.readAt(3));
+    EXPECT_EQ("four", row.readAt(4));
+
+    ASSERT_THROW(row.trim(10), CSVFileError);
+
+    // Verify that we can erase just one
+    ASSERT_NO_THROW(row.trim(1));
+    ASSERT_EQ(4, row.getValuesCount());
+    EXPECT_EQ("zero", row.readAt(0));
+    EXPECT_EQ("one", row.readAt(1));
+    EXPECT_EQ("two", row.readAt(2));
+    EXPECT_EQ("three", row.readAt(3));
+
+    // Verify we can trim more than one
+    ASSERT_NO_THROW(row.trim(2));
+    ASSERT_EQ(2, row.getValuesCount());
+    EXPECT_EQ("zero", row.readAt(0));
+    EXPECT_EQ("one", row.readAt(1));
+}
+
+
 /// @brief Test fixture class for testing operations on CSV file.
 ///
 /// It implements basic operations on files, such as reading writing

+ 140 - 46
src/lib/util/tests/versioned_csv_file_unittest.cc

@@ -154,21 +154,22 @@ TEST_F(VersionedCSVFileTest, addColumn) {
     ASSERT_TRUE(exists());
 
     // We should have 3 defined columns
+    // Input Header should match defined columns on new files
+    // Valid columns should match defined columns on new files
+    // Minimum valid columns wasn't set. (Remember it's optional)
     EXPECT_EQ(3, csv->getColumnCount());
     EXPECT_EQ(3, csv->getColumnCount());
-
-    // Number valid columns should match defined columns
+    EXPECT_EQ(3, csv->getInputHeaderCount());
     EXPECT_EQ(3, csv->getValidColumnCount());
     EXPECT_EQ(3, csv->getValidColumnCount());
-
-    // Minium valid columns wasn't set. (Remember it's optional)
     EXPECT_EQ(0, csv->getMinimumValidColumns());
     EXPECT_EQ(0, csv->getMinimumValidColumns());
 
 
-    // Upgrade flag should be false
-    EXPECT_EQ(false, csv->needsUpgrading());
-
     // Schema versions for new files should always match
     // Schema versions for new files should always match
     EXPECT_EQ("3.0", csv->getInputSchemaVersion());
     EXPECT_EQ("3.0", csv->getInputSchemaVersion());
     EXPECT_EQ("3.0", csv->getSchemaVersion());
     EXPECT_EQ("3.0", csv->getSchemaVersion());
 
 
+    // Input Schema State should be current for new files
+    EXPECT_EQ(VersionedCSVFile::CURRENT, csv->getInputSchemaState());
+    EXPECT_FALSE(csv->needsConversion());
+
     // Make sure we can't add columns (even unique) when the file is open.
     // Make sure we can't add columns (even unique) when the file is open.
     ASSERT_THROW(csv->addColumn("zoo", "3.0", ""), CSVFileError);
     ASSERT_THROW(csv->addColumn("zoo", "3.0", ""), CSVFileError);
 
 
@@ -178,12 +179,69 @@ TEST_F(VersionedCSVFileTest, addColumn) {
     EXPECT_NO_THROW(csv->addColumn("zoo", "3.0", ""));
     EXPECT_NO_THROW(csv->addColumn("zoo", "3.0", ""));
 }
 }
 
 
-// Verifies the basic ability to upgrade valid files. 
+// Verifies that a current schema version file loads correctly.
+TEST_F(VersionedCSVFileTest, currentSchemaTest) {
+
+    // Create our versioned file, with three columns
+    boost::scoped_ptr<VersionedCSVFile> csv(new VersionedCSVFile(testfile_));
+    ASSERT_NO_THROW(csv->addColumn("animal", "2.0", ""));
+    ASSERT_NO_THROW(csv->addColumn("color", "2.0", "grey"));
+    ASSERT_NO_THROW(csv->addColumn("age", "2.0", "0"));
+
+    // Write a file compliant with the current schema version.
+    writeFile("animal,color,age\n"
+              "cat,black,2\n"
+              "lion,yellow,17\n"
+              "dog,brown,5\n");
+
+    // Header should pass validation and allow the open to succeed.
+    ASSERT_NO_THROW(csv->open());
+
+    // For a schema-current file we should have:
+    // 3 defined columns
+    // 3 columns total found in the header
+    // 3 valid columns found in the header
+    // Minimum valid columns wasn't set. (Remember it's optional)
+    EXPECT_EQ(3, csv->getColumnCount());
+    EXPECT_EQ(3, csv->getInputHeaderCount());
+    EXPECT_EQ(3, csv->getValidColumnCount());
+    EXPECT_EQ(0, csv->getMinimumValidColumns());
+
+    // Input schema and current schema should both be 2.0
+    EXPECT_EQ("2.0", csv->getInputSchemaVersion());
+    EXPECT_EQ("2.0", csv->getSchemaVersion());
+
+    // Input Schema State should be CURRENT
+    EXPECT_EQ(VersionedCSVFile::CURRENT, csv->getInputSchemaState());
+    EXPECT_FALSE(csv->needsConversion());
+
+    // First row is correct.
+    CSVRow row;
+    ASSERT_TRUE(csv->next(row));
+    EXPECT_EQ("cat", row.readAt(0));
+    EXPECT_EQ("black", row.readAt(1));
+    EXPECT_EQ("2", row.readAt(2));
+
+    // Second row is correct.
+    ASSERT_TRUE(csv->next(row));
+    EXPECT_EQ("lion", row.readAt(0));
+    EXPECT_EQ("yellow", row.readAt(1));
+    EXPECT_EQ("17", row.readAt(2));
+
+    // Third row is correct.
+    ASSERT_TRUE(csv->next(row));
+    EXPECT_EQ("dog", row.readAt(0));
+    EXPECT_EQ("brown", row.readAt(1));
+    EXPECT_EQ("5", row.readAt(2));
+}
+
+
+// Verifies the basic ability to upgrade valid files.
 // It starts with a version 1.0 file and updates
 // it through two schema evolutions.
 TEST_F(VersionedCSVFileTest, upgradeOlderVersions) {
 
-    // Create version 1.0 schema  CSV file 
+    // Create version 1.0 schema  CSV file
     writeFile("animal\n"
     writeFile("animal\n"
               "cat\n"
               "cat\n"
               "lion\n"
               "lion\n"
@@ -198,22 +256,24 @@ TEST_F(VersionedCSVFileTest, upgradeOlderVersions) {
     // Header should pass validation and allow the open to succeed.
     ASSERT_NO_THROW(csv->open());
 
-    // We should have 2 defined columns
+    // We should have:
+    // 2 defined columns
+    // 1 column found in the header
+    // 1 valid column in the header
+    // Minimum valid columns wasn't set. (Remember it's optional)
     EXPECT_EQ(2, csv->getColumnCount());
-
-    // We should have found 1 valid column in the header
+    EXPECT_EQ(1, csv->getInputHeaderCount());
     EXPECT_EQ(1, csv->getValidColumnCount());
-
-    // Minium valid columns wasn't set. (Remember it's optional)
     EXPECT_EQ(0, csv->getMinimumValidColumns());
 
-    // Upgrade flag should be true
-    EXPECT_EQ(true, csv->needsUpgrading());
-
     // Input schema should be 1.0, while our current schema should be 2.0
     EXPECT_EQ("1.0", csv->getInputSchemaVersion());
     EXPECT_EQ("2.0", csv->getSchemaVersion());
 
+    // Input Schema State should be NEEDS_UPGRADE
+    EXPECT_EQ(VersionedCSVFile::NEEDS_UPGRADE, csv->getInputSchemaState());
+    EXPECT_TRUE(csv->needsConversion());
+
     // First row is correct.
     CSVRow row;
     ASSERT_TRUE(csv->next(row));
@@ -256,22 +316,24 @@ TEST_F(VersionedCSVFileTest, upgradeOlderVersions) {
     // Header should pass validation and allow the open to succeed
     ASSERT_NO_THROW(csv->open());
 
-    // We should have 2 defined columns
+    // We should have:
+    // 3 defined columns
+    // 1 column found in the header
+    // 1 valid column in the header
+    // Minimum valid columns wasn't set. (Remember it's optional)
     EXPECT_EQ(3, csv->getColumnCount());
-
-    // We should have found 1 valid column in the header
+    EXPECT_EQ(1, csv->getInputHeaderCount());
     EXPECT_EQ(1, csv->getValidColumnCount());
-
-    // Minium valid columns wasn't set. (Remember it's optional)
     EXPECT_EQ(0, csv->getMinimumValidColumns());
 
-    // Upgrade flag should be true
-    EXPECT_EQ(true, csv->needsUpgrading());
-
     // Make sure schema versions are accurate
     EXPECT_EQ("1.0", csv->getInputSchemaVersion());
     EXPECT_EQ("3.0", csv->getSchemaVersion());
 
+    // Input Schema State should be NEEDS_UPGRADE
+    EXPECT_EQ(VersionedCSVFile::NEEDS_UPGRADE, csv->getInputSchemaState());
+    EXPECT_TRUE(csv->needsConversion());
+
     // First row is correct.
     ASSERT_TRUE(csv->next(row));
     EXPECT_EQ("cat", row.readAt(0));
@@ -298,7 +360,7 @@ TEST_F(VersionedCSVFileTest, upgradeOlderVersions) {
 }
 
 TEST_F(VersionedCSVFileTest, minimumValidColumn) {
-    // Create version 1.0 schema  CSV file 
+    // Create version 1.0 schema  CSV file
     writeFile("animal\n"
     writeFile("animal\n"
               "cat\n"
               "cat\n"
               "lion\n"
               "lion\n"
@@ -311,7 +373,7 @@ TEST_F(VersionedCSVFileTest, minimumValidColumn) {
     ASSERT_NO_THROW(csv->addColumn("color", "2.0", "blue"));
     ASSERT_NO_THROW(csv->addColumn("color", "2.0", "blue"));
     ASSERT_NO_THROW(csv->addColumn("age", "3.0", "21"));
     ASSERT_NO_THROW(csv->addColumn("age", "3.0", "21"));
 
 
-    // Verify we can't set minimum columns with a non-existant column
+    // Verify we can't set minimum columns with a non-existent column
     EXPECT_THROW(csv->setMinimumValidColumns("bogus"), VersionedCSVFileError);
 
     // Set the minimum number of columns to "color"
@@ -346,42 +408,74 @@ TEST_F(VersionedCSVFileTest, minimumValidColumn) {
 
 
 TEST_F(VersionedCSVFileTest, invalidHeaderColumn) {
 
-    // Create version 2.0 schema  CSV file 
-    writeFile("animal,colour\n"
-              "cat,red\n"
-              "lion,green\n");
-
-    // Create our versioned file, with three columns, one for each
-    // schema version
+    // Create our version 2.0 schema file
     boost::scoped_ptr<VersionedCSVFile> csv(new VersionedCSVFile(testfile_));
     ASSERT_NO_THROW(csv->addColumn("animal", "1.0", ""));
     ASSERT_NO_THROW(csv->addColumn("color", "2.0", "blue"));
 
+    // Create a file with the correct number of columns but a wrong column name
+    writeFile("animal,colour\n"
+              "cat,red\n"
+              "lion,green\n");
+
     // Header validation should fail, we have an invalid column
     ASSERT_THROW(csv->open(), CSVFileError);
 }
 
-TEST_F(VersionedCSVFileTest, tooManyHeaderColumns) {
-
-    // Create version 2.0 schema  CSV file 
-    writeFile("animal,color,age\n,"
-              "cat,red\n"
-              "lion,green\n");
-
-    // Create our versioned file, with three columns, one for each
-    // schema version
+TEST_F(VersionedCSVFileTest, downGrading) {
+    // Create our version 2.0 schema file
     boost::scoped_ptr<VersionedCSVFile> csv(new VersionedCSVFile(testfile_));
     ASSERT_NO_THROW(csv->addColumn("animal", "1.0", ""));
     ASSERT_NO_THROW(csv->addColumn("color", "2.0", "blue"));
 
-    // Header validation should fail, we have too many columns 
-    ASSERT_THROW(csv->open(), CSVFileError);
+    // Create schema 2.0 file PLUS an extra column
+    writeFile("animal,color,age\n"
+              "cat,red,5\n"
+              "lion,green,8\n");
+
+    // Header should validate and file should open.
+    ASSERT_NO_THROW(csv->open());
+
+    // We should have:
+    // 2 defined columns
+    // 3 columns found in the header
+    // 2 valid columns in the header
+    // Minimum valid columns wasn't set. (Remember it's optional)
+    EXPECT_EQ(2, csv->getColumnCount());
+    EXPECT_EQ(3, csv->getInputHeaderCount());
+    EXPECT_EQ(2, csv->getValidColumnCount());
+    EXPECT_EQ(0, csv->getMinimumValidColumns());
+
+    // Input schema and current schema should both be 2.0
+    EXPECT_EQ("2.0", csv->getInputSchemaVersion());
+    EXPECT_EQ("2.0", csv->getSchemaVersion());
+
+    // Input Schema State should be NEEDS_DOWNGRADE
+    EXPECT_EQ(VersionedCSVFile::NEEDS_DOWNGRADE, csv->getInputSchemaState());
+    EXPECT_TRUE(csv->needsConversion());
+
+    // First row is correct.
+    CSVRow row;
+    EXPECT_TRUE(csv->next(row));
+    EXPECT_EQ("cat", row.readAt(0));
+    EXPECT_EQ("red", row.readAt(1));
+
+    // No data beyond the second column
+    EXPECT_THROW(row.readAt(2), CSVFileError);
+
+    // Second row is correct.
+    ASSERT_TRUE(csv->next(row));
+    EXPECT_EQ("lion", row.readAt(0));
+    EXPECT_EQ("green", row.readAt(1));
+
+    // No data beyond the second column
+    EXPECT_THROW(row.readAt(2), CSVFileError);
 }
 }
 
 
 TEST_F(VersionedCSVFileTest, rowChecking) {
     // Create version 2.0 schema CSV file with a
+    // - valid header
     // - row 0 has too many values
     // - row 0 has too many values
     // - row 1 is valid
     // - row 3 is too few values
+ 61 - 30
src/lib/util/versioned_csv_file.cc

@@ -19,7 +19,8 @@ namespace util {
 
 
 VersionedCSVFile::VersionedCSVFile(const std::string& filename)
 VersionedCSVFile::VersionedCSVFile(const std::string& filename)
     : CSVFile(filename), columns_(0), valid_column_count_(0),
     : CSVFile(filename), columns_(0), valid_column_count_(0),
-      minimum_valid_columns_(0) {
+      minimum_valid_columns_(0), input_header_count_(0),
+      input_schema_state_(CURRENT) {
 }
 }
 
 
 VersionedCSVFile::~VersionedCSVFile() {
 VersionedCSVFile::~VersionedCSVFile() {
@@ -55,6 +56,11 @@ VersionedCSVFile::getValidColumnCount() const {
     return (valid_column_count_);
     return (valid_column_count_);
 }
 }
 
 
+size_t
+VersionedCSVFile::getInputHeaderCount() const {
+    return (input_header_count_);
+}
+
 void
 void
 VersionedCSVFile::open(const bool seek_to_end) {
 VersionedCSVFile::open(const bool seek_to_end) {
     if (getColumnCount() == 0) {
     if (getColumnCount() == 0) {
@@ -75,13 +81,19 @@ VersionedCSVFile::recreate() {
     }
     }
 
 
     CSVFile::recreate();
     CSVFile::recreate();
-    // For new files they always match. 
+    // For new files they always match.
     valid_column_count_ = getColumnCount();
     valid_column_count_ = getColumnCount();
+    input_header_count_ = getColumnCount();
+}
+
+VersionedCSVFile::InputSchemaState
+VersionedCSVFile::getInputSchemaState() const {
+    return (input_schema_state_);
 }
 }
 
 
 bool
 bool
-VersionedCSVFile::needsUpgrading() const {
-    return (getValidColumnCount() < getColumnCount());
+VersionedCSVFile::needsConversion() const {
+    return (input_schema_state_ != CURRENT);
 }
 }
 
 
 std::string
 std::string
@@ -105,7 +117,7 @@ VersionedCSVFile::getSchemaVersion() const {
 const VersionedColumnPtr&
 const VersionedColumnPtr&
 VersionedCSVFile::getVersionedColumn(const size_t index) const {
 VersionedCSVFile::getVersionedColumn(const size_t index) const {
     if (index >= getColumnCount()) {
     if (index >= getColumnCount()) {
-        isc_throw(isc::OutOfRange, "versioned column index " << index 
+        isc_throw(isc::OutOfRange, "versioned column index " << index
                   << " out of range;  CSV file : " << getFilename()
                   << " out of range;  CSV file : " << getFilename()
                   << " only has " << getColumnCount() << " columns ");
                   << " only has " << getColumnCount() << " columns ");
     }
     }
@@ -115,7 +127,7 @@ VersionedCSVFile::getVersionedColumn(const size_t index) const {
 
 
 bool
 bool
 VersionedCSVFile::next(CSVRow& row) {
 VersionedCSVFile::next(CSVRow& row) {
-    // Use base class to physicall read the row, but skip its row
+    // Use base class to physically read the row, but skip its row
     // validation
     // validation
     CSVFile::next(row, true);
     CSVFile::next(row, true);
     if (row == CSVFile::EMPTY_ROW()) {
     if (row == CSVFile::EMPTY_ROW()) {
@@ -128,48 +140,54 @@ VersionedCSVFile::next(CSVRow& row) {
     // an invalid row.
     // an invalid row.
     if (row.getValuesCount() < getValidColumnCount()) {
     if (row.getValuesCount() < getValidColumnCount()) {
         std::ostringstream s;
         std::ostringstream s;
-        s << "the size of the row '" << row << "' has too few valid columns "
+        s << " The row '" << row << "' has too few valid columns "
           << getValidColumnCount() << "' of the CSV file '"
           << getValidColumnCount() << "' of the CSV file '"
           << getFilename() << "'";
           << getFilename() << "'";
         setReadMsg(s.str());
         setReadMsg(s.str());
         return (false);
         return (false);
     }
     }
 
 
-    // If we're upgrading, we need to add in any missing values
-    for (size_t index = row.getValuesCount(); index < getColumnCount();
-         ++index) {
-        row.append(columns_[index]->default_value_);
+    // If we have more values than columns defined, we need to
+    // check if we should "downgrade" the row. We will if the
+    // number of values we have matches the number of columns in
+    // input header.  If now we'll toss the row.
+    if (row.getValuesCount() > getColumnCount()) {
+        if (row.getValuesCount() != getInputHeaderCount()) {
+            std::ostringstream s;
+            s << " The row '" << row << "' has too many columns "
+              << getValidColumnCount() << "' of the CSV file '"
+              << getFilename() << "'";
+              setReadMsg(s.str());
+            return (false);
+        }
+
+        // We're downgrading a row, so toss the extra columns
+        row.trim(row.getValuesCount() - getColumnCount());
+    } else {
+        // If we're upgrading, we need to add in any missing values
+        for (size_t index = row.getValuesCount(); index < getColumnCount();
+            ++index) {
+            row.append(columns_[index]->default_value_);
+        }
     }
     }
 
 
-    return (CSVFile::validate(row));
+    return (true);
 }
 }
 
 
 bool
 bool
 VersionedCSVFile::validateHeader(const CSVRow& header) {
 VersionedCSVFile::validateHeader(const CSVRow& header) {
-    // @todo does this ever make sense? What would be the point of a versioned
-    // file that has no defined columns?
     if (getColumnCount() == 0) {
     if (getColumnCount() == 0) {
-        return (true);
+        isc_throw(VersionedCSVFileError,
+                  "cannot validate header, no schema has been defined");
     }
     }
 
 
-    // If there are more values in the header than defined columns
-    // then the lease file must be from a newer version, so bail out.
-    // @todo - we may wish to remove this constraint as it prohibits one
-    // from usig a newer schema file with older schema code.
-    if (header.getValuesCount() > getColumnCount()) {
-        std::ostringstream s;
-        s << " - header has " << header.getValuesCount()  << " column(s), "
-          << "it should not have more than " << getColumnCount();
-
-        setReadMsg(s.str());
-        return (false);
-    }
+    input_header_count_ = header.getValuesCount();
 
 
     // Iterate over the number of columns in the header, testing
     // Iterate over the number of columns in the header, testing
     // each against the defined column in the same position.
     // each against the defined column in the same position.
     // If there is a mismatch, bail.
     // If there is a mismatch, bail.
     size_t i = 0;
     size_t i = 0;
-    for (  ; i < header.getValuesCount(); ++i) {
+    for (  ; i < getInputHeaderCount() && i < getColumnCount(); ++i) {
         if (getColumnName(i) != header.readAt(i)) {
         if (getColumnName(i) != header.readAt(i)) {
             std::ostringstream s;
             std::ostringstream s;
             s << " - header contains an invalid column: '"
             s << " - header contains an invalid column: '"
@@ -181,10 +199,10 @@ VersionedCSVFile::validateHeader(const CSVRow& header) {
 
 
     // If we found too few valid columns, then we cannot convert this
     // If we found too few valid columns, then we cannot convert this
     // file.  It's too old, too corrupt, or not a Kea file.
     // file.  It's too old, too corrupt, or not a Kea file.
-    if (i < minimum_valid_columns_) {
+    if (i < getMinimumValidColumns()) {
         std::ostringstream s;
         std::ostringstream s;
         s << " - header has only " << i << " valid column(s), "
         s << " - header has only " << i << " valid column(s), "
-          << "it must have at least " << minimum_valid_columns_;
+          << "it must have at least " << getMinimumValidColumns();
         setReadMsg(s.str());
         setReadMsg(s.str());
         return (false);
         return (false);
     }
     }
@@ -195,6 +213,19 @@ VersionedCSVFile::validateHeader(const CSVRow& header) {
     // and upgrade data rows.
     // and upgrade data rows.
     valid_column_count_ = i;
     valid_column_count_ = i;
 
 
+    if (getValidColumnCount() < getColumnCount()) {
+        input_schema_state_ = NEEDS_UPGRADE;
+    } else if (getInputHeaderCount() > getColumnCount()) {
+        // If there are more values in the header than defined columns
+        // then, we'll drop the extra.  This allows someone to attempt to
+        // downgrade if need be.
+        input_schema_state_ = NEEDS_DOWNGRADE;
+        std::ostringstream s;
+        s << " - header has " << getInputHeaderCount() - getColumnCount()
+          << " extra column(s), these will be ignored";
+        setReadMsg(s.str());
+    }
+
     return (true);
     return (true);
 }
 }
 
 

+ 51 - 17
src/lib/util/versioned_csv_file.h

@@ -85,40 +85,57 @@ typedef boost::shared_ptr<VersionedColumn> VersionedColumnPtr;
 /// the column found in the header to the columns defined in the schema. The
 /// the column found in the header to the columns defined in the schema. The
 /// columns must match both by name and the order in which they occur.
 /// columns must match both by name and the order in which they occur.
 ///
 ///
-/// 1. If there are fewer columns in the header than in the schema, the file
+/// -# If there are fewer columns in the header than in the schema, the file
 /// is presumed to be an earlier schema version and will be upgraded as it is
 /// is presumed to be an earlier schema version and will be upgraded as it is
 /// read.  There is an ability to mark a specific column as being the minimum
 /// read.  There is an ability to mark a specific column as being the minimum
 /// column which must be present, see @ref VersionedCSVFile::
 /// column which must be present, see @ref VersionedCSVFile::
 /// setMinimumValidColumns().  If the header does contain match up to this
 /// setMinimumValidColumns().  If the header does contain match up to this
 /// minimum column, the file is presumed to be too old to upgrade and the
 /// minimum column, the file is presumed to be too old to upgrade and the
-/// open will fail.
+/// open will fail.  A valid, upgradable file will have an input schema
+/// state of VersionedCSVFile::NEEDS_UPGRADE.
 ///
 ///
-/// 2. If there is a mismatch between a found column name and the column name
+/// -# If there is a mismatch between a found column name and the column name
 /// defined for that position in the row, the file is presumed to be invalid
 /// defined for that position in the row, the file is presumed to be invalid
 /// and the open will fail.
 /// and the open will fail.
 ///
 ///
+/// -# If the content of the header matches exactly the columns defined in
+/// the schema, the file is considered to match the schema exactly and the
+/// input schema state will VersionedCSVFile::CURRENT.
+///
+/// -# If there columns in the header beyond all of the columns defined in
+/// the schema (i.e the schema is a subset of the header), then the file
+/// is presumed to be from a newer version of Kea and can be downgraded. The
+/// input schema state fo the file will be set to
+/// VersionedCSVFile::NEEDS_DOWNGRADE.
+///
 /// After successfully opening a file,  rows are read one at a time via
 /// After successfully opening a file,  rows are read one at a time via
 /// @ref VersionedCSVFile::next().  Each data row is expected to have at least
 /// @ref VersionedCSVFile::next().  Each data row is expected to have at least
 /// the same number of columns as were found in the header. Any row which as
 /// the same number of columns as were found in the header. Any row which as
 /// fewer values is discarded as invalid.  Similarly, any row which is found
 /// fewer values is discarded as invalid.  Similarly, any row which is found
-/// to have more values than are defined in the schema is discarded as invalid
-/// (@todo, consider removing this constraint as it would prohibit reading a
-/// newer schema file with an older server).
+/// to have more values than were found in the header is discarded as invalid.
 ///
 ///
-/// When a row is found to have fewer than the defined number of columns,
-/// the values for each missing column is filled in with the default value
-/// specified by that column's descriptor.  In this manner rows from earlier
-/// schemas are upgraded to the current schema.
+/// When upgrading a row, the values for each missing column is filled in
+/// with the default value specified by that column's descriptor.  When
+/// downgrading a row, extraneous values are dropped from the row.
 ///
 ///
-/// It is important to note that upgrading a file does NOT alter the physical
-/// file itself.  Rather the conversion occurs after the raw data has been read
-/// but before it is passed to caller.
+/// It is important to note that upgrading or downgrading a file does NOT
+/// alter the physical file itself.  Rather the conversion occurs after the
+/// raw data has been read but before it is passed to caller.
 ///
 ///
 /// Also note that there is currently no support for writing out a file in
 /// Also note that there is currently no support for writing out a file in
 /// anything other than the current schema.
 /// anything other than the current schema.
 class VersionedCSVFile : public CSVFile {
 class VersionedCSVFile : public CSVFile {
 public:
 public:
 
 
+    /// @brief Possible input file schema states.
+    /// Used to categorize the input file's schema, relative to the defined
+    /// schema.
+    enum InputSchemaState {
+        CURRENT,
+        NEEDS_UPGRADE,
+        NEEDS_DOWNGRADE
+    };
+
     /// @brief Constructor.
     /// @brief Constructor.
     ///
     ///
     /// @param filename CSV file name.
     /// @param filename CSV file name.
@@ -164,6 +181,8 @@ public:
     /// been opened.
     /// been opened.
     size_t getValidColumnCount() const;
     size_t getValidColumnCount() const;
 
 
+    /// @brief Returns the number of columns found in the input header
+    size_t getInputHeaderCount() const;
+
     /// @brief Opens existing file or creates a new one.
     /// @brief Opens existing file or creates a new one.
     ///
     ///
     /// This function will try to open existing file if this file has size
     /// This function will try to open existing file if this file has size
@@ -237,11 +256,18 @@ public:
     /// @trow OutOfRange exception if the index is invalid
     /// @trow OutOfRange exception if the index is invalid
     const VersionedColumnPtr& getVersionedColumn(const size_t index) const;
     const VersionedColumnPtr& getVersionedColumn(const size_t index) const;
 
 
-    /// @brief Returns true if the opened file is needs to be upgraded
+    /// @brief Fetches the state of the input file's schema
     ///
     ///
-    /// @return true if the file's valid column count is greater than 0 and
-    /// is less than the defined number of columns
-    bool needsUpgrading() const;
+    /// Reflects the state of the input file's schema relative to the
+    /// defined schema as an enum, InputSchemaState.
+    ///
+    /// @return VersionedCSVFile::CURRENT if the input file schema matches
+    /// the defined schema, NEEDS_UPGRADE if the input file schema is older,
+    /// and NEEDS_DOWNGRADE if it is newer
+    enum InputSchemaState getInputSchemaState() const;
+
+    /// @brief Returns true if the input file schema state is not CURRENT
+    bool needsConversion() const;
 
 
 protected:
 protected:
 
 
@@ -272,6 +298,14 @@ private:
     /// @brief Minimum number of valid columns an input file must contain.
     /// @brief Minimum number of valid columns an input file must contain.
     /// If an input file does not meet this number it cannot be upgraded.
     /// If an input file does not meet this number it cannot be upgraded.
     size_t minimum_valid_columns_;
     size_t minimum_valid_columns_;
+
+    /// @brief The number of columns found in the input header row
+    /// This value represent the number of columns present, in the header
+    /// valid or otherwise.
+    size_t input_header_count_;
+
+    /// @brief The state of the input schema in relation to the current schema
+    enum InputSchemaState input_schema_state_;
 };
 };