Migrating to FairCom DB SQL

FairCom strives to make the transition from older c-tree versions to FairCom DB SQL as easy as possible by maintaining the utmost in compatibility. This chapter addresses areas where you may occasionally encounter unexpected behavior.

Migration from c-tree ODBC to FairCom DB SQL ODBC

With more and more traditional c-tree users leveraging the power of FairCom DB SQL, FairCom has labored to keep this transition as easy as possible. While we strive for the utmost in compatibility, there are a few areas where you may encounter unexpected behavior. The c-tree ODBC and FairCom DB ODBC drivers are both fully standards compliant; however, subtle differences do exist. These differences are summarized below.

  • All table names returned by the FairCom DB ODBC Driver are in the form “owner.tablename”, regardless of the server configuration option SQL_OPTION OWNER_FILE_NAMES.
  • The SQL_OPTION OWNER_FILE_NAMES option affects only the name of the table created on disk: it determines whether the owner name is prepended to the physical table name. Table names returned by the ODBC driver are not affected by this FairCom DB SQL option.
  • With the c-tree ODBC driver, it was common practice to represent the symbolic table name in uppercase. You can continue to use uppercase table names in FairCom DB SQL as long as you do not wrap them in double quotes.

    Another option is to use the SQL_OPTION DB_CASE_INSENSITIVE keyword before creating the template database and the FairCom DB SQL databases.
  • Hidden Fields: The c-tree drivers will not display a field whose field name given in the DODA begins with an underscore: “_”. Hiding fields that are used to form keys is not recommended and may cause problems.
  • Read-Only Fields: The c-tree drivers will display but will not allow updates on a field whose field name given in the DODA begins with the "legacy read only" field identifier, a right square bracket: “]”.

    FairCom DB SQL import does not handle this field identifier. The character is imported as part of the field name unless the -k option (skip fields not complying with conventional identifier rules) is specified on import. Because this is an unconventional character, such a field name must then be double-quoted in SQL statements.
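The SQL_OPTION settings mentioned above are server configuration keywords; a minimal sketch of how they might appear in the server configuration file (assuming the standard ctsrvr.cfg file and its semicolon comment syntax):

```
; ctsrvr.cfg excerpt: treat SQL identifiers as case-insensitive
SQL_OPTION DB_CASE_INSENSITIVE

; prepend the owner name to the physical table file name on disk
SQL_OPTION OWNER_FILE_NAMES
```

Remember that DB_CASE_INSENSITIVE must be in effect before the template database and the FairCom DB SQL databases are created.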

Overview of the FairCom DB SQL Migration SDK

Many customers have a long history using FairCom’s ISAM and low-level APIs. With the introduction of the additional capabilities of FairCom DB SQL, many users would like to migrate their existing c-tree applications and data to take advantage of this powerful SQL technology. The rich flexibility c-tree has given database architects through the years now presents a challenge when moving to the standards-driven SQL interface.

This SDK is provided to assist in the migration of existing ISAM tables for use with FairCom DB SQL. Unlike the FairCom DB SQL Import Utility (ctsqlimp), which simply makes existing tables accessible through SQL, the Migration SDK physically relocates data files for use with FairCom DB SQL. It is important to note that the migrated tables may no longer be accessible to your existing ISAM application. If you need to share data between existing ISAM applications and FairCom DB SQL, consider using the FairCom DB SQL Import Utility instead of the FairCom DB SQL Migration SDK.

With modifications, the FairCom DB SQL Migration Utility can do data transformations, filtering and other changes to the data files which may be required with a migration from ISAM to SQL. Source code is provided and callback “hooks” are available to the advanced user to implement these capabilities. This requires a certain level of programming ability. The migrate utility does not update indexes during the data transfer but does attempt to rebuild the indexes after moving the records.

Migration

The following sequence of steps assumes you have a standard (non-SQL) FairCom Server in operation and are switching to FairCom DB SQL. If you are already using FairCom DB SQL, you may create your new tables immediately and proceed with step 4. Actual procedures will vary depending upon your applications, data, and needs. See below for detailed information on customizing this utility for your specific data transformation needs. Please contact your nearest FairCom office if you need assistance determining or addressing your specific needs.

Key Steps

These steps assume you will keep your existing standard FairCom Server directory untouched and intact. You should always back up all of your existing data and configuration files before upgrading to any new FairCom Server to ensure complete data integrity.

  1. Log off all clients and shut down the existing standard FairCom Server if it is running.
  2. Restart the server with no clients attached, and then shut it down again. This ensures automatic recovery has taken place and the c-tree data files are in a clean state.
  3. Copy your ctsqlimp utility into your FairCom DB SQL directory.
  4. Copy your data file(s) into your FairCom DB SQL directory.
  5. Start FairCom DB SQL.
  6. Use the ctsqlimp utility. This allows you to automatically build the tables you require based on your existing data files.

Note: Your server must be up and running to run this utility.

Type the following at the command line:

ctsqlimp yourdatafile.dat -u ADMIN -a ADMIN -s FAIRCOMS

Important Observations

  • The index rebuilding phase occurs even if you have not defined any indexes or keys on your tables, because the utility updates the internal indexes (ROWID and RECBYT) required by FairCom DB SQL.
  • By default, records are moved between the two tables, field by field, matching fields by name. Fields in the destination table that have no matching field in the source table are set to NULL.
  • Type conversion is supported only between variable-length and fixed-length strings (in both directions). However, this behavior can be changed by implementing the fmap member in the MigrationInfo structure as described later.
  • If the original table was created with the c-treeDB API and contains a ROWID field, that value is maintained in the destination table.
  • If the utility fails with error CTDB_INTERNAL (4025), the destination table may be corrupt and should be recovered from a backup or recreated.
  • The c-tree standalone, single user model with transaction processing is used by this utility.
  • A default 50MB buffer is used for rebuilding operations. Using as much memory as you have available will increase the speed of the data migration.

Customization

The FairCom DB SQL Migration Utility requires two ‘C’ source files, ctsqlmgr.c and migrfncs.c, which can be modified extensively, as described below, to meet the exact needs of your application. These two files can be found in ctreeSDK\ctreeAPI\ctree\samples\special\sqlmigr. To compile the program, you will need to generate and link against a c-tree Plus single user standalone library with c-treeDB API support and transaction processing.

The logic of the migration utility is in ctsqlmgr.c, and this is the module to which you will add any customized code. The migrfncs.c file can be modified as well, although this should not be necessary under normal circumstances. The core of the ctsqlmgr.c module is the following function:

MigrateTable(pCTDBSESSION hSession, pTEXT orig, pTEXT dest, MigrationInfo *fconv)

This function takes a pointer, hSession, to a c-treeDB API session which is currently logged in, the source file name, orig, the destination file name, dest, and a pointer to a structure containing information about the migration process, fconv. The majority of your customizations will be located in this MigrationInfo structure. It may be necessary to add custom logon information for your particular installation. That information is located outside of this structure.

If the migration structure is not provided to the MigrateTable() function (i.e., fconv is set to NULL), the code internally populates a MigrationInfo structure based on the default implementation described above.

The MigrationInfo structure is defined in migrfncs.c and described as follows.

typedef struct _MigrationInfo
{
    /* callbacks */
    MigrFunc AfterRecRead;
    MigrFunc AfterRecWrite;
    MigrFunc AfterRecConv;
    MigrFunc BeforeOpens;
    MigrFunc BeforeFirst;

    /* ErrMsgFunc */
    ErrMsgFunc ErrorMessage;

    /* memory usage */
    ULONG memoryKB;

    /* source/dest objects */
    pCTDBTABLE hSourceTbl;
    pCTDBTABLE hDestTbl;
    pCTDBRECORD hSourceRec;
    pCTDBRECORD hDestRec;

    /* field mapping */
    FieldMap *fmap;

    /* internals */
    pTEXT strBuffer;
    VRLEN strSize;
    pCTBLOB pBlob;
} MigrationInfo;

Migration Callbacks

AfterRecRead(), AfterRecWrite(), AfterRecConv(), BeforeOpens(), and BeforeFirst() are callback functions that can be used to perform verifications on the record being migrated, implement custom error handling, set table and record properties, and so on.

The callbacks have the following prototype:

NINT (* MigrFunc) (CTHANDLE Source, CTHANDLE Dest, CTDBRET err);

  • Source and Dest are the source and destination record or table handles (depending on the callback).
  • err is the error code generated by the last operation (only for the Afterxxx() functions).

The callbacks should handle the error code passed in and return one of:

  • ERR_STOP to stop the migration execution
  • ERR_CONT to continue the execution
  • ERR_SKIP to continue the execution by fetching a new record; the current record is not migrated to the new table.

AfterRecRead(): The migration logic calls this callback after reading a record from the source table. Note that it is not invoked immediately after the record is read at the ISAM level, but after c-treeDB API has read the record and performed any internal operations it requires. When implemented, this callback should perform error handling. The Source parameter is the record handle; Dest is always NULL.

AfterRecWrite(): This function callback is called after having written a record to the destination table. Source and Dest parameters of the function call are record handles.

AfterRecConv(): This function callback is called after converting the source record into the destination record. This is most useful for error checking. Source and Dest parameters of the function call are record handles.

BeforeOpens(): This function callback is called before opening the tables. Source and Dest parameters of the function call are table handles.

BeforeFirst(): This function callback is called before reading the first record, hence before starting the actual record migration. Source and Dest parameters of the function call are table handles.

Error Handling Callback

ErrorMessage(): When specified, this callback is called to display error messages. If set to NULL, error message display is skipped.

The function prototype is as follows:

VOID (* ErrMsgFunc) (pTEXT msg);

msg is the error message. If you need to output error messages from within your own callback implementations, you can invoke this callback through the macro PRINT_ERROR(msg).

Memory Usage

memoryKB: This is the amount of memory to use for the rebuild (default is 50 MB).

Source and Destination Objects

hSourceTbl, hDestTbl: These are pointers to the source and destination table handles. These can be set to NULL and, in this case, the handles are allocated and freed internally. Providing these handles is useful in cases when it is necessary to set a table property, such as a callback at c-treeDB API level. The handle must not be active; there should be no open table linked to the handle.

hSourceRec, hDestRec: These are pointers to the source and destination record handles. These can be set to NULL and, in this case, the handles are allocated and freed internally. The handle must relate to the proper table (i.e., hSourceRec = ctdbAllocRecord(hSourceTbl);)

Field Mappings

fmap: This is the structure where the bulk of your customizations take place. It is a pointer to a field conversion array; NULL selects the default behavior. When this field is not NULL, it must contain an array of FieldMap elements, and the number of elements MUST equal the number of fields in the destination table (not counting internal c-treeDB API fields). Elements correspond to destination fields in order: the first field corresponds to the first element in the array, the second field to the second element, and so forth. This structure provides a mechanism for reading a field (sfldno) from the source table into a string, which is then stored into the destination record using ctdbSetFieldAsString().

FieldMap has the following definition:

typedef struct _fieldmap {
    NINT sfldno;
    GetBufferLenFunc GetBufferLen;
    GetFieldFunc GetFieldAsString;
    IsNullFunc IsNullField;
    pTEXT dfl_value;
} FieldMap;

sfldno: This is the field number in the source table. Set it to CT_MIGRATE_NEW_FIELD when there is no corresponding field in the source table.

GetBufferLen: This optional pointer points to a function returning the size of the string buffer used when retrieving the source field as a string via the GetFieldAsString function pointer. When this field is not specified, the field is copied without any conversion (a binary copy).

The function prototype is as follows:

VRLEN (* GetBufferLenFunc) (CTHANDLE Handle, NINT FieldNbr);

Where Handle is the record handle the logic will pass to the function and FieldNbr is the field number on the source table. The function should return the buffer size.

GetFieldAsString: This optional pointer refers to a function that converts the field in the source record into a proper string that can be stored into the destination field using the ctdbSetFieldAsString() function. Note that the destination field does not necessarily need to be a string field, as c-treeDB API is able to convert the string into the appropriate field type. When GetBufferLen is not specified, the field is copied without any conversion.

The function prototype is as follows:

CTDBRET (* GetFieldFunc) (CTHANDLE Handle, NINT FieldNbr, CTSTRING pValue, VRLEN size);

Where Handle is the record handle, FieldNbr is the field number in the source table, and pValue is a buffer of size bytes that the function must fill with the proper information as fetched from the source record. The function should return CTDBRET_OK on success, otherwise an error code.

IsNullField: This optional pointer refers to a function that determines whether a source field should be treated as NULL. Here, Handle is the record handle and FieldNbr is the field number in the source table. The function should return YES if the field is NULL (or should be considered as NULL), NO otherwise.

dfl_value: A string containing the default value to store in the field when sfldno is CT_MIGRATE_NEW_FIELD or the source field is set to NULL.