Inserts data from an external file with a supported file format into a table, hierarchy, view or nickname. LOAD is a faster alternative, but the load utility does not support loading data at the hierarchy level.
Required connection: database. If implicit connect is enabled, a connection to the default database is established. Utility access to Linux, UNIX, or Windows database servers from Linux, UNIX, or Windows clients must be a direct connection through the engine and not through a DB2 Connect™ gateway or loopback environment.
Command syntax (summarized from the railroad diagram; brackets [ ] enclose optional items, braces { } enclose a required choice, and ", ..." marks a repeatable comma-separated list; ALLOW NO ACCESS and ALL TABLES are the defaults):

   IMPORT FROM filename OF filetype
     [LOBS FROM lob-path, ...]
     [XML FROM xml-path, ...]
     [MODIFIED BY filetype-mod ...]
     [METHOD {L (column-start column-end, ...) [NULL INDICATORS (null-indicator-list)]
             | N (column-name, ...)
             | P (column-position, ...)}]
     [XMLPARSE {STRIP | PRESERVE} WHITESPACE]
     [XMLVALIDATE USING {XDS [DEFAULT schema-sqlid] [Ignore and Map parameters]
                        | SCHEMA schema-sqlid
                        | SCHEMALOCATION HINTS}]
     [ALLOW NO ACCESS | ALLOW WRITE ACCESS]
     [COMMITCOUNT {n | AUTOMATIC}]
     [{RESTARTCOUNT | SKIPCOUNT} n]
     [ROWCOUNT n]
     [WARNINGCOUNT n]
     [NOTIMEOUT]
     [MESSAGES message-file]
     { {INSERT | INSERT_UPDATE | REPLACE | REPLACE_CREATE}
          INTO {table-name [(insert-column, ...)] | hierarchy description}
     | CREATE INTO {table-name [(insert-column, ...)]
                   | hierarchy description {AS ROOT TABLE | UNDER sub-table-name}}
          [tblspace-specs] }

   Ignore and Map parameters:
     [IGNORE (schema-sqlid, ...)]
     [MAP ((schema-sqlid, schema-sqlid), ...)]

   hierarchy description:
     {sub-table-list | ALL TABLES} [IN] HIERARCHY {STARTING sub-table-name | traversal-order-list}

   sub-table-list:
     (sub-table-name [(insert-column, ...)], ...)

   traversal-order-list:
     (sub-table-name, ...)

   tblspace-specs:
     [IN tablespace-name [INDEX IN tablespace-name] [LONG IN tablespace-name]]
The application is forced off the database and the current unit of work is rolled back. To avoid processing the rows that were already committed when you retry the import operation, use the RESTARTCOUNT or SKIPCOUNT command parameters.
Creates the table definition and row contents in the code page of the database. If the data was exported from a DB2 table, sub-table, or hierarchy, indexes are created. If this option operates on a hierarchy, and data was exported from DB2, a type hierarchy will also be created. This option can only be used with IXF files.
This parameter is not valid when you import to a nickname.
The DEFAULT clause takes precedence over the IGNORE and MAP clauses. If an XDS satisfies the DEFAULT clause, the IGNORE and MAP specifications will be ignored.
If a schema is specified in the IGNORE clause, it cannot also be present in the left side of a schema pair in the MAP clause.
The IGNORE clause applies only to the XDS. A schema that is mapped by the MAP clause will not be subsequently ignored if specified by the IGNORE clause.
An alias can be used for INSERT, INSERT_UPDATE, or REPLACE, except when connecting to an earlier server; in that case, a fully qualified or an unqualified table name must be used. A qualified table name is of the form schema.tablename, where schema is the user name under which the table was created.
This parameter is not valid when you import to a nickname.
If a schema is present in the left side of a schema pair in the MAP clause, it cannot also be specified in the IGNORE clause.
Once a schema pair mapping is applied, the result is final. The mapping operation is non-transitive, and therefore the schema chosen will not be subsequently applied to another schema pair mapping.
A schema cannot be mapped more than once, meaning that it cannot appear on the left side of more than one pair.
db2 import from datafile1.del of del method P(1, 3, 4)
replace into table1 (c1, c3, c4)
A value of Y in the NULL indicator column specifies that the column data is NULL. Any character other than Y in the NULL indicator column specifies that the column data is not NULL, and that column data specified by the METHOD L option will be imported.
The NULL indicator character can be changed using the MODIFIED BY option, with the nullindchar file type modifier.
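The NULL-indicator rule described above can be sketched in Python. This is a hypothetical illustration of the documented behavior, not the utility's implementation; the column positions and data values are invented for the example.

```python
def read_field(row, start, end, ind_pos, null_char="Y"):
    """Extract row[start:end] (1-based, inclusive, as in METHOD L),
    honoring a NULL indicator column at position ind_pos."""
    if row[ind_pos - 1] == null_char:
        return None          # indicator character: column data is NULL
    return row[start - 1:end].strip()  # any other character: data is present

row1 = "Grieg     N"   # data present: indicator is not 'Y'
row2 = "          Y"   # data NULL: indicator is 'Y'
assert read_field(row1, 1, 10, 11) == "Grieg"
assert read_field(row2, 1, 10, 11) is None
# The nullindchar file type modifier changes the indicator character:
assert read_field("Satie     N", 1, 10, 11, null_char="N") is None
```

The last assertion mirrors the MODIFIED BY nullindchar case: once the indicator is changed to N, the same row is treated as NULL.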
The WSF file type is not supported when you import to a nickname.
This parameter is not valid when you import to a nickname.
This option does not honor the CREATE TABLE statement's NOT LOGGED INITIALLY (NLI) clause or the ALTER TABLE statement's ACTIVE NOT LOGGED INITIALLY clause.
If an import with the REPLACE option is performed within the same transaction as a CREATE TABLE or ALTER TABLE statement where the NLI clause is invoked, the import will not honor the NLI clause. All inserts will be logged.
This limitation applies to DB2 Universal Database™ Version 7 and DB2 UDB Version 8.
If the table exists, deletes all existing data from the table by truncating the data object, and inserts the imported data without changing the table definition or the index definitions.
If the table does not exist, creates the table and index definitions, as well as the row contents, in the code page of the database. See Imported table re-creation for a list of restrictions.
This option can only be used with IXF files. If this option is used when moving data between hierarchies, only the data for an entire hierarchy, not individual subtables, can be replaced.
This parameter is not valid when you import to a nickname.
The DEFAULT, IGNORE, and MAP clauses can be used to modify the schema determination behavior. These three optional clauses apply directly to the specifications of the XDS, and not to each other. For example, if a schema is selected because it is specified by the DEFAULT clause, it will not be ignored if also specified by the IGNORE clause. Similarly, if a schema is selected because it is specified as the first part of a pair in the MAP clause, it will not be re-mapped if also specified in the second part of another MAP clause pair.
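The precedence rules above can be summarized in a short Python sketch. This is an illustration of the documented behavior, not DB2 code; it uses the clause values that appear in the XMLVALIDATE examples later in this section.

```python
def resolve_schema(xds_sch, default=None, ignore=(), mapping=()):
    """Return the schema used for validation, or None for no validation.

    xds_sch is the SCH attribute of the XDS (None if absent); mapping is a
    sequence of (from_schema, to_schema) pairs. Illustrates the documented
    precedence: DEFAULT applies only when SCH is absent and is final; IGNORE
    applies only to the schema named in the XDS; MAP is non-transitive.
    """
    if xds_sch is None:
        return default               # None here means no validation occurs
    if xds_sch in ignore:
        return None                  # document is not validated
    for src, dst in mapping:
        if src == xds_sch:
            return dst               # applied once; no further mapping
    return xds_sch

ignore = {"S9.SCHEMA_I", "S10.SCHEMA_J"}
mapping = [("S1.SCHEMA_A", "S2.SCHEMA_B"), ("S3.SCHEMA_C", "S5.SCHEMA_E"),
           ("S6.SCHEMA_F", "S3.SCHEMA_C"), ("S4.SCHEMA_D", "S7.SCHEMA_G")]
assert resolve_schema(None, "S8.SCHEMA_H", ignore, mapping) == "S8.SCHEMA_H"
assert resolve_schema("S10.SCHEMA_J", "S8.SCHEMA_H", ignore, mapping) is None
assert resolve_schema("S6.SCHEMA_F", "S8.SCHEMA_H", ignore, mapping) == "S3.SCHEMA_C"
assert resolve_schema("S11.SCHEMA_K", "S8.SCHEMA_H", ignore, mapping) == "S11.SCHEMA_K"
```

The non-transitivity shows up in the third assertion: S6.SCHEMA_F maps to S3.SCHEMA_C, and the (S3.SCHEMA_C, S5.SCHEMA_E) pair is not applied afterward.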
Example 1
db2 import from myfile.ixf of ixf messages msg.txt insert into staff
SQL3150N The H record in the PC/IXF file has product "DB2 01.00", date
"19970220", and time "140848".
SQL3153N The T record in the PC/IXF file has name "myfile",
qualifier " ", and source " ".
SQL3109N The utility is beginning to load data from file "myfile".
SQL3110N The utility has completed processing. "58" rows were read
from the input file.
SQL3221W ...Begin COMMIT WORK. Input Record Count = "58".
SQL3222W ...COMMIT of any database changes was successful.
SQL3149N "58" rows were processed from the input file. "58" rows were
successfully inserted into the table. "0" rows were rejected.
Example 2 (Importing into a table with an identity column)
TABLE2 is the same as TABLE1, except that C2 is a GENERATED ALWAYS identity column.
DATAFILE1 (DEL format):

"Liszt"
"Hummel",,187.43, H
"Grieg",100, 66.34, G
"Satie",101, 818.23, I

DATAFILE2 (DEL format):

"Liszt", 74.49, A
"Hummel", 0.01, H
"Grieg", 66.34, G
"Satie", 818.23, I
db2 import from datafile1.del of del replace into table1
db2 import from datafile1.del of del method P(1, 3, 4)
replace into table1 (c1, c3, c4)
db2 import from datafile1.del of del modified by identityignore
replace into table1
db2 import from datafile2.del of del replace into table1 (c1, c3, c4)
db2 import from datafile2.del of del modified by identitymissing
replace into table1
If DATAFILE1 is imported into TABLE2 without using any of the identity-related file type modifiers, rows 1 and 2 will be inserted, but rows 3 and 4 will be rejected, because they supply their own non-NULL values, and the identity column is GENERATED ALWAYS.
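The acceptance rule just described can be sketched in Python. This is a hypothetical illustration of the GENERATED ALWAYS behavior under the identity-related modifiers, not the utility's implementation.

```python
def identity_rows_rejected(rows, id_index, generated_always=True,
                           identityignore=False, identitymissing=False):
    """Return the 1-based numbers of rows rejected by the rule above.

    With a GENERATED ALWAYS identity column, a row that supplies its own
    non-NULL identity value is rejected, unless identityignore discards the
    supplied values; identitymissing means the file has no identity field.
    """
    rejected = []
    for n, row in enumerate(rows, 1):
        if identitymissing or identityignore:
            continue                 # a value is generated for every row
        value = row[id_index] if id_index < len(row) else None
        if generated_always and value not in (None, ""):
            rejected.append(n)       # GENERATED BY DEFAULT would accept it
    return rejected

# DATAFILE1-style rows (the identity column is field 2):
datafile1 = [["Liszt"], ["Hummel", "", "187.43", "H"],
             ["Grieg", "100", "66.34", "G"], ["Satie", "101", "818.23", "I"]]
assert identity_rows_rejected(datafile1, 1) == [3, 4]   # as stated above
assert identity_rows_rejected(datafile1, 1, identityignore=True) == []
```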
Example 1 (XMLVALIDATE USING XDS)
XMLVALIDATE USING XDS
IGNORE (S1.SCHEMA_A)
MAP ((S1.SCHEMA_A, S2.SCHEMA_B))
The import would fail due to invalid syntax, since the IGNORE of S1.SCHEMA_A would conflict with the MAP of S1.SCHEMA_A to S2.SCHEMA_B.

Example 2 (XMLVALIDATE USING XDS)
XMLVALIDATE USING XDS
DEFAULT S8.SCHEMA_H
IGNORE (S9.SCHEMA_I, S10.SCHEMA_J)
MAP ((S1.SCHEMA_A, S2.SCHEMA_B), (S3.SCHEMA_C, S5.SCHEMA_E),
(S6.SCHEMA_F, S3.SCHEMA_C), (S4.SCHEMA_D, S7.SCHEMA_G))
<XDS FIL='xmlfile.001.xml' />
The XML schema with SQL identifier "S8.SCHEMA_H" is used to validate the document in file "xmlfile.001.xml", since "S8.SCHEMA_H" was specified as the default schema to use.

<XDS FIL='xmlfile.002.xml' OFF='10' LEN='500' SCH='S10.SCHEMA_J' />
No schema validation occurs for the document in file "xmlfile.002.xml": although the XDS specifies "S10.SCHEMA_J" as the schema to use, that schema is part of the IGNORE clause. The document contents can be found at byte offset 10 in the file (that is, starting at the 11th byte), and are 500 bytes long.

<XDS FIL='xmlfile.003.xml' SCH='S6.SCHEMA_F' />
The XML schema with SQL identifier "S3.SCHEMA_C" is used to validate the document in file "xmlfile.003.xml", because the MAP clause specifies that schema "S6.SCHEMA_F" should be mapped to schema "S3.SCHEMA_C". Note that no further mapping takes place; the mapping of schema "S3.SCHEMA_C" to schema "S5.SCHEMA_E" does not apply in this case.

<XDS FIL='xmlfile.004.xml' SCH='S11.SCHEMA_K' />
The XML schema with SQL identifier "S11.SCHEMA_K" is used to validate the document in file "xmlfile.004.xml". Note that none of the DEFAULT, IGNORE, or MAP specifications apply in this case.

Example 3 (XMLVALIDATE USING XDS)
XMLVALIDATE USING XDS
DEFAULT S1.SCHEMA_A
IGNORE (S1.SCHEMA_A)
<XDS FIL='xmlfile.001.xml' />
The XML schema with SQL identifier "S1.SCHEMA_A" is used to validate the document in file "xmlfile.001.xml", since "S1.SCHEMA_A" was specified as the default schema to use.

<XDS FIL='xmlfile.002.xml' SCH='S1.SCHEMA_A' />
No schema validation occurs for the document in file "xmlfile.002.xml": although the XDS specifies "S1.SCHEMA_A" as the schema to use, that schema is part of the IGNORE clause.

Example 4 (XMLVALIDATE USING XDS)
XMLVALIDATE USING XDS
DEFAULT S1.SCHEMA_A
MAP ((S1.SCHEMA_A, S2.SCHEMA_B), (S2.SCHEMA_B, S1.SCHEMA_A))
<XDS FIL='xmlfile.001.xml' />
The XML schema with SQL identifier "S1.SCHEMA_A" is used to validate the document in file "xmlfile.001.xml", since "S1.SCHEMA_A" was specified as the default schema to use. Note that because the DEFAULT clause was applied, the MAP clause is not subsequently applied; the mapping of schema "S1.SCHEMA_A" to schema "S2.SCHEMA_B" does not apply in this case.

<XDS FIL='xmlfile.002.xml' SCH='S1.SCHEMA_A' />
The XML schema with SQL identifier "S2.SCHEMA_B" is used to validate the document in file "xmlfile.002.xml", because the MAP clause specifies that schema "S1.SCHEMA_A" should be mapped to schema "S2.SCHEMA_B". Note that no further mapping takes place; the mapping of schema "S2.SCHEMA_B" to schema "S1.SCHEMA_A" does not apply in this case.

<XDS FIL='xmlfile.003.xml' SCH='S2.SCHEMA_B' />
The XML schema with SQL identifier "S1.SCHEMA_A" is used to validate the document in file "xmlfile.003.xml", because the MAP clause specifies that schema "S2.SCHEMA_B" should be mapped to schema "S1.SCHEMA_A". Note that no further mapping takes place; the mapping of schema "S1.SCHEMA_A" to schema "S2.SCHEMA_B" does not apply in this case.

Example 5 (XMLVALIDATE USING SCHEMA)
XMLVALIDATE USING SCHEMA S2.SCHEMA_B
<XDS FIL='xmlfile.001.xml' />
The document in file "xmlfile.001.xml" is validated using the XML schema with SQL identifier "S2.SCHEMA_B".

<XDS FIL='xmlfile.002.xml' SCH='S1.SCHEMA_A' />
The document in file "xmlfile.002.xml" is validated using the XML schema with SQL identifier "S2.SCHEMA_B". Note that the SCH attribute is ignored, since validation is being performed using a schema specified by the USING SCHEMA clause.

Example 6 (XMLVALIDATE USING SCHEMALOCATION HINTS)
<XDS FIL='xmlfile.001.xml' />
The XML schema used is determined by the schemaLocation attribute in the document contents; no validation occurs if it is not present.

<XDS FIL='xmlfile.002.xml' SCH='S1.SCHEMA_A' />
The XML schema used is determined by the schemaLocation attribute in the document contents; no validation occurs if it is not present. Note that the SCH attribute is ignored, since validation is being performed using SCHEMALOCATION HINTS.

Be sure to complete all table operations and release all locks before starting an import operation. This can be done by issuing a COMMIT after closing all cursors opened WITH HOLD, or by issuing a ROLLBACK.
The utility performs an automatic COMMIT after the old rows are deleted during a REPLACE or a REPLACE_CREATE operation. Therefore, if the system fails, or the application interrupts the database manager after the table object is truncated, all of the old data is lost. Ensure that the old data is no longer needed before using these options.
If the log becomes full during a CREATE, REPLACE, or REPLACE_CREATE operation, the utility performs an automatic COMMIT on inserted records. If the system fails, or the application interrupts the database manager after an automatic COMMIT, a table with partial data remains in the database. Use the REPLACE or the REPLACE_CREATE option to rerun the whole import operation, or use INSERT with the RESTARTCOUNT parameter set to the number of rows successfully imported.
Whenever the import utility performs a COMMIT, two messages are written to the message file: one indicates the number of records to be committed, and the other is written after a successful COMMIT. When restarting the import operation after a failure, specify the number of records to skip, as determined from the last successful COMMIT.
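Because each COMMIT writes this message pair, the restart point can be recovered by scanning the message file for the last confirmed commit. The following Python sketch assumes messages of the SQL3221W/SQL3222W form shown in Example 1; it is an illustration, not a supplied DB2 tool.

```python
import re

def last_committed_count(message_lines):
    """Return the record count of the last COMMIT that was followed by the
    SQL3222W success message; use it as RESTARTCOUNT/SKIPCOUNT on retry."""
    committed = 0
    pending = None
    for line in message_lines:
        m = re.search(r'SQL3221W.*Input Record Count = "(\d+)"', line)
        if m:
            pending = int(m.group(1))    # commit announced, not yet confirmed
        elif "SQL3222W" in line and pending is not None:
            committed = pending          # commit confirmed successful
            pending = None
    return committed

msgs = ['SQL3221W ...Begin COMMIT WORK. Input Record Count = "58".',
        'SQL3222W ...COMMIT of any database changes was successful.']
assert last_committed_count(msgs) == 58
```

The retried command would then specify, for example, RESTARTCOUNT 58 so that the already-committed rows are skipped.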
The import utility accepts input data with minor incompatibility problems (for example, character data can be imported using padding or truncation, and numeric data can be imported with a different numeric data type), but data with major incompatibility problems is not accepted.
If an error occurs while recreating the foreign keys, modify the data to maintain referential integrity.
Referential constraints and foreign key definitions are not preserved when recreating tables from PC/IXF files. (Primary key definitions are preserved if the data was previously exported using SELECT *.)
Importing to a remote database requires enough disk space on the server for a copy of the input data file, the output message file, and potential growth in the size of the database.
If an import operation is run against a remote database, and the output message file is very long (more than 60 KB), the message file returned to the user on the client might be missing messages from the middle of the import operation. The first 30 KB of message information and the last 30 KB of message information are always retained.
Importing PC/IXF files to a remote database is much faster if the PC/IXF file is on a hard drive rather than on diskettes.
The database table or hierarchy must exist before data in the ASC, DEL, or WSF file formats can be imported; however, if the table does not already exist, IMPORT CREATE or IMPORT REPLACE_CREATE creates the table when it imports data from a PC/IXF file. For typed tables, IMPORT CREATE can create the type hierarchy and the table hierarchy as well.
PC/IXF import should be used to move data (including hierarchical data) between databases. If character data containing row separators is exported to a delimited ASCII (DEL) file and processed by a text transfer program, fields containing the row separators will shrink or expand. The file copying step is not necessary if the source and the target databases are both accessible from the same client.
The data in ASC and DEL files is assumed to be in the code page of the client application performing the import. PC/IXF files, which allow for different code pages, are recommended when importing data in different code pages. If the PC/IXF file and the import utility are in the same code page, processing occurs as for a regular application. If the two differ:
- If the FORCEIN option is specified, the import utility assumes that data in the PC/IXF file has the same code page as the application performing the import, even if there is a conversion table for the two code pages.
- If the FORCEIN option is not specified and there is a conversion table, all data in the PC/IXF file is converted from the file code page to the application code page.
- If the FORCEIN option is not specified and there is no conversion table, the import operation fails. This applies only to PC/IXF files on DB2 clients on the AIX® operating system.
For table objects on an 8 KB page that are close to the limit of 1012 columns, import of PC/IXF data files might cause DB2 to return an error, because the maximum size of an SQL statement was exceeded. This situation can occur only if the columns are of type CHAR, VARCHAR, or CLOB. The restriction does not apply to import of DEL or ASC files. If PC/IXF files are being used to create a new table, an alternative is to use db2look to dump the DDL statement that created the table, and then to issue that statement through the CLP.
DB2 Connect can be used to import data to DRDA® servers such as DB2 for OS/390®, DB2 for VM and VSE, and DB2 for OS/400®. Only PC/IXF import (INSERT option) is supported. The RESTARTCOUNT parameter, but not the COMMITCOUNT parameter, is also supported.
When using the CREATE option with typed tables, create every sub-table defined in the PC/IXF file; sub-table definitions cannot be altered. When using options other than CREATE with typed tables, the traversal order list enables you to specify the traversal order; therefore, the traversal order list must match the one used during the export operation. For the PC/IXF file format, you need only specify the target sub-table name and use the traversal order stored in the file.
The import utility can be used to recover a table previously exported to a PC/IXF file. The table returns to the state it was in when exported.
Data cannot be imported to a system table, a created temporary table, a declared temporary table, or a summary table.
Views cannot be created through the import utility.
Importing a multiple-part PC/IXF file whose individual parts are copied from a Windows system to an AIX system is supported. Only the name of the first file must be specified in the IMPORT command. For example, IMPORT FROM data.ixf OF IXF INSERT INTO TABLE1. The files data.002, and so on, must be available in the same directory as data.ixf.
Security labels in their internal format might contain newline characters. If you import the file using the DEL file format, those newline characters can be mistaken for delimiters. If you encounter this problem, use the older priority for delimiters by specifying the delprioritychar file type modifier in the IMPORT command.
The IMPORT utility does not require the number of fields in a data file to match the number of columns in the table. The utility checks only that each row contains sufficient data. If a row in the data file does not contain enough columns of data, the row is either rejected with a warning message (if the corresponding table columns without data are defined as NOT NULL) or inserted successfully without a warning message (if the corresponding table columns are nullable). Conversely, if a row contains more columns than required, the required number of columns is processed, the remaining columns of data are ignored, and no warning message is given.
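The disposition rule just described can be condensed into a small Python sketch. It is a hypothetical illustration of the documented accept/reject decision, with invented column names, not the utility's implementation.

```python
def row_disposition(fields, columns):
    """columns is a list of (name, nullable) pairs for the target table.
    Sketch of the documented rule for mismatched field counts."""
    if len(fields) >= len(columns):
        return "inserted"        # extra fields are ignored, no warning
    missing = columns[len(fields):]
    if any(not nullable for _, nullable in missing):
        return "rejected"        # a NOT NULL column would be left without data
    return "inserted"            # nullable columns receive NULLs, no warning

cols = [("c1", False), ("c2", True), ("c3", True)]
assert row_disposition(["a", "b", "c", "d"], cols) == "inserted"  # extra field
assert row_disposition(["a"], cols) == "inserted"                 # NULLs fill c2, c3
assert row_disposition([], cols) == "rejected"                    # c1 is NOT NULL
```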
When you use the IMPORT command with the INSERT, UPDATE, or INSERT_UPDATE command parameters, ensure that you have CONTROL privilege on the participating nickname and that the nickname you want to use already exists. Several additional restrictions apply, as noted in the IMPORT command parameters section.
Some data sources, such as ODBC, do not support importing into nicknames.
Modifier | Description |
---|---|
compound=x | x is a number between 1 and 100 inclusive.
Uses nonatomic compound SQL to insert the data, and x statements
will be attempted each time. If this modifier is specified, and the transaction log is not sufficiently large, the import operation will fail. The transaction log must be large enough to accommodate either the number of rows specified by COMMITCOUNT, or the number of rows in the data file if COMMITCOUNT is not specified. It is therefore recommended that the COMMITCOUNT option be specified to avoid transaction log overflow. This modifier is incompatible with INSERT_UPDATE mode, hierarchical tables, and the following modifiers: usedefaults, identitymissing, identityignore, generatedmissing, and generatedignore. |
generatedignore | This modifier informs the import utility that data for all generated columns is present in the data file but should be ignored. This results in all values for the generated columns being generated by the utility. This modifier cannot be used with the generatedmissing modifier. |
generatedmissing | If this modifier is specified, the utility assumes that the input data file contains no data for the generated columns (not even NULLs), and will therefore generate a value for each row. This modifier cannot be used with the generatedignore modifier. |
identityignore | This modifier informs the import utility that data for the identity column is present in the data file but should be ignored. This results in all identity values being generated by the utility. The behavior will be the same for both GENERATED ALWAYS and GENERATED BY DEFAULT identity columns. This means that for GENERATED ALWAYS columns, no rows will be rejected. This modifier cannot be used with the identitymissing modifier. |
identitymissing | If this modifier is specified, the utility assumes that the input data file contains no data for the identity column (not even NULLs), and will therefore generate a value for each row. The behavior will be the same for both GENERATED ALWAYS and GENERATED BY DEFAULT identity columns. This modifier cannot be used with the identityignore modifier. |
lobsinfile | lob-path specifies the path to the files
containing LOB data. Each path contains at least one file that contains at least one LOB pointed to by a Lob Location Specifier (LLS) in the data file. The LLS is a string representation of the location of a LOB in a file stored in the LOB file path. The format of an LLS is filename.ext.nnn.mmm/, where filename.ext is the name of the file that contains the LOB, nnn is the offset in bytes of the LOB within the file, and mmm is the length of the LOB in bytes. For example, if the string db2exp.001.123.456/ is stored in the data file, the LOB is located at offset 123 in the file db2exp.001, and is 456 bytes long. The LOBS FROM clause specifies where the LOB files are located when the "lobsinfile" modifier is used. The LOBS FROM clause will implicitly activate the LOBSINFILE behavior. The LOBS FROM clause conveys to the IMPORT utility the list of paths to search for the LOB files while importing the data. To indicate a null LOB, enter the size as -1. If the size is specified as 0, it is treated as a 0 length LOB. For null LOBS with length of -1, the offset and the file name are ignored. For example, the LLS of a null LOB might be db2exp.001.7.-1/. |
no_type_id | Valid only when importing into a single sub-table. Typical usage is to export data from a regular table, and then to invoke an import operation (using this modifier) to convert the data into a single sub-table. |
nodefaults | If a source column for a target table column is not explicitly specified, and the table column is not nullable, default values are not loaded. Without this option, if a source column for one of the target table columns is not explicitly specified, one of the following occurs: if a default value is specified for the column, the default value is loaded; if the column is nullable and no default value is specified, a NULL is loaded; if the column is not nullable and no default value is specified, an error is returned and the utility stops processing. |
norowwarnings | Suppresses all warnings about rejected rows. |
rowchangetimestampignore | This modifier informs the import utility that data for the row change timestamp column is present in the data file but should be ignored. This results in all ROW CHANGE TIMESTAMP being generated by the utility. The behavior will be the same for both GENERATED ALWAYS and GENERATED BY DEFAULT columns. This means that for GENERATED ALWAYS columns, no rows will be rejected. This modifier cannot be used with the rowchangetimestampmissing modifier. |
rowchangetimestampmissing | If this modifier is specified, the utility assumes that the input data file contains no data for the row change timestamp column (not even NULLs), and will therefore generate a value for each row. The behavior will be the same for both GENERATED ALWAYS and GENERATED BY DEFAULT columns. This modifier cannot be used with the rowchangetimestampignore modifier. |
seclabelchar | Indicates that security labels in the input source file are in the string format for security label values rather than in the default encoded numeric format. IMPORT converts each security label into the internal format as it is loaded. If a string is not in the proper format, the row is not loaded and a warning (SQLSTATE 01H53) is returned. If the string does not represent a valid security label that is part of the security policy protecting the table, the row is not loaded and a warning (SQLSTATE 01H53, SQLCODE SQL3243W) is returned. This modifier cannot be specified if the seclabelname modifier is specified; otherwise, the import fails and an error (SQLCODE SQL3525N) is returned. |
seclabelname | Indicates that security labels in the input source file are indicated by their name rather than the default encoded numeric format. IMPORT converts the name to the appropriate security label if it exists. If no security label exists with the indicated name for the security policy protecting the table, the row is not loaded and a warning (SQLSTATE 01H53, SQLCODE SQL3244W) is returned. This modifier cannot be specified if the seclabelchar modifier is specified; otherwise, the import fails and an error (SQLCODE SQL3525N) is returned. Note: If the file type is ASC, any spaces following the name of the security label are interpreted as part of the name. To avoid this, use the striptblanks file type modifier to ensure that the spaces are removed. |
usedefaults | If a source column for a target table column has been specified, but it contains no data for one or more row instances, default values are loaded. Examples of missing data are two adjacent column delimiters (",,") in a DEL file, or a row that does not have enough columns. |
Modifier | Description |
---|---|
codepage=x | x is an ASCII character string. The value is interpreted as the code page of the data in the input data set. Converts character data from this code page to the application code page during the import operation. |
dateformat="x" | x is the format of the date in the source file. Valid date elements are: YYYY (year, four digits), M (month, one or two digits), MM (month, two digits), D (day, one or two digits), DD (day, two digits), and DDD (day of the year, three digits). A default value of 1 is assigned for each element that is not specified. Some examples of date formats are: "D-M-YYYY", "MM.DD.YYYY", and "YYYYDDD". |
implieddecimal | The location of an implied decimal point is determined by the column definition; it is no longer assumed to be at the end of the value. For example, the value 12345 is loaded into a DECIMAL(8,2) column as 123.45, not 12345.00. |
timeformat="x" | x is the format of the time in the source file. Valid time elements are: H (hour, one or two digits), HH (hour, two digits), M (minute, one or two digits), MM (minute, two digits), S (second, one or two digits), SS (second, two digits), SSSSS (second of the day), and TT (meridian indicator, AM or PM). A default value of 0 is assigned for each element that is not specified. Some examples of time formats are: "HH:MM:SS", "HH.MM TT", and "SSSSS". |
timestampformat="x" | x is the format of the time stamp in the source file. Valid time stamp elements combine the date elements (YYYY, M, MM, MMM, D, DD, DDD) and the time elements (H, HH, M, MM, S, SS, SSSSS, TT), plus UUUUUU (microseconds). A default value of 1 is assigned for unspecified YYYY, M, MM, D, DD, or DDD elements. A default value of 'Jan' is assigned to an unspecified MMM element. A default value of 0 is assigned for all other unspecified elements. An example of a time stamp format is "YYYY/MM/DD HH:MM:SS.UUUUUU". The valid values for the MMM element include: 'jan', 'feb', 'mar', 'apr', 'may', 'jun', 'jul', 'aug', 'sep', 'oct', 'nov' and 'dec'. These values are case insensitive. The following example illustrates how to import data containing user-defined date and time formats into a table called schedule: db2 import from delfile2 of del modified by timestampformat="yyyy.mm.dd hh:mm tt" insert into schedule |
usegraphiccodepage | If usegraphiccodepage is given, the assumption is made that data being imported into graphic or double-byte character large object (DBCLOB) data fields is in the graphic code page. The rest of the data is assumed to be in the character code page. The graphic code page is associated with the character code page. IMPORT determines the character code page through either the codepage modifier, if it is specified, or through the code page of the application if the codepage modifier is not specified. This modifier should be used in conjunction with the delimited data file generated by drop table recovery only if the table being recovered has graphic data. Restrictions: The usegraphiccodepage modifier MUST NOT be specified with DEL files created by the EXPORT utility, as these files contain data encoded in only one code page. The usegraphiccodepage modifier is also ignored by the double-byte character large objects (DBCLOBs) in files. |
xmlchar | Specifies that XML documents are encoded in the character code
page. This option is useful for processing XML documents that are encoded in the specified character code page but do not contain an encoding declaration. For each document, if a declaration tag exists and contains an encoding attribute, the encoding must match the character code page, otherwise the row containing the document will be rejected. Note that the character codepage is the value specified by the codepage file type modifier, or the application codepage if it is not specified. By default, either the documents are encoded in Unicode, or they contain a declaration tag with an encoding attribute. |
xmlgraphic | Specifies that XML documents are encoded in the specified graphic code page. This option is useful for processing XML documents that are encoded in a specific graphic code page but do not contain an encoding declaration. For each document, if a declaration tag exists and contains an encoding attribute, the encoding must match the graphic code page; otherwise, the row containing the document will be rejected. Note that the graphic code page is the graphic component of the value specified by the codepage file type modifier, or the graphic component of the application code page if it is not specified. By default, documents are either encoded in Unicode, or they contain a declaration tag with an encoding attribute. Note: If the xmlgraphic modifier is specified with the IMPORT command, the XML document to be imported must be encoded in the UTF-16 code page. Otherwise, the XML document may be rejected with a parsing error, or it may be imported into the table with data corruption. |
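The encoding-matching rule described for xmlchar and xmlgraphic can be sketched as follows. This is an illustrative simulation, not DB2 code: a document with no XML declaration, or a declaration without an encoding attribute, is accepted; a declared encoding that does not match the expected code page causes the row to be rejected.

```python
import re

def check_xml_encoding(document: str, expected_encoding: str) -> bool:
    """Return True if the document would be accepted under the rule above:
    accept when there is no XML declaration or no encoding attribute;
    accept when the declared encoding matches the expected code page;
    otherwise the row containing the document is rejected."""
    decl = re.match(r'<\?xml[^?]*\?>', document)
    if not decl:
        return True                      # no declaration tag: accepted
    enc = re.search(r'encoding=["\']([^"\']+)["\']', decl.group(0))
    if not enc:
        return True                      # declaration without encoding attribute
    return enc.group(1).lower() == expected_encoding.lower()

# A matching declared encoding is accepted, a mismatched one is rejected.
print(check_xml_encoding('<?xml version="1.0" encoding="ISO-8859-1"?><a/>', 'iso-8859-1'))  # True
print(check_xml_encoding('<?xml version="1.0" encoding="UTF-8"?><a/>', 'iso-8859-1'))       # False
```

The function name and the string-based comparison are assumptions for illustration; the utility performs the comparison in terms of code pages, not encoding-name strings.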
Modifier | Description |
---|---|
nochecklengths | If nochecklengths is specified, an attempt is made to import each row, even if the source data has a column definition that exceeds the size of the target table column. Such rows can be successfully imported if code page conversion causes the source data to shrink; for example, 4-byte EUC data in the source could shrink to 2-byte DBCS data in the target, and require half the space. This option is particularly useful if it is known that the source data will fit in all cases despite mismatched column definitions. |
nullindchar=x | x is a single character. Changes the character denoting a null value to x. The default value of x is Y. This modifier is case sensitive for EBCDIC data files, except when the character is an English letter. For example, if the null indicator character is specified to be the letter N, then n is also recognized as a null indicator. |
reclen=x | x is an integer with a maximum value of 32 767. x characters are read for each row, and a new-line character is not used to indicate the end of the row. |
striptblanks | Truncates any trailing blank spaces when loading data into a variable-length field. If this option is not specified, blank spaces are kept. This option cannot be specified together with striptnulls; these are mutually exclusive options. This option replaces the obsolete t option, which is supported for back-level compatibility only. |
striptnulls | Truncates any trailing NULLs (0x00 characters) when loading data into a variable-length field. If this option is not specified, NULLs are kept. This option cannot be specified together with striptblanks; these are mutually exclusive options. This option replaces the obsolete padwithzero option, which is supported for back-level compatibility only. |
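The trimming behavior of striptblanks and striptnulls can be sketched as follows. This is a minimal illustration of the semantics described in the table above, not the utility's implementation; the function name is an assumption.

```python
def trim_field(raw: bytes, striptblanks: bool = False, striptnulls: bool = False) -> bytes:
    """Illustrative only: mimic how a fixed-position ASC field would be
    trimmed before insertion into a variable-length column.
    The two options are mutually exclusive, as in the modifier table."""
    if striptblanks and striptnulls:
        raise ValueError("striptblanks and striptnulls are mutually exclusive")
    if striptblanks:
        return raw.rstrip(b' ')       # trailing blank spaces removed
    if striptnulls:
        return raw.rstrip(b'\x00')    # trailing 0x00 characters removed
    return raw                        # default: padding is kept

print(trim_field(b'Smith   ', striptblanks=True))      # b'Smith'
print(trim_field(b'Smith\x00\x00', striptnulls=True))  # b'Smith'
print(trim_field(b'Smith   '))                         # b'Smith   '
```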
Modifier | Description |
---|---|
chardelx | x is a single character string delimiter. The default value is a double quotation mark ("). The specified character is used in place of double quotation marks to enclose a character string. If you want to explicitly specify the double quotation mark as the character string delimiter, specify it as chardel"". The single quotation mark (') can also be specified as a character string delimiter: chardel'' causes the import utility to interpret any single quotation mark (') it encounters as a character string delimiter. |
coldelx | x is a single character column delimiter. The default value is a comma (,). The specified character is used in place of a comma to signal the end of a column. For example, coldel; causes the import utility to interpret any semicolon (;) it encounters as a column delimiter. |
decplusblank | Plus sign character. Causes positive decimal values to be prefixed with a blank space instead of a plus sign (+). The default action is to prefix positive decimal values with a plus sign. |
decptx | x is a single character substitute for the period as a decimal point character. The default value is a period (.). The specified character is used in place of a period as a decimal point character. For example, decpt; causes the import utility to interpret any semicolon (;) it encounters as a decimal point. |
delprioritychar | The current default priority for delimiters is: record delimiter, character delimiter, column delimiter. This modifier protects existing applications that depend on the older priority by reverting the delimiter priorities to: character delimiter, record delimiter, column delimiter. For example, consider a DEL data file containing three <row delimiter> sequences, the second of which falls inside a character-delimited field. With the delprioritychar modifier specified, there will be only two rows in this data file: the second <row delimiter> is interpreted as part of the first data column of the second row, while the first and the third <row delimiter> are interpreted as actual record delimiters. If this modifier is not specified, there will be three rows in this data file, each delimited by a <row delimiter>. |
keepblanks | Preserves the leading and trailing blanks in each field of type CHAR, VARCHAR, LONG VARCHAR, or CLOB. Without this option, all leading and trailing blanks that are not inside character delimiters are removed, and a NULL is inserted into the table for all blank fields. |
nochardel | The import utility will assume all bytes found between the column delimiters to be part of the column's data. Character delimiters will be parsed as part of column data. This option should not be specified if the data was exported using DB2 (unless nochardel was specified at export time). It is provided to support vendor data files that do not have character delimiters. Improper usage might result in data loss or corruption. This option cannot be specified with chardelx, delprioritychar or nodoubledel. These are mutually exclusive options. |
nodoubledel | Suppresses recognition of double character delimiters. |
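The interplay of coldel, chardel and nochardel described above can be sketched with a minimal column splitter. This is an illustrative simulation, not the import utility's parser; the function name and its defaults are assumptions.

```python
def parse_del_row(row, coldel=',', chardel='"'):
    """Minimal sketch of DEL column splitting.
    With chardel set, column delimiters inside a character-delimited string
    do not end the column, and a doubled character delimiter is an escaped
    literal (nodoubledel would suppress that doubling rule).  With
    chardel=None (the nochardel case), every byte between column delimiters
    is treated as column data."""
    fields, field, i, in_str = [], [], 0, False
    while i < len(row):
        c = row[i]
        if chardel is not None and c == chardel:
            if in_str and i + 1 < len(row) and row[i + 1] == chardel:
                field.append(chardel)      # doubled delimiter -> literal
                i += 1
            else:
                in_str = not in_str        # enter/leave the delimited string
        elif c == coldel and not in_str:
            fields.append(''.join(field))  # end of column
            field = []
        else:
            field.append(c)
        i += 1
    fields.append(''.join(field))
    return fields

print(parse_del_row('"Smith, Joshua",4000,34.98'))  # ['Smith, Joshua', '4000', '34.98']
print(parse_del_row('a;b;"c;d"', coldel=';'))       # ['a', 'b', 'c;d']
print(parse_del_row('"a","b"', chardel=None))       # ['"a"', '"b"']
```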
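The delimiter-priority rule behind delprioritychar can be sketched as a record splitter. This is an illustrative simulation under assumed names: with the default priority a row delimiter always ends a record, even inside a character-delimited string; with delprioritychar the character delimiter takes precedence and an embedded row delimiter is ordinary data.

```python
def split_records(data, delprioritychar=False, rowdel='\n', chardel='"'):
    """Sketch of the priority rule described in the modifier table.
    Default (record > character > column): rowdel ends a record everywhere.
    delprioritychar (character > record > column): rowdel inside a
    character-delimited string is data, not a record boundary."""
    if not delprioritychar:
        return data.split(rowdel)          # record delimiter wins everywhere
    records, current, in_str = [], [], False
    for c in data:
        if c == chardel:
            in_str = not in_str
            current.append(c)
        elif c == rowdel and not in_str:
            records.append(''.join(current))
            current = []
        else:
            current.append(c)
    if current:
        records.append(''.join(current))
    return records

# A hypothetical data file where the second row delimiter is embedded
# inside a character-delimited field:
data = '"Smith, Joshua",4000\n"Vincent\nis a manager",4005'
print(len(split_records(data)))                        # 3
print(len(split_records(data, delprioritychar=True)))  # 2
```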
Modifier | Description |
---|---|
forcein | Directs the utility to accept data despite code page mismatches, and to suppress translation between code pages. Fixed length target fields are checked to verify that they are large enough for the data. If nochecklengths is specified, no checking is done, and an attempt is made to import each row. |
indexixf | Directs the utility to drop all indexes currently defined on the existing table, and to create new ones from the index definitions in the PC/IXF file. This option can only be used when the contents of a table are being replaced. It cannot be used with a view, or when an insert-column is specified. |
indexschema=schema | Uses the specified schema for the index name during index creation. If schema is not specified (but the keyword indexschema is specified), uses the connection user ID. If the keyword is not specified, uses the schema in the IXF file. |
nochecklengths | If nochecklengths is specified, an attempt is made to import each row, even if the source data has a column definition that exceeds the size of the target table column. Such rows can be successfully imported if code page conversion causes the source data to shrink; for example, 4-byte EUC data in the source could shrink to 2-byte DBCS data in the target, and require half the space. This option is particularly useful if it is known that the source data will fit in all cases despite mismatched column definitions. |
forcecreate | Specifies that the table should be created with possible missing or limited information after returning SQL3311N during an import operation. |
codepage=N | usegraphiccodepage | IMPORT behavior |
---|---|---|
Absent | Absent | All data in the file is assumed to be in the application code page. |
Present | Absent | All data in the file is assumed to be in code page N. Warning: Graphic data will be corrupted when imported into the database if N is a single-byte code page. |
Absent | Present | Character data in the file is assumed to be in the application code page. Graphic data is assumed to be in the code page of the application graphic data. If the application code page is single-byte, then all data is assumed to be in the application code page. Warning: If the application code page is single-byte, graphic data will be corrupted when imported into the database, even if the database contains graphic columns. |
Present | Present | Character data is assumed to be in code page N. Graphic data is assumed to be in the graphic code page of N. If N is a single-byte or double-byte code page, then all data is assumed to be in code page N. Warning: Graphic data will be corrupted when imported into the database if N is a single-byte code page. |
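The behavior matrix above can be encoded as a small lookup. This is purely illustrative: the function name is an assumption, and the string "graphic(N)" stands in for the graphic component of code page N, which the real utility derives from the code page definition.

```python
def import_code_pages(codepage=None, usegraphiccodepage=False, app_codepage=1208):
    """Return (character_code_page, graphic_code_page) the file data is
    assumed to be in, following the behavior table above."""
    if codepage is None and not usegraphiccodepage:
        return app_codepage, app_codepage        # all data: application code page
    if codepage is not None and not usegraphiccodepage:
        return codepage, codepage                # all data: code page N
    if codepage is None:
        return app_codepage, f"graphic({app_codepage})"
    return codepage, f"graphic({codepage})"      # both modifiers present

print(import_code_pages())                        # (1208, 1208)
print(import_code_pages(codepage=954))            # (954, 954)
print(import_code_pages(codepage=954, usegraphiccodepage=True))  # (954, 'graphic(954)')
```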
"M" (could be a month, or a minute)
"M:M" (Which is which?)
"M:YYYY:M" (Both are interpreted as month.)
"S:M:YYYY" (adjacent to both a time value and a date value)
In
ambiguous cases, the utility will report an error message, and the
operation will fail. "M:YYYY" (Month)
"S:M" (Minute)
"M:YYYY:S:M" (Month....Minute)
"M:H:YYYY:M:D" (Minute....Month)
Some characters, such as double quotation marks and back slashes, must be preceded by an escape character (for example, \). The delimiter character can also be specified by its code point, using the syntax xJJ or 0xJJ, where JJ is the two-digit hexadecimal representation of the code point. For example, to specify the number sign (#) as a column delimiter, use one of the following:
... modified by coldel# ...
... modified by coldel0x23 ...
... modified by coldelX23 ...
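The three forms shown above are equivalent ways of naming the same character, since 0x23 is the code point of the number sign. A minimal decoder, with assumed naming, might look like this:

```python
def decode_delimiter(spec):
    """Decode a delimiter specification: a literal single character, or a
    code point written as 0xJJ or xJJ (JJ = two hexadecimal digits).
    Sketch only; the real utility resolves the code point in the code page
    of the source data."""
    low = spec.lower()
    if len(low) == 4 and low.startswith('0x'):
        return chr(int(low[2:], 16))     # 0xJJ form
    if len(low) == 3 and low.startswith('x'):
        return chr(int(low[1:], 16))     # xJJ form
    if len(spec) == 1:
        return spec                      # literal character
    raise ValueError(f'invalid delimiter specification: {spec}')

# coldel#, coldel0x23 and coldelX23 all specify the number sign:
print(decode_delimiter('#'), decode_delimiter('0x23'), decode_delimiter('X23'))  # # # #
```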