Authorization properties for the LOCATION clause

After you install Db2 Big SQL, you can modify several configuration properties to validate your authorization to use the LOCATION clause, or to enforce that the LOCATION clause can be used only in conjunction with the EXTERNAL clause. These properties help ensure that a DROP statement does not accidentally delete data that resides outside of the distributed file system warehouse directory.

Edit the bigsql-default.xml file under $BIGSQL_HOME/conf to modify the default settings of the following properties. The default value of these properties is true. After modifying the bigsql-default.xml file, stop and then restart Db2 Big SQL.
bigsql.validate.createtab.location.permissions
By default, before a CREATE [EXTERNAL] HADOOP TABLE statement with the LOCATION clause can proceed, Db2 Big SQL first validates that you have one of the following authorizations to use the LOCATION clause:
  • DATAACCESS authority.
  • Privileges on the distributed file system files and the directory:
    • READ and WRITE privileges on the directory specified in the LOCATION clause and on all files within it, including files in any of its subdirectories (checked recursively).
    • EXECUTE privileges on all directories and subdirectories in the path specified in the LOCATION clause.

If you set the property to 'false', the CREATE [EXTERNAL] HADOOP TABLE statement with the LOCATION clause proceeds without this additional validation.

<property>
  <name>bigsql.validate.createtab.location.permissions</name>
  <value>true</value>
</property>
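
For example, with this property set to true, a statement like the following minimal sketch (the table name, columns, and path are placeholders, not values from your environment) succeeds only if you hold DATAACCESS authority or the listed privileges on the specified directory and its contents:

-- Hypothetical external table over an existing directory in the distributed file system
CREATE EXTERNAL HADOOP TABLE sales_ext (
  order_id INT,
  amount DOUBLE
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/user/bigsql/external/sales';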
bigsql.validate.droptab.location.permissions
By default, a DROP TABLE statement on a managed (non-EXTERNAL) Hadoop table whose LOCATION is outside of the distributed file system hive/warehouse directory is restricted unless you are the owner of the table or you have one of the following authorizations to use the LOCATION clause:
  • DATAACCESS authority.
  • Privileges on the distributed file system files and the directory:
    • READ and WRITE privileges on the directory specified in the LOCATION clause and on all files within it, including files in any of its subdirectories (checked recursively).
    • EXECUTE privileges on all directories and subdirectories in the path specified in the LOCATION clause.
If you set the property to 'false', the DROP TABLE statement on a managed (non-EXTERNAL) Hadoop table proceeds without this additional validation.

<property>
  <name>bigsql.validate.droptab.location.permissions</name>
  <value>true</value>
</property>
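
For example, assuming a managed table that was created with a LOCATION outside of the warehouse directory (the table name and path below are placeholders), the following sketch shows the kind of DROP TABLE statement that is blocked, while this property is true, for a non-owner who lacks DATAACCESS authority and the listed file system privileges:

-- Hypothetical managed (non-EXTERNAL) table whose data resides outside the warehouse directory
CREATE HADOOP TABLE sales_managed (
  order_id INT,
  amount DOUBLE
)
LOCATION '/data/landing/sales';

-- Dropping a managed table also deletes its files under /data/landing/sales,
-- so the permission check described above runs before the statement proceeds
DROP TABLE sales_managed;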
bigsql.validate.load.srcfile.permissions
By default, before a LOAD HADOOP USING FILE statement can proceed, Db2 Big SQL first validates that you have one of the following authorizations on the source files and directories:
  • DATAACCESS authority.
  • Privileges on the distributed file system files and the directory:
    • READ privileges on the source directory and all files within it, including files in any of its subdirectories (checked recursively).
    • EXECUTE privileges on all directories and subdirectories in the specified source path.
If you set the property to 'false', the LOAD HADOOP USING FILE statement proceeds without this additional validation.

<property>
  <name>bigsql.validate.load.srcfile.permissions</name>
  <value>true</value>
</property>
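
For example, with this property set to true, a load such as the following minimal sketch (the table name, source path, and delimiter property are placeholders) proceeds only if you hold DATAACCESS authority or READ and EXECUTE privileges on the source path:

-- Hypothetical load of a delimited source file into an existing Hadoop table;
-- READ is checked on the file and its directory, EXECUTE on each directory in the path
LOAD HADOOP USING FILE URL '/user/bigsql/staging/sales.csv'
  WITH SOURCE PROPERTIES ('field.delimiter'=',')
  INTO TABLE sales APPEND;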
bigsql.allow.createtab.managed.location
By default, the CREATE HADOOP TABLE statement with the LOCATION clause is allowed. If you set the property to 'false', you cannot use the LOCATION clause on the CREATE HADOOP TABLE statement unless the EXTERNAL keyword is also present. Setting this property to 'false' ensures that Db2 Big SQL cannot delete data that resides outside of the hive/warehouse directory.

<property>
  <name>bigsql.allow.createtab.managed.location</name>
  <value>true</value>
</property>
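
For example, if you set this property to false, the first statement in the following sketch (the table names, columns, and path are placeholders) is rejected because it combines LOCATION with a managed table, while the second statement still succeeds because the EXTERNAL keyword accompanies the LOCATION clause:

-- Rejected when bigsql.allow.createtab.managed.location is false:
-- a managed table with an explicit LOCATION
CREATE HADOOP TABLE events (id INT, payload VARCHAR(200))
LOCATION '/data/landing/events';

-- Still allowed: EXTERNAL and LOCATION are used together
CREATE EXTERNAL HADOOP TABLE events_ext (id INT, payload VARCHAR(200))
LOCATION '/data/landing/events';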