Using Managed File Transfer in combination with z/OS utilities to transfer files
You can use various methods in combination with Managed File Transfer (MFT) to transfer a wide range of data sets between z/OS® systems.
You can use these methods with all data sets that MFT supports, but they are particularly useful for transferring data sets that MFT supports only with restrictions, or does not support at all. These approaches work with all supported versions of MFT.
For example, these methods can be used to transfer PDSE data sets between systems without losing directory information.
- Run one or more z/OS utilities to convert the source data set into a format that MFT can transfer.
- Schedule MFT to transfer the converted data set to the target system, and wait until the transfer is complete.
- Schedule JCL on the target system to run one or more z/OS utilities that convert the transferred data set back into a target data set that matches the original source data set.
As well as the methods described in this topic, there is an alternative approach described in vsamtransfer, which shows how Ant tasks can be used to run commands before and after a transfer to achieve a similar result. While the sample demonstrates the transfer of VSAM data sets, the approach can be extended to other data set types, subject to the limitations of the REPRO command.
Method 1: Using the TRANSMIT (XMIT) and RECEIVE commands with MFT
This method uses the TRANSMIT (XMIT) TSO command to convert a data set into a sequential data set, which is then transferred using MFT. After the transfer is complete, the sequential data set is converted back into the original data set type using the RECEIVE command.
This method can be used with any data set that is supported by the XMIT command. The supported data set types and attributes are listed in Transmitting data sets. For example, this method can be used to transfer PDSEs while preserving directory information, but it cannot be used to transfer VSAM data sets.
This method is implemented using two JCL jobs and you need to adjust these jobs so that they are
suitable for your environment, and the type of data being transferred. You need to change the values
inside < >. In most environments extra job steps need to be added to delete
earlier versions of the data sets, or alternatively you can use generation data groups.
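For example, a step along the following lines could be added at the start of each job to delete an earlier version of the unloaded data set. This is a sketch only; the data set name is illustrative and must match the names used in your jobs. The SET MAXCC = 0 statement prevents the job failing if the data set does not yet exist.

//*******************************************************************
//* Delete any earlier version of the unloaded data set
//*******************************************************************
//DELETE   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DELETE 'USER1.SOURCE.DATASET.UNLOADED' NONVSAM
  SET MAXCC = 0
/*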
You submit the first of these jobs, XMITJOB1 shown in the following example, on the sending side.
The XMIT step runs the XMIT command to convert the source data set into a sequential format data
set. X.X is specified for the node and user name to pass the command validation
checks, but a proper node and user name are not needed.
The MFT step initiates a file transfer from the source agent, SRC, to the destination agent, DEST. The -w flag means that the fteCreateTransfer command waits until the transfer has completed. The -ds flag indicates that a sequential data set is to be created on the destination agent, and provides the DCB characteristics and space allocation so that sufficient space is available when the data set is dynamically allocated.
In this case, both data set names are enclosed in single quotation marks, indicating that fully qualified data set names are used. If single quotation marks are not used, the default high-level qualifier of the source or destination agent is used.
The SUBMIT step runs only if the MFT step completes successfully. This step submits the RECVJOB1 job, which restores the transferred data set to its original format on the destination system.
//XMITJOB1 JOB NOTIFY=&SYSUID
//*
//*******************************************************************
//* Use the XMIT command to unload the data set to fixed-block,
//* 80 logical record format
//*******************************************************************
//XMIT EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
XMIT X.X DSN('USER1.SOURCE.DATASET') +
OUTDATA('USER1.SOURCE.DATASET.UNLOADED')
/*
//*******************************************************************
//* Invoke MFT fteCreateTransfer
//*******************************************************************
//MFT EXEC PGM=IKJEFT01,REGION=0M
//STDERR DD SYSOUT=*
//STDOUT DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
BPXBATCH SH <MFT path>/fteCreateTransfer +
-w +
-sa SRC +
-da DEST +
-ds "//'USER1.TARGET.DATASET.UNLOADED'; +
RECFM(F,B);BLKSIZE(3120);LRECL(80);SPACE(10,10); +
CYL;RELEASE" +
"//'USER1.SOURCE.DATASET.UNLOADED'"
/*
//*******************************************************************
//* Submit the restore job to the internal reader
//*******************************************************************
//SUBMIT EXEC PGM=IEBGENER,COND=(0,NE)
//SYSPRINT DD SYSOUT=*
//SYSUT1 DD DISP=SHR,DSN=USER1.JCL.MFT(RECVJOB1)
//SYSUT2 DD SYSOUT=(A,INTRDR),DCB=BLKSIZE=80
//SYSIN DD DUMMY
The RECVJOB1 JCL is shown in the following example. When it is submitted by XMITJOB1, it is routed by JES2 to the target node, as indicated by the ROUTE statement on the second line of the job. Depending on the settings of your installation, you might need to provide USER and PASSWORD parameters on the JOB statement.
The RECEIVE step takes the data set that has been transferred by MFT and uses the TSO RECEIVE command to convert it back into its original format.
//RECVJOB1 JOB NOTIFY=&SYSUID
/*ROUTE XEQ NODE2
//*
//*************************************************************
//* Convert the data set back into its original format
//*************************************************************
//RECEIVE EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//UNLOAD DD DISP=SHR,DSN=USER1.TARGET.DATASET.UNLOADED
//SYSTSIN DD *
RECEIVE INFILE(UNLOAD)
DSN('USER1.TARGET.DATASET')
/*
Method 2: Using the ADRDSSU utility with MFT
This method uses the DUMP and RESTORE commands of the ADRDSSU utility to convert data sets to and from a format that MFT can transfer. This method can be used with a wider range of data sets than method 1, including VSAM data sets, and can transfer multiple data sets at the same time.
Data sets that are not supported by the DUMP command are described in Special considerations for DUMP.
As before, this method is implemented using two JCL jobs and you need to adjust these jobs so
that they are suitable for your environment, and the type of data being transferred. You need to
change the values inside < >. In most environments extra job steps need to be
added to delete earlier versions of the data sets, or alternatively you can use generation data
groups.
You submit the first of these jobs, DUMPJOB1 shown in the following example, on the sending side.
The DUMP step runs the ADRDSSU DUMP command to convert the source data set into a sequential data set. This step can be adjusted to dump multiple data sets if needed.
The XMIT step converts the dumped data set into a fixed-block, 80 logical record format. This step
is not strictly necessary, but provides consistency with the approach used in XMITJOB1.
X.X is specified for the node and user name to pass the command validation checks,
but a proper node and user name are not needed.
The MFT step initiates a file transfer from the source agent, SRC, to the destination agent, DEST. The -w flag means that the fteCreateTransfer command waits until the transfer has completed. The -ds flag indicates that a sequential data set is to be created on the destination agent, and provides the DCB characteristics and space allocation so that sufficient space is available when the data set is dynamically allocated.
In this case, both data set names are enclosed in single quotation marks, indicating that fully qualified data set names are used. If single quotation marks are not used, the default high-level qualifier of the source or destination agent is used.
The SUBMIT step runs only if the MFT step completes successfully. This step submits the RESTJOB1 job, which restores the transferred data set to its original format on the destination system.
//DUMPJOB1 JOB NOTIFY=&SYSUID,REGION=0M
//*
//*******************************************************************
//* Invoke ADRDSSU to unload the selected data sets
//*******************************************************************
//DUMP EXEC PGM=ADRDSSU,REGION=2048K
//SYSPRINT DD SYSOUT=*
//DUMPDD DD DSN=USER1.SOURCE.DATASET.BACKUP,DISP=(NEW,CATLG),
// UNIT=SYSDA,SPACE=(CYL,(200,100,0),RLSE)
//SYSIN DD *
DUMP DATASET(INCLUDE(USER1.SOURCE.DATASET)) -
OPTIMIZE(4) OUTDDNAME(DUMPDD) TOLERATE(ENQF)
/*
//*******************************************************************
//* Convert the contents to fixed-block, 80 logical record format
//*******************************************************************
//XMIT EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//DUMPDD DD DISP=SHR,DSN=USER1.SOURCE.DATASET.BACKUP
//XMITDD DD DISP=(,CATLG),DSN=USER1.SOURCE.DATASET.BACKUP.UNLOAD,
// DCB=(LRECL=80,RECFM=FB,BLKSIZE=3120),
// UNIT=SYSDA,SPACE=(CYL,(200,100,0),RLSE)
//SYSTSIN DD *
XMIT X.X DDNAME(DUMPDD) +
OUTDD(XMITDD)
/*
//*******************************************************************
//* Invoke MFT fteCreateTransfer
//*******************************************************************
//MFT EXEC PGM=IKJEFT01,REGION=0M
//STDERR DD SYSOUT=*
//STDOUT DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
BPXBATCH SH <MFT path>/fteCreateTransfer +
-w +
-sa SRC +
-da DEST +
-ds "//'USER1.TARGET.DATASET.BACKUP.UNLOAD'; +
RECFM(F,B);BLKSIZE(3120);LRECL(80);SPACE(50,50); +
CYL;RELEASE;UNIT(SYSDA)" +
"//'USER1.SOURCE.DATASET.BACKUP.UNLOAD'"
/*
//*******************************************************************
//* Submit the restore job to the internal reader
//*******************************************************************
//SUBMIT EXEC PGM=IEBGENER,COND=(0,NE)
//SYSPRINT DD SYSOUT=*
//SYSUT1 DD DISP=SHR,DSN=USER1.JCL.MFT(RESTJOB1)
//SYSUT2 DD SYSOUT=(A,INTRDR),DCB=BLKSIZE=80
//SYSIN DD DUMMY
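To transfer several data sets together, the SYSIN for the DUMP step can list more than one data set in the INCLUDE filter. The following is a sketch only, with hypothetical data set names; adjust the names and the allocation of DUMPDD to suit the volume of data being dumped.

  DUMP DATASET(INCLUDE(USER1.SOURCE.DATASET1, -
       USER1.SOURCE.DATASET2)) -
       OPTIMIZE(4) OUTDDNAME(DUMPDD) TOLERATE(ENQF)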
The RESTJOB1 JCL is shown in the following example. When the job is submitted by DUMPJOB1, it is routed by JES2 to the target node, as indicated by the ROUTE statement on the second line of the job. Depending on the settings of your installation, you might need to provide USER and PASSWORD parameters on the JOB statement.
The RECEIVE step takes the data set that has been transferred by MFT and uses the TSO RECEIVE command to convert it back into the format expected by the ADRDSSU RESTORE command.
The RESTORE step then uses the ADRDSSU RESTORE command to convert the data set back into its original format. The RENAMEU parameter can be used here to change the data set prefixes if needed.
//RESTJOB1 JOB NOTIFY=&SYSUID,REGION=0M
//*
//*************************************************************
//* Convert the data set back into the form accepted by
//* RECEIVE
//*************************************************************
//RECEIVE EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//UNLOAD DD DISP=SHR,DSN=USER1.TARGET.DATASET.BACKUP.UNLOAD
//SYSTSIN DD *
RECEIVE INFILE(UNLOAD)
DSN('USER1.TARGET.DATASET.BACKUP')
/*
//*************************************************************
//* Convert the data set back into its original format
//*************************************************************
//RESTORE EXEC PGM=ADRDSSU,REGION=2048K
//SYSPRINT DD SYSOUT=*
//DUMPDD DD DISP=SHR,DSN=USER1.TARGET.DATASET.BACKUP
//SYSIN DD *
RESTORE DATASET(INCLUDE(**)) -
INDDNAME(DUMPDD) -
CATALOG
/*
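If the data sets need a different high-level qualifier on the target system, the SYSIN for the RESTORE step can specify the RENAMEU parameter. The following is a sketch only, assuming the prefix is to change from USER1 to USER2; check the DFSMSdss documentation for the filtering rules that apply to your data set names.

  RESTORE DATASET(INCLUDE(**)) -
          INDDNAME(DUMPDD) -
          RENAMEU((USER1.**,USER2.**)) -
          CATALOG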