PRECISION attribute

The precision of a coded arithmetic data item consists of the number of digits and the scaling factor. (The scaling factor applies only to fixed-point items.)

number of digits
An integer that specifies how many digits the value can have. For fixed-point items, it is the number of significant digits. For floating-point items, it is the number of significant digits to be maintained, regardless of the position of the decimal point.
scaling factor
An optionally signed integer that specifies the assumed position of the decimal or binary point relative to the rightmost digit of the number. If no scaling factor is specified, the default is 0.

The precision attribute specification is often represented as (p,q), where p represents the number of digits and q represents the scaling factor.
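For example, the following declarations (an illustrative sketch; the names are invented) show precision specifications of the form (p,q):

```
DECLARE PRICE FIXED DECIMAL (7,2);  /* 7 digits, 2 right of the point: up to 99999.99 */
DECLARE COUNT FIXED BINARY (15);    /* 15 binary digits; scaling factor defaults to 0 */
DECLARE RATIO FLOAT DECIMAL (6);    /* 6 significant digits; no scaling factor        */
```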

A positive scaling factor (q) that is larger than the number of digits specifies a fraction, with the point assumed to be located q places to the left of the rightmost actual digit. A negative scaling factor (-q) specifies an integer, with the point assumed to be located q places to the right of the rightmost actual digit. In either case, intervening zeros are assumed, but they are not stored; only the specified number of digits is actually stored.
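As a sketch of the fractional case (the name is invented): a declaration with precision (3,5) stores only three digits, but the assumed point lies five places to the left of the rightmost digit, so two intervening zeros are assumed:

```
DECLARE SMALL FIXED DECIMAL (3,5);
/* Stores 3 digits d1 d2 d3; the value is interpreted as 0.00d1d2d3. */
/* For example, stored digits 123 represent the value 0.00123.       */
SMALL = 0.00123;
```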

If the PRECISION keyword is omitted, the precision attribute must follow, with no intervening attribute specifications, the scale (FIXED or FLOAT), base (DECIMAL or BINARY), or mode (REAL or COMPLEX) attribute at the same factoring level.

If the PRECISION keyword is included, the attribute can appear anywhere in the declaration.
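The two placement rules can be illustrated as follows (a sketch; the names are invented):

```
/* Without the PRECISION keyword: (10,2) must directly
   follow the scale, base, or mode attribute.           */
DECLARE A FIXED DECIMAL (10,2);

/* With the PRECISION keyword: the attribute can appear
   anywhere in the declaration.                         */
DECLARE B PRECISION (10,2) FIXED DECIMAL;
DECLARE C FIXED PRECISION (10,2) DECIMAL;
```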

An integer value is a fixed-point value with a scaling factor of 0.