IBM Support

Best Practices for optimal text recognition in IBM Datacap

Education


Abstract

This document provides information about text recognition in general, and detailed guidance to achieve the best results when processing documents using IBM Datacap.

Content

1. Introduction

The optimal recognition settings can vary based on the contents of a specific document. Some of the tips for improving recognition are related to how to prepare input documents for processing. Other tips highlight various product features that can be used to improve recognition, or explain why to use one feature instead of another.

NOTE: New features are added to IBM Datacap over time. Some of the features mentioned in this document may not be available in the older versions of IBM Datacap, and may need an upgrade to the latest version of Datacap to access these features.

Recognition uses heuristic algorithms, which, by their nature, are not 100% accurate. The guidance in this document is intended to help achieve better accuracy from the input documents and reduce the need for review or manual correction by a verify operator, although it might not eliminate the need for manual correction.

IBM Datacap provides a vast number of tools to control recognition and the post-processing of recognition results, to avoid or fix mistakes and reduce the need for a user to verify them manually. Instead of relying solely on recognition, use the actions provided by IBM Datacap to validate and adjust the data. It is recommended to review all the action libraries along with the available guides and IBM Redbooks for application creation.

2. Recognition Tips and Best Practices

IBM Datacap provides several different recognition engines. Each engine has its own strengths and abilities. Recognition does not provide 100% accurate results. You should evaluate the engine capabilities and determine which engine is best for the type of documents that you must process. Datacap is a toolkit of features that can be mixed and matched. It is recommended to run tests on data with different engines, different settings, and image enhancement features, to find the combination that produces the best results for your documents.

For more information, see the Image Enhancement ruleset documentation.

2.1. Recognition Types

Recognition is typically classified into two types: OCR and ICR.

  1. OCR:

Optical Character Recognition refers to recognition on machine-printed text that uses various fonts, such as Arial, Times New Roman, and so on. This text is created with a word processor, typewriter, or printer.

  2. ICR:

    Intelligent Character Recognition refers to recognition on hand-printed or cursive text. Cursive is typically the most difficult type of text to recognize. If an engine supports ICR, it does not imply that it supports both printed and cursive text. If it does support both printed and cursive text, all the features and languages may not be supported for both types of writing.

It is possible one engine is required for OCR, while a different engine is required for ICR. IBM Datacap allows use of multiple recognition engines in a single application. If you have some documents that need OCR and others that need ICR, one typical implementation would be to run different rules based on the assigned page type and the rules would run the appropriate engine based on the page type.
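The routing idea can be sketched in plain Python (this is not Datacap rule syntax, and the page type and engine names below are hypothetical examples):

```python
# Illustrative sketch: route each page to a recognition engine based
# on its assigned page type. In a real Datacap application this
# decision is made by rules bound to page types, not by this code.
ENGINE_BY_PAGE_TYPE = {
    "Machine_Printed_Form": "OCR",  # machine print -> OCR engine
    "Handwritten_Form": "ICR",      # hand print/cursive -> ICR engine
}

def pick_engine(page_type: str) -> str:
    """Return the recognition engine to run for a given page type."""
    # Default to OCR when the page type is not explicitly mapped.
    return ENGINE_BY_PAGE_TYPE.get(page_type, "OCR")
```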

2.2. Test “Real” Documents

When evaluating the recognition engines, use the actual pages that your application needs to process. Do not rely on tests against sample text documents to judge how well an engine performs. Performance on "manufactured" test documents can differ from performance on the documents that will ultimately be processed. Even if they are similar, a problem may exist in a test document that does not exist in a "real" document, and vice versa.

2.3. Image Recommendations and Minimum Requirements

2.3.1. DPI

For best recognition results, use a resolution between 200 and 300 dpi (dots per inch). 200 dpi is the minimum for text that is 10 point or larger. If the text is 9 point or smaller, the dpi needs to be between 400 and 600.
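The arithmetic behind these numbers is simple: one typographic point is 1/72 of an inch, so the pixel height of a character follows directly from the point size and the dpi. A small sketch:

```python
def char_height_px(point_size: float, dpi: int) -> float:
    """Approximate character height in pixels (1 point = 1/72 inch)."""
    return point_size / 72.0 * dpi

# 10-point text at 200 dpi is roughly 28 pixels tall, which gives the
# engine enough detail. 9-point text at 200 dpi is only 25 pixels tall,
# which is why a higher dpi (400-600) is recommended for small text.
```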

For languages that have small, intricate characters, such as Thai, Arabic, and Asian languages, use a minimum of 300 dpi for 10 point text. Generally speaking, a dpi higher than 300 can be used for any language but may slow the engine without yielding better results. Languages with intricate characters may give better results at a dpi higher than 300, even if the text is 10 point.

NOTE: Be aware that some actions may filter out very large letters to avoid recognition of logos and other large pieces of text. When a very high dpi is used, the letters appear larger. In this situation, adjust this feature using actions from the CCO action library to avoid filtering out large words. If large words are filtered out, warning messages are listed in the action log file along with the text that was removed. If you are recognizing a page, and there are missing words, perform this check to see if the CCO is filtering out large words. The CCO action library allows configuration of the size of characters to filter out.

If an image has a dpi less than 200, it is possible to use actions to resize the image so it has a higher dpi. While rescaling an image technically makes it conform to the required resolution, it may not improve recognition: it adds pixels to the image, but it does not necessarily add sharper detail. The new image may meet the dpi requirement, but it may or may not help recognition.

2.3.1.1. DPI on Photos

The dpi is a value that is used to determine the physical size of an image. For example, on an 8.5” x 11” page, when scanned at 200 dpi, it creates an image of 1700 x 2200 pixels. The same page scanned at 300 dpi produces an image that is 2550 x 3300 pixels.

When a camera takes a photo, it cannot determine the physical size of what it is photographing. For example, the camera cannot tell if it is taking a picture of a 3” x 3” sticky note, an 8.5” x 11” page, or a 10 story building. A camera typically defaults to 72 or 96 dpi. This dpi does not allow the software to understand the physical size of the image. More importantly, it does not provide an easy way to determine whether the image has enough detail to achieve good recognition.

To determine whether a photo has enough detail to recognize accurately, review the pixel size of the image. For example, if the photo has a pixel size of 1700 x 2200, and the photographed page completely fills the photo without any borders, then it can be treated as an 8.5” x 11" page at 200 dpi. This photo could potentially be recognized. If the page fills only half of the image, then the page is roughly 100 dpi quality, causing poor recognition results.
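The estimate above can be expressed as a one-line calculation. This sketch assumes the page fills the full width of the frame:

```python
def effective_dpi(pixel_width: int, page_width_inches: float) -> float:
    """Estimate the effective dpi of a photo from its pixel width,
    assuming the page fills the full width of the frame."""
    return pixel_width / page_width_inches

# A 1700-pixel-wide photo of a letter-size (8.5 in) page works out to
# 1700 / 8.5 = 200 dpi, potentially good enough to recognize. If the
# page fills only half the frame, its effective width is 850 pixels,
# about 100 dpi, which gives poor results.
```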

Unfortunately, photos have several issues working against them:

  • JPEG Compression – By default, cameras typically use JPEG compression. This compression is intended for photos, not for text to be recognized. JPEG softens text and blurs lines together, resulting in more recognition errors.
  • Images are not always straight – Images can be skewed or trapezoidal.
  • Images can have poor lighting, flash hot spots, or shadows – This also reduces successful recognition of the page.

Image enhancement can be used to improve the quality of a document. It can sharpen an image, straighten an image, and so on. These enhancements can help improve recognition. Review the Image Enhancement features. Also, review the features built into the recognition actions to improve the quality of images. When performing image enhancements, ensure that the image is saved with a lossless compression, or the quality is reduced again when the image is saved in a lossy format. Each time an image is re-saved with a lossy compression, the quality can be reduced further.

The general rule for photographs is to have a large pixel size, fill the entire image area with the page, have good and consistent lighting, and hold the camera as straight as possible. Recognition can still occur when images are not ideal, but the more problems introduced to an image, the poorer the results.

2.3.1.2. Isotropic Images

Images should be isotropic, meaning the X dpi and Y dpi should be identical. Isotropic images ensure that zones line up correctly during all aspects of processing, such as recognition and verify, and help fingerprints match correctly.

Sometimes images can be non-isotropic. A fax is commonly a non-isotropic image. When it is possible that non-isotropic images can be ingested in the workflow, use the EqualizeUnbalancedImage action to scale the image to ensure that the X and Y dpi values are always identical.
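The correction EqualizeUnbalancedImage performs amounts to stretching the lower-resolution axis. A sketch of the underlying arithmetic (the fax resolution below is a common example; standard-resolution fax is often about 204 x 98 dpi):

```python
def equalize_scale_factor(x_dpi: int, y_dpi: int) -> float:
    """Vertical scale factor needed to make a non-isotropic image
    isotropic by stretching the lower-resolution axis."""
    return x_dpi / y_dpi

# A standard-resolution fax at roughly 204 x 98 dpi must have its
# height stretched by about 2.08x so that X and Y dpi match.
```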

2.3.2. Compression

If you can control the compression used in your images, a lossless compression is better than lossy compression.

Using a lossy compression — such as JPEG compression — can cause text edges to become soft, which can make recognition less accurate. While JPEG is a popular image format for photos, it is not the best format for text.

NOTE: Always avoid using JPEG compression for images that have text or barcodes which will be used in recognition.

Using JPEG compression degrades text, reducing recognition quality or even preventing recognition in some cases.

A lossless compression preserves the original image without adding additional artifacts that distort the text. Lossless compressions include Group 3 Fax, Group 4 Fax, and LZW. Fax compression is only supported for black and white images.

2.3.3. Improving Image Quality for Better Recognition

Images should be straight and free from background noise. Image enhancement actions are available to adjust images to improve recognition results.

Common image enhancements include:

  • Deskew: Straightening a slightly tilted image.
  • Rotation: Fixing the orientation of an image so it is upright.
  • Despeckle: Removing specks and background noise from an image, leaving just text.
  • Border removal and crop: Removing the edges of an image to the actual image size. This is especially important with photos of checks.
  • Line removal: Removing lines within a document, leaving just the text.
  • Inverse Text Correction: Inverse text (white text on a black background) may not be recognized, so it should be corrected before recognition.

These are the more commonly used features. IBM Datacap also provides additional image enhancement options. Use the Image Enhancement ruleset to load the target image and adjust the values and immediately view the results to guide you to the best settings for your specific documents.

IBM Datacap provides several tools to fix the rotation of an image. The automatic rotation action in the OCR/A action library uses the configured language to correct the rotation, and it rotates documents more reliably than the Image Enhancement rotate method, which uses only image geometry. For example, it is recommended to use the OCR/A library to rotate Hebrew documents, because it understands the orientation of the specific characters; the Image Enhancement actions may rotate Hebrew documents upside down.

If a single page contains text in multiple orientations, it is possible that the page may not automatically rotate as desired.

2.3.3.1. Unique Settings for Different Document Types

It is possible that different image enhancement settings may be needed for different sets of documents. For example, a document from a certain vendor may always need to be despeckled, a document with a specific page type may need to use inverse text detection, etc. A standard set of image enhancement settings may not work across all documents in the workflow. Particular image enhancements that improve one image could have negative effects on another image. Image Enhancement supports different sets of image enhancements based on page type. It is possible to use this feature to control the types of image enhancements run on the images.

2.3.3.2. Common Issues

  • Images with textured backgrounds, stamps, or watermarks make it harder for the recognition engine to correctly recognize the text. If possible, use image enhancement features to remove the background. Sometimes converting a color image to black and white can cause light background textures or images to be removed completely, or nearly completely, leaving the text.
  • Sometimes directly recognizing a color image produces better results than converting it to black and white. This can be true even if there are textures or colored backgrounds, because the engine can better distinguish the black text from the colored background. If the documents are in color, run a test to see whether better results are obtained by recognizing in color or in black and white. The Binarize image enhancement converts a color image to black and white; separate actions are also available to do the same. Converting an image to black and white can create sharper character edges. The conversion methods work slightly differently, so choose the approach that works best for you.
  • Image enhancements can be ordered. Some image enhancement features work better when performed in a specific order. For example, deskew action followed by border removal provides better results than border removal followed by the deskew action. The sequence of image enhancement features can be changed or repeated as needed.
  • Some image enhancement features have color depth requirements. Some features require a black and white image, whereas some require a color image. In the case of color images, run the enhancements that require color images, binarize the image, and then run the enhancements that require black and white images.

    NOTE: When you convert images to black and white for recognition, the original color images are renamed so as to retain them for future use. You can rename them back to the original names to allow the end user to see the verify image in color. Additionally, work can be performed on the adjusted images because they may recognize better, and then the original images can be exported so that the original image is retained.

  • Just as with different image enhancement features, the actions can support different color depths and compression types. It is recommended that you review the list of the compressions and color depths supported in IBM Datacap.

2.4. Page Recognition and Field Level Recognition

Recognition can be performed in two different ways: Page and Field.

  • Page recognition — also called full-page recognition — provides the entire page to the recognition engine and all of the text is recognized at once.
  • Field recognition — also called zone recognition — recognizes portions of text on the page in pre-defined rectangular locations.

Each approach has its own benefits and drawbacks.

Salient features of Full-page recognition:

  • It is useful when all the text on the page is required for processing or archiving.
  • It allows searching the entire page.
  • Full-page recognition results can be loaded into fields/zones with actions such as the “Update” action in the Locate library or SnapCCOtoDCO action in the Recog_Shared library.
  • It is slower than field-level recognition.

Salient features of Field-level recognition:

  • It is faster than full-page recognition.
  • It recognizes text directly into a field.
  • It allows additional constraints to be placed on the recognition engine, such as allowable characters. Many of these restrictions are configurable in the Zones tab of Datacap Studio.
  • Some features require or are only supported with field recognition, such as cursive recognition or signature verification.
  • Field-level recognition can be more accurate than full-page recognition.
  • Some recognition features require zones.

2.5. Recognition of PDF documents

PDF documents are a common electronic format currently in use. A PDF can contain a mixture of text and images. A PDF that contains text embedded in it along with image elements is called a "searchable PDF". Although the PDF page displayed to the user may be an image, it can have embedded text with positional coordinates, which allow the use of features such as cut-and-paste along with searching while viewing the PDF document.

IBM Datacap can process PDF documents that have searchable text and those that only contain images. The two common ways of processing a PDF are:

  1. By performing recognition directly on the PDF.
  2. By converting the PDF to a TIFF image and then performing recognition on the extracted image.

2.5.1. Performing recognition directly on the PDF

If the input PDF is built using ideal conditions, such as being created directly from an electronic source document, this approach may give better results for two reasons:

  1. The image is recognized at the same native resolution as it is embedded in the PDF.
  2. If the PDF has embedded text, this text guides the engine and typically produces better quality results.

Unfortunately, PDFs are not always created with ideal pages. When recognizing the PDF directly, image enhancement cannot be performed: images cannot be rotated, deskewed, have lines removed, and so on. If the PDF is not created from an electronic source, such as a Word document converted to a PDF, then the best approach is to convert the PDF to images without performing recognition, use image correction actions, and then run recognition on the images. For example, if the PDF is created from a set of scanned images, then it is best to extract the images and recognize them as images. The original PDF can always be saved for archiving at the end of the batch.

Field recognition is not supported on a PDF. Fingerprinting is not supported on a PDF. A PDF must be converted to an image for field recognition and fingerprinting.

The recognition method to be used depends on the images that need to be processed. IBM Datacap architecture allows the use of both mechanisms in the same application. An application could be setup to recognize one set of PDF files one way and another set of PDF files the other, as long as a rule is defined to make the decision of the recognition method.

When processing other types of electronic documents, such as Word or Excel documents, these can be directly converted to a TIFF image, and then recognition can be performed on the TIFF. Alternatively, there might be a benefit in converting these documents to a PDF and performing recognition on the PDF, instead of converting the file directly to an image. The reason recognition might be better is that when an electronic document is converted to a PDF, the embedded text from the source document is placed into the PDF, which can aid the recognition engine.

2.5.2. Converting PDF to TIFF and then performing recognition on the extracted image

Creating TIFF files from the PDF is beneficial even if you are recognizing the PDF directly. These images can be used to display the page in the Verify panels. It also makes it possible to perform enhancements or image redaction.

The action PDFFREDocumentToImage in the Convert library is capable of directly recognizing the PDF as well as producing image files at the same time.

Field-level recognition can be faster than full-page recognition, and it allows additional filtering. For example, it is possible at the field level to configure restrictions on the recognized text, such as numeric-only or a specific set of characters. In addition to speed, it provides higher accuracy. When performing field-level recognition, first convert the PDF to an image, because field-level recognition is not supported on a PDF.

2.6. Recognition Quality

Text recognition is not guaranteed to be 100% accurate. As noted earlier in this document, a number of factors can make it hard for an engine to correctly recognize text. While a crisp and clean document is recognized better than a slanted, noisy image, it still can have recognition problems.

2.6.1. Substitution

A typical recognition problem is known as substitution. When substitution occurs, the engine recognizes a letter for a similar letter. Here are some examples:

  • Recognition of an O (capital O) vs. o (lower case o) vs. 0 (zero)
  • Recognition of 1 (one) vs. I (capital I) vs. l (lower case l)
  • Recognition of W (a capital W) vs. VV (two capital V)

There is a greater chance of substitution when the input data is not a word, such as an alpha-numeric ID or account number.

One mechanism to reduce substitution is to provide hints to the engine about the type of characters expected. Many of the engines support field-level specifications where the type of expected data can be identified. For example, if a field is always numeric, that field can be configured to tell the engine that only numbers are expected. This helps when the engine cannot tell whether a character is a “1” or an “l”: with the hint, it leans towards the numeric “1”. If specific characters are expected (such as only uppercase, only lowercase, or some combination of characters and symbols), this can also be specified to help the engine.

In some applications, it may be necessary to search the text of a document and then take steps based on the found text. To illustrate a typical example, the application may look for the word “Invoice” on the page, and if it finds this word, it sets the page type as Invoice and continues processing based on that page type. To help catch common errors, this kind of search is performed with a regular expression that allows for different permutations of upper vs. lower case and common interchanges such as O vs. 0, so the search matches either version of each ambiguous character within the word.

Sometimes the recognition engine mistakenly puts spaces in a word, so the regular expression could account for a variable number of spaces as well.
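A tolerant pattern along these lines can be sketched with a standard regular expression (the character classes chosen here are an illustration, not the exact expression a given application would use):

```python
import re

# A tolerant search for the word "Invoice" that allows common OCR
# substitutions (I/1/l and O/0), either letter case, and stray spaces
# that the engine may insert between characters.
INVOICE_RE = re.compile(r"[Ii1l]\s*[nN]\s*[vV]\s*[oO0]\s*[iI1l]\s*[cC]\s*[eE]")

def looks_like_invoice(page_text: str) -> bool:
    """Return True if the page text contains a plausible 'Invoice'."""
    return INVOICE_RE.search(page_text) is not None
```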

2.6.2. Restricting Field Text

When fields are used, IBM Datacap provides a number of actions that can continue to massage the data to help fix recognition errors before showing them to the user. For example, if a field should not contain spaces, an action run on the field after recognition can remove the spaces, and other verification tests can then be performed on the field. Field settings in the Zones tab control how the engine recognizes text in a field, while actions can be used for post-processing of recognized text.

There are other sets of actions that perform validations on fields that can test the field data to see if the recognition engine did a good job of recognition. If there are specific patterns of data, for example, an account number should always be 10 digits, or there should always be some prefix on an account number, or many other types of tests, validation actions can be used to test these fields and flag them to the user if they do not pass validity tests.
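The normalize-then-validate pattern can be sketched in a few lines. This is a generic illustration, not Datacap action syntax, and the account number format (prefix "AC" plus 10 digits) is a hypothetical example:

```python
import re

# Hypothetical validation rule: an account number is the prefix "AC"
# followed by exactly 10 digits.
ACCOUNT_RE = re.compile(r"^AC\d{10}$")

def clean_and_validate(raw_field: str) -> tuple[str, bool]:
    """Strip spaces the engine may have inserted, then test the
    cleaned value against the expected pattern. A False result would
    flag the field to the verify operator."""
    cleaned = raw_field.replace(" ", "")
    return cleaned, bool(ACCOUNT_RE.match(cleaned))
```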

2.6.3. Character Confidence

For each character that is recognized by the engine, there is an associated confidence level. The confidence is assigned a value from 1 to 10, which indicates how sure the engine is that it recognized the character correctly: 1 means the engine is not confident, and 10 means the engine is very confident that the text is correct. A threshold can be adjusted to flag the confidence of each character. If a character falls below this threshold, it is highlighted to the verify operator.
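The flagging logic amounts to a simple threshold comparison. A sketch (the threshold value of 7 is an arbitrary example; each application tunes its own):

```python
# Each recognized character carries a confidence from 1 (unsure) to
# 10 (very sure). Characters below the threshold are highlighted to
# the verify operator.
def flag_low_confidence(chars, threshold=7):
    """chars: list of (character, confidence) pairs.
    Returns the positions that need operator review."""
    return [i for i, (_, conf) in enumerate(chars) if conf < threshold]
```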

Of course, each application can have its own rules about what it shows to the operator. In most cases, the goal is to process as many pages as possible without having to show the document to the user to confirm that the information is correct. The tolerances and rules for when to display a page to the user can all be controlled by the custom Datacap application. During the initial roll-out, it is not uncommon for an application to be configured with a lower tolerance so that more potential problems are shown to the user. As the administrator gains confidence that the application is doing a good job, the tolerances can be raised to show fewer potential issues to the verify operator.

2.7. Text Blocks and Table Identification

Some recognition engines can identify a table on a document when performing full page recognition and group text into blocks. When text is recognized as a table, it means that additional metadata is internally stored about the words that have been recognized. This extra metadata stores the cell information, row and column position, for the text. This table metadata can be used by subsequent actions that support table functionality.

2.7.1. Line Items

Use of table identification isn’t the only way to process tabular data. Datacap has a long-existing feature of processing tabular data as line items. The APT application, which processes invoices, is a good example of processing tabular data without the engine actually recognizing the table as a table and producing this tabular metadata. The mechanism used by APT is called “line items”.

The line item recognition results are provided to the user in a tabular view using standard page-level recognition, without requiring a layout file to be created and without the recognition engine performing table identification. The line item approach works reliably for APT, even though the table identification mechanism of the recognition engine is not used. Line items are one of the mechanisms for processing table data, and the approach most often used for working with tables. APT is a vertical application set up to process invoices. It does not process other kinds of documents out-of-the-box, but it can be used as an example for processing different kinds of documents with the line item approach.

To use the line item approach in your custom applications and document formats, review the APT application and understand how it works. This will enable you to use similar techniques in your custom application. To configure and use APT for creating a custom application, see the IBM Datacap Accounts Payable Capture Redbook Guide.

2.7.2. Engine Identification of Tables

When the engine performs table identification, table rows and columns may not be recognized with 100% accuracy compared to how the table looks on the page to the human eye. Recognition engines internally use heuristic algorithms, which by their nature make mistakes. As is true for text recognition, your application needs to handle situations where table layouts are not identified correctly.

The following guidelines help the engine recognize a table and the cells within the table:

  • Gridlines surrounding the entire table.
  • Gridlines that show the cells.
  • Cells cannot intersect each other.
  • All cells must have a rectangular shape.
  • Cells need to all be on the same horizontal row, meaning that along any particular row, there are no cells whose characters' bottoms intersect the middle of characters in a horizontally adjacent cell.

When using table identification, do not use line removal. While line removal can generally help improve recognition, when recognizing a table, the lines and cells are critical for reliable table identification. Obviously, there can be a trade-off here between choosing line removal vs. not performing line removal to get the best results.

When table processing is needed in your application, it is recommended to test with a large number of documents and review how accurately your tables are identified by the engine.

If the engine recognizes the tables with a high accuracy, then using table identification could work for your implementation.

If the tables in your documents are not being identified by the engine, then you may consider a different approach to process your documents other than relying on the engine to correctly identify your table structure.

Be aware that table detection and block structures can change from document to document, even if the forms are the same. As an engine is enhanced, the heuristic detection algorithms can also change. The result is that the detected layouts, text blocks and table structures for a page can be different in a new version of the product. Applications that are using block actions will need to handle the potential changes or be updated to handle changed detected blocks.

2.7.3. Zoning a Table

If a table on the page does not have gridlines, or if it has gridlines but the engine is including content outside the table or ignoring parts of the table, then the engine can be told the location of the table by using a zone. This is supported by the OCR/A engine, and one table per page can be identified using a zone.

The zone can be predetermined by using a fingerprint, or a zone can be determined at run-time if there is unique text on the page that can be used to identify the table boundaries. When a zone that indicates the boundaries of the table is provided to the engine, it typically does a better job of identification of rows and columns. Of course, there can still be mistakes, particularly if the table does not have grid lines. Refer to the Recognize action help in the OCR/A action library for more information.

2.8. Voting

Datacap has a feature called “voting”, which compares the results of multiple recognition engines. This is a technique for field recognition where two different recognition engines perform recognition on the same field. If the results match, the confidence of that character is raised to the highest level. If the engines recognize different characters, the confidence is lowered to the lowest level. For more details, refer to voting actions such as RecognizeFieldVoteOCR_A.
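The voting idea can be sketched as a character-by-character comparison, using the 1-10 confidence scale described earlier (this is an illustration of the concept, not the engine's internal implementation):

```python
# Illustrative voting sketch: two engines recognize the same field.
# Agreement raises a character's confidence to the maximum (10);
# disagreement lowers it to the minimum (1), so the character is
# flagged to the verify operator.
def vote(engine_a: str, engine_b: str) -> list[tuple[str, int]]:
    """Compare two engines' results character by character."""
    return [
        (a, 10 if a == b else 1)
        for a, b in zip(engine_a, engine_b)
    ]
```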

2.9. Check Processing

As would be expected, checks are recognized better when the image is free of background noise, as is true for any recognized image. Here are some additional guidelines to improve check recognition.

2.9.1. Image DPI

Checks must have a DPI between 200 and 300. Check recognition is less accurate when outside this range.

2.9.2. Check Borders

The check must be straight, and the borders must be cropped to the size of the check. The check engine has a tolerance of a small number of pixels beyond the expected size of the check. If the checks are crooked or have a larger border, the image needs to be straightened and the border cropped. Use the image enhancement features to find values that work best for your checks.

Remember that the order of operation can be adjusted and operations can be performed twice, if that helps with your images.

2.9.3. Photos of Checks

Checks photographed with a camera, rather than scanned, can have several problems that make recognition harder. The image can be skewed or appear as a trapezoid instead of a rectangle. These problems should be corrected.

Photos have issues with the physical size of the check. A check that is 5” x 3” at 200 dpi has an image size of 1000 x 600 pixels. The same check at 300 dpi has an image size of 1500 x 900 pixels. A photo can have an image size that varies from camera to camera, the dpi is typically set to 96, and the check may take up just part of the frame. If provided to the recognition engine as-is, the image may not even be recognized as a check, even if the check area is within the ranges mentioned earlier. To have the check recognized, the image should be straightened and cropped. Even after the image is adjusted to show only the check, the dpi and number of pixels might still be outside the range required by the recognition engine.

One way to address this issue is to use the SetImageDPIByWidth action in the ImageConvert library. With this action, provide the expected width of the image, for example 5 inches, and the expected dpi. The action then adjusts the image to that physical size and dpi, which often improves check recognition. Of course, if the source photo contained a very small image of the check, or if the check has a lot of noise, it still may not recognize after adjustment and would need to be checked manually.
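The arithmetic behind such a rescale can be sketched as follows (this illustrates the calculation only; the actual SetImageDPIByWidth action handles the image resampling itself):

```python
# Rescale a cropped check photo so it represents a known physical
# width at a target dpi. The 5-inch check width and 200 dpi target
# below are illustrative values.
def rescale_to_width(pixel_width: int, expected_inches: float,
                     target_dpi: int) -> tuple[int, float]:
    """Return (new pixel width, scale factor) so that the image
    represents `expected_inches` at `target_dpi`."""
    new_width = int(expected_inches * target_dpi)
    return new_width, new_width / pixel_width
```

For example, a cropped 5-inch check photographed at 1200 pixels wide, targeted at 200 dpi, becomes 1000 pixels wide.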

Also, review the discussion about the DPI settings for photographed images for additional information.

2.9.4. Check Color

Checks are typically a color image. Converting them to black and white can cause problems, especially if there is a background picture that is prominent enough so it remains after the conversion. Depending on the country of the check to be recognized, the engine may be able to accept a color check image, potentially providing better recognition than first converting it to black and white. Review the action help of the check processing actions for details about supported color depths of checks.

2.10. Signature Verification

Signature verification can be performed on checks or on pages. When verifying the signature of a check, the signature does not need to be zoned. When verifying the signature on a non-check image, then the signature must be in a zone.

For a typical application, a minimum of 5 to 10 reference signatures is required for each person whose signature is to be verified. Having as many as 15 signature references can further improve the verification matching. A person's signature can vary over time. It can also vary from day to day, based on a number of factors such as mood, writing instrument, hand position, incline of the surface, and so on. While the engine attempts to account for these kinds of differences, providing multiple examples of a reference signature helps to improve the accuracy of recognition.

3. Concluding remarks

Recognition is not 100% accurate. What makes IBM Datacap so important is the availability of a vast number of tools to control the recognition and the post-processing of recognition to avoid or fix the mistakes, thus reducing the need for a user to verify them manually.

You cannot rely on recognition alone, but you can utilize the actions provided by IBM Datacap to validate and adjust the data. It is recommended that you review all of the action libraries along with available guides and IBM Redbooks for application creation.

If more complex or application-specific processing is required, IBM Datacap provides the tools to create custom actions to process the data as needed by your specific application.


Document Information

Modified date:
04 February 2021

UID

swg27050111