Billions of transactions occur online every day. With increasing volumes and the need to share data in the digital space, financial institutions such as M&T—one of the largest commercial bank holding companies in the U.S.—are expected to process and communicate banking information with high levels of speed and accuracy.
Previously, M&T conducted periodic bulk transfers in which large volumes of raw data were moved between core systems and downstream functions, such as a payment decisioning application. The data elements then had to be aggregated and blended into intelligible account information before that application could use them. This process took several hours to complete, leaving various banking workloads consuming outdated information.
M&T also wanted to make core deposit information more readily available and consumable for their application developers, data scientists and business analysts. The complex data structures and formats associated with the core banking system meant that creating new applications or modifying existing ones could take over six weeks, even for minor changes.
Application developers had to work with multiple teams to access and interpret raw data elements before they could compose the necessary information in their applications. Similarly, data scientists and business analysts were unable to access core banking information for analysis without requesting that a test bed be created from large data extracts specific to their project. M&T wanted to provide their employees with a self-service model in which information could be accessed on demand without hindering production or affecting the bank’s core systems.
In search of a way to reduce information delays and make core banking information more accessible, M&T sought to enable their z/OS applications to follow an event-driven architecture that could generate current and consumable information without impacting day-to-day operations.
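To give a sense of the pattern, the minimal sketch below publishes a single account-update event to a streaming topic the moment a change occurs, rather than waiting for the next multi-hour bulk transfer. The use of Apache Kafka, the topic name, the broker address and the payload shape are illustrative assumptions for this sketch, not details of M&T's actual implementation.

```java
// Illustrative sketch only: broker address, topic name, and payload are assumptions,
// not details from M&T's implementation.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AccountEventPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Each core-banking update becomes a small, self-describing event that
            // downstream consumers (payment decisioning, analytics) can read immediately,
            // instead of waiting for a periodic bulk extract.
            String accountId = "ACCT-0001";                  // hypothetical account
            String event = "{\"accountId\":\"" + accountId + "\",\"balance\":1250.75}";
            producer.send(new ProducerRecord<>("core.deposit.updates", accountId, event));
        }
    }
}
```

In an event-driven flow of this kind, consumers subscribe to the topic and react to each change as it arrives, so the information they act on is current rather than hours old.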