Application discovery for business alignment pattern
Use understanding of applications through detailed analysis to guide modernization in response to business needs.
Overview
Organizations rely on core applications that maintain system-of-record (SOR) data on the mainframe. The online and batch programs within these applications contain the business logic that maintains data integrity across business entities and implements business policies, processes, and rules; they are core business assets of the organization. IBM Z’s strict backward compatibility ensures that these assets can remain in use even as technology evolves. As a result, these transactional and batch applications, often designed decades ago, have grown very large and complex over the years, are difficult to understand, and much of the tacit knowledge about them has been lost. Yet, in response to ongoing business needs, they must continue to evolve, more quickly than ever. This requires a framework for rapid application understanding.
Application discovery can support a wide spectrum of modernization activities, so it is important to understand the objectives of discovery. Here are a few:
- Have a high-level understanding of the portfolio and how applications in the portfolio integrate.
- Have a deep understanding of how a particular application works: how the programs are connected to each other, which data they manipulate, and so on. This enables maintenance of the application or the addition of new features.
- Understand how a particular modernization activity (such as performance improvement or SCM migration) can be undertaken.
Based on the objective, practitioners will choose a particular discovery entry point.
Let’s review some business needs for analysis:
Understand your applications for maintenance and new services
Organizations often face challenges with the development and maintenance of their mainframe applications. Mainframe applications have typically been developed over several decades, and as a result the knowledge of these applications is not at the same level as it is for more recently developed applications. The architecture across the applications may be uneven, or some layers were added to solve a problem and have persisted ever since; now, no one dares to touch these areas anymore. In a nutshell, it becomes a challenge to maintain application knowledge at a high enough level to deliver what the business expects from the team.
As a result, changes are made with uncertainty and risk. Sometimes this leads projects to derail or fail at an alarming rate. Even routine maintenance becomes harder, and the velocity of application maintenance drops.
Analysis helps developers, architects, and business analysts take back ownership of their code, understand how the different pieces interact, and understand the data model involved. It provides up-to-date information about a given version of the application, which very likely differs from what the team has documented in the past. It gives the knowledge needed to perform maintenance or implement a new feature.
Identify APIs and expose services
Discover code segments that are aligned to certain business functions, rules, or data-processing algorithms and that can be reused from different parts of the application landscape. These modular pieces of code are qualified candidates to be converted into APIs, either through in-place modernization or in a rearchitected application. For example, complex processing performed on a set of relational tables can be identified by tracking the usage of these tables in the code. The code can then be refactored to isolate the single business function and expose it.
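The following sketch illustrates the kind of table-usage scan described above. The directory layout, file extensions, and table names are assumptions made for illustration; a real engagement would typically rely on the cross-reference data in a discovery tool rather than plain-text matching.

```python
"""
Minimal sketch: locate programs that reference a given set of relational tables.
All names below are hypothetical; adapt them to your own source export.
"""
import re
from pathlib import Path

SRC_DIR = Path("cobol-src")                 # hypothetical source directory
TABLES = {"PART_MASTER", "PART_TRACE"}      # hypothetical table names

def referenced_tables(source_text: str) -> set:
    """Return which of the tables of interest appear in EXEC SQL statements."""
    sql_blocks = re.findall(r"EXEC\s+SQL(.*?)END-EXEC", source_text,
                            flags=re.IGNORECASE | re.DOTALL)
    found = set()
    for block in sql_blocks:
        for table in TABLES:
            if re.search(rf"\b{table}\b", block, flags=re.IGNORECASE):
                found.add(table)
    return found

candidates = {}
for cobol_file in SRC_DIR.glob("*.cbl"):
    hits = referenced_tables(cobol_file.read_text(errors="ignore"))
    if hits:
        candidates[cobol_file.stem] = sorted(hits)

# Programs touching the tables are starting points for isolating the business
# function and exposing it as an API.
for program, tables in sorted(candidates.items()):
    print(f"{program}: {', '.join(tables)}")
```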
Standardize languages and technologies across applications
Traditional mainframe applications developed long ago often suffer from a lack of organizational standards and are aligned to the language and technology that the development team was comfortable with. Over time, the technology may have become outdated, support may have become more expensive, or skilled resources may have become scarce. There is therefore a need to align the applications to the most appropriate technology for the organization, one that supports its future state. The inventory capabilities of analysis help to plan the appropriate changes.
Componentize your source code to adopt DevOps practices
Shorter cycle times help teams get earlier feedback and reduce the effort spent on corrective actions, but they also involve building and deploying more frequently. Reducing time to market, or maintaining simultaneous versions, makes parallel development an imperative. Defining the right content for a git repository allows teams to work in parallel and to establish clear interfaces and adoption processes when interrelated components evolve. Analysis is required to define these boundaries and identify interfaces.
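As a minimal illustration of how analysis data can drive repository scoping, the sketch below flags copybooks that are consumed outside their owning application and are therefore candidate interfaces between git repositories. All names and the ownership data are hypothetical; in practice this information would be exported from the analysis repository.

```python
"""
Minimal sketch: find copybooks that cross application boundaries and are
therefore candidate shared interfaces between git repositories.
"""
from collections import defaultdict

# Hypothetical data: owning application per copybook, application per program,
# and copybooks included by each program.
copybook_owner = {"CUSTCOPY": "customer", "ACCTCOPY": "accounts", "LOGCOPY": "common"}
program_app = {"CUSTUPD": "customer", "ACCTPOST": "accounts", "STMTGEN": "statements"}
program_copybooks = {
    "CUSTUPD":  ["CUSTCOPY", "LOGCOPY"],
    "ACCTPOST": ["ACCTCOPY", "CUSTCOPY", "LOGCOPY"],
    "STMTGEN":  ["ACCTCOPY", "LOGCOPY"],
}

consumers = defaultdict(set)
for program, copybooks in program_copybooks.items():
    for copybook in copybooks:
        consumers[copybook].add(program_app[program])

# A copybook consumed outside its owning application is a shared interface:
# changes to it must be planned and tracked across teams.
for copybook, apps in consumers.items():
    owner = copybook_owner[copybook]
    external = apps - {owner}
    if external:
        print(f"{copybook} (owned by {owner}) is consumed by: {', '.join(sorted(external))}")
```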
Performance analysis
Total cost of ownership (TCO) is considered one of the most important driving forces behind mainframe modernization. A lack of coding standards and software quality management processes, together with the growth of application data and the use of old and outdated software, increases the MIPS consumption of mainframe applications, which directly or indirectly increases TCO. The focus areas of performance analysis are the discovery of application components that can be optimized to reduce CPU usage and to lower the risk of failure. Software version upgrades, more efficient code and processes, reduced batch cycles, and decommissioning of unused or ineffective components are some typical outcomes of performance-oriented discovery for mainframe applications.
Solution and pattern for IBM Z®
Given the size, importance and history of these applications, organizations need to apply a high degree of due diligence in maintaining and evolving them. Their teams first need to understand and analyze how an application works in detail and understand its relationship with other applications before they can assess the impact of changes. This is where software analysis comes into play. When teams collect data from the production version of the source code and related assets, they can perform an accurate analysis of the application based on a single truth.
Application discovery helps the team to regain knowledge about the internals of their application, from a high level to a very detailed level. Application discovery is a set of tools and processes that provide readable, consumable, up-to-date information to the architects and developers about their mainframe software assets and their related resources.
Static analysis of your source code and assets
Static analysis is a method of application discovery that processes the source code and resource definitions without executing the code. It is performed on a version of the code and resources, typically the production version.
Because it is based on the source code and related definitions about the middleware, it reflects the reality of the implementation of the applications, their structure, program flow, control flow and data flow, and it helps teams regain control and plan changes more effectively.
The very first step is to populate an analysis repository with a consistent set of software assets. These assets include the application’s source code, and often also information about its resources, transactions, jobs, and schedules. These additional assets help to paint a complete picture of the application and its pieces. This information matters as it is not biased. It comes from the truth of the software itself: the source code and the definitions in the system. It needs to be extensive so that a detailed analysis can be performed.
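A simple way to sanity-check the completeness of the collected assets is to inventory them by type before the build. The sketch below assumes the assets have been exported as files into a local directory; the directory name and extension mapping are illustrative assumptions.

```python
"""
Minimal sketch: inventory exported assets by type and flag obvious gaps
before populating the analysis repository. Layout and extensions are assumed.
"""
from collections import Counter
from pathlib import Path

ASSET_DIR = Path("exported-assets")   # hypothetical export location

# Assumed mapping of file extensions to asset types.
ASSET_TYPES = {
    ".cbl": "COBOL program", ".cpy": "Copybook", ".jcl": "JCL job",
    ".bms": "BMS map", ".asm": "Assembler", ".sql": "DDL/SQL",
}

counts = Counter()
unclassified = []
for path in ASSET_DIR.rglob("*"):
    if path.is_file():
        asset_type = ASSET_TYPES.get(path.suffix.lower())
        if asset_type:
            counts[asset_type] += 1
        else:
            unclassified.append(path.name)

for asset_type, count in counts.most_common():
    print(f"{asset_type:15} {count}")
if unclassified:
    print(f"{len(unclassified)} files could not be classified; review them before the build.")
```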
An analysis tool such as IBM® Application Discovery and Delivery Intelligence (ADDI) provides up-to-date, consumable information about your mainframe software assets and resources. Architects and developers can visualize application flow, perform impact analysis, and generate reports to act on their modernization strategy and plan increments with confidence.
Gathering the analysis data is the first step of the process. This data is built in a manner similar to compiling the source code, but the purpose of this build process is not to produce an executable module; it is to assemble and resolve data and logic flows so that they can be reviewed, measured, and reported.
Using the data from the repository is the second step of the process. The repository is accessed by different tools to support several workflows: providing metrics, browsing artifacts, performing impact analysis, and displaying graphs with different levels of detail and focus. Typically, developers and architects perform these workflows in their Eclipse-based integrated development environment (IDE). Other team members may use web browsers or some of the many reports that can be generated. Teams can also use the provided APIs to support automation scenarios, to implement gates in a pipeline, or to feed various dashboards.
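As an illustration of such an automation scenario, the following sketch uses a REST call to gate a pipeline on the size of an impact analysis. The endpoint, parameters, and response fields are placeholders, not the actual ADDI API; substitute the API or report export that your analysis tool provides.

```python
"""
Minimal sketch of a pipeline gate driven by analysis data.
The host, endpoint, and response shape are hypothetical placeholders.
"""
import sys
import requests

ANALYSIS_API = "https://analysis.example.com/api"     # hypothetical endpoint
CHANGED_PROGRAM = sys.argv[1] if len(sys.argv) > 1 else "CUSTUPD"
MAX_IMPACTED = 25                                      # gate threshold (assumption)

# Ask the analysis service which artifacts are impacted by the changed program.
response = requests.get(
    f"{ANALYSIS_API}/impact-analysis",
    params={"artifact": CHANGED_PROGRAM},
    timeout=30,
)
response.raise_for_status()
impacted = response.json().get("impactedArtifacts", [])   # placeholder response field

print(f"{CHANGED_PROGRAM}: {len(impacted)} impacted artifacts")
if len(impacted) > MAX_IMPACTED:
    # Fail the pipeline step so the change is reviewed before it proceeds.
    sys.exit(f"Impact exceeds threshold of {MAX_IMPACTED}; manual review required.")
```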
Runtime analysis
Dynamic calls are a challenge for static analysis. Even with advanced algorithms that simulate branches of the code and calculate the possible values of variables, some dynamic calls remain unresolved. This happens, for example, when the names of the called programs are not present in the source files and the analysis cannot easily derive such values.
Runtime analysis can be used, for example through the analysis of SMF records, to identify the programs involved in such dynamic calls. The data obtained through runtime analysis becomes an additional input to the analysis process. By augmenting the data found through static analysis, it enhances the quality of the data captured and therefore of the analysis itself.
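The sketch below shows the general idea of augmenting a static call graph with caller/callee pairs observed at run time. The data structures and program names are illustrative; real inputs would come from the analysis repository and from SMF or trace collection.

```python
"""
Minimal sketch: merge runtime-observed call pairs into a static call graph
so that previously unresolved dynamic calls appear in the analysis.
"""
# Static analysis result: callers mapped to resolved callees (hypothetical names).
static_calls = {
    "ORDERMNU": {"ORDERVAL"},
    "ORDERVAL": set(),          # contains an unresolved dynamic CALL
}

# Pairs observed at run time (hypothetical extraction from SMF or trace data).
runtime_observations = [
    ("ORDERVAL", "PRICECAL"),
    ("ORDERVAL", "TAXCALC"),
]

resolved = {caller: set(callees) for caller, callees in static_calls.items()}
newly_resolved = []
for caller, callee in runtime_observations:
    if callee not in resolved.setdefault(caller, set()):
        resolved[caller].add(callee)
        newly_resolved.append((caller, callee))

# The augmented graph feeds back into impact analysis and flow diagrams.
for caller, callee in newly_resolved:
    print(f"dynamic call resolved: {caller} -> {callee}")
```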
Performance analysis
Performance analysis focuses on the optimal usage of resources to maximize processing efficiency or minimize operating expenses. A performance analysis engagement can be aligned to different objectives, such as reducing the batch cycle window, speeding up online transactions, tuning SQL or database access, reducing MIPS, or optimizing storage costs. A typical engagement starts with infrastructure performance analysis, followed by application performance analysis.

The first stage of such a project is to conduct workshops with the existing infrastructure and application teams to define the objectives and capture inventory details: hardware and software versions, technology mix, MIPS costing model, process methods, and tools. This allows the right SMEs and consultants to be engaged and the performance goals to be defined. The next stage is the collection of performance-related data such as SMF data, TADz reports, program CPU consumption patterns, zIIP processing, program call chains, code complexity, scheduler dumps, and SQL details. Tools for SMF analysis and code discovery can be used at this stage. The data is then analyzed using analytical and statistical methods and models to derive solutions aligned to the project objectives.

The final deliverable contains recommendations in the form of quick wins, mid-term solutions, and long-term solutions. The solution levers include hardware and software upgrades, monitoring tool recommendations, SQL optimization, segregation of transactional and analytical workloads, redundancy removal, moving workload to zIIP, and so on.
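As a small illustration of the data-analysis stage, the sketch below ranks programs by CPU consumption from a flat export of SMF-derived figures. The file name and column names are assumptions standing in for whatever report your tooling produces.

```python
"""
Minimal sketch: rank programs by CPU consumption from a flat data export.
The CSV name and columns ("program", "cpu_seconds") are assumed.
"""
import csv
from collections import defaultdict

cpu_seconds = defaultdict(float)

# Hypothetical export: one row per job step with program name and CPU time.
with open("cpu_by_step.csv", newline="") as handle:
    for row in csv.DictReader(handle):
        cpu_seconds[row["program"]] += float(row["cpu_seconds"])

total = sum(cpu_seconds.values()) or 1.0
top = sorted(cpu_seconds.items(), key=lambda item: item[1], reverse=True)[:10]

# The top consumers are the first candidates for tuning, SQL review,
# or redundancy removal.
for program, seconds in top:
    print(f"{program:10} {seconds:10.1f}s  {100 * seconds / total:5.1f}%")
```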
Here are some real-life examples that illustrate how analysis has been critical in a project. Each example illustrates a different use case:
- How is an existing component used across applications?
A car manufacturer wanted to modernize an application that generates identification information for mechanical parts. These identification codes are the starting point of the traceability process and the proof of authenticity for a mechanical part. The identification number generation process was written in assembler and could no longer be maintained by the team. It needed to be replaced by a new system that would evolve more gracefully. But before replacing it, the company needed to assess all the impacts: this process sat in the middle of a large part of the information system.
After building the application analysis from the source code into a repository, the customer used the graphical analysis to identify the different entry points of the identification generation process. They used the high-level, yet still detailed, flow of the process. Then the customer performed impact analysis across the relevant pieces of the IT system in order to understand all of the connections to the system to be decommissioned. By doing so, the customer was able to plan the adoption of the new component with confidence and make sure they had a complete list of software components to modify.
- Change the persistence layer of an application
A bank planned to rationalize the various middleware used in the organization: after many years and several acquisitions, different database technologies were in use across the company, and the new standard on the mainframe was Db2 for z/OS. One big system used IMS and still relied on IMS DB. Migrating from a hierarchical database such as IMS DB to a relational database such as Db2 is a project in itself. But the project was about much more than this technical aspect: the database was used by hundreds of programs through their PSBs. These programs needed to be identified and would need to evolve to deal with data coming from the relational database. The number of PSBs involved and the position of the programs in the chains that invoke them make this work either trivial or extremely complex. Metrics were calculated in order to size the effort and identify the complexity. By using analysis, the customer was able to build the right picture of their applications and assess the amount of work required to perform such a transformation.
- Adopt DevOps and scope the content of git repositories
A bank planned a DevOps adoption and a migration to git from their existing mainframe library manager. A detailed analysis of the lines of business and domains allowed them to identify the right scope for the applications. Access control could be enforced at that level. The interactions between the applications were reviewed in order to identify interfaces (mostly COBOL copybooks) that were owned by one application and consumed by others. Changes to these interfaces would be planned and tracked across teams. Cross-cutting components were also identified and managed separately, within their own repository. With this transformation, the bank achieved a similar continuous integration pipeline across mainframe and distributed applications, adopted similar development practices, and became more agile.
- Reduce CPU consumption of an application
A bank was urgently looking for MIPS cost-reduction options. The application was built with older versions of COBOL and Db2 but ran on the latest mainframe systems of the time. The discovery indicated a risk of running out of system resources during critical batch cycles, as well as large table joins and unions and redundant, duplicate batch jobs that consumed too many system resources. The recommended solution included redundancy removal, storage and software upgrades, SQL tuning, and the use of Db2 Analytics Accelerator and OMEGAMON for Db2.
Advantages
Mainframe applications can become complex and hard to understand without help from an automated analysis process. The key benefit of this pattern is to reduce risk and uncertainty in designing and implementing changes, such as those to meet new regulations and business requirements. Architects can make knowledge-based decisions when planning architecture changes. With the regained understanding of the system, project managers can build a precise bill of materials for their teams or SIs. It also becomes easier to onboard new developers as they can understand the existing complex applications.
By applying the discovery and analysis pattern, organizations can accomplish different business targets, such as:
- React to disruption, regulations and business requirements with reduced risk and uncertainty.
- Take knowledge-based decisions to adopt DevOps practices, expose APIs, enable architecture changes and upgrade to hybrid cloud.
- Onboard new developers and build their application knowledge faster.
- Delegate projects to System Integrators (SIs) and internal teams with a clean bill of materials.
Considerations
Discovery and Analysis are part of an agile development cycle. Analysis that reflects production code at the scale of the organization is required but is not necessarily sufficient to help the developers. Analysis should also be possible on a smaller scope, the one that the developer cares about, with the current content being developed. This content may differ from production but is meant to be integrated and deployed at some point into production.
Because of the nature of mainframe software, the repository often deals with a very large number of artifacts. Systems with tens of thousands of programs, and programs with tens of thousands of lines, are typical. Both the repository and the workflows supported by the UI need to handle this scale in order to provide usable and valuable information.
The modernization journey of a mainframe application starts with an understanding of the as-is applications using the available processes, techniques, and tools. The discovery process depends on the objectives: some approaches focus on the functional side of the application (for example, API discovery), while others focus on its technical aspects (for example, performance discovery). Correct application understanding ensures the selection of the right levers for modernization toward the desired target state. A trusted partner, like IBM Consulting, with wide advisory and execution experience helps to perform the right kind of discovery for the most desired modernization benefit.
What's next
- Review the other patterns that can directly benefit from this discovery and analysis.
- Review the article “An overview on the Static Code Analysis approach in Software Development” for a theoretical overview of static analysis.
- Learn more about IBM ADDI.
- Short video clips about IBM ADDI functionality
- Try ADDI in a ready to use environment (zTrial)
- ADDI Online Education – ADDI Accelerator
- Browse the ADDI product documentation
- IBM ADDI Resources page
- Find out how IBM Consulting can help you on your modernization journey – IBM Consulting – Mainframe Application Modernization Services
Contributors
Rami Katan
STSM, Chief Architect for zDevOps AI & ML Systems, IBM
Nicolas Dangeville
STSM – Chief Architect for ADDI
Joydeep Banerjee
Associate Partner and Offering Manager, Mainframe Application Modernization Consulting, IBM