Imagine this:
For all these problems, it seems like increased automation could help. After all, you can't ask your teams to keep doing more in the same amount of time, and more streamlined, automated processing could solve some of the issues. Automation can handle the routine cases, freeing up people's time to work on harder problems and improve customer service.
Maybe you have brought in external consultants who have told you about the wonders of robotic process automation (RPA), AI-powered decision management and business process management, and those things seem to work elsewhere. Maybe you have used RPA on a specific task, but it did not improve the overall process. Your business analysts have drawn up process maps, interviewed employees and formed some ideas about where the issues stem from. But the investment in automation needs a clear business case before your management will agree to spend money ahead of seeing results, and so far the business case is a bit wishy-washy. You are not sure the ROI will really be there.
What is missing in this scenario?
In all the examples above, management has a good intuition that something needs to be fixed, and the KPIs used to measure the business show there is a problem. But discovering the true root causes and ensuring that proposed solutions will have the right impact requires something else: actual data about how business processes are being executed today, root-cause analysis of that data and simulation of the impact of automation against an accurate model of how the business works.
Process mining brings that missing link to the table.
Process mining gives you the tools and methodology you need to unlock the data that shows how processes work, how people do their jobs and where problems come from. It gives you analytics to dig deeper into the business and uncover where automation and other process changes can have the biggest impact. It can then simulate how the business would operate with those automations in place, letting you focus on the solutions that maximize expected results and radically increasing your confidence that your investment in automation will really pay off. Finally, you can use the same tools to measure the actual improvement against the estimates and continue your journey to automation.
In this chapter, we will describe the various steps in process mining and some of the analyses available to help you create your own data-driven roadmap to automation. We will be using IBM Process Mining, the IBM-enhanced version of myInvenio, as the tool of choice. IBM Process Mining has an especially rich set of analytics and simulation capabilities, with links to the rest of the IBM Automation portfolio. It includes capabilities like business rule mining, task mining, multilevel process mining, reference model comparisons and the ability to create simulation data for a process model to get insights independently of historical data.
The overall paradigm of process mining is straightforward. Look at this diagram:
The idea is to go from process execution data (found in system logs or recorded from people's desktops), to analyses of that data that help you understand how the process works, to discovering where there are meaningful opportunities for automation, to simulating the impact of the proposed changes using the model created during the analysis. Then you can build the automations that will have the most impact using RPA, decision management and the other technologies in the automation toolbox. To close the loop, you can measure the impact of the changes by gathering new data from the updated process and repeating the cycle.
Let’s look at each of these areas in a little more detail, then dive into some of the analyses and tools you can use to dig into your business and IT processes.
The first step in process mining is to gather the data that will be needed for the analysis. This is usually the most time-consuming part of a process-mining project. You will need to figure out where the data sits, how to access it and how to format it in a way that the process-mining tool can use.
We are looking for data that shows how people execute processes. There are two primary sources: system logs from the systems people interact with, and records of the actions people take on their desktops while executing a process.
Systems that people use include ERP systems, CRM systems, IT ticketing systems, accounting systems and so on. First, take an inventory of the systems used in the process you are trying to analyze and the kinds of information those systems store. Often, these systems write an entry to a log or database whenever someone executes a transaction or change. We are looking for these transaction or event logs. The data should include the action executed (which task was being done), an ID for the process instance (typically a contract number, client number, ticket number or similar), a timestamp for when the event occurred, who executed the action (the user ID) and possibly other fields useful for the analysis, such as how long something took or the outcome of the event.
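To make this concrete, here is a minimal sketch of what that preparation step might look like in Python with pandas. The file name and column names are purely illustrative; your own systems will differ:

```python
import pandas as pd

# Hypothetical raw export from a ticketing system; names are illustrative.
raw = pd.read_csv("ticket_events.csv")

# Map the source columns onto the fields every process-mining tool needs:
# a case ID, an activity name, a timestamp and (optionally) the user.
log = raw.rename(columns={
    "ticket_id": "case_id",
    "action": "activity",
    "event_time": "timestamp",
    "agent": "user",
})
log["timestamp"] = pd.to_datetime(log["timestamp"])

# Sort events within each case so downstream analyses see the true order.
log = log.sort_values(["case_id", "timestamp"]).reset_index(drop=True)
log.to_csv("event_log.csv", index=False)
```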
This phase of the project needs the involvement of the folks in IT who understand how these systems work, where the data can be accessed and what its format is, so that it can be read and transformed into the right format for the process-mining tool. For some systems, such as SAP, the process-mining tool helps by providing predefined connectors that do a lot of this work.
The other main source of useful data is watching what people do on their desktops when executing processes, an approach called task mining. We can install recorders on the desktops and configure them to store an event whenever a user does something related to their process execution job (and to ignore everything else they do, ensuring privacy for unrelated activities). These event logs are sent to a central server, where they are consolidated with everyone else's records; when enough data has been gathered, it can be fed to the process-mining tool for analysis.
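As a rough illustration, one recorded desktop event might look something like the sketch below. All of the field names are hypothetical, but they mirror the event-log attributes described earlier:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DesktopEvent:
    """One recorded user action; field names are hypothetical."""
    user_id: str
    application: str   # e.g., the CRM screen the user was working in
    action: str        # e.g., "copy", "paste", "submit form"
    case_hint: str     # a ticket or contract number scraped from the screen
    timestamp: str     # ISO-8601, so events can be ordered on the server

event = DesktopEvent(
    user_id="u1024",
    application="crm-client",
    action="submit form",
    case_hint="TICKET-8841",
    timestamp=datetime.now(timezone.utc).isoformat(),
)

# The recorder would batch up records like this and ship them to the server.
print(json.dumps(asdict(event)))
```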
System logs and task mining are complementary ways of getting historical process execution data and are often used for different purposes. System logs are good for doing an overall process analysis and seeing the big picture, especially for processes that are centered on modifying data in one system or a small number of systems (e.g., ERP systems for accounting, CRM systems for sales and marketing processes or IT ticketing systems for help desks). Task mining is good for getting down to the details – exactly what actions does a person take to execute a task, under what circumstances, with what variations and so on. This is very helpful when you are considering automating these actions using RPA tools that focus on that fine-grained detail.
Once the event data has been prepared and fed into the process-mining tool, it can analyze the data to produce a set of visualizations that can be used to pinpoint problems.
One basic analytics visualization is the process map, which shows the set of tasks executed during the process and how they are connected: which tasks follow which others and in what order. Because the process may follow a different sequence depending on the type of case, the process map shows the different "process variants" that have been used. In addition to the map itself, the analytics can show which tasks and which variants are executed most often, which take the most time or which cost the most. This is your first clue to finding issues: the tasks and sequence variants that occur most often, take the most time or cost the most are good candidates for further investigation.
The image below shows how this works. Each task in the process is a labeled box. The darker the color of the box, the more often it is executed in the data set provided. The number in the box shows the number of times the task was executed. For example, the task “Authorization Requested” was executed 46,415 times. The arrows indicate which tasks follow which others. In our case, the “Authorization Requested” task led to the “BO Service Closure” task 44,560 times (presumably, the remaining 1,855 cases were either rejected or still pending when the data was analyzed):
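If you are curious where numbers like these could come from, here is a simplified sketch that derives task frequencies and directly-follows counts from the event log we prepared earlier. It illustrates the idea; it is not how IBM Process Mining computes its maps internally:

```python
import pandas as pd

log = pd.read_csv("event_log.csv", parse_dates=["timestamp"])
log = log.sort_values(["case_id", "timestamp"])

# Task frequency: how often each activity appears (the box shading in the map).
task_counts = log["activity"].value_counts()

# Directly-follows counts: pair each event with its successor in the same case
# (the arrow counts, e.g. Authorization Requested -> BO Service Closure).
log["next_activity"] = log.groupby("case_id")["activity"].shift(-1)
edge_counts = (log.dropna(subset=["next_activity"])
                  .groupby(["activity", "next_activity"])
                  .size()
                  .sort_values(ascending=False))

print(task_counts.head())
print(edge_counts.head())
```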
This starts to become even more interesting when we look at the time spent between various tasks:
The darker colors indicate that more time is spent on a task. A couple pop out as problematic: "Pending Request for Network Information" and "Network Adjustment Requested" seem to take a lot of time. Perhaps automating the network information and network adjustment requests would speed them up. You can see how powerful this kind of information is.
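The timing view can be derived the same way. Extending the sketch above, the gap between an event and the next event in the same case approximates the time spent on that hand-off:

```python
# Pair each event with the timestamp of its successor within the same case.
log["next_timestamp"] = log.groupby("case_id")["timestamp"].shift(-1)
log["wait"] = log["next_timestamp"] - log["timestamp"]

# The slowest hand-offs surface at the top, pointing to the bottlenecks.
mean_wait = (log.dropna(subset=["next_activity"])
                .groupby(["activity", "next_activity"])["wait"]
                .mean()
                .sort_values(ascending=False))
print(mean_wait.head())
```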
You can also see which paths through the tasks are happening with what frequency, which is another clue to finding issues:
In this example, the highlighted path is the most frequent way this process is executed: it is used 27.8% of the time and takes 19 days and 13 hours on average. If we can improve this variant first, it will probably have the biggest impact on the overall process. In this variant, however, there is no Network Information Requested or Network Adjustment Requested task, so maybe our previous idea was not, in fact, the right place to look. If those tasks are not executed very often, improving them won't help the overall KPIs, even if we improve them by a lot.
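Variant statistics like these come from grouping cases by the exact sequence of activities they followed. A minimal sketch, building on the same event log:

```python
# A variant is the full activity sequence a case followed; counting cases per
# sequence shows which paths dominate (like the 27.8% variant above).
variants = (log.groupby("case_id")["activity"]
               .apply(lambda acts: " -> ".join(acts))
               .value_counts(normalize=True))

print(variants.head())  # top variants and their share of all cases
```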
But how can we see these KPIs? The tool lets us compute KPIs from the available data and see how they relate to the different areas we are looking at. See this image:
Here, we can see how a KPI (in this case maverick buying, meaning purchases made outside the approved procurement process) can be displayed in the context of a process variant. In this example, we can see both the amount of maverick buying over time and a breakdown of maverick buying by vendor. This could point us to specific products and vendors to focus on, which might lead us to introduce automatic alerts or process redirects when purchase orders for those vendors are detected.
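Under the hood, a KPI breakdown like this is an aggregation over the event data. Here is a sketch assuming the log carries vendor and is_maverick columns, both of which are hypothetical; the real tool computes and charts this for you:

```python
# Trend over time: the share of maverick purchases per month.
monthly = (log.assign(month=log["timestamp"].dt.to_period("M"))
              .groupby("month")["is_maverick"].mean())

# Breakdown by vendor: who generates the most maverick buying.
by_vendor = (log.groupby("vendor")["is_maverick"]
                .agg(rate="mean", cases="count")
                .sort_values("rate", ascending=False))

print(by_vendor.head())  # vendors to target with alerts or process redirects
```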
Another clue to finding process irregularities comes from conformance checking: finding process variants that do not correspond to a predefined process map of how the process should work. In IBM Process Mining, you can upload a "reference model" that describes the prescribed way a business process should be executed, for instance a process map made with IBM Blueworks Live. The analytics can then compare that reference model to the actual data and point out inconsistencies. These are typically good places to start looking for errors and problems.
The image below shows how the conformance checking is displayed in the tool. In this example, we can see that 39,300 cases are non-conformant and the red tasks indicate which ones are unexpected for particular process variants. In this case, we can also see that the tool has calculated that USD 3,439 is spent per non-conformant case, based on time spent and the cost of people’s time:
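Conceptually, a simple form of conformance checking compares the transitions observed in the data against those the reference model allows. The sketch below flags any case that uses a transition outside the reference set; the "Request Received" activity is made up for illustration, and the real tool performs a much richer model comparison:

```python
# The reference model, reduced to its set of allowed directly-follows pairs.
allowed = {
    ("Request Received", "Authorization Requested"),
    ("Authorization Requested", "BO Service Closure"),
    # ... the rest of the prescribed model would be listed here
}

# Flag every observed transition that the reference model does not allow.
pairs = log.dropna(subset=["next_activity"])
pairs = pairs.assign(ok=[(a, b) in allowed
                         for a, b in zip(pairs["activity"], pairs["next_activity"])])

non_conformant_cases = pairs.loc[~pairs["ok"], "case_id"].unique()
print(f"{len(non_conformant_cases)} non-conformant cases")
```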
The tool has many other ways to display information and drill down into the data. As you gain expertise with it, you will discover many new ways of gaining insight into your business.
Once you have understood your process better, it is time to figure out what to do to improve the process. There are many things you can do, but let’s focus on one area that is particularly useful with process mining — how to use RPA bots to automate bottleneck tasks to improve the overall process flow.
RPA bots, for the most part, replicate repetitive human actions on the desktop so they can be done more easily, freeing up people to spend their time on activities that require deeper thinking. Good candidates for RPA bots are tasks that are executed often, that are repetitive, where automation saves real time, and whose automation has a positive overall impact on the business KPIs (for instance, overall process resolution time, which can lead to better customer service or alignment with regulatory deadlines).
Once you create an RPA bot, you can replicate it and execute it on different virtual desktops, and you can mix human activity with automated activity, depending on how much you want to scale the RPA solution.
When you use task mining with IBM Process Mining, the tool does a lot of the work for you in helping determine the best task candidates for RPA. Take a look at this screenshot:
Here, the process-mining tool has pointed out two activities whose automation with a bot might have a big positive impact on the overall process: Network Service Closure and BO Service Closure. Furthermore, there are parameters that can be adjusted to estimate the overall impact, depending on what percentage of these tasks is automated with a bot and how many variant versions of these tasks are included in the bot. With the current parameters, the estimated savings per process instance is USD 369.36, and the overall savings in terms of human labor freed up for other activities is USD 43,054.76 for the 116,566 cases in this data set. These are definitely viable candidates for effective automation.
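The arithmetic behind an estimate like this is straightforward, even though the product's actual formula is more sophisticated. A back-of-the-envelope sketch, where every parameter is an assumption you would tune to your own data:

```python
# All numbers below are illustrative assumptions, not IBM Process Mining's formula.
cases = 116_566               # cases in the data set
automation_rate = 0.80        # share of task executions handed to the bot
minutes_saved_per_case = 25   # human time the bot removes from one case
hourly_cost = 40.0            # loaded cost of an employee hour, in USD

saved_hours = cases * automation_rate * minutes_saved_per_case / 60
print(f"Freed-up labor: {saved_hours:,.0f} hours "
      f"(~USD {saved_hours * hourly_cost:,.0f})")
```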
Now, we can also simulate the overall process execution in this RPA scenario. See the image below:
Here, we have run a simulation with a certain percentage of the human tasks in the original data replaced by RPA bots that execute those tasks much more quickly. We can see that in this simulation run, the overall average process execution time for this account closure example went down from 18 days and 16 hours to 16 days and 6 hours, a time savings of more than 10% that has a direct impact on customer satisfaction and regulatory compliance. The ROI for automation is clear and well-defined in terms of both financial savings and KPI improvement.
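To give a feel for what such a simulation does, here is a toy version building on the earlier sketches: it hands a random share of the two candidate tasks to a much faster bot and recomputes the average cycle time. The share and speed-up factors are assumptions, and the product's simulator accounts for far more than this simple calculation:

```python
import numpy as np

rng = np.random.default_rng(42)
bot_tasks = {"Network Service Closure", "BO Service Closure"}
bot_share = 0.80    # assumed fraction of these tasks handed to the bot
bot_speedup = 0.05  # assume the bot needs ~5% of the human hand-off time

# Approximate a case's cycle time as the sum of its hand-off waits, then
# shrink the waits on the randomly chosen bot-executed task instances.
sim = log.dropna(subset=["next_activity"]).copy()
is_bot = sim["activity"].isin(bot_tasks) & (rng.random(len(sim)) < bot_share)
sim["sim_wait"] = sim["wait"].where(~is_bot, sim["wait"] * bot_speedup)

before = sim.groupby("case_id")["wait"].sum().mean()
after = sim.groupby("case_id")["sim_wait"].sum().mean()
print(f"avg cycle time: {before} -> {after}")
```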
The next steps are to create and deploy the RPA bot, and then start measuring the real impact of the change using the same process-mining tool and setup. This will let us both determine whether the expected ROI was actually met and point out the next set of automations to improve the business process. This repeated cycle of process monitoring, measurement and automation is the engine that drives ongoing business improvement, and it is one of the main drivers behind the excitement around process mining as the next technology advance in automation.
We have seen how we can take business process execution data, create detailed analytics to drill down and understand how the business really operates, and then use those analytics to create automations that drive significant improvements, both in cost savings and in customer satisfaction.
But we have really just scratched the surface of what we can do with process mining; it can help in many other areas as well.
You can now understand why process mining is generating such excitement in the market and how it is really the best first step in your journey to automating your business.
Make sure you check out The Art of Automation podcast, especially Episode 17, from which this chapter came.