For some organizations, AI tools may have been perceived as “nice-to-have” technologies prior to 2020. In a 2019 IBM/Morning Consult survey of businesses, 22% of respondents worldwide reported that they were not using or exploring the use of AI. But in a future characterized by uncertainty, only organizations that embrace advanced AI tools will be able to weather the storms ahead. The COVID-19 pandemic remains an immediate threat, but organizations of all kinds are looking ahead to build resilient systems that can better withstand future pandemics, as well as natural disasters, cyberthreats, and other destabilizing scenarios. The current crisis is an opportunity to examine the performance of the technological systems we use to manage the various aspects of human existence. We can spot failure points and bottlenecks, and imagine how AI could prevent or minimize such failures in the future, especially in moments of extreme stress.

The battle at hand

A number of organizations have deployed AI to provide information to the public at a time when many people are looking for accurate data, and traditional channels for presenting that information cannot readily scale to meet demand. IBM, in partnership with The Weather Company, has released the Weather Channel Interactive Incidents Map, which presents the latest COVID-19 data at the local level. The IBM Cognos team released the IBM Global COVID-19 Statistics Dashboard, a more robust tool that serves data scientists, researchers, media organizations, and others who need to conduct deeper analysis. Both tools offer richer, up-to-date information by using Watson Natural Language Understanding (NLU) and Watson Discovery to aggregate county- and state-level COVID-19 data stored in heterogeneous, often unstructured and semi-structured forms, synthesizing it into one structured presentation. They point to a future where AI is commonly used to provide up-to-the-minute, accurate information culled from a variety of sources and data types.
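
To make that aggregation pattern concrete, here is a minimal sketch, not IBM's actual pipeline, of how heterogeneous records might be normalized into one structured schema. The regular expression, the field names (admin2, province_state, confirmed), and the sample records are all illustrative assumptions.

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class CountyRecord:
    county: str
    state: str
    cases: int

def parse_free_text(text: str) -> Optional[CountyRecord]:
    """Pull a county-level case count out of an unstructured sentence,
    e.g. 'Cook County, IL reported 1,204 confirmed cases today.'"""
    match = re.search(
        r"([A-Z][\w\s]*?) County,\s*([A-Z]{2}).*?([\d,]+)\s+confirmed cases", text
    )
    if not match:
        return None
    county, state, cases = match.groups()
    return CountyRecord(county.strip(), state, int(cases.replace(",", "")))

def from_semi_structured(record: dict) -> CountyRecord:
    """Map a semi-structured JSON record onto the same schema."""
    return CountyRecord(
        county=record["admin2"],
        state=record["province_state"],
        cases=int(record["confirmed"]),
    )

# Heterogeneous inputs, one structured output
records = [
    parse_free_text("Cook County, IL reported 1,204 confirmed cases today."),
    from_semi_structured({"admin2": "Kings", "province_state": "NY", "confirmed": "5310"}),
]
print([r for r in records if r is not None])
```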

Aiding overwhelmed help desks

In times of uncertainty, customers call their airline to ask about the company’s in-flight safety protocols. They call a retailer to ensure their delivery won’t be impacted by shipping delays. They call their hospital to check if it’s safe to come in for a routine checkup. These kinds of queries can be readily fielded by a virtual assistant, especially when AI is further leveraged to source the latest information from ever-changing data sets.

Citizens also require fast, accurate information from the public sector. In response to the current crisis, IBM trained watsonx Assistant on trusted information from the CDC and other sources and offered it at no cost for at least 90 days to governments, businesses, and healthcare and academic organizations. The assistant uses Natural Language Processing (NLP) capabilities to help institutions provide the latest information from trusted sources during severe spikes in stakeholder queries. This frees human agents to address the most complex issues, and provides stakeholders with faster, more accurate responses to simple or common questions. We’re going to see more of this throughout 2020, with virtual assistants addressing increasingly complex queries as NLP advances.
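
The core pattern is simple enough to sketch. The toy assistant below, which is not Watson Assistant's API, matches incoming questions against a small set of curated intents and escalates anything it can't answer confidently; the intents, answers, and confidence threshold are illustrative assumptions.

```python
from difflib import SequenceMatcher

# Curated answers, refreshed from trusted sources (a CDC feed, a policy page, etc.).
# These intents and answers are illustrative placeholders.
KNOWLEDGE_BASE = {
    "is it safe to fly": "Our current in-flight safety protocols are ...",
    "will my delivery be delayed": "Orders placed this week may ship 2-3 days late ...",
    "can i come in for a checkup": "Routine visits are available; please wear a mask ...",
}

CONFIDENCE_THRESHOLD = 0.6  # below this, hand off to a human agent

def answer(query: str) -> str:
    """Match a customer query against known intents; escalate when unsure."""
    best_intent, best_score = None, 0.0
    for intent in KNOWLEDGE_BASE:
        score = SequenceMatcher(None, query.lower(), intent).ratio()
        if score > best_score:
            best_intent, best_score = intent, score
    if best_score >= CONFIDENCE_THRESHOLD:
        return KNOWLEDGE_BASE[best_intent]
    return "Let me connect you with an agent who can help."

print(answer("Is it safe to fly right now?"))
```

A production assistant would replace the string-similarity matcher with a trained NLP intent classifier, but the routing logic, answer confidently or escalate to a person, is the same.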

Repairing broken supply chains

During times of disruption, every link in the supply chain presents a potential point of failure for every participating organization. The COVID-19 pandemic brought demand spikes and incapacitated workforces, grinding production lines, shipping networks, and other supply chain components to a halt. For example, household items like hand sanitizer and disinfectant spray were initially inaccessible to millions. When personal protective equipment was unavailable for long stretches, hospitals resorted to supplying their nurses and doctors with garbage bags as improvised gowns. In the U.S., months after the initial stateside outbreak, testing supplies remained inaccessible in many locations.

Traditional supply chains are designed to deliver an appropriate amount of supply across the chain based on historical data and basic statistical analysis. But smarter supply chains powered by AI can be built for resilience and flexibility during such disruptions. These supply chains are characterized by automated decision-making and real-time insights into productivity, inventory, and other elements of the chain. Forty-six percent of supply chain executives anticipate that AI/cognitive computing and cloud applications will be their greatest areas of investment in digital operations over the next three years.

An AI-infused supply chain is animated by sensors, RFID tags, actuators, GPS, news media data, and more — all of which are constantly pinging a central database with updated information. Machine learning can be used to process all of that data and make recommendations like adjusting purchase volume, expediting a shipment, or ramping up production in a factory. When a disaster impacts a link in the supply chain, managers can respond quickly and effectively.
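
As a rough illustration of that last step, the sketch below flags demand anomalies in a rolling window of sensor readings and suggests an action. A production system would use trained forecasting models rather than a simple z-score; the window size, threshold, and sample data here are assumptions.

```python
import statistics
from collections import deque
from typing import Optional

class InventoryMonitor:
    """Watch a rolling window of demand signals for one SKU and flag
    deviations large enough to warrant a supply chain action."""

    def __init__(self, window: int = 30, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)
        self.z_threshold = z_threshold

    def ingest(self, units_demanded: float) -> Optional[str]:
        alert = None
        if len(self.history) >= 10:  # wait for a baseline before judging
            mean = statistics.mean(self.history)
            stdev = statistics.stdev(self.history) or 1e-9  # guard against zero
            z = (units_demanded - mean) / stdev
            if z > self.z_threshold:
                alert = f"Demand spike (z={z:.1f}): expedite shipments, ramp up production."
            elif z < -self.z_threshold:
                alert = f"Demand drop (z={z:.1f}): reduce purchase volume."
        self.history.append(units_demanded)
        return alert

monitor = InventoryMonitor()
daily_demand = [100, 104, 98, 101, 99, 103, 97, 102, 100, 101, 340]
for day, demand in enumerate(daily_demand):
    if (alert := monitor.ingest(demand)) is not None:
        print(f"day {day}: {alert}")
```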

Cutting through the paperwork

Organizations are using machine learning to reduce the administrative burden of paperwork and to aid researchers in the search for therapies and vaccines. Six hospitals are now using watsonx Assistant not only to field stakeholder queries, but also to consolidate the query data so they can improve the quality of future responses.

It’s not just the sheer volume of paperwork that poses a problem for large organizations; it’s also how quickly that data goes stale. For example, U.S. healthcare industry regulation has ramped up in response to COVID-19. For organizations in the healthcare space to provide accurate data to their stakeholders, they need to check government data constantly, sometimes multiple times a day, and manually transcribe it into their own databases.

In the early days of the outbreak, the healthcare industry saw as much regulatory change in 18 days as would normally be expected over a five-year period. Healthcare organizations have to get information out on the latest regulatory changes so that hospitals, skilled nursing centers, and patient care facilities can serve patients instead of dealing with endless paperwork. The good news is that data ingestion can be automated, and new trusted information can be served to stakeholders with minimal or no human involvement.
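
Here is a minimal sketch of what such automated ingestion could look like: a job that polls a government feed several times a day and upserts each record into a local store. The endpoint URL, record schema, and polling interval are placeholders, not a real agency API.

```python
import sqlite3
import time
import requests

# Placeholder endpoint and schema; a real pipeline would target the specific
# agency feed (CDC, CMS, a state health department) it needs to track.
FEED_URL = "https://example.gov/api/regulatory-updates"
POLL_INTERVAL_SECONDS = 4 * 60 * 60  # several times daily

def init_db(path: str = "updates.db") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS regulatory_updates (
        update_id TEXT PRIMARY KEY, title TEXT, effective_date TEXT, body TEXT)""")
    return conn

def ingest_once(conn: sqlite3.Connection) -> int:
    """Fetch the feed and upsert each record; returns the number of rows written."""
    records = requests.get(FEED_URL, timeout=30).json()
    with conn:  # one transaction per poll
        conn.executemany(
            """INSERT INTO regulatory_updates VALUES (:id, :title, :effective, :body)
               ON CONFLICT(update_id) DO UPDATE SET
                 title=excluded.title, effective_date=excluded.effective_date,
                 body=excluded.body""",
            records,
        )
    return len(records)

if __name__ == "__main__":
    conn = init_db()
    while True:
        print(f"ingested {ingest_once(conn)} updates")
        time.sleep(POLL_INTERVAL_SECONDS)
```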

Robotic process automation (RPA) is the use of software bots to automate highly repetitive, routine tasks normally performed by knowledge workers. The shift toward remote work has brought an opportunity to rethink the way work is done throughout the enterprise. Data entry and other manual tasks can be automated, reducing completion times for repetitive tasks by as much as 90 percent. The result is faster, more accurate communication between the organization and its stakeholders, at tremendous cost savings.
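
A hypothetical RPA-style bot might look like the following: it transcribes spreadsheet rows into an internal system that a person would otherwise re-key by hand. The endpoint, column names, and payload fields are invented for illustration.

```python
import csv
import requests

# Illustrative: this endpoint and these column names stand in for whatever
# internal system a human would otherwise re-key data into.
CLAIMS_API = "https://intranet.example.com/api/claims"

def process_spreadsheet(path: str) -> None:
    """Transcribe each row of a spreadsheet into the claims system,
    the kind of re-keying task an RPA bot takes over from a person."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            payload = {
                "patient_id": row["Patient ID"].strip(),
                "procedure_code": row["CPT Code"].strip(),
                "amount": float(row["Billed Amount"]),
            }
            resp = requests.post(CLAIMS_API, json=payload, timeout=10)
            resp.raise_for_status()  # surface failures instead of silently skipping

process_spreadsheet("daily_claims.csv")
```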

Filling in the gaps when work is spread out

Most companies with a large office presence have been forced to transition toward a distributed work model, with employees working from home or other remote locations. While distributed work has many benefits, it presents challenges to employees accustomed to synchronous working hours in close proximity to their colleagues. Distributed work is especially challenging for IT departments, whose work often entails interfacing with physical components such as servers and hard drives. Data access and governance concerns also arise when employees use networks and equipment that the organization does not own and operate.

With the vast amounts and types of data that the modern enterprise must process every day, IT professionals cannot hope to manage their data infrastructure using standard analytics techniques alone. The emerging field of AIOps addresses this problem with artificial intelligence. AIOps involves collecting data from across the organization and delivering alerts and notifications to IT managers in their usual chat environment, such as Slack. That data can be used to resolve outages before they impact the bottom line, reducing operating costs and improving the productivity of the IT department along the way. An AIOps platform can then automate the process of resolving an issue, such as routing traffic from one server to another or remotely restarting a network device. This transforms IT problem resolution from a reactive process into a proactive one, and helps IT managers perform their role from a remote location.
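
As a rough sketch of that loop, and not of any particular AIOps product, the snippet below checks a service metric, posts an alert to a Slack incoming webhook, and runs a simple automated remediation. The webhook URL, error-rate threshold, service name, and systemctl-based restart are all assumptions.

```python
import subprocess
import requests

SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder URL
ERROR_RATE_THRESHOLD = 0.05

def notify(message: str) -> None:
    """Post an alert into the team's Slack channel via an incoming webhook."""
    requests.post(SLACK_WEBHOOK, json={"text": message}, timeout=10)

def remediate(service: str) -> None:
    """Automated first response: restart the unhealthy service. Real AIOps
    platforms select from richer runbooks (rerouting traffic, scaling out, etc.)."""
    subprocess.run(["systemctl", "restart", service], check=True)

def check(service: str, error_rate: float) -> None:
    if error_rate > ERROR_RATE_THRESHOLD:
        notify(f"{service} error rate {error_rate:.1%} exceeds threshold; restarting.")
        remediate(service)
        notify(f"{service} restarted automatically.")

# In practice error_rate would come from the monitoring pipeline;
# it is hardcoded here for illustration.
check("checkout-api", error_rate=0.12)
```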

The necessity of remote work will accelerate deployments of AIOps solutions for managing enterprise data. It’s not difficult to imagine a future where every department in the organization uses AI tools to keep remote employees abreast of the latest developments and on the same page.

More trustworthy AI

One of the effects of living in disruptive times is that human behavior can change in unpredictable ways, complicating the jobs of those who design and manage predictive algorithms. Machine learning models are built to respond to shifts in data sets, but poorly designed models, or those that aren’t routinely monitored, underperform when input data and training data diverge substantially. For example, imagine a streaming entertainment company using current data to predict viewing patterns in 2021. The extreme spikes in 2020, driven by quarantined viewers bingeing an unusual amount of television, might skew the recommendations the algorithm provides to the company’s managers.

One solution to this problem is simply to train on a larger volume of data, including data from periods characterized by disruptive events. But beyond that, it’s crucial to build trust and transparency into any AI system, so that when unusual human behavior during extreme times generates outlier data, data scientists can verify that their models aren’t making faulty recommendations based on misinterpretations of that data.
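
One widely used monitoring technique, offered here as a generic sketch rather than any specific product's method, is a two-sample Kolmogorov-Smirnov test comparing live inputs against the distribution the model was trained on. The synthetic "viewing hours" data and the significance level below are illustrative.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Stand-ins for one model feature, e.g. daily viewing hours per subscriber:
training_data = rng.normal(loc=2.0, scale=0.5, size=5000)  # pre-2020 baseline
live_data = rng.normal(loc=3.5, scale=1.2, size=1000)      # lockdown-era spike

def drifted(train: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    """Two-sample KS test: a small p-value means the live distribution
    no longer matches what the model was trained on."""
    statistic, p_value = ks_2samp(train, live)
    print(f"KS statistic={statistic:.3f}, p={p_value:.2e}")
    return p_value < alpha

if drifted(training_data, live_data):
    print("Distribution shift detected: review the model before trusting its output.")
```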

A study of 5,000 C-level executives by IBM shows that 82 percent of businesses surveyed are considering using AI, but 60 percent are hindered by concerns over trust and compliance.

Fortunately, better trust and transparency measures are now available to data scientists. Products like IBM Watson Studio on IBM Cloud Pak for Data track the activity of AI models so managers can interpret the reasoning behind their recommendations, helping to address bias and boost confidence in outcomes.
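
One concrete transparency check, shown here as a generic sketch rather than a Watson Studio feature, is the disparate impact ratio: the rate of favorable outcomes for an unprivileged group divided by the rate for the privileged group, with values below 0.8 (the "four-fifths rule") commonly flagged for review. The loan-approval data below is invented for illustration.

```python
import numpy as np

# Illustrative loan-approval outcomes (1 = approved), split by a protected attribute
approved = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0])
group = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1])  # 1 = unprivileged

def disparate_impact(outcomes: np.ndarray, groups: np.ndarray) -> float:
    """Ratio of favorable-outcome rates: unprivileged over privileged.
    The 'four-fifths rule' flags ratios below 0.8 for review."""
    rate_unpriv = outcomes[groups == 1].mean()
    rate_priv = outcomes[groups == 0].mean()
    return rate_unpriv / rate_priv

ratio = disparate_impact(approved, group)
flag = "  <- review for bias" if ratio < 0.8 else ""
print(f"disparate impact = {ratio:.2f}{flag}")
```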

While many organizations are struggling to maintain the status quo, it’s important for leaders to see the current crisis as a learning opportunity. With an AI-powered approach to the problems above, uncertainty can be minimized and managed. CIOs who embed these AI tools in their infrastructure will ensure that the organization can do more than simply play catch-up during volatile times; they will lead their organizations to a position where global disruptions are far less likely to disrupt in the first place.

See how Watson is helping businesses adapt to a changing workplace
