On June 12th, IBM debuted AutoAI, a new set of capabilities for Watson Studio designed to automate critical yet time-consuming tasks associated with designing and optimizing AI in the enterprise. As a result, data scientists are freed to take on more data science and AI projects in their organizations. Read more about AutoAI in the announcement.

To learn more about what these developments mean for the data science community, I sat down with Alexander Gray, vice president of AI, IBM Research, to get his perspective. Alexander has more than 25 years of experience researching machine learning and AI algorithms and theoretical frameworks, and designing solutions for difficult use cases across many industries.

Alexander, what is your view on the business appetite for transforming organizations through the automation of AI, a.k.a. AutoAI? And which use cases are most popular for automated AI today?

We see that the appetite is very high. Organizations are motivated by the potential of automation to dramatically reduce time to market and to increase the number of AI projects they can take on with their available staffing. One thing that many are not yet aware of, but will certainly discover later, is that automation can also significantly increase the quality of solutions.

Tell me the real scoop. What routine tasks are people using automated AI for today? And what is still science fiction?

The current state of automation technology still encounters challenges around use cases that rely heavily on domain knowledge. I would say most data scientists we encounter who are using automation to their advantage today are mainly doing hyper-parameter optimization (HPO). There are many existing technologies that focus on this area. It should be noted that while it is a good place to start, such tools only address a small fraction of the data science process. The part that I would say is still science fiction – meaning too far out to put on a roadmap – is the ability to simply specify the actual business goals and constraints and have the system do the rest; that would require AGI (artificial general intelligence), and we’re not there yet.

For businesses starting out in new data science projects, what part of data science is ideal for automating?

For those new to automating data science, the most straightforward place to start is at the end of the data science pipeline, the modeling stage. Automating HPO is a natural first step because you can see immediate gains in your data science projects. From there, one can move to automating the choice of machine learning model. We are focused on going beyond this stage to also address data preparation, because that is typically where data scientists spend most of their energy and it is of high interest to them. We regard this as one of the most important research frontiers.
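To make that first step concrete, here is a minimal sketch of automated hyper-parameter optimization using scikit-learn's RandomizedSearchCV. This is not AutoAI itself; the model, dataset, and search space are illustrative assumptions.

```python
# Minimal hyper-parameter optimization (HPO) sketch using scikit-learn.
# Illustrative only; AutoAI automates this and more inside Watson Studio.
from scipy.stats import randint
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Search space: distributions over hyper-parameters to sample from.
param_distributions = {
    "n_estimators": randint(50, 300),
    "max_depth": randint(2, 12),
    "min_samples_leaf": randint(1, 10),
}

# Randomized search automates trying candidate configurations
# and keeps the one with the best cross-validated score.
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_distributions,
    n_iter=25,
    cv=5,
    random_state=0,
)
search.fit(X_train, y_train)

print("Best hyper-parameters:", search.best_params_)
print("Held-out accuracy:", search.score(X_test, y_test))
```

In the same spirit, the next stage, automating the choice of machine learning model, typically means searching over candidate estimators as well as their hyper-parameters.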

Is AutoAI going to take jobs away from data scientists?

It is commonly recognized that demand for data scientists is growing worldwide, and that does not even account for the many potential AI projects that organizations have not yet considered. In my experience, most data scientists have a giant backlog of both exploratory and critical AI projects that they and their organizations would love to get to, but they don’t have the bandwidth. Also, when we say “automation” here, we typically mean assistance for humans along some spectrum from light to heavy.

In terms of “taking jobs away,” the automation of AI is really the mechanization of tedious activities, a time-saving benefit that data scientists embrace because they generally prefer thinking to tedium. We are simply making data scientists’ tools smarter and more powerful, and using more powerful tools is a new skill compared to using less powerful ones. As data science skills shift toward these new kinds of tools, job roles will take on greater business responsibility and impact because practitioners are able to create more value.

With AutoAI, will businesses get more from their AI and data science investments? What are the potential drawbacks and misconceptions?

We believe many small vendors and open source projects will appear around automation. While automation offers the potential to do things better and faster, it also has the potential to propagate human errors if there is poor science underneath it. This happens far more easily than most people think. In my experience, even teams of PhDs from top schools commonly make subtle statistical errors that lead to poorer models than would otherwise be possible. For this reason, a high degree of mathematical expertise behind the automation is critical in order to rely on the decisions made by automated AI. And the need for data scientists to have a strong understanding of the underlying principles will not go away, because human oversight will always be needed for the most important applications.

Do you have a personal prediction on how automation of AI impacts our society? Is AI “scary?”

Data science automation will actually create opportunities for many people to enter data science who previously had limited ways to participate. I believe it will enable the creation of entirely new job categories, allowing much wider participation in the AI revolution. We believe this is exciting rather than scary.

What interesting AutoAI research are you currently working on? Any potential breakthroughs you can share?

We are working toward treating the problems of data science automation in a much more fundamental way than I have previously observed. It begins with formalizing the problem of data science in mathematical terms, something we cannot find in existing textbooks. We predict that placing all of the “grungy” aspects of data science on solid mathematical foundations will have deep benefits in both error prevention and quality of solutions. It’s an exciting time in AI, so stay tuned.

Follow Alexander on LinkedIn. Watch Alexander talk about the future of AI, and explore what AutoAI can do for your business.
