October 7, 2024 | By Sascha Brodsky

Voice AI technology is rapidly evolving, promising to transform enterprise operations from customer service to internal communications.

In the last few weeks, OpenAI has launched new tools to simplify the creation of AI voice assistants and expanded its Advanced Voice Mode to more paying customers. Microsoft has updated its Copilot AI with enhanced voice capabilities and reasoning features, while Meta has introduced voice AI to its messaging apps.

According to IBM Distinguished Engineer Chris Hay, these advances “could change how businesses talk to customers.”

AI speech for customer service

Hay envisions a dramatic shift in how businesses of all sizes engage with their customers and manage operations. He says the democratization of AI-powered communication tools could create unprecedented opportunities for small businesses to compete with larger enterprises.

“We’re entering the era of AI contact centers,” says Hay. “Every mom-and-pop shop can have the same level of customer service as an enterprise. That’s incredible.”

Hay says the key is the development of real-time APIs that allow for extremely low-latency communication between humans and AI. This enables the kind of back-and-forth exchanges that people expect in everyday conversation.

“To have a natural language speech conversation, the latency of the models needs to be around 200 milliseconds,” Hay notes. “I don’t want to wait three seconds… I need to get a response quickly.”

New voice AI technology is becoming accessible to developers through APIs offered by companies like OpenAI. “There’s a production-at-scale developer API where anybody can just call the API and build that functionality for themselves, with very limited model knowledge and development knowledge,” Hay says.
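
As a rough illustration of what "just call the API" means in practice, here is a minimal sketch of a speech-to-speech round trip over a real-time voice API. The endpoint, model name, headers and event names below follow OpenAI's Realtime API as publicly documented around its launch and should be treated as assumptions rather than article details; check the current documentation before relying on any of them.

```python
# A minimal sketch of one turn against a real-time voice API, timing how long
# it takes for the first audio to come back -- the latency callers will feel.
# Assumptions (not from the article): the wss:// endpoint, model name, headers
# and JSON event types mirror OpenAI's Realtime API as documented at launch.
import json
import os
import time

from websocket import create_connection  # pip install websocket-client

API_KEY = os.environ["OPENAI_API_KEY"]
URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"

ws = create_connection(
    URL,
    header=[
        f"Authorization: Bearer {API_KEY}",
        "OpenAI-Beta: realtime=v1",
    ],
)

# Ask the model to respond with audio (plus a text transcript) to a prompt.
start = time.monotonic()
ws.send(json.dumps({
    "type": "response.create",
    "response": {
        "modalities": ["audio", "text"],
        "instructions": "Greet the caller and ask how you can help.",
    },
}))

# Read server events until the first audio chunk arrives.
while True:
    event = json.loads(ws.recv())
    if event.get("type") == "response.audio.delta":
        print(f"First audio after {time.monotonic() - start:.3f}s")
        break  # a real assistant would keep reading and play each audio chunk

ws.close()
```

The point of the sketch is less the specific vendor than the shape of the loop: stream audio or text in, stream audio back out, and keep the gap between them close to the conversational threshold Hay describes.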

The implications could be far-reaching. Hay predicts a “massive wave of audio virtual assistants” emerging in the coming months and years as businesses of all sizes adopt the technology. This could lead to more personalized customer service, the emergence of new AI communication industries and a shift in jobs toward AI management.

For consumers, the experience may soon be indistinguishable from speaking with a human agent. Hay points to recent demonstrations of AI-generated podcasts through Google’s NotebookLM as evidence of how far the technology has come.

“If nobody had told me that was AI, I honestly would not have believed it,” he says of one such demo. “The voices are emotional. Now you’re conversing with the AI in real-time, and that will get better.”

AI voices get personal, literally

The major tech companies are racing to enhance their AI assistants’ personalities and capabilities. Meta’s approach involves introducing celebrity voices for its AI assistant across its messaging platforms. Users can choose AI-generated voices based on stars like Awkwafina and Judi Dench.

However, the promise comes with potential risks. Hay acknowledges that the technology could be a boon for scammers and fraudsters if it falls into the wrong hands.

“You are going to see a new generation of scammers within the next six months who have got authentic-sounding voices that sound like those podcast hosts you heard, with inflection and emotion in their voice,” he warns. “Models that are there to get money out of people, essentially.” This could render traditional red flags, such as unusual accents or robotic-sounding voices, obsolete. “That’s going to be hidden away,” Hay says.

He likens the situation to a plot point in the Harry Potter novels, where characters must ask personal questions to verify someone’s identity. In the real world, people may need to adopt similar tactics.

“How am I going to know that I’m talking to my bank?” Hay muses. “How am I going to know that I’m speaking to my daughter, who’s asking for money? Humans are going to have to get used to being able to ask those questions.”

Despite these concerns, Hay remains optimistic about the technology’s potential. He points out that voice AI could significantly improve accessibility, allowing people to interact with businesses and government services in their native language.

“Think of things like benefit applications, right? And you get all these confusing documents. Think of the ability to be able to call up [your benefits provider] and it’s in your native language, and then being able to translate things—really complex documents—into a simpler language that you’re more likely to understand.”
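
The plain-language use case Hay describes maps onto today's text APIs as readily as the voice ones. Below is a minimal sketch, assuming the OpenAI Python SDK (openai >= 1.0); the model name, the target language and the file benefits_letter.txt are placeholders for illustration, not details from the article.

```python
# A minimal sketch of the "explain this document simply, in my language" idea.
# Assumes the OpenAI Python SDK (openai >= 1.0); the model name and the input
# file are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("benefits_letter.txt", encoding="utf-8") as f:
    document = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                "Rewrite the user's document in plain Spanish at roughly a "
                "sixth-grade reading level, keeping every deadline, amount "
                "and required action intact."
            ),
        },
        {"role": "user", "content": document},
    ],
)

print(response.choices[0].message.content)
```

Pair that with a real-time voice front end and you get the experience Hay sketches: a caller asks about a confusing benefits letter in their own language and hears a simplified explanation back.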

AI voice technology continues to evolve, and Hay believes we’re only scratching the surface of potential applications. He envisions a future where AI assistants are seamlessly integrated into wearable devices like the Orion augmented reality glasses that Meta recently unveiled.

“When that real-time API is in my glasses, I can speak to that real-time as I’m on the move,” Hay says. “Combined with AR, that will be game-changing.” Though he acknowledges the ethical challenges, including a recent incident in which smart glasses were able to instantly discover people’s identities, Hay remains bullish on the technology’s prospects.

“The ethics will need to be worked out, and ethics are critical,” he concedes. “But I’m optimistic.”
