October 3, 2024 | By Anabelle Nicoud | 3 min read

How do you overcome bottlenecks when you’re training AI models on massive quantities of data? At this year’s PyTorch conference, IBM Research showcased a groundbreaking data loader for large-scale LLM training. The tool, now available to PyTorch users, aims to simplify large-scale training for as broad an audience as possible.

The origins of the research

The idea for the high-throughput data loader stemmed from practical issues research scientists hit during model training: their work required a tool that could process large amounts of data across many devices while keeping up with increasingly efficient GPUs. As IBM Research notes in its blog about the release, “It’s all thanks to a team of researchers who were simply building the tools they needed to get a job done.”

Davis Wertheimer of IBM Research explains some of the challenges that can emerge during large-scale training: “There’s something of an 80/20 rule when it comes to large-scale training. Eighty percent of all the published literature is looking at algorithmic tradeoffs between GPU memory and communication and computation. But when you actually try to build something, 80% of the time, you can depend on a very long tail of all these other practical issues because the pipeline runs at the speed of the narrowest bottleneck.”

As the IBM team developed their training platform, they continued encountering bottlenecks. “As we get better and better at using our GPUs, more and more often the bottleneck is the data loader,” observes Wertheimer.

This realization led to a dual development process. “There’s been a parallel journey of, on the one hand, evolving our training platform, and, on the other hand, constantly evolving our data loader to keep up with the speed demands from our training platform to avoid bottlenecking it,” he explains.

Key features of the world-class data loader

IBM Research’s Linsong Chu outlines the essential features of the data loader:

Stateful and checkpointable: “Whenever you save a model, your data loader state is also saved, and whenever you recover from a checkpoint, both the model state and data loader states need to be recovered at the same time,” says Chu.
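In practice, that pattern looks something like the sketch below. It assumes a loader object that exposes `state_dict()` and `load_state_dict()` alongside the model and optimizer; the names are illustrative, not IBM’s exact API.

```python
import torch

# Sketch of checkpointing model and data loader state together.
# Assumes `loader` exposes state_dict()/load_state_dict(), as a
# stateful, checkpointable loader would.
def save_checkpoint(path, model, optimizer, loader):
    torch.save({
        "model": model.state_dict(),
        "optimizer": optimizer.state_dict(),
        "loader": loader.state_dict(),  # dataset position, RNG state, etc.
    }, path)

def resume_checkpoint(path, model, optimizer, loader):
    ckpt = torch.load(path)
    model.load_state_dict(ckpt["model"])
    optimizer.load_state_dict(ckpt["optimizer"])
    loader.load_state_dict(ckpt["loader"])  # resume mid-epoch, no data replayed
```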

Auto-rescaling of checkpoints: The data loader automatically adjusts to workload changes during extended training sessions. “Training could easily take weeks or months, and there are tons of reasons why you might have to rescale your workload in the middle,” notes Chu.
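One simplified way to picture rescaling (an illustrative sketch, not the loader’s actual bookkeeping): if progress is tracked as a set of not-yet-seen documents, that saved state can be re-dealt to however many workers the resumed job has.

```python
# Illustrative only: re-shard remaining work across a new world size.
def reshard(remaining_doc_ids, new_world_size):
    """Deal the unseen documents round-robin to the new set of workers."""
    shards = [[] for _ in range(new_world_size)]
    for i, doc_id in enumerate(remaining_doc_ids):
        shards[i % new_world_size].append(doc_id)
    return shards

# e.g., a job checkpointed on 64 GPUs can resume on 96:
# shards = reshard(saved_state["remaining"], 96)
```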

Efficient data streaming: The system supports data streaming with zero build overhead for shuffling.

Asynchronous distributed operation: “We want the data loader to be non-blocking,” Chu explains. “While saving the data loader state, we want the saving to be distributed in a form where zero communication is involved.”
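A rough sketch of that idea, with an assumed per-rank file layout rather than IBM’s actual format: every rank snapshots and writes only its own state, so no collective communication or barrier is needed, and the slow disk write can happen off the training thread.

```python
import os
import threading

import torch
import torch.distributed as dist

def save_loader_state_nonblocking(loader, ckpt_dir):
    """Each rank writes its own shard; no collectives, so no rank waits."""
    rank = dist.get_rank() if dist.is_initialized() else 0
    state = loader.state_dict()  # cheap in-memory snapshot
    path = os.path.join(ckpt_dir, f"loader_state_rank{rank}.pt")
    # Defer the disk write to a background thread so training continues.
    threading.Thread(target=torch.save, args=(state, path), daemon=True).start()
```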

Dynamic data mixing: The data loader can adapt to different data mixing ratios, which is useful for evolving training needs.
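Conceptually, a mixing ratio is just a weighted draw over sub-corpora at sampling time. The sketch below (with hypothetical corpus names and weights) shows the basic idea:

```python
import random

def mixed_stream(datasets, weights, seed=0):
    """Yield examples from several iterators according to mixing weights.

    `datasets` maps a name to an (effectively endless) iterator; `weights`
    maps the same names to sampling probabilities. Changing `weights`
    between checkpoints changes the mix without rebuilding anything.
    """
    rng = random.Random(seed)
    names = list(datasets)
    probs = [weights[n] for n in names]
    while True:
        name = rng.choices(names, weights=probs, k=1)[0]
        yield next(datasets[name])

# e.g., 70% web text, 20% code, 10% math (hypothetical ratios):
# stream = mixed_stream({"web": web_it, "code": code_it, "math": math_it},
#                       {"web": 0.7, "code": 0.2, "math": 0.1})
```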

Efficient global shuffling: The tool addresses memory bottlenecks when handling large datasets, making shuffling efficient even as data grows.
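The classic memory-frugal approach here is a bounded shuffle buffer: keep only a fixed number of items in memory, emit one at random, and refill from the stream. A minimal sketch of that generic technique (not necessarily the exact mechanism IBM uses):

```python
import random

def buffered_shuffle(stream, buffer_size=10_000, seed=0):
    """Approximately shuffle an arbitrarily large stream using
    O(buffer_size) memory instead of loading the whole dataset."""
    rng = random.Random(seed)
    buf = []
    for item in stream:
        buf.append(item)
        if len(buf) >= buffer_size:
            i = rng.randrange(len(buf))
            buf[i], buf[-1] = buf[-1], buf[i]  # move a random item to the end
            yield buf.pop()
    rng.shuffle(buf)  # drain whatever remains
    yield from buf
```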

PyTorch native, modular and extensible: Designed for adaptability and scalability, the data loader is prepared for future growth. “What if next year we have to deal with 30 trillion, 50 trillion or 100 trillion tokens?” asks Chu. “The world is changing fast, so we need to build the data loader so it can not only survive today, but also survive for tomorrow.”

Real-world performance

The IBM Research team rigorously tested the data loader over several months, running hundreds of small and large jobs, and performance remained stable and smooth throughout. The entire data loader also operates asynchronously and never blocks training.

“We leveraged a lot of built-in PyTorch capabilities in order to make all this happen,” says Wertheimer. “That’s why we’re contributing it back.”
