The end of AI scaling may not be near: Here's what's next

As AI systems achieve superhuman performance on increasingly complex tasks, the industry is grappling with whether ever-larger models can keep delivering gains – or whether innovation must take a different route.

The prevailing approach to developing large language models (LLMs) has been that bigger is better, and that performance scales with more data and more computing power. However, recent media discussions have focused on how LLMs are approaching their limits. “Is AI hitting a wall?” The Verge asked, while Reuters reported that “OpenAI and others seek new path to smarter AI as current methods hit limitations.”

The concern is that the scaling which has driven advances for years may not extend to the next generation of models. Reports suggest that the development of frontier models such as GPT-5, which push the current limits of artificial intelligence, may face challenges due to diminishing performance gains during pre-training. The Information reported on these challenges at OpenAI, and Bloomberg covered similar news at Google and Anthropic.

This has led to concerns that these systems may be subject to the law of diminishing returns – where each added unit of input yields progressively smaller gains. As LLMs grow larger, the costs of obtaining high-quality training data and of scaling the infrastructure increase exponentially, reducing the performance returns on new models. Compounding the problem is the limited availability of high-quality new data, as much of the accessible information has already been incorporated into existing training datasets.
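
To make the diminishing-returns point concrete, the empirical scaling laws published by OpenAI and DeepMind researchers (Kaplan et al., 2020; Hoffmann et al., 2022) model an LLM’s loss as a power law in its inputs. The rough figures below come from those papers’ published fits, not from this article:

L(N, D) ≈ E + A/N^α + B/D^β

Here N is the parameter count, D is the number of training tokens, E is an irreducible loss floor, and the fitted exponents α and β are each roughly 0.3. Because the improvable terms shrink only as small powers of N and D, doubling either input trims its term by only about 20% – each successive doubling of compute buys a smaller absolute gain, even as its cost grows geometrically.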

This doesn’t mean the end of performance gains for AI. It simply means that sustaining progress will require further engineering – innovation in model architecture, optimization techniques and data usage.

Lessons from Moore’s Law

A similar pattern of diminishing returns appeared in the semiconductor industry. For decades, the industry had benefited from Moore’s Law, which predicted that the number of transistors would double every 18 to 24 months, driving dramatic performance improvements through smaller, more efficient designs. Even that eventually ran into diminishing returns, beginning somewhere between 2005 and 2007, when Dennard scaling – the principle that shrinking transistors also reduces their power consumption – hit its limits, fueling predictions of the death of Moore’s Law.

I saw this problem up close when I worked at AMD from 2012 to 2022. It didn’t mean that semiconductors – and by extension computer processors – stopped improving from one generation to the next. It meant that the improvements came from chiplet designs, high-bandwidth memory, optical switches, larger caches and accelerated computing architectures rather than from shrinking transistors.

New ways to progress

Similar dynamics are already evident in current LLMs. Multimodal AI models such as GPT-4o, Claude 3.5 and Gemini 1.5 have demonstrated the power of integrating text and image understanding, enabling progress on complex tasks such as video analysis and contextual image captioning. Further tuning of both training and inference algorithms will yield additional performance gains. Agent technologies, which enable LLMs to perform tasks autonomously and coordinate seamlessly with other systems, will soon greatly expand their practical applications.

Future model breakthroughs may emerge from hybrid AI architecture designs that combine symbolic reasoning with neural networks. OpenAI’s o1 reasoning model already shows the potential for model integration and performance extension. Although still at an early stage of development, quantum computing holds promise for accelerating AI training and inference by addressing current computational bottlenecks.

This scaling wall seems unlikely to end future gains, as the AI research community continues to demonstrate its ingenuity in overcoming challenges and unlocking new capabilities and performance improvements.

In fact, not everyone agrees that the wall even exists. OpenAI CEO Sam Altman was succinct in his views: “There is no wall.”

In an interview on the “Diary of a CEO” podcast, former Google CEO and Genesis co-author Eric Schmidt agreed with Altman in principle, saying he doesn’t believe there is a scaling wall – at least not for the next five years. “In five years, you’ll have another two or three turns on the crank of these LLMs. Each of these cranks looks like it’s double, triple, quadruple the capability, so let’s say you turn a crank on all these systems and you get 50 times or 100 times the power,” he said.

Leading AI innovators remain optimistic about the pace of progress, as well as the potential for new methodologies. This optimism is evident in a recent interview on “Lenny’s Podcast” with OpenAI CPO Kevin Weil and Anthropic CPO Mike Krieger.

Source: https://www.youtube.com/watch?v=IxkvVZua28k

In that discussion, Krieger said that what OpenAI and Anthropic are working on today “feels like magic,” but acknowledged that in just 12 months, “we’re going to look back and say, can you believe we used that garbage? … That’s how fast [AI development] goes.”

It’s true – it feels like magic, as I recently experienced using OpenAI’s advanced voice mode. Talking to ‘Juniper’ was completely natural and seamless, demonstrating how AI is evolving to understand and respond with emotion and nuance in real-time conversations.

Krieger also discussed the recent o1 model, referring to it as “a new way of scaling intelligence, and we feel like we’re just at the very beginning.” He added: “The models will get smarter at an accelerating pace.”

These anticipated advances suggest that even if traditional scaling approaches do face diminishing returns in the near term, the AI field is poised for continued breakthroughs through new methodologies and creative engineering.

Does scaling even matter?

While the challenges of scaling dominate much of the current discourse on LLMs, recent studies suggest that current models are already capable of extraordinary results, raising the provocative question of whether more scaling even matters.

A recent study explored whether ChatGPT could help doctors diagnose complicated patient cases. Conducted with an early version of GPT-4, the study compared ChatGPT’s diagnostic abilities against those of doctors with and without AI assistance. The surprising result was that ChatGPT alone significantly outperformed both groups, including the doctors using AI assistance. Several explanations have been offered, from doctors not understanding how best to use the bot to their belief that their own knowledge, experience and intuition were inherently superior.

This is not the first study to show bots outperforming professionals. VentureBeat reported on a study earlier this year showing that LLMs can perform financial statement analysis with an accuracy that rivals – and even exceeds – that of professional analysts. Also using GPT-4, that study tested the model’s ability to predict future earnings growth: GPT-4 achieved 60% accuracy in predicting the direction of future earnings, notably higher than the 53% to 57% range of human analyst forecasts.

Notably, both of these examples are based on models that are already outdated. These results underscore that even without new breakthroughs in scaling, existing LLMs are already able to outperform experts in complex tasks, challenging assumptions about the need for further scaling to achieve impressive results.

Scaling, skill, or both

These examples show that current LLMs are already highly capable, even if scaling alone is not the only path for future innovation. With further scaling still possible and other emerging techniques promising to improve performance, Schmidt’s optimism reflects the rapid pace of AI progress, suggesting that in just five years the models could evolve into polymaths, seamlessly answering complex questions across many disciplines.

Whether through scaling, skills, or entirely new methodologies, the next frontier of artificial intelligence promises to transform not only the technology itself, but also its role in our lives. The challenge ahead is to ensure that progress remains accountable, equitable and impactful for all.

Gary Grossman is EVP of the Technology Practice at Edelman and Global Head of the Edelman AI Center of Excellence.

