The LLM Multiplier (L2M2) Phenomenon: Navigating the Exponential Growth of Large Language Models

A Comprehensive Guide to L2M2, its Implications for AGI, and Investment Opportunities in the Next Frontier of AI

Brian Bell
12 min read · Apr 9, 2024

The domain of artificial intelligence is undergoing a seismic transformation, thanks largely to Large Language Models (LLMs) like GPT from OpenAI and PaLM from Google. These groundbreaking models are instrumental in a plethora of applications, from natural language understanding and translation to more intricate tasks like automated content creation and decision-making support.

A startling aspect of this advancement is the exponential growth rate: the parameter counts of leading LLMs have been increasing roughly tenfold each year. This pace helps explain phenomena like the sudden emergence of ChatGPT in the fall of 2022; when complexity grows tenfold annually, innovations can seemingly leap from obscurity to prominence almost overnight.

Given the velocity and scale of this development, naming and adequately framing this trend becomes an imperative. A well-defined term will aid scholarly research, provide public clarity, and offer a succinct descriptor for the far-reaching repercussions of this acceleration. This article aspires to dissect this extraordinary trend, gauge its durability, and explore its ramifications for the future of both artificial intelligence and society writ large.

Quantifying the Complexity

To quantify the complexity of Large Language Models, several key metrics can be employed:

  1. Number of Parameters: This is perhaps the most straightforward metric, gauging the model’s capacity to store information.
  2. Training Data Size: The volume of data used for training offers insights into the breadth of information the model has been exposed to.
  3. Performance Benchmarks: Metrics such as accuracy, F1 score, and perplexity provide quantitative assessments of how well the model performs in various tasks (perplexity is illustrated in the sketch after this list).
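
To make the third metric concrete, here is a minimal Python sketch of perplexity: the exponential of the average negative log-likelihood a model assigns to held-out text, where lower is better. The per-token log-probabilities below are hypothetical placeholders, not output from any real model.

```python
import math

def perplexity(token_log_probs):
    """Perplexity from per-token natural-log probabilities."""
    avg_nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_nll)

# Hypothetical log-probs for a five-token evaluation sequence.
print(round(perplexity([-1.2, -0.4, -2.1, -0.9, -0.6]), 2))  # ~2.83
```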

Methods for Collecting Data and Tracking Growth

Accurate data collection is crucial for understanding the trajectory of LLM development. Multiple methods can be used:

  1. Version Histories: Examining the progression of versions (e.g., GPT-2 to GPT-3) can reveal the rate of improvement (see the growth-rate sketch after this list).
  2. Public Benchmarks: Using standardized benchmarks allows for unbiased comparison across different LLMs.
  3. Company Reports: AI research companies often publish annual or bi-annual reports summarizing their advancements, providing a valuable resource.
  4. Open-source Repositories: These can offer real-time data and are particularly useful for gauging developments in the academic community.
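
As an illustration of the first method, the sketch below annualizes a growth multiple from two published parameter counts, GPT-2's 1.5 billion and GPT-3's 175 billion; the release gap is approximate, and any single interval is noisy, so multi-model averages are more trustworthy than one data point.

```python
def annualized_multiple(params_old, params_new, years_between):
    """Geometric annualized growth multiple between two model versions."""
    return (params_new / params_old) ** (1 / years_between)

gpt2 = 1.5e9    # GPT-2, released February 2019
gpt3 = 175e9    # GPT-3, released May 2020 (~1.25 years later)
print(f"~{annualized_multiple(gpt2, gpt3, 1.25):.0f}x per year")
```

That single jump annualizes to roughly 45x, well above tenfold, which is exactly why averaging across many models and years matters before asserting a trend line.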

Comparative Analysis with Other Technological Trends

To put the growth of LLMs in context, it’s instructive to compare it with other technological trends. Moore’s Law, the principle that the number of transistors on a microchip doubles approximately every two years, provides a familiar baseline. While Moore’s Law has started to show signs of slowing, the development of LLMs appears to be on a faster trajectory, exhibiting a tenfold increase year-over-year. This comparison not only illuminates the astonishing pace of LLMs but also begs the question of whether this trend is sustainable in the long run, a subject explored in later sections of this article.
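A quick calculation makes the gap vivid. Doubling every two years works out to about a 1.41x annual multiple, so the two curves diverge dramatically within just a few years:

```python
moore_per_year = 2 ** (1 / 2)   # doubling every 2 years ~ 1.41x/year
llm_per_year = 10.0             # the tenfold annual pace described above

for years in (1, 2, 4, 8):
    print(f"{years} yr: Moore {moore_per_year ** years:>6.1f}x"
          f"   L2M2 {llm_per_year ** years:>12,.0f}x")
```

After eight years, Moore's Law yields a 16x improvement while a sustained tenfold pace yields 100,000,000x, which underscores why sustainability is the central question.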

Historical Analysis

Year-Over-Year Trends in Major LLMs

To understand the trajectory of Large Language Models, a closer look at the annual trends in leading models such as OpenAI’s GPT series, Google’s BERT, Meta’s Llama, and Baidu’s ERNIE is indispensable. While these models often exhibit significant leaps in the number of parameters from one version to the next, it is crucial to distinguish this from a corresponding leap in capabilities.

Causes of Growth: Underlying Factors

Three main factors contribute to the rapid advancement of LLMs:

  1. More Data: Larger and more diverse datasets enable models to train on a more extensive range of topics, improving their generality and versatility.
  2. Increased Parameters: The expansion in computational power has facilitated the development of models with far more parameters, often cited as a reason for improved performance.
  3. Algorithmic Innovations: Breakthroughs in machine learning techniques, such as attention mechanisms and transformer architectures, have also played a significant role in enhancing model effectiveness.

It’s important to note that while the number of parameters in these models is showing an approximately tenfold increase year-over-year, this does not directly translate to a tenfold increase in capabilities or performance. The nuances of this relationship form the basis for ongoing research and discussion in the field.
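One hedged way to quantify that gap is the power-law scaling reported by Kaplan et al. (2020), in which test loss falls roughly as L(N) ∝ N^(-0.076) with parameter count N. Under that exponent, a tenfold parameter increase trims loss by only about 16 percent:

```python
alpha = 0.076                 # parameter-scaling exponent from Kaplan et al. (2020)
loss_ratio = 10 ** (-alpha)   # relative loss after a 10x parameter increase
print(f"10x parameters -> loss falls to {loss_ratio:.2f}x of its prior value")
```

The exponent is an empirical fit from one study rather than a universal constant, and later work such as the Chinchilla analysis revised these relationships, but the qualitative point stands: parameter growth buys diminishing returns in loss.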

What’s in a Name?

Existing Terminology

The phenomenon of exponential growth in technology isn’t new. However, each domain often has its unique identifier for such a trend:

  • Moore’s Law: Specifically refers to the doubling of transistors on a microchip approximately every two years, making computers exponentially faster over time.
  • Law of Accelerating Returns: A theory by Ray Kurzweil suggesting that the rate of change in a wide variety of evolutionary systems (including technology) tends to increase exponentially.
  • Neural Scaling Law: Centers on the empirical relationship between a neural network’s scale (parameters, data, and compute) and its performance, although it doesn’t specify the pace at which scale actually grows over time.

Introducing L2M2

Recognizing the unique acceleration in Large Language Models (LLMs), we propose a dedicated term: L2M2, short for “Large Language Model Multiplier.” The name emphasizes the multiplicative surge in LLM complexity, anchored by the roughly tenfold annual increase in parameter counts.
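
Stated as a formula, L2M2 is a simple idealization, a posited trend rather than an empirically fitted law:

```latex
% Idealized L2M2 growth: parameter count P after t years from a
% baseline P_0, under a sustained tenfold annual multiplier.
P(t) = P_0 \cdot 10^{t}
```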

Your Turn

Naming this phenomenon isn’t just an academic exercise — it helps to frame conversations and shape the direction of future research and public discourse. Do you have a term you think captures this trend effectively? We invite you to join the discussion and share your suggestions.

By coining a term, we aim to standardize the way we talk about this remarkable growth trend, making it easier to study its causes, limitations, and far-reaching implications.

Is There a Limit?

The rapid acceleration characterized by L2M2 naturally raises questions about its long-term viability. Is there an upper bound to the complexity and capability of LLMs? This section delves into the various factors that could potentially slow or halt this remarkable trend.

Physical Limitations

The story of Moore’s Law provides a cautionary tale, revealing how physical limitations in semiconductor technology have begun to decelerate its pace. Similar constraints, such as data storage capacity, energy consumption, and heat dissipation, could also serve as roadblocks to the relentless growth described by L2M2.

Economic Considerations

The burgeoning complexity of LLMs is not without financial implications. As the models grow, so do the operational costs associated with energy and computational resources. These rising costs could serve as a natural counterbalance to the exponential trajectory posited by L2M2.

Expert Opinions on Performance Plateau

The academic community offers a spectrum of views regarding the sustainability of L2M2. Some experts believe that LLMs will continue to make incremental advances, while others suggest that a paradigm shift will be required for further significant leaps. Regardless, there is general agreement that the landscape is too volatile for definitive forecasts.

Balancing Act

The evolution of LLM complexity hinges on a delicate interplay between technological advancements and practical constraints. While it may become increasingly challenging for smaller entities to stay competitive, larger organizations could exploit economies of scale to sustain momentum. This could lead to market consolidation, introducing a different set of limitations to the L2M2 phenomenon.

This section underscores the complexities involved in assessing the future trajectory of L2M2. Whether the trend will reach a plateau or experience paradigm-shifting advancements remains a topic of fervent debate and inquiry.

Implications for Artificial General Intelligence (AGI)

Theories on Fast-Tracking to AGI

The advent of L2M2 has reignited discussions about the imminence of Artificial General Intelligence (AGI). Some theorists suggest that the rapid advancements in LLMs could serve as a catalyst, bringing AGI into existence within a decade. This line of reasoning usually draws on the exponential nature of the L2M2 phenomenon and speculates on the potential for these models to acquire a broad range of human-like cognitive abilities.

Counterarguments: Inherent Limitations and Architectural Constraints

Despite the optimism, there are substantial counterarguments that caution against equating LLM growth with the emergence of AGI. Critics point out that current LLMs, for all their complexity, still operate under narrow constraints and lack the understanding or consciousness exhibited by human intelligence. Many experts argue that achieving AGI might require fundamentally different architectures or approaches that diverge from merely scaling up existing models.

The Need for New Architectural Approaches

The limitations of current LLMs suggest that simply scaling up — no matter how dramatically — may not suffice for achieving AGI. Elements like self-awareness, emotional intelligence, and the ability to understand context deeply could necessitate innovative approaches that go beyond the parameter increases captured by L2M2.

A Balanced Perspective

While the fast-paced growth indicated by L2M2 serves as an exciting indicator for the field of AI, its direct implications for the development of AGI remain far from settled. The challenge likely involves more than just adding complexity; it calls for a comprehensive understanding of intelligence from biological, psychological, and computational perspectives.

By understanding both sides of the debate, this section aims to offer a balanced view on how L2M2 informs the timeline and likelihood of achieving AGI. Whether L2M2 will serve as a stepping stone to AGI or merely remain a fascinating but limited development in the field of AI is an open question that continues to captivate researchers and futurists alike.

L2M2 and the Complexity of Human Intelligence

Human Brain: The Ultimate Benchmark

The human brain remains the gold standard for intelligence, with an estimated 86 billion neurons, each forming on the order of 1,000 to 10,000 synaptic connections. If we loosely equate synapses to parameters, that multiplication places the brain’s complexity at roughly 100 trillion to 1 quadrillion “parameters,” though the parameter-to-synapse comparison is far from straightforward.

Similarities and Differences Between LLMs and the Human Brain

LLMs and human brains both process information and learn from data, but the similarities often end there. The human brain excels at transferring knowledge across domains, exhibiting emotional intelligence, and understanding context, capabilities that LLMs largely lack despite the incredible strides indicated by L2M2.

The Parameter Threshold for Emulating Human Intelligence

Theorizing about the number of parameters needed to emulate human intelligence is a subject of much debate. Even if LLMs were to match the brain in “parameter count,” the architecture and data processing mechanisms would still be fundamentally different. However, with L2M2’s trajectory, crossing this hypothetical parameter threshold seems increasingly plausible within years rather than decades.
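
A hedged back-of-the-envelope shows why “years rather than decades” follows from the premise. The starting parameter count below is a hypothetical placeholder (no frontier lab has confirmed its current figure), and the tenfold annual multiple is L2M2’s assumption rather than a guarantee:

```python
import math

def years_to_threshold(current_params, threshold, annual_multiple=10):
    """Years until a threshold is crossed under constant exponential growth."""
    return math.log(threshold / current_params, annual_multiple)

# Hypothetical: ~2 trillion parameters today, brain-scale 100 trillion target.
print(f"~{years_to_threshold(2e12, 100e12):.1f} years under these assumptions")
```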

Have We Already Crossed the Line?

As LLMs continue to scale, the question arises: are we closing in on the complexity of the human brain? Conventional wisdom holds that, despite massive strides, sheer parameter count doesn’t equate to the multifaceted nature of human intelligence. However, if rumors are to be believed, GPT-4’s alleged 100 trillion parameters may mark a significant milestone.

A Milestone in Complexity: GPT-4’s Rumored 100 Trillion Parameters

The speculative 100 trillion parameter figure for GPT-4 would make it nearly three orders of magnitude (roughly 570x) larger than its predecessor, GPT-3, which contains 175 billion parameters. An increase of that size would be consistent with the pace the L2M2 framing describes. While parameter count alone isn’t the be-all and end-all, a 100 trillion figure would undoubtedly redefine what LLMs can accomplish in tasks such as text generation, translation, and decision support.

More Than Just Parameters: Quality and Architecture

Beyond parameters, other elements like the quality of training data and the architectural intricacies of the model play crucial roles in performance. A balanced evaluation of LLM capabilities requires these additional dimensions to be taken into account.

Inferencing Speed: Biological Brain vs. LLMs

The biological brain operates at a different timescale, processing data in real-time and allowing for simultaneous multi-modal interactions. In contrast, even the most advanced LLMs face limitations in inferencing speed, especially as they grow more complex. If LLMs reach or surpass the parameter complexity of the human brain but cannot process information as quickly, questions would remain about their applicability in real-world, real-time situations.

A Countdown to a Potential Crossover

Given the L2M2 trajectory, the possibility of crossing this monumental line could be a year or two away if the current pace holds. However, this would necessitate not just equivalent parameter counts but also advancements in inferencing speed and multi-modal capabilities.

In contemplating whether we’ve crossed the line, the comparison isn’t merely in numbers but in versatile intelligence. The acceleration observed through L2M2 suggests that this moment of crossing, if it occurs, is not in the distant future but potentially just around the corner.

A Grounded Outlook on L2M2’s Role in AGI

While L2M2 serves as an exhilarating pointer for AI growth, its relevance for AGI remains a complex issue. The challenge extends beyond accumulating parameters; it necessitates a multi-disciplinary understanding of what makes intelligence “intelligent.” Whether L2M2 will be a catalyst for AGI or simply another chapter in the AI development story is an enthralling question, subject to ongoing investigation and debate by researchers, policymakers, and futurists.

Societal, Ethical, and Technological Implications

Ethical Considerations

The exponential growth trend encapsulated by L2M2 is not solely a technological phenomenon; it carries with it an array of ethical considerations. Questions of bias, representation, and the potential misuse of such powerful models are salient issues that can’t be ignored.

Data Privacy and Ownership

As LLMs continue to evolve, so does their voracious appetite for data. This raises critical questions about data privacy and ownership. The ethical considerations extend to the type of data being fed into these models and who gets to control this data, as the line between publicly available information and individual privacy blurs.

Societal Impacts

The capabilities promised by the L2M2 trend also hold potential for profound societal change. From automating certain job sectors to redefining human-computer interaction, the implications are far-reaching. While the advancements could result in numerous benefits like improved healthcare diagnostics and environmental modeling, they also pose risks such as increased social stratification and potential job losses.

Policy Recommendations

To navigate the intricate web of opportunities and challenges presented by L2M2, proactive policy measures are essential. Recommendations could include:

  1. Regulatory frameworks for bias and fairness
  2. Standards for data privacy and security
  3. Guidelines for responsible research and development
  4. Public-private partnerships to foster innovation while ensuring public interest

The Need for Public Discourse

As L2M2 continues to drive LLM evolution at an unprecedented rate, public engagement in these topics becomes increasingly crucial. Policymakers, technologists, and the general public should be part of a robust discourse that addresses these implications in a comprehensive manner.

By covering the ethical, societal, and technological facets, this section aims to provide a holistic view of the broader implications of L2M2. It invites continued exploration and dialogue among all stakeholders to guide the responsible development and deployment of future LLMs.

Investor’s Corner

The Platform Shift: AI as the Next Frontier

We are witnessing a pivotal moment in technology, where AI is becoming the next significant platform shift. The industry disruption is not confined to AI companies but extends to various sectors that are in the process of being redefined by AI-driven solutions. This environment offers a tremendous opportunity for startups that can harness the exponential growth indicated by L2M2.

AI is Eating Software: A New Paradigm

As Marc Andreessen famously said, “Software is eating the world,” but now AI, and specifically L2M2-accelerated LLMs, are eating software. The phenomenon is particularly evident in how generative AI and LLMs are not just aiding but actively creating software; as Andrej Karpathy quipped, “the hottest new programming language is English.”

Vertical Opportunities: Hundreds of Winners

Contrary to the belief that a few big players will dominate the AI platform landscape, the real investment opportunity lies in the vertical application layer. Startups focusing on verticalized use-cases requiring minimal fine-tuning and smaller, specialized models are poised to thrive. These firms will tackle everything from healthcare and finance to transportation and education, carving out lucrative niches.

Efficiency and Proliferation of Startups

The capabilities of LLMs, empowered by L2M2, are set to redefine organizational structures. Startups can operate leaner and more efficiently, requiring less initial funding. This shift will increase the quantity of early-stage startups, making it an exciting period for venture capitalists.

Specific Actions for Investors

  • Investigate startups that are leveraging AI to disrupt traditional industries, focusing on those with scalable, verticalized applications.
  • Monitor developments in generative AI that could revolutionize software development, reducing the barrier to entry for non-technical founders.
  • Prioritize investments in startups that combine AI efficiencies with ethical and sustainable practices, aligning with future governance norms.

In this transformative era, the L2M2 trend amplifies the need for keen investment strategies that go beyond technological prowess to encompass ethical and societal considerations. By doing so, venture capitalists can mitigate risk and position themselves for sustained success in a rapidly evolving landscape.

Final Thoughts

The advent of L2M2 is a watershed moment in the realm of AI, emphasizing the exponential growth in the complexity and capabilities of Large Language Models. This phenomenon presents a disruptive yet immensely fertile ground for innovation across sectors. Whether this trend will culminate in Artificial General Intelligence remains uncertain. What is undeniable, however, is the plethora of investment opportunities, especially in vertical applications.

Urgent Unanswered Questions

  • How sustainable is the L2M2 trend in terms of physical and economic limitations?
  • What are the ethical considerations surrounding rapid AI growth, especially in terms of data privacy and societal impact?
  • How close are we to achieving AGI, given the rumored 100 trillion parameters of GPT-4 and the inferencing speed of these models?

A Multidisciplinary Call to Action

The complexities and potential ramifications of L2M2 require a collective effort. It calls for the involvement of technologists to drive innovation, ethicists to ensure responsible growth, policymakers to set up appropriate governance structures, and investors to fuel sustainable, impactful ventures. It’s no longer a question of whether AI will affect us, but how we can channel its exponential growth for collective benefit responsibly.

To harness the full potential of this L2M2 era, we must move beyond isolated silos of expertise and foster interdisciplinary collaborations. Only then can we hope to answer the pressing questions and realize the opportunities that lie ahead.
