BigChat vs. LilChat — On Track To Decentralized Artificial General Intelligence

Large Language Models (LLMs) are spinning their wheels in a quest to achieve artificial general intelligence (AGI). Now, new breeds of slimmed-down AI models are gaining traction — the destination may surprise us.

Someone Forgot To “Under Promise And Over Deliver”

The meteoric launch of OpenAI’s ChatGPT in November 2022 reached 100 million users in just two months, sparking a race with Microsoft, Google, Meta, and Anthropic to build ever-bigger general-purpose LLMs.

Big Tech pit crews are hell-bent on crossing the AGI finish line, where AI surpasses the cognitive abilities of most humans. Early LLMs captured headlines for their broad intelligence, but now, the “bigger is better” mantra shows diminishing improvement in ever-larger, more costly LLM versions.

One speed bump is the shortage of training data. Giant LLMs are data guzzlers. Not just any data — they need high-quality, relevant data that is hard to find. 

It’s like baking a cake with a recipe that calls for a bunch of ingredients you don’t have. 

You end up with something, but it won’t win any blue ribbons at the county fair.

Large models also backfire with “hallucinations,” spewing out answers that sound plausible but are factually incorrect or just plain nonsensical. 

Imagine asking for directions and being told to drive off a bridge because it’s the most “efficient” route. 

These confabulations undermine trust and lead to productivity-robbing manual reviews.

So much for “Hello, I’m AI, and here to help.”

Smaller Models Find Their Lane

While big LLMs hit roadblocks, smaller, targeted models are paving the way to efficient, tailored solutions. These “little chats” focus on specialized knowledge, excelling where general-purpose LLMs fall short. 

It’s like visiting your general practitioner for an annual checkup, but you’d rather have a specialist look into that pesky rash.

Organizations are already seeing results by augmenting LLMs with Retrieval-Augmented Generation (RAG), which grounds responses in private, reliable data. Others are unveiling proprietary Small Language Models (SLMs) designed to excel at specialized tasks.

  • Morgan Stanley uses RAG grounded in the company’s in-house knowledge base to research and deliver client-customized financial advice in seconds.
  • Salesforce’s internal SLMs have led to remarkable gains for customers, with average increases of 30% in sales lead conversion, 38% in employee productivity, and 45% in customer satisfaction.
  • Thomson Reuters’ RAG-powered Assistant is helping law firms reduce the time required to complete legal document drafts, with savings ranging from $117 to $558 per drafting task.
  • FinTech company Klarna’s RAG-type AI chatbot handles customer inquiries in 35 languages — replacing 700 humans and increasing profits by $40M.
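The RAG pattern behind examples like these can be sketched in a few lines: retrieve the snippet most relevant to a question from a private knowledge base, then fold it into the prompt sent to the model. Everything below (the documents, the query, and the bag-of-words similarity) is invented for illustration; production systems use learned embeddings and a vector database rather than word counts.

```python
import math
import re
from collections import Counter

# Toy in-memory "knowledge base" standing in for a company's private documents.
DOCS = [
    "Refund requests must be approved by a manager within 14 days of purchase.",
    "Enterprise customers receive a dedicated support channel and a 4-hour SLA.",
    "All client financial advice must cite the latest internal research note.",
]

def _vectorize(text: str) -> Counter:
    """Bag-of-words term counts; a real system would use learned embeddings."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = _vectorize(query)
    return sorted(docs, key=lambda d: _cosine(q, _vectorize(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the user's question with retrieved context before calling an LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("When must a refund request be approved?", DOCS)
print(prompt)
```

Because the model answers from retrieved company data rather than from memory, outputs stay current and hallucinations are easier to catch.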

Even the big LLM players are overhauling their lineups with sporty models like OpenAI’s o1-mini and Google’s Gemini Nano, alongside compact open-source models like TinyBERT — to name a few. 

Virtually every major player is back to the drawing board, designing smaller, specialized, and efficient siblings of their foundational models.

Where The Rubber Meets The Road

These “tiny titans” of AI deliver the goods by focusing on high-quality data and domain-specific roles. Models fine-tuned on internal company data provide competitive advantages while safeguarding privacy and ensuring reliable outputs. 

This allows companies to create AI secret weapons that know the organization’s specifics inside and out. 

It’s not about having a lot of data in the tank; it’s about having the right data.

While LLMs have vacuumed up most of the public internet, we are far from running out of internal sources suitable for smaller AI models. Organizations should ask what data they possess that contains valuable embedded knowledge.

Look for data with accuracy, depth, completeness, and domain context that aligns with business goals. Less promising data will be noisy, sparse, imbalanced, outdated, or lacking context alignment with an enduring business problem.

When considering AI models, look at the potential of cross-domain synergy to cut across organizational silos. 

Envision assigning an AI Agent, trained with years of customer chats, to automate a customer service workflow from ticket open to close. Early benefits include lower labor costs and improved customer satisfaction.

Building on that success, cross-functional agents in Product Development pick the AI’s brain for product issues and feature demand. Likewise, Marketing and Sales gain an AI teammate with uncanny insights into customer use cases and up-sell opportunities.

Note that the choice of external vs. internal data is far from binary. In practice, most implementations pair internal and external data to form hybrid AI models. Consider matching internal sales data with external industry data to improve domain context, allowing for more prescient and timely forecasting.
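A minimal sketch of that hybrid idea, using made-up monthly figures: an internal-only forecast projects recent sales growth forward, while the hybrid version scales it by the latest external industry trend. The numbers and the naive growth-rate method are assumptions for illustration, not a real forecasting technique.

```python
# Hypothetical monthly figures: internal unit sales plus an external
# industry-demand index (both invented for illustration).
internal_sales = [100, 104, 110, 115]
industry_index = [1.00, 1.02, 1.05, 1.10]

def naive_forecast(series: list[float]) -> float:
    """Project the next value from the average recent growth rate."""
    growths = [b / a for a, b in zip(series, series[1:])]
    return series[-1] * (sum(growths) / len(growths))

# Internal-only view vs. a hybrid view that scales the internal forecast
# by the most recent external industry trend.
internal_only = naive_forecast(internal_sales)
industry_trend = industry_index[-1] / industry_index[-2]
hybrid = internal_only * industry_trend
print(round(internal_only, 1), round(hybrid, 1))
```

Here the external index signals accelerating industry demand, so the hybrid forecast comes out higher than the internal-only projection; a slowing index would pull it down instead.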

Far from becoming depleted, internal data resources are renewable and continually replenished — if given a nourishing environment. Fostering business-involved data engineering helps set the stage for successful AI implementations.

From Talking To Doing

Chatbots gave voice to AI models, opening the door to problem-solving conversations. We listened to AI’s advice, but it was up to us to act. Now, we find ourselves speeding toward a world where AI not only talks — it takes real-world action.

Such Agentic AI takes instructions and goes off independently to design and execute complex, multistep workflows across a connected world. An entire workforce of specialized AI Agents emerges with skills that are mixed and matched to meet the challenge.

Instead of one know-it-all LLM, you have an AI project manager with a Rolodex full of AI specialists to call upon.
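A stripped-down sketch of that project-manager pattern, with plain Python functions standing in for specialist agents; in production each specialist would wrap its own fine-tuned model or tool, and the ticket workflow, agent names, and routing rule here are all hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical specialist "agents" for a customer service workflow.
def classify_ticket(state: dict) -> dict:
    state["category"] = "billing" if "invoice" in state["ticket"].lower() else "general"
    return state

def draft_reply(state: dict) -> dict:
    state["reply"] = f"Routing your {state['category']} issue to the right team."
    return state

def close_ticket(state: dict) -> dict:
    state["status"] = "closed"
    return state

@dataclass
class Coordinator:
    """The 'AI project manager': calls specialists in sequence over shared state."""
    specialists: list[Callable[[dict], dict]] = field(default_factory=list)

    def run(self, ticket: str) -> dict:
        state = {"ticket": ticket, "status": "open"}
        for agent in self.specialists:
            state = agent(state)  # each step reads and extends the shared state
        return state

result = Coordinator([classify_ticket, draft_reply, close_ticket]).run(
    "My invoice total looks wrong."
)
print(result["category"], result["status"])
```

Swapping specialists in and out of the list is how skills get mixed and matched to the task at hand, without retraining one monolithic model.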

Agentic AI will accelerate fastest in the digital lane, where AI agents transact with existing online platforms to order products, make payments, and book reservations. AI can even engage in phone calls with humans to complete a task. 

Yay! No more waiting on hold.

Digital agents will hook up with their robotic counterparts to enable physical actions like dispatching drones, building products, and moving inventory.

Rather than focusing on one point of automation, the Agentic AI workforce will connect the dots, orchestrating entire value chains from research and development to operations and sales.

With the ability to carry out tasks in the physical world, AI will learn from its successes and failures. This experiential learning will allow AI to graduate from compilations and syntheses based on preexisting information to making its own genuinely original discoveries — did somebody say Singularity?

Distributed Artificial General Intelligence

The scenario where millions of Agentic AI agents spin in and out of existence, doing their master’s bidding, paints an intriguing picture. Let’s take a peek at what surprises await.

First, the emergence of AGI will likely mirror the distribution of knowledge among humans: superiority will come not from any single model but from the combined network of specially trained AI models.

To Big Tech’s dismay, ownership of many specialty AI models will decentralize among knowledge centers where data is curated and AI is trained. A generation of firms will emerge whose product lines consist of portfolios of AI knowledge models.

Expect a gold rush to claim the nuggets of high-value data.

A world of agentic economics will develop to track transactions and compensation among agents. AI performance metrics will evolve into a hierarchy of standards, certifications, and regulations, with top performers commanding higher premiums.

Can you see the diplomas lining AI agents’ walls?

As with humans, the agentic AI ecosystem will be highly diverse, fiercely competitive, and, in some places, ingeniously corrupt. As the technology matures, we should be wary of the changes unleashed.

Beyond The Checkered Flag

In the same way that online shopping upended brick-and-mortar retail and social media reshaped our political discourse, AI is poised to bring its own unexpected ramifications. 

Such Agentic Effects will have systemic impacts on society, industry, and everyday life. Let’s think out of the box with a few scenarios.

Consider the stock market, where one AI trading agent reportedly turned $1K into $50K in 30 days, and another reportedly returned a 560% profit in five months. When AI competes against AI on the trading floor, will there be room for humans?

Imagine an AI Shark Tank where business concepts are pitched, funded, and spun off into incubators that bring AI-built products to market. Just what we need, an AI version of Mr. Wonderful.

How about an AI-generated influencer touting its own brand of clothing designed and manufactured by agentic AI? Will AI become the next Coco Chanel?

On a darker note, what happens when we replace soldiers on the battlefield with agentic AI drones? Fear a future where the shooter has as much empathy as a machine screwing lids onto pickle jars.

Crossing the Finish Line Together

As we race toward the future of AI, the shift from all-powerful LLMs to a dynamic ecosystem of specialized, agentic models marks a pivotal turn. These “tiny titans” aren’t just incremental upgrades — they’re a new paradigm that can unlock efficiency, precision, and accessibility. 

By leveraging high-quality, domain-specific data, fostering collaboration across internal and external resources, and embracing decentralized stewardship, businesses and innovators alike can pave the way for AI-driven impact.

Yet, the road ahead is as unpredictable as it is exciting. From orchestrating entire value chains to redefining the boundaries of creativity and innovation, the potential of agentic AI will reshape industries, economies, and even societal norms.

With great power comes great responsibility, and we must navigate AI advances with foresight, ethics, and adaptability. Just as AI evolves, so too must our understanding of its role in shaping a future where humans and machines collaborate.

Breaking News

Google and Sakana AI recently announced two new neural network designs, “Titans” and “Transformer²,” respectively, that mimic aspects of the human brain. These new architectures show promise at making AI systems smarter, faster, and more versatile without making them bigger or more expensive. Good news if they can be coupled with specialized data.

Author

As a founding member of Prometheus Endeavor, Dennis applies his advisory and hands-on experiences to gain insights into the short and long-term impacts of emerging technologies. He has authored multiple articles on the evolving architecture and implications of Artificial Intelligence in the workplace and society.
