The Societal and Economic Ripple Effects of AGI: A 2026 Perspective

In 2026, “AGI” has stopped being a cocktail-party acronym. It has become a planning problem.

That shift is not coming from science fiction. It is coming from people with real incentives to be careful with their words. Demis Hassabis, the CEO of Google DeepMind, has said AGI is “on the horizon,” and he has repeatedly put the timeline in the five-to-ten-year range. He has also argued it could be more transformative than the Industrial Revolution.

When someone says that, you do not have to agree with the exact timeline to take the implication seriously. If there is even a meaningful chance that human-level general intelligence arrives inside a decade, then every institution built on the scarcity of human cognition has a decision to make. Law is one of those institutions. So is work. So is governance. So is meaning.

Infinite Counsel frames the AI revolution as different in kind because it reaches into the domain that kept humans indispensable. We are not just amplifying muscle anymore. Machines are now competing on our own comparative advantage: cognition.

What follows is a sober look at the ripple effects, starting with economics, then moving outward into society, governance, and what it means to live in a world where intelligence is abundant.

Economic transformations: abundance, displacement, and a fight over who owns the machine

A productivity surge that looks like magic, at first

There is a reason so many forecasts come with “trillions” attached. When we model AI as a general-purpose technology, the productivity upside becomes hard to ignore.

McKinsey’s long-standing modeling work estimated AI could add about $13 trillion to global economic output by 2030, roughly 16% higher cumulative GDP than today. Separately, McKinsey estimated generative AI alone could add $2.6 to $4.4 trillion annually across identified use cases.

Even if those figures prove optimistic, the direction matters. The short-term feel of this transition is likely to be intoxicating for businesses and consumers. Things get faster. Costs fall. Capabilities that used to require teams become accessible to one person with the right tools.

That is the upside. It is real. It is also the part everyone wants to talk about.

The labor shock is not a side effect. It is the mechanism.

The same force that creates the productivity surge creates the labor shock. The more tasks intelligence can do at low marginal cost, the more the economy gets pulled toward “output without wages.”

Goldman Sachs estimated that generative AI could raise global GDP by 7% and “expose the equivalent of 300 million full-time jobs” to automation through shifts in workflows. The IMF has warned that AI will affect a large share of jobs globally, with higher exposure in advanced economies, and that inequality is likely to worsen absent policy intervention.

Two clarifications matter here.

First, “affected” is not the same as “eliminated.” Some jobs are automated. Some are reshaped. Some become more valuable.

Second, the distributional question is not secondary. It is the central political economy question of the next decade.

If AI makes output cheaper and faster, then the key variable becomes who captures the surplus.

Inequality: the ownership problem

A world of powerful AI tends to concentrate power unless you deliberately design it not to.

If the primary beneficiaries are those who own frontier models, compute infrastructure, proprietary data, and distribution, then productivity gains can coexist with wage stagnation or wage decline. That is not an abstract fear. It is a pattern we have seen repeatedly when capital substitutes for labor.

Economists such as Daron Acemoglu have long emphasized that automation can reduce labor’s share unless counterbalanced by the creation of new tasks where humans retain an advantage. The open question for AGI is whether “new tasks” arrive fast enough, and whether they arrive at scale.

If AGI reaches the point where it can do most cognitive tasks that pay the bills for the modern middle class, then we will be forced to confront something we have avoided saying out loud.

The lump-of-labor fallacy may stop being a fallacy, at least for a time.

The deflation question and the policy response that keeps coming back

If labor costs compress across large segments of the economy, you can get deflationary pressure in the goods and services where AI substitutes for human work. Some sectors may see prices fall. Some may see margins expand instead.
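To make the pass-through point concrete, here is a toy back-of-the-envelope sketch. Every number and the `price_fall` function itself are illustrative assumptions, not figures from any cited source:

```python
# Toy model (assumed numbers): how far prices fall when AI compresses
# labor costs, depending on how much of the saving reaches customers.

def price_fall(labor_share: float, labor_cost_cut: float, pass_through: float) -> float:
    """Fraction by which price falls when labor costs compress.

    labor_share:    labor's share of total cost (e.g. 0.6)
    labor_cost_cut: fraction by which labor cost falls (e.g. 0.5)
    pass_through:   share of the saving passed on to customers (0 to 1)
    """
    return labor_share * labor_cost_cut * pass_through

# Full pass-through: a sector-level deflationary pull.
print(price_fall(0.6, 0.5, 1.0))  # 0.3 -> prices fall 30%

# No pass-through: prices hold, and the same saving widens margins instead.
print(price_fall(0.6, 0.5, 0.0))  # 0.0 -> margins expand
```

The same labor-cost compression produces either falling prices or expanding margins; which one you observe depends entirely on the pass-through term, which is set by competition and market power, not by the technology.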

Either way, the political response will be shaped by a simple reality: people are not paid in “GDP.” They are paid in wages, benefits, and access.

That is why Universal Basic Income, negative income tax proposals, wage subsidies, and “AI dividend” ideas keep reappearing. Not because they are fashionable, but because they may provide the cleanest ways to decouple survival from employment if employment becomes less reliable as a distribution mechanism.

None of these are easy. They raise moral hazard concerns. They raise legitimacy concerns. They raise “who pays” concerns. But they have something going for them that most proposals lack.

They are actually aimed at the right problem.

Societal shifts: the meaning problem, the legitimacy problem, and the human role problem

When work is optional, meaning becomes the scarce asset

Work has never only been income. It is time structure, identity, belonging, status, and often a proxy for purpose. Remove work as a necessity and you do not automatically get a utopia. You get a vacuum.

Some people will fill it with art, family, learning, service, religion, or exploration. Others will fill it with despair, resentment, and social fragmentation.

A post-AGI society will either build new institutions of meaning or watch old institutions break under the strain of idle capacity and lost status.

That is not a philosophical footnote. It is the social stability question.

The legitimacy problem: “Why should I accept this system?”

If AI-driven abundance arrives while inequality widens, public legitimacy will erode quickly.

People will not tolerate a world where machines do the work, productivity explodes, and the gains flow upward while the majority gets told to “reskill” for jobs that do not exist at scale. That story has an expiration date.

If you want a stable transition, you need two things at once.

  1. Broad participation in the upside.

  2. A credible path for personal dignity that does not depend on being “economically necessary.”

A jagged landscape, not a clean switch

Even in 2026, the most important truth is that adoption is uneven. Some sectors will be transformed early because they consist largely of information work. Others are bounded by the physical world, regulation, safety, and trust.

That means you should expect a jagged transition. Islands of radical efficiency next to sectors that feel unchanged. White-collar disruption that outpaces blue-collar disruption, until robotics catches up. Fast-moving private markets and slower-moving public institutions.

This mismatch is where much of the anxiety will come from, because the lived experience of the economy will not match the headline numbers.

Governance: the hardest part is not building AGI, it is integrating it without breaking society

AGI does not just raise economic questions. It raises state-capacity questions.

A government that cannot understand, regulate, or compete with frontier intelligence becomes either weak or authoritarian. Sometimes both.

You will see calls for new regulatory bodies, international coordination, and “licensing” regimes for the most advanced systems. You will also see the counterargument that regulation will freeze innovation and hand leadership to less cautious jurisdictions.

Both sides are partly right.

The practical governance goal should be narrower and more concrete than people admit.

Build systems that:

  • reduce catastrophic misuse risk,

  • preserve competition and prevent monopoly capture,

  • protect privacy and procedural fairness,

  • and keep humans accountable for decisions that affect human rights.

If that sounds like law, it is.

What this means for lawyers and law students in 2026

If you are reading this as a lawyer or a law student, the AGI conversation is not interesting because it is futuristic. It is interesting because it changes the pricing of cognition.

Law is a cognitive profession. That means we will be early in the blast radius.

You can already see the outline of the next model.

  • The routine parts of legal work become cheap and fast.

  • The value of judgment, credibility, and client trust becomes the differentiator.

  • Billing by time becomes harder to justify.

  • The pyramid model strains when “junior work” is increasingly done by systems that do not sleep.

This is the core thesis of Infinite Counsel. The book is not a hype piece. It is a practical map of the economic transition that starts with today’s large language models and ends in a world where “expertise” is no longer scarce.

If you want a useful way to think about your own positioning, start here.

  1. Assume the tools get better faster than your institution changes.

  2. Assume clients adopt faster than firms are comfortable admitting.

  3. Assume the winners will be the professionals who can blend AI speed with human accountability.

Then act accordingly.

The question that matters most

AGI is not just a technology. It is a mirror.

It will amplify what we reward. It will expose what we tolerate. It will force us to decide whether we want abundance to be broadly shared or narrowly owned.

If you want to have a say in how this plays out, you do not wait until the debate is settled. You learn the tools, learn the economics, and learn the governance questions now, while the future is still negotiable.

And if you want to specifically understand how AI rewires the economics of legal practice, including what happens when competent legal assistance becomes abundant, Infinite Counsel is the longer conversation.

Next

Should AI Conversations Be Privileged? Balancing Privacy, Policy, and the Law