Capability Is Not Diffusion: Why the Arrival of AGI Won’t Instantly Transform the Economy

By Andy Walters
March 18, 2025

I spend a lot of time discussing the future of AI with fellow executives, and lately one question keeps coming up in boardrooms and dinner conversations alike: “If and when true artificial general intelligence (AGI) arrives, will it instantly upend our entire economy?”

The hype around advanced AI—and fears stoked by some “AI doomers”—might make it seem like the day an AI becomes as smart as humans, everything changes in a flash. Machines will suddenly do all our work, businesses will be reinvented overnight, and productivity will skyrocket immediately.

But history tells a different story. Revolutionary capabilities do not automatically equate to instant diffusion or instant impact. I’m going to argue that even if AGI were achieved tomorrow, it wouldn’t be a plug-and-play economic miracle. In fact, we’d likely see a slow rollout over years or decades before AGI’s full effects are felt across industries. This isn’t pessimism; it’s realism grounded in history.

Why should we believe this? Because every major General Purpose Technology (GPT) in the past—whether it was the steam engine, electricity, the automobile, computers, or the internet—went through a long gestation period between its invention and its broad societal payoff. The pattern is remarkably consistent: a transformative new capability emerges, but the economy only transforms after we invest in complementary infrastructure, skills, processes, and societal adaptations. In other words, it takes time to go from “We have a new technology” to “Everyone is reaping its benefits.”

This perspective matters right now. As leaders plotting strategy in the age of AI, we need to temper excitement with understanding. It’s our job to separate capability from reality—to recognize that having a technology isn’t the same as using it effectively at scale. So, let’s take a tour through history’s great innovations, see how diffusion lagged behind capability, and then explore what that means for the coming era of AGI.

When Inventions Shook the World … Slowly

To appreciate why AGI won’t instantly transform the economy, consider how past breakthroughs played out. Let’s travel back in time and revisit a few epoch-defining technologies. You’ll notice a theme: initial wonder, followed by years of incremental adoption and problem-solving, and only later a true revolution in daily life and productivity.

Steam Power: The Slow Burn of a Revolution

In late 1770s Britain, James Watt had just perfected his steam engine. This invention is often credited with launching the First Industrial Revolution. Watt’s engine could harness steam to drive machinery, promising to free industry from the limits of waterwheels and muscle power. Yet if you walked into most factories or mines in 1780, or even 1800, you might wonder what all the fuss was about. Early steam engines were big, clunky, and inefficient, mostly useful for pumping water out of mines. They gobbled coal and delivered limited power. In fact, the contribution of steam to Britain’s economy was almost negligible prior to 1830—very few machines in the country were actually run by steam at that point.

Why the delay? It turns out Watt’s breakthrough was just the first step. Complementary innovations had to follow. Engineers spent decades tinkering to make steam engines smaller, stronger, and more fuel-efficient. High-pressure engines (which Watt himself avoided due to safety concerns) were needed to really expand steam’s usefulness, but Watt’s patents effectively froze that avenue of innovation until they expired around 1800. Once inventors like Richard Trevithick and others were free to experiment, steam engines improved. By the mid-1800s, new boiler designs and a better scientific understanding of thermodynamics dramatically cut the coal needed per horsepower.

Even more important, infrastructure and skills had to catch up. Coal mining had to scale up to feed all those engines, ironworkers had to craft sturdy parts, and a generation of mechanics had to learn how to build, install, and maintain steam machinery. It wasn’t like you could just drop a steam engine into any factory in 1780 and watch productivity soar—many factories weren’t designed for them, and engineers who knew how to integrate steam power were in short supply.

It was only decades later, in the second half of the 19th century, that steam power truly started boosting economic productivity at large. As one analysis notes, the steam engine was a marvel of the late 1700s, but it was “only about 80 years later—in the second half of the 19th century—that it delivered a boost to aggregate productivity and raised economic prosperity”. By the 1850s and 1860s, steam-powered factories, locomotives, and steamships were finally widespread and efficient enough to matter. Britain’s living standards and GDP per capita began to climb noticeably as steam technology reached maturity. In other words, the revolution arrived long after the invention. The steam engine had to diffuse, evolve, and intertwine with complementary changes—railroads, factory design, coal supply chains, skilled operators—before it transformed the economy.

The lesson from steam’s story is that even a groundbreaking capability like harnessing steam power didn’t rewrite the rules overnight. And importantly, there were bottlenecks and inertia: Watt’s own business decisions (and patent) delayed certain innovations, early engines weren’t efficient enough to justify rapid adoption, and many business owners were cautious, sticking to tried-and-true water mills until steam proved itself. Only after those hurdles were cleared did steam fulfill its promise. Keep this in mind as we think about AGI—we might have an “engine” of intelligence, but we’ll need a lot of complementary change for it to truly propel the whole economy.

Electrification: Rewiring Industry at the Speed of Molasses

Next stop: the electric age. If steam powered the 19th century, electricity was poised to energize the 20th. Thomas Edison opened the first practical electric power station in 1882, and soon cities were abuzz with electric lights. By all accounts, electricity was cleaner, safer, and more versatile than steam. So you’d think factories and households would ditch their boilers and generators en masse and plug into the grid, flipping the productivity switch to “on.” But again, the historical reality was different.

In the 1890s, forward-thinking factory owners did start buying electric motors—only to use them in the old way. I recall a famous anecdote from economic history: a factory manager proudly installs an electric motor but uses it to drive the same central shaft and belt system that the steam engine used. In effect, nothing about the factory’s workflow changed; they just replaced one power source with another. The result? Only marginal gains in efficiency. Economist Paul David later studied this puzzling lag and found that early adopters of electricity were, as he put it, “overlaying one technical system upon a preexisting stratum”. They kept the old factory layout—machines clumped around a central drive shaft—which squandered most of electricity’s advantages.

The true leap in productivity from electrification came when businesses figured out the new way to use it. Instead of one giant motor and a tangle of belts, factories started giving each machine its own small motor (a practice called “unit drive”) and rearranging floor plans around efficient workflow rather than proximity to a power source. By the 1920s and 1930s, these redesigned factories—with machines placed logically for production, not chained to a central engine—finally unlocked huge efficiency gains. One classic study noted that electrification didn’t significantly boost productivity until the 1920s, a few decades after its introduction, but then it accounted for half of all manufacturing productivity growth in that roaring decade. In other words, there was an S-curve: a slow start, then a sharp acceleration once complementary innovations and organizational changes caught up. Indeed, historians observe that the “electrical age” began in the 1880s, but its critical payoff was only seen 40 years later in the 1920s.
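The S-curve shape described above—a long slow start, then a sharp acceleration toward saturation—is the classic logistic diffusion curve. Here is a toy logistic model of factory electrification; the parameters (midpoint, steepness, ceiling) are illustrative values I chose to loosely echo the historical timeline, not fitted to data:

```python
import math

def logistic_adoption(year, midpoint=1905, steepness=0.25, ceiling=0.80):
    """Illustrative logistic S-curve for the share of factory drive power
    that is electrified in a given year. Parameters are invented for
    illustration, chosen so adoption is negligible in the 1880s and
    approaches ~80% by the mid-1920s (per the figures cited in the text)."""
    return ceiling / (1 + math.exp(-steepness * (year - midpoint)))

for year in (1885, 1895, 1905, 1915, 1925):
    print(year, round(logistic_adoption(year), 2))
    # → roughly 0.01, 0.06, 0.40, 0.74, 0.79
```

Note how the first two decades look like almost nothing is happening—exactly the period when observers doubted electricity’s payoff—before the curve turns sharply upward.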

Consider some of the complements required for electricity’s impact to diffuse: We needed an entire electrical grid—power plants, transmission lines, and distribution networks—built out across countries. That didn’t happen overnight; utilities expanded gradually, often with government incentives (for example, many rural areas in the U.S. only got electricity in the 1930s through New Deal programs). Factories had to be retooled and sometimes rebuilt to accommodate new wiring and new layouts. A workforce trained to maintain steam boilers had to learn about circuits and motors. Even mundane things like appliances had to be invented! An electric grid alone isn’t useful until you have refrigerators, washers, and radios to plug in. Those came in waves over the early 20th century, each requiring design and manufacturing industries to develop. All told, it was a protracted process to electrify the economy.

By 1929, the transformation was in full swing: electric motors provided about 80% of mechanical drive power in American factories, up from essentially 0% a few decades earlier (when steam and water ruled). But that tipping point was the result of decades of investment. From Edison’s first power station to near-universal industrial electricity usage took roughly two generations. Again, the implication for AGI is clear: even when a technology is objectively superior (electricity was cleaner and more efficient than steam), businesses and societies don’t turn on a dime. They have to rewire—often literally—their operations, and that takes time, money, and learning.

Automobiles: Paving the Way (Slowly) for the Car Economy

Our next case study in patience is the automobile. It’s hard to imagine life without cars and trucks today—personal transportation and logistics define our modern economy. But when the first gasoline automobiles appeared in the late 1800s, nobody could yet see the traffic jams of Los Angeles or the interstate highway system over the horizon. Initially, cars were expensive playthings for the rich (the “horseless carriage” as they called it). In 1900, only a few thousand automobiles existed in the entire United States. Most people still relied on horses, trains, or their own feet to get around.

The big turning point, as every car buff knows, was Henry Ford’s Model T in 1908—and more importantly, Ford’s innovation of the moving assembly line in 1913. Suddenly, cars could be mass-produced far more cheaply. By making automobiles affordable to the masses, Ford effectively kick-started the automobile as a true general purpose technology that could reshape society. Car ownership skyrocketed in the 1910s and 1920s: by 1920 there were over 8 million registered cars in the U.S., and by 1929 more than 23 million—nearly a threefold increase in the Roaring Twenties alone, as Americans fell in love with the open road.

Yet, once again, capability needed complements. A car is great, but “cars aren’t much use without roads,” as the old saying goes. In the early days, driving was an adventure: roads were typically dirt or mud, signage was inconsistent, and long-distance travel was a genuine challenge. In fact, it was pressure from early motorists that led to the Good Roads Movement in the U.S., pushing governments to pave and improve roads nationwide. Major public investments followed: for example, the Federal Highway Act of 1921 provided funding for state highways, and later the Interstate Highway System (launched in 1956) created the high-speed road network we take for granted. All these were complementary infrastructure projects to make the automobile broadly useful.

Fuel distribution was another hurdle. In the 1890s, if you owned a car, finding gasoline could be tricky—you might have bought it in cans from a pharmacy! The first dedicated gas station in the U.S. didn’t open until 1913. By the 1920s, gas stations started dotting the landscape, but that required oil companies to invest in refining capacity, distribution trucks, and retail stations—essentially building a nationwide fuel supply chain from scratch. Mechanics and repair shops had to emerge, because what good is a car if it breaks and nobody can fix it? New industries sprang up to support automobiles: tire manufacturers, roadside motels, drive-through restaurants, insurance companies, and more. Society also had to adjust its rules and norms: driver’s licenses were introduced, traffic laws and stoplights became necessary once streets teemed with cars instead of horses. City planning changed too—think of how suburbia in the 1950s was designed around the assumption of two cars in every driveway.

All of that took time and coordination. In economic terms, the diffusion timeline of the automobile spanned decades. By the mid-20th century (1950s–60s), car ownership had become common in the U.S., fundamentally altering lifestyles and commerce, but not without interim bumps. The Great Depression in the 1930s stalled car sales for a while (many families couldn’t afford new cars), and World War II saw civilian auto production halted entirely as factories churned out tanks and planes. It was really after WWII, with a booming economy and cheap gasoline, that the U.S. reached near-saturation car ownership and the car-centric economy blossomed. Other countries took even longer—many nations didn’t build extensive highway networks or see mass car adoption until the late 20th century.

The moral of the automobile story: infrastructure and adoption had to go hand in hand. A revolutionary product (the car) still needed roads, fuel, regulation, and cultural acceptance to truly transform how we live. It’s a vivid reminder that even if AGI arrives in a lab, turning that into real economic impact will require building “roads and bridges” in a metaphorical sense—from data infrastructure to legal frameworks.

Computers & IT: You Can See Tech Everywhere Except in Productivity (At First)

Fast forward to the late 20th century: the era of computers, microchips, and IT. By the 1970s and 1980s, we had mainframes, personal computers, and even the beginnings of the internet. Computing power was improving at an exponential clip. Surely the economic impact should have been immediate and huge? And yet, in the 1980s, U.S. productivity growth slowed to a crawl—puzzling economists who expected the opposite. The Nobel laureate Robert Solow joked in 1987, “You can see the computer age everywhere but in the productivity statistics.” His quip captured a widespread sentiment: despite rapid advances in information technology (IT), the measurable gains were MIA. Annual labor productivity growth in the U.S. was only around 0.5% in the mid-1980s, even as PCs and IT systems were spreading in business.

This apparent contradiction was dubbed the “productivity paradox” of IT. But like those earlier cases, the paradox eventually resolved—with a lag. By the late 1990s and early 2000s, productivity growth had surged above 2% per year. The consensus became that yes, computers and IT eventually did revolutionize the economy—but not until organizations learned how to restructure around them. One analysis noted it was roughly two decades after the introduction of personal computers before we saw big productivity payoffs, coinciding with when businesses finally reached a critical mass of IT adoption and know-how.

Why the delay this time? A combination of factors we’ve seen before, plus some new twists. Complementary investments in intangibles were paramount. Simply buying computers wasn’t enough; companies needed to develop software tailored to their operations, digitize their data, and overhaul processes. I remember old accounting departments in the early PC era—they would take a fancy new computer and basically use it like a faster typewriter or calculator, inputting the same data manually as before. The real efficiency came when accounting systems were fully redesigned to automate data flows and integrate with other systems (like inventory and sales). That kind of process re-engineering took years. It meant retraining staff, rewriting job descriptions, and sometimes completely reorganizing departments.

During this adjustment period, many firms essentially used IT to do old things a little better, rather than to do fundamentally new things. Economists Brynjolfsson and Hitt later showed that the biggest gains from IT came to companies that paired IT investments with organizational innovation—flattening hierarchies, empowering employees with data, and creating new business models. Early on, those complementarities weren’t in place. As one study described it, cultural and organizational inertia led firms to “automate existing processes (yielding only minor gains) before discovering new, IT-enabled ways of operating (which produced major gains)”. Only after this learning period did productivity leap forward.

Additionally, infrastructure and connectivity were critical. A single PC is great, but the true power of IT emerged when computers were networked internally (think LANs in offices) and externally (the internet). In the 1980s, many computers were isolated or limited to rudimentary networks. By the late 90s, with more robust networks and the internet maturing, computers could really flex their muscles through instant communication and massive data sharing. This coincided with the productivity jump. It’s no surprise that the dot-com boom of the late 90s—when businesses finally embraced online operations en masse—aligns with the end of Solow’s paradox. (Of course, the boom turned to bust in 2000, a reminder that hype can race ahead of reality. But even the dot-com crash didn’t stop the longer-term diffusion; it just cleared out some excess exuberance.)

From the IT revolution, we learn that human capital and know-how are just as important as the hardware and software. In the 80s, few workers were computer literate; by the 2000s, an entire generation had grown up with PCs in school and was comfortable using them on the job. Companies that invested early in training their people (or hiring new talent with IT skills) eventually pulled ahead. The economy needed time to accumulate not just the devices, but the skills and organizational culture to leverage them.

So here again we see a general purpose technology that eventually transformed everything—but not without a productivity lag and a lot of complementary change. Today, it’s obvious that IT is indispensable in every corner of business. But if you had assumed in 1980 that by 1985 we’d see massive productivity gains, you’d have been dead wrong. It took until around 1995–2005 for the promise to fully show up in the numbers. The “J-curve” of productivity—slow growth then rapid rise—is a feature of these GPTs. We should expect a similar trajectory for advanced AI.
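The “J-curve” can be made concrete with a toy model of measured productivity growth: early years are dragged down by intangible adjustment costs (retraining, reorganization), and the payoff ramps up only after processes are redesigned. All numbers below are invented for illustration, loosely anchored to the ~0.5% annual growth of the mid-1980s and the >2% growth of the late 1990s cited above:

```python
def productivity_growth(years_since_adoption):
    """Toy J-curve: measured annual productivity growth (%) as a function of
    years since a firm adopts a general purpose technology. Illustrative only;
    the coefficients are not estimates from any dataset."""
    base = 1.0                                                      # pre-GPT trend growth
    drag = 0.5 * max(0, 5 - years_since_adoption) / 5               # adjustment costs, fading over ~5 years
    payoff = 1.5 * min(1.0, max(0, years_since_adoption - 3) / 10)  # gains ramp in only after year 3
    return round(base - drag + payoff, 2)

for t in (0, 3, 6, 10, 15):
    print(t, productivity_growth(t))
    # → 0.5, 0.8, 1.45, 2.05, 2.5
```

The point of the sketch is the shape, not the numbers: in the early years the drag term nearly cancels the payoff, so measured growth looks disappointing even though the transformation is underway.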

Historical diffusion timelines for GPTs: this chart captures the gap between a breakthrough invention and its widespread economic impact. From steam engines to electricity, automobiles, computers, and the internet, each innovation required decades of complementary investments—infrastructure, regulatory adaptation, and skill development—before transforming society.

The Internet: Building the World Wide Web (and Waiting for Everyone to Log On)

Finally, let’s talk about the internet—the global network of networks that arguably was the glue that bound the PC revolution together and created entirely new industries. The internet’s case is interesting because it diffused somewhat faster than the older technologies (thanks to a more developed world and heavy upfront government investment), but it still wasn’t instantaneous by any means.

The technical foundation for the internet was laid in the late 1960s (ARPANET and other research networks), and by the late 1980s, the pieces were there for a worldwide system. But in practical terms, the internet as a commercial and mass-use platform took off in the mid-1990s with the advent of the World Wide Web, browsers like Mosaic/Netscape, and the decision to open the network to general users. I remember getting online via a dial-up modem in those days—it was slow, noisy, and you couldn’t use the phone at the same time!

In 1995, only about 14% of American adults used the internet at all. Even by the year 2000, roughly half of U.S. adults were online—meaning half were not. Broadband, which made the internet truly useful for rich content, was just beginning to replace dial-up around 2000. High-speed connections then expanded steadily through the 2000s; in the U.S., home broadband adoption grew from near-zero in 1996 to around 60% of households by 2009. It took that long for fast internet to reach most homes, even in a wealthy country. Meanwhile, companies were grappling with how to use this new network. Early on, many businesses just put up static “brochure” websites or experimented timidly with e-commerce. Only years later did we get mature e-commerce platforms, fully digital supply chains, cloud computing services, and media companies retooled for online distribution.

Think of some complements and obstacles in the internet’s diffusion: we needed telecom companies to lay fiber optic cables and wire up broadband to neighborhoods—a massive infrastructure project. PCs (or later, smartphones) had to be in the hands of consumers to access the internet, so the diffusion of devices was a prerequisite. People had to be trained or learn how to use the web, from using browsers to practicing safe online behavior. Trust was a big issue at first—I recall how hesitant folks were in the 90s to enter their credit card info online. It took things like the advent of secure payment systems, better user education, and simply time (plus seeing peers successfully shop online) to build trust in e-commerce. Regulators also had to clarify rules for online commerce, like digital signatures and consumer protections, which they gradually did in the late 90s and early 2000s.

The dot-com bubble of 1999–2000 exemplified what happens when expectations race ahead of adoption. Investors thought the internet would revolutionize business overnight, poured billions into dot-com startups, and then reality set in—most people weren’t yet ready to buy groceries or clothes online in 1999, and logistical and technical infrastructure wasn’t fully baked either. The bubble burst spectacularly in 2000. But importantly, the infrastructure groundwork remained. The fiber networks, data centers, and skilled tech workforce that the 90s boom built didn’t disappear in the bust; they were the foundation for the real boom in the following decade. By the mid-2000s, with broadband commonplace and second-generation internet companies emerging (Google, Amazon’s second wind, social media, etc.), the internet became a true economic driver. Productivity gains from things like online supply chains, digital media, and cloud services started to show. But remember—Tim Berners-Lee wrote the first web page in 1991; it took nearly 15 years from that point for the internet to become truly ubiquitous and integrated into everyday economic activity.

By now, the internet has of course transformed how we communicate, shop, work, and entertain ourselves. It’s hard to imagine the world without it. But it didn’t happen overnight. It required that critical mass of users and infrastructure. One fascinating data point: as late as 2007, only about 20% of the world’s population was online. (Today it’s around 65%—it took another 15 years to bring the next 45 percentage points of humanity online.) In advanced economies the saturation came earlier, but globally we see the long tail of diffusion.

These historical cases—from steam to the internet—all hammer home a point: invention is the beginning, not the end, of a technological revolution. Having a groundbreaking capability is one thing; spreading it throughout society is another. Typically, we see a lag period where people figure out how best to use the new tech, supporting infrastructure gets built, costs come down, and complementary innovations arise to unleash the tech’s full value. Only after that do we get the broad-based economic transformation that in hindsight seems inevitable.

Now, let’s turn back to the present and near future. What does all this mean for AGI—an AI that can potentially perform any intellectual task as well as a human? If such a thing is achieved, why shouldn’t we expect an immediate economic earthquake? Let’s explore the barriers and complementary factors that would likely temper the instant impact of AGI.

Why AGI Diffusion Will (Still) Take Time

I often tell colleagues and clients: “AGI won’t be a magic on/off switch for the economy.” Yes, it will be an astounding capability if and when it arrives. And yes, ultimately it could be as transformative as any of the GPTs we just discussed (perhaps even more so). But if history is a guide, there will be a marathon between the invention and the impact. In this section, I’ll outline some modern parallels to those historical complementary investments and hurdles—from regulatory puzzles to cultural acceptance to business model reinvention. These are the factors that will likely slow down the diffusion of AGI’s benefits, regardless of how amazing its raw capabilities are.

There are significant hurdles to overcome before AGI can have widespread economic impact

Compliance and Regulation: Navigating the Rulebooks

One of the first things that comes to mind is the regulatory environment. New powerful technologies often enter a kind of gray area where laws and guidelines struggle to catch up. We’re already seeing this with current AI systems. Who is responsible if an AI makes a bad decision? How do we protect privacy when an AI can analyze data at unprecedented scale? Should there be safety certifications for AI models, the way we have for airplanes or medicines?

With AGI, these questions will become even more urgent—and the answers won’t come immediately. Uncertainty around regulation can be a major adoption killer. Businesses may hesitate to deploy AGI widely if they fear running afoul of future laws or facing liability for AI-driven actions. In Europe, for example, companies are wary of rolling out advanced AI without clarity on the EU’s upcoming AI Act requirements. A recent survey found that 21% of European businesses identified compliance and legal uncertainties as a barrier to adopting AI—and among those already using AI at scale, 45% cited regulatory uncertainty as a significant obstacle. When companies feel unsure about the rules, they understandably invest less; in that survey, those worried about compliance planned to invest 48% less in AI over the next three years compared to those without such concerns.

I expect AGI to face a similar initial slowdown due to regulatory ambiguity. Governments will likely scramble to establish frameworks for AI accountability, data usage, intellectual property (e.g. who owns content created by an AGI?), and even ethical boundaries. But policy tends to lag technology. Lawmakers will debate and take input; various jurisdictions might implement conflicting rules at first; court cases will shape precedents. All of that means in the early years of AGI, companies might hold back on certain high-stakes applications until the legal dust settles.

On the flip side, too much or too heavy-handed regulation early on could also stunt diffusion. If, say, strict licensing is required to use AGI systems, only a few players might deploy it initially. Striking the right balance will be tricky. History shows that while policy can facilitate diffusion (think of governments funding highways for cars or basic research for IT), it rarely can force instant adoption, and if done wrong, it can indeed slow it. In the case of AI, regulators worldwide are still coming to grips with even narrow AI systems; AGI would raise that to a new level. We as business leaders will need to actively engage in these discussions to help shape sensible rules, but also be prepared for a period of caution.

The bottom line: Legal and compliance uncertainty = businesses tapping the brakes. Even if AGI can do incredible things out of the gate, many firms will pilot it in limited ways until they’re confident about the compliance aspect. We saw that with cloud computing a decade ago—lots of enterprises were slow to move to the cloud due to data security/regulation worries. Eventually they did, but only after standards and assurances improved. We can expect a similar measured approach with AGI.

Organizational Inertia: Old Dogs, New Tricks

A second barrier is something less tangible but very powerful: organizational inertia and mindset. Companies are made up of people, processes, and cultures that don’t transform overnight just because a new tool is available. I’ve consulted for firms that installed cutting-edge software systems, yet continued to run their operations almost exactly as before—yielding little benefit. It takes enlightened leadership and often painful change management to really leverage new tech.

With AGI, consider this: if it truly can perform a broad range of intellectual tasks, it could theoretically upend how we do everything from customer service to R&D. But implementing it effectively would require rethinking workflows, retraining staff, and sometimes restructuring teams or entire business units. That’s hard work! Early on, many organizations might choose the path of least resistance—using AGI in limited, familiar ways. For example, maybe they drop an AGI into their call center to assist human agents, but they don’t yet reorganize the customer support model around AI. That limited use might yield limited gains, analogous to those early factory electrifiers who just swapped power sources but kept the old factory layout.

We have a telling historical parallel in how companies first used computers: they automated some existing processes (like crunching payroll numbers faster), but often didn’t improve the process itself until later. As noted earlier, many firms in the 1980s simply digitized paper-based workflows without reimagining them, resulting in only minor efficiency improvements. It took a new generation of managers to really re-engineer businesses around what IT could do. The same will likely be true for AGI—the second phase of adoption, where organizations design processes that couldn’t even exist before AGI, will be where the big productivity jumps happen. The first phase, in contrast, might be marked by retrofitting AGI into existing processes, with more modest results.

Why wouldn’t companies go big right away? Fear of disruption, for one. If you have a profitable business model, you don’t throw it out on Day 1 of a new technology—that feels like jumping off a cliff to see if you can build wings on the way down. Most firms will prefer an incremental approach: proofs of concept, pilot programs, gradual scaling. There’s also the human element: employees often resist change, especially if they fear being made obsolete. We may have AGI capable of doing a task, but labor practices, contracts, or morale concerns might slow its deployment in place of human workers initially. Change management—communicating the vision, training people for new roles alongside AGI, and reorganizing workflows—will be a project that unfolds over years in any large enterprise.

In short, the tendency to apply new tech to old processes will be strong at first, and it will limit immediate gains. We as leaders need to guard against this by fostering a culture that’s open to fundamental change—but realistically, not every company will succeed in doing so on day one. As with prior GPTs, it may take time, new blood in management ranks, or competitive pressures before organizations fully exploit AGI in transformative ways rather than superficial ones.

Human Capital and Skill Gaps: The People Side of the Equation

Even if an AGI can theoretically do any job, you still need people who know how to work with AGI—to build it, tailor it, feed it the right data, interpret its output, maintain it, and improve it. In the foreseeable future, I don’t envision AGI as a completely autonomous, self-maintaining black box. It will be a tool (albeit a powerful one) used by people. And that means we’ll have a skills gap to overcome.

Even today, with AI that falls well short of AGI, there’s a shortage of talent: data scientists, machine learning engineers, AI ethicists, etc., are in high demand and short supply. If AGI comes along, initially there will be very few experts who truly understand its architecture or how to harness it safely. We’ll need to drastically scale up education and training—from specialized PhDs to upskilling programs that let the average knowledge worker comfortably use AI in their job.

Surveys confirm that a talent shortage is a top barrier to AI adoption today. In a 2024 Deloitte poll of over 2,000 global executives, respondents cited a lack of technical talent and skills as the single biggest barrier to adopting generative AI at scale. Only 22% of those leaders felt their organizations were “highly prepared” to address their AI-related talent gaps. Think about that—less than a quarter feel ready on the people front, even as they recognize the tech’s importance. Similarly, a separate industry survey found that 76% of large companies reported a severe shortage of AI-skilled personnel, which is holding back their AI plans. These numbers tell a clear story: we can’t find enough people with the right skills now, let alone if the bar gets raised to AGI-level expertise.

What kinds of roles and skills are we talking about? To effectively integrate AGI, companies will need AI strategists who can identify where it adds value, data engineers to ensure the AI has quality data (garbage in, garbage out still applies!), AI ethicists and risk managers to oversee its decisions, and lots of AI-fluent domain specialists who can pair their subject knowledge with AI capabilities. We may also need new job categories we can’t fully envision yet—just as the IT revolution eventually led to roles like UX designer and cybersecurity analyst that didn’t exist in 1970.

Bridging this gap requires education, training, and hiring on a massive scale. Universities might need to overhaul curricula to churn out more AI-literate graduates. Companies will likely invest heavily in reskilling programs for their current employees—after all, if you have great industry experts, you’d rather teach them to use AI than replace them outright. But these things take time. Training programs can take months or years to create and complete. Cultural acceptance of new skill requirements can lag as well—some workers may resist or feel intimidated by the need to learn AI skills.

Until we have a workforce that’s broadly comfortable working with and alongside AGI, we won’t realize its full potential in the economy. In the interim, the few organizations that have top talent will move faster, and others might be stuck in pilot purgatory due to talent bottlenecks. It’s akin to earlier times when only specialized technicians could operate a computer—computing didn’t revolutionize offices until using a PC became as common as using a phone. We’ll need to get to the point where using AI tools is second nature for most professionals, and getting there is a gradual process, not an overnight switch.

Infrastructure Requirements: Scaling Up to AI Superpower

Let’s not forget the physical and digital infrastructure needed to support AGI. By infrastructure, I mean everything from computing hardware to data centers, cloud capacity, networking bandwidth, and even electricity (since training and running large AI models can consume tremendous power). If AGI is as big as promised, the demand on infrastructure will be enormous.

We already see glimpses of this: Cutting-edge AI models today (like large language models) require thousands of specialized chips (GPUs or TPUs) running in parallel, often in massive cloud data centers. The supply of those high-end AI chips is limited—currently a hot topic as companies and countries race to secure GPUs for AI development. Now imagine trying to embed AGI into every corner of the economy. We’d be asking every factory, office, hospital, and school to tap into supercomputer-level processing on demand. Our existing cloud data centers were not all built with that level of high-density AI workload in mind. A recent survey of IT leaders found that most current cloud and on-premises data centers weren’t designed to handle the kind of high-density, low-latency workloads that AI demands. In essence, we might be compute-constrained in the short term.

Scaling up infrastructure is doable, but it’s a massive capital project. It means building more data centers, possibly upgrading network backbones to handle heavier data flows (think of all the information an AGI might ingest and produce), and deploying AI acceleration hardware widely. Companies will need to invest in their IT backbones—and many are planning to. In one study, 59% of organizations with AI roadmaps said that increasing IT infrastructure investment is part of their plan. This echoes what happened with electricity (building power grids) and with the internet (laying fiber and 5G networks). The difference is, an AI-centric infrastructure might also involve more emphasis on data storage and cybersecurity (more on that shortly).

Speaking of cybersecurity—that’s another piece of the infrastructure puzzle. If a lot of critical decisions and operations are handled by AGI, the systems must be extremely secure. We will likely need new security frameworks to protect AI models from being tampered with or misused, and to protect the sensitive data they train on. Many organizations today worry that increased AI use also increases vulnerability to cyber threats (95% of respondents in one survey believed that, with 40% saying their security teams don’t yet know how to protect AI applications). Cybersecurity investment and innovation will need to keep pace, and initially, those concerns might slow down putting AGI in charge of mission-critical tasks until safeguards are proven.

We should also consider deployment infrastructure. It’s one thing to have AGI running in a lab; it’s another to deploy it at scale in the field. For instance, if AGI is used in robotics (say, self-driving vehicles or automated factories), then physical robots and sensors need to be deployed widely too. That’s expensive and logistically complex. If AGI is mostly cloud-based, then reliable connectivity becomes a must-have in every location (e.g., you can’t have your factory AI brain go offline due to a network hiccup). This might drive further expansion of high-speed internet and even edge computing (processing power closer to the user) to ensure low latency and reliability for AI services.

All these infrastructure elements—computing power, data pipelines, security, hardware devices—will take years to roll out broadly. We could certainly see pockets of rapid AGI implementation (say, major tech firms or well-resourced companies enabling it in certain functions), but economy-wide transformation needs economy-wide infrastructure. Historically, even something as straightforward as electrification took decades to reach everywhere. Building the “brain” of AGI might happen in a research setting relatively quickly, but building the body (the infrastructure that carries and supports that brain) is a longer slog.

Cultural Acceptance and User Trust: Winning Hearts and Minds

Another barrier that’s easy to overlook in technical discussions is human trust and cultural readiness. Simply put, people—whether consumers, employees, or the general public—need to trust a technology to use it widely. If AGI arrives amidst fear and skepticism, there may be resistance that slows its integration.

Right now, public opinion on AI is mixed. People are amazed by what tools like ChatGPT can do, but they’re also wary. Concerns range from “will AI take my job?” to “can I trust AI’s decisions?” to more sci-fi-tinged fears about autonomous superintelligence. Some of these concerns might be overblown, but they are real in the public psyche. Companies introducing AGI-driven products or services will have to convince users of their safety, reliability, and benefits. That often means a gradual approach: starting with AI as an assistant or second pair of eyes, rather than an autonomous controller. For example, doctors might be okay with an AGI suggesting diagnoses, but patients (and malpractice insurers) might balk at an AGI alone making a medical decision without human sign-off. Trust is earned, typically through consistent performance, transparency, and alignment with social values.

One recent Deloitte survey report explicitly noted that “lack of trust remains one of the main barriers to large-scale GenAI adoption,” underscoring that building widespread trust is essential for scaling up use. Even as organizations see the promise, they know that their employees and customers need to feel comfortable with AI. Nearly three-quarters of executives in that study said their organization’s trust in AI had increased since the emergence of newer AI breakthroughs—so trust can grow—but that growth comes from seeing positive results and improved transparency over time.

AGI, by definition, would be even more complex and harder to fully explain than today’s narrower AI. It might make decisions or come to conclusions in ways that even its creators don’t fully understand (the “black box” issue). That can be a trust nightmare. If people don’t understand how an AI made a decision, they might be reluctant to accept it—especially in high-stakes situations. We will likely need advances in AI explainability and governance to give people confidence. That could involve developing methods for an AGI to articulate its reasoning in human-understandable terms, or setting up oversight structures (like AI review boards or auditors) that verify the system’s integrity.

There’s also a broader societal discourse that needs to play out: ethical debates, religious or philosophical comfort with AI, and generational shifts in attitudes. Think about biotechnology and GMOs—the science might be solid, but public acceptance can lag or vary regionally. With AI, some may culturally resist handing too much agency to machines. We might see calls for “slow AI” movements, or at least a careful validation of AGI in critical roles (like justice, healthcare, defense) before letting it loose.

In practice, this means even if AGI is available, many institutions might opt for a cautious trial period. They might limit AGI to advisory roles initially, keeping humans in the loop until the AGI has proven itself. That period of proof could effectively delay the full productivity gains. It’s a bit like autopilot in airplanes—the tech could do a lot, but we still have pilots on every flight as a failsafe and to make passengers (and regulators) comfortable. Over time, as trust builds, we might remove some of those training wheels. But time is the key word.

Strategic Alignment and Business Model Evolution: Reinventing Enterprise for AGI

Lastly, let’s talk strategy and business models. To really harness a general-purpose technology, companies often have to rethink what business they’re in and how they deliver value. AGI could open up opportunities (and threats) that make today’s business models obsolete. But incumbents rarely pivot overnight.

Consider how the internet birthed new business models—e.g. Google’s search advertising model, or Netflix’s streaming model—which in turn disrupted incumbents (like print media or Blockbuster Video). Those incumbents often tried to use the internet in a half-hearted way at first (Barnes & Noble put up a website, but Amazon, born-digital, still beat them; Blockbuster started an online rental service only after Netflix was eating their lunch). Similarly, existing companies in various sectors might initially just apply AGI to cut costs or improve current products. That’s fine, but the really big value might come from entirely new offerings or models that AGI enables.

For example, AGI might allow services that are currently highly specialized (and expensive) to be democratized. Imagine a “personal CEO advisor” AGI that any small business can use to get strategic guidance—something only big consulting firms provide now. If I’m a traditional consulting firm, do I integrate that into my model or do I get disrupted by a startup that leverages AGI to offer consulting at scale? These kinds of questions will force businesses to adapt. But adapting strategy—especially for large organizations—is not quick. It may involve cannibalizing existing revenue streams, which is often resisted internally. There’s a famous line in innovation theory: “People don’t resist change; they resist loss.” If deploying AGI threatens a division’s turf or a manager’s empire, there can be internal pushback.

To truly benefit from AGI, companies might need to undergo a phase of business model innovation. That could mean creating new products, targeting new markets, or restructuring value chains. Some companies will navigate this well; others will stumble or wait too long. The overall economy’s transformation, therefore, might be gated by the rate at which businesses figure out the best models for the AGI era. Some startups will certainly emerge, built from the ground up around AGI capabilities—these might scale fast (just as digital-native companies did). But mainstream adoption across all sectors requires incumbent participation too, and incumbents will adopt at the pace their strategy refresh cycles allow.

One way to frame this is: technological diffusion goes hand-in-hand with managerial and entrepreneurial innovation. The technology alone doesn’t change the world; how we choose to use it does. In previous GPT cases, we saw this clearly—factories had to be reinvented for electricity, business processes reinvented for IT, etc. With AGI, perhaps entire industries will be reinvented. That’s exciting but also daunting. In the near term, I suspect we’ll see a lot of experimentation: companies trying pilots, exploring use cases, seeing what sticks. That period of experimentation and alignment could be a few years where productivity gains are modest—you’re basically in learning mode, not full exploitation mode.

One encouraging thought: it seems these diffusion cycles have shortened somewhat over history. Steam took many decades; electricity maybe a few decades; IT perhaps two decades to really show impact. The hope is that AGI, if it arrives, might diffuse relatively faster because our economy is already primed for rapid tech adoption (global communication is instant, capital can flow quickly to winners, etc.). But even “faster” in this context likely means years, not months. Some analyses of past GPTs describe the pattern as a J-curve or S-curve—slow growth at first (even a possible dip as old systems are disrupted), then takeoff, then eventually saturation. Leaders like us should be prepared for that timeline.
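To make the S-curve idea concrete, here is a toy sketch of how such a diffusion curve behaves. This is purely illustrative—the logistic function, steepness `k`, and inflection year `t_mid` are my own assumed parameters, not figures from any of the analyses mentioned above:

```python
import math

def adoption(t, k=0.6, t_mid=12):
    """Toy logistic S-curve: fraction of the economy that has adopted
    a technology t years after its arrival. k (steepness) and t_mid
    (the inflection year) are illustrative assumptions, not data."""
    return 1 / (1 + math.exp(-k * (t - t_mid)))

# Slow early diffusion, rapid takeoff in the middle, eventual saturation:
for year in (2, 8, 12, 16, 25):
    print(f"year {year:>2}: {adoption(year):.0%} adopted")
```

The shape, not the specific numbers, is the point: for years after the capability exists, measured adoption barely moves—exactly the window in which skeptics declare the technology overhyped—before the curve turns sharply upward.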

Closing Thoughts: A Revolution That Won’t Be Televised Overnight

Standing on the cusp of incredible AI advancements, I am as excited as anyone about the potential of AGI. It could be the steam engine of our era, the electricity of the 21st century, the digital superbrain that amplifies human capability in ways we can barely imagine. But as we’ve explored, capabilities ≠ diffusion. The mere arrival of AGI will not be the same as its instant, widespread adoption. History’s message is loud and clear: transformative technologies yield transformative economic impact only after we do the hard work of integration—building infrastructure, adapting organizations, developing skills, adjusting regulations, and evolving culture and business strategies around the technology.

I like to say that achieving AGI will be a scientific milestone, but achieving its full economic benefit will be a societal project. If AGI appeared in 2025, we might not feel its maximal impact until 2030, 2040, or beyond—just as the inventions of past GPTs preceded their broad economic changes by many years. There may even be an initial productivity paradox or dip as we invest heavily in AGI and disrupt existing workflows without yet seeing proportional output gains (remember Solow’s paradox, or the fact that early electrified factories were less efficient than the status quo until they were re-engineered). Policymakers will have a role in smoothing the path—funding research, updating laws, and perhaps jumpstarting certain infrastructure—but they cannot eliminate the need for this gradual adoption and learning process.

For executives and leaders, the takeaway is to be both optimistic and pragmatic. Optimistic in that we should explore AGI, invest in pilots, and be ready to seize opportunities—those who learn early will have an advantage when the upswing of the S-curve comes. But pragmatic in planning for a journey, not a lightning strike. Budget for complementary investments: training your people, upgrading your IT backbone, rethinking workflows. Engage with policymakers to help shape sensible standards. Begin fostering a culture that’s adaptable, because you might need to pivot your business model when the time is right. In essence, position your organization to ride the wave of diffusion when it comes, rather than expecting to instantly teleport into a new reality.

I often invoke a simple mantra: “Revolutions are usually gradual—until they’re not.” There will be a day when we look around and realize AGI is as ubiquitous as electricity, and we’ll marvel at how different everything is. But that day will have been built by years of collective effort that started slowly. As leaders, we must navigate those years with foresight and patience. The arrival of AGI will indeed be momentous, but the work of transforming the economy will still be a marathon, not a sprint.

So, the next time someone claims AI is about to instantly take over the world, I’ll smile and recount the story of the factory that gained an electric motor and changed nothing—at least, not right away. And I’ll continue to champion a thoughtful approach: embrace the new capability, but build the complementary capabilities around it. That’s how we turn a powerful invention into a true economic revolution—not overnight, but through diligent, strategic effort over time.
