Your company just rolled out a powerful new generative AI tool company-wide. Everyone anticipated faster output for reports, code, and content. But several months later, reality hits—productivity isn't soaring as expected. Some teams are actually working more slowly than before. You start wondering what went wrong with this supposedly revolutionary technology.
As a CEO who’s overseen technology rollouts, I can assure you: nothing’s wrong with your investment. You’re likely just in the early dip of the productivity J-curve.
In economics, a “J-curve” describes a trend that gets worse before it gets better: when charted, performance drops initially, then rebounds and rises higher, tracing a “J” shape over time. The concept applies to transformative technologies in business, where adopting a disruptive new system often brings early pains before payoff—things get worse before they get better. I’ve seen this pattern repeatedly with major IT implementations in my career, and Generative AI (GenAI) is no exception.
Why does this happen? Introducing advanced tools involves learning, adjustment, and upfront work that temporarily divert time and resources. Productivity may dip while everyone climbs the learning curve and workflows get retooled. Only later, once people adapt and complementary processes are in place, do the big efficiency gains finally kick in. If you’re not aware of this dynamic, it’s easy to panic or declare the initiative a failure during that initial slump. Understanding the J-curve is crucial so you don’t pull the plug right before the upswing.
In this post, I’ll explain the productivity J-curve in plain terms and show how it’s playing out in GenAI adoption today. I’ll share a telling historical example from General Motors’ automation efforts—a cautionary tale of early setbacks before eventual success—and draw parallels to what we’re seeing with AI. Most importantly, I’ll outline strategies to help your organization navigate the J-curve faster. With the right approach, you can minimize that early dip and accelerate toward the productivity boost on the other side. Let’s dive in.
To see the J-curve in action, consider General Motors’ bold automation initiative in the 1980s. Under CEO Roger Smith, GM poured tens of billions of dollars—by some estimates as much as $90 billion—into robotizing its factories, aiming for “lights-out” manufacturing to leap ahead of Japanese competitors. What they got initially was far from a leap forward. Productivity declined throughout the mid-1980s: the new robot systems were glitch-prone, often stopping assembly lines instead of speeding them up. In one infamous incident, automated paint robots even sprayed each other instead of the cars, forcing workers to tow vehicles to another plant for repainting. Some GM plants could only run at a crawl due to these tech meltdowns, and morale plummeted. Fearing for their jobs, workers staged strikes in protest of the rapid automation push. All told, GM’s grand robotics rollout spent most of that decade deep in the J-curve’s trough—huge capital outlays, lower output, and no immediate efficiency gain.
What went wrong? In hindsight, GM overestimated the short-term benefits of automation and underestimated the transition costs. Technology alone wasn’t enough; they neglected the intangible investments that make tech work. Little was done to retrain workers or methodically iron out process kinks before handing them off to machines. Essentially, they tried to automate a system that wasn’t fully optimized to begin with, so the robots only amplified underlying inefficiencies. As one industry expert later noted, GM’s fully automated factories “lost all flexibility” and ended up trading direct labor for massive new indirect costs in maintenance and downtime. Meanwhile, Toyota took the opposite approach: they focused on refining processes (lean manufacturing) first, then introduced robotics gradually once the workflows were sound. Not surprisingly, Toyota’s productivity soared, while GM spent years stuck in iterative fixes.
It took GM the better part of a decade to climb out of its automation hole. By the late 1990s, after rethinking their approach, GM and other automakers finally started seeing the benefits of industrial robots—higher quality and output—but only after pairing the new machines with organizational changes like flexible manufacturing systems and a well-trained workforce.
Interestingly, even Tesla echoed this lesson during the Model 3 ramp-up in 2017–2018: Elon Musk attempted to hyper-automate the assembly line and hit major snags, eventually admitting in 2018 that “excessive automation at Tesla was a mistake” and that “humans are underrated.” Tesla had to reintroduce human oversight to fix problems before re-automating in a more balanced way.
The GM story vividly illustrates the J-curve effect. A revolutionary technology (robots) did eventually deliver on its promise, but only after an initial slump where things got worse before they got better. The key insight: introducing advanced tech can initially slow you down if your organization isn’t ready to leverage it. Without the right processes, training, and expectations, you risk getting stuck in that trough. But with patience, iterative improvements, and a hybrid human–machine approach, the curve ultimately turns upward.
What began as GM’s costly failure became, years later, a case of dramatic productivity gains—once the surrounding ecosystem caught up.
Fast forward to today—enterprises are eagerly embracing GenAI tools (from GPT-4-powered copilots to AI assistants for coding and content) with high hopes. Will this wave follow a similar J-curve? Early signs suggest it will. Many organizations report that while GenAI holds huge promise, its introduction often comes with an initial productivity slowdown and frustration, especially in midmarket companies that lack in-house AI expertise. In my conversations with technical leaders, I hear a common refrain: adopting GenAI is not a plug-and-play boon. Rather, it triggers a transition period akin to past tech revolutions—a phase of learning curves, workflow kinks, and unforeseen obstacles that we have to work through.
Why exactly might productivity dip at first when rolling out GenAI internally? Let’s examine some of the early-phase challenges that tend to put organizations in the “trough” of the J-curve before the improvements kick in:
Mastering generative AI isn’t instantaneous. It demands new skills (often dubbed “prompt engineering” or AI orchestration) that most employees—and even leaders—are still figuring out. According to one industry survey, 42% of companies say a lack of AI skills is blocking progress in their AI projects. Early on, users may not know how to formulate effective prompts or how to interpret the AI’s outputs, leading to a lot of trial-and-error. One tech leader on LinkedIn noted that organizations without robust training and change management for tools like GitHub Copilot saw user engagement drop by up to 60% once the early excitement wore off. In other words, many employees tried the AI tool but then under-utilized or abandoned it when they weren’t sure how to integrate it into their daily workflow. Clearly, without deliberate upskilling and practice, the productivity paradox rears its head: you have a powerful new tool, but people aren’t yet reaping its benefits—in fact, they might even be slower as they grapple with how to use it effectively.
Even after employees start using GenAI, they often don’t use it efficiently at first. Writing effective prompts is an art that improves with practice. In the early stages, many prompts are suboptimal—too vague or poorly structured—which leads the AI to produce irrelevant or incorrect outputs. The user then has to refine the request (maybe multiple times) until the result is usable. This kind of trial-and-error reduces productivity initially. For instance, a marketing manager asking an AI copywriter for a product description might have to tweak the prompt several times to get the tone and details right. Each iteration takes time, whereas an expert user might coax a great result in one go.
Moreover, best practices for prompting (providing enough context, specifying the desired format, etc.) aren’t obvious to newcomers. Early on, the quality of AI outputs can be hit-or-miss, forcing users to spend extra time reviewing and editing the content.
In short, prompt engineering itself becomes a new task that eats up employee hours during the initial rollout. Over time, as staff learn which prompt formulations work best—and as the company develops shared prompt libraries or guidelines—this inefficiency will fade. But in the first phase, prompt-writing overhead is a major contributor to the J-curve’s dip.
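One way a shared “prompt cookbook” can work in practice is as a small library of reusable templates that bake in the context and output-format guidance newcomers tend to omit. Here’s a minimal, purely illustrative sketch—the names `PROMPT_COOKBOOK` and `build_prompt` and the template fields are assumptions for this example, not any standard API:

```python
# Minimal sketch of a shared "prompt cookbook": reusable templates that
# encode the context and format guidance novice prompt-writers often omit.
# All names and fields here are illustrative, not from any real library.

PROMPT_COOKBOOK = {
    "product_description": (
        "You are a copywriter for {company}. Write a {word_count}-word "
        "product description for {product}. Tone: {tone}. Audience: "
        "{audience}. Return plain text only, with no headings."
    ),
}

def build_prompt(template_name: str, **fields) -> str:
    """Fill a cookbook template; fail loudly if a required field is missing."""
    template = PROMPT_COOKBOOK[template_name]
    try:
        return template.format(**fields)
    except KeyError as missing:
        raise ValueError(
            f"Prompt '{template_name}' needs field {missing}"
        ) from None

# A marketing user fills in the blanks instead of writing a prompt cold:
prompt = build_prompt(
    "product_description",
    company="Acme", product="solar lantern",
    word_count=80, tone="friendly", audience="campers",
)
print(prompt)
```

The point of the design is that the hard-won prompting lessons (specify tone, audience, length, format) live in one reviewed place, so each user doesn’t rediscover them through trial and error.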
Another early hurdle is the need to double-check AI outputs. Today’s GenAI models are probabilistic and prone to making mistakes or even fabricating information (“hallucinations”). In an enterprise setting, that means every AI-generated report, email, or code snippet must be carefully reviewed by a human—a new quality assurance loop that slows down workflows at first.
For example, if an AI coding assistant writes 100 lines of code, a senior developer might spend as much time reviewing and debugging that code as they would have spent writing it from scratch. If errors are found (and early on, they often are), you have to rework the output, which cancels out much of the AI’s speed advantage.
A recent Gartner survey found that even as trust in AI grows, issues like inaccurate outputs and hallucinations persist due to data quality and governance gaps. Unsurprisingly, many companies have policies requiring human oversight of AI output for compliance and accuracy. This is the right approach—you don’t want unchecked AI mistakes slipping through—but it dampens initial productivity gains.
Yes, the AI can draft a document in 2 minutes, but if you then spend 30 minutes verifying and editing it, the net time saved is minimal.
There’s a balance to strike in how employees use the AI early on. On one extreme, some users over-rely on the AI and trust it too much, too soon. If they blindly accept AI outputs without critical review, mistakes can slip through and cause more work later. For example, a customer support rep might send an AI-drafted email to a client without proper fact-checking—only to find it contained an error, requiring an apology and correction. Such incidents not only create rework, they also shake confidence in the tool. On the other extreme, some users distrust the AI so much that they redo its work from scratch or quietly abandon it, forfeiting the gains entirely. Both extremes prolong the dip.
Another pitfall is how the AI fits into existing workflows. Dropping a powerful AI system into a poorly designed process can initially create extra steps or confusion. If you haven’t streamlined the workflow, employees might waste time moving outputs from the AI into other systems or double-checking things manually. The true efficiency gains only emerge after you redesign processes to integrate the AI smoothly.
Midmarket firms, which often lack dedicated process engineers, can struggle here—they deploy a GenAI tool, but people end up shoehorning it into old workflows. Until you adjust roles and procedures to fully leverage the AI, some friction will persist and blunt the short-term productivity gains.
Many companies approach GenAI with sky-high expectations, fueled by vendor hype. When early pilots don’t show obvious ROI, skepticism can grow. In one survey, 46% of GitHub Copilot users were saving time (up to 14 minutes a day), yet only 3% of organizations felt it had delivered significant ROI so far. This kind of disconnect—minor efficiency gains that haven’t translated into bottom-line impact—is classic J-curve territory.
This perception gap can easily lead to disappointment. Executives might tap the brakes on further AI investment, or employees might lose enthusiasm, creating a self-fulfilling lull. In one case, a government department admitted they set expectations too high for their GenAI pilot—assuming it would magically handle all needs—which led to low usage and disappointment when it didn’t.
The lesson for leaders is to recognize that GenAI’s big productivity gains won’t be immediate or automatic. There will be a period of modest returns—maybe even some backward steps—while people learn, experiment, and while the AI tools themselves are tuned and integrated. This is the modern echo of the productivity paradox: if you expect instant ROI from day one, you’re bound to be let down. Patience and realistic goals are critical in this first phase.
Not every organization will experience this initial trough to the same degree. Those with strong digital cultures and in-house AI expertise may speed through the early phase faster, whereas less tech-savvy teams might linger in it longer. The good news is that, just as with past innovations, the curve does bend upward in time.
We’re already seeing early adopters pass the nadir and start climbing. Microsoft’s internal studies of early Copilot deployments, for example, found that after the first few weeks of acclimation, users began discovering more use cases and trusting the AI with a greater share of their work—eventually boosting productivity significantly on routine tasks. Similarly, a midmarket consulting firm observed that their analysts initially spent a lot of time editing AI-generated reports, but after creating an internal “prompt cookbook” and quality checklist, their report writing time dropped by 30% as the AI was leveraged more effectively. These turning points suggest the team had pushed past the bottom of the J-curve.
Broadly, we’re likely still near the bottom of the GenAI adoption curve at a macro level—but the upswing is on the horizon if organizations navigate the transition carefully. So how can you accelerate your climb upward?
In the next section, I’ll outline concrete strategies to help your company mitigate the dip and fast-track the payoff.
First, acknowledge upfront that a productivity dip is normal in the early stages of a transformative change. As leaders, we need to communicate this reality to all stakeholders—executives, boards, and front-line teams alike. Frame the initial rollout as a learning and capability-building phase rather than a time of immediate ROI. By normalizing the “temporary pain for long-term gain,” you defuse the pressure to declare failure too soon.
This kind of expectation-setting prevents early disappointment. If everyone knows that the first lap is about getting up to speed, they won’t be disillusioned when productivity stats look flat in month one. By setting a realistic baseline—“we expect a dip before the lift”—you buy your initiative the runway needed to reach the payoff stage without premature panic.
Start with a comprehensive training program. Ensure employees become not just literate but truly fluent in using the new AI tools. For GenAI, that means educating staff on effective use cases, teaching them how to craft good prompts, how to interpret AI outputs, and how to troubleshoot when the AI inevitably errs. Some companies set up an internal “AI Academy” or center of excellence to provide ongoing training, office hours, and best-practice sharing.
The goal is to build confidence and competence across the user base as quickly as possible. When people know how to use the AI well, they’ll hit their stride faster.
In parallel, re-engineer your business processes to fully leverage the AI. Often our existing workflows are built around older tools and assumptions. If you simply drop AI into those workflows, you might not get much benefit—or you may even add friction. Take a hard look at how work gets done and redesign it to exploit the AI’s strengths. For example, if your workflow still requires three layers of approval on every AI-generated draft, you’ll give back a lot of the time the AI saves. Look for opportunities to streamline such steps when using the AI. The idea is to integrate the technology thoughtfully: let the AI handle what it’s good at, and eliminate or rethink steps that create bottlenecks in the new process.
Yes, these complementary efforts in training and process overhaul come at a cost—they require time, money, and organizational focus. They’re part of the “intangible capital” that doesn’t immediately show ROI. But this is exactly the kind of work that pays off by enabling the technology’s benefits to emerge faster.
Companies that invest in user enablement and workflow optimization alongside the tech implementation consistently see the productivity uptick sooner than those that don’t. In short, the more you enable your people and streamline your operations for AI, the quicker you’ll climb out of the trough.
Don’t just hand out the new AI tool and hope for the best—actively support your users through the early transition. In major software rollouts, it’s a common best practice to have a dedicated “hyper-care” period right after go-live, where extra support resources are on standby to help users and quickly troubleshoot issues. We should apply the same idea to GenAI deployments. Set up a special helpdesk, war room, or champion user group for the first several weeks of your rollout. Staff it with experts or power users who deeply understand the AI tool and its quirks.
The goal is to make sure employees aren’t left flailing or frustrated as they start using the AI day-to-day. If someone is getting nonsense outputs or is unsure how to prompt the system for a specific task, they should have an easy way to get help immediately. Quick issue resolution and some hand-holding in this phase will prevent small problems from ballooning into disengagement. It also creates a feedback loop: your hyper-care team will learn where users struggle or where the AI might be misfiring (say, the model lacks certain domain knowledge or keeps making a particular mistake), and you can then address those issues promptly—smoothing out the bumps on the J-curve road.
Think of hyper-care as providing training wheels. It keeps the organization upright during those wobbly first miles. As users gain confidence and the most common kinks are worked out, you can scale back the special support. But early on, this extra help can dramatically shorten the dip by ensuring issues don’t fester and enthusiasm doesn’t fade.
When introducing GenAI, avoid a big-bang rollout across every team and workflow. Instead, start with a few carefully chosen pilot projects where the technology can shine quickly. By narrowing the initial scope, you can generate early wins that prove the value of the AI—and you contain the “blast radius” of any productivity dip to a smaller arena.
Ask yourself: where in our business is work especially slow, tedious, or ripe for improvement? Those are ideal candidates for an AI pilot. For example, maybe your finance department spends hours slogging through contract documents—that could be a great place to try an AI document summarizer. Choose one or two areas with high pain points and apply the GenAI solution there with plenty of support.
Research backs this targeted approach. Gartner found that focused Copilot deployments (for instance, using the AI to automate a specific report in finance) yielded measurable results more consistently than broad, unfocused implementations. Early, tangible successes are vital—they become proof points that the J-curve will indeed turn upward. Once you’ve achieved success in one domain (say, your AI tool helped the support team handle tickets 50% faster after the initial learning period), celebrate and publicize that win internally. It will build confidence among stakeholders and users. You can then expand to other use cases armed with the lessons learned. By scaling in phases—one high-impact use case at a time—you allow the organization to learn and adapt in a controlled environment, rather than being overwhelmed everywhere at once. This makes the early dip shallower and shorter.
In the early phase, put humans “in the loop” to double-check the AI’s outputs—code reviews, content edits, fact checks, etc. But plan to dial back this oversight as the AI proves its reliability. If you treat every AI result with deep suspicion forever, you’ll never reap efficiency gains (the review process will become a permanent tax). So initially, review everything, but as error rates drop and trust builds, move to spot-checks or sample audits instead of exhaustive scrutiny.
Make the oversight risk-based. High-stakes outputs (legal contracts, public communications) might always get rigorous human review, whereas low-risk routine work can have its checks lightened over time. Calibrate the “human in the loop” level using data: if your AI coding assistant hasn’t produced a serious bug after dozens of code submissions, you can scale back to cursory reviews for its code. By progressively relaxing your safeguards in tandem with rising confidence, you maintain quality control without letting those controls strangle the efficiency gains. This way you avoid getting stuck in eternal training-wheels mode, and you free your team to move faster once the AI consistently performs well.
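The risk-based calibration described above can be made explicit as a simple policy: review depth is a function of the output’s risk tier and the tool’s recently observed error rate. The sketch below is hypothetical—the tier names and thresholds are illustrative assumptions you would tune to your own data, not industry standards:

```python
# Hypothetical risk-based "human in the loop" policy. Review depth is chosen
# from the output's risk tier plus the AI tool's observed recent error rate.
# Tier names and thresholds are illustrative assumptions, not standards.

def review_level(risk_tier: str, recent_error_rate: float) -> str:
    """Return 'full', 'spot_check', or 'sample_audit' for an AI output."""
    if risk_tier == "high":            # e.g. legal contracts, public comms
        return "full"                  # always gets exhaustive human review
    if recent_error_rate > 0.05:       # tool still unreliable: check everything
        return "full"
    if recent_error_rate > 0.01:       # improving: review a random subset
        return "spot_check"
    return "sample_audit"              # trusted: periodic audits only

# As the measured error rate falls, oversight relaxes automatically:
print(review_level("low", 0.10))   # early days
print(review_level("low", 0.02))   # trust building
print(review_level("low", 0.001))  # proven reliable
print(review_level("high", 0.0))   # high-stakes work stays fully reviewed
```

Encoding the policy this way keeps the safeguards from becoming a permanent tax: the controls loosen on evidence, not on gut feel, and high-stakes work never loses its full review.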
A big lesson from the GM story is to resist automating everything all at once. It’s tempting to push the AI to handle 100% of a process right away, but that approach can backfire. Instead, take a gradual, flexible path with automation. Figure out what the AI truly excels at, and let it handle those tasks—but keep humans in control of the parts that the AI isn’t great at (or that benefit from human judgment and creativity). In other words, use a hybrid human–AI workflow rather than trying to replace the human element entirely on day one.
If you automate in increments, you can validate each step and get employees comfortable with the changes. As the Tesla episode showed, even Elon Musk had to dial back his automation ambitions when the “lights-out” factory plans hit trouble. The smart approach is to plan for a phased increase in automation as trust in the AI grows and its capabilities prove out.
Concretely, you might start with the AI assisting humans (e.g. drafting content for human review, or handling simple support tickets while complex ones stay with people). As the AI earns its stripes, expand its role. But maintain flexibility: if certain tasks turn out to be too error-prone for the AI, keep those under human supervision for longer. By not overreaching initially, you avoid plunging deeper into the J-curve than necessary. Instead, you climb steadily, automating step by step and making sure each stage works well before moving on.
When you’re in the thick of the J-curve’s low point, the standard performance metrics might mislead you. If you focus only on immediate productivity outputs (like output per hour or short-term ROI), you could conclude the GenAI project isn’t delivering—when in fact, it’s laying the groundwork for future gains. To keep a clear view, make sure to track leading indicators of progress, not just the usual lagging ones.
Leading indicators are metrics that signal future productivity improvements. For example, how many employees have completed AI training or earned certifications on the new tool? How many teams are actively using the AI each week, and for how many tasks? Are cycle times for certain sub-tasks getting shorter? Is the quality or consistency of work improving, even if the quantity hasn’t yet? These are all signs that the organization is building the capabilities that will later translate into big productivity boosts.
Maybe in month one your content team isn’t producing more articles than before, but you find that the AI has improved consistency of style or freed up 10% of writers’ time (even if that time isn’t fully redeployed to output yet). Celebrate and report those kinds of gains. They indicate you’re making progress along the curve even if the bottom-line numbers haven’t moved yet.
By measuring these leading indicators, you avoid the trap of calling the initiative a failure prematurely. It also helps you adjust your approach. For instance, if teams who spent more time in training are now using the AI more effectively, that’s a cue to invest further in training across the board. In short, choose metrics that capture the early signs of success, not just the end results.
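A leading indicator like weekly active AI usage can be computed from even a rudimentary usage log. The sketch below is illustrative only—the log schema (`user`, `week`, `tasks_with_ai`) is a made-up example, not a real telemetry format:

```python
# Illustrative sketch: deriving a leading indicator (weekly active AI users)
# from a simple usage log. The log schema here is an assumption for the
# example, not a real telemetry format.

from collections import defaultdict

usage_log = [
    {"user": "ana", "week": 1, "tasks_with_ai": 2},
    {"user": "ben", "week": 1, "tasks_with_ai": 0},
    {"user": "ana", "week": 2, "tasks_with_ai": 5},
    {"user": "ben", "week": 2, "tasks_with_ai": 3},
]

def weekly_active_users(log):
    """Count distinct users who completed at least one AI-assisted task per week."""
    active = defaultdict(set)
    for row in log:
        if row["tasks_with_ai"] > 0:
            active[row["week"]].add(row["user"])
    return {week: len(users) for week, users in sorted(active.items())}

# Rising counts signal the curve is turning even before output metrics move.
print(weekly_active_users(usage_log))
```

Even this crude count tells you something ROI figures can’t yet: whether adoption is broadening week over week, which is the precondition for the later productivity lift.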
Finally, remember that overcoming the J-curve dip is as much a mindset challenge as a technical one. Your organization’s culture will determine whether people persevere through the early growing pains or give up. As a leader, you should set the tone that initial setbacks or slow results are not failures—they’re learning opportunities. Encourage your teams to approach the GenAI implementation as an experiment, where feedback and iteration are expected.
For example, suppose your marketing team runs an AI-generated campaign and the results fall flat. Instead of declaring “AI doesn’t work for marketing,” frame it as valuable data: what can we learn? Maybe the prompt for the AI wasn’t quite right, or perhaps that campaign wasn’t the best use case—how can we tweak our approach and try again? This kind of continuous improvement mindset needs to be modeled from the top. I make a point to highlight lessons learned from early missteps and to praise teams for refining their approach, not just for quick wins.
Keeping the long-term vision front and center is also crucial. Remind everyone of the ultimate goal—the productivity gains and innovation we expect once the kinks are worked out. At the same time, be present on the ground: listen to front-line employees’ frustrations, take their feedback seriously, and show that you’re addressing issues. If staff see leadership is committed to making the tool work (through improvements, updates, additional training, etc.), they’ll be more likely to stick with it rather than revert to old habits.
In essence, you want to build a culture that is patient but persistent. Set realistic expectations (as in strategy #1), but also cultivate excitement for the journey. Celebrate small milestones of progress to keep morale up. By fostering resilience and a learning mindset, your team will stay engaged long enough to get through the rough patch and start reaping the benefits. As economist Erik Brynjolfsson—who helped coin the term “productivity J-curve”—observed, only a subset of firms truly invest in the organizational changes needed to unlock new technologies, and those are the companies that ultimately pull ahead while others lag behind. Make sure your company is in that forward-thinking group. With a bit of patience and continuous improvement, “worse before better” will eventually become better—and significantly so.
Every transformative technology has its J-curve—an initial period where things get worse before they get exponentially better. We saw how history repeats this pattern, from GM’s automation missteps to today’s AI initiatives. Crucially, that early struggle is not a sign of failure; it’s a sign that time and complementary effort are required to unlock the technology’s value. In today’s generative AI rollouts, you should likewise anticipate a “productivity paradox” phase—a stretch where output may plateau or even dip as people, processes, and the AI find their rhythm. Forward-looking leaders will treat this temporary lull as an investment phase, not a setback.
The message is clear: short-term productivity slowdowns pave the way for long-term breakthroughs. Don’t let an early dip discourage you from the end goal. Instead of pulling back, double down on the enablers of future success—your people, their training, your data foundations, and diligent change management. Measure what truly matters in the long run, and be patient but persistent in driving adoption. If you navigate the J-curve thoughtfully using strategies like setting expectations, training users, redesigning workflows, and nurturing a learning culture, you’ll position your firm to rapidly ascend the upward slope of the curve.
A year from now, your organization could be reaping significant gains from GenAI while less-prepared competitors are still catching up. In sum, the J-curve isn’t a myth or a fluke—it’s a feature of transformational change. Embrace it and manage through it, and you won’t fall into the trap of declaring “GenAI failed us” when in reality you were just on the brink of your greatest leap forward.