In 1987, the economist Robert Solow famously remarked that you can see the computer age “everywhere but in the productivity statistics.” While computers were clearly becoming more prevalent in the economy, their effects on productivity were not yet showing up in the data.

The problem wasn’t that computers were inherently unproductive, but rather that it takes time for businesses and organizations to adapt to new technologies and figure out how to use them effectively. This is true for all new technologies, not just computers.

Some technologies are transformative, but many are just useful, interesting or fun. It’s difficult to predict where a new technology will fall along this spectrum.

Electricity was a revolutionary technology that became widely available in the late 1800s, but its impact on productivity took decades to show up. Edison built his first power plant in 1881, yet 20 years later less than 5% of the mechanical drive power in American factories came from electric motors.

For factory owners, swapping their factory’s single large steam engine for an equally large electric motor carried a high cost but offered little benefit. One big engine at the core of the factory still powered all movement; merely switching the source of that power changed little.

It wasn’t until factory owners started rethinking their processes and business models that they could truly take advantage of the new technology and boost their productivity. Steam was very inefficient at small scales, but electric motors could be any size you wanted. Electricity allowed power to be delivered exactly where and when it was needed, and multiple small motors let factories be organized around the logic of a production line rather than around a single drive shaft, making production both more efficient and more flexible.

This pattern can be seen throughout history. Whenever a new technology emerges, it takes time for businesses and organizations to figure out how to use it effectively. Adoption is slow at first, until we get the hang of it. Once we do, however, the benefits can be enormous.

So how should we think about current AI technologies and the path they will take before they show up in the productivity statistics?

Machine Learning, a subset of AI, had its first big breakthroughs in the early 2010s, but it hasn’t yet made a noticeable impact on general productivity.

So far, most of the benefits of machine learning have shown up in consumer-facing applications, such as predictive text that completes our sentences, vastly improved voice recognition, and better digital recommendations. This was Predictive AI. But in 2022, we saw some quantum leaps in another branch of Machine Learning – Generative AI.

The first wave of Predictive AI took large amounts of data, analysed the patterns in it and used them to make predictions: the next word in a sentence, the next song on a playlist, whether an image contained a cat, whether a mole was cancerous. The next wave of Generative AI uses the same kind of analysis to generate new patterns – patterns of text (sentences and paragraphs), images, music and more.

Large language models (like GPT-3), for example, are AI systems trained on vast amounts of text data, which allows them to generate new text that matches the patterns of real-world writing.
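To make the distinction concrete, here is a minimal sketch – my own illustration, not anything specific to GPT-3 – assuming Python, the open-source Hugging Face transformers library and small, publicly available models. The predictive model labels an input it is given; the generative model produces new text of its own.

```python
# A rough sketch of Predictive vs Generative AI,
# assuming the Hugging Face `transformers` library and small public models.
from transformers import pipeline

# Predictive AI: analyse an existing input and predict a label for it.
classifier = pipeline("sentiment-analysis")
print(classifier("This newsletter is off to a promising start."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]

# Generative AI: continue a pattern by producing new text.
generator = pipeline("text-generation", model="gpt2")
print(generator("The factory of the future will", max_new_tokens=20))
```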

Generative AI is already helping developers write code, lawyers draft basic legal documents, and producers generate music.

So how long will it take before we see the effects of Generative AI in GDP numbers, if at all? And how should we think about the impact it might have on jobs and the future of work?

To answer that question, we need to understand the inherent capabilities and limitations of both Predictive and Generative AI.

Currently, there are certain tasks that Generative AI is decent-but-not-great at, but it is likely to improve rapidly over time. Today it writes like a 16-year-old; soon that will be a college student, then an intern, then a junior employee. In any area where humans can set a goal, the AI can generate a pattern or a prediction, which can then be rated against that goal. Well-defined fields like coding, law, medicine or translation all seem like prime candidates: code either works or it doesn’t, and a diagnosis is either correct or it isn’t.
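As a purely hypothetical sketch of that loop (the function names below are placeholders, not a real product or API): set a goal, let the model generate candidates, and rate each one against the goal. For coding, the rating can simply be whether the generated code passes a known test.

```python
# Hypothetical "set a goal, generate, rate" loop.
# `generate_candidate` is a stand-in for a generative model, not a real API.
import random

def generate_candidate(prompt: str) -> str:
    """Placeholder for a model proposing code in response to `prompt`."""
    return random.choice([
        "def add(a, b): return a + b",
        "def add(a, b): return a - b",  # a wrong guess the rating step should reject
    ])

def meets_goal(candidate: str) -> bool:
    """The human-defined goal: the generated code either works or it doesn't."""
    namespace = {}
    exec(candidate, namespace)           # run the generated code
    return namespace["add"](2, 3) == 5   # check it against a known answer

accepted = None
for _ in range(10):
    candidate = generate_candidate("write a function that adds two numbers")
    if meets_goal(candidate):
        accepted = candidate
        break

print(accepted)
```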

On the other hand, there are certain tasks that Predictive and Generative AI may never be able to do as well as humans, or may never be able to do at all. Generative AI can draft a legal document, but we still need humans to review and approve it, and of course to set the right task in the first place.

Once we have a better understanding of what Generative AI is capable of, we can then think about the jobs, and the tasks within those jobs, that it may be able to do better than humans. That will help us think about the potential impact on employment.

Lastly, we need to get an accurate idea of the time horizons we’re looking at. Not just of when the technology will be capable, but how long it will take before new, more productive models of work are built around it. Technological capability looks set to advance dramatically over the next five years – will the widespread productivity impact take a further 5, 10 or 20 years to materialise?

All of this thinking might also help us generate some ideas about how we can speed the process up. What if someone in 1881 had done the analysis and developed the conceptual frameworks to show that modularisation was a key benefit of electricity? Could it have pulled the electric age forward by 10 years? Can academics, businesses and policymakers achieve the same acceleration for AI?

That’s a key topic I’ll be exploring in the newsletter in 2023, alongside the usual political economy topics. I’m excited. Happy new year!