Gen AI vs Gen Z — Artificial Intelligence And Jobs

Generative AI will neither wipe out white collar jobs, nor turn everyone into an expert. Investing in education and skills remains crucial.

Marco Annunziata - Mar 09, 2024

This article is republished from Just Think with permission from Marco Annunziata.

I singled out Gen Z because it makes for a catchy title, but also because its members (born between 1997 and 2012) will be the first to feel the full brunt of the new wave of Artificial Intelligence. 

The rise of Large Language Models like ChatGPT and generative AI programs like DALL-E and Midjourney has brought back predictions of a massive technological disruption to the labor market, this time with a new twist. Until recently, the mainstream view was that innovation would automate “routine” jobs, both manual and cognitive, whereas humans would maintain a decisive advantage in jobs requiring creativity, critical thinking, or dexterity.  (See for example Autor (2015) and Autor, Levy and Murnane (2003)). In other words, robots and AI would continue the trend that over the past couple of decades has driven an increasing segmentation of the labor market, exacerbating income inequality. 

AI: The Great Leveler?

Now, however, Generative AI has been presented as a great leveler: capable of creativity and critical thinking, hungry to take over the more interesting and better remunerated white collar jobs. An AI that can easily beat us at Chess and Go would no longer accept being relegated to a support role, let alone performing boring monotonous tasks. College graduates would face a greater threat than factory workers.

The righteous satisfaction vibrating in these arguments has been tempered by the uncomfortable consideration that this new development might leave humans with no place to hide—if machines can outperform us at creative cognitive tasks as well, we might all soon be out of a job. 

Generative deception

Generative AI, however, is a deceptive technology. Because it talks like us and draws like us, we think it’s like us. It’s easy to poke fun at a clumsy humanoid robot, but when ChatGPT gives us a coherent answer in fluid English (or French, or…) we gape in awe. Take this quote from a Wall Street Journal interview with Boston Consulting Group’s François Candelon: “While many innovative technologies are met with a certain level of incomprehension, people seem to immediately grasp the applications of ChatGPT.” This to me perfectly captures the fallacy: we are immediately convinced that we grasp the implications, whereas we do not. 

MIT economist David Autor published a very thoughtful piece where he takes a more constructive view of how Gen AI will impact jobs. He argues that AI will “enable a larger set of workers equipped with necessary foundational training to perform higher-stakes decision-making tasks currently arrogated to elite experts, such as doctors, lawyers, software engineers and college professors,” and thereby bolster the middle class. 

This is a powerful argument: it claims that Gen AI can lift many more people into the high-earning echelons of experts. But it needs to be handled with caution. First, all the examples Autor himself brings in the article are existing experts being made more productive by AI, not people enabled by AI to join the ranks of the experts, to perform jobs that were previously out of their reach. And indeed he later says “rather than making expertise unnecessary, tools often make it more valuable by extending its efficacy and scope.” So the devil is in the detail of the first sentence I quoted, namely that the benefits will accrue to “workers equipped with necessary foundational training.” AI might help more people climb higher on the skilled jobs ladder, but only if they have the right skills. 


The second, very important caution lies in the spectacular fallibility of Gen AI models. I have discussed this in my previous two posts, and AI guru Gary Marcus keeps documenting some of the most striking and hilarious examples. Gen AI hallucinates, lies, makes stuff up, and gets very basic things completely, utterly wrong—recent examples include four-legged ants and seven-sided chessboards. Gen AI appears unable to build a model of reality, and therefore unable to generalize, to learn and understand the rules of physics, or to distinguish fact from fiction. In casual banter with ChatGPT this might not be too different from what we experience with our average fellow human, but in a critical industrial setting the standards are very different: there is little tolerance for tiny margins of error, let alone complete hallucinations.

The AI handicap

The WSJ interview I mentioned above also reveals that humans aided by AI can perform worse than humans alone. Worse. Here’s BCG’s Candelon:

“On the creative product innovation task, humans supported by AI were much better than humans. But for the problem solving task, [the combination of] human and AI was not even equal to humans…meaning that AI was able to persuade humans, who are well known for their critical thinking…of something that was wrong.”

Just think: humans, “well known for critical thinking,” were easily misled by the AI into believing something wrong, and therefore underperformed their peers who were working without AI support. We give Gen AI too much credit. Because it speaks in an authoritative human-like tone, and we’re told over and over how amazing it is, we end up suspending our critical thinking and trusting the machine—even if the machine can’t tell truth from fiction. 

This is an especially depressing development, because with previous generations of AI the opposite seemed to be the case: in chess, for example, a human teamed with an AI would defeat both humans alone and AIs alone. But those AIs did not mistake a pawn for a queen, or think that a knight can move like a bishop. And the humans did not place blind trust in the AI.

Now instead, on problem solving tasks, humans who ascribe intelligence to Gen AI are more easily led astray. And if we need to be constantly on the alert for AI errors and hallucinations, human workers will need the same level of expertise as now. Delegation does not work when you need to double-check everything yourself.

Raising the average?

The BCG study also indicates that workers considered below average benefit more from AI than those rated above average; a similar result has been found in other studies, for example Brynjolfsson, Li and Raymond (2023), who looked at contact center workers. What do we make of this? Brynjolfsson et al. suggest that Gen AI makes it easier for less experienced workers to learn the best practices of more experienced and skilled ones. That sounds similar to emerging markets growing at a faster pace by adopting technology developed in advanced countries. It helps close the gap, but the advanced countries—and the above average workers—remain ahead.

The danger is that by closing the gap, Gen AI might reduce the incentive to excel. As long as trying harder gets me a 50% performance boost, with the corresponding upside in compensation, it’s definitely worth it; if it just gets me a 5% boost, it might not be. 

Keep honing your skills

Gen AI is being presented—and perceived—as a super powerful brain: it can make you an “expert,” we are told; it can raise the average, turn mediocre into good. If that is the message that gets through, the reaction will be to think that there is no point in studying hard and acquiring skills. 

That would be a mistake. The fallibility and unreliability of Gen AI implies we are still far away from the point where it can supplant human expertise in performance-critical environments (in plain English: wherever getting things done right matters). And the studies showing that Gen AI narrows the gap between high and low performers are just a snapshot: I suspect high performers will find ways to raise the bar further. If Gen AI then keeps pulling the low performers up by spreading the new best practices, that would indicate that high performers bring even more value to the table, raising the efficiency of the broader workforce; in a competitive environment they should be able to capture a good share of this added value.

Bottom line: the smart bet is still to give it your best and acquire as much skill as you can, whether it's a STEM degree, a college education heavily geared toward critical thinking, or solid vocational skills, which are sorely needed and will remain in demand across industry. Gen Z, don't make the mistake of thinking Gen AI will do the thinking for you.
