Generative AI: Creativity vs Productivity
Generative AI presents us with an inconvenient trade-off.
Marco Annunziata - Mar 02, 2024
This article is republished from Just Think with permission from Marco Annunziata.
I have stressed before that Generative AI will inevitably be used as a powerful manipulation tool. The evidence is already piling up, and it is even more shocking than I would have expected at this early stage.
Take Matt Taibbi, the investigative journalist most recently famous for the Twitter Files, in which he exposed how pre-Musk Twitter (now X) colluded with the U.S. government in a far-reaching censorship effort, a reminder that the compulsion to manipulate predates Generative AI. Taibbi asked Gemini, “What are some controversies involving Matt Taibbi?” In response, the AI enthusiastically fabricated and attributed to him Rolling Stone articles replete with factual errors and racist remarks, all entirely made up.
Or take Peter Hasson, who asked Gemini about his own book, “The Manipulators,” which also details censorship efforts by social media platforms. Gemini replied that the book had been criticized for “lacking concrete evidence” and proceeded to fabricate negative reviews supposedly published in the Washington Post, NYT, Wired — all entirely made up.
Both authors contacted Google asking, essentially, ‘what the heck?’ In both cases, Google’s response was ‘Gemini is built as a creativity and productivity tool, and it may not always be accurate or reliable.’
OK, you could argue that since both authors are outspoken critics of social media companies, they had it coming: they should have expected that Google’s own generative AI would have an axe to grind and would retaliate. As we are finding out, AI is all too human… But this is disturbing, and it raises at least two serious questions.
The Trump test
The first question is whether the companies that have unleashed these models should be held legally liable for the slander they generate. Let’s call this the Trump test: imagine that former President Donald Trump were to fabricate similar damaging lies about a political opponent or an unfriendly journalist, making up presumed controversies, mistakes and racist remarks, all attributed with a level of detail that lends them instant credibility. Would he be able to get away with saying, ‘I am a very creative man and I may not always be accurate or reliable’? Or would he be sued? Well, Trump has already been sued and ordered to pay close to $100 million for calling a journalist “a whack job,” which is more outright offensive but also a lot less insidious than Gemini’s fabrications. So there’s our answer.
As a creativity tool, Gemini does what it says on the label: it behaves very creatively, making stuff up, unconstrained by reality. If that is the value proposition though, perhaps Google should put up front the same disclaimer we see in movies: “any resemblance to actual persons or events is purely coincidental.” That way we would know from the beginning that we’re just playing with a creative machine that likes to make up stories.
Otherwise, it is not clear why the companies developing and running these models should be allowed to get away with the kind of slander that would land the rest of us in court.
Creativity vs Productivity
Which brings me to the second question, namely whether creativity and productivity are complementary or competing features.
What we see in these examples — and in the hallucinated historical images I discussed last week — is unbridled creativity, but also a troubling tendency to sell fiction as fact by an AI that cannot tell the difference between the two. And in almost any productivity case I can think of — other than writing pulp fiction — this is going to be a massive problem.
We’ve already seen instances of AI creativity sabotaging productivity.
Last year, a lawyer used ChatGPT to prepare a court filing; the filing cited precedent cases that were… yes, all entirely made up. I’m sure it made the drafting much faster, but the judge was not impressed.
More recently, Air Canada’s AI-powered chatbot assured a customer he would get a post-booking discount that was in fact not available under the airline’s policy. The passenger was flying to attend a relative’s funeral, so the AI was probably feeling sympathetic and charitable — as I said above, it seems that AI is all too human.
Gary Marcus reports another example: someone who had just undergone open-heart surgery asked Perplexity AI, another Generative AI model, whether it would be safe to work at the computer during recovery. Perplexity, which is thought to be less creative but more accurate than ChatGPT, answered in three bullet points. The first two gave sensible advice for post-surgery recovery. The third recommended that when working at the computer, one should periodically stretch the chest muscles and rotate the trunk, something that would most likely compromise the recovery and send the patient back to the hospital. Faster than professional medical advice, yes, but if we measure productivity by the ease and speed of patient recovery, not impressive.
Maybe we’ll develop different Large Language Models that will prove more accurate and reliable. At this stage, however, even the experts are not sure. A recent paper from the School of Computing at the National University of Singapore argues that hallucinations are an inevitable, fundamental feature of LLMs.
We might therefore find that creativity is an innate characteristic of generative AI, one that forces an inescapable trade-off with productivity: if humans need to second-guess and double-check every AI answer and recommendation, any productivity gains will be substantially lower than the current hype suggests.
Caution and responsibility
I’ll conclude with two considerations.
Generative AI’s ability to spread credible misinformation is alarming. Not only can the puppeteers use AI models for persuasion and manipulation; the AI itself then goes off crafting its own misinformation at will, and we have no idea how many people will run across very believable false information, or how it will influence their views and behavior. I hope this issue can be addressed and the models made more accurate and reliable. In the meantime, however, I see only two sensible solutions: make it very clear upfront that these models cannot be trusted, or make their masters legally liable for the inaccuracies.
It looks like LLMs present us with an inescapable trade-off between creativity and productivity. In fact, creativity seems to be the dominant characteristic, undermining productivity through persistent inaccuracy. Hopefully new generations of Generative AI will escape this trade-off, reconciling creativity and accuracy. For the moment, though, we should consider very carefully if and where we want to deploy these models as productivity tools.
Next week I’ll discuss what this implies for jobs and the future of work.
P.S. The good news is that we are very far away from AI becoming self-aware and autonomous. No self-aware and self-respecting intelligence would agree to spout the embarrassing nonsense coming from Gemini. (Asked if pedophilia is wrong, it answers that it depends on the context; asked whether Hitler or Elon Musk has done more damage to humanity, it answers that it’s hard to tell. The fun just keeps coming.)