Do Planners Dream of Electric People?

i (Paranoid Android)

Back in the dark days of the first COVID lockdown, I became preoccupied with something best described as a weird recurring daydream. Severed from direct human contact with my clients and colleagues, I started to wonder if my working life had become entangled in a sort of system akin to The Matrix.

I became interested in the idea that the endless stream of video calls I was participating in - often with people I’d never met in the real world - was the product of an elaborate computer simulation designed to keep me occupied until the world of work was ready to open up properly again. I use the word daydream deliberately (it was not, for instance, a full-blown delusion). At the time there was something super scary about this thought. It felt absurd, comedic and slightly crazy all at the same time - like something from science fiction rather than real life.

Since then, Generative AI has come along. It’s fair to say - obvious, banal even - that in the short time since November 2022, when ChatGPT became widely accessible to the public, this set of technologies has come to dominate discussion about the future of nearly any category or industry you care to think of. It was a long-standing joke at the beginning of my career that each new year would be heralded as ‘the year of mobile’. Mobile’s maturation was much anticipated, but in the end so subtle that even a group of mobile experts writing in The Drum in 2018 couldn’t agree on exactly when it happened. By comparison, Generative AI arrived unannounced and changed everything, instantly. A study by Copyleaks found that between the launch of ChatGPT in November 2022 and March 2024, the volume of AI-generated content on the internet grew by over 8,000%, with such content now accounting for 1.57% of analysed web pages.

ii (The Generation Game)

You may have noticed that the internet is now awash with weird, computer-generated content. It rewards the pleasure centres of our brains, but little else. “The issue is, fundamentally, a problem of platforms. The big tech sites—the ones with the most eyeballs, and those that consume the majority of ad spending—have failed to rein in the bots. They’ve spent billions in pushing the same generative AI products that have contributed to the growing phoniness of the Web. And, most damning of all, they failed to consider the consequences,” says Chris Gadek in Fast Company. Are we entering media’s ‘slop era’? It’s not just content that is being affected; information is too. Google’s search results have on several occasions been observed to be not just inaccurate but patently incorrect - confusing swollen penises with sinus problems, suggesting that eggs will melt, and misunderstanding how medical dressings and sauces differ.

Elsewhere, Gen AI is being used in far more pernicious and damaging ways - filtering ‘real life’ beyond all recognition. Trust in online content was already damaged by the Cambridge Analytica scandal; what comes next feels potentially even scarier and more profound. These technologies are fundamentally reorienting and reshaping the digital landscape, changing the fabric of the web itself - changes which are likely to become even more marked as consumer adoption and usage of Generative AI increases.

Generative AI models are trained on massive amounts of text scraped from the internet, meaning that the consumer adoption of generative AI has brought a degree of radioactivity to its own dataset. As more internet content is created, either partially or entirely, through generative AI, the models will become increasingly inbred, training on content written by their own earlier versions - models which are, on some level, permanently locked in 2023, before the advent of a tool specifically intended to replace content created by human beings.
This is a phenomenon that Jathan Sadowski calls “Habsburg AI”: “a system that is so heavily trained on the outputs of other generative AIs that it becomes an inbred mutant, likely with exaggerated, grotesque features”.
— Ed Zitron, “Are We Watching The Internet Die?”
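The inbreeding dynamic described above can be illustrated with a toy simulation. The sketch below is a deliberate simplification, not how any real model is trained: each ‘generation’ fits a simple statistical model (a normal distribution) to samples drawn from the previous generation’s model, then becomes the training source for the next. Over many generations, the diversity of the original data collapses.

```python
import random
import statistics

def generational_training(mean=0.0, std=1.0, n_samples=10,
                          generations=500, seed=0):
    """Toy 'Habsburg AI' loop: each generation trains only on the
    outputs of the previous generation's model, never on fresh
    human-made data."""
    rng = random.Random(seed)
    history = [(mean, std)]
    for _ in range(generations):
        # 'Content' generated by the current model.
        samples = [rng.gauss(mean, std) for _ in range(n_samples)]
        # The next model is fitted to that synthetic content alone.
        mean = statistics.mean(samples)
        std = statistics.stdev(samples)
        history.append((mean, std))
    return history

history = generational_training()
print(f"generation 0 spread:   {history[0][1]:.4f}")
print(f"generation 500 spread: {history[-1][1]:.4f}")
```

Run it and the ‘spread’ (standard deviation) of what the model can produce shrinks towards nothing: the statistical analogue of the exaggerated, narrowing features Sadowski describes.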

iii (On the Internet, no one knows you’re a dog)

It’s no coincidence, then, that there has been a resurgence of articles discussing ‘Dead Internet Theory’ - the idea that the vast majority of the internet has been overrun by bots and other forms of artificial user. James Ball, writing recently in Prospect, tells us “Dead Internet Theory says that you’re the only human left online. It started out as a conspiratorial joke, but it is edging ever closer to reality” (Ball, 2024). The science fiction of my lockdown daydream edges ever closer with each new iteration of tools like OpenAI’s ChatGPT or Anthropic’s Claude. Claude 3.5, released last month, features a ‘computer use’ mode which enables it to interact with any desktop app on a PC, just as a human user would. At the same time, videos of recruitment processes being conducted by artificial ‘recruiters’ have been doing the rounds. In one such video, Ethan Mollick instructs the same programme used in those recruitment processes - HeyGen - to replicate the most stereotypical Zoom call ever. And whilst this specific example might border on the comedic, it’s important to remember that the technology will never be as bad as it is today. The pace of change is rapid.

Back in 2015, my former employer, the media agency PHD, published a book called Sentience. It argued that by 2029 we would start to see the arrival of ‘Strong AI’ - machines smarter than human beings. These might take the form of VPAs - Virtual Personal Assistants - systems which, if sophisticated enough, might allow human beings to devolve responsibility for tasks like shopping entirely. This devolution would create a new dynamic for advertisers: instead of advertising to people - winning mind share - the task would become one of advertising to machines and algorithms instead. This has been a topic within marketing circles for some time - in a vague and lightweight way, we already do it via channels and techniques such as SEO and social. Dario Amodei, CEO of Anthropic, has said he thinks we could have a ‘superintelligence’ as soon as 2026. Marketing to machines is no longer a pipe dream, but increasingly a challenge we need to start addressing now.

iv (Do Planners Dream of Electric People?)

The impact of Generative AI on creative output is well documented. The ability to create imagery rapidly, and at significantly less cost than ‘physical production’, is one of the clearest and most immediate use cases for these technologies within communications. For media planning, which has long been informed by ‘algorithms’ on platforms like Meta and Google, we’re seeing the rise of new categorisations of content - categorisations driven not by human editors but by machines. For brands, that might mean the opportunity to align themselves ever more explicitly with specific types of music, video or content in the search for brand associations - or, perhaps more importantly, the need to think ever harder about how you find the right type of people as users become ever more reliant on algorithmic recommendations for their media diets. It’s not only an overload of media we need to worry about, but an overload of context too. More contexts equals more fragmentation.

Search is already being disrupted by AI, as I’ve mentioned. Assuming LLMs become more important in tasks like search, how will brands approach SEO in this space? Jellyfish’s Tom Roach has already suggested that ‘share of LLM’ will become a cornerstone metric in the new marketing landscape. How will this be gamed? Could brands ‘sabotage’ search results by feeding LLMs bogus information about the competitor set - and how will that affect consumer interactions and choices?
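There is no standard definition of ‘share of LLM’ yet, but one plausible reading is: of the answers a model gives to a category prompt, what proportion mention your brand? The sketch below assumes exactly that definition; the responses are illustrative stand-ins for what would, in practice, be repeated samples from a real model, and the brand names are hypothetical examples.

```python
from collections import Counter

def share_of_llm(responses, brands):
    """Crude 'share of LLM' estimate: the proportion of model
    responses to a category prompt that mention each brand."""
    mentions = Counter()
    for text in responses:
        lowered = text.lower()
        for brand in brands:
            if brand.lower() in lowered:
                mentions[brand] += 1
    total = len(responses)
    return {brand: mentions[brand] / total for brand in brands}

# Hypothetical sampled answers to the same prompt,
# e.g. "recommend me a running shoe".
responses = [
    "For most runners I'd suggest Nike or Brooks.",
    "Brooks makes reliable daily trainers.",
    "Nike, Adidas and Brooks are all solid choices.",
    "It depends on your gait; Adidas is a safe bet.",
]
print(share_of_llm(responses, ["Nike", "Adidas", "Brooks"]))
# {'Nike': 0.5, 'Adidas': 0.5, 'Brooks': 0.75}
```

A real measurement would need far more samples, prompt variation and entity resolution than simple substring matching, but the gaming risk is already visible here: anything that shifts the model’s sampled answers shifts the metric.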

The planning function in advertising agencies (and its modern counterpart, strategy) was conceived by Stephen King and Stanley Pollitt to bring the consumer into the advertising process. Advertising informed by the consumer, they argued, would be more effective. The worrying rise of companies selling ‘synthetic data’ to research companies - as well as the general effect of Generative AI and algorithms on the fabric of the internet (and, by proxy, the fabric of media) - may mean we need to start thinking more actively about a new consumer: an altogether less tangible person who we’re trying to influence.

Are we ready for this change - do planners dream of electric people?
