

The Rise of the Machines


The following is an extract from A Marketer’s Guide to Digital Advertising.

What if the digital market is increasingly non-human in its very nature? There are two fundamental facts that all digital advertising practitioners need to understand in order to make informed decisions at every level:

  1. 50% of internet events are now generated by non-human programs that operate on the web.
  2. Roughly 25% of new content on the internet, as of 2022, is created by non-human programs.

This means that 50% of events online are currently driven by bots, and 25% of content on the web is now written by bots. Recent examples include jasper.ai, which “writes copy” for agencies, and the Instagram/TikTok feature that narrates short clips with a computer-generated voice when the human user has only entered text. Understanding and embracing the prevalence of non-human presence in the internet experience will help you avoid defaulting to advertising practices and processes that assume humanity is behind 99% of creation and consumption online.

The impact of bots visiting pages or engaging with social media posts extends to the choices that actual humans make about what to read or watch.

The visit counter or curated list of trending content has a significant influence on what we decide to consume as a human collective. When reading and watching online, we spend 60% of our time scrolling a feed versus 40% on the expanded content itself. This data quantifies a social landscape in which people can process current events from the text of headlines alone.

The industry is not shy in co-opting content creation innovations from elsewhere and pointing to their revolutionary potential for adland.

For example, ChatGPT, the AI-driven conversational bot launched by OpenAI in late 2022, has the clear potential to change fundamental parts of the advertising industry (and others). Google itself reportedly treats it as a potential game changer and a “code red” competitor. The capabilities this publicly available tool is already demonstrating pose critical questions about just how much, for example, the search side of the online ad business could change if it is properly harnessed and monetized.

But therein lies the rub. Just how such an AI-driven chatbot can be leveraged by advertisers has yet to be fully addressed. There are clear implications for how such AI-driven tools could revolutionize copywriting and the creative industries, for example, but a viable path to sponsored bot-driven content is as yet unclear.

The wider concern is how these GPT (generative pre-trained transformer) models will be affected over time as they are fed non-human-generated data. What AI and big-data models currently provide is built largely on data that represents human activity, but with an increasing share of text and posted activity on the web being generated by non-human programs, the inputs for these tools are questionable and the future murky. Teams within OpenAI, the organization that develops ChatGPT, are already discussing how to inform these generative models and whether generated text can carry “watermarks” that help determine whether it is bot- or human-created.
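To make the watermark idea concrete, below is a minimal Python sketch of one publicly proposed detection approach (a statistical “green list” check, in the spirit of academic work on LLM watermarking). It is an illustration under assumptions, not OpenAI’s actual scheme, and the function names are hypothetical: the generator would nudge its word choices toward a pseudo-random list keyed on the previous word, and a detector later counts how often words land on that list.

import hashlib

def is_green(prev_word: str, word: str) -> bool:
    # Deterministically assign roughly half of all words to a "green list"
    # keyed on the previous word. A watermarking generator would prefer
    # green words; a detector replays the same assignment.
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    # Fraction of words that land on their green list: about 0.5 for
    # ordinary human text, noticeably higher for watermarked bot text.
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)

# Usage: flag text whose green fraction is improbably high for human writing.
print(round(green_fraction("the quick brown fox jumps over the lazy dog"), 2))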

All marketers, media buyers, copywriters and their colleagues should keep an eye on these types of software as their proliferation is inevitable, and their full impact unknown.

In many ways the growth of automated curation and distribution of certain types of content, and the willingness of both publishers and users to accept them, is leading to a clickbait content culture.

We often present it at conferences and client seminars as a mutually dependent ecosystem of the sea (social and content platforms), sport fishers (publishers), clickbait (headlines and thumbnails), chumbuckets (scrolling feeds) and eagerly biting fish (human attention).

The reason for using sport fishers in the analogy is that the fish are always thrown back into the sea after the bait is taken and the fish have been photographed, much as a reader lands back in the scrolling feed after exiting the linked content. Over a long enough period, the bites become more valuable to the fishers (publishers) than the fish (human attention) themselves, in the same way that, in a given food chain, the food the prey eats has an impact on the predator. Content, and therefore information, consumption has a similar chain of impact that can become toxic for everyone involved.

There have been numerous studies by psychologists and government health departments on the significant impact of embellished social media content on teenage mental health. The potential impact of misrepresentations or falsehoods on adults is no smaller, and it is often amplified by bot traffic pushing those pages into users’ scrolling feeds or trending lists.

Having analyzed the real-world impact of this, the Center for Information Technology and Society (CITS) at UC Santa Barbara released and continues to update a diligently researched and thoughtful paper called “A Citizen’s Guide to Fake News.”

Having been called to task by activists and watchdogs on social media for helping monetize these inflammatory fake news sites, advertisers have relied on brand safety and content analysis technologies to help them avoid advertising on content deemed unsuitable by a morality hivemind. You can think of brand safety technology as a fact-checker program for full bodies of text that needs to operate with 99.99% accuracy at a speed of over 100,000 decisions per second, based only on the letters in the URL of the page in question.
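As a simplified illustration of that URL-only constraint, here is a minimal Python sketch of a pre-bid keyword screen. Real brand safety vendors use far richer signals and models; the blocklist terms and function name here are hypothetical.

from urllib.parse import urlparse

# Hypothetical blocklist mapping URL substrings to unsafe categories.
BLOCKLIST = {
    "shooting": "violence",
    "hoax": "misinformation",
    "crash": "disaster",
}

def classify_url(url: str):
    # Return an unsafe category if any blocklisted term appears in the URL,
    # else None. The decision uses only the characters of the URL, never
    # the page content, which is what makes it fast and fallible.
    parts = urlparse(url.lower())
    haystack = parts.netloc + parts.path
    for term, category in BLOCKLIST.items():
        if term in haystack:
            return category
    return None

# Usage: a bid is dropped when the URL trips a category.
print(classify_url("https://news.example.com/plane-crash-latest"))  # disaster
print(classify_url("https://news.example.com/gardening-tips"))      # None

Note how a URL like “plane-crash-latest” is blocked with no knowledge of whether the page itself is reputable reporting, which is exactly the trade-off between speed and nuance described above.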

As an industry, we have seen many erroneous assumptions around how, for example, content can or cannot be moderated for safe human consumption.

Similar to voices in the news around social or economic issues, it is often the extremes at the ends of the spectrum that find themselves amplified. Rational sceptics don’t typically yearn for the spotlight.

Advocates of the monitoring technologies involved say they are sufficient, but most critics want them to do more, rather than less. Moderating content, and indirectly moderating or censoring speech, is a messy job that has been debated in the public forum since at least 2015. The work that the Brand Safety Institute, Sleeping Giants and Check My Ads have done is testament to both the scale and durability of the problems that persist.

These debates have centred on depicting social platforms like Facebook and Twitter as the modern town square, and on whether that square should or should not limit what someone can say, when and to whom. The key difference is that the old town squares did not take every utterance from each citizen and carve it into stone for any and all to see for eternity.

In our modern world, old statements made in old contexts can be essentially teleported into a contemporary environment for judgment. Society is struggling with this dynamic, and no solution is in sight; our willingness to raise pitchforks is directly correlated with what we collectively consider monstrous today.

Brands will not be immune to this, so check your old tweets.


