
What does generative AI mean for brand safety?

Generative AI can take text, images, video or any other multimedia content and create brand-new content. Give generative AI an image and prompt it to write a song, and it will create a melody - unbelievable.

Marketing is at the very beginning of the fourth industrial revolution, AI, and it's reshaping how we think about everything, including brand safety.

For as long as I can remember, the internet has struggled with 'made for advertising' (MFA) content. Ask anyone with experience in search engine optimisation (SEO) and they will point you to a high-volume keyword where MFA sites rank well. It just shows that even the biggest players, like Google, continue to struggle with MFA after a quarter of a century.

With 21% of impressions already being delivered to MFA sites as of 2023 (ANA), this number is only going to surge for the unprotected. But protection is changing.

Generative AI is creating new issues for brand safety

Websites can spawn tens of thousands of pages instantly, reading and looking like websites we know and trust, except that they are generated purely by AI. YouTube channels can serve hundreds of thousands of videos that are generated by AI. Your ads can appear above or below AI-generated content on Instagram that has been stolen from authentic creators.

DoubleVerify is seeing the threat: in a recent webinar, they suggested +19% growth in MFA due to generative AI, with a lot of that MFA performing ‘on par’ with their benchmarks for ad fraud and viewability.

Generative AI not only produces content; it can also be trained to account for brand safety verification techniques, such as viewability. As with MFA in SEO, the people who gain financially from MFA will work very hard to stay ahead of the algorithms used to detect their ill-doing.

Google has learned this too. I think generative AI is equally a threat and a benefit, with new products like Performance Max or Demand Gen sitting in the latter category; yet in a recent report, 90% of MFA was served via Google Ads (NewsGuard) – surely correlated with the rise of those products. It's a tough correlation to argue against.

Generative AI expands where MFA can live

As supply transforms across audio, TV and outdoor, these addressable channels will inherit generative AI's issues too; new podcasts produced purely with generative AI, for example. This adds to existing brand safety challenges, where 10% of ads run on devices that are connected but turned off (GroupM & iSpot).

So what is the solution in this doom cycle of cat and mouse, where bad actors armed with generative AI will work tirelessly to find loopholes to exploit brand safety and, ultimately, ad spend?

More human-based brand safety metrics, like attention, are a much better way to gauge whether a site is MFA or not – particularly for video – according to DoubleVerify. Why? Video is more emotive, and generative AI is less mature at mimicking emotion.

Fight AI with AI: train your own custom bidding model to add weight to the factors that are important to you, your brand safety and your performance. Custom bidding algorithms can factor in metrics like viewability and attention. This macro approach is not for everyone, as you have to accept that, because it works retrospectively, you will deliver some poor ad placements.
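To make the idea more concrete, here is a minimal, hypothetical sketch in Python of how a custom bidding value function might weight those signals. The field names, weights and MFA-risk signal are illustrative assumptions, not any platform's actual API.

```python
# Hypothetical sketch of a custom bidding value function.
# Signal names (viewability_rate, attention_score, mfa_risk) and the weights
# are illustrative assumptions, not any DSP's real schema.

def impression_value(signals: dict, weights: dict) -> float:
    """Score an impression so the bidder favours viewable, attention-earning,
    low-MFA-risk placements. Returns a multiplier applied to the base bid."""
    score = (
        weights["viewability"] * signals.get("viewability_rate", 0.0)
        + weights["attention"] * signals.get("attention_score", 0.0)
        - weights["mfa_risk"] * signals.get("mfa_risk", 0.0)
    )
    return max(0.0, score)

# Example: reward attention and viewability, penalise suspected MFA supply.
weights = {"viewability": 0.4, "attention": 0.5, "mfa_risk": 1.0}
print(impression_value(
    {"viewability_rate": 0.72, "attention_score": 0.61, "mfa_risk": 0.05},
    weights,
))
```

In practice, the weights would come from your own testing of which signals best predict brand-safe placements that also perform.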

New marketplaces, like The Trade Desk's SP500+, look to strike a balance between scale and brand safety: enough scale to reach 90% of people without running on placements that are not credible. We've had inclusion lists (previously known as “white lists”) forever, but as the issues with Forbes highlight (Traffic Guard), there is no replacement for humans.

Direct to publisher is often seen as ‘traditional’, but programmatic direct and private marketplaces are still very effective at balancing scale and brand safety, with many publishers being part of scalable conglomerates like Bauer, Future or Reach.

User-generated content (UGC) is the most susceptible to generative AI's impact on brand safety, as it lacks editorial control. However, brands can access more premium placements on UGC publishers like YouTube and Snap; YouTube Select, for example, runs ads across premium content.

Generative AI is creating new challenges for brand safety, which ultimately pose a financial risk for advertisers: they are paying for ads that don't add value. This financial risk is often hidden in averages, like CPMs and CPAs. Averages do not account for the MFA that you paid for.
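As a quick illustration of how an average hides that waste, here is a small sketch in Python. The spend and impression figures are invented for the example; only the 21% MFA share comes from the ANA figure quoted above.

```python
# Illustrative arithmetic only; spend and impression figures are invented.
total_spend = 100_000.0          # campaign spend in GBP
total_impressions = 20_000_000   # all impressions bought
mfa_share = 0.21                 # ~21% of impressions on MFA sites (ANA, 2023)

blended_cpm = total_spend / total_impressions * 1000
valuable_impressions = total_impressions * (1 - mfa_share)
effective_cpm = total_spend / valuable_impressions * 1000

print(f"Blended CPM:   £{blended_cpm:.2f}")    # £5.00 - looks fine in a report
print(f"Effective CPM: £{effective_cpm:.2f}")  # £6.33 - true cost per valuable impression
```

The blended CPM looks healthy; the effective CPM on impressions that actually had a chance to add value is noticeably worse.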

Brands need to start by identifying their appetite for risk: high, medium or low.

Based on that risk appetite, look at protection as an investment and use third-party verification partners like DoubleVerify by default.

Take a vested interest in your ad placements. Let the technology gather more non-human data about them, but still carve out time to visit and explore where your ads are served, so you can start to develop your own perspective on generative AI.

It's not all bad; lots of people want to know what A.C. Slater from Saved by the Bell looks like now, and that is probably generative AI-produced content that passes Google's E-E-A-T, for example. But for every acceptable piece of generative AI content, there will be ninety-nine other examples that are not.


Authored by Michael Thomson, Head of Digital, EssenceMediacom Scotland.
Published on 10 July 2024
