Understanding the Risks of AI Advertising Algorithms
This piece is an excerpt from an academic paper I submitted for my Strategies for Accountable AI program at Wharton. I’m researching Google’s ads and data sharing algorithms. It’s not in the writing style I typically use for my website, but I thought others might find these insights interesting. Here you go:
The increasing sophistication of AI advertising algorithms brings notable ethical challenges. Here I explore three of those challenges along with current risk mitigation tactics.
Technical Literacy and Lack of Awareness
Many users lack awareness of what data they’re actually sharing when they browse the web. Websites now require visitors to accept or decline cookies, but users often accept quickly to resume browsing without fully understanding what they’re consenting to.
Digital distrust is at an all-time high. Google and other online platforms introduced this cookie-consent step to build trust and give users more control over their data, yet the privacy disclaimers that accompany it are often too technical and verbose to be understood.
This approach ticks the box for transparency but raises ethical questions about how effective it really is in fostering awareness.
Algorithmic Bias and its Impact on Society
In an era where people turn to podcasters for news and form opinions from memes, it’s clear that AI algorithms have massive influence. They’re already known to drive discriminatory advertising practices, and beyond ads, they create echo chambers that individuals don’t realize they’re in.
Once a user shares their data, they are sent down a rabbit hole designed just for them, one that effectively removes the need for independent thought and amplifies the harm caused by the spread of misinformation.
Google openly states that it will “share information with advertisers, business partners, sponsors, and other third parties.” Yes, apps and websites declare this practice, but many users still assume their online activity is private.
AI Ethics in Advertising
Advertisers play a crucial role in shaping the ethical landscape of AI-driven advertising. By actively avoiding discriminatory targeting and endorsing responsible data practices, we can help mitigate potential harms caused by these powerful algorithms.
Although most advertisers don’t directly access or utilize customer data, we are actively leveraging Google’s data systems in ways that influence how information is shared and perceived. This involvement means we bear a degree of moral responsibility for any negative outcomes arising from misuse or unintended biases within the data.
Fortunately, Google, one of the largest players in digital advertising, has established a comprehensive framework for ethical practices. Its efforts at transparency set a standard for the industry, offering companies of all types a solid foundation on which to build responsible data-use strategies.
For companies using Google’s advertising tools, that framework can help them navigate potential risks, establish compliance best practices, and develop data-usage audits. By relying on Google’s resources, advertisers can make informed choices that support both their business objectives and the well-being of their audiences.
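To make the idea of a data-usage audit concrete, here is a minimal sketch of what one small piece might look like: a script that scans exported campaign targeting criteria for categories an advertiser has decided to treat as sensitive. Everything in it is illustrative, not Google's actual export schema, API, or policy list; the column names, the file name, and the sensitive-category list are assumptions standing in for whatever a real audit would use.

```python
# Hypothetical sketch of a targeting audit. Assumes campaign targeting has
# been exported to a CSV with illustrative columns ("campaign",
# "targeting_type", "targeting_value"); these are stand-ins, not Google's
# actual export format.

import csv

# Illustrative categories an advertiser might exclude from interest-based
# targeting as a matter of internal policy.
SENSITIVE_CATEGORIES = {
    "health conditions",
    "religion",
    "political affiliation",
    "financial hardship",
    "ethnicity",
}

def audit_targeting(csv_path):
    """Return a list of findings for rows that target a sensitive category."""
    findings = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            value = row.get("targeting_value", "").strip().lower()
            if value in SENSITIVE_CATEGORIES:
                findings.append(
                    f"Campaign '{row.get('campaign')}' targets sensitive "
                    f"category '{value}' via {row.get('targeting_type')}"
                )
    return findings

if __name__ == "__main__":
    for finding in audit_targeting("campaign_targeting_export.csv"):
        print("FLAG:", finding)
    # An empty report does not guarantee compliance; it only means nothing
    # on this illustrative category list matched the exported criteria.
```

A real audit would start from the advertiser's actual export format and a category list grounded in legal and policy guidance, but the shape is the same: define what you consider sensitive, then check your own targeting against it on a regular schedule.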
Conclusion
The risks associated with Google Ads and its AI-driven systems are significant, and although Google has taken notable steps to mitigate them, it is essential for both advertisers and users to remain vigilant. By understanding the potential harms and supporting ethical AI practices, we can leverage these tools while limiting negative impacts on individuals and society alike.
With nearly two decades in the industry, Belle Strategies’ owner, Rachel Creveling, is a seasoned business consultant who crafts comprehensive frameworks that integrate operations, marketing, sales and HR to position her clients for optimal success. She excels at incorporating trending tech ethically and studied Strategies for Accountable AI at Wharton.