Navigating AI's Impact: Balancing progress and ethical responsibility

By Tom Leahey | 10 August 2023

Tom Leahey, Head of Digital in Melbourne, Hearts & Science

The rise of AI is ubiquitous, with countless breakthroughs flooding social platforms, but what we’re not talking about is its potentially destructive ramifications. While AI undeniably has the potential to revolutionise operations and increase workforce efficiency, there are real concerns about job displacement and ethical implications. In response to recent AI advancements, tech industry experts have issued an open letter urging a temporary halt to the training of AI systems more powerful than GPT-4. This raises the question: are we overlooking the consequences of AI, or is this just fear-mongering? Does the eager adoption of AI threaten to undo the transparency and integrity we’ve fought so hard to bring to media in recent years?

The Power of AI in Media

Easy access to generative AI has brought significant efficiencies to content creation through technologies such as natural language processing and machine learning. These systems can now generate and optimise content at scale, which unlocks a wealth of efficiencies and can supercharge audience engagement through personalised content, personalised ads, and enhanced user experiences.

Behind the scenes, AI-powered analytics can play a critical role in providing valuable insights into audience behaviour and content performance, enabling data-driven decision making for optimisations and bidding.

This is just scratching the surface of current capabilities. Integrating AI into our existing tools and systems will allow for further efficiencies across our internal processes and workflows. The dreaded days of manual timesheets can be a thing of the past. Time spent analysing data to produce detailed, actionable insights can be cut from hours to minutes. Social listening can be managed quickly and efficiently in-house, enabling sentiment analysis across your social posts for a fraction of the cost.
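
To make that last point concrete, here is a minimal sketch of what in-house sentiment analysis might look like, using NLTK’s open-source VADER model as one possible tool of many. The sample posts and score thresholds are invented for illustration; a production setup would pull posts from a social listening export or a platform API.

```python
# Minimal sketch: in-house sentiment analysis over social posts
# using NLTK's open-source VADER model.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download

analyzer = SentimentIntensityAnalyzer()

# Hypothetical sample posts; in practice these would come from a
# social listening export or a platform API.
posts = [
    "Loving the new campaign, the creative is brilliant!",
    "Ads every thirty seconds. Unwatchable.",
    "Not sure how I feel about this rebrand yet.",
]

for post in posts:
    scores = analyzer.polarity_scores(post)
    # 'compound' is a normalised score from -1 (negative) to +1 (positive);
    # +/-0.05 is the conventional VADER cut-off for neutral.
    label = ("positive" if scores["compound"] > 0.05
             else "negative" if scores["compound"] < -0.05
             else "neutral")
    print(f"{label:>8}  {scores['compound']:+.2f}  {post}")
```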

Tread with Caution

While it’s incredibly tempting to jump headfirst into the world of AI, reap the exciting benefits and play with the latest shiny toys, there are plenty of speed bumps and red flags to consider.

Because AI is trained on vast volumes of data in order to make predictions and deliver personalised experiences, data privacy is a critical yet rarely discussed concern. The collection, storage, and usage of this data raise important questions about user privacy and data protection. We must acknowledge that popular AI tools such as ChatGPT can use the data we provide to improve their models, creating a real risk that AI systems gain access to personal or proprietary data without authorisation.

Another notable, and more widely discussed, red flag with AI is the potential for bias. As mentioned, AI algorithms make decisions based on patterns and correlations in data; if the historical data used to train them contains biased patterns, the algorithms can perpetuate and amplify those biases. Content bias is one example: you will have subconsciously noticed it in your social feeds when, mid-scroll, you pause and linger on certain types of content longer than others. The algorithm learns from this that you like a specific type of content and continues to deliver more of the same. The concern is that if algorithms predominantly promote content aligned with users’ existing beliefs and preferences, they reinforce confirmation bias and limit exposure to diverse perspectives, driving polarisation in society. This can contribute to the spread of misinformation and erode public trust in media sources and platforms, directly impacting media investment through brand safety and brand suitability concerns.
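
The dynamic is easy to see in a toy simulation. The sketch below models a recommender that boosts whichever content category a user lingers on, while the user in turn lingers on what they already prefer. Every number here is invented purely to illustrate the feedback loop, not drawn from any real platform’s system.

```python
# Toy simulation of a dwell-time feedback loop: the "algorithm" boosts
# whatever category earns the longest dwell, and dwell is driven by the
# user's slight existing preference. All values are illustrative.
import random

categories = ["politics_a", "politics_b", "sport", "cooking"]
weights = {c: 1.0 for c in categories}          # recommender's scores
user_affinity = {"politics_a": 0.8, "politics_b": 0.2,
                 "sport": 0.5, "cooking": 0.5}  # hidden user preference

random.seed(1)
for _ in range(1000):
    # Recommend proportionally to current weights.
    shown = random.choices(categories,
                           weights=[weights[c] for c in categories])[0]
    # Dwell time reflects the user's existing preference, plus noise.
    dwell = max(0.0, user_affinity[shown] + random.uniform(-0.1, 0.1))
    # The recommender learns from dwell, reinforcing what was shown.
    weights[shown] += 0.1 * dwell

total = sum(weights.values())
for c in categories:
    print(f"{c:>10}: {weights[c] / total:.0%} of feed")
# A mild initial lean towards one viewpoint ends up dominating the feed.
```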

Striking the Right Balance

As we seemingly approach a new stage in our evolution, we need to seriously consider our next moves. Before we pass the point of no return, AI advancement demands regulation and ethical consideration ahead of any wider rollout across everything we do. This applies not just to the media industry but to all industries.

To put the recent pace of AI advancement into perspective: GPT-3.5, released in November 2022, can process 4,000 tokens (c. 3,125 words), while GPT-4, released in March 2023, can process 32,000 tokens (c. 25,000 words). The sheer increase in token capacity isn’t the only advancement. We’ve also seen these tools move from a pure language model to a multimodal model, with GPT-4 able to integrate inputs across modalities such as text and images. The rate at which AI capability, and access to these tools, has progressed needs to be closely monitored and regulated.
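
For readers who want to check these figures themselves, here is a minimal sketch using OpenAI’s open-source tiktoken library to count tokens. The sample sentence is arbitrary, and the roughly 0.78 words-per-token conversion is a rule of thumb implied by the figures above, not an exact constant.

```python
# Minimal sketch: counting tokens with OpenAI's tiktoken library,
# to make the context-window figures above concrete.
import tiktoken

text = "The eager adoption of AI threatens to reshape how media is made."

enc = tiktoken.encoding_for_model("gpt-4")  # model-appropriate encoding
tokens = enc.encode(text)
print(f"{len(text.split())} words -> {len(tokens)} tokens")

# Rough rule of thumb (~0.78 English words per token), consistent with
# 4,000 tokens ~ 3,125 words: a 32,000-token window is roughly
# 25,000 words of input.
print(f"32,000 tokens ~ {int(32_000 * 0.78):,} words")
```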

Further progression at the current rate could produce devastating outcomes. More broadly, unregulated AI models can entrench severe bias and discrimination, and can easily produce fake content and spread misinformation if the training data is inaccurate or corrupted.

If these models are trained on sensitive data without proper consent or with limited oversight, there is a risk of violating individuals’ privacy and data protection rights. This can result in complex legal challenges, including questions of liability and ownership of AI-generated content. The auto industry is wrestling with a very similar question for autonomous vehicles: who is at fault in an accident involving a fully autonomous vehicle? The manufacturer? The owner? The provider of a faulty system update?

Embracing Ethical Responsibility in the Age of AI

Although the open letter calling for a pause on the training of AI systems more powerful than GPT-4 hasn’t sparked discussion at the level its signatories hoped, we as an industry need to step up and take responsibility for how we use and interact with AI systems.

Before deploying any AI system, we must ensure that we, as an industry, are self-regulating. We need to develop ethical guidelines and frameworks for implementing AI technologies, whether in conjunction with the AANA or IAB, or independently. Within these frameworks, a strong focus on aligning AI practices with legal, social, and cultural norms is a must. As part of following such a framework, we must also conduct ethical impact assessments before deploying any AI system, evaluating potential risks such as biases and wider societal implications so that AI adoption mitigates harm and ensures responsible use.

It’s crucial we apply this rigorous lens to all integrated AI systems. We must prioritise ethical responsibility as we embrace new AI technologies, ensuring that AI serves the best interests of everyone while upholding ethical human values.
