Apple’s AI News Alerts: A Wake-Up Call for Ethical AI Use

You may have seen today’s BBC article about Apple’s news alerts feature, which uses AI to summarise news notifications. Unfortunately, the summaries are not just inaccurate: in some cases they assert entirely false claims. This is a prime example of AI hallucination, a critical issue that underscores why ethical AI use is non-negotiable.

What is AI hallucination?

AI hallucination occurs when an AI model fabricates information, presenting it as fact. When you’re using AI personally for tasks like drafting emails or reports, you can identify and correct these inaccuracies. But the real danger arises when no one is fact-checking the output, and people start trusting the falsehoods.
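To make the fact-checking point concrete, here is a toy sketch in Python of the kind of automated grounding check a publisher might run before trusting an AI-generated summary. It is purely illustrative and not a description of how Apple Intelligence works: it simply flags summary sentences containing names or numbers that never appear in the source text, a crude but telling hallucination signal. The function name and example texts are invented for this sketch.

```python
import re

def untraceable_claims(source: str, summary: str) -> list[str]:
    """Flag summary sentences that contain capitalised words or numbers
    never found in the source text: a crude hallucination signal."""
    source_tokens = set(re.findall(r"[a-z0-9']+", source.lower()))
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", summary.strip()):
        # Candidate "facts": capitalised words and bare numbers.
        candidates = re.findall(r"\b(?:[A-Z][a-z]+|\d[\d,.]*)\b", sentence)
        if any(tok.lower().strip(",.") not in source_tokens for tok in candidates):
            flagged.append(sentence)
    return flagged

# Invented example text; the summary asserts a figure the source never states.
article = ("The company reported quarterly revenue of 94 billion dollars, "
           "slightly below analyst expectations.")
summary = "The company reported record revenue of 120 billion dollars."
print(untraceable_claims(article, summary))
# ['The company reported record revenue of 120 billion dollars.']
```

Real verification pipelines are far more sophisticated, combining entity linking, entailment models, and human review, but even a heuristic this blunt would catch a summary that invents a fact absent from the underlying reporting.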

Apple’s case is particularly alarming because it involves misinformation on a massive scale. Once misinformation is out in the world, it is almost impossible to contain, and the damage can be irreversible.

Who’s accountable?

Several organisations, including the BBC, The Guardian, and the National Union of Journalists, have demanded that Apple withdraw the AI summarisation feature, part of Apple Intelligence, to prevent further harm. However, Apple has so far refused, offering only vague assurances that it is working to “clarify” the summaries. This response has been widely criticised, with Reporters Without Borders calling it insufficient.

This refusal to take meaningful action is not just negligent; it’s unethical. Apple is essentially using the public as test subjects without their consent, releasing an AI feature that spreads misinformation and then dragging its feet when confronted with the consequences.

Why ethical AI matters

Apple’s failure to act responsibly highlights the urgent need for ethical standards in AI development and deployment. Companies that release AI tools must be held accountable when their technology causes harm. Without human oversight and strict ethical guidelines, AI becomes a tool for misinformation, eroding public trust and damaging society.

This isn’t just about Apple. It’s a warning to every company working with AI: reckless deployment has real-world consequences. If these companies won’t regulate themselves, we must demand regulation to protect the public.

The bottom line

Apple’s mishandling of its AI news alerts is a textbook case of what happens when ethical considerations are ignored. Misinformation is a threat that AI can amplify exponentially, and without accountability, the risks will only grow.

It’s time to hold companies like Apple to account and insist on a future where AI is used responsibly. Anything less is unacceptable.