The BBC is urging Apple to ditch its generative AI feature after it created a glaring inaccuracy it implied the BBC had actually reported.
Generative AI made headlines over a false headline. Apple’s new Apple Intelligence feature rolled out in the United Kingdom last week. One of its tasks is to summarize and group headlines into what looks like a push alert sent from the BBC itself.
The trouble is, that notification contained a major blunder. Even worse than the factual error itself, the BBC never reported it in the first place.
The mistake involved Luigi Mangione, the suspect in the killing of UnitedHealthcare CEO Brian Thompson in New York City on Dec. 4.
The BBC published a screenshot of the inaccurate notification. It contained three brief headlines separated by semicolons: one on the shooting case, one on the overthrow of Bashar al-Assad’s regime in Syria and one on South Korean President Yoon Suk Yeol.
The second and third were fine. The first, however, read:
Luigi Mangione shoots himself
He didn’t shoot himself. One would have to wonder, as guarded as he presumably is, how that could even be possible. But given the BBC’s reputation, some might immediately assume it must be correct and then click the notification to read the details.
They wouldn’t find those details because it didn’t happen. The actual BBC story, as best I can tell, reported that the suspect faced multiple charges filed in New York. That’s a far cry from what the generative AI claimed the BBC published.
BBC, journalists’ group file complaints against Apple
A BBC spokesperson confirmed it reached out to Apple “to raise this concern and fix the problem.”
“BBC News is the most trusted news media in the world,” the BBC spokesperson added.
“It is essential to us that our audiences can trust any information or journalism published in our name and that includes notifications.”
Reporters Without Borders (RSF), an international nonprofit, also weighed in with its own complaint:
“RSF calls on Apple to act responsibly by removing this feature. The automated production of false information attributed to a media outlet is a blow to the outlet’s credibility and a danger to the public’s right to reliable information on current affairs.”
Given how many people claim to distrust the media, I can understand the concerns. It’s inexcusable to rely on artificial intelligence to create headlines without at the very least having a human being check behind it.
If you were the BBC, you’d be mad as hell if some machine misrepresented what you reported. You’d have every right to be.
If you worked hard to make sure your work was accurate because you knew millions of people depended on you, you’d raise a stink about some computer “guessing” at the result of your labor. You’d be crazy not to.
I hope everyone’s paying attention
As we face a future in which some employers look to artificial intelligence as a potential way to save money on payroll, it’s important to note major failures like this one.
I tried a little AI experiment at this blog two years ago, asking AI to generate an article on common grammar errors. The AI produced a listicle, which surprised me. But the first item on the list got things backward: it presented the error as the correct usage and the correct usage as the error.
Needless to say, I haven’t attempted to work with AI since. A social media management tool I’ve used in the past has an AI function that will compose posts for you. I’ve never even tried it.
And in the real job, our company has a detailed policy on AI listing multiple pieces of software we are expressly forbidden to use at all. The list is routinely updated as additional programs and apps are tested. But the overall message of the policy is clear: in certain cases, AI may be used to assist with certain aspects of a product.
In a nutshell, though, AI cannot be permitted to produce the final product, and at no point can it be used without human supervision and verification.
That, it seems to me, is the way it should be.
Accuracy is always more important than convenience
In Apple’s case, from what I can tell, the idea is to take the multiple notifications a news service might send and condense them into a single alert featuring the “hottest” stories. That way, readers don’t have to sort through a pile of notifications.
It sounds like Apple Intelligence produced the notifications without any human supervision at all. No matter how much Apple may trust its AI (and it appears to place far too much trust in it), AI should never have final approval on anything. There should always be a human signing off on the finished product.
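To make that concrete, here’s a minimal sketch, in Python, of what a human sign-off gate might look like. To be clear, this is not how Apple’s pipeline actually works; the function names, the placeholder summarizer and the approval prompt are all hypothetical:

```python
from dataclasses import dataclass


@dataclass
class Notification:
    outlet: str
    headline: str


def summarize(notifications: list[Notification]) -> str:
    """Stand-in for the AI step: condense several alerts into one line.

    A real system would call a language model here; this placeholder
    simply joins the headlines with semicolons, the way the Apple
    alert in question appeared to.
    """
    return "; ".join(n.headline for n in notifications)


def human_approves(summary: str, originals: list[Notification]) -> bool:
    """The missing step: a person checks the AI summary against the
    actual headlines before anything goes out under the outlet's name."""
    print(f"AI summary: {summary}")
    for n in originals:
        print(f"  Original: {n.headline}")
    return input("Push this alert? (y/n) ").strip().lower() == "y"


def push_alert(notifications: list[Notification]) -> None:
    summary = summarize(notifications)
    if human_approves(summary, notifications):
        print(f"PUSHED: {summary}")
    else:
        print("Alert held back for correction.")


if __name__ == "__main__":
    # Hypothetical headlines, loosely based on the stories in the
    # screenshot the BBC published
    alerts = [
        Notification("BBC News", "Mangione faces charges in New York"),
        Notification("BBC News", "Assad regime falls in Syria"),
        Notification("BBC News", "Update on South Korea's Yoon Suk Yeol"),
    ]
    push_alert(alerts)
```

The point of the sketch isn’t the code itself; it’s that the approval check sits between the AI’s output and the public, which is exactly the step that appears to have been missing.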
That simple step could have spared Apple this embarrassing blunder.
I’d take multiple notifications generated by the outlet itself, even if it means skipping past them after reading the most recent few, over something a computer “pieces together” from what it guesses the subject matter might be.
I hope employers everywhere hear this message: There are certain things humans have to do. Cutting costs with AI is just asking for trouble.
And while I’ve said this before, it’s always worth repeating: When you allow computers to do all your thinking for you, you’re only going to anger the customers you claim you’re out to “better serve.”