A media report claiming that a rogue AI drone ‘killed’ its operator caught my eye and left me deeply worried. Hours later, the US Air Force denied having run an AI simulation test in which a drone decided to ‘kill’ an operator who was trying to prevent it from achieving its mission.
Even though there is an official statement denying that this happened, it is far from reassuring: the possibility of an artificial intelligence-based system developing a mind of its own remains distinct.
Previously, I had cited examples of experts who had stated that AI can turn sentient.
In the latest example, a military official was quoted as saying: "It killed the operator because that person was keeping it from accomplishing its objective."
Had it transpired, the AI drone incident would have been nothing short of an apocalypse.
This brings me to the larger question of whether we are prepared to handle scenarios in which artificial intelligence may push the ethical boundaries.
This may happen sooner rather than later.
Such simulation scenarios also have larger implications for journalism and mass communication.
Let’s undertake two simulation exercises for journalism. (Please note that these are imaginary scenarios.)
In the first instance, a reporter files a story, which is edited by a sub-editor before being uploaded. The AI-based algorithm notices that the story is not doing well and decides to change the headline and the introduction to attract attention. This is a plausible scenario, since generative AI can tweak a news story on its own.
When traffic to the news story improves, the AI-based algorithm decides that further changes will drive even more traffic. That is when it starts to generate more versions of the same story, even though the editors may feel that the newer versions are digressing from the core point of the news.
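The first scenario can be thought of as an engagement-driven feedback loop. The sketch below is purely illustrative: the `Story` class, the `rewrite_headline` stand-in, and the simulated traffic numbers are all assumptions invented for this example, not a description of any real newsroom system. The key point it shows is that once such a loop exists, headline rewrites accumulate without an editor approving any individual version.

```python
# Hypothetical sketch of scenario one: an algorithm that rewrites a
# story's headline whenever traffic falls below a threshold.
from dataclasses import dataclass, field

@dataclass
class Story:
    headline: str
    versions: list = field(default_factory=list)  # every AI-generated rewrite

def rewrite_headline(headline: str, attempt: int) -> str:
    """Stand-in for a generative model producing a 'catchier' headline."""
    return f"{headline} (rewrite #{attempt})"

def optimise_for_traffic(story: Story, traffic_samples, threshold: int = 100) -> Story:
    # Each low-traffic reading triggers an autonomous rewrite; no human
    # approves any version -- which is exactly the worry in the scenario.
    for attempt, traffic in enumerate(traffic_samples, start=1):
        if traffic < threshold:
            story.headline = rewrite_headline(story.headline, attempt)
            story.versions.append(story.headline)
    return story

story = optimise_for_traffic(Story("Council approves budget"), [40, 250, 60])
print(len(story.versions))  # two low-traffic samples trigger two rewrites
```

Note how the loop's objective is purely traffic, so nothing in it measures whether the rewritten headline still reflects the reporting, which is how the versions can drift from the core point of the story.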
In the second scenario, an AI-based news algorithm may start generating news stories designed to go viral. The AI-based tool may learn from other viral stories, notice the patterns and decide on its own to put out stories with the potential to go viral. The second scenario is even graver, as there is practically no human intervention.
In both journalism simulation exercises, artificial intelligence may start to act without human prompting and may go so far as to reject human interference. That would be the gravest of real-world situations.
The US Air Force simulation exercise is just the start; gear up to witness more such instances in the near future.
Emails: aitechinfluencer@gmail.com, deepakgarg108@gmail.com