The G20 New Delhi Leaders’ Declaration has called for harnessing artificial intelligence “responsibly”, for good and for all, clearly laying out the roadmap for AI’s ballooning role in our lives.
The declaration also seeks to ensure “responsible AI development”.
In May this year, the Indian Institute of Technology Madras (IIT Madras) established a Centre for Responsible AI (CeRAI) to ensure the ethical and responsible development of AI-based solutions in the real world.
The emphasis on the word “responsible” is evident in both the G20 declaration and the naming of the IIT Madras research centre.
But here lies the catch. The G20 AI Principles of 2019 had instead laid emphasis on the word “trustworthy” to describe artificial intelligence.
“Stakeholders should proactively engage in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet…,” the G20 AI principles of 2019 had stated.
In at least 11 places in the 2019 document, there is mention of “trustworthy AI”.
So, does this mean that in the past four years, as AI has evolved, there has been a shift from being trustworthy to being responsible? If so, what exactly have been the reasons for this shift? And if external factors have led to this emphasis on the word “responsible”, one needs to know about those too.
Also, is “responsible” a better word to replace “trustworthy” with?
One is reminded of the parable of the blind men and the elephant, with each man describing the elephant based on his own subjective assessment. Each of them is right as far as his limited viewpoint is concerned, but all of them are wrong in the larger context.
In the case of artificial intelligence, would it be more appropriate to consider ethics over trustworthiness?
Emails: aitechinfluencer@gmail.com, deepakgarg108@gmail.com