Who is responsible when AI goes wrong?

By Saina Mohanty

In today's world, artificial intelligence assists everyday decision-making, and errors and failures in AI systems have become common. Examples of harm caused by AI include biased algorithmic decisions and faulty automated outcomes.


With these challenges come some of the most important questions of our digital age: if AI fails, who is liable - the system itself, its creators, the users, or the institution deploying it?

Artificial Intelligence: An Illusion of Independence

Many people assume that artificial intelligence will eventually develop into an independent entity. In reality, AI does not operate autonomously; it relies on infrastructure created and trained by humans. The algorithms used by artificial intelligence are written by humans, as are the datasets that train those systems, and humans also shape the behaviour patterns these systems follow. Treating AI as a completely autonomous source of knowledge, and allowing it to act independently of its developers, therefore diminishes the responsibility assigned to those who created and trained it.

The role of developers and designers

Developers play a central role in determining how an AI system behaves. The design decisions and limitations built into a model directly affect its performance, and faulty assumptions can lead to discrimination and errors. In most cases, developers bear responsibility for failures that result from poor planning and inadequate testing of an AI system.

The role of organisations and user responsibility

The companies that deploy AI technology also share accountability. AI must be implemented with proper oversight, as it has the potential to create dangerous situations in highly sensitive areas such as labour, healthcare, education, and law and order. Users are not immune from accountability either. It matters a great deal how people engage with AI tools - whether they challenge results or rely on them without question. Ethical use involves understanding a tool's constraints, verifying information, and recognising when human judgement is required. AI mistakes can be exacerbated by misuse or over-reliance.

Conclusion

It is crucial to recognise the need for collective responsibility in the age of AI. When AI fails, no single party bears the blame; responsibility is shared by developers, organisations, and users alike. Recognising this shared responsibility is essential to keeping AI ethical and aligned with human values in an increasingly automated world.
