Ethics & Compliance

Key Takeaways:

(i) AI algorithms pose significant ethical concerns, among them discrimination, lack of transparency, and inherent biases

(ii) Many AI products' ethical failures stem from a lack of ethics and compliance assurance over product design and pre-launch certification

(iii) Compliance officers will increasingly find themselves closely engaged with AI tech teams in developing ethics frameworks for AI products

(iv) Many ethics and compliance practitioners will move toward becoming ethical technologists

Overview

With the rise of artificial intelligence, people and businesses are transferring more and more of their decisions to machines. We allow AI algorithms to navigate us across towns, recommend what we should read and whom we should date, predict the likelihood of criminal reoffending, and forecast students' grades and customers' credit defaults. We allow them to decide whom to recruit and whom to admit to public venues such as nightclubs, among many other things. Although today's AI & Robotics products do not possess comprehensive human cognition and we are still living in the narrow-AI age, there are many indicators that within several years the technology may reach a new level and begin replicating more advanced human brain functions.

A logical question arises: can we trust such technologies? This is one of the major and growing concerns in the AI and tech world: how can we ensure that AI & Robotics products do not harm societies, act fairly, and do not discriminate against minorities? Numerous scandals involving unethical decisions by AI algorithms have demonstrated the significant risks in this area, along with the need for adequate ethical assurance over these products. To name a few examples:


1) An AI algorithm allocating health care to patients dramatically discriminated against people on a racial basis

2) An AI algorithm for recruiting new employees favored male candidates over female ones

3) An AI algorithm for predicting the likelihood of criminal reoffending exhibited systematic racial discrimination

4) An AI chatbot designed for Twitter conversations engaged in racist, inflammatory, and political rhetoric

5) Artificial intelligence algorithms meant to detect and moderate hate speech online had built-in racial biases

6) An AI grading algorithm predicting students' grades ended up favoring students from private schools and affluent areas, leaving high-achievers from free state schools disproportionately affected