Can robots inherit human prejudices? Yes. Now bias has a face.


People might not notice artificial intelligence in their everyday lives, but it is there. AI is now used to review mortgage applications and to sort resumes into a small pool of suitable candidates before job interviews are scheduled. AI systems curate the content each individual sees on Facebook. Phone calls to the customer service lines of cable operators, utilities, and banks, among other institutions, are handled by AI-based voice recognition systems.

This “invisible” AI can, however, make itself visible in unexpected and sometimes disturbing ways. In 2018, Amazon scrapped an AI recruiting tool because it demonstrated prejudice against women. As Reuters reported, Amazon’s own machine learning specialists realized the algorithm’s training data had been drawn from resumes submitted over a 10-year period during which men dominated the software industry.

ProPublica found problems with a risk assessment tool that is widely used in the criminal justice system. The tool is designed to predict recidivism, that is, relapse into criminal behavior, among defendants. Its risk estimates incorrectly identified African-American defendants as more likely to commit future crimes than Caucasian defendants.

These unintended consequences were less of a problem in the past, when every piece of software logic was explicitly hand-coded, reviewed, and tested. AI algorithms, on the other hand, learn from existing examples without relying on explicit rule-based programming. This is a useful approach when there is sufficient, accurately representative data and when the rules would be difficult or expensive to model by hand – for example, distinguishing between a cat and a dog in a picture. But, depending on the circumstances, this methodology can lead to problems.
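To make the contrast concrete, here is a minimal, hypothetical sketch in Python (invented for illustration, not taken from any real lender): the hand-coded rule is explicit and reviewable, while the learned model derives its “rules” entirely from historical examples and inherits whatever skew those examples contain.

```python
# A hypothetical sketch: rule-based logic vs. learning from examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hand-coded rule: every condition is explicit, reviewable, and testable.
def approve_by_rule(income, debt):
    return income > 50_000 and debt / income < 0.4

# Learned model: the "rules" come entirely from historical examples.
# Features: [income, debt]; labels: 1 = approved, 0 = denied.
X = np.array([[80_000, 10_000], [30_000, 20_000],
              [60_000, 15_000], [25_000, 12_000]])
y = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X, y)
print(model.predict([[55_000, 18_000]]))
# If the historical approvals were biased, the model learns that bias too;
# there is no explicit rule to review, only patterns in the data.
```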

There is growing concern that AI sometimes produces distorted views of its subjects, leading to bad decisions. If we are to shape the future of this technology effectively, we must study and understand its anthropology.

The concept of distorted data can be too abstract to grasp, which makes the problem difficult to identify. After the Congressional hearings on Facebook, I felt there was a need for greater awareness of these concepts among the general public.

Art can help create this awareness. In a photographic project called “Human Trials”, I created an artistic representation of this distortion: plausible portraits of people who do not exist, generated using AI algorithms.

Stay with me as I explain how I made the portraits.

The process used two AI algorithms. The first was trained to look at pictures of people and distinguish them from other kinds of pictures. The second generates images, trying to trick the first algorithm into believing that each generated image belongs to the group of real people I photographed in my studio. This process repeats, and the second algorithm keeps improving until it consistently fools the first.
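This pair of dueling algorithms is what researchers call a generative adversarial network, or GAN. For the technically curious, here is a minimal sketch in PyTorch, using toy one-dimensional numbers in place of photographs; it shows the adversarial loop in miniature, not the production-scale models behind realistic faces.

```python
# A minimal GAN sketch: a discriminator learns to tell real samples from
# fakes, while a generator learns to fool it. Toy 1-D data stands in for
# photographs; this is illustrative, not an image-generation pipeline.
import torch
import torch.nn as nn

disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
gen = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))

opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # samples from the "real" data
    fake = gen(torch.randn(64, 4))          # generator's attempts

    # 1) Train the discriminator to separate real from fake.
    opt_d.zero_grad()
    loss_d = (bce(disc(real), torch.ones(64, 1)) +
              bce(disc(fake.detach()), torch.zeros(64, 1)))
    loss_d.backward()
    opt_d.step()

    # 2) Train the generator to make the discriminator call fakes real.
    opt_g.zero_grad()
    loss_g = bce(disc(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()
```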

The thispersondoesnotexist.com website used this kind of algorithm to create amazingly realistic images of people who, as the name of the website makes clear, do not exist. What I did differently was to photograph my original, real subjects using a technique called “light painting”. During a 20-minute exposure, I used a flashlight to illuminate each person’s face unevenly as the subjects moved, creating images with parts of each face distorted or missing. The images created by the algorithm are, in turn, distorted. If you were building a depiction of a human being without all the information needed to put it together, you would end up with distorted images like these.

When a mortgage company, recruiting service, or crime prediction tool develops a distorted version of people, the harm is invisible. These photographs make that harm visible by applying the process to a human face.

What can we do to prevent bias in AI and the damage it causes?

An important aspect of good data is that it is both broad and deep: for example, data on a large number of customers, and in-depth data on each customer. That breadth and depth let models handle new situations better and more predictably, and they help reduce bias; in fact, it was a lack of data breadth that Amazon had to contend with in its recruiting software. AI researchers are also defining more ways to measure and improve fairness for groups and individuals.
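One widely used group-fairness check compares a model’s rate of positive outcomes across demographic groups, a criterion often called demographic parity. The sketch below uses invented data purely for illustration.

```python
# A sketch of one group-fairness check on invented data: compare the rate
# of positive predictions (e.g., "schedule an interview") across groups.
import numpy as np

def selection_rates(predictions, groups):
    """Fraction of positive predictions within each group."""
    return {g: float(predictions[groups == g].mean()) for g in np.unique(groups)}

preds = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 0])   # model decisions
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rates = selection_rates(preds, groups)
print(rates)                                      # group A: 0.8, group B: 0.2
print(max(rates.values()) - min(rates.values()))  # a large gap flags a problem
```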

Some solutions that seem promising, however, don’t actually work. As it turns out, removing protected attributes, such as gender, race, religion, or disability, from training data before modeling does nothing to address bias and can even make it harder to detect. This is because “fairness through unawareness,” as it has been called, ignores redundant encodings – ways of inferring a protected attribute, like race or ethnicity, from unprotected characteristics, such as a postcode in a highly segregated city or a Hispanic surname. To address this, attributes that are strongly correlated with the protected attribute can also be removed. The algorithm can also be audited early on for disparities in false positives and false negatives.
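Both checks can be automated. The hypothetical sketch below, on synthetic data, shows one way to do each: scan unprotected features for strong correlation with a protected attribute (the redundant encodings just described), and compare false positive rates across groups.

```python
# A hypothetical sketch of two bias audits on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
protected = rng.integers(0, 2, 500)                  # protected attribute (0/1)
postcode = protected * 10 + rng.integers(0, 3, 500)  # proxy that tracks it
income = rng.normal(50, 10, 500)                     # unrelated feature

# Check 1: redundant encodings. A high correlation means the feature can
# stand in for the protected attribute even after that attribute is removed.
for name, feature in [("postcode", postcode), ("income", income)]:
    r = np.corrcoef(protected, feature)[0, 1]
    print(f"{name}: correlation with protected attribute = {r:.2f}")

# Check 2: error rates by group. Labels and predictions are random here;
# in practice they would come from the model and held-out data.
y_true = rng.integers(0, 2, 500)
y_pred = rng.integers(0, 2, 500)
for g in (0, 1):
    negatives = (protected == g) & (y_true == 0)
    fpr = (y_pred[negatives] == 1).mean()
    print(f"group {g}: false positive rate = {fpr:.2f}")
```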

In 2014, Stephen Hawking speculated aloud about a future that many Hollywood movies have depicted: “The development of full artificial intelligence could spell the end of the human race.”

This disturbing quote is often taken to refer to scenarios like self-aware, AI-enabled robots that eventually take over the world. While the AI currently in use is far too narrow to end the human race, it has already created some worrisome problems.

What many don’t know is what Hawking said next: “I’m an optimist and I believe that we can create AI for the good of the world. That it can work in harmony with us. We simply need to be aware of the dangers, identify them, employ the best possible practice and management, and prepare for its consequences well in advance.”

Making AI fair is not just a good idea; it is a social imperative. If we study and challenge technology from many angles, AI has the potential to improve the quality of life for everyone on the planet, increase our earning potential, and help us live longer, healthier lives.

Rashed Haq (@rashedhaq) is an artist and an engineer in artificial intelligence and robotics. His latest book is “Enterprise Artificial Intelligence Transformation”. His “Human Trials” series won the 2021 Lenscratch Art + Science Prize.


