The (broken) mirror: understanding the flaws and risks of AI facial recognition
AI systems claim to read human facial expressions, but the science says these systems are both flawed and dangerous to society
Facial analysis is one of the more public, prevalent, and intriguing applications in the current AI landscape. The goals behind systems that identify someone from an image, known as facial recognition, are generally understood by most people: computers are trained to scan the contours and details of a face and then correctly identify that person. Affect recognition, another aspect of facial analysis, is far less understood, but its increasing use makes it essential to examine the technique's applications and efficacy. New research from Kate Crawford, a professor at USC Annenberg, aims to do just that.
The modern story of affect recognition has an unlikely birthplace: the Salpêtrière asylum in Paris, which in the 1800s housed up to 5,000 people with a wide range of mental illnesses and neurological conditions. It was there that a doctor named Guillaume Duchenne de Boulogne photographed patients in order to categorize facial movements and expressions. His analysis, Mécanisme de la physionomie humaine, which tried to connect facial expressions with emotional and psychological states, was foundational not only to luminaries like Charles Darwin but also to contemporaries of ours, most notably a man named Paul Ekman.