The first beauty contest judged by complex algorithms has sparked controversy after its results turned out to be racially biased.
Wild Systems

The First Beauty Contest Judged by AI

Do you have what it takes to become the world’s most beautiful man or woman? Roughly 6,000 people from over 100 countries think they do. The contestants submitted their photos after Beauty.AI launched its beauty contest this year. What was remarkable was not so much the contest itself as its judges: the “human beauty” of the participants was judged by complex algorithms.

Powered by deep neural networks – or “deep learning”, a technique also used by Facebook and Google – the artificial intelligence based its verdicts on five scores, each produced by a separate algorithmic judge. The team of algorithms consisted of RYNKL, which analyzed wrinkles within each contestant’s age group; PIMPL, which scored participants on the number of pimples and the amount of pigmentation; MADIS, which compared people’s similarity to models within their racial group; AntiAgeist, which estimated the difference between chronological age and perceived age; and Symmetry Master, which evaluated the symmetry of the face.
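To make the setup concrete, here is a minimal sketch of a panel of judges whose scores are combined into one result. The real judges are deep neural networks; the function names below echo the article, but their interfaces, the 0–10 scale, and the plain-average combination are all assumptions for illustration only.

```python
# Illustrative sketch only: each Beauty.AI judge is really a deep neural
# network; here every judge is stubbed as a trivial scoring function.
# The 0-10 scale and the averaging rule are assumptions, not the real system.

def rynkl(photo):            # wrinkle analysis within an age group (stub)
    return 7.0

def pimpl(photo):            # pimples and pigmentation (stub)
    return 8.0

def madis(photo):            # similarity to models in a racial group (stub)
    return 6.5

def anti_ageist(photo):      # chronological vs. perceived age gap (stub)
    return 9.0

def symmetry_master(photo):  # facial symmetry (stub)
    return 7.5

JUDGES = [rynkl, pimpl, madis, anti_ageist, symmetry_master]

def overall_score(photo):
    """Combine the five judges' scores; a plain average is an assumption."""
    scores = [judge(photo) for judge in JUDGES]
    return sum(scores) / len(scores)

print(overall_score("contestant.jpg"))  # -> 7.6
```

The point of the sketch is only the structure: five independent scorers feeding one aggregate, so a bias in any single judge flows straight into the final ranking.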

From all entries, the judges picked 44 winners. However, the algorithms appeared to exhibit racial bias: 36 of the winners were white, and only one was black. Even though the majority of contestants were white, many people of color submitted their photos, including large groups from India and Africa.

While it might seem easy to joke about racist robots (remember Tay?), the growing use of discriminatory AI systems can have disastrous consequences for minorities. A study by ProPublica found that predictive criminal-justice software is biased against black people. But it does not end there. Another study found that significantly fewer women than men were shown online ads for high-paying jobs. And last year, Google Photos labeled two African American people as gorillas.

In response to the event, Alex Zhavoronkov, Beauty.AI’s chief science officer, said: “If you have not that many people of color within the dataset, then you might actually have biased results”. Commenting on the next round of the AI beauty contest, he said: “We will try to correct it”. His answer looks like a quick fix for a politically loaded situation in which “trying” to fix it probably isn’t good enough. The results of the beauty contest raise questions about the increasing use of algorithms, predictive software and artificial intelligence systems. And there are no winners in that.
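Zhavoronkov’s point about the dataset can be shown with a toy example. The sketch below is not Beauty.AI’s actual model: the feature values, groups, and “train on the mean” scorer are all invented. It only demonstrates that a scorer fitted to a skewed dataset rates majority-like inputs higher purely because of the skew.

```python
# Toy illustration (not Beauty.AI's real model): a scorer "trained"
# mostly on one group learns that group's typical feature value and
# rates anything similar higher. All numbers here are invented.

def train_scorer(training_faces):
    """Learn a preferred feature value as the mean of the training data."""
    mean = sum(training_faces) / len(training_faces)
    def score(face):
        # Closer to the training mean -> higher score (capped at 10).
        return max(0.0, 10.0 - abs(face - mean))
    return score

# Skewed dataset: nine faces from group A (feature ~2), one from group B (~8).
training = [2.0] * 9 + [8.0]
score = train_scorer(training)

print(score(2.0))  # a group-A-like face
print(score(8.0))  # a group-B-like face scores lower from skew alone
```

Nothing in the scorer mentions race; the disparity comes entirely from who is represented in the training data, which is exactly the failure mode Zhavoronkov describes.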

Sources: The Guardian, Quartz
