Scientists launched a robot-judged beauty contest. What could go wrong? A lot.

"Computer computer, on my screen — what's the fairest face you've ever seen?"

Presumably, that's what the folks at Youth Laboratories were thinking when they launched Beauty.AI, the world's first international beauty contest judged entirely by an advanced artificial intelligence system.

More than 600,000 people from across the world entered the contest, which was open to anyone willing to submit a selfie taken in neutral lighting without any makeup.


According to the scientists, their system would use algorithms based on facial symmetry, wrinkles, and perceived age to define "objective beauty" — whatever that means.

This murderous robot understands my feelings. GIF via CNBC/YouTube.

It's a pretty cool idea, right?

Removing all the personal taste and prejudice from physical judgment and allowing an algorithm to become the sole arbiter and beholder of beauty would be awesome.

What could possibly go wrong?

"Did I do that?" — These researchers, probably. GIF from "Family Matters."

Of the 44 "winners" the computer selected, seven of them were Asian, and one was black. The rest were white.

This is obviously proof that white people are the most objectively attractive race, right? Hahaha. NO.

Instead, it proves (once again) that human beings have unconscious biases, and that it's possible to pass those same biases on to machines.

Basically, if your algorithm is trained mostly on white faces and 75% of the people who enter your contest are white Europeans, white faces are going to win on sheer probability, even if the computer is told to ignore skin tone.
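To see why "ignore skin tone" isn't enough, here's a toy simulation of my own (nothing from Beauty.AI's actual system, and every number is invented): a scorer learns what a "typical" face looks like from a 75/25 skewed pool, then judges entrants without ever seeing their group label.

```python
import random

random.seed(0)

# One made-up "face" feature per entrant; the two groups differ slightly
# in its distribution (all numbers here are invented for illustration).
def make_face(group):
    return random.gauss(0.7 if group == "A" else 0.3, 0.1)

# Training pool: 75% group A, 25% group B -- the same skew as the entrants.
train = [make_face("A") for _ in range(75)] + [make_face("B") for _ in range(25)]
ideal = sum(train) / len(train)  # the model's learned notion of "typical"

def score(face):
    return -abs(face - ideal)  # closer to the learned ideal = "prettier"

entrants = [("A", make_face("A")) for _ in range(75)] + \
           [("B", make_face("B")) for _ in range(25)]
winners = sorted(entrants, key=lambda e: score(e[1]), reverse=True)[:10]
share_A = sum(1 for g, _ in winners if g == "A") / 10
print(f"group A holds {share_A:.0%} of the winning slots")
```

The model never touches the group label, yet group A sweeps nearly every winning slot, well beyond its 75% share of entrants, because "typical" was defined by a skewed pool.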

Plus, most cameras are literally optimized for light skin, which probably didn't help matters, either. In fact, the AI actually discarded some entries that it deemed "too dim."
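That "too dim" filter is worth dwelling on. Here's a hypothetical sketch (again, my own invented numbers, not Beauty.AI's pipeline) of how a brightness cutoff that never looks at race still rejects one group far more often:

```python
import random

random.seed(1)

# A "race-blind" pre-filter: reject any photo whose average brightness
# falls below a fixed cutoff. The numbers are invented, but the mechanism
# is the point: skin tone shifts the brightness distribution, so the
# cutoff hits one group disproportionately.
CUTOFF = 0.35

def brightness(base_tone):
    # base skin reflectance plus lighting noise
    return base_tone + random.gauss(0, 0.08)

lighter = [brightness(0.60) for _ in range(1000)]
darker = [brightness(0.35) for _ in range(1000)]

def reject_rate(photos):
    return sum(b < CUTOFF for b in photos) / len(photos)

print(f"rejected as 'too dim': lighter={reject_rate(lighter):.1%}, "
      f"darker={reject_rate(darker):.1%}")
```

A threshold that sounds like a neutral quality check ends up acting as a proxy for skin tone.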

So, because of shoddy recruitment, a non-diverse team, internal biases, and a whole slew of other reasons, these results were ... more than a little skewed.

Thankfully, Youth Laboratories acknowledged this oversight in a press release. They're delaying the next stage in their robotic beauty pageant until they iron out the kinks in the system.

Ironically, Alex Zhavoronkov, their chief science officer, told The Guardian, "The algorithm ... chose people who I may not have selected myself."

Basically, their accidentally racist and not-actually-objective robot also had lousy taste. Whoops.

Ooooh baby, racist robots! Yeah! GIF from Ruptly TV/YouTube.

This raises an important question: As cool as it would be to create an "objective" robot or algorithm, is it even really possible?

The short answer is: probably not. But that's because people aren't actually working on it yet — at least, not in the way they claim to be.

As cool and revelatory as these cold computer calculations could potentially be, getting people to acknowledge and compensate for their unconscious biases when they build the machines could be the biggest hurdle. Because what you put in determines what you get out.

"While many AI safety activists are concerned about machines wiping us out, there are very few initiatives focused on ensuring diversity, balance, and equal opportunity for humans in the eyes of AI," said Youth Laboratories Chief Technology Officer Konstantin Kiselev.

Of course you like that one. GIF from "Ex Machina."

This is the same issue we've seen with predictive policing, too.

If you tell a computer that Black and Hispanic people are more likely to be criminals, for example, it's going to hand you an excuse for profiling that looks objective on the surface.

But in actuality, it just perpetuates the same racist system that already exists — except now, the police can blame the computer instead of taking responsibility themselves.
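The feedback loop is easy to simulate. In this toy model (entirely my own construction, not any real department's system), two neighborhoods have identical true crime rates, but patrols are allocated according to past recorded arrests, and you can only record crime where you patrol:

```python
import random

random.seed(2)

# Two neighborhoods with IDENTICAL true crime rates. Patrols go where
# past arrests were recorded, and arrests only get recorded where
# patrols go. A purely arbitrary starting skew (30 vs. 20 arrests)
# never gets corrected -- the "objective" numbers keep reproducing it.
TRUE_RATE = 0.05
arrests = {"A": 30, "B": 20}

for year in range(10):
    total = sum(arrests.values())
    for hood in arrests:
        patrols = int(200 * arrests[hood] / total)  # "data-driven" allocation
        observed = sum(random.random() < TRUE_RATE for _ in range(patrols))
        arrests[hood] += observed  # today's skewed observations become tomorrow's data

print(arrests)
```

After a decade, neighborhood "A" still looks like the high-crime area, even though nothing about the underlying crime rates ever differed — the model just kept confirming its own starting assumption.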

"There is no justice. There is ... just us." GIF from "Justice League."

Of course, even if the Beauty.AI programmers did find a way to compensate for their unconscious biases, they'd still have to deal with the fact that, well, there's just no clear definition for "beauty."

People have been trying to unlock that "ultimate secret key" to attractiveness since the beginning of time. And all kinds of theories abound: Is attractiveness all about the baby-makin', or is it some other evolutionary advantage? Is it like Youth Laboratories suggests, that "healthy people look more attractive despite their age and nationality"?

Also, how much of beauty is strictly physical, as opposed to psychological? Is it all just some icky and inescapable Freudian slip? How much is our taste influenced by what we're told is attractive, as opposed to our own unbiased feelings?

Simply put: Attractiveness serves as many different purposes as there are factors that define it. Even if this algorithm somehow managed to unlock every possible component of beauty, the project was flawed from the start. Humans can't even unanimously pick a single attractive quality that matters most to all of us.

GIF from "Gilligan's Island."

The takeaway here? Even our technology starts with our humanity.

Rather than creating algorithms to justify our prejudices or preferences, we should focus our energies on making institutional changes that bring in more diverse voices to help make decisions. Embracing more perspectives gives us a wider range of beauty — and that's better for everyone.

If your research team or board room or city council actually looks like the world it's supposed to represent, chances are they're going to produce results that look the same way, too.
