Capitol Records has been at the center of controversy over the last few weeks after “signing” — and eventually scrapping — a record deal with an artificial intelligence rapper.
The “artist,” named FN Meka, was created in 2019 by music manager Anthony Martini and video game artist Brandon Le. The virtual rapper uses a real human’s voice, but its lyrics and melodies are created using an AI that analyzes popular music, according to The Guardian.
The character they made looks like a mashup of rappers 6ix9ine and Trippie Redd, with a hint of Kevin Gates. Capitol Records faced criticism from activists this week for FN Meka’s use of the N-word and for a 2019 post from its now-private Instagram account that depicted a virtual police officer beating up the so-called robot rapper.
With stories like these, it always feels like we should add a caveat. In the age of the internet, it’s impossible to know whether things like this are done in earnest or out of a sheer desire to go viral. So “signing” FN Meka may have been a troll job. In fact, I think the grotesqueness of the stereotypes used and the dryness of Capitol Records’ apology make the troll theory seem likely.
But there’s an important point to be made in the aftermath — a point activists have made for several years when it comes to AI. That is, putting powerful technologies in the hands of well-funded-yet-woefully-uncreative people leads to awful outcomes.
Martini and Le lacked the creativity to develop this character in a way that didn’t rely on racist stereotypes. They were apparently unable, or unwilling, to foresee the predictable backlash to the character. (And surely, the same can be said of the executives at Capitol Records.)
These are people whose personal failures, whether creative or ethical, led to a bad product. And the same can be said of other major players in the AI space. In the past, I’ve written about activist tech experts, like former Google employee Timnit Gebru, who’ve sounded the alarm about the ways AI algorithms often discriminate against nonwhite people.
In the past couple of months, we’ve gotten more evidence of this phenomenon. Research released in July revealed that pulse oximeters using AI to read people’s oxygen levels were more likely to produce incorrect results for people with darker skin.
And one need only look at the documented proliferation of racist hate speech across social platforms like Facebook and Twitter to suspect their technology doesn’t prioritize racial sensitivity either.
What Capitol Records exposed in just about the most absurd way possible is a trend in the world of AI: eager developers looking to capitalize on creations that, regardless of their intentions, too often end up harming nonwhite people.