18 Oct, 2021 20:20

Google doesn’t like being shown inconvenient problems, ex-ethical AI co-lead ‘fired’ by the tech giant over her research tells RT

Google clamped down on Timnit Gebru, the former co-lead of its ethical AI team, because she not only revealed bias in its large language models but also called for structural changes in the AI field, she told ‘Going Underground.’

Dr. Gebru was Google’s first black female research scientist, and her contentious parting with the tech giant made headlines last year. It followed her refusal to comply with the company’s demand to retract a paper on the ethical problems arising from the large language models (LLMs) used by Google Translate and other apps. Gebru insists she was fired over her stance, while Google maintains she resigned.

In Monday’s episode of RT’s ‘Going Underground’ program, she told the show’s host, Afshin Rattansi, why she split with Google.

Ethical AI is “a field that tries to ensure that, when we work on AI technology, we’re working on it with foresight and trying to understand what the negative potential societal effects are and minimizing those,” Gebru said.

And this was exactly what she had been pursuing at Google, before she was – in her view – fired by the company. “I’m never going to say that I resigned. That’s not going to happen,” Gebru said.

The 2020 paper Gebru prepared with her colleagues highlighted the “environmental and financial costs” of large language models and warned against making them ever bigger. LLMs “consume a lot of computing power,” she explained. “So, if you’re working on larger and larger language models, only the people with those kinds of huge compute powers are going to be able to use them … Those benefiting from the LLMs aren’t those who are paying the costs.” That state of affairs, she said, amounted to “environmental racism.”

Large language models learn from data scraped across the internet, but that doesn’t mean they incorporate the full range of opinions available online, said Gebru, an Ethiopian American. The paper highlighted the “dangers” of such an approach, which risks training AI to reproduce bias and hateful content.

With LLMs, you can get something that sounds really fluid and coherent, but is completely wrong.

The most vivid example of that, she said, was the case of a Palestinian man who was reportedly arrested by Israeli police after Facebook’s algorithms mistranslated a post of his reading “Good morning” as “Attack them.”

Gebru said she discovered that her Google bosses really didn’t like it “whenever you showed them a problem and it was inconvenient,” or too big for them to admit to. Indeed, the company wanted her to retract her peer-reviewed academic paper, which was about to be published at a scientific conference. She insists the demand wasn’t backed by any reasoning or research; her supervisors simply said it “showed too many problems” with the LLMs.

The strategy of major players such as Google, Facebook, and Amazon is to pretend AI’s bias is “purely a technical issue, purely an algorithmic issue … [that] it has nothing to do with antitrust laws, monopoly, labor issues, or power dynamics. It’s just purely that technical thing that we need to work out,” Gebru said.

“We need regulations. And I think that’s … why all of these organizations and companies wanted to come down hard on me and a few other people: because they think what we’re advocating for isn’t a simple algorithmic tweak – it’s for larger structural changes,” she said.

With other whistleblowers following in her footsteps, society has finally begun to understand the need to regulate AI development, Gebru said, adding that the public must also be ready to shield those who expose corporate wrongdoing from retaliation.

Gebru also co-founded the Black in AI group, which brings together scientists of color working in the field, and since leaving Google she has been assembling an interdisciplinary research team to continue her work. She said she won’t be looking to make a lot of money with the project, which is a non-profit, because “if your number-one goal is to maximize profits, then you’re going to cut corners” and end up creating precisely the opposite of ethical AI.
