Biography of Timnit Gebru
| | |
|---|---|
| Real Name | Timnit Gebru |
| Famous as | Computer scientist, advocate, co-lead of the Ethical Artificial Intelligence team at Google |
| Hometown | Addis Ababa, Ethiopia |
Timnit Gebru is a computer scientist and a co-lead of the Ethical Artificial Intelligence team at Google.
Gebru was born and raised in Addis Ababa, Ethiopia; both her parents are from Eritrea. Her father and two oldest sisters are electrical engineers. Her father died when she was five years old, and she was raised by her mother.

In the late 1990s, Gebru escaped potential forced deportation to Eritrea by the Ethiopian government and travelled to Ireland. She then immigrated to the United States to join her mother, who had fled Ethiopia a few months earlier, and her two older sisters, who were already living in the U.S. Gebru is the youngest of three. After completing high school in Massachusetts, she was accepted to Stanford University, where she earned her Bachelor's and Master's degrees in electrical engineering. She then worked at Apple Inc., developing signal processing algorithms for the first iPad.

Gebru earned her doctorate in 2017 at Stanford University under the supervision of Fei-Fei Li, using data mining of publicly available images. She was interested in the amount of money spent by governmental and non-governmental organisations trying to collect information about communities. To investigate alternatives, Gebru combined deep learning with Google Street View to estimate the demographics of United States neighbourhoods, showing that socioeconomic attributes such as voting patterns, income, race and education can be inferred from observations of cars; for example, if pickup trucks outnumber sedans in a community, that community is more likely to vote for the Republican party. Her team analysed over 15 million images from the 200 most populated US cities. The work was extensively covered in the media, being picked up by BBC News, Newsweek, The Economist and The New York Times.
In December 2020, Gebru, by then one of Google's top artificial intelligence researchers and the technical co-lead of its Ethical Artificial Intelligence team, said the company had abruptly fired her. She claimed managers were upset about an email she had sent to colleagues.
The email, which was sent to the Brain Women and Allies listserv, voiced frustration that managers were trying to get Gebru to retract a research paper. The full text was first published in Platformer. “A week before you go out on vacation, you see a meeting pop up at 4:30pm PST on your calendar,” it reads. “Then in that meeting your manager’s manager tells you ‘it has been decided’ that you need to retract this paper by next week… You are not worth having any conversations about this, since you are not someone whose humanity (let alone expertise recognized by journalists, governments, scientists, civic organizations such as the electronic frontiers foundation etc) is acknowledged or valued in this company.”
After the email went out, Gebru told managers that certain conditions had to be met in order for her to stay at the company. Otherwise, she would have to work on a transition plan.
She co-founded Black in AI, an advocacy group that has held workshops at major AI conferences and pushed for greater Black representation in AI development and research. She has also regularly criticized tech companies, including Google, for failing to hire more workers of color and for treating them differently once they are on board.
Two days before she announced her firing, Gebru had solicited advice regarding whistleblower-like protections for AI-ethics researchers, tweeting, “With the amount of censorship & intimidation that goes on towards people in specific groups, how does anyone trust any real research in this area can take place?”
Tensions between Gebru and the company also stemmed from research by Gebru's team that was critical of AI systems known as large language models, according to one machine-learning researcher who had reviewed the study and requested anonymity because they were not authorized to discuss the unpublished work. The company may one day seek to capitalize on such systems in consumer-facing products that could generate convincing passages of text that are difficult to distinguish from human writing, the researcher said.