The very first citation in this stupid letter, https://futureoflife.org/open-letter/pause-giant-ai-experiments/, is to our #StochasticParrots Paper,
"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1]"
EXCEPT
that one of the main points we make in the paper is that one of the biggest harms of large language models is caused by CLAIMING that LLMs have "human-competitive intelligence."
They basically say the opposite of what we say and cite our paper?
Pause Giant AI Experiments: An Open Letter - Future of Life Institute
We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
Klaus Baldermann 📜
in reply to Timnit Gebru (she/her)
That's typical for #LLM-generated papers 🙊
Timnit Gebru (she/her)
in reply to Timnit Gebru (she/her)
https://www.currentaffairs.org/2021/07/the-dangerous-ideas-of-longtermism-and-existential-risk
The Dangerous Ideas of “Longtermism” and “Existential Risk” ❧ Current Affairs
Timnit Gebru (she/her)
in reply to Timnit Gebru (she/her)
Imagine if you were seeing the leaders of scientology being the ones leading the "AI" research, discourse, and policy recommendations & the press is talking about everything except that they're leaders of scientology & that's their motivating ideology for all this.
Timnit Gebru (she/her)
in reply to Timnit Gebru (she/her)
😂😂😂 I. Just. Can't. LOYAL TO WHOM?
" robust public funding for technical AI safety research;"
THESE ARE THE SAME PEOPLE. The same assholes that got us here call themselves "AI safety" people. That's what OpenAI people called themselves, and these bozos who wrote this letter highly influence those clowns.
Timnit Gebru (she/her)
in reply to Timnit Gebru (she/her)
Why should we COPE? Why should we just plan to have disruptions and just "let" this shit happen?
"Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, & give society a chance to adapt"
Timnit Gebru (she/her)
in reply to Timnit Gebru (she/her)
“We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.”
The only people having an “AI summer” are the billionaires & tech bros. Workers are being exploited, artists are talking about committing suicide, and gig workers are struggling to pay any bills.
InfoSecBen
in reply to Timnit Gebru (she/her)
Pausing AI Developments Isn't Enough. We Need to Shut it All Down
Eliezer Yudkowsky (Time)
Timnit Gebru (she/her)
in reply to InfoSecBen
InfoSecBen
in reply to Timnit Gebru (she/her)
Blood Price - Penny Arcade
www.penny-arcade.com