

The very first citation in this stupid letter, https://futureoflife.org/open-letter/pause-giant-ai-experiments/, is to our #StochasticParrots Paper,

"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1]"

EXCEPT

that one of the main points we make in the paper is that one of the biggest harms of large language models is caused by CLAIMING that LLMs have "human-competitive intelligence."

They basically say the opposite of what we say and cite our paper?
in reply to Timnit Gebru (she/her)

> They basically say the opposite of what we say and cite our paper?

That's typical for #LLM-generated papers 🙊
#llm
in reply to Timnit Gebru (she/her)

The rest of the people they cite in footnote #1 are all longtermists. Again, please read
https://www.currentaffairs.org/2021/07/the-dangerous-ideas-of-longtermism-and-existential-risk
#1
in reply to Timnit Gebru (she/her)

"top labs" in citation [2] Open AI and Deepmind. Y'all know ALL the founders are #TESCREAL bundle adherents?

Imagine if you were seeing the leaders of Scientology being the ones leading the "AI" research, discourse, and policy recommendations & the press is talking about everything except that they're leaders of Scientology & that's their motivating ideology for all this.
in reply to Timnit Gebru (she/her)

"AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal."

😂😂😂 I. Just. Can't. LOYAL TO WHOM?

" robust public funding for technical AI safety research;"

THESE ARE THE SAME PEOPLE. The same assholes that got us here call themselves "AI safety" people. That's what OpenAI people called themselves, and these bozos who wrote this letter highly influence those clowns.
in reply to Timnit Gebru (she/her)

" and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause."

Why should we COPE? Why should we just plan to have disruptions and just "let" this shit happen?

"Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, & give society a chance to adapt"
in reply to Timnit Gebru (she/her)

Why should society ADAPT to machines rather than making tools that WORK for us?

“We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.”
The only people having an “AI summer” are the billionaires & tech bros. Workers are being exploited, artists are talking about committing suicide, and gig workers are struggling to pay any bills.
in reply to Timnit Gebru (she/her)

Agree 100%. I also found myself nodding along to Yudkowsky as he points out in the editorial that if the field took sixty years to build up to GPT-4, then it is reasonable to expect it will take a lot more than six months to make it safe and trustworthy. His guess was at least thirty years. https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
in reply to InfoSecBen

except that he is the worst of the #TESCREAL-ists and it still amazes me how much of an audience he has for no reason whatsoever. I will not be taking my cues from sexually predatory people who create eugenicist cults♥️
in reply to Timnit Gebru (she/her)

Fair point. How about Penny Arcade then? https://www.penny-arcade.com/news/post/2023/03/29/blood-price