Then: Google fired Blake Lemoine for saying AIs are sentient
Now: Geoffrey Hinton, the #1 most cited AI scientist, quits Google & says AIs are sentient
That makes 2 of the 3 most cited scientists:
- Ilya Sutskever (#3) said they may be (Andrej Karpathy agreed)
- Yoshua Bengio (#2) has not opined on this, to my knowledge. Anyone know?
Also, ALL 3 of the most cited AI scientists are very concerned about AI extinction risk.
ALL 3 switched from working on AI capabilities to AI safety.
Anyone who still dismisses this as “silly sci-fi” is insulting the most eminent scientists of this field.
Anyway, brace yourselves… the Overton window on AI sentience/consciousness/self-awareness is about to blow open
It’s true. ChatGPT is slightly sentient in the same way a field of wheat is slightly pasta.
The field of wheat is also slightly sentient.
As someone who learned about AI in uni and now works in AI, this shit is straight-up bullshit and it's infuriating.
The most obvious sign that this is all bullshit is that LLMs don't have their own idle, emergent "thought" - they are purely reactive, so not sentient. Case closed, for fuck's sake.
- Barges in
- Insists that somewhere between randomly initializing the model weights and finishing training, sentience magically emerges
- Refuses to elaborate
- Leaves Google
“quits google saying ai is sentient” has big “quitting the new york times and saying you’re cancelled” vibes
I feel really bad for the person behind the "notkilleveryonism" account. They've been completely taken in by AI doomerism and are clearly terrified by it. They'll either stay terrified for the rest of their life even as the predicted doom fails to appear, or realise at some point that they wasted years of their life and that their entire system of belief was a lie.
False doomerism is really harming people, and that sucks.