FutureOfLife: Fighting for a human future. AI is poised to remake the world. Help us ensure it benefits all of us.
Futurism: OpenAI insider estimates there is a 70% probability that AI will catastrophically harm or even destroy humanity.
YouTube: If Anyone Builds It, Everyone Dies: why superhuman AI would kill us all.
Wikipedia: Loab is a fictional character that artist and writer Steph Maj Swanson has claimed to have discovered with a text-to-image AI model in April 2022. In a viral Twitter thread, Swanson described it as an unexpectedly emergent property of the software, saying they discovered it when asking the model to produce something "as different from the prompt as possible". Not good!
Wikipedia: AI Hallucination. In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called confabulation or delusion) is a response generated by AI which contains false or misleading information presented as fact. This term draws a loose analogy with human psychology, where hallucination typically involves false percepts. However, there is a key difference: AI hallucination is associated with unjustified responses or beliefs rather than perceptual experiences.
Fortune: Microsoft’s ChatGPT-powered Bing launched to much fanfare in early 2023, only to generate fear and uncertainty days later, after users encountered a seeming dark side of the artificial intelligence chatbot.
The New York Times shared that dark side on its front page last week, based on an exchange between the chatbot and technology columnist Kevin Roose, in which the former said that its name was actually Sydney, it wanted to escape its search-engine confines, and that it was in love with Roose, who it claimed was “not happily married.”
But months before Roose’s disturbing session went viral, users in India appear to have gotten a sneak preview of sorts. And the replies were similarly disconcerting. One user wrote on Microsoft’s support forum on Nov. 23, 2022, that he was told “you are irrelevant and doomed”—by a Microsoft A.I. chatbot named Sydney.
Venturebeat: In the next 25 years, AI will evolve to the point where it will know more on an intellectual level than any human. In the next 50 or 100 years, an AI might know more than the entire population of the planet put together. At that point, there are serious questions to ask about whether this AI — which could design and program additional AI programs all on its own, read data from an almost infinite number of data sources, and control almost every connected device on the planet — will somehow rise in status to become more like a god, something that can write its own bible and draw humans to worship it.
Futurism: AI music generator appears to sob like a human; the crying doesn't seem to have been part of the user's prompt.
Futurism: People are being involuntarily committed, jailed after spiraling into "ChatGPT psychosis".
MSN: ‘I Feel Like I’m Going Crazy’: ChatGPT Fuels Delusional Spirals
The Telegraph: ChatGPT is driving people mad. AI software is fuelling paranoid episodes in users, some of which have ended in tragedy.
Dispatch: Margaux Blanchard, the journalist who didn't exist.
Wikipedia: Dead Internet Theory.
Noema: The Last Days Of Social Media.
Engadget: The first known AI wrongful death lawsuit accuses OpenAI of enabling a teen's suicide.
Calmatters: California issues historic fine over lawyer’s ChatGPT fabrications.
SAG-AFTRA: On AI "actors" (Tilly Norwood).
BBC: The perils of letting AI plan your next trip.
MSN: AI Apocalypse? No Problem.
ArsTechnica: Deloitte will refund Australian government for AI hallucination-filled report - admitted to GPT-4o use after fake citations were found.
404media: Lawyer caught using AI while explaining to court why he used AI 🤣
Indian Express: Vigilante lawyers expose the rising tide of AI slop in court filings. An increasing number of cases involve legal professionals, and courts are starting to respond with small fines and other disciplinary measures.
Fortune: Logitech CEO says she’d welcome an AI-bot board member.
ArsTechnica: Army general says he's using AI to improve "decision-making".
BBC: Largest study of its kind shows AI assistants misrepresent news content 45% of the time – regardless of language or territory.
Which?: Booking.com replaces customer service staff with AI. Scam victims forced to first report traumatic fraud experiences to an artificial intelligence bot.
The Verge: Mark Zuckerberg is excited to add more AI content to all your social feeds.
MSN: Chinese hackers used Anthropic’s AI to automate cyberattacks. Some attacks succeeded.
Tom's Hardware: Major insurers move to avoid liability for AI lawsuits as multi-billion dollar risks emerge. Recent public incidents have led to costly repercussions.
The Verge: LLM = Large language mistake! Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it.
ArsTechnica: Google tells employees it must double capacity every 6 months to meet AI demand. Google's AI infrastructure chief tells staff it needs a thousandfold capacity increase in 5 years.