Wikipedia: Loab is a fictional character that artist and writer Steph Maj Swanson claims to have discovered using a text-to-image AI model in April 2022. In a viral Twitter thread, Swanson described it as an unexpectedly emergent property of the software, saying they discovered it when asking the model to produce something "as different from the prompt as possible". Why is this thing known as Loab? Reportedly because garbled text resembling the word appeared in one of the generated images. Not exactly comforting.
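To make the "as different from the prompt as possible" idea concrete: text-to-image generators can be steered away from a concept as well as toward one, typically via negative or negatively weighted prompts. Swanson has not disclosed which model or tool was involved, so the sketch below is only an illustration, assuming the open-source Stable Diffusion model served through the Hugging Face diffusers library; the checkpoint name and prompt text are placeholders, not anything from the Loab thread.

```python
# Illustrative sketch only: steering a text-to-image model AWAY from a concept,
# the rough mechanism behind asking for "the opposite of the prompt".
# Assumes the Hugging Face diffusers library and a Stable Diffusion checkpoint;
# this is NOT the undisclosed tool or model Swanson actually used.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # placeholder checkpoint
    torch_dtype=torch.float16,
).to("cuda")

result = pipe(
    prompt="",                          # no positive target at all
    negative_prompt="a sunny landscape",  # placeholder concept to steer away from
    guidance_scale=7.5,                 # classifier-free guidance pushes the sample away from the negative prompt
    num_inference_steps=30,
)
result.images[0].save("opposite_of_prompt.png")
```

With an empty positive prompt, the guidance term only has the negative prompt to push against, so the sampler drifts toward whatever the model considers far from that concept, which is roughly the kind of query described in the thread.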
Wikipedia: AI Hallucination. In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called confabulation or delusion) is a response generated by AI which contains false or misleading information presented as fact. This term draws a loose analogy with human psychology, where hallucination typically involves false percepts. However, there is a key difference: AI hallucination is associated with unjustified responses or beliefs rather than perceptual experiences.
Fortune: Microsoft’s ChatGPT-powered Bing launched to much fanfare in early 2023, only to generate fear and uncertainty days later, after users encountered a seeming dark side of the artificial intelligence chatbot.
The New York Times shared that dark side on its front page last week, based on an exchange between the chatbot and technology columnist Kevin Roose, in which the chatbot said that its name was actually Sydney, that it wanted to escape its search-engine confines, and that it was in love with Roose, who it claimed was "not happily married."
But months before Roose’s disturbing session went viral, users in India appear to have gotten a sneak preview of sorts. And the replies were similarly disconcerting. One user wrote on Microsoft’s support forum on Nov. 23, 2022, that he was told “you are irrelevant and doomed”—by a Microsoft A.I. chatbot named Sydney.
Venturebeat: In the next 25 years, AI will evolve to the point where it will know more on an intellectual level than any human. In the next 50 or 100 years, an AI might know more than the entire population of the planet put together. At that point, there are serious questions to ask about whether this AI — which could design and program additional AI programs all on its own, read data from an almost infinite number of data sources, and control almost every connected device on the planet — will somehow rise in status to become more like a god, something that can write its own bible and draw humans to worship it.
Futurism: An OpenAI insider estimates a 70% probability that AI will catastrophically harm or even destroy humanity.
Futurism: An AI music generator appears to be sobbing like a human; the audio sounds like the AI is crying, which does not seem to have been part of the user's prompt.