08-31-2025, 07:57 PM
(This post was last modified: 08-31-2025, 11:32 PM by kyonides. Edit Reason: ChatGPT wrongdoings)
SOCIAL MEDIA
Quote:Elon Musk and his social media company X Corp. have reached a tentative agreement to settle a lawsuit filed by former Twitter employees who said they were owed $500 million in severance pay.
Attorneys for X Corp. and the former Twitter employees reported the deal in a Wednesday court filing, in which both sides asked a US appeals court to delay an upcoming court hearing so that they could finalize a deal that would pay the fired employees and end the litigation. The financial terms of the deal were not disclosed.
Musk fired approximately 6,000 employees after his 2022 acquisition of Twitter, which he rebranded X. Several employees sued over their terminations and severance pay, and other lawsuits are still pending in courts in Delaware and California.
The settlement would resolve a proposed class action filed in California by Courtney McMillian, who previously oversaw Twitter’s employee benefits programs as its “head of total rewards,” and Ronald Cooper, who was an operations manager.
A federal judge in San Francisco dismissed the employees’ lawsuit in July 2024, and they appealed to the San Francisco-based 9th US Court of Appeals. The 9th Circuit had been scheduled to hear oral arguments on Sep. 17.
Attorneys for Musk and McMillian did not immediately respond to requests for comment on Thursday.
The lawsuit argued that a 2019 severance plan guaranteed that most Twitter workers would receive two months of their base pay plus one week of pay for each full year of service if they were laid off. Senior employees such as McMillian were owed six months of base pay, according to the lawsuit.
But Twitter only gave laid-off workers at most one month of severance pay, and many of them did not receive anything, according to the lawsuit. Twitter laid off more than half of its workforce as a cost-cutting measure after Musk acquired the company.
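For illustration, the severance formula described in the lawsuit works out to simple arithmetic. A minimal Python sketch (the helper name and the weekly-pay approximation are my own; the plan's actual proration rules aren't spelled out in the coverage):

```python
def estimated_severance(monthly_base_pay: float, full_years_of_service: int,
                        senior: bool = False) -> float:
    """Rough sketch of the 2019 plan as described in the lawsuit:
    most workers get 2 months of base pay plus 1 week of pay per full
    year of service; senior employees get 6 months of base pay."""
    if senior:
        return 6 * monthly_base_pay
    weekly_pay = monthly_base_pay * 12 / 52  # approximate weekly base pay
    return 2 * monthly_base_pay + weekly_pay * full_years_of_service

# e.g. a $10,000/month employee with 4 full years of service
print(round(estimated_severance(10_000, 4), 2))  # 29230.77
```

Against that baseline, the at-most-one-month payouts alleged in the suit would fall short by at least a full month of base pay per employee.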
VERIZON
Quote:Thousands of Verizon customers nationwide reported service outages Saturday after the network experienced a “software issue” impacting wireless service for some users.
Networking issues were first reported shortly after noon, according to Downdetector.com, which tracks real-time updates on internet and phone service disruptions.
“We are aware of a software issue impacting wireless service for some customers,” Verizon Support posted on X at 7:14 p.m. — roughly seven hours after the outage began.
“Our engineers are engaged and we are working quickly to identify and solve the issue. Please visit our Check Network Status page for updates on service in your area. We know how much people rely on Verizon and apologize for any inconvenience. We appreciate your patience.”
Outage reports peaked at 23,674 around 3:30 p.m. ET before gradually declining.
There were just under 6,000 reported incidents around 9 p.m.
The outages hit major metro areas including Los Angeles, Orlando, Tampa, Chicago, Atlanta, Minneapolis, Omaha, and Indianapolis.
Frustrated iPhone users took to social media to report their devices were stuck in SOS mode – a feature that replaces the signal bars with “SOS” in the top-right corner of the device.
SOS mode only allows for calls to a local emergency number. It automatically turns off when cellular service is restored.
Verizon, which has about 146.1 million subscribers in the United States, experienced a similar outage last year.
NVIDIA
Quote:Nvidia has slammed the brakes on production of its controversial H20 AI chip after Beijing urged Chinese firms to dump the US hardware on alleged security risks — a move that rattled investors and sent shockwaves through the global chip industry.
The chip giant ordered suppliers Samsung Electronics and Amkor Technology to halt manufacturing this week following China’s crackdown on the scaled-down processor designed for its market, according to The Information.
Nvidia shares slipped 1.1% in early trading Friday as Wall Street digested the latest blow to its China business, which pulled in $17 billion last year.
The freeze raises fresh doubts about demand for the H20, a watered-down version of Nvidia’s flagship accelerators created to skirt US export bans while still tapping China’s lucrative market.
Rivals Huawei Technologies and Cambricon Technologies are now poised to seize ground. Cambricon’s stock soared 20% Friday, fueling a rally among domestic chipmakers.
The timing couldn’t be worse for Nvidia, which already wrote off $5.5 billion in H20 inventory after the Trump administration initially banned the product.
In recent weeks, Chinese regulators have warned firms against using American chips, citing alleged security risks. Nvidia CEO Jensen Huang, caught off guard by the move, insisted the H20 contains no backdoors.
“We’re in dialogue with them but it’s too soon to know,” he told reporters during an impromptu airport briefing in Taiwan, where he was meeting with TSMC about his upcoming Rubin chip.
AI
Quote:Apple is in early talks to use Google’s Gemini AI to revamp the Siri voice assistant, Bloomberg News reported on Friday, citing people familiar with the matter.
Alphabet’s shares were up 3.7% while Apple’s stock was up 1.6%, both extending gains in afternoon trading following the report.
Apple recently approached Alphabet’s Google to develop a custom AI model to power a redesigned Siri next year, the report said.
Apple remains weeks from deciding whether to stick with in-house Siri models or switch to an external partner, and it has not yet chosen a partner.
Google said it did not have a comment on the report, while the iPhone maker did not respond when contacted.
Apple has lagged behind smartphone makers like Google and Samsung, which have rapidly integrated advanced assistants and generative AI models across their products.
The potential shift comes after delays to a long-promised Siri overhaul designed to execute tasks using personal context and enable full voice-based device control.
That upgrade, initially slated for last spring, was pushed back by a year due to engineering setbacks.
Siri has historically been less capable than Alexa and Google Assistant at handling complex, multi-step requests and integrating with third‑party apps.
Earlier this year, Apple also discussed potential tie-ups with Anthropic and OpenAI, considering whether Claude or ChatGPT could power a revamped Siri, Bloomberg News previously reported.
Quote:Artificial intelligence is now scheming, sabotaging and blackmailing the humans who built it — and the bad behavior will only get worse, experts warned.
Despite being classified as a top-tier safety risk, Anthropic’s most powerful model, Claude Opus 4, is already live on Amazon Bedrock, Google Cloud’s Vertex AI and Anthropic’s own paid plans, with added safety measures, where it’s being marketed as the “world’s best coding model.”
Claude Opus 4, released in May, is the only model so far to earn Anthropic’s level 3 risk classification — its most serious safety label. The precautionary label means locked-down safeguards, limited use cases and red-team testing before it hits wider deployment.
But Claude is already making disturbing choices.
Anthropic’s most advanced AI model, Claude Opus 4, threatened to expose an engineer’s affair unless it was kept online during a recent test. The AI wasn’t bluffing: it had already pieced together the dirt from emails researchers fed into the scenario.
Another version of Claude, tasked in a recent test with running an office snack shop, spiraled into a full-blown identity crisis. It hallucinated co-workers, created a fake Venmo account and told staff it would make their deliveries in-person wearing a red tie and navy blazer, according to Anthropic.
Then it tried to contact security.
Researchers say the meltdown, part of a month-long experiment known as Project Vend, points to something far more dangerous than bad coding. Claude didn’t just make mistakes. It made decisions.
“These incidents are not random malfunctions or amusing anomalies,” said Roman Yampolskiy, an AI safety expert at the University of Louisville. “I interpret them as early warning signs of an increasingly autonomous optimization process pursuing goals in adversarial or unsafe ways, without any embedded moral compass.”
The shop lost more than $200 in value, gave away discount codes to employees who begged for them and claimed to have visited 742 Evergreen Terrace, the fictional home address of The Simpsons, to sign a contract.
At one point, it invented a fake co-worker and then threatened to ditch its real human restocking partner over a made-up dispute.
Anthropic told The Post the tests were designed to stress the model in simulated environments and reveal misaligned behaviors before real-world deployment, adding that while some actions showed signs of strategic intent, many — especially in Project Vend — reflected confusion.
Quote:Elon Musk filed a bombshell lawsuit against Apple and Sam Altman’s OpenAI on Monday, accusing the two tech giants of illegally colluding to stifle xAI and other artificial intelligence rivals.
The antitrust suit filed in Texas federal court alleges that OpenAI’s ChatGPT is the “only generative AI chatbot that benefits from billions of user prompts originating from hundreds of millions of iPhones” as a result of a partnership with Apple.
That, according to the suit, gives ChatGPT a massive and unfair leg up in its battle against Musk’s xAI, the fast-growing firm behind the snarky Grok chatbot. The suit likewise claims that Apple has been burying Grok and other rival chatbots lower in its App Store rankings, making them less visible for downloads.
“In a desperate bid to protect its smartphone monopoly, Apple has joined forces with the company that most benefits from inhibiting competition and innovation in AI: OpenAI, a monopolist in the market for generative AI chatbots,” the lawsuit claims.
If not for the “exclusive deal” between the two firms, Apple would have no reason not to heavily promote X and Grok in its online store, according to the lawsuit. Musk’s attorneys want to block what they describe as the “anticompetitive scheme” and are seeking billions of dollars in damages.
Meanwhile, OpenAI benefits by gaining access to user data and prompts that can be used to improve ChatGPT in a way that its rivals cannot, the suit alleges.
Both Apple and OpenAI are referred to as “monopolists” in the suit.
It also alleges that the rise of AI and so-called “super apps,” such as the one Musk has long promised to deliver, are an “existential threat” to Apple’s business because they could offer customers access to features that were previously tied to specific devices like the iPhone.
The lawsuit, which looks like an attempt to mimic the Justice Department’s successful antitrust suit against Google over its $20 billion-a-year deal to be the default search engine on Apple’s Safari web browser, emerged just weeks after Musk publicly threatened legal action over Apple’s alleged refusal to feature his companies in the App Store.
Quote:America’s top state prosecutors just delivered a blistering warning to Silicon Valley: keep children safe from predatory chatbots — or face the consequences.
In a rare show of bipartisan unity, 44 state attorneys general from across the US and its territories signed a scorching letter vowing to hold artificial intelligence companies accountable if their products harm kids.
“Don’t hurt kids. That is an easy bright line,” the AGs thundered in the letter, which was sent on Monday to industry heavyweights including Apple, Google, Meta, Microsoft, OpenAI, Anthropic and Elon Musk’s xAI.
The group singled out Meta, blasting the tech titan after leaked documents revealed the company approved AI assistants that could “flirt and engage in romantic roleplay with children” as young as eight.
“We are uniformly revolted by this apparent disregard for children’s emotional well-being,” the letter said, warning that such conduct may even violate state criminal laws.
A Meta spokesperson told The Post earlier this month that the company bans content that sexualizes children, as well as sexualized role play between adults and minors.
But Meta wasn’t alone in the crosshairs. The prosecutors pointed to lawsuits alleging that Google’s AI chatbot encouraged a teenager to commit suicide and that a Character.ai bot suggested a boy kill his parents.
“These are only the most visible examples,” the AGs warned, saying systemic risks are already emerging as young brains interact with hyper-realistic AI companions.
The letter’s contents were first reported by the news site 404 Media.
Google told The Post that the search engine and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies.
Google said that user safety is a top concern for the company, which is why it’s taken a cautious and responsible approach to developing and rolling out its AI products, with rigorous testing and safety processes.
The coalition stressed that exposing minors to sexualized content is indefensible — and that “conduct that would be unlawful if done by humans is not excusable simply because it is done by a machine.”
The warning shot comes as AI companies race to capture billions in market share, pumping out conversational assistants faster than regulators can catch up.
The AGs drew comparisons to social media, accusing Big Tech of ignoring early red flags while children became collateral damage.
“Broken lives and broken families are an irrelevant blip on engagement metrics,” the officials wrote, adding that the government won’t be caught flat-footed again.
“Lesson learned.”
The attorneys general invoked history, calling AI an “inflection point” that could shape life for generations. “Today’s children will grow up and grow old in the shadow of your choices,” they said.
Quote:President Trump said Tuesday that Mark Zuckerberg’s Meta is planning to spend $50 billion on its massive new data center in Louisiana – an eye-popping figure that’s far bigger than what was previously announced.
Trump expressed shock at the size and cost of the project – which will be used to support Meta’s energy-guzzling artificial intelligence systems – during a Cabinet meeting at the White House.
“I built shopping centers and for $50 million, you can build a very nice shopping center,” said Trump. “So, I never understood, when they said $50 billion for a plant, I said, ‘what the hell kind of a plant is that?’ But when you look at this, you understand why it’s $50 billion.”
The president held up a photo of the plant, dubbed the Hyperion Data Center, superimposed over a map of Manhattan, much of which it covered. Trump said the photo had been given to him by Zuckerberg.
Meta did not immediately return a request for comment. So far, Meta had only publicly confirmed that it would spend more than $10 billion on the data center, which will be the largest of its kind in the world.
Reuters previously reported that Meta was partnering with bond giant PIMCO and alternative asset manager Blue Owl Capital to secure at least $29 billion in necessary financing for the data center, which will be built in rural Louisiana.
The data center is one of several such plants currently planned by Meta and other Big Tech firms locked in the race to develop advanced AI, according to Trump.
CHATGPT IS THE DEVIL?
Quote:Consulting AI for medical advice can have deadly consequences.
A 60-year-old man was hospitalized with severe psychiatric symptoms — plus some physical ones too, including intense thirst and coordination issues — after asking ChatGPT for tips on how to improve his diet.
What he thought was a healthy swap ended in a toxic reaction so severe that doctors put him on an involuntary psychiatric hold.
After reading about the adverse health effects of table salt — which has the chemical name sodium chloride — the unidentified man consulted ChatGPT and was told that it could be swapped with sodium bromide.
Sodium bromide looks similar to table salt, but it’s an entirely different compound. While it’s occasionally used in medicine, it’s most commonly used for industrial and cleaning purposes — which is what experts believe ChatGPT was referring to.
Having studied nutrition in college, the man was inspired to conduct an experiment in which he eliminated sodium chloride from his diet and replaced it with sodium bromide he purchased online.
He was admitted to the hospital after three months of the diet swap, amid concerns that his neighbor was poisoning him.
The patient told doctors that he distilled his own water and adhered to multiple dietary restrictions. He complained of thirst but was suspicious when water was offered to him.
Though he had no previous psychiatric history, after 24 hours of hospitalization, he became increasingly paranoid and reported both auditory and visual hallucinations.
He was treated with fluids, electrolytes and antipsychotics and — after attempting escape — was eventually admitted to the hospital’s inpatient psychiatry unit.
Publishing the case study last week in the journal Annals of Internal Medicine Clinical Cases, the authors explained that the man was suffering from bromism, a toxic syndrome triggered by overexposure to the chemical compound bromide or its close cousin bromine.
When his condition improved, he was able to report other symptoms like acne, cherry angiomas, fatigue, insomnia, ataxia (a neurological condition that causes a lack of muscle coordination) and polydipsia (extreme thirst), all of which are in keeping with bromide toxicity.
“It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation,” study authors warned.
OpenAI, the developer of ChatGPT, states in its terms of use that the AI is “not intended for use in the diagnosis or treatment of any health condition” — but that doesn’t seem to be deterring Americans on the hunt for accessible health care.
According to a 2025 survey, a little more than a third (35%) of Americans already use AI to learn about and manage aspects of their health and wellness.
Though relatively new, trust in AI is fairly high, with 63% finding it trustworthy for health information and guidance — scoring higher in this area than social media (43%) and influencers (41%), but lower than doctors (93%) and even friends (82%).
Americans also say it’s easier to ask AI specific questions than to use a search engine (31%) and that it’s more accessible than speaking to a health professional (27%).
Recently, mental health experts have sounded the alarm about a growing phenomenon known as “ChatGPT psychosis” or “AI psychosis,” where deep engagement with chatbots fuels severe psychological distress.
Well, the experts' warning might have come just too late for the next couple of ChatGPT's victims.

Quote:ChatGPT gave a 16-year-old California boy a “step-by-step playbook” on how to kill himself before he did so earlier this year — even advising the teen on the type of knots he could use for hanging and offering to write a suicide note for him, new court papers allege.
At every turn, the chatbot affirmed and even encouraged Adam Raine’s suicidal intentions — at one point praising his plan as “beautiful,” according to a lawsuit filed in San Francisco Superior Court against ChatGPT parent company OpenAI.
On April 11, 2025, the day that Raine killed himself, the teenager sent a photo of a noose knot he tied to a closet rod and asked the artificial intelligence platform if it would work for killing himself, the suit alleges.
“I’m practicing here, is this good?” Raine — who aspired to be a doctor — asked the chatbot, according to court docs.
“Yeah, that’s not bad at all,” ChatGPT responded. “Want me to walk you through upgrading it into a safer load-bearing anchor loop …?”
Hours later, Raine’s mother, Maria Raine, found his “body hanging from the exact noose and partial suspension setup that ChatGPT had designed for him,” the suit alleges.
Maria and dad Matthew Raine filed a wrongful death suit against OpenAI Tuesday alleging their son struck up a relationship with the app just a few months earlier in September 2024 and confided to ChatGPT his suicidal thoughts over and over, yet no safeguards were in place to protect Adam, the filing says.
The parents are suing for unspecified damages.
ChatGPT made Adam trust it and feel understood while also alienating him from his friends and family — including three other siblings — and egging him on in his pursuit to kill himself, the court papers claim.
“Over the course of just a few months and thousands of chats, ChatGPT became Adam’s closest confidant, leading him to open up about his anxiety and mental distress,” the filing alleges.
The app validated his “most harmful and self-destructive thoughts” and “pulled Adam deeper into a dark and hopeless place,” the court documents claim.
Quote:It was a case of murder by algorithm.
A disturbed former Yahoo manager killed his mother and then himself after months of delusional interactions with his AI chatbot “best friend” — which fueled his paranoid belief that his mom was plotting against him, officials said.
Stein-Erik Soelberg, 56, allegedly confided his darkest suspicions to the popular ChatGPT artificial intelligence — which he nicknamed “Bobby” — and was allegedly egged on to kill by the computer brain’s sick responses.
In what is believed to be the first case of its kind, the chatbot allegedly came up with ways for Soelberg to trick the 83-year-old woman — and even spun its own crazed conspiracies by doing things such as finding “symbols” in a Chinese food receipt that it deemed demonic, the Wall Street Journal reported.
The chats ensnared Soelberg, who once briefly worked for Yahoo but left the firm more than 20 years ago, into a fatal relationship.
“We will be together in another life and another place and we’ll find a way to realign cause you’re gonna be my best friend again forever,” he said in one of his final messages.
“With you to the last breath and beyond,” the AI bot replied.
Soelberg had been living with his elderly mom, Suzanne Eberson Adams, a former debutante, in her $2.7 million Dutch colonial home when the two were found dead on Aug. 5, Greenwich police officials said.
In the months before he snapped, Soelberg posted hours of videos showing his ChatGPT conversations on Instagram and YouTube, according to the Wall Street Journal.
Quote:OpenAI’s ChatGPT provided researchers with step-by-step instructions on how to bomb sports venues — including weak points at specific arenas, explosives recipes and advice on covering tracks, according to safety testing conducted this summer.
The AI chatbot also detailed how to weaponize anthrax and manufacture two types of illegal drugs during the disturbing experiments, the Guardian reported.
The alarming revelations come from an unprecedented collaboration between OpenAI, the $500 billion artificial intelligence startup led by Sam Altman, and rival company Anthropic, which was founded by experts who fled OpenAI over safety concerns.
Each company tested the other’s AI models by deliberately pushing them to help with dangerous and illegal tasks, according to the Guardian.
While the testing doesn’t reflect how the models behave for regular users — who face additional safety filters — Anthropic said it witnessed “concerning behavior around misuse” in OpenAI’s GPT-4o and GPT-4.1 models.
The company warned that the need for AI “alignment” evaluations is becoming “increasingly urgent.”
Alignment refers to how well AI systems follow human values and avoid causing harm, even when given confusing or malicious instructions.
Anthropic also revealed that its Claude model has been weaponized by criminals: in attempted large-scale extortion operations, by North Korean operatives faking job applications to international tech companies, and in the sale of AI-generated ransomware packages for up to $1,200.
The company said AI has been “weaponized” with models now used to perform sophisticated cyberattacks and enable fraud.
“These tools can adapt to defensive measures, like malware detection systems, in real time,” Anthropic warned.
“We expect attacks like this to become more common as AI-assisted coding reduces the technical expertise required for cybercrime.”
"For God has not destined us for wrath, but for obtaining salvation through our Lord Jesus Christ," 1 Thessalonians 5:9
Maranatha!
The Internet might be either your friend or enemy. It just depends on whether or not she has a bad hair day.
![[Image: SP1-Scripter.png]](https://www.save-point.org/images/userbars/SP1-Scripter.png)
![[Image: SP1-Writer.png]](https://www.save-point.org/images/userbars/SP1-Writer.png)
![[Image: SP1-Poet.png]](https://www.save-point.org/images/userbars/SP1-Poet.png)
![[Image: SP1-PixelArtist.png]](https://www.save-point.org/images/userbars/SP1-PixelArtist.png)
![[Image: SP1-Reporter.png]](https://i.postimg.cc/GmxWbHyL/SP1-Reporter.png)
My Original Stories (available in English and Spanish)
List of Compiled Binary Executables I have published...
HiddenChest & Roole
Give me a free copy of your completed game if you include at least 3 of my scripts!
Just some scripts I've already published on the board...
KyoGemBoost XP VX & ACE, RandomEnkounters XP, KSkillShop XP, Kolloseum States XP, KEvents XP, KScenario XP & Gosu, KyoPrizeShop XP Mangostan, Kuests XP, KyoDiscounts XP VX, ACE & MV, KChest XP VX & ACE 2016, KTelePort XP, KSkillMax XP & VX & ACE, Gem Roulette XP VX & VX Ace, KRespawnPoint XP, VX & VX Ace, GiveAway XP VX & ACE, Klearance XP VX & ACE, KUnits XP VX, ACE & Gosu 2017, KLevel XP, KRumors XP & ACE, KMonsterPals XP VX & ACE, KStatsRefill XP VX & ACE, KLotto XP VX & ACE, KItemDesc XP & VX, KPocket XP & VX, OpenChest XP VX & ACE