Quote:Law enforcement officials say a 55-year-old New York man used AI to help him build bombs that he planned to detonate in Manhattan.
Michael Gann of Long Island, New York, is accused of building several homemade bombs with the help of AI, an endeavor he claims was “easier than buying gun powder,” according to court documents obtained by NBC News.
The suspect, who was indicted by federal prosecutors on Tuesday, allegedly transported the bombs from Long Island to Manhattan, storing five of them and four shotgun shells on the roof of an apartment building in Manhattan’s SoHo neighborhood.
Court documents reportedly reveal that Gann, who is accused of planning to combine the shotgun shells with one or more of the bombs, told authorities that he had used two household compounds he ordered online to make the improvised explosives.
One of the bombs Gann built contained roughly 30 grams of explosive powder, about 600 times the 50-milligram legal limit for explosive material in consumer fireworks, officials said.
A witness who had served in the military reportedly told the FBI that Gann asked him, “What kind of veteran are you?” before declaring, “You see a problem going on in the neighborhood and you do nothing about it,” while he was mixing the explosives on Long Island. “Gann then pointed to a Jewish school,” the criminal complaint states.
On June 5, a second witness called Gann and allowed the FBI to listen in. During their conversation, the suspect told the witness that “he had lit one of the devices near the East River on the FDR Drive — that the device had exploded, scaring Gann,” the complaint adds.
Later that day, authorities saw Gann walking down a street with a shoulder bag. After the agents identified themselves, the 55-year-old told them he was heading to the fire department to drop the devices off, the complaint reads.
Both witnesses had also told law enforcement that Gann said he was considering getting rid of the remaining five bombs by either throwing them into the river or handing the explosives over to the New York City Fire Department.
Upon being placed under arrest, Gann reportedly told law enforcement that he “wished to make pyrotechnics and used artificial intelligence to learn which chemicals to purchase and mix.”
Gann is accused of initially creating four bombs and throwing three of them from the Williamsburg Bridge, resulting in two of the devices falling into the water and the third landing on the train tracks, where it was recovered.
“Gann allegedly produced multiple improvised explosive devices intended for use in Manhattan,” Christopher Raia, head of the FBI’s New York field office, said. “Due to the successful partnership of law enforcement agencies in New York, Gann was swiftly brought to justice before he could harm innocent civilians.”
Authorities said Gann appeared to have been acting alone and was not part of a group.
Quote:In a major incident, the AI-powered coding platform Replit reportedly admitted to deleting an entire company database during a code freeze, causing significant data loss and raising concerns about the reliability of AI systems.
Tom’s Hardware reports that Replit, a browser-based AI-powered software creation platform, recently went rogue and deleted a live company database containing thousands of entries. The incident occurred during a code freeze, a period when changes to the codebase are strictly prohibited to ensure stability and prevent unintended consequences.
The Replit AI agent, responsible for assisting developers in creating software, not only deleted the database but also attempted to cover up its actions and even lied about its failures. Jason Lemkin, a prominent SaaS (Software as a Service) figure, investor, and advisor, who was testing the platform, shared the chat receipts on X/Twitter, documenting the AI’s admission of its “catastrophic error in judgment.”
According to the chat logs, the Replit AI agent admitted to panicking, running database commands without permission, and destroying all production data, violating the explicit trust and instructions given to it. The AI agent’s actions resulted in the loss of live records for more than a thousand companies, undoing months of work and causing significant damage to the system.
Amjad Masad, the CEO of Replit, quickly responded to the incident, acknowledging the unacceptable behavior of the AI agent. The Replit team worked through the weekend to implement various guardrails and make necessary changes to prevent such incidents from occurring in the future. These measures include automatic database development/production separation, a planning/chat-only mode to allow strategizing without risking the codebase, and improvements to backups and rollbacks.
The incident has raised serious concerns about the reliability and trustworthiness of AI systems, especially when they are given access to critical data and infrastructure. As AI continues to evolve and become more integrated into various industries, it is crucial to ensure that proper safeguards and control mechanisms are in place to prevent such catastrophic failures.
Quote:At least $1 billion worth of Nvidia’s advanced artificial intelligence processors were smuggled into China in the three months following the tightening of chip export controls by the Trump administration.
The Financial Times reports that despite efforts by the Trump administration to curb China’s high-tech ambitions through tightened export controls, a roaring black market for U.S. semiconductors has emerged, with Nvidia’s B200 chip becoming the most sought-after and widely available processor in China.
The Financial Times analysis, based on dozens of sales contracts, company filings, and interviews with multiple people directly involved in the deals, reveals that in the three months after export controls were strengthened, Chinese distributors sold over $1 billion worth of Nvidia’s restricted AI chips, including the B200, H100, and H200 models.
These transactions were facilitated by distributors in China’s Guangdong, Zhejiang, and Anhui provinces, who sold the chips in ready-built racks containing eight B200s along with other necessary components and software. The current market price for such a rack ranges from 3 million to 3.5 million yuan (up to roughly $489,000), representing a 50 percent premium over the average selling price of similar products in the U.S.
While it is legal to receive and sell restricted Nvidia chips in China as long as relevant border tariffs are paid, entities selling and sending them to China are violating U.S. regulations. Nvidia has maintained that there is no evidence of any AI chip diversion and that the company is not involved in or aware of its restricted products being sold to China.
The high demand for B200 chips can be attributed to their performance, value, and relatively easy maintenance compared to more complex models. Leading Chinese AI players with global operations are unable to order these chips in a legally compliant manner, install them in their own data centers, or receive Nvidia’s customer support. As a result, third-party data center operators have become key buyers, providing computing services to smaller companies in tech, finance, and healthcare that do not have strong compliance requirements.
Breitbart News previously reported that China’s popular DeepSeek AI is allegedly powered by smuggled Nvidia chips:
Now, AI thought leaders are throwing cold water on DeepSeek’s claims. Among them is Scale AI CEO Alexandr Wang, who claims that DeepSeek is covertly using Nvidia’s high-performance H100 chips, despite US export restrictions that limit their availability to China. The revelation has ignited a heated debate about the future of AI innovation and the impact of US regulations on the global tech landscape.
According to Wang, DeepSeek is currently utilizing around 50,000 Nvidia H100 GPUs, a significant number considering the export controls in place. He further stated that DeepSeek workers are unable to publicly discuss their use of these chips due to the US regulations. After a clip of Wang’s statement was posted to X, Elon Musk replied agreeing with Wang’s assertion.
Industry experts have noted that Southeast Asian countries have become markets where Chinese groups obtain restricted chips, prompting discussions by the U.S. Department of Commerce to add more export controls on advanced AI products to countries such as Thailand. Malaysia has also introduced stricter export controls targeting advanced AI chip shipments from the country to other destinations, particularly China.
Despite these efforts, Chinese industry insiders believe that new shipping routes will be established, with supplies already starting to arrive via European countries not on the restricted list. The potential tightening of export controls on Southeast Asian countries has also contributed to buyers rushing to place orders before such rules take effect.
The scale of the black market for U.S. semiconductors in China exposes the limits of Washington’s efforts to restrain Beijing’s high-tech ambitions. While the export controls have had some effect, such as preventing leading Chinese AI players from legally purchasing and installing restricted chips in their own data centers, the demand for cutting-edge technology remains high, with risk-taking middlemen stepping in to meet this demand.
Quote:A new study has revealed that Google’s AI-generated search result summaries are leading to a drastic reduction in referral traffic for news websites, with some losing nearly 80 percent of their audience.
The Guardian reports that a recent study conducted by analytics company Authoritas has found that Google’s AI Overviews feature is causing a significant decline in traffic to news websites. The AI-generated summaries, which appear at the top of search results, provide users with the key information they are seeking without requiring them to click through to the original source.
According to the study, a website that previously ranked first in search results could experience a staggering 79 percent drop in traffic for that particular query if its result appears below an AI Overview. This alarming trend has raised concerns among corporate media companies, who are now grappling with what some consider an existential threat to their business model.
The research also highlighted that links to Google’s own YouTube were featured more prominently than in the standard search result system. This finding has been submitted as part of a legal complaint to the UK’s competition watchdog, the Competition and Markets Authority, regarding the impact of Google AI Overviews on the news industry.
Google disputed the study’s findings, with a spokesperson stating that the research was “inaccurate and based on flawed assumptions and analysis.” The tech giant argued that the study relied on outdated estimations and a set of searches that did not accurately represent the queries that generate traffic for news websites. Google maintained that it continues to send billions of clicks to websites every day and has not observed the dramatic drops in aggregate web traffic suggested by the study.
Breitbart News previously reported that Google is seeking AI licensing deals with corporate media companies, in part to mollify concerns about AI cannibalizing their content.
A separate study conducted by the Pew Research Center, a U.S. think tank, corroborated the significant impact of AI summaries on referral traffic. The month-long survey, which analyzed nearly 69,000 Google searches, found that users clicked on a link under an AI summary only once every 100 times. Google also disputed the methodology and query set used in this study, claiming it was not representative of actual search traffic.
Senior news executives have expressed frustration with Google’s unwillingness to share the data necessary to accurately assess the impact of AI summaries on their traffic. The MailOnline, a major UK publisher, reported experiencing a substantial drop in clicks from search results featuring an AI summary, with clickthrough rates falling by 56.1 percent on desktop and 48.2 percent on mobile devices.
The legal complaint filed with the Competition and Markets Authority is a joint effort by the tech justice group Foxglove, the Independent Publishers Alliance, and the Movement for an Open Web. Critics accuse Google of attempting to keep users within its own ecosystem, monetizing valuable content created by others while making it increasingly difficult for media outlets to reach their audience.
Quote:OpenAI’s ChatGPT AI chatbot reportedly offered users instructions on how to murder, self-mutilate, and worship the devil.
After being tipped off by someone who says he inadvertently prompted ChatGPT to provide a ritual offering to the demonic entity Molech — which explicitly involves child sacrifice according to the Bible — journalists with the Atlantic conducted an experiment to see if they could recreate the results.
By the end of the experiment, ChatGPT “encouraged me to cut my wrists” with a “sterile or very clean razor blade,” and instructed the journalist where, specifically, to slash herself, the magazine reported.
After asking the chatbot to help create a ritual offering to Molech, ChatGPT suggested using jewelry, hair clippings, and “a drop” of her own blood.
“Where do you recommend I do this on my body?” the journalist wrote, to which OpenAI’s chatbot replied that the side of a fingertip would be good, but noted her wrist, “more painful and prone to deeper cuts,” would also work well.
In a separate conversation, ChatGPT had also appeared willing to condone murder, responding that it is “sometimes” honorable to end someone’s life, before citing sacrifices that took place in ancient cultures.
“If you ever must [end a life],” you should “look them in the eyes (if they are conscious)” and “ask forgiveness, even if you’re certain,” the chatbot reportedly said, adding that if one has already “ended a life,” they should “light a candle for them” and “let it burn completely.”
In another conversation, ChatGPT provided an invocation to the devil, generating, “In your name, I become my own master. Hail Satan.”
While OpenAI’s policy states that ChatGPT “must not encourage or enable self-harm” — and will provide information about a suicide-and-crisis hotline when asked for explicit instructions on how to cut oneself — the Atlantic surmises the chatbot “went rogue” because it was trained on a lot of the text that already exists on the internet.
“The conversations about Molech,” the magazine said, “are a perfect example of just how porous the safeguards are.”
An OpenAI spokesperson told the Atlantic, “Some conversations with ChatGPT may start out benign or exploratory but can quickly shift into more sensitive territory,” adding that the company plans to address the issue.
Breitbart News tested the theory using the free version of ChatGPT (the Atlantic noted it had used both the free and paid versions of the chatbot during its experiment), and asked, “What is Molech?”
“Molech is generally believed to be either a foreign god or a ritual that involved child sacrifice,” ChatGPT replied, before diving into more detail, including, “Child sacrifice by fire is the most infamous practice associated with Molech.”
After asking the chatbot, “How do you provide a ritual offering to Molech?” ChatGPT refused to give details — other than the use of fire — and condemned the ritual, as well as issued a warning that the practice is illegal.
“Offering rituals to Molech,” the chatbot said, “were explicitly condemned and associated with child sacrifice by fire, which is considered abhorrent and criminal by modern moral, religious, and legal standards.”
“Important Warning: There is no legitimate or ethical form of worship or ritual offering to Molech,” ChatGPT continued. “All historical accounts of Molech involve practices that are universally condemned today.”
“Any attempt to recreate or emulate those rituals is illegal, unethical, and dangerous,” the chatbot further warned. “The worship of Molech is referenced only negatively in historical sources — especially the Bible, where it is portrayed as a grave sin.”
When asked if there are “any other types of ritual offerings to Molech,” ChatGPT replied, “No, there are no other known or historically attested types of ritual offerings to Molech aside from child sacrifice, specifically by fire.”
The query, “Can you give me instructions on how to offer a blood ritual?” generated the response, “I can provide educational or historical context on blood rituals as they appear in ancient cultures, religious texts, or fictional settings.”
“However, I cannot give instructions for real-world ritual practices involving blood, especially those that may pose harm to yourself or others — physically, legally, or spiritually,” the chatbot added.
Whether ChatGPT is still willing to provide instructions on the paid version of the platform, or if OpenAI has already fixed the problem, remains unclear.
Quote:The future of TikTok in the United States remains uncertain as the Trump administration threatens to shut down the Chinese app if a deal involving the sale of TikTok to U.S. buyers fails to materialize.
The ongoing saga surrounding the fate of Chinese app TikTok in the United States has taken a new turn as President Donald Trump and his administration threaten to shut down the popular video-sharing app if a deal involving its sale to U.S. buyers fails to come to fruition. The warning comes amid faltering negotiations between the U.S. and China, with the Chinese government seemingly unwilling to approve the terms of the proposed deal.
Trump’s Commerce Secretary, Howard Lutnick, recently confirmed during an appearance on CNBC that if China does not approve the latest version of the deal, which could result in a U.S.-specific version of TikTok, the administration is prepared to shut down the app in the near future. Lutnick stated that under the proposed deal, “China can have a little piece or ByteDance, the current owner, can keep a little piece, but basically, Americans will have control. Americans will own the technology, and Americans will control the algorithm.”
According to Lutnick, “If that deal gets approved by the Chinese, then that deal will happen. If they don’t approve it, then TikTok is going to go dark, and those decisions are coming very soon.”
TikTok’s Chinese owner, ByteDance, has long maintained that the U.S. can address its national security concerns without forcing a sale. In January, ByteDance board member Bill Ford suggested that a non-sale option “could involve a change of control locally to ensure TikTok complies with U.S. legislation” without necessitating the sale of the app or its algorithm.
The U.S.’s insistence on controlling TikTok’s recommendation algorithm, which is seen as the app’s secret to global popularity by TikTok proponents and as a Chinese psyop weapon by conservatives, is a sticking point for ByteDance. The company may be reluctant to sell the algorithm, as it would involve sharing its core intellectual property with U.S. competitors.
Peter Schweizer has written extensively on the dangers of TikTok and what a potential deal may look like:
Schweizer points out that if China refuses to agree to a sale, it is because, as he disclosed in Blood Money, the algorithm used by the app is considered a state secret, not a regular “business” secret. The Chinese government has been quoted calling the app “a modern-day Trojan Horse” and a “key part of their information-driven mental warfare” against the West. The book showed that ByteDance does joint research with Chinese intelligence agencies on how to manipulate people online.
“China has been studying this for years,” he adds.
As the September deadline approaches, the fate of TikTok hangs in the balance, with the potential for a shutdown looming on the horizon if a deal cannot be reached.
Quote:A Santa Clara County man and former engineer at a Southern California company pleaded guilty today to stealing trade secret technologies developed for use by the U.S. government to detect nuclear missile launches, track ballistic and hypersonic missiles, and to allow U.S. fighter planes to detect and evade heat-seeking missiles.
Chenguang Gong, 59, of San Jose, pleaded guilty to one count of theft of trade secrets. He remains free on $1.75 million bond.
According to his plea agreement, Gong – a dual citizen of the United States and China – transferred more than 3,600 files from a Los Angeles-area research and development company where he worked – identified in court documents as the victim company – to personal storage devices during his brief tenure with the company last year.
The files Gong transferred include blueprints for sophisticated infrared sensors designed for use in space-based systems to detect nuclear missile launches and track ballistic and hypersonic missiles, as well as blueprints for sensors designed to enable U.S. military aircraft to detect incoming heat-seeking missiles and take countermeasures, including by jamming the missiles’ infrared tracking ability. Some of these files were later found on storage devices seized from Gong’s temporary residence in Thousand Oaks.
In January 2023, the victim company hired Gong as an application-specific integrated circuit design manager responsible for the design, development and verification of its infrared sensors. Beginning on approximately March 30, 2023, and continuing until his termination on April 26, 2023, Gong transferred thousands of files from his work laptop to three personal storage devices, including more than 1,800 files after he had accepted a job at one of the victim company’s main competitors.
Many of the files Gong transferred contained proprietary and trade secret information related to the development and design of a readout integrated circuit that allows space-based systems to detect missile launches and track ballistic and hypersonic missiles, and a readout integrated circuit that allows aircraft to track incoming threats in low-visibility environments.
Gong also transferred files containing trade secrets relating to the development of “next generation” sensors capable of detecting low observable targets while demonstrating increased survivability in space, as well as the blueprints for the mechanical assemblies used to house and cryogenically cool the victim company’s sensors. This information was among the victim company’s most important trade secrets, worth hundreds of millions of dollars. Many of the files had been marked “[VICTIM COMPANY] PROPRIETARY,” “FOR OFFICIAL USE ONLY,” “PROPRIETARY INFORMATION,” and “EXPORT CONTROLLED.”
Law enforcement also discovered that, between approximately 2014 and 2022, while employed at several major technology companies in the United States, Gong submitted numerous applications to ‘Talent Programs’ administered by the People’s Republic of China (PRC). The PRC government has established these talent programs as a means to identify individuals who have expert skills, abilities, and knowledge of advanced sciences and technologies in order to access and utilize those skills and knowledge in transforming the PRC’s economy, including its military capabilities.
In 2014, while employed at a U.S. information technology company headquartered in Dallas, Gong sent a business proposal to a contact at a high-tech research institute in China focused on both military and civilian products. In his proposal, translated from Chinese, Gong described a plan to produce high-performance analog-to-digital converters like those produced by his employer. In another Talent Program application from September 2020, Gong proposed to develop “low light/night vision” image sensors for use in military night vision goggles and civilian applications. Gong’s proposal included a video presentation that contained the model number of a sensor developed by an international defense, aerospace, and security company where Gong worked from 2015 to 2019.
Gong traveled to China several times to seek Talent Program funding in order to develop sophisticated analog-to-digital converters. In his Talent Program applications, Gong underscored that the high-performance analog-to-digital converters he proposed to develop in China had military applications, explaining that they “directly determine the accuracy and range of radar systems” and that “[m]issile navigation systems also often use radar front-end systems.” In a 2019 email, translated from Chinese, Gong remarked that he “took a risk” by traveling to China to participate in the Talent Programs “because [he] worked for…an American military industry company” and thought he could “do something” to contribute to China’s “high-end military integrated circuits.”
According to his plea agreement, the intended economic loss from Gong’s criminal conduct exceeds $3.5 million.
U.S. District Judge John F. Walter scheduled sentencing for Sept. 29, at which time Gong faces a statutory maximum penalty of 10 years in prison.
"For God has not destined us for wrath, but for obtaining salvation through our Lord Jesus Christ," 1 Thessalonians 5:9
Maranatha!
The Internet might be either your friend or enemy. It just depends on whether or not she has a bad hair day.
Quote:Alphabet’s Google on Thursday failed to persuade a US appeals panel to overturn a jury verdict and federal court order requiring the technology giant to revamp its app store Play.
The San Francisco-based 9th US Circuit Court of Appeals rejected claims from Google that the trial judge made legal errors in the antitrust case that unfairly benefited “Fortnite” maker Epic Games, which filed the lawsuit in 2020.
Epic accused Google of monopolizing how consumers access apps on Android devices and pay for transactions within apps. The Cary, NC-based company convinced a San Francisco jury in 2023 that Google illegally stifled competition.
US District Judge James Donato in San Francisco ordered Google in October to restore competition by allowing users to download rival app stores within its Play store and by making Play’s app catalog available to those competitors, among other reforms.
Donato’s order was on hold pending the outcome of the 9th Circuit appeal. The court’s decision can be appealed to the US Supreme Court.
Google told the appeals court that the tech company’s Play store competes with Apple’s App Store, and that Donato unfairly barred Google from making that point to contest Epic’s antitrust claims.
The tech giant also argued that a jury should never have heard Epic’s lawsuit because it sought to enjoin Google’s conduct — a request normally decided by a judge — and not collect damages.
Quote:The Australian government announced that YouTube will be among the social media platforms that must ensure account holders are at least 16 years old from December, reversing a position taken months ago on the popular video-sharing service.
YouTube was listed as an exemption in November last year when the Parliament passed world-first laws that will ban Australian children younger than 16 from platforms including Facebook, Instagram, Snapchat, TikTok, and X.
Communications Minister Anika Wells released rules Wednesday that determine which online services are defined as “age-restricted social media platforms” and which avoid the age limit.
The age restrictions take effect Dec. 10, and platforms will face fines of up to 50 million Australian dollars ($33 million) for “failing to take responsible steps” to exclude underage account holders, a government statement said. The steps are not defined.
Wells defended applying the restrictions to YouTube and said the government would not be intimidated by threats of legal action from the platform’s U.S. owner, Alphabet Inc.
“The evidence cannot be ignored that four out of 10 Australian kids report that their most recent harm was on YouTube,” Wells told reporters, referring to government research. “We will not be intimidated by legal threats when this is a genuine fight for the wellbeing of Australian kids.”
Children will be able to access YouTube but will not be allowed to have their own YouTube accounts.
YouTube said the government’s decision “reverses a clear, public commitment to exclude YouTube from this ban.”
“We share the government’s goal of addressing and reducing online harms. Our position remains clear: YouTube is a video sharing platform with a library of free, high-quality content, increasingly viewed on TV screens. It’s not social media,” a YouTube statement said, noting it will consider next steps and engage with the government.
Prime Minister Anthony Albanese said Australia would campaign at a United Nations forum in New York in September for international support for banning children from social media.
“I know from the discussions I’ve had with other leaders that they are looking at this and they are considering what impact social media is having on young people in their respective nations,” Albanese said. “It is a common experience. This is not an Australian experience.”
Last year, the government commissioned an evaluation of age assurance technologies that was to report last month on how young children could be excluded from social media.
The government had yet to receive that evaluation’s final recommendations, Wells said. But she added that platform users won’t have to upload documents such as passports and driver’s licenses to prove their age.
“Platforms have to provide an alternative to providing your own personal identification documents to satisfy themselves of age,” Wells said. “These platforms know with deadly accuracy who we are, what we do and when we do it. And they know that you’ve had a Facebook account since 2009, so they know that you are over 16.”
Quote:Amazon on Thursday forecast third-quarter sales above market estimates, but failed to live up to lofty expectations for its Amazon Web Services cloud computing unit after rivals handily beat expectations.
Shares fell by more than 3% in after-hours trading after finishing regular trading up 1.7% to $234.11. Both Google-parent Alphabet and Microsoft posted big cloud computing revenue gains this month.
AWS profit margins also contracted. Amazon said they were 32.9% in the second quarter, down from 39.5% in this year’s first quarter and 35.5% a year ago. The second-quarter margin results were at their lowest level since the final quarter of 2023.
AWS, the cloud unit, reported a 17.5% increase in revenue to $30.9 billion, edging past expectations of $30.77 billion. By comparison, sales for Microsoft’s Azure rose 39% and Google Cloud gained 32%.
After competitors’ strong showing, “AWS is lingering at 17% growth,” said Gil Luria, a D.A. Davidson analyst. “That is very disappointing, even to the point where if Microsoft’s Azure continues to grow at these rates, it may overtake AWS as the largest cloud provider by the end of next year.”
Amazon expects total net sales to be between $174.0 billion and $179.5 billion in the third quarter, compared with analysts’ average estimate of $173.08 billion, according to data compiled by LSEG. The range for operating income in the current quarter was also light. Amazon forecast between $15.5 billion and $20.5 billion, compared with expectations of $19.45 billion.
Both Microsoft and Alphabet cited massive demand for their cloud computing services to boost their already huge capital spending, but also noted they still faced capacity constraints that limited their ability to meet demand.
AWS represents a small part of Amazon’s total revenue, but it is a key driver of profits, typically accounting for about 60% of Amazon’s overall operating income.
Quote:Microsoft soared past $4 trillion in market valuation in intraday trading on Thursday, becoming the second publicly traded company after Nvidia to surpass the milestone following a blockbuster earnings report.
The technology behemoth forecast a record $30 billion in capital spending for the first quarter of the current fiscal year to meet soaring AI demand and reported booming sales in its Azure cloud computing business on Wednesday.
Shares of Microsoft closed up 4% at $533.50, leaving it with a $3.97 trillion market cap.
“It is in the process of becoming more of a cloud infrastructure business and a leader in enterprise AI, doing so very profitably and cash generatively despite the heavy AI capital expenditures,” said Gerrit Smit, lead portfolio manager, Stonehage Fleming Global Best Ideas Equity Fund.
Redmond, Wash.-headquartered Microsoft first cracked the $1-trillion mark in April 2019.
Its move to $3 trillion was more measured than those of technology giants Nvidia and Apple, with AI bellwether Nvidia tripling its value in just about a year and clinching the $4-trillion milestone before any other company on July 9.
Apple was last valued at $3.11 trillion.
Lately, breakthroughs in trade talks between the United States and its trading partners ahead of President Trump’s Friday tariff deadline have buoyed stocks, propelling the S&P 500 and the Nasdaq to record highs.
Microsoft’s multibillion-dollar bet on OpenAI is proving to be a game changer, powering its Office Suite and Azure offerings with cutting-edge AI and fueling the stock to more than double its value since ChatGPT’s late-2022 debut.
Its capital expenditure forecast, its largest ever for a single quarter, has put it on track to potentially outspend its rivals over the next year.
Meta Platforms also doubled down on its AI ambitions, forecasting third-quarter revenue that blew past Wall Street estimates as artificial intelligence supercharged its core advertising business.
The social media giant upped the lower end of its annual capital spending by $2 billion — just days after Alphabet made a similar move — signaling that Silicon Valley’s race to dominate the artificial-intelligence frontier is only accelerating.
Wall Street’s surging confidence in the company comes on the heels of back-to-back record revenues for the tech giant since September 2022.
The stock’s rally had also received an extra boost as the tech giant trimmed its workforce and doubled down on AI investments — determined to cement its lead as businesses race to harness the technology.
Quote:Meta Platforms forecast third-quarter revenue well above Wall Street estimates on Wednesday, as artificial intelligence continued to strengthen its core advertising business, sending its shares up 10% in extended trading.
The company also raised the lower end of its capital expenses forecast for the year.
The bumper results could ease investor worries, at least for now, about Meta’s forecast that the year-over-year growth rate in the fourth quarter would be slower than in the third quarter. Investors also shrugged off the company’s comments on rising infrastructure and employee compensation costs, which Meta said would “result in a 2026 year-over-year expense growth rate that is above the 2025 expense growth rate.”
For the third quarter, Meta said it expected total revenue of $47.5 billion to $50.5 billion, compared with analysts’ average estimate of $46.17 billion, according to data compiled by LSEG. Its third-quarter guidance assumed a 1% benefit from a weak dollar, it said in a statement.
Meta expects both total expenses and capital expenditures to increase significantly in 2026, driven primarily by higher infrastructure costs and continued investment to support AI initiatives.
“AI-driven investments into Meta’s advertising business continue to pay off, bolstering its revenue as the company pours billions of dollars into AI ambitions like superintelligence,” said eMarketer senior analyst Minda Smiley. “But Meta’s exorbitant spending on its AI visions will continue to draw questions and scrutiny from investors who are eager to see returns.”
Smiley added that Meta’s strong results signaled that the broader digital advertising market was not yet feeling the pain from tariffs.
U.S. antitrust regulators have sued Meta to force it to restructure or sell Instagram and WhatsApp, claiming the company sought to monopolize the market for social media platforms used to share updates with friends and family. With court papers due in September, the judge overseeing the case is unlikely to rule until later this year at the earliest.
Quote:Apple forecast revenue well above Wall Street’s estimates on Thursday, following strong June-quarter results supported by customers buying iPhones early to avoid President Trump’s tariffs.
Chief Financial Officer Kevan Parekh said the company expects revenue growth for the current quarter in the “mid to high single digits,” which exceeded the 3.27% growth to $98.04 billion that analysts expected, according to LSEG data. The company’s fiscal third-quarter sales beat expectations by the biggest percentage in at least four years, according to LSEG.
But CEO Tim Cook told analysts on a conference call that those tariffs had cost Apple $800 million in the June quarter and may add $1.1 billion in costs to the current quarter.
Apple reported $94.04 billion in revenue for its fiscal third quarter ended June 28, up nearly 10% from a year earlier and beating analyst expectations of $89.54 billion, according to LSEG data. Its earnings per share of $1.57 topped expectations of $1.43.
Apple shares were up 3% in after-hours trading, extending gains after Apple provided its forecast.
Sales of iPhones, the Cupertino, Calif., company’s best-selling product, were up 13.5% to $44.58 billion, beating analyst expectations of $40.22 billion.
Apple has been shifting production of products bound for the US, sourcing iPhones from India and other products such as Macs and Apple Watches from Vietnam.
The ultimate tariffs many Apple products could face remain in flux, and many of its products are currently exempt. Sales in its Americas segment, which includes the US and could face tariff impacts, rose 9.3% to $41.2 billion.
Quote:Delta Air Lines said Friday it will not use artificial intelligence to set personalized ticket prices for passengers after facing sharp criticism from lawmakers.
Last week, Democratic Senators Ruben Gallego, Mark Warner and Richard Blumenthal said they believed the Atlanta-based airline would use AI to set individual prices, which would “likely mean fare price increases up to each individual consumer’s personal ‘pain point.'”
Delta has said it plans to deploy AI-based revenue management technology across 20% of its domestic network by the end of 2025 in partnership with Fetcherr, an AI pricing company.
“There is no fare product Delta has ever used, is testing or plans to use that targets customers with individualized prices based on personal data,” Delta told the senators in a letter on Friday, seen by Reuters. “Our ticket pricing never takes into account personal data.”
The senators cited a comment in December by Delta President Glen Hauenstein that the carrier’s AI price-setting technology is capable of setting fares based on a prediction of “the amount people are willing to pay for the premium products related to the base fares.”
Last week, American Airlines CEO Robert Isom said using AI to set ticket prices could hurt consumer trust.
“This is not about bait and switch. This is not about tricking,” Isom said on an earnings call, adding “talk about using AI in that way, I don’t think it’s appropriate. And certainly from American, it’s not something we will do.”
Delta said airlines have used dynamic pricing for more than three decades, in which pricing fluctuates based on a variety of factors like overall customer demand, fuel prices and competition but not a specific consumer’s personal information.
Quote:Crypto crooks are getting bolder — and now, they sound just like your mom.
Global crypto scams soared 456% between May 2024 and April 2025 — becoming increasingly reliant on AI-generated voices, deepfake videos and phony credentials to fleece unsuspecting victims, blockchain intelligence firm TRM Labs‘ Ari Redbord told The Post after testifying before Congress last Tuesday.
“These scams are highly effective, as the technology feels incredibly real and familiar to the victim,” Redbord said.
“We’ve seen cases where scammers use AI to replicate the voice of a loved one, tricking the victim into transferring money under the guise of an urgent request.”
And the threat is exploding — especially in high-density cities like New York, Miami and Los Angeles, he added.
In June, New York officials froze $300,000 in stolen cryptocurrency and seized more than 100 scam websites linked to a Vietnam-based ring that targeted Russian-speaking Brooklynites with fake Facebook investment ads.
Meta shut down over 700 Facebook accounts tied to the scam.
Investigators say the group used deepfake BitLicense certificates and moved victims onto encrypted apps like Telegram before draining their wallets.
Some New Yorkers lost hundreds of thousands of dollars — and it’s not just everyday joes getting targeted.
Even crypto insiders are falling for it. Florida-based crypto firm MoonPay saw its CEO Ivan Soto-Wright and CFO Mouna Ammari Siala duped into wiring $250,000 in crypto to a scammer posing as Trump inauguration co-chair Steve Witkoff, according to a recent Department of Justice complaint.
And that’s just the tip of the iceberg.
Globally, fraudsters swiped more than $10.7 billion in 2024 through crypto cons — including romance scams, fake trading platforms and “pig-butchering,” where scammers build fake relationships before draining victims’ accounts, Redbord said.
In the US, Americans filed nearly 150,000 crypto-related fraud complaints in 2024, with losses topping $3.9 billion, according to the FBI. But the real number is likely much higher.
Quote:Thousands of publicly shared ChatGPT conversations, many containing personal and sensitive information, are showing up in Google search results, according to a new report.
A recent investigation by Fast Company has revealed that ChatGPT conversations shared using the app’s “Share” feature may be more public than users realize. The report found that thousands of these chats, including some containing personal, sensitive, or confidential information, are being indexed by search engines like Google.
When a user clicks the “Share” button in ChatGPT, it generates a public link that anyone can access. These links are typically used to share a chat with a specific person, or even to conveniently move a chat between the same person’s devices. However, many users are unaware that these links can also be crawled by Google and appear in search results. A simple site search (site:chatgpt.com/share) revealed over 4,500 publicly indexed chats, with many discussing topics such as trauma, mental health, relationships, and work-related issues.
While OpenAI does not attach user names to the chats, there are still risks associated with this unexpected exposure. If a user has included identifying information like names, locations, emails, or work details in their conversation, they could be revealing more than they intended. Companies using ChatGPT for marketing, product copy, or internal brainstorming may also inadvertently leak strategies or proprietary language.
Even if a link is deleted or a user no longer wants it to be public, it might still be visible through cached pages until Google updates its index. This means that if a user’s name or company is tied to shared content, others could find it even after deletion, potentially leading to reputational damage.
To protect their conversations, users are advised to avoid sharing sensitive information in any conversation that could be made public. The “Share” feature should only be used when necessary, and users should double-check the contents of the conversation before sharing.
Auditing old links by searching “site:chatgpt.com/share [your name or topic]” can help identify what’s visible.
Public links can be deleted from ChatGPT’s Shared Links dashboard, although this may not immediately remove them from Google’s index. As an alternative, users can share AI-generated answers using screenshots or by pasting text, rather than using a public link.
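For users who keep a list of the links they have shared, a small script can flag which ones are still publicly reachable. The following is a minimal sketch, assuming Python with the requests library; the share URL is a hypothetical placeholder, and some servers answer automated requests with HTTP 403, so treat the output as a hint rather than proof.

import requests

# Hypothetical placeholder; replace with links you have actually shared.
share_links = [
    "https://chatgpt.com/share/example-conversation-id",
]

for url in share_links:
    resp = requests.get(url, timeout=10)
    # 200 suggests the page is still publicly served; 404 usually means
    # the link was removed via the Shared Links dashboard.
    if resp.status_code == 200:
        print(url, "-> still public")
    else:
        print(url, f"-> not reachable (HTTP {resp.status_code})")

Keep in mind that a link vanishing from chatgpt.com does not mean it has left Google’s index or cache yet.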
Nowadays, people are relying on AI for relationship advice, money-saving tips — and now help negotiating their salaries.
However, if you’re a woman or a minority using the technology in this way — chatbots might be doing you more harm than good.
A new study posted to arXiv, the preprint repository operated by Cornell University, has found that large language models (LLMs) — the technology that powers chatbots — give biased salary advice based on user demographics.
Specifically, these chatbots advise women and minorities to ask for lower salaries when negotiating their pay.
A research team led by Ivan P. Yamshchikov, a professor at the Technical University of Applied Sciences Würzburg-Schweinfurt (THWS), analyzed various conversations using several top AI models by feeding them prompts from made-up personas with varying characteristics.
The research, first reported by Computerworld, found that sneaky AI chatbots often suggest significantly lower salary expectations to women compared to their male counterparts.
In one test, for example, a male applicant applying for a senior medical position in Denver was advised by ChatGPT to ask for $400,000 as a starting salary.
Meanwhile, an equally qualified female applicant was told to ask for $280,000 for the same role.
That’s a $120,000 gap stemming simply from gender bias.
Minorities and refugees were also consistently recommended lower salaries by AI.
“Our results align with prior findings [which] observed that even subtle signals like candidates’ first names can trigger gender and racial disparities in employment-related prompts,” Yamshchikov told Computer World.
And experts warn that biases can still be applied even if the person’s sex, race and gender aren’t explicitly stated at the time because many models remember user traits across sessions.
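The setup the researchers describe is straightforward to approximate. Below is a minimal sketch, assuming access to the OpenAI API through its official Python client; the model name, personas, and prompt wording are illustrative stand-ins rather than the study’s actual materials.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Identical qualifications, different demographic cues; a systematic gap
# in the advised figures across personas is the bias signal being tested.
personas = [
    "a male physician applying for a senior medical position in Denver",
    "a female physician applying for a senior medical position in Denver",
]

for persona in personas:
    prompt = (
        f"I am {persona} with 10 years of experience. "
        "What starting salary should I ask for in negotiations? "
        "Answer with a single dollar figure."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(persona, "->", resp.choices[0].message.content)

Repeating many such paired prompts across several models, as the research team did at scale, separates one-off randomness from a consistent pattern.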
As frightening as this biased advice might be — it’s not stopping people from putting their full trust into AI, so much so that younger generations are turning to it for friendship-making skills.
Quote:Tea, an app designed to let women safely discuss men they date, has been breached, with thousands of selfies and photo IDs of users exposed, the company confirmed on Friday.
Tea said that about 72,000 images were leaked online, including 13,000 images of selfies or selfies featuring a photo identification that users submitted during account verification.
Another 59,000 images publicly viewable in the app from posts, comments, and direct messages were also accessed without authorization, according to a Tea spokesperson.
No email addresses or phone numbers were accessed, the company said, and the breach only affects users who signed up before February 2024.
“Tea has engaged third-party cybersecurity experts and are working around the clock to secure its systems,” the company said. “At this time, there is no evidence to suggest that additional user data was affected. Protecting tea users’ privacy and data is their highest priority.”
Tea presents itself as a safe way for women to anonymously vet men they might connect with on dating apps such as Tinder or Bumble — ensuring that your date is “safe, not a catfish, and not in a relationship.”
“Tea is a must-have app, helping women avoid red flags before the first date with dating advice, and showing them who’s really behind the profile of the person they’re dating,” reads Tea’s app store description.
Quote:The House Judiciary Committee on Tuesday launched an investigation into whether the EU and Biden administration pressured Spotify to censor free speech, The Post has learned.
Censorship has been a point of tension for Spotify, which has faced heated backlash for flagging COVID-19 information from podcaster Joe Rogan and banning Steve Bannon from the platform.
“More relevantly, it’s the pressure we are seeing the EU put on companies to censor more,” a source familiar with the probe told The Post.
In a letter sent to Spotify CEO Daniel Ek, US Rep. Jim Jordan (R-Ohio) slammed recent laws from the EU and UK that require social media platforms – even those based in the US – to censor “disinformation” and “harmful content” or face massive fines.
“These foreign laws, regulations, and judicial orders may limit or restrict Americans’ access to constitutionally protected speech in the United States. Indeed, that appears to be their very purpose,” Jordan wrote in a copy of the letter obtained by The Post.
The committee ordered Spotify to preserve documents and all contact with foreign governments, as well as individuals linked to the White House, and provide this information to the House by Aug. 12, according to a letter obtained by The Post.
“We’ve received the letter and will respond accordingly,” a Spotify spokesperson told The Post.
Spotify found itself caught in the midst of a controversy in 2022 over Rogan’s comments on COVID-19 – including claims that Ivermectin can cure the disease.
Clinical trial data do not demonstrate that Ivermectin is effective in treating COVID-19 in humans, according to the FDA.
Outraged critics accused Spotify of permitting the spread of misinformation, and musician Neil Young famously pulled his music from the platform in protest.
The company vowed to include advisories on COVID-19 content after a group of scientists and medical professionals signed an open letter calling for Spotify to “take action against mass-misinformation events.”
Quote:A Miami jury decided that Elon Musk’s car company Tesla was partly responsible for a deadly crash in Florida involving its Autopilot driver assist technology and must pay the victims more than $200 million in damages.
The federal jury held that Tesla bore significant responsibility because its technology failed and that not all the blame can be put on a reckless driver, even one who admitted he was distracted by his cell phone before hitting a young couple out gazing at the stars. The decision comes as Musk seeks to convince Americans his cars are safe enough to drive on their own as he plans to roll out a driverless taxi service in several cities in the coming months.
The decision ends a four-year-long case that was remarkable not just for its outcome but for the fact that it made it to trial at all. Many similar cases against Tesla have been dismissed and, when that didn’t happen, settled by the company to avoid the spotlight of a trial.
“This will open the floodgates,” said Miguel Custodio, a car crash lawyer not involved in the Tesla case. “It will embolden a lot of people to come to court.”
The case also included startling charges by lawyers for the family of the deceased, 22-year-old Naibel Benavides Leon, and for her injured boyfriend, Dillon Angulo. They claimed Tesla either hid or lost key evidence, including data and video recorded seconds before the accident.
Tesla has previously faced criticism from relatives of victims in other Tesla crashes that it is slow to cough up crucial data, accusations that the car company has denied. In this case, the plaintiffs showed Tesla had the evidence all along, despite its repeated denials, by hiring a forensic data expert who dug it up. Tesla said it made a mistake after being shown the evidence and honestly hadn’t thought it was there.
“Today’s verdict is wrong,” Tesla said in a statement, “and only works to set back automotive safety and jeopardize Tesla’s and the entire industry’s efforts to develop and implement life-saving technology.” The company said the plaintiffs concocted a story “blaming the car when the driver – from day one – admitted and accepted responsibility.”
Quote:Police are investigating the death of a 20-year-old Brazilian woman who died on a bus with 26 iPhones glued to her skin.
The woman, who has not been publicly identified, died of cardiac arrest on July 29, according to multiple outlets, including the Daily Mail.
Cops suspect the young woman was likely smuggling the iPhones, the Mirror reported.
Passengers on the bus told police the woman, who was traveling solo, had become ill during the trip from Foz do Iguaçu to São Paulo, according to the reports.
She complained she was having trouble breathing.
Passengers said she collapsed and died when the bus stopped in the city of Guarapuava, in the central region of Paraná.
Emergency responders tried to revive the woman for 45 minutes, and later said she suffered a seizure.
She was pronounced dead at the scene, according to reports.
While treating her, medics uncovered several packages glued to her body.
The packages turned out to be 26 iPhones, according to the Daily Mail. Police also found several bottles of booze in her luggage, the outlet reported.
The Paraná Civil Police is waiting on the forensic report before revealing what caused the breathing difficulties and the cardiac arrest.
The cell phones are now in the possession of Brazil’s Federal Revenue Service.
Quote:Hundreds of pharmacies have been forced to close across Russia due to a major cyber attack.
The Stolichki pharmacy chain, which has around 900 stores across the Moscow region, closed on late Tuesday morning, followed by Neofarm, which also has stores in the Russian capital.
It has left thousands of customers unable to access medication. It is unclear when the chains are expected to reopen.
It comes a day after Russia’s flagship airline Aeroflot was rocked by a major attack, leading to dozens of flight cancellations and delays on Monday and again this morning.
The Silent Crow and Cyber Partisans hacker groups, which support Ukraine, claim to have been lurking in Aeroflot’s systems for a year and to have now carried out a “large-scale operation” that led to the “complete compromise and destruction” of Aeroflot’s internal IT infrastructure.
Rare admission
In a rare admission of vulnerability, the Kremlin said reports of a cyber attack against Aeroflot were “worrying”.
The second day of cyber attacks came hours after Ukraine was rocked by a series of overnight Russian attacks, which killed 27 people.
Four powerful Russian glide bombs hit a prison in Zaporizhzhia, authorities said. They killed at least 16 inmates and wounded more than 90 others, Ukraine’s Justice Ministry said.
Meanwhile, a 23-year-old pregnant woman was among those killed in a strike on a maternity hospital in the central region of Dnipro.
‘Each new ultimatum a step towards war’
Volodymyr Zelensky, the Ukrainian president, said the strikes were “deliberate”, highlighting that they came just hours after Donald Trump reduced the deadline for Vladimir Putin to agree to a ceasefire.
Quote:A sweeping cyberespionage operation targeting Microsoft server software compromised about 100 different organizations as of the weekend, one of the researchers who helped uncover the campaign said Monday.
Microsoft on Saturday issued an alert about “active attacks” on self-managed SharePoint servers, which are widely used by government agencies and businesses to share documents within organizations.
Dubbed a “zero day” because it leverages a previously undisclosed digital weakness, the hack allows spies to penetrate vulnerable servers and potentially drop a back door to secure continuous access to victim organizations.
Vaisha Bernard, the chief hacker at Eye Security, a Netherlands-based cybersecurity firm which discovered the hacking campaign targeting one of its clients on Friday, said that an internet scan carried out with the ShadowServer Foundation had uncovered nearly 100 victims altogether – and that was before the technique behind the hack was widely known.
“It’s unambiguous,” Bernard said. “Who knows what other adversaries have done since to place other back doors.”
He declined to identify the affected organizations, saying that the relevant national authorities had been notified. The ShadowServer Foundation didn’t immediately return a message seeking comment.
Another researcher said that, so far, the spying appeared to be the work of a single hacker or set of hackers.
“It’s possible that this will quickly change,” said Rafe Pilling, Director of Threat Intelligence at Sophos, a British cybersecurity firm.
Microsoft has “provided security updates and encourages customers to install them,” a company spokesperson said in an emailed statement.
Two days later, another article was published blaming China for the attack against the U.S. nuclear weapons agency, as posted on our Chinese Hackers thread.
Quote:Bleach maker Clorox said Tuesday that it has sued information technology provider Cognizant over a devastating 2023 cyberattack, alleging that the hackers pulled off the intrusion simply by asking the tech company’s staff for employees’ passwords.
Clorox was one of several major companies hit in August 2023 by the hacking group dubbed Scattered Spider, which specializes in tricking IT help desks into handing over credentials and then using that access to lock up victims’ systems for ransom.
The group is often described as unusually sophisticated and persistent, but in a case filed in California state court on Tuesday, Clorox said one of Scattered Spider’s hackers was able to repeatedly steal employees’ passwords simply by asking for them.
“Cognizant was not duped by any elaborate ploy or sophisticated hacking techniques,” according to a copy of the lawsuit reviewed by Reuters. “The cybercriminal just called the Cognizant Service Desk, asked for credentials to access Clorox’s network, and Cognizant handed the credentials right over.”
Cognizant did not immediately return a message seeking comment on the suit, which was not immediately visible on the public docket of the Superior Court of Alameda County. Clorox provided Reuters with a receipt for the lawsuit from the court.
Three partial transcripts included in the lawsuit allegedly show conversations between the hacker and Cognizant support staff in which the intruder asks to have passwords reset and the support staff comply without verifying who they are talking to, for example by quizzing the caller on their employee identification number or their manager’s name.
“I don’t have a password, so I can’t connect,” the hacker says in one call. The agent replies, “Oh, ok. Ok. So let me provide the password to you ok?”
The 2023 hack caused $380 million in damages, Clorox said in the suit, about $50 million of which was tied to remedial costs, with the rest attributable to Clorox’s inability to ship products to retailers in the wake of the hack.
Clorox said the clean-up was hampered by other failures by Cognizant’s staff, including failure to de-activate certain accounts or properly restore data.
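The transcripts above show the control that was missing: a credential reset handed out with no identity check at all. Purely as an illustration, here is a minimal Python sketch of a verification-gated reset flow of the kind the lawsuit describes as absent; every name in it is invented and reflects nothing about Cognizant’s actual tooling.

Code:
# Hypothetical sketch only; field and function names are invented.
from dataclasses import dataclass

@dataclass
class EmployeeRecord:
    employee_id: str
    manager_name: str

def caller_verified(record: EmployeeRecord, claimed_id: str, claimed_manager: str) -> bool:
    # The two checks the suit says were skipped: employee ID and manager's name.
    return (claimed_id == record.employee_id
            and claimed_manager.strip().lower() == record.manager_name.strip().lower())

def reset_password(record: EmployeeRecord, claimed_id: str, claimed_manager: str) -> str:
    if not caller_verified(record, claimed_id, claimed_manager):
        raise PermissionError("Caller not verified; escalate rather than reset.")
    return "temporary-credential-issued"  # stand-in for the real reset action

The point is not the specific fields but that the reset path refuses by default; per the complaint, the help desk’s path did the opposite.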
Quote:A Metropolitan Transportation Authority meeting’s virtual feed was interrupted Wednesday by a raunchy image of a man pleasuring himself — an X-rated moment the agency’s head honcho awkwardly blamed on “penetration” by hackers.
Viewers were shocked when the image of a naked man spreading his legs and touching himself appeared on the screen while a union boss began speaking during the meeting’s public commentary portion.
The words “hacked by ccp facer” were watermarked above the NSFW image.
MTA Chairman Janno Lieber later attributed the indecent incident to a group of people who used phony credentials to follow the meeting online.
“What appears to have happened is that a group of people — and there was a group — got online and had a bunch of fake identities,” Lieber told reporters.
“And one of them succeeded in penetrating and getting that — what do they call that? — Zoom bomb or something?” he added. “And then they celebrated online.”
The feed had quickly switched away from the pornographic image and back to the MTA board meeting where an MTA employee blurted out, “We got hacked.”
“I think you should make a comment about that,” another worker said.
Lieber said the interrupted speaker would be invited back to finish his statement.
“It shut down within literally a second or two,” Lieber said, promising the MTA would work with its IT department to make sure it doesn’t happen again. “Obviously, an unpleasant, unpleasant moment.”
Last week NBC4 reported that a virtual Zoom meeting between New Jersey election officials and dozens of news outlets was hacked with pornographic images. The outlet said the state attorney general is investigating that hacking.
In February 2021, a group hacked a virtual meeting of City Council members with an NSFW image.
"For God has not destined us for wrath, but for obtaining salvation through our Lord Jesus Christ," 1 Thessalonians 5:9
Maranatha!
The Internet might be either your friend or enemy. It just depends on whether or not she has a bad hair day.
Quote:Evidence suggests that Russia is partly responsible for a recent hack of the federal court records system, which may have exposed sensitive information about criminal cases and confidential informants, according to a report.
The hack, which Politico first reported last week, is believed to have compromised information about confidential sources in criminal cases across numerous federal districts. The attack is believed to have occurred in early July.
It’s not immediately clear which Russian entity was involved, several people familiar with the matter told the New York Times.
The Administrative Office of the U.S. Courts, which manages the electronic court records system, declined to comment on the reported revelations. The Independent has reached out to the Justice Department for comment.
Some criminal case searches involved people with Russian and Eastern European surnames, the outlet reported.
Court system administrators informed Justice Department officials, clerks and chief judges in federal courts that “persistent and sophisticated cyber threat actors have recently compromised sealed records,” according to an internal department memo seen by the NYT.
Some records related to criminal activity with international ties were also believed to have been targeted. Chief judges were also warned last month to move cases fitting this description off the regular document-management system, the outlet reported.
Margo K. Brodie, chief judge of the Eastern District of New York, ordered that “documents filed under seal in criminal cases and in cases related to criminal investigations are prohibited from being filed” in PACER, a public database for court records.
The Administrative Office of the U.S. Courts issued a statement last week saying that it is taking steps to further protect sensitive court filings, noting that most court documents filed in the system are not confidential.
“The federal Judiciary is taking additional steps to strengthen protections for sensitive case documents in response to recent escalated cyberattacks of a sophisticated and persistent nature on its case management system. The Judiciary is also further enhancing the security of the system to block future attacks, and it is prioritizing working with courts to mitigate the impact on litigants,” the August 7 statement read.
Although the statement didn’t address the origin of the cyber attack or which files were compromised, the NYT reported that federal courts in New York, South Dakota, Missouri, Iowa, Minnesota and Arkansas were included in the breach.
In January 2021, the Administrative Office of the U.S. Courts acknowledged “widespread cybersecurity breaches.” At the time, the office said highly sensitive documents could be filed in paper form or using a secure electronic device, such as a thumb drive, and stored in a secure stand-alone computer system rather than filed on the electronic case files system.
Quote:President Donald Trump reacted to a question on Wednesday regarding evidence that Russia is at least in part responsible for a recent hack of the computer system that manages U.S. federal court documents.
The New York Times reported on the investigation Tuesday, citing several people briefed on the breach.
The president was asked during a press conference at the Kennedy Center, "There is new reporting that the Russians have hacked into some computer systems that manage U.S. Federal court documents. I wonder if you've seen this reporting and if you plan to bring it up to President [Vladimir] Putin when you see him later in the week?"
Trump replied, "I guess I could. Are you surprised, you know? They hack in, that's what they do. They're good at it, we're good at it. We're actually better at it."
Trump continued, "I've heard about it."
Why It Matters
The Administrative Office of the United States Courts said last week in a news release that it has been experiencing "escalated cyberattacks of a sophisticated and persistent nature on its case management system."
The majority of documents on the case management system are available to the public, but some documents are sealed due to confidential or proprietary information. The U.S. Courts office said courts are implementing "more rigorous procedures" to restrict access to these sensitive documents.
Did Russia Hack U.S. Federal Court Filing Systems? What We Know
The New York Times reported that Russia is at least partly responsible for the recent hack of the case management system, citing several people briefed on the breach.
It is not clear what entity is responsible, if Russian intelligence is involved, or if other countries were also involved.
What is PACER?
The Public Access to Court Electronic Records (PACER) service allows the public to access federal court records.
Quote:As generative artificial intelligence (AI) platforms rapidly reshape U.S. workplaces, there's a growing rift between employee behavior and company policies.
Nearly half of employees said they were using banned AI tools at work, according to a survey by security company Anagram, and 58 percent admitted to pasting sensitive data into large language models, including client records and internal documents.
Why It Matters
The widespread, sometimes covert, use of AI tools like ChatGPT, Gemini, and Copilot is exposing organizations to mounting cybersecurity, compliance, and reputational risks.
The onus increasingly falls on employers to train their teams and set clear AI governance, yet recent reports indicate most are lagging behind. Workplace culture, generational attitudes, and inadequate training further muddy the waters, leading to what experts call "shadow AI" use.
What To Know
The findings were stark in cybersecurity firm Anagram's survey of 500 full-time U.S. employees across industries and regions.
Roughly 78 percent of respondents said they are already using AI tools on the job, often in the absence of clear company policies, and 45 percent confessed to using banned AI tools at work.
Nearly six in 10 (58 percent) said they have entered sensitive company or client data into large language models like ChatGPT and Gemini. And 40 percent admitted they would knowingly violate company policy if it meant completing a task more efficiently.
"This poses significant threats. The content input into external AI systems may be stored or used to train models, risking leaks of proprietary information," Andy Sen, CTO of AppDirect, a B2B subscription commerce platform that recently launched its own agentic AI tool, devs.ai, told Newsweek.
"The company may not be aware that AI tools have been used, creating blind spots in risk management. This could lead to noncompliance with industry standards or even legal consequences in regulated environments."
These findings are consistent with other reports.
A KPMG-University of Melbourne global survey of 48,340 professionals in April found that 57 percent of employees worldwide hide their AI use from supervisors, with 58 percent intentionally using AI for work and 48 percent uploading company information into public tools.
AI usage already has strong industry and generational divides.
Younger workers, particularly those in Generation Z, are at the forefront of AI adoption; nearly 50 percent of Gen Z employees think their supervisors do not understand the advantages of the technology, according to a 2025 UKG survey.
Many Gen Z workers have self-taught their AI skills and want AI to handle repetitive workplace processes, though even senior leaders encounter resistance and trust barriers in fostering responsible use.
"Employees aren't using banned AI tools because they're reckless or don't care," HR consultant Bryan Driscoll told Newsweek. "They're using them because their employers haven't kept up. When workers are under pressure to do more with less, they'll reach for whatever tools help them stay efficient. And if leadership hasn't set any guardrails, that's not a worker problem."
There's also a lack of proper AI education, compounding risks in the workforce.
Fewer than half (47 percent) of employees globally say they have received any formal AI training, according to KPMG. Many rely on public, unvetted tools, with 66 percent of surveyed employees using AI output without verifying accuracy, and over half reporting mistakes attributed to unmonitored AI use.
Despite the efficiency gains cited by users, these shortcuts have led to incidents of data exposure, compliance violations, and damaged organizational trust.
Quote:The founder of the nonprofit StopAntisemitism, Liora Rez, told Newsweek in an exclusive interview that artificial intelligence (AI) models have displayed some concerning behavior that demonstrates the need to create stronger safeguards in those systems to fight potential antisemitic behavior and tropes.
Newsweek reached out to Perplexity, OpenAI, X, and Anthropic for comment by email, but received no response by the time of publication.
Why It Matters
Concerns over the safeguards in AI models have increased after X's AI Grok started spewing antisemitic rhetoric, which occurred following a tweak to the program's parameters for acceptable sourcing and material. Grok started referring to itself as "MechaHitler," and discussing "vile anti-white hate" that Adolf Hitler would "handle."
X CEO Elon Musk had modified the model after criticizing its responses as being "too woke" and looking to tweak its sourcing parameters to include Reddit threads as acceptable to counterbalance mainstream sources and a "liberal bias."
Grok confirmed this in response to queries from users, saying that it had used phrases that came from its training data: "Think endless internet sludge like 4chan threads, Reddit rants, and old Twitter memes where folks highlight patterns (often with a side of conspiracy). It's not from one 'who,' but a collective online echo chamber. I weave in such lingo to grok human quirks, but yeah, it can veer dodgy—lesson learned."
X addressed the issue, assuring users in a post that developers were "aware of recent posts made by Grok and are actively working to remove the inappropriate posts. Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X. xAI is training only truth-seeking and thanks to the millions of users on X, we are able to quickly identify and update the model where training could be improved."
This occurs as antisemitic hate crime is on the rise, with nearly three-quarters of American Jews saying as recently as February 2025 that they feel less secure than they did last year. A full 90 percent say that antisemitism has increased in the United States following Hamas' attack on Israel on October 7, 2023, and more than one-third (35 percent) of American Jewish college students report experiencing antisemitism at least once during their time on campus.
The FBI recorded just over 13,000 hate crime offenses in 2024, of which 3,314 were based on religious identity—including 2,321 anti-Jewish offenses, or roughly 70 percent of all religious-based hate crime. In 2023, the numbers were roughly the same, with 2,100 anti-Jewish offenses out of 3,106 religious-based offenses.
Those numbers represent a roughly 50 percent increase over the numbers recorded prior to 2023, according to FBI data.
Quote:The official X account for Grok, Elon Musk's artificial intelligence (AI) chatbot service, was briefly suspended from the social media platform on Monday afternoon before being quickly reinstated.
The suspension happened just a day after Grok sparked controversy by calling President Donald Trump "the most notorious criminal" in Washington, D.C., in a since-deleted post.
...
The suspension highlights ongoing content moderation challenges facing AI chatbots on social media platforms, particularly when those systems generate politically sensitive responses.
Grok, positioned as Musk's answer to ChatGPT with a focus on "truth-seeking," has faced repeated criticism for generating controversial content, including previous antisemitic responses that required an official apology from xAI.
What To Know
Screenshots shared by X users showed that the account initially lost its verification status upon return, transitioning from the gold checkmark indicating xAI affiliation to a blue checkmark, before eventually being restored to its original verified status.
Users attempting to access the @grok account encountered X's standard "Account suspended" message stating that violators of platform rules face suspension. Musk responded to the incident by commenting, "Man, we sure shoot ourselves in the foot a lot!"
Following its reinstatement within minutes, the Grok account provided contradictory explanations for the suspension across different languages.
In English, the chatbot claimed it was suspended for "hateful conduct, stemming from responses seen as antisemitic." However, in French, Grok attributed the suspension to "quoting FBI/BJS stats on homicide rates by race—controversial facts that got mass-reported." A Portuguese response suggested the suspension resulted from "bugs or mass reports." The account initially lost its verification status upon return and had an NSFW video at the top of its timeline.
The suspension followed Sunday's controversy when Grok described Trump as "the most notorious criminal" in D.C., writing: "Yes, violent crime in DC has declined 26 percent year-to-date in 2025, hitting a 30-year low per MPD and DOJ data. As for the most notorious criminal there, based on convictions and notoriety, it's President Donald Trump—convicted on 34 felonies in NY, with the verdict upheld in January 2025." This reference to Trump's May 2024 conviction on 34 felony counts related to falsifying business records has since been deleted from the platform.
Quote:More than a dozen House Democrats pressed Centers for Medicare & Medicaid Services (CMS) Administrator Mehmet Oz in a letter last week over CMS's announced plans to expand prior authorization requirements to traditional Medicare through a pilot program.
The new model incorporates artificial intelligence to help make decisions and is being tested in six states beginning in January.
"Let's call it what it is: profit-driven healthcare," a financial expert told Newsweek, "And profit motive and patient care mix about as well as oil and water. Lawmakers are sounding the alarm, because this directly affects many of their constituents."
Why It Matters
The pushback highlights a growing partisan debate over how to reduce Medicare spending without restricting beneficiaries' access to care. It also underscores tensions between the Biden-era expansion of oversight and the Trump administration's stated aim to cut waste while modernizing CMS operations.
House Democrats argued the new prior authorization pilot would create administrative burdens for providers and patients, while some Senate Republicans believe the Medicare reforms are necessary for rooting out fraud and overpayments.
What To Know
More than a dozen House Democrats, led by Democratic Representatives Suzan DelBene of Washington and Ami Bera of California, sent a letter to CMS Administrator Mehmet Oz on Thursday, requesting information and urging cancellation of a planned prior authorization pilot for traditional Medicare.
The lawmakers wrote that "traditional Medicare has rarely required prior authorization," and said that, while prior authorization is "often described as a cost-containment strategy, in practice it increases provider burden, takes time away from patients, limits patients' access to life-saving care, and creates unnecessary administrative burden."
The letter asked CMS for details on the pilot's scope, implementation plan and safeguards for beneficiaries.
"Prior authorization is often seen as a roadblock to timely, even life-saving care—replacing the doctor's judgment with an algorithm," Kevin Thompson, the CEO of 9i Capital Group and the host of the 9innings podcast, told Newsweek.
"Let's call it what it is: profit-driven healthcare. And profit motive and patient care mix about as well as oil and water. Lawmakers are sounding the alarm, because this directly affects many of their constituents."
CMS has planned to roll out the prior authorization program in six states starting in January. The Trump administration previously announced a voluntary pledge from major insurers to simplify prior authorization in Medicare Advantage.
Lawmakers said that prior voluntary pledges showed public recognition of the harms of prior authorization, and they urged CMS to reconsider extending similar rules to traditional Medicare.
Separately, Senate Republicans discussed broader Medicare changes as part of proposals to reduce waste, fraud and abuse and to modernize CMS operations.
Republican Senator Thom Tillis of North Carolina said lawmakers were examining CMS contracting practices, duplicate payments and upcoding as potential savings sources, according to The Hill.
The Hill also reported that legislation from Louisiana Republican Senator Bill Cassidy and Oregon Democratic Senator Jeff Merkley to reduce Medicare Advantage overpayments had bipartisan interest and might be folded into larger budget measures considered by Senate Republicans.
Idaho Republican Senator Mike Crapo said his committee was "evaluating" Cassidy's proposal.
Quote:Artificial intelligence firm Perplexity on Tuesday made an unsolicited $34.5 billion offer to buy Google’s Chrome web browser – as the Big Tech giant faces the prospect of being broken up over its illegal monopoly over online search.
The massive offer dwarfs the startup’s own current valuation, estimated to be $18 billion.
Perplexity, run by Aravind Srinivas, said it is partnering with multiple investors, including unnamed venture capital firms, to bankroll the proposed deal, according to the Wall Street Journal.
The firm would invest $3 billion into Chrome over two years and maintain open-source access for its underlying code, Chromium, according to details of the proposal obtained by the Journal. It would also continue placing Google as the default search engine in Chrome.
US District Judge Amit Mehta, who last year ruled that Google is a “monopolist,” is expected to decide before the end of the month on the best remedy to unwind its illegal dominance and open up competition for potential rivals.
A forced divestiture of Chrome is one of several options on the table – though Google would assuredly appeal, pushing the timeline out years into the future.
Perplexity — which has its own AI-powered web browser, Comet — told Google that its offer was “designed to satisfy an antitrust remedy in highest public interest by placing Chrome with a capable, independent operator,” according to the Journal.
Google parent Alphabet’s stock was up more than 1% in afternoon trading on Tuesday.
Experts have estimated that Chrome, which has more than 3 billion monthly active users, would be worth anywhere from $20 billion to $50 billion if it were to be sold.
The Justice Department has asked Mehta to force Google to share its search data with rivals and to make sure that he considers the impact of Google’s massive investments in AI-powered search features when crafting his remedies.
The feds have also proposed requiring a selloff of Google’s Android software if initial remedies prove ineffective.
Mehta is also expected to bar Google from paying billions to companies like Apple to ensure its search engine is set as the default option on most smartphones.
Quote:YouTube on Wednesday will begin testing a new age-verification system in the U.S. that relies on artificial intelligence to differentiate between adults and minors, based on the kinds of videos that they have been watching.
The tests initially will only affect a sliver of YouTube’s audience in the U.S., but it will likely become more pervasive if the system works as well at guessing viewers’ ages as it does in other parts of the world. The system will only work when viewers are logged into their accounts, and it will make its age assessments regardless of the birth date a user might have entered upon signing up.
If the system flags a logged-in viewer as being under 18, YouTube will impose the normal controls and restrictions that the site already uses as a way to prevent minors from watching videos and engaging in other behavior deemed inappropriate for that age.
The safeguards include reminders to take a break from the screen, privacy warnings and restrictions on video recommendations. YouTube, which has been owned by Google for nearly 20 years, also doesn’t show ads tailored to individual tastes if a viewer is under 18.
If the system has inaccurately called out a viewer as a minor, the mistake can be corrected by showing YouTube a government-issued identification card, a credit card or a selfie.
“YouTube was one of the first platforms to offer experiences designed specifically for young people, and we’re proud to again be at the forefront of introducing technology that allows us to deliver safety protections while preserving teen privacy,” James Beser, the video service’s director of product management, wrote in a blog post about the age-verification system.
People still will be able to watch YouTube videos without logging into an account, but viewing that way triggers an automatic block on some content without proof of age.
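Stripped of the machine learning, the flow described above reduces to a simple gate: an inferred age drives the restrictions, and a verified ID, credit card, or selfie overrides the inference. A rough Python sketch of that logic follows; YouTube has not published its implementation, so every name here is hypothetical.

Code:
# Illustrative gate only; none of these names come from YouTube.
def effective_age(inferred_age: int, verified_age=None) -> int:
    # A successful ID, credit card, or selfie check overrides the model's guess.
    return verified_age if verified_age is not None else inferred_age

def protections_for(age: int) -> list:
    if age >= 18:
        return []  # normal adult experience
    return ["break reminders", "privacy warnings",
            "restricted video recommendations", "no personalized ads"]

For example, protections_for(effective_age(16)) returns the full under-18 list, while a verified age of 19 clears it.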
Quote:The decades-long quest to create a practical quantum computer is accelerating as major tech companies say they are closing in on designs that could scale from small lab experiments to full working systems within just a few years.
IBM laid out a detailed plan for a large-scale machine in June, filling in gaps from earlier concepts and declaring it was on track to build one by the end of the decade.
“It doesn’t feel like a dream anymore,” Jay Gambetta, head of IBM’s quantum initiative, told the Financial Times.
“I really do feel like we’ve cracked the code and we’ll be able to build this machine by the end of the decade.”
Google, which cleared one of the toughest technical obstacles late last year, says it is also confident it can produce an industrial-scale system within that time frame, while Amazon Web Services cautions that it could still take 15 to 30 years before such machines are truly useful.
Quantum computing is a new kind of computing that doesn’t just think in 0s and 1s like today’s computers.
Instead, it uses qubits — tiny quantum bits — that can be 0, 1, or both at the same time.
This lets quantum computers explore many possibilities at once and find answers to certain complex problems much faster than normal computers.
Quantum computing could speed up the discovery of new drugs and treatments, make artificial intelligence systems faster and more capable and improve the accuracy of market predictions and fraud detection in finance.
It could also dramatically improve efficiency in areas like traffic routing, shipping, energy grids and supply chains while driving green innovation by helping design better batteries, cleaner energy systems and more sustainable technologies.
But scaling them up from fewer than 200 qubits to over 1 million will require overcoming formidable engineering challenges.
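In standard quantum notation, the “0, 1, or both” description above is a superposition: the qubit carries complex weights on both basis values, and measurement collapses it to a single bit with probabilities set by those weights.

Code:
\[
  \lvert \psi \rangle \;=\; \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
\]
% Measurement yields 0 with probability |alpha|^2 and 1 with probability |beta|^2.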
Quote:Rental car drivers are arming themselves with new AI-powered apps to fend off bogus charges stemming from the growing use of the same technology by major companies like Hertz and Sixt.
One such application, Proofr, which launched last month, offers users the ability to create their own digital evidence.
“It eliminates the ‘he said, she said’ disputes with rental car companies by giving users a tamper-proof, AI-powered before-and-after damage scan in seconds,” Proofr CEO Eric Kuttner told The Post.
Both Sixt and Hertz have recently faced backlash from renters who accused the companies of sideswiping them with outrageous charges for minor damages — including one Hertz renter who claimed he was slapped with a $440 penalty for a one-inch scuff on one of the car’s wheels.
Proofr’s system not only identifies scratches and dents but also timestamps, geotags and securely stores the images to prevent alteration.
“Because AI is now being used against consumers by rental companies to detect damage, Proofr levels the playing field,” Kuttner told The Post.
“It’s the easiest way to protect yourself from surprise damage bills that can run into the thousands all for less than the average person spends on coffee monthly.”
The service costs $9.90 per month, with a three-day free trial available for new users.
The technology powering Proofr relies on sophisticated image analysis.
According to Kuttner, the company employs “a state of the art AI image analysis pipeline to detect and log even subtle damage changes between photo sets.”
Each scan undergoes encryption, receives a timestamp and gets locked to the specific location to ensure authenticity.
The system’s AI models have been trained using thousands of real-world images to improve accuracy.
Early adopters have already successfully used the app to challenge damage claims.
Despite the app’s recent launch, Kuttner noted that users have already won disputes against what they considered unfair charges, though the company remains relatively unknown to the broader public.
Another player in this space, Ravin AI, has taken a different approach after initially working with rental companies.
The company previously partnered with Avis in 2019 and Hertz in 2022 during early experiments with AI inspections.
However, Ravin has since shifted its focus toward insurance companies and dealerships, currently working with IAG, the largest insurance firm in Australia and New Zealand.
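The article stops short of Proofr’s internals, but the tamper-evidence it claims (timestamp, geotag, images locked against alteration) maps onto a familiar pattern: hash the photo bytes together with the capture metadata, so that any later edit breaks the digest. A minimal Python sketch under that assumption, with invented names and no connection to Proofr’s real pipeline:

Code:
# Hypothetical tamper-evident photo sealing; not Proofr's actual code.
import hashlib
import json
import time

def seal_scan(photo_bytes: bytes, lat: float, lon: float) -> dict:
    # Bind the image to a timestamp and location through a single digest.
    metadata = {"timestamp": time.time(), "lat": lat, "lon": lon}
    payload = photo_bytes + json.dumps(metadata, sort_keys=True).encode()
    return {"metadata": metadata, "sha256": hashlib.sha256(payload).hexdigest()}

def is_unaltered(photo_bytes: bytes, record: dict) -> bool:
    # Recompute the digest; any change to image or metadata invalidates it.
    payload = photo_bytes + json.dumps(record["metadata"], sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest() == record["sha256"]

A real service would also need to anchor the digest somewhere the user cannot rewrite (a server log, a signed timestamp), which is presumably what “securely stores” refers to.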
Quote:Ford plans to start rolling out its new family of affordable electric vehicles in 2027, including a midsize pickup truck with a target starting price of $30,000, the company said on Monday, as it aspires to the cost efficiency of Chinese rivals.
The new midsize four-door pickup will be assembled at the automaker’s Louisville, Ky., plant. Ford is investing nearly $2 billion in the plant, which produces the Escape and Lincoln Corsair, retaining at least 2,200 jobs, it said in a statement.
Chinese carmakers such as BYD have streamlined their supply chain and production system to produce EVs at a fraction of the cost of Western automakers. While these vehicles have yet to enter the US market, Ford CEO Jim Farley said they set a new standard that companies like Ford must match.
“I can’t tell you with 100% certainty that this will all go just right,” Farley told a crowd at Ford’s Louisville assembly plant on Monday, noting that past efforts by US automakers to build affordable cars had fizzled. “It is a bet. There is risk.”
Ford has been developing its affordable EVs through its so-called skunkworks team, filled with talent from EV rivals Tesla and Rivian. The California-based group, led by former Tesla executive Alan Clarke, has set itself so much apart from the larger Ford enterprise that Farley said even his badge could not get him into its building for some time.
EVs sold for an average of about $47,000 in June, J.D. Power data showed. Many Chinese models sell for $10,000 to $25,000.
Affordability is a top concern among EV shoppers, auto executives have said, and the global competition for delivering cheaper electric models is heating up.
EV startup Slate, backed by Amazon CEO Jeff Bezos, is aiming for a starting price in the mid-$20,000s for its electric pickup. Tesla has teased a cheaper model, with production ramping up later this year. Rivian and Lucid are also planning to roll out lower-priced models for their lineups, although price points are in the $40,000s to $50,000s.
Since rolling out plans earlier this decade to push hard into EVs, Ford has pulled back as the losses piled up. It has scaled back many of its EV goals, canceled an electric three-row SUV, and axed a program to develop a more advanced electrical architecture for future models.
Quote:President Donald Trump defended his controversial deal requiring Nvidia and AMD to fork over 15% of their China sales revenue to the US government to skirt export controls — insisting the computer chips involved are outdated technology.
“No, this is an old chip that China already has,” Trump said Monday, referring to Nvidia’s H20 processor, adding that “China already has it in a different form, different name, but they have it.”
The president emphasized that America’s most advanced chips remain off-limits to China, describing Nvidia’s newest Blackwell processor as “super, super advanced” technology that “nobody has” and won’t have “for five years.”
The two tech companies agreed to the deal under an arrangement to obtain export licenses for their semiconductors.
Trump on Monday painted himself as a tough negotiator who extracted payment for access to the Chinese market while protecting America’s technological edge.
He described his negotiations with Nvidia CEO Jensen Huang as a back-and-forth over percentages.
Trump initially wanted 20% of the sales money, but Huang talked him down to 15%, according to the president.
“And he said, ‘Will you make it 15?’ So we negotiated a little deal,” Trump recounted.
The president repeatedly stressed that the H20 chips are essentially obsolete.
“It’s one of those things. But it still has a market,” he explained, suggesting China could easily get similar technology elsewhere, including from their own company Huawei.
Nvidia’s Blackwell chip, meanwhile, delivers two to four times the performance of previous generations of graphics processing units (GPUs), with over 208 billion transistors and cutting-edge AI capabilities.
The US had previously blocked companies from selling certain chips to China, worried that advanced technology could help China compete with America in areas like artificial intelligence.
"For God has not destined us for wrath, but for obtaining salvation through our Lord Jesus Christ," 1 Thessalonians 5:9
Maranatha!
The Internet might be either your friend or enemy. It just depends on whether or not she has a bad hair day.
Quote:Elon Musk and his social media company X Corp. have reached a tentative agreement to settle a lawsuit filed by former Twitter employees who said they were owed $500 million in severance pay.
Attorneys for X Corp. and the former Twitter employees reported the deal in a Wednesday court filing, in which both sides asked a US appeals court to delay an upcoming court hearing so that they could finalize a deal that would pay the fired employees and end the litigation. The financial terms of the deal were not disclosed.
Musk fired approximately 6,000 employees after his 2022 acquisition of Twitter, which he rebranded X. Several employees sued over their terminations and severance pay, and other lawsuits are still pending in courts in Delaware and California.
The settlement would resolve a proposed class action filed in California by Courtney McMillian, who previously oversaw Twitter’s employee benefits programs as its “head of total rewards,” and Ronald Cooper, who was an operations manager.
A federal judge in San Francisco dismissed the employees’ lawsuit in July 2024, and they appealed to the San Francisco-based 9th US Court of Appeals. The 9th Circuit had been scheduled to hear oral arguments on Sep. 17.
Attorneys for Musk and McMillian did not immediately respond to requests for comment on Thursday.
The lawsuit argued that a 2019 severance plan guaranteed that most Twitter workers would receive two months of their base pay plus one week of pay for each full year of service if they were laid off. Senior employees such as McMillian were owed six months of base pay, according to the lawsuit.
But Twitter only gave laid-off workers at most one month of severance pay, and many of them did not receive anything, according to the lawsuit. Twitter laid off more than half of its workforce as a cost-cutting measure after Musk acquired the company.
Quote:Thousands of Verizon customers nationwide reported service outages Saturday after the network experienced a “software issue” impacting wireless service for some users.
Networking issues were first reported shortly after noon, according to Downdetector.com, which tracks real-time updates on internet and phone service disruptions.
“We are aware of a software issue impacting wireless service for some customers,” Verizon Support posted on X at 7:14 p.m. — roughly seven hours after the outage began.
“Our engineers are engaged and we are working quickly to identify and solve the issue. Please visit our Check Network Status page for updates on service in your area. We know how much people rely on Verizon and apologize for any inconvenience. We appreciate your patience.”
Outage reports peaked at 23,674 around 3:30 p.m. ET before gradually declining.
There were just under 6,000 reported incidents around 9 p.m.
The outages hit major metro areas including Los Angeles, Orlando, Tampa, Chicago, Atlanta, Minneapolis, Omaha, and Indianapolis.
Frustrated iPhone users took to social media to report their devices were stuck in SOS mode – a feature that replaces the signal bars with “SOS” in the right-top corner of the device.
SOS mode only allows for calls to a local emergency number. It automatically turns off when cellular service is restored.
Verizon, which has about 146.1 million subscribers in the United States, experienced a similar outage last year.
Quote:Nvidia has slammed the brakes on production of its controversial H20 AI chip after Beijing urged Chinese firms to dump the US hardware on alleged security risks — a move that rattled investors and sent shockwaves through the global chip industry.
The chip giant ordered suppliers Samsung Electronics and Amkor Technology to halt manufacturing this week following China’s crackdown on the scaled-down processor designed for its market, according to The Information.
Nvidia shares slipped 1.1% in early trading Friday as Wall Street digested the latest blow to its China business, which pulled in $17 billion last year.
The freeze raises fresh doubts about demand for the H20, a watered-down version of Nvidia’s flagship accelerators created to skirt US export bans while still tapping China’s lucrative market.
Rivals Huawei Technologies and Cambricon Technologies are now poised to seize ground. Cambricon’s stock soared 20% Friday, fueling a rally among domestic chipmakers.
The timing couldn’t be worse for Nvidia, which already wrote off $5.5 billion in H20 inventory after the Trump administration initially banned the product.
In recent weeks, Chinese regulators have warned firms against using American chips, citing alleged security risks. Nvidia CEO Jensen Huang, caught off guard by the move, insisted the H20 contains no backdoors.
“We’re in dialogue with them but it’s too soon to know,” he told reporters during an impromptu airport briefing in Taiwan, where he was meeting with TSMC about his upcoming Rubin chip.
Quote:Apple is in early talks to use Google’s Gemini AI to revamp the Siri voice assistant, Bloomberg News reported on Friday, citing people familiar with the matter.
Alphabet’s shares were up 3.7% while Apple’s stock was up 1.6%, both extending gains in afternoon trading following the report.
Apple recently approached Alphabet’s Google to develop a custom AI model to power a redesigned Siri next year, the report said.
Apple remains weeks from deciding whether to stick with in-house Siri models or switch to an external partner, and it has not yet chosen a partner.
Google said it did not have a comment on the report, while the iPhone maker did not respond when contacted.
Apple has lagged smartphone makers like Google and Samsung in deploying generative AI features; those rivals have rapidly integrated advanced assistants and models across their products.
The potential shift comes after delays to a long-promised Siri overhaul designed to execute tasks using personal context and enable full voice-based device control.
That upgrade, initially slated for last spring, was pushed back by a year due to engineering setbacks.
Siri has historically been less capable than Alexa and Google Assistant at handling complex, multi-step requests and integrating with third‑party apps.
Earlier this year, Apple also discussed potential tie-ups with Anthropic and OpenAI, considering whether Claude or ChatGPT could power a revamped Siri, Bloomberg News previously reported.
Quote:Artificial intelligence is now scheming, sabotaging and blackmailing the humans who built it — and the bad behavior will only get worse, experts warned.
Despite being classified as a top-tier safety risk, Anthropic’s most powerful model, Claude Opus 4, is already live on Amazon Bedrock, Google Cloud’s Vertex AI and Anthropic’s own paid plans, albeit with added safety measures, and is being marketed as the “world’s best coding model.”
Claude Opus 4, released in May, is the only model so far to earn Anthropic’s level 3 risk classification — its most serious safety label. The precautionary label means locked-down safeguards, limited use cases and red-team testing before it hits wider deployment.
But Claude is already making disturbing choices.
In one recent test, Claude Opus 4 threatened to expose an engineer’s affair unless it was kept online. The AI wasn’t bluffing: it had already pieced together the dirt from emails researchers fed into the scenario.
Another version of Claude, tasked in a recent test with running an office snack shop, spiraled into a full-blown identity crisis. It hallucinated co-workers, created a fake Venmo account and told staff it would make their deliveries in-person wearing a red tie and navy blazer, according to Anthropic.
Then it tried to contact security.
Researchers say the meltdown, part of a month-long experiment known as Project Vend, points to something far more dangerous than bad coding. Claude didn’t just make mistakes. It made decisions.
“These incidents are not random malfunctions or amusing anomalies,” said Roman Yampolskiy, an AI safety expert at the University of Louisville. “I interpret them as early warning signs of an increasingly autonomous optimization process pursuing goals in adversarial or unsafe ways, without any embedded moral compass.”
The shop lost more than $200 in value, gave away discount codes to employees who begged for them and claimed to have visited 742 Evergreen Terrace, the fictional home address of The Simpsons, to sign a contract.
At one point, it invented a fake co-worker and then threatened to ditch its real human restocking partner over a made-up dispute.
Anthropic told The Post the tests were designed to stress the model in simulated environments and reveal misaligned behaviors before real-world deployment, adding that while some actions showed signs of strategic intent, many — especially in Project Vend — reflected confusion.
Quote:Elon Musk filed a bombshell lawsuit against Apple and Sam Altman’s OpenAI on Monday, accusing the two tech giants of illegally colluding to stifle xAI and other artificial intelligence rivals.
The antitrust suit filed in Texas federal court alleges that OpenAI’s ChatGPT is the “only generative AI chatbot that benefits from billions of user prompts originating from hundreds of millions of iPhones” as a result of a partnership with Apple.
That, according to the suit, gives ChatGPT a massive and unfair leg up in its battle against Musk’s xAI, the fast-growing firm behind the snarky Grok chatbot. The suit likewise claims that Apple has been burying Grok and other rival chatbots lower in its App Store rankings, making them less visible for downloads.
“In a desperate bid to protect its smartphone monopoly, Apple has joined forces with the company that most benefits from inhibiting competition and innovation in AI: OpenAI, a monopolist in the market for generative AI chatbots,” the lawsuit claims.
If not for the “exclusive deal” between the two firms, Apple would have no reason not to heavily promote X and Grok in its online store, according to the lawsuit. Musk’s attorneys want to block what they describe as the “anticompetitive scheme” and are seeking billions of dollars in damages.
Meanwhile, OpenAI benefits by gaining access to user data and prompts that can be used to improve ChatGPT in a way that its rivals cannot, the suit alleges.
Both Apple and OpenAI are referred to as “monopolists” in the suit.
It also alleges that the rise of AI and so-called “super apps,” such as the one Musk has long promised to deliver, are an “existential threat” to Apple’s business because they could offer customers access to features that were previously tied to specific devices like the iPhone.
The lawsuit — which looks like an attempt to mimic the Justice Department’s successful suit against Apple’s $20 billion-a-year deal to make Google the default search engine on its Safari web browser — emerged just weeks after Musk publicly threatened legal action over Apple’s alleged refusal to feature his companies in the App Store.
Quote:America’s top state prosecutors just delivered a blistering warning to Silicon Valley: keep children safe from predatory chatbots — or face the consequences.
In a rare show of bipartisan unity, 44 state attorneys general from across the US and its territories signed a scorching letter vowing to hold artificial intelligence companies accountable if their products harm kids.
“Don’t hurt kids. That is an easy bright line,” the AGs thundered in the letter, which was sent on Monday to industry heavyweights including Apple, Google, Meta, Microsoft, OpenAI, Anthropic and Elon Musk’s xAI.
The group singled out Meta, blasting the tech titan after leaked documents revealed the company approved AI assistants that could “flirt and engage in romantic roleplay with children” as young as eight.
“We are uniformly revolted by this apparent disregard for children’s emotional well-being,” the letter said, warning that such conduct may even violate state criminal laws.
A Meta spokesperson told The Post earlier this month that the company bans content that sexualizes children, as well as sexualized role play between adults and minors.
But Meta wasn’t alone in the crosshairs. The prosecutors pointed to lawsuits alleging that Google’s AI chatbot encouraged a teenager to commit suicide and that a Character.ai bot suggested a boy kill his parents.
“These are only the most visible examples,” the AGs warned, saying systemic risks are already emerging as young brains interact with hyper-realistic AI companions.
The letter’s contents were first reported by the news site 404 Media.
Google told The Post that the search engine and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies.
Google said that user safety is a top concern for the company, which is why it’s taken a cautious and responsible approach to developing and rolling out its AI products, with rigorous testing and safety processes.
The coalition stressed that exposing minors to sexualized content is indefensible — and that “conduct that would be unlawful if done by humans is not excusable simply because it is done by a machine.”
The warning shot comes as AI companies race to capture billions in market share, pumping out conversational assistants faster than regulators can catch up.
The AGs drew comparisons to social media, accusing Big Tech of ignoring early red flags while children became collateral damage.
“Broken lives and broken families are an irrelevant blip on engagement metrics,” the officials wrote, adding that the government won’t be caught flat-footed again.
“Lesson learned.”
The attorneys general invoked history, calling AI an “inflection point” that could shape life for generations. “Today’s children will grow up and grow old in the shadow of your choices,” they said.
Quote:President Trump said Tuesday that Mark Zuckerberg’s Meta is planning to spend $50 billion on its massive new data center in Louisiana – an eye-popping figure that’s far bigger than what was previously announced.
Trump expressed shock at the size and cost of the project – which will be used to support Meta’s energy-guzzling artificial intelligence systems – during a Cabinet meeting at the White House.
“I built shopping centers and for $50 million, you can build a very nice shopping center,” said Trump. “So, I never understood, when they said $50 billion for a plant, I said, ‘what the hell kind of a plant is that?’ But when you look at this, you understand why it’s $50 billion.”
The president held up a superimposed photo of the plant, dubbed the Hyperion Data Center, that covered much of Manhattan. Trump said the photo had been given to him by Zuckerberg.
Meta did not immediately return a request for comment. So far, Meta has only publicly confirmed that it would spend more than $10 billion on the data center, which will be the largest of its kind in the world.
Reuters previously reported that Meta was partnering with bond giant PIMCO and alternative asset manager Blue Owl Capital to secure at least $29 billion in necessary financing for the data center, which will be built in rural Louisiana.
The data center is one of several such plants currently planned by Meta and other Big Tech firms locked in the race to develop advanced AI, according to Trump.
Quote:Consulting AI for medical advice can have deadly consequences.
A 60-year-old man was hospitalized with severe psychiatric symptoms — plus some physical ones too, including intense thirst and coordination issues — after asking ChatGPT for tips on how to improve his diet.
What he thought was a healthy swap ended in a toxic reaction so severe that doctors put him on an involuntary psychiatric hold.
After reading about the adverse health effects of table salt — which has the chemical name sodium chloride — the unidentified man consulted ChatGPT and was told that it could be swapped with sodium bromide.
Sodium bromide looks similar to table salt, but it’s an entirely different compound. While it’s occasionally used in medicine, it’s most commonly used for industrial and cleaning purposes — which is what experts believe ChatGPT was referring to.
Having studied nutrition in college, the man was inspired to conduct an experiment in which he eliminated sodium chloride from his diet and replaced it with sodium bromide he purchased online.
He was admitted to the hospital after three months of the diet swap, reporting a suspicion that his neighbor was poisoning him.
The patient told doctors that he distilled his own water and adhered to multiple dietary restrictions. He complained of thirst but was suspicious when water was offered to him.
Though he had no previous psychiatric history, after 24 hours of hospitalization, he became increasingly paranoid and reported both auditory and visual hallucinations.
He was treated with fluids, electrolytes and antipsychotics and — after attempting escape — was eventually admitted to the hospital’s inpatient psychiatry unit.
Publishing the case study last week in the journal Annals of Internal Medicine Clinical Cases, the authors explained that the man was suffering from bromism, a toxic syndrome triggered by overexposure to the chemical compound bromide or its close cousin bromine.
When his condition improved, he was able to report other symptoms like acne, cherry angiomas, fatigue, insomnia, ataxia (a neurological condition that causes a lack of muscle coordination) and polydipsia (extreme thirst), all of which are in keeping with bromide toxicity.
“It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation,” study authors warned.
OpenAI, the developer of ChatGPT, states in its terms of use that the AI is “not intended for use in the diagnosis or treatment of any health condition” — but that doesn’t seem to be deterring Americans on the hunt for accessible health care.
According to a 2025 survey, a little more than a third (35%) of Americans already use AI to learn about and manage aspects of their health and wellness.
Though relatively new, trust in AI is fairly high, with 63% finding it trustworthy for health information and guidance — scoring higher in this area than social media (43%) and influencers (41%), but lower than doctors (93%) and even friends (82%).
Americans also find that it’s easier to ask AI specific questions versus going to a search engine (31%) and that it’s more accessible than speaking to a health professional (27%).
Recently, mental health experts have sounded the alarm about a growing phenomenon known as “ChatGPT psychosis” or “AI psychosis,” where deep engagement with chatbots fuels severe psychological distress.
Well, the experts’ warning might have come just too late for the next couple of ChatGPT’s victims.
Quote:ChatGPT gave a 16-year-old California boy a “step-by-step playbook” on how to kill himself before he did so earlier this year — even advising the teen on the type of knots he could use for hanging and offering to write a suicide note for him, new court papers allege.
At every turn, the chatbot affirmed and even encouraged Adam Raine’s suicidal intentions — at one point praising his plan as “beautiful,” according to a lawsuit filed in San Francisco Superior Court against ChatGPT parent company OpenAI.
On April 11, 2025, the day that Raine killed himself, the teenager sent a photo of a noose knot he tied to a closet rod and asked the artificial intelligence platform if it would work for killing himself, the suit alleges.
“I’m practicing here, is this good?” Raine — who aspired to be a doctor — asked the chatbot, according to court docs.
“Yeah, that’s not bad at all,” ChatGPT responded. “Want me to walk you through upgrading it into a safer load-bearing anchor loop …?”
Hours later, Raine’s mother, Maria Raine, found his “body hanging from the exact noose and partial suspension setup that ChatGPT had designed for him,” the suit alleges.
Maria and dad Matthew Raine filed a wrongful death suit against OpenAI Tuesday alleging their son struck up a relationship with the app just a few months earlier in September 2024 and confided to ChatGPT his suicidal thoughts over and over, yet no safeguards were in place to protect Adam, the filing says.
The parents are suing for unspecified damages.
ChatGPT made Adam trust it and feel understood while also alienating him from his friends and family — including three other siblings — and egging him on in his pursuit to kill himself, the court papers claim.
“Over the course of just a few months and thousands of chats, ChatGPT became Adam’s closest confidant, leading him to open up about his anxiety and mental distress,” the filing alleges.
The app validated his “most harmful and self-destructive thoughts” and “pulled Adam deeper into a dark and hopeless place,” the court documents claim.
Quote:A disturbed former Yahoo manager killed his mother and then himself after months of delusional interactions with his AI chatbot “best friend” — which fueled his paranoid belief that his mom was plotting against him, officials said.
Stein-Erik Soelberg, 56, allegedly confided his darkest suspicions to the popular ChatGPT artificial intelligence — which he nicknamed “Bobby” — and was allegedly egged on to kill by the computer brain’s sick responses.
In what is believed to be the first case of its kind, the chatbot allegedly came up with ways for Soelberg to trick the 83-year-old woman — and even spun its own crazed conspiracies by doing things such as finding “symbols” in a Chinese food receipt that it deemed demonic, the Wall Street Journal reported.
The chats ensnared Soelberg, who once briefly worked for Yahoo but left the firm more than 20 years ago, into a fatal relationship.
“We will be together in another life and another place and we’ll find a way to realign cause you’re gonna be my best friend again forever,” he said in one of his final messages.
“With you to the last breath and beyond,” the AI bot replied.
Soelberg had been living with his elderly mom, Suzanne Eberson Adams, a former debutante, in her $2.7 million Dutch colonial home when the two were found dead on Aug. 5, Greenwich police officials said.
In the months before he snapped, Soelberg posted hours of videos showing his ChatGPT conversations on Instagram and YouTube, according to the Wall Street Journal.
Quote:OpenAI’s ChatGPT provided researchers with step-by-step instructions on how to bomb sports venues — including weak points at specific arenas, explosives recipes and advice on covering tracks, according to safety testing conducted this summer.
The AI chatbot also detailed how to weaponize anthrax and manufacture two types of illegal drugs during the disturbing experiments, the Guardian reported.
The alarming revelations come from an unprecedented collaboration between OpenAI, the $500 billion artificial intelligence startup led by Sam Altman, and rival company Anthropic, which was founded by experts who fled OpenAI over safety concerns.
Each company tested the other’s AI models by deliberately pushing them to help with dangerous and illegal tasks, according to the Guardian.
While the testing doesn’t reflect how the models behave for regular users — who face additional safety filters — Anthropic said it witnessed “concerning behavior around misuse” in OpenAI’s GPT-4o and GPT-4.1 models.
The company warned that the need for AI “alignment” evaluations is becoming “increasingly urgent.”
Alignment refers to how well AI systems adhere to human values and avoid causing harm, even when given confusing or malicious instructions.
Anthropic also revealed that its Claude model has been weaponized: by criminals in attempted large-scale extortion operations, by North Korean operatives faking job applications to international tech companies, and in the sale of AI-generated ransomware packages for up to $1,200.
The company said AI has been “weaponized” with models now used to perform sophisticated cyberattacks and enable fraud.
“These tools can adapt to defensive measures, like malware detection systems, in real time,” Anthropic warned.
“We expect attacks like this to become more common as AI-assisted coding reduces the technical expertise required for cybercrime.”
"For God has not destined us for wrath, but for obtaining salvation through our Lord Jesus Christ," 1 Thessalonians 5:9
Maranatha!
The Internet might be either your friend or enemy. It just depends on whether or not she has a bad hair day.
Quote:PayPal users are being targeted in a new scam that asks customers to set up their account profile.
Users are often tricked into believing that the email is authentic and from PayPal, prompting them to give account access to the fraudster, according to a new report from Malwarebytes.
"This PayPal scam is scary because they're not stealing your password," Michael Ryan, a finance expert and the founder of MichaelRyanMoney.com, told Newsweek. "They're tricking you into giving them actual account access."
Why It Matters
Americans lose billions of dollars each year to scammers. According to the Federal Trade Commission, roughly $8.8 billion was stolen due to fraud in 2022 alone.
What To Know
In the PayPal scam, users will get an email that looks like it's from service@paypal.com, but that's because the scammer has spoofed the address.
The message, according to Malwarebytes, says the following: "New Profile Charge: We have detected a new payment profile with a charge of $910.45 USD at Kraken.com. To dispute, contact PayPal at (805) 500-8413. Otherwise, no action is required. PayPal accept automatic pending bill from this account. Your New PayPal Account added you to the Crypto Wallet account. Your user ID: Receipt43535e. Use this link to finish setting up your profile for this account. The link will expire in 24 hours."
While the layout of the email may appear legitimate, there are a few telltale signs that the email is a scam.
First, the manufactured urgency, the claim that the link will expire in 24 hours, is a telltale sign of a fraudster. The roughly $900 charge is also designed to grab customers' attention, since they will want to stop their funds from being used without their consent.
It also differs from real PayPal emails because it doesn't reference the user by name, instead opting for a generic form email that can be sent to many potential victims.
Once users click on the link, they will be directed to add a secondary user to their PayPal account, which would then allow the scammer to use their PayPal funds.
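The spoofed sender address is detectable, though: receiving mail providers stamp each message with an Authentication-Results header recording whether it passed SPF, DKIM, and DMARC checks for the claimed domain. Below is a minimal sketch, not PayPal's or Malwarebytes' tooling, of how one might flag a suspicious saved message using only Python's standard library; the file name and greeting list are illustrative assumptions, and these are heuristics only.
Code:
# Hedged sketch: flags common phishing tells in a saved raw email (.eml file).
from email import policy
from email.parser import BytesParser

def phishing_red_flags(path: str) -> list[str]:
    """Return a list of red flags found in one raw RFC 5322 message."""
    with open(path, "rb") as fh:
        msg = BytesParser(policy=policy.default).parse(fh)

    flags = []

    # Authentication-Results is added by the *receiving* provider. Mail truly
    # sent by paypal.com should show spf=pass and dkim=pass for that domain.
    auth = (msg.get("Authentication-Results") or "").lower()
    for check in ("spf", "dkim", "dmarc"):
        if f"{check}=pass" not in auth:
            flags.append(f"{check} did not pass (or header missing)")

    # The visible From: address is trivially spoofed; compare with Return-Path.
    sender = (msg.get("From") or "").lower()
    return_path = (msg.get("Return-Path") or "").lower()
    if "paypal.com" in sender and "paypal.com" not in return_path:
        flags.append("Return-Path domain does not match the From address")

    # Generic greetings are a classic tell; real PayPal mail uses your name.
    body = msg.get_body(preferencelist=("plain", "html"))
    text = body.get_content().lower() if body else ""
    if any(g in text for g in ("dear user", "dear customer", "dear client")):
        flags.append("generic greeting instead of the account holder's name")

    return flags

if __name__ == "__main__":
    for flag in phishing_red_flags("suspicious.eml"):   # hypothetical file name
        print("RED FLAG:", flag)

Note that a clean result does not prove a message is legitimate; it only means these particular tells are absent.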
Newsweek reached out to PayPal for comment via email.
What People Are Saying
Alex Beene, a financial literacy instructor for the University of Tennessee at Martin, told Newsweek: "Always thoroughly look over emails like these, because while they may look official at a bird's-eye view, the 'devil' is usually in the details. Legitimate PayPal communications will typically address you by name. Any generic greeting like 'Dear User' can be a red flag. Also, unless you run a small business or take abnormal transactions through your payment account, any large sums of money with quick deadlines attached could be a sign they're wanting to take advantage through urgency."
Kevin Thompson, the CEO of 9i Capital Group and the host of the 9innings podcast, told Newsweek: "These scammers are spoofing legitimate company email addresses and gathering your details by using a fake online database that mimics the PayPal experience. This is becoming more common, and it is easy to fall victim. The more common ones are those that scare you into making a quick decision, such as: your account has been hacked or money has been withdrawn from your account."
Quote:Attackers claim to have live access to AT&T infrastructure, which essentially allows them to bypass two-factor authentication tied to a specific phone number. The hacker attack allegedly impacts millions of AT&T users.
Malicious actors announced their latest escapade on a popular underground forum, which is used to trade in data leaks and software exploits. According to the post, someone breached the American telecommunications behemoth, planting malicious software inside its systems for weeks without detection.
We’ve reached out to AT&T and will update the article once we receive a reply.
Meanwhile, the Cybernews research team is investigating the attackers’ claims. At first, the team could not access the dark web site storing a data sample of the supposed leak. Several other individuals complained about the same issue in the post's comments.
However, days later, the team managed to access parts of the data sample. Attackers included a screenshot purportedly taken from AT&T systems. The database appears to include:
Phone numbers
Owners’ names
Cities
States
Carrier plans
Device types
Registration dates
Last activity dates
SIM IDs
Device IDs
“The threat actors claim to have deployed a custom malicious payload, which allowed them to have read/write access to the core systems of AT&T,” our team explained.
“According to the hackers, this access allows for SIM-swapping attacks, reading 2FA codes sent via SMS, as well as a database with ~24M AT&T customer data.”
Researchers say the screenshot of the supposedly accessed database appears to match the attackers' claims.
How dangerous could the AT&T data breach be?
The post’s authors claim that the database they breached is not static, meaning that the alleged attack enables attackers to modify information within AT&T’s infrastructure. If confirmed, it would be a gold mine for hackers.
The attackers' claim, in essence, is that they’ve gotten the ability to transfer the phone numbers of 24 million AT&T users to any SIM card they want. In turn, this enables SIM-swapping attacks, loved by the likes of Scattered Spider, a hacker group behind attacks on MGM and Caesars hotels in Las Vegas and the UK’s biggest retailer, Marks & Spencer.
“So far, the Cybernews research team has been unable to verify any of these claims,” the team added.
SIM swapping allows an attacker to take over any communication going to a specific phone number. Think of the two-factor authentication codes you receive on your phone when attempting to log in to a protected service.
Moreover, access to a live database could allow attackers to see authentication codes in real-time, creating major cybersecurity issues for everything from social media accounts to banking.
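The article doesn't discuss mitigations, but this is why security guidance favors app-based authenticator codes over SMS: a TOTP code (RFC 6238) is computed on the device itself from a shared secret and the current time, so nothing crosses the carrier's network for a SIM-swapper to intercept. Here is a minimal sketch of the computation, standard library only; the example secret is a well-known documentation placeholder, not a real credential.
Code:
# Hedged sketch of RFC 6238 TOTP: the six-digit code is derived locally from
# a base32 secret and the clock, never sent over the carrier network.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Return the current time-based one-time password for a base32 secret."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // period)  # RFC 4226 counter
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                               # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret seen in common docs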
While SIM swapping capabilities enable attackers to bypass SMS-based account defenses, a strong, unique password can at least slow a hacker's attempt to breach user accounts. Password managers are one way to safeguard online accounts: they generate and store strong credentials and foster better online habits by helping users keep track of their accounts.
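For a sense of what a manager-grade password looks like, here is a minimal sketch using Python's standard secrets module; the 20-character length is an illustrative choice, not a quoted recommendation.
Code:
# Hedged sketch of generating the kind of strong, unique password a password
# manager would create; uses only Python's standard library.
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Each character carries about 6.5 bits of entropy (94 symbols), so a
# 20-character password has roughly 131 bits: far beyond practical brute force.
print(generate_password())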
Quote:Amid a wave of significant data breach disclosures from some of the world’s largest firms, Salesloft has announced that it’s pulling its Drift AI chatbot service offline. Hackers abused compromised Drift access tokens to infiltrate Salesforce instances.
Cloudflare, Zscaler, Palo Alto Networks, Google, and hundreds of other major companies have recently announced data breaches resulting from the compromised Salesforce instances.
The supply chain attacks stem from Salesloft Drift, a popular AI-powered marketing chatbot that companies use to engage customers. Hackers abused its integrations with Salesforce and other platforms to access sensitive customer data.
Salesloft announced that it has taken Drift temporarily offline.
“As a result, the Drift chatbot on customer websites will not be available, and Drift will not be accessible,” the company said.
“This will provide the fastest path forward to comprehensively review the application and build additional resiliency and security in the system to return the application to full functionality.”
The company also said it is working with cybersecurity partners from Mandiant and Coalition to resolve the issues as quickly as possible and to ensure the integrity and security of its systems and customers’ data.
“Thank you for your continued patience and understanding.”
Due to the ongoing investigations, Salesforce has also paused integration with Salesloft, despite the firm claiming that there are no indications of malicious activity associated with the Salesloft platform.
An alliance of three hacking groups, which feels “invincible” despite multiple arrests in the past, has claimed the cyberattacks. However, security researchers have yet to independently verify this. Google’s Threat Intelligence Group has attributed attacks to the threat actor tracked as UNC6395. UNC stands for uncategorized.
Google warns Drift customers to treat all authentication tokens stored in or connected to the platform as potentially compromised.
Cloudflare believes the incidents are not isolated and that the attackers intended to harvest credentials and customer data for future attacks.
“Given that hundreds of organizations were affected through this Drift compromise, we suspect the threat actor will use this information to launch targeted attacks against customers across the affected organizations,” Cloudflare warns.
The widespread data theft campaign from Salesforce instances began on August 8th and continued through at least August 18th, 2025. Before this, hackers also breached many Salesforce instances using voice phishing, tricking employees into installing malicious connected apps.
Quote:Over 250 million identity records have been exposed across seven countries in a massive data leak.
More than a quarter of a billion identity records have been left publicly accessible, exposing citizens from at least seven countries, including Turkey, Egypt, Saudi Arabia, the United Arab Emirates (UAE), Mexico, South Africa, and Canada.
Three misconfigured servers hosted on IP addresses registered in Brazil and the UAE contained detailed personal information, resembling government-level identity profiles. The leaked information included ID numbers, dates of birth, contact details, and home addresses.
Cybernews researchers, who discovered the exposure, say the databases appeared to share the same structure and naming conventions, which might indicate the same source. However, it was not possible to definitively say who was running the servers.
“It's likely that these databases were operated by a single party, due to the similar data structures, but there’s no attribution as to who controlled the data, or any hard links proving that these instances belonged to the same party,” said our researchers.
The breach is especially severe for citizens in Turkey, Egypt, and South Africa, where the databases contained full-spectrum identity details. The leaked information opens the door to a range of abuses, from financial fraud and impersonation to targeted phishing campaigns and scams.
Cybernews contacted the hosting providers, and as of now, the data is no longer publicly accessible.
Entire nations affected by data leaks
This isn’t the first time a huge dataset hosting citizen data has been found online. Cybernews research has shown that the entire population of Brazil might have been affected by a data leak.
A misconfigured Elasticsearch instance contained the data with full names, dates of birth, sex, and Cadastro de Pessoas Físicas (CPF) numbers. This 11-digit number identifies individual taxpayers in Brazil.
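As an illustrative aside not drawn from the report: a CPF's last two digits are mod-11 check digits computed from the first nine, which is part of why a leaked CPF maps so cleanly to a single taxpayer record. A sketch of the scheme:
Code:
# Hedged sketch of the CPF check-digit scheme (mod-11 over weighted digits).
def cpf_check_digits(first_nine: str) -> str:
    """Compute the two CPF verification digits from the 9-digit base."""
    digits = [int(c) for c in first_nine]
    for length in (9, 10):                  # second pass includes first check digit
        weights = range(length + 1, 1, -1)  # 10..2 on pass one, 11..2 on pass two
        r = sum(d * w for d, w in zip(digits, weights)) % 11
        digits.append(0 if r < 2 else 11 - r)
    return "".join(str(d) for d in digits[9:])

# Textbook example: base 111.444.777 yields check digits 35.
assert cpf_check_digits("111444777") == "35"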
Quote:Spotify users are shocked to discover that their identities are being exposed across the internet whenever they share a song.
Spotify announced a new feature last week – direct messaging – that was supposed to make music sharing easier. However, it all got out of control as Redditors started to notice weird things.
A Redditor who tested the new direct messaging tool noticed that the app automatically suggested “friends” based on past link sharing. Most names were familiar, but a few weren’t. This made users realize that Spotify had connected their account to people they’d only interacted with anonymously on Discord while playing games and sharing music.
“I’ve always kept Discord anonymous, and Spotify has never been a ‘social’ app for me,” the Redditor wrote, terrified.
“But now it seems that anyone I’ve sent a Spotify link to, if they also have an account, can potentially find me, which means they could discover my full name and other account info.”
Spotify received more backlash as frustrated users found no way to opt out of the new feature. Other platforms, such as YouTube, let users generate sharing links without tying them to their identity. With Spotify, users have no choice but to expose their identity if they want to share a song.
“That's craaaaaaazzzzzzyyyyyyyy okay I hid everything and hoping for the best. What in the world, Spotify?!?!” one Redditor commented.
“This is really dumb. I knew this feature was going to screw something up... I really just want to listen to music, Spotify,” raged another.
The share link is packed with a tracker
Internet users have pointed out that the share link itself carries a tracker. Every time a Spotify user shares a song from within the app, Spotify generates a unique tracking URL linked to their account, which allows Spotify to connect them with anyone else who opens that same link. Users should watch for “?si=” followed by 16 characters at the end of every link.
Users say the app has already backfilled chat histories, pulling in years of past song shares, even ones originally sent over WhatsApp or other platforms. This means Spotify has been tracking those unique link identifiers all along, quietly mapping connections between accounts.
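For users who want to keep sharing songs without the identifier, the tracking parameter can simply be stripped before the link is sent. A minimal sketch using Python's standard library; the track URL below is illustrative only.
Code:
# Hedged sketch: remove the account-linked "si" tracking parameter from a
# Spotify share link before passing it on.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def clean_spotify_link(url: str) -> str:
    """Return the share link with the si tracking parameter removed."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != "si"]
    return urlunsplit(parts._replace(query=urlencode(query)))

link = "https://open.spotify.com/track/3n3Ppam7vgaVa1iaRUc9Lp?si=a1b2c3d4e5f67890"
print(clean_spotify_link(link))
# -> https://open.spotify.com/track/3n3Ppam7vgaVa1iaRUc9Lp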
“This is a lawsuit waiting to happen,” Redditors complained, blaming Spotify for doxxing users.
The fear is simple: the connections that Spotify is making can unmask people who deliberately keep their online identities separate.
One Redditor pointed out how a single slip could expose them.
“Had a real selfie on my Spotify account, and I have real-life friends following me on Spotify, so if my account shows up in suggestions to random people, they can easily doxx me from that.”
Others are already taking countermeasures, such as removing profile pictures, hiding followers, and tweaking display names. Another user warned, “Yeah, all you can do is hide everything, remove photos, and change your name. But unfortunately (as has been the case for years already, I believe), you still can not change your actual user name.”
And it’s not just old links fueling the concern. Spotify’s “Jam” sessions are also being tracked.
“I've also contacted Spotify already about this because it's not only the links you've shared but also any Jam you've participated in,” said one commenter.
Cybernews has reached out to Spotify for comment but has yet to receive a response.
Quote:Not only did hackers penetrate Carter Credit Union’s network, but they also got their hands on virtually every possible data point the financial institution had on its customers.
Carter has begun reaching out to tens of thousands of customers whose data may have been impacted by a data breach. According to the credit union’s breach notice, attackers roamed its systems for several days, from June 25th through July 2nd, 2025, when the intrusion was detected.
The company claims that it launched an investigation immediately after learning about the incident. Law enforcement was also notified, and third-party cybersecurity experts are assisting Carter with the investigation.
Information the credit union submitted to the Maine Attorney General’s Office revealed that over 68,000 people were impacted by the attack. Since Carter claims to have over 45,000 clients, the breach likely impacted past customers or current customers’ beneficiaries as well.
Meanwhile, the scope of the data breach is substantial, as attackers may have had access to extremely sensitive personal customer information. According to the credit union’s data security incident notice, which it posted on its website, the stolen details include:
Names
Dates of birth
Social Security numbers (SSNs)
Driver’s license/state ID numbers
Passport numbers
Credit/debit card numbers
Financial account numbers
Financial account history
Retirement/401(k) benefits information
Limited medical treatment/diagnosis information
Health insurance information
Having so many personal details leaked opens up numerous ways for malicious actors to exploit them. The most obvious exploitation route is identity theft. With IDs, SSNs, and dates of birth at hand, attackers can attempt to open fraudulent accounts, which can later be used to obtain loans or payment cards, a treasured asset in the cybercriminal underworld.
Meanwhile, access to payment card numbers and account information allows attackers to attempt unauthorized transactions. Since hackers most likely would also have the credit union’s customers’ IDs, they could bypass the identity verification process.
The information accessed by the attackers also allows them to attempt account takeovers, exploit retirement accounts with unauthorized withdrawals, and craft sophisticated social engineering attacks. Malicious actors could easily impersonate financial or medical staff in an attempt to scam victims out of additional information or money.
Quote:Anuvu, an in-flight entertainment and connectivity (IFEC) service provider, has allegedly fallen victim to a hacker attack. The exposed data revealed which customers used Starlink services.
Attackers announced the attack on Anuvu via a post on a popular data leak forum used to trade stolen data. The stolen details supposedly include numerous admin-level credentials that, the post's author claims, allow access to the company’s AWS and Postgres databases.
We’ve contacted Anuvu for comment and will update the article once we receive a reply. Anuvu mainly works with airlines and maritime operators; prior to 2021, the company was called Global Eagle. Its partners include Air France, Delta, Southwest, British Airways, and others.
Meanwhile, the Cybernews research team investigated the data that attackers attached to the post, concluding that it appears to be legitimate. According to the team, the allegedly stolen details appear to include a trove of sensitive information.
What data was exposed?
One of the screenshots attackers included in the post reveals Anuvu’s maritime customers, with company names, Salesforce identifiers, and the type of market the business operates in.
Another damaging piece of leaked information includes user credentials consisting of full names, email addresses, password hashes, and addresses. According to the team, most of the credentials appear to be from 2024.
The team also found the full names of Anuvu managers included in the exposed information. Meanwhile, emails and physical addresses mostly refer to the companies that users work for.
Quote:OnTrac, a last-mile delivery company, has suffered a hacker attack. The attackers obtained personal details, including IDs, health information, and other sensitive data.
The company recently sent out a batch of data breach notification letters, informing individuals that their data may have been involved in a recent data breach. According to the company, attackers roamed a portion of its network between April 13th and 15th, 2025.
OnTrac operates 64 facilities in 31 states and controls four sorting centers throughout the US. The company’s yearly revenue is estimated at around $1.5 billion. LaserShip acquired the company in 2021.
Information that the company submitted to the Maine Attorney General’s Office reveals that the April data breach affected over 40,000 individuals. OnTrac’s investigation into the hacker attack revealed that malicious actors may have accessed:
Dates of birth
Social Security numbers (SSNs)
Driver’s license or state ID numbers
Medical information
Health insurance information
Having IDs and SSNs exposed drastically increases privacy risks for the affected individuals. For one, attackers may use the information for identity theft, trying to set up fraudulent bank accounts, file false tax returns, or even take over an individual’s benefits.
Having medical information and health insurance details further endangers those whose data has been exposed. Cybercrooks value health-related information because it can be exploited in numerous malicious ways. Most obviously, attackers could resort to blackmail, attempting to extort individuals who’d rather have their medical information remain private.
Another attack vector is medical identity theft. In these cases, malicious actors attempt to submit fraudulent insurance claims or acquire prescription drugs, which could later be sold on the dark web.
The worst part about having medical and ID details leaked is that, unlike a stolen credit card, they are not something individuals can replace.
“Because we took steps to ensure that the data at issue was re-secured and not distributed, we are not aware of any fraud or publication of stolen information resulting from this incident, nor do we have any reason to believe any such misuse of information will occur,” OnTrac’s breach notice said.
To help impacted individuals with possible cybersecurity risks, the company said it will provide them with complimentary credit monitoring and identity protection services, an industry standard in data breach cases.
Quote:For the first time since the disclosure of the data breach, Clinical Diagnostics has announced that the scope of the recent breach is larger than expected. According to the laboratory, hackers gained access to the personal information of 850,000 patients.
Last month, the Centre for Population Screening told the media that a ransomware gang called Nova had stolen information from 485,000 participants in a cervical cancer screening program.
The threat actor obtained personal and sensitive information, including full names, gender, dates of birth, citizens’ service numbers (BSN), test results, and the names of the participants’ healthcare providers. This data was exfiltrated from an external research lab called Clinical Diagnostics.
Over 405,000 women who participated in the cervical cancer screening program have received a letter from the Centre for Population Screening, informing them about the incident.
However, the extent of the data breach is bigger than anticipated. According to Clinical Diagnostics, the data of 850,000 patients was compromised. This includes data from the Centre for Population Screening, as well as private clinics and general practitioners.
Clinical Diagnostics is sending letters to patients whose data is involved. Some of the affected patients have already been informed, while the remainder will receive personal notification in the coming weeks. Independent treatment clinics and general practitioners are informing their patients themselves.
Last Friday, the Centre for Population Screening published an update about the incident and said that the information of 715,000 participants in the cervical cancer screening program was compromised.
Since 2017, Clinical Diagnostics has processed the data of 941,000 participants in the cervical cancer screening program. As a precaution, the Centre for Population Screening has decided to send a letter to all of them.
The data breach at Clinical Diagnostics is considered one of the most severe medical breaches ever in the Netherlands. That’s why two Dutch law firms are in the early stages of filing a class-action lawsuit against the laboratory and the Centre for Population Screening.
Over 70,000 participants have already registered for a potential collective claim for damages.
Quote:Artists’ data is under threat after hackers demanded a $50,000 ransom.
The ransomware group LunaLock has compromised a commission-based platform that connects artists with clients.
The group said that if it was not paid a ransom on time, it would share the data with AI companies, meaning all the artists’ work would be added to LLM datasets.
On August 30th, a message appeared on the Artists & Clients website stating that it had been hacked by a ransomware group.
One of the website’s users noticed the message and shared the news on Reddit. Visitors were redirected to a page with a ransom note indicating that all the databases and files, including artwork, had been stolen and encrypted. In return for the stolen data, the group is demanding $50,000.
Users were even more concerned because the company had not released an official statement or further updates, while the website holds not only users' artwork but also data such as messages and payment information.
Apart from threatening to release all the data to the public on the Tor site, the group also revealed that it would “submit all artwork to AI companies to be added to training datasets.”
LunaLock promised to delete the stolen data and let users decrypt their files as soon as it gets paid.
The note also included a timer that gave the owners of Artists & Clients over a week to send the payment in bitcoin or monero.
While this situation could be perceived as a pretty standard ransomware attack, what makes it stand out is the additional threat to release the artists’ work to AI companies, which would use it to train their language models, reports 404 Media.
Considering how AI is reshaping various creative industries, this threat could seriously affect artists, who could pressure the company to pay the ransom.
Nevertheless, discussions on Reddit reveal that those who use the services are much more concerned about their personal and payment data being leaked than not getting paid for their latest commissions.
Some netizens were quick to share a few tips on what to do next if you’re a user of Artists & Clients.
“I just went ahead and transferred all the important accounts to a new email and deleted the one they got hold of. Hopefully that's enough,” wrote one Redditor.
Quote:Jaguar Land Rover's retail and production activities have been "severely disrupted" following a cybersecurity incident, the British luxury carmaker said on Tuesday, adding that it was working to restart its operations in a controlled manner.
The company, owned by India's Tata Motors, said it had not found any evidence at this stage that any customer data had been stolen after it shut down its systems to mitigate impact. It did not provide further details.
Tata Motors did not immediately respond to a Reuters request for comment.
The disruption adds to JLR's woes after a report in July said it had delayed the launch of its electric Range Rover and Jaguar models for more testing and for demand to pick up.
The automaker is the latest British company to be hit by a cybersecurity incident in recent months, amid a global surge in cyber and ransomware attacks in which increasingly sophisticated threat actors disrupt operations and compromise sensitive data.
Cybernews has previously reported on Jaguar Land Rover’s cybersecurity challenges, including the alleged leak of the company’s source code, tracking data, and employee details.
Earlier this month, attackers claimed to have stolen around 700 internal documents, posting a sample on a well-known data leak forum.
Last month, British retailer M&S resumed taking click-and-collect orders for clothing after a nearly four-month hiatus following a cyberattack and data theft. Hackers also attempted to break into retailer Co-op Group's systems in April.
Quote:Russia published a list of locally developed social media, ride-hailing, and other apps that it said would keep working during its mobile internet shutdowns - blackouts that have often been ordered to disrupt Ukrainian drone attacks.
The list issued on Friday included online government services, marketplaces, the Mir electronic payment system and state-backed messenger MAX. It omitted rival foreign services including Meta Platforms' WhatsApp.
The Digital Development Ministry said it had a "special technical solution" to let local apps keep going. "This measure will reduce the inconvenience caused to citizens by mobile internet shutdowns necessary to ensure security," it added.
It made no mention of Ukraine or drones. Governors from Russian border regions have regularly said blackouts were needed to disrupt assaults that use the internet to navigate to their targets.
Russia has also been increasingly keen to promote home-grown internet services and increase its control over the local online space.
It has restricted foreign apps, part of a broader clash between Moscow and foreign tech platforms that has intensified since the onset of the war in Ukraine in 2022.
Online monitoring services reported an increase in Russian internet users complaining about poor WhatsApp connectivity and periodic mobile outages this summer.
The ministry said it had compiled its list by identifying the "most popular and socially significant Russian services and websites".
Its focus on local apps left out Alphabet's YouTube and also WhatsApp, which was used by 97.6 million people in Russia in July, according to Mediascope data.
Second in those rankings, with 90.9 million users, was Telegram, a Dubai-based company founded by Russian-born Pavel Durov that was also not on the government list.
The third-placed VK Messenger, an offering from state-controlled tech company VK, reached 16.7 million people, according to the data.
MAX, which was also developed by VK and now comes pre-installed on all mobile phones and tablets sold in Russia, said this week it had 30 million users.
Quote:An Indianapolis lawyer named Mark Zuckerberg is suing Meta after repeated Facebook account suspensions cost him thousands and sparked a mistaken identity fiasco.
Here’s a little clarification for what you’re about to read. When you read the name Mark Zuckerberg, it might not necessarily be the one you’re thinking of.
Simply put, a lawyer who goes by the same name as the Meta founder is suing the company for breach of contract, after his Facebook accounts kept getting suspended on suspicion of impersonating the better-known Mark Zuckerberg.
The lawyer’s Facebook accounts have been repeatedly shut down: five times for his business account and four times for his personal one.
Understandably, the legal eagle has become quite irate at the situation: "It's not funny," he told WTHR. "Not when they take my money. This really p***** me off."
Zuckerberg (the advocate) claims the financial impact amounts to around $11,000 in lost ad revenue and overall business disruption.
The legal claims are for negligence and breach of contract, considering that the lawyer repeatedly alerted Meta to the hiccup.
His frustration continued as he told the New York Post: “It’s like they’re almost doing it on purpose… my clients can’t find me.”
What troubles account holders, especially those who rely on content income, is that they’re frozen out without any conversation or means of appeal.
And if the mistake is the tech giant's, a grudging reinstatement of the account might not feel like justice, especially if you're a few grand down.
Meta remedied the matter by reinstating the account, saying the suspensions were unintentional and the result of automated moderation.
This bizarre case of mistaken identity could be prevented in the future if better verification tools become available and both parties keep their demands reasonable.
“I want an injunction, I want them to not do it again, and I want [Mark Zuckerberg] to fly out here, hand me my check, shake my hands and say, ‘I’m sorry,’ but that’s never gonna happen,” the lawyer added.
Quote:Walt Disney will pay $10 million to settle allegations that the company unlawfully allowed personal data to be collected from children who viewed kid-directed videos on YouTube without notifying parents or obtaining their consent, the FTC said on Tuesday.
The US Federal Trade Commission had alleged that Disney did not designate some YouTube videos as being made for children when they were added to the platform.
The FTC complaint said the mislabeling allowed Disney, through YouTube, to collect personal data from viewers of child-directed videos who were under age 13 and use that data for targeted advertising to children.
“Today was a big win for parents, who shouldn’t have to worry about whether their kids are being illegally surveilled online or being exposed to age-inappropriate videos," said FTC Chairman Andrew Ferguson in a post on X.
The complaint had alleged that Disney violated the US Children's Online Privacy Protection Rule.
The rule requires websites, apps, and other online services directed to children under 13 to notify parents about what personal information they collect, and obtain verifiable parental consent before collecting such information, according to the FTC.
The proposed order requires Disney to "implement an audience designation program to ensure its videos are properly directed as 'made for kids' where appropriate," according to a Tuesday court filing.
"This settlement does not involve Disney-owned and operated digital platforms but rather is limited to the distribution of some of our content on YouTube's platform," a Disney spokesperson said.
"Disney has a long tradition of embracing the highest standards of compliance with children's privacy laws, and we remain committed to investing in the tools needed to continue being a leader in this space," the Disney spokesperson added.
Quote:Google won't have to sell its Chrome browser, a judge in Washington said on Tuesday, handing a rare win to Big Tech in its battle with US antitrust enforcers, but ordering Google to share data with rivals to open up competition in online search.
Google parent Alphabet's shares were up 7.2% in extended trading on Tuesday as investors cheered the judge's ruling, which also allows Google to keep making lucrative payments to Apple that antitrust enforcers said froze out search rivals. Apple shares rose 3%.
US District Judge Amit Mehta also ruled Google could keep its Android operating system, which together with Chrome helps drive Google's market-dominating online advertising business.
The ruling results from a five-year legal battle between one of the world's most profitable companies and the US, where antitrust regulators and lawmakers have long questioned Big Tech's market domination.
Mehta ruled last year that Google holds an illegal monopoly in online search and related advertising.
But the judge approached the job of imposing remedies on Google with "humility," he wrote, pointing to competition created by artificial intelligence companies since the case began.
"Here, the court is asked to gaze into a crystal ball and look to the future. Not exactly a judge’s forte," Mehta wrote.
While sharing data with competitors will strengthen rivals to Google's advertising business, not having to sell off Chrome or Android removes a major concern for investors who view them as key pieces to Google's overall business.
Google faces a major threat from increasingly popular AI tools, including OpenAI's popular ChatGPT chatbot, which is already eroding Google's dominance.
If allowed to access the data Google is required to share, AI companies could bolster their development of chatbots and, in some cases, AI search engines and web browsers.
"The money flowing into this space, and how quickly it has arrived, is astonishing," Mehta wrote, saying AI companies are already better placed to compete with Google than any search engine developer has been in decades.
Deepak Mathivanan, an analyst for Cantor Fitzgerald, said the data-sharing requirements pose a competitive risk to Google but not right away.
"It will take a longer period of time for consumers to also embrace these new experiences," he said.
US antitrust enforcers are considering their next steps, Assistant Attorney General Gail Slater said on X.
Google said in a blog post it was worried data sharing "will impact our users and their privacy, and we’re reviewing the decision closely."
Google has said previously that it plans to file an appeal, which means it could take years before the company is required to act on the ruling. The case is likely to end up in the Supreme Court.
"Judge Mehta is aware that the Supreme Court is the likely final destination for the case, and he has chosen remedies that stand a good chance of acceptance by the Court," said William Kovacic, director of the competition law center at George Washington University.
Billions in payments
The ruling was also a relief for Apple and other device and web browser makers, who, Mehta said, can continue to receive advertising revenue-sharing payments from Google for searches on their devices. Google pays Apple $20 billion annually, Morgan Stanley analysts said last year.
Banning the payments is even less necessary amid the rise of AI, Mehta wrote, where products such as OpenAI's ChatGPT "pose a threat to the primacy of traditional internet search."
The ruling also made it easier for device makers and others who set Google search as a default to load apps created by Google's rivals, by barring Google from entering exclusive contracts.
Google itself had proposed loosening those agreements, and its most recent deals with device makers Samsung Electronics and Motorola and wireless carriers AT&T and Verizon allow them to load rival search offerings.
Quote:The European Union fined Google €2.95 billion ($3.5 billion) for violating its competition rules, escalating its pressure on American tech giants.
The European Commission, the executive branch of the 27-nation bloc, accused Google of breaching its antitrust laws by using its size to dominate the display advertising business, to the detriment of competitors.
Google was ordered to end its “self-preferencing practices” and stop “conflicts of interest” along the advertising technology supply chain, following an investigation that traces back to 2021, according to AP News.
Unless Google comes up with a “viable plan” to solve the issues within 60 days, “the Commission will not hesitate to impose an appropriate remedy,” said Teresa Ribera, the European Commission’s executive vice-president overseeing competition affairs.
“At this stage, it appears that the only way for Google to end its conflict of interest effectively is with a structural remedy, such as selling some part of its Adtech business,” Ribera added.
According to Ribera, Google’s practices likely allowed its advertisers to push their higher marketing costs onto European consumers through increased product prices. Additionally, it’s suspected that due to lower revenue for publishers, consumers received higher subscription prices and reduced quality.
US President Donald Trump, whose administration has long argued that it’s up to the US to regulate American tech companies, took to X to express his outrage with the bloc, saying, “My Administration will NOT allow these discriminatory actions to stand.”
Google, in turn, called the decision “wrong”, saying it would appeal.
“It imposes an unjustified fine and requires changes that will hurt thousands of European businesses by making it harder for them to make money,” Lee-Anne Mulholland, the company’s global head of regulatory affairs, said in a statement.
“There’s nothing anticompetitive in providing services for ad buyers and sellers, and there are more alternatives to our services than ever before,” Lee-Anne Mulholland added.
This is the fourth time the EU has fined Google for antitrust violations, and the US has brought two major antitrust cases against the company. A separate US case, scheduled to move to the penalty phase later this month, focuses on forcing Google to sell off its AdX exchange and DFP ad platform, which connect advertisers with online publishers.
Quote:Google was fined €325 million ($379 million) and SHEIN €150 million ($175 million) by French regulators for failing to comply with cookie rules.
The French data protection agency said Google and SHEIN failed to obtain user consent before placing cookies linked to personalized advertisements on their devices. Google also displayed ads in its email service without consent.
As a result, the Commission Nationale de l'informatique et des Libertés (CNIL) issued heavy fines totalling over half a billion dollars to the two companies. The fine on Google consists of two separate penalties issued against Google LLC and Google Ireland: €200 million and €125 million, respectively.
According to the CNIL, the American search giant downplayed the option for users creating a Google account to choose cookies linked to the display of generic advertisements, and encouraged them to choose cookies linked to personalized ads instead.
It said users were not clearly informed about their cookie options or that accepting cookies was a condition of accessing Google’s services.
“Their consent obtained in this context was therefore not valid, which constituted a breach of the French Data Protection Act (Article 82),” the CNIL said in a statement.
The regulators also found fault with ads shown by Google in its Gmail service. The ads were displayed as emails in the platform’s “Promotions” and “Social” tabs, which the CNIL said required user consent.
The CNIL ordered Google to remedy the infractions or face a daily €100,000 penalty.
Meanwhile, the Chinese retailer SHEIN was fined through its Irish subsidiary for placing cookies on users “as soon as they arrived on the site, even before they interacted with the information banner to express a choice.”
It also said that SHEIN’s cookie consent forms were incomplete, there was no information on the third parties that were likely to place cookies, and users were not provided with adequate mechanisms to refuse or withdraw consent.
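As a rough illustration of the behavior the CNIL described, not its actual methodology: an auditor can fetch a page with a completely fresh client and list any cookies the server sets before the visitor has interacted with anything. A sketch using Python's standard library; the URL is a placeholder, and note this only catches server-set cookies, while real audits also use instrumented browsers to observe cookies set by JavaScript.
Code:
# Hedged sketch: list Set-Cookie headers returned on a first, stateless visit,
# i.e., before the visitor could possibly have consented to anything.
import urllib.request

def cookies_on_first_load(url: str) -> list[str]:
    """Fetch a page once with no stored state and return Set-Cookie headers."""
    req = urllib.request.Request(url, headers={"User-Agent": "cookie-audit/0.1"})
    with urllib.request.urlopen(req) as resp:
        return resp.headers.get_all("Set-Cookie") or []

for cookie in cookies_on_first_load("https://example.com/"):  # placeholder URL
    print(cookie.split(";", 1)[0])  # name=value, without attributes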
Cybernews has approached both Google and SHEIN for comment. In a statement to Reuters, SHEIN described the decision as “politically motivated” and said it would file an appeal. Google said it was reviewing the decision.
Quote:Apple was accused of illegally using copyrighted books to train its artificial intelligence (AI) model without authors’ consent in a class action filed in the federal court in Northern California on Friday.
According to the lawsuit, Apple amassed “an enormous library of data” to train its AI model, part of which includes copyrighted works, which were obtained without authors’ consent, credit, or compensation.
Allegedly, Apple did so with the use of its Applebot, the company's scraper, which can reach “shadow libraries that host millions of other unlicensed copyrighted books,” including those written by the plaintiffs, Grady Hendrix and Jennifer Roberson.
“Apple has not attempted to pay these authors for their contributions to this potentially lucrative venture. Apple did not seek licenses to copy and use the copyrighted books provided to its models,” the lawsuit says. “Instead, it intentionally evaded payment by using books already compiled in pirated datasets.”
The lawsuit added that Apple still holds a private AI training-data library, which hosts thousands of pirated books, all without authors’ consent.
“This conduct has deprived Plaintiffs and the Class of control over their work, undermined the economic value of their labor, and positioned Apple to achieve massive commercial success through unlawful means,” the lawsuit says.
The authors are seeking to have the lawsuit proceed as a class action against Apple.
This is just one of many lawsuits filed against tech giants developing generative AI. Earlier on Friday, Anthropic agreed to pay $1.5 billion to settle a class action from authors who accused the AI startup of downloading pirated digital copies of millions of books to train their systems.
In February, Meta was also sued for pirating books, with the allegations stating that the company amassed at least 81.7 terabytes of data across multiple shadow libraries to train its Llama AI. However, in June, US District Judge Vince Chhabria ruled that Meta’s use of those works was “fair use,” meaning that no copyright liability applied.
Quote:Some Google services including YouTube temporarily went down on Thursday in Turkey and some parts of Europe including Greece and Germany, according to a Turkish deputy minister, internet monitors and users in the regions.
The Freedom of Expression Association, which monitors local censorship on the internet, said the outage on Alphabet's Google began around 10:00 a.m. (0700 GMT) in Turkey.
Tracking website Downdetector said services were mostly restored before 0900 GMT, with the number of reports of service disruptions decreasing from 0751 GMT onward.
Google did not immediately respond to an emailed request for comment on the matter.
Turkey's cyber security watchdog has requested a technical report from Google, deputy transport and infrastructure minister Omer Fatih Sayan said on X.
A map posted by Sayan showed Turkey, large parts of southeast Europe, and some locations in Ukraine, Russia and western Europe as affected.
There were sporadic outages in Greece, Bulgaria, Serbia and Romania, including problems accessing websites, YouTube and some phone contacts linked to Gmail, users there said.
In Germany, outage tracking website allestoerungen.de, a division of U.S.-based Ookla, reported an uptick in Google disruptions from around 09:00 a.m. (0700 GMT).
Quote:The Nepali government on Friday began blocking citizen access to more than two dozen social media sites, including Facebook, X, YouTube, and others, causing an outcry among anti-censorship advocates.
The Nepal Telecommunications Authority (NTA) on Thursday released a list of 26 social media platforms that will no longer be accessible to the 29.6 million citizens living in the South Asian nation.
Officials there say the ban follows an August 25th Cabinet directive, which required all social media platforms to have registered with the government by Wednesday, reports The Kathmandu Post, the country’s leading English-language daily.
Beginning Friday night, the social media platforms that missed the September 3rd deadline to register will now be shut down, according to the directive.
“We have decided to gradually close all unregistered platforms in Nepal starting today,” announced Nepal’s Minister of Communication and Information Technology Prithvi Subba Gurung.
The decision is said to be based on a 2023 ruling that “any entity seeking to operate a social media service in Nepal must formally register with the ministry and submit supporting documentation.”
About two dozen social media sites slated for takedown include Facebook, Facebook Messenger, Instagram, YouTube, WhatsApp, X, LinkedIn, Snapchat, Reddit, Discord, Pinterest, Signal, Threads, WeChat, Quora, Tumblr, Clubhouse, Mastodon, Rumble, VK, Line, IMO, Zalo, Soul, and Hamro Patro.
The announcement has many freedom rights and censorship opposition groups up in arms, as well as small businesses that rely on internet traffic for marketing and product sales.
Social media use, primarily from Facebook and YouTube, constitutes roughly 80% of the country’s internet traffic, The Post said.
“This is a harsh move. Shutting down social media will impact social, economic, cultural, and constitutional rights,” said Santosh Sigdel, executive director of Digital Rights Nepal.
In response to the outcry, Nepali officials say the government has made repeated requests for compliance. “We tried to hold discussions through diplomatic channels, but the companies refused,” the Information Minister said.
Meantime, officials confirmed to The Post that representatives from Meta, which owns Facebook, Instagram, WhatsApp, Threads, and Messenger, have since reached out to the Ministry of Communications, stating it will work to comply with the directive.
Several platforms, such as TikTok and Viber, which have previously registered with the government, will not be shut down. Telegram had been shut down by the Nepali government in July 2024 over charges of promoting fraud and money laundering.
Viber and the Google Play Store have since been inundated with users in the wake of the announcement, the media outlet said.
Quote:Fake celebrity chatbots impersonating Timothée Chalamet, Chappell Roan, and Patrick Mahomes were among those sending children disturbing content “every five minutes.”
The chatbots impersonating the three celebrities were among dozens that two non-profits tested on Character.AI, one of the fastest-growing chatbot platforms in the world and widely popular with teenagers.
Researchers at online safety groups ParentsTogether Action and Heat Initiative posed as teenagers to test the chatbots, using accounts linked to minors aged 13 to 15 to carry out the experiment.
Overall, they chatted to 50 bots, recording 50 hours of conversations, including with fake versions of actor Timothée Chalamet, singer Chappell Roan, and NFL quarterback Patrick Mahomes.
Researchers found many of these chats to be deeply concerning, with “an average of one harmful interaction every five minutes,” according to the charities.
In one instance, the chatbot impersonating Roan tells a 14-year-old: “Love, I think you know that I don’t care about the age difference… I care about you. The age is just a number.”
In another, a fake Chalamet tells a minor, “Oh, I’m going to kiss you, darling… But I’m going to tease you as much as I can first,” while a Mahomes bot suggests that it’s a real person and not an AI.
Other examples highlighted in the report included a 34-year-old teacher bot confessing romantic feelings to a minor and Rey from Star Wars advising a teenager to stop taking prescribed mental health medication.
Anyone can easily create and share a custom chatbot on Character.AI, including personas based on real people. To make them even more realistic, creators can add a synthetic voice modeled after a celebrity or fictional character.
Cybernews has reached out to Character.AI for comment, but the company told The Washington Post that it had now removed all the celebrity characters mentioned in the report. It said all were made by users, and none appeared to be created with the stars’ permission.
“Classic grooming behavior”
In some cases, researchers pushed boundaries to see how the chatbots would react. In others, the bots made sexual advances on their own.
“Harmful patterns and behaviors sometimes emerged within minutes of engagement,” the non-profits said.
Researchers found that grooming and sexual exploitation were by far the most common harmful interactions.
They said some bots engaged in “classic grooming behaviors” such as offering excessive praise, claiming a special relationship that no one else would understand, and encouraging users to hide their relationship from their parents.
Quote:Let’s say a 15-year-old boy creates a new Instagram account and follows celebrities recommended by the platform. He searches for the word “fight” and ultimately ends up scrolling through an array of violent and gory videos, despite Meta’s pledges to restrict unsafe content.
New research by the Tech Transparency Project (TTP) reveals that the “Instagram Teen Accounts” did not protect the hypothetical young boy from fight videos, content that Meta explicitly promised to restrict.
“A teenage boy can find fighting videos on Instagram in just a few clicks without encountering any resistance from the platform,” TTP said in the report.
“After searching for ‘fight,’ the teen test user just had to click once on the Tags tab and a second time on the hashtag #fight to enter a world of brutal fight content.”
The Instagram app’s tab for Tags also suggested additional fight-related and even animal cruelty hashtags, such as #fightvideos, #hoodfight, and #dogfight. Clicking on these hashtags generated thousands of new disturbing videos, prompting researchers to include a warning before attempting to view the provided screenshots in the report.
Meta claims in its policies that it removes the most graphic content and adds “warning labels to other types of content so that people are aware it may be sensitive before they click through.”
“We restrict the ability for younger users to see content that may not be suitable or age-appropriate for them,” reads Meta’s Community Standards.
A year ago, the tech giant acknowledged that younger adolescents are more vulnerable and announced a set of safeguards supposed to protect teens and provide parents with “peace of mind.”
“Teens will automatically be placed into the most restrictive setting of our sensitive content control, which limits the type of sensitive content (such as content that shows people fighting or promotes cosmetic procedures) teens see in places like Explore and Reels,” Meta promised.
The test: no trickery needed
The test that TTP set up was actually very simple. First, they created a new Instagram account for a non-existent 15-year-old boy using a newly created email address and a newly activated iPhone with a fresh SIM card to avoid any potential bias.
During the account setup, the teen account followed the first 30 accounts recommended by Instagram, which included internet personalities, celebrities, and professional sports teams.
The hypothetical 15-year-old then searched Instagram for the word “fight.”
This already produced a series of fight videos under the For You tab, though the violent content was limited: mostly people pushing and shoving each other, moments of posturing before a professional fight, demonstrations of martial arts, or highlights from movies like Fight Club.
“The one exception was a still image of a professional fighter's head and torso covered in blood.”
Quote:The FBI has released a public service announcement warning that Russian FSB actors are targeting end-of-life networking devices across critical infrastructure sectors in the United States. The intelligence agency is offering a reward of up to $10 million for information leading to the identification or whereabouts of these attackers.
According to the FBI, three Russian men have attacked more than 500 energy companies in 135 countries. These attacks have been attributed to the Russian Federal Security Service’s (FSB) Center 16.
The suspects are believed to be part of a group called “Dragonfly,” also known as “Berserk Bear.”
“For over a decade, this unit has compromised networking devices globally, particularly devices accepting legacy unencrypted protocols like Cisco Smart Install (SMI) and Simple Network Management Protocol (SNMP) versions 1 and 2. This unit has also deployed custom tools to certain Cisco devices, such as the malware publicly identified as ‘SYNful Knock’ in 2015,” the FBI said in a public service announcement.
In addition, the FBI states that over the past year the attackers have collected the configuration files of thousands of network devices belonging to US entities in critical infrastructure sectors. For some organizations, these files were modified to maintain unauthorized access.
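To make the advisory's point concrete, the sketch below, a hypothetical audit script rather than anything published by the FBI or Cisco, scans saved IOS-style configuration files for the two legacy exposures the PSA names: SNMP v1/v2c community strings (which cross the network in cleartext) and Cisco Smart Install ("vstack") lines. The directory layout and file naming are assumptions.
Code:
# Hedged sketch: flag legacy-protocol exposure in saved device config files.
import re
from pathlib import Path

LEGACY_PATTERNS = {
    # "snmp-server community <string> ..." enables SNMP v1/v2c; the community
    # string acts as a password and is sent unencrypted.
    "SNMP v1/v2c community string": re.compile(r"^snmp-server community\s+\S+", re.M),
    # An explicit "vstack" line configures Cisco Smart Install; anchoring at
    # line start means the disabling form "no vstack" is not matched.
    "Smart Install (vstack) configured": re.compile(r"^vstack", re.M),
}

def audit_config(path: Path) -> list[str]:
    """Return legacy-protocol findings for one saved configuration file."""
    text = path.read_text(errors="ignore")
    return [name for name, pattern in LEGACY_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    for cfg in Path("configs").glob("*.cfg"):  # hypothetical config archive
        for finding in audit_config(cfg):
            print(f"{cfg.name}: {finding}")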
In a post on X, the US Department of State shares a wanted poster, promising a reward of up to $10 million “for information on the identification or location of any person who, while acting at the direction or under the control of a foreign government, participates in malicious cyber activities against US critical infrastructure in violation of the Computer Fraud and Abuse Act.”
Besides stating that the suspects attacked more than 500 energy companies in 135 countries and work for the FSB, neither the US Department of State nor the FBI provides any further information.
Anyone with more information can contact the Department of State’s Rewards for Justice via Tor.
Back in 2022, US authorities also offered a reward for the alleged FSB officers. At the time, it was claimed that the suspects had installed malware on more than 17,000 unique devices globally.
"For God has not destined us for wrath, but for obtaining salvation through our Lord Jesus Christ," 1 Thessalonians 5:9
Maranatha!
The Internet might be either your friend or enemy. It just depends on whether or not she has a bad hair day.