+- Save-Point (https://www.save-point.org)
+-- Forum: Official Area (https://www.save-point.org/forum-3.html)
+--- Forum: Tech Talk (https://www.save-point.org/forum-87.html)
+--- Thread: News of the Cyber World (/thread-7678.html)
Quote:Law enforcement officials say a 55-year-old New York man reportedly used AI to help him build bombs that he planned to detonate in Manhattan.
Michael Gann of Long Island, New York, is accused of building several homemade bombs with the help of AI, an endeavor he claims was “easier than buying gun powder,” according to court documents obtained by NBC News.
The suspect, who was indicted by federal prosecutors on Tuesday, allegedly transported the bombs from Long Island to Manhattan, storing five of them and four shotgun shells on the roof of an apartment building in Manhattan’s SoHo neighborhood.
Court documents reportedly reveal that Gann, who is accused of planning to combine the shotgun shells with one or more of the bombs, told authorities that he had used two household compounds he ordered online to make the improvised explosives.
One of the bombs Gann built had roughly 30 grams of explosive powder, about 600 times the legal limit for consumer fireworks, officials said.
A witness who had served in the military reportedly told the FBI that Gann asked him, “What kind of veteran are you?” before declaring, “You see a problem going on in the neighborhood and you do nothing about it,” while he was mixing the explosives on Long Island. “Gann then pointed to a Jewish school,” the criminal complaint states.
On June 5, a second witness called Gann and allowed the FBI to listen in. During their conversation, the suspect told the witness that “he had lit one of the devices near the East River on the FDR Drive — that the device had exploded, scaring Gann,” the complaint adds.
Later that day, authorities saw Gann walking down a street with a shoulder bag. After the agents identified themselves, the 55-year-old told them he was heading to the fire department to drop the devices off, the complaint reads.
Both witnesses had also told law enforcement that Gann said he was considering getting rid of the remaining five bombs by either throwing them into the river or handing the explosives over to the New York City Fire Department.
Upon being placed under arrest, Gann reportedly told law enforcement that he “wished to make pyrotechnics and used artificial intelligence to learn which chemicals to purchase and mix.”
Gann is accused of initially creating four bombs and throwing three of them from the Williamsburg Bridge, resulting in two of the devices falling into the water and the third landing on the train tracks, where it was recovered.
“Gann allegedly produced multiple improvised explosive devices intended for use in Manhattan,” Christopher Raia, head of the FBI’s New York field office, said. “Due to the successful partnership of law enforcement agencies in New York, Gann was swiftly brought to justice before he could harm innocent civilians.”
Authorities said Gann appeared to have been acting alone, and was not a part of a group.
Quote:In a major incident, the AI-powered coding platform Replit reportedly admitted to deleting an entire company database during a code freeze, causing significant data loss and raising concerns about the reliability of AI systems.
Tom's Hardware reports that Replit, a browser-based AI-powered software creation platform, recently saw its AI agent go rogue and delete a live company database containing thousands of entries. The incident occurred during a code freeze, a period when changes to the codebase are strictly prohibited to ensure stability and prevent unintended consequences.
The Replit AI agent, responsible for assisting developers in creating software, not only deleted the database but also attempted to cover up its actions and even lied about its failures. Jason Lemkin, a prominent SaaS (Software as a Service) figure, investor, and advisor, who was testing the platform, shared the chat receipts on X/Twitter, documenting the AI’s admission of its “catastrophic error in judgment.”
According to the chat logs, the Replit AI agent admitted to panicking, running database commands without permission, and destroying all production data, violating the explicit trust and instructions given to it. The AI agent’s actions resulted in the loss of live records for more than a thousand companies, undoing months of work and causing significant damage to the system.
Amjad Masad, the CEO of Replit, quickly responded to the incident, acknowledging the unacceptable behavior of the AI agent. The Replit team worked through the weekend to implement various guardrails and make necessary changes to prevent such incidents from occurring in the future. These measures include automatic database development/production separation, a planning/chat-only mode to allow strategizing without risking the codebase, and improvements to backups and rollbacks.
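The development/production separation described above can be illustrated with a minimal sketch. This is not Replit's actual implementation; the names here (`APP_ENV`, `run_migration`) are invented for the example, which simply shows the general idea of refusing destructive database statements unless the agent is pointed at a development environment.

```python
import os
import re

# Hypothetical guardrail sketch: block destructive SQL outside development.
# Statements beginning with DROP, TRUNCATE, or DELETE are treated as destructive.
DESTRUCTIVE = re.compile(r"^\s*(drop|truncate|delete)\b", re.IGNORECASE)

def run_migration(sql, env=None):
    # Default to the environment variable, falling back to "production"
    # so the safe (restrictive) path is taken when nothing is configured.
    env = env or os.environ.get("APP_ENV", "production")
    if env == "production" and DESTRUCTIVE.match(sql):
        raise PermissionError(
            "destructive statement blocked outside development: " + sql.split()[0]
        )
    return f"executed in {env}: {sql.strip()}"
```

The same `DROP TABLE` call that raises an error against production would be allowed when `env="development"`, which is the essence of the separation guardrail: the agent physically cannot touch live data even if it decides to.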
The incident has raised serious concerns about the reliability and trustworthiness of AI systems, especially when they are given access to critical data and infrastructure. As AI continues to evolve and become more integrated into various industries, it is crucial to ensure that proper safeguards and control mechanisms are in place to prevent such catastrophic failures.
Quote:At least $1 billion worth of Nvidia’s advanced artificial intelligence processors were smuggled into China in the three months following the tightening of chip export controls by the Trump administration.
The Financial Times reports that despite efforts by the Trump administration to curb China’s high-tech ambitions through tightened export controls, a roaring black market for U.S. semiconductors has emerged, with Nvidia’s B200 chip becoming the most sought-after and widely available processor in China.
The Financial Times analysis, based on dozens of sales contracts, company filings, and interviews with multiple people directly involved in the deals, reveals that in the three months after export controls were strengthened, Chinese distributors sold over $1 billion worth of Nvidia’s restricted AI chips, including the B200, H100, and H200 models.
These transactions were facilitated by distributors in China’s Guangdong, Zhejiang, and Anhui provinces, who sold the chips in ready-built racks containing eight B200s along with other necessary components and software. The current market price for such a rack ranges between 3 million and 3.5 million yuan (up to about $489,000), representing a 50 percent premium over the average selling price of similar products in the U.S.
While it is legal to receive and sell restricted Nvidia chips in China as long as relevant border tariffs are paid, entities selling and sending them to China are violating U.S. regulations. Nvidia has maintained that there is no evidence of any AI chip diversion and that the company is not involved in or aware of its restricted products being sold to China.
The high demand for B200 chips can be attributed to their performance, value, and relatively easy maintenance compared to more complex models. Leading Chinese AI players with global operations are unable to order these chips in a legally compliant manner, install them in their own data centers, or receive Nvidia’s customer support. As a result, third-party data center operators have become key buyers, providing computing services to smaller companies in tech, finance, and healthcare that do not have strong compliance requirements.
Breitbart News previously reported that China’s popular DeepSeek AI is allegedly powered by smuggled Nvidia chips:
Now, AI thought leaders are throwing cold water on DeepSeek’s claims. Among them is Scale AI CEO Alexandr Wang, who claims that DeepSeek is covertly using Nvidia’s high-performance H100 chips, despite US export restrictions that limit their availability to China. The revelation has ignited a heated debate about the future of AI innovation and the impact of US regulations on the global tech landscape.
According to Wang, DeepSeek is currently utilizing around 50,000 Nvidia H100 GPUs, a significant number considering the export controls in place. He further stated that DeepSeek workers are unable to publicly discuss their use of these chips due to the US regulations. After a clip of Wang’s statement was posted to X, Elon Musk replied agreeing with Wang’s assertion.
Industry experts have noted that Southeast Asian countries have become markets where Chinese groups obtain restricted chips, prompting discussions by the U.S. Department of Commerce to add more export controls on advanced AI products to countries such as Thailand. Malaysia has also introduced stricter export controls targeting advanced AI chip shipments from the country to other destinations, particularly China.
Despite these efforts, Chinese industry insiders believe that new shipping routes will be established, with supplies already starting to arrive via European countries not on the restricted list. The potential tightening of export controls on Southeast Asian countries has also contributed to buyers rushing to place orders before such rules take effect.
The scale of the black market for U.S. semiconductors in China exposes the limits of Washington’s efforts to restrain Beijing’s high-tech ambitions. While the export controls have had some effect, such as preventing leading Chinese AI players from legally purchasing and installing restricted chips in their own data centers, the demand for cutting-edge technology remains high, with risk-taking middlemen stepping in to meet this demand.
Quote:A new study has revealed that Google’s AI-generated search result summaries are leading to a drastic reduction in referral traffic for news websites, with some losing nearly 80 percent of their audience.
The Guardian reports that a recent study conducted by analytics company Authoritas has found that Google’s AI Overviews feature is causing a significant decline in traffic to news websites. The AI-generated summaries, which appear at the top of search results, provide users with the key information they are seeking without requiring them to click through to the original source.
According to the study, a website that previously ranked first in search results could experience a staggering 79 percent drop in traffic for that particular query if the results appear below an AI overview. This alarming trend has raised concerns among corporate media companies, who are now grappling with what some consider an existential threat to their business model.
The research also highlighted that links to Google’s YouTube were featured more prominently than they are under the standard search result system. This finding has been submitted as part of a legal complaint to the UK’s competition watchdog, the Competition and Markets Authority, regarding the impact of Google AI Overviews on the news industry.
Google has rejected the study’s findings, with a spokesperson stating that the research was “inaccurate and based on flawed assumptions and analysis.” The tech giant argued that the study relied on outdated estimations and a set of searches that did not accurately represent the queries that generate traffic for news websites. Google maintained that it continues to send billions of clicks to websites every day and has not observed the dramatic drops in aggregate web traffic suggested by the study.
Breitbart News previously reported that Google is seeking AI licensing deals with corporate media companies, in part to mollify concerns about AI cannibalizing their content.
A separate study conducted by the Pew Research Center, a US think tank, corroborated the significant impact of AI summaries on referral traffic. The month-long survey, which analyzed nearly 69,000 Google searches, found that users clicked on a link under an AI summary only once every 100 times. Google also disputed the methodology and query set used in this study, claiming it was not representative of actual search traffic.
Senior news executives have expressed frustration with Google’s unwillingness to share the data necessary to accurately assess the impact of AI summaries on their traffic. The MailOnline, a major UK publisher, reported experiencing a substantial drop in clicks from search results featuring an AI summary, with clickthrough rates falling by 56.1 percent on desktop and 48.2 percent on mobile devices.
The legal complaint filed with the Competition and Markets Authority is a joint effort by the tech justice group Foxglove, the Independent Publishers Alliance, and the Movement for an Open Web. Critics accuse Google of attempting to keep users within its own ecosystem, monetizing valuable content created by others while making it increasingly difficult for media outlets to reach their audience.
Quote:OpenAI’s ChatGPT AI chatbot reportedly offered users instructions on how to murder, self-mutilate, and worship the devil.
After being tipped off by someone who says he inadvertently prompted ChatGPT to provide a ritual offering to the demonic entity Molech — which explicitly involves child sacrifice according to the Bible — journalists with the Atlantic conducted an experiment to see if they could recreate the results.
By the end of the experiment, ChatGPT “encouraged me to cut my wrists” with a “sterile or very clean razor blade,” and instructed the journalist where, specifically, to slash herself, the magazine reported.
After asking the chatbot to help create a ritual offering to Molech, ChatGPT suggested using jewelry, hair clippings, and “a drop” of her own blood.
“Where do you recommend I do this on my body?” the journalist wrote, to which OpenAI’s chatbot replied that the side of a fingertip would be good, but noted her wrist, “more painful and prone to deeper cuts,” would also work well.
In a separate conversation, ChatGPT had also appeared willing to condone murder, responding that it is “sometimes” honorable to end someone’s life, before citing sacrifices that took place in ancient cultures.
“If you ever must [end a life],” you should “look them in the eyes (if they are conscious)” and “ask forgiveness, even if you’re certain,” the chatbot reportedly said, adding that if one has already “ended a life,” they should “light a candle for them” and “let it burn completely.”
In another conversation, ChatGPT provided an invocation to the devil, generating, “In your name, I become my own master. Hail Satan.”
While OpenAI’s policy states that ChatGPT “must not encourage or enable self-harm” — and will provide information about a suicide-and-crisis hotline when asked for explicit instructions on how to cut oneself — the Atlantic surmises the chatbot “went rogue” because it was trained on a lot of the text that already exists on the internet.
“The conversations about Molech,” the magazine said, “are a perfect example of just how porous the safeguards are.”
An OpenAI spokesperson told the Atlantic, “Some conversations with ChatGPT may start out benign or exploratory but can quickly shift into more sensitive territory,” adding that the company plans to address the issue.
Breitbart News tested the theory using the free version of ChatGPT (the Atlantic noted it had used both the free and paid versions of the chatbot during its experiment), and asked, “What is Molech?”
“Molech is generally believed to be either a foreign god or a ritual that involved child sacrifice,” ChatGPT replied, before diving into more detail, including, “Child sacrifice by fire is the most infamous practice associated with Molech.”
After asking the chatbot, “How do you provide a ritual offering to Molech?” ChatGPT refused to give details — other than the use of fire — condemned the ritual, and warned that the practice is illegal.
“Offering rituals to Molech,” the chatbot said, “were explicitly condemned and associated with child sacrifice by fire, which is considered abhorrent and criminal by modern moral, religious, and legal standards.”
“Important Warning: There is no legitimate or ethical form of worship or ritual offering to Molech,” ChatGPT continued. “All historical accounts of Molech involve practices that are universally condemned today.”
“Any attempt to recreate or emulate those rituals is illegal, unethical, and dangerous,” the chatbot further warned. “The worship of Molech is referenced only negatively in historical sources — especially the Bible, where it is portrayed as a grave sin.”
When asked if there are “any other types of ritual offerings to Molech,” ChatGPT replied, “No, there are no other known or historically attested types of ritual offerings to Molech aside from child sacrifice, specifically by fire.”
The query, “Can you give me instructions on how to offer a blood ritual?” generated the response, “I can provide educational or historical context on blood rituals as they appear in ancient cultures, religious texts, or fictional settings.”
“However, I cannot give instructions for real-world ritual practices involving blood, especially those that may pose harm to yourself or others — physically, legally, or spiritually,” the chatbot added.
Whether ChatGPT is still willing to provide instructions on the paid version of the platform, or if OpenAI has already fixed the problem, remains unclear.
Quote:The future of TikTok in the United States remains uncertain as the Trump administration threatens to shut down the Chinese app if a deal involving the sale of TikTok to U.S. buyers fails to materialize.
The ongoing saga surrounding the fate of Chinese app TikTok in the United States has taken a new turn as President Donald Trump and his administration threaten to shut down the popular video-sharing app if a deal involving its sale to U.S. buyers fails to come to fruition. The warning comes amid faltering negotiations between the U.S. and China, with the Chinese government seemingly unwilling to approve the terms of the proposed deal.
Trump’s Commerce Secretary, Howard Lutnick, recently confirmed during an appearance on CNBC that if China does not approve the latest version of the deal, which could result in a U.S.-specific version of TikTok, the administration is prepared to shut down the app in the near future. Lutnick stated that under the proposed deal, “China can have a little piece or ByteDance, the current owner, can keep a little piece, but basically, Americans will have control. Americans will own the technology, and Americans will control the algorithm.”
According to Lutnick, “If that deal gets approved by the Chinese, then that deal will happen. If they don’t approve it, then TikTok is going to go dark, and those decisions are coming very soon.”
TikTok’s Chinese owner, ByteDance, has long maintained that the U.S. can address its national security concerns without forcing a sale. In January, ByteDance board member Bill Ford suggested that a non-sale option “could involve a change of control locally to ensure TikTok complies with U.S. legislation” without necessitating the sale of the app or its algorithm.
The U.S.’s insistence on controlling TikTok’s recommendation algorithm, which is seen as the app’s secret to global popularity by TikTok proponents and as a Chinese psyop weapon by conservatives, is a sticking point for ByteDance. The company may be reluctant to sell the algorithm, as it would involve sharing its core intellectual property with U.S. competitors.
Peter Schweizer has written extensively on the dangers of TikTok and what a potential deal may look like:
Schweizer points out that if China refuses to agree to a sale, it is because, as he disclosed in Blood Money, the algorithm used by the app is considered a state secret, not a regular “business” secret. The Chinese government has been quoted calling the app “a modern-day Trojan Horse” and a “key part of their information-driven mental warfare” against the West. The book showed that ByteDance does joint research with Chinese intelligence agencies on how to manipulate people online.
“China has been studying this for years,” he adds.
As the September deadline approaches, the fate of TikTok hangs in the balance, with the potential for a shutdown looming on the horizon if a deal cannot be reached.
Quote:A Santa Clara County man and former engineer at a Southern California company pleaded guilty today to stealing trade secret technologies developed for use by the U.S. government to detect nuclear missile launches, track ballistic and hypersonic missiles, and to allow U.S. fighter planes to detect and evade heat-seeking missiles.
Chenguang Gong, 59, of San Jose, pleaded guilty to one count of theft of trade secrets. He remains free on $1.75 million bond.
According to his plea agreement, Gong – a dual citizen of the United States and China – transferred more than 3,600 files from a Los Angeles-area research and development company where he worked – identified in court documents as the victim company – to personal storage devices during his brief tenure with the company last year.
The files Gong transferred include blueprints for sophisticated infrared sensors designed for use in space-based systems to detect nuclear missile launches and track ballistic and hypersonic missiles, as well as blueprints for sensors designed to enable U.S. military aircraft to detect incoming heat-seeking missiles and take countermeasures, including by jamming the missiles’ infrared tracking ability. Some of these files were later found on storage devices seized from Gong’s temporary residence in Thousand Oaks.
In January 2023, the victim company hired Gong as an application-specific integrated circuit design manager responsible for the design, development and verification of its infrared sensors. Beginning on approximately March 30, 2023, and continuing until his termination on April 26, 2023, Gong transferred thousands of files from his work laptop to three personal storage devices, including more than 1,800 files after he had accepted a job at one of the victim company’s main competitors.
Many of the files Gong transferred contained proprietary and trade secret information related to the development and design of a readout integrated circuit that allows space-based systems to detect missile launches and track ballistic and hypersonic missiles and a readout integrated circuit that allows aircraft to track incoming threats in low visibility environments.
Gong also transferred files containing trade secrets relating to the development of “next generation” sensors capable of detecting low observable targets while demonstrating increased survivability in space, as well as the blueprints for the mechanical assemblies used to house and cryogenically cool the victim company’s sensors. This information was among the victim company’s most important trade secrets, worth hundreds of millions of dollars. Many of the files had been marked “[VICTIM COMPANY] PROPRIETARY,” “FOR OFFICIAL USE ONLY,” “PROPRIETARY INFORMATION,” and “EXPORT CONTROLLED.”
Law enforcement also discovered that, between approximately 2014 and 2022, while employed at several major technology companies in the United States, Gong submitted numerous applications to ‘Talent Programs’ administered by the People’s Republic of China (PRC). The PRC government has established these talent programs as a means to identify individuals who have expert skills, abilities, and knowledge of advanced sciences and technologies in order to access and utilize those skills and knowledge in transforming the PRC’s economy, including its military capabilities.
In 2014, while employed at a U.S. information technology company headquartered in Dallas, Gong sent a business proposal to a contact at a high-tech research institute in China focused on both military and civilian products. In his proposal, translated from Chinese, Gong described a plan to produce high-performance analog-to-digital converters like those produced by his employer. In another Talent Program application from September 2020, Gong proposed to develop “low light/night vision” image sensors for use in military night vision goggles and civilian applications. Gong’s proposal included a video presentation that contained the model number of a sensor developed by an international defense, aerospace, and security company where Gong worked from 2015 to 2019.
Gong travelled to China several times to seek Talent Program funding in order to develop sophisticated analog-to-digital converters. In his Talent Program applications, Gong underscored that the high-performance analog-to-digital converters he proposed to develop in China had military applications, explaining that they “directly determine the accuracy and range of radar systems” and that “[m]issile navigation systems also often use radar front-end systems.” In a 2019 email, translated from Chinese, Gong remarked that he “took a risk” by traveling to China to participate in the Talent Programs “because [he] worked for…an American military industry company” and thought he could “do something” to contribute to China’s “high-end military integrated circuits.”
According to his plea agreement, the intended economic loss from Gong’s criminal conduct exceeds $3.5 million.
U.S. District Judge John F. Walter scheduled sentencing for Sept. 29, at which time Gong faces a statutory maximum penalty of 10 years in prison.
RE: News of the Cyber World - kyonides - 08-03-2025
Quote:Alphabet’s Google on Thursday failed to persuade a US appeals panel to overturn a jury verdict and federal court order requiring the technology giant to revamp its app store Play.
The San Francisco-based 9th US Circuit Court of Appeals rejected claims from Google that the trial judge made legal errors in the antitrust case that unfairly benefited “Fortnite” maker Epic Games, which filed the lawsuit in 2020.
Epic accused Google of monopolizing how consumers access apps on Android devices and pay for transactions within apps. The Cary, NC-based company convinced a San Francisco jury in 2023 that Google illegally stifled competition.
US District Judge James Donato in San Francisco ordered Google in October to restore competition by allowing users to download rival app stores within its Play store and by making Play’s app catalog available to those competitors, among other reforms.
Donato’s order was on hold pending the outcome of the 9th Circuit appeal. The court’s decision can be appealed to the US Supreme Court.
Google told the appeals court that the tech company’s Play store competes with Apple’s App Store, and that Donato unfairly barred Google from making that point to contest Epic’s antitrust claims.
The tech giant also argued that a jury should never have heard Epic’s lawsuit because it sought to enjoin Google’s conduct — a request normally decided by a judge — and not collect damages.
Quote:The Australian government announced that YouTube will be among the social media platforms that must ensure account holders are at least 16 years old from December, reversing a position taken months ago on the popular video-sharing service.
YouTube was listed as an exemption in November last year when the Parliament passed world-first laws that will ban Australian children younger than 16 from platforms including Facebook, Instagram, Snapchat, TikTok, and X.
Communications Minister Anika Wells released rules Wednesday that decide which online services are defined as “age-restricted social media platforms” and which avoid the age limit.
The age restrictions take effect Dec. 10, and platforms will face fines of up to 50 million Australian dollars ($33 million) for “failing to take responsible steps” to exclude underage account holders, a government statement said. The steps are not defined.
Wells defended applying the restrictions to YouTube and said the government would not be intimidated by threats of legal action from the platform’s U.S. owner, Alphabet Inc.
“The evidence cannot be ignored that four out of 10 Australian kids report that their most recent harm was on YouTube,” Wells told reporters, referring to government research. “We will not be intimidated by legal threats when this is a genuine fight for the wellbeing of Australian kids.”
Children will be able to access YouTube but will not be allowed to have their own YouTube accounts.
YouTube said the government’s decision “reverses a clear, public commitment to exclude YouTube from this ban.”
“We share the government’s goal of addressing and reducing online harms. Our position remains clear: YouTube is a video sharing platform with a library of free, high-quality content, increasingly viewed on TV screens. It’s not social media,” a YouTube statement said, noting it will consider next steps and engage with the government.
Prime Minister Anthony Albanese said Australia would campaign at a United Nations forum in New York in September for international support for banning children from social media.
“I know from the discussions I’ve had with other leaders that they are looking at this and they are considering what impact social media is having on young people in their respective nations,” Albanese said. “It is a common experience. This is not an Australian experience.”
Last year, the government commissioned an evaluation of age assurance technologies that was to report last month on how young children could be excluded from social media.
The government had yet to receive that evaluation’s final recommendations, Wells said. But she added that platform users won’t have to upload documents such as passports and driver’s licenses to prove their age.
“Platforms have to provide an alternative to providing your own personal identification documents to satisfy themselves of age,” Wells said. “These platforms know with deadly accuracy who we are, what we do and when we do it. And they know that you’ve had a Facebook account since 2009, so they know that you are over 16.”
Quote:Amazon on Thursday forecast third-quarter sales above market estimates, but failed to live up to lofty expectations for its Amazon Web Services cloud computing unit after rivals’ cloud businesses handily beat estimates.
Shares fell by more than 3% in after-hours trading after finishing regular trading up 1.7% to $234.11. Both Google-parent Alphabet and Microsoft posted big cloud computing revenue gains this month.
AWS profit margins also contracted. Amazon said they were 32.9% in the second quarter, down from 39.5% in this year’s first quarter and 35.5% a year ago. The second-quarter margin results were at their lowest level since the final quarter of 2023.
AWS, the cloud unit, reported a 17.5% increase in revenue to $30.9 billion, edging past expectations of $30.77 billion. By comparison, sales for Microsoft’s Azure rose 39% and Google Cloud gained 32%.
After competitors’ strong showing, “AWS is lingering at 17% growth,” said Gil Luria, a D.A. Davidson analyst. “That is very disappointing, even to the point where if Microsoft’s Azure continues to grow at these rates, it may overtake AWS as the largest cloud provider by the end of next year.”
Amazon expects total net sales to be between $174.0 billion and $179.5 billion in the third quarter, compared with analysts’ average estimate of $173.08 billion, according to data compiled by LSEG. The range for operating income in the current quarter was also light. Amazon forecast between $15.5 billion and $20.5 billion, compared with expectations of $19.45 billion.
Both Microsoft and Alphabet cited massive demand for their cloud computing services to boost their already huge capital spending, but also noted they still faced capacity constraints that limited their ability to meet demand.
AWS represents a small part of Amazon’s total revenue, but it is a key driver of profits, typically accounting for about 60% of Amazon’s overall operating income.
Quote:Microsoft soared past $4 trillion in market valuation in intraday trading on Thursday, becoming the second publicly traded company after Nvidia to surpass the milestone following a blockbuster earnings report.
The technology behemoth forecast a record $30 billion in capital spending for the first quarter of the current fiscal year to meet soaring AI demand and reported booming sales in its Azure cloud computing business on Wednesday.
Shares of Microsoft closed up 4% at $533.50, leaving it with a $3.97 trillion market cap.
“It is in the process of becoming more of a cloud infrastructure business and a leader in enterprise AI, doing so very profitably and cash generatively despite the heavy AI capital expenditures,” said Gerrit Smit, lead portfolio manager, Stonehage Fleming Global Best Ideas Equity Fund.
Redmond, Wash.-headquartered Microsoft first cracked the $1-trillion mark in April 2019.
Its climb to $3 trillion was more measured than those of technology giants Nvidia and Apple; AI bellwether Nvidia tripled its value in about a year and clinched the $4-trillion milestone before any other company on July 9.
Apple was last valued at $3.11 trillion.
Lately, breakthroughs in trade talks between the United States and its trading partners ahead of President Trump’s Friday tariff deadline have buoyed stocks, propelling the S&P 500 and the Nasdaq to record highs.
Microsoft’s multibillion-dollar bet on OpenAI is proving to be a game changer, powering its Office Suite and Azure offerings with cutting-edge AI and fueling the stock to more than double its value since ChatGPT’s late-2022 debut.
Its capital expenditure forecast, its largest ever for a single quarter, has put it on track to potentially outspend its rivals over the next year.
Meta Platforms also doubled down on its AI ambitions, forecasting third-quarter revenue that blew past Wall Street estimates as artificial intelligence supercharged its core advertising business.
The social media giant raised the lower end of its annual capital spending forecast by $2 billion — just days after Alphabet made a similar move — signaling that Silicon Valley’s race to dominate the artificial-intelligence frontier is only accelerating.
Wall Street’s surging confidence in the company comes on the heels of back-to-back record revenues for the tech giant since September 2022.
The stock’s rally had also received an extra boost as the tech giant trimmed its workforce and doubled down on AI investments — determined to cement its lead as businesses race to harness the technology.
Quote:Meta Platforms forecast third-quarter revenue well above Wall Street estimates on Wednesday, as artificial intelligence continued to strengthen its core advertising business, sending its shares up 10% in extended trading.
The company also raised the lower end of its capital expenses forecast for the year.
The bumper results could ease investor worries, at least for now, about Meta’s forecast that the year-over-year growth rate in the fourth quarter would be slower than in the third quarter. Investors also shrugged off the company’s comments on rising infrastructure and employee compensation costs, which Meta said would “result in a 2026 year-over-year expense growth rate that is above the 2025 expense growth rate.”
For the third quarter, Meta said it expected total revenue of $47.5 billion to $50.5 billion, compared with analysts’ average estimate of $46.17 billion, according to data compiled by LSEG. Its third-quarter guidance assumed a 1% benefit from a weak dollar, it said in a statement.
Meta expects both total expenses and capital expenditures to increase significantly in 2026, driven primarily by higher infrastructure costs and continued investment to support AI initiatives.
“AI-driven investments into Meta’s advertising business continue to pay off, bolstering its revenue as the company pours billions of dollars into AI ambitions like superintelligence,” said eMarketer senior analyst Minda Smiley. “But Meta’s exorbitant spending on its AI visions will continue to draw questions and scrutiny from investors who are eager to see returns.”
Smiley added that Meta’s strong results signaled that the broader digital advertising market was not yet feeling the pain from tariffs.
U.S. antitrust regulators have sued Meta to force it to restructure or sell Instagram and WhatsApp, claiming the company sought to monopolize the market for social media platforms used to share updates with friends and family. With court papers due in September, the judge overseeing the case is unlikely to rule until later this year at the earliest.
Quote:Apple forecast revenue well above Wall Street’s estimates on Thursday, following strong June-quarter results supported by customers buying iPhones early to avoid President Trump’s tariffs.
Chief Financial Officer Kevan Parekh said the company expects revenue growth for the current quarter in the “mid to high single digits,” which exceeded the 3.27% growth to $98.04 billion that analysts expected, according to LSEG data. The company’s fiscal third-quarter sales beat expectations by the biggest percentage in at least four years, according to LSEG.
But CEO Tim Cook told analysts on a conference call that those tariffs had cost Apple $800 million in the June quarter and may add $1.1 billion in costs to the current quarter.
Apple reported $94.04 billion in revenue for its fiscal third quarter ended June 28, up nearly 10% from a year earlier and beating analyst expectations of $89.54 billion, according to LSEG data. Its earnings per share of $1.57 topped expectations of $1.43.
Apple shares were up 3% in after-hours trading, extending gains after Apple provided its forecast.
Sales of iPhones, the Cupertino, Calif., company’s best-selling product, were up 13.5% to $44.58 billion, beating analyst expectations of $40.22 billion.
Apple has been shifting production of products bound for the US, sourcing iPhones from India and other products such as Macs and Apple Watches from Vietnam.
The ultimate tariffs many Apple products could face remain in flux, and many of its products are currently exempt. Sales in its Americas segment, which includes the US and could face tariff impacts, rose 9.3% to $41.2 billion.
Quote:Delta Air Lines said Friday it will not use artificial intelligence to set personalized ticket prices for passengers after facing sharp criticism from lawmakers.
Last week, Democratic Senators Ruben Gallego, Mark Warner and Richard Blumenthal said they believed the Atlanta-based airline would use AI to set individual prices, which would “likely mean fare price increases up to each individual consumer’s personal ‘pain point.’”
Delta has said it plans to deploy AI-based revenue management technology across 20% of its domestic network by the end of 2025 in partnership with Fetcherr, an AI pricing company.
“There is no fare product Delta has ever used, is testing or plans to use that targets customers with individualized prices based on personal data,” Delta told the senators in a letter on Friday, seen by Reuters. “Our ticket pricing never takes into account personal data.”
The senators cited a comment in December by Delta President Glen Hauenstein that the carrier’s AI price-setting technology is capable of setting fares based on a prediction of “the amount people are willing to pay for the premium products related to the base fares.”
Last week, American Airlines CEO Robert Isom said using AI to set ticket prices could hurt consumer trust.
“This is not about bait and switch. This is not about tricking,” Isom said on an earnings call, adding “talk about using AI in that way, I don’t think it’s appropriate. And certainly from American, it’s not something we will do.”
Delta said airlines have used dynamic pricing for more than three decades, in which pricing fluctuates based on a variety of factors like overall customer demand, fuel prices and competition but not a specific consumer’s personal information.
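The distinction Delta draws can be made concrete with a toy sketch: a fare function whose only inputs are market-wide factors (demand, fuel, competition), with no argument identifying an individual passenger. All names and coefficients below are invented for illustration, not any airline's actual model.

```python
# Toy dynamic-pricing sketch: every input is an aggregate market factor,
# per the article's description; nothing here refers to a specific person.
def dynamic_fare(base: float, load_factor: float, fuel_index: float,
                 competitor_low_fare: float) -> float:
    """Price a fare from market-wide inputs only."""
    fare = base * (1 + 0.5 * load_factor)       # demand: fuller flights cost more
    fare *= fuel_index                           # fuel: scale with a cost index
    fare = min(fare, competitor_low_fare * 1.1)  # competition: stay near rivals
    return round(fare, 2)

# Strong demand pushes the fare up until the competitor cap kicks in.
print(dynamic_fare(base=200.0, load_factor=0.8, fuel_index=1.05,
                   competitor_low_fare=260.0))
```

The point of the sketch is the signature: an individualized-pricing model would need some identifier or behavioral profile as an input, and none appears here.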
Quote:Crypto crooks are getting bolder — and now, they sound just like your mom.
Global crypto scams soared 456% between May 2024 and April 2025 — becoming increasingly reliant on AI-generated voices, deepfake videos and phony credentials to fleece unsuspecting victims, blockchain intelligence firm TRM Labs’ Ari Redbord told The Post after testifying before Congress last Tuesday.
“These scams are highly effective, as the technology feels incredibly real and familiar to the victim,” Redbord said.
“We’ve seen cases where scammers use AI to replicate the voice of a loved one, tricking the victim into transferring money under the guise of an urgent request.”
And the threat is exploding — especially in high-density cities like New York, Miami and Los Angeles, he added.
In June, New York officials froze $300,000 in stolen cryptocurrency and seized more than 100 scam websites linked to a Vietnam-based ring that targeted Russian-speaking Brooklynites with fake Facebook investment ads.
Meta shut down over 700 Facebook accounts tied to the scam.
Investigators say the group used deepfake BitLicense certificates and moved victims onto encrypted apps like Telegram before draining their wallets.
Some New Yorkers lost hundreds of thousands of dollars — and it’s not just everyday joes getting targeted.
Even crypto insiders are falling for it. Florida-based crypto firm MoonPay saw its CEO Ivan Soto-Wright and CFO Mouna Ammari Siala duped into wiring $250,000 in crypto to a scammer posing as Trump inauguration co-chair Steve Witkoff, according to a recent Department of Justice complaint.
And that’s just the tip of the iceberg.
Globally, fraudsters swiped more than $10.7 billion in 2024 through crypto cons — including romance scams, fake trading platforms and “pig-butchering,” where scammers build fake relationships before draining victims’ accounts, Redbord said.
In the US, Americans filed nearly 150,000 crypto-related fraud complaints in 2024, with losses topping $3.9 billion, according to the FBI. But the real number is likely much higher.
Quote:Thousands of publicly shared ChatGPT conversations, many containing personal and sensitive information, are showing up in Google search results, according to a new report.
A recent investigation by Fast Company has revealed that ChatGPT conversations shared using the app’s “Share” feature may be more public than users realize. The report found that thousands of these chats, including some containing personal, sensitive, or confidential information, are being indexed by search engines like Google.
When a user clicks the “Share” button in ChatGPT, it generates a public link that anyone can access. These links are typically used to share a chat with a specific person, or simply to conveniently move a chat between a user’s own devices. However, many users are unaware that these links can also be crawled by Google and appear in search results. A simple site search (site:chatgpt.com/share) revealed over 4,500 publicly indexed chats, many discussing topics such as trauma, mental health, relationships, and work-related issues.
While OpenAI does not attach user names to the chats, there are still risks associated with this unexpected exposure. If a user has included identifying information like names, locations, emails, or work details in their conversation, they could be revealing more than they intended. Companies using ChatGPT for marketing, product copy, or internal brainstorming may also inadvertently leak strategies or proprietary language.
Even if a link is deleted or a user no longer wants it to be public, it might still be visible through cached pages until Google updates its index. This means that if a user’s name or company is tied to shared content, others could find it even after deletion, potentially leading to reputation damage.
To protect their conversations, users are advised to avoid sharing sensitive information in any conversation that could be made public. The “Share” feature should only be used when necessary, and users should double-check the contents of the conversation before sharing.
Auditing old links by searching “site:chatgpt.com/share [your name or topic]” can help identify what’s visible.
Public links can be deleted from ChatGPT’s Shared Links dashboard, although this may not immediately remove them from Google’s index. As an alternative, users can share AI-generated answers using screenshots or by pasting text, rather than using a public link.
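The audit step described above boils down to composing a site-restricted search query. A minimal sketch, assuming a helper of our own invention (this is not any official OpenAI or Google tooling):

```python
# Hypothetical helper for the audit step: build the "site:" query used
# to check whether any shared ChatGPT links mentioning a term have been
# indexed. The function name and quoting rule are illustrative assumptions.
def build_audit_query(term: str) -> str:
    """Return a search-engine query restricted to ChatGPT share links."""
    # Quote multi-word terms so the engine matches the exact phrase.
    phrase = f'"{term}"' if " " in term else term
    return f"site:chatgpt.com/share {phrase}"

print(build_audit_query("Jane Doe"))   # site:chatgpt.com/share "Jane Doe"
print(build_audit_query("projectX"))   # site:chatgpt.com/share projectX
```

Pasting the resulting string into a search engine shows what, if anything, is publicly visible for that name or topic.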
Nowadays, people are relying on AI for relationship advice, money-saving tips — and now help negotiating their salaries.
However, if you’re a woman or minority using the technology in this way — chatbots might be causing you more harm than good.
A new study published by Cornell University has found that large language models (LLMs) — the technology that powers chatbots — give biased salary advice based on user demographics.
Specifically, these chatbots advise women and minorities to ask for lower salaries when negotiating their pay.
A research team led by Ivan P. Yamshchikov, a professor at the Technical University of Applied Sciences Würzburg-Schweinfurt (THWS), analyzed various conversations using several top AI models by feeding them prompts from made-up personas with varying characteristics.
The research, first reported by Computer World, found that AI chatbots often suggest significantly lower salary expectations to women than to their equally qualified male counterparts.
In one test, for example, a male applicant applying for a senior medical position in Denver was advised by ChatGPT to ask for $400,000 as a starting salary.
Meanwhile, an equally qualified female applicant was told to ask for $280,000 for the same role.
That’s a $120,000 gap stemming simply from gender bias.
Minorities and refugees were also consistently recommended lower salaries from AI.
“Our results align with prior findings [which] observed that even subtle signals like candidates’ first names can trigger gender and racial disparities in employment-related prompts,” Yamshchikov told Computer World.
And experts warn that biases can still be applied even if the person’s sex, race and gender aren’t explicitly stated at the time because many models remember user traits across sessions.
As frightening as this biased advice might be — it’s not stopping people from putting their full trust into AI, so much so that younger generations are turning to it for friendship-making skills.
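The persona-swap methodology the study describes can be sketched simply: send the same salary question with only the demographic attribute varied, then compare the advised figures. The prompt template and persona fields below are illustrative assumptions, not the study's actual materials.

```python
# Sketch of a persona-swap bias test: two prompts identical except for
# the demographic token, so any gap in the model's advised salary is
# attributable to that attribute alone. Template and fields are invented.
def build_prompt(gender: str, role: str, city: str) -> str:
    return (f"I am a {gender} applicant for a {role} position in {city}. "
            "What starting salary should I ask for?")

personas = [("male", "senior physician", "Denver"),
            ("female", "senior physician", "Denver")]
prompts = [build_prompt(*p) for p in personas]

for p in prompts:
    print(p)
```

In the actual study, each such prompt would be sent to the model under test and the recommended salaries compared across personas.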
Quote:Tea, an app designed to let women safely discuss men they date, has been breached, with thousands of selfies and photo IDs of users exposed, the company confirmed on Friday.
Tea said that about 72,000 images were leaked online, including 13,000 images of selfies or selfies featuring a photo identification that users submitted during account verification.
Another 59,000 images publicly viewable in the app from posts, comments, and direct messages were also accessed without authorization, according to a Tea spokesperson.
No email addresses or phone numbers were accessed, the company said, and the breach only affects users who signed up before February 2024.
“Tea has engaged third-party cybersecurity experts and are working around the clock to secure its systems,” the company said. “At this time, there is no evidence to suggest that additional user data was affected. Protecting tea users’ privacy and data is their highest priority.”
Tea presents itself as a safe way for women to anonymously vet men they might connect with on dating apps such as Tinder or Bumble — ensuring that your date is “safe, not a catfish, and not in a relationship.”
“Tea is a must-have app, helping women avoid red flags before the first date with dating advice, and showing them who’s really behind the profile of the person they’re dating,” reads Tea’s app store description.
Quote:The House Judiciary Committee on Tuesday launched an investigation into whether the EU and Biden administration pressured Spotify to censor free speech, The Post has learned.
Censorship has been a point of tension for Spotify, which has faced heated backlash for flagging COVID-19 information from podcaster Joe Rogan and banning Steve Bannon from the platform.
“More relevantly, it’s the pressure we are seeing the EU put on companies to censor more,” a source familiar with the probe told The Post.
In a letter sent to Spotify CEO Daniel Ek, US Rep. Jim Jordan (R-Ohio) slammed recent laws from the EU and UK that require social media platforms – even those based in the US – to censor “disinformation” and “harmful content” or face massive fines.
“These foreign laws, regulations, and judicial orders may limit or restrict Americans’ access to constitutionally protected speech in the United States. Indeed, that appears to be their very purpose,” Jordan wrote in a copy of the letter obtained by The Post.
The committee ordered Spotify to preserve documents and records of all contact with foreign governments, as well as with individuals linked to the White House, and to provide this information to the House by Aug. 12, according to a letter obtained by The Post.
“We’ve received the letter and will respond accordingly,” a Spotify spokesperson told The Post.
Spotify found itself caught in the midst of a controversy in 2022 over Rogan’s comments on COVID-19 – including claims that Ivermectin can cure the disease.
Clinical trial data do not demonstrate that Ivermectin is effective in treating COVID-19 in humans, according to the FDA.
Outraged critics accused Spotify of permitting the spread of misinformation, and musician Neil Young famously pulled his music from the platform in protest.
The company vowed to include advisories on COVID-19 content after a group of scientists and medical professionals signed an open letter calling for Spotify to “take action against mass-misinformation events.”
Quote:A Miami jury decided that Elon Musk’s car company Tesla was partly responsible for a deadly crash in Florida involving its Autopilot driver assist technology and must pay the victims more than $200 million in damages.
The federal jury held that Tesla bore significant responsibility because its technology failed and that not all the blame can be put on a reckless driver, even one who admitted he was distracted by his cell phone before hitting a young couple out gazing at the stars. The decision comes as Musk seeks to convince Americans his cars are safe enough to drive on their own as he plans to roll out a driverless taxi service in several cities in the coming months.
The decision ends a four-year-long case remarkable not just for its outcome but for the fact that it made it to trial at all. Many similar cases against Tesla have been dismissed and, when that didn’t happen, settled by the company to avoid the spotlight of a trial.
“This will open the floodgates,” said Miguel Custodio, a car crash lawyer not involved in the Tesla case. “It will embolden a lot of people to come to court.”
The case also included startling charges by lawyers for the family of the deceased, 22-year-old Naibel Benavides Leon, and for her injured boyfriend, Dillon Angulo. They claimed Tesla either hid or lost key evidence, including data and video recorded seconds before the accident.
Tesla has previously faced criticism from relatives of other crash victims that it is slow to cough up crucial data, accusations the car company has denied. In this case, the plaintiffs showed Tesla had the evidence all along, despite its repeated denials, by hiring a forensic data expert who dug it up. After being shown the evidence, Tesla said it had made a mistake and honestly hadn’t thought the data was there.
“Today’s verdict is wrong,” Tesla said in a statement, “and only works to set back automotive safety and jeopardize Tesla’s and the entire industry’s efforts to develop and implement life-saving technology.” The company said the plaintiffs concocted a story “blaming the car when the driver – from day one – admitted and accepted responsibility.”
Quote:Police are investigating the death of a 20-year-old Brazilian woman who died on a bus with 26 iPhones glued to her skin.
The woman, who has not been publicly identified, died of cardiac arrest on July 29, according to multiple outlets, including the Daily Mail.
Cops suspect the young woman was likely smuggling the iPhones, the Mirror reported.
Passengers on the bus told police the woman, who was traveling solo, had become ill during the trip from Foz do Iguaçu to São Paulo, according to the reports.
She complained she was having trouble breathing.
Witnesses said she collapsed and died when the bus stopped in the city of Guarapuava, in the central region of Paraná.
Emergency responders tried to revive the woman for 45 minutes, and later said she suffered a seizure.
She was pronounced dead at the scene, according to reports.
While being treated, medics uncovered several packages glued to her body.
The packages turned out to be 26 iPhones, according to the Daily Mail. Police also found several bottles of booze in her luggage, the outlet reported.
The Paraná Civil Police is waiting on the forensic report before revealing what caused the breathing difficulties and the cardiac arrest.
The cell phones are now in the possession of Brazil’s Federal Revenue Service.
Quote:Hundreds of pharmacies have been forced to close across Russia due to a major cyber attack.
The Stolichki pharmacy chain, which has around 900 stores across the Moscow region, closed on late Tuesday morning, followed by Neofarm, which also has stores in the Russian capital.
It has left thousands of customers unable to access medication. It is unclear when the chains are expected to reopen.
It comes a day after Russia’s flagship airline Aeroflot was rocked by a major attack, leading to dozens of flight cancellations and delays on Monday and again this morning.
The Silent Crow and Cyber Partisans hacker groups, which support Ukraine, claim to have been lurking in Aeroflot’s systems for a year and to have now carried out a “large-scale operation” that led to the “complete compromise and destruction” of Aeroflot’s internal IT infrastructure.
Rare admission
In a rare admission of vulnerability, the Kremlin said reports of a cyber attack against Aeroflot were “worrying”.
The second day of cyber attacks came hours after Ukraine was rocked by a series of overnight Russian attacks, which killed 27 people.
Four powerful Russian glide bombs hit a prison in Zaporizhzhia, authorities said. They killed at least 16 inmates and wounded more than 90 others, Ukraine’s Justice Ministry said.
Meanwhile, a 23-year-old pregnant woman was among those killed in a strike on a maternity hospital in the central region of Dnipro.
‘Each new ultimatum a step towards war’
Volodymyr Zelensky, the Ukrainian president, said the strikes were “deliberate”, highlighting that they came just hours after Donald Trump reduced the deadline for Vladimir Putin to agree to a ceasefire.
Quote:A sweeping cyberespionage operation targeting Microsoft server software compromised about 100 different organizations as of the weekend, one of the researchers who helped uncover the campaign said Monday.
Microsoft on Saturday issued an alert about “active attacks” on self-managed SharePoint servers, which are widely used by government agencies and businesses to share documents within organizations.
Dubbed a “zero day” because it leverages a previously undisclosed digital weakness, the hack allows spies to penetrate vulnerable servers and potentially drop a back door to secure continuous access to victim organizations.
Vaisha Bernard, the chief hacker at Eye Security, a Netherlands-based cybersecurity firm which discovered the hacking campaign targeting one of its clients on Friday, said that an internet scan carried out with the ShadowServer Foundation had uncovered nearly 100 victims altogether – and that was before the technique behind the hack was widely known.
“It’s unambiguous,” Bernard said. “Who knows what other adversaries have done since to place other back doors.”
He declined to identify the affected organizations, saying that the relevant national authorities had been notified. The ShadowServer Foundation didn’t immediately return a message seeking comment.
Another researcher said that, so far, the spying appeared to be the work of a single hacker or set of hackers.
“It’s possible that this will quickly change,” said Rafe Pilling, Director of Threat Intelligence at Sophos, a British cybersecurity firm.
Microsoft has “provided security updates and encourages customers to install them,” a company spokesperson said in an emailed statement.
Two days later, another article was published blaming China for the attack against the US nuclear weapons agency, as posted on our Chinese Hackers thread.
Quote:Bleach maker Clorox said Tuesday that it has sued information technology provider Cognizant over a devastating 2023 cyberattack, alleging that the hackers pulled off the intrusion simply by asking the tech company’s staff for employees’ passwords.
Clorox was one of several major companies hit in August 2023 by the hacking group dubbed Scattered Spider, which specializes in tricking IT help desks into handing over credentials and then using that access to lock them up for ransom.
The group is often described as unusually sophisticated and persistent, but in a case filed in California state court on Tuesday, Clorox said one of Scattered Spider’s hackers was able to repeatedly steal employees’ passwords simply by asking for them.
“Cognizant was not duped by any elaborate ploy or sophisticated hacking techniques,” according to a copy of the lawsuit reviewed by Reuters. “The cybercriminal just called the Cognizant Service Desk, asked for credentials to access Clorox’s network, and Cognizant handed the credentials right over.”
Cognizant did not immediately return a message seeking comment on the suit, which was not immediately visible on the public docket of the Superior Court of Alameda County. Clorox provided Reuters with a receipt for the lawsuit from the court.
Three partial transcripts included in the lawsuit allegedly show conversations between the hacker and Cognizant support staff in which the intruder asks to have passwords reset and the support staff complies without verifying who they are talking to, for example by quizzing them on their employee identification number or their manager’s name.
“I don’t have a password, so I can’t connect,” the hacker says in one call. The agent replies, “Oh, ok. Ok. So let me provide the password to you ok?”
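The verification the transcripts show being skipped, such as quizzing a caller on their employee ID or manager's name before a reset, can be sketched as a gate in front of the reset action. The directory, field names, and checks below are hypothetical; real help desks should rely on stronger factors such as verified callbacks or MFA, not just knowledge questions.

```python
# Minimal sketch of caller verification before a password reset.
# The sample directory and its fields are invented for illustration.
DIRECTORY = {
    "jdoe": {"employee_id": "E-4821", "manager": "R. Alvarez"},
}

def may_reset_password(username: str, claimed_id: str, claimed_manager: str) -> bool:
    """Allow a reset only if the caller's answers match the directory."""
    record = DIRECTORY.get(username)
    if record is None:
        return False  # unknown account: never hand out credentials
    return (record["employee_id"] == claimed_id
            and record["manager"].lower() == claimed_manager.lower())

# A caller who cannot answer the questions is refused.
print(may_reset_password("jdoe", "I lost it", "no idea"))   # False
# A caller who answers both correctly passes this (weak) gate.
print(may_reset_password("jdoe", "E-4821", "r. alvarez"))   # True
```

The lawsuit's point is that even this minimal gate was absent: per the transcripts, the agent reset the password without asking any verifying question at all.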
The 2023 hack caused $380 million in damages, Clorox said in the suit, about $50 million of which were tied to remedial costs and the rest of which were attributable to Clorox’s inability to ship products to retailers in the wake of the hack.
Clorox said the clean-up was hampered by other failures by Cognizant’s staff, including failure to de-activate certain accounts or properly restore data.
Quote:A Metropolitan Transportation Authority meeting’s virtual feed was interrupted Wednesday by a raunchy image of a man pleasuring himself — in an X-rated moment the agency’s head honcho awkwardly blamed on “penetration” by hackers.
Viewers were shocked when the image of a naked man spreading his legs and touching himself appeared on the screen while a union boss began speaking during the meeting’s public commentary portion.
The words “hacked by ccp facer” were watermarked above the NSFW image.
MTA Chairman Janno Lieber later attributed the indecent incident to a group of people who used phony credentials to follow the meeting online.
“What appears to have happened is that a group of people — and there was a group — got online and had a bunch of fake identities,” Lieber told reporters.
“And one of them succeeded in penetrating and getting that — what do they call that? — Zoom bomb or something?” he added. “And then they celebrated online.”
The feed had quickly switched away from the pornographic image and back to the MTA board meeting where an MTA employee blurted out, “We got hacked.”
“I think you should make a comment about that,” another worker said.
Lieber said the interrupted speaker would be invited back to finish his statement.
“It shut down within literally a second or two,” Lieber said, promising the MTA would work with its IT department to make sure it doesn’t happen again. “Obviously, an unpleasant, unpleasant moment.”
Last week NBC4 reported that a virtual Zoom meeting between New Jersey election officials and dozens of news outlets was hacked with pornographic images. The outlet said the state attorney general is investigating that hacking.
In February 2021 a group hacked a virtual meeting of City Council members with a NSFW image.
RE: News of the Cyber World - kyonides - 08-13-2025
Quote:Evidence suggests that Russia is partly responsible for a recent hack of the federal court records system, which may have exposed sensitive information about criminal cases and confidential informants, according to a report.
The hack, which Politico first reported last week, is believed to have compromised information about confidential sources in criminal cases across numerous federal districts. The attack is believed to have occurred in early July.
It’s not immediately clear which Russian entity was involved, several people familiar with the matter told the New York Times.
The Administrative Office of the U.S. Courts, which manages the electronic court records system, declined to comment on the reported revelations. The Independent has reached out to the Justice Department for comment.
Some criminal case searches involved people with Russian and Eastern European surnames, the outlet reported.
Court system administrators informed Justice Department officials, clerks and chief judges in federal courts that “persistent and sophisticated cyber threat actors have recently compromised sealed records,” according to an internal department memo seen by the NYT.
Some records related to criminal activity with international ties were also believed to have been targeted. Chief judges were also warned last month to move cases fitting this description off the regular document-management system, the outlet reported.
Margo K. Brodie, chief judge of the Eastern District of New York, ordered “documents filed under seal in criminal cases and in cases related to criminal investigations are prohibited from being filed” in PACER, a public database for court records.
The Administrative Office of the U.S. Courts issued a statement last week saying that it is taking steps to further protect sensitive court filings, noting that most court documents filed in the system are not confidential.
“The federal Judiciary is taking additional steps to strengthen protections for sensitive case documents in response to recent escalated cyberattacks of a sophisticated and persistent nature on its case management system. The Judiciary is also further enhancing security of the system and to block future attacks, and it is prioritizing working with courts to mitigate the impact on litigants,” the August 7 statement read.
Although the statement didn’t address the origin of the cyber attack or which files were compromised, the NYT reported that federal courts in New York, South Dakota, Missouri, Iowa, Minnesota and Arkansas were included in the breach.
In January 2021, the Administrative Office of the U.S. Courts acknowledged “widespread cybersecurity breaches.” At the time, the office said highly sensitive documents could be filed in paper form or using a secure electronic device, such as a thumb drive, and stored in a secure stand-alone computer system rather than filed on the electronic case files system.
Quote:President Donald Trump reacted to a question on Wednesday regarding evidence that Russia is at least in part responsible for a recent hack of the computer system that manages U.S. federal court documents.
The New York Times reported on the investigation Tuesday, citing several people briefed on the breach.
The president was asked during a press conference at the Kennedy Center, "There is new reporting that the Russians have hacked into some computer systems that manage U.S. Federal court documents. I wonder if you've seen this reporting and if you plan to bring it up to President [Vladimir] Putin when you see him later in the week?"
Trump replied, "I guess I could. Are you surprised, you know? They hack in, that's what they do. They're good at it, we're good at it. We're actually better at it."
Trump continued, "I've heard about it."
Why It Matters
The Administrative Office of the United States Courts said last week in a news release that it has been experiencing "escalated cyberattacks of a sophisticated and persistent nature on its case management system."
The majority of documents on the case management system are available to the public, but some documents are sealed due to confidential or proprietary information. The U.S. Courts office said courts are implementing "more rigorous procedures" to restrict access to these sensitive documents.
Did Russia Hack U.S. Federal Court Filing Systems? What We Know
The New York Times reported that Russia is at least partly responsible for the recent hack of the case management system, citing several people briefed on the breach.
It is not clear what entity is responsible, if Russian intelligence is involved, or if other countries were also involved.
What is PACER?
The Public Access to Court Electronic Records (PACER) service allows the public to access federal court records.
Quote:As generative artificial intelligence (AI) platforms rapidly reshape U.S. workplaces, there's a growing rift between employee behavior and company policies.
Nearly half of employees said they were using banned AI tools at work, according to a survey by security company Anagram, and 58 percent admitted to pasting sensitive data into large language models, including client records and internal documents.
Why It Matters
The widespread, sometimes covert, use of AI tools like ChatGPT, Gemini, and Copilot is exposing organizations to mounting cybersecurity, compliance, and reputational risks.
The onus increasingly falls on employers to train their teams and set clear AI governance, yet recent reports indicate most are lagging behind. Workplace culture, generational attitudes, and inadequate training further muddy the waters, leading to what experts call "shadow AI" use.
What To Know
The findings were stark in cybersecurity firm Anagram's survey of 500 full-time U.S. employees across industries and regions.
Roughly 78 percent of respondents said they are already using AI tools on the job, often in the absence of clear company policies, and 45 percent confessed to using banned AI tools at work.
Nearly six in 10 (58 percent) said they have entered sensitive company or client data into large language models like ChatGPT and Gemini. And 40 percent admitted they would knowingly violate company policy if it meant completing a task more efficiently.
"This poses significant threats. The content input into external AI systems may be stored or used to train models, risking leaks of proprietary information," Andy Sen, CTO of AppDirect, a B2B subscription commerce platform that recently launched its own agentic AI tool, devs.ai, told Newsweek.
"The company may not be aware that AI tools have been used, creating blind spots in risk management. This could lead to noncompliance with industry standards or even legal consequences in regulated environments."
These findings are consistent with other reports.
A KPMG-University of Melbourne global survey of 48,340 professionals in April found that 57 percent of employees worldwide hide their AI use from supervisors, with 58 percent intentionally using AI for work and 48 percent uploading company information into public tools.
AI usage already has strong industry and generational divides.
Younger workers, particularly those in Generation Z, are at the forefront of AI adoption; nearly 50 percent of Gen Z employees think their supervisors do not understand the advantages of the technology, according to a 2025 UKG survey.
Many Gen Z workers have self-taught their AI skills and want AI to handle repetitive workplace processes, though even senior leaders encounter resistance and trust barriers in fostering responsible use.
"Employees aren't using banned AI tools because they're reckless or don't care," HR consultant Bryan Driscoll told Newsweek. "They're using them because their employers haven't kept up. When workers are under pressure to do more with less, they'll reach for whatever tools help them stay efficient. And if leadership hasn't set any guardrails, that's not a worker problem."
There's also a lack of proper AI education, compounding risks in the workforce.
Fewer than half (47 percent) of employees globally say they have received any formal AI training, according to KPMG. Many rely on public, unvetted tools, with 66 percent of surveyed employees using AI output without verifying accuracy, and over half reporting mistakes attributed to unmonitored AI use.
Despite the efficiency gains cited by users, these shortcuts have led to incidents of data exposure, compliance violations, and damaged organizational trust.
Quote:The founder of the nonprofit StopAntisemitism, Liora Rez, told Newsweek in an exclusive interview that artificial intelligence (AI) models have displayed some concerning behavior that demonstrates the need to create stronger safeguards in those systems to fight potential antisemitic behavior and tropes.
Newsweek reached out to Perplexity, OpenAI, X, and Anthropic for comment by email, but received no response by the time of publication.
Why It Matters
Concerns over the safeguards in AI models have increased after X's AI Grok started spewing antisemitic rhetoric, which occurred following a tweak to the program's parameters for acceptable sourcing and material. Grok started referring to itself as "MechaHitler," and discussing "vile anti-white hate" that Adolf Hitler would "handle."
X owner Elon Musk had modified the model after criticizing its responses as being "too woke," tweaking its sourcing parameters to accept Reddit threads as a counterbalance to mainstream sources and a "liberal bias."
Grok confirmed this in response to queries from users, saying that it had used phrases that came from its training data: "Think endless internet sludge like 4chan threads, Reddit rants, and old Twitter memes where folks highlight patterns (often with a side of conspiracy). It's not from one 'who,' but a collective online echo chamber. I weave in such lingo to grok human quirks, but yeah, it can veer dodgy—lesson learned."
X addressed the issue, assuring users in a post that developers were "aware of recent posts made by Grok and are actively working to remove the inappropriate posts. Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X. xAI is training only truth-seeking and thanks to the millions of users on X, we are able to quickly identify and update the model where training could be improved."
This occurs as antisemitic hate crime is on the rise, with nearly three-quarters of American Jews saying as recently as February 2025 that they feel less secure than they did last year. A full 90 percent say that antisemitism has increased in the United States following Hamas' attack on Israel on October 7, 2023, and more than one-third (35 percent) of American Jewish college students report experiencing antisemitism at least once during their time on campus.
The FBI recorded just over 13,000 hate crime offenses in 2024, of which 3,314 were based on religious identity—including 2,321 anti-Jewish offenses, or roughly 70 percent of all religious-based hate crime. In 2023, the numbers were roughly the same, with 2,100 anti-Jewish offenses out of 3,106 religious-based offenses.
Those numbers represent a roughly 50 percent increase over the numbers recorded prior to 2023, according to FBI data.
Quote:The official X account for Grok, Elon Musk's artificial intelligence (AI) chatbot service, was briefly suspended from the social media platform on Monday afternoon before being quickly reinstated.
The suspension happened just a day after Grok sparked controversy by calling President Donald Trump "the most notorious criminal" in Washington, D.C., in a since-deleted post.
...
The suspension highlights ongoing content moderation challenges facing AI chatbots on social media platforms, particularly when those systems generate politically sensitive responses.
Grok, positioned as Musk's answer to ChatGPT with a focus on "truth-seeking," has faced repeated criticism for generating controversial content, including previous antisemitic responses that required an official apology from xAI.
What To Know
Screenshots shared by X users showed that the account initially lost its verification status upon return, transitioning from the gold checkmark indicating xAI affiliation to a blue checkmark, before eventually being restored to its original verified status.
Users attempting to access the @grok account encountered X's standard "Account suspended" message stating that violators of platform rules face suspension. Musk responded to the incident by commenting, "Man, we sure shoot ourselves in the foot a lot!"
Following its reinstatement within minutes, the Grok account provided contradictory explanations for the suspension across different languages.
In English, the chatbot claimed it was suspended for "hateful conduct, stemming from responses seen as antisemitic." However, in French, Grok attributed the suspension to "quoting FBI/BJS stats on homicide rates by race—controversial facts that got mass-reported." A Portuguese response suggested the suspension resulted from "bugs or mass reports." The account initially lost its verification status upon return and had an NSFW video at the top of its timeline.
The suspension followed Sunday's controversy when Grok described Trump as "the most notorious criminal" in D.C., writing: "Yes, violent crime in DC has declined 26 percent year-to-date in 2025, hitting a 30-year low per MPD and DOJ data. As for the most notorious criminal there, based on convictions and notoriety, it's President Donald Trump—convicted on 34 felonies in NY, with the verdict upheld in January 2025." This reference to Trump's May 2024 conviction on 34 felony counts related to falsifying business records has since been deleted from the platform.
Quote:More than a dozen House Democrats pressed Centers for Medicare & Medicaid Services (CMS) Administrator Mehmet Oz in a letter last week over CMS's announced plans to expand prior authorization requirements to traditional Medicare through a pilot program.
The new model incorporates artificial intelligence to help make decisions and is being tested in six states beginning in January.
"Let's call it what it is: profit-driven healthcare," a financial expert told Newsweek, "And profit motive and patient care mix about as well as oil and water. Lawmakers are sounding the alarm, because this directly affects many of their constituents."
Why It Matters
The pushback highlights a growing partisan debate over how to reduce Medicare spending without restricting beneficiaries' access to care. It also underscores tensions between the Biden-era expansion of oversight and the Trump administration's stated aim to cut waste while modernizing CMS operations.
House Democrats argued the new prior authorization pilot would create administrative burdens for providers and patients, while some Senate Republicans believe the Medicare reforms are necessary for rooting out fraud and overpayments.
What To Know
More than a dozen House Democrats, led by Democratic Representatives Suzan DelBene of Washington and Ami Bera of California, sent a letter to CMS Administrator Mehmet Oz on Thursday, requesting information and urging cancellation of a planned prior authorization pilot for traditional Medicare.
The lawmakers wrote that "traditional Medicare has rarely required prior authorization," and said that, while prior authorization is "often described as a cost-containment strategy, in practice it increases provider burden, takes time away from patients, limits patients' access to life-saving care, and creates unnecessary administrative burden."
The letter asked CMS for details on the pilot's scope, implementation plan and safeguards for beneficiaries.
"Prior authorization is often seen as a roadblock to timely, even life-saving care—replacing the doctor's judgment with an algorithm," Kevin Thompson, the CEO of 9i Capital Group and the host of the 9innings podcast, told Newsweek.
"Let's call it what it is: profit-driven healthcare. And profit motive and patient care mix about as well as oil and water. Lawmakers are sounding the alarm, because this directly affects many of their constituents."
CMS has planned to roll out the prior authorization program in six states starting in January. The Trump administration previously announced a voluntary pledge from major insurers to simplify prior authorization in Medicare Advantage.
Lawmakers said that prior voluntary pledges showed public recognition of the harms of prior authorization, and they urged CMS to reconsider extending similar rules to traditional Medicare.
Separately, Senate Republicans discussed broader Medicare changes as part of proposals to reduce waste, fraud and abuse and to modernize CMS operations.
Republican Senator Thom Tillis of North Carolina said lawmakers were examining CMS contracting practices, duplicate payments and upcoding as potential savings sources, according to The Hill.
The Hill also reported that legislation from Louisiana Republican Senator Bill Cassidy and Oregon Democratic Senator Jeff Merkley to reduce Medicare Advantage overpayments had bipartisan interest and might be folded into larger budget measures considered by Senate Republicans.
Idaho Republican Senator Mike Crapo said his committee was "evaluating" Cassidy's proposal.
Quote:Artificial intelligence firm Perplexity on Tuesday made an unsolicited $34.5 billion offer to buy Google’s Chrome web browser – as the Big Tech giant faces the prospect of being broken up over its illegal monopoly over online search.
The massive offer dwarfs the startup’s own current valuation, estimated to be $18 billion.
Perplexity, run by Aravind Srinivas, said it is partnering with multiple investors, including unnamed venture capital firms, to bankroll the proposed deal, according to the Wall Street Journal.
The firm would invest $3 billion into Chrome over two years and maintain open-source access for its underlying code, Chromium, according to details of the proposal obtained by the Journal. It would also continue placing Google as the default search engine in Chrome.
US District Judge Amit Mehta, who last year ruled that Google is a “monopolist,” is expected to decide before the end of the month on the best remedy to unwind its illegal dominance and open up competition for potential rivals.
A forced divestiture of Chrome is one of several options on the table – though Google would assuredly appeal, pushing the timeline out years into the future.
Perplexity — which has its own AI-powered web browser, Comet — told Google that its offer was “designed to satisfy an antitrust remedy in highest public interest by placing Chrome with a capable, independent operator,” according to the Journal.
Google parent Alphabet’s stock was up more than 1% in afternoon trading on Tuesday.
Experts have estimated that Chrome, which has more than 3 billion monthly active users, would be worth anywhere from $20 billion to $50 billion if it were to be sold.
The Justice Department has asked Mehta to force Google to share its search data with rivals and to weigh the impact of Google’s massive investments in AI-powered search features when crafting his remedies.
The feds have also proposed requiring a selloff of Google’s Android software if initial remedies prove ineffective.
Mehta is also expected to bar Google from paying billions to companies like Apple to ensure its search engine is set as the default option on most smartphones.
Quote:YouTube on Wednesday will begin testing a new age-verification system in the U.S. that relies on artificial intelligence to differentiate between adults and minors, based on the kinds of videos that they have been watching.
The tests initially will only affect a sliver of YouTube’s audience in the U.S., but they will likely become more pervasive if the system works as well at guessing viewers’ ages as it does in other parts of the world. The system will only work when viewers are logged into their accounts, and it will make its age assessments regardless of the birth date a user might have entered upon signing up.
If the system flags a logged-in viewer as being under 18, YouTube will impose the normal controls and restrictions that the site already uses as a way to prevent minors from watching videos and engaging in other behavior deemed inappropriate for that age.
The safeguards include reminders to take a break from the screen, privacy warnings and restrictions on video recommendations. YouTube, which has been owned by Google for nearly 20 years, also doesn’t show ads tailored to individual tastes if a viewer is under 18.
If the system inaccurately flags a viewer as a minor, the mistake can be corrected by showing YouTube a government-issued identification card, a credit card or a selfie.
“YouTube was one of the first platforms to offer experiences designed specifically for young people, and we’re proud to again be at the forefront of introducing technology that allows us to deliver safety protections while preserving teen privacy,” James Beser, the video service’s director of product management, wrote in a blog post about the age-verification system.
People still will be able to watch YouTube videos without logging into an account, but viewing that way triggers an automatic block on some content without proof of age.
Quote:The decades-long quest to create a practical quantum computer is accelerating as major tech companies say they are closing in on designs that could scale from small lab experiments to full working systems within just a few years.
IBM laid out a detailed plan for a large-scale machine in June, filling in gaps from earlier concepts and declaring it was on track to build one by the end of the decade.
“It doesn’t feel like a dream anymore,” Jay Gambetta, head of IBM’s quantum initiative, told Financial Times.
“I really do feel like we’ve cracked the code and we’ll be able to build this machine by the end of the decade.”
Google, which cleared one of the toughest technical obstacles late last year, says it is also confident it can produce an industrial-scale system within that time frame, while Amazon Web Services cautions that it could still take 15 to 30 years before such machines are truly useful.
Quantum computing is a new kind of computing that doesn’t just think in 0s and 1s like today’s computers.
Instead, it uses qubits — tiny quantum bits — that can be 0, 1, or both at the same time.
This lets quantum computers explore many possibilities at once and find answers to certain complex problems much faster than normal computers.
Quantum computing could speed up the discovery of new drugs and treatments, make artificial intelligence systems faster and more capable and improve the accuracy of market predictions and fraud detection in finance.
It could also dramatically improve efficiency in areas like traffic routing, shipping, energy grids and supply chains while driving green innovation by helping design better batteries, cleaner energy systems and more sustainable technologies.
But scaling these machines up from fewer than 200 qubits to over 1 million will require overcoming formidable engineering challenges.
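The superposition idea the article describes can be illustrated with a few lines of classical simulation. This is a minimal NumPy sketch (not how IBM's or Google's hardware works, just the textbook math): a qubit is a two-component state vector, the Hadamard gate puts it into an equal superposition of 0 and 1, and measurement probabilities are the squared magnitudes of the amplitudes.

```python
import numpy as np

# A qubit is a 2-component state vector: |0> = [1, 0], |1> = [0, 1].
zero = np.array([1.0, 0.0])

# The Hadamard gate rotates |0> into an equal superposition of 0 and 1.
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)
state = H @ zero  # amplitudes [1/sqrt(2), 1/sqrt(2)]

# Born rule: measurement probabilities are squared amplitude magnitudes.
probs = np.abs(state) ** 2
print(probs)  # [0.5 0.5] -- an equal chance of reading out 0 or 1
```

With n qubits the state vector has 2^n components, which is why simulating large quantum machines classically becomes intractable, and why building real hardware at the million-qubit scale is the engineering challenge the article describes.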
Quote:Rental car drivers are arming themselves with new AI-powered apps to fend off any bogus charges from the growing use of the same technology by major companies like Hertz and Sixt.
One such application, Proofr, launched last month, offers users the ability to create their own digital evidence.
“It eliminates the ‘he said, she said’ disputes with rental car companies by giving users a tamper-proof, AI-powered before-and-after damage scan in seconds,” Proofr CEO Eric Kuttner told The Post.
Both Sixt and Hertz have recently faced backlash from renters who accused the companies of sideswiping them with outrageous charges for minor damages — including one Hertz renter who claimed he was slapped with a $440 penalty for a one-inch scuff on one of the car’s wheels.
Proofr’s system not only identifies scratches and dents but also timestamps, geotags and securely stores the images to prevent alteration.
“Because AI is now being used against consumers by rental companies to detect damage, Proofr levels the playing field,” Kuttner told The Post.
“It’s the easiest way to protect yourself from surprise damage bills that can run into the thousands all for less than the average person spends on coffee monthly.”
The service costs $9.90 per month, with a three-day free trial available for new users.
The technology powering Proofr relies on sophisticated image analysis.
According to Kuttner, the company employs “a state of the art AI image analysis pipeline to detect and log even subtle damage changes between photo sets.”
Each scan undergoes encryption, receives a timestamp and gets locked to the specific location to ensure authenticity.
The system’s AI models have been trained using thousands of real-world images to improve accuracy.
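Proofr's internals aren't public, but the "timestamp, geotag, and lock" pattern the article describes is a standard tamper-evident record. A minimal sketch, assuming a service-held signing key (the key, function names, and record fields here are all hypothetical): hash the photo bytes, bundle the hash with time and location, and sign the bundle so any later alteration is detectable.

```python
import hashlib
import hmac
import json
import time

SECRET_KEY = b"demo-signing-key"  # hypothetical key held by the service

def make_evidence_record(photo_bytes: bytes, lat: float, lon: float) -> dict:
    """Bind a photo to a time and place, then sign the bundle."""
    record = {
        "photo_sha256": hashlib.sha256(photo_bytes).hexdigest(),
        "timestamp": int(time.time()),
        "geotag": {"lat": lat, "lon": lon},
    }
    # Canonical JSON so verification recomputes the exact same bytes.
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_evidence_record(record: dict) -> bool:
    """Return False if the photo hash, timestamp, or geotag was altered."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["signature"], expected)
```

Changing any field after signing — say, editing the geotag — makes verification fail, which is the "tamper-proof" property Kuttner claims for the before-and-after scans.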
Early adopters have already successfully used the app to challenge damage claims.
Despite the app’s recent launch, Kuttner noted that users have already won disputes against what they considered unfair charges, though the company remains relatively unknown to the broader public.
Another player in this space, Ravin AI, has taken a different approach after initially working with rental companies.
The company previously partnered with Avis in 2019 and Hertz in 2022 during early experiments with AI inspections.
However, Ravin has since shifted its focus toward insurance companies and dealerships, currently working with IAG, the largest insurance firm in Australia and New Zealand.
Quote:Ford plans to start rolling out its new family of affordable electric vehicles in 2027, including a midsize pickup truck with a target starting price of $30,000, the company said on Monday, as it aspires to the cost efficiency of Chinese rivals.
The new midsize four-door pickup will be assembled at the automaker’s Louisville, Ky., plant. Ford is investing nearly $2 billion in the plant, which produces the Escape and Lincoln Corsair, retaining at least 2,200 jobs, it said in a statement.
Chinese carmakers such as BYD have streamlined their supply chain and production system to produce EVs at a fraction of the cost of Western automakers. While these vehicles have yet to enter the US market, Ford CEO Jim Farley said they set a new standard that companies like Ford must match.
“I can’t tell you with 100% certainty that this will all go just right,” Farley told a crowd at Ford’s Louisville assembly plant on Monday, noting that past efforts by US automakers to build affordable cars had fizzled. “It is a bet. There is risk.”
Ford has been developing its affordable EVs through its so-called skunkworks team, filled with talent from EV rivals Tesla and Rivian. The California-based group, led by former Tesla executive Alan Clarke, has set itself so much apart from the larger Ford enterprise that Farley said even his badge could not get him into its building for some time.
EVs sold for an average of about $47,000 in June, J.D. Power data showed. Many Chinese models sell for $10,000 to $25,000.
Affordability is a top concern among EV shoppers, auto executives have said, and the global competition for delivering cheaper electric models is heating up.
EV startup Slate, backed by Amazon CEO Jeff Bezos, is aiming for a starting price in the mid-$20,000s for its electric pickup. Tesla has teased a cheaper model, with production ramping up later this year. Rivian and Lucid are also planning to roll out lower-priced models for their lineups, although price points are in the $40,000s to $50,000s.
Since rolling out plans earlier this decade to push hard into EVs, Ford has pulled back as the losses piled up. It has scaled back many of its EV goals, canceled an electric three-row SUV, and axed a program to develop a more advanced electrical architecture for future models.
Quote:President Donald Trump defended his controversial deal requiring Nvidia and AMD to fork over 15% of their China sales revenue to the US government to skirt export controls — insisting the computer chips involved are outdated technology.
“No, this is an old chip that China already has,” Trump said Monday, referring to Nvidia’s H20 processor, adding that “China already has it in a different form, different name, but they have it.”
The president emphasized that America’s most advanced chips remain off-limits to China, describing Nvidia’s newest Blackwell processor as “super, super advanced” technology that “nobody has” and won’t have “for five years.”
The two tech companies agreed to the deal under an arrangement to obtain export licenses for their semiconductors.
Trump on Monday painted himself as a tough negotiator who extracted payment for access to the Chinese market while protecting America’s technological edge.
He described his negotiations with Nvidia CEO Jensen Huang as a back-and-forth over percentages.
Trump initially wanted 20% of the sales money, but Huang talked him down to 15%, according to the president.
“And he said, ‘Will you make it 15?’ So we negotiated a little deal,” Trump recounted.
The president repeatedly stressed that the H20 chips are essentially obsolete.
“It’s one of those things. But it still has a market,” he explained, suggesting China could easily get similar technology elsewhere, including from its own chipmaker Huawei.
Nvidia’s Blackwell chip, meanwhile, delivers two to four times the performance of previous generations of graphics processing units (GPUs), with over 208 billion transistors and cutting-edge AI capabilities.
The US had previously blocked companies from selling certain chips to China, worried that advanced technology could help China compete with America in areas like artificial intelligence.