+- Save-Point (https://www.save-point.org)
+-- Forum: Official Area (https://www.save-point.org/forum-3.html)
+--- Forum: Tech Talk (https://www.save-point.org/forum-87.html)
+--- Thread: News of the Cyber World (/thread-7678.html)
Quote:CBS News reports that the tech giant has set aside $35 million to compensate users who reported covered audio issues to Apple or paid for repairs related to the problem. The settlement comes as a result of a lawsuit filed in 2019 by plaintiffs Joseph Casillas and De’Jhontai Banks, who claimed they experienced distorted sound and inability to hear callers without using the speaker function on their iPhone 7 devices purchased in 2017.
According to the complaint, “Plaintiff Casillas noticed that his phone’s sound was distorted with audible static while attempting to play a video on his phone. Plaintiff Banks noticed that she was unable to hear callers unless she used her iPhone’s speaker function. These are common indications of the Audio IC Defect.”
The lawsuit alleged that the audio chip issue was caused by inadequate casing on the phones and that Apple was aware of the problem but routinely refused to repair affected devices free of charge. However, in the settlement agreement, Apple denied the existence of any audio issues and maintained that it did nothing improper or unlawful.
How to know if you’re eligible
Customers who either paid for an iPhone 7 or 7 Plus audio repair or reported an audio issue to Apple without paying for a repair are eligible.
You must have owned an iPhone 7 or 7 Plus between September 16, 2016, and January 3, 2023.
Customers who paid for repairs are eligible to receive up to $349.
Those who reported issues but did not pay for repairs are eligible to receive up to $125.
The minimum payout is $50.
You can find out whether you qualify for compensation and submit a claim by June 3 at the settlement website.
Quote:The European Union on Friday banned four more Russian media outlets from broadcasting in the 27-nation bloc for what it calls the spread of propaganda about the invasion of Ukraine and disinformation as the EU heads into parliamentary elections in three weeks.
The latest batch of broadcasters consists of Voice of Europe, RIA Novosti, Izvestia and Rossiyskaya Gazeta, which the EU claims are all under control of the Kremlin. It said in a statement that the four are in particular targeting “European political parties, especially during election periods.”
Belgium last month opened an investigation into suspected Russian interference in June’s Europe-wide elections, saying its intelligence service has confirmed the existence of a network trying to undermine support for Ukraine.
The Czech government has imposed sanctions on a number of people after a pro-Russian influence operation was uncovered there. They are alleged to have approached members of the European Parliament and offered them money to promote Russian propaganda.
Since the war started in February 2022, the EU has already suspended Russia Today and Sputnik, along with several other outlets.
Quote:Fortune reports that at an event held at the company’s headquarters in Redmond, Washington, Microsoft CEO Satya Nadella introduced the new class of AI-imbued personal computers, emphasizing their potential to anticipate users’ needs and intentions. “We’re entering this new era where computers not only understand us, but can actually anticipate what we want and our intent,” Nadella stated.
The new AI-powered features include Windows Recall, which gives the Copilot assistant a “photographic memory” of a user’s virtual activity. Microsoft claims user privacy will be protected by settings that let users filter out unwanted tracking and by keeping the tracking data on the device itself.
The announcement comes amidst heightened competition from rival tech giants, such as Google and OpenAI, who have recently unveiled their own generative AI technologies. Google introduced a revamped search engine that incorporates AI-generated summaries and showcased its in-development AI assistant, Astra, which can interact with users based on visual inputs from a smartphone camera. Meanwhile, OpenAI launched a new version of its ChatGPT chatbot, demonstrating an AI voice assistant capable of engaging in natural conversations and even attempting to assess a person’s emotions.
Microsoft’s new AI-enhanced Windows PCs, set to launch on June 18, will be available on devices manufactured by partners such as Acer, ASUS, Dell, HP, Lenovo, and Samsung, as well as on Microsoft’s own Surface line. However, these cutting-edge features will be reserved for premium models starting at $999.
Quote:The Register reports that Google, the tech giant facing an antitrust jury trial over allegations of monopolizing the online advertising market, has taken an unconventional approach to avoid the case being heard by a jury. According to a recent federal court filing in Virginia, Google has offered the DoJ a check for an undisclosed amount, claiming that it covers the entirety of the monetary damages sought by the government.
The move comes as Google asserts that the DOJ’s request for a jury trial is unwarranted, arguing that the case is highly technical and beyond the comprehension of most prospective jurors. “DOJ manufactured a damages claim at the last minute in an attempt to secure a jury trial in a case even they describe as ‘highly technical’ and ‘outside the everyday knowledge of most prospective jurors,’” Google stated to the Register.
Despite offering the payment, Google maintains its innocence against the charges of abusing its monopoly position in the online advertising market. The company emphasizes its eagerness to defend its operations and strategies in court, albeit without the involvement of a jury.
The antitrust case, filed in early 2023, has garnered significant attention, with the number of state plaintiffs growing to 17 in addition to the DoJ. The original complaint accused Google of acquiring competitors, coercing publishers and advertisers into using its tools, and manipulating ad space auctions to eliminate or diminish any threat to its dominance in the digital advertising technology sector.
While the exact amount offered by Google remains undisclosed, the company claims that the DOJ’s damages case has significantly diminished from its initial description. Originally, the DOJ claimed damages exceeding $100 million for ads placed by certain federal agencies. However, Google asserts that through the discovery process, the claim has been reduced to less than a million dollars, an amount less than what the company spent on hiring experts for the case.
Quote:The Verge reports that the CSRB report, released in April, highlighted Microsoft’s “deprioritized” approach to enterprise security, which has led to preventable errors and serious breaches. Google, in a blog post on Monday, capitalized on these findings, emphasizing the responsibility platforms have to maintain strong security practices.
Without directly naming its rival, Google repeatedly referred to Microsoft as “the vendor” throughout its post, underlining the need for governments to use “systems and products that are secure-by-design.” The tech giant recently committed to new security principles and urged public sector entities to regularly subject their tech products and services to security recertification.
In a pointed recommendation, Google advised governments to avoid “using the same vendor for operating systems, email, office software, and security tooling” – a clear jab at Microsoft, which provides all of these services and more to its vast enterprise customer base.
Microsoft’s security woes have been compounded by an ongoing breach perpetrated by Midnight Blizzard, a Russian hacker group that has gained access to the company’s executive communications and stolen source code. This breach, along with others, was cited in the CSRB report as evidence of Microsoft’s deprioritization of enterprise security.
As Microsoft grapples with the fallout and attempts to win back trust, CEO Satya Nadella has urged employees to prioritize security whenever faced with competing priorities. However, the company has yet to outline a clear plan to address its security shortcomings.
Quote:Semafor reports that in the face of dwindling clicks from Facebook due to algorithm changes and erratic search traffic due to Google’s tinkering with searches, many digital publishers found themselves struggling by the end of 2023. However, a new lifeline emerged in the form of Apple News+, a partnership program that has buoyed struggling publications.
The leftist outlet Daily Beast joined the Apple News+ program in late 2023, making its exclusive content available to paying Apple subscribers behind the platform’s paywall. The impact was immediate, with the Beast on track to generate between $3-4 million in revenue from Apple News alone this year, surpassing its standalone subscription program without incurring significant additional costs. Clearly, Apple’s slush fund for corporate media will support the Daily Beast in pushing fake news narratives, such as its claim that Hunter Biden’s “Laptop from Hell” was stolen.
The Beast is not the only publication reaping the benefits of Apple News+. Executives from major media companies, including Condé Nast, Penske Media, Vox, Hearst, and Time, have all reported substantial revenue streams from the partnership. Time, for example, considers Apple News one of its most important partners, delivering seven-figure annual revenue and attracting 5 million unique visitors in a single month.
Apple News, whose free version is the most widely used news application in the United States, the UK, Canada, and Australia, boasted over 125 million monthly users in 2020. The launch of News+ in 2019, following Apple’s acquisition of the startup Texture, marked a deepening business relationship between the tech giant and news publishers.
Over the past two years, Apple News+ has rapidly expanded its partnerships, adding dozens of local and regional newspapers to its roster. The company licenses articles from behind publishers’ paywalls and pays them monthly based on the time audiences spend on each piece. Publishers can also sell advertising on their content and keep 100% of any affiliate link revenue generated through product recommendations and reviews.
Quote:Ars Technica reports that Neuralink, the brain-computer interface company owned by Elon Musk, has encountered challenges with its first human patient, 29-year-old Noland Arbaugh. A recent report from The Wall Street Journal reveals that only about 15 percent of the electrode-bearing threads implanted in Arbaugh’s brain continue to work properly. The remaining 85 percent of the threads became displaced, and many of those left receiving little to no signal have been shut off.
The company’s brain-chip consists of 64 threads, each thinner than a human hair and carrying 16 electrodes. In total, the chip boasts 1,024 electrodes, which are surgically implanted near neurons of interest to record signals that can be decoded into intended actions. Neuralink had previously disclosed in a May 8 blog post that “a number” of these threads had “retracted” in the first patient’s brain.
Arbaugh shared his emotional journey with the Journal, admitting that he had cried upon learning of the setback and had asked Neuralink to perform another surgery to fix or replace the implant. However, the company declined, stating that it wanted to wait for more information. Arbaugh has since recovered from the initial disappointment and remains hopeful about the technology’s potential, stating, “I thought that I had just gotten to, you know, scratch the surface of this amazing technology, and then it was all going to be taken away. But it only took me a few days to really recover from that and realize that everything I’ve done up to that point was going to benefit everyone who came after me.”
As Neuralink gears up to implant its chip into a second trial participant, the company believes it can prevent thread movement by implanting the fine wires deeper into brain tissue. The FDA, which oversees clinical trials, has reportedly given the green light for Neuralink to implant the threads 8 millimeters into the brain of the second patient, a significant increase from the 3 mm to 5 mm depth used in Arbaugh’s implantation. The company hopes to perform the second surgery sometime in June.
Quote:Bloomberg reports that in a letter addressed to shareholders on Monday, a coalition led by Amalgamated Bank and SOC Investment Group, along with six other signatories, argued that Musk’s involvement in five other companies he controls is detrimental to Tesla’s best interests. The group also urged shareholders to vote against the reelection of directors Kimbal Musk, Elon Musk’s brother, and James Murdoch.
The shareholders’ letter stated, “Tesla is suffering from a material governance failure which requires our urgent attention and action.” This plea comes as Tesla’s board is asking investors to approve Musk’s pay package for a second time, following a Delaware judge’s decision to void the deal in January due to shareholders being inadequately informed of key details.
Musk’s pay package, initially approved by shareholders in 2018, granted him equity awards as Tesla achieved certain market capitalization and operational targets. Despite the company meeting all the conditions for Musk to receive the full payout of stock options, the judge’s ruling has necessitated a revote. In response, Tesla’s board has hired a strategic adviser to increase retail investor participation ahead of the annual meeting scheduled for June 13.
The shareholder group expressed concerns about Musk’s numerous commitments, noting that many of the signatories had published a separate open letter to Tesla’s board more than a year ago, seeking a meeting with board chair Robyn Denholm, which went unanswered. They argue that Musk’s decision to buy X/Twitter has “played a material role in Tesla’s underperformance,” undermining one of the board’s primary justifications for the substantial pay award – to keep Musk focused on the company’s long-term success.
The shareholders wrote, “If this was one of the primary reasons for the 2018 pay package, then it has been an abysmal failure, as six years later Musk’s outside business commitments have only increased.” They also pointed out that Musk founded another startup last year, xAI, which has hired away artificial intelligence specialists from Tesla.
Quote:CNBC reports that Tesla has been grappling with a series of challenges that have led to multiple rounds of layoffs over the past month. The latest round of layoffs, reported in a Worker Adjustment and Retraining Notification (WARN) Act filing obtained by CNBC, impacted a wide range of positions, from entry-level roles to directors, spanning various departments such as factory workers, software developers, and robotics engineers. This move comes as Tesla faces weakening demand for its electric vehicles and increased competition in the market.
CEO Elon Musk had previously announced in an April memo that the company would cut more than 10 percent of its global workforce, which stood at 140,473 employees at the end of 2023. Prior filings revealed that Tesla would cut over 6,300 jobs across California, Austin, Texas, and Buffalo, New York.
During Tesla’s quarterly earnings call on April 23, Musk suggested that the company had built up a 25 percent to 30 percent “inefficiency” over the past several years, implying that the ongoing layoffs could impact tens of thousands more employees than initially stated.
The job cuts in Fremont, home to Tesla’s first U.S. manufacturing plant, included 378 positions involved in staffing and running vehicle assembly, as well as 65 cuts at the company’s Kato Rd. battery development center. Among the highest-level roles eliminated were two environmental health and safety directors and a user experience design director. In Palo Alto, Tesla’s engineering headquarters, 233 more employees lost their jobs, including two directors of technical programs.
According to two former employees, Tesla has also terminated a majority of employees involved in designing and improving apps made for customers and employees. The WARN filing confirms this, with many cuts coming from the team at Tesla’s Hanover Street location in Palo Alto.
Quote:Ars Technica reports that in a recent ruling, US District Judge Rita Lin in the Northern District of California determined that a lawsuit filed by California resident Thomas LoSavio against Tesla can move forward on allegations of fraud. The lawsuit, which seeks class-action status, alleges that Tesla and its CEO, Elon Musk, made false claims about the self-driving capabilities of Tesla vehicles starting in October 2016.
LoSavio, who purchased a 2017 Tesla Model S with “Enhanced Autopilot” and “Full Self-Driving Capability,” points to specific representations made by Tesla that he claims were misleading. These include statements that Tesla vehicles have the hardware needed for full self-driving capability and that a Tesla car would be able to drive itself cross-country in the coming year.
Judge Lin dismissed some of LoSavio’s claims but allowed the lawsuit to proceed on the basis of the alleged misrepresentations. The ruling stated:
The remaining claims, which arise out of Tesla’s alleged fraud and related negligence, may go forward to the extent they are based on two alleged representations: (1) representations that Tesla vehicles have the hardware needed for full self-driving capability and, (2) representations that a Tesla car would be able to drive itself cross-country in the coming year. While the Rule 9(b) pleading requirements are less stringent here, where Tesla allegedly engaged in a systematic pattern of fraud over a long period of time, LoSavio alleges, plausibly and with sufficient detail, that he relied on these representations before buying his car.
The complaint argues that Tesla’s cars have not achieved the promised level of autonomy, stalling at SAE Level 2 (“Partial Driving Automation”), which requires constant human supervision and control. LoSavio alleges that Tesla’s cars lack the necessary combination of sensors, including lidar, to achieve full autonomy.
Tesla had previously won a significant ruling in the case when a different judge upheld the carmaker’s arbitration agreement, requiring four plaintiffs to go to arbitration. However, LoSavio had opted out of the arbitration agreement and was allowed to file an amended complaint.
Quote:China’s largest-ever military drills with Cambodia on Thursday included a showcase performance by the literal dogs of war — a squad of robot dogs with automatic rifles mounted on their backs.
Both the People’s Liberation Army (PLA) of China and the U.S. military have experimented with militarizing the increasingly common “robodog” design. The South China Morning Post (SCMP) reported in March that the PLA saw an opportunity to pull ahead in killer robot research because American designers seemed more interested in using the robots for utility tasks on the battlefield, such as carrying supplies.
A Chinese research team published a study in February that found robodogs equipped with 7.62mm guns firing 750 rounds per minute could achieve respectable accuracy at ranges up to a hundred meters, equaling or exceeding the average accuracy of human riflemen.
The team thought quadruped platforms could become game-changing weapons for urban warfare, if the kinks in navigating dense and confusing city environments could be worked out. The paper suggested American planners were making a mistake by simply strapping guns to their military robodog test platforms, rather than customizing the weapon mounts to give the robot better accuracy and absorb recoil. Chinese researchers predicted they could use artificial intelligence (AI) to design a robot system that accounts for all the complexities of movement, recoil, and visibility on the battlefield.
“Chinese robotic dogs can now navigate stairs, perform acrobatic feats such as backflips, traverse garbage dumps or tropical rainforests, and maintain a continuous run for nearly four hours while carrying a 20kg load,” the SCMP noted.
American quadruped platforms appear to be somewhat more sophisticated for the time being, but China is catching up rapidly and its manufacturing advantages allow it to deliver field-ready units at prices far lower than U.S. companies. The industry-leading Boston Dynamics “Spot” robodog costs at least $70,000 per unit, while China’s model costs $3,000.
Quote:Mashable reports that in a joint blog post released on Thursday, Reddit and OpenAI shed light on the details of their agreement, which involves Reddit trading its content for access to AI tools and an advertising partnership. The move has raised concerns among Reddit users about the privacy and control of their posts, comments, and images.
According to the blog post, “Keeping the internet open is crucial, and part of being open means Reddit content needs to be accessible to those fostering human learning and researching ways to build community, belonging, and empowerment online.” The post further emphasizes the importance of Reddit as a “uniquely large and vibrant community that has long been an important space for conversation on the internet.”
Under the terms of the agreement, OpenAI will utilize Reddit’s Data API to access real-time, structured, and unique content from the platform. This will enable OpenAI’s AI tools, including ChatGPT, to better understand and showcase Reddit content, particularly on recent topics. The implications for Reddit users are significant, as any content posted on the platform, past or present, will be used to train the AI.
Reddit’s User Agreement sheds light on the rights users relinquish when posting on the platform. By creating or submitting content, users grant Reddit a “worldwide, royalty-free, perpetual, irrevocable, non-exclusive, transferable, and sublicensable license” to use, copy, modify, adapt, prepare derivative works of, distribute, store, perform, and display their content across all media formats and channels. This license allows Reddit to make user content available for syndication, broadcast, distribution, or publication by partnering companies, organizations, or individuals.
For users concerned about their privacy and the use of their content for AI training, the options are limited. While refraining from posting or commenting on Reddit moving forward may prevent future content from being used, the fate of previously posted content remains unclear. Even if users decide to delete their content, it may already be too late, as deleted content can still be archived.
RE: News of the Cyber World - kyonides - 06-10-2024
Quote:The FTC is investigating whether Microsoft structured its recent deal with artificial intelligence startup Inflection AI to avoid a government antitrust review.
The Wall Street Journal reports that in March, Microsoft hired Inflection AI’s co-founder and nearly all of its employees, agreeing to pay the startup approximately $650 million as a licensing fee to resell its technology. The deal has caught the attention of the FTC, which is now scrutinizing the transaction to determine if Microsoft crafted the agreement to gain control of Inflection while dodging FTC review.
Companies are required to report acquisitions valued at more than $119 million to federal antitrust-enforcement agencies, which have the option to investigate a deal’s impact on competition. The FTC and the Department of Justice share antitrust authority and can sue to block mergers or investments if an investigation finds the deal would substantially reduce competition or lead to a monopoly.
FTC Chair Lina Khan has expressed concern that tech giants could eventually acquire or control the most promising AI applications, giving them a tight grip on systems with humanlike abilities to converse, create art, and write computer code. The FTC has been sifting through AI investments made by leading companies such as Microsoft and Google.
The agency is now focusing on Microsoft’s deal with Inflection, seeking information about how and why they negotiated their partnership. Civil subpoenas sent recently to Microsoft and Inflection seek documents going back about two years. If the agency finds that Microsoft should have reported and sought government review of its deal with Inflection, the FTC could bring an enforcement action against Microsoft, potentially leading to fines and a suspension of the transaction pending a full-scale investigation.
Inflection AI, based in the San Francisco Bay Area, built one of the world’s biggest large language models and launched an AI chatbot called Pi. The company is one of several that have built and sold access to large language models, alongside OpenAI, the creator of ChatGPT, and Google.
Microsoft was an investor in both OpenAI and Inflection. In January, the FTC opened a broad investigation of Microsoft’s investment in OpenAI and Alphabet’s relationship with Anthropic, a rival of OpenAI founded by former OpenAI engineers in 2021.
Quote:PragerU announced on Friday that Google took down its app from the Google Play Store, accusing the organization of “hate speech” over its documentary Dear Infidels: A Warning to America.
PragerU published a screenshot of a message from Google, in which the tech giant notified the organization of its suspension due to content on its app “asserting that a protected group is inhuman, inferior or worthy of being hated.”
“We don’t allow apps that promote violence, or incite hatred against individuals or groups based on race or ethnic origin, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity, or any other characteristic that is associated with systemic discrimination or marginalization,” the tech giant said of its hate speech policy.
PragerU slammed Google for “using Soviet-style tactics and attempting to silence us.”
“According to Google, sharing the stories of a former Palestinian refugee, an Arab Muslim born in Israel, and brave U.S. Navy SEALs who witnessed the horrors of Muslim extremism constitutes ‘hate speech,’” PragerU said in a statement on its website. “This is a blatant attempt to silence truth and censor speech. We urgently need your help to fight back against this suppression.”
PragerU also asked for donations to “help us counter these attacks, expand our reach on other platforms, and continue our mission to educate and inspire Americans with the truth.”
Quote:PragerU announced that its app had been reinstated to the Google Play Store hours after Google removed it, claiming the app promoted “hate speech.”
The organization, also known as Prager University, posted an update on X, writing that Google had “reinstated the PragerU app on Google Play store” after conducting another review.
Following an earlier report by Breitbart News about the app’s removal from the Google Play Store, a Google spokesperson told Breitbart that the PragerU app was suspended in error and was quickly reinstated upon further investigation.
As Breitbart News reported, PragerU posted screenshots of a message from Google in which the tech company claimed the non-profit organization had asserted “that a protected group is inhuman, inferior or worthy of being hated.”
“In an email from Google, they said our recently removed app is now available ‘after further re-review,’” PragerU wrote. “Thank you to all of our amazing supporters who helped publicize this issue to force Google to reverse their earlier decision to remove our app from the store entirely.”
In its explanation revealing that the PragerU app had been suspended, Google said it does not “allow apps that promote violence, or incite hatred against individuals or groups based on race or ethnic origin,” among other things:
We don’t allow apps that promote violence, or incite hatred against individuals or groups based on race or ethnic origin, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity, or any other characteristic that is associated with systemic discrimination or marginalization.
...
PragerU was founded by Dennis Prager, a radio talk show host. The organization is known for creating short educational videos centered around conservative topics.
In 2017, PragerU sued Google and YouTube, claiming that the organizations had censored its videos. Later, in January 2019, PragerU filed another lawsuit against Google, accusing the company of censorship and “engaging in unlawful, misleading, and unfair business practices.”
PragerU Chief Marketing Officer Craig Strazzeri described the removal of the PragerU app from the Google Play Store as not being “that surprising given their track record” of restricting the organization’s videos.
Quote:Despite the U.S. government’s efforts to prevent advanced AI chips from falling into the hands of Chinese companies, some American corporations are finding ways to circumvent these restrictions. Oracle in particular has reportedly helped China’s TikTok by “renting” AI chips to the communist social media company.
Engadget reports that in 2022, the United States banned companies like Nvidia from selling their most advanced AI chips to China, citing concerns over potential military, surveillance, and economic implications. However, a recent report by The Information has revealed that U.S.-based cloud computing company Oracle is allowing TikTok owner ByteDance to “rent” Nvidia’s cutting-edge H100 chips to train AI models on U.S. soil.
ByteDance, a Chinese company that has direct ties to the Chinese government, is reportedly taking advantage of this loophole to access the coveted chips. While the practice runs against the spirit of the U.S. government’s chip regulations, it is technically allowed because Oracle is merely renting out the chips on American soil, not selling them directly to companies in China.
Former ByteDance employees have raised concerns about the company’s Project Texas initiative, which claims to separate TikTok’s U.S. operations from its Chinese leadership. They describe the project as “largely cosmetic,” alleging that ByteDance’s U.S. wing regularly works closely with its Beijing-based leadership.
ByteDance is not the only Chinese company seeking to exploit these loopholes. Alibaba and Tencent are reportedly discussing similar arrangements to gain access to the sought-after chips. These deals could be more difficult to prevent, as these companies have their own U.S.-based data centers and would not need to rent servers from American companies.
The U.S. Commerce Department, responsible for closing such loopholes, may already be aware of these practices. Earlier this year, the department proposed a rule requiring U.S. cloud providers to verify foreign customers’ identities and notify the U.S. if any of them were training AI models that “could be used in malicious cyber-enabled activity.” However, most cloud providers disapproved of the proposal, claiming that the additional requirements might outweigh the intended benefits, leaving the proposed rule in limbo.
Quote:Joan Donovan, a supposed expert on “misinformation” and former research director at Harvard University’s Shorenstein Center, has been accused of spreading misinformation about her departure from the once prestigious institution.
The Chronicle of Higher Education reports that Joan Donovan, a supposed expert on misinformation and former research director at Harvard University’s Shorenstein Center on Media, Politics, and Public Policy, finds herself embroiled in a controversy surrounding her departure from the Ivy League institution. Donovan, known for her work on media manipulation and the spread of disinformation online, has made several claims about the circumstances of her leaving Harvard, which have been called into question by former colleagues and university officials.
In a 248-page document released in December 2022, Donovan alleged that Harvard had mistreated her and her team, the Technology and Social Change Project, due to the university’s ties to Meta (formerly Facebook). She claimed that Harvard had eliminated her role and the team she led under pressure from Meta executives, particularly Elliot Schrage, a former Facebook executive and Harvard alumnus. Donovan pointed to a Zoom call in October 2021, during which Schrage allegedly monopolized the discussion and accused her of inaccurately reading documents related to Facebook. However, a recording of the meeting contradicts Donovan’s account, showing that Schrage spoke for only three minutes and did not bring up the leaked Facebook files.
Interviews with former team members, Shorenstein Center staff, and university officials have revealed inconsistencies in Donovan’s narrative. Eleven former Technology and Social Change Project members and Shorenstein staffers stated that they had seen no evidence of Meta exerting pressure on Donovan’s team or that its influence led to the team’s disbandment. Some of Donovan’s other claims, such as Harvard owning the copyright to her book “Meme Wars” and the university stealing her plans to publish confidential Facebook documents, have also been disputed by those directly involved.
Donovan’s allegations regarding the FBarchive, a project aimed at creating a searchable archive of leaked Facebook documents, have also been contested. While Donovan claims that the project was her brainchild and that she was cut out of it due to Meta’s influence, Latanya Sweeney, a professor who worked on the project, called Donovan’s version of events “gross mischaracterizations and misstatements.” Sweeney stated that she had acquired her own cache of the files before Donovan and that the vast majority of the work on the project was done by Sweeney and her team.
Former colleagues have also expressed concerns about Donovan’s management style and behavior during her final years at Harvard. They cite instances of canceled meetings, complaints about university administrators, and attempts to influence funders to withdraw their support for the team, which would have put staff members’ jobs at risk earlier than anticipated. Brandi Collins-Dexter, a former associate director of research, felt that Donovan was using her as “a shield and a weapon” in service of her own brand and public perception.
Quote:A former OpenAI governance researcher has made a chilling prediction: the odds of AI either destroying or catastrophically harming humankind sit at 70 percent.
In a recent interview with the New York Times, Daniel Kokotajlo, a former OpenAI governance researcher and signatory of an open letter claiming that employees are being silenced from raising safety issues, accused the company of ignoring the monumental risks posed by artificial general intelligence (AGI) because its decision-makers are enthralled with its possibilities. “OpenAI is really excited about building AGI,” Kokotajlo stated, “and they are recklessly racing to be the first there.”
Kokotajlo’s most alarming claim was that the chance AI will wreck humanity is around 70 percent—odds that would be unacceptable for any major life decision, yet OpenAI and its peers are barreling ahead anyway. The term “p(doom),” which refers to the probability that AI will usher in doom for humankind, is a topic of constant controversy in the machine learning world.
After joining OpenAI in 2022 and being asked to forecast the technology’s progress, the 31-year-old became convinced not only that the industry would achieve AGI by 2027 but also that there was a great probability it would catastrophically harm or even destroy humanity. Kokotajlo and his colleagues, including former and current employees at Google DeepMind and Anthropic, as well as Geoffrey Hinton, the “Godfather of AI” who left Google last year over similar concerns, are asserting their “right to warn” the public about the risks posed by AI.
Kokotajlo became so convinced of the massive risks AI posed to humanity that he personally urged OpenAI CEO Sam Altman to “pivot to safety” and spend more time implementing guardrails to rein in the technology rather than continue making it smarter. Although Altman seemed to agree with him at the time, Kokotajlo felt it was merely lip service.
Fed up, Kokotajlo quit the firm in April, telling his team in an email that he had “lost confidence that OpenAI will behave responsibly” as it continues trying to build near-human-level AI. “The world isn’t ready, and we aren’t ready,” he wrote. “And I’m concerned we are rushing forward regardless and rationalizing our actions.”
A Covert Chinese Propaganda Outlet, But It's Not TikTok
Quote:NewsBreak, the most downloaded news app in the United States, has come under fire for publishing fake news stories generated by AI and its ties to China, raising concerns about the spread of misinformation and data privacy.
Reuters reports that popular news app NewsBreak, which has offices in Mountain View, California, Beijing, and Shanghai, recently published an alarming but entirely false story about a Christmas Day shooting in Bridgeton, New Jersey. The local police department dismissed the AI-generated article as “fiction” in a Facebook post on December 27. NewsBreak eventually removed the inaccurate story four days after publication, attributing the error to the content source.
As local news outlets across America have struggled in recent years, NewsBreak has become an option to fill the void, boasting over 50 million monthly users. The app publishes licensed content from media outlets and rewrites information scraped from the internet using AI. However, Reuters found that NewsBreak’s use of AI tools has led to the publication of at least 40 fake news stories since 2021, affecting the communities it aims to serve.
In addition to false news stories, NewsBreak has faced copyright infringement lawsuits from local news providers, including Patch Media and Emmerich Newspapers, for republishing content without permission or credit. The app has also been criticized for creating stories under fictitious bylines, a practice that former consultant Norm Pearlstine warned could “destroy the NewsBreak brand.”
NewsBreak’s ties to China have also raised concerns. The app was initially launched as a subsidiary of Yidian, a Chinese news aggregation app, and both companies were founded by Jeff Zheng, NewsBreak’s CEO. Although Yidian divested from NewsBreak in 2019, the two companies share a U.S. patent for an “Interest Engine” algorithm. About half of NewsBreak’s 200 employees, including a significant portion of its engineering staff, are based in China.
The app’s use of China-based engineers has raised questions about the potential access to American user data in China, drawing comparisons to the recent controversy surrounding TikTok. NewsBreak maintains that it complies with U.S. data and privacy laws and stores data on U.S.-based Amazon servers, with staff in China only accessing anonymous data.