06-10-2024, 02:57 AM
FTC Probes MS
Quote:The FTC is investigating whether Microsoft structured its recent deal with artificial intelligence startup Inflection AI to avoid a government antitrust review.
The Wall Street Journal reports that in March, Microsoft hired Inflection AI’s co-founder and nearly all of its employees, agreeing to pay the startup approximately $650 million as a licensing fee to resell its technology. The deal has caught the attention of the FTC, which is now scrutinizing the transaction to determine if Microsoft crafted the agreement to gain control of Inflection while dodging FTC review.
Companies are required to report acquisitions valued at more than $119 million to federal antitrust-enforcement agencies, which have the option to investigate a deal’s impact on competition. The FTC and the Department of Justice share antitrust authority and can sue to block mergers or investments if an investigation finds the deal would substantially reduce competition or lead to a monopoly.
FTC Chair Lina Khan has expressed concern that tech giants could eventually acquire or control the most promising AI applications, giving them a tight grip on systems with humanlike abilities to converse, create art, and write computer code. The FTC has been sifting through AI investments made by leading companies such as Microsoft and Google.
The agency is now focusing on Microsoft’s deal with Inflection, seeking information about how and why they negotiated their partnership. Civil subpoenas sent recently to Microsoft and Inflection seek documents going back about two years. If the agency finds that Microsoft should have reported and sought government review of its deal with Inflection, the FTC could bring an enforcement action against Microsoft, potentially leading to fines and a suspension of the transaction pending a full-scale investigation.
Inflection AI, based in the San Francisco Bay Area, built one of the world’s biggest large language models and launched an AI chatbot called Pi. The company is one of several that have built and sold access to large language models, alongside OpenAI, the creator of ChatGPT, and Google.
Microsoft was an investor in both OpenAI and Inflection. In January, the FTC opened a broad investigation of Microsoft’s investment in OpenAI and Alphabet’s relationship with Anthropic, a rival of OpenAI founded by former OpenAI engineers in 2021.
Google & PragerU
Quote:PragerU announced on Friday that Google took down its app from the Google Play Store, accusing the organization of “hate speech” over its documentary Dear Infidels: A Warning to America.
PragerU published a screenshot of a message from Google, in which the tech giant notified the organization of its suspension due to content on its app “asserting that a protected group is inhuman, inferior or worthy of being hated.”
“We don’t allow apps that promote violence, or incite hatred against individuals or groups based on race or ethnic origin, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity, or any other characteristic that is associated with systemic discrimination or marginalization,” the tech giant said of its hate speech policy.
PragerU slammed Google for “using Soviet-style tactics and attempting to silence us.”
“According to Google, sharing the stories of a former Palestinian refugee, an Arab Muslim born in Israel, and brave U.S. Navy SEALs who witnessed the horrors of Muslim extremism constitutes ‘hate speech,’” PragerU said in a statement on its website. “This is a blatant attempt to silence truth and censor speech. We urgently need your help to fight back against this suppression.”
PragerU also asked for donations to “help us counter these attacks, expand our reach on other platforms, and continue our mission to educate and inspire Americans with the truth.”
Quote:PragerU announced that its app had been reinstated to the Google Play Store hours after Google removed it over claims that it promoted “hate speech.”
The organization, also known as Prager University, posted an update on X, writing that Google had “reinstated the PragerU app on Google Play store” after conducting another review.
Following an earlier report by Breitbart News about the app’s removal from the Google Play Store, a Google spokesperson told Breitbart that the PragerU app was suspended in error and was quickly reinstated upon further investigation.
As Breitbart News reported, PragerU posted screenshots of a message from Google in which the tech company claimed the non-profit organization had asserted “that a protected group is inhuman, inferior or worthy of being hated.”
“In an email from Google, they said our recently removed app is now available ‘after further re-review,’” PragerU wrote. “Thank you to all of our amazing supporters who helped publicize this issue to force Google to reverse their earlier decision to remove our app from the store entirely.”
In its explanation revealing that the PragerU app had been suspended, Google said it does not “allow apps that promote violence, or incite hatred against individuals or groups based on race or ethnic origin,” among other things:
We don’t allow apps that promote violence, or incite hatred against individuals or groups based on race or ethnic origin, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity, or any other characteristic that is associated with systemic discrimination or marginalization.
...
PragerU was founded by Dennis Prager, a radio talk show host. The organization is known for creating short educational videos centered around conservative topics.
In 2017, PragerU sued Google and YouTube, claiming that the companies had censored its videos. In January 2019, PragerU filed another lawsuit against Google, accusing the company of censorship and of “engaging in unlawful, misleading, and unfair business practices.”
PragerU Chief Marketing Officer Craig Strazzeri described the removal of the PragerU app from the Google Play Store as not being “that surprising given their track record” of restricting the organization’s videos.
Comrade Oracle
Quote:Despite the U.S. government’s efforts to prevent advanced AI chips from falling into the hands of Chinese companies, some American corporations are finding ways to circumvent these restrictions. Oracle in particular has reportedly helped China’s TikTok by “renting” AI chips to the communist social media company.
Engadget reports that in 2022, the United States banned companies like Nvidia from selling their most advanced AI chips to China, citing concerns over potential military, surveillance, and economic implications. However, a recent report by The Information has revealed that U.S.-based cloud computing company Oracle is allowing TikTok owner ByteDance to “rent” Nvidia’s cutting-edge H100 chips to train AI models on U.S. soil.
ByteDance, a Chinese company that has direct ties to the Chinese government, is reportedly taking advantage of this loophole to access the coveted chips. While the practice runs against the spirit of the U.S. government’s chip regulations, it is technically allowed because Oracle is merely renting out the chips on American soil, not selling them directly to companies in China.
Former ByteDance employees have raised concerns about the company’s Project Texas initiative, which claims to separate TikTok’s U.S. operations from its Chinese leadership. They describe the project as “largely cosmetic,” alleging that ByteDance’s U.S. wing regularly works closely with its Beijing-based leadership.
ByteDance is not the only Chinese company seeking to exploit these loopholes. Alibaba and Tencent are reportedly discussing similar arrangements to gain access to the sought-after chips. These deals could be more difficult to prevent, as these companies have their own U.S.-based data centers and would not need to rent servers from American companies.
The U.S. Commerce Department, responsible for closing such loopholes, may already be aware of these practices. Earlier this year, the department proposed a rule requiring U.S. cloud providers to verify foreign customers’ identities and notify the U.S. if any of them were training AI models that “could be used in malicious cyber-enabled activity.” However, most cloud providers disapproved of the proposal, claiming that the additional requirements might outweigh the intended benefits, leaving the proposed rule in limbo.
"Expert" Spreading Misinformation
Quote:Joan Donovan, a supposed expert on “misinformation” and former research director at Harvard University’s Shorenstein Center, has been accused of spreading misinformation about her departure from the once prestigious institution.
The Chronicle of Higher Education reports that Joan Donovan, a supposed expert on misinformation and former research director at Harvard University’s Shorenstein Center on Media, Politics, and Public Policy, finds herself embroiled in a controversy surrounding her departure from the Ivy League institution. Donovan, known for her work on media manipulation and the spread of disinformation online, has made several claims about the circumstances of her leaving Harvard, which have been called into question by former colleagues and university officials.
In a 248-page document released in December 2022, Donovan alleged that Harvard had mistreated her and her team, the Technology and Social Change Project, due to the university’s ties to Meta (formerly Facebook). She claimed that Harvard had eliminated her role and the team she led under pressure from Meta executives, particularly Elliot Schrage, a former Facebook executive and Harvard alumnus. Donovan pointed to a Zoom call in October 2021, during which Schrage allegedly monopolized the discussion and accused her of inaccurately reading documents related to Facebook. However, a recording of the meeting contradicts Donovan’s account, showing that Schrage spoke for only three minutes and did not bring up the leaked Facebook files.
Interviews with former team members, Shorenstein Center staff, and university officials have revealed inconsistencies in Donovan’s narrative. Eleven former Technology and Social Change Project members and Shorenstein staffers stated that they had seen no evidence of Meta exerting pressure on Donovan’s team or that its influence led to the team’s disbandment. Some of Donovan’s other claims, such as Harvard owning the copyright to her book “Meme Wars” and the university stealing her plans to publish confidential Facebook documents, have also been disputed by those directly involved.
Donovan’s allegations regarding the FBarchive, a project aimed at creating a searchable archive of leaked Facebook documents, have also been contested. While Donovan claims that the project was her brainchild and that she was cut out of it due to Meta’s influence, Latanya Sweeney, a professor who worked on the project, called Donovan’s version of events “gross mischaracterizations and misstatements.” Sweeney stated that she had acquired her own cache of the files before Donovan and that the vast majority of the work on the project was done by Sweeney and her team.
Former colleagues have also expressed concerns about Donovan’s management style and behavior during her final years at Harvard. They cite instances of canceled meetings, complaints about university administrators, and attempts to influence funders to withdraw their support for the team, which would have put staff members’ jobs at risk earlier than anticipated. Brandi Collins-Dexter, a former associate director of research, felt that Donovan was using her as “a shield and a weapon” in service of her own brand and public perception.
OpenAI Whistleblower's Warning
Quote:A former OpenAI governance researcher has made a chilling prediction: the odds of AI either destroying or catastrophically harming humankind sit at 70 percent.
In a recent interview with the New York Times, Daniel Kokotajlo, a former OpenAI governance researcher and signatory of an open letter claiming that employees are being silenced when they try to raise safety issues, accused the company of ignoring the monumental risks posed by artificial general intelligence (AGI) because its decision-makers are enthralled with its possibilities. “OpenAI is really excited about building AGI,” Kokotajlo stated, “and they are recklessly racing to be the first there.”
Kokotajlo’s most alarming claim was that the chance AI will wreck humanity is around 70 percent, odds that would be unacceptable for any major life event, yet OpenAI and its peers are barreling ahead anyway. The term “p(doom),” which refers to the probability that AI will usher in doom for humankind, is a topic of constant controversy in the machine learning world.
After joining OpenAI in 2022 and being asked to forecast the technology’s progress, the 31-year-old became convinced not only that the industry would achieve AGI by 2027 but also that there was a great probability it would catastrophically harm or even destroy humanity. Kokotajlo and his colleagues, including former and current employees at Google DeepMind and Anthropic, as well as Geoffrey Hinton, the “Godfather of AI” who left Google last year over similar concerns, are asserting their “right to warn” the public about the risks posed by AI.
Kokotajlo became so convinced of the massive risks AI posed to humanity that he personally urged OpenAI CEO Sam Altman to “pivot to safety” and spend more time implementing guardrails to rein in the technology rather than continue making it smarter. Although Altman seemed to agree with him at the time, Kokotajlo felt it was merely lip service.
Fed up, Kokotajlo quit the firm in April, telling his team in an email that he had “lost confidence that OpenAI will behave responsibly” as it continues trying to build near-human-level AI. “The world isn’t ready, and we aren’t ready,” he wrote. “And I’m concerned we are rushing forward regardless and rationalizing our actions.”
Covert Chinese Propaganda Outlet But It's Not TikTok
Quote:NewsBreak, the most downloaded news app in the United States, has come under fire for publishing fake news stories generated by AI and its ties to China, raising concerns about the spread of misinformation and data privacy.
Reuters reports that popular news app NewsBreak, which has offices in Mountain View, California, Beijing, and Shanghai, recently published an alarming but entirely false story about a Christmas Day shooting in Bridgeton, New Jersey. The local police department dismissed the AI-generated article as “fiction” in a Facebook post on December 27. NewsBreak eventually removed the inaccurate story four days after publication, attributing the error to the content source.
As local news outlets across America have struggled in recent years, NewsBreak has become an option to fill the void, boasting over 50 million monthly users. The app publishes licensed content from media outlets and rewrites information scraped from the internet using AI. However, Reuters found that NewsBreak’s use of AI tools has led to the publication of at least 40 fake news stories since 2021, affecting the communities it aims to serve.
In addition to false news stories, NewsBreak has faced copyright infringement lawsuits from local news providers, including Patch Media and Emmerich Newspapers, for republishing content without permission or credit. The app has also been criticized for creating stories under fictitious bylines, a practice that former consultant Norm Pearlstine warned could “destroy the NewsBreak brand.”
NewsBreak’s ties to China have also raised concerns. The app was initially launched as a subsidiary of Yidian, a Chinese news aggregation app, and both companies were founded by Jeff Zheng, NewsBreak’s CEO. Although Yidian divested from NewsBreak in 2019, the two companies share a U.S. patent for an “Interest Engine” algorithm. About half of NewsBreak’s 200 employees, including a significant portion of its engineering staff, are based in China.
The app’s use of China-based engineers has raised questions about the potential access to American user data in China, drawing comparisons to the recent controversy surrounding TikTok. NewsBreak maintains that it complies with U.S. data and privacy laws and stores data on U.S.-based Amazon servers, with staff in China only accessing anonymous data.
"For God has not destined us for wrath, but for obtaining salvation through our Lord Jesus Christ," 1 Thessalonians 5:9
Maranatha!
The Internet might be either your friend or enemy. It just depends on whether or not she has a bad hair day.
My Original Stories (available in English and Spanish)
List of Compiled Binary Executables I have published...
HiddenChest & Roole
Give me a free copy of your completed game if you include at least 3 of my scripts!
Just some scripts I've already published on the board...
KyoGemBoost XP VX & ACE, RandomEnkounters XP, KSkillShop XP, Kolloseum States XP, KEvents XP, KScenario XP & Gosu, KyoPrizeShop XP Mangostan, Kuests XP, KyoDiscounts XP VX, ACE & MV, KChest XP VX & ACE 2016, KTelePort XP, KSkillMax XP & VX & ACE, Gem Roulette XP VX & VX Ace, KRespawnPoint XP, VX & VX Ace, GiveAway XP VX & ACE, Klearance XP VX & ACE, KUnits XP VX, ACE & Gosu 2017, KLevel XP, KRumors XP & ACE, KMonsterPals XP VX & ACE, KStatsRefill XP VX & ACE, KLotto XP VX & ACE, KItemDesc XP & VX, KPocket XP & VX, OpenChest XP VX & ACE