FOX News

China, Iran-based threat actors have found new ways to use American AI models for covert influence: Report

Threat actors, some likely based in China and Iran, are formulating new ways to hijack and utilize American artificial intelligence (AI) models for malicious intent, including covert influence operations, according to a new report from OpenAI.

The February report includes two disruptions involving threat actors that appear to have originated from China. According to the report, these actors have used, or at least attempted to use, models built by OpenAI and Meta.

In one example, OpenAI banned a ChatGPT account that generated comments critical of Chinese dissident Cai Xia. The comments were posted on social media by accounts that claimed to be people based in India and the U.S. However, these posts did not appear to attract substantial online engagement.

That same actor also used the ChatGPT service to generate long-form Spanish news articles that "denigrated" the U.S. and were subsequently published by mainstream news outlets in Latin America. The bylines of these stories were attributed to an individual and, in some cases, a Chinese company.

CHINA, IRAN AND RUSSIA CONDEMNED BY DISSIDENTS AT UN WATCHDOG’S GENEVA SUMMIT

During a recent press briefing that included Fox News Digital, Ben Nimmo, Principal Investigator on OpenAI’s Intelligence and Investigations team, said that a translation was listed as sponsored content on at least one occasion, suggesting that someone had paid for it.

OpenAI says this is the first instance in which a Chinese actor successfully planted long-form articles in mainstream media to target Latin American audiences with anti-U.S. narratives.

"Without a view of that use of AI, we would not have been able to make the connection between the tweets and the web articles," Nimmo said.

He added that threat actors sometimes give OpenAI a glimpse of what they’re doing in other parts of the internet because of how they use their models.

"This is a pretty troubling glimpse into the way one non-democratic actor tried to use democratic or U.S.-based AI for non-democratic purposes, according to the materials they were generating themselves," he continued.

WHAT IS ARTIFICIAL INTELLIGENCE (AI)?

The company also banned a ChatGPT account that generated tweets and articles that were then posted on third-party assets publicly linked to known Iranian influence operations (IOs).

These two operations have been reported as separate efforts.

"The discovery of a potential overlap between these operations – albeit small and isolated – raises a question about whether there is a nexus of cooperation amongst these Iranian IOs, where one operator may work on behalf of what appear to be distinct networks," the threat report states.

In another example, OpenAI banned a set of ChatGPT accounts that were using OpenAI models to translate and generate comments for a romance baiting network, also known as "pig butchering," across platforms like X, Facebook and Instagram. After reporting these findings, Meta indicated that the activity appeared to originate from a "newly stood up scam compound in Cambodia."

WHAT IS CHINESE AI STARTUP DEEPSEEK?

Last year, OpenAI became the first AI research lab to publish reports on its efforts to prevent abuse by adversaries and other malicious actors, in support of the U.S., allied governments, industry partners and stakeholders.

OpenAI says it has greatly expanded its investigative capabilities and understanding of new types of abuse since its first report was published and has disrupted a wide range of malicious uses.

The company believes, among other disruption techniques, that AI companies can glean substantial insights on threat actors if the information is shared with upstream providers, such as hosting and software providers, as well as downstream distribution platforms (social media companies and open-source researchers).

OpenAI stresses that its investigations also benefit greatly from the work shared by industry peers.

"We know that threat actors will keep testing our defenses. We're determined to keep identifying, preventing, disrupting and exposing attempts to abuse our models for harmful ends," OpenAI stated in the report.



Source link : https://www.foxnews.com/media/china-iran-based-threat-actors-found-new-ways-use-american-ai-models-covert-influence-report

Author :

Publish date : 2025-02-21 15:25:00

Copyright for syndicated content belongs to the linked Source.

Tags : FOX News