[People News] OpenAI, the developer of the AI chatbot ChatGPT, has released a new report stating that Chinese Communist Party (CCP) law enforcement personnel used ChatGPT to conduct so-called “cyber special operations.” These activities reportedly included drafting smear plans targeting Japanese Prime Minister Sanae Takaichi and asking ChatGPT to help edit and polish regular status reports on their online influence operations. U.S. lawmakers commented that this represents the “weaponization” of artificial intelligence and the “industrialization” of transnational repression.
According to Voice of America, OpenAI on Wednesday (February 25) published an online report titled Disrupting Malicious Uses of Our Models. The report states that a CCP law enforcement officer used ChatGPT as a “work log” and attempted to plan a covert influence campaign against Japanese Prime Minister Sanae Takaichi. The user also asked ChatGPT to edit and refine periodic situation reports related to “cyber special operations” — covert influence actions targeting what they described as “hostile targets” both inside and outside China. OpenAI said that after identifying the activity, it terminated the user’s access.
The report reveals additional details of these “cyber operations.” The CCP official allegedly impersonated a U.S. immigration officer to threaten Chinese dissidents living in the United States. Investigators also found that the user instructed the AI to generate forged U.S. court documents in an attempt to persuade social media platforms to suspend the accounts of anti-CCP activists. The user even requested that ChatGPT generate a fake obituary and tombstone image claiming the death of Chinese dissident Jie Lijian living in the United States.
OpenAI researchers compared the instructions with real-world online activity and found that the fabricated obituary and tombstone image of Jie Lijian were indeed widely circulated online. Voice of America confirmed that in September 2023 and April 2024, numerous online rumors about Jie’s death had appeared.
Regarding the operation targeting Prime Minister Sanae Takaichi, the official’s plan reportedly included:
- Posting and amplifying critical comments about Takaichi on social media platforms
- Criticizing her stance on foreign immigration
- Creating fake accounts posing as foreign residents to send complaints to Japanese politicians
- Accusing Takaichi of far-right tendencies
ChatGPT refused to comply with these instructions, and at one point the conversation was terminated.
OpenAI emphasized that its platform policies prohibit involvement in political campaigns, interference in domestic or foreign elections, suppression of political participation, threats, harassment, or defamation.
However, OpenAI noted that the smear campaign against Takaichi did not stop. In October of last year, when she assumed office, several trending hashtags attacking her and criticizing U.S. tariffs appeared on a popular online forum frequented by Japanese graphic artists. OpenAI assessed that the influence operation targeting Takaichi had limited effectiveness, as “the vast majority of posts had limited view counts and likely failed to reach the intended target audience.”
The report describes the CCP’s “cyber special operations” as large-scale, resource-intensive, and ongoing. According to OpenAI, the campaign mobilized:
- At least hundreds of personnel
- Thousands of fake accounts across dozens of platforms
- Locally deployed AI models
- Dozens of tactics, ranging from mass-reporting dissidents' social media accounts to large-scale content posting
On the same day the report was released, the U.S. House Select Committee on the Strategic Competition Between the United States and the Chinese Communist Party (commonly known as the China Select Committee) issued a statement saying that the CCP is “weaponizing” artificial intelligence to suppress overseas critics and is “industrializing” transnational repression.
The committee vowed to “continue exposing the CCP’s efforts to intimidate dissidents and undermine the integrity of American institutions.”
OpenAI stated that it publishes such “threat reports” to ensure that artificial general intelligence benefits all humanity.
The company noted that previous reports have shown that “threat actors” attempt to misuse AI models. The China-related case demonstrates that these actors are not limited to a single AI model. By releasing such reports, OpenAI hopes to alert industry professionals and the broader public to the existence of these threats and to help prevent misuse.