OpenAI is weeding out more bad actors using its AI models. And, in a first for the company, it has identified and removed Russian, Chinese, and Israeli accounts used in political influence operations.
According to a new report from its threat detection team, the company discovered and terminated five accounts engaging in covert influence operations, including propaganda-laden bots, social media scrubbers, and fake article generators.
"OpenAI is committed to enforcing policies that prevent abuse and to improving transparency around AI-generated content," the company wrote. "That is especially true with respect to detecting and disrupting covert influence operations (IO), which attempt to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them."
Terminated accounts include those behind a Russian Telegram operation dubbed "Bad Grammar" and those operated by the Israeli company STOIC. STOIC was found to be using OpenAI models to generate articles and comments praising Israel's current military siege, which were then posted across Meta platforms, X, and elsewhere.
OpenAI says the covert actors used its tools for a "range of tasks, such as generating short comments and longer articles in a range of languages, making up names and bios for social media accounts, conducting open-source research, debugging simple code, and translating and proofreading texts."
In February, OpenAI announced it had terminated several "foreign bad actor" accounts found engaging in similarly suspicious behavior, including using OpenAI's translation and coding capabilities to bolster potential cyberattacks. That effort was conducted in collaboration with Microsoft Threat Intelligence.
As communities gear up for a series of global elections, many observers are keeping a close eye on AI-boosted disinformation campaigns. In the U.S., deepfaked AI video and audio of celebrities, and even presidential candidates, prompted a federal call on tech leaders to stop their spread. And a report from the Center for Countering Digital Hate found that, despite electoral integrity commitments from many AI leaders, AI voice cloning tools remain easy for bad actors to exploit.
Learn more about how AI might be at play in this year's election, and how you can respond to it.
Topics Cybersecurity Politics OpenAI