OpenAI, the company behind the popular ChatGPT generative AI service, released a report saying it has disrupted more than 20 operations and deceptive networks worldwide so far in 2024. The operations differed in objective, scale, and focus, ranging from creating malware to writing fake social media accounts, fake bios, and website articles.
OpenAI says it analyzed the activity it stopped and shared key insights from that analysis. “Threat actors continue to evolve and experiment with our models, but we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences,” the report says.
This is especially important given that 2024 is an election year in many places, including the United States, Rwanda, and India, as well as the European Union. For example, in early July, OpenAI banned a number of accounts that generated comments about the elections in Rwanda, which were then posted on X (formerly Twitter) by different accounts. So it is reassuring to hear OpenAI say the threat actors couldn’t make much headway with these campaigns.
Another win for OpenAI is the disruption of a China-based threat actor known as “SweetSpecter,” which attempted to spear-phish OpenAI employees at both their corporate and personal email addresses. The report also notes that in August, Microsoft exposed a set of domains it attributed to an Iranian covert influence operation known as “STORM-2035.” “Based on their report, we investigated, disrupted and reported an associated set of activity on ChatGPT,” OpenAI writes.
OpenAI also says the social media posts its models generated didn’t gain much traction, receiving few or no comments, likes, or shares. The company adds that it will continue to anticipate how threat actors might use advanced models for harmful ends and plans to take the necessary actions to stop them.