While the White House has issued some guidance on combating the risks of AI, the U.S. is still miles behind on any real AI legislation. There has been some movement in Congress, like the year-old Algorithmic Accountability Act and, more recently, a proposed “AI Task Force,” but in reality there’s nothing on the books that can deal with the rapidly expanding world of AI implementation.

The EU, on the other hand, has modified its proposed AI Act to take into account modern generative AI like ChatGPT. That bill could have huge implications for how large language models like OpenAI’s GPT-4 are trained on terabyte upon terabyte of user data scraped from the internet. The proposed law would also label AI systems as “high risk” if they could be used to influence elections.

Of course, OpenAI isn’t the only big tech company wanting to at least seem like it’s trying to get in front of the AI ethics debate. On Thursday, Microsoft execs did a media blitz to explain their own hopes for regulation. Microsoft President Brad Smith said during a LinkedIn livestream that the U.S. could use a new agency to handle AI. It’s a line that echoes Altman’s own proposal to Congress, though Smith also called for laws that would increase transparency and create “safety brakes” for AI used in critical infrastructure.

Even with a five-point blueprint for dealing with AI, Smith’s speech was heavy on hopes but feather-light on details. Microsoft has been quicker than its rivals to proliferate AI, all in an effort to get ahead of big tech companies like Google and Apple. Not to mention, Microsoft is in an ongoing multi-billion-dollar partnership with OpenAI.

On Thursday, OpenAI revealed it was creating a grant program to fund groups that could decide rules around AI. The fund would give out ten $100,000 grants to groups willing to do the legwork and create “proof-of-concepts for a democratic process that could answer questions about what rules AI systems should follow.” The company said the deadline to apply is barely a month away, on June 24.

OpenAI offered some examples of the questions grant seekers should look to answer. One was whether AI should offer “emotional support” to people. Another was whether vision-language AI models should be allowed to identify people’s gender, race, or identity based on their images. That last question could easily be applied to any number of AI-based facial recognition systems, in which case the only acceptable answer is “no, never.”

And there are quite a few ethical questions that a company like OpenAI is incentivized to leave out of the conversation, particularly around whether it should disclose the training data behind its AI models.

All of which goes back to the everlasting problem of letting companies dictate how their own industry is regulated. Even if OpenAI’s intentions are, for the most part, driven by a genuine desire to reduce the harms of AI, tech companies are financially incentivized to help themselves before they help anybody else.


Want to know more about AI, chatbots, and the future of machine learning? Check out our full coverage of artificial intelligence, or browse our guides to The Best Free AI Art Generators, The Best ChatGPT Alternatives, and Everything We Know About OpenAI’s ChatGPT.
