Remember in May when Scarlett Johansson threatened OpenAI CEO Sam Altman with legal action over an AI-generated voice in GPT-4o that sounded like hers? The actress and the CEO went back and forth over the accusation, with Altman denying it outright. Johansson would likely be pleased to see YouTube become the first platform to let users request the removal of AI-generated content they believe looks or sounds like them. This will hopefully pave the way for other platforms to catch up.
Sarah Perez at TechCrunch noticed this update on the video streaming company’s Privacy Guidelines page and believes it builds on the site’s recent ‘responsible AI’ plan, launched earlier this year.
YouTube explains that the content must be “uniquely identifiable,” which it defines as containing “enough information that allows others to recognize you.” It adds that it will weigh a few factors, such as whether the content is satire or parody, or whether it depicts a public figure engaged in sensitive activity.
YouTube is accepting only first-party claims, with a few exceptions for minors, vulnerable individuals, people without internet access, and deceased people. Otherwise, the company will not allow third parties to submit claims on someone else’s behalf.
Filing a request won’t take the content down immediately. The uploader has 48 hours to act on the removal request; during that window, they can trim or blur the offending material or delete the video entirely. YouTube notes that simply making the video private does not count. If the uploader fails to act within 48 hours, the complaint is passed on to YouTube for review, and further action may follow.