YouTube will soon require creators to disclose whether a video was made with generative AI.
On Tuesday, the video streaming giant announced this and other updates intended to mitigate the misleading or harmful effects of generative AI.
"When creators upload content, we will have new options for them to select to indicate that it contains realistic altered or synthetic material," said Jennifer Flannery O'Connor and Emily Moxley, YouTube product management VPs.
Creators who repeatedly fail to do this might face penalties, such as content removal or suspension from the YouTube Partner Program. The announcement also said artists and creators will be able to request the removal of content (including music) that uses their likeness without consent.
The widespread availability of generative AI has heightened the threat of deepfakes and misinformation, especially with the upcoming presidential election. Both the public and private sector have acknowledged a need to detect and prevent the nefarious use of generative AI.
For example, President Biden's AI executive order specifically addressed the need for labeling or watermarking AI-generated content. OpenAI is working on its own tool, a "provenance classifier," that detects whether an image was made with its DALL-E 3 AI generator. Just last week, Meta announced a new policy that requires political advertisers to disclose whether an ad uses generative AI.
On YouTube, when a creator uploads a video, they'll be given the option of indicating whether it "contains realistic altered or synthetic material," the blog post said. "For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn't actually do."
Labels informing viewers that a video has AI-generated or altered content will be added to the description panel. A "more prominent label" will be added to content involving sensitive topics. Even if AI-generated content is appropriately labeled, if it violates YouTube's community guidelines, it will be taken down.
How will all of this content moderation be enforced? By AI, of course. In addition to creating fake content that looks convincingly real, generative AI can also help identify content that violates platform policies. YouTube says it will deploy generative AI to help contextualize and understand threats at scale.