YouTube extends AI deepfake detection to all adult users
The move marks a significant shift from previous limited rollouts, addressing growing concerns about non-public figures facing AI-generated impersonation.

YouTube is expanding its artificial intelligence likeness detection tool to all users aged 18 and over, granting broad access to a feature that scans the platform for facial matches and allows individuals to request the removal of potential deepfakes. The announcement, made via YouTube’s creator forum, represents a substantial shift in the company’s approach to digital identity protection, moving beyond previous iterations that were restricted to content creators, government officials, politicians, journalists, and the entertainment industry.
The likeness detection feature works by using a selfie-style scan of a person’s face to monitor YouTube for lookalikes. If the system identifies a match, it alerts the user, who can then submit a request for the content’s removal. Takedown requests are assessed against YouTube’s privacy policy, with the company weighing factors such as how realistic the content is, whether it is labelled as AI-generated, and whether the individual can be uniquely identified. Content classified as parody or satire is exempt, and the tool currently covers facial likeness only, excluding other identifying features such as voice.
YouTube has previously said that the volume of removal requests has been very small, though it did not provide specific statistics. The expansion aims to protect private citizens, addressing concerns that teenagers and non-public figures are increasingly targeted by deepfake technology, including cases of classmates creating fake content and legal action involving xAI’s Grok chatbot. By opening the tool to all adults, YouTube is effectively allowing the average person to monitor the platform for content that could use their likeness.
Spokesperson Jack Malon said there are no specific requirements to qualify as a creator for access to the tool, so protection remains consistent regardless of a user’s history on the platform; in the company’s words, creators who have been uploading for a decade and those just starting out will have access to the same level of protection. Users retain the ability to withdraw from the programme and have their data deleted, maintaining control over their participation in the detection system.
This policy update follows a phased rollout strategy that initially tested the feature with content creators before extending it to high-profile groups. The broadening of access reflects the evolving landscape of AI-generated content, where the barrier to creating convincing digital replicas has lowered, impacting not just celebrities and politicians but private individuals as well. The move underscores the platform’s response to growing regulatory and social pressure to mitigate the harms associated with non-consensual deepfakes.
