
The rise of deepfake news anchors is raising concern, as their likenesses are being used to spread fake news on social media platforms.

Using deepfake technology, bad actors manipulate the images and voices of well-known news anchors to spread false information on platforms such as TikTok and YouTube.

The issue came to light when TikToker Krishna Sahay posted a video in which he appeared to be interviewed by CBS News anchor Anne-Marie Green about being the sole survivor of a recent school shooting. The video quickly went viral, attracting hundreds of thousands of views. Forbes reports that Sahay is just one of many TikTokers and YouTubers using deepfake technology to create fake anchors with the faces and voices of hosts from major news organizations, then scripting sensational, false scenarios to draw engagement from the online community. Many of the videos even carry the news channel's logo, leading viewers to mistake the reports for authentic or exclusive coverage. This highlights the need for social media platforms and news agencies to take measures against the spread of fake news and deepfake anchors.

Last week, CNN reporter Clarissa Ward became the victim of a deepfake while reporting near the Gaza-Israel border: a video of her taking cover from missiles was overlaid with fake audio, sowing misunderstandings about the conflict. In early October, Gayle King, the host of CBS This Morning, suddenly appeared in an AI-generated video advertising a product she had never tried. Even Donald Trump pointed to a deepfake video in which CNN host Anderson Cooper appeared to quote statements Trump never made.

Forbes reported that news segments hosted by deepfake anchors are spreading widely and often draw more engagement than content posted on the outlets' own accounts. For instance, a Krishna Sahay video featuring Margaret Brennan, the host of Face The Nation, has 300,000 likes, while the most popular video on Face The Nation's official TikTok channel reached only 7,000. Although Sahay's account and those of many other TikTokers have since been deleted for violating platform rules, their deepfake content still circulates online because it was widely shared.

Fake news created with deepfake technology has been around for a long time, but when it borrows the image of a famous anchor it spreads far more virally and can have unpredictable consequences. Professor Hany Farid of UC Berkeley explains that "videos from media channels are an attractive means for bad guys to spread fake news. In many cases, the host is familiar and trusted by the audience. The news interface also makes the content familiar and trustworthy, thus making it more reliable."

Representatives of TikTok and YouTube have described the measures they have taken to combat deepfakes on their platforms. Ariane Selliers, speaking on behalf of TikTok, said the platform permanently bans videos that use deepfake technology to create misleading or harmful content. Creators are required to label AI-generated content and warn viewers of its use; failure to comply may result in videos being removed for "misleading content" or "impersonation of an individual to cause harm". Selliers emphasized the importance of protecting public figures and their audiences from political or financial abuse and deception.

Similarly, Elena Hernandez, YouTube's representative, revealed that the platform has introduced a "misinformation policy" to tackle the spread of AI-generated fake news. 

Deepfake technology uses AI algorithms to analyze a person's gestures, facial expressions, and voice, then generates realistic-looking photos or videos. Kevin Goldberg, a lawyer at Freedom Forum, has highlighted the need for new regulations to address the growing threat of deepfake news, whether it is made for harmful or entertainment purposes. Experts recommend that users sharpen their ability to evaluate information, verify links and URLs, and make sure they receive news from official sources.
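The advice above about verifying links and URLs can be made concrete with a small sketch. The function and the allowlist of domains below are purely illustrative assumptions, not a real fact-checking tool; a genuine check would rely on a maintained list of official outlets.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of official news domains (illustrative only).
OFFICIAL_DOMAINS = {"cbsnews.com", "cnn.com"}

def is_official_source(url: str) -> bool:
    """Return True if the URL's host is an official domain or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    if host.startswith("www."):
        host = host[4:]
    # Match the exact domain or a proper subdomain; lookalike hosts such as
    # "cbsnews.com.evil.example" fail both checks.
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)
```

For example, `is_official_source("https://www.cbsnews.com/news/story")` accepts the real domain, while a lookalike like `https://cbsnews.com.evil.example/video` is rejected, since only the registered domain and its subdomains pass.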