After falling victim to fake news once or twice, some developers have built systems that can detect whether a video or image has been doctored or manipulated. A new report, however, has poured cold water on the notion that this technology can protect us from manipulated videos and images known as deepfakes.
According to Data & Society, it is doubtful that automated solutions alone will ever stop people from manipulating media in this Fourth Industrial Revolution. People usually doctor images and videos to suit the narrative they want to push.
Facebook and other American companies are working on systems and algorithms that detect whether media has been doctored before it is posted. This may help curb the spread of fake news, but according to The Verge, the efforts will yield minimal results. The Verge wrote:
Today, deepfakes have taken manipulation even further by allowing people to manipulate videos and images using machine learning, with results that are almost impossible to detect with the human eye. Now, the report says, “anyone with a public social media profile is fair game to be faked.” Once the fakes exist, they can go viral on social media in a matter of seconds.
The publication suggests that strict laws be put in place against deepfakes and other manipulated media to discourage perpetrators from spreading fake news and to provide redress should any fallout occur.
Chesney and Paris agree that some sort of technical fix is needed and that it must work alongside the legal system to prosecute bad actors and stop the spread of faked videos. “We need to talk about mitigation and limiting harm, not solving this issue,” Chesney added. “Deepfakes aren’t going to disappear.”
More: The Verge