Machine Learning: The Future of Fake News

The internet is one of the most important inventions for distributing information across the globe. But as an openly accessible technology, it invites abuse, especially for propaganda and fake news. The biggest companies in the technology space, such as Facebook and Google, already accommodate fake news to make a profit, but we are only standing at the beginning of a much bigger issue. Artificial Intelligence, and especially Machine Learning, may soon take fake news to the next level with fake videos convincing enough to fool everyone. So-called Deepfakes have already invaded the porn industry, causing a number of concerns. In the end, Machine Learning will have to fix itself, and the sooner the better.

Facebook, Google, and the Echo Chamber

Call it what you like, Google and Facebook have contributed greatly to the phenomenon of fake news. By making money through ads and personalized content, they created an echo chamber, or filter bubble, that encapsulates everything we do and see on social media. This encapsulation intensifies the spread and effect of fake news, as it almost always hits its target. It's not a new issue, but with the growing divide in our political landscape, it's a greater problem than it has ever been. Sure, there are attempts to identify fake news quickly enough with the help of still-to-be-developed algorithms and a human workforce, but so far that hasn't been particularly effective.

The inherent problem is this: right now, all we can do is react to fake news. There is no way to anticipate it, and even when we react, it takes time to properly identify it. Take down legitimate news and allegations of censorship follow, so we currently have to prove beyond a reasonable doubt that a piece of information is fake. Machine Learning algorithms will soon even the battlefield in that regard, but while we scramble to detect and contain written fake news, technology is already advancing in directions we are not prepared for.
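
To make that detection side concrete, here is a minimal sketch of what a written fake-news classifier could look like: TF-IDF features feeding a linear model that flags suspicious text for human review. This illustrates the general approach only, not how Facebook or Google actually do it, and the example headlines and labels are placeholders.

```python
# Minimal sketch of a reactive fake-news classifier for text.
# The training headlines and labels below are placeholders; a real system
# would need a large, human-labelled corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Scientists publish peer-reviewed study on regional climate data",    # placeholder "legitimate"
    "Celebrity secretly replaced by body double, anonymous insiders say", # placeholder "fake"
]
labels = [0, 1]  # 0 = legitimate, 1 = fake

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

# Score a new headline. Anything above a chosen threshold would be flagged for
# human review rather than deleted outright, which is exactly the censorship
# problem described above.
score = model.predict_proba(["Shocking footage proves the moon landing was staged"])[0][1]
print(f"probability of being fake: {score:.2f}")
```

A model like this can only react to patterns it has already seen in labelled data, which is why the current approach remains purely reactive.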

Deepfakes: The Worrisome Reality of Machine Learning

When you think about it, creating a Machine Learning algorithm that detects fake news is fairly simple compared to what comes next. The key phrase behind all of this is Artificial Intelligence and Machine Learning. Just recently a new tool surfaced, called the Deepfakes App, and its repercussions were enormous. A coder apparently created a Machine Learning algorithm that can put the face of any person onto another person in a video and make it look reasonably real. This is achieved by feeding the Artificial Intelligence the video you want to alter and as many pictures as possible of both the person in the video and the person you want them replaced with. The algorithm then cross-references pictures of both persons and attempts to replace the face accurately, frame by frame, in the video. And guess what it has been used for so far? Porn, of course.
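
The app's actual code isn't reproduced here, but the idea commonly used for this kind of face swap can be sketched in a few lines: one shared encoder learns a common representation of pose and expression, while a separate decoder per person reconstructs that person's face. Swapping then means encoding person A's face and decoding it with person B's decoder, one video frame at a time. The architecture and sizes below are illustrative assumptions, not the Deepfakes App's implementation.

```python
# Simplified sketch of a shared-encoder / per-person-decoder face swap.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # compact "face code": pose, expression, lighting
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32x32 -> 64x64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()    # shared between both persons
decoder_a = Decoder()  # trained only on pictures of person A
decoder_b = Decoder()  # trained only on pictures of person B

# After training each decoder to reconstruct its own person through the shared
# encoder, the swap is: encode a cropped face of A from one frame, decode it
# with B's decoder, and paste the result back into the frame.
face_of_a = torch.rand(1, 3, 64, 64)     # cropped face from one video frame
swapped = decoder_b(encoder(face_of_a))  # B's face with A's pose and expression
print(swapped.shape)                     # torch.Size([1, 3, 64, 64])
```

This also hints at why results keep improving with more pictures: the more faces the networks see, the better the shared encoder captures expressions and the better each decoder reproduces its person.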

There are already several such Deepfakes on the internet, replacing porn actresses' faces with those of celebrities like Sophie Turner or Gal Gadot. The worst part: this is just the beginning. The Deepfakes App is streamlined and can already be used by anyone without much technical knowledge. Future versions will probably make it even easier, which could have huge ramifications no matter how they are used. Sure, it doesn't look spot-on yet, but the point of Artificial Intelligence and Machine Learning is that the algorithm can and will improve over time until, eventually, we won't be able to tell whether we're watching a genuine video or a fake one.

Take this rather harmless example video of Nicolas Cage as Captain Picard.

Who Cares about Fake News?

The only defense we have to counter this would be Machine Learning algorithms that work in the other direction, scanning videos for alterations. But as it stands, we haven’t even begun to successfully tackle written fake news, so how well are we prepared for fake videos entering the scene?
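As a sketch of what detection "in the other direction" could look like: a small convolutional network scores each frame of a video for manipulation artifacts, and the per-frame scores are averaged into a verdict. The architecture, input size, and threshold below are illustrative assumptions; an untrained network like this detects nothing yet and would need training on pairs of original and manipulated videos.

```python
# Hedged sketch of a frame-level fake-video detector.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1), nn.Sigmoid(),               # per-frame probability of manipulation
)

def video_looks_fake(frames, threshold=0.5):
    """frames: tensor of shape (num_frames, 3, 64, 64) with values in [0, 1]."""
    with torch.no_grad():
        scores = detector(frames).squeeze(1)  # one score per frame
    return scores.mean().item() > threshold

# Example call on random frames; real use requires a trained detector.
print(video_looks_fake(torch.rand(8, 3, 64, 64)))
```
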

The problem with the internet is that things go viral quickly. The average Facebook user doesn't fact-check. They believe their eyes and minds above all else and share away regardless. They don't care whether something is fake if it aligns with their views or intrigues them. We'd need countermeasures that scan content for inaccuracies when it is submitted and before it's published, because once it's published, it's as good as real. But even then, an attacker will always figure out a way to come out ahead. It's an uphill battle we are already losing, with private programmers and hackers far more ambitious and successful at faking reality than those who need to take responsibility, like Google or Facebook.

Deepfakes are only the beginning. There have been different approaches that could potentially be misused in a similar way. The University of Washington, for example, managed to create realistic videos from audio files, as shown with the example of Barack Obama in a YouTube video.

Artificial Intelligence on the Rise

This means that as soon as the technology is perfected, anyone who sounds like Obama, or like anyone else for that matter, could literally make him say anything they want on video. A process that used to take hours of intensive labor can now be done by a tool that works on its own over time and produces incredibly realistic results that are nonetheless entirely fake. And this is not the only tool of its kind. A project called Face2Face, published in 2016, makes it possible to alter videos in real time with your own facial expressions.

Again, the implications of such technology could be huge for propaganda and fake news. While it is still obvious and detectable now, we will inevitably reach a point where these Machine Learning algorithms get so good at simulating real life that we have to fight fire with fire: Artificial Intelligence against Artificial Intelligence, Machine Learning against Machine Learning. Unfortunately, it seems that progress is only achieved when there is money or novelty waiting at the end of the road. And so far, the money undeniably lies with the fake news.

About Andreas Salmen

Born and raised in Germany, trained for a career in IT and business, and ultimately decided that this wasn't exactly where my life was going to end. Left everything behind to become a writing backpacker instead. The world's crumbling away anyway, so why not write about it and get a few good Instagram pics on the way, am I right?
