Everywhere in the world, and in the US in particular, social media is now part of the fabric of everyday life. By most estimates, the large majority of Americans use at least one social media platform every day, and roughly seven in ten use Facebook alone. The growth has been explosive, but not without problems.
Recently, social media sites have come under fire for all manner of issues. There are YouTube's repeated problems with child exploitation on its platform. There's Instagram's embarrassing policy lapse that allowed a marketing company to access the data of millions of users. And of course, there's Facebook, which has had so many problems that it has drawn the ire of Congress.
All of it adds up to an online environment fraught with dangers for the average social media user. Worse still, cybercriminals know that users won't stay away no matter how bad things get. For that reason, social media platforms have become one of the leading vectors for digital scams in recent years, making social media safety a hot topic. These platforms have made it easier to spread misinformation, conduct social engineering, and perpetrate fraud against countless unsuspecting victims.
Now, social media sites are trying to combat these problems with a little help from technology. Here's what they're up to.
Stopping the Spread of Fake News
The kinds of scams that circulate on social media often have one thing in common: they're built on misinformation spread in the form of so-called "fake news". It's an issue that's affecting everything from vaccination rates to presidential elections. It's also supplying the scare tactics that make many common digital scams possible.
The good news is that social media sites finally seem to be taking the spread of fake news on their platforms more seriously. Facebook, for one, is working on sharpening its AI tools to help it automate the problem out of existence. The only hold-up is that most experts agree that even today's most sophisticated AI can't catch every instance of fake news, citing the need for much larger training datasets. Facebook, however, has access to enough data that it may be able to make some real advances in this area.
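To make the idea of automated misinformation filtering concrete, here is a deliberately simplified sketch. The phrase list, scoring rule, and threshold below are invented for illustration; a production system like Facebook's would rely on large trained models rather than keyword matching.

```python
# Toy illustration of automated misinformation flagging.
# The phrase list and threshold are made up for this sketch;
# real systems use trained classifiers, not keyword lists.

SUSPICIOUS_PHRASES = [
    "doctors hate this",
    "the media won't tell you",
    "share before it's deleted",
    "100% proof",
]

def misinformation_score(text: str) -> float:
    """Return a score in [0, 1]: the fraction of suspicious phrases present."""
    text = text.lower()
    hits = sum(1 for phrase in SUSPICIOUS_PHRASES if phrase in text)
    return hits / len(SUSPICIOUS_PHRASES)

def should_flag(text: str, threshold: float = 0.25) -> bool:
    """Flag a post for review when its score crosses the threshold."""
    return misinformation_score(text) >= threshold

post = "100% PROOF the media won't tell you about this cure!"
print(should_flag(post))  # prints True: two suspicious phrases match
```

Even this toy version shows the core trade-off: a lower threshold catches more misinformation but flags more legitimate posts, which is exactly why human review and better training data still matter.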
Using AI to Flag Problematic Content
Although most people don't associate Google's video platform YouTube with the promotion of scams, it's increasingly being used for just that purpose. Recently, a major scam made the rounds in which scammers impersonated well-known YouTubers and solicited users to follow links to claim free gifts. Needless to say, the sites users were directed to were anything but legitimate.
YouTube, however, employs a powerful machine learning algorithm to spot scams and problematic content, and it's getting more accurate every day. In fact, from April to June of 2019, the platform removed over 9 million videos for various violations of the site's terms of service. Of those, almost 8 million were removed through automated flagging by an algorithm, proving just how adept the system has become.
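A common pattern behind numbers like these is confidence-based triage: the machine acts alone only when it is very sure, and routes borderline cases to human moderators. The thresholds and video names below are hypothetical, not YouTube's actual values.

```python
# Hypothetical moderation triage, illustrating the general idea of
# automated flagging: high-confidence violations are removed
# automatically, borderline cases go to human reviewers.
# All thresholds and scores here are invented for illustration.

def triage(violation_confidence: float) -> str:
    if violation_confidence >= 0.95:
        return "auto_remove"   # near-certain violation: take it down
    if violation_confidence >= 0.60:
        return "human_review"  # borderline: queue for a moderator
    return "allow"             # likely fine: leave it up

# Made-up classifier scores for three uploads:
videos = {"scam_giveaway": 0.98, "borderline_prank": 0.70, "cooking_tutorial": 0.10}
decisions = {name: triage(score) for name, score in videos.items()}
print(decisions)
```

Splitting decisions this way is what lets a platform remove millions of videos automatically while keeping humans in the loop for the hard calls.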
The Looming Deepfake Threat
Despite all of the progress that social media platforms are making in putting an end to scams, there's another bit of technology that's poised to make their jobs even harder. It uses AI to digitally alter photos and videos, changing faces or spoken words, or adding people who were never there. The results are known as deepfakes, and the technology behind them has gotten so good that it's becoming hard for people to tell when something they're looking at has been altered.
To try to stay ahead of the emerging threat, social media companies like Facebook are pouring millions of dollars into developing AI solutions that can identify altered photos and videos. They're even enlisting the help of academic researchers around the world to do it. If the recently created video depicting Facebook founder Mark Zuckerberg speaking menacingly about controlling the future is any indicator, they'd better redouble their efforts.
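One common shape for a deepfake detector is to score each video frame for signs of manipulation and then aggregate those scores into a single verdict. The sketch below assumes a hypothetical frame classifier; the per-frame scores are made up, and a real detector would produce them with a trained neural network.

```python
# Sketch of deriving a video-level deepfake verdict from per-frame
# classifier outputs. The scores below are invented; a real detector
# would get them from a trained neural network, not hand-typed lists.

def video_is_fake(frame_scores: list[float], threshold: float = 0.5) -> bool:
    """Call the video fake if the average per-frame 'fake' probability
    exceeds the threshold."""
    return sum(frame_scores) / len(frame_scores) > threshold

# Made-up per-frame 'fake' probabilities for two clips:
real_clip = [0.10, 0.20, 0.15, 0.05]
fake_clip = [0.80, 0.90, 0.70, 0.85]
print(video_is_fake(real_clip), video_is_fake(fake_clip))  # prints False True
```

Averaging is the simplest aggregation choice; real systems also look at temporal inconsistencies between frames, which is one reason detection remains an arms race with the generators.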
The Last Line of Defense
Of course, most of the scams that circulate on social media could be stopped if users educated themselves about how to spot them. In truth, there will never be a better defense against scams than their would-be victims avoiding them in the first place. That won't stop the social media platforms from trying to help, though. As the above examples show, the major social media services know they're enabling a massive problem, and they're doing everything they can think of to solve it.