As Russia’s war in Ukraine continues, the information war is picking up online.
Fake news, photoshopped posts, manipulated media, and all sorts of propaganda and misinformation are being disseminated both by bad actors and by those being duped by them. So, what are the big tech companies doing to help stop bad information from spreading?
Mashable reached out to several major social media platforms in order to get a comprehensive look at what exactly is being done to stop misinformation as Russian forces continue to advance in Ukraine.
Facebook and Instagram are certainly no strangers to disinformation campaigns stemming from Russia. So has Mark Zuckerberg learned from the attempts to sway elections? What is Meta doing this time around?
On Facebook and Instagram, some major steps have been taken to clamp down on falsehoods being spread. Meta has blocked Russian state-run media, such as Russia Today and Sputnik, in the EU and in Ukraine. The company has also cut its revenue share with these outlets so they can’t monetize their content in areas where they haven’t yet been banned. In addition, Meta will continue to label state-run media as it previously did, turning down Russia’s request to stop fact-checking and labeling its content.
On Tuesday, Meta announced it would be taking even further action against Russian state-run media on Facebook and Instagram, demoting those outlets’ content in newsfeeds and ensuring its algorithm does not recommend that content to users.
Users in Ukraine may have also noticed a Facebook “lock profile” tool, which provides people in the country with easy access to additional security and privacy measures. If a user turns this feature on, only friends on the platform will be able to share or download photos or see the user’s posts on their timeline.
Meta announced that it had stopped two disinformation campaigns. One campaign attempted to control the narrative by creating fake accounts that claimed to be Ukrainian journalists. The pages used AI-generated photos to further hide the fact that these individuals did not really exist. The other campaign was tied to a hacking group from Belarus. Both disinformation networks spread anti-Ukrainian propaganda. Meta removed dozens of accounts connected to the campaigns.
Meta’s messaging platform, WhatsApp, has also shared best practices with its users on how to secure their accounts and take advantage of certain privacy features such as Disappearing Mode, which deletes messages after 24 hours.
Like Facebook, Twitter is a platform where misinformation campaigns take hold. And, as the social network where news often breaks, Twitter seems to have taken its role rather seriously during the Russian invasion of Ukraine.
Users may have noticed official Twitter accounts sharing guidance to help people follow best practices for securing their accounts. The company says it is monitoring “vulnerable” high-profile users in order to stop any manipulation or account takeovers.
Twitter also has policies surrounding manipulated or synthetic media, i.e. edited video or deepfakes intended to spread disinformation. Content can be labeled or removed and entire accounts can be suspended based on the severity of the violation. The company has already removed manipulated content from the platform, such as a clip purported to be from Ukraine that was actually footage from a video game.
The social microblogging platform also amped up its labeling of Russian state-run media. Previously, Twitter labeled accounts belonging to outlets such as Russia Today as “Russian state-run media.” As of Monday, however, the platform began adding warning labels to all tweets linking to Russian state-run media as well.
“This Tweet links to a Russia state-run affiliated media website,” the label reads. Twitter says it will reduce the reach of these tweets, too.
Along with those changes, a number of anchors, columnists, and other employees of Russian state-run media outlets have reported that their personal accounts, which promote their work, have been affixed with the Twitter warning label as well.
On Sunday, Twitter announced that it had suspended more than a dozen accounts for violating its platform manipulation and spam policy. Violating this policy usually entails the use of fake accounts in order to spread content and “artificially inflate” engagement.
“Our investigation is ongoing; however, our initial findings indicate that the accounts and links originated in Russia and were attempting to disrupt the public conversation around the ongoing conflict in Ukraine,” said Twitter in a public statement.
According to NBC News, these accounts were sharing links from a new propaganda outlet called Ukraine Today.
Before Russian troops even entered the country, Twitter had already suspended advertising in Ukraine and Russia so that ads wouldn’t crowd out crucial information in users’ feeds. Twitter also paused tweet recommendations from accounts users did not already follow. The company says this action was taken to “reduce the spread of abusive content.”
“Twitter’s top priority is keeping people safe, and we have longstanding efforts to improve the safety of our service,” said a Twitter spokesperson. “We remain vigilant and will continue to closely monitor the situation on the ground.”
The company has also made it clear that at least some of its existing policies won’t be paused due to the conflict. When the Ukrainian National Guard tweeted an Islamophobic video of a neo-Nazi battalion embedded in the country, Twitter hid the clip behind a warning label per its hate speech policies.
Russian state-run media is a powerhouse on YouTube. Russia Today, specifically, has found success on the platform over the years. RT’s main channel has more than 4.5 million subscribers. RT boasts that it has received more than 10 billion views across all of its YouTube channels.
With numbers like that, YouTube monetization could result in a pretty lucrative revenue stream. That is, until this weekend, when YouTube demonetized RT and all Russian state-run media.
“In light of extraordinary circumstances in Ukraine, we’re taking a number of actions,” read a public statement from YouTube. “We’re pausing a number of channels’ ability to monetize on YouTube, including several Russian channels affiliated with recent sanctions.”
The statement goes on to say that YouTube will also be “significantly limiting” recommendations of content from these channels. In addition to revoking their monetization, YouTube is “restricting access” to RT and other channels for users in Ukraine.
YouTube also shared that it had removed a number of low-subscriber channels that were part of a “Russian influence operation.”
Due to the nature of how Snapchat works — mainly private feeds and ephemeral content — the social messaging app has rather successfully avoided becoming a hub for misinformation and other problematic content. Even so, Snap states that it will remove any misinformation it comes across on its platform regarding Ukraine.
“The app has actually been designed to make it hard for misinformation to spread,” said a Snap spokesperson in a statement. “We limit the size of group chats and snaps disappear. Unlike traditional social platforms, we don’t feature an open, unvetted newsfeed and the content on the public parts of the app — Discover and Spotlight — only host pre-moderated content. If we find misinformation, we remove it immediately.”
TikTok has long outgrown online challenges and viral dance crazes. Current events, however, may be the greatest measure of just how much the shortform video app has expanded beyond the teen content it was originally known for.
The war in Ukraine has seen TikTok used as a platform for the latest news, as well as updates from people on the ground about what’s happening. Unfortunately, the platform has also become a major outlet for misinformation and propaganda.
Videos purportedly from Ukraine have spread on the platform, often turning out to portray conflicts from years earlier and in completely different parts of the world. Scams have also descended on TikTok livestreams: scammers are raising money with fake live videos that make it appear as if they are Ukrainians sharing their wartime experiences.
TikTok has also just announced a new feature that has some critics scratching their heads at the timing. The shortform video platform will now support video uploads of up to 10 minutes. As Media Matters points out, the platform was already struggling to handle misinformation before Russia invaded Ukraine, back when videos were capped at three minutes.
The company, for its part, has said it has taken action against users acting in bad faith and will remove content breaking TikTok’s rules regarding the spread of misinformation.
“We continue to closely monitor the situation, with increased resources to respond to emerging trends and remove violative content, including harmful misinformation and promotion of violence,” said a TikTok spokesperson in a statement provided to Mashable. “We also partner with independent fact-checking organizations to further aid our efforts to help TikTok remain a safe and authentic place.”
TikTok has also partnered with organizations like MediaWise and the National Association of Media Literacy Education in order to help educate its users on digital media literacy.
While most probably think of LinkedIn as a business networking tool, the social network has seen its fair share of fake news and misinformation spread throughout the platform.
The Microsoft-owned platform says its “safety teams are closely monitoring conversations on the platform” and its global editing team is making sure news and updates are coming from trusted sources. LinkedIn will take action on any content that does not abide by its Professional Community Policies, which prohibit misinformation, false content, and manipulated media.
Mashable will continue to update this post as policies change.