Could propaganda and censorship on TikTok cause harm to America or other societies? In our previous article, we discussed TikTok’s potential to act as a spying tool. Although there isn’t any solid evidence that it’s worse than other social media platforms, there are certainly some situations where we should be wary of it. This is mainly due to its China-based parent company, ByteDance.
This time, we will look into the other possible dangers. Because of TikTok’s popularity and role in media, it has the potential to censor information and to spread propaganda.
The fears surrounding censorship and propaganda on TikTok stem from two major factors. The first involves the immense power that social media platforms have.
Social networks have tremendous reach and many people conduct a significant amount of their daily interactions over them. A large portion of people use them as their primary news source, and they also feature significant amounts of political and social discussion.
As a result, these platforms exert a strong influence on our cultures and politics. Not only could outside entities exploit them for disinformation campaigns, but the platforms themselves could tilt which topics spread most rapidly in their own favor. Given their reach, changes to their algorithms or company policies can have huge effects on what users see.
Either of these approaches can have negative effects on the societies that use these tools. This is not limited to TikTok; Facebook, Twitter and others are still struggling with the same problems.
The second major factor, and the main reason why TikTok is currently attracting more flak than its competitors, is its links to China. We discussed the background more thoroughly in the potential risks of Chinese technology section of our previous article, but it boils down to:
- TikTok’s parent company is based in China.
- China has tight national security legislation and the Chinese Government often exerts control over the nation’s businesses.
- China and many Western countries currently have strained relationships.
At least theoretically, the Chinese Government could pressure TikTok to censor information that makes the Chinese State look bad, or spread propaganda in attempts to harm other nations.
While there is merit in these concerns, we also need to be mindful that much of the public debate over TikTok has been politicized, and there are a whole lot of accusations against the company out there, many of these with little factual basis.
Let’s look at the evidence instead. It’s best to start by taking a step back and looking at censorship and propaganda. Then we can dive into the allegations against TikTok, before investigating the broader problems of social media. We’ll finish by discussing potential solutions.
Propaganda & censorship
Before we dive in too deep, it’s important to have a solid grounding of what propaganda and censorship are, as well as how they can affect societies. Let’s start with Merriam-Webster. It defines propaganda as:
the spreading of ideas, information, or rumor for the purpose of helping or injuring an institution, a cause, or a person.
Censorship is defined as the institution, system or practice of censoring, which the dictionary describes as:
to examine in order to suppress or delete anything considered objectionable.
While the terms are loaded with negative connotations, in principle they are value-neutral. Most of us would agree that it’s important to censor extreme violence in media made for children, and few would object to government propaganda that promotes healthy eating.
The major worries in the TikTok situation are that these tactics won’t be used in such benevolent ways. Instead, it’s feared that the Chinese Government could leverage its powers over TikTok to censor information and spread propaganda for its own benefit, presumably to the detriment of the targeted countries.
At least theoretically, TikTok could hide information that the Chinese Government doesn’t want people to see, whether it’s posts about the Uighur minority or pictures of Winnie the Pooh. This would leave TikTok users with a critical gap in their understanding of global issues.
States can also use propaganda to manipulate their targets or to sow discord among a country’s citizens. One of the main fears is that foreign powers could influence the outcomes of elections by smearing certain political figures, causing the electorate to doubt the fairness of an election, steering the debate toward certain topics, or increasing the polarization of the populace.
Countries have long histories of using both of these techniques against each other as they struggle for power in the geopolitical arena. However, the internet and social media have completely changed the battleground.
Has TikTok censored information or spread propaganda?
We’ve had a look at all of the allegations of TikTok spreading propaganda or censoring information. The following are the most notable examples:
Documents leaked to The Guardian
In September 2019, The Guardian published leaked moderation guidelines from TikTok. These showed two categories of censorship. The first covers violations that result in the content being deleted from the site and potentially the user behind it being banned.
The second covers lesser infringements, which lead to the content being marked as “visible to self”. This limits its distribution on the platform without making it clear to the user that the content they posted is being hidden.
This is known as shadow banning, and these documents seem to be the most damning evidence we have of TikTok engaging in the practice. Other social media platforms do it too, but the fears over Chinese Government censorship make it more worrisome on TikTok.
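To make the two-tier system concrete, here is a minimal sketch of how a moderation pipeline might implement “delete” versus “visible to self” outcomes. The function names and rule structure are purely hypothetical and are not drawn from TikTok’s actual code.

```python
from enum import Enum

class Visibility(Enum):
    PUBLIC = "public"              # distributed normally
    VISIBLE_TO_SELF = "self_only"  # shadow banned: only the author sees it
    REMOVED = "removed"            # deleted; the account may also be banned

def moderate(post_text, severe_rules, lesser_rules):
    """Hypothetical moderation decision for a single post.

    severe_rules and lesser_rules are lists of predicate functions.
    Real platforms rely on far more elaborate human and automated review.
    """
    if any(rule(post_text) for rule in severe_rules):
        return Visibility.REMOVED
    if any(rule(post_text) for rule in lesser_rules):
        # The post stays on the author's profile but is excluded from feeds
        # and search, and the author is never told it has been restricted.
        return Visibility.VISIBLE_TO_SELF
    return Visibility.PUBLIC
```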
The article has some other significant revelations, but it engages in some dodgy journalism to make them stand out. It ran under the headline “Revealed: how TikTok censors videos that do not please Beijing” and led with “TikTok, the popular Chinese-owned social network, instructs its moderators to censor videos that mention Tiananmen Square, Tibetan independence, or the banned religious group Falun Gong”.
Now, if you were one of the many who only read the start of the article, you would presume that TikTok specifically censors topics that are considered controversial to the Chinese regime. It’s only halfway down that the article puts things in their proper context:
Another ban covers “demonisation or distortion of local or other countries’ history such as May 1998 riots of Indonesia, Cambodian genocide, Tiananmen Square incidents”.
A more general purpose rule bans “highly controversial topics, such as separatism, religion sects conflicts, conflicts between ethnic groups, for instance exaggerating the Islamic sects conflicts, inciting the independence of Northern Ireland, Republic of Chechnya, Tibet and Taiwan and exaggerating the ethnic conflict between black and white”.
As we can see, the policy is not China-specific, as The Guardian made it seem.
It’s fair to disagree with TikTok’s position on banning these sensitive subjects, but framing the headline and intro as if only subjects sensitive to China were being censored is a simple manipulation of the facts. This is hardly a smoking gun if we are looking for conclusive evidence that TikTok censors its platform in the Chinese Government’s favor.
ByteDance said that this version of the moderation guidelines had been retired in May 2019, months before The Guardian’s article. It stated that:
“In TikTok’s early days we took a blunt approach to minimising conflict on the platform, and our moderation guidelines allowed penalties to be given for things like content that promoted conflict… As TikTok began to take off globally last year, we recognised that this was not the correct approach, and began working to empower local teams that have a nuanced understanding of each market…”
Censoring a video about the Uighurs
In November 2019, TikTok was involved in another scandal after it locked a teenager’s account and took down her video that criticized the Chinese treatment of the Uighur minority.
The platform alleges that the user was locked out of her account because it was associated with the same device as an earlier account that had been banned. TikTok claims that the device was locked out as part of a simultaneous action that affected 2,406 devices.
The platform states that while the user was locked out, the account remained active and its videos continued to receive views.
When it comes to the video concerning Uighurs, TikTok claims that “Due to a human moderation error, the viral video from November 23 was removed. It’s important to clarify that nothing in our Community Guidelines precludes content such as this video, and it should not have been removed.”
TikTok added that the removal decision was overridden by a senior moderator, and that the video was reinstated within 50 minutes of being taken down.
It’s hard to know how to interpret this series of events. It seems very convenient to say that the content was just taken down by accident, especially considering its subject matter. The user did not buy TikTok’s story:
UPDATE: tik tok has issued a public apology and gave me my account back. Do I believe they took it away because of a unrelated satirical video that was deleted on a previous deleted account of mine? Right after I finished posting a 3 part video about the Uyghurs? No.
But if we are to believe TikTok’s timeline of events, which do not seem to have been disputed by the user, the video was up for almost exactly four days, and only taken down for 50 minutes. It seems hard to believe that enough outrage and media attention was produced in under 50 minutes to force the platform to reverse its decision.
Again, we can’t know what was really happening behind the scenes, but this story may not be as damning for the platform as it seemed in the headlines.
Censorship of Hong Kong protests
In September 2019, the Washington Post published an article questioning whether TikTok was censoring content about the Hong Kong protests. It noted that #hongkong on Twitter featured extensive coverage of the protests, while the same hashtag on TikTok mainly showed playful videos.
The article asked why popular protest hashtags on other platforms were barely being covered on TikTok. The #antielab tag had 34,000 posts on Instagram, but only 11 on TikTok. As an aside, it noted that the #TiananmenSquare tag only had around 20 videos.
A BuzzFeed News article (not the listicle BuzzFeed, the award-winning news team, BuzzFeed News) countered many of the questions in the Washington Post’s article, stating that it “found no evidence that TikTok blocks pro–Hong Kong democracy videos”.
BuzzFeed News posted its own videos about the Hong Kong protests to the platform and also talked to three teenagers who had posted their own. The investigation found that none of them were removed, and that BuzzFeed News’ videos “received a small but steady amount of organic engagement over the summer.”
At the time the article was published, one of the teenagers had 55,000 likes and 420 comments on a Hong Kong protest themed post. Another had 700 likes and 100 comments on a Hong Kong-related post. The final one had 72 likes and 11 comments on a similar piece of content.
BuzzFeed News stated that the most likely explanation for the relative paucity of Hong Kong protest information on TikTok is that it’s “not a particularly hot topic for TikTok’s mostly teenage users in the US.”
Another explanation offered by a spokesperson from the Hong Kong protest movement’s press room is that TikTok is unpopular in Hong Kong, and that activists don’t consider using it for fear of surveillance.
BuzzFeed News’ take is pretty reasonable. TikTok is known as a platform for lip-syncing, dancing and performing short skits. It’s also known to have a very young audience. Although it certainly does have some political content as well, these factors make it seem entirely logical that we don’t see as much interaction on a topic like the Hong Kong protests on the platform.
But of course, BuzzFeed News’ investigation is far from conclusive evidence that no censorship is occurring on the platform. As the Washington Post noted, “It’s impossible to know what videos are censored on TikTok: ByteDance’s decisions about the content it surfaces or censors are largely opaque.”
We can’t see inside TikTok’s algorithm, or into the actual practices of its content moderation team. Although we know that it leaves up posts about the Hong Kong protests, we can’t know whether or not they are being shadow banned or penalized.
If the platform were to simply remove all of these posts, it would be suspicious. If it left them up but rigged its algorithm against them instead, it could publicly state that the topics just aren’t popular while quietly restricting them.
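To illustrate why “the posts are still up” proves little, here is a minimal, entirely hypothetical sketch of how a recommendation score could be quietly penalized for certain topics. None of the names, topics or numbers reflect TikTok’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    predicted_engagement: float  # stand-in for the output of an engagement model

def ranking_score(post, suppressed_topics, penalty=0.01):
    """Hypothetical feed-ranking function with a hidden topic penalty.

    Multiplying the score by a small factor keeps the post technically
    available while ensuring it is almost never recommended -- from the
    outside, indistinguishable from a topic that simply isn't popular.
    """
    score = post.predicted_engagement
    if post.topic in suppressed_topics:
        score *= penalty
    return score

# A post on a suppressed topic scores far lower than an otherwise identical one.
print(ranking_score(Post("protest coverage", 0.8), {"protest coverage"}))  # 0.008
print(ranking_score(Post("dance challenge", 0.8), {"protest coverage"}))   # 0.8
```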
Other social media platforms could do the exact same thing, and we have no evidence that this type of censorship is currently being used on TikTok. However, because of the potential for pressure from the Chinese Government, we still need to be skeptical.
Other incidents
- There have been allegations that TikTok censors posts in India about its border clash with China. There are also claims that it shadow bans content about Indian unity. There seem to be some strange things happening, although TikTok denies restricting any political content unless it goes against its community guidelines. The app has since been banned in India.
- The New York Times quoted an anonymous former TikTok content moderator who told the paper that the company’s policy was to “allow such political posts to remain on users’ profile pages but to prevent them from being shared more widely in TikTok’s main video feed”. The source stayed anonymous and their credibility is unknown, but so many mentions of shadow banning start to make one wary of the platform.
- Former TikTok guidelines limited the reach of content from LGBT people, people with disabilities, and people who are overweight. TikTok claims that these were a flawed attempt to prevent bullying on the platform, and that the guidelines are no longer in place. While such practices are obviously abhorrent, they aren’t central to the political censorship issues that we are mainly discussing in this article.
Propaganda & censorship in the social media age
Many of the allegations we mentioned are concerning, but it’s important to discuss the issues in the right context. TikTok is a relatively recent social media phenomenon, so if we want to understand how it could be used for propaganda and censorship, we need to consider how it compares to other social media platforms, as well as how they can affect our societies.
Social media has completely changed the way that people interact with each other and receive much of their information. According to WeAreSocial, the most popular platforms have the following numbers of monthly active users:
- Facebook – 2.5 billion
- YouTube – 2 billion
- WhatsApp – 1.6 billion
- WeChat – 1.15 billion
- Instagram – 1 billion
- TikTok – 0.8 billion
Worldwide, internet users between the ages of 16 and 64 spend an average of almost 2.5 hours a day on social media, with Americans averaging just over 2 hours. A 2019 Pew Research survey found that 55 percent of Americans get their news from social media either “often” or “sometimes”. Facebook alone was a news source for 52 percent of American adults.
The numbers make it clear. Social media plays a pivotal role in how people get their information. It’s not just where they read their news, either. It’s also a place where people have political discussions and see text posts, memes, pictures and videos that can influence them as well.
Black-box algorithms are in control
It’s not only the broad reach that makes social media more concerning than previous mediums like websites, television, the radio and newspapers. One of the chief worries is that the information we see is controlled by complex algorithms.
Companies keep the specifics of how these function as closely guarded secrets. The workings of these algorithms are commonly referred to as black boxes, because we can’t see inside them.
In earlier times, political information was mainly controlled by politicians, journalists and editors. While these individuals certainly still have a role in the way information is disseminated (and they were never impartial), they are no longer the gatekeepers they were before. These apps stand in between, and there is very little transparency about what actually happens under the hood of these algorithms.
This makes it easy for these social media companies to manipulate us. They do it openly in their advertising to make money. Theoretically, they could also do so to meet their own political goals. There have even been many allegations that Facebook censors conservatives.
Not only is there potential for these companies to censor information and spread propaganda for their own objectives, but there’s also the risk that they can be manipulated by third parties. We will discuss an example of this in our section on Russian interference in the 2016 elections.
Micro-targeting
Another major issue is the amount of data these companies hold on their users, and how they can wield it in pursuit of their goals. Let’s compare political advertising on TV with political advertising on social media, to give you an idea of just how much power this data confers.
If you’re a campaign manager in a hotly contested election, you might want to run a smear ad against an opponent. When you run the ad on TV, the best you can really do is run it on a certain channel or during a program that has a particular audience.
You might be able to get it in front of a conservative audience by putting it on Fox News, or in front of a more liberal audience during a show like Saturday Night Live, but how would you get it in front of more specific groups, such as undecided voters in swing states?
Something like Facebook’s political ad microtargeting platform allows campaigns to target far more specifically, down to details like age, income, location and much more. The company even offers a look-alike audiences feature that allows advertisers to upload lists of their existing audience and use Facebook’s complex algorithm to reach out to others who will be just as receptive to the messaging.
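As a rough illustration of the look-alike idea, the sketch below ranks candidate users by how closely their feature vectors resemble the average profile of an uploaded seed list. It is a toy model built on assumed inputs, not Facebook’s actual method.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def lookalike_audience(seed_users, all_users, top_n=1000):
    """Rank non-seed users by similarity to the seed audience's average profile.

    Each user is a (user_id, feature_vector) pair; the features might encode
    age bracket, location, interests and so on. Real platforms use far richer
    models, but the principle -- find people who resemble an uploaded list --
    is the same.
    """
    dims = len(seed_users[0][1])
    centroid = [sum(vec[i] for _, vec in seed_users) / len(seed_users) for i in range(dims)]
    seed_ids = {uid for uid, _ in seed_users}
    candidates = [(uid, vec) for uid, vec in all_users if uid not in seed_ids]
    candidates.sort(key=lambda u: cosine_similarity(u[1], centroid), reverse=True)
    return [uid for uid, _ in candidates[:top_n]]
```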
Tools like these can help immensely in creating effective and efficient campaigns. In a study called Politics in the Facebook era, Liberini et al. estimated that intense Facebook campaigns could increase the likelihood of voting by 5-10 percent, and that targeted campaigning could increase the likelihood of a non-aligned voter voting for a certain candidate by 5 percent.
These are immense powers that the platforms have, but they can also be abused by both domestic and foreign actors.
Note that TikTok does not allow political ads, but it’s not out of the realm of possibility for users of certain demographics to be targeted with supposed organic content that carries a specific political message. There is no evidence of this having happened on the platform, but it’s important to discuss potential dangers.
The rapid spread of false news
The rise of social media has also changed the way that false information is disseminated. MIT’s The Spread of True and False News Online came to some harrowing conclusions about the spread of false news on Twitter. The researchers found that, “Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories.”
They found that false news was about 70 percent more likely to be retweeted than the truth. The results were even more concerning for false political news, which reached 20,000 people nearly three times faster than all other types of false news reached 10,000 people.
The researchers hypothesized that people were more likely to retweet false news because it was novel, surprising and could convey social status. If a person was one of the earliest to retweet this novel information, it could imply that they were connected or had inside information. Further testing concluded that falsity was more novel than the truth, and that people were more likely to retweet information that was more novel.
Interestingly, the researchers found that although bots accelerated the spread of news, they did not favor false news in particular; they spread true and false news at roughly the same rate.
These results are particularly concerning when we consider how easy this makes it for foreign actors to spread propaganda. Because people are drawn to the novelty of false news and are unlikely to verify its claims, ordinary users practically do the spreading for them.
Manufactured support
Social media also opens up a new venue for manufacturing support for ideas, governments, political figures and more. The dominant conversation can be manipulated by likes, shares, comments and other actions that appear to be organic, but actually originate from bots, paid commenters and other shills.
These troll armies can sway narratives, make it seem like there is mass support where there isn’t, and pillory those who go against the grain. Ultimately, these tactics can have dramatic effects on the dominant views of a culture.
An Oxford University report found that, by 2017, there was organized social media manipulation in at least 28 countries. In the majority of cases, it was used by governments against their own populations, but some campaigns targeted the citizens of other countries.
In its 2017 Freedom on the Net report, Freedom House found that “manipulation and disinformation tactics played an important role” in the elections of 18 countries.
Foreign influence over social media
There’s nothing new about states trying to influence politics in other countries. It’s just part of the constant geopolitical struggle for power. However, using social media as a tool for censorship and propaganda is a recent development.
The best example we have is Russia’s meddling in the 2016 elections and beyond, which we’ll cover in the following section. Another major player is Iran, which was found by Reuters to be linked to influence campaigns in Egypt, Pakistan, Yemen, Sudan and other countries.
In 2019, Twitter shut down more than 7,000 accounts linked to Iran, many of which were tweeting anti-Trump messages. The National Counterintelligence and Security Center (NCSC) Director William Evanina released a statement assessing that “…Iran seeks to undermine U.S. democratic institutions, President Trump, and to divide the country in advance of the 2020 elections…” because it perceives that Trump’s reelection would result in continued pressure on the Iranian regime.
To be fair, it’s not just countries with bad reputations in Western media that are engaging in this type of manipulation. The US is guilty too, with programs such as Operation Earnest Voice that use fake social media accounts to spread information that aligns with US objectives.
Russian interference in the 2016 US elections & beyond
It wasn’t until late 2017, almost a year after the 2016 presidential elections, that Russia-linked groups were discovered to have been buying ads on Facebook. It was part of an attempt to divide the US populace and influence the elections.
Over the following years, a massive operation was unraveled, said to involve at least 1,000 operatives at Russia’s Internet Research Agency (IRA) and to have reached almost 130 million Americans through Facebook alone.
But it wasn’t just Facebook that the IRA used to disseminate the propaganda. Other platforms included:
- YouTube
- Google+
- Vine
- Meetup
- LiveJournal
- Pokemon Go
- Tumblr
- Medium
The IRA’s activities focused on divisive issues, discouraging voter turnout, and promoting candidates from the political edges, including Bernie Sanders, Donald Trump and Jill Stein. Ultimately, the goal seems to have been to divide the country and make the political scene more chaotic.
It aimed to reduce the vote for Hillary Clinton by posting political content with anti-Clinton sentiments. The IRA targeted specific groups, encouraging a range of different actions. These included:
- Muslims – Content pushed them toward candidates to the left of Clinton, or even for Trump.
- African Americans – Posts aimed to make them boycott the elections.
- Extreme right-wing voters – Content encouraged them to be more confrontational.
There are so many separate factors that influence how an individual votes that we can never be certain how much of an effect Russia’s campaign had on the 2016 elections. Intelligence officials, academics and politicians sit on either side of the fence, along with many who believe that we simply can’t know for sure.
Whether or not the Russian campaign did influence the election doesn’t really matter. The more significant point is that an adversary launched a wide-sweeping disinformation campaign that could have influenced the election, could have undermined democracy, and may have contributed to many of the polarization issues we see today.
These elements are so important that the potential alone means we have to combat these threats with full force. Unfortunately, Russia’s social media meddling didn’t end in 2016.
In late 2019, Mark Zuckerberg announced that more sophisticated campaigns from Russia, Iran and China had been discovered and thwarted. In 2020, journalists from CNN also discovered social media accounts linked to Russia that stoked racial tensions. A 2020 study from the University of Wisconsin-Madison found that Russia-linked accounts were still promoting divisive issues such as race, gun laws and immigration.
If you still don’t think that social media manipulation is a threat to our democracies, you need to remember that Russian operatives have been acting on social media platforms that they have no links to. Imagine a worst-case scenario where the Chinese Government pressures TikTok to alter its algorithm to increase propaganda and sow discord. How much more devastating could it be?
Propaganda & censorship in China
One of the main worries about TikTok is that its parent company is based in China. While China is far from the only country with a questionable record on censorship and propaganda, it is one of the most extreme. The Chinese Government’s actions both inside and outside the country raise concerns about what could happen if it decided to pressure TikTok.
Propaganda & censorship within China
The policies and technical measures that make up China’s internet censorship system are known as the Great Firewall of China. The wordplay reflects not only its breadth and immensity, but also the serious lengths the government is willing to go to in order to keep foreign influence out.
Domestic Chinese social media apps not only share user activity with the authorities upon request, but are also heavily censored. This is usually done by human censors or by automatic filters that block content based on certain keywords.
We will use WeChat as an example, but other platforms face similar controls. A research project from the University of Hong Kong found that the most censored topics on WeChat in 2018 included the trade war with the US, US sanctions against ZTE, the arrest of Huawei’s CFO and the investigation of a businessman for economic crimes.
A study by researchers from Citizen Lab found that coronavirus information is also being censored on WeChat. This included criticism of the government and references to Dr Li Wenliang.
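As a rough sketch of the keyword approach these studies describe, an automatic filter can be as simple as matching each message against a blocklist. The keywords below are placeholders; real systems use huge, frequently updated lists and far more sophisticated matching.

```python
# Illustrative only: a toy keyword filter of the kind researchers have
# documented on Chinese platforms. The blocklist here is a placeholder.
BLOCKED_KEYWORDS = {"placeholder sensitive phrase", "another blocked term"}

def is_blocked(message):
    """Return True if the message contains any blocked keyword."""
    text = message.lower()
    return any(keyword in text for keyword in BLOCKED_KEYWORDS)

def filter_messages(messages):
    """Silently drop messages that trip the keyword filter, keeping the rest."""
    return [m for m in messages if not is_blocked(m)]
```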
These censorship mechanisms are complemented by groups such as the 50 Cent Party, which is paid to flood social media with pro-government comments and disparage opposition voices. A 2017 Harvard study estimated that the 50 Cent Party fabricates around 448 million social media posts a year in support of the government narrative.
External propaganda & censorship linked to China
When it comes to places outside mainland China, the Chinese Government doesn’t have the same powers to censor information. However, it is still active in influence campaigns. One example was the Twitter and Facebook accounts that were removed amid the Hong Kong protests.
Twitter removed close to 1,000 accounts, stating that they originated from within China and “were deliberately and specifically attempting to sow political discord in Hong Kong, including undermining the legitimacy and political positions of the protest movement on the ground… we have reliable evidence to support that this is a coordinated state-backed operation.”
Facebook removed seven pages, three groups and five accounts, citing “…coordinated inauthentic behavior as part of a small network that originated in China and focused on Hong Kong.”
Chinese state media outlets aimed at English-speaking audiences have also been attempting to shape the narrative of coronavirus to paint the country in a better light.
There are also strong allegations that Chinese operatives have been attempting to influence Taiwanese elections through social media.
Although there doesn’t appear to be any evidence that China is harnessing social media to influence US elections, the National Counterintelligence and Security Center Director William Evanina released a statement revealing that it was active in other ways:
China has been expanding its influence efforts ahead of November 2020 to shape the policy environment in the United States, pressure political figures it views as opposed to China’s interests, and deflect and counter criticism of China. Although China will continue to weigh the risks and benefits of aggressive action, its public rhetoric over the past few months has grown increasingly critical of the current Administration’s COVID-19 response, closure of China’s Houston Consulate, and actions on other issues.
The statement does not make any mention of social media as a medium for action, but it does imply that the Chinese Government is considering the pros and cons of various forms of influence as tensions between the two countries change.
While it’s good news that there aren’t any direct confirmations of Chinese electoral influence via social media, we should also keep in mind that the extent of Russian interference in the 2016 elections was only discovered after the fact.
How is TikTok different from other platforms?
We’ve discussed social media in general so far, because it’s important to understand what these platforms are at their essence, and what they are capable of. Since TikTok’s popularity is quite recent, it has yet to be abused on a similar scale to those that we have already covered.
However, there are a few differences between TikTok and these other platforms, which are important to highlight:
- TikTok banned political ads – In late 2019, TikTok banned political ads from appearing on its platform. This makes it impossible for foreign actors to promote politicians through micro-targeted political ads. However, it’s still possible to create or fund content from individuals, which could still have significant effects.
- TikTok’s audience skews young – It’s hard to know TikTok’s precise demographics, with various sources asserting a wide range. The New York Times reported seeing internal documents indicating that 18 million of TikTok’s daily active users in the US, about a third, are under 14, with a further 11 million of unknown age. These numbers imply that a large portion of users are under voting age, which may lead many to write the platform off as politically unimportant. However, that still leaves tens of millions over voting age, and the number of over-18s has apparently tripled in a single year. Before we dismiss TikTok as a threat, we also need to remember that Facebook was mainly populated by young people in its early days, and only became more universal over time.
- TikTok is a short-form video-streaming app – TikTok lets users create 15-second videos, or stitch several together into a one-minute clip. By nature, this limits the complexity of any political discourse on the app. The fact that the content is in video format also makes it harder to measure and analyze trends on the platform, as well as to detect false information.
TikTok’s attempts to distance itself from propaganda & censorship concerns
The incidents we have just discussed raise reasonable fears about TikTok’s potential for propaganda and censorship. Despite this, they don’t really amount to any hard evidence that TikTok has done anything worse than any of its competitors. However, we need to hold the company to a higher standard, due to its links to China and the potential risks.
In the wake of all of the media attention and condemnation TikTok has received in the past six months, it seems to be fighting hard to prove its separation from China and that it has no nefarious intentions.
Perhaps it’s just genius PR and lobbying, but it’s hard to deny the efforts that the company has gone to:
- TikTok hired an American CEO, Kevin Mayer, formerly of Disney.
- TikTok announced plans to hire 10,000 people in the US.
- TikTok banned political ads.
- TikTok will partner with PolitiFact and Lead Stories to monitor misinformation ahead of the 2020 presidential elections.
- TikTok is working with experts to protect against foreign influence on the platform. These include the Department of Homeland Security’s Countering Foreign Influence Task Force.
- TikTok is banning manipulated content such as shallowfakes and deepfakes.
- TikTok has formed a content advisory council to advise the platform on content moderation policies. The council includes two former congressmen, as well as academics involved in law, government, tech and mental health.
- TikTok releases transparency reports that show how many videos were removed and in which countries.
- TikTok is opening Transparency and Accountability Centers in both Washington and Los Angeles. The company promises to allow guests to look over how content moderators review content, let them examine the platform’s data security practices, and allow them to review its source code. TikTok states that the Los Angeles center was already supposed to be open, but has been delayed due to coronavirus. The company’s website states that virtual tours are available, but at the time of writing, they could not be found via a site-search.
- ByteDance is seemingly open to divesting its US interests to other investors due to the current political situation and the threat of a ban. Presumably, if this were to take place, it would remove the links to China and any of the heightened worries regarding propaganda and censorship.
Is TikTok still dangerous?
Many of the steps that TikTok has taken are admirable. They may only be the result of political and media pressure, but the company does seem to be going to greater lengths to be more transparent than any of its rivals.
To be fair to TikTok, most of the concerns surrounding it are no worse than those of its American competitors. You can compare what we’ve discussed in this article and the previous one to Facebook’s long list of controversies. For the most part, there isn’t really anything that exceptional about TikTok.
There are still some legitimate worries, particularly surrounding shadow banning. However, there is no indication that China is trying to influence the US through TikTok. Even the Director of the National Counterintelligence and Security Center made no mention of China using social media to influence US elections, while the same statement expects such attempts from “Kremlin-linked actors”.
This doesn’t mean that it couldn’t happen, that it won’t happen, or that intelligence agencies don’t have information that they are keeping from us. However, it’s worth keeping in mind that China and the US have a very different relationship compared to Russia and the US.
From January to June 2020, the US exported about $2.3 billion worth of goods to Russia, importing almost $8.6 billion. In comparison, it exported almost $49.5 billion worth of goods to China, and imported around $181 billion.
These figures show that China has far more to lose than Russia if it antagonizes the US. While China still certainly agitates and jostles for power, it seems less likely to do something as egregious as trying to influence US elections over social media like Russia did. This would likely result in significant sanctions and have strong detrimental effects on trade.
One factor that is often neglected in this debate is whether the Chinese Government, even if it could force TikTok to act as its own propaganda machine, would actually want to. If conclusive evidence of influence over the platform were discovered, it could lead to sanctions against China, as well as a significant drop in TikTok’s international revenue.
Would the Chinese Government really want to sacrifice the company to influence or disrupt the US? TikTok is a multi-billion dollar company with plenty of room to grow. It also carries with it a certain prestige as the first Chinese-linked software company to challenge the American major players. Would it be worth it?
The Director of the National Counterintelligence and Security Center assessed that China wants President Trump to lose the 2020 presidential election. Given that the major polls have President Trump trailing Biden by an average of 9 percent at the time of writing, would it make sense to risk TikTok’s international business?
Despite all of this, there are a few major points that we can’t escape:
- The US and many Western nations currently have strained relationships with China.
- TikTok’s parent company is based in Beijing. Chinese law and the status quo can make it difficult for Chinese companies to be independent.
- Social media platforms can have strong influences on our societies and elections.
- We may not find out about meddling until it’s too late, just like the Russian interference in the 2016 elections.
In our previous article, we concluded that for the average person, TikTok is no more of a threat than any other social media platform, at least from the perspectives of spying and data collection. However, there are still some situations where it should be avoided.
But when it comes to matters of propaganda and censorship, can we really come to the same conclusions? The stakes are much higher. We aren’t just talking about the potential for personal data to end up on a government’s servers instead of Facebook’s. We’re talking about having the power to divide our societies and sway elections against the will of the people.
Even if you believe that the Chinese Government has no current intentions of using TikTok to manipulate the US or other Western countries, is it really a risk worth taking? Given the high stakes, should we take steps to try and eliminate the risk, even if it’s already small?
So what do we do about TikTok?
If you consider the status quo of TikTok to be a chink in the armor that needs to be eliminated, it looks like we have three major options:
- Ban TikTok.
- Sell TikTok to other investors.
- Allow TikTok to operate as is, but under close scrutiny and tighter regulations.
If we were to try to ban TikTok, we would run into a series of technical and legal issues. As we discussed in our previous article, it’s never been done before and may even violate the First Amendment. For these reasons, it’s probably not the best approach.
Selling TikTok to other investors probably isn’t the worst idea. It should eliminate the chances of the Chinese Government being able to manipulate the platform to pursue its own ends. The Chinese Government has also made it incredibly difficult for Western social media platforms to operate on the mainland, so it wouldn’t necessarily be unfair, either.
But selling TikTok wouldn’t help to make it more transparent and accountable, which is one of the issues that we face with all of these social media giants. If you agree that these platforms have immense power to influence our societies, then it follows that not being able to see inside them makes countries vulnerable.
Facebook, TikTok, Google, Twitter and others guard their algorithms, meaning that only insiders know precisely how they operate. There is no government oversight, nor third-party inspections.
Given how much power they have, maybe there should be.
This isn’t a call for them to open up their proprietary code or share their intellectual property with the world, but to recognize the tremendous power these platforms have and place them under the oversight necessary to protect all of us, not just their financial interests.
Perhaps it could involve opening up government inspection bodies, or allowing trusted organizations like the Electronic Frontier Foundation to inspect their code and oversee their moderation practices at any time.
There could be a whole pile of protocols and NDAs to protect company IP, while still giving trusted outsiders insight into these platforms that have such dramatic effects on our world.
Of course, this is a pipe dream that these companies would fight against at every turn, but at least it’s a proposition to solve some of the issues that exist industry-wide, rather than those that surround a single company.
TikTok seems to be taking many of these steps on its own with its Transparency and Accountability Centers. It seems to be doing this by choice, as a means to combat the allegations against it. We don’t know whether these go far enough yet, but surely it would be a positive step if we had regulation that forced everyone in the industry to be more transparent.
See also: How to Unblock TikTok with a VPN