A few months before Taiwan’s local elections in 2018, a tropical storm hit the city of Chiayi, killing at least six people and displacing thousands of others. President Tsai Ing-wen traveled to Chiayi to show her support for the flood victims, but soon after, an innocuous photo of the president riding in an armored vehicle became a nationwide scandal. The image was cropped and cast in black and white, and a caption was added, falsely suggesting that she came with armed gunmen and was smiling cruelly at the misfortune of a nearby man biking through knee-high floodwaters.
The photo was shared widely on social media and forwarded in private group chats. By election day, 85% of people in Taiwan said they had seen the image, according to one report. That year, candidates from the president’s Democratic Progressive Party, who often support Taiwan’s independence from China, lost in a landslide under a cloud of suspicion that Beijing may have tried to sway the results.

The image from Chiayi was part of a wider wave of disinformation that spread online in Taiwan ahead of the 2018 elections. Afterward, the Taiwanese government went on high alert and quickly established a working group within the Executive Yuan, the country’s executive branch, to tackle the issue, as the country geared up for its presidential elections in 2020. Much of the false content had circulated on the encrypted messaging app Line, which is one of the most popular communication platforms in Asia and is used by 90% of Taiwan’s population. The elections pushed the Tokyo-based social media company to reckon with the scope of misinformation on its platform.
By July 2019, Line had partnered with local fact-checking organizations like Taiwan Fact Check Center and Cofacts to launch a new program in Taiwan aimed at tackling misinformation. Line Fact Checker allows users to voluntarily report suspicious messages and receive an answer in real time about whether they’re true or not. After users add the official Line Fact Checker account to their contacts, they can forward or copy messages they want verified. The system will immediately respond with a truthfulness rating: An “X” symbol indicates the content is completely false, an orange triangle means it’s partially false, and a green check mark confirms it’s true. All the responses link to full reports authored by Line’s fact-checking partners.
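As a rough sketch of that reply flow, the Python below maps each verdict to the symbol described above and attaches a link to the full report. The function and field names are assumptions for illustration, not Line's actual code.

```python
# Hypothetical sketch of the real-time reply a user receives after
# forwarding a message; the symbols follow the article's description.
RATING_SYMBOLS = {
    "false": "X",                        # completely false
    "partly_false": "orange triangle",   # partially false
    "true": "green check",               # confirmed true
}

def build_reply(rating: str, report_url: str) -> str:
    """Format the instant response, linking to the full fact-check report."""
    return f"[{RATING_SYMBOLS[rating]}] Full report: {report_url}"

print(build_reply("partly_false", "https://example.org/report/123"))
# [orange triangle] Full report: https://example.org/report/123
```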
The initiative has allowed the company and its partners to track some forms of harmful content without breaking end-to-end encryption. For years, other encrypted platforms like WhatsApp and Signal have struggled with how to balance user privacy with the need to combat harmful content. The overall scope of Line’s program is limited, but something like it could be replicated by other messaging platforms around the world, especially since it doesn’t rely on particularly futuristic tech.
“This is novel, in that a big social media company is pressing on this, in a way no other tech company has,” said Anunay Kulshrestha, a cryptography researcher at Princeton University. But, he said, “In terms of whether it’s technically novel or methodologically novel, I doubt that is the case.” In other words, it wouldn’t be hard for Signal or WhatsApp to build a similar system.
Taiwan is perhaps uniquely accustomed to dealing with disinformation. The Chinese Communist Party considers the island to be part of China, and researchers say it has spread pro-Beijing propaganda in the country for years. “Disinformation is not new in Taiwan,” said Nick Monaco, chief innovation officer at Miburo Solutions, a digital consulting firm focused on defense and disinformation. “It’s functioning as a de facto independent country, and China does not want that to be the narrative.”
Monaco co-authored an extensive report on Chinese-sponsored disinformation during the months leading up to Taiwan’s presidential election last year, which he said was one of the most aggressive campaigns he had ever seen. The country also has a robust domestic public relations sector, which is notorious for disseminating political disinformation for the right price. In the first half of 2019, 46% of people in the country reported receiving or reading suspicious information online, according to a survey conducted by Line.
Monaco said that Line has increasingly become the primary platform where disinformation spreads in Taiwan, overtaking sites like Facebook. Research conducted by Austin Wang, a political science professor at the University of Nevada, Las Vegas, has also found that Line users have lower levels of media literacy, on average, than users of other platforms. That makes Line’s fact-checking program especially urgent.
When users flag a message to the Fact Checker account, it automatically analyzes the contents for specific keywords and phrases. Line then searches a database of previously identified misinformation for matches. If there isn’t an exact match, the system will suggest a carousel of similar topics that have circulated on the app.
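A minimal version of that lookup might look like the sketch below, assuming an in-memory database, a naive keyword extractor, and toy sample entries; none of it is Line's actual implementation.

```python
# Illustrative only: a toy database of previously fact-checked claims.
KNOWN_MISINFO = {
    "the president smiled at flood victims": ("false", "https://example.org/report/1"),
    "flood aid was diverted by officials": ("partly_false", "https://example.org/report/2"),
}

def extract_keywords(text: str) -> set[str]:
    # Real systems use language-aware tokenization; a plain split stands in here.
    return {word for word in text.lower().split() if len(word) > 3}

def check_message(text: str):
    normalized = text.lower()
    # 1. Exact match against the database of identified misinformation.
    if normalized in KNOWN_MISINFO:
        return ("verdict", KNOWN_MISINFO[normalized])
    # 2. No exact match: build a carousel of similar, already-checked topics.
    keywords = extract_keywords(text)
    similar = [claim for claim in KNOWN_MISINFO if keywords & extract_keywords(claim)]
    if similar:
        return ("carousel", similar)
    # 3. Nothing similar has circulated: offer to submit for human review.
    return ("offer_submission", None)
```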
In cases where there’s no corresponding material, Line will ask the user if they want their message fact-checked. If they say yes, Line sends the content to researchers at its partner fact-checking groups, according to Kara Huang, the project manager for Line Fact Checker. A new report usually takes three to seven days to create. Line’s system tracks how many times an unchecked claim or article has been submitted and prioritizes the most frequently reported content (misinformation about Covid-19 is always considered high priority).
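That triage step amounts to a ranking queue. The sketch below is one guess at the logic, with the Covid-19 bump hardcoded; the class and its rules are hypothetical, not Line's system.

```python
# Hypothetical prioritization of unchecked claims, per the description above.
class SubmissionQueue:
    def __init__(self):
        self.report_counts: dict[str, int] = {}  # claim text -> report count

    def report(self, claim: str) -> None:
        """Record one more user submission of an unchecked claim."""
        self.report_counts[claim] = self.report_counts.get(claim, 0) + 1

    def ranked(self) -> list[str]:
        """Covid-19 claims first, then everything else by report count."""
        def priority(claim: str):
            is_covid = any(k in claim.lower() for k in ("covid", "coronavirus"))
            return (not is_covid, -self.report_counts[claim])
        return sorted(self.report_counts, key=priority)
```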
Irene Pasquetto, a professor at the University of Michigan who has researched misinformation on WhatsApp, said she’s found it tends to spread in smaller group chats with between five and 10 people. “We found that the most effective way to debunk disinformation was to have the correct information coming from someone who had a close relationship,” she said. “And this was true across countries.” Thirty-three percent of Line users in Taiwan said they would be willing to report suspicious content they received, and 25% said they would forward a fact-check to their own contacts, according to market research Line conducted before the Fact Checker program launched.

Even if users don’t flag anything themselves, they can still view debunked content that Line highlights each day on the Line Today homepage, an in-app portal that aggregates stories from mainstream news outlets and hosts all official fact-checking reports. Each report includes detailed screenshots and explanations and even displays the number of people who have reported each piece of misinformation.
The project has some obvious limitations. Line Fact Checker can’t automatically review visual content, meaning political memes and videos uploaded directly to Line are outside the bounds of the system. (Line told Rest of World that it’s working on launching the capability later this year.) The onus is also still on individual users to report what they see. “Ordinary people have no incentive to check and help spread the facts,” said Wang. “Some have the motivation during the election because it may impact the election results, but it’s the environment that’s the motivating factor.” Wang’s current research has found that only 18% of people in Taiwan visited a fact-checking site during the 2020 election season.
But among peers like WhatsApp, Telegram, and Signal, Line Taiwan is still the first to build a centralized fact-checking program directly within its app. WhatsApp has taken steps to address misinformation, like introducing a feature that allows users in some countries to search Google for more context about forwarded messages. Line, though, is the only app that verifies claims in real time and offers a centralized portal like Line Today.
A spokesperson for WhatsApp said the company has been “working for years to build new ways for users to get more information about the messages they have received,” including working with over 48 fact-checking organizations. “Even if we were not end-to-end encrypted, the real-time nature of text messaging is very different than more public forms of social media. SMS is not encrypted yet there is no fact checking on SMS. We believe it’s our role to help connect people to organizations they choose.”
Part of the problem is the way that end-to-end encryption works. The technology, which is used by Line, WhatsApp, Signal, and other apps, makes it impossible for anyone but the sender or receiver of a message to view its contents. That leaves companies in the dark about what’s happening on their own platforms but also means they can avoid taking responsibility for it. “They say it’s the nature of the communication on WhatsApp, but the truth is that it completely absolves them from having any sort of responsibility, in case dangerous and problematic information is shared,” said Pasquetto.
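In practical terms, each device holds a private key that never leaves it, so the platform only ever relays ciphertext. The snippet below illustrates the principle with the open-source PyNaCl library; it is a generic demonstration, not Line's or WhatsApp's actual protocol.

```python
from nacl.public import PrivateKey, Box

# Each user generates a keypair on their device; only public keys are shared.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"see you at 6")

# The platform relays ciphertext it cannot read; only Bob can decrypt it.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"see you at 6"
```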
Line Fact Checker is built around preserving end-to-end encryption, which the platform calls “Letter Sealing.” But when users report a message to the program, they agree that whatever they forward will, in effect, be decrypted and shared with Line’s fact-checking partners. It’s a user-initiated action, said Judy Wu, a spokesperson for Line Taiwan, and doesn’t contradict the company’s promise that chats are secure. “We cannot see what people are talking about; that’s the top priority,” she explained. “We don’t want this kind of service to lead to Line being in the daily conversations of each one of your chat rooms.”
Cryptography researchers have begun to poke holes in the assumption that more moderation isn’t possible on secure messaging apps. Harmful images, for example, can be flagged using technology called “perceptual hashes.” Once a database of banned images is compiled, the platform can compute a fingerprint, or hash, for each one; unlike cryptographic hashes, perceptual hashes are designed so that visually similar images produce similar fingerprints. When a user shares a picture, the image’s hash can then be checked against the list of banned content and the image blocked before it is ever sent. On WhatsApp, this process could happen entirely on the user’s device and without breaking encryption, according to a report by researchers from MIT and Universidade Federal de Minas Gerais. Facebook has also already used a similar technique to curb the spread of revenge porn.
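As a concrete example, here is a minimal Python sketch of one common variant, the “average hash”; the banned-list entry is hypothetical, and production systems rely on sturdier algorithms such as Facebook’s open-sourced PDQ or Microsoft’s PhotoDNA.

```python
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> int:
    """Shrink to an 8x8 grayscale grid and record which pixels beat the
    mean brightness; visually similar images yield similar bit patterns."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Near-duplicate images differ in only a few bits."""
    return bin(a ^ b).count("1")

# Hypothetical entries, shipped to the device so no plaintext leaves the phone.
BANNED_HASHES = [0x81C3E7FF7E3C1800]

def is_banned(path: str, threshold: int = 5) -> bool:
    """On-device check of an outgoing image against the local hash list."""
    h = average_hash(path)
    return any(hamming_distance(h, b) <= threshold for b in BANNED_HASHES)
```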
If platforms like Line or WhatsApp were to implement a hash system, the companies would still need to effectively communicate how it works to their users. “I think the challenge is, how do you make it clear to users that even with having a hash list on your local device, this is not in any way giving access to the platform owner about your messages?” said Scott Hale, a misinformation and cryptography researcher at the Oxford Internet Institute.
In Taiwan, the success of Line Fact Checker may be as much about the company as about the island itself. Line had the luxury of building the program on top of Taiwan’s existing infrastructure, which includes a robust network of fact-checking organizations, an Executive Yuan working group that shapes public policy around misinformation, and an active civic tech sector. “Everyone helps each other combat misinformation,” said Billion Lee, co-founder of the Taiwanese fact-checking organization Cofacts, which built an independent fact-checking bot on Line that predates the company’s official program.
And though Line’s initiative is more robust than those on similar chat apps, some experts argue that the company could still be doing more. “Line doesn’t support the fact-checking organizations financially, the way Facebook does with its fact-checking partners,” said Ttcat, a Taiwanese LGBTQI activist and founder of the misinformation research organization Doublethink Lab. Lee echoed the sentiment, pointing out that Line functions as a platform for the fact-checking work to be shared, but the heavy lifting is still done by volunteers at organizations like Cofacts.
For now, it’s not clear if other companies will adopt a similar approach. But the initiative still serves as important evidence that encrypted apps can do more to curb misinformation — if they want. “I think [Line] is innovating and trying something new,” said Hale. “And this is an area where we need innovation, where we need people and platforms to try new approaches.”