Following the deadly Easter Sunday bombings, access to social media was blocked. The only way to get around it was by using a VPN. Then on the morning of the 1st of May 2019, this ban was lifted. Only 12 hours later, Facebook held the keynote of its annual F8 developer conference. In the days that followed, we've been subjected to social media blocks again following episodes of mob violence. The justification for these blocks has been to combat hate speech and misinformation. So what do this year's announcements from F8 mean for us in light of these social media bans?
The future is private?
"We don't exactly have the strongest reputation on privacy right now, to put it lightly. But I'm committed to doing this well and starting a new chapter for our product," said Mark Zuckerberg – CEO of Facebook – during the opening keynote. He's right about that. The past few years have seen the company deal with multiple scandals, ranging from data breaches to interference with elections.

So it's not surprising that practically everyone taking the stage at F8 said, "The future is private." But outside the euphoric halls of F8, few would believe these words anymore. Earlier this year, we saw the departure of Chris Cox, Facebook's Chief Product Officer and the man widely considered the architect of the News Feed. Following his departure, Mark Zuckerberg unveiled a new philosophy for the company in a 3,200-word blog post.
Describing this new philosophy, Mark Zuckerberg said, "I believe the future of communication will increasingly shift to private, encrypted services where people can be confident what they say to each other stays secure and their messages and content won't stick around forever." In other words, the company will move away from the iconic News Feed. Instead, it will invest its efforts in private messaging.
Redesigning for groups, events, and friends: FB5
"This is the biggest change we've made to the Facebook app and site in five years," is how Mark Zuckerberg described it. At F8 2019, he announced that the entire Facebook app would be redesigned. Dubbed FB5, this redesign would focus more on Groups and Events. The update immediately landed on iOS and Android. A desktop version is expected to arrive in the coming months.
The new update puts a dedicated tab for groups at the center of the app. Inside this tab, you'll find a feed with updates from all the groups you're in. There's also a discovery tab for new groups you might be interested in. Facebook has stated that, over time, it aims to make groups more integral across other parts of the app.

Alongside the groups tab will be another for events. This updated feed aims to give you a better idea of events around you and help you find ones you might like. Facebook also announced a new feature called Meet New Friends. As the name suggests, this aims to help you find new friends who share your interests – an expansion of the existing People You May Know feature.
This is the future of Facebook: groups where you would ideally make real-world connections with people. But amidst all these announcements, it was obvious that something was missing. There was hardly any mention of the iconic News Feed. Yet, its disappearance hardly surprised anyone. After all, much of Facebook's woes have been attributed to it.
Hate Speech and Misinformation: A Sri Lankan Tale
Only a few hours after the deadly Easter Sunday bombings, the government announced its second social media ban. Following the riots in Negombo, the government once again blocked access to social media for a brief period. At the time of writing, we're living through the fourth social media ban in Sri Lankan history, following another episode of mob violence in the North Western Province.

As we learned during the first social media ban at the time of the Digana riots, the President has full authority to impose one during a state of emergency. However, while the initial social media ban following the Easter Sunday bombings was announced on the 21st of April, the state of emergency only came into effect on the 22nd of April. This, of course, raises questions as to whether the ban was legal.
However, one could argue that drastic actions are necessary when fighting terrorists. Hardly an hour passed after the tragedy before fake news started popping up across social media. It wasn't long before this transformed into hate speech. Yet, it's an argument that only sounds nice in theory. In practice, it doesn't work.

Sanjana Hattotuwa – Senior Researcher at the Centre for Policy Alternatives – has the data to prove it. His data shows that during the social media ban following the Easter Sunday bombings, there was hardly any reduction in the content produced and engagement on gossip, meme, and Sinhala media pages. Furthermore, even the government was producing content on Facebook during this social media ban.
Ray Serrato – Social Media Analyst at Avaaz – also analyzed the frequency of posts in 16 Sri Lankan Facebook groups. His data, too, shows no significant drop in activity following the ban after the Easter Sunday bombings. Based on this data, we can see that the social media ban wasn't effective. People may not understand their privacy settings, but they do know how to use a VPN.

The block didn't stop them from posting a fake bomb scare or a spoiler for Game of Thrones. But when we look at how fake news and hate speech spread following the deadly Easter Sunday bombings, we can't ignore WhatsApp – another Facebook product that's infamous for spreading fake news and hate speech on an industrial scale.
WhatsApp and a systematic approach to misinformation
We're seeing this phenomenon unfold in India. WhatsApp has become an important tool for Prime Minister Narendra Modi and his party, the BJP, which has an army of volunteers spreading fake messages that are anti-Muslim and critical of rival political parties. This is the same strategy Brazilian President Jair Bolsonaro used to come into power. All it takes for misinformation to spread is a single message in a WhatsApp group.
That message is then forwarded to other groups. That's how it lands in one of the groups you're in. To you, it's a message from an old friend in the school batch group or a coworker in the office WhatsApp group. You don't think too much. You know these people. So you forward it to your groups and the people you care about.

This is how misinformation systematically spreads across WhatsApp. It's simple, yet blazingly effective. That's why we're seeing it happen everywhere, including Sri Lanka. All it takes is a few taps, and a message goes viral with the words, "Forwarded as received." But at F8, there was no mention of anything to address this.
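To get a feel for how quickly this fan-out compounds, here's a toy simulation of a rumor hopping between overlapping WhatsApp-style groups. Every number in it – the group count, group size, and forwarding probability – is invented purely for illustration:

```python
# A toy model of misinformation fan-out across overlapping chat groups.
# All parameters are made-up illustrative numbers, not real WhatsApp data.
import random

random.seed(42)

NUM_GROUPS = 200      # hypothetical community of 200 groups
GROUP_SIZE = 50       # ~50 members per group
NUM_USERS = 2000      # drawn from a pool of 2,000 users
FORWARD_PROB = 0.1    # chance a member forwards to another of their groups

# Assign users to groups at random, so groups overlap via shared members.
groups = [random.sample(range(NUM_USERS), GROUP_SIZE) for _ in range(NUM_GROUPS)]
user_groups = {}
for g, members in enumerate(groups):
    for u in members:
        user_groups.setdefault(u, []).append(g)

# Seed the rumor in one group, then let members forward it onward.
reached = {0}
frontier = [0]
hop = 0
while frontier and hop < 10:
    next_frontier = []
    for g in frontier:
        for u in groups[g]:
            for other in user_groups[u]:
                if other not in reached and random.random() < FORWARD_PROB:
                    reached.add(other)
                    next_frontier.append(other)
    frontier = next_frontier
    hop += 1
    print(f"hop {hop}: rumor has reached {len(reached)} of {NUM_GROUPS} groups")
```

Even with a modest one-in-ten chance of any member forwarding, the rumor saturates most of the groups within a handful of hops – no ad budget required.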
Instead, the announcements were focused on new features aimed at businesses. At F8, the company announced that it would now allow businesses to have product catalogs on WhatsApp. Furthermore, it would also allow businesses to accept payments directly through the app. Facebook expects these features to be widely adopted by small businesses like home bakers. Though whether we in Sri Lanka will see these anytime soon is a mystery.
What Facebook has done in Sri Lanka
A Facebook spokesperson shared with ReadMe, "People rely on our services to communicate with their loved ones and offer help to those in need, and we remain committed to helping our community connect safely during difficult times. There is no place for hate, violent or extremist content on our services, and we are taking all steps to remove content that is violating our policies. This includes working with partners in the region to help identify misinformation that has the potential to contribute to imminent violence, identifying content which violates our policies, and ensuring language support for content review."

The company shared with ReadMe that it immediately designated the deadly Easter Sunday bombings as an act of terrorism. In doing so, it banned organizations and individuals involved in the attack. It also began removing any content that was found praising or supporting the attacks and those involved.
Facebook's Community Operations team is also working with a number of civil society organizations. The purpose of this partnership is to identify misinformation that has the potential to contribute to imminent violence or physical harm. These organizations assist the Community Operations team in identifying such content in Sinhala and Tamil. Additionally, the company stated that it's working with AFP, its fact-checking partner, to tackle misinformation.

The Facebook spokesperson also shared with ReadMe, "Facebook is committed to helping Sri Lanka and its communities. Earlier this year, we created a dedicated team across product, engineering, and policy to work on issues specific to Sri Lanka and other countries where online content can lead to violence."
The work this team has done covers multiple aspects. It has helped update and enforce Facebook's community policies. It has also formed partnerships with many civil society organizations. The team has also conducted digital literacy workshops with local Sri Lankan non-profits. The company stated that it has trained 15,000 students on how to stay safe on the internet and aims to train 20,000 by the end of June 2019.

This team at Facebook is also tasked with helping improve Facebook's technology to tackle misinformation and hate speech. Interestingly, the Facebook spokesperson shared with ReadMe, "The rate at which bad content is reported in Sri Lanka, whether it's hate speech or misinformation, is low. So we're investing heavily in artificial intelligence that can proactively flag posts that break our rules."
We also learned that the company has expanded its automatic machine translation to Sinhala to better identify harmful content. The spokesperson also added, "To help foster more authentic engagement, teams at Facebook have reviewed and categorized hundreds of thousands of posts to inform a machine learning model that can detect different types of engagement bait. Posts that use this tactic will be shown less in News Feed. For Sri Lanka, particularly we have extended this to Sinhalese content as well."
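What Facebook is describing here is, in broad strokes, supervised text classification: humans label example posts, a model learns the pattern, and posts that score high get demoted rather than removed. Below is a minimal sketch of that idea, assuming simple bag-of-words features – the posts and labels are invented, and Facebook's actual model is far more sophisticated and not public:

```python
# A minimal sketch of an engagement-bait classifier: label examples by hand,
# train a model, demote (not remove) posts that score high. The training
# data here is invented; Facebook's real features and model are not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-labeled examples: 1 = engagement bait, 0 = a normal post.
posts = [
    "Tag a friend who needs to see this!",
    "Share if you agree, ignore if you don't care",
    "LIKE this post to win a free phone",
    "Comment YES if you love your mother",
    "Here are the election results from last night",
    "Our bakery reopens on Monday",
    "Traffic on Galle Road is heavy this morning",
    "The match has been postponed due to rain",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# High-scoring posts would be shown less in the feed, not taken down.
for post in ["Tag someone who would love this", "New bus timetable released"]:
    bait_score = model.predict_proba([post])[0][1]
    print(f"{bait_score:.2f}  {post}")
```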
Why it's an uphill battle: Language
We can see that Facebook relies on two mechanisms to identify violations of its community guidelines, which include using the platform to spread both fake news and extremist ideas. The first is the traditional reporting mechanism, where users flag content that is then reviewed by moderators, who take it down if it violates Facebook's community guidelines. The second is utilizing AI to proactively hunt down violating content that might not be caught otherwise.
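Both mechanisms ultimately feed the same place: a queue of cases in front of human moderators. The sketch below is our simplified mental model of that flow – the priorities and thresholds are hypothetical, not a description of Facebook's actual system:

```python
# A schematic of the two-pronged moderation flow: user reports and proactive
# AI flags both land in one review queue. Priorities here are hypothetical.
from dataclasses import dataclass, field
from queue import PriorityQueue

@dataclass(order=True)
class Case:
    priority: int                        # lower number = reviewed sooner
    post_id: str = field(compare=False)
    source: str = field(compare=False)   # "user_report" or "ai_flag"

review_queue: "PriorityQueue[Case]" = PriorityQueue()

def user_report(post_id: str):
    # Traditional path: a user flags a post for review.
    review_queue.put(Case(2, post_id, "user_report"))

def ai_flag(post_id: str, confidence: float):
    # Proactive path: a classifier flags content no one reported.
    # Let high-confidence flags jump the queue.
    review_queue.put(Case(1 if confidence > 0.9 else 3, post_id, "ai_flag"))

user_report("post_123")
ai_flag("post_456", confidence=0.95)
ai_flag("post_789", confidence=0.60)

while not review_queue.empty():
    case = review_queue.get()
    # A human moderator checks the post against community guidelines here.
    print(f"reviewing {case.post_id} (via {case.source})")
```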

These methods have stopped videos of decapitations from going viral. But they aren't perfect, and more needs to be done. For starters, the moderators at Facebook work in horrifying and traumatic conditions. Furthermore, Zahran Hashim promoted his venomous extremist ideology in videos that were published on Facebook. Yet, these were never caught by any of Facebook's systems.
This isn't the first time it's happened. When hate speech went viral during the Digana riots, the company stated that it couldn't moderate content in Sinhala. Similarly, it failed to moderate hate speech in Myanmar that was produced in Burmese. The result of this failure was a horrific campaign of ethnic cleansing.

But these failures were in the past. As mentioned above, the company has stated that it has expanded its efforts to monitor content in Sinhala. This includes forming partnerships with civil society organizations. The company is also investing in its AI tools to better moderate content in Sinhala. However, researchers question the effectiveness of these efforts.
Yudhanjaya Wijeratne – Data Scientist at LIRNEAsia – shared on Twitter why AI systems fail at monitoring languages other than English. In heavily simplified terms, these AI systems rely on natural language processing to translate and identify hate speech. But natural language processing is designed to work primarily with English. With languages like Sinhala and Tamil, you'll get weird results.

Yudhanjaya elaborates on this, saying, "Languages such as Sinhala and Tamil are what practitioners call 'resource-poor' – the ones that just don't have the statistical resources required for ready analysis. Years of work are required before firms can do for these languages what can be done for English with a few lines of code. Until the fundamental data is gathered, these are difficult nuts to crack, even for Facebook."
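Here is a heavily simplified illustration of what "resource-poor" means in practice (the example is ours, not Yudhanjaya's): a model whose vocabulary was learned from English text literally sees nothing when handed Sinhala. Real systems are more capable than a bag-of-words model, but without labeled Sinhala data, even they have little to learn from:

```python
# A model trained only on English has no features at all for Sinhala input.
from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer()
english_posts = [
    "we must attack them tomorrow",
    "lovely weather in Kandy today",
]
vectorizer.fit(english_posts)

# "Ayubowan" – a common Sinhala greeting. Whatever the sentence says,
# every word is out of vocabulary, so the model sees an all-zero vector.
sinhala_post = "ආයුබෝවන්"
print(vectorizer.transform([sinhala_post]).toarray())  # -> [[0 0 ... 0]]
```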
What Facebook can do tomorrow & today
It's clear that Facebook's army of moderators can only go so far without being traumatized. As such, much of the moderation in the future will have to be carried out by automated systems. But to help these systems overcome the language barrier, Facebook has to invest in research. That means not only pumping in money but also working closely with researchers.
This is necessary not only in Sri Lanka but also in many other countries. Take India, which has 22 official languages, where the challenge is compounded further by multiple regional dialects. Unless companies like Facebook work closely with researchers in these countries, they'll never overcome this barrier. Such collaboration is not alien to the company either.

Facebook, like many other tech companies, built its AI systems on the work done by academics in various universities. Of course, it could take years before we see tangible results from this research. So what can Facebook do right now? Well, it could start by expanding its misinformation efforts to the rest of the world.
For the European Parliamentary elections, the company created an operations room. Located inside Facebook's Ireland HQ, this room was staffed with a team of 40, including speakers of all 24 official EU languages. Its purpose? To monitor misinformation, fake accounts, and election interference. Previously, the company enacted such measures for the US midterm elections and the Brazilian presidential elections.

One could argue that the effectiveness of these efforts is questionable. After all, Brazil's President Jair Bolsonaro was accused of running a misinformation campaign. But we live in Sri Lanka, where social media is blocked at the first sign of a crisis. As such, one could further argue that Facebook should deploy such teams whenever a crisis occurs. Why? Because the company could then better support the authorities in their efforts to fight misinformation and hate speech online.
Working with the authorities to respond
Once a crisis occurs, it won't take long before panicked rumors start spreading. This is a problem that's been around long before social media. However, social media allows such rumors to spread like wildfire. To help mitigate this, Facebook should ideally have a team monitoring its platforms and combating misinformation.

Such teams shouldn't work in isolation either. They should be working in collaboration with the local authorities. For example, let's assume that the Police find a viral video promoting hate speech and inciting more violence. In such instances, the Police should be able to immediately contact this team at Facebook and have that video taken down.
Yet, this is only Facebook. There are many more instances of misinformation spreading across WhatsApp. These are harder to intercept because messages on the app are encrypted. Here, this team should be able to take extreme measures, such as disabling forwarding or other features.

Such actions may not eliminate misinformation. But they can severely slow it down and hamper its impact. In an ideal world, we'd already have a team from Facebook here – one that can proactively fight or react to misinformation at a moment's notice. This would be far more effective than a blanket social media ban. After all, the data has shown us that such bans are ineffective.
Facebook and its Future with Groups
If we're attempting to look at the future of misinformation, we can't ignore Facebook's redesign. This grand redesign, which was announced at F8 2019, emphasizes groups. One could argue that by doing so, it takes prominence away from the News Feed. This, in turn, would make it harder for pages to spread hate speech and misinformation. Even if they throw millions at ad campaigns, their false content won't reach groups.

Yet, one could also argue that it makes the process easier. Why? Because by focusing on groups, we allow the same structure of communities to be recreated on Facebook. Therefore, the same tactics used to spread misinformation across groups on WhatsApp could now be applied to Facebook.
The Facebook app on Android and iOS already allows you to post content to any group with a quick tap when publishing a status. At the time of writing, this feature is yet to be added to the desktop and other versions of Facebook. But with the company focusing on groups, it likely won't be long before we see such features rolling out.

Yet, by focusing on groups, one could argue that Facebook gains an army of new moderators. But this, again, is an argument that sounds good in theory. In practice, this is the exact problem Reddit has to deal with – a social network where the responsibility of moderating content is left to individual moderators. As a result, we've seen communities like r/beatingwomen, r/CringeAnarchy, r/deepfakes, r/Incels, and so many more, which were only taken down after intense media backlash.
The failure of good governance
At the end of the day, Facebook can only go so far. The company can invest millions, if not billions, into research. Hopefully, one day it'll have AI systems that can detect and remove hate speech, misinformation, and other harmful content. But it will be years before we see that day.
The company can hire more moderators and form more war rooms. But more and more, we are seeing how this approach is ineffective. Invariably, some of this malicious content will slip through the cracks. Hence the need for a strong relationship with the authorities to conduct more detailed investigations when required. But a chain is only as strong as its weakest link.

As we've seen over the past three weeks, the weakest link in the chain has been our government. Prior to the deadly Easter Sunday bombings, Indian intelligence agencies had shared multiple warnings (three in April alone) that were ignored. Even as far back as March 2017, the Muslim community held protests against these extremists and had warned the authorities about them spreading their venomous ideology.
But these warnings were also ignored. It's not like we don't have laws covering hate speech either. Under Sections 291A and 291B of the Penal Code, you can be arrested for hate speech. Therefore, the authorities could've easily arrested them in 2017. Had they done so, we wouldn't be having this conversation after the tragic deaths of over 250 people and terrifying episodes of mob violence.
Sadly, the reality is often disappointing. Even now, the Ministry of Defence has stated that those spreading fake news will be prosecuted under the Emergency Regulations. The Police have also established a special unit to identify and arrest those spreading hate speech on social media. However, only a handful of people have been arrested under these laws to date – some examples being two people from Colombo 15, a cleric from Vavuniya, and one person from Chilaw.

Meanwhile, hate speech and misinformation continue to run rampant across social media, and have now morphed into mob violence. This begs the question: how effective can efforts by companies like Facebook truly be? They can invest in advanced AI systems or an army of moderators to identify and take down hate speech and misinformation.
However, that won't stop terrorists and other criminals from simply moving to another platform. Hence the need for a strong relationship with the authorities to conduct further investigations and take action where necessary. But if the authorities simply ignore these warnings, then the entire system fails, and the cycle of terror continues as we live in fear, praying for peace.