A Case for Regulation: The Role of Technology in Global Conflicts

https://www.state.gov/burma-genocide/

Content warning: death, sexual assault

In 2017, Myanmar’s military forces launched a targeted attack against the country’s Rohingya Muslims, an ethnic minority concentrated in northern Rakhine State. Over 700,000 Rohingya fled the territory when the military began a widespread campaign of “systematic murder, rape and burning of homes.” In March 2022, the United States formally determined that the Burmese military had committed genocide and crimes against humanity against the Rohingya — the eighth time the U.S. government has formally recognized a genocide since the Holocaust.

The attacks were preceded by decades of state-sponsored discrimination, oppression, and persecution — amounting to apartheid rule — but the violence escalated as hate-filled echo chambers proliferated on Facebook. The platform faced a deluge of Islamophobic content from accounts linked to the Myanmar military and to radical Buddhist nationalist groups. Posts spread disinformation about an “impending Muslim takeover” and called the Rohingya “invaders.”

Community activists repeatedly criticized Meta for its failure to act against such vitriol, arguing that the company had a responsibility to do so in the interest of human and civil rights. After an independent assessment by Business for Social Responsibility, Meta admitted its complicity in the targeted attacks in a statement published Nov. 5, 2018:

“The report concludes that, prior to this year, we weren’t doing enough to help prevent our platform from being used to foment division and incite offline violence. We agree that we can and should do more.”

While Meta did begin to institute policies to combat hate speech and discrimination on its platforms, the consequences that ensued warrant a larger discussion about the role social media platforms play in global events.

Facebook’s Role and Influence in Myanmar

It is widely said that Facebook is the internet in Myanmar. Because of its low data usage and user-friendly interface, Burmese residents rely on Facebook not only to stay connected with friends and family but also as their primary source of news and information. Even Burmese authorities disseminate vital information primarily through Facebook.

When a population depends on a single stream of critical information, that stream can easily be opened up to government interference — and eventually government control, as was seen in Myanmar. Meta’s role in the country serves as an essential case study in the immense power of social media to do both good and harm.

When Myanmar’s military staged a coup on February 1, 2021, Burmese residents watched the events unfold on Facebook. Democratic activists used the platform for community organizing and mutual encouragement. Increased visibility of the military’s tactics during the coup translated into global awareness. Social media as a tool for community gathering is not a new phenomenon, according to a London School of Economics report — Western governments and pundits alike have long idealized social media as a catalyst for democratic change, particularly during times of unrest. In fact, a Facebook Civic Engagement Manager said in a 2018 Facebook series, “I believe that a more connected world can be a more democratic one.”

https://www.wired.com/story/how-facebooks-rise-fueled-chaos-and-confusion-in-myanmar/

One strategy activists in Myanmar used to mobilize citizens against the coup and delegitimize commander-in-chief Senior General Min Aung Hlaing’s rule was to post photographs of those arrested, injured, or killed in the military’s crackdown on protests. Photographs of Kyal Sin, a 19-year-old woman shot dead at a protest in March 2021, circulated millions of times, drawing thousands of comments in support of the resistance.

Long before Facebook became a vehicle for anti-coup activism, however, hate speech against Myanmar’s Rohingya Muslim population had emerged openly on the platform, intensifying as early as 2014. A notable escalation occurred after an anti-Muslim extremist shared a post alleging that a Buddhist woman had been raped by Muslim men — an allegation later found to be a complete fabrication. Despite the falsehood, a mob descended on people accused of being involved, and two people died in the violence that ensued.

Meta’s failure to address this vitriol — and its efforts to deflect blame onto users — reflect a lack of accountability that allowed the military’s influence over public support for the Rohingya genocide to grow unchecked. To identify the root of social media’s role in the genocide and to discuss solutions, one must turn to Facebook’s algorithm.

The Facebook Algorithm

Researchers at Global Witness conducted an experiment in March 2022 to test Facebook’s automated advertising review process. Global Witness created adverts in Burmese using eight real examples of hate speech against Rohingya Muslims, as classified under United Nations guidelines, and submitted them to Meta for review.

According to the report, “all eight of the adverts were accepted by Facebook for publication.”

Social media algorithms are essential to nearly all platforms today, from TikTok to Pinterest. They determine which posts show up on each individual’s home page, and in what order. Facebook’s algorithm evaluates each post, assigns it a score, and arranges posts in descending, non-chronological order of predicted interest for each individual. The theory is to keep viewers’ interest piqued, generating more ad revenue from the time users spend on the platform. However, in 2021, Facebook came under fire for increasing the circulation of controversial posts in particular.
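To make that mechanism concrete, here is a minimal sketch of engagement-based feed ranking. The signals and weights are hypothetical illustrations, not Meta’s actual system, but they capture the score-then-sort structure described above:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_clicks: float    # model's estimate that the user clicks
    predicted_comments: float  # estimate that the user comments
    predicted_shares: float    # estimate that the user shares
    age_hours: float           # time since the post was published

# Hypothetical weights: signals implying stronger engagement
# (comments, shares) count for far more than a passive click.
WEIGHTS = {"clicks": 1.0, "comments": 15.0, "shares": 30.0}

def score(post: Post) -> float:
    """Collapse predicted engagement signals into one ranking score."""
    engagement = (
        WEIGHTS["clicks"] * post.predicted_clicks
        + WEIGHTS["comments"] * post.predicted_comments
        + WEIGHTS["shares"] * post.predicted_shares
    )
    # Mild recency decay keeps the feed fresh without being chronological.
    return engagement / (1.0 + 0.05 * post.age_hours)

def rank_feed(candidates: list[Post]) -> list[Post]:
    """Return candidate posts in descending order of score."""
    return sorted(candidates, key=score, reverse=True)
```

Note what is absent: nothing in the scoring function considers what a post says. If inflammatory content reliably draws more comments and shares, a score like this promotes it automatically — precisely the dynamic described below.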

It was found that posts with misleading or inflammatory content generated the most clicks and engagement — and thus, the most profit. Without adequate filters to limit hate speech, Facebook’s algorithm increasingly amplified malicious content, fueling divisiveness and polarization. People who subscribed to the same topics were shown the same content, interacting within a closed community. This phenomenon occurred not only in Myanmar but also, notably, during the 2020 U.S. presidential election, when right-wing political groups spread misinformation through targeted advertisements.


Anyone can exploit this deficiency to promote propaganda, as U.S. Air Force Lieutenant Colonel Jarred Prier observed:

“Using existing online networks in conjunction with automatic ‘bot’ accounts, foreign agents can insert propaganda into a social media platform, create a trend, and rapidly disseminate a message faster and cheaper than through any other medium.”

Prier emphasized that this is particularly feasible in foreign countries — Meta moderates content published in the United States far more heavily, with significantly less oversight overseas. Indeed, according to The Guardian, Facebook employed only two Burmese-speaking content moderators in 2015, despite having over 1.2 million Burmese users by 2014, and its automated systems could not process Burmese-language text due to technical difficulties.

While Facebook eventually banned Myanmar’s armed forces outright in 2021, the damage had been done. Propaganda campaigns proved successful in generating community support for the military’s genocide of Rohingya Muslims. Burmese activists attribute Meta executives’ hesitation to remove the inflammatory content to its profitability — evidence of an inherent inadequacy in the company’s ability to monitor its own content. Myanmar’s period of civic and political unrest does not prove otherwise; rather, it shows that social media has been corrupted beyond its original purpose.

Solutions to Mitigate Facebook’s Role in Global Society

The first step is to address social media companies’ inability to self-regulate — and to establish regulatory agencies in response. Since Meta is a private company in the United States, this may initially prove difficult given the autonomy granted to private companies and the First Amendment’s protection of free speech.

Herein lies the case for Meta’s partial deprivatization. Governments around the world have already begun to impose transparency requirements on social media platforms, allowing companies to remain under private ownership while obligating them to meet ethical business expectations. One such policy was established in California in September 2022, requiring companies to clearly outline their terms of service and to submit reports on policy violations to the state’s attorney general.

To overcome challenges from individual federal governments, it may fall to international bodies like the United Nations and the International Criminal Court to establish baseline requirements outlining a company’s responsibility to monitor speech, along with additional conditions for transparency. For example, they could require companies to submit reports on the number of content violations, company response times, and advertising guidelines — and subsequently charge companies for their roles in the gravest of crimes, including genocide, war crimes, crimes against humanity, and the crime of aggression.
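To picture what such a filing could contain, here is a minimal sketch of a transparency-report schema. Every field name and value below is hypothetical — no existing regulation mandates this format — but it shows how the metrics named above could be standardized:

```python
from dataclasses import dataclass

@dataclass
class TransparencyReport:
    """Hypothetical quarterly filing a regulator might require of a platform."""
    platform: str
    quarter: str                              # reporting period, e.g. "2022-Q3"
    violations_detected: int                  # posts found to breach content policy
    violations_removed: int                   # how many were actually taken down
    median_response_hours: float              # time from user report to action
    ads_rejected: int                         # ads blocked under advertising guidelines
    moderators_per_language: dict[str, int]   # human coverage by language

# Illustrative values only — not real data from any platform.
report = TransparencyReport(
    platform="ExamplePlatform",
    quarter="2022-Q3",
    violations_detected=12_400,
    violations_removed=11_950,
    median_response_hours=6.5,
    ads_rejected=310,
    moderators_per_language={"en": 1200, "my": 2},
)
```

A fixed schema like this would let regulators compare platforms and spot gaps — such as a language with millions of users but only a handful of moderators — at a glance.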

Still, more regulations can be implemented to address the harm caused by algorithms. Today, algorithms are used for far more than boosting social media engagement: they are used to predict future crime, detect fraudulent unemployment claims, and even distribute health benefits by analyzing past data. In Facebook’s case, two algorithms are in active play: one to promote content engagement and another to automatically filter posts for harmful content. The latter fails because even the most advanced artificial intelligence tools cannot anticipate the full extent of the human tendency to diverge from the norm; algorithms make their predictions from the average measures of human behavior and thus fail to account for the extremes. The former, meanwhile, requires immediate modification — or elimination entirely.
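To make the average-versus-extremes failure concrete, consider a deliberately crude filter sketch. The phrase list and threshold are hypothetical stand-ins for a learned model — real systems are statistical, not keyword-based — but they share the structural weakness shown here:

```python
# A toy harmful-content filter: it flags posts containing enough terms
# from a fixed vocabulary. The vocabulary and threshold are hypothetical
# stand-ins for patterns a model would learn from average past behavior.
KNOWN_HARMFUL_TERMS = {"invaders", "takeover"}
THRESHOLD = 0.5

def harm_score(text: str) -> float:
    """Fraction of known harmful terms present — an 'average case' model."""
    words = set(text.lower().split())
    return len(words & KNOWN_HARMFUL_TERMS) / len(KNOWN_HARMFUL_TERMS)

def is_blocked(text: str) -> bool:
    return harm_score(text) >= THRESHOLD

# Phrasing that matches the patterns the filter was built on is caught...
print(is_blocked("the invaders plan a takeover"))    # True
# ...while a novel spelling or coded variant passes straight through.
print(is_blocked("the 1nvaders plan a take-over"))   # False
```

The failure is structural: any filter tuned to past, typical behavior is weakest exactly where determined bad actors operate — at the unusual, adversarial edge.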

https://case.edu/law/our-school/news/rohingya-report-prepared-cwru-professors-alumna-cited-recent-decision-international-criminal-court

By creating political echo chambers that increase polarization, engagement algorithms work against the goal of joining the world in a global network. Without adequate global regulation, CEOs with outsized influence within their companies now indirectly play a role in global conflicts.

Regulating artificial intelligence, however, faces a significant uphill battle. For one, both lawmakers and the public have little visibility into the algorithms social media companies use, how they are designed, and how much they influence corporate decisions. Furthermore, in states where investment in technology has generated large amounts of revenue, business interests often hold substantial sway over legislators, as in California. In the global economy, technology is seen as the cornerstone of a rising standard of living, so anti-regulatory interests and promoters of free markets may decry any attempt to increase public oversight as interference.

To combat this, global legislative bodies can require that Meta — and other social media platforms — maintain minimum levels of human review to evaluate and limit sensitive language, while permitting the continued use of algorithms. Meta could also establish permanent offices with content-review divisions in every country where it bears influence, much like most international organizations.

Human review alone will inevitably lag as social media grows too vast to surveil, but combining it with automated tools can address the problem of scale while compensating for technology’s inability to adapt to human variability.
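A minimal sketch of such a human-in-the-loop pipeline follows. The thresholds and routing rules are hypothetical design choices, not a description of any platform’s actual system:

```python
from dataclasses import dataclass, field
from queue import Queue

# Hypothetical thresholds: the algorithm acts alone only where it is
# most reliable, and hands the ambiguous middle band — where models
# trained on average behavior are weakest — to human reviewers.
AUTO_REMOVE = 0.90   # scores above this are removed automatically
AUTO_ALLOW = 0.10    # scores below this are published automatically

@dataclass
class ModerationPipeline:
    review_queue: Queue = field(default_factory=Queue)

    def route(self, post_id: str, harm_score: float, language: str) -> str:
        if harm_score >= AUTO_REMOVE:
            return "removed"        # clear violation: no human time needed
        if harm_score <= AUTO_ALLOW:
            return "published"      # clearly benign: no human time needed
        # Uncertain cases go to a reviewer fluent in the post's language.
        self.review_queue.put((post_id, language, harm_score))
        return "pending_human_review"

pipeline = ModerationPipeline()
print(pipeline.route("post-1", 0.95, "my"))  # removed
print(pipeline.route("post-2", 0.45, "my"))  # pending_human_review
```

The design concentrates scarce human capital on the cases where automation is least trustworthy, which is how a combination of the two can cover a platform too large for either alone.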

Ethical Risks and Side Effects

Excess regulation has its detractors, and historical precedent shows a pattern of infringement on individual freedom and increased government surveillance. One example is the Patriot Act, signed into law by President George W. Bush in 2001 to combat the use of online networks and other communications for terrorism. However, as the ACLU reports, the act allowed the government to legally monitor phone and email communications, bank and credit reporting records, and other activities of everyday American citizens in the name of national security.

Similarly, handing the responsibility of social media regulation to governing bodies increases the risk of infringing on the right to free speech. Acknowledging this, the UN released a plan of action to address hate speech in 2019, outlining its policies at the time in a statement:

“Rather than prohibiting hate speech as such, international law prohibits the incitement to discrimination, hostility and violence… Hate speech that does not reach the threshold of incitement is not something that international law requires States to prohibit. It is important to underline that even when not prohibited, hate speech may be harmful.”

Until a global, actionable strategy for responding to hate speech is developed, regulatory policies must aim to limit content that incites “discrimination, hostility and violence.” To avoid crossing ethical boundaries, such bodies must be democratized, bipartisan, and independent of outsized corporate influence. Globalization has emerged as the paragon of success for businesses and organizations in the modern world. However, just as federal committees were created to address heightened income inequality and exploitation during the Industrial Revolution, globalization now requires that minimum regulations be established on an international scale.