By Deborah Brown*
Billions of people around the world have come to rely on the services Amazon, Apple, Facebook, and Google provide to exercise their basic human rights. But for many people, both within and outside the U.S., the concentration of power in these companies has meant considerable harm.
It’s nearly impossible to avoid using one of these companies’ products when online. Facebook and Google in particular have become gateways for accessing and disseminating information. Each month, almost 3 billion people use Facebook, WhatsApp, or Instagram (the latter two acquired by Facebook in recent years). More than 90 percent of Facebook’s users are outside the United States. Google handles more than 90 percent of the world’s search queries, and its Android software powers at least three of every four of the world’s smartphones.
That concentration of power exacerbates the harm that’s done when these companies fail to protect privacy or regulate content responsibly and in line with human rights.
The source of many of the human rights concerns associated with Facebook’s and Google’s services is their surveillance-based business model. This model allowed email, social media, search, video, and other services to grow into huge, dominant networks because billions of users could sign up without paying any fees.
Instead, these platforms monetize our data by turning it into ad revenue. Their algorithms are engineered to maximize “user engagement” (clicks, likes, and shares), which in turn generates more data and more advertising revenue. Studies have shown that divisive and sensationalist content is more likely to drive engagement.
This is especially problematic because these companies have rushed to capture markets without fully understanding the societies and political environments in which they operate. Facebook targeted countries in the Global South with low internet penetration rates to promote a Facebook-centric version of the internet through an app called Free Basics, as well as other initiatives. It partnered with telecom companies to provide free access to Facebook and a limited number of approved websites, while aggressively buying up competitors like WhatsApp and Instagram.
This strategy has had devastating consequences, especially where Facebook succeeded in dominating information ecosystems.
Myanmar is arguably the most infamous case: hardline ultranationalists used Facebook to spread hate speech and promote ethnic cleansing of Rohingya Muslims. In the Philippines, where Facebook usage more than tripled in the five years after Free Basics was introduced and where nearly every internet user is on Facebook, election-related misinformation has spread rampantly on the platform. Although Free Basics was quietly withdrawn from Myanmar, the fact that many people there think Facebook is the internet has lasting implications for how they receive and share information, especially when the government uses the platform as a formal channel of communication with the public.
Free Basics, which was once available in over 60 countries, has faced extensive criticism over the years, including from a group of more than 60 human rights and digital rights organizations from around the world. The initiative has been characterized as unfairly benefiting Facebook by harvesting users’ data for the company while providing them only with “poor internet for poor people.” The phenomenon of U.S. companies targeting populations around the world to harvest and monetize data has come to be known as “digital colonialism.” The program has since been shut down in a number of countries, but not before people were hooked on Facebook and came to equate it with the internet.
Another worrisome trend is major tech companies coordinating to remove content that they define as “terrorist” or “extremist.” While it’s understandable that Facebook, Google, and other tech companies want to work together to counter such content, evidence suggests they are over-censoring, often removing anti-terrorism counterspeech, satire, and journalistic material, with grave implications for free speech and accountability. Online documentation of attacks on civilians and other grave human rights abuses in Syria and Yemen, for example, is rapidly disappearing, making this information inaccessible to researchers and criminal investigators and impeding efforts to bring those responsible to justice.
The companies have started to address some of these concerns by adding local-language content moderators, carrying out human rights impact assessments, partnering with fact-checkers, and publishing transparency reports. These are important steps, but their impact has been uneven, partly because the resources invested are not commensurate with Facebook’s global user base, and local Facebook staff and partners may be perceived as partisan or as having ties to government. Most important, these measures do not address the core issue: Facebook’s business model.
Addressing the monopolistic aspects of platforms isn’t a panacea for human rights problems, but it may make it easier to hold platforms accountable or create conditions for alternative models to emerge. A key step would be to enable data portability and interoperability, which would give people more control over their data and allow them to communicate between social media platforms, as they do between telephone networks and email providers. This could enable competition and empower users to have real choices in where they find information and how they connect with people online.
Congress also needs to adopt a strong federal data protection law that meaningfully regulates how companies collect, analyze, and share personal data, including with security and intelligence agencies, with advertisers that engage in discriminatory profiling, and with others who may violate rights. Congress should also consider requiring human rights impact assessments that cover all aspects of companies’ operations, including their underlying business model, and mandating human rights due diligence for companies’ global operations, especially before they enter new jurisdictions.
The rest of the world is not waiting for the U.S. to regulate big tech. But lawmakers here should carefully consider how their steps to regulate big tech — or not — will impact billions of people around the world. These companies have vast reach, and their human rights impact is global. A response to their dominance should be too.
*Deborah Brown is a senior researcher and advocate on digital rights at Human Rights Watch.