Perspectives

Advances in AI Are Compounding Internet Freedom's Decline. But They Don't Have To.

Freedom on the Net 2023: The Repressive Power of Artificial Intelligence details how AI is increasing the scale, speed, and efficiency of digital repression.


Ernie, an artificial intelligence (AI) chatbot created by the Chinese company Baidu, refused to answer questions on Tiananmen Square, the site of mass prodemocracy protests in 1989 in which the Chinese army killed thousands of people. Venezuelan state media shared videos of AI-generated people working for a made-up news channel spreading progovernment messages about the country’s economy. And in India, YouTube and Twitter were required to deploy automated scanning tools to restrict access to a British Broadcasting Corporation (BBC) documentary that investigated Prime Minister Narendra Modi’s role in deadly riots in 2002.

Freedom on the Net 2023: The Repressive Power of Artificial Intelligence, released yesterday, found that global internet freedom declined for the 13th consecutive year. The report, produced with a global network of civil society groups, details how AI is increasing the scale, speed, and efficiency of digital repression. Chatbots created by Chinese companies are reinforcing the Chinese Communist Party's long-standing information controls. Generative AI technology is allowing Venezuelan authorities to build on their traditional tactics of content manipulation by leveraging hyperrealistic videos to propagate distorted information. India's legal framework requires companies to proactively restrict content previously flagged by the government, essentially facilitating perpetual censorship.

AI deepens a crisis for human rights online

Our report explores two ways that AI is undermining internet freedom. First, AI has allowed governments to enhance and refine their online censorship capabilities. In at least 22 of the 70 countries we cover in Freedom on the Net, digital platforms are required to use automated systems to remove content deemed illegal under local law; these laws often target political, social, or religious speech that should be protected under international human rights standards. Vietnamese authorities have explicitly demanded that companies use AI to remove so-called toxic content, a category that sweeps up independent reporting, dissent, and even innocuous speech. By obliging platforms to use machine learning to comply with censorship rules, governments are effectively forcing them to detect and remove banned speech more efficiently. This use of machine learning also makes censorship less detectable, minimizing potential backlash and masking the role of the state.

Today's popular chatbots have sparked new questions about the relationship between generative AI and censorship. Applications like ChatGPT and Bard may provide people in closed environments with indirect access to uncensored information sources. In light of this, authorities in China, Russia, and Vietnam have moved to ensure chatbots reinforce—rather than bypass—their censorship. Chinese regulations require AI systems to promote "core socialist values," and regulators have instructed companies not to incorporate ChatGPT into their services. As use of these tools increases, we expect more governments to adapt their laws and technical capacity to control how people interact with them.

Second, advances in AI risk supercharging disinformation campaigns. In at least 47 countries, governments deployed commentators to manipulate online discussions in their favor. An entire market of for-hire services has emerged in recent years to support state-backed content manipulation. The affordability and accessibility of generative AI technology portend a concerning escalation of these tactics in the coming years. Already, over the past year, we found that AI tools that can generate text, imagery, or audio were used in at least 16 countries to sow doubt, smear opponents, or influence public debate. An AI-manipulated audio clip, for instance, spread across social media purportedly capturing a Nigerian presidential candidate planning to rig balloting, threatening to cast doubt on the integrity of the country's February election.

Crude forms of digital repression surge

Over the past year, internet freedom was also imperiled by more traditional forms of repression that had little to do with AI. Myanmar came close to surpassing China as the world's worst environment for internet freedom. Among the Myanmar military's crudest tactics was the use of Telegram groups to identify dissidents, who were then detained and in some cases forcibly disappeared. Sudanese authorities restricted access to the internet in April 2023, cutting off a lifeline for people trapped amid heavy combat between rival military and paramilitary forces.

In a record 55 countries covered by the project, people faced arrest for simply expressing themselves online. The Iranian regime executed two people after they shared their religious views on Telegram. Governments in a record 41 countries blocked websites hosting political, social, or religious speech. Among the more than 9,000 websites blocked by the Belarusian government were independent news sites and their associated mirror sites run by exiled Belarusian journalists. In Cambodia, authorities blocked access to the news outlets Radio Free Asia, Voice of Democracy, and Cambodia Daily, further cementing the regime’s control over the online media landscape ahead of July 2023 elections.

Flipping the script: repurposing AI to advance internet freedom

When designed and deployed safely and fairly, AI can be leveraged to bolster human rights online. Algorithms trained against real-world censors in places like China, Iran, and Kazakhstan are uncovering new ways to circumvent state censorship. Machine-learning techniques allow researchers to monitor and classify evidence of human rights abuses. Automated detection systems help researchers spot and map disinformation campaigns and the actors behind them.

Strong guardrails are necessary to limit the harmful uses and impact of AI while empowering the technology's protective role. Designing and enforcing rights-respecting regulatory systems isn't easy, but the lessons learned from the past decade of deliberations around internet governance provide a roadmap for how to do so. We should be realistic about how much the private sector can and will do on its own, especially in light of the shrinking internal resources dedicated to trust and safety, human rights, and expertise on local country contexts over the past year. Companies should bolster the transparency of their systems and policies and reinvest in the internal teams of experts tasked with monitoring their impact. This self-regulation should be paired with meaningful government oversight that prioritizes human rights, strengthens regulatory enforcement, and increases transparency over AI design, deployment, and impact.

AI governance must include civil society from the start. Nonprofit organizations, investigative journalists, and human rights activists have driven major wins for internet freedom over the past decade. They need the resources and opportunity to meaningfully engage in developing AI regulation and to serve as watchdogs over how these systems are used by state and nonstate actors. Thus far, civil society voices from around the world have too often been left out of these processes.

Sustained action can reverse internet freedom's decline, even as AI augments its drivers. Policymakers, civil society, and the tech industry need to recognize that doing so requires developing AI governance systems while also addressing long-standing threats to privacy, free expression, and access to information. In many cases, the solutions to both will be the same. People's experiences online—which loved ones they can talk to, what information they can see, and how they can express themselves—depend on this action.

Learn More

Freedom on the Net 2023

Explore the latest edition of Freedom on the Net to learn how artificial intelligence is increasing the scale, speed, and efficiency of digital repression.

Policy Recommendations

Learn how policymakers, regulators, and tech companies can protect internet freedom.

Acknowledgements

Freedom on the Net is a collaborative effort between Freedom House staff and a network of more than 85 researchers covering 70 countries, who come from civil society organizations, academia, journalism, and other backgrounds.