Policy Recommendations
Policymakers, the tech industry, and civil society should work together to address the global decline in internet freedom
The following recommendations lay out strategies that policymakers, regulators, and private companies can adopt to prevent or mitigate illiberal uses of digital technology by both domestic and foreign actors, as well as the broader societal harms that the internet can exacerbate. While reversing the global decline in internet freedom will require the participation of a range of stakeholders, governments and companies should actively partner with civil society, which has always been at the forefront in raising awareness of key problems and identifying solutions to address them.
1. Promote Freedom of Expression and Access to Information
Freedom of expression online is increasingly under attack as governments continue to restrict connectivity and block social media platforms and websites that host political, social, and religious speech. Protecting freedom of expression will require strong legal and regulatory safeguards for digital communications and access to information.
Governments
Governments should maintain access to internet services, digital platforms, and circumvention technology, particularly during elections, protests, and periods of unrest or conflict. Imposing outright or arbitrary bans on social media and messaging platforms unduly restricts free expression and access to information. Governments should address any legitimate risks posed by social media and messaging platforms through existing democratic mechanisms, such as regulatory action, security audits, parliamentary scrutiny, and legislation passed in consultation with civil society. Other methods to address legitimate security problems include strengthening legal requirements for transparency, data privacy, and platform responsibility, such as mandatory human rights due diligence and risk assessments.
Legal frameworks addressing online content should establish special obligations for companies tailored to their size and their services, incentivize platforms to improve their own standards, and require human rights due diligence and reporting. Such requirements should prioritize transparency across core products and practices, including content moderation, recommendation and algorithmic systems, collection and use of data, and political and targeted advertising. Laws should also provide opportunities for vetted researchers to access platform data—information that can provide insights for policy development and civil society’s analysis and advocacy efforts.
Intermediaries should continue to benefit from safe-harbor protections for most of the user-generated and third-party content appearing on their platforms, so as not to encourage excessive restrictions that inhibit free expression. Laws should also protect “good Samaritan” rules allowing platforms to remove objectionable content in good faith, and reserve decisions on the legality of content for the judiciary. Independent, multistakeholder bodies and independent regulators with sufficient resources and expertise should be empowered to oversee the implementation of laws, conduct audits, and ensure compliance. Elements of the EU’s Digital Services Act—notably its transparency requirements, data access for researchers, coregulatory enforcement, and algorithmic accountability measures—offer a promising model for content-related laws.
Companies
Companies should commit to respecting the rights of people who use their platforms or services and addressing any adverse impact that their products might have on human rights. The Global Network Initiative’s Principles provide concrete recommendations on how to do so.
Companies should support the accessibility of circumvention technology and resist government orders to shut down internet connectivity or ban digital services. Service providers should use all available legal channels to challenge such requests from state agencies, whether they are official or informal, especially when they relate to the accounts of human rights defenders, civil society activists, journalists, or other at-risk individuals.
If companies cannot resist demands in full, they should ensure that any restrictions or disruptions are as limited as possible in duration, geographic scope, and type of content affected. Companies should thoroughly document government demands internally, and notify people who use their platforms as to why connectivity or content may be restricted, especially in countries where government actions lack transparency. When faced with a choice between a ban of their services and complying with censorship orders, companies should bring strategic legal cases that challenge government overreach, in consultation or partnership with civil society.
2. Defend Information Integrity in the Age of AI
Even before the latest wave of generative artificial intelligence (AI) products, AI was a key factor in the crisis of information integrity, acting as an intensifier in environments that were already vulnerable to manipulation. Now, advances in generative AI stand to supercharge the creation and dissemination of false and misleading content by state and nonstate actors, demanding a prompt response to safeguard access to reliable online information.
Governments
Governments should ensure that human rights principles, transparency, and independent oversight are embedded into AI regulation. Policymakers should specifically include robust protections against ineffective and unsafe systems, address algorithmic discrimination, require independent audits and human rights–based impact assessments, and mandate increased transparency regarding the design, testing, use, and effects of AI products. They should also require the option of meaningful human review of consequential AI-driven decisions, such as those involving content moderation, and provide people with notice and clear explanations of how automated systems are being used. Governments should establish mechanisms for appeal and redress in cases of discrimination by AI systems. Finally, regulators should be empowered with sufficient resources and expertise to enforce their own rules and verify that companies are adhering to relevant laws.
Specifically, the US government should follow through on an executive order—in development at the time of writing—that includes protections outlined in the Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights, such as safeguards against algorithmic discrimination, limits on data use, and requirements for notice and explanation. In addition to such action, Congress should work with civil society and the executive branch to craft legislation that takes a rights-based approach to AI governance and transforms guiding principles into binding law. The US Federal Election Commission should also prohibit political parties, committees, and candidates from intentionally misrepresenting candidates in advertising that features AI-generated or manipulated imagery. In Europe, lawmakers negotiating the proposed EU AI Act should, at minimum, ensure that the final text obligates companies to label AI-generated media and conduct fundamental rights impact assessments for uses of AI services that present risks to human rights.
Ultimately, strengthening information integrity is a long-term challenge that requires long-term solutions. A whole-of-society approach to fostering a diverse and reliable information space entails supporting independent online media and empowering ordinary people with the tools they need to identify false or misleading information. Civic education initiatives and digital literacy training can help people navigate complex media environments. Governments should also allocate funding to develop detection tools for AI-generated content, which will only become more important as generative tools grow more sophisticated and more widely used. Finally, democracies should scale up efforts to support independent online media through financial assistance, innovative financing models, technical assistance, and professional development.
Companies
The private sector has a responsibility to ensure that its products contribute to, rather than undermine, a diverse and reliable information space. Companies should invest in staff working on issues related to public policy, integrity, trust and safety, and human rights, including regional and country specialists. These teams should collaborate closely with civil society groups around the world to understand the local impact of the companies’ products. Without such expertise, companies are ill-equipped to address the myriad human rights violations and challenges to information integrity that can emerge online and have offline consequences.
Companies should develop effective methods for labeling AI-generated content, for example by attaching a cryptographic signature that attests to a piece of content’s provenance, and coordinate with civil society to standardize how the industry documents that provenance. Companies should also invest in software that can detect AI-generated content.
Companies should ensure transparency and fairness in their policies and decisions, including by being open about how machine learning is used to train automated systems tasked with classifying, recommending, and prioritizing content for human review. They should also refrain from relying on automated systems to remove content without the opportunity for meaningful human review, and establish mechanisms for explanation, redress, and appeal. Finally, companies should work closely with independent researchers who can study the effects their services have on information integrity and free expression.
3. Combat Disproportionate Government Surveillance
Governments worldwide have passed increasingly disproportionate surveillance laws and can tap a booming commercial market for surveillance tools, giving them the capacity to flout the rule of law and monitor the private communications of individuals both inside and beyond their borders. The lack of data privacy safeguards in the United States and around the world exacerbates the harms of this excessive government surveillance.
Governments
Government surveillance programs should adhere to the International Principles on the Application of Human Rights to Communications Surveillance, a framework agreed upon by a broad consortium of civil society groups, industry leaders, and scholars. The principles, which state that all communications surveillance must be legal, necessary, and proportionate, should also be applied to AI-driven and biometric surveillance technologies, targeted surveillance tools like commercial spyware and extraction software, and open-source intelligence methods such as social media monitoring.
In the United States, lawmakers should reform or repeal existing surveillance laws and practices, including Section 702 of the Foreign Intelligence Surveillance Act and Executive Order 12333, to better align them with these standards. Broad powers under Section 702 and Executive Order 12333 have allowed US government agencies to collect and access Americans’ personal data without meaningful transparency or oversight. The US Congress should also close a legal loophole that allows US government agencies to purchase personal data from data brokers rather than obtaining a warrant. And in the European Union (EU), policymakers negotiating over the final text of the proposed EU AI Act should ensure that it prohibits the use of AI in technologies that are widely known to infringe on human rights, including facial recognition, so-called “predictive policing,” and real-time biometric identification.
Policymakers should refrain from mandating the introduction of “back doors” to digital devices and services, requiring that messages be traceable, or reducing intermediary liability protections for providers of end-to-end encryption. In the United States, any reforms to Section 230 of the Communications Decency Act should not undermine the ability of intermediaries and service providers to offer robust encryption. Weakening encryption would endanger the lives of activists, journalists, members of marginalized communities, and ordinary people around the world.
The US government is leading the international community in its efforts to combat commercial spyware abuses. In March 2023, the administration of President Joseph Biden announced an executive order that, among other mandates, bars federal agencies from the “operational” use of commercial spyware products that pose a threat to national security or counterintelligence, or that could be employed by foreign governments to violate human rights or target people from the United States. While this is a welcome step forward, the White House should work with Congress to make the order’s provisions permanent law through bipartisan legislation, ensuring that the prohibition remains in place under future administrations.
Governments should work closely with civil society to ensure that democracies’ lists of prohibited companies are swiftly and appropriately updated as the industry evolves. The US Commerce Department’s Bureau of Industry and Security has imposed special licensing requirements on several surveillance firms whose foreign government clients had used their technologies to target journalists, activists, and others. The addition of these firms to the bureau’s Entity List was a positive development, and others engaged in such practices should be subjected to the same restrictions.
While the European Parliament launched a committee of inquiry to investigate the use of Pegasus and other spyware tools, the European Commission has yet to take formal action. The EU should follow the example of the United States and rein in the commercial surveillance market. Robust action from Brussels would send a strong signal to spyware purveyors, particularly those operating within the EU’s borders, that their irresponsible trade will no longer be tolerated.
To guarantee effective international cooperation on spyware, the United States and like-minded democracies will need to encourage other governments to implement common standards. Governments that signed the Joint Statement on Efforts to Counter the Proliferation and Misuse of Commercial Spyware, as well as those that joined the Export Controls and Human Rights Initiative, should follow through on their commitments and encourage like-minded states to join.
Companies
Companies should mainstream end-to-end encryption in their products and uphold other robust security protocols, including by resisting government requests to provide special decryption access. Companies should also resist government data requests that contravene international human rights standards or lack a valid judicial warrant. Digital platforms should use all available legal channels to challenge such problematic requests from state agencies, whether they are official or informal, especially when they relate to the accounts of human rights defenders, civil society activists, journalists, or other at-risk individuals.
Businesses exporting surveillance and censorship technologies that could be used to commit human rights abuses should publicly report each year on the human rights–related due diligence they conduct before making sales, the due diligence obligations they impose on their resellers and distributors, and their efforts to identify customer requests suggesting that the technologies may be used for repressive purposes. These reports should include a list of the countries to which they have sold such technologies.
4. Safeguard Personal Data
Comprehensive data protection regulations and corresponding industry policies are essential for upholding privacy and other human rights online, but they require careful crafting to ensure that they do not contribute to internet fragmentation—the siloing of the global internet into nation-based segments—and cannot be used by governments to undermine privacy and other fundamental freedoms.
Governments
Democracies should collaborate to create interoperable privacy regimes that comprehensively safeguard user information, while also allowing data to flow across borders to jurisdictions with similar levels of protection. Individuals should be given control over their information, including the right to access it, delete it, and easily transfer it to the providers of their choosing. Laws should include guardrails that limit the ways in which private companies can use personal data for AI development and in their AI systems, including algorithmic recommendations. Updated data-privacy protections should feature provisions that grant independent regulators and oversight mechanisms the ability, resources, and expertise to ensure compliance by foreign and domestic companies with privacy, nondiscrimination, and consumer-protection laws.
The US Congress should urgently pass a comprehensive federal law on data privacy that includes data minimization, the principle that personal information should only be collected and stored to the extent necessary for a specific purpose, and purpose limitation, the principle that personal data gathered for one purpose should not later be used for another. This is especially relevant for discussions around generative AI and other technologies that depend on harvesting information online without people’s consent.
In the absence of congressional action, the US Federal Trade Commission (FTC) has been working to address these concerns through new regulations on commercial surveillance and data security. While an Advance Notice of Proposed Rulemaking was announced over a year ago, the process will not be completed for at least another year. In the meantime, Congress should ensure that the FTC has sufficient resources to develop and enforce meaningful regulations related to data protection.
In addition to the FTC’s action, this year the Consumer Financial Protection Bureau (CFPB) announced proposed rulemaking under the Fair Credit Reporting Act, with the aim of holding the data broker industry accountable for the misuse of personal information. Among other principles, the CFPB should prioritize data minimization in its new regulations.
Companies
Companies should minimize the collection of personal information, such as health, biometric, and location data, and limit how third parties can access and use it. Companies should also clearly explain to people who use their services what data are being collected and for what purpose, including what information may be collected from user prompts to generative AI services. Finally, companies should ensure that people who use their services have control over their own information, including the right to access it, delete it, and prevent it from affecting an algorithm’s behavior.
5. Protect a Free and Open Internet
A successful defense of the free, open, and interoperable internet will depend on international cooperation and a shared vision for global internet freedom. Democracies should live up to their own values at home in order to serve as more credible advocates for internet freedom abroad. Freedom House research shows that governments learn from one another, with leaders in less free countries often pointing to the problematic actions of democratic states to justify their repressive policies. Democratic governments everywhere have an opportunity to set a positive example by effectively tackling the genuine challenges of the digital age in a way that strengthens human rights and the global internet.
Governments
Governments should ensure that internet-related diplomacy is both coordinated among democracies and grounded in human rights. The effort should include identifying regional multilateral forums that are strategically placed to advance free and open internet principles. Democracies should facilitate dialogue among national policymakers and regulators, allowing them to share best practices and strengthen joint engagement at international standards-setting bodies.
Specifically, the Freedom Online Coalition (FOC) should improve its name recognition and its ability to drive diplomatic coordination and global action. It should more proactively articulate the benefits of a free and open internet to other governments and be more publicly and privately vocal about threats and opportunities for human rights online. The FOC should also mainstream its activity in other multilateral forums like the International Telecommunication Union and the Group of Seven. The FOC should create an internal mechanism by which member states’ activities can be evaluated to ensure that they align with the coalition’s principles. Finally, the FOC should diversify and expand its advisory network.
Governments should establish internet freedom programming as a vital component of their democracy assistance, incorporating funding for digital security and cyber hygiene into their projects. Program beneficiaries should receive support for open-source and user-friendly technologies that will help them circumvent government censorship, protect themselves against surveillance, and overcome restrictions on connectivity. Policymakers should advance efforts to strengthen regulatory and judicial independence, enhance technical literacy among judges and others within the legal and regulatory system, and provide other financial and administrative resources for strategic litigation.
Democracies should collectively impose meaningful penalties, including targeted sanctions, on anyone directing or engaging in reprisals against individuals exercising free expression online. Sanctions against state entities should be crafted to minimize their impact on ordinary citizens, and when broad-based sanctions are imposed, democratic governments should carve out exemptions for internet services when relevant.
Governments should advocate for the immediate, unconditional release of those imprisoned for online expression that is protected under international human rights standards. Governments should incorporate these cases, in addition to broader internet freedom concerns, into bilateral and multilateral engagement with perpetrator states. It should be standard practice to raise the names of those detained for their online content, to request information or specific action related to their treatment, and to call for their release and the repeal of laws that improperly criminalize online expression.
Companies
Companies should engage in continuous dialogue with civil society to understand the effects of their policies and products. They should seek out local expertise on the political and cultural context in markets where they have a presence or where their products are widely used, especially in repressive settings that present unique human rights challenges. Consultations with civil society groups should inform companies’ decisions to operate in a particular country, their approach to local content moderation, and their development of policies and practices—particularly during elections or crisis events, when managing government requests, and when working to counter online harms.
Prior to launching new internet-related or AI services, or expanding them into a new market, companies should conduct and publish human rights impact assessments that fully illuminate how their products and actions might affect rights such as freedom of expression, freedom from discrimination, and privacy.
Finally, when complying with sanctions, companies should coordinate with democratic governments to confirm that they are not engaging in excessive risk-mitigation activities that might negatively and needlessly affect civilians who have not themselves been sanctioned.
Acknowledgements
Freedom on the Net is a collaborative effort between Freedom House staff and a network of more than 80 researchers covering 70 countries, who come from civil society organizations, academia, journalism, and other backgrounds.