Why We Should Stop Blaming Technology for the Erosion of Democracy
The past 18 months have felt like a steady drumbeat of the same warning: “Elections will be overrun by misinformation and AI,” particularly in established and flawed democracies alike. Over the past six months, focused on the U.S. election, I have tracked misinformation claims across various platforms, used tools to investigate whether content was AI-generated and improperly labeled, and watched as tech leaders amplified divisive narratives in ways that not only defied expectations but were operationally almost impossible to combat in real time. Yet focusing solely on technology as the cause overlooks deeper issues, including political extremism, polarization, and the erosion of public trust, all of which predate the rise of social media and artificial intelligence.
Prior to the election, sociologist Larry Diamond observed in Foreign Affairs, “political extremism, polarization, and distrust have been on the rise even in long-established liberal democracies.” This trend is amplified by technology’s deep integration into politics, with social media and AI now fueling authoritarianism through surveillance, disinformation, and polarization. While I don’t believe technology’s role is inherently negative, if democracies on the edge of illiberalism don’t address extremism and misinformation, both online and offline, we risk becoming no better than our adversaries in the long run.
Throughout the past year, I’ve spoken publicly on misinformation and global elections, having kicked off the year working on the Indonesian election, followed by (in no particular order) the Indian, EU, UK, South African, and Turkish elections. My cautionary note throughout: we can’t blame democracy’s challenges solely on the lack of guardrails or enforcement by technology companies. Disinformation warfare is a form of psychological manipulation that isn’t easily countered by last-minute media literacy campaigns, fact-checking, or even LLM system audits. False narratives, especially those entrenched for years, can’t be easily dispelled through memes, influencer campaigns, or fact-checked posts, particularly when fact-checking is perceived as partisan and influencers are being paid by campaigns and foreign adversaries to spread misinformation.
An unexpected factor in this election cycle was the outsized influence of Elon Musk. His weaponization of X to amplify misinformation, his relentless attacks on outspoken opponents of President-Elect Trump, and his funding of nonprofits creating "dark PACs" like Progress 2028 intensified the spread of falsehoods. Musk’s disregard for removing harmful misinformation set back the tireless efforts of election integrity professionals, many of whom had worked for months, and sometimes years, to safeguard the election process.
Yet, despite these challenges, progress was made. CISA, the FBI, and the broader U.S. Intelligence Community shared intelligence on foreign interference from Iran and Russia. Microsoft released regular reports on election interference, with specific examples of misinformation campaigns. Media and fact-checking organizations, including WIRED, PolitiFact, Snopes, and FactCheck.org, launched real-time fact-checking initiatives, often in multiple languages. Global news outlets covered the U.S. election closely and increased public education on misinformation. And while we saw a sharp rise in sexist content targeting Vice President Harris and other female leaders, extensive media coverage of these harmful narratives allowed them to be scrutinized publicly.
This was my sixth election cycle (presidential and midterm) in the U.S., and it was probably the most consequential in recent history because of how information was shared and how infrastructure vulnerabilities were exploited by foreign actors. There will be post-mortems, finger-pointing, and calls for changes to our election process, but right now we’re still very much in the middle of a disinformation storm. The potential for political violence, including violence targeting people within protected classes, is growing and will likely continue throughout the coming year. After we all take a long and well-deserved break, we’ll also need to start understanding what the incoming administration’s technology policies will look like. Will all executive guidance on the responsible use of AI be thrown out the window? Will we finally see privacy legislation focused on platforms? And will TikTok continue to operate in the U.S. in 2025? These questions need answers, though we all need a break to see the forest for the trees. But if you’re like me and don’t believe in rest, here’s an outline of steps we should start taking to protect elections and everyday citizens during the transition into the Trump administration.
Expanding Regulatory Efforts for Election Integrity and Creating Guardrails That Work
Countries worldwide are increasingly implementing regulatory frameworks to protect election integrity, focusing on mitigating misinformation, ensuring transparency in political advertising, and regulating AI’s influence on digital platforms. These regulations aim to safeguard democratic processes and limit the manipulation of public opinion during elections, and they offer best practices that current legislators and government officials can adopt. Comprehensive technology reform is essential to promote innovation with appropriate guardrails. The EU’s Digital Services Act election guidelines, for instance, helped create a calmer environment for the recent European Parliament elections. Other examples include Canada’s election advertising transparency regulations and the expanded oversight of Australia’s Electoral Commission to combat digital disinformation. The U.S. has an opportunity to develop its own framework before the 2028 presidential election. Rather than casting the U.S. as the innovator and Europe as the regulator, we should pursue an adaptable model that incorporates best practices from each.
European Union countries, under the Digital Services Act (DSA), have adopted a robust framework for a safer digital ecosystem. The DSA places specific obligations on large online platforms to prevent the spread of illegal content, including election misinformation, and mandates clear guidelines on the transparency of political ads. By establishing risk-mitigation requirements and demanding transparency in digital advertising, the DSA aims to combat misinformation more effectively.
Canada implemented the Elections Modernization Act (Bill C-76) in 2019, which mandates transparency for third-party election advertising, requires identification on partisan ads, and imposes spending limits on political activities leading up to federal elections. The legislation also includes mechanisms to ensure that digital platforms are accountable for any political content they host, reinforcing a fair and transparent election process.
Australia has taken a proactive approach with its Disinformation Register, managed by the Australian Electoral Commission (AEC). This initiative addresses false information surrounding election processes by debunking misleading claims and reinforcing trust in election integrity. Additionally, Australia requires explicit authorizations on election-related communications, encouraging voters to verify the sources of information they encounter.
Clear Steps for Technology Companies
To build these guardrails, our favorite technology companies must play a proactive role. Without their buy-in, and without the Trust and Safety and Integrity communities pushing them internally and externally, none of the necessary work will happen. The heads of every platform got very lucky this cycle: public attention shifted toward X, leaving less scrutiny of how AI and other platforms amplified election-related misinformation. Here are three critical steps companies can take now to safeguard future elections:
Strengthen Transparency: Establish publicly accessible archives of political ads and content flagged as misinformation, and make the criteria for labeling or removing content accessible to researchers and the public.
Expand Real-Time Fact-Checking: Implement multilingual, AI-driven fact-checking mechanisms that can handle the sheer volume of claims during high-stakes periods, including collaborations with trusted non-partisan partners.
Combat Hate Speech More Proactively: Platforms should adopt clearer, enforced policies to remove hate speech targeting protected groups, especially during election cycles when such rhetoric often increases.
Empowering Non-Partisan Stakeholders in Tech and Policy
For those in tech and policy, especially non-partisan members of organizations, your voice and actions matter now more than ever. Here’s how you can help:
Advocate for Reform: Engage with local, state, and federal policymakers to share best practices and experiences from both U.S. and international elections. Encourage the creation of policy frameworks that address misinformation without stifling free speech.
Support Cross-Border Collaboration: Partner with international counterparts to exchange insights on successful interventions from the EU, Canada, Australia, and others. This global perspective can help the U.S. adopt the most effective and democratic tools.
Our democracy deserves stronger guardrails to prevent the weaponization of technology. Together, we can foster innovation while safeguarding the foundational values of our society.
[This article was originally published on November 14, 2024, by the Author for All Tech is Human, where she is the Senior Fellow for Information Integrity.]