Collaboration is key to fighting online misinformation



Brian Crowley

Director of Trust & Safety at Google

When crises and significant global events occur, from the coronavirus pandemic to the Russian invasion of Ukraine, technology can serve as a lifeline, connecting people around the world and providing access to critical information. While digital acceleration is helpful during these times, it can also mean that misinformation can spread in new and complex ways – making it harder for people to find reliable information.

The tech industry plays a critical role in fighting the spread of misinformation online, and we take seriously our responsibility to provide access to high-quality, trustworthy and credible content. From Search to YouTube to Google Ads, misinformation manifests differently on each of Google's platforms. Our work starts with the rules we put in place across each service to prohibit certain types of harmful content and behaviours, including deepfakes, fraudulent activity, impersonation and medical misinformation. Our Trust and Safety teams around the globe help enforce these policies and moderate content across all our products at scale. When content violates our policies or breaks the law, we take action at scale – blocking, removing or restricting it – so it is less likely to cause harm.


Since Russia invaded Ukraine in 2022, we have seen significant changes in the information landscape as Moscow leverages the full spectrum of disinformation operations — from overt state-backed media to covert platforms and accounts — to influence public perception of the war.

Teams across Google have been working to counter this spread of misinformation. Jigsaw, a unit within Google that explores threats to open societies and builds technology that inspires scalable solutions, recently carried out a study on the role of 'prebunking' messages in reducing the influence of misinformation in central and eastern Europe. The study launched a campaign of six videos designed to help viewers build resilience to anti-refugee narratives. The videos reached almost a third of the Polish, Czech and Slovak populations and produced up to an 8% lift in viewers' ability to discern misinformation.

Meanwhile, Google's Threat Analysis Group – a team responsible for countering threats to Google and our users from government-backed attackers, coordinated information operations (IO) and serious cybercrime networks – has been working to identify how Russia is attempting to gain an advantage in cyberspace by using IO to shape public perception of the war. This intelligence was applied to improve Google's defences and protect users; as a result, Google disrupted over 1,950 instances of Russian IO activity on its platforms in 2022.

One of the many reasons online misinformation is difficult to tackle is that a single group alone cannot fight it – collaboration between academics, policymakers, publishers, NGOs and technology companies is key.

Take health-specific information, for example. Google worked with trusted partners, including the World Health Organization (WHO), the United States Centers for Disease Control and Prevention (CDC) and national health authorities during the height of the COVID-19 pandemic to surface reliable information.


Shortly after the outbreak of the war in Ukraine, Google renewed its commitment to a whole-of-society response to tackling misinformation with $10mn to support research, think tank and civil society partnerships across central and eastern Europe. In 2023, Google entered new partnerships in the Baltics with the Civic Resilience Initiative and the Baltic Center for Media Excellence. These two established organisations will receive €1.3mn in funding from Google to build on their impactful work with new research and programmes aimed at increasing media literacy, building further resistance and actively tackling misinformation in Lithuania, Latvia and Estonia.

Google is also providing financial support worth €1mn to the Central European Digital Media Observatory, led by Charles University, for new research and collaborations in the fight against information disorders. The aim is to minimise the impact of misinformation and disinformation in central Europe by further expanding the observatory's research and raising the level of media and digital literacy across Czechia, Poland and Slovakia.

Earlier this year, Google engaged closely with the European Commission and over 30 organisations across academia, civil society, industry and advertising to support a strengthened EU Code of Practice on Disinformation. This vital Code of Practice furthers its signatories’ commitment to fighting misinformation and preserving access to information.

Google’s first baseline report for the updated EU Code of Practice on Disinformation covers how commitments are met, with a snapshot of the action taken in response to the ongoing war in Ukraine, including information on pages and domains actioned for violating Google’s policies on unreliable and harmful claims, replicated content, manipulated media and dangerous or derogatory content.

The work against online misinformation is never complete, and we must continue to fight this challenging and harmful problem by continuing to evolve and adapt our approach across different products, enabling access to trustworthy information, partnering with others, equipping people with skills to detect misinformation and taking action at scale.

This article is a contribution from a member or partner organisation of Friends of Europe. The views expressed in this #CriticalThinking article reflect those of the author(s) and not of Friends of Europe.