Discussion summary: next level disinformation - deepfakes

Peace, Security & Defence

This is a summary of the recently concluded discussion in the seventh edition of Debating Security Plus (DS+). DS+ is a global online brainstorm that brings together a community of security experts throughout the year to discuss the changing nature of warfare and its implications for global thinking on peace, security and defence.


Deep fakes and disinformation are no longer a future threat but a current reality. Altered videos driven by artificial intelligence threaten to deepen distrust in governments and electoral processes, and to change how citizens perceive the news on key issues such as migration, health and day-to-day life. In the context of the EU elections, fake news became the new normal. While attributing the sources of fake content is difficult, evidence points to much of the disinformation being deployed from Russia. Dis- and misinformation influence voters and can therefore affect the outcome of elections. As these tools fall into the hands of civilians, a further democratisation of disinformation looks inevitable.

During April and May, online participants and speakers debated how deep fakes and disinformation will shape security in the coming years, and brainstormed the policy solutions needed to guarantee safe and credible access to information.

Speaking about deep fakes, Clare Moody, a Member of the European Parliament, cautioned that deep fakes and disinformation will exploit the lack of trust citizens currently have in European governments. She argued that growing distrust could be hugely damaging for Europe’s democratic system if no decisive action is taken.

Ruben Arcos, from the University Rey Juan Carlos in Madrid, explained that denial and deception have long played an important role in strategic communications. However, he argued that as new technologies like deep fakes emerge, governments and other actors could easily use them to discredit and mock their opposition, and vice versa, posing a grave threat to liberal democracies. The audio-visual nature of deep fakes could be exploited to create an alternative reality in the minds of citizens for a variety of aims.

Within the EU context, Giles Portman, from the EEAS East Stratcom, called out Russia for using disinformation campaigns to interfere in recent elections across Europe, warning that these mechanisms are well organised and well-funded. This calls for an attentive and resilient strategy from the EU’s side.

Showcasing the malleable nature of fake news, Tom Law, from the Ethical Journalism Network, argued for the need to offer guidance to journalists so they can practise ethical, objective reporting and avoid falling into the misinformation trap. This is a particular issue when it comes to reporting on migration. To ensure accuracy in reporting, knowledge of the law and of media frames is crucial. Hany Farid, from Dartmouth College, warned that we should not take objective journalism for granted. He claimed that the surge in disinformation can, in part, be linked to lower citizen engagement with the news.

Looking towards the responsibilities of ‘big tech’, Farid added that social media companies already have ‘Terms of Use’ rules that need to be enforced, and that these companies have a responsibility to clean up “the mess they have created”. This includes revising social media companies’ monetising models. Currently, they depend on users spending more time on their platforms, incentivising the platforms to prioritise engagement over meaningful content. Governments should work more closely with these companies to make sure that they are “walking the talk”.

Shamir Allibhai, from Amber Video, concluded that blockchain technology offers hope in meeting the challenges posed by deep fakes, thanks to its transparency and its potential to track evidence. Blockchain technology could provide a clearer picture of the origin of fake videos, as well as how and when they were altered. This could help prosecutors accurately attribute responsibility for creating malicious fakes and hold the perpetrators accountable.
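The provenance idea described above can be sketched in miniature: fingerprint a video's bytes with a cryptographic hash, then log each version in an append-only chain of records, where every record commits to the previous one. This is an illustrative toy, not Amber Video's actual system; the names (`ProvenanceChain`, `fingerprint`) are hypothetical.

```python
import hashlib
import json
import time


def fingerprint(video_bytes: bytes) -> str:
    """Content hash of a video file; any alteration changes it."""
    return hashlib.sha256(video_bytes).hexdigest()


class ProvenanceChain:
    """Toy append-only ledger: each record commits to the previous
    record's hash, so past entries cannot be silently rewritten."""

    def __init__(self):
        self.records = []

    def record(self, video_bytes: bytes, note: str) -> dict:
        prev_hash = self.records[-1]["record_hash"] if self.records else "0" * 64
        entry = {
            "content_hash": fingerprint(video_bytes),
            "note": note,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        # Hash the entry itself so later tampering is detectable.
        entry["record_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.records.append(entry)
        return entry

    def verify(self) -> bool:
        """Check that no record has been altered after the fact."""
        prev = "0" * 64
        for entry in self.records:
            if entry["prev_hash"] != prev:
                return False
            body = {k: v for k, v in entry.items() if k != "record_hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if expected != entry["record_hash"]:
                return False
            prev = entry["record_hash"]
        return True


# Usage: log an original clip, then a later edited copy.
chain = ProvenanceChain()
original = b"...raw video bytes..."
edited = b"...altered video bytes..."
chain.record(original, "captured on device")
chain.record(edited, "edited copy")
print(chain.verify())                                 # the ledger is intact
print(fingerprint(original) == fingerprint(edited))   # False: the edit is detectable
```

A real deployment would anchor these records on a distributed ledger rather than in memory, which is what gives the scheme the transparency the paragraph describes: any party can recompute the hashes and see when each version appeared.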
