Democracy and security in an age of quantum transparency
- By Chris Kremidas-Courtney
The first objective of memes is to entertain. However, they also represent a new form of online communication. Understanding the use of memes in the information space is akin to learning a new language: it offers the capacity to understand a niche digital culture that traverses artificial intelligence (AI), politics, war, financial markets and disinformation.
The story of memes really begins in earnest with the internet via chat forums and social media platforms like MySpace. During these nascent years, memes were akin to mascots, featuring comical characters like Philosoraptor and the Forever Alone guy. Nowadays, memes rarely have the longevity to catalyse an identity. They usually appear on your feed, trend and then die out. Similarly, memes may not necessarily be analogous to one specific concept as satirical newspaper ‘funnies’ once were. Memes today have become intertextual.
Especially with the advent of AI image generators, memes show no signs of slowing down. Dall-E is one such tool, an AI image generator that has become popular in the past year. ‘Weird Dall-E images’, an account started in February 2022, now has over 1.2mn followers on Twitter. In June, competitor Dall-E Mini served up around 50,000 images a day. Custom AI image generators can benefit an array of actors, such as small and medium-sized enterprises (SMEs), but they also have the power to manipulate and mislead. A notice on the Dall-E Mini web page warns that it may “reinforce or exacerbate societal biases” or “generate images that contain stereotypes against minority groups.”
Memes are no longer just of social value; there is no doubt that they also have a political use. The alt-right, for example, has inverted narratives around immigration policy and European integration through the co-option of memes. The resulting conspiracy theories of a ‘great replacement’ are insidious, as a paper in the Journal of Contemporary European Studies argues. While only a small group of people may believe the information presented to them in memes, bits and pieces of those arguments filter down into the mainstream narrative. Memes have also become a barometer of illiberalism and propaganda, as in Viktor Orbán’s Hungary. Young Hungarians saw the similarities between ubiquitous government propaganda billboards and the aesthetics of parodic social media content and began to perceive everyday situations as internet memes. This phenomenon anchored the youth’s sense of (un)truth and liberal identity in a polarised political field marked by generational differences, class antagonisms and rural-urban divisions.
Memes also have a military dimension. While Islamic State fighters have previously shown that memes can be a viable recruitment tool, the Russia-Ukraine War has seen memes deployed in an all-out, paramilitary cyberwar. Last year, Politico wrote about NAFO – the North Atlantic Fellas Organization, a comedic take on NATO – which exemplifies the informal, decentralised way the internet rallies to a cause. The ‘fellas’ weaponised memes and brought the fight to cyberspace, piling onto Russian propaganda via coordinated social media campaigns that relied on humour to poke fun at the Kremlin and undermine its online messaging. Ukraine’s Defence Minister Oleksii Reznikov even tweeted a personal salute to the NAFO fellas and changed his profile picture to a Shiba Inu carrying a Ukrainian shield. Experts saw the unprecedented act as a genuinely tactical move against a nation-state.
We have also seen how memes can move financial markets, as in the cases of GameStop and Elon Musk. The GameStop short squeeze was a watershed moment for retail trading in January 2021, when the stock price rose from $17 to over $500 thanks to a campaign popularised by memes on the /r/WallStreetBets subreddit, resulting in massive losses – and in some cases closures – for hedge funds that had shorted the stock. Whereas the GameStop example involved a constellation of Reddit users, Musk’s case is more straightforward. After he indicated in 2018 that he would take Tesla private once the stock reached $420, the price shot up by more than $20, resulting in more than four years of securities fraud litigation that continues to this day. The tycoon is known for his shrewd use of social media, including memes, to connect with his audience and promote his businesses. He has even been known to use memes to respond to criticism or negative news about his companies. When Tesla faced production delays, or when Twitter users flocked en masse to competitor Mastodon after Musk became ‘Chief Twit’, he tweeted memes that played on the situation humorously, deflecting negative attention and keeping his audience engaged – as when he tweeted a gravestone meme while #RIPTwitter trended.
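The arithmetic behind those short-squeeze losses is simple: a short seller borrows shares, sells them at the entry price, and must later buy them back at the market price to close the position, so losses grow without bound as the price rises. A minimal sketch, using the price move cited above and a purely hypothetical position size (the figures are illustrative, not actual fund positions):

```python
# Hypothetical illustration of short-squeeze losses. A short seller sells
# borrowed shares at the entry price and must buy them back ("cover") at
# the current price; the position loses money when the price rises.

def short_pnl(shares: int, entry_price: float, exit_price: float) -> float:
    """Profit (positive) or loss (negative) on a short position."""
    return shares * (entry_price - exit_price)

# Illustrative only: a fund short 1,000,000 shares at $17 that is forced
# to cover at $500 realises the full difference as a loss.
loss = short_pnl(1_000_000, 17.0, 500.0)
print(f"P&L: ${loss:,.0f}")  # P&L: $-483,000,000
```

Because the buy-back price has no ceiling, the potential loss on a short is unlimited – which is why a coordinated, meme-driven buying campaign could inflict such damage.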
While memes can be a fun and engaging way to share information, it’s important to be aware of their potential for harm when combatting disinformation. Memes are typically short and simple, often relying on stereotypes or other forms of oversimplification to convey their message, which makes them effective vectors of disinformation: they may present misleading or incomplete context, leaving out information that changes the meaning. Conversely, because of their ability to simplify, memes have also been used as an academic tool to make complex or abstract concepts more accessible. Users therefore need to be critical of the information they view, which helps to combat the spread of disinformation and protect against its negative effects.
Given that the COVID-19 pandemic brought with it an ‘infodemic’, amplified in the digital world, it’s no surprise that memes have the ability to shift the social, political, military and information pillars. So what rules govern the use of memes in the information space?
In addition to the existing GDPR framework, the European Union will implement the Digital Services Act (DSA) and the Digital Markets Act (DMA) in early 2024. These two acts will enforce new rules for Big Tech companies like Meta and Google to stamp out disinformation and the associated systemic risks. But the policy package still has a lot to answer for in terms of connecting the dots.
Potential bottlenecks of the DSA include the cost of compliance, which could burden new SMEs entering the market. Memes offer SMEs a cheap form of digital advertising and marketing via AI-generated custom photography and artwork, but compliance costs could become a barrier to entry. There’s also the liability factor: the DSA will hold digital platforms liable for certain types of illegal content, which could lead to increased censorship and a chilling effect on free speech.
One of the key goals of the DSA is to tackle the hot potato of content moderation. Online platforms will be required to take greater responsibility for the content they host. However, many social media platforms self-regulate in-house, which has downsides for researchers and users. If a shocking piece of content makes its way onto Facebook, for example, content moderators in Silicon Valley will decide unilaterally to remove it, without researchers’ knowledge. Such non-transparent content moderation can leave blind spots in our understanding of the information space.
But it isn’t like this everywhere. Some digital platforms, like Discord and Reddit, are built around a different form of self-regulation: moderators, or ‘mods’. These user-elected positions are often more transparent about the rules. Studies show that mods lend legitimacy to social media platforms because they create the conditions for accountability and help people understand why their content was removed. Mods also offer new forms of digital and diplomatic literacy.
If we were to take the best of both moderator-led platform governance and the DSA’s openness to independent oversight, a form of co-regulation could emerge to help platforms create and adopt universal guidelines that fulfil the DSA’s objectives and prove practical for platforms. More than likely, not everything will be regulated in the best possible way. But it’s a process. This form of co-regulation gives users the freedom to express themselves, while enabling the DSA to learn more organically.
Ultimately, however, without sufficient legal standards—which could span multiple jurisdictions—or more strict and enforceable user regulations for online platforms, the challenge of controlling the circulation of sensitive images remains.
The views expressed in this #CriticalThinking article reflect those of the author(s) and not of Friends of Europe.