Interactive Advertising Bureau
08 October 2021

Guest Blog Post - Addressing & Tackling Disinformation - Q&A with DoubleVerify, Integral Ad Science & The Conscious Advertising Network.

Disinformation is not just a 2021 buzzword; it's a worrying social challenge that has been on the rise over the last 12 to 18 months. As part of IAB Europe’s Quality & Transparency work track, we’ve been delving into the topic to understand more through podcast interviews and panel discussions. We now hand over the blog to three industry experts to investigate what disinformation is, how it has grown and what is being done by the industry to tackle it.

IAB Europe spoke to David Goddard, Vice President, Business Development at DoubleVerify, Harriet Kingaby, Co-founder of The Conscious Advertising Network, and Paul Nasse, Managing Director, Northern Europe at Integral Ad Science (IAS), to gather their insights.

David Goddard is DoubleVerify’s Vice President, Business Development, based in the London office and responsible for expanding partnerships with digital media and technology companies. He is also Chair of the IAB Europe Programmatic Trading Committee, a multi-stakeholder group that aims to increase understanding of the programmatic ecosystem and the impact it is having on digital advertising, and to influence industry initiatives to improve it. Prior to DoubleVerify, David spent 5.5 years at BBC Global News and BBC Studios (formerly known as BBC Worldwide) as Vice President, Global Programmatic Strategy & Commercial Development, leading a global team of programmatic specialists tasked with driving the adoption and growth of programmatic revenue.

Harriet Kingaby is an activist, working at the intersection of advertising, AI and misinformation. She co-founded The Conscious Advertising Network with Jake Dubbins of Media Bounty in 2018 and has since seen it gain international recognition, work with the UN, and grow to over 150 members from O2 to Accenture Interactive. She is also Insights Director at ethical ad agency Media Bounty.

 

Paul Nasse joined IAS in 2013 as one of the first team members to establish Integral Ad Science outside of its New York HQ. He is the Managing Director for Northern Europe, overseeing the UK, the Netherlands and the Nordics. In the UK and beyond, Paul is responsible for driving the business and partnerships for the European region. He works directly with IAS’s largest clients to ensure that IAS offers best-in-class solutions, and promotes robust cross-industry collaboration with the world’s biggest agencies, advertisers, publishers and industry bodies. Paul has been part of the advertising industry for over 20 years, starting his career at ZenithOptimedia as a Print Buyer and rising through the ranks to Media Buying Director. He was responsible for the largest accounts within the group and developed the first digital advertising strategies across multiple global clients.

 

Q. Have you seen an increase in disinformation online over the past year? What specific examples can you give?

David - Almost 80 mobile towers have reportedly been burned down in the UK due to false coronavirus conspiracy theories that blame the spread of COVID-19 on 5G. Other similar reports include disinformation about Bill Gates and other globalists using the pandemic to implant microchips in the whole of humanity. These disturbing stories are in line with what DV has been observing in spikes in our Inflammatory Politics and News category. We see controversial events trigger spikes in disinformation. DV analyses its various content categories and looks at the percentage of all DV-monitored ad calls that were adjacent to content classified within a specific category. This serves as a proxy for overall traffic trends.
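To make that proxy metric concrete, here is a minimal sketch in Python of how such a category adjacency rate could be computed. The field names, data and function are illustrative assumptions, not DV's actual implementation.

    # Illustrative sketch: the share of monitored ad calls adjacent to
    # content classified in a given category (a proxy for traffic trends).
    ad_calls = [
        {"id": 1, "categories": {"inflammatory politics and news"}},
        {"id": 2, "categories": {"sports"}},
        {"id": 3, "categories": {"inflammatory politics and news", "news"}},
    ]

    def category_rate(calls, category):
        """Fraction of monitored ad calls adjacent to the given category."""
        hits = sum(1 for call in calls if category in call["categories"])
        return hits / len(calls)

    print(f"{category_rate(ad_calls, 'inflammatory politics and news'):.0%}")  # 67%

Comparing this rate period over period is what produces figures like those cited below.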

As I mentioned before, Inflammatory Politics and News increased 83% in the U.S. in November 2020 compared with November 2019. We also observed massive surges in COVID-19 disinformation following the announcement of the vaccines. And when we look at Europe, we’ve seen similar trends emerge. In France, for example, Inflammatory Politics and News surged by 165% in February 2021 after the National Assembly passed the controversial “separatism” bill. When the U.K. finally reached a Brexit deal, Inflammatory Politics and News increased by 41% compared with the U.K. December 2020 average. In Germany, following Olaf Scholz’s nomination, Inflammatory Politics and News increased by 40% compared with the German August 2020 average. What’s clear is that, across the board, disinformation is on the rise, and its spread specifically spikes around polarised, controversial events.

Harriet - I think we have reached a critical stage in the sophistication of misinformation, where we are seeing profound impacts on society and people. We opened this year with the storming of the US Capitol by people who genuinely believed disinformation about the legitimacy of the elections. And we’ll finish it with COP26, the most important climate conference of a generation. The Global Disinformation Index is already reporting an uptick in disinformation every time a climate announcement is made. We cannot allow these incidents to continue to disrupt our lives and society. Brands have also come under attack. Furniture retailer Wayfair was hit by QAnon ‘child trafficking’ accusations surrounding furniture with women’s names. AstraZeneca was hit with a conspiracy theory that the monkey proteins used to develop its vaccine would turn recipients into monkeys. We’ve opened Pandora’s Box, and it’s imperative that we cut off the funding model misinformation has through advertising.

However, we are also at a point where we understand it better than ever. For example, the Australian Muslim Advocacy Network has published excellent research on how misinformation and hate speech combine to dehumanise communities, which should be an excellent tool for the tech platforms to use to create more sophisticated policies that consider patterns rather than single posts. There has also been great thinking on how we balance freedom of expression with protecting those most vulnerable to misinformation. Leaning into the potential for civil society/corporate collaborations will bring about better policies that genuinely tackle these issues.

Paul - The last year has seen not only record levels of content consumed online, but also a rise in hate speech and misinformation, driven by the pandemic, a turbulent US election and global social movements. These events are often associated with reactionary, risky content that is intended to polarise society, such as misinformation regarding the roll-out of vaccines. Programmatic ad spend has surpassed all other channels at a rapid rate, and political ad spending has increased exponentially over the same period. This has further fuelled concerns around misinformation: 77% of US voters said they were concerned about fake news and misinformation during last year’s election.

The early days of the pandemic saw increased consumption of trusted news sources as people sought to learn more about COVID-19, but reliable journalism had to compete for attention. In fact, in February 2020, the World Health Organisation’s Director-General declared: “We’re not just fighting an epidemic; we are fighting an infodemic.” A Press Gazette analysis, based on data from SimilarWeb and NewsGuard, found that total visits in 2020 to sites considered untrustworthy were up 70% compared to 2019, while the number of visits to generally trustworthy sites was 47% higher than in 2019.

 

Q. What solutions exist to identify and eradicate disinformation?  

David - DV protects clients from a range of disinformation through our Inflammatory Politics & News category, which provides vast coverage across over 130,000 sites and encompasses disinformation, misinformation, propaganda, extremist points of view and/or inflammatory political rhetoric. For example, the word ‘vaccine’ can be found in thousands of pieces of digital media every day. Many of these pieces are legitimate content from trusted publishers and safe, suitable places for ads to appear. However, some will be disinformation and will not be suitable places for brands to appear. DV’s semantic science technology can understand those nuances and ensure brands only appear alongside trusted, safe and suitable content. It can learn to identify misinformation around topics like vaccines and help prevent ads from appearing alongside these sources.

DV was first to market with this solution in 2016, and since then we have continued to refine and improve the underlying technology. We constantly iterate on our methodology, and DV does not rely on a single vendor or internal resource to identify disinformation. The methods we use to identify potential disinformation include the following (a simplified sketch of this kind of pipeline follows the list):

  • Monitoring Twitter to understand what tweets using inflammatory language are linking out to.
  • Monitoring Reddit to understand what certain subreddits that lean toward disinformation are linking out to.
  • Looking for referral sites that link out to disinformation to better understand the network of disinformation.
  • Maintaining a list of concepts associated with emerging disinformation topics that our semantic science engine leverages to identify pages with inflammatory content.
  • Manually reviewing pages identified by our semantic science engine to see if disinformation is endemic to the site.
  • Leveraging Storyzy’s technology to further scale DV’s solution. Storyzy helps identify disinformation sites as they are created by using existing knowledge of technical signatures known to be employed by actors in disinformation. Importantly, this solution scales to multiple languages.
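To make the referral-graph idea above concrete, here is a minimal, hypothetical sketch in Python: collect the outbound links from social posts already flagged as inflammatory by an upstream classifier, and surface domains that recur often enough to merit manual review. The data, threshold and function names are illustrative assumptions only, not DV's actual implementation.

    from collections import Counter
    from urllib.parse import urlparse

    # Hypothetical inputs: posts already flagged as inflammatory, each
    # carrying the URLs it links out to.
    flagged_posts = [
        {"source": "twitter", "links": ["https://example-news.net/story1"]},
        {"source": "reddit",  "links": ["https://example-news.net/story2",
                                        "https://another-site.org/a"]},
    ]

    REVIEW_THRESHOLD = 2  # illustrative: domains linked this often go to review

    def domains_for_review(posts, threshold=REVIEW_THRESHOLD):
        """Count how often each domain is linked from flagged posts and
        return those that recur enough to warrant manual review."""
        counts = Counter(
            urlparse(link).netloc
            for post in posts
            for link in post["links"]
        )
        return [domain for domain, n in counts.items() if n >= threshold]

    print(domains_for_review(flagged_posts))  # ['example-news.net']

In practice, the output of a step like this would feed the manual review and semantic classification stages described above rather than drive blocking directly.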

Harriet - As a starting point, advertisers and agencies should implement the CAN misinformation manifesto and take a look at our Change the Narrative report on how to tackle climate misinformation. We need people to start taking climate words off their blocklists to ensure credible voices on climate can access the advertising funding they need to beat deniers! We think all the great work around public health is a good start, but we need policies on big issues such as climate change ahead of COP26 too. We also welcome the introduction of an Online Safety Bill to tackle misinformation from a regulatory perspective, and support calls for third-party oversight of its application. This must be backed up by industry leadership. We all need to do more than we are already doing to get ahead of what Lord Puttnam called ‘a pandemic of misinformation’.

Paul - IAS is constantly working with associations and the wider industry on collaborative approaches to ensure brands do not appear next to sites containing disinformation. The quickest and most scalable way to identify disinformation is through technology augmented with human review. IAS’s misinformation solution is available within the Brand Safety Offensive Language and Controversial Content category. IAS has historically detected misinformation in a similar manner to other vendors, scanning for clearly fake or hyperbolic terminology while using human review of sites via watchdog reviews or similar content reviewers. IAS has enhanced this solution with a unique combination of AI and independent, third-party review by the Global Disinformation Index (GDI) to automatically remove questionable domains.

Using AI, IAS can detect emerging threats by determining which sites have a strong correlation to higher-profile sources of misinformation. This functions as a recommendation service: GDI then evaluates the candidate pages to determine whether each page in fact qualifies as misinformation. Additionally, GDI shares its list of organically detected domains for inclusion in our misinformation protection solution. This combination of AI and manual review allows IAS to protect our clients from potentially damaging content at increased scale. IAS’s partnership with GDI enhances our misinformation detection, ensuring journalistic integrity and reaffirming support for quality news sites. IAS now provides advertisers with expanded global coverage, identifying more sources of misinformation and ensuring greater protection for digital campaigns.
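The "AI proposes, humans confirm" pattern Paul describes can be sketched in a few lines. The snippet below scores candidate sites by textual similarity to known misinformation sources and queues high scorers for independent review; the model, threshold and data are illustrative assumptions, not IAS's actual system.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical reference texts from known misinformation sources.
    known_misinfo_text = ["vaccine microchip plot exposed",
                          "5g towers spread the virus"]
    candidate_sites = {
        "site-a.example": "new proof 5g towers spread the virus worldwide",
        "site-b.example": "local council approves new cycle lanes",
    }

    vectorizer = TfidfVectorizer().fit(known_misinfo_text + list(candidate_sites.values()))
    known_vecs = vectorizer.transform(known_misinfo_text)

    REVIEW_THRESHOLD = 0.5  # illustrative cut-off for human review

    for domain, text in candidate_sites.items():
        score = cosine_similarity(vectorizer.transform([text]), known_vecs).max()
        if score >= REVIEW_THRESHOLD:
            # In the described workflow, an independent reviewer (e.g. GDI)
            # makes the final call; the AI only nominates candidates.
            print(f"queue {domain} for independent review (score={score:.2f})")

The key design point is that the AI only nominates candidates; an independent human assessment makes the final classification.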

 

Q. Do you have any examples of solutions in action and the change they have brought?

David - DoubleVerify analyzes billions of impressions a day to help keep brands safe from appearing alongside unsafe and unsuitable content. Publishers promoting specious and incendiary or racially biased/motivated claims are classified into DV’s Inflammatory Politics and News and Hate Speech categories, respectively. These category classifications allow advertisers to protect their brand reputation and ensure their ad dollars do not inadvertently fund bad-faith actors.

One example of DV’s solutions in action can be seen at the start of the pandemic. When the pandemic began in March 2020, consumption of news content increased. DV immediately joined the IAB in taking the stance that news saves lives, and started working with our customers to implement brand suitability best practices in support of trusted news publishers. Within weeks, the violation rate on news content, which describes ads that are blocked or flagged as an incident, dropped by 35%. The brand suitability block rate, which had increased by 32% in the wake of the pandemic, decreased by 40% month-over-month going into April, for a net difference of -8%. This demonstrates DV’s ability to assure advertisers they are protected from running against content they feel may not be suitable while, at the same time, empowering them to run on news that is suitable for their brand.

Harriet - During the height of the pandemic, we saw disinformation and conspiracy theories linking 5G and COVID-19, which caused some people to attack masts and telecommunications employees. Many of the sites and channels promoting these conspiracies were funded by advertising. Addressing this became a key focus for Virgin Media O2, who joined CAN with a specific desire to tackle misinformation. All members of CAN audit their media spend against our manifestos and create an accompanying ‘action plan’. O2 now hosts a monthly steering group meeting that includes their core team and agencies to review their progress and actions against CAN’s six manifestos: anti-ad fraud, diversity & inclusion, hate speech, misinformation, informed consent and children’s wellbeing. This review includes both short-term and long-term goals. Shorter-term actions include, for example, updating a range of briefs, including campaign, production and casting briefs, to take on board the six manifestos. Longer-term, the procurement team is involved, ensuring that any RFPs across O2’s supply chain cover the areas listed in CAN’s six manifestos.

Paul - IAS believes that ad verification providers have a responsibility to both assess brand risk and help establish a reliable standard for brand safety and suitability.  In 2021, IAS achieved a global recertification from the Trustworthy Accountability Group (TAG), which is the leading global certification program fighting criminal activity in order to increase trust in digital advertising. In July 2021, IAS also formed a new partnership with The Global Disinformation Index (GDI), making IAS the first ad verification company to help marketers avoid misinformation content based on GDI’s standards. Building on IAS’s expertise in brand safety and suitability, this partnership further protects brands from running ads on sites that GDI has identified for misinformation, ensuring journalistic integrity and reaffirming support for quality news sites.

When IAS identifies potential sources of misinformation through its artificial intelligence (AI) algorithm, these sites will now also be validated by GDI’s trusted and independent assessment of news content and risk. IAS will also add domains that GDI detects organically to ensure the most complete coverage for advertisers, and has expanded its global footprint with the inclusion of international domains provided by GDI. The combination of IAS’s advanced AI capabilities with GDI’s independent assessment gives advertisers confidence that their campaigns run on quality news platforms and avoid misinformation sites. It’s clear that the digital disinformation problem has not been eradicated. However, there are many positive steps being taken to avoid sites with disinformation, deliver ads in quality environments and retain brand value. IAS’s latest Industry Pulse report found that only 8% of digital advertising professionals saw fake news becoming a greater concern in 2021, down from 33% in 2020.

 

Q. On a global and European scale, what is the industry doing to tackle disinformation?

David - The IAB, the 4A’s, GARM, the WFA and ISBA have all done outstanding work that DV supports. One unique solution we offer is Brand Suitability Tiers. DV’s Brand Suitability Tiers offering is the first to align product functionality with standards advanced by the 4A’s Advertising Protection Bureau (APB) and the World Federation of Advertisers’ (WFA) Global Alliance for Responsible Media (GARM). These guidelines seek to strengthen current brand safety and suitability practices and develop a common language for advertisers and publishers. Brand suitability tiers work by allowing advertisers to determine the level of risk with which they’re comfortable. For example, the Violence category provides coverage for content related to acts of physical harm and/or weapons that cause physical harm. The overall category would, for example, provide coverage for content that discusses the violent acts that unfolded at the U.S. Capitol and content related to any future violent, armed protests and/or riots. Content related to non-violent protests or peaceful demonstrations does not fall within the Violence category.

With the introduction of Brand Suitability Tiers, the Violence category now has three risk tiers (a simplified sketch of how such tiers might be applied follows the list):

  • The high risk tier provides coverage for unmoderated content or glorification of any violent acts.
  • The medium risk tier provides coverage for professional news content about any violent acts.
  • The low risk tier provides coverage for educational content about violence and/or content that only includes a minor mention of violence.
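Purely as an illustration (not DV's actual product interface), tiered suitability can be thought of as a per-category risk ceiling that a campaign is willing to accept. The tier names and decision function below are hypothetical.

    # Hypothetical sketch of tiered brand suitability: each brand sets the
    # highest risk tier it will accept per category; content above that
    # ceiling is avoided.
    RISK_ORDER = {"low": 1, "medium": 2, "high": 3}

    brand_settings = {"violence": "medium"}  # accept low- and medium-risk content

    def is_suitable(category, content_tier, settings):
        """Return True if the content's risk tier is within the brand's ceiling."""
        ceiling = settings.get(category)
        if ceiling is None:
            return True  # no restriction set for this category
        return RISK_ORDER[content_tier] <= RISK_ORDER[ceiling]

    print(is_suitable("violence", "high", brand_settings))    # False: glorification avoided
    print(is_suitable("violence", "medium", brand_settings))  # True: professional news allowed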

Another tiered category is Terrorism. Recently, the fall of Kabul (Aug 16 - Aug 30) brought an increase in terrorism content online during the two weeks following the event. In this time frame, DV saw the rate of terrorism content increase nine times compared to the weeks leading up to the fall of Kabul on August 16. Key data points DV analyzed include:

  • EMEA and North America — EMEA and North America saw drastic rises in terrorism content, with EMEA soaring 15.7x and North America increasing 8.7x.
  • APAC and LATAM — APAC and LATAM saw lower rises in terrorism content, with rates of 4.1x and 3.6x respectively.
  • Overall Rate — The terrorism content rate peaked on August 17th, reaching a rate 4.5x higher than the previous 2021 peak on April 18th.

Harriet - The UN Special Rapporteur on Freedom of Expression has expressed her concerns about a growing “information disorder”, highlighting the growing evidence that disinformation tends to thrive where human rights are constrained, where the public information regime is not robust and where media quality, diversity and independence are weak. We welcome the introduction of the Online Safety Bill here in the UK and the conversations about legal frameworks internationally. For us, new legislation must contain very clear definitions of disinformation and misinformation, and of how they intersect with laws protecting freedom of expression. We believe it must prioritise misinformation that contradicts the scientific consensus on public health and climate change or undermines democratic elections, and it must expressly protect those who are most marginalised and vulnerable in our societies. We believe legislation should also require platforms and media owners to have robust, enforced policies that include those definitions and that are overseen by a third party. We also support the great work being done in the industry by GARM, the WFA, ISBA and the IAB. We have big plans to make CAN a global organisation over the next few years, collaborating with local chapters to adapt our manifestos for different markets to help global advertisers take action.

Paul - With so many major news events happening around the world, from coronavirus to protests, there is an even greater risk that disinformation will stop marketing dollars from having their desired impact. Industry stakeholders, including representatives of online platforms, leading social networks and the advertising industry (among them the WFA), have drafted a Code of Practice that includes commitments to fight online disinformation. The main commitment relating to advertisers is to use brand safety and brand suitability verification tools to avoid ads appearing next to fake news content. The Code of Practice has been endorsed by the European Commission, and brand owners can sign up to the commitments on advertising.

Q. Best Practices - what safeguards should brands put in place? 

David - DV recommends the following brand safety and suitability best practices as a general guideline. To strike the balance between protecting brand equity and supporting trusted news, advertisers can turn to today’s sophisticated brand suitability offerings that go beyond blocking. Brand safety and suitability is unique to every brand, but a nuanced, flexible brand suitability toolkit can help brands maximize scale and protection. The list below covers the most current brand suitability tools brands can leverage; a simplified sketch of how several of these tools can layer together follows the list.

  1. Ensure Language Coverage - Be aware of what language coverage your verification provider has. Make sure they're able to classify content in multiple languages to ensure coverage wherever your media is running.
  2. Review Settings for Key Categories and Adjust Accordingly - Brands and advertisers may wish to consider avoiding categories discussed here, which include Inflammatory Politics and News and Hate Speech.
  3. Update Site and App Exclusion Lists - With app and site exclusion lists, clients can prevent their media from appearing on specific apps, domains and subdomains that they may deem inappropriate regardless of how the individual pages/articles are classified. With app and site inclusion lists, brands can proactively target content to only those apps and sites that they find acceptable.
  4. Limit Use of Keyword Blocking - Keyword blocking gives advertisers the ability to block specific keywords or phrases that they designate within a URL. Although keyword blocking can serve as a useful brand suitability tool for emerging news events where content has yet to be classified, it may result in unintended blocking and does not always provide coverage as comprehensive and nuanced as that provided by our avoidance categories, which dynamically classify content as it is published and ads are run against it.
  5. Add Trusted News Homepages to Page Exception Lists - DV gives advertisers the ability to add trusted news site homepages and section pages to their DV page exception lists. Page exception lists allow a brand’s ads to run irrespective of any content avoidance categories the brand may have set up. This is especially useful for programmatic buys and on high-volume entry pages where the consumer tends to associate the brand with the news publication rather than a specific headline adjacent to an ad.
  6. Protect Yourself Across Emerging Channels - Disinformation exists wherever content exists. Advertisers need to be able to ensure their ad dollars do not support unsafe content on social platforms and in emerging environments, such as CTV.
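As a purely illustrative sketch of how several of these tools can layer together (the lists, precedence order and function below are hypothetical, not DV's actual decisioning):

    # Hypothetical layering of the tools above: a hard exclusion list,
    # a page exception list that bypasses category avoidance, category
    # avoidance itself, and keyword blocking as a coarse last resort.
    site_exclusion_list = {"badapp.example"}
    page_exception_list = {"https://trusted-news.example/"}  # e.g. homepages
    avoided_categories = {"inflammatory politics and news", "hate speech"}
    blocked_keywords = {"hoax"}

    def allow_ad(url, domain, page_categories):
        if domain in site_exclusion_list:
            return False   # tool 3: hard site/app exclusion
        if url in page_exception_list:
            return True    # tool 5: trusted pages bypass category avoidance
        if page_categories & avoided_categories:
            return False   # tool 2: category avoidance
        if any(kw in url.lower() for kw in blocked_keywords):
            return False   # tool 4: keyword blocking, used sparingly
        return True

    print(allow_ad("https://trusted-news.example/", "trusted-news.example",
                   {"inflammatory politics and news"}))                       # True
    print(allow_ad("https://blog.example/vaccine-hoax", "blog.example", set()))  # False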

For video campaigns across CTV, desktop and mobile, DV offers DV Video Filtering, which provides a last line of defense to prevent ads from appearing in non-compliant environments. Traditionally, blocking unsafe or fraudulent impressions in video environments has been difficult because video blocking requires a technology standard called VPAID. Unfortunately, VPAID is not widely available and has zero coverage in CTV. DV Video Filtering adds an additional layer of protection to video buys, reduces media waste and minimizes infractions across brand safety and suitability, fraud and in-geo.

Paul - Brands invest a significant amount of time creating an image, cultivating consumer perception and fostering long-term associations. It is therefore important to ensure digital messages appear in safe and suitable environments, not solely to avoid disinformation but also to effectively reach the right consumers. First and foremost, brands should work with global digital verification partners that are integrated with all the major DSPs. Brands should also talk to their media partners to ensure the right content classification categories are part of their media buying strategies.

It’s important to understand a brand's appetite for risk in relation to the context in which its ads are seen. All brands are unique, and their definitions of safety and suitability are driven by their own values and goals. While misinformation detection is incredibly fast and powerful, given the scale at which misinformation proliferates it is imperative to layer on additional sources of protection. Keyword blocking should not be the only approach; additional measures, such as brand safety and brand suitability controls, contextual targeting and cognitive semantic intelligence, can ensure brands only show up in suitable environments that do not contain disinformation. While more work is required to combat disinformation collectively, there are certain steps brands can take to ensure that they safeguard themselves from their ads appearing in unsuitable environments, and support high-quality journalism.

 
