The rise in deepfakes generated by artificial intelligence (AI) has been alarmingly rapid: a projected eight million will be shared in 2025, up from 500,000 in 2023. This sheer scale, combined with growing sophistication and realism, makes finding ways to quickly detect and mitigate this ever-growing threat an increasingly urgent priority.

Concerns over criminal manipulation of digital text, images and video are not new, but the proliferation in recent months of generative AI tools that enable anyone, anywhere to quickly, easily and cheaply create deepfake images has significantly changed the game.

As deepfakes threaten to hit the mainstream across a range of harmful activity, from online child sexual exploitation and abuse (CSEA) to fraud and election interference, there is a corresponding drive to develop the tools and methods needed to tackle them at the required scale and pace. 

In its role as an innovation enabler connecting frontline government and law enforcement with cutting-edge technology from industry, the Accelerated Capability Environment (ACE) is at the heart of this ramp-up in activity designed to find practical solutions to arguably the greatest challenge of the online age. And 2024 was a year in which the marriage of cutting-edge technology, collaboration and fresh thinking enabled significant strides forward.

Circular collaboration 

A series of focused ACE commissions has delivered clear results that accelerate crucial deepfake detection across a range of domains. Just as importantly, lessons and practical experience developed in one commission have been shared with others, passing on deeper knowledge and skills.

The biggest event in this space was the Deepfake Detection Challenge. Initiated by the Home Office, the Department for Science, Innovation and Technology, ACE and the renowned Alan Turing Institute, this visionary idea brought together academic, industry and government experts to develop innovative and practical solutions focused on detecting fake media.

More than 150 people attended the initial briefing where five challenge statements pushing the boundaries of current capabilities were launched. The critical importance of collaboration and sharing of skills and knowledge was a recurring theme, and major tech companies including Microsoft and Amazon Web Services (AWS) provided practical support.  

Eight weeks were spent developing innovative ideas and solutions on a specially created platform, which hosted approximately two million assets made up of both real and synthetic data for training and testing. Following this, 17 submissions were received, and six teams from our community – Frazer-Nash Consulting, IBM, Oxford Wave Research, Open Origins, Safe and Sound from the University of Southampton, and Naimuri – were selected to demonstrate their ideas in front of more than 200 stakeholders. 

Solutions from Frazer-Nash, Oxford Wave, the University of Southampton and Naimuri are now going through benchmark testing and user trials. These range from existing products identified as showing potential operational value to early-stage proofs of concept being developed against specific use cases, including CSEA, disinformation and audio.

Alongside its clear success in accelerating the state of the art in deepfake detection, the initial challenge work yielded key insights: curated data was critical to making as much progress as possible in the time and conditions available, and a dataset more representative of real-world operational scenarios would have been even more helpful.
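The point about curated data can be made concrete with a minimal sketch (all names and details here are hypothetical, not part of the challenge platform): when assembling an evaluation set of real and synthetic media, downsampling the majority class keeps detector metrics from being skewed by class imbalance.

```python
import random

def balanced_eval_split(real_ids, synthetic_ids, seed=42):
    """Build a balanced evaluation set by downsampling the majority
    class, so real and synthetic samples are equally represented.

    Returns a shuffled list of (sample_id, label) pairs,
    where label 0 = real and label 1 = synthetic (deepfake).
    """
    rng = random.Random(seed)  # fixed seed for repeatable curation
    n = min(len(real_ids), len(synthetic_ids))
    real = rng.sample(list(real_ids), n)
    fake = rng.sample(list(synthetic_ids), n)
    labelled = [(s, 0) for s in real] + [(s, 1) for s in fake]
    rng.shuffle(labelled)
    return labelled
```

A curated split like this is only a starting point; as the challenge found, the harder task is making the data representative of real-world operational conditions, not just balanced.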

Using better data to detect child abuse deepfakes 

When another significant commission to further deepfake detection was brought to ACE by the government’s Defence Science and Technology Laboratory (DSTL) and the Office of the Chief Scientific Adviser (OCSA), data development was a top priority.  

To mature the EVITA (Evaluating video, text and audio) AI content detection tool, the focus shifted away from sheer data volume towards quality.

As part of developing next-step recommendations, ACE leveraged its expertise from the Deepfake Detection Challenge to create a reusable ‘gold standard’ dataset. This dataset was designed to effectively test detection models, including those targeting child sexual abuse material (CSAM).

By combining this ‘gold standard’ dataset with ACE’s extensive domain and community expertise – drawing on insights from Naimuri and Bays Consulting – ACE delivered rapid insights into the maturation of EVITA through comprehensive and diverse testing. 

This work not only enabled ACE to deliver the requested next-step recommendations for the EVITA programme but also led to the development of a repeatable testing and evaluation approach for deepfake detection. This approach enhances the ability to interpret and understand the results generated by detection tools. 
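One building block of a repeatable testing and evaluation approach is scoring a detector's predictions against labelled data in a consistent way. The sketch below is illustrative only (it is not the EVITA methodology): it computes standard precision, recall and F1 for a binary deepfake detector, which helps interpret what a tool's results actually mean.

```python
def detection_metrics(y_true, y_pred):
    """Score a binary deepfake detector against ground truth.

    Labels: 1 = synthetic (deepfake), 0 = real.
    Returns precision, recall and F1 as a dict.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # how many flags were right
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # how many fakes were caught
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}
```

Reporting both precision and recall matters operationally: in a CSAM triage context, a low-precision tool buries investigators in false positives, while low recall means fakes slip through.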

Alongside this, another piece of work was taking place exploring how AI can be used to detect deepfakes in policing. The biggest challenge is in digital forensics where, the ACE team heard, officers can be faced with up to a million child abuse images on a single seized phone.  

This commission, working with community members Blueprint, Camera Forensics and TRMG, seeks to understand where deepfake detection tooling fits into the investigation stage to add most value. Next steps in this particular project are ‘making this real’ – working towards commissioning a proof of concept or trial of an existing capability.  

And so the learning is becoming circular once more as the next stage of the Deepfake Detection Challenge progresses. This will push further than any work in this field so far, focusing on making the initial solutions presented more user-centric and deeply relevant to practitioners in the field. 

Deepfakes are both a growing menace and an evolving threat, but bridging the gap between models and reality will be critical to tackling them at scale and at pace. ACE, its customers and its suppliers remain laser-focused on this evolution from the theoretical to the practical. The combination of innovation and collaboration has already proved a potent force in this area; the challenge, in every sense, is maximising the potential of what comes next.
