McAfee’s Pratim Mukherjee on fighting deepfake AI scams in 2024 and beyond

Updated on 09-Jul-2024

Beyond the famous Obama and Morgan Freeman videos, deepfakes are no longer just science fiction, and they are certainly no laughing matter. With a staggering tenfold increase in deepfake incidents detected globally from 2022 to 2023, these AI-generated forgeries pose a significant threat.

Warren Buffett, arguably the greatest investor of our time, recently said at a shareholder meeting that AI scamming will be the next big growth industry in the world. “As AI continues to grow in popularity, deepfakes will likely become increasingly realistic and frequent. The biggest challenge is continuing to win the cat-and-mouse game that cybersecurity companies play with the bad actors, to ensure that our AI can beat their AI,” according to Pratim Mukherjee, Senior Director of Engineering, McAfee.

Also read: AI impact on cyber security future: The good, bad and ugly

In an exclusive interview with Digit, Pratim Mukherjee demystified some of the evolving unknowns surrounding deepfake AI-related scams, how McAfee is trying to fight them, and the need for a heightened safety posture for everyone in an increasingly AI-driven world. Edited excerpts follow:

Q) Can you elaborate on the specific functionalities of Project Mockingbird? How does it differ from existing deepfake detection solutions?

McAfee’s Deepfake Detector (formerly known as “Project Mockingbird”) utilises advanced AI detection models to identify AI-generated audio within videos, helping people understand their digital world and assess the authenticity of content. The functionality of this AI detection technology is far-ranging and will prove invaluable to consumers amidst a rise in AI-generated scams and disinformation. These deepfake audio detection capabilities put the power of knowing what is real or fake directly into the hands of consumers. They will help consumers avoid ‘cheapfake’ scams where a cloned celebrity claims a new limited-time giveaway, and ensure consumers know instantaneously, when watching a video about a political candidate, whether it’s real or AI-generated. This takes protection in the age of AI to a whole new level, giving users the clarity and confidence to navigate the nuances of this new AI-driven world, and to protect their online privacy, identity, and well-being.

McAfee’s Deepfake Detector is unique in its focus on audio analysis; it uses a combination of AI-powered contextual, behavioural, and categorical detection models to identify whether the audio in a video is likely AI-generated. This provides unmatched protection capabilities to consumers. 
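As a rough illustration of how several independent detection models might be combined into a single verdict (a hypothetical sketch, not McAfee’s actual implementation; the weights and threshold below are invented for the example):

```python
# Toy illustration: combining scores from hypothetical contextual,
# behavioural, and categorical detection models into one verdict.

def combine_scores(contextual: float, behavioural: float, categorical: float,
                   weights=(0.4, 0.35, 0.25), threshold=0.5) -> dict:
    """Each input is a 0..1 score from a hypothetical detection model."""
    scores = (contextual, behavioural, categorical)
    confidence = sum(w * s for w, s in zip(weights, scores))
    return {
        "confidence": round(confidence, 3),
        "likely_ai_generated": confidence >= threshold,
    }

# Example: two of three hypothetical models flag the audio strongly.
verdict = combine_scores(contextual=0.9, behavioural=0.8, categorical=0.2)
print(verdict)  # {'confidence': 0.69, 'likely_ai_generated': True}
```

The design idea is that no single model has to be right on its own: a weighted ensemble can stay robust when one signal is noisy.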

Q) McAfee has partnered with Intel on AI-powered PCs. How does this partnership specifically address deepfake threats on consumer devices?

The collaboration between McAfee and Intel addresses a crucial consumer and societal need by helping individuals discern truth from fiction amidst the rise of AI-manipulated deepfake scams. Cybercriminals frequently use AI to alter audio in videos, creating convincing yet deceptive content. McAfee’s Deepfake Detector utilises advanced AI detection techniques, including transformer-based Deep Neural Network models, to accurately identify and notify users when audio in a video is likely AI-generated or manipulated. Demonstrated at RSA 2024, this collaboration showcases significant performance and privacy improvements made possible through Intel’s AI-powered PC technology.

Also read: Beware of April Fool scams: Don’t be too hope-fool

By leveraging the Intel Core Ultra processor’s NPU, McAfee’s AI models can perform inference – analysing and detecting deepfakes – locally on the device without sending private user information to the cloud. This local execution has led to a remarkable 300% performance improvement on the same model. The enhanced performance, combined with the privacy benefits of local processing and improved battery life, offers substantial advantages to customers. This collaboration ensures that consumers can benefit from real-time, efficient, and secure deepfake detection directly on their devices, enhancing their protection against sophisticated AI-driven threats.
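The privacy argument above can be sketched in a few lines (a toy example, not McAfee’s code; the per-frame heuristic is invented): all audio frames are processed on-device and discarded, and only a small summary verdict is ever surfaced.

```python
# Toy sketch of on-device inference: raw audio frames never leave the
# machine -- only a short verdict string is surfaced to the user.

def frame_score(frame: list[float]) -> float:
    """Stand-in for an on-device model call (e.g. run on an NPU)."""
    # Invented heuristic: flag frames that are unnaturally uniform.
    spread = max(frame) - min(frame)
    return 1.0 if spread < 0.05 else 0.0

def analyse_locally(frames: list[list[float]]) -> str:
    scores = [frame_score(f) for f in frames]
    ratio = sum(scores) / len(scores)
    # Only this summary leaves the function; the frames are discarded.
    return f"{ratio:.0%} of frames look AI-generated"

frames = [[0.50, 0.51, 0.50], [0.10, 0.90, 0.30]]
print(analyse_locally(frames))  # 50% of frames look AI-generated
```

Keeping inference local is what removes the need for a network round trip, which is also where the latency and battery benefits come from.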

Q) How will McAfee measure the success of its deepfake protection initiatives?

McAfee’s Deepfake Detector utilises advanced AI detection models to identify AI-generated audio within videos, helping people understand their digital world and assess the authenticity of content. Much like a weather forecast indicating a 70% chance of rain helps you plan your day, this technology equips consumers with insights to make educated decisions about whether content is what it appears to be. We see success as giving users the clarity and confidence to navigate the nuances of our new AI-driven world, to protect their online privacy and identity, and their well-being.

Q) What role do you see regulation and industry standards playing in mitigating AI deepfake threats?

McAfee is focused on educating consumers about the evolving threat landscape, and designing, building and delivering innovative solutions that protect millions of customers around the world from the threats of today and tomorrow. Legislation and regulations that curb the harmful use of AI will provide another vital step in the broader solution. In February 2024, McAfee joined leading tech companies such as Adobe, Google, IBM, Meta, Microsoft, and TikTok to play our part in protecting elections and the electoral process, as part of the Tech Accord to Combat Deceptive Use of AI in 2024 Elections.

In doing so, we’ve pledged to help prevent deceptive AI content from interfering with this year’s global elections by bringing our respective powers to bear, to combat deepfakes and other harmful uses of AI. That includes digital content such as AI-generated audio, video, and images that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other figures in democratic elections. It also covers content that provides false information about when, where, and how people can cast their vote.

Q) Deepfakes can be easily created and distributed online. How can collaboration between cybersecurity firms and international law enforcement agencies be improved to tackle this threat?

We see collaboration between cybersecurity firms and government agencies as essential for tackling online threats such as deepfakes. By sharing knowledge, resources, and expertise, we can collectively mitigate the risks posed by this evolving threat.

Also read: When AI misbehaves: Google Gemini and Meta AI image controversies

This includes research and investment in developing and deploying precautions to limit the risks of malicious deepfakes; attaching a record of the origin of content when possible; working to detect deceptive content; responding quickly and proportionately to the creation and dissemination of malicious deepfake content; participating in collective efforts to evaluate and learn from attempts to detect and mitigate deepfakes; engaging in shared public awareness and education efforts; and participating in initiatives designed to protect information integrity.

Q) Any unique challenges India faces with AI deepfakes compared to other regions? How is McAfee adapting its approach to address these challenges?

India has 820 million active internet users, and the combination of low-latency connectivity and widespread access to smartphones and social media means that content can go viral quickly, potentially causing irreparable damage to the dignity or image of individuals. As technology evolves, so does this battle – not just for India but for the whole world. The advent of the latest digital technologies marks a decidedly new era in this struggle. Scammers across the globe employ new, often AI-powered methods of trickery to work around existing protections, while security professionals create technological advances to counter and prevent these attacks.

McAfee’s Deepfake Detector utilises advanced AI detection techniques, including transformer-based Deep Neural Network models, expertly trained to detect and notify customers when audio in a video is likely generated or manipulated by AI. This cutting-edge, first-of-its-kind technology is designed to empower users to live their lives online with confidence.

Q) Beyond deepfakes, what other AI-powered threats are you most concerned about in the coming years?

Beyond deepfakes, we see AI being increasingly integrated into cybercrime. Access to sophisticated AI tools has become both easy and affordable, making it very likely that we’ll see a rise in fake and altered content in general. 

Moreover, AI-enabled social media scams pose a growing danger, enabling cybercriminals to create realistic fake endorsements and advertisements that exploit user trust. The rise of AI-driven malware, voice and visual cloning, and QR code scams is another concern, representing a significant escalation in online threats and demanding vigilant cybersecurity measures to mitigate risks effectively.

In response to these evolving threats, we continue to innovate our cybersecurity solutions, leveraging advanced AI technologies to enhance our detection and mitigation capabilities and keep consumers safe online. By staying ahead of emerging threats and educating users about cybersecurity best practices, McAfee aims to mitigate the risks associated with AI-powered cyber threats and safeguard individuals, businesses, and governments from malicious activities in the digital age.

Q) How can machine learning be leveraged not just to detect deepfakes, but also to proactively identify and prevent the creation of malicious content?

At McAfee, we believe the future of AI and online safety lies in combining progress with protection. Our AI technology empowers users to safeguard their privacy, identity, and devices more effectively than ever. Machine learning models detect threats by referencing known threat patterns, combating both existing and new (zero-day) threats. By comparing potential threats to features it has seen before, our AI identifies a wide range of malicious content.

Additionally, our AI detects suspicious behaviours such as phishing and smishing – SMS-based scams cybercriminals use to trick people into clicking on malicious links or sharing personal information. This capability makes our AI a powerful tool against zero-day threats. By analysing application activities for patterns of malicious behaviour, AI can proactively flag and prevent harmful files or processes. We have utilised AI as a core component of our protection strategy for years: it automatically classifies threats, enriching its “threat intelligence”, and as it encounters more threats, it becomes faster and more accurate, ensuring robust protection against even the most sophisticated malicious content.

Also read: Intel creates world’s first real-time fake video detector called FakeCatcher

Jayesh Shinde

Executive Editor at Digit. Technology journalist since Jan 2008, with stints at Indiatimes.com and PCWorld.in. Enthusiastic dad, reluctant traveler, weekend gamer, LOTR nerd, pseudo bon vivant.
