
How To Contend With AI-generated Image Deception


AI-generated, enhanced and modified imagery is on the rise, leading us to question what’s real and what’s not. In March last year, fake photos of Pope Francis in a white puffer jacket went viral, highlighting the power of AI-generated images. This past week, The Guardian ran an article on how consumers can spot AI-generated images – for now. 

This epidemic of AI-generated imagery raises a wider concern for financial institutions, businesses, governments and healthcare organisations that rely on visual identification as part of their security frameworks. If businesses can’t trust an image, how can they ensure that proprietary data and funds are not at risk?

The threat of misinformation 

The image of the pope was just the latest in a series of deepfakes. Last year, pictures of former President Donald Trump also appeared to show him in police custody. Although the creator made it clear that they were produced as an exercise in the use of AI, the images, combined with rumours of Trump’s imminent arrest, went viral and created a potentially dangerous narrative.

If the public doesn’t have access to information and images that are confirmed as true, the power to make informed decisions is removed. Anyone with a smartphone can produce malicious content that is aimed at stoking dissent and eroding trust. AI-generated misinformation is a major threat when it comes to influencing elections and national policy. 

A turning point in AI imagery

The current conversation around AI imagery is an important one. AI is growing more sophisticated by the second, and soon it won’t be as easy to identify the difference between a real image and a fake one. While there are guidelines in place on how to assess digitally manipulated content, we may not have the luxury of spotting the difference between real and altered images for very long. 

The wider risk of fake imagery 

According to the Southern African Fraud Prevention Service (SAFPS), impersonation attacks increased by 264% in the first five months of 2022 compared to the same period in 2021. The South African Banking Risk Information Centre’s (SABRIC) Annual Crime Stats 2022 report noted that banking app fraud cases increased by 36% between 2021 and 2022.

This is because an image of a person is often used to access essential services and resources. Right now, easily accessible tools can generate hundreds of convincing AI images from a handful of authentic photos. Criminals can take advantage of digitally altered images in many ways.

  • If a financial institution’s system calls only for a static image, fraudsters can use AI to fake a person’s likeness and steal funds. The same applies to accessing a person’s social grant. 
  • The risks extend beyond financial services. Criminals could impersonate someone to conduct SIM swap fraud – a crime that increased from 2 686 incidents in 2020 to 4 386 in 2021, according to SABRIC. 
  • A fake image could also be used to access a person’s private medical records or to commit voter fraud. 

In addition to presentation attacks, where a criminal holds a fake or hijacked image up to the camera to gain access to an account, deepfakes are now also used in digital injection attacks, where synthetic imagery bypasses the device’s camera entirely or is injected directly into the data stream.

Organisations need to protect themselves against fake AI imagery 

Organisations need to put measures in place that defend against digitally altered images. 

Fortunately, technology exists that can discern whether a person is real, live and logging into a platform at that moment. In iiDENTIFii’s experience working with some of South Africa’s leading banks, the only way to ensure that a person on the other end of any kind of verification platform is real is through biometric liveness.  

3D liveness creates a three-dimensional map of the user’s face using a camera. This map captures details about the face’s shape and contours, allowing the system to distinguish between a flat image (photo or video) and a real person.
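To illustrate the underlying principle, the sketch below (a hypothetical simplification, not iiDENTIFii’s implementation – the function, threshold and synthetic inputs are all assumptions) shows how a depth map separates a flat image from a contoured face: a printed photo or screen held up to a depth-aware camera has almost no relief, while a real face varies by tens of millimetres between nose tip and cheeks.

```python
import numpy as np

def is_live_depth(depth_map: np.ndarray, min_relief_mm: float = 10.0) -> bool:
    """Classify a face region as 'live' if its depth map shows enough
    three-dimensional relief. A flat photo or screen replay produces
    near-uniform depth; a real face shows real contour variation.
    """
    # Robust spread: difference between the 95th and 5th depth
    # percentiles, which ignores a few noisy outlier pixels.
    relief = np.percentile(depth_map, 95) - np.percentile(depth_map, 5)
    return bool(relief >= min_relief_mm)

# Illustrative synthetic inputs (depth values in millimetres from the sensor):
# 1. A flat photo held ~400 mm away, with only sensor noise.
flat_photo = 400.0 + np.random.default_rng(0).normal(0, 0.5, (64, 64))
# 2. A face-like surface whose centre (the nose) sits ~25 mm closer
#    to the camera than the edges of the crop.
xx, yy = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
real_face = 400.0 - 25.0 * np.exp(-(xx**2 + yy**2) * 3)

print(is_live_depth(flat_photo))  # flat surface: fails the relief check
print(is_live_depth(real_face))   # contoured surface: passes
```

Production systems combine many more signals (texture, motion, reflectance), but the core idea is the same: flat presentations lack the geometry of a real face.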

4D Liveness® is an even more sophisticated technology that maps the face from multiple angles, then adds a timestamp and further verification through a three-step process in which the user’s selfie and ID document data are checked against relevant government databases. This rigorous approach has been proven to protect against false-imagery attacks. 
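The logic of such a multi-step check can be sketched as a simple decision gate. This is a hypothetical outline, not the product’s actual API; the function and field names are invented for illustration.

```python
from datetime import datetime, timezone

def verify_identity(liveness_ok: bool, face_match_ok: bool, registry_ok: bool) -> dict:
    """Combine three checks into a single verdict.

    liveness_ok   - step 1: the selfie comes from a live person, not a replay
    face_match_ok - step 2: the selfie matches the photo on the ID document
    registry_ok   - step 3: the document data is confirmed by an authoritative database
    """
    checks = {
        "liveness": liveness_ok,
        "face_match": face_match_ok,
        "registry": registry_ok,
    }
    return {
        # Every check must pass; a single failure rejects the session.
        "verified": all(checks.values()),
        "checks": checks,
        # The timestamp binds the result to the moment of capture,
        # so a stale, replayed result can later be detected and rejected.
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
```

The design point is that no single signal is trusted on its own: a perfect deepfake that fools the face-match step still fails if liveness or the registry lookup does.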

Preventing real danger from fake images

In conclusion, AI-generated or doctored imagery goes far beyond the outrageous memes or fake celebrity pictures we see in the media. It is a sign of a more sinister trend at play, where any individual can easily access the tools required to spoof a person’s identity. 

Organisations need to ensure that their digital platforms are capable of identifying and securing against any form of fake digital imagery if they want to stay resilient against ever-evolving cyber threats. Contact us to learn more.
