Deepfake threats to biometric security and solutions

In a recent impersonation fraud, a Hong Kong-based bank was victimized by a scam in which a banker was tricked into sending $25.6m to thieves after a video call with the bank’s CFO and several colleagues. None of them were real people — all were deepfakes created with the help of artificial intelligence.

This incident shows how cybercriminals use deepfakes to deceive people and commit fraud. Deepfakes are also a growing concern for biometric systems.

In the last decade, biometrics has become a popular way to authenticate identities and access digital systems, and usage of these markers is projected to grow at more than 20% per year through 2030. As with every advance in cybersecurity, bad actors aren’t far behind.

Anything that can be digitally sampled can be deepfaked: an image, a video, audio, or even text that mimics the sender’s style and syntax. Even an amateur can create convincing fakes with a handful of widely available tools and training data as ordinary as YouTube videos.

Deepfake attacks on authentication come in two forms: presentation attacks and injection attacks.

A presentation attack involves presenting a fabricated image, rendering, or video in front of a camera, sensor, or other capture device in an attempt to pass authentication. Examples include:

Print attacks

  • A printed 2D image
  • A 2D paper cut-out mask with eye holes
  • A photo displayed on a smartphone
  • A layered 3D mask
  • A replay of captured video of the legitimate user

Deepfake attacks

  • Face swapping
  • Lip syncing
  • Voice cloning
  • Gesture/expression transfers
  • Text-to-speech

Injection attacks manipulate the data or the communication channel between a camera or scanner and the authentication system, much like the well-known man-in-the-middle attack.

Cybercriminals can bypass security measures by injecting a fingerprint or face image into the authentication process, often using automated software designed for application testing. Examples include:

  • Uploading synthetic media
  • Streaming media through a virtual device, such as a virtual camera
  • Manipulating the data in transit between a browser and a server (a man-in-the-middle attack)
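
A common mitigation for this kind of channel manipulation is to make the server verify where the biometric data came from, not just what it contains. The sketch below is a simplified illustration of that idea, not any vendor’s actual protocol: a trusted capture device binds each frame to a server-issued nonce with a keyed MAC, so injected, replayed, or stale frames fail verification. The device/server split, frame format, and key handling are all simplifying assumptions.

```python
import hmac
import hashlib
import os
import time

# Shared secret provisioned to the trusted capture device at enrollment.
# In practice this would live in a secure element, not in application code.
DEVICE_KEY = os.urandom(32)

def sign_frame(frame_bytes: bytes, nonce: bytes, key: bytes = DEVICE_KEY) -> bytes:
    """Device side: bind the frame to a server-issued nonce and a timestamp."""
    timestamp = str(int(time.time())).encode()
    tag = hmac.new(key, nonce + timestamp + frame_bytes, hashlib.sha256).digest()
    return timestamp + b"|" + tag  # metadata sent alongside the frame

def verify_frame(frame_bytes: bytes, nonce: bytes, metadata: bytes,
                 key: bytes = DEVICE_KEY, max_age_s: int = 5) -> bool:
    """Server side: reject frames that are stale, replayed, or forged."""
    timestamp, tag = metadata.split(b"|", 1)
    if abs(time.time() - int(timestamp)) > max_age_s:
        return False  # stale frame: possible replay
    expected = hmac.new(key, nonce + timestamp + frame_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)  # constant-time comparison

# Illustrative round trip
nonce = os.urandom(16)          # server issues a fresh nonce per capture session
frame = b"\x00" * 1024          # stand-in for raw sensor data
meta = sign_frame(frame, nonce)
assert verify_frame(frame, nonce, meta)        # genuine frame passes
assert not verify_frame(b"fake", nonce, meta)  # injected frame fails
```

Binding the tag to a fresh nonce defeats replays of previously captured frames, and the constant-time comparison avoids leaking the tag through timing differences.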

Defending Against Deepfakes

Several countermeasures can be taken to defend against these attacks. They often center on determining whether the biometric markers come from a genuine, living person.

Liveness-testing techniques include analysing facial movements, verifying 3D depth information and iris movement with optical sensors, detecting the electrical signals of living skin with capacitive sensors, and verifying fingerprint ridges below the surface of the skin with ultrasonic sensors.

Liveness testing is a good first line of defense, but it can be a nuisance when it requires the user’s participation. Liveness checks come in two types:

  • Passive protection: Runs in the background without asking users to confirm their identity. It is less intrusive, but it also provides less protection.
  • Active methods: Require the user to participate, which increases security at the cost of some friction in the user experience (a minimal blink-check sketch follows this list).
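
To make the active check concrete, here is a minimal blink-detection sketch in Python. It assumes a face-landmark model (for example, dlib’s 68-point predictor or MediaPipe Face Mesh) has already produced six 2D landmarks per eye for each video frame; the eye-aspect-ratio formula is a widely used heuristic, and the thresholds below are illustrative values that would be tuned per deployment.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR over six eye landmarks ordered (outer corner, two upper points,
    inner corner, two lower points); the ratio drops sharply when the eye closes."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # first vertical distance
    v2 = np.linalg.norm(eye[2] - eye[4])  # second vertical distance
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
    return (v1 + v2) / (2.0 * h)

def detect_blink(ear_series, closed_thresh=0.2, min_closed_frames=2) -> bool:
    """True if the per-frame EAR series contains at least one plausible blink."""
    closed = 0
    for ear in ear_series:
        if ear < closed_thresh:
            closed += 1
            if closed >= min_closed_frames:
                return True
        else:
            closed = 0
    return False

# Illustrative session: the user blinks around frames 4-5,
# satisfying a "please blink now" challenge.
ears = [0.31, 0.30, 0.32, 0.29, 0.12, 0.10, 0.28, 0.31]
print(detect_blink(ears))  # True
```

A real deployment would randomize the challenge (blink, smile, turn the head) and time-box the response, since a fixed, predictable challenge is easier for an attacker to pre-record or synthesize.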

To respond to these new risks, organizations should prioritize which assets need the higher level of security that active liveness testing provides and where it is unnecessary. Few regulatory and compliance standards require liveness detection today, but they may in the near future as more incidents, such as the Hong Kong bank fraud, come to light.

Best Practices for Combating Deepfakes

To combat deepfakes, it is important to use a multi-layered approach that includes both active and passive liveness checks. Active liveness asks the user to perform randomized expressions; passive liveness works without the user’s involvement.

True-depth cameras are also needed to protect against injection attacks and prevent device manipulation. To protect against deepfakes, organizations should also consider these best practices:

  • Anti-Spoofing Algorithms: Algorithms that detect and differentiate between authentic biometric data and spoofed input can catch fakes and verify an identity. They can use factors such as texture, temperature, and color to judge the authenticity of biometric markers. Intel’s FakeCatcher, for example, determines whether a video is real or fake by looking for the subtle pixel-level changes produced by blood flow in a subject’s face. A rough texture-based sketch follows below.

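FakeCatcher itself is proprietary, but the texture factor mentioned above can be illustrated with classical tools: compute a local-binary-pattern (LBP) histogram over a grayscale face crop and train a classifier to separate genuine skin texture from the artifacts left by prints, screens, and masks. The random arrays below are placeholders; a real system would train on a labeled presentation-attack dataset.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1  # number of neighbors and radius for the LBP operator

def lbp_histogram(gray_face: np.ndarray) -> np.ndarray:
    """Micro-texture descriptor: a normalized uniform-LBP histogram of a
    grayscale face crop. Prints, screens, and masks alter this signature."""
    codes = local_binary_pattern(gray_face, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Placeholder training data: 1 = genuine, 0 = spoofed. A real model needs
# labeled face crops from a presentation-attack dataset.
rng = np.random.default_rng(0)
faces = rng.integers(0, 256, size=(20, 64, 64)).astype(np.uint8)
labels = np.tile([0, 1], 10)

X = np.array([lbp_histogram(face) for face in faces])
clf = SVC(kernel="rbf", probability=True).fit(X, labels)

# At authentication time, score a new crop and reject likely spoofs.
probe = lbp_histogram(faces[0]).reshape(1, -1)
print("genuine probability:", clf.predict_proba(probe)[0][1])
```
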
  • Data Encryption: To prevent unauthorised access, ensure that biometric data is encrypted both in transit and at rest. Strict encryption protocols and access controls can prevent man-in-the-middle attacks and other protocol injections (see the encryption sketch after this list).
  • Adaptive Authentication: Use additional signals such as network, device, application, and context factors to verify the user’s identity, and select the appropriate authentication or reauthentication method based on the risk of the request or transaction (see the risk-scoring sketch after this list).
  • Multi-Layered Defense: Bad actors can circumvent security measures that rely only on static or streaming analysis of photos and videos. Sensitive operations can be further protected with a reusable digital identity credential, which could be used to augment high-risk transactions such as cash wire transfers. Video calls could be enhanced with a green tick that says, “This individual has been independently confirmed.”
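
A minimal sketch of the Data Encryption point, using the `cryptography` library’s Fernet construction (authenticated symmetric encryption) to protect a stored biometric template; in production the key would be held in a KMS or hardware security module rather than generated in application memory.

```python
from cryptography.fernet import Fernet

# In production, fetch this key from a KMS/HSM; never store it with the data.
key = Fernet.generate_key()
cipher = Fernet(key)

def store_template(template: bytes) -> bytes:
    """Encrypt a biometric template before writing it to storage."""
    return cipher.encrypt(template)

def load_template(ciphertext: bytes) -> bytes:
    """Decrypt a stored template; raises InvalidToken if it was tampered with."""
    return cipher.decrypt(ciphertext)

embedding = b"\x12\x34" * 64  # stand-in for a face or fingerprint template
blob = store_template(embedding)
assert load_template(blob) == embedding
```

Because Fernet authenticates as well as encrypts, a template modified in storage fails decryption outright, which also blunts injection attempts against the data at rest.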

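And a toy illustration of the Adaptive Authentication point: combine contextual signals into a risk score and let the score decide whether a passive check suffices or an active step-up is required. The signals, weights, and thresholds here are illustrative assumptions, not recommendations.

```python
from dataclasses import dataclass

@dataclass
class RequestContext:
    known_device: bool
    known_network: bool
    high_risk_transaction: bool  # e.g., a cash wire transfer
    impossible_travel: bool      # location inconsistent with recent history

def required_assurance(ctx: RequestContext) -> str:
    """Map contextual risk signals to an authentication requirement."""
    score = 0
    score += 0 if ctx.known_device else 2
    score += 0 if ctx.known_network else 1
    score += 3 if ctx.high_risk_transaction else 0
    score += 4 if ctx.impossible_travel else 0

    if score >= 4:
        return "step-up: active liveness check plus a second factor"
    if score >= 2:
        return "passive liveness check"
    return "no additional challenge"

print(required_assurance(RequestContext(True, True, False, False)))
# -> no additional challenge
print(required_assurance(RequestContext(False, False, True, False)))
# -> step-up: active liveness check plus a second factor
```
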
Strengthening Identity Management Systems

It is important to keep in mind that biometric authentication alone will not provide foolproof protection against identity attacks. It must be part of a broader identity management strategy that includes transactional risk analysis, fraud detection, and anti-spoofing measures.

To effectively combat the sophisticated threats that deepfake technology poses, organizations need to enhance their identity management and access control systems with the latest advances in detection and encryption techniques. This proactive approach will strengthen the security and resilience of digital systems against cyberthreats.

Prioritizing these strategies is essential to protect against identity theft and to ensure the reliability of biometric verification over time.