So you've been scammed by a deepfake. What can you do?

Earlier this month, a Hong Kong company lost HK$200 million (A$40 million) in a deepfake scam. An employee transferred funds following a video conference call with scammers who looked and sounded like senior company officials.

Generative AI tools can create image, video and voice replicas of real people saying and doing things they never would have done. And these tools are becoming increasingly easy to access and use.

This can facilitate intimate image abuse (including things like "revenge porn") and disrupt democratic processes. Currently, many jurisdictions are grappling with how to regulate AI deepfakes.

But if you've been a victim of a deepfake scam, can you obtain compensation or redress for your losses? The legislation hasn't caught up yet.

Who is responsible?

In most cases of deepfake fraud, scammers avoid trying to fool banks, instead opting for so-called "push payment" frauds, where victims are tricked into directing their bank to pay the fraudster.

So, if you're seeking a remedy, there are at least four possible targets:

  1. the fraudster (who will often have disappeared)
  2. the social media platform that hosted the fake
  3. any bank that paid out the money on the instructions of the victim of the fraud
  4. the provider of the AI tool that created the fake.

The quick answer is that once the fraudster vanishes, it is currently unclear whether you have a right to a remedy from any of these other parties (though that may change in the future).

Let's see why.

The social media platform

In principle, you could seek damages from a social media platform if it hosted a deepfake used to defraud you. But there are hurdles to overcome.

Platforms typically frame themselves as mere conduits of content—which means they are not legally responsible for the content. In the United States, platforms are explicitly shielded from this kind of liability. However, no such protection exists in most other common law countries, including Australia.

The Australian Competition and Consumer Commission (ACCC) is taking Meta (Facebook's parent company) to court, testing whether platforms can be made directly liable for deepfake crypto scams if they actively target the ads at possible victims.

The ACCC is also arguing Meta should be liable as an accessory to the scam—for failing to remove the misleading ads promptly once notified of the problem.

With audio, video and image deepfakes only growing more realistic, we need multi-layered strategies of prevention, education and compensation.

At the very least, platforms should be responsible for promptly removing deepfake content used for fraudulent purposes. They may already claim to be doing this, but it might soon become a legal obligation.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

