AI‑driven fraud surge: FBI issues urgent warning
The FBI is warning about the dangerous misuse of artificial intelligence. AI is being used not only to streamline legitimate work but also to commit fraud, allowing criminals to build the tools needed for fraudulent schemes with minimal effort.
11 December 2024 15:28
The FBI highlights the growing threat of scams that utilise artificial intelligence technology. According to Bitdefender, cybercriminals are increasingly turning to generative AI to craft more credible and harder-to-detect scams.
Generative artificial intelligence, like OpenAI's GPT, enables criminals to swiftly create content that can be used for fraud. The FBI emphasises that AI reduces the time and effort needed to deceive victims, making these activities more effective.
Generative AI lets criminals take human-created material and produce something entirely new from it. It also helps them correct the spelling mistakes and other human errors that often give scams away.
The FBI warns against fraudsters
"Since it can be difficult to identify when content is AI-generated, the FBI is providing the following examples of how criminals may use generative AI in their fraud schemes, to increase public recognition and scrutiny of its use," reads the FBI announcement.
Criminals use AI to create realistic profile pictures, identity documents, and cloned voices that can be used to impersonate public figures. This builds trust with potential victims and makes the cybercriminals harder to detect.
How to protect yourself from fraudsters?
To recognise content generated by cybercriminals, examine it carefully. Look for minor language errors or linguistic anomalies. In images, watch for artefacts such as blurred areas, missing or extra fingers, or oddly shaped ears.
Bitdefender recommends agreeing on a secret word that serves as a password when communicating with loved ones. This makes it possible to verify whether the person on the other end really is a family member or an impostor. The company also recommends using scam-detection tools such as Bitdefender Scamio.
The FBI emphasises that although content generated by AI is not inherently illegal, it can be used for crimes such as fraud and extortion. Therefore, it is important to be aware of the threats and take appropriate precautions.