AI cracks reCAPTCHA v2, raising serious security concerns
The CAPTCHA security system has been cracked by artificial intelligence. As researchers from ETH Zurich demonstrated, a well-prepared model can solve the security puzzles without the system recognising that the solver is not human. The issue concerns the reCAPTCHA v2 variant.
27 September 2024 16:42
TechRadar describes the discovery based on an analysis shared by the researchers. In the cases described, a popular AI model called YOLO solved, "on behalf of a human", the tasks posed by reCAPTCHA v2, a system designed to distinguish machines from humans. Familiar to almost everyone, these puzzles ask the user to select the images with specific content, such as all tiles showing traffic lights or motorcycles.
Until now, such a test has generally been considered an effective way of verifying whether a human is actually at the computer or whether some kind of script is performing the task. However, as the study shows, a well-prepared YOLO model (trained here on 14,000 street images) was able to select the correct images as effectively as a human.
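To give a sense of how such an attack is structured, the sketch below shows how an off-the-shelf detector could be pointed at a CAPTCHA-style image grid. It is a minimal illustration rather than the researchers' actual code: the ultralytics YOLO package, the captcha_grid.png screenshot of a 3x3 grid and the "traffic light" target class are assumptions made for the example. In the study itself, the model was additionally trained on street imagery, which is what pushed its accuracy to human-like levels.

```python
# Minimal sketch: find which tiles of a 3x3 CAPTCHA-style grid contain
# a target object, using an off-the-shelf pretrained YOLO detector.
# Assumptions (not from the paper): the ultralytics package, a local
# screenshot "captcha_grid.png", and the COCO class "traffic light".
from PIL import Image
from ultralytics import YOLO

model = YOLO("yolov8n.pt")            # small pretrained detector
grid = Image.open("captcha_grid.png")
target = "traffic light"

w, h = grid.size
tile_w, tile_h = w // 3, h // 3

hits = []
for row in range(3):
    for col in range(3):
        # Crop one tile out of the 3x3 grid.
        box = (col * tile_w, row * tile_h, (col + 1) * tile_w, (row + 1) * tile_h)
        tile = grid.crop(box)
        result = model(tile, verbose=False)[0]
        # Collect the class names detected inside this tile.
        names = {model.names[int(c)] for c in result.boxes.cls}
        if target in names:
            hits.append((row, col))

print("Tiles to click:", hits)
```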
Even when it made a mistake, the next attempt with a new puzzle succeeded, and multiple attempts are allowed here. Moreover, the AI's success rate did not drop even when additional CAPTCHA safeguards, such as mouse movement analysis or checks on the "user's" browser history, were active. The AI mimicked human behaviour convincingly enough to trick the system, which raises serious security concerns.
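Behavioural checks of this kind are typically sidestepped by generating cursor trajectories that curve and vary in speed the way a human hand does, rather than jumping in straight lines. The sketch below is a generic illustration of that idea, assuming only numpy; it is not the researchers' implementation, and the start and end coordinates, pacing and noise level are arbitrary example values.

```python
# Generic sketch of a "human-like" mouse path: a curved Bezier trajectory
# with uneven pacing and a little jitter, instead of a straight, constant-speed
# machine-like line. Assumptions: numpy only; all parameters are example values.
import numpy as np

def human_like_path(start, end, n_points=50, noise=2.0, seed=None):
    rng = np.random.default_rng(seed)
    start, end = np.asarray(start, float), np.asarray(end, float)

    # Pick a random control point off the straight line to bend the curve.
    midpoint = (start + end) / 2
    direction = end - start
    normal = np.array([-direction[1], direction[0]])
    normal /= np.linalg.norm(normal) + 1e-9
    control = midpoint + normal * rng.uniform(-0.3, 0.3) * np.linalg.norm(direction)

    # Quadratic Bezier curve sampled with smoothstep pacing, so the simulated
    # cursor accelerates and decelerates rather than moving at constant speed.
    t = np.linspace(0, 1, n_points)
    t = t ** 2 * (3 - 2 * t)
    curve = ((1 - t) ** 2)[:, None] * start \
            + (2 * (1 - t) * t)[:, None] * control \
            + (t ** 2)[:, None] * end

    # Add small pixel-level jitter while keeping the endpoints exact.
    curve[1:-1] += rng.normal(0, noise, size=(n_points - 2, 2))
    return curve

path = human_like_path((100, 400), (640, 220), seed=0)
print(path[:3], "...", path[-1])
```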
The research is an important signal for administrators responsible for the security of online services. Although this is still largely an academic exercise, nothing prevents the technique from being applied in practice. It is therefore worth taking a closer look at website security and considering changes that AI cannot easily overcome. Given the pace of AI development, however, that may prove to be quite a challenge.