Revealed: 'Lavender' AI system and its role in Gaza civilian casualties

Seven people were killed in the drone strike on the humanitarian convoy. Pictured: an MQ-1 Predator armed with a Hellfire missile.
Image source: © Wikimedia Commons CC BY-SA
Łukasz Maziewski

4 April 2024 19:14

A few days after a drone strike on a humanitarian convoy in the Gaza Strip, a very puzzling leak occurred. +972 Magazine disclosed the existence of a highly advanced system reportedly used by the Israeli military. The revelation is particularly striking in light of the deaths of the aid volunteers in Gaza.

The previously undisclosed system is named "Lavender". It is purportedly a highly sophisticated system that uses artificial intelligence algorithms to select targets and direct kinetic strikes.

Despite its sophistication and integration with other modern tools, the system reportedly has flaws that are unacceptable from the standpoint of international law. According to the portal, one of the fundamental rules it appears to breach is the obligation to avoid striking the civilian population.

Relying on its sources, the portal reported that civilians were killed as a direct consequence of the technology's decisions. The system was reportedly even directed deliberately at the homes of Hamas fighters when they were visiting their families or had returned home to rest.

Israeli strikes against Hamas

Citing its informants, the portal claims that in the initial days of Israel's retaliatory campaign in Gaza, the Israeli army was willing to accept 15-20 civilian casualties as the "cost" of accurately hitting a Hamas fighter, at least a lower-ranking one. For higher-ranking targets, the Israelis were reportedly prepared to accept up to 100 civilian deaths in such attacks.

The portal went on to state that the Israelis later escalated their approach. A staggering 37,000 Palestinians were reportedly marked as "targets", with the artificial intelligence tasked with the physical elimination of many of them, especially those of lower rank.

This automatically raises questions about the attack that killed the Polish volunteer Damian Soból. Originally from Przemyśl, Poland, Soból died in an Israeli army strike on a Sunday. The convoy he was travelling with, operated by the organization World Central Kitchen, was fired upon repeatedly in the Gaza Strip.
A total of seven people perished in the attack. The convoy was on a mission to deliver food aid that had arrived in the Gaza Strip only a few hours earlier by ship from Cyprus.

What about the law?

The information presented by the portal is detailed enough to suggest a deliberate leak aimed at revealing the "Lavender" system, among other things. However, Commander Wiesław Goździewicz, a Polish expert in the international humanitarian law of armed conflict, believes it was a genuine leak: offloading responsibility for such an attack onto a machine would, he suggests, be damaging for Israel.

This could indicate a violation by Israel of the core principles of International Humanitarian Law (IHL), especially the principle of distinction, the lawyer explains. This principle involves the obligation to differentiate between combatants and civilians and between military and civilian objects, directing attacks only at military objectives.

The portal's description also implies a breach of the principle of proportionality, which forbids attacks expected to cause incidental civilian harm that is excessive in relation to the concrete and direct military advantage anticipated.

It is worth recalling what Goździewicz wrote in a study for the Bad Embassy portal. He asked: if we have developed artificial intelligence (AI) to the point of creating a fully aware entity and installed such AI in a drone, which then launches a Hellfire missile into a village full of civilians in Sudan, who is responsible for the consequences of the potential war crime?

The system's creator, the individual who decided on its implementation, and the commander who authorized its use are all accountable, argues Goździewicz. AI is not self-created; it is a human invention, and therefore, humans bear responsibility for its application.

It is the person who kills with a firearm who is the perpetrator, not the pistol, the officer reminds us. Hence the push for mechanisms that hold creators, decision-makers, and commanders accountable for the use of specific types of weaponry. Even if computer systems were behind the attacks on civilians in Gaza, their creators could face charges for war crimes.