Revealed: 'Lavender' AI system and its role in Gaza civilian casualties
A few days after a drone strike on a humanitarian convoy in the Gaza Strip, a puzzling leak occurred: +972 Magazine disclosed the existence of "Lavender", a highly advanced AI system reportedly used by the Israeli military. The revelation is all the more striking coming so soon after the deaths of aid volunteers in Gaza.
Despite its sophistication and integration with other modern tools, the system reportedly has flaws that are unacceptable from the standpoint of international law. According to the magazine, one of the fundamental rules it appears to breach is the obligation to avoid striking the civilian population.
Israeli strikes against Hamas
The magazine went on to report that the Israelis later escalated their approach. As many as 37,000 Palestinians were reportedly marked as "targets", with the artificial intelligence tasked with the physical elimination of many of them, especially those of lower rank.
What about the law?
The information presented by the magazine is detailed enough to suggest that it might have been a deliberate leak aimed at revealing, among other things, the "Lavender" system. However, Commander Wiesław Goździewicz, a Polish expert in the international humanitarian law of armed conflict, believes it was a genuine leak: for Israel, he suggests, offloading responsibility for such an attack onto a machine would be damaging.
This could indicate that Israel has violated the core principles of International Humanitarian Law (IHL), especially the principle of distinction, the lawyer explains. That principle obliges parties to a conflict to differentiate between combatants and civilians, and between military objectives and civilian objects, and to direct attacks only at military objectives.
The magazine's description also implies a breach of the principle of proportionality, which forbids attacks expected to cause incidental harm to civilians that is excessive in relation to the concrete and direct military advantage anticipated.
It is worth recalling what Goździewicz wrote in a study for the Bad Embassy portal. He asked: if we have developed artificial intelligence (AI) to the point of creating a fully aware entity and implemented it in a drone, which then fires a Hellfire missile into a village full of civilians in Sudan, who is responsible for the consequences of a potential war crime?
The system's creator, the person who decided to deploy it, and the commander who authorized its use are all accountable, Goździewicz argues. AI does not create itself; it is a human invention, and therefore humans bear responsibility for how it is applied.