The images have not left me: dead and wounded children carried in the arms of medics and relatives to ambulances and hospitals. On February 28, at the start of Operation Epic Fury, cruise missiles struck the Shajareh Tayyebeh school – a girls’ school in Minab, southern Iran – hitting the building multiple times. Iranian authorities reported that 168 people were killed – mostly schoolchildren aged between seven and twelve – along with 26 teachers and four parents. Ninety-five more were wounded. These children had woken up that Tuesday morning and been sent to school by mothers who believed the classroom was the safest place outside their homes; they were dead before the day was over. UNESCO condemned the strike as a “grave violation of humanitarian law”. The attack has been cited as one of the deadliest single incidents of civilian casualties in the conflict.
Preliminary findings suggest that the school was struck because it was listed in a military target database that had never been updated. The compound had once been part of an Iranian Revolutionary Guard Corps facility, but military personnel had moved out years earlier and the school had reopened. The database did not reflect this, and no one verified it before the missiles were fired.
Who is responsible? The programmer who built the system? The intelligence analysts who allowed the database to go unverified? The commanders who authorised the strike? Possibly all of them – but the diffusion of responsibility and accountability across this chain is precisely what makes AI-assisted targeting so dangerous. The distinction between military targets and civilian objects must be clearly defined and verified before any strike is authorised.
In any conventional targeting chain, an error of this magnitude would generate a trail of accountability, but algorithms cannot be held in contempt, sentenced, or court-martialled. The Minab school attack is evidence, at scale, that existing law is not sufficient when there is no human in the loop to verify and apply it.
The governance gap is real, and it is closeable. A legal framework is essential. First, mandatory human verification – with documented authorisation and a timestamp – before any AI-assisted targeting system can direct a strike on a location. No algorithm should be able to carry a target from identification to authorisation without a named human officer confirming, against current intelligence, that the target is what the system says it is.
Second, a mandatory, time-stamped expiry on targeting intelligence data: any classification of a building as a military facility must lapse after a defined period and require re-verification before it can be actioned. On both counts, the Minab catastrophe was preventable.
The children who arrived at school in Minab on the morning of February 28 did not know that a database somewhere had not been updated. They knew only that it was a Tuesday, that school was safe, and that they were happy to be there. The drawings and sketches recovered from the bombed site are heart-wrenching to look at. The question is no longer whether we understand the problem; it is whether the people who have the authority to fix it have the will to act before the next school is in the database.
Neera Bali is Chief of Staff, Pahle India Foundation.