Forget our careers, will Israeli AI kill us?
2026-03-24 - 22:20
We saw how technological devices can be turned into weapons of mass destruction back in September 2024, when pagers used by hundreds of people in Lebanon, including Hezbollah members, were detonated simultaneously. The obvious suspect in the attack, which left nine people dead and more than three thousand injured (over two hundred of them seriously), was, of course, Israel. Tiny explosive mechanisms had been planted inside the pagers at the supply-chain stage, before they were distributed; a remotely sent signal then transformed the devices into lethal munitions. After this attack, which shocked the world, nearly everyone began to fear that cell phones could be turned into explosives by similar means. Experts say this is technically possible, but the need to physically tamper with the production lines of major brands like Apple or Samsung makes that scenario unlikely.

But here is the truly bitter reality: there is no longer any need to physically turn phones into bombs. It is known that during the genocide in Gaza and in assassination operations across the region, Israel has used both social media companies and artificial intelligence platforms as "digital guided bullets." It is no longer a secret that targeted individuals are struck by guided missiles in their own homes, using intelligence derived from the signals emitted by the phones they carry.

What is even more horrific became evident during the recent war in Iran: US-based artificial intelligence companies were revealed to have turned into "massacre apparatuses." For days it has been discussed that Anthropic's "Claude" model played a decisive role in the attacks by the US and Israel. This skillful assistant, which millions use to write stories and develop scenarios, just as they do with Gemini and ChatGPT, immersed its algorithms in atrocity while identifying, evaluating, and simulating targets in Iran.
During the attack of February 28th, for instance, when the war began, two missile strikes conducted forty minutes apart on a girls' school in Minab killed 160 children; the target had been identified using Claude, provided to the Pentagon to analyze satellite imagery, field reports, and signals-intelligence data sets. Even if "human commanders" gave the order for the massacre, it was the "assistants" in our pockets that determined the coordinates of those innocent children.

For months, the international community has been debating whether artificial intelligence will take our jobs. Yet what is happening shows that AI, far from hunting careers, has been transformed into an "intelligence-based guillotine" that adds people directly to target lists. It has been tragically proven that the "ethical algorithms" Silicon Valley so proudly boasts about are activated only when writing poetry, and that these technologies are, in fact, "execution devices." The "poet within us," who might have lamented for those children, died long ago.

Which country, which leader, which school, or which military installation is next will be determined by the occupation policies of leaders who have "seized" the power of artificial intelligence platforms. Because the real truth that should horrify the rest of the world is hidden in the fine print of these companies' agreements with the Pentagon. When their military collaborations were exposed, companies like Anthropic and OpenAI (ChatGPT) faced significant backlash and boycott threats from users. In response, while outlining their "red lines," they guaranteed that they would "exempt only US citizens from mass surveillance." The translation of this commercial maneuver is as follows: "Every civilian outside the borders of the United States is a legitimate dataset for our algorithms. You can target them if you wish."
Simply put, we face a question far more critical than whether AI will leave us unemployed: whether it will leave us alive.