Date: March 26, 2026
The Role of Technology in Modern Warfare
As the Iran-US war enters its fourth week since the initial strikes on February 28, the United States Department of Defense has deployed a range of advanced technologies in the Middle East. From artificial intelligence that processes data in microseconds to autonomous drones that strike without a pilot, modern warfare looks markedly different from its traditional form. While proponents argue these tools reduce risk to soldiers, critics warn that their growing autonomy blurs the line between human judgment and machine decision-making. The question is whether humanity is equipped to define the moral limits of new technology on the battlefield.
According to U.S. Central Command, artificial intelligence (AI) has aided the US during Operation Epic Fury, a campaign aimed at eliminating Iran’s nuclear threat and war-fighting capability. The head of Central Command, Brad Cooper, acknowledged that “our warfighters are leveraging a variety of advanced AI tools,” explaining that “these systems help [the military] sift through vast amounts of data in seconds” (U.S. Central Command). The Pentagon has also confirmed its use of Anthropic’s Claude AI, which played a significant role during the U.S. operation to capture Nicolás Maduro in Venezuela by helping the army identify the optimal strategy (O’Donnell). As Cooper pointed out, unlike traditional analysis, AI engines like Claude need only seconds to process vast amounts of intelligence and identify potential strike targets that would otherwise take hours or days to evaluate. Consequently, the pace of modern warfare has accelerated significantly.
The deployment of AI raises ethical questions. Although final strike decisions are made by humans, the ethics of AI’s involvement and the reliability of its data remain debatable. One incident that has cast serious doubt on the ethics of AI was a strike on a school that killed more than 175 children; according to the Bulletin of the Atomic Scientists, the strike likely stemmed from misinformation processed by AI (Goudarzi). The real question is, “Is AI in its current form reliable enough to be deployed on a battlefield?”
The situation evokes science fiction, in which technology escapes human control and begins making decisions on its own. Notably, Anthropic restricts the Pentagon’s use of Claude for certain applications, despite the Army’s demand for full access. Given the rapid advancement of AI, clear restrictions on its use in warfare seem necessary.
Alongside AI, the United States used low-cost, one-way attack drones in combat for the first time, while Iran has been using its Shahed drones extensively with a similar intent: striking targets without exposing air force pilots to danger. These drones are cost-effective and low-risk because they strike precisely without requiring personnel on board. The Russia-Ukraine war has already demonstrated the catastrophic consequences of such drones, which account for an estimated 70-80% of deaths and injuries in that conflict; their expanded use in the Iran-US conflict raises serious concerns about the normalization of autonomous and semi-autonomous warfare.
Cyberwarfare has played a role as well. According to reports on the Venezuela operation in January 2026, the country’s air defense system, internet, power grid, and communication networks were disrupted for 2.5 hours by a coordinated cyberattack. The United States has thus demonstrated its ability to paralyze a country through advanced technology.
The destructive potential of these technologies is clear. As combat continues to evolve, whether countries can define the limits of technology is a challenge the world now faces. Technology will continue to play a significant role in warfare, shaping how humans confront its consequences.
Image Credit: U.S. Marine Corps/Michael Virtue
Image Credit: Naeblys
Main Image Credit: Army Futures Command
Works Cited
Goudarzi, S. “Unready for War, AI May Already Be Causing Deadly Mistakes.” Bulletin of the Atomic Scientists, 12 Mar. 2026, thebulletin.org/2026/03/unready-for-war-ai-may-already-be-causing-deadly-mistakes/amp/.
O’Donnell, James. “A Defense Official Reveals How AI Chatbots Could Be Used for Targeting Decisions.” MIT Technology Review, 13 Mar. 2026, www.technologyreview.com/2026/03/12/1134243/defense-official-military-use-ai-chatbots-targeting-decisions.
U.S. Central Command [@CENTCOM]. “Update from CENTCOM Commander on Operation Epic Fury.” X (formerly Twitter), x.com/CENTCOM/status/2031700131687379148.