AhlulBayt News Agency (ABNA): In a report, the American newspaper Los Angeles Times has examined the Zionist regime army's use of artificial intelligence technology on the battlefield and its potential consequences, which experts say include serious errors in target identification.
According to the report, the process of identifying individuals is based on a system that gathers vast amounts of information from various sources, including smartphone data, surveillance and traffic cameras, Wi-Fi signals, drone imagery, government intelligence databases, and social media data.
Building a Threat Profile from Data
After collecting this information, intelligent systems classify and evaluate the data, linking it to individuals' identities.
Based on this, AI can create a timeline of an individual's activities and map out their communication network.
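As a loose illustration of what these two artifacts might look like in code, the hypothetical Python sketch below builds a chronological timeline from invented event records and a simple communication graph from the call events among them. None of the field names, identifiers, or data reflect any real system; they are assumptions made for the example.

```python
from collections import defaultdict

# Invented surveillance-style event records for one person, "P1".
events = [
    {"ts": "2024-05-02T21:10", "person": "P1", "kind": "call", "peer": "P2"},
    {"ts": "2024-05-02T08:30", "person": "P1", "kind": "cctv"},
    {"ts": "2024-05-03T19:45", "person": "P1", "kind": "call", "peer": "P3"},
]

# Timeline: every observation about the person, in chronological order.
# ISO-8601 timestamps sort correctly as plain strings.
timeline = sorted(events, key=lambda e: e["ts"])

# Communication network: an undirected adjacency list built from call records.
graph: dict[str, set[str]] = defaultdict(set)
for e in events:
    if e["kind"] == "call":
        graph[e["person"]].add(e["peer"])
        graph[e["peer"]].add(e["person"])

for e in timeline:
    print(e["ts"], e["kind"])
print(dict(graph))
```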
These systems analyze massive volumes of data at high speed, identify patterns, and compare them with the behavior of individuals previously identified as threats or who have been present in specific areas.
AI also examines individuals' behavioral changes and deviations from their normal life patterns to construct what is called a "threat profile."
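To make the idea of a "deviation from normal life patterns" concrete, here is a minimal, entirely hypothetical Python sketch: it scores a person's recent daily activity against their own historical baseline using z-scores. The feature names, data, and use of z-scores are invented for illustration and are not drawn from the report.

```python
from statistics import mean, stdev

def deviation_score(history: list[dict], recent: dict) -> float:
    """Average absolute z-score of recent activity vs. the person's own baseline."""
    scores = []
    for key in recent:
        baseline = [day[key] for day in history]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # no variation in the baseline; skip this feature
        scores.append(abs(recent[key] - mu) / sigma)
    return mean(scores) if scores else 0.0

# Illustrative daily features: distinct contacts, distance travelled,
# hours of phone activity after midnight.
history = [
    {"contacts": 12, "km_travelled": 5.0, "late_night_hours": 0.5},
    {"contacts": 10, "km_travelled": 6.2, "late_night_hours": 0.0},
    {"contacts": 11, "km_travelled": 4.8, "late_night_hours": 1.0},
    {"contacts": 13, "km_travelled": 5.5, "late_night_hours": 0.5},
]
recent = {"contacts": 30, "km_travelled": 40.0, "late_night_hours": 4.0}

print(f"deviation score: {deviation_score(history, recent):.1f}")
```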
The Main Problem: Data Over Logic
But experts believe that one of the most significant problems with these systems is their decision-making methodology.
According to one AI specialist cited in the report, such systems rely more on data than on reasoning and logic.
In other words, if the input data is incomplete, biased, or flawed, the final outcome will also be erroneous.
This expert explains that such systems sometimes confuse correlation with actual action: they treat mere statistical similarity or behavioral links as signs of a threat, without considering the real context and background of the case.
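A toy example can show how this failure mode arises. In the hypothetical sketch below, anyone whose behavioral feature vector is statistically similar to that of a known threat is flagged, even though nothing in the features captures why the person behaves that way. All data, feature names, and the cutoff are invented.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Invented feature vector: [night trips/week, border-area visits/month,
# new SIM cards/year].
known_threat = [9.0, 4.0, 6.0]

# A night-shift delivery courier produces a similar vector for mundane
# reasons; the features correlate with the threat profile, but nothing
# here represents *why* the person moves this way.
courier = [8.0, 3.0, 5.0]

THRESHOLD = 0.95  # arbitrary illustrative cutoff
similarity = cosine(courier, known_threat)
print(f"similarity = {similarity:.3f} -> flagged: {similarity > THRESHOLD}")
```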
When Normal Behavior Becomes a Sign of Danger
Vasja Badalič, a professor at the Slovenian Institute of Criminology, also told the Los Angeles Times that these systems typically track individuals' daily activities, such as whom a person associates with and when and where they move.
Then, based on this data, they calculate the probability that a person is a militant or otherwise dangerous.
According to him, this process can lead to false positives—a situation where innocent people are mistakenly identified as threats.
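A back-of-the-envelope Bayes calculation shows why false positives are structurally likely here: when actual militants are a tiny fraction of the surveilled population, even an accurate classifier flags far more innocent people than real threats. All the numbers below are invented for illustration.

```python
# Hypothetical rates, chosen only to illustrate the base-rate effect.
base_rate = 0.001            # assume 0.1% of the population are actual militants
sensitivity = 0.99           # P(flagged | militant)
false_positive_rate = 0.01   # P(flagged | not militant)

# Bayes' rule: P(militant | flagged).
p_flagged = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
p_militant_given_flag = sensitivity * base_rate / p_flagged

# Prints roughly 9%: under these assumptions, about nine in ten
# flagged individuals are not militants at all.
print(f"P(militant | flagged) = {p_militant_given_flag:.1%}")
```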
Badalič explains: "Family members, individuals involved in propaganda activities, or those playing a role in financial matters are not necessarily fighters, but the system may place them in the same group because of similarities in their communication patterns."
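The grouping effect Badalič describes can be sketched with a simple contact-overlap measure. In the invented example below, a fighter's sibling shares enough contacts to land in the same group as the fighter, while the nature of the relationship remains invisible to the metric; names, contacts, and the cutoff are all assumptions.

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap between two contact sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

contacts = {
    "fighter":  {"A", "B", "C", "D", "E"},
    "sibling":  {"A", "B", "C", "D", "F"},  # shares family and neighbours
    "stranger": {"X", "Y", "Z"},
}

# Anyone whose contact overlap with a known fighter exceeds the cutoff
# lands in the same group, regardless of what the relationship actually is.
CUTOFF = 0.5
for name, c in contacts.items():
    if name == "fighter":
        continue
    overlap = jaccard(c, contacts["fighter"])
    print(f"{name}: overlap={overlap:.2f}, same group: {overlap > CUTOFF}")
```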
He concludes by raising a fundamental question: "Where exactly is the boundary between a military person and a civilian?"
This question has become one of the most critical challenges in the use of artificial intelligence in modern warfare: a technology that, while increasing speed and analytical capability, still faces serious limitations in understanding human context and the complexity of real-world relationships.