Killer robots: Scientist Mohsen Fakhrizadeh’s convoy of vehicles was passing through the outskirts of Iran’s capital. He was considered Iran’s most senior nuclear scientist and lived under tight security.
After some time, Fakhrizadeh’s car came under attack. Shots were fired and he was killed. This was no ordinary attack.
A senior officer who was present at the scene said there was no attacker there. The shots were fired from a machine gun mounted on a car, but no one was wielding it. Another Iranian official said the machine gun was being controlled through artificial intelligence.
A “computer-controlled machine gun”: it sounds like something out of a Hollywood killer-robot film, a machine that can aim, think and shoot on its own.
But is this possible? And if so, another question arises: in the coming days, will robots face each other in war instead of soldiers?

Killer robots: Will robots fight in wars?
Heather Roff, a senior research analyst at Johns Hopkins University in the US, says, “We are talking about a system that has the ability to select its own targets and engage them without the help or instruction of any human.”
It has to be understood that when it comes to the use of robots in war, the word on every expert’s tongue is “autonomous”: giving the machine the power to make its own decisions. There has been much debate about this. The range of such weapons is also very wide, from basic systems to Terminator-style machines. Roff says that such weapons exist even today.
She says, “We have naval mines and many types of landmines, as well as cyber technology and cyber weapons, which are autonomous. Then there is Israel’s anti-radiation weapon Harop, which after being launched homes in on radar signals. And the Patriot missile system can be directed and deployed in automatic mode.”
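To make the word “autonomous” more concrete, here is a minimal Python sketch of the spectrum the experts describe, from a human approving every engagement to a machine deciding entirely on its own. The mode names and the three-level split are illustrative assumptions for this sketch, not the control logic of any real weapon.

```python
from enum import Enum
from typing import Optional

class Mode(Enum):
    HUMAN_IN_LOOP = 1   # machine proposes, a human must approve each engagement
    HUMAN_ON_LOOP = 2   # machine acts unless a human vetoes in time
    AUTONOMOUS = 3      # machine selects and engages targets on its own

def engage(target_confirmed: bool, mode: Mode, human_decision: Optional[bool]) -> bool:
    """Illustrative only: returns whether this toy system would fire."""
    if mode is Mode.HUMAN_IN_LOOP:
        return target_confirmed and human_decision is True      # explicit approval
    if mode is Mode.HUMAN_ON_LOOP:
        return target_confirmed and human_decision is not False  # no veto received
    return target_confirmed  # fully autonomous: no human in the decision at all

print(engage(True, Mode.HUMAN_ON_LOOP, None))  # True: fires because no one vetoed
```

On this toy scale, a Patriot battery in automatic mode would sit roughly at the “human on the loop” level: the machine engages unless an operator intervenes in time.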
Many people may be surprised by the idea of using “killer robots” in war, but the truth is that such technology is being developed around the world right now.
Heather Roff says, “China wants to strengthen its military. It is automating its naval ships and submarines. Israel is working to improve ground weapons such as tanks and guns. Russia also spends heavily on ground equipment. America’s interest is everywhere, from air force and airborne systems to missiles. Britain, too, wants to increase the capability of its air force and navy.”
Killer robots: Experts worry
The important thing is that not all of the weapons being developed now will be “killer robots”. But Roff is concerned about some weapons, and some countries.
“Western countries tend to be risk-averse. They test their systems extensively, but Russia quickly deploys newly developed equipment without much testing. That is what worries me,” she says.
Roff says that putting an untested system on the front line is an invitation to disaster.
Tom Simpson, an associate professor at Oxford University, has spent years studying the effects of war. He served in the British armed forces for five years, and his interest in questions of technology and security has remained constant.
On the idea of deploying robots instead of humans on dangerous fronts such as war, he says an argument is made in its favor that strengthens governments’ moral ground: it is the responsibility of governments to develop such systems to save the lives of their soldiers. But Tom cautions that this technology carries dangers as well.
Killer robots: Machines can make mistakes!
He says, “There is a fear that the technology can lead to wrong decisions. It can kill people who should not be killed, and it is right to worry about that. If the soldier is a human, it is his responsibility to distinguish between targets and innocent people. But a machine will not always be able to make that distinction successfully. The question, then, is how much risk can be taken with it. Some people would say the acceptable margin of risk is negligible; I do not agree with that view. In my opinion, this technology should be used only when it poses less danger to civilians not taking part in the war than a human army would.”
In Tom’s opinion, there are some battle fronts where more risk can be taken. Imagine, he says, a situation in which a pilot in a fighter plane is accompanied by 20 or 30 automated systems. Wherever they are in the sky, there is no one around other than the enemy. An automated system can strike such a place successfully, and it is also safe, because there are no civilians there.
Another, more pragmatic argument is made in favor of deploying automated robots. Tom says the genie is out of the bottle: if you do not make such weapons, your enemies will make them anyway.

Killer robots: What will be the strategy?
Tom Simpson says, “When it comes to banning such weapons or advocating self-restraint, the question to ask is: what will the world look like after 20 years? There will be a few countries that have such automated systems, and our army will not have the capability to defend itself against them. I think in that situation people will tell their governments that the decision to show restraint was wrong. A government’s first duty is to secure the country. What responsible governments can adopt is a no-first-use policy, that is, a policy of not being the first to use such weapons.”
It is the same strategy that might remind you of the nuclear arms race of the 20th century. But then another question is asked: why do you need to build such a weapon at all to counter the threat?
To this Tom Simpson replies, “Consider a system in which nano or micro technology has been used to deploy 20 to 50 thousand devices across an area. In that situation there is no way a person, or a squad of humans, can compete with it. To compete with it, you may not need exactly the same system, but you will need some kind of automatic system.”

Killer robots: Demand for ban
Tom says that only such a system can counter the threat in front of you. However, software engineer Laura Nolan has a different opinion.
She says, “Some people think that in future wars robots will fight robots, and that if robot soldiers fight, no blood will be spilled. I think this is an idealistic fantasy. On most occasions we will see these machines deployed against humans, and that will be a very painful situation.”
Laura Nolan used to work at the tech company Google. In 2017, she was asked to work on a project with the US Defense Department, in which artificial intelligence systems were to be developed to analyze video footage taken from drones. It was not a project to develop weapons.
But Laura was worried about it. She felt she was part of an artificial intelligence technology whose path leads, somewhere, to war. She resigned in protest, and afterwards got involved in the campaign to stop “killer robots”. In her opinion, the core problem with such weapons is that computers and humans think in different ways.
Laura says, “Computers do calculations, whereas humans have the ability to weigh things and make judgments. Calculations are done on the basis of hard data, while judgment takes many things into account. To give an example, most of us would not want to appear in court before a robot judge that enforces the law in one fixed way.”
Killer robots: The limits of technology
The problem is that any part of the battlefield where civilians, that is, ordinary citizens, live is a very complicated area for conflict. Many difficult decisions have to be taken there, and they require sound judgment.
Laura says that if the machine’s software is designed for situations different from the one it actually meets, the outcome can be very precarious, whereas humans can adapt their decisions to the situation. She says similar problems are seen in driverless-car technology.
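As a toy illustration of that brittleness (all labels, thresholds and rules here are invented for this sketch, not taken from any real vehicle’s software): a system that acts only on a stable, “hazardous” classification can fail to act at all when its labels keep flip-flopping.

```python
# Invented labels and rules, purely for illustration.
BRAKE_FOR = {"pedestrian", "bicycle"}  # classes this sketch treats as hazards

def should_brake(history: list, required_streak: int = 3) -> bool:
    """Brake only after the same hazard label appears several frames in a row."""
    if len(history) < required_streak:
        return False
    recent = history[-required_streak:]
    return len(set(recent)) == 1 and recent[0] in BRAKE_FOR

# The classifier keeps re-labelling the same object, so a stable streak
# never forms and the system never decides to brake.
frames = ["vehicle", "other", "bicycle", "other", "pedestrian", "bicycle"]
history = []
for label in frames:
    history.append(label)
    print(f"{label:10s} -> {'BRAKE' if should_brake(history) else 'no action'}")
```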
In 2018, during testing in the US, an automated Uber car collided with a woman who was walking with a bicycle. The woman died in the accident. The investigation found that the car’s system had failed to identify the woman and her bicycle as a potential hazard in the car’s path. Laura argues that with autonomous weapons the situation could be even more complicated, because war itself is a very complicated situation. According to Laura, the question of accountability also arises here.
She says, “If an autonomous weapon does something that can be classified as a war crime, who will be responsible? Will the accountability lie with the commander who ordered the weapon’s deployment? Probably not, because the weapon may never have done anything like this during training. Even the engineers might not have imagined it happening. In such a situation, it will be difficult to hold the engineers accountable either.”
Laura says that, in her view, these weapons should be banned at all costs.
Killer robots: Danger is danger
On the other hand, Paul Scharre, director of technology at the Center for a New American Security, raises a different serious question about this technology. He says, “Technological development is taking us down a path where decisions of life and death will be handed over to machines, and I think this is a serious question before humanity.”
Paul has been a policy analyst at the Pentagon. He has also written a book, ‘Army of None: Autonomous Weapons and the Future of War’. Asked whether robots will make life-and-death decisions in future wars, he says that such machines already exist.
Paul says that drones already take off on their own along pre-programmed routes. He also says that robots can never have enough intelligence to understand the strategic and geopolitical risks associated with war.
Paul Scharre says, “There have been some situations recently: Russian and American fighter jets flying in the skies over Syria in support of forces fighting on the ground, and last year a skirmish between the armies of China and India. In the midst of such military disputes, deploying autonomous weapons carries the possibility of great risk.”
Paul also points out that computers are used to buy and sell shares in the stock market at superhuman speed, and everyone has seen the risks of that too. In a flash crash, that is, a sudden sharp fall, humans cannot intervene, because the machines do all of this within milliseconds. The same can happen in war: in the blink of an eye, the machines of the two sides could be firing at each other.
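A toy back-of-the-envelope simulation makes the speed mismatch vivid (the numbers are invented for illustration): two automated systems trading responses every millisecond complete hundreds of exchanges before a human supervisor, who needs roughly a quarter of a second just to react, can do anything at all.

```python
MACHINE_STEP_MS = 1      # each side re-evaluates every millisecond (assumed)
HUMAN_REACTION_MS = 250  # rough human reaction time to an alarm (assumed)

def exchanges_before_human_reacts() -> int:
    """Count machine-vs-machine exchanges inside one human reaction time."""
    t, exchanges = 0, 0
    while t < HUMAN_REACTION_MS:
        exchanges += 2   # side A responds to side B, and B responds back
        t += MACHINE_STEP_MS
    return exchanges

print(exchanges_before_human_reacts(), "exchanges before a human can even react")
```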
The situation could become even more dire if the enemy turned your own weapons against you. Paul says, “It absolutely can happen. Anything can be hacked. The military uses the same insecure computers as everyone else. Self-driving cars have been hacked; we have seen that hackers can control a car’s steering and brakes even from a distance. But if a hacker can take control of weapons, the damage can be much greater.”
Paul says there are other questions related to “killer robots” as well. Even if we assume the machines hit the right targets, no human will be responsible for the loss of life. The question then is: if no one carries the burden of war on their shoulders, what will we say to society? What answer will we give to humanity?
It is certain that the machines taking part in today’s wars will continue to play a role. But will we be able to understand the limits of the technology? How many decisions will we hand over to the machines? As for what the future holds, the answer is that automated robots will have only as many rights as we give them.