UK Army could be 25-percent robotic by 2030, says British general

Cal Jeffrey

The big picture: The UK military is moving forward with plans to develop and deploy several thousand combat robots, some of which might be autonomous. So far, militaries worldwide have avoided using fully autonomous unmanned technologies in combat situations. Semi-autonomous drones always have a pilot at the controls, so humans, not AI, make the final strike decisions.

British Army leaders think that by 2030 nearly a quarter of the UK's ground troops will be robots. That is almost 30,000 autonomous and remote-controlled fighting machines deployed within about a decade.

"I suspect we could have an army of 120,000, of which 30,000 might be robots, who knows?" General Sir Nick Carter told The Guardian in an interview.

Carter was careful to note that military leadership does not have any concrete goals in mind yet and that his estimates are only his opinion.

The main hurdle the Army currently faces is funding. Research into developing combat robots stalled in October when a cross-governmental spending review was postponed. Gen. Carter said that talks on the funding issue are now proceeding and are "going on in a very constructive way."

"Clearly, from our perspective, we are going to argue for something like that [a multi-year budget] because we need long-term investment because long-term investment gives us the opportunity to have confidence in modernization," Carter explained.

Deploying combat robots programmed to make split-second, potentially lethal decisions raises some ethical flags, but deployment and development are two different things. The UK military and others have been working on small autonomous or remote-controlled weapons for years.

An example is the UK's i9 drone. It is a small, human-operated six-rotor craft equipped with two shotguns. Its primary function is to take point when storming a building in urban combat scenarios. Having the robot enter first to draw fire and dispatch the enemy before it has a chance to fire on human troops could save countless lives.

Currently, fully autonomous combat robots are not much more than a notion. While prototypes exist, there are no immediate plans to deploy them. World leaders would have to agree on when and how militaries could use these war machines. The last thing anybody wants is rogue AI running amok and killing innocents.

Image credit: Forces 


 
For general warfare I would agree with the applications. For Special Operations, not so much. In the latter, there are frequent changes, sometimes minute by minute and sometimes instantaneous, that even the most advanced AI cannot be prepared for... but on the battlefield this has very real possibilities, especially in encounters where drawing the first fire and returning the first fire will save lives and be faster and more accurate than a human being.
 
I'm a 20-year infantry/airborne Army vet with 4 combat tours under my belt. I'm not exactly sure what to think about this. I could see them coming in handy if you were trying to swarm a hard target. Think Iwo Jima during WWII. Could have saved a lot of lives by just flipping the "on" switch and letting a thousand of these run up the beach blasting everything. But on the other hand, they could just as easily get in the way, with friendly fire casualties a real possibility.

Not opposed to the idea. Just interested in specifically knowing how/when they would be deployed and who/what is controlling them.
 
Fewer bodybags?

LOL...of course. But warfare is a completely unpredictable environment. And fast moving to boot. I wouldn't want to get in a firefight alongside these things in some far-off country with the drone operator running the drone from Nebraska like they do with the large aerial drones. Or using an AI that has its actions pre-ordained and it's doing one thing while you're trying to do another and the bullets are flying. ;)
 
I'd have thought they'd also come in handy for defending positions. Unlike a minefield, they can be switched off and moved to another position. They'd be very useful as artillery, as they're simply given a position and they then flatten it. Driving supply vehicles would also be useful, as soldiers aren't exposed in unarmoured vehicles this way. They could also be useful in enforcing roadblocks, rather than using soldiers who are at risk from explosive-filled vehicles. On the attack side, the problem is obviously trying not to shoot the wrong people, but I guess we can just continue labelling everybody who's killed a combatant ;)
 
Did you know the Skynet satellite system was developed by the UK military and is still going now, on its 6th or 7th iteration? It's a surveillance system, so not so far-fetched, but no way AI capable lol
 
Or using an AI that has its actions pre-ordained and it's doing one thing while you're trying to do another and the bullets are flying. ;)

So I am actually doing my Master's degree in robotics and AI... none of their actions are pre-ordained. And that is the terrifying part. Most of the cutting-edge AI research these days is just in trying to quantify and predict all these different AI models that we have built. We can train and train a model for every situation we can think of, but at the end of the day, we really have no way of knowing if we accidentally trained it wrong.

Take that example of the AI camera meant to watch a football (soccer) match, to follow the ball. A relatively simple problem, at first glance. You just need to feed it a bunch of images of a football being kicked around a field, taken with a similar camera system from a similar distance. Eventually, you have a system that will track the ball around the field automatically. At least until it sees some bald guy's head, and starts tracking that instead. The people who built this model had no way of knowing this would happen because the model is essentially a black box. They would first have to think of this as a potential issue, then they would have to feed it images of bald guys, from the same distance and camera system, and see if it misidentified them as footballs - and then would need to train it to ignore the bald guys, but not impact its ability to track the ball. Even then, it's no guarantee that they trained it to ignore every bald guy, and didn't accidentally train it to ignore some footballs.

Now, replace 'football' with 'enemy combatant', and 'bald guy' with 'non-combatant' or even 'friendly soldier', and you begin to grasp the scope and severity of the problem. AI with access to weapons is nowhere near ready for use in combat situation. Self-driving supply convoys? Sure. Robot tanks? No.
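The failure mode described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the actual tracking system: a nearest-centroid classifier standing in for the trained model, using made-up two-number "roundness/brightness" features. It was only ever taught "ball" vs "grass", so anything round and bright, like a bald head, gets called a ball.

```python
def centroid(points):
    """Mean of a list of 2-D feature vectors (roundness, brightness)."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def classify(x, centroids):
    """Nearest-centroid classifier: return the label whose training
    centroid is closest to x (squared Euclidean distance)."""
    return min(
        centroids,
        key=lambda label: (x[0] - centroids[label][0]) ** 2
        + (x[1] - centroids[label][1]) ** 2,
    )

# Training data (invented numbers): only 'ball' vs 'grass' was ever labelled.
train = {
    "ball": [(0.90, 0.80), (0.95, 0.85), (0.88, 0.90)],   # round, bright
    "grass": [(0.10, 0.30), (0.15, 0.25), (0.20, 0.35)],  # flat, dark
}
centroids = {label: centroid(pts) for label, pts in train.items()}

# A bald head is round and bright too. It is out of distribution, but the
# model has no "something else" class, so it picks the nearest known label.
bald_head = (0.85, 0.82)
print(classify(bald_head, centroids))  # -> 'ball'
```

The point is structural, not specific to this toy model: a classifier only chooses among the labels it was trained on, so inputs nobody anticipated still get a confident answer.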
 
I have no problem with robotic soldiers, but I do have a problem with AUTONOMOUS robotic soldiers. If these things are remote-controlled by human operators, then I'm ok with it. The problem with autonomous machines of war is that they wouldn't understand the idea of surrender. This was very ingeniously demonstrated in the original Robocop movie with the ED-209.
 
" Having the robot enter first to draw fire and dispatch the enemy before it has a chance to fire on human troops could save countless lives."

That's a ridiculous claim. In all likelihood, we're talking about dozens of lives at best (unless the UK enters into a total-war type of situation), and even then we have to ask how many people will die because people are likely to have a very different reaction to a robot busting down their door than they would to a human.
 
" Having the robot enter first to draw fire and dispatch the enemy before it has a chance to fire on human troops could save countless lives."

This is being marketed as robots charge first vs humans. But I expect this will lead to robot-vs-robot warfare pretty soonish thereafter.
 