Tag Archives: Robotics

Disruptive Technology And The International Law Future

Image: “Devdiscourse”


A new chapter of the international order

The automation of war is as inevitable as conflict itself.  Less certain, however, is the international community’s collective ability to predict the many ways that these changes will affect the traditional global order. 

The pace of technology is often far greater than our collective ability to contemplate its second and third order effects, and this reality counsels cautious reflection as we enter a new chapter in the age-old story of war and peace.


“Robots have long presented a threat to some aspect of the human experience.  What began with concern over the labor market slowly evolved into a full-blown existential debate over the future of mankind.  But lost somewhere in between the assembly line and apocalypse stands a more immediate threat to the global order:  the disruptive relationship between technology and international law.

Jus ad Bellum

Jus ad bellum is the body of international law that governs the initial use of force.  Under this heading, force is authorized in response to an “armed attack.”  However, little discussion has focused on how unmanned technologies will shift this line between war and peace.

Iran’s recent unprovoked attack on one of the United States’ unmanned surveillance aircraft provides an interesting case study.  Though many saw the move as the opening salvo of war, the United States declined to respond in kind.  The President explained that there would have been a “big, big difference” if there was “a man or woman in the [aircraft.]”  This comment seemed to address prudence, not authority.  Many assumed that the United States would have been well within its rights to levy a targeted response.  Yet this sentiment overlooked a key threshold:  could the United States actually claim self-defense under international law?  

Two cases from the International Court of Justice are instructive.  In Nicaragua v. United States, the Court confronted the U.S. government’s surreptitious support and funding of the Contras, a rebel group that sought to overthrow the Nicaraguan government.  Nicaragua viewed the United States’ conduct as an armed attack under international law.  The Court, however, disagreed.

Key to the Court’s holding was the concept of scale and effect.  Although the U.S. government had encouraged and directly supported paramilitary activities in and against Nicaragua, the Court concluded that the scale and effect of that conduct did not rise to the level of an armed attack.  Notably, this was the case regardless of any standing prohibition on the United States’ efforts.

So too in Islamic Republic of Iran v. United States, more commonly known as the “Oil Platforms” case.  The Court analyzed the U.S. government’s decision to bomb evacuated Iranian oil platforms in response to Iranian missile and mining operations throughout the Persian Gulf.  Among other things, the Iranian operations injured six crew members on a U.S.-flagged oil tanker and ten sailors on a U.S. naval vessel, and damaged both ships.  The Court nonetheless rejected the United States’ claim of self-defense because the Iranian operations did not meet the Nicaragua gravity threshold and thus did not qualify as “armed attacks.”  

Viewed against this backdrop, however contested, it strains reason to suggest that an isolated use of force against an unmanned asset would ever constitute an armed attack.  Never before have hostile forces been able to degrade combat capability with absolutely no risk of casualty.  Though the Geneva Conventions prohibit the “extensive destruction” of property, it is another matter entirely to conclude that any unlawful use of force is tantamount to an armed attack.  Indeed, the Nicaragua and Oil Platforms cases clearly reject this reasoning.  This highlights how the new balance of scale and effect will alter the landscape that separates peace and war.

Even assuming an attack on unmanned technology might constitute an armed attack under international law, other complications arise regarding the degree of force available in response.  The jus ad bellum principles of necessity and proportionality apply to actions taken in self-defense, and the legitimate use of “defensive” force must be tailored to achieve that legitimate end.  A failure to strike this balance runs contrary to long-held principles of international law. 

What, then, happens when a robotic platform is destroyed and the response delayed?  Does the aggrieved country have a general right to use limited, belated force in reply?  Maybe.  But a generalized response would likely constitute armed reprisal, which has fallen into disfavor under customary international law. 

To be lawful, the deferred use of defensive force must be tailored to prevent similar attacks in the future.  Anything short of this would convert a country’s inherent right to self-defense into subterfuge for illegal aggression.  Thankfully, this obligation is easily met where the initial aggressor is a developed country that maintains targeting or industrial facilities that can be tied to any previous, or potential future, means of attack.  But this problem takes on new difficulty in the context of asymmetric warfare.   

Non-state actors are more than capable of targeting robotic technology.  Yet these entities lack the traditional infrastructure that might typically (and lawfully) find itself in the crosshairs following an attack.  How, then, can a traditional power use force in response to a successful, non-state assault on unmanned equipment?  It is complicated.  A responsive strike that broadly targets members of the hostile force may present proportionality concerns distinct from those associated with traditional attacks that risk the loss of life. 

How would a country justify a responsive strike that targets five members of a hostile force in response to a downed drone?  Does the answer change if fewer people are targeted?  And what if there is no question that those targeted were not involved in the initial act of aggression?  These questions aside, a responsive strike that exclusively targets humans in an attempt to stymie future attacks on unmanned equipment does not bear the same legal foundation as one that seeks to prevent future attacks that risk life.  The international community has yet to identify the exchange rate between robotic equipment and human lives, and therein lies the problem.

Jus in Bello

Robotic warfare will also disrupt jus in bello, the law that governs conduct during armed conflict.  Under the law of armed conflict, the right to use deadly force against a belligerent continues until they have been rendered ineffective, whether through injury, surrender, or detention.  But the right to use force in the first instance is not diminished by the well-recognized obligation to care for those same combatants if wounded or captured.  An armed force is not required to indiscriminately assume risk in order to capture, as opposed to kill, an adversary.  To impose such a requirement would shift risk from one group to another and impose gratuitous tactical impediments.

This sentiment fades, however, once you place “killer robots” on the battlefield.  While there is little sense in telling a young soldier or marine that he cannot pull the trigger and must put himself at greater risk if an opportunity for capture presents itself, the same does not hold true when a robot is pulling the trigger.  The tactical feasibility of capture over kill becomes real once you swap “boots” for “bots” on the ground.  No longer is there the potential for fatality, and the risk calculus becomes largely financial.  This is not to say that robots would obligate a country to blindly pursue capture at the expense of strategy.  But a modernized military might effect uncontemplated restrictions on the traditional use of force under international law.  The justification for kill over capture is largely nonexistent in situations where capture is tactically feasible without any coordinate risk of casualty.

Design is another important part of this discussion.  Imagine a platoon of “killer robots” engages a small group of combatants, some of whom are incapacitated but not killed.  A robot that is exclusively designed to target and kill would be unable to comply with the internationally recognized duty to care for wounded combatants.  Unless medical care is a contemplated function of these robots’ design, the concept of a human-free battlefield will remain unrealized.  Indeed, the inherent tension between new tech and old law might indicate that at least some human footprint will always be required in theater—if only after the dust of combat settles.

Reports from China suggest that robots could replace humans on the battlefield within the next five years, and the U.S. Army is slated to begin testing a platoon of robotic combat vehicles this year.  Russia, too, is working to develop combat robots to supplement its infantry.  This, of course, raises an important question: what happens if the most powerful, technologically adept countries write off traditional obligations at the design table?  Might often makes right on the international stage, and given the lack of precedent in this area, the risk demands attention.

Law of the Sea

The peacetime naval domain provides another interesting forum for the disruptive effect of military robotics.  Customary international law, for example, has long recognized an obligation to render assistance to vessels in distress—at least to the extent feasible without danger to the assisting ship and crew.  This is echoed in a variety of international treaties ranging from the Geneva Convention on the High Seas to the United Nations Convention on the Law of the Sea.  But what becomes of this obligation when ships of the future have no crew?

Navies across the world are actively developing ghost fleets.  The U.S. Navy has called upon industry to deliver ten Large Unmanned Surface Vehicle ships by 2024, and just recently, the “Sea Hunter” became the first ship to sail autonomously between two major ports.  This comes as no surprise given the Navy’s 2020 request for $628.8 million to conduct research and development involving unmanned surface and sub-surface assets.  The Chinese, too, have been exploring the future of autonomous sea power.  

This move highlights the real possibility that technology may relieve the most industrially developed Navies of traditional international obligations.  Whether fortuitously or not, the size of a ghost fleet would inversely reflect a nation’s ability—and perhaps its obligation—to assist vessels in distress. 

This would shift the humanitarian onus onto less-developed countries or commercial mariners, ceding at least one traditional pillar of international law’s peacetime function.  This also opens the door to troubling precedent if global superpowers begin to consciously design themselves out of long-held international obligations.

The move to robotic sea vessels also risks an increase in challenges to the previously inviolable (and more-easily defendable) sovereignty of sea-going platforms.  In 2016, for example, a Chinese warship unlawfully detained one of the United States’ underwater drones, which, at the time, was being recovered in the Philippine exclusive economic zone.  The move was widely seen as violating international maritime law.  But the Chinese faced no resistance in their initial detention of the vessel and the United States’ response consisted of nothing more than demands for return.  Unlike their staffed counterparts, unmanned vessels are more prone to illegal seizure or boarding—in part because of the relatively low risk associated with the venture. 

This dynamic may increase a nation’s willingness to unlawfully exert control over another’s sovereign vessel while simultaneously decreasing the aggrieved nation’s inclination (or ability) to use force in response.  This same phenomenon bears out in the context of Unmanned Aerial Vehicles, for which the frequency and consequence of hostile engagement are counter-intuitively related.  But unmanned sea vessels are far more prone to low-cost incursion than their winged counterparts.  This highlights but one aspect of the normative consequence effected by unmanned naval technology, which, if unaddressed, stands to alter the cost-benefit analysis that often underlies the equilibrium of peace.”



Joshua Fiveson

Joshua Fiveson is an officer in the U.S. Navy and a graduate of Harvard Law School.  Fiveson previously served as the youngest-ever military fellow with the Institute of World Politics, a national security fellow with the University of Virginia’s National Security Law Institute, a national security fellow with the Foundation for Defense of Democracies, and a leadership fellow with the Harvard Kennedy School’s Center for Public Leadership.  Fiveson also served as a John Marshall fellow with the Claremont Institute and a James Wilson fellow with the James Wilson Institute. 

How Marines And Robots Will Fight Side By Side

Illustrations by Jacqueline Belker/Staff


This imagined scenario involves a host of platforms, teamed with in-the-flesh Marines, moving rapidly across wide swaths of the Pacific.

Those small teams of maybe a platoon or even a squad could work alongside robots in the air, on land, sea and undersea, to gain a short-term foothold that then could control a vital sea lane Chinese ships would have to bypass or risk sinking simply to transit.


“Somewhere off the coast of a tiny island in the South China Sea, small robotic submarines snoop around, looking for underwater obstacles as remotely controlled ships prowl the surf. Overhead, multiple long-range drones scan the beachhead and Chinese military fortifications deeper into the hills.

A small team of Marines, specially trained and equipped, lingers farther out after having launched from their amphibious warship, as did their robot battle buddies, to scout this spit of sand.

Their Marine grandfathers and great-grandfathers might have rolled toward this island slowly, dodging sea mines and artillery fire only to belly crawl in the surf as they were raked with machine gun fire, dying by the thousands.

But in the near-term battle, suicidal charges to gain ground on a fast-moving battlefield are a robot’s job.

It’s a bold, technology-heavy concept that’s part of Marine Corps Commandant Gen. David Berger’s plan to keep the Corps relevant and lethal against a perceived growing threat in the rise of China in the Pacific and its increasingly sophisticated and capable Navy.

In his planning guidance, Berger called for the Marines and Navy to “create many new risk-worthy unmanned and minimally manned platforms.” Those systems will be used in place of and alongside the “stand-in forces,” which are in range of enemy weapons systems to create “tactical dilemmas” for adversaries.

“Autonomous systems and artificial intelligence are rapidly changing the character of war,” Berger said. “Our potential peer adversaries are investing heavily to gain dominance in these fields.”

And a lot of what the top Marine wants makes sense for the type of war fighting, and budget constraints, that the Marine Corps will face.

“A purely unmanned system can be very small, can focus on power, range and duration and there are a lot of packages you can put on it — sensors, video camera, weapons systems,” said Dakota Wood, a retired Marine lieutenant colonel and now senior research fellow at The Heritage Foundation in Washington, D.C.

The theater of focus, the Indo-Pacific Command, almost requires adding a lot of affordable systems in place of more Marine bodies.

That’s because the Marines are stretched across the world’s largest ocean and now face anti-access, area-denial systems run by the Chinese military that the force hasn’t had to consider since the Cold War.

“In INDOPACOM, in the littorals, the Marine Corps is looking to kind of outsource tasks that machines are looking to do,” Wood said. “You’re preserving people for tasks you really want a person to handle.”

The Corps’ shift back to the sea and closer work with the Navy has been brewing in the background in recent years as the United States slowly has attempted to disentangle itself from land-based conflicts in the Middle East. Signaling those changes, recent leaders have published warfighting concepts such as expeditionary advanced base operations, or EABO, and littoral operations in a contested environment.

EABO aims to work with the Navy’s distributed maritime operations concept. Both allow the U.S. military to pierce the anti-access, area-denial bubble. The littoral operations in a contested environment concept makes way for the close-up fight in the critical space where the sea meets the land.

That’s meant a move to position the Okinawa, Japan-based III Marine Expeditionary Force as the leading edge for prioritizing Marine forces and experimentation, as the commandant calls for the “brightest” Marines to head there.


Getting what they want

But the Corps, which traditionally has taken a backseat in major acquisitions, faces hurdles in adding new systems to its portfolio.

It was only in 2019 that the Marines gained funding to add MQ-9 Reaper drones; the Corps got the money to purchase its three Reapers in this year’s budget. But that’s a platform that’s been in wide use by the Air Force for more than a decade.

But that’s a short-term fix; the Corps’ goal remains the Marine Air-Ground Task Force unmanned aircraft system, expeditionary, or MUX.

The MUX, still under development, would give the Corps a long-range drone with vertical takeoff capability to launch from amphib ships, one that could also conduct persistent intelligence, surveillance and reconnaissance, carry out electronic warfare, and coordinate and initiate strikes from other weapons platforms in its network.

Though early ideas in 2016 called for something like the MUX to be in the arsenal, at this point officials are pegging an operational version of the aircraft for 2026.

Lt. Gen. Steven Rudder, deputy commandant for aviation, said at the annual Sea-Air-Space Symposium in 2019 that the MUX remains among the top priorities for the MAGTF.

Sustain and distract

In other areas, Marines are focusing on existing platforms but making them run without human operators.

One such project is the expeditionary warfare unmanned surface vessel. Marines are using the 11-meter rigid-hull inflatable boats already in service to move people or cargo, drop them off and return for other missions.

Logistics are a key area where autonomous systems can play a role. Carrying necessary munitions, medical supplies, fuel, batteries and other items on relatively cheap platforms keeps Marines out of the in-person transport game and instead part of the fight.

In early 2018 the Corps conducted the “Hive Final Mile” autonomous drone resupply demonstration in Quantico, Virginia. The short-range experiment used small quadcopters to bring items like a rifle magazine, MRE or canteen to designated areas to resupply a squad on foot patrol.

The system used a group of drones in a portable “hive” that could be programmed to deliver items to a predetermined site at a specific time and continuously send and return small drones with various items.

Extend that to longer ranges on larger platforms, and it becomes a lower-risk way to get a helicopter’s worth of supplies to far-flung Marines on small atolls that dot vast ocean expanses.

Shortly after that demonstration, the Marines put out requests for concepts for a similar drone resupply system that would carry up to 500 pounds at least 10 km. That still is not enough distance for larger-scale warfighting, but it is the beginning of the type of resupply a squad or platoon might need in a contested area.

In 2016, the Office of Naval Research used four rigid-hull inflatable boats with unmanned controls to “swarm” a target vessel, showing that they can also be used to attack or distract vessels.

And the distracting part can be one of the best ways to use unmanned assets, Wood said.

Wood noted that while autonomous systems can assist in classic “shoot, move, communicate” tactics, they can sometimes be even more effective in sustaining forces and distracting adversaries.

“You can put machines out there that can cause the enemy to look in that direction, decoys tying up attention, munitions or other platforms,” Wood said.

And that distraction goes further than actual boats in the water or drones in the air.

As with the MUX, the Corps is looking at ways to include electronic warfare capabilities in its plans. That would allow robotic systems to spoof enemy sensors, making a small pod of four rigid-hull inflatable boats appear to be a larger flotilla of amphib ships.



Marines fighting alongside semi-autonomous systems isn’t entirely new.

In communities such as aviation, explosive ordnance disposal and air defense, forms of automation, from automatic flight paths to bomb-site approaches and the recognition of incoming threats, have been at least partly outsourced to software and systems.

But for more complex tasks, not so much.

How robots have worked and will continue to work in formations is an evolutionary process, according to former Army Ranger Paul Scharre, director of the technology and national security program at the Center for a New American Security and author of “Army of None: Autonomous Weapons and the Future of War.”

Looking across the history of military technology, the most important use of such tech has been solving a particular mission rather than fielding the most advanced technology to solve all problems, Scharre said.

And autonomy runs on a kind of sliding scale, he said.

As systems get more complex, autonomy will give fewer tasks to the human and more to the robot, helping people better focus on decision-making about how to conduct the fight. And it will allow for one human to run multiple systems.

When you put robotic systems into a squad, you’re giving up a person to run them, and leaders have to decide if that’s worth the trade-off, Scharre said.

The more remote the system, the more vulnerable it might be to interference or hacking, he said. Any plan for adding autonomous systems must build in reliable, durable communication networks.

Otherwise, when those networks are attacked the systems go down.

That means that a Marine’s training won’t get less complicated, only more multifaceted.

Just as Marines continue to train with a map and compass for land navigation even though they have GPS at their fingertips, Marines operating with autonomous systems will need continued training in fundamental tactics and ways to fight if those systems fail.

“Our preferred method of fighting today in an infantry is to shoot someone at a distance before they get close enough to kill with a bayonet,” Scharre said. “But it’s still a backup that’s there. There are still bayonet lugs on rifles, we issue bayonets, we teach people how to wield them.”

Where do they live?

A larger question is where do these systems live? At what level do commanders insert robot wingmen or battle buddies?

Purely for reasons of control and effectiveness, Dakota Wood said they’ll need to be close to the action and Marine Corps personnel.

But does that mean every squad is assigned a robot, or is there a larger formation that doles out the automated systems as needed to the units?

For example, an infantry battalion has some vehicles but for larger movements, leaders look to a truck company, Wood said. The maintenance, care, feeding, control and programming of all these systems will require levels of specialization, expertise and resources.

The Corps is experimenting with a new squad formation, putting 15 instead of 13 Marines in the building block of the infantry. Those additions were an assistant team leader and a squad systems operator. Those are exactly the types of human positions needed to implement small drones, tactical level electronic warfare and other systems.

The MUX, still under development, would give the Corps a long-range drone with vertical takeoff capability to launch from amphib ships. (Bell Helicopter)

The Marine Corps leaned on radio battalions in the 1980s to exploit tactical signals intelligence. Much of that capability resided in the larger battalion that farmed out smaller teams to Marine Expeditionary Units or other formations within the larger division or Marine Expeditionary Force.

A company or battalion or other such formation could be where the control and distribution of autonomous systems remains.

But current force structure moves suggest the Corps could integrate those at multiple levels. Maj. Gen. Mark Wise, deputy commanding general of Marine Corps Combat Development Command, said recently that the Corps is considering a Marine littoral regiment as a formation that would help the Corps better conduct EABO.

Combat Development Command did not provide details on the potential new regimental formation but confirmed that a Marine littoral regiment concept is one that will be developed through current force design conversations.

A component of that could include a recently-proposed formation known as a battalion maritime team.

Maj. Jake Yeager, an intelligence officer in I MEF, charted out an offensive EABO method in a December 2019 article on the website War On The Rocks titled “Expeditionary Advanced Maritime Operations: How the Marine Corps can avoid becoming a second land Army in the Pacific.”

Part of that includes the maritime battalion, creating a kind of Marine air-sea task force. Each battalion team would include three assault boat companies, one raid boat company, one anti-ship missile boat battery and one reconnaissance boat company.

The total formation would use 40 boats, at least nine of which would be dedicated unmanned surface vehicles, while the rest would be developed with unmanned or manned options, much like the rigid-hulled inflatable boats with which the Corps is currently experimenting.”


US Army 2019 Top 10 Science And Technology Advances

All Photos in this Article are Screenshots from CCDC ARL video


These are potentially game-changing developments intended to help Army soldiers fight and win on future battlefields.

The list includes research and development efforts in material science, robotics, and artificial intelligence.


“US Army researchers and engineers have been busy this year, developing new capabilities and technologies meant to help modernize the force.

Alexander Kott, the chief scientist at Combat Capabilities Development Command’s Army Research Laboratory, recently picked the Army’s top 10 science and technology advancements of 2019.

Kott told Insider the research projects he selected were “the ones that had the potential for a long-term game change — something that could actually lead to a major change in future capabilities and, at the same time, that was well grounded in foundational science and technology.”

Here is what made the list.

10. Artificial muscles for tougher robots.

The Army wants to give robots artificial muscles to make them stronger than ever. 

The Army is looking at building stronger robots through the development of artificial muscles made from twisted, coiled plastic fibers with the ability to contract and expand under the influences of various stimuli, effectively mimicking the way muscles naturally function.

9. Biorecognition receptors for real-time monitoring of soldier health and performance.

The Army wants to be able to monitor soldier health and performance in real time. 

Army researchers are working to develop small, inexpensive, rugged peptide-based biorecognition receptors that are more capable than standard antibody receptors and can be integrated into wearable biosensors to provide immediate real-time information on a soldier’s health and performance.

8. Water-based, fire-proof batteries.

The battery is much safer than traditional lithium-ion batteries. 

The Army has developed new aqueous lithium-ion batteries that use a nonflammable, water-based solvent and lithium salt that is not sensitive to heat.

The service has replaced the highly flammable electrolyte in current lithium-ion batteries and created a power source that can be safely stored at varied temperatures.

7. Immediate power from water-based liquids.

The Army is developing ways to get on-demand power from hydrogen extracted from water-based liquids. 

Army researchers are looking at ways they might extract hydrogen for power generation from water-based liquids, including urine, using a stable, aluminum-based nongalvanic alloy tablet that reacts with the water.

This approach could work for lights or radios in situations in which there may not be other suitable power options available.

6. Incredibly strong 3D printed steel.

A piece of incredibly strong, 3D-printed steel. 

The Army has figured out how to 3D print steel that is 50% stronger than anything available commercially.

Army experts expect this capability to improve logistics by giving soldiers the ability to produce tough spare parts for tanks and other systems in the field.

5. Interest detection to determine what grabs a soldier’s attention in battle.

The US Army wants to be able to determine what stimuli soldiers are most likely to react to in battle. 

Army researchers have been monitoring soldier brain waves to track neural activity and responses to environmental stimuli to determine what grabs a soldier’s attention on the battlefield.

The Army expects this research to lead to improvements in situational awareness, command decision-making, and future manned-unmanned teaming.

4. Artificial intelligence that can find fuel-efficient materials for improved fuel cells.

The Army, through Army-funded researchers, is looking at ways to use AI to find fuel-efficient materials to develop improved fuel cells. 

Army-funded researchers have developed a system of algorithmic bots called Crystal that can sort through a myriad of possible element combinations to advance material-science research, including the search for fuel-efficient materials for improved fuel cells.

3. Robotic arrays for communication.

The Army is working on new ways for soldiers to communicate with other warfighters in complex battlespaces. 

The Army has managed to create small robots equipped with compact, low-frequency antennas and artificial-intelligence systems that allow the wheeled vehicles to organize themselves into an array, creating a new way for soldiers to effectively communicate in challenging battlefield environments.

2. Self-repairing materials.

The 3D-printed material is self-healing at room temperature and does not require any external stimuli. 

Army researchers have developed a synthetic material, specifically a 3D printed reversible cross-linking epoxy, that can repair itself when damaged. The repair process can occur at room temperature without additional stimuli or the application of a healing agent.

1. Robots that can operate on any future battlefield, no matter what that combat space looks like.

A tracked robot dragging what appears to be a barrier.

The Army is essentially creating a robot brain that can think its way through unfamiliar situations by developing algorithms and capabilities that will allow unmanned systems to operate in any environment, no matter what the future battlefield looks like.”


Watch A Human Try To Fight Off Door-Opening Robot Dog


Robot Dog



“The most subtle detail here is also the most impressive: The robot is doing almost all of this autonomously, at least according to the video’s description.

The robot is able to correct for extreme forces, all the while handling a relatively precise task.”

“HEY, REMEMBER THAT dog-like robot, SpotMini, that Boston Dynamics showed off last week, the one that opened a door for its robot friend? Well, the company just dropped a new video starring the canine contraption. In this week’s episode, a human with a hockey stick does everything in his power to stop the robot from opening the door, including tugging on the machine, which struggles in an … unsettling manner. But the ambush doesn’t work. The dogbot wins and gets through the door anyway.

Boston Dynamics is a notoriously tight-lipped company, so just the few sentences it provided with this clip is a relative gold mine. That information describes how a human handler drove the bot up to the door, then commanded it to proceed. The rest you can see for yourself. As SpotMini grips the handle and the human tries to shut the door, it braces itself and tugs harder—all on its own. As the human grabs a tether on its back and pulls it back violently, the robot stammers and wobbles and breaks free—still, of its own algorithmic volition.

Boston Dynamics is, as it says in the title of the video, “testing robustness.” That is, a robot’s ability to deal with our crap. It’s hard as hell to get a robot to not fall on its face, much less fight off a human and go about its business as if nothing happened.

Now, we can’t be sure just how autonomous SpotMini is. A human could still be controlling it with a joystick from afar. But could a robot really do this all on its own? “I think it probably is, because actually teleoperating a robot to behave that way is pretty challenging,” says Noah Ready-Campbell, founder and CEO of Built Robotics. “It’s extremely impressive, no doubt.”

If you’re looking for reassurance, though, consider that SpotMini’s autonomous capabilities are probably pretty limited. Humans are still good at human things like planning (driving the robot to the door), while machines are getting ever better at repetitive tasks (like opening doors). There are actually already plenty of robots working in concert with humans in the wild: Security robots, for instance, work as eyes and ears for human guards, and robots deliver food in hotels and hospitals. But in both those cases, a human is in the loop. Robots just aren’t ready to wander on their own, leading to the proliferation of call centers where robots in distress can get help from human teleoperators.

So beyond opening doors and stabilizing itself, just how capable is SpotMini on its own? Could the robot do something like wander out of the building and find a particular room in another building? “I doubt that,” Ready-Campbell says. “The sheer variety of obstacles it would encounter like going up stairs, different shaped doors, all those kinds of things, it would probably break down.”

When it comes to needling by hockey stick, though, Boston Dynamics seems to have things covered. In 2016 the company released a video in which a human used one to push around Atlas, the company’s famous humanoid robot. This did not suit everyone. Some called the human offender a jerk, and praised the poor robot as hard-working. So this time, Boston Dynamics made its intentions clear in its video’s description: “(Note: This testing does not irritate or harm the robot.)”

How SpotMini might be used is unclear, but it’s worth noting that Boston Dynamics developed the robot’s older brother, BigDog, as a pack mule for the military. (Though the Marines rejected it because it was too noisy.) It’s also worth noting that it’s rare for the company to drop videos like this so close together, much less give so much information in the video’s description. Might it finally be getting ready to release a machine into the market?

Time will tell. But before you freak out about robots breaking into your house, please keep in mind that robots are here to help humanity, no matter how much we attack them with hockey sticks. Maybe open doors for them, just to be safe.”




Polaris Trucks Carry Commandos And Casualties – And Can Be Robots




“Polaris is a small, tough company that makes small, tough trucks, favored by the Marines, Special Forces, and allied nations. They’re basically military-grade dune buggies, easy to transport by plane or helicopter and easy to customize to the mission. In this video, Polaris shows us one of their larger DAGOR vehicles configured to carry a full eight-man squad and the smaller MRZR set up as a mini-ambulance — as well as where to attach the gadgets to make it self-driving for the Army’s S-MET robotics competition.”




Why We Must Not Build Automated Weapons of War



A drone operator from the Mosul Brigade of the Iraqi Special Operations Force 2 releases a drone during a military operation to retake parts of Mosul from the Islamic State on Dec. 5, 2016. Achilleas Zavallis—AFP/Getty Images


“Over 100 CEOs of artificial intelligence and robotics firms recently signed an open letter warning that their work could be repurposed to build lethal autonomous weapons — “killer robots.”

They argued that to build such weapons would be to open a “Pandora’s Box.” This could forever alter war.”

“Over 30 countries have or are developing armed drones, and with each successive generation, drones have more autonomy. Automation has long been used in weapons to help identify targets and maneuver missiles. But to date, humans have remained in control of deciding whether to use lethal force. Militaries have only used automated engagements in limited settings to defend against high-speed rockets and missiles. Advances in autonomous technology could change that. The same intelligence that allows self-driving cars to avoid pedestrians could allow future weapons that hunt and attack targets on their own.

For the past three years, countries have met through the United Nations to discuss lethal autonomous weapons. Over 60 non-governmental organizations have called for a treaty banning autonomous weapons. Yet most countries are hedging their bets. No major military powers have said they plan to build autonomous weapons, but few have taken them off the table.

There’s a certain irony in the CEOs of robotics and AI companies warning of the dangers of the very same technologies they themselves are building. They implore countries to “double their efforts” in international negotiations and warn that “we do not have long to act.” But if the situation is truly dire, couldn’t these companies slow their research to buy diplomats more time?

In reality, even if all of these companies stopped research, the field of AI would continue marching forward. The intelligence behind autonomous robots isn’t like stealth technology, which was created in secret defense labs and tightly controlled by the military. Autonomous technology is everywhere. Hobbyist drones that retail for a few hundred dollars can take off, land, follow moving objects and avoid obstacles all on their own. Elementary school students build robots in competitions. Even the Islamic State is getting in on the game, strapping bombs to small drones. There is no stopping AI. Robotics companies can’t easily band together to stop progress, because it only takes one company to break the agreement and advance the technology. Besides, to ask companies to stop research would be to ask them to forgo innovations that could generate profits and save lives.

These same dynamics make constraining autonomous weapons internationally very difficult. Asking countries to sign a treaty banning a weapon that doesn’t yet exist means asking them to forgo a potentially useful tool to defend against threats and save lives. Moreover, the same problem of cheaters applies in the international arena, but the stakes are higher. Instead of lost profits, a nation might lose a war. History suggests that even when the international community widely condemns a weapon as inhumane — like chemical weapons — some despots will use them anyway. Treaties alone won’t prevent rogue regimes and terrorists from building autonomous weapons. If autonomous weapons led to a decisive advantage in war, a treaty that disarmed only those who care for the rule of law would be the worst of all possible worlds.

The letter’s signers likely understand this, which may be why the letter doesn’t call for a ban, a notable departure from a similar letter two years ago. Instead, the signatories ask countries at the United Nations to “find a way to protect us from all these dangers.” Banning or regulating emerging weapons technologies is easier said than done, though. Nations have tried to ban crossbows, firearms, surprise attacks by submarines, aerial attacks on cities and, in World War I, poison gas. All have failed.

And yet: Nations held back from using poison gas on the battlefields of World War II. The Cold War saw treaties banning chemical and biological weapons, using the environment as a weapon and placing nuclear weapons in space or on the seabed. The United States and Soviet Union pulled back from neutron bombs and anti-satellite weapons even without formal treaties. Nuclear weapons have proliferated, but not as widely as many predicted. In more recent years, nations have passed bans on blinding lasers, land mines and cluster munitions.

Weapons are easier to ban when few countries have access to them, when they are widely seen as horrifying and when they provide little military benefits. It is extremely difficult to ban weapons that are seen as giving a decisive advantage, as nuclear weapons are. A major factor in what will happen with autonomous weapons, therefore, is how nations come to see the benefits and risks they pose.

Autonomous weapons pose a classic security dilemma for countries. All countries may be better off without them, but mutual restraint requires cooperation. Last year, nations agreed to create a more formal Group of Governmental Experts to study the issue. The group will convene in November and, once again, nations will attempt to halt a potentially dangerous technology before it is used in war.”


Pentagon Studies Weapons That Can Read Users’ Minds


still from DARPA video

DARPA’s Revolutionizing Prosthetics program is devising new kinds of artificial limbs — and new ways to control them.


“The troops of tomorrow may be able to pull the trigger using only their minds.

As artificially intelligent drones, hacking, jamming, and missiles accelerate the pace of combat, some of the military’s leading scientists are studying how mere humans can keep up with the incredible speed of cyber warfare, missiles and other threats.

One option: Bypass crude physical controls — triggers, throttles, keyboards — and plug the computer directly into the human brain. In one DARPA experiment, a quadriplegic first controlled an artificial limb and then flew a flight simulator. Future systems might monitor the users’ nervous system and compensate for stress, fatigue, or injury. Is this the path to what the Pentagon calls human-machine teaming?

This is an unnerving scenario for those humans, like Stephen Hawking, who mistrust artificial intelligence. If your nightmare scenario is robots getting out of control, “let’s teach them to read our minds!” is probably not your preferred solution. It sounds more like the beginning of a movie where cyborg Arnold Schwarzenegger goes back in time to kill someone.

But the Pentagon officials who talked up this research yesterday at Defense One’s annual tech conference emphasized the objective was to improve human control over artificial intelligence. Teaching AI to monitor its user’s level of stress, exhaustion, distraction, and so on helps the machine adapt itself to better serve the human — instead of the other way around. Teaching AI to instantly detect its user’s intention to give a command, instead of requiring a relatively laborious push of a button, helps the human keep control — instead of having to let the AI off the leash because no human can keep up with it.

Official Defense Department policy, as then-Secretary Ash Carter put it, is that the US will “never” allow an artificial intelligence to decide for itself whether or not to kill a human being. However, no less a figure than Carter’s undersecretary of acquisition and technology, Frank Kendall, fretted publicly that making our robots wait for human permission would slow them down so much that enemy AI without such constraints would beat us. Vice-Chairman of the Joint Chiefs, Gen. Paul Selva, calls this the “Terminator Conundrum.” Neuroscience suggests a way out of this dilemma: Instead of slowing the AIs down, make the humans’ orders come faster.

Accelerate Humanity

“We will continue to have humans on the loop, we will have human input in decisions, but the way we go about that is going to have to shift, just to cope with the speed and the capabilities that autonomous systems bring,” said Dr. James Christensen, portfolio manager at the Air Force Research Laboratory‘s 711th Human Performance Wing. “The decision cycle with these systems is going to be so fast that they have to be sensitive to and responsive to the state of the individual (operator’s) intent, as much as overt actions and control inputs that human’s providing.”

In other words, instead of the weapon system responding to the human operator physically touching a control, have it respond to the human’s brain cells forming the intention to use a control. “When you start to have a direct neural interface of this type, you don’t necessarily need to command and control the aircraft using the stick,” said Justin Sanchez, director of DARPA‘s Biological Technologies Office. “You could potentially re-map your neural signatures onto the different control surfaces” — the tail, the flaps — “or maybe any other part of the aircraft” — say landing gear or weapons. “That part hasn’t really been explored in a huge amount of depth yet.”
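Sanchez's idea of "re-mapping neural signatures onto different control surfaces" can be illustrated with a toy decoder. The sketch below is entirely hypothetical — DARPA has published nothing resembling these numbers — but it shows the structural point: if decoded neural features drive outputs through a mapping matrix, then swapping the matrix retargets the same brain signals from stick-and-rudder commands to, say, flaps and landing gear.

```python
# Hypothetical sketch: decode a neural feature vector into control commands
# with a linear map, then "re-map" by swapping the output matrix.
# All weights and values are invented for illustration.

def decode(weights, neural_features):
    """Linear decoder: each output command is a weighted sum of features."""
    return [sum(w * f for w, f in zip(row, neural_features)) for row in weights]

# Map two decoded neural features onto (elevator, aileron) deflections...
stick_map = [[1.0, 0.0],
             [0.0, 1.0]]
# ...or re-map the very same features onto (flaps, landing_gear) instead.
flap_map = [[0.5, 0.5],
            [0.0, 2.0]]

features = [0.2, -0.4]
elevator, aileron = decode(stick_map, features)
flaps, gear = decode(flap_map, features)
```

Real brain-computer interfaces use far richer decoders (Kalman filters, neural networks) trained per user, but the remapping step really is this cheap: only the output side changes, not the neural recording.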

Reading minds, even in this limited fashion, will require deep understanding and close monitoring of the brain, where thoughts take measurable form as electrical impulses running from neuron to neuron. “Can we develop precise neurotechnologies that can go to those circuits in the brain or the peripheral nervous system in real time?” Sanchez asked aloud. “Do we have computational systems that allow us to understand what the changes in those signals (mean)? And can we give meaningful feedback, either to the person or to the machine to help them to do their job better?”

DARPA’s Revolutionizing Prosthetics program hooked up the brain of a quadriplegic — someone who could move neither arms nor legs — to a prosthetic arm, allowing the patient to control it directly with their thoughts. Then, “they said, ‘I’d like to try to fly an airplane,’” Sanchez recounted. “So we created a virtual flight simulator for this person, allowed this neural interface to interface with the virtual aircraft, and that person flew.”

“That was a real wake-up call for everybody involved,” Sanchez said. “We didn’t initially think you could do that.”

Adapting To The Human

Applying direct neural control to real aircraft — or tanks, or ships, or network cybersecurity systems — will require a fundamental change in design philosophy. Today, said Christensen, we give the pilots tremendous information on the aircraft, then expect them to adapt to it. In the future, we could give the aircraft tremendous information on its pilots, then have it use artificial intelligence to adapt itself to them. The AI could customize the displays, the controls, even the mix of tasks it took on versus those it left up to the humans — all exquisitely tailored not just to the preferences of the individual operator but to his or her current mental and physical state.

When we build planes today, “they’re incredible sensor platforms that collect data on the world around them and on the aircraft systems themselves, (but) at this point, very little data is being collected on the pilot,” Christensen said. “The opportunity there with the technology we’re trying to build now is to provide a continuous monitoring and assessment capability so that the aircraft knows the state of that individual. Are they awake, alert, conscious, fully capable of performing their role as part of this man-machine system? Are there things that the aircraft then can do? Can it unload gees (i.e. reduce g-forces)? Can it reduce the strain on the pilot?”

“This kind of ability to sense and understand to the state and the capabilities of the human is absolutely critical to the successful employment of highly automated systems,” Christensen said. “The way all of our systems are architected right now, they’re fixed, they’re predictable, they’re deterministic” — that is, any given input always produces the exact same output.

Predictability has its advantages: “We can train to that, they behave in very consistent ways, it’s easier to test and evaluate,” Christensen said. “What we lose in that, though is the real power of highly automated systems, autonomous systems, as learning systems, of being able to adapt themselves. ”

“That adaptation, though, it creates unpredictability,” he continued. “So the human has to adapt alongside the system, and in order to do that, there has to be some mutual awareness, right, so the human has to understand what is the system doing, what is it trying to do, why is that happening; and vice versa, the system has to has some understanding of the human’s intent and also their state and capabilities.”
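Christensen's earlier example — the aircraft sensing its pilot's state and "unloading gees" — amounts to a simple adaptation rule. A minimal sketch, with thresholds and scaling factors invented purely for illustration (nothing here comes from AFRL):

```python
def adapted_g_limit(base_limit_g, fatigue, alertness):
    """Reduce the allowable g-load as the pilot's measured state degrades.
    fatigue and alertness are normalized 0..1; all thresholds are invented."""
    limit = base_limit_g
    if fatigue > 0.7:        # heavily fatigued: unload gees aggressively
        limit *= 0.5
    elif alertness < 0.5:    # distracted or drowsy: unload moderately
        limit *= 0.75
    return limit

# A fully alert pilot keeps the full 9 g envelope; a fatigued one is
# restricted so the automation flies gentler maneuvers around them.
limit = adapted_g_limit(9.0, fatigue=0.8, alertness=0.9)  # → 4.5
```

The unpredictability Christensen describes follows directly: once the limit depends on a continuously measured human state, the same control input no longer produces the same aircraft behavior from one sortie to the next.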

This kind of synergy between human and artificial intelligence is what some theorists refer to as the “centaur,” after the mythical creature that combined human and horse — with the human part firmly in control, but benefiting from the beast’s strength and speed. The centaur concept, in turn, lies at the heart of the Pentagon’s ideas of human-machine teaming and what’s known as the Third Offset, which seeks to counter (offset) adversaries’ advancing technology with revolutionary uses of AI.

The neuroscience here is in its infancy. But it holds the promise of a happy medium between hamstringing our robots with too-close control or letting them run rampant.”


Underwater Robots: Will the Pentagon Miss the Boat?




“The Defense Department has made significant investments in underwater vehicle designs and prototypes.

But it has not funded larger numbers for testing and experimentation.

Top defense contractors have jumped into the race to develop autonomous mini-submarines for the U.S. military. As the Pentagon makes it increasingly clear that unmanned technology will be a linchpin of future warfare, contractors have taken the plunge, partnered with or acquired commercial firms in this sector in hopes of capturing future Defense Department contracts.

There is a flaw in the plan, however, warns retired Navy Rear Adm. Fred Byus. The Pentagon has taken initial steps to “steer investments” in autonomous technology but is not moving fast enough to increase production of robots so they can be made available to large numbers of users for testing and experimentation.

The technology to produce autonomous underwater vehicles is ready to transition from the lab to the fleet, says Byus, who is general manager of mission and defense technologies at Battelle. He contends that if the Pentagon continues to buy vehicles only in onesies and twosies, the technology is at risk of getting stuck in limbo, will remain unfamiliar to most potential users and will produce prototypes that are too expensive to be accessible across the military.

Undersea drones are one area of warfare where the United States has the opportunity to gain a big technological edge over potential adversaries, Byus says. Leaps in innovation have occurred both in the defense and commercial markets but the Pentagon may not be able to take advantage of the advanced technology because of its internal approaches to acquisitions, he adds. “You have to have processes that keep up with technology.” With robotics, it is important to “get the technology into the hands of the war fighters as widely as possible.”

Defense Secretary Ashton Carter has been a proponent of unmanned undersea systems. He said in February that the Pentagon would invest $600 million over the next five years in “variable size and variable payload unmanned undersea vehicles.” Carter described a vision of networked “distributed” drones that would give naval forces unprecedented capabilities to collect intelligence.

Despite this high-level endorsement, the Defense Department’s acquisition organizations are not moving quickly to push the technology forward and start building prototypes in sufficiently large numbers, Byus says. Talking about the promise of robotics alone is not enough unless there is “parallel development of tactics” for the use of the technology, and incentives for vendors to produce more systems at lower prices.

The Navy this month solicited a “request for information” from contractors, asking for proposals on how existing unmanned undersea vehicles could be adapted for military use. Under a project called “extra large unmanned undersea vehicle,” the Naval Sea Systems Command wants to conduct experiments to develop tactics and concept of operations.

Contractors like Battelle, General Dynamics and Boeing Phantom Works have made big bets on commercial robots they believe are suitable for military use and cheaper than anything the Pentagon could ever invent.

Byus worries that the Defense Department’s plan to tap commercial technology may fall short because it is mostly focused on niche experiments that will not create a demand for vehicles and therefore not motivate the industry to keep investing. “The autonomous systems industrial base is not in place to support large scale employment of the technology,” he says. “They need to be thinking about that.” Underwater submarines, for instance, have not been made in big enough numbers so units across the Navy can test them, he says, nor is there enough work to support the development of the autonomous underwater vehicle industrial base.

Companies in this sector continue to hedge their bets. Battelle moved to acquire SeeByte Inc., a software developer that specializes in autonomous undersea vehicles and sensors. One of the industry’s best known players, Bluefin Robotics was taken over by Battelle in 2005, and earlier this year was acquired by General Dynamics Mission Systems.

A major Navy ship builder, Huntington Ingalls Industries, has produced the Proteus underwater vehicle in a partnership with Battelle. It is a dual-mode system that can be driven by a pilot or operated autonomously. The vehicle was designed by the Columbia Group’s Engineering Solutions Division. Huntington Ingalls acquired ESD two years ago and renamed it the Undersea Solutions Group.

Commercial companies that have developed underwater robots are now feeling the pinch of the downturn in the oil and gas industries. This creates an opportunity for the Pentagon to play a more prominent role as a customer of this technology, Byus says. “Underwater technology development is under the same type of financial constraints on the commercial side that it has seen with the downturn in R&D on the government side.”

The military needs to step up the integration of unmanned systems into the force because it can’t afford the rising costs of people, he says, and needs to “relieve war fighters from dull, dirty and dangerous work that autonomous systems are capable of doing.” With underwater submarines, the military could deploy a network of robots to keep eyes on potential enemies, for example.

“It will take some progressive thinkers in the Defense Department to say, ‘For this industrial base to be in place when we need it, we need to kick start the commercial applications as well as the government applications.’” A cautionary tale is found in the ship-building industry, where there are so few suppliers that prices continue to soar, forcing the Navy to buy fewer platforms — a downward cycle known in the Pentagon as the procurement “death spiral.”

The Pentagon also would benefit from better outreach to commercial companies so it can learn what innovations are being acquired by other countries, some of which are potential future adversaries. “You need a well coordinated program of commercial and government investment,” Byus says. “With only commercial development, you’ll have technological parity. If it’s all government funded, there is a risk that you end up with an industrial base and systems that are very expensive, which increases the cost of systems and the challenge of getting them into the hands of users.”


Flying Robots from the World’s Biggest Drone Show


Hummingbird II from Reference Technologies


“In the past year, drones have crashed onto the White House lawn, hauled radioactive cesium to the roof of the Japanese prime minister’s Tokyo office, and swooped above battlefields in Iraq and Ukraine.

The future of drone design is an area with huge importance for companies and for the military. At the recent Unmanned Systems 2015 show in Atlanta, Georgia, that future was on display.

Hosted by the Association for Unmanned Vehicle Systems International, the show brought together drone makers, military types, and business leaders from around the globe. (Google founder Larry Page was spotted briefly on the showroom floor.) All were looking for the next thing in UAV design. Here’s a look at some of the most interesting, innovative and outlandish drones Defense One ran across.

1. The SnowGoose BRAVO from Canada-based Mist Mobility Integrated Systems Technology, or MMIST, is an update to the company’s SnowGoose CQ-10A cargo drone. The original played a role in U.S. Special Operations Command missions for years. The Bravo may look like a helicopter with that big rotor up top, but notice the large back propeller? The SnowGoose is an autogyro, an aircraft whose rotor serves as a stabilizer while motive power is delivered to the propeller. Unlike a helicopter, the main rotor of the BRAVO does not need mechanical power once the drone is flying. Before takeoff, shaft power is applied to the main rotor to get it spinning, then the system hops into the air like a helicopter. The pusher propeller gets it moving forward to maintain rotor flying speed. MMIST president and SnowGoose creator Sean McCann said the SnowGoose BRAVO can carry 600 pounds of cargo, reach 18,000 feet, and accelerate past 70 mph. And it uses less energy than would a conventional helicopter design, he said.

The SnowGoose BRAVO from MMIST

2. There are two types of Hummingbird drones. First, the famous nature-mimicking AeroVironment Nano Hummingbird, developed under a grant from the Defense Advanced Research Projects Agency. Then there’s this one, which looks more like an enormous, six-propelled pressure cooker. Reference Technologies is marketing this Hummingbird II for airborne delivery and situational awareness. It has a ceiling of 14,000 feet and a top speed of 60 mph.

Hummingbird II from Reference Technologies.

3. In March, a massive cyclone named Pam swept across the small island nation of Vanuatu, killing 11 people, displacing almost 200,000, and destroying more than 96 percent of the nation’s food crops. To assess the enormous damage area, relief workers used a five-pound foldable drone called the Indago VTOL Quad Rotor from Lockheed Martin. (VTOL stands for vertical takeoff and landing.)

Defense One was on hand to view the Indago demo. No, it’s not the most creative-looking design, but the Indago requires much less setup time than many other commercial quadcopters. It’s far quieter and more rugged but — starting at $48,900 — also more expensive than similar consumer drones. Indago has not undergone full US Military Standard Environment testing but the perceptor sensor package affixed to the bottom has been mil-speced.

The Indago VTOL Quad Rotor from Lockheed Martin

4. The Eturnas D, a semi-solar-powered drone from DII, has a seven-foot wingspan and weighs just 10 pounds. Solar-powered drones aren’t new, but integrating them into new frames is an ever-evolving art. The Eturnas can fly for six hours at 27 mph or 1.2 hours at 45 mph.

The Eturnas D from DII

5. The diesel-powered TANAN, marketed by Airbus Defense and Space, represented here at about one-quarter its actual size, stands 17 feet long and 6 feet high. It requires one operator, looks like something from a Saturday morning cartoon show, can carry a maximum payload of 176 pounds, and can reach 93 mph—but not all at the same time.

The TANAN Airbus Defense and Space

6. Part unmanned Osprey, part Transformer, the Lotus from Joby Aviation was developed in concert with NASA. Its wingtips transform into propellers to provide vertical lift like a helicopter. Once aloft, the tips fold back and the tilting tail rotor takes over to reduce drag and allow cruising. “A future 275 lb.-class hybrid-electric UAV development of this configuration has the potential to become the first VTOL aircraft to achieve 24 hour endurance,” the website says, hopefully. Check out the technical paper and animation of the vertical lift:

The Lotus from Joby Aviation (bottom picture courtesy of Joby Aviation)

7. The flying disc-shaped Radeus from Radeus Labs features two sets of counter-rotating blades. During vertical lift, the blades spin at 60 mph, pulling the saucer off the ground. But when the drone moves forward in the air, the top portion of the vehicle converts to an autogyro. Its maker, Ray Hayden, says that he’s achieved the vertical lift portion on a similar model but not this one. He came to Atlanta seeking partners and investors.

Radeus from Radeus Labs

8. The ECA/Infotron IT 180 features a strange looking counter-rotating propeller design, giving it the appearance of a spinning top or, perhaps, a mutant dragonfly. The electric version is good for an hour’s flight before recharging and reaches a ceiling of 14,700 feet.

ECA/Infotron IT 180 from Infotron

9. Looking like a cross between the IT-O interrogator droid from Star Wars and a disco ball from hell, the All-Terrain Land and Air Sphere, or ATLAS, from Unmanned Cowboys is a rotorcraft with a spherical “exoskeleton.” The frame protects it when it crashes and enables the operator to adjust it and put it back into flight without touching it. (See video below). It’s ideal for those situations when you may want to throw a drone into a room to collect intelligence but don’t want to go in and pick it up when you slam into the doorframe.”

The ATLAS from Unmanned Cowboys


America vs. China, Japan & Italy in the US Military’s Robot Super Competition


Image: Ben Watson – DARPA


“The Pentagon’s coming Robotics Grand Challenge just got more interesting as 14 additional teams from labs around the world will join the 11 teams already cleared to participate in the summer’s robot competition in Pomona, California.

The Defense Advanced Research Projects Agency, or DARPA, announced the new teams include two from Germany, six from Japan, one from Italy, two from South Korea, one from Hong Kong and one from China.

That diversity is good news as it allows a wider variety of robot frames and platforms to compete against one another. Some of the more visually striking new entries include the Aero entry from Japan, which bears no small resemblance to the menacing Gort from the 1951 science fiction classic “The Day the Earth Stood Still”; the sleek and tastefully appointed WALKMAN out of Italy (natch); and the simian CHIMP from Team Tartan Rescue based in Pittsburgh (seen in the video below).

But the newcomer that many will be watching is China’s Team Intelligent Pioneer from the Hefei Institutes of Physical Science. Long associated with an abundance of inexpensive manual labor, China is a fast-rising player in the international robotics industry, where insiders recently told Reuters that robotics firms were “rising up like mushrooms.”

Some seven teams will be using a modified version of the Boston Dynamics Atlas robot but “it’s each team’s unique software, user interface, and strategy that will distinguish them and push the technology forward,” program manager Gil Pratt said in a statement.

The course will consist of eight tasks, such as climbing rubble, entering controlled areas and ascending stairs, which the robot must complete in less than one hour, on battery power and in an environment with extremely limited electronic communication. The robots will have to figure out how to perform the tasks with almost no human guidance or steering.

The challenge was inspired by the difficulties that emergency crews faced in response to the Fukushima Daiichi nuclear power plant disaster in Japan in 2011. It presented, in many ways, a worst-case scenario where massive infrastructure damage and destroyed communications made working on the plant extremely difficult. Despite dangerously high levels of radiation, emergency teams of humans had to access parts of the plant and continue to operate equipment because no robotic system was up to the task.

The cost of the Robotics Challenge to the taxpayer is $95 million. DARPA will hand out a total of $3.5 million in prizes at the end of the competition.”