Tag Archives: artificial intelligence

Experienced Young Military Professionals Discuss The Future of Warfare


EDITOR’S NOTE: The following two articles, by a Middle East war veteran at West Point and a Navy military lawyer contemplating warfare technology and the law, should be carefully read by the American public. These young gentlemen are highly visible in their fields. They and their peers are the future leadership of our country.

______________________________________________________________________________

“MODERN WAR INSTITUTE AT WEST POINT” By Matt Cavanaugh

“Victory’s been defeated; it’s time we recognized that and moved on to what we actually can accomplish.

We’ve reached the end of victory’s road, and at this juncture it’s time to embrace other terms, a less-loaded lexicon, like “strategic advantage,” “relative gain,” and “sustainable marginalization.”

A few weeks back, Colombian President Juan Manuel Santos and Harvard Professor Steven Pinker triumphantly announced the peace deal between the government of Colombia and the Revolutionary Armed Forces of Colombia (FARC). While positive, this declaration rings hollow as the exception that proves the rule – a tentative treaty, reached only after roughly 7,000 guerrillas held a country of 50 million hostage for over 50 years at a cost of some 220,000 lives. Churchill would be aghast: Never in the history of human conflict were so many so threatened by so few.

One reason this occasion merited a more somber statement: military victory is dead. And it was killed by a bunch of cheap stuff.

The term “victory” is loaded, so let’s stipulate it means unambiguous, unchallenged, and unquestioned strategic success – something more than a “win,” because, while one might “eke out a win,” no one “ekes out a victory.” Wins are represented by a mere letter (“w”); victory is a tickertape with tanks.

Which is something I’ll never see in my military career; I should explain. When a government has a political goal that cannot be obtained other than by force, the military gets involved and selects some objective designed to obtain said goal. Those military objectives can be classified broadly, as Prussian military theorist Carl von Clausewitz did, into either a limited aim (i.e. “occupy some…frontier-districts” to use “for bargaining”), or a larger aim to completely disarm the enemy, “render[ing] him politically helpless or militarily impotent.” Lo, we’ve arrived at the problem: War has become so inexpensive that anyone can afford the traditional military means of strategic significance – so we can never fully disarm the enemy. And a perpetually armed enemy means no more parades (particularly in Nice).


It’s a buyer’s market in war, and the baseline capabilities (shoot, move, and communicate) are at snake-belly prices. Tactical weaponry like AK-47s is plentiful, rented, and shipped from battlefield to battlefield, and the most lethal weapon U.S. forces encountered at the height of the Iraq War, the improvised explosive device, could be had for as little as $265. Moving is cost-effective too in the “pickup truck era of warfare,” and reports on foreign fighters in Syria remind us that cheap, global travel makes it possible for nearly anyone on the planet to rapidly arrive in an active war zone with money to spare. Communication has gotten even easier for the emerging warrior: the terror group Lashkar-e-Taiba shut down the megacity Mumbai in 2008, using unprotected social media networks, for less than what many traveling youth soccer teams spend in a season, and today unhackable phones and apps are widely available. These low and no-cost commo systems are the glue that binds single wolves into coordinated wolf-packs with guns, exponentially greater than the sum of their parts. The good news: Ukraine can crowdfund aerial surveillance against Russian incursions. The less-good news: strikes, like 9/11, cost less than three seconds of a single Super Bowl ad. With prices so low, why would anyone ever give up their fire, maneuver, and control platforms?

All of which explains why military victory has gone away. Consider the Middle East, and the recent comment by a Hezbollah leader, “This can go on for a hundred years,” and his comrade’s complementary analysis, that “as long as we are there, nobody will win.” With such a modestly priced war stock on offer, it’s no wonder Anthony Cordesman of the Center for Strategic and International Studies agrees with the insurgents, recently concluding, of the four wars currently burning across the region, the U.S. has “no prospect” of strategic victory in any. Or that Modern War Institute scholar Andrew Bacevich assesses bluntly, “If winning implies achieving stated political objectives, U.S. forces don’t win.” This is what happens when David’s slingshot is always full.

The guerrillas know what many don’t: It’s the era, stupid. This is the nature of the age, as Joshua Cooper Ramo describes: “a nightmare reality in which we must fight adaptive microthreats and ideas, both of which appear to be impossible to destroy even with the most expensive weapons.” Largely correct, though one point merits minor amendment – it’s meaningless to destroy when it’s so cheap to get back in the game, a hallmark of a time in which Wolverine-like regeneration is regular.

This theme even extends to more civilized conflicts. Take the Gawker case: aggrieved hedge fund giant Peter Thiel funded former wrestler Hulk Hogan’s lawsuit against the journalistic insurrectionists at Gawker Media, which forced the website’s writers to lay down their keyboards. However, as author Malcolm Gladwell has pointed out, Gawker’s leader, Nick Denton, can literally walk across the street, with a few dollars, and start right over. Another journalist opined, “Mr. Thiel’s victory was a hollow one – you might even say he lost. While he may have killed Gawker, its sensibility and influence on the rest of the news business survive.” Perhaps Thiel should have waited 50 more years, as Colombia had to, to write his “victory” op-ed? He may come to regret the essay as his own “Mission Accomplished” moment.

True with websites, so it goes with warfare. We live in the cheap war era, where the attacker has the advantage and the violent veto is always possible. Political leaders can speak and say tough stuff, promise ruthless revenge – it doesn’t matter, ultimately, because if you can’t disarm the enemy, you can’t parade the tanks.”

https://rosecoveredglasses.wordpress.com/2019/05/15/military-victory-is-dead/

JIA SIPA

By JOSHUA FIVESON

A new chapter of the international order

The automation of war is as inevitable as conflict itself.  Less certain, however, is the international community’s collective ability to predict the many ways that these changes will affect the traditional global order. 

The pace of technology is often far greater than our collective ability to contemplate its second and third order effects, and this reality counsels cautious reflection as we enter a new chapter in the age-old story of war and peace.

_________________________________________________________________________

“Robots have long presented a threat to some aspect of the human experience.  What began with concern over the labor market slowly evolved into a full-blown existential debate over the future of mankind.  But lost somewhere in between the assembly line and apocalypse stands a more immediate threat to the global order:  the disruptive relationship between technology and international law.

Jus ad Bellum

Jus ad bellum is the body of international law that governs the initial use of force.  Under this heading, force is authorized in response to an “armed attack.”  However, little discussion has focused on how unmanned technologies will shift this line between war and peace.

Iran’s recent unprovoked attack on one of the United States’ unmanned surveillance aircraft provides an interesting case study.  Though many saw the move as the opening salvo of war, the United States declined to respond in kind.  The President explained that there would have been a “big, big difference” if there was “a man or woman in the [aircraft.]”  This comment seemed to address prudence, not authority.  Many assumed that the United States would have been well within its rights to levy a targeted response.  Yet this sentiment overlooked a key threshold:  could the United States actually claim self-defense under international law?  

Two cases from the International Court of Justice are instructive.  In Nicaragua v. United States, the Court confronted the U.S. government’s surreptitious support and funding of the Contras, a rebel group that sought to overthrow the Nicaraguan government.  Nicaragua viewed the United States’ conduct as an armed attack under international law.  The Court, however, disagreed.

Key to the Court’s holding was the concept of scale and effect.  Although the U.S. government had encouraged and directly supported paramilitary activities in and against Nicaragua, the Court concluded that the scale and effect of that conduct did not rise to the level of an armed attack.  Notably, this was the case regardless of any standing prohibition on the United States’ efforts.

So too in Islamic Republic of Iran v. United States, more commonly known as the “Oil Platforms” case.  The Court analyzed the U.S. government’s decision to bomb evacuated Iranian oil platforms in response to Iranian missile and mining operations throughout the Persian Gulf.  Among other things, the Iranian operations injured six crew members on a U.S.-flagged oil tanker and ten sailors on a U.S. naval vessel, and damaged both ships.  The Court nonetheless rejected the United States’ claim of self-defense because the Iranian operations did not meet the Nicaragua gravity threshold and thus did not qualify as “armed attacks.”  

Viewed against this backdrop, however contested, it strains reason to suggest that an isolated use of force against an unmanned asset would ever constitute an armed attack.  Never before have hostile forces been able to similarly degrade combat capability with absolutely no risk of casualty.  Though the Geneva Conventions prohibit the “extensive destruction” of property, it is another matter completely to conclude that any unlawful use of force is tantamount to an armed attack.  Indeed, the Nicaragua and Oil Platforms cases clearly reject this reasoning.  This highlights how the new balance of scale and effect will alter the landscape that separates peace and war.

Even assuming an attack on unmanned technology might constitute an armed attack under international law, there arise other complications regarding the degree of force available in response.  The jus ad bellum principles of necessity and proportionality apply to actions taken in self-defense, and the legitimate use of “defensive” force must be tailored to achieve that legitimate end.  A failure to strike this balance runs contrary to long-held principles of international law. 

What, then, happens when a robotic platform is destroyed and the response delayed?  Does the aggrieved country have a general right to use limited, belated force in reply?  Maybe.  But a generalized response would likely constitute armed reprisal, which has fallen into disfavor under customary international law. 

To be lawful, the deferred use of defensive force must be tailored to prevent similar attacks in the future.  Anything short of this would convert a country’s inherent right to self-defense into subterfuge for illegal aggression.  Thankfully, this obligation is simply met where the initial aggressor is a developed country that maintains targeting or industrial facilities that can be tied to any previous, or potential future, means of attack.  But this problem takes on new difficulty in the context of asymmetric warfare.   

Non-state actors are more than capable of targeting robotic technology.  Yet these entities lack the traditional infrastructure that might typically (and lawfully) find itself in the crosshairs following an attack.  How, then, can a traditional power use force in response to a successful, non-state assault on unmanned equipment?  It is complicated.  A responsive strike that broadly targets members of the hostile force may present proportionality concerns distinct from those associated with traditional attacks that risk the loss of life. 

How would a country justify a responsive strike that targets five members of a hostile force in response to a downed drone?  Does the answer change if fewer people are targeted?  And what if there is no question that those targeted were not involved in the initial act of aggression?  These questions aside, a responsive strike that exclusively targets humans in an attempt to stymie future attacks on unmanned equipment does not bear the same legal foundation as one that seeks to prevent future attacks that risk life.  The international community has yet to identify the exchange rate between robotic equipment and human lives, and therein lies the problem.

Jus in Bello

Robotic warfare will also disrupt jus in bello, the law that governs conduct during armed conflict.  Under the law of armed conflict, the right to use deadly force against a belligerent continues until they have been rendered ineffective, whether through injury, surrender, or detention.  But the right to use force first is not diminished by the well-recognized obligation to care for those same combatants if wounded or captured.  An armed force is not required to indiscriminately assume risk in order to capture as opposed to kill an adversary.  To impose such a requirement would shift risk from one group to another and impose gratuitous tactical impediments.

This sentiment fades, however, once you place “killer robots” on the battlefield.  While there is little sense in telling a young soldier or marine that he cannot pull the trigger and must put himself at greater risk if an opportunity for capture presents itself, the same does not hold true when a robot is pulling the trigger.  The tactical feasibility of capture over kill becomes real once you swap “boots” for “bots” on the ground.  No longer is there the potential for fatality, and the risk calculus becomes largely financial.  This is not to say that robots would obligate a country to blindly pursue capture at the expense of strategy.  But a modernized military might effect uncontemplated restrictions on the traditional use of force under international law.  The justification for kill over capture is largely nonexistent in situations where capture is tactically feasible without any coordinate risk of casualty.

Design is another important part of this discussion.  Imagine a platoon of “killer robots” engages a small group of combatants, some of whom are incapacitated but not killed.  A robot that is exclusively designed to target and kill would be unable to comply with the internationally recognized duty to care for wounded combatants.  Unless medical care is a contemplated function of these robots’ design, the concept of a human-free battlefield will remain unrealized.  Indeed, the inherent tension between new tech and old law might indicate that at least some human footprint will always be required in theater—if only after the dust of combat settles.

Reports from China suggest that robots could replace humans on the battlefield within the next five years, and the U.S. Army is slated to begin testing a platoon of robotic combat vehicles this year.  Russia, too, is working to develop combat robots to supplement its infantry.  This, of course, raises an important question: what happens if the most powerful, technologically adept countries write off traditional obligations at the design table?  Might often makes right on the international stage, and given the lack of precedent in this area, the risk demands attention.

Law of the Sea

The peacetime naval domain provides another interesting forum for the disruptive effect of military robotics.  Customary international law, for example, has long recognized an obligation to render assistance to vessels in distress—at least to the extent feasible without danger to the assisting ship and crew.  This is echoed in a variety of international treaties ranging from the Geneva Convention on the High Seas to the United Nations Convention on the Law of the Sea.  But what becomes of this obligation when ships of the future have no crew?

Navies across the world are actively developing ghost fleets.  The U.S. Navy has called upon industry to deliver ten Large Unmanned Surface Vehicle ships by 2024, and just recently, the “Sea Hunter” became the first ship to sail autonomously between two major ports.  This comes as no surprise given the Navy’s 2020 request for $628.8 million to conduct research and development involving unmanned surface and sub-surface assets.  The Chinese, too, have been exploring the future of autonomous sea power.  

This move highlights the real possibility that technology may relieve the most industrially developed Navies of traditional international obligations.  Whether fortuitously or not, the size of a ghost fleet would inversely reflect a nation’s ability—and perhaps its obligation—to assist vessels in distress. 

This would shift the humanitarian onus onto less-developed countries or commercial mariners, ceding at least one traditional pillar of international law’s peacetime function.  This also opens the door to troubling precedent if global superpowers begin to consciously design themselves out of long-held international obligations.

The move to robotic sea vessels also risks an increase in challenges to the previously inviolable (and more-easily defendable) sovereignty of sea-going platforms.  In 2016, for example, a Chinese warship unlawfully detained one of the United States’ underwater drones, which, at the time, was being recovered in the Philippine exclusive economic zone.  The move was widely seen as violating international maritime law.  But the Chinese faced no resistance in their initial detention of the vessel and the United States’ response consisted of nothing more than demands for return.  Unlike their staffed counterparts, unmanned vessels are more prone to illegal seizure or boarding—in part because of the relatively low risk associated with the venture. 

This dynamic may increase a nation’s willingness to unlawfully exert control over another’s sovereign vessel while simultaneously decreasing the aggrieved nation’s inclination (or ability) to use force in response.  This same phenomenon bears out in the context of Unmanned Aerial Vehicles, for which the frequency and consequence of hostile engagement are counter-intuitively related.  But unmanned sea vessels are far more prone to low-cost incursion than their winged counterparts.  This highlights but one aspect of the normative consequence effected by unmanned naval technology, which, if unaddressed, stands to alter the cost-benefit analysis that often underlies the equilibrium of peace.”

https://jia.sipa.columbia.edu/online-articles/disruptive-technology-and-future-international-law

ABOUT THE AUTHOR:

Joshua Fiveson

Joshua Fiveson is an officer in the U.S. Navy and a graduate of Harvard Law School.  Fiveson previously served as the youngest-ever military fellow with the Institute of World Politics, a national security fellow with the University of Virginia’s National Security Law Institute, a national security fellow with the Foundation for Defense of Democracies, and a leadership fellow with the Harvard Kennedy School’s Center for Public Leadership.  Fiveson also served as a John Marshall fellow with the Claremont Institute and a James Wilson fellow with the James Wilson Institute. 


How Marines And Robots Will Fight Side By Side

Standard
Illustrations by Jacqueline Belker/Staff

“MARINE CORPS TIMES”

This imagined scenario involves a host of platforms, teamed with in-the-flesh Marines, moving rapidly across wide swaths of the Pacific.

Those small teams of maybe a platoon or even a squad could work alongside robots in the air, on land, at sea and undersea, to gain a short-term foothold that then could control a vital sea lane that Chinese ships would have to bypass or risk being sunk simply to transit.

____________________________________________________________________________

“Somewhere off the coast of a tiny island in the South China Sea, small robotic submarines snoop around, looking for underwater obstacles as remotely controlled ships prowl the surf. Overhead, multiple long-range drones scan the beachhead and Chinese military fortifications deeper into the hills.

A small team of Marines, specially trained and equipped, lingers farther out, having launched from their amphibious warship, as did the robot battle buddies sent to scout this spit of sand.

Their Marine grandfathers and great-grandfathers might have rolled toward this island slowly, dodging sea mines and artillery fire only to belly crawl in the surf as they were raked with ­machine gun fire, dying by the thousands.

But in the near-term battle, suicidal charges to gain ground on a fast-moving battlefield are a robot’s job.

It’s a bold, technology-heavy concept that’s part of Marine Corps Commandant Gen. David Berger’s plan to keep the Corps relevant and lethal against a perceived growing threat in the rise of China in the Pacific and its increasingly sophisticated and capable Navy.

In his planning guidance, Berger called for the Marines and Navy to “create many new risk-worthy unmanned and minimally manned platforms.” Those systems will be used in place of and alongside the “stand-in forces,” which are in range of enemy weapons systems to create “tactical dilemmas” for adversaries.

“Autonomous systems and artificial intelligence are rapidly changing the character of war,” Berger said. “Our potential peer adversaries are investing heavily to gain dominance in these fields.”

And a lot of what the top Marine wants makes sense for the type of war fighting, and budget constraints, that the Marine Corps will face.

“A purely unmanned system can be very small, can focus on power, range and duration and there are a lot of packages you can put on it — sensors, video camera, weapons systems,” said Dakota Wood, a retired Marine lieutenant colonel and now senior research fellow at The Heritage Foundation in Washington, D.C.

The theater of focus, the Indo-Pacific Command, almost requires adding a lot of affordable systems in place of more Marine bodies.

That’s because the Marines are stretched across the world’s largest ocean and now face anti-access/area-denial systems run by the Chinese military that the force hasn’t had to consider since the Cold War.

“In INDOPACOM, in the littorals, the Marine Corps is looking to kind of outsource tasks that machines are looking to do,” Wood said. “You’re preserving people for tasks you really want a person to handle.”

The Corps’ shift back to the sea and closer work with the Navy has been brewing in the background in recent years as the United States slowly has attempted to disentangle itself from land-based conflicts in the Middle East. Signaling those changes, recent leaders have published warfighting concepts such as expeditionary advanced base operations, or EABO, and littoral operations in a contested environment.

EABO aims to work with the Navy’s distributed maritime operations concept. Both allow for the U.S. military to pierce the anti-access/area-denial bubble. The littoral operations in a contested environment concept makes way for the close-up fight in the critical space where the sea meets the land.

That’s meant a move to prioritize the Okinawa, Japan-based III Marine Expeditionary Force as the leading edge for Marine forces and experimentation, as the commandant calls for the “brightest” Marines to head there.


Getting what they want

But the Corps, which traditionally has taken a backseat in major acquisitions, faces hurdles in adding new systems to its portfolio.

It was only in 2019 that the Marines gained funding to add more MQ-9 Reaper drones; the Corps got the money to purchase its three Reapers in this year’s budget. But that’s a platform that’s been in wide use by the Air Force for more than a decade.

But that’s a short-term fix; the Corps’ goal remains the Marine Air-Ground Task Force unmanned aircraft system, expeditionary, or MUX.

The MUX, still under development, would give the Corps a long-range drone with vertical takeoff capability to launch from amphib ships that can also conduct persistent intelligence, surveillance and reconnaissance; wage electronic warfare; and coordinate and initiate strikes from other weapons platforms in its network.

Though early ideas in 2016 called for something like the MUX to be in the arsenal, at this point officials are pegging an operational version of the aircraft for 2026.

Lt. Gen. Steven Rudder, deputy commandant for aviation, said at the annual Sea-Air-Space Symposium in 2019 that the MUX remains among the top priorities for the MAGTF.

Sustain and distract

In other areas, Marines are focusing on existing platforms but making them run without human operators.

One such project is the expeditionary warfare unmanned surface vessel. Marines are using the 11-meter rigid-hull inflatable boats already in service to move people or cargo, drop it off and return for other missions.

Logistics are a key area where autonomous systems can play a role. Carrying necessary munitions, medical supplies, fuel, batteries and other items on relatively cheap platforms keeps Marines out of the in-person transport game and instead part of the fight.

In early 2018 the Corps conducted the “Hive Final Mile” autonomous drone resupply demonstration in Quantico, Virginia. The short-range experiment used small quadcopters to bring items like a rifle magazine, MRE or canteen to designated areas to resupply a squad on foot patrol.

The system used a group of drones in a portable “hive” that could be ­programmed to deliver items to a predetermined site at a specific time and continuously send and return small drones with various items.
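The report does not describe the hive’s software, but the scheduling logic it implies is easy to sketch. Below is a minimal, hypothetical Python illustration (invented names, coordinates and timings, not the actual “Hive Final Mile” code) of a hive that launches queued deliveries at their programmed times as drones become available:

    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Delivery:
        launch_time: int                    # seconds from mission start
        item: str = field(compare=False)    # e.g., "rifle magazine"
        grid: tuple = field(compare=False)  # destination coordinates

    class Hive:
        """Toy dispatcher: launches queued deliveries as drones come free."""
        def __init__(self, num_drones):
            self.idle_drones = list(range(num_drones))
            self.queue = []  # min-heap ordered by launch_time

        def add(self, delivery):
            heapq.heappush(self.queue, delivery)

        def tick(self, now):
            # Launch every delivery whose time has arrived, while drones remain.
            while self.queue and self.queue[0].launch_time <= now and self.idle_drones:
                job = heapq.heappop(self.queue)
                drone = self.idle_drones.pop()
                print(f"t={now}s: drone {drone} launches with {job.item} for grid {job.grid}")

    hive = Hive(num_drones=3)
    hive.add(Delivery(60, "rifle magazine", (51.3, 44.1)))
    hive.add(Delivery(120, "canteen", (51.4, 44.0)))
    for t in range(0, 181, 30):
        hive.tick(t)

A real dispatcher would also track drone returns, battery state and airspace deconfliction; the point is only that the hive’s program-and-forget behavior is ordinary queueing, not exotic autonomy.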

Extended to longer ranges on larger platforms, that becomes a lower-risk way to get a helicopter’s worth of supplies to far-flung Marines on small atolls that dot vast ocean expanses.

Shortly after that demonstration, the Marines put out requests for concepts for a similar drone resupply system that would carry up to 500 pounds at least 10 km. That still was not enough distance for larger-scale warfighting, but it is the beginning of the type of resupply a squad or platoon might need in a contested area.

In 2016, the Office of Naval Research used four rigid-hull inflatable boats with unmanned controls to “swarm” a target vessel, showing that they can also be used to attack or distract vessels.

And the distracting part can be one of the best ways to use unmanned assets, Wood said.

Wood noted that while autonomous systems can assist in classic “shoot, move, communicate” tactics, they can sometimes be even more effective in sustaining forces and distracting adversaries.

“You can put machines out there that can cause the enemy to look in that direction, decoys tying up attention, munitions or other platforms,” Wood said.

And that distraction goes further than actual boats in the water or drones in the air.

As with the MUX, the Corps is looking at ways to include electronic warfare capabilities in its plans. That allows for robotic systems to spoof enemy sensors, making them think that a small pod of four rigid-hull inflatable boats is a larger flotilla of amphib ships.


Overreliance

Marines fighting alongside semi-autonomous systems isn’t entirely new.

In communities such as aviation, explosive ordnance disposal and air defense, forms of automation, from flying preset flight paths to approaching bomb sites and recognizing incoming threats, have been at least partly outsourced to software and systems.

But for more complex tasks, not so much.

How robots have worked and will continue to work in formations is an evolutionary process, according to former Army Ranger Paul Scharre, director of the technology and national security program at the Center for a New American Security and author of “Army of None: Autonomous Weapons and the Future of War.”

If you look at military technology in history, the most important use for such tech was in focusing on how to solve a particular mission rather than having the most advanced technology to solve all problems, Scharre said.

And autonomy runs on a kind of sliding scale, he said.

As systems get more complex, autonomy will give fewer tasks to the human and more to the robot, helping people better focus on decision-making about how to conduct the fight. And it will allow for one human to run multiple systems.

When you put robotic systems into a squad, you’re giving up a person to run them, and leaders have to decide if that’s worth the trade-off, Scharre said.

The more remote the system, the more vulnerable it might be to interference or hacking, he said. Any plan for adding autonomous systems must build in reliable, durable communication networks.

Otherwise, when those networks are attacked, the systems go down.
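In software terms, that fallback is usually an onboard watchdog. The sketch below is a hypothetical Python illustration (invented mode names and timeouts, not any fielded system) of logic that keeps a platform predictable when its control link is jammed or lost:

    import time

    LINK_TIMEOUT = 5.0  # seconds without a heartbeat before assuming the link is lost

    class FallbackController:
        """Toy watchdog: degrades gracefully when the control link drops."""
        def __init__(self):
            self.last_heartbeat = time.monotonic()
            self.mode = "REMOTE_CONTROL"

        def on_heartbeat(self):
            # Called whenever a valid command or heartbeat arrives.
            self.last_heartbeat = time.monotonic()
            self.mode = "REMOTE_CONTROL"

        def step(self):
            # Called every control cycle to pick the current behavior.
            silent = time.monotonic() - self.last_heartbeat
            if silent > 6 * LINK_TIMEOUT:
                self.mode = "RETURN_TO_LAUNCH"  # give up waiting and come home
            elif silent > LINK_TIMEOUT:
                self.mode = "LOITER"            # hold position and keep listening
            return self.mode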

That means that a Marine’s training won’t get less complicated, only more multifaceted.

Just as Marines continue to train with a map and compass for land navigation even though they have GPS at their fingertips, Marines operating with autonomous systems will need continued training in fundamental tactics and ways to fight if those systems fail.

“Our preferred method of fighting today in an infantry is to shoot someone at a distance before they get close enough to kill with a bayonet,” Scharre said. “But it’s still a backup that’s there. There are still bayonet lugs on rifles, we issue bayonets, we teach people how to wield them.”

Where do they live?

A larger question is where do these systems live? At what level do commanders insert robot wingmen or battle buddies?

Purely for reasons of control and effectiveness, Dakota Wood said they’ll need to be close to the action and Marine Corps personnel.

But does that mean every squad is assigned a robot, or is there a larger formation that doles out the automated systems as needed to the units?

For example, an infantry battalion has some vehicles but for larger movements, leaders look to a truck company, Wood said. The maintenance, care, feeding, control and programming of all these systems will require levels of specialization, expertise and resources.

The Corps is experimenting with a new squad formation, putting 15 instead of 13 Marines in the building block of the infantry. Those additions were an assistant team leader and a squad systems operator. Those are exactly the types of human positions needed to implement small drones, tactical level electronic warfare and other systems.

The MUX, still under development, would give the Corps a long-range drone with vertical takeoff capability to launch from amphib ships. (Bell Helicopter)

The Marine Corps leaned on radio battalions in the 1980s to exploit tactical signals intelligence. Much of that capability resided in the larger battalion that farmed out smaller teams to Marine Expeditionary Units or other formations within the larger division or Marine Expeditionary Force.

A company or battalion or other such formation could be where the control and distribution of autonomous systems remains.

But current force structure moves look like they could integrate those at multiple levels. Maj. Gen. Mark Wise, deputy commanding general of Marine Corps Combat Development Command, said recently that the Corps is considering a Marine littoral regiment as a formation that would help the Corps better conduct EABO.

Combat Development Command did not provide details on the potential new regimental formation, but confirmed that a Marine littoral regiment concept is one that will be developed through current force design conversations.

A component of that could include a recently-proposed formation known as a battalion maritime team.

Maj. Jake Yeager, an intelligence officer in I MEF, charted out an offensive EABO method in a December 2019 article on the website War On The Rocks titled “Expeditionary Advanced Maritime Operations: How the Marine Corps can avoid becoming a second land Army in the Pacific.”

Part of that includes the maritime battalion, creating a kind of Marine air-sea task force. Each battalion team would include three assault boat companies, one raid boat company, one anti-ship missile boat battery and one reconnaissance boat company.

The total formation would use 40 boats, at least nine of which would be dedicated unmanned surface vehicles, while the rest would be developed with unmanned or manned options, much like the rigid-hulled inflatable boats with which the Corps is currently experimenting.”

https://www.marinecorpstimes.com/news/your-marine-corps/2020/02/03/war-with-robots-an-inside-look-at-how-marines-and-robots-will-fight-side-by-side/

The Democratization Of Artificial Intelligence (And Machine Learning)


FEDERAL NEWS NETWORK

Artificial intelligence programs are multiplying like rabbits across the federal government. The Defense Department has tested AI for predictive maintenance on vehicles and aircraft.

Civilian agencies have experimented with robotic process automation. RPA pilots at the General Services Administration and the IRS helped employees save time on repetitive, low-skill tasks.
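Most RPA pilots of that kind amount to scripting a hand-off that humans used to perform manually. As an illustration only (invented file names, not GSA or IRS tooling), consolidating a stack of daily spreadsheet reports, a typical target for such pilots, takes just a few lines of Python:

    import csv
    import glob

    # Merge every daily report into one consolidated file -- the sort of
    # repetitive copy-and-paste job RPA pilots take off employees' plates.
    with open("consolidated.csv", "w", newline="") as out:
        writer = None
        for path in sorted(glob.glob("reports/daily_*.csv")):
            with open(path, newline="") as f:
                reader = csv.DictReader(f)
                if writer is None:
                    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
                    writer.writeheader()
                writer.writerows(reader)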

______________________________________________________________________________

This content is provided by Red Hat

On the industry side, Chris Sexsmith, Cloud Practice Lead for Emerging Technologies at Red Hat, says it’s reached the point where companies are becoming more concerned with a second layer: It’s not only about leveraging AI itself, but also how to effectively manage the data.

“What are some of the ethical concerns around using that data?” Sexsmith asked. “Essentially, how does an average company or enterprise stay competitive in this industry while staying in line with always-evolving rules? And ultimately, how do we avoid some of the pitfalls of artificial intelligence in that process?”

Some research-based agencies are starting to take a look at the idea of ethical uses for AI and data. Federal guidance is still forthcoming, but on May 22, 40 countries including the U.S. signed off on a common set of AI principles through the Organisation for Economic Co-operation and Development (OECD).

But one of the biggest concerns right now is the “black box.” Essentially, once an AI has analyzed data and provided an output, it’s very difficult to see how that answer was reached. But Sexsmith said agencies and organizations can take steps to avoid the black box with Red Hat’s Open Data Hub project.

“Open Data Hub is designed to foster an open ecosystem for AI/ML – a place for users, agencies, and other open source software vendors to build and develop together. As always at Red Hat, our goal is to be very accessible for users and developers to collectively build and share this next generation of toolsets,” Sexsmith said. “The ethical benefits in this model are huge – the code is open to inspection, freely available and easy to examine. We effectively sidestep the majority of black box scenarios that you may get with other proprietary solutions. It’s very easy to inspect what’s happening – the algorithms and code that are used on your datasets to tune your models, for instance – because they are 100% open source and available for analysis.”
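The contrast Sexsmith draws is easy to demonstrate in miniature. The sketch below uses scikit-learn, a common open source library (it is not part of Open Data Hub; the example is generic), to train a model whose every decision rule can be printed and audited, exactly the property a black-box service withholds:

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Train a small, fully inspectable model on a public dataset.
    data = load_iris()
    model = DecisionTreeClassifier(max_depth=3).fit(data.data, data.target)

    # Unlike a black box, the learned logic itself can be read and reviewed,
    # the kind of audit an open source pipeline makes routine.
    print(export_text(model, feature_names=list(data.feature_names)))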

Open Data Hub is a machine-learning-as-a-service toolbox, built on top of Red Hat’s OpenShift, a platform for managing Linux containers. But it’s designed to be portable, to run in hybrid environments, across on-premise and public clouds.

“We aim to give the data scientists, engineers and practitioners a head start with the infrastructure components and provide an easy path to running data analytics and machine learning in this distributed environment,” Sexsmith said. “Open Data Hub isn’t one solution, but an ecosystem of solutions built on Openshift, our industry-leading solution centered around Kubernetes, which handles distributed scheduling of containers across on-prem and cloud environments. ODH provides a pluggable framework to incorporate existing software and tools, thereby enabling your data scientists, engineers and operations teams to execute on a safe and secure platform that is completely under your control.”

Red Hat is currently partnered with companies like NVIDIA, Seldon.io, and PerceptiLabs on the Open Data Hub project. It’s also working on the Mass Open Cloud, a collaboration of academia, industry and the state of Massachusetts.

But Sexsmith sees a lot of possibilities in this space for federal agencies to advance their AI capabilities. Geospatial reconnaissance, law enforcement, space exploration and national labs are just a few of the federal missions that could benefit from AI’s ability to process massive amounts of data in an open, ethical way.

“Federal agencies obviously have a lot of restrictions on how data can be utilized and where that data can reside,” Sexsmith said. “So in this world of hybrid cloud, there is a need to be cautious and innovative at the same time. It is easy to inadvertently build bias into AI models and possibly make a bad situation worse. Full control of data and regular reviews of both code and data, including objective reviews of ML output, should be a top priority. At minimum, a human should always be in the loop. And while the simplicity of a proprietary service is often appealing, there is danger in not fully understanding how machine-learning results are reached. Code and data are more intertwined than ever, and the rules around data and privacy are always evolving. Maintaining control of your data in a secure open source environment is a smart move, and a primary goal of Open Data Hub.”

Artificial Intelligence And The Potential To Replace The Federal Work Force

Image: “Irishtimes.com”

“FEDERAL NEWS NETWORK” By Permission – Jeff Neal

Employees will need new skills. OK. Got that. What new skills will they need? Are we talking about the skills of the tech folks in the agency? Yes. Are we talking about the people who will use the tech? Yes.

Are we talking about the agency’s customers? Yes. So we are talking about the potential retraining of the bulk of the federal workforce over a period of years.

______________________________________________________________________________

“It is hard to avoid seeing articles and studies that talk about artificial intelligence (AI) and how it will provide many benefits and open the door to countless risks. A recent two-part Partnership for Public Service report — “More Than Meets AI” — talked about steps agencies should take to communicate with their employees, ensure they have the right skills, minimize risk and build confidence in systems.

All of those are good things to think about. It is true that the potential for AI is so far-reaching that it will certainly change how employees work, present risks we are only beginning to understand and change how the American people interact with the government. The problem with a lot of what I am reading is that it does not take the promise of AI and present concrete examples of how something we all are used to seeing and experiencing will change.

We have retrained people before. When we started moving from paper to mainframe-based systems, we trained employees how to use the dumb terminals that started appearing in their offices. When the first personal computers started appearing in offices, we taught people how to use them, found ways to use the capabilities of the technology and then gradually transformed the way everyone works.

The transformation in those days was slow and mostly predictable. It was a move from paper and pencil to digital, but much of the work replicated what was already being done. While the change was predictable, it was also far-reaching. As I wrote in October last year, during the 1950s, the federal government employed more than a million clerks. Those jobs were mostly automated out of existence. By 2014, the number was down to 123,000. Now the number is down to 106,000.

The fact that we could replace 900,000 jobs and not have tremendous disruption is partly because it was a gradual transformation, partly because it affected the lowest graded jobs where turnover was traditionally high, and partly because it changed the nature of how the most repetitive tasks were done. But it did not change the fundamental work being done, as much as we might think.

The federal government was part of a much larger move to an economy based on knowledge work. Knowledge workers derive their economic value from the knowledge they bring to the table. Clerical work, much like trade and craft work, brought value mostly because of the labor the employees carry out, not their technical and programmatic skills. As those jobs disappeared, they were replaced with people whose knowledge was their contribution.

That transformation is the reason I have said for a long time that the federal government is actually much larger as a percentage of the population than it was in the 1950s. In 2014, I wrote a post that showed how that happened. At the time, there was one nonclerical federal employee for every 183 U.S. residents. In the 1950s, the ratio was one for every 503 residents.

The change in the federal workforce that was driven by the increased use of technology was enabled by increased government spending and the fact that the number of federal employees appeared to remain relatively constant. In inflation-adjusted dollars, federal spending is almost five times as much per U.S. resident as it was in the 1950s.

When we experience the next wave of AI-enabled changes, can we expect the same thing to happen? Is it likely that we will continue to see federal spending increase at the rate it has in the past 60 years? Will large numbers of federal jobs be replaced with technology, only to reappear in another form? I think the answers to those questions are going to drive federal agency priorities for years to come.

Will federal spending increase? If the recent spending agreement is any measure, absolutely. The last big attempt by Congress to put itself on a fiscal diet was sequestration. Remember that? They put automatic cuts in place so they could force themselves to stop spending. Then they spent trillions. Spending kept going up because politicians will spend money to get votes.

A free-spending Congress means we are likely to see the dollars continue to flow. The fact that 85% of federal jobs are not in the National Capital Region means members of Congress are not going to want to see real reductions in the number of federal jobs. So it is safe to predict that the number of federal workers is going to continue to hover around two million. Add the flowing money, the desire to protect jobs in congressional districts and the emerging wave of AI, and the result will be a radically transformed federal workplace. The difference this time is that the pace of advances in technology is increasing and the capabilities we will see from AI will replace knowledge workers to a degree we have not seen before.

This post is the first of a series that will look at the impact of AI. Rather than addressing it in broad terms, future posts will take a look at one type of federal job and examine how the work is performed today and what we can expect as technology develops. I will also make some recommendations on how that transition can come about and what will happen to the employees.

I have more than 40 years of experience in human resources, so that is the occupation I will examine. The changes we can expect in HR and how the government can make those changes will translate to other types of work as well. The next post in this series will be in two weeks.”

ABOUT THE AUTHOR:

Jeff Neal is a senior vice president for ICF and founder of the blog ChiefHRO.com. Before coming to ICF, Neal was the chief human capital officer at the Homeland Security Department and the chief human resources officer at the Defense Logistics Agency.

Hi Tech Weapons Today – A 40 MM Drone Canister With A “Can-Doom” Attitude – And It’s Cheap

Standard
(Image composite, DefendTex / US Army photo by Tia Sokimson)

C4ISRNET

Drone 40, produced by Melbourne-based defense technology firm DefendTex, is a drone designed to be fired from a 40 mm grenade launcher.

It is a range expander for infantry, a novel loitering munition, and a testament to the second-order effects of a thriving drone parts ecosystem. Drone 40 is designed to fly with minimal human involvement.

______________________________________________________________________________

“A 40mm canister is an unusual form factor for a quadcopter, but not an unproductive one. Like the endless variation on a simple form seen in beetles, quadcopters combine four rotors, internal sensors and remote direction with the adaptability to fit into any ecological niche.

Drone 40 was created as a solution to the problem of range; specifically, the disparity between the weapons carried by Australian infantry, which are accurate to about 500 meters, and the AK-74s carried by adversaries, which can reach out to 800 meters (though accuracy at that range is disputed). Even when that fire is merely suppressive, Australia was looking for a way to let its infantry fight back, but not one that required changing the rifle or adding a lot more weight to what soldiers were already carrying.

“The only thing that we had in the infantry kit with any utility was a 40 millimeter grenade launcher,” which led to the design of the Drone 40, said DefendTex CEO Travis Reddy. Rather than overtaxing the launcher with a medium-velocity round that could travel the distance needed, the launcher would instead give a boost to a drone-borne munition that would then fly under its own power the rest of the way to the target.

The overall appearance of the Drone 40 is that of an oversized bullet. Four limbs extend from the cylindrical body, with rotors attached. In flight, it gives the appearance of a rocket traveling at perpendicular angles, the munition suspended below the rotors like a Sword of Damocles. It is a quadcopter, technically.

Drone 40 is a loitering munition, for a very short definition of loiter. When carrying a 110 gram payload, it can fly for about 12 minutes. The person commanding the Drone 40 can remotely disarm the munition, letting the drone land inert for later recovery. When not carrying an anti-personnel or anti-tank munition as payload, it can be outfitted with a sensor. For an infantry unit that wants to scout first, fire later, the sensor module can provide early information, then be swapped out with a deadly payload. Beyond Australia, the company envisions providing the Drone 40 to the Five Eyes militaries.

The drone can stream video up to 10 km over direct line of sight; with its radio signal relayed by another aerial system, that range can be extended. Drone 40 can also record video and retransmit it when it comes back within range, or it can take still images. Using GPS, the drone can follow a plotted waypoint course to a target, or it can use its own synthetic aperture radar to identify and track one. Reddy says it can distinguish the radar profile of, say, a T-72 tank, and then follow it autonomously.
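Waypoint following itself is conceptually simple. The toy Python sketch below, in flat local x/y meters, is illustrative only (a real flight controller must also handle GPS coordinates, wind and attitude control); it just steps toward each waypoint in turn:

import math

# Toy waypoint follower in local x/y meters (illustrative only).
def follow_waypoints(pos, waypoints, step=5.0, arrive=10.0):
    x, y = pos
    for wx, wy in waypoints:
        # Keep stepping toward the waypoint until inside the arrival radius.
        while math.hypot(wx - x, wy - y) > arrive:
            heading = math.atan2(wy - y, wx - x)
            x += step * math.cos(heading)
            y += step * math.sin(heading)
    return x, y

# Example: launch at the origin, loiter point 800 m downrange.
print(follow_waypoints((0.0, 0.0), [(400.0, 100.0), (800.0, 0.0)]))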

The unit’s development was largely funded by the Australian government in collaboration with an autonomous systems collaborative research program, and the drone can work collaboratively, with multiple Drone 40s flying together and operating off the sensor data from a single ISR drone in the swarm. Most of the flying, identifying and tracking of targets is done autonomously; however, human control remains an essential part of the machine’s operation.

“The Department of Defense has very strict rules around any use of autonomy in the battlefield,” says Reddy. “We always have to have either man in the loop or man on the loop. The weapon system will never be autonomous, fully acquire and prosecute target without authorization and confirmation from the human.”

The autonomy is there, in a sense, to hand off the task of flying the drone into position, so that the operator is only tasked with making a call once the drone is in place.

“If there’s someone flying this thing or looking at the video feed, they’re not in combat and someone else is not in combat because they have to be protected at that point in time,” says Reddy. “Everything we do is trying to ensure that we have almost fire and forget, just a reminder when it’s on station or it requires a decision to be made; the rest of the time, the operator is in the fight.”
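In code, that division of labor reduces to a simple gate. The sketch below is purely illustrative, not DefendTex’s software; every name in it is hypothetical:

# Hypothetical "man on the loop" gate: flying and tracking are autonomous,
# but weapon release always blocks on an explicit human decision.
def engage(drone, target, ask_operator):
    drone.fly_to(target.position)      # autonomous navigation to station
    drone.track(target)                # autonomous sensor tracking
    if ask_operator("On station, target tracked. Authorize engagement?"):
        drone.release_munition()       # only after human authorization
    else:
        drone.disarm_and_land()        # land inert for later recovery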

To make Drone 40 work at the small size and desired price point, its makers had to lean on the commercial drone market. Existing versions, Reddy says, cost less than $1,000 apiece, with a goal of getting the cost down to around $500.

“To hit the price point that we are using, we are heavily leveraging the current drone market. We have companies, large companies that sink hundreds of millions of dollars into R&D and we can leverage that investment,” says Reddy. “If we wanted to design a radar on a drone ourselves, it would cost us many millions of dollars to achieve and end up in a price point of $10,000 to $15,000 a unit. Instead we let the automotive industry spend all that money and now they’re producing chips that are in the tens of dollars.”

Drone 40 is also designed to be scaled up. DefendTex is working on Drone 81, a larger round designed to work with mortar tubes, and there are other drone models in the works matched to specific munition sizes. If the iteration is successful, it will create a whole arsenal of possibilities for range-expanding munitions that fit into existing platforms.”

https://www.c4isrnet.com/unmanned/2019/06/05/a-drone-with-a-can-doom-attitude/

The Pentagon Wants Your Thoughts On Artificial Intelligence

Standard
IMAGE: ZACKARY CANEPARI – REDUX

“WIRED”

“In February, the Pentagon unveiled an expansive new artificial intelligence strategy that promised the technology would be used to enhance everything the department does, from killing enemies to treating injured soldiers.”

______________________________________________________________________________

“It said an Obama-era advisory board packed with representatives of the tech industry would help craft guidelines to ensure the technology’s power was used ethically.

In the heart of Silicon Valley on Thursday, that board asked the public for advice. It got an earful—from tech executives, advocacy groups, AI researchers, and veterans, among others. Many were wary of the Pentagon’s AI ambitions and urged the board to lay down rules that would subject the department’s AI projects to close controls and scrutiny.

“You have the potential of large benefits, but downsides as well,” said Stanford grad student Chris Cundy, one of nearly 20 people who spoke at the “public listening session” held at Stanford by the Defense Innovation Board, an advisory group established by the Obama administration to foster ties between the Pentagon and the tech industry. Members include executives from Google, Facebook, and Microsoft; the chair is former Google chair Eric Schmidt.

Although the board is examining the ethics of AI at the Pentagon’s request, the department is under no obligation to heed its recommendations. “They could completely reject it or accept it in part,” said Milo Medin, vice president of wireless services at Google and a member of the Defense Innovation Board. Thursday’s listening session took place amid tense relations between the Pentagon and Silicon Valley.

Last year, thousands of Google employees protested the company’s work on a Pentagon AI program called Project Maven, in which the company’s expertise in machine learning was used to help detect objects in surveillance imagery from drones. Google said it would let the contract expire and not seek to renew it. The company also issued guidelines for its use of AI that prohibit projects involving weapons, although Google says it will still work with the military.

Before the public got its say Thursday, Chuck Allen, a top Pentagon lawyer, presented Maven as an asset, saying AI that makes commanders more effective can also protect human rights. “Military advantages also bring humanitarian benefits in many cases, including reducing the risk of harm to civilians,” he said.

Many of those who spoke after the floor opened to the public were more concerned that AI may undermine human rights.

Herb Lin, a Stanford professor, urged the Pentagon to embrace AI systems cautiously because humans tend to place too much trust in computers’ judgments. In fact, he said, AI systems on the battlefield can be expected to fail in unexpected ways, because today’s AI technology is inflexible and only works under narrow and stable conditions.

Mira Lane, director of ethics and society at Microsoft, echoed that warning. She also raised concerns that the US could feel pressure to change its ethical boundaries if countries less respectful of human rights forge ahead with AI systems that decide for themselves when to kill. “If our adversaries build autonomous weapons, then we’ll have to react,” she said.

Marta Kosmyna, Silicon Valley lead for the Campaign to Stop Killer Robots, voiced similar worries. The group wants a global ban on fully autonomous weapons, an idea that has received support from thousands of AI experts, including employees of Alphabet and Facebook.

The Department of Defense has been bound since 2012 by an internal policy requiring a “human in the loop” whenever lethal force is used. But at UN discussions the US has argued against proposals for similar international-level rules, saying existing agreements like the 1949 Geneva Conventions are a sufficient check on new ways to kill people.

“We need to take into account countries that do not follow similar rules,” Kosmyna said, urging the US to use its influence to steer the world toward new, AI-specific restrictions. Such restrictions could keep the US from switching its position just because an adversary did.

Veterans who spoke Thursday were more supportive of the Pentagon’s all-in AI strategy. Bow Rodgers, who was awarded a Bronze Star in Vietnam and now invests in veteran-founded startups, urged the Pentagon to prioritize AI projects that could reduce friendly-fire incidents. “That’s got to be right up on top,” he said.

Peter Dixon, who served as a Marine officer in Iraq and Afghanistan, spoke of situations in which frantic calls for air cover from local troops taking heavy fire were denied because US commanders feared civilians would be harmed. AI-enhanced surveillance tools could help, he said. “It’s important to keep in mind the benefits this has on the battlefield, as opposed to just the risk of this going sideways somehow,” Dixon said.

The Defense Innovation Board expects to vote this fall on a document that combines principles that could guide the use of AI with general advice to the Pentagon. It will also concern itself with more pedestrian uses of AI under consideration at the department, such as in healthcare, logistics, recruiting, and predicting maintenance issues on aircraft.

“Everyone gets focused on the pointy end of the stick, but there are so many other applications that we have to think about,” said Heather Roff, a research analyst at Johns Hopkins University’s Applied Physics Laboratory who is helping the board with the project.

The board is also taking private feedback from tech executives, academics, and activists. Friday it had scheduled a private meeting that included Stanford professors, Google employees, venture capitalists, and the International Committee of the Red Cross.

Lucy Suchman, a professor at Lancaster University in the UK, was looking forward to that meeting but is pessimistic about the long-term outcomes of the Pentagon’s ethics project. She expects any document that results to be more a PR exercise than meaningful control of a powerful new technology—an accusation she also levels at Google’s AI guidelines. “It’s ethics-washing,” she said.”

https://www.wired.com/story/pentagon-wants-your-thoughts-ai-may-not-listen/

Artificial Intelligence (AI) Is Cheapening Authoritarian Governance

Standard

Image: “WIRED” https://www.wired.com/story/ai-cold-war-china-could-doom-us-all/

“GEOPOLITICAL MONITOR”

“With access to the uncountable daily digital activities of one billion digital citizens, China is accumulating vast amounts of metadata to develop, refine and deploy its AI systems to achieve its strategic objectives.

China’s AI hegemonic quest will challenge the United States’ ability to maintain its dominant political, economic, and security presence in the region.”

______________________________________________________________________________

“China is pursuing AI hegemony at the domestic and regional level, with consequential impacts for the consolidation of the CCP’s and President Xi Jinping’s power. In particular, successfully deploying nationwide AI-based technology will promote social stability in China. It will also facilitate the leapfrogging of China’s economic development. Simultaneously, pervasive AI-based monitoring significantly lowers the cost of authoritarian governance, consolidating the CCP’s position as the central and enduring political unit in China.

The long-term objective of AI’s nationwide deployment is to allow the CCP leadership to achieve its twin goals of realizing “socialist modernization” by 2035, and to “build a modern socialist country that is strong, prosperous, democratic, culturally advanced, and harmonious” by 2049.

At the same time, AI hegemony is for China a strategy to migrate the axis of US-China competition to the technological realm, where China’s size and its closed “Chinanet” system give it asymmetric comparative advantages. If this strategy is successfully prosecuted, these asymmetries would allow China to reshape global trade and help it secure its core interests.

Made in China 2025: Toward AI-Based Socioeconomic Development

With access to the uncountable daily digital activities of one billion digital citizens, China is accumulating vast amounts of metadata to develop, refine and deploy its AI systems to achieve its strategic objectives. Kai-Fu Lee, author of AI Superpowers: China, Silicon Valley, and the New World Order, captures this data abundance with an analogy: “if data were petroleum in the artificial intelligence era, then China would be Saudi Arabia.”

China’s quest for AI dominance is rooted in its national development strategy, embodied in the Made in China 2025 (中国制造 2025) plan. By successfully creating an AI-based digital economy, China may be able to move away from heavy manufacturing and toward high technology, services, and robotics, enabling it to shift away from its current economic growth model.

As of November 2018, manufacturing generated about 40% of China’s GDP and services 51.6%. That services share trails South Korea’s 60%, Japan’s 70%, and those of many other East Asian economies. As for the quality and scale of the service-sector jobs being created, there are concerns that China is not on the trajectory needed to escape the middle-income trap. With that in mind, policymakers in Zhongnanhai are cognizant of the role an AI-based digital economy would play in transitioning the Chinese economy toward sustainable, high-quality, technology-based growth.

Qu Xianming, a drafting member of the Made in China 2025 initiative, best articulates its transformative role in his comments to the National Intellectual Property Administration (CNIPA): “Without innovation or intellectual property, we cannot build a manufacturing power for the aim of a strong nation. To build a manufacturing power through intellectual property creation and use is the necessary route to build a strong China.”

Congruently, AI-based technology cements the CCP’s political position domestically through the deployment of a social credit system that rewards or punishes citizen behavior according to rules laid out by the government. This significantly decreases the cost of authoritarian rule through pervasive, Orwellian monitoring of society and the sanctioning of any anti-Party political activity.

Importantly, deploying AI to decrease the costs of authoritarian rule is an attractive model for other authoritarian states. This will likely cascade into a convergence of support for China’s model of social control and one-party political engineering.

BRI’s Digital Corridor: Locking in Economic Partners and Locking out Rivals

The development of a closed digital system, Chinanet, using AI technologies will shape the evolution of the BRI and the integrity of the global production network. On the BRI front, participating states may find that they must adopt the Chinanet system to maximize the benefits that come with the BRI. Getting on board the BRI’s digital corridor has advantages: it gives participating states access to a plethora of digital apps, including Chinanet’s cashless payment system, which decreases costs and increases both the speed and efficiency of commerce. Importantly, it provides access to the largest digital market on the planet. For emerging states along the BRI, these new capabilities would be game-changing in terms of promoting socioeconomic development.

Potentially outweighing the benefits of the BRI’s digital corridor are the closed nature of the Chinanet system and the associated lack of privacy protection: the Chinese government can access private information when and how it sees fit. This means that proprietary data, including intellectual property as well as private information, may be accessible to China’s Office of the Central Cyberspace Affairs Commission.

There are questions about how the Chinanet system and the BRI may affect the global production network as well. Will Japanese and other businesses be compelled to duplicate their production networks across a China-based closed digital platform and a non-Chinese open digital platform? If so, this would increase the transaction costs of doing business, shorten supply chains and potentially fracture the global production network. The stakes for Japanese businesses are high, as China is not only Japan’s largest trading partner but also an important location for the manufacturing of Japanese products.

Bumps in the BRI: Growing Awareness, Domestic Politics and Shifting Strategies

States on and off the BRI are growing aware of some of the pitfalls of openly embracing it. Sri Lanka, Pakistan and Malaysia are but a few of the states that have been subject to the so-called debt-trap diplomacy associated with the BRI, to sentiments of neocolonialism, and to concerns that BRI projects mostly benefit Chinese businesses rather than local businesses and communities. This has caused states to rethink BRI participation. Notably, growing awareness of the AI-based closed Chinanet system and its lock-in effects will increase suspicions about the BRI in 2019 and beyond.

This pushback will have consequences for the BRI and for domestic politics in the year ahead, especially as the BRI is President Xi Jinping’s signature policy initiative and has been written into the CCP constitution.

Domestically, critics argue that the global pushback against the BRI stems not so much from how fast China has extended its global footprint under President Xi as from how much China has overextended itself under his leadership. The BRI now has Antarctic, Arctic, African, Pacific Islands, and Eurasian components, for example. The AI-based digital layer of the BRI further muddies the waters by providing ammunition for critics of the BRI, Xi, and China.

For many, the growing chorus of BRI and China-related criticism abroad and the hard line taken by the Trump administration, as evidenced by the trade war, is a damning rebuke of Xi’s leadership style, ambitions, and assertive foreign policy.

The domestic backlash includes high-profile essays by prominent scholars such as Xu Zhangrun (许章润), a professor of law at Beijing’s Tsinghua University, who wrote an essay critical of Xi’s administration. Former bureaucrats have also voiced their dismay, such as China’s former chief trade negotiator Long Yongtu (龙永图), who criticized Beijing’s “unwise” tactics in the US tariff war. There has even been criticism of President Xi’s assertiveness in militarizing the South China Sea and in rejecting the July 2016 Permanent Court of Arbitration decision that dismissed Beijing’s expansive territorial claims there.

The domestic backlash will compel the Xi administration to recalibrate its BRI strategy in 2019 and over the coming years. We have already seen this with Japan and China agreeing to engage in third-country infrastructure cooperation, which enhances the BRI’s reputation by linking it to Japan’s longstanding record of high-quality, transparent infrastructure building, a legacy of no-strings-attached development that crucially includes human capital development as well.

US-China Trade War and Geo-technology Competition: Japan’s Delicate Position

The 90-day pause in the escalation of the US-China trade war, meant to allow a negotiated settlement, is not going to resolve the substantial political, security, trade, and technology differences between China and the United States. President Xi understands that the best way to preserve his and the CCP’s power and to achieve China’s national objectives is through domestic and regional AI hegemony. This objective is incompatible with the US view that trade must be free, fair, and reciprocal. It is also incongruous with the US position that the market, not governments, should determine how markets work. Saliently, China’s AI hegemonic quest will challenge the United States’ ability to maintain its dominant political, economic, and security presence in the region.

Japan is in the precarious position of having to navigate between its economic interests and its security interests. It may need to carve out a role as a middleman between China and the United States in 2019 and beyond, working to establish shared digital rules that ensure it is not forced to duplicate its production networks. This herculean challenge may not be achievable as the US steps up pressure on China and the Sino-US rivalry intensifies. In that case, Japan, along with the US, the EU, and other like-minded countries, may need to pool their AI resources to outcompete China in the race for AI hegemony.”

https://www.geopoliticalmonitor.com/high-tech-domination-and-the-us-china-trade-war-ai-is-cheapening-authoritarian-governance/

HHS Standing Up Artificial Intelligence Contract Other Agencies Can Use

Standard

“FEDSCOOP”

“The solicitation calls on potential vendors to demonstrate expertise in four areas of IAAI development: applied ideation and design support; engineering and process engineering support; systems design; and engineering, prototyping and model making support.”

______________________________________________________________________________

“The Department of Health and Human Services’ shared services organization is planning to establish a new contract vehicle that will offer artificial intelligence, automation and other emerging technology services.

The Program Support Center — which offers more than 40 shared services for all federal agencies as a fee-for-service provider — issued a request for proposals Thursday for Intelligent Automation/Artificial Intelligence (IAAI) solutions, services and products for a pending indefinite delivery, indefinite quantity contract.

AI and automation services have drawn wide interest in agency circles for their ability to make labor-intensive tasks run more efficiently. They’ve also played prominent roles in the Trump administration’s federal workforce reform plans to shift employees to more “high-value” work.

HHS has also tried to leverage emerging technologies for more acquisition efficiencies with its HHS Accelerate initiative, which recently earned an authority to operate to test its solutions on live acquisition datasets.

The IAAI contract will have a base period of five years and cover a litany of development services for introducing emerging technologies, from ideation and prototyping to operations and maintenance, to include solutions like AI, robotic process automation, microservices, machine learning, blockchain and others.

“PSC believes that IAAI solutions will be doing everything from reducing backlog and cutting costs to performing functions, such as predicting fraudulent transactions and identifying critical suspects via facial recognition, which are considered difficult for an individual to complete on their own,” the solicitation says. “Indeed, we expect that IAAI technologies will fundamentally transform how the public sector gets work done — redesigning jobs and creating entirely new professions. In sum, PSC believes that IAAI will change the nature of many jobs and revolutionize facets of government operations.”

Task orders on the contract have a maximum limit of $4 million, including options; however, the RFP said that agency leaders expect the orders to generally be valued at $300,000 or less. The contract maximum for the period of performance is $49 million, including any options.

Interested stakeholders have until Jan. 30 to respond.”

Plotting A Course For The Automated Future Of Federal Work

Standard
Image: Getty Images

“FEDSCOOP”

“The reality is there’s no way to be able to make the most effective use of emerging technologies without having the most effective employees to know what to do with it.

The paradox is, at the core, the more we talk about the issues of technology, the more important the issues of human capital become.”

______________________________________________________________________________

“The government’s efforts to reskill and retrain the federal workforce are expected to ramp up in 2019 as the White House’s embrace of emerging technologies like artificial intelligence and automation continues to expand.

While these technology solutions are in the early stages of adoption, human input and, more importantly, institutional knowledge will be essential to ensuring the efficiency gains expected of AI and automation. And with some estimating that the government will need to reskill up to 300,000 federal employees during this AI boom, an early question for agency leaders will be how to introduce the new technologies that will transform the future of public service.

“I think you’ve got to start first with what you are going to use it for,” said Meagan Metzger, CEO of Dcode, which helps emerging technology companies navigate entry into federal markets.

Metzger said the benefits of AI often break down into two categories: doing things that humans shouldn’t be doing, like labor-intensive administrative work, and doing things that humans can’t do, such as sifting through petabytes of data for potential solutions. Determining how agencies plan to use AI and how employees will work with it is an essential first step.

“I think first, it’s identifying which workforce you are actually going to affect with AI because, depending on how you get there, it’s very different,” Metzger said. “You are going to have to look at, once you assess your skill gaps, what tools do you actually need to enable them to train, which could be AI, or stuff like data science workbenches so people can collaborate.”

The Trump administration is actively pursuing both paths as part of its President’s Management Agenda, calling on agencies to adopt automated technologies that will allow federal employees to pursue more “high-value” work while also using it to better leverage troves of government data as an asset.

Making incremental introductions

One challenge of these reskilling efforts is that they are occurring amid a massive technological transformation in the federal government.

While agencies are also consolidating data centers, determining which services to migrate to the cloud and bolstering their cybersecurity posture, deciding how to introduce a workforce-changing technology like automation or AI can be a daunting task, especially with the pressure to catch up to the operations currently happening in the private sector.

“The challenge that I think a lot of agencies have is that they get paralyzed,” Metzger said. “[They think], ‘I need to do a complete infrastructure modernization, and I need to have an entire data management strategy complete before I can do any of those things,’ and that’s unfortunately not fast.”

She said the easier path for agencies to adopt automation is to identify small tasks on which to test the technology and then scale it up from there.

“Pick a business or a mission problem,” Metzger said. “The way that technology is structured now with cloud, you can start slowly, incrementally modernizing their environment. Maybe there’s a program that has a discrete set of data sources, you can modernize that and start moving that program to AI. Then, slowly start picking off the programs until you get to a total solution.”

The other advantage of an incremental approach is that it allows agency leaders to leverage small wins to cultivate a critical following, rather than trying to institute wholesale change, said Department of Transportation CIO Vicki Hildebrand, whose last day in government is Jan. 4.

“I think it goes back to those organic opportunities. You see an opportunity, you start with a baby step and I think that’s how you do it. Then you show success, then you’ve got a few more believers and then you have another opportunity,” she said. “All of a sudden, you have a stakeholder who has bought in and you’re not pitching it to somebody who’s half listening.”

Tasked with modernizing the technology infrastructure of a department that encompasses regulating air, rail and automotive travel, Hildebrand said her innovation successes often came by earning small wins and using their success to broaden their adoption.

“This is exactly the way IT loves to live,” she said. “When the stakeholders think there is a more modern technological way to help us and you grab those opportunities. Then you try to get the few people that are going to be the best to help that initiative move forward. Then they bring on a few people and then there’s a new opportunity.”

The Culture Question

Given the already competitive market for cyber, data science and IT talent, the Trump administration recently announced plans to tap the federal workforce to plug critical skills gaps and to prepare for the adoption of automation technology through reskilling.

Federal CIO Suzette Kent recently announced the Federal Cybersecurity Reskilling Academy, as well as three pilot programs in 2019 to address technology skills gaps, leadership management and robotic process automation (RPA).

While those training programs could address reskilling efforts from a high level, ensuring that frontline employees will buy into such a transformation will likely determine whether the efforts can be sustained across the enterprise.

One way to generate “organic opportunities” is to take some of the workforce’s most labor-intensive tasks off the table. That’s why Hildebrand said she expects RPA to make a big splash in the federal government in 2019.

“You can remove so much manual work and improve quality by using RPA,” she said. “So I see a lot of that catching fire around some of the things that RPA is really good at. It’s not meant for everything, but the manual, repetitive, copy-paste kinds of activities, I would be surprised if there wasn’t a lot of that happening to free up resources for other things.”

Hildebrand added: “The challenge on the heels of that, though, is now you have people who have been doing copy-paste for a long period of time and a robot is now doing that, you need programs to teach them other things.”
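The flavor of task she describes is easy to picture. As a toy Python illustration (file and column names are invented; real RPA products such as UiPath drive the user interface of existing applications rather than working through files), the bot’s job is essentially this:

import csv

# Toy stand-in for a copy-paste job: re-keying fields from one system's
# export into another system's intake format, with no human transcription.
with open("export.csv", newline="") as src, \
        open("intake.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    writer.writerow(["case_id", "employee", "amount"])
    for row in csv.DictReader(src):
        writer.writerow([row["Case Number"], row["Name"], row["Amount Due"]])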

Using tech to build better human capital management

Agencies like the Treasury Department’s Bureau of the Fiscal Service, the General Services Administration and others have tested RPA pilot programs in the last year, and the technology’s potential to make operations more efficient has officials looking to it as an early step in the government’s embrace of automation.

Jim Walker, director of public sector marketing at RPA vendor UiPath, said the technology doesn’t so much require reskilling as it does restructuring the work. And within that restructuring lies the opportunity for agencies to eliminate time-consuming tasks to pursue more workforce development.

“It’s not as if robotics is changing the workforce any different than the cell phone has, any different than the computer has,” said Walker, a former federal employee. “Whether it’s through the government, through a contractor employee or through their own home, retraining is not nearly the problem it was 15 or 20 years ago.”

The President’s Management Agenda has also called on the Office of Management and Budget and its Office of Federal Procurement Policy to tackle some of those reskilling challenges by developing new training methods for employees impacted by automation. The plan is expected in the second quarter of fiscal 2019.

Don Kettl, academic director of the University of Texas at Austin’s LBJ Washington Center, said that at the heart of the federal government’s technological revolution there must also be a focus on revolutionizing how the government trains its employees to match.”