Tag Archives: artificial intelligence

Automation Is Advancing In Federal Acquisition

Standard
Image: “FCW”

FCW

Federal agencies are evolving from rote robotic process automation bots in their acquisition operations toward more complex artificial intelligence, aiming to inject even more efficiency into contracting.

____________________________________________________________________________

“We do have seeds of true AI sprouting” for federal acquisition applications, said Omid Ghaffari-Tabrizi, director of the Acquisitions Centers of Excellence at the General Services Administration, during a June 3 Defense One virtual event on automation in acquisition.

While robotic process automation (RPA) bots that handle rote, repetitive chores and free up humans for other work are increasingly common, AI is more complicated, according to Ghaffari-Tabrizi.

GSA uses a bot to track, find and change Section 508 disability clauses in contracts to ensure compliance, work that goes beyond rote processing, he said. That review takes “some degree of intelligence,” but the output is always reviewed by humans to ensure accuracy.
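The FCW piece doesn’t describe how GSA’s clause bot is actually built, but the pattern it sketches (find the clause language, surface it, and keep a human reviewer in the loop) is easy to illustrate. The following minimal Python sketch is purely hypothetical; the regex patterns and contract ID are invented for illustration, not drawn from GSA’s system.

```python
# Minimal, hypothetical sketch of a clause-scanning bot: find Section 508
# language in contract text and queue every hit for human review.
# (Not GSA's actual bot; patterns and contract ID are invented.)
import re
from dataclasses import dataclass

CLAUSE_PATTERNS = [
    re.compile(r"section\s*508", re.IGNORECASE),
    re.compile(r"accessibility\s+standards", re.IGNORECASE),
]

@dataclass
class Finding:
    contract_id: str
    snippet: str
    needs_human_review: bool = True  # a person always confirms the bot's output

def scan_contract(contract_id: str, text: str) -> list:
    """Return every passage that looks like a Section 508 clause reference."""
    findings = []
    for pattern in CLAUSE_PATTERNS:
        for match in pattern.finditer(text):
            start = max(match.start() - 40, 0)
            snippet = text[start:match.end() + 40].strip()
            findings.append(Finding(contract_id, snippet))
    return findings

if __name__ == "__main__":
    sample = "The contractor shall comply with Section 508 accessibility standards."
    for finding in scan_contract("GS-00X-12345", sample):  # invented contract ID
        print(finding)
```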

While RPA bots can be implemented relatively quickly because they automate established processes, AI takes more time and expertise because it forges new paths through processes and data, said Michelle McNellis, who is also an acquisition director at GSA.

GSA has been at the forefront of implementing bots, with dozens automatically performing repetitive electronic processes, such as processing offers under the Federal Acquisition Service’s Multiple Award Schedules and sending invoice notifications.

It’s also using bots for its FASt Lane, eOffer and eMod processes, said Ghaffari-Tabrizi. FASt Lane is the agency’s program to accelerate how IT contractors get new products onto its buying schedules, while eOffer and eMod let vendors electronically submit offers and contract modifications.

Other federal agencies looking to harness similar RPA capabilities, said McNellis, should move deliberately, getting input from all agency operations, including finance, IT, acquisition and management. Legal issues and IT capabilities need to be addressed before moving ahead with either AI or RPA efforts, she said.”

DoD Creating Standards For Artificial Intelligence (AI) Programs

Standard
Image: “Breaking Defense”

BREAKING DEFENSE

DoD’s Research and Engineering (R&E) office has launched a new initiative to develop best practices for the department’s many programs designing and building artificial intelligence (AI) applications.

___________________________________________________________________________

AI is one of DoD’s top research and development priorities, a portfolio charged to R&E chief Mike Griffin.

The standards initiative is the brainchild of newly appointed AI technical director Jill Chrisman, said Mark Lewis, DoD’s director of defense research and engineering for modernization, at the virtual “Critical Issues in C4I Conference 2020,” sponsored by AFCEA and George Mason University.

“When Jill first joined us just a couple of weeks ago, I asked her to give me a site view of all the efforts underway in AI across the department, and kind of give me an evaluation of where we stood,” Lewis explained. However, he said, DoD has “so many hundreds of programs that we really couldn’t do a fair evaluation of each individual activity.”

So, instead R&E has decided “to establish a series of standards, if you will, principles and practices that we consider to be good practices for artificial intelligence engineering,” he said. “I liken it to systems engineering.”

A key goal of the new effort is to break down stovepipes in order to allow the various DoD AI efforts to share databases and applications.

In addition, Lewis said, R&E is aiming to “figure out what are the artificial intelligence applications that will have the biggest impact on the warfighter.” This could involve moving out prototypes rapidly so that warfighters have an opportunity to “play with them, experiment with them, and figure out what makes their job more effective,” he added. At the same time, it would enable warfighters to quickly reject things that are not useful or overly complicated.

Lewis said that developing autonomous systems is another top priority. That portfolio, handled by assistant director Wayne Nickols, is focused on developing autonomous systems that can team seamlessly with humans.

“We want autonomy systems that will operate in ways that put human life at lower risk,” he explained. “If we can have a robotic system as a target, instead of a human being as a target, that’s our preferred approach.”

In his wide-ranging discussion, Lewis also expounded on DoD’s research goals for quantum science — a focus area that he said is somewhat less well developed than others on DoD’s high priority list.

“There is a lot of hype associated with quantum science,” he said bluntly. “People are talking about quantum computers that will, in a few years, replace our fastest supercomputers, quantum communication technology, quantum key encryption techniques. And frankly, a lot of it is promising —  but it’s also very very far term.”

That said, Lewis noted that there are two near-term opportunities for DoD in the field: enabling back-up positioning, navigation and timing capability in case GPS satellites are degraded in any way; and future “exquisite sensors for a variety of applications.”

The Pentagon’s Artificial Intelligence “Black Box”

Standard
Image: “FCW”

FCW

In February, DOD formally adopted its first set of principles to guide ethical decision-making around the use of AI.

With that guidance, defense officials seek to push back on criticism from Silicon Valley and other researchers who have been reluctant to lend their expertise to the military.

____________________________________________________________________________

“The Department of Defense is racing to test and adopt artificial intelligence and machine learning solutions to help sift and synthesize massive amounts of data that can be leveraged by human analysts and commanders in the field. Along the way, it’s identifying many of the friction points between man and machine that will govern how decisions are made in modern war.

The Machine Assisted Rapid Repository System (MARS) was developed to replace and enhance the foundational military intelligence that underpins most of the department’s operations. Like U.S. intelligence agencies, officials at the Pentagon have realized that data — and the ability to speedily process, analyze and share it among components – was the future. Fulfilling that vision would take a refresh.

“The technology had gotten long in the tooth,” Terry Busch, a division chief at the Defense Intelligence Agency, said during an Apr. 27 virtual event hosted by Government Executive Media. “[It was] somewhat brittle and had been around for several decades, and we saw this coming AI mission, so we knew we needed to rephrase the technology.”

The broader shift from manual and human-based decision-making to automated, machine-led analysis presents new challenges. For example, analysts are used to discussing their conclusions in terms of confidence-levels, something that can be more difficult for algorithms to communicate. The more complex the algorithm and data sources it draws from, the trickier it can be to unlock the black box behind its decisions.

“When data is fused from multiple or dozens of sources and completely automated, how does the user experience change? How do they experience confidence and how do they learn to trust machine-based confidence?” Busch said, detailing some of the questions DOD has been grappling with.

The Pentagon has experimented with new visualization capabilities to track and present the different sources and algorithms that were used to arrive at a particular conclusion. DOD officials have also pitted man against machine, asking dueling groups of human and AI analysts to identify an object’s location – like a ship – and then steadily peeling away the sources of information those groups were relying on to see how it impacts their findings and the confidence in those assertions. Such experiments can help determine the risk versus reward of deploying automated analysis in different mission areas.
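The article doesn’t publish the tooling behind these experiments, but the peel-away idea can be pictured in a few lines. The toy Python sketch below, with invented source names, readings and an invented confidence formula, shows how a fused estimate and its confidence might be recomputed as sources are removed.

```python
# Toy sketch of the peel-away experiment (not DOD code): fuse several
# independent position estimates, report a confidence, then drop sources
# one at a time and watch the confidence change.
import statistics

def fuse(estimates_km):
    """Fuse 1-D position estimates; confidence falls as the spread grows."""
    center = statistics.mean(estimates_km)
    spread = statistics.pstdev(estimates_km)
    confidence = 1.0 / (1.0 + spread)  # invented mapping, for illustration only
    return center, confidence

# Invented readings (km along a coastline) from four notional sources.
sources = {"radar": 101.2, "satellite": 100.7, "sigint": 102.9, "tip": 98.5}

remaining = dict(sources)
while remaining:
    center, conf = fuse(list(remaining.values()))
    print(f"{sorted(remaining)}: position ~{center:.1f} km, confidence {conf:.2f}")
    remaining.popitem()  # remove one source and re-evaluate with what is left
```

Dropping or adding a source changes the spread of the remaining estimates, and the toy confidence moves with it, which is the kind of sensitivity such experiments are designed to probe.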

Like other organizations that leverage such algorithms, the military has learned that many of its AI programs perform better when they’re narrowly scoped to a specific function and worse when those capabilities are scaled up to serve more general purposes.

Nand Mulchandani, chief technology officer for the Joint Artificial Intelligence Center at DOD, said the paradox of most AI solutions in government is that they require very specific goals and capabilities in order to receive funding and approval, but that hyper-specificity usually ends up being the main obstacle to more general applications later on. It’s one of the reasons DOD created the center in the first place, and Mulchandani likens his role to that of a venture capitalist on the hunt for the next killer app.

“Any of the actions or things we build at the JAIC we try to build them with leverage in mind,” Mulchandani said at the same event. “How do we actually take a pattern we’re finding out there, build a product to satisfy that and package it in a way that can be adopted very quickly and widely?”

Scalability is an enduring problem for many AI products that are designed for one purpose and then later expanded to others. Despite a growing number of promising use cases, the U.S. government still is far from achieving the desired end state for the technology. The Trump administration’s latest budget calls for increasing JAIC’s funding from $242 million to $290 million and requests a similar $50 million bump for the Defense Advanced Research Projects Agency’s research and development efforts around AI.

Ramping up the technology while finding the appropriate balance in human/machine decision-making will require additional advances in ethics, testing and evaluation, training, education, products and user interface, Mulchandani said.

“Dealing with AI is a completely different beast in terms of even decision support, let alone automation and other things that come later,” he said. “Even in those situations if you give somebody a 59% probability of something happening … instead of a green or red light, that alone is a huge, huge issue in terms of adoption and being able to understand it.”

https://fcw.com/articles/2020/04/28/dod-ai-black-box-johnson.aspx?oly_enc_id=

Experienced Young Military Professionals Discuss The Future of Warfare

Standard

EDITOR’S NOTE: The following two articles, by a Middle East war veteran at West Point and a Navy military lawyer contemplating warfare technology and the law, should be carefully read by the American public. These young gentlemen are highly visible in their fields. They and their peers are the future leadership of our country.

______________________________________________________________________________

“MODERN WAR INSTITUTE AT WEST POINT” By Matt Cavanaugh

“Victory’s been defeated; it’s time we recognized that and moved on to what we actually can accomplish.

We’ve reached the end of victory’s road, and at this juncture it’s time to embrace other terms, a less-loaded lexicon, like “strategic advantage,” “relative gain,” and “sustainable marginalization.”

A few weeks back, Colombian President Juan Manuel Santos and Harvard Professor Steven Pinker triumphantly announced the peace deal between the government of Colombia and the Revolutionary Armed Forces of Colombia (FARC). While positive, this declaration rings hollow as the exception that proves the rule: a tentative treaty at the end of a conflict in which roughly 7,000 guerrillas held a country of 50 million hostage for over 50 years, at a cost of some 220,000 lives. Churchill would be aghast: Never in the history of human conflict were so many so threatened by so few.

One reason this occasion merited a more somber statement: military victory is dead. And it was killed by a bunch of cheap stuff.

The term “victory” is loaded, so let’s stipulate it means unambiguous, unchallenged, and unquestioned strategic success – something more than a “win,” because, while one might “eke out a win,” no one “ekes out a victory.” Wins are represented by a mere letter (“w”); victory is a tickertape with tanks.

Which is something I’ll never see in my military career; I should explain. When a government has a political goal that cannot be obtained other than by force, the military gets involved and selects some objective designed to obtain said goal. Those military objectives can be classified broadly, as Prussian military theorist Carl von Clausewitz did, into either a limited aim (i.e. “occupy some…frontier-districts” to use “for bargaining”), or a larger aim to completely disarm the enemy, “render[ing] him politically helpless or militarily impotent.” Lo, we’ve arrived at the problem: War has become so inexpensive that anyone can afford the traditional military means of strategic significance – so we can never fully disarm the enemy. And a perpetually armed enemy means no more parades (particularly in Nice).

It’s a buyer’s market in war, and the baseline capabilities (shoot, move, and communicate) are at snake-belly prices. Tactical weaponry like AK-47s is plentiful, rented, and shipped from battlefield to battlefield, and the most lethal weapon U.S. forces encountered at the height of the Iraq War, the improvised explosive device, could be had for as little as $265. Moving is cost-effective too in the “pickup truck era of warfare,” and reports on foreign fighters in Syria remind us that cheap, global travel makes it possible for nearly anyone on the planet to rapidly arrive in an active war zone with money to spare. Communicating is cheaper still: the terror group Lashkar-e-Taiba shut down the megacity Mumbai in 2008, using unprotected social media networks, for less than what many traveling youth soccer teams spend in a season, and coordination has gotten even easier for the emerging warrior with today’s widely available unhackable phones and apps. These low and no-cost commo systems are the glue that binds single wolves into coordinated wolf-packs with guns, exponentially greater than the sum of their parts. The good news: Ukraine can crowdfund aerial surveillance against Russian incursions. The less-good news: strikes, like 9/11, cost less than three seconds of a single Super Bowl ad. With prices so low, why would anyone ever give up their fire, maneuver, and control platforms?

All of which explains why military victory has gone away. Consider the Middle East, and the recent comment by a Hezbollah leader, “This can go on for a hundred years,” and his comrade’s complementary analysis, that “as long as we are there, nobody will win.” With such a modestly priced war stock on offer, it’s no wonder Anthony Cordesman of the Center for Strategic and International Studies agrees with the insurgents, recently concluding, of the four wars currently burning across the region, the U.S. has “no prospect” of strategic victory in any. Or that Modern War Institute scholar Andrew Bacevich assesses bluntly, “If winning implies achieving stated political objectives, U.S. forces don’t win.” This is what happens when David’s slingshot is always full.

The guerrillas know what many don’t: It’s the era, stupid. This is the nature of the age, as Joshua Cooper Ramo describes, “a nightmare reality in which we must fight adaptive microthreats and ideas, both of which appear to be impossible to destroy even with the most expensive weapons.” He is largely correct, though one point merits minor amendment: it’s meaningless to destroy when it’s so cheap to get back in the game, a hallmark of a time in which Wolverine-like regeneration is regular.

This theme even extends to more civilized conflicts. Take the Gawker case: begrudged hedge fund giant Peter Thiel funded former wrestler Hulk Hogan’s lawsuit against the journalistic insurrectionists at Gawker Media, which forced the website’s writers to lay down their keyboards. However, as author Malcolm Gladwell has pointed out, Gawker’s leader, Nick Denton, can literally walk across the street, with a few dollars, and start right over. Another journalist opined, “Mr. Thiel’s victory was a hollow one – you might even say he lost. While he may have killed Gawker, its sensibility and influence on the rest of the news business survive.” Perhaps Thiel should have waited 50 more years, as Colombia had to, to write his “victory” op-ed? He may come to regret the essay as his own “Mission Accomplished” moment.

True with websites, so it goes with warfare. We live in the cheap war era, where the attacker has the advantage and the violent veto is always possible. Political leaders can speak and say tough stuff, promise ruthless revenge – it doesn’t matter, ultimately, because if you can’t disarm the enemy, you can’t parade the tanks.”

https://rosecoveredglasses.wordpress.com/2019/05/15/military-victory-is-dead/

JIA SIPA

By JOSHUA FIVESON

A new chapter of the international order

The automation of war is as inevitable as conflict itself. Less certain, however, is the international community’s collective ability to predict the many ways that these changes will affect the traditional global order.

The pace of technology is often far greater than our collective ability to contemplate its second and third order effects, and this reality counsels cautious reflection as we enter a new chapter in the age-old story of war and peace.

_________________________________________________________________________

“Robots have long presented a threat to some aspect of the human experience.  What began with concern over the labor market slowly evolved into a full-blown existential debate over the future of mankind.  But lost somewhere in between the assembly line and apocalypse stands a more immediate threat to the global order:  the disruptive relationship between technology and international law.

Jus ad Bellum

Jus ad bellum is the body of international law that governs the initial use of force.  Under this heading, force is authorized in response to an “armed attack.”  However, little discussion has focused on how unmanned technologies will shift this line between war and peace.

Iran’s recent unprovoked attack on one of the United States’ unmanned surveillance aircraft provides an interesting case study.  Though many saw the move as the opening salvo of war, the United States declined to respond in kind.  The President explained that there would have been a “big, big difference” if there was “a man or woman in the [aircraft.]”  This comment seemed to address prudence, not authority.  Many assumed that the United States would have been well within its rights to levy a targeted response.  Yet this sentiment overlooked a key threshold:  could the United States actually claim self-defense under international law?  

Two cases from the International Court of Justice are instructive.  In Nicaragua v. United States, the Court confronted the U.S. government’s surreptitious support and funding of the Contras, a rebel group that sought to overthrow the Nicaraguan government.  Nicaragua viewed the United States’ conduct as an armed attack under international law.  The Court, however, disagreed.

Key to the Court’s holding was the concept of scale and effect.  Although the U.S. government had encouraged and directly supported paramilitary activities in and against Nicaragua, the Court concluded that the scale and effect of that conduct did not rise to the level of an armed attack.  Notably, this was the case regardless of any standing prohibition on the United States’ efforts.

So too in Islamic Republic of Iran v. United States, more commonly known as the “Oil Platforms” case.  The Court analyzed the U.S. government’s decision to bomb evacuated Iranian Oil Platforms in response to Iranian missile and mining operations throughout the Persian Gulf.  Among other things, the Iranian operations injured six crew members on a U.S. flagged oil tanker, ten sailors on a U.S. naval vessel, and damaged both ships.  The Court nonetheless rejected the United States’ claim of self-defense because the Iranian operations did not meet the Nicaragua gravity threshold and thus did not qualify as “armed attacks.”  

Viewed against this backdrop, however contested, it strains reason to suggest that an isolated use of force against an unmanned asset would ever constitute an armed attack.  Never before have hostile forces been able to similarly degrade combat capability with absolutely no risk of casualty.  Though the Geneva Conventions prohibit the “extensive destruction” of property, it is another matter completely to conclude that any unlawful use of force is tantamount to an armed attack.  Indeed, the Nicaragua and Oil Platforms cases clearly reject this reasoning.  This highlights how the new balance of scale and effect will alter the landscape that separates peace and war.

Even assuming an attack on unmanned technology might constitute an armed attack under international law, there arise other complications regarding the degree of force available in response.  The jus ad bellum principles of necessity and proportionality apply to actions taken in self-defense, and the legitimate use of “defensive” force must be tailored to achieve that legitimate end.  A failure to strike this balance runs contrary to long-held principles of international law. 

What, then, happens when a robotic platform is destroyed and the response delayed?  Does the surrogate country have a general right to use limited, belated force in reply?  Maybe.  But a generalized response would likely constitute armed reprisal, which has fallen into disfavor with customary international law. 

To be lawful, the deferred use of defensive force must be tailored to prevent similar attacks in the future.  Anything short of this would convert a country’s inherent right to self-defense into subterfuge for illegal aggression.  Thankfully, this obligation is simply met where the initial aggressor is a developed country that maintains targeting or industrial facilities that can be tied to any previous, or potential future, means of attack.  But this problem takes on new difficulty in the context of asymmetric warfare.   

Non-state actors are more than capable of targeting robotic technology.  Yet these entities lack the traditional infrastructure that might typically (and lawfully) find itself in the crosshairs following an attack.  How, then, can a traditional power use force in response to a successful, non-state assault on unmanned equipment?  It is complicated.  A responsive strike that broadly targets members of the hostile force may present proportionality concerns that are distinct from those associated with traditional attacks that risk the loss of life. 

How would a country justify a responsive strike that targets five members of a hostile force in response to a downed drone?  Does the answer change if fewer people are targeted?  And what if there is no question that those targeted were not involved in the initial act of aggression?  These questions aside, a responsive strike that exclusively targets humans in an attempt to stymie future attacks on unmanned equipment does not bear the same legal foundation as one that seeks to prevent future attacks that risk life.  The international community has yet to identify the exchange rate between robotic equipment and human lives, and therein lies the problem.

Jus in Bello

Robotic warfare will also disrupt jus in bello, the law that governs conduct during armed conflict.  Under the law of armed conflict, the right to use deadly force against a belligerent continues until they have been rendered ineffective, whether through injury, surrender, or detention.  But the right to use force first is not diminished by the well-recognized obligation to care for those same combatants if wounded or captured.  An armed force is not required to indiscriminately assume risk in order to capture as opposed to kill an adversary.  To impose such a requirement would shift risk from one group to another and impose gratuitous tactical impediments.

This sentiment fades, however, once you place “killer robots” on the battlefield.  While there is little sense in telling a young soldier or marine that he cannot pull the trigger and must put himself at greater risk if an opportunity for capture presents itself, the same does not hold true when a robot is pulling the trigger.  The tactical feasibility of capture over kill becomes real once you swap “boots” for “bots” on the ground.  No longer is there the potential for fatality, and the risk calculus becomes largely financial.  This is not to say that robots would obligate a country to blindly pursue capture at the expense of strategy.  But a modernized military might effect uncontemplated restrictions on the traditional use of force under international law.  The justification for kill over capture is largely nonexistent in situations where capture is tactically feasible without any coordinate risk of casualty.

Design is another important part of this discussion.  Imagine a platoon of “killer robots” engages a small group of combatants, some of whom are incapacitated but not killed.  A robot that is exclusively designed to target and kill would be unable to comply with the internationally recognized duty to care for wounded combatants.  Unless medical care is a contemplated function of these robots’ design, the concept of a human-free battlefield will remain unrealized.  Indeed, the inherent tension between new tech and old law might indicate that at least some human footprint will always be required in theater—if only after the dust of combat settles.

Reports from China suggest that robots could replace humans on the battlefield within the next five years, and the U.S. Army is slated to begin testing a platoon of robotic combat vehicles this year.  Russia, too, is working to develop combat robots to supplement its infantry.  This, of course, raises an important question: what happens if the most powerful, technologically adept countries write off traditional obligations at the design table?  Might often makes right on the international stage, and given the lack of precedent in this area, the risk demands attention.

Law of the Sea

The peacetime naval domain provides another interesting forum for the disruptive effect of military robotics.  Customary international law, for example, has long recognized an obligation to render assistance to vessels in distress—at least to the extent feasible without danger to the assisting ship and crew.  This is echoed in a variety of international treaties ranging from the Geneva Convention on the High Seas to the United Nations Convention on the Law of the Sea.  But what becomes of this obligation when ships of the future have no crew?

Navies across the world are actively developing ghost fleets.  The U.S. Navy has called upon industry to deliver ten Large Unmanned Surface Vehicle ships by 2024, and just recently, the “Sea Hunter” became the first ship to sail autonomously between two major ports.  This comes as no surprise given the Navy’s 2020 request for $628.8 million to conduct research and development involving unmanned surface and sub-surface assets.  The Chinese, too, have been exploring the future of autonomous sea power.  

This move highlights the real possibility that technology may relieve the most industrially developed Navies of traditional international obligations.  Whether fortuitously or not, the size of a ghost fleet would inversely reflect a nation’s ability—and perhaps its obligation—to assist vessels in distress. 

This would shift the humanitarian onus onto less-developed countries or commercial mariners, ceding at least one traditional pillar of international law’s peacetime function.  This also opens the door to troubling precedent if global superpowers begin to consciously design themselves out of long-held international obligations.

The move to robotic sea vessels also risks an increase in challenges to the previously inviolable (and more-easily defendable) sovereignty of sea-going platforms.  In 2016, for example, a Chinese warship unlawfully detained one of the United States’ underwater drones, which, at the time, was being recovered in the Philippine exclusive economic zone.  The move was widely seen as violating international maritime law.  But the Chinese faced no resistance in their initial detention of the vessel and the United States’ response consisted of nothing more than demands for return.  Unlike their staffed counterparts, unmanned vessels are more prone to illegal seizure or boarding—in part because of the relatively low risk associated with the venture. 

This dynamic may increase a nation’s willingness to unlawfully exert control over another’s sovereign vessel while simultaneously decreasing the aggrieved nation’s inclination (or ability) to use force in response.  This same phenomenon bears out in the context of Unmanned Aerial Vehicles, for which the frequency and consequence of hostile engagement are counter-intuitively related.  But unmanned sea vessels are far more prone to low-cost incursion than their winged counterparts.  This highlights but one aspect of the normative consequence effected by unmanned naval technology, which, if unaddressed, stands to alter the cost-benefit analysis that often underlies the equilibrium of peace.”

https://jia.sipa.columbia.edu/online-articles/disruptive-technology-and-future-international-law

ABOUT THE AUTHOR:

Joshua Fiveson
Joshua Fiveson 

Joshua Fiveson is an officer in the U.S. Navy and a graduate of Harvard Law School.  Fiveson previously served as the youngest-ever military fellow with the Institute of World Politics, a national security fellow with the University of Virginia’s National Security Law Institute, a national security fellow with the Foundation for Defense of Democracies, and a leadership fellow with the Harvard Kennedy School’s Center for Public Leadership.  Fiveson also served as a John Marshall fellow with the Claremont Institute and a James Wilson fellow with the James Wilson Institute. 

How Marines And Robots Will Fight Side By Side

Standard
Illustrations by Jacqueline Belker/Staff

“MARINE CORPS TIMES”

This imagined scenario involves a host of platforms, teamed with in-the-flesh Marines, moving rapidly across wide swaths of the Pacific.

Those small teams, maybe a platoon or even a squad, could work alongside robots in the air, on land, at sea and undersea to gain a short-term foothold, one that could then control a vital sea lane that Chinese ships would have to bypass or risk being sunk simply to transit.

____________________________________________________________________________

“Somewhere off the coast of a tiny island in the South China Sea, small robotic submarines snoop around, looking for underwater obstacles as remotely controlled ships prowl the surf. Overhead, multiple long-range drones scan the beachhead and Chinese military fortifications deeper into the hills.

A small team of Marines, specially trained and equipped, lingers farther out, having launched from their amphibious warship, as did the robot battle buddies sent to scout this spit of sand.

Their Marine grandfathers and great-grandfathers might have rolled toward this island slowly, dodging sea mines and artillery fire only to belly crawl in the surf as they were raked with machine gun fire, dying by the thousands.

But in the near-term battle, the suicidal charge to gain ground on a fast-moving battlefield is a robot’s job.

It’s a bold, technology-heavy concept that’s part of Marine Corps Commandant Gen. David Berger’s plan to keep the Corps relevant and lethal against a perceived growing threat in the rise of China in the Pacific and its increasingly sophisticated and capable Navy.

In his planning guidance, Berger called for the Marines and Navy to “create many new risk-worthy unmanned and minimally manned platforms.” Those systems will be used in place of and alongside the “stand-in forces,” which are in range of enemy weapons systems to create “tactical dilemmas” for adversaries.

“Autonomous systems and artificial intelligence are rapidly changing the character of war,” Berger said. “Our potential peer adversaries are investing heavily to gain dominance in these fields.”

And a lot of what the top Marine wants makes sense for the type of war fighting, and budget constraints, that the Marine Corps will face.

“A purely unmanned system can be very small, can focus on power, range and duration and there are a lot of packages you can put on it — sensors, video camera, weapons systems,” said Dakota Wood, a retired Marine lieutenant colonel and now senior research fellow at The Heritage Foundation in Washington, D.C.

The theater of focus, the Indo-Pacific Command, almost requires adding a lot of affordable systems in place of more Marine bodies.

That’s because the Marines are stretched across the world’s largest ocean and now face anti-access, area-denial systems run by the Chinese military that the force hasn’t had to consider since the Cold War.

“In INDOPACOM, in the littorals, the Marine Corps is looking to kind of outsource tasks that machines are looking to do,” Wood said. “You’re preserving people for tasks you really want a person to handle.”

The Corps’ shift back to the sea and closer work with the Navy has been brewing in the background in recent years as the United States slowly has attempted to disentangle itself from land-based conflicts in the Middle East. Signaling those changes, recent leaders have published warfighting concepts such as expeditionary advanced base operations, or EABO, and littoral operations in a contested environment.

EABO aims to work with the Navy’s distributed maritime operations concept. Both allow the U.S. military to pierce the anti-access, area-denial bubble. The littoral operations in a contested environment concept makes way for the close-up fight in the critical space where the sea meets the land.

That’s meant a move to prioritize the Okinawa, Japan-based III Marine Expeditionary Force as the leading edge for Marine forces and experimentation, as the commandant calls for the “brightest” Marines to head there.


Getting what they want

But the Corps, which traditionally has taken a backseat in major acquisitions, faces hurdles in adding new systems to its portfolio.

It was only in 2019 that the Marines gained funding to add more MQ-9 Reaper drones, getting the money to purchase three Reapers in this year’s budget. But that’s a platform that’s been in wide use by the Air Force for more than a decade.

But that’s a short-term fix; the Corps’ goal remains the Marine Air-Ground Task Force unmanned aircraft system, expeditionary, or MUX.

The MUX, still under development, would give the Corps a long-range drone with vertical takeoff capability to launch from amphib ships, one that can also provide persistent intelligence, surveillance and reconnaissance, conduct electronic warfare, and coordinate and initiate strikes from other weapons platforms in its network.

Though early ideas in 2016 called for something like the MUX to be in the arsenal, at this point officials are pegging an operational version of the aircraft for 2026.

Lt. Gen. Steven Rudder, deputy commandant for aviation, said at the annual Sea-Air-Space Symposium in 2019 that the MUX remains among the top priorities for the MAGTF.

Sustain and distract

In other areas, Marines are focusing on existing platforms but making them run without human operators.

One such project is the expeditionary warfare unmanned surface vessel. Marines are using the 11-meter rigid-hull inflatable boats already in service to move people or cargo, drop it off and return for other missions.

Logistics are a key area where autonomous systems can play a role. Carrying necessary munitions, medical supplies, fuel, batteries and other items on relatively cheap platforms keeps Marines out of the in-person transport game and instead part of the fight.

In early 2018 the Corps conducted the “Hive Final Mile” autonomous drone resupply demonstration in Quantico, Virginia. The short-range experiment used small quadcopters to bring items like a rifle magazine, MRE or canteen to designated areas to resupply a squad on foot patrol.

The system used a group of drones in a portable “hive” that could be programmed to deliver items to a predetermined site at a specific time and continuously send and return small drones with various items.
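The demonstration’s actual software isn’t public, but the dispatch idea described above (queue resupply requests and hand each one to the next idle drone, earliest delivery first) can be sketched simply. Everything in the following Python sketch, including drone names, grid references and timings, is hypothetical.

```python
# Loose, hypothetical sketch of the "hive" dispatch idea: queue resupply
# requests and hand each one to the next idle quadcopter, earliest delivery
# first. Names, grid references and timings are invented.
from collections import deque
from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    item: str        # e.g. "rifle magazine", "MRE", "canteen"
    grid: str        # destination grid reference
    deliver_at: int  # minutes from now

class Hive:
    def __init__(self, drone_ids):
        self.idle = deque(drone_ids)   # drones waiting in the portable hive
        self.pending = []              # resupply requests not yet assigned

    def submit(self, request):
        self.pending.append(request)

    def dispatch(self):
        """Pair pending requests with idle drones, earliest delivery first."""
        assignments = []
        for req in sorted(self.pending, key=lambda r: r.deliver_at):
            if not self.idle:
                break
            assignments.append((self.idle.popleft(), req))
        assigned = {req for _, req in assignments}
        self.pending = [r for r in self.pending if r not in assigned]
        return assignments

hive = Hive(["quad-1", "quad-2"])
hive.submit(Request("MRE", "grid-A", deliver_at=15))
hive.submit(Request("rifle magazine", "grid-B", deliver_at=5))
print(hive.dispatch())  # the magazine run (earliest delivery) goes out first
```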

Extended to longer ranges on larger platforms, that becomes a lower-risk way to get a helicopter’s worth of supplies to far-flung Marines on the small atolls that dot vast ocean expanses.

Shortly after that demonstration, the Marines put out requests for concepts for a similar drone resupply system that would carry up to 500 pounds at least 10 km. That still is not enough distance for larger-scale warfighting, but it is the beginning of the type of resupply a squad or platoon might need in a contested area.

In 2016, the Office of Naval Research used four rigid-hull inflatable boats with unmanned controls to “swarm” a target vessel, showing that such boats can also be used to attack or distract vessels.

And the distracting part can be one of the best ways to use unmanned assets, Wood said.

Wood noted that while autonomous systems can assist in classic “shoot, move, communicate” tactics, they can sometimes be even more effective in sustaining forces and distracting adversaries.

“You can put machines out there that can cause the enemy to look in that direction, decoys tying up attention, munitions or other platforms,” Wood said.

And that distraction goes further than actual boats in the water or drones in the air.

As with the MUX, the Corps is looking at ways to include electronic warfare capabilities in its plans. That would allow robotic systems to spoof enemy sensors, making them think that a small pod of four rigid-hull inflatable boats is a larger flotilla of amphib ships.


Overreliance

Marines fighting alongside semi-autonomous systems isn’t entirely new.

In communities such as aviation, explosive ordnance disposal and air defense, forms of automation, from automatic flight paths to approaches toward bomb sites and recognition of incoming threats, have at least partly been outsourced to software and systems.

But for more complex tasks, not so much.

How robots have worked and will continue to work in formations is an evolutionary process, according to former Army Ranger Paul Scharre, director of the technology and national security program at the Center for a New American Security and author of “Army of None: Autonomous Weapons and the Future of War.”

If you look at military technology in history, the most important use for such tech was focusing on how to solve a particular mission rather than having the most advanced technology to solve all problems, Scharre said.

And autonomy runs on a kind of sliding scale, he said.

As systems get more complex, autonomy will give fewer tasks to the human and more to the robot, helping people better focus on decision-making about how to conduct the fight. And it will allow for one human to run multiple systems.

When you put robotic systems into a squad, you’re giving up a person to run them, and leaders have to decide if that’s worth the trade-off, Scharre said.

The more remote the system, the more vulnerable it might be to interference or hacking, he said. Any plan for adding autonomous systems must build in reliable, durable communication networks.

Otherwise, when those networks are attacked the systems go down.

That means that a Marine’s training won’t get less complicated, only more multifaceted.

Just as Marines continue to train with a map and compass for land navigation even though they have GPS at their fingertips, Marines operating with autonomous systems will need continued training in fundamental tactics and ways to fight if those systems fail.

“Our preferred method of fighting today in an infantry is to shoot someone at a distance before they get close enough to kill with a bayonet,” Scharre said. “But it’s still a backup that’s there. There are still bayonet lugs on rifles, we issue bayonets, we teach people how to wield them.”

Where do they live?

A larger question is where do these systems live? At what level do commanders insert robot wingmen or battle buddies?

Purely for reasons of control and effectiveness, Dakota Wood said they’ll need to be close to the action and Marine Corps personnel.

But does that mean every squad is assigned a robot, or is there a larger formation that doles out the automated systems as needed to the units?

For example, an infantry battalion has some vehicles but for larger movements, leaders look to a truck company, Wood said. The maintenance, care, feeding, control and programming of all these systems will require levels of specialization, expertise and resources.

The Corps is experimenting with a new squad formation, putting 15 instead of 13 Marines in the building block of the infantry. Those additions were an assistant team leader and a squad systems operator. Those are exactly the types of human positions needed to implement small drones, tactical level electronic warfare and other systems.

The MUX, still under development, would give the Corps a long-range drone with vertical takeoff capability to launch from amphib ships. (Bell Helicopter)

The Marine Corps leaned on radio battalions in the 1980s to exploit tactical signals intelligence. Much of that capability resided in the larger battalion that farmed out smaller teams to Marine Expeditionary Units or other formations within the larger division or Marine Expeditionary Force.

A company or battalion or other such formation could be where the control and distribution of autonomous systems remains.

But current force structure moves look like they could integrate those at multiple levels. Maj. Gen. Mark Wise, deputy commanding general of Marine Corps Combat Development Command, said recently that the Corps is considering a Marine littoral regiment as a formation that would help the Corps better conduct EABO.

Combat Development Command did not provide details on the potential new regimental formation, but confirmed that a Marine littoral regiment concept is one that will be developed through current force design conversations.

A component of that could include a recently-proposed formation known as a battalion maritime team.

Maj. Jake Yeager, an intelligence officer in I MEF, charted out an offensive EABO method in a December 2019 article on the website War On The Rocks titled “Expeditionary Advanced Maritime Operations: How the Marine Corps can avoid becoming a second land Army in the Pacific.”

Part of that includes the maritime battalion, creating a kind of Marine air-sea task force. Each battalion team would include three assault boat companies, one raid boat company, one anti-ship missile boat battery and one reconnaissance boat company.

The total formation would use 40 boats, at least nine of which would be dedicated unmanned surface vehicles, while the rest would be developed with unmanned or manned options, much like the rigid-hulled inflatable boats the Corps is currently experimenting with.”

https://www.marinecorpstimes.com/news/your-marine-corps/2020/02/03/war-with-robots-an-inside-look-at-how-marines-and-robots-will-fight-side-by-side/

The Democratization Of Artificial Intelligence (And Machine Learning)

Standard

FEDERAL NEWS NETWORK

Artificial intelligence programs are multiplying like rabbits across the federal government. The Defense Department has tested AI for predictive maintenance on vehicles and aircraft.

Civilian agencies have experimented with robotic process automation. RPA pilots at the General Services Administration and the IRS helped employees save time on repetitive, low-skill tasks.

______________________________________________________________________________

This content is provided by Red Hat

On the industry side, Chris Sexsmith, Cloud Practice Lead for Emerging Technologies at Red Hat, says AI adoption has reached the point where companies are becoming more concerned with a second layer: it’s not only about leveraging AI itself, but also about how to manage the data effectively.

“What are some of the ethical concerns around using that data?” Sexsmith asked. “Essentially, how does an average company or enterprise stay competitive in this industry while staying in line with always-evolving rules? And ultimately, how do we avoid some of the pitfalls of artificial intelligence in that process?”

Some research-based agencies are starting to take a look at the idea of ethical uses for AI and data. Federal guidance is still forthcoming, but on May 22, 40 countries including the U.S. signed off on a common set of AI principles through the Organization for Economic Cooperation and Development (OECD).

But one of the biggest concerns right now is the “black box.” Essentially, once an AI has analyzed data and provided an output, it’s very difficult to see how that answer was reached. But Sexsmith said agencies and organizations can take steps to avoid the black box with Red Hat’s Open Data Hub project.

“Open Data Hub is designed to foster an open ecosystem for AI/ML – a place for users, agencies, and other open source software vendors to build and develop together. As always at Red Hat, our goal is to be very accessible for users and developers to collectively build and share this next generation of toolsets,” Sexsmith said. “The ethical benefits in this model are huge – the code is open to inspection, freely available and easy to examine. We effectively sidestep the majority of black box scenarios that you may get with other proprietary solutions. It’s very easy to inspect what’s happening – the algorithms and code that are used on your datasets to tune your models, for instance – because they are 100% open source and available for analysis.”
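That claim about inspectability can be made concrete with a small, purely illustrative sketch. The example below uses scikit-learn and a synthetic data set; both are assumptions chosen only for illustration, not an Open Data Hub-specific workflow. The idea is the kind of review Sexsmith describes: measuring which inputs actually drive a model’s answers instead of treating the model as a black box.

# Illustrative sketch only: open source model inspection on synthetic data.
# scikit-learn is assumed as the example toolkit; this is not an Open Data Hub API.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an agency data set.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance asks how much accuracy drops when each input is shuffled,
# giving a human reviewer a concrete view of which inputs drive the model's output.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"feature_{idx}: {result.importances_mean[idx]:.3f} +/- {result.importances_std[idx]:.3f}")

Because the code, the data handling and the model are all open to inspection, this kind of audit can be rerun by anyone reviewing the system, which is the practical point of the “no black box” argument.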

Open Data Hub is a machine-learning-as-a-service toolbox built on top of Red Hat’s OpenShift, a platform for managing Linux containers. It is also designed to be portable, running in hybrid environments across on-premises and public clouds.

“We aim to give the data scientists, engineers and practitioners a head start with the infrastructure components and provide an easy path to running data analytics and machine learning in this distributed environment,” Sexsmith said. “Open Data Hub isn’t one solution, but an ecosystem of solutions built on OpenShift, our industry-leading solution centered around Kubernetes, which handles distributed scheduling of containers across on-prem and cloud environments. ODH provides a pluggable framework to incorporate existing software and tools, thereby enabling your data scientists, engineers and operations teams to execute on a safe and secure platform that is completely under your control.”
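The mechanic behind that description can be sketched briefly: package an ML workload as a container and let Kubernetes schedule it wherever the cluster runs. The snippet below is a minimal, hypothetical example using the official Kubernetes Python client; the namespace, job name and container image are placeholders, and nothing here is an Open Data Hub-specific interface.

# Hypothetical sketch: submitting a containerized training run to a Kubernetes/
# OpenShift cluster with the official Kubernetes Python client. The namespace,
# job name and image are placeholders, not Open Data Hub-specific names.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running on the cluster

job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="train-demo-model"),
    spec=client.V1JobSpec(
        backoff_limit=1,
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="trainer",
                        image="registry.example.com/ml/train:latest",  # placeholder image
                        command=["python", "train.py"],
                    )
                ],
            )
        ),
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="data-science", body=job)

Because the same job definition runs on-premises or in a public cloud, this is the sense in which a toolbox built on Kubernetes can be portable across hybrid environments.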

Red Hat is currently partnered with companies like NVIDIA, Seldon.io, and PerceptiLabs on the Open Data Hub project. It’s also working on the Mass Open Cloud, a collaboration of academia, industry and the state of Massachusetts.

But Sexsmith sees a lot of possibilities in this space for federal agencies to advance their AI capabilities. Geospatial reconnaissance, law enforcement, space exploration and national labs are just a few of the federal missions that could benefit from AI’s ability to process massive amounts of data in an open, ethical way.

“Federal agencies obviously have a lot of restrictions on how data can be utilized and where that data can reside,” Sexsmith said. “So in this world of hybrid cloud, there is a need to be cautious and innovative at the same time. It is easy to inadvertently build bias into AI models and possibly make a bad situation worse. Full control of data and regular reviews of both code and data, including objective reviews of ML output, should be a top priority. At minimum, a human should always be in the loop. And while the simplicity of a proprietary service is often appealing, there is danger in not fully understanding how machine-learning results are reached. Code and data are more intertwined than ever, and the rules around data and privacy are always evolving. Maintaining control of your data in a secure open source environment is a smart move, and a primary goal of Open Data Hub.”

Artificial Intelligence And The Potential To Replace The Federal Work Force

Standard
Image: “Irishtimes.com

FEDERAL NEWS NETWORK” By Permission – Jeff Neal

Employees will need new skills. OK. Got that. What new skills will they need? Are we talking about the skills of the tech folks in the agency? Yes. Are we talking about the people who will use the tech? Yes.

Are we talking about the agency’s customers? Yes. So we are talking about the potential retraining of the bulk of the federal workforce over a period of years.

______________________________________________________________________________

“It is hard to avoid seeing articles and studies that talk about artificial intelligence (AI) and how it will provide many benefits and open the door to countless risks. A recent two-part Partnership for Public Service report — “More Than Meets AI” — talked about steps agencies should take to communicate with their employees, ensure they have the right skills, minimize risk and build confidence in systems.

All of those are good things to think about. It is true that the potential for AI is so far-reaching that it will certainly change how employees work, present risks we are only beginning to understand and change how the American people interact with the government. The problem with a lot of what I am reading is that it does not take the promise of AI and present concrete examples of how something we all are used to seeing and experiencing will change.

We have retrained people before. When we started moving from paper to mainframe-based systems, we trained employees how to use the dumb terminals that started appearing in their offices. When the first personal computers started appearing in offices, we taught people how to use them, found ways to use the capabilities of the technology and then gradually transformed the way everyone works.

The transformation in those days was slow and mostly predictable. It was a move from paper and pencil to digital, but much of the work replicated what was already being done. While the change was predictable, it was also far-reaching. As I wrote in October last year, during the 1950s, the federal government employed more than a million clerks. Those jobs were mostly automated out of existence. By 2014, the number was down to 123,000. Now the number is down to 106,000.

The fact that we could replace 900,000 jobs and not have tremendous disruption is partly because it was a gradual transformation, partly because it affected the lowest graded jobs where turnover was traditionally high, and partly because it changed the nature of how the most repetitive tasks were done. But it did not change the fundamental work being done, as much as we might think.

The federal government was part of a much larger move to an economy based on knowledge work. Knowledge workers derive their economic value from the knowledge they bring to the table. Clerical work, much like trade and craft work, brought value mostly because of the labor the employees carried out, not their technical and programmatic skills. As those jobs disappeared, they were replaced with people whose knowledge was their contribution.

That transformation is the reason I have said for a long time that the federal government is actually much larger as a percentage of the population than it was in the 1950s. In 2014, I wrote a post that showed how that happened. At the time, there was one nonclerical federal employee for every 183 U.S. residents; in the 1950s, there was one for every 503 residents.

The change in the federal workforce that was driven by the increased use of technology was enabled by increased government spending and the fact that the number of federal employees appeared to remain relatively constant. In inflation-adjusted dollars, federal spending is almost five times as much per U.S. resident as it was in the 1950s.

When we experience the next wave of AI-enabled changes, can we expect the same thing to happen? Is it likely that we will continue to see federal spending increase at the rate it has in the past 60 years? Will large numbers of federal jobs be replaced with technology, only to reappear in another form? I think the answers to those questions are going to drive federal agency priorities for years to come.

Will federal spending increase? If the recent spending agreement is any measure, absolutely. The last big attempt by Congress to put itself on a fiscal diet was sequestration. Remember that? They put automatic cuts in place so they could force themselves to stop spending. Then they spent trillions. Spending kept going up because politicians will spend money to get votes.

A free-spending Congress means we are likely to see the dollars continue to flow. The fact that 85% of federal jobs are not in the National Capital Region means members of Congress are not going to want to see real reductions in the number of federal jobs. So it is safe to predict that the number of federal workers is going to continue to hover around two million. Add the flowing money, the desire to protect jobs in congressional districts and the emerging wave of AI, and the result will be a radically transformed federal workplace. The difference this time is that the pace of advances in technology is increasing and the capabilities we will see from AI will replace knowledge workers to a degree we have not seen before.

This post is the first of a series that will look at the impact of AI. Rather than addressing it in broad terms, future posts will take a look at one type of federal job and examine how the work is performed today and what we can expect as technology develops. I will also make some recommendations on how that transition can come about and what will happen to the employees.

I have more than 40 years of experience in human resources, so that is the occupation I will examine. The changes we can expect in HR and how the government can make those changes will translate to other types of work as well. The next post in this series will be in two weeks.”

ABOUT THE AUTHOR:

Jeff Neal is a senior vice president for ICF and founder of the blog, ChiefHRO.com. Before coming to ICF, Neal was the chief human capital officer at the Homeland Security Department and the chief human resources officer at the Defense Logistics Agency.

Hi Tech Weapons Today – A 40 MM Drone Canister With A “Can-Doom” Attitude – And It’s Cheap

Standard
(Image composite, DefendTex / US Army photo by Tia Sokimson)

C4ISRNET

Drone 40, produced by Melbourne-based defense technology firm DefendTex, is a drone designed to be fired from a 40 mm grenade launcher.

It is a range expander for infantry, a novel loitering munition, and a testament to the second-order effects of a thriving drone parts ecosystem. Drone 40 is designed to fly with minimal human involvement.

______________________________________________________________________________

“A 40mm canister is an unusual form factor for a quadcopter, but not an unproductive one. Like the endless variation on a simple form seen in beetles, quadcopters combine four rotors, internal sensors and remote direction with the adaptability to fit into any ecological niche.

Drone 40 was created to solve a specific range problem: the weapons carried by Australian infantry are accurate to about 500 meters, while the AK-74s carried by adversaries can reach out to 800 meters (though the accuracy at that range is disputed). Even if the incoming fire is only suppressive, Australia was looking for a way to let its infantry fight back, but not one that required changing the gun or adding a lot more weight to what soldiers were already carrying.

“The only thing that we had in the infantry kit with any utility was a 40 millimeter grenade launcher,” which led to the design of the Drone 40, said DefendTex CEO Travis Reddy. Rather than overtaxing the launcher with a medium-velocity round that could travel the distance needed, the launcher would instead give a boost to a drone-borne munition that would then fly under its own power the rest of the way to the target.

The overall appearance of the Drone 40 is that of an oversized bullet. Four limbs extend from the cylindrical body, with rotors attached. In flight, it gives the appearance of a rocket traveling at perpendicular angles, the munition suspended below the rotors like a Sword of Damocles. It is a quadcopter, technically.

Drone 40 is a loitering munition, for a very short definition of loiter. When carrying a 110 gram payload, it can fly for about 12 minutes. The person commanding the Drone 40 can remotely disarm the munition, letting the drone land inert for later recovery. When not carrying an anti-personnel or anti-tank munition as payload, it can be outfitted with a sensor. For an infantry unit that wants to scout first, fire later, the sensor module can provide early information, then be swapped out with a deadly payload. Beyond Australia, the company envisions providing the Drone 40 to the Five Eyes militaries.

The drone’s video streaming can transmit 10 km over direct line of sight. Drone 40 can also record video and retransmit it when it comes within range, or it can take still images. With the radio-frequency signal relayed by another aerial system, that range can be extended. Using GPS, the drone can follow a waypoint-plotted course to a target, or it can use its own synthetic aperture radar to identify and track a target. Reddy says it can distinguish the radar profile of, say, a T-72 tank, and then follow it autonomously.

The unit’s development was largely funded by the Australian government in collaboration with Autonomous Systems Collaborative Research, and the drone can work collaboratively, with multiple Drone 40s flying together and operating off the sensor data from a single ISR drone in the swarm. Most of the flying, identifying and tracking of targets is done autonomously; however, human control remains an essential part of the machine’s operation.

“The Department of Defense has very strict rules around any use of autonomy in the battlefield,” says Reddy. “We always have to have either man in the loop or man on the loop. The weapon system will never be autonomous, fully acquire and prosecute a target without authorization and confirmation from the human.”

The autonomy is there, in a sense, to pass off the task of flying a drone into position and only task the operator with making a call once the drone is in place.

“If there’s someone flying this thing or looking at the video feed, they’re not in combat and someone else is not in combat because they have to be protected at that point in time,” says Reddy. “Everything we do is trying to ensure that we have almost fire and forget, just a reminder when it’s on station or it requires a decision to be made; the rest of the time, the operator is in the fight.”

To make Drone 40 work at the small size and desired price point, its makers had to lean on the commercial drone market. Existing versions, Reddy says, cost less than $1,000 apiece, with a goal of getting the cost down to around $500.

“To hit the price point that we are using, we are heavily leveraging the current drone market. We have companies, large companies that sink hundreds of millions of dollars into R&D and we can leverage that investment,” says Reddy. “If we wanted to design a radar on a drone ourselves, it would cost us many millions of dollars to achieve and end up in a price point of $10,000 to $15,000 a unit. Instead we let the automotive industry spend all that money and now they’re producing chips that are in the tens of dollars.”

Drone 40 is also designed to be scaled up. DefendTex is working on Drone 81, a larger round designed to work with mortar tubes, and there are other drone models in the works matched to specific munition sizes. If the iteration is successful, it will create a whole arsenal of possibilities for range-expanding munitions that fit into existing platforms.”

https://www.c4isrnet.com/unmanned/2019/06/05/a-drone-with-a-can-doom-attitude/

The Pentagon Wants Your Thoughts On Artificial Intelligence

Standard
IMAGE: ZACKARY CANEPARI – REDU

“WIRED”

In February, the Pentagon unveiled an expansive new artificial intelligence strategy that promised the technology would be used to enhance everything the department does, from killing enemies to treating injured soldiers.

______________________________________________________________________________

“It said an Obama-era advisory board packed with representatives of the tech industry would help craft guidelines to ensure the technology’s power was used ethically.

In the heart of Silicon Valley on Thursday, that board asked the public for advice. It got an earful—from tech executives, advocacy groups, AI researchers, and veterans, among others. Many were wary of the Pentagon’s AI ambitions and urged the board to lay down rules that would subject the department’s AI projects to close controls and scrutiny.

“You have the potential of large benefits, but downsides as well,” said Stanford grad student Chris Cundy, one of nearly 20 people who spoke at the “public listening session” held at Stanford by the Defense Innovation Board, an advisory group established by the Obama administration to foster ties between the Pentagon and the tech industry. Members include executives from Google, Facebook, and Microsoft; the board is chaired by Eric Schmidt, Google’s former executive chairman.

Although the board is examining the ethics of AI at the Pentagon’s request, the department is under no obligation to heed any recommendations. “They could completely reject it or accept it in part,” said Milo Medin, vice president of wireless services at Google and a member of the Defense Innovation Board. Thursday’s listening session took place amid tensions in relations between the Pentagon and Silicon Valley.

Last year, thousands of Google employees protested the company’s work on a Pentagon AI program called Project Maven, in which the company’s expertise in machine learning was used to help detect objects in surveillance imagery from drones. Google said it would let the contract expire and not seek to renew it. The company also issued guidelines for its use of AI that prohibit projects involving weapons, although Google says it will still work with the military.

Before the public got its say Thursday, Chuck Allen, a top Pentagon lawyer, presented Maven as an asset, saying AI that makes commanders more effective can also protect human rights. “Military advantages also bring humanitarian benefits in many cases, including reducing the risk of harm to civilians,” he said.

Many people who spoke after the floor opened to the public were more concerned that AI may undermine human rights.

Herb Lin, a Stanford professor, urged the Pentagon to embrace AI systems cautiously because humans tend to place too much trust in computers’ judgments. In fact, he said, AI systems on the battlefield can be expected to fail in unexpected ways, because today’s AI technology is inflexible and only works under narrow and stable conditions.

Mira Lane, director of ethics and society at Microsoft, echoed that warning. She also raised concerns that the US could feel pressure to change its ethical boundaries if countries less respectful of human rights forge ahead with AI systems that decide for themselves when to kill. “If our adversaries build autonomous weapons, then we’ll have to react,” she said.

Marta Kosmyna, Silicon Valley lead for the Campaign to Stop Killer Robots, voiced similar worries. The group wants a global ban on fully autonomous weapons, an idea that has received support from thousands of AI experts, including employees of Alphabet and Facebook.

The Department of Defense has been bound since 2012 by an internal policy requiring a “human in the loop” whenever lethal force is used. But at UN discussions the US has argued against proposals for similar international-level rules, saying existing agreements like the 1949 Geneva Conventions are a sufficient check on new ways to kill people.

“We need to take into account countries that do not follow similar rules,” Kosmyna said, urging the US to use its influence to steer the world toward new, AI-specific restrictions. Such restrictions could keep the US from switching its position just because an adversary did.

Veterans who spoke Thursday were more supportive of the Pentagon’s all-in AI strategy. Bow Rodgers, who was awarded a Bronze Star in Vietnam and now invests in veteran-founded startups, urged the Pentagon to prioritize AI projects that could reduce friendly-fire incidents. “That’s got to be right up on top,” he said.

Peter Dixon, who served as a Marine officer in Iraq and Afghanistan, spoke of situations in which frantic calls for air cover from local troops taking heavy fire were denied because US commanders feared civilians would be harmed. AI-enhanced surveillance tools could help, he said. “It’s important to keep in mind the benefits this has on the battlefield, as opposed to just the risk of this going sideways somehow,” Dixon said.

The Defense Innovation Board expects to vote this fall on a document that combines principles that could guide the use of AI with general advice to the Pentagon. It will also concern itself with more pedestrian uses of AI under consideration at the department, such as in healthcare, logistics, recruiting, and predicting maintenance issues on aircraft.

“Everyone gets focused on the pointy end of the stick, but there are so many other applications that we have to think about,” said Heather Roff, a research analyst at Johns Hopkins University’s Applied Physics Laboratory who is helping the board with the project.

The board is also taking private feedback from tech executives, academics, and activists. Friday it had scheduled a private meeting that included Stanford professors, Google employees, venture capitalists, and the International Committee of the Red Cross.

Lucy Suchman, a professor at Lancaster University in the UK, was looking forward to that meeting but is pessimistic about the long-term outcomes of the Pentagon’s ethics project. She expects any document that results to be more a PR exercise than meaningful control of a powerful new technology—an accusation she also levels at Google’s AI guidelines. “It’s ethics-washing,” she said.”

https://www.wired.com/story/pentagon-wants-your-thoughts-ai-may-not-listen/