Tag Archives: Military Ethics

The Pentagon’s Artificial Intelligence “Black Box”

Image: FCW

In February, DOD formally adopted its first set of principles to guide ethical decision-making around the use of AI.

With the guidance, officials seek to push back on criticism from Silicon Valley and other researchers who have been reluctant to lend their expertise to the military.

____________________________________________________________________________

“The Department of Defense is racing to test and adopt artificial intelligence and machine learning solutions to help sift and synthesize massive amounts of data that can be leveraged by their human analysts and commanders in the field. Along the way, it’s identifying many of the friction points between man and machine that will govern how decisions are made in modern war.

The Machine Assisted Rapid Repository System (MARS) was developed to replace and enhance the foundational military intelligence that underpins most of the department’s operations. Like U.S. intelligence agencies, officials at the Pentagon have realized that data, and the ability to speedily process, analyze and share it among components, was the future. Fulfilling that vision would take a refresh.

“The technology had gotten long in the tooth,” Terry Busch, a division chief at the Defense Intelligence Agency, said during an Apr. 27 virtual event hosted by Government Executive Media. “[It was] somewhat brittle and had been around for several decades, and we saw this coming AI mission, so we knew we needed to rephrase the technology.”

The broader shift from manual, human-based decision-making to automated, machine-led analysis presents new challenges. For example, analysts are used to discussing their conclusions in terms of confidence levels, something that can be more difficult for algorithms to communicate. The more complex the algorithm and the data sources it draws from, the trickier it can be to unlock the black box behind its decisions.

“When data is fused from multiple or dozens of sources and completely automated, how does the user experience change? How do they experience confidence and how do they learn to trust machine-based confidence?” Busch said, detailing some of the questions DOD has been grappling with.

The Pentagon has experimented with new visualization capabilities to track and present the different sources and algorithms that were used to arrive at a particular conclusion. DOD officials have also pitted man against machine, asking dueling groups of human and AI analysts to identify an object’s location – like a ship – and then steadily peeling away the sources of information those groups were relying on to see how it impacts their findings and the confidence in those assertions. Such experiments can help determine the risk versus reward of deploying automated analysis in different mission areas.
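The peel-away experiments described above can be sketched in miniature. The following is a purely illustrative example, not a description of any actual DOD system: the source names, weights, and thresholds are hypothetical, and the estimative terms are only loosely modeled on the language intelligence analysts use. It fuses weighted per-source confidences, maps the result to analyst-style phrasing, then removes a source to show how the stated confidence degrades.

```python
# Illustrative sketch only: the sources, weights, and thresholds below are
# hypothetical and do not reflect any actual DOD system.

def fuse_confidence(source_scores):
    """Weighted average of per-source confidence scores (each 0.0-1.0)."""
    total_weight = sum(w for _, w in source_scores.values())
    return sum(s * w for s, w in source_scores.values()) / total_weight

def to_estimative_language(p):
    """Rough mapping from a probability to analyst-style phrasing."""
    if p >= 0.95: return "almost certain"
    if p >= 0.80: return "very likely"
    if p >= 0.55: return "likely"
    if p >= 0.45: return "roughly even chance"
    if p >= 0.20: return "unlikely"
    return "very unlikely"

# (confidence, weight) per source -- hypothetical values
sources = {
    "imagery":     (0.95, 2.0),
    "signals":     (0.70, 1.0),
    "open_source": (0.60, 0.5),
}

p_all = fuse_confidence(sources)
print(f"all sources:     {p_all:.2f} -> {to_estimative_language(p_all)}")

# Peel away the strongest source, as in the experiments above, and watch
# both the fused score and the stated confidence band drop.
del sources["imagery"]
p_reduced = fuse_confidence(sources)
print(f"imagery removed: {p_reduced:.2f} -> {to_estimative_language(p_reduced)}")
```

In this toy setup the fused score drops from roughly 0.83 ("very likely") to 0.67 ("likely") once the imagery source is removed, which is the kind of shift in machine-stated confidence the experiments were designed to surface for human analysts.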

Like other organizations that leverage such algorithms, the military has learned that many of its AI programs perform better when they’re narrowly scoped to a specific function and worse when those capabilities are scaled up to serve more general purposes.

Nand Mulchandani, chief technology officer for the Joint Artificial Intelligence Center at DOD, said the paradox of most AI solutions in government is that they require very specific goals and capabilities in order to receive funding and approval, but that hyper-specificity usually ends up being the main obstacle to more general applications later on. It’s one of the reasons DOD created the center in the first place, and Mulchandani likens his role to that of a venture capitalist on the hunt for the next killer app.

“Any of the actions or things we build at the JAIC we try to build them with leverage in mind,” Mulchandani said at the same event. “How do we actually take a pattern we’re finding out there, build a product to satisfy that and package it in a way that can be adopted very quickly and widely?”

Scalability is an enduring problem for many AI products that are designed for one purpose and then later expanded to others. Despite a growing number of promising use cases, the U.S. government is still far from achieving its desired end state for the technology. The Trump administration’s latest budget calls for increasing JAIC’s funding from $242 million to $290 million and requests a similar $50 million bump for the Defense Advanced Research Projects Agency’s AI research and development efforts.

Ramping up the technology while finding the appropriate balance in human/machine decision-making will require additional advances in ethics, testing and evaluation, training, education, products and user interface, Mulchandani said.

“Dealing with AI is a completely different beast in terms of even decision support, let alone automation and other things that come later,” he said. “Even in those situations, if you give somebody a 59% probability of something happening … instead of a green or red light, that alone is a huge, huge issue in terms of adoption and being able to understand it.”

https://fcw.com/articles/2020/04/28/dod-ai-black-box-johnson.aspx

Generals and Admirals Need Checks and Balances Too


ASSOCIATION OF THE UNITED STATES ARMY, by Lt. Col. Joe Doty, USA Retired, and Maj. Gen. Rich Long, USA Retired

“Without question, most past and present top officers are some of the finest, most competent, values-based and selfless officers our nation can produce.

But they, like all of us, are human and flawed, and we all need a healthy dose of oversight and accountability.

Some generals have made the news lately for behaviors that violate the professional ethic. Although this trend seems new or current, it isn’t. Thomas E. Ricks, a well-published author on defense matters, wrote “General Failure” in the November 2012 issue of The Atlantic and in the same year published a book, The Generals: American Military Command from World War II to Today, on the same topic. His critique focused on a perceived lack of accountability in our armed forces at the general-officer level.

In June 2008, Lt. Col. Robert Bateman wrote “Cause for Relief: Why Presidents No Longer Fire Generals” in Armed Forces Journal. And in May 2007, then-Lt. Col. Paul Yingling wrote his (in)famous “A Failure in Generalship,” also in Armed Forces Journal. Our national security advisor, Lt. Gen. H.R. McMaster, in 1997 wrote Dereliction of Duty: Lyndon Johnson, Robert McNamara, the Joint Chiefs of Staff, and the Lies that Led to Vietnam. The book examines failures at the highest military and political levels before and during the Vietnam War.

Generals are human beings, and as such we need to be honest and frank about human behavior and human frailty. Nobody is perfect. So it seems an appropriate question: How is the system working in terms of oversight and accountability for general officers?

Recently we’ve had an admiral caught up in the “Fat Leonard” scandal; a former aide to the secretary of defense, Maj. Gen. Ronald Lewis, was relieved of his duties due to transgressions; and Maj. Gen. David Haight was forced to retire due to questionable professional behavior. At some point, we must ask ourselves whether there is a more effective system of checks and balances that can mitigate some of these issues. Lastly, and perhaps most egregiously, there is the case of former Brig. Gen. Jeffrey Sinclair, who pleaded guilty to adultery, maltreatment of a subordinate, engaging in improper relations and several other charges. Who was providing oversight of him or holding him accountable for his actions?

Don Snider, an expert in the study of the Army profession, notes that professions like the military are self-policing. Other unique aspects of professions (such as law and medicine) include that they:

  • Provide a necessary service to the country.
  • Have a shared ethic.
  • Have a unique expert knowledge.
  • Develop their own members.

Our military takes each of these aspects of being a profession seriously. As the most senior representatives of a self-policing profession, our general officers should be the standard-bearers and set the example for the rest of the force—and for the country—in their personal and professional lives.

They should also know how to self-police. Assuming there is real self-policing of generals, either by someone or a group, would it be helpful to make the policing process more transparent? Would making public the specific (and anonymous) examples of how generals are holding themselves accountable be an appropriate service to the nation?

At the risk of oversimplifying this self-policing and oversight challenge, is a general’s immediate supervisor responsible for policing and holding accountable his or her subordinate? Is the four-star responsible for the three-star? Is the two-star responsible for the one-star? Here, it is important to note that the concept of chain of command is ingrained in the DNA of every service member. It is part of the professional ethic. And the construct of chain of command has a built-in concept and understanding of responsibility and accountability, which does not cease once someone is promoted to general rank.

DoD inspectors general certainly play a role in oversight and accountability, but it’s a role initiated after an allegation has been made. IG investigators are not involved in the day-to-day business of general officers. How do we get more proactive and ahead of the allegations?

At the top levels, trust is sacrosanct. Theoretically, our promotion and selection system has selected those who need little or no oversight. However, the promotion and selection system is only as good as people can make it, and there will be bad apples. It can be argued that officers at this level need more or closer oversight due to their strategic responsibilities and the potential for national or international embarrassment. The Gen. David Petraeus affair could serve as an example.

Mathematically and statistically, it is safe to assume there are bad apples among general officers. The military’s selection and promotion system is run by human beings, so it will have flaws and make mistakes. Is it realistic to think every general never does anything wrong? That defies common sense. There are just over 300 generals in the active Army and about 650 in the Total Army. The fact that only one or two get in trouble each year is pretty good and perhaps surprising, but because of the sacred nature of their duties, even one-tenth of a percent is too high. Again, the need for oversight and accountability.

In terms of the human dimension and understanding of this topic, there are basic psychological processes at work. One can be called the Bathsheba Syndrome or “the dark side of success,” which suggests absolute power corrupts absolutely or that enormous success can be an antecedent to ethical failure. There are numerous historical examples of this: Tiger Woods and Richard Nixon come to mind. As such, it can easily be argued that because of their success, top officers need more oversight and accountability.

Expectancy theory is taught in most basic psychology courses and suggests people behave in ways they are expected to behave. Officers who attain the rank of general are the best of the best and are expected to be that way—almost flawless—and in some cases, may think they are flawless (as their evaluation reports state) and therefore think they can get away with anything. Unhinged or unbalanced ambition and/or unhealthy narcissism are recipes for disaster.

There is a difference between an officer who knows they should be and deserve to be a general, and one who may be a bit surprised and humbled to obtain the rank. This difference may be cognitively and emotionally subtle at the individual level, but can be profound in how it plays out. Again, an argument for more structured oversight and accountability.

It is the nature of life in the military to cover for each other. Loyalty to and taking care of your buddies and comrades in arms is part of the professional ethic. These bonds are emotional and powerful, as they must be due to the nature of the profession. But to what extreme? When are the times when this loyalty does not and should not apply?

The answer is: when one’s actions are unethical, against the law or will hurt the effectiveness of the organization. Importantly, a subordinate’s loyalty to a general-level officer is often exponentially magnified due to the rank, position power, referent power and expert power of the general. Hence, loyalty at this level may be impervious to and blind to wrongdoing. Asking or expecting a subordinate to call out a possible transgression by a superior officer can, unfortunately, be a career-ender for the subordinate. Is it realistic to think people in and around Sinclair over the course of his career never suspected anything nefarious was going on?

A recommended solution to this challenge is for DoD to require colonels selected as executive officers for generals to attend the IG course, to take on as a formal duty the responsibility of reporting and answering outside the chain of command, and to certify, under oath, that they are not aware of malfeasance or issues that must be addressed. Other duties could include:

  • Challenging the general’s assumptions and thinking.
  • Attempting to find blind spots in the general’s personality and thinking.
  • Asking lots of “why” questions.
  • Providing candid and blunt feedback and assessments.

We also recommend that DoD increase its education and developmental opportunities in terms of helping officers increase their emotional intelligence, specifically in terms of self-awareness and self-management. Emotional intelligence is a leadership skill that can be taught, learned and increased over time. Individuals with high levels of emotional intelligence are less vulnerable to self-delusion, burnout, and personal and professional indiscretions.

Our purpose here is not to poke anyone in the eye or throw stones. Our focus is on organizational improvement and learning.”
