Tag Archives: Google

New Contract Award Reveals Pentagon’s Evolving Cloud Strategy

Image: Peggy Frierson/Defense Media Activity

“DEFENSE ONE”

“The Defense Innovation Unit (DIU) Secure Multi-Cloud Management System contract disproves fears that the massive JEDI contract meant one company would get all the work.

It shows that the Pentagon is moving away from its older multi-cloud environment, a kluge of little clouds mostly from longtime defense contractors.”

________________________________________________________________________

“Google will build security- and app-management tools for the Pentagon’s Defense Innovation Unit, deepening the Silicon Valley giant’s military ties and illuminating the challenges facing the Defense Department’s drive to a multi-cloud environment.

Tools and a console built with the company’s Anthos application-management platform will allow DIU to manage, from a Google Cloud console, apps running on either of the cloud services heavily used by the Pentagon: Microsoft Azure, which won the hotly contested JEDI cloud contract, and Amazon Web Services (AWS), which is heavily used by DoD researchers.

Mike Daniels, vice president of government sales for Google Cloud services, said the company’s approach to security both complements and differs from those of Microsoft and AWS. Traditional “castle-and-moat” network security uses firewalls and virtual private networks to keep attackers on the other side of some sort of digital barrier. The higher the security certification, the deeper and wider that moat. It works well enough in a single-cloud environment but less well in one with applications running in multiple clouds. It can also present problems when you’re dealing with an “extended workforce”: a bunch of people working from home or different locations.

Google’s approach is based on fewer borders, perimeters, and moats, Daniels explained. “It looks at critical access control based on information about a specific device, its current state, its facilitated user, and their context. So it considers internal and external networks to be untrusted,” he said. “We’re dynamically asserting and enforcing levels of access at the application layer, not at the moat or perimeter. What does that do? That allows employees in the extended workforce to access web apps from virtually any device anywhere without a traditional remote-access [virtual private network].”
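The access-control model Daniels describes is easiest to see as a per-request policy check rather than a network perimeter. Below is a minimal sketch of that idea; the field names, applications, and policy thresholds are invented for illustration and are not Google’s implementation.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    # Everything here is asserted per request; no network location is trusted.
    user_id: str
    user_clearance: int     # hypothetical numeric clearance level
    device_id: str
    device_patched: bool    # current device state, e.g. OS patch level
    device_managed: bool    # enrolled in a device inventory
    app: str                # the web app being accessed

# Hypothetical per-application policy: required clearance and device posture.
APP_POLICIES = {
    "logistics-dashboard": {"min_clearance": 2, "managed_only": True},
    "public-docs":         {"min_clearance": 0, "managed_only": False},
}

def allow(request: AccessRequest) -> bool:
    """Decide access at the application layer, per request.

    Internal and external networks are treated identically (untrusted),
    so there is no check of source IP or VPN membership here.
    """
    policy = APP_POLICIES.get(request.app)
    if policy is None:
        return False  # default-deny unknown applications
    if request.user_clearance < policy["min_clearance"]:
        return False
    if policy["managed_only"] and not request.device_managed:
        return False
    if not request.device_patched:
        return False  # stale device state lowers trust regardless of user
    return True

# A remote worker on an unknown network can still reach the app if the
# user and device checks pass -- no perimeter VPN required.
print(allow(AccessRequest("j.doe", 2, "laptop-17", True, True, "logistics-dashboard")))
```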

The contract award reveals a couple of things. First, it shows that the Pentagon is moving away from its older multi-cloud environment, a kluge of little clouds mostly from longtime defense contractors. When the JEDI program was announced, a lot of those vendors howled that a single massive cloud contract would leave DoD overly reliant on one company. The Pentagon countered that while JEDI was its biggest cloud contract to date, it would not be the last. What DoD did not say—but what some vendors should have anticipated—is that Azure and AWS will be picking up more and more of that business. Case in point: the Air Force’s Cloud One, a key node in its Advanced Battle Management System concept, is a hybrid AWS-Azure cloud. “Multi-cloud environment” for DoD increasingly means AWS and Azure. Future software should be compatible with both.

Second, it shows that Google is overcoming its employees’ resistance to defense contracting. In 2017, newly appointed Defense Secretary Jim Mattis made Google one of the main stops on his tech tour. His favorable impression of the company’s pioneering cloud-based approach to AI shaped the JEDI competition and helped give rise to Project Maven, a program to apply AI to intelligence, surveillance, and reconnaissance. But an employee protest led Google to end its work with Maven.

Since then, Google has put in place a list of ethical guidelines, which, it says, should enable the company to work with the Defense Department in a way that doesn’t violate what it sees as its core values. It’s working with the Joint Artificial Intelligence Center on projects related to healthcare and business automation and far-reaching research initiatives in AI safety and the post-Moore’s Law computing environment. Meredith Whittaker, the Google employee who led the protests, left the company last year.

Last April, Kent Walker, the company’s senior vice president for global affairs, described the perception that the company was opposed to doing national security work as “frustrating.”

Government cloud contracts have become a lot more important to Google’s business model than they were a few years ago. Google has tripled its investment in the public sector space, said Daniels. While this individual contract award is in the seven-figure range, Daniels sees it as a possible pathfinder for future work with more of the Defense Department, enabled by DIU. “Frankly, the U.S. DoD is important to us, both domestically as well as globally. We are a global public sector business. To the extent that the U.S. Department of Defense is doing work with us, I do think that is an indicator for us globally as to the confidence that governments around the world can put into Google as a business partner.”

https://www.defenseone.com/technology/2020/05/what-googles-new-contract-reveals-about-pentagons-evolving-clouds/165524/

Google Not Bidding on $10 Billion Pentagon Cloud Contract – “Artificial Intelligence Ethics Concerns”


Image: Google core values

“REUTERS”

“Alphabet Inc’s Google said on Monday it was no longer vying for a $10 billion cloud computing contract with the U.S. Defense Department, in part because the company’s new ethical guidelines do not align with the project; the company did not elaborate.”


Google said in a statement: “We couldn’t be assured that [the JEDI deal] would align with our AI Principles and, second, we determined that there were portions of the contract that were out of scope with our current government certifications.”

The principles bar use of Google’s artificial intelligence (AI) software in weapons as well as services that violate international norms for surveillance and human rights.

Google was provisionally certified in March to handle U.S. government data with “moderate” security, but Amazon.com Inc and Microsoft Corp have higher clearances.

Amazon was widely viewed among Pentagon officials and technology vendors as the front-runner for the contract, known as the Joint Enterprise Defense Infrastructure cloud, or JEDI.

Google had been angling for the deal, hoping the $10 billion contract could provide a giant boost to its nascent cloud business and help it catch up with Amazon and fellow JEDI competitor Microsoft.

Winning the Pentagon’s trust to house its digital data would have been helpful to Google’s marketing efforts with large companies.

But thousands of Google employees this year protested use of Google’s technology in warfare or in ways that could lead to human rights violations. The company responded by releasing principles for use of its artificial intelligence tools.

In its statement, Google said it would have been able to support “portions” of the JEDI deal had joint bids been allowed.

The news outlet Federal News Network first reported Google’s decision.”

https://www.reuters.com/article/us-alphabet-google-pentagon/google-drops-out-of-bidding-for-10-billion-pentagon-data-deal-idUSKCN1MI2BZ

Google Installs Tool To Help US Military Veterans Find Jobs


Google Helping Vets Find Jobs

Image: Massoud Hossaini/Associated Press

“WASHINGTON EXAMINER”

“Google, the tech firm whose name is synonymous with Internet searches, is introducing a new tool to help U.S. military veterans build careers after leaving the armed forces.

The product, built into Google’s job-search function, allows ex-military personnel to search positions using their occupational specialty code, retrieving a list of employment opportunities where their skills are in particular demand.”
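In other words, the tool treats the occupational specialty code as a translation key into civilian job-search terms. A minimal sketch of that lookup, with invented codes and skill mappings rather than Google’s actual data:

```python
# Illustrative mapping from military occupational specialty (MOS) codes
# to civilian job-search keywords. Codes and skills here are examples
# chosen for the sketch, not Google's dataset.
MOS_TO_SKILLS = {
    "25B": ["information technology specialist", "network administration"],
    "68W": ["emergency medical technician", "paramedic"],
    "88M": ["commercial truck driver", "logistics"],
}

def job_queries(mos_code: str) -> list[str]:
    """Return civilian job-search queries for a military specialty code."""
    return MOS_TO_SKILLS.get(mos_code.upper(), [])

print(job_queries("68w"))  # ['emergency medical technician', 'paramedic']
```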


“The transition process is complex,” and the firm’s new resources are designed to “play a part in making that easier,” Lisa Gevelber, vice president of the “Grow with Google” operation, said in a statement. While the unemployment rate among veterans has dropped in the past year, reaching 3 percent at the end of July, former military personnel often struggle to find positions where they can use their specialized knowledge to full advantage, and then to adapt to vastly different working conditions.

A 2011 study by the Pew Research Center found that 44 percent of veterans in the post-9/11 era experienced difficulties making the transition. Google, which cited estimates that about 250,000 service members a year will leave the military through 2019, noted that military spouses face a 20 percent unemployment rate and a 35 percent underemployment rate after years of leaving jobs because of reassignments.

“We look forward to working with America’s transitioning service members to help them succeed in civilian life,” Gevelber said. Employers will be able to incorporate Google’s tool into their own recruitment sites, the company noted, and companies like FedEx and Pepsi have already begun using it.

Google Maps and Search, meanwhile, will include a feature to help users identify veteran-owned businesses. Google.org, the search engine’s philanthropy arm, will also provide a $2.5 million grant to the United Service Organizations (USO), a nonprofit that supports military members, for a certification that will help participants find information-technology jobs.

“There is an opportunity to re-equip service members with IT skills as they move on to their next chapter after military service and to help address the spouse unemployment/underemployment problem with highly portable careers,” Alan Reyes, a USO senior vice president, said in a statement.”

https://www.washingtonexaminer.com/business/google-installs-tool-to-help-u-s-military-veterans-find-jobs


How Tech Giants Created What The Pentagon Was Not Permitted to Develop


Tech Giants DARPA

“WIRED”

“How are private-sector entities amassing a level of power that Americans denied to the Defense Advanced Research Projects Agency (DARPA)?

What are the consequences of this tool existing outside the bounds of public oversight? And where do we go from here?”

____________________________________________________________________________________________

“Every purchase you make with a credit card, every magazine subscription you buy and medical prescription you fill, every Web site you visit and e-mail you send or receive, every academic grade you receive, every bank deposit you make, every trip you book and every event you attend—all these transactions and communications will go into … a virtual, centralized grand database,” the New York Times columnist warns.

On the heels of Mark Zuckerberg’s numerous government testimonies and sustained criticism over the Cambridge Analytica scandal, the author of this Times column must be talking about Facebook—right? Or perhaps the web’s broader, ad-based business model?

Not so: The William Safire column, “You Are a Suspect,” was published in the Times in 2002—two years before Facebook was created. And Safire isn’t talking about social networks or digital advertising—he’s discussing Total Information Awareness, a US Defense Advanced Research Projects Agency (Darpa) program that proposed mining vast amounts of Americans’ data to identify potential national security threats. The virtual grand database was to belong to the Department of Defense, which would use it to identify behavior patterns that would help to predict emerging terrorist threats.

Today, we’re voluntarily participating in the dystopian scenario Safire envisioned 16 years ago, with each bit of data handed to companies like Facebook and Google. But in this system, private companies are our information repositories—leaving us to reckon with the consequences of a world that endows corporations with the kind of data once deemed too outrageous for the government.

The Total Information Awareness project, run by Admiral John Poindexter, was a meta-program. It was designed to aggregate signals generated via other programs run out of Darpa’s Information Awareness Office. The programs focused on a range of capabilities, including information analysis, decision-support tools, language translation, data mining, and pattern recognition. When the component parts were combined, they would form a comprehensive picture of people and their behaviors. The purpose was to detect signals that could be used to identify terrorist behavior and head off attacks; the inspiration was the fact that the government had failed to connect the dots left by the 9/11 terrorists as they planned their attack.

Concern about the program was bipartisan and widespread. The Cato Institute warned of the potential for a surveillance society and raised Fourth Amendment concerns. The ACLU called it a virtual dragnet that would require the government “to collect as much information as it can about everyone—and these days, that is a LOT of information … Not only government records of all kinds but individuals’ medical and financial records, political beliefs, travel history, prescriptions, buying habits, communications (phone calls, emails and web surfing), school records, personal and family associations, and so on.” The US Senate, led by senators Ron Wyden and Byron Dorgan, voted unanimously to defund the program shortly after it was announced; some of the technological underpinnings were reshuffled, sent to other parts of the government that weren’t focused on the activities of US citizens.

But as Total Information Awareness was being disassembled in Washington, DC, a similar system emerged, and began to gather momentum, in Silicon Valley. Within a few years, top industry trend reports and VC blog posts began to talk up the power (and economic promise) of “Big Data” and “Social Mobile Local.”

Today, you can probably name several companies that have access to data on every purchase you make with a credit card, every magazine subscription you buy, every website you visit and email you send or receive, every trip you book, and every event you attend—all the various types of data that Safire referenced, but gathered to identify behavior patterns that help predict what ads you’ll click on.

In the private sector, startups and large companies alike began to tout how well they could gather, store, and mine data; it was a popular business model. The fact that it was happening was hardly a secret—personalized ads within Gmail, universal logins, and retargeted ads following us from site to site were readily apparent to even minimally savvy users. The biggest tech companies in the world succeeded because they built products users loved—their users voluntarily opted in to giving up their information and behavioral data. Widespread tracking and data aggregation became the norm, as universal logins, tracking pixels, forays into linked products (Gmail, for example), and acquisitions of startups enabled the tech platforms to build comprehensive profiles of users with ease.
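For readers unfamiliar with the mechanics, the “tracking pixel” mentioned above is a tiny third-party image embedded across many sites; because browsers attach the same cookie and the referring page to every request for it, a single server can stitch together cross-site browsing profiles. A bare-bones sketch using only Python’s standard library (the domain, cookie name, and user ID are hypothetical):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# 1x1 transparent GIF: the classic "tracking pixel" payload.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00!"
         b"\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00"
         b"\x00\x02\x02D\x01\x00;")

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The browser volunteers both of these with every image request.
        user = self.headers.get("Cookie", "first visit: no cookie yet")
        page = self.headers.get("Referer", "unknown page")
        print(f"profile update: {user} viewed {page}")  # the tracking log
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        # Set a persistent ID so all future requests are linkable.
        self.send_header("Set-Cookie", "uid=user-12345; Max-Age=31536000")
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    # Any page embedding <img src="http://localhost:8000/px.gif"> reports here.
    HTTPServer(("localhost", 8000), PixelHandler).serve_forever()
```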

At a Congressional hearing last month, Senator Maria Cantwell broached the similarity between TIA and private information-gathering in an exchange with Facebook CEO Mark Zuckerberg:

“Have you heard of Total Information Awareness?” Senator Cantwell asked. “Do you know what I’m talking about?”

“No, I do not,” Zuckerberg replied.

It wasn’t unreasonable for Zuckerberg to have never heard of the program; after all, it was proposed when he was in high school. Cantwell went on to explain the initiative: data mining on a vast scale, with the potential for unprecedented surveillance, control, and identification. She brought up WhatsApp and Palantir as other examples of private-company data harvesters, part of what she called a “major trend in an information age.”

There are, of course, a few critical differences between Total Information Awareness and the platforms run by social media giants. First, users voluntarily opt in to the companies’ terms of service. People ostensibly know what they’re giving up in exchange for free email, messaging, and ways to share pictures and connect with friends. Second, the stakes are very different: The government has the power to arrest you; Facebook and Google don’t.

Total Information Awareness was nakedly about surveillance and how it could be used both defensively and offensively. Facebook, on the front end, is about likes and photos and status updates and community. Google is a great search engine and popular email provider. But the backends of our private companies are eerily similar to TIA, and the same threats exist—but without any offsetting public oversight. This is dangerous. Now multiple governments, not just our own, can tap into a vast data trove to predict our behavior and target us. In fact, as we’ve come to learn, other nations have already misused it: Russia, for example, used precision targeting and copious quantities of data to reach American citizens.

The private platforms that amassed the data collection and mining mechanisms necessary for total information awareness are ironically under no obligation to use it for the original national security and counterterrorism purposes that inspired (and purportedly justified) TIA—and many digital civil liberties organizations don’t believe they should. But more disturbingly, it seems that our social platforms may in fact be doing the opposite: They may be inadvertently radicalizing extremists. It seems increasingly possible that there is something to the idea that social signals can identify early indications of radicalization, or pick out those who are likely to be receptive or sympathetic. In response, our recommendation engine and search functions may well be inadvertently using those signals to further facilitate the process—the exact opposite of detection and intervention.

So where do we go from here? How do we address the profoundly powerful tools now in private hands? First, we need civic groups to scrutinize powerful companies the same way they scrutinize government. Some of the groups most vocally critical of Total Information Awareness—the Electronic Frontier Foundation, the Open Technology Institute—have been largely silent about Facebook’s and Google’s amassing of similar power. Slate’s April Glaser recently wrote an excellent piece examining the nuances and politics at play here, from financial conflicts of interest to worldviews. As she put it, we need the watchdogs to bark.

But civil society alone can’t win this fight. We need scrutiny from everyday internet users too. As a country, we were extremely bothered by the government aggregating databases and mining for certain behavioral signatures that might reveal extremist leanings. The idea that there could be effective oversight of such an invasive program was dismissed as absurd. But just a few years later, we were remarkably quick to give away the contents of those same databases to large corporations. We don’t trust our government—too many scandals, too much abuse—but we haven’t fully reckoned with the fact that we’re turning our personal information over to powerful private interests with no oversight.”

https://www.wired.com/story/darpa-total-informatio-awareness/


AI Chip Saved Google From Building a Dozen New Data Centers


Image: Google – server racks loaded with TPUs

“WIRED”

“Google operates what is surely the largest computer network on Earth, a system that comprises custom-built, warehouse-sized data centers spanning 15 locations across four continents.

Rather than double its data center footprint, Google instead built its own computer chip specifically for running deep neural networks, called the Tensor Processing Unit, or TPU.

“It makes sense to have a solution there that is much more energy efficient,” says Norm Jouppi, one of the more than 70 engineers who worked on the chip. In fact, the TPU outperforms standard processors by 30 to 80 times in the TOPS/Watt measure, a metric of efficiency.
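TOPS/Watt is simply throughput, in tera-operations per second, divided by power draw. A quick worked example—with invented numbers, not figures from Google’s paper—shows how the ratio is computed:

```python
# TOPS/Watt = throughput (tera-ops per second) / power draw (watts).
# The numbers below are illustrative assumptions only.
def tops_per_watt(tera_ops_per_sec: float, watts: float) -> float:
    return tera_ops_per_sec / watts

cpu = tops_per_watt(tera_ops_per_sec=2.0, watts=100.0)  # hypothetical CPU
tpu = tops_per_watt(tera_ops_per_sec=90.0, watts=75.0)  # hypothetical ASIC

# With these made-up numbers the ASIC is 60x more efficient -- the same
# kind of ratio (30 to 80x, per the article) that let Google avoid
# doubling its data center footprint.
print(f"CPU: {cpu:.3f} TOPS/W, TPU: {tpu:.1f} TOPS/W, ratio: {tpu/cpu:.0f}x")
```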


A Neural Network Niche

Google first revealed this custom processor last May, but gave few details. Now, Jouppi and the rest of his team have released a paper detailing the project, explaining how the chip operates and the particular problems it solves. Google uses the chip solely for executing neural networks, running them the moment when, say, someone barks a command into their Android phone. It’s not used to train the neural network beforehand. But as Jouppi explains, even that still saves the company quite a bit: it didn’t have to build, say, a dozen new data centers.

The chip also represents a much larger shift in the world of computer processors. As Google, Facebook, Microsoft, and other internet giants build more and more services using deep neural networks, they’ve all needed specialized chips both for training and executing these AI models. Most companies train their models using GPUs, chips that were originally designed for rendering graphics for games and other highly visual applications but are also suited to the kind of math at the heart of neural networks. And some, including Microsoft and Baidu, the Chinese internet giant, use alternative chips when executing these models as well, much as Google does with the TPU.

The difference is that Google built its own chip from scratch. As a way of reducing the cost and improving the efficiency of its vast online empire, the company builds much of its own data center hardware, including servers and networking gear. Now, it has pushed this work all the way down to individual processors.

In the process, it has also shifted the larger market for chips. Since Google designs its own, for instance, it’s not buying other processors to accommodate the extra load from neural networks. Google going in-house even for specialized tasks has wide implications; like Facebook, Amazon, and Microsoft, it’s among the biggest chip buyers on Earth. Meanwhile the big chip makers—including, most notably, Intel—are building a new breed of processor in an effort to move the market back in their direction.

Focused But Versatile

Jouppi joined Google in late 2013 to work on what became the TPU, after serving as a hardware researcher at places like HP and DEC, a kind of breeding ground for many of Google’s top hardware designers. He says the company considered moving its neural networks onto FPGAs, the kind of programmable chip that Microsoft uses. That route wouldn’t have taken as long, and the adaptability of FPGAs means the company could reprogram the chips for other tasks as needed. But tests indicated that these chips wouldn’t provide the necessary speed boost. “There’s a lot of overhead with programmable chips,” he explains. “Our analysis showed that an FPGA wouldn’t be any faster than a GPU.”

In the end, the team settled on an ASIC, a chip built from the ground up for a particular task. According to Jouppi, because Google designed the chip specifically for neural nets, it can run them 15 to 30 times faster than general-purpose chips built with similar manufacturing techniques. That said, the chip is suited to any breed of neural network—at least as they exist today—including everything from the convolutional neural networks used in image recognition to the long short-term memory networks used to recognize voice commands. “It’s not wired to one model,” he says.
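The workload such an ASIC targets is, at its core, low-precision matrix multiplication: the TPU runs neural-net layers as 8-bit integer multiplies with wider accumulation. The NumPy sketch below illustrates that style of quantized arithmetic in a deliberately simplified form; the single shared scale factor is an assumption for brevity, not the TPU’s actual quantization scheme.

```python
import numpy as np

def quantize(x: np.ndarray, scale: float) -> np.ndarray:
    """Map float values to int8 with one shared scale (simplified)."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

rng = np.random.default_rng(0)
activations = rng.standard_normal((1, 256)).astype(np.float32)
weights = rng.standard_normal((256, 256)).astype(np.float32)

a_scale, w_scale = 0.05, 0.05
a_q = quantize(activations, a_scale)
w_q = quantize(weights, w_scale)

# Accumulate in int32, as fixed-function MAC arrays do, then rescale
# back to floats. This is the layer's matrix multiply, done cheaply.
acc = a_q.astype(np.int32) @ w_q.astype(np.int32)
approx = acc.astype(np.float32) * (a_scale * w_scale)

exact = activations @ weights
print("max abs error vs. float math:", float(np.max(np.abs(exact - approx))))
```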

Google has used the TPU for a good two years, applying it to everything from image recognition to machine translation to AlphaGo, the machine that cracked the ancient game of Go last spring. Not bad—especially considering all the data center construction it helped avoid in the process.”

https://www.wired.com/2017/04/building-ai-chip-saved-google-building-dozen-new-data-centers/


Defense Innovation Board Lays Out First Concepts


Image: Pentagon innovation board

“DEFENSE NEWS”

“The board is made up of thinkers and business leaders from the tech world outside of the traditional defense sector. The sole exception is the presence of retired Adm. William McRaven, the former head of SOCOM.

The board came out with a series of rough recommendations for Secretary of Defense Ash Carter — or his successor — that it believes will inject a culture of innovation into the Pentagon.

Board chairman Eric Schmidt opened the meeting by acknowledging the importance of the Pentagon’s mission: “We all believe an outside perspective would be beneficial and we’ve set out to try and make some recommendations.”

He added that members of the board have spent the summer traveling around to various DoD installations, including trips to Nellis Air Force Base in Nevada, Fort Bragg in North Carolina and Special Operations Command (SOCOM) headquarters in Tampa, Florida. Schmidt also spent two days last week traveling with Carter to learn about the nuclear enterprise, and future trips are scheduled for US Pacific Command and US Central Command.

So what are the early ideas from the board?

A Chief Innovation Officer

The first idea listed by the board was the concept of a chief innovation officer, appointed directly by the secretary of defense, to serve as a point person for innovation efforts around the department.

Cass Sunstein, a professor at Harvard Law School who has served in various government positions, explained that the sharing of best practices around the DoD is currently “less than ideal,” and noted that the position could act as the umbrella from which funding for low-level projects could flow.

Sunstein also said he believes that office could be set up “in a hurry. This could be done in a relatively informal way in the very near future.” At the same time, he acknowledged that there are “significant” legal and logistical challenges about creating the office.

The position could particularly help create cover for individuals who are down in the ranks and have ideas but are unable to push them forward on their own.

“There are innovators who are in the Defense Department and who are excellent, but who could be sharing best practices and better coordinated and could be spurred a bit more, and the idea there is a dispersed innovative capacity in the form of lower-level people who have great ideas but face obstacles,” Sunstein told journalists after the event. “The idea of that as an umbrella for various concepts, we’re drawn to that.”

Create a Digital ROTC

The recent hacks of the Office of Personnel Management and state election offices show how critical it is for the US to recruit and retain top cyber talent, said Marne Levine, chief operating officer at Instagram. Top commercial firms with deep pockets and great benefits compete fiercely for that talent, with DoD struggling to keep up.

So in order to attract talent to the Pentagon, the board suggested creating a “digital ROTC,” where the Pentagon would pay college tuition for cyber experts in exchange for their service.

Levine acknowledged setting aside the funding for such a program “may require hard budget choices,” but “one only has to think of the high cost of cyberattacks to understand the value of such an investment.”

Similarly, she put forth the idea of creating a science, technology, engineering and math, or STEM, career-path specialization inside the department, similar to that followed by doctors or lawyers.

The good news, said astrophysicist and television personality Neil deGrasse Tyson, is that the generation currently in high school and college is more interested in science than any before it.

“If you’re going to recruit people who have an interest in science and technology, I can assert that the pool of people now available to you is greater than ever before,” he said. But to attract those people from the commercial sector, the Pentagon needs to offer the best opportunities for new technologies and programs around.

“You can’t just say come because we’re cool. You have to be cool,” Tyson said. “And you’ll get ’em, for sure.”

Create a Center of Excellence for Artificial Intelligence and Machine Learning

The use of artificial intelligence and machine learning has the “ability to spur innovation and represent transformational change,” said J. Michael McQuade, senior vice president for science and technology with United Technologies.

That is certainly an opinion shared by Deputy Secretary of Defense Bob Work, who has talked extensively about the importance of artificial intelligence for the next generation of Pentagon systems. But McQuade said the Pentagon needs to think broadly about that potential and how it can impact things down to supply-chain optimization and training, and not just combat functions.

“We do believe substantial changes are happening in the core science and technology capability” here, McQuade said, which means the Pentagon should look at creating a center of excellence to be the central hub of this work. Whether that is a national lab or institute isn’t clear yet, but the center would ensure “adequate” focus on the issue.

Embed Software Development Teams Within Key Commands

Reid Hoffman, a co-founder of LinkedIn and now with Greylock Partners, joked that the tech industry has become so reliant on software that Silicon Valley should be renamed Software Valley. And the Pentagon, he said, simply has not kept up.

As a result, he put forth the idea of creating embedded software development teams in various key commands, which would be “small, agile teams of software developers where you would keep these teams current on modern techniques of software development.”

Improve Software Testing Regimens

Milo Medin, vice president of Access Services with Google Capital and a former NASA official, also emphasized the importance of software for the Pentagon, noting it is the driving factor behind upgrade programs for everything from radars to the F-35 joint strike fighter.

Currently, operational testing of software is set in the classic mindset, Medin said, adding that the testers seem to have “an implicit assumption” that the Pentagon’s firewalls, as currently constructed, are sufficient.

“In the heavily networked battle space these systems are operating in, the consequences of our weapon systems being breached from a security perspective could be severe,” he warned, adding that as autonomy enters the battle space the risk of systems being hacked could expand.

As a result, software testing needs to happen on an ongoing basis, not just when the planes are going operational. And for that to happen, the government needs access to the software code that runs the systems.

Speaking to reporters after the event, Medin stressed that does not mean defense contractors should be forced to hand over control of code developed in house, a major issue that has been raised by industry in recent years.

“The issue isn’t owning the software. The issue is access to the software,” he said. “If software is your differentiator, if software becomes a core competency … that’s something the government needs to be able to have access to, to be able to build and to be able to potentially modify. That’s what you find in the tech sector.”

Create Funding Streams for COCOMs

The Defense Innovation Board is made up of thinkers from academia and the private tech sector, in a purposeful attempt to inject outside thinking into the department. The sole exception to that is the presence of retired Adm. William McRaven, the former head of SOCOM.

Now the Chancellor at the University of Texas, McRaven provides an insider’s perspective on the acquisition system and internal processes that drive the Pentagon. He also understands how to operate around them to innovate quickly, due to his experience at SOCOM, which is famously able to develop and deploy technology at rapid rates.

But while SOCOM has that ability, other parts of the military do not — something McRaven said the board came to understand during various visits this summer.

“We were a little frustrated as you see these magnificent infantrymen and pilots who are equally as smart [as SOCOM], equally want to innovate, and yet the layers of bureaucracy to get the decision-makers to make those decisions are difficult.”

As a result, McRaven would like to see a way to give other combatant commanders acquisition ability. Not for big, Category 1 programs — “You need to let that go through a traditional approach,” he said — but for smaller technology programs. And if the commanders can quickly turn small projects into fielded capabilities, the idea that innovative thinking will be rewarded will “spread like wildfire” through the force, he added.

Future Concepts

Those concepts are still in their infancy, but they represent the more concrete ideas the board has come up with. There are, however, several broader concepts that the members are still trying to get their heads around.

Jennifer Pahlka, the founder of the nonprofit Code for America, said she wants to tap into what tech companies call the “maker movement,” with an eye on the tinkerers in the military who have good ideas but not the venue for turning them into products. Eric Lander, president and director of the Broad Institute, said he was really interested in what role biological technologies could play.

But the toughest issue to tackle, and perhaps the most important, is cultural. All involved agreed that developing a culture where new ideas can be tested and fail, without fear of ending a career, is going to be the biggest challenge. And it’s not clear exactly how that can be changed.

Schmidt said he is “convinced” the biggest change the board needs to look at is with people and culture, more than specific pieces of technology.

That was driven home by the public comment section of the meeting, which featured a number of junior and mid-level officers talking about the risk-averse nature of the Pentagon. At the end of the day, however, the hope is that the ideas from the board can start to change that around the edges before injecting change more directly into the system.

“The fact [board members are] not steeped in the Department of Defense may be the best thing this group brings,” McRaven told reporters. “At the end of the day, we want to have an outside look because I think that’s where we can make real change.”

Added Schmidt: “We’re not going to write a report without impact. We view ourselves as more of a contact sport, working with whatever way is appropriate.”

Another question is about the future of the group once Carter leaves office, which is expected to occur early next year as a new administration comes to power. The board is currently scheduled to expire in April 2018, but could be renewed much the same way other advisory boards have been in the past.

“The other boards have been around for a while, and I’m assuming we will generate enough value that people want us around,” Schmidt said. “And if we don’t perform, we will be fired.”

http://www.defensenews.com/articles/defense-innovation-board-lays-out-first-concepts?utm_source=Sailthru&utm_medium=email&utm_campaign=EBB%2010.6.16&utm_term=Editorial%20-%20Early%20Bird%20Brief


“Jigsaw” – Google’s Plan to Stop Aspiring ISIS Recruits



“WIRED”

“Perhaps one of the world’s most dangerous problems of ignorance and indoctrination can be solved in part by doing what Google does best:

Helping people find what they most need to see.

Google has built a half-trillion-dollar business out of divining what people want based on a few words they type into a search field. In the process, it’s stumbled on a powerful tool for getting inside the minds of some of the least understood and most dangerous people on the Internet: potential ISIS recruits. Now one subsidiary of Google is trying not just to understand those would-be jihadis’ intentions, but to change them.

Jigsaw, the Google-owned tech incubator and think tank—until recently known as Google Ideas—has been working over the past year to develop a new program it hopes can use a combination of Google’s search advertising algorithms and YouTube’s video platform to target aspiring ISIS recruits and ultimately dissuade them from joining the group’s cult of apocalyptic violence. The program, which Jigsaw calls the Redirect Method and plans to launch in a new phase this month, places advertising alongside results for any keywords and phrases that Jigsaw has determined people attracted to ISIS commonly search for. Those ads link to Arabic- and English-language YouTube channels that pull together preexisting videos Jigsaw believes can effectively undo ISIS’s brainwashing—clips like testimonials from former extremists, imams denouncing ISIS’s corruption of Islam, and surreptitiously filmed clips inside the group’s dysfunctional caliphate in Northern Syria and Iraq.

“This came out of an observation that there’s a lot of online demand for ISIS material, but there are also a lot of credible organic voices online debunking their narratives,” says Yasmin Green, Jigsaw’s head of research and development. “The Redirect Method is at its heart a targeted advertising campaign: Let’s take these individuals who are vulnerable to ISIS’ recruitment messaging and instead show them information that refutes it.”

The results, in a pilot project Jigsaw ran early this year, were surprisingly effective: Over the course of about two months, more than 300,000 people were drawn to the anti-ISIS YouTube channels. Searchers actually clicked on Jigsaw’s ads three or four times more often than on those of a typical ad campaign. Those who clicked spent more than twice as long viewing the most effective playlists as the best estimates of how long people view YouTube as a whole. And this month, along with the London-based startup Moonshot Countering Violent Extremism and the US-based Gen Next Foundation, Jigsaw plans to relaunch the program in a second phase that will focus on North American extremists, applying the method to both potential ISIS recruits and violent white supremacists.

An Antidote to Extremism’s Infection

While tech firms have been struggling for years to find countermeasures to extremist content, ISIS’ digital propaganda machine has set a new standard for aggressive online recruitment. Twitter has banned hundreds of thousands of accounts only to see them arise again—many migrating to the more private service Telegram—while other services like YouTube and Facebook have fought an endless war of content removal to keep the group’s vile beheading and immolation videos offline. But attempts to intercept the disaffected young Muslims attracted to that propaganda and offer them a counternarrative—actual protection against the group’s siren song—have mostly amounted to public service announcements. Those PSA series have included the U.S. State Department’s campaign called Think Again, Turn Away and the blunt messaging of the cartoon series Average Mohammed.

Those campaigns are likely only effective for dissuading the audience least indoctrinated by ISIS’s messages, argues Green, who’s interviewed jailed ISIS recruits in Britain and defectors in an Iraqi prison. “Further down the funnel are the people who are sympathetic, maybe ideologically committed, maybe even already in the caliphate,” says Green. “That’s Jigsaw’s focus.”

To capture the people already drawn into ISIS’ orbit, Jigsaw took a less direct approach. Rather than create anti-ISIS messages, the team curates them from YouTube. “We thought, what if the content exists already?” says Green. “We knew if it wasn’t created explicitly for this purpose, it would be more authentic and therefore more compelling.”

Testing the Theory

Jigsaw and two partners on the pilot project, Moonshot CVE and the Lebanese firm Quantum Communications, assembled two playlists of videos they found in both Arabic and English, ranging from moderate Muslim clerics pointing out ISIS’s hypocrisy to footage of long food lines in ISIS’s Syrian stronghold, Raqqa.

Another video in Jigsaw’s playlist shows an elderly woman excoriating members of ISIS and quoting the Koran to them.

Jigsaw chose more than 1,700 keywords that triggered ads leading to their anti-ISIS playlists. Green and her team focused on terms they believed the most committed ISIS recruits would search for: names of waypoints on travel routes to ISIS territory, phrases like “Fatwa [edict] for jihad in Syria” and names of extremist leaders who had preached ISIS recruitment. The actual text of the search ads, however, took a light-touch approach, with phrases like “Is ISIS Legitimate?” or “Want to Join ISIS?” rather than explicit anti-ISIS messages.
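Structurally, the campaign is a mapping from curated trigger keywords to mildly worded ads that link out to the counter-messaging playlists. A toy sketch of that structure follows; the keyword lists, any ad copy beyond the phrases quoted above, and the URLs are placeholders, not Jigsaw’s actual campaign data.

```python
# Toy model of a keyword-triggered redirect campaign. All keywords,
# links, and ad rotation logic here are illustrative placeholders.
CAMPAIGN = {
    "keywords": {
        "recruitment-route": ["example waypoint town", "travel to caliphate"],
        "ideology":          ["fatwa for jihad in syria"],
    },
    "ads": [
        # Light-touch ad copy, per the article, not overt counter-messaging.
        {"text": "Want to Join ISIS?",  "link": "youtube.example/playlist-a"},
        {"text": "Is ISIS Legitimate?", "link": "youtube.example/playlist-b"},
    ],
}

def ad_for_query(query: str) -> dict | None:
    """Return an ad if the search query matches any campaign keyword."""
    q = query.lower()
    for terms in CAMPAIGN["keywords"].values():
        if any(term in q for term in terms):
            return CAMPAIGN["ads"][0]  # a real system would rotate/test ads
    return None

print(ad_for_query("Fatwa for jihad in Syria"))
```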

Measuring the actual effects of the campaign in dissuading ISIS recruits isn’t easy. But Jigsaw and its partners found that they at least captured searchers’ attention. The clickthrough rates on some of the ads were more than 9 percent, they say, compared with averages around 2 or 3 percent in the average Google keyword advertising campaign. They also discovered that the hundreds of thousands of searchers spent a total of half a million minutes watching the videos they collected, with the most effective videos getting as much as 8 minutes and 20 seconds average viewing time.
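The arithmetic behind those engagement claims is straightforward; here is a back-of-envelope check using the article’s figures (the 320,000-viewer count is an assumed stand-in for “more than 300,000”):

```python
# Back-of-envelope check of the reported engagement numbers.
redirect_ctr = 0.09      # "more than 9 percent" on some ads
typical_ctr = 0.025      # "2 or 3 percent" for an average Google campaign
print(f"clickthrough uplift: {redirect_ctr / typical_ctr:.1f}x")  # ~3.6x

viewers = 320_000        # assumed figure for "more than 300,000 people"
total_minutes = 500_000  # "half a million minutes" of total watch time
print(f"average viewing: {total_minutes / viewers:.1f} minutes per viewer")
```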

But Could It Work?

Jigsaw’s program is far from a comprehensive solution to ISIS’s online recruitment, says Humera Khan, the executive director of the Islamic deradicalization group Muflehun. She points out that both Google and Facebook have trained anti-extremism non-profits in the past on how to use their keyword advertising, though perhaps without the deep involvement in targeting, curating, and promoting video that Jigsaw is attempting. More importantly, she argues, attracting ISIS sympathizers to a video playlist is only the first step. “If they can hook people in, can they keep them coming back with new and relevant content? That’ll be important,” says Khan. Eventually, any successful deradicalization effort also needs human interaction, too, and a supportive community backing up the person’s decision to turn away from extremism. “This sounds like a good piece of the solution. But it’s not all of it.”

From a national security perspective, Jigsaw’s work raises another glaring question: Why not target would-be ISIS recruits for surveillance and even arrest instead? After all, intercepting ISIS sympathizers could not only rescue those recruits themselves, but the future victims of their violence in terrorist attacks or genocidal massacres in ISIS’s bloody sphere of influence. On that question, Jigsaw’s Green answers carefully that “social media platforms including YouTube have a responsibility to cooperate [with] the governments’ lawful requests, and there are processes in place to do that.” Translation? Google likely already helps get some of these people arrested. The company, after all, handed over some data in 64 percent of the more than 40,000 government requests for its users’ data in the second half of last year.

But Green says that the Redirect Method, beyond guiding ISIS admirers to its videos, doesn’t seek to track them further or identify them, and isn’t designed to lead to arrests or surveillance, so much as education.  “These are people making decisions based on partial, bad information,” says Green. “We can affect the problem of foreign fighters joining the Islamic State by arming individuals with more and better information.” She describes the campaign’s work as a kind of extension of Google’s core mission “to make the world’s information accessible and useful.”

Google’s Clever Plan to Stop Aspiring ISIS Recruits


Google’s Satellite Map Gets 700-Trillion-Pixel Makeover



Google Maps’s newest depiction of the San Francisco Bay area includes certain features, like the new span of the Bay Bridge, that just weren’t there in 2013. // Google/Landsat

“NEXT GOV”

“More than 1 billion people use Google Maps every month, making it possibly the most popular atlas ever created.

On Monday, its users will see something different when they examine the planet’s forests, fields, seas and cities.

Google has added 700 trillion pixels of new data to its service. The new map, which activates this week for all users of Google Maps and Google Earth, consists of orbital imagery that is newer, more detailed, and of higher contrast than the previous version.

Most importantly, this new map contains fewer clouds than before—only the second time Google has unveiled a “cloudless” map. Google had not updated its low- and medium-resolution satellite map in three years.

The improvements can be seen in the new map’s depiction of Christmas Island. Almost 1,000 miles from Australia, the island was largely untouched by human settlement until the past two centuries. Its remoteness gives it a unique ecology, but—given its location in the middle of the tropical Indian Ocean—it is frequently obscured by clouds. The new map clears these away:

Google / Landsat
A 99-acre immigration detention center operated on behalf of the Australian government can now be clearly seen; it’s the only tan splotch of development in the island’s northwestern “arm.” In the old version of the map, the detention center was harder to distinguish from clouds. The island’s eastern settlements are also now completely visible. Compare the old version of the map:

Google / Landsat
The new map also does not include the darker diagonal lines that seem to slice across the older scene above. These lines were caused by a physical malfunction on Landsat 7, the U.S. government satellite that supplied the older map’s imagery data. The new version of the map includes data from Landsat 8, the newer version of the same satellite, letting Google clear away the ugly artifacts.”
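The standard way to build such a “cloudless” mosaic is to stack many passes over the same location and keep a per-pixel statistic that rejects bright, transient clouds. The NumPy sketch below shows the idea with synthetic data; it is a simplified illustration, not Google’s actual processing pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Twelve passes over the same 64x64 patch of dark ground...
passes = rng.uniform(0.05, 0.25, size=(12, 64, 64))
for scene in passes:  # ...each with random bright clouds injected
    mask = rng.random(scene.shape) < 0.2
    scene[mask] = rng.uniform(0.8, 1.0, size=mask.sum())

# The per-pixel median ignores outlier (cloudy) observations as long as
# most passes saw the ground, yielding a near cloud-free composite;
# only pixels cloudy in a majority of passes survive.
composite = np.median(passes, axis=0)
print("cloudy pixels remaining:", int((composite > 0.5).sum()))
```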

Read the rest of the story here.

Google Wants to Buy Your Patent to Keep It Away From Trolls



Image: Macobserver.com

“WIRED”

“For Google, a company with the ability and desire to put patents to use, offering to buy them before the trolls have a chance to snag them makes sense.

Yesterday the search giant unveiled a program it’s calling Patent Purchase Promotion, a new marketplace where patent holders are invited to tell Google about patents they’re willing to sell, at a price they themselves have set.

The US patent system isn’t just broken. It’s being abused to curb innovation, handicap inventors, and redirect company resources toward pointless and lengthy litigation.

Now, Google says it has a new idea for fixing the mess.

“Unfortunately, the usual patent marketplace can sometimes be challenging, especially for smaller participants who sometimes end up working with patent trolls,” wrote Allen Lo, Google’s deputy general counsel for patents, in a blog post announcing the program. “Then bad things happen, like lawsuits, lots of wasted effort, and generally bad karma. Rarely does this provide any meaningful benefit to the original patent owner.”

The program is geared toward companies, but Google intentionally left participation requirements open so that individual inventors could also take part, Kurt Brasch, a senior patent licensing manager at Google, wrote in an email to WIRED.

Opening the Process

The patent landscape has long been fraught with dysfunction. Some firms tangle themselves up in patent wars, filing suits against each other ostensibly over the right to push the limits of innovation. But it can often seem like the end goal is merely to hamstring the competition or make massive amounts of money through litigation rather than actually making something.

Then there are the “trolls”—typically shell corporations that don’t actually make or sell anything—who seek to enforce patents they own. Inventors, whether they work within companies or independently, have to navigate this minefield of ill-intentioned players when thinking about creating new technologies. Firms and activists, meanwhile, hope to avoid having valuable patents fall into the hands of the trolls. For Google, a company with the ability and desire to put patents to use, offering to buy them before the trolls have a chance to snag them makes sense.

But the process could also offer Google other advantages as a kind of market research. The company could find out what good patents are out there and what their holders think they’re worth. Google’s open-submission approach would seem to be a novelty in the arena of protective patent-buying. Yes, there are patent management firms like RPX, which buys up patents defensively so they can’t be used as ammunition in the patent wars. But the process of how these companies acquire new patents—and for how much—tends to be hidden from the public.

For that matter, Google itself would not say exactly how much it will reveal about the results of its own experiment, or how open it will be about what it does with the patents it acquires. Whatever its goals, the company is in a unique position to reach out to patent holders both large and small. The more interesting question, however, is how well the end result of the program will align with Google’s famous corporate motto: “Don’t be evil.”

http://www.wired.com/2015/04/google-wants-buy-patent-keep-away-trolls/

Artificial Intelligence (AI) Has Arrived – And That Worries the World’s Brightest Minds



“WIRED”

“Musk and physicist Stephen Hawking worry that an uncontrolled hyper-leap in the cognitive ability of AI—a slightly scary theoretical prospect—could one day spell doom for the human race.

“The Future of AI: Opportunities and Challenges” was an unprecedented meeting of the minds that brought academics like Oxford AI ethicist Nick Bostrom together with industry bigwigs like Skype founder Jaan Tallinn and Google AI expert Shane Legg.

In the past five years, advances in artificial intelligence—in particular, within a branch of AI algorithms called deep neural networks—have put AI-driven products front and center in our lives. Google, Facebook, Microsoft and Baidu, to name a few, are hiring artificial intelligence researchers at an unprecedented rate, and putting hundreds of millions of dollars into the race for better algorithms and smarter computers.

AI problems that seemed nearly unassailable just a few years ago are now being solved. Deep learning has boosted Android’s speech recognition, and given Skype Star Trek-like instant translation capabilities. Google is building self-driving cars, and computer systems that can teach themselves to identify cat videos. Robot dogs can now walk very much like their living counterparts.

“Things like computer vision are starting to work; speech recognition is starting to work. There’s quite a bit of acceleration in the development of AI systems,” says Bart Selman, a Cornell professor and AI ethicist who was at the event with Musk. “And that’s making it more urgent to look at this issue.”

Given this rapid clip, Musk and others are calling on those building these products to carefully consider the ethical implications. At the Puerto Rico conference, delegates signed an open letter pledging to conduct AI research for good, while “avoiding potential pitfalls.” Musk signed the letter too. “Here are all these leading AI researchers saying that AI safety is important,” Musk said yesterday. “I agree with them.”

Google Gets on Board

Nine researchers from DeepMind, the AI company that Google acquired last year, have also signed the letter. The story of how that came about goes back to 2011, however. That’s when Jaan Tallinn introduced himself to Demis Hassabis after hearing him give a presentation at an artificial intelligence conference. Hassabis had recently founded the hot AI startup DeepMind, and Tallinn was on a mission. Since founding Skype, he’d become an AI safety evangelist, and he was looking for a convert. The two men started talking about AI and Tallinn soon invested in DeepMind, and last year, Google paid $400 million for the 50-person company. In one stroke, Google owned the largest available talent pool of deep learning experts in the world. Google has kept its DeepMind ambitions under wraps—the company wouldn’t make Hassabis available for an interview—but DeepMind is doing the kind of research that could allow a robot or a self-driving car to make better sense of its surroundings.

That worries Tallinn, somewhat. In a presentation he gave at the Puerto Rico conference, Tallinn recalled a lunchtime meeting where Hassabis showed how he’d built a machine learning system that could play the classic ’80s arcade game Breakout. Not only had the machine mastered the game, it played it with a ruthless efficiency that shocked Tallinn. While “the technologist in me marveled at the achievement, the other thought I had was that I was witnessing a toy model of how an AI disaster would begin, a sudden demonstration of an unexpected intellectual capability,” Tallinn remembered.

Deciding the dos and don’ts of scientific research is the kind of baseline ethical work that molecular biologists did during the 1975 Asilomar Conference on Recombinant DNA, where they agreed on safety standards designed to prevent manmade genetically modified organisms from posing a threat to the public. The Asilomar conference had a much more concrete result than the Puerto Rico AI confab.

At the Puerto Rico conference, attendees signed a letter outlining the research priorities for AI—study of AI’s economic and legal effects, for example, and the security of AI systems. And yesterday, Elon Musk kicked in $10 million to help pay for this research. These are significant first steps toward keeping robots from ruining the economy or generally running amok. But some companies are already going further. Last year, the Canadian robotics firm Clearpath Robotics promised not to build autonomous robots for military use. “To the people against killer robots: we support you,” Clearpath Robotics CTO Ryan Gariepy wrote on the company’s website.

Pledging not to build the Terminator is but one step. AI companies such as Google must think about the safety and legal liability of their self-driving cars, whether robots will put humans out of a job, and the unintended consequences of algorithms that would seem unfair to humans. Is it, for example, ethical for Amazon to sell products at one price to one community, while charging a different price to a second community? What safeguards are in place to prevent a trading algorithm from crashing the commodities markets? What will happen to the people who work as bus drivers in the age of self-driving vehicles?


Itamar Arel is the founder of Binatix, a deep learning company that makes trades on the stock market. He wasn’t at the Puerto Rico conference, but he signed the letter soon after reading it. To him, the coming revolution in smart algorithms and cheap, intelligent robots needs to be better understood. “It is time to allocate more resources to understanding the societal impact of AI systems taking over more blue-collar jobs,” he says. “That is a certainty, in my mind, which will take off at a rate that won’t necessarily allow society to catch up fast enough. It is definitely a concern.”

Predictions of a destructive AI super-mind may get the headlines, but it’s these more prosaic AI worries that need to be addressed within the next few years, says Murray Shanahan, a professor of cognitive robotics with Imperial College in London. “It’s hard to predict exactly what’s going on, but we can be pretty sure that they are going to affect society.”

http://www.wired.com/2015/01/ai-arrived-really-worries-worlds-brightest-minds/

Image: “Militaryaerospace.com”