Category Archives: Computer Security

Estonia Lesson Learned: “Every Country Should Have a Cyber War”


“DEFENSE ONE”

“Estonia’s biggest turning point was 10 years ago, when the country came under sustained cyberattack.

The shock of a cyberwar united the community to take action. Estonians don’t see cybersecurity as a phenomenon; it’s about being empowered by technology, not controlled by it.”


“Estonia’s steps have certainly been radical, and other countries can learn lessons from them about how to defend themselves.

In 1991, Estonia was part of the dying communist empire. Its economy was run by central planners in Moscow, less than half of all households had a phone line, and goods were so scarce that people had to line up for food.

Skip ahead 26 years, and Estonians don’t even have to queue to vote. They do that online.

In just over two decades, Estonia has become one of the world’s most digitally innovative and efficient countries. In fact, Estonians can conduct nearly all of their civic responsibilities online. Offices and paper forms have become obsolete as state-issued digital identities allow citizens to carry out almost any financial or government transaction from their laptops or cellphones. And that gives them an edge when it comes to cybersecurity.

Estonia’s journey down the digital road has been astonishingly fast. When it gained independence from the Soviet Union in 1991, it had almost no money and few natural resources. But it did have one advantage: It was the designated center for software and computer production for the USSR. After achieving independence, the country had a pool of tech expertise to build on.

During these early years of independence, Estonia needed to create the means for a new economy. And it wasn’t going to be easy. The country’s tiny population of just 1.3 million is spread over a relatively vast countryside. Outside the capital Tallinn, there’s an average of just four people per square kilometer. The new government didn’t have the resources to extend government offices or banking facilities to small towns and villages, so it decided to encourage self-service, and spread internet access across the country in order to do so.

To achieve this, the government set up an investment group to build computer networking and infrastructure. By 1997, almost every school was connected to the internet, and by 2004, 300 wifi access points had been established, bringing the internet even to small villages—and mostly for free.

In 2007, Estonia was in the middle of a political fight with Moscow over plans to remove a Soviet war memorial from a park in Tallinn. Suddenly, it was hit with three weeks of DDoS (distributed denial of service) attacks. In such an attack, many sources flood a service or system with requests until it can no longer function. It’s the digital equivalent of crowding the entrance to a building so that no one can get in or out.
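
To make the mechanics a bit more concrete, here is a minimal, illustrative sketch of the kind of per-client rate limiting used to absorb request floods; it is a toy example with assumed limits, not code from any Estonian system.

```python
import time
from collections import defaultdict, deque

# Toy sliding-window rate limiter: allow at most `limit` requests per
# client within `window` seconds; anything beyond that is rejected.
class RateLimiter:
    def __init__(self, limit=100, window=1.0):
        self.limit = limit
        self.window = window
        self.history = defaultdict(deque)  # client_ip -> recent request timestamps

    def allow(self, client_ip):
        now = time.monotonic()
        q = self.history[client_ip]
        while q and now - q[0] > self.window:   # drop timestamps outside the window
            q.popleft()
        if len(q) < self.limit:
            q.append(now)
            return True
        return False                            # flood suspected: reject

limiter = RateLimiter(limit=5, window=1.0)
print([limiter.allow("203.0.113.7") for _ in range(8)])  # first 5 True, rest False
```

Real defenses layer far more on top of this (upstream filtering, traffic scrubbing, spare capacity), but the principle is the same: cap what any single source can demand.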

As a result, websites across the country were knocked offline as they were bombarded with traffic. Russia denied any involvement, but Estonia didn’t believe it.

“War is the continuation of policy by other means,” Estonian president Kersti Kaljulaid told a NATO cyber-conference in Tallinn in June 2017. “Ten years on, it is clear that the decision made by Estonia not to withdraw but stay and fight for the security of our cyberspace was indeed the right one.”

The attacks made Estonia more determined than ever to develop its digital economy and make it safe from future attacks. “I think every country should have a cyber war,” says Taavi Kotka, the government’s former chief information officer. “Citizens get knowledge about what an attack means, about how phishing works, how D-DoS works, and they start to understand and live with that. People aren’t afraid if they know they can survive something. It’s the same thing as electricity going off: Okay, it’s an inconvenience, but you know how to deal with it.”

In Estonia, people are not afraid of cyber warfare, nor are they afraid of sharing personal data across public and private institutions. Go to a hospital, and the nurse or doctor can call up your entire health records from any doctor you have ever visited, without needing to call their offices and ask them to send files.

Full marks for convenience, simplicity, and efficiency. But what about the dangers of nameless bureaucrats accessing your personal data? Isn’t there a risk of future governments abusing the system and using your intimate details against you? Isn’t this inviting an Orwellian nightmare?

Estonia says no. Unlike in an authoritarian state such as the old Soviet Union, transparency is built into the system. While all your private data is online, only you can give permission for any of it to be accessed. And you can check who has accessed what. If a doctor you don’t know has viewed your records, it will be traceable, and you can have them sacked. As one software developer Quartz spoke to said, “You become your own Big Brother.”

Data is protected through a framework known as X-Road, a decentralized data-exchange layer that connects the government’s major databases. X-Road has built-in security measures that encrypt and time-stamp traffic so that data cannot be manipulated. Taimar Peterkop, from Estonia’s Information System Authority, says that the security measures built into e-identity databases are all but impenetrable to outsiders. “Estonia takes data integrity very seriously because our society is so digitized,” he says. “If someone manipulates citizens’ data, that’s a challenge for us. We use blockchain-based technology to ensure the data is as it should be.”
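
The article doesn’t spell out X-Road’s internals, but the general idea behind tamper-evident, time-stamped records can be sketched with a simple hash-linked log. The snippet below is purely illustrative and is not X-Road’s actual implementation.

```python
import hashlib, json, time

# Toy hash-linked audit log: each entry commits to the previous entry's hash,
# so silently altering any old record breaks every hash that follows it.
def append_entry(log, record):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"record": record, "timestamp": time.time(), "prev_hash": prev_hash}
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

def verify(log):
    prev_hash = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or recomputed != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"doctor": "dr_tamm", "action": "viewed_record", "patient": "3921"})
append_entry(log, {"doctor": "dr_kask", "action": "viewed_record", "patient": "3921"})
print(verify(log))                      # True
log[0]["record"]["doctor"] = "someone_else"
print(verify(log))                      # False: tampering is detectable
```

Because each entry commits to the hash of the one before it, quietly editing an old access record would break every hash that follows, which is the property the “you can check who has accessed what” guarantee depends on.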

When it comes to security, Peterkop says humans are usually the weak link. “Cybersecurity starts with us. If you have weak cyber hygiene, that’s a problem. We need to raise awareness and educate people about using strong authentication methods,” he says. For example, Estonia has public-education campaigns about how to use your smart devices wisely.

It seems like glaringly obvious advice, but a look at the recent US election shows that basic cyber hygiene has been an afterthought, even for the powerful. When John Podesta, campaign chairman for Democratic nominee Hillary Clinton, had his Gmail account hacked, Wikileaks founder Julian Assange claimed Podesta’s password was simply the word “password.” The campaign denied this claim and said it had fallen victim to a phishing scam. Whatever the case, it was an avoidable security breach that should never have occurred.

Peterkop also says that consumers need to ask more questions about the Internet of Things, especially when it comes to everyday household products and devices. “There is so much pressure to come up with new products in a hurry, so security measures are an afterthought,” he says. “As consumers, it’s essential that we start paying attention to it. We don’t do enough risk mitigation. Basically every TV is a computer now.” These issues are already present: A recent document dump from Wikileaks points to hacking tools that directly relate to Samsung televisions.

Estonia’s steps have certainly been radical, and other countries can learn lessons from them about how to defend themselves. As well as creating a paperless public service, Estonia is now backing up government data on secure servers offsite in Luxembourg. It has also prioritized tougher international action for cyber-crime and encouraged private companies to review security measures and have stronger agreements with server providers.”

http://www.defenseone.com/technology/2017/08/every-country-should-have-cyber-war-what-estonia-learned-russian-hacking/140217/?oref=d-mostread

 

A New Tool for Looking at Federal Cybersecurity Spending


Image:  “Taxpayers for Common Sense”

“THE PROJECT ON GOVERNMENT OVERSIGHT”

“A new database and visualization tool that breaks down unclassified federal spending on cybersecurity over the past decade—giving the public a peek at how each major federal agency is devoting resources toward protecting computer systems.”


“More and more of what the federal government does relies on complex computer systems and networks. This high tech infrastructure makes the government work better by making services more efficient and accessible.

But that digital revolution also comes with big risks—just think back to the massive data breach at the Office of Personnel Management disclosed in 2015, when hackers compromised sensitive information about tens of millions of Americans. Last year, there were at least “30,899 cyber incidents that led to the compromise of information or system functionality” at federal agencies, according to a White House report released in March. The number of attacks on federal computer systems has risen sharply over the last decade.

So how much is the government spending to protect itself (and us) in this brave new world?

Unfortunately, the answer is “we don’t really know.” But a new tool from nonpartisan watchdog group Taxpayers for Common Sense provides perhaps the most comprehensive analysis of federal cybersecurity spending.

Last week, Taxpayers released a new database and visualization tool that breaks down unclassified federal spending on cybersecurity over the past decade—giving the public a peek at how each major federal agency is devoting resources toward protecting computer systems.

Taxpayers used public budget documents to build the database, but it wasn’t easy. “There is no government-wide standard definition or method of accounting for what qualifies as cyber funding and, therefore, no way to fully track it,” the organization explains on its methodology page. Agencies also use a variety of different approaches to tackle the issue, making it even harder to pin down their spending. Then, there is the government’s murky “black budget” of classified spending. So Taxpayers “settled on providing the best picture [it] could develop from extensive research of government programs” that are unclassified, spending two years searching through thousands of budget documents for terms like “information security” and “information assurance.”

Taxpayers found the amount spent on cybersecurity has quadrupled over 11 years. The group was able to tally $7 billion in unclassified cybersecurity spending in 2007, as compared to $28 billion in 2016. But some of that growth could be attributed to improvements in how the government tracks cybersecurity funding.

The resulting snapshot isn’t perfect, but it’s an impressive start—and a necessary one. After all, you can’t figure out what bang the government gets for its cybersecurity buck if you don’t know where those bucks go.”

http://www.pogo.org/blog/2017/08/a-new-tool-for-looking-at-federal-cybersecurity-spending.html


Flush Times for Hackers in Booming Cyber Security Job Market


A recruiter advertises a QR code to attract hackers to apply for jobs at the Black Hat security conference in Las Vegas, Nevada, U.S., July 27, 2017. Joseph Menn

“REUTERS”

“One of the outside firms that handle such programs, HackerOne, said it has paid out $18.8 million since 2014 to fix 50,140 bugs, with about half of that work done in the past year.

Mark Litchfield made it into the firm’s “Hacker Hall of Fame” last year by being the first to pull in more than $500,000 in bounties through the platform, well more than he earned at his last full-time security job, at consulting firm NCC Group.”


“The surge in far-flung and destructive cyber attacks is not good for national security, but for an increasing number of hackers and researchers, it is great for job security.

The new reality is on display in Las Vegas this week at the annual Black Hat and Def Con security conferences, which now have a booming side business in recruiting.

“Hosting big parties has enabled us to meet more talent in the community, helping fill key positions and also retain great people,” said Jen Ellis, a vice president with cybersecurity firm Rapid7 Inc, which filled the hip Hakkasan nightclub on Wednesday at one of the week’s most popular parties.

Twenty or even 10 years ago, career options for technology tinkerers were mostly limited to security firms, handfuls of jobs inside mainstream companies, and in government agencies.

But as tech has taken over the world, the opportunities in the security field have exploded.

Whole industries that used to have little to do with technology now need protection, including automobiles, medical devices and the ever-expanding Internet of Things, from thermostats and fish tanks to home security devices.

More insurance companies now cover breaches, with premiums reduced for strong security practices. And lawyers are making sure that cloud providers are held responsible if a customer’s data is stolen from them and otherwise pushing to hold tech companies liable for problems, meaning they need security experts too.

The non-profit Center for Cyber Safety and Education last month predicted a global shortage of 1.8 million skilled security workers in 2022. The group, which credentials security professionals, said that a third of hiring managers plan to boost their security teams by at least 15 percent.

For hackers who prefer to pick things apart rather than stand guard over them, an enormous number of companies now offer “bug bounties,” or formal rewards, for warnings about vulnerabilities that leave them exposed to criminals or spies.

In the old days, “The only payout was publicity, free press,” Litchfield said. “That was the payoff then. The payoff now is literally to be paid in dollars.”

There are other emerging ways to make money too. Justine Bone’s medical hacking firm, MedSec, took the unprecedented step last year of openly teaming with an investor who was selling shares short, betting that they would lose value.

It was acrimonious, but St Jude Medical ultimately fixed its pacemaker monitors, which could have been hacked, and Bone predicted others will try the same path.

“Us cyber security nerds have spent most of our careers trying to make the world a better place by engaging with companies, finding bugs which companies may or may not repair,” Bone said.

“If we can take our expertise out to customers, media, regulators, nonprofits and think tanks and out to the financial sector, the investors and analysts, we start to help companies understand in terms of their external environment.”

Chris Wysopal, co-founder of code auditor Veracode, bought in April by CA Technologies, said that he was initially skeptical of the MedSec approach but came around to it, in part because it worked. He appeared at Black Hat with Bone.

“Many have written that the software and hardware market is dysfunctional, a lemon market, because buyers don’t know how insecure the products they purchase are,” Wysopal said in an interview.

“I’d like to see someone fixing this broken market. Profiting off of that fix seems like the best approach for a capitalism-based economy.”

Reporting by Joseph Menn and Jim Finkle; additional reporting by Dustin Volz; Editing by Jonathan Weber and Grant McCool

https://www.reuters.com/article/us-cyber-conference-business-idUSKBN1AD001

Whistleblower Hotlines: A Valuable Tool


Photo: iStock

“NATIONAL DEFENSE MAGAZINE”

“An effective ethics reporting tool, implemented as part of an ethics and compliance program, can help an organization detect and resolve potential misconduct issues.

It can also help support a culture of integrity and responsibility within the workplace.

Misconduct in the workplace can be devastating. The Association of Certified Fraud Examiners’ “2016 Report to the Nations” estimates that, on average, organizations lose 5 percent of revenue per year due to fraud and other misconduct.

Many organizations have implemented active and deliberate misconduct-detection processes. “Active” means that a person, or an internal control method, has been put in place and is instrumental in looking for fraud and other misconduct. Compare that to “passive” detection, in which the organization learns of unethical activity only after the fact or by accident.

How does an ethics reporting tool, such as a whistleblower hotline, fit in? It could be labeled a “passive” tool because fraud or other misconduct is often reported after it has happened. However, an ethics reporting tool can help to shed light earlier on misconduct that might otherwise continue for any length of time and cause more damage.

Knowing about misconduct sooner enables an organization to put a stop to it earlier. According to the report, the median duration of fraud prior to detection is about 18 months. For smaller organizations, early detection could mean the difference between surviving or going out of business.

A whistleblower hotline doesn’t just help bring fraud to the forefront. Other types of misconduct commonly reported using these systems are harassment, discrimination, workplace health and safety violations, alcohol/drug abuse, violence in the workplace, and conflicts of interest — to name a few.

Once an ethics program has been implemented, it needs to engage every employee, from the top down. It can’t just exist as window dressing.

Senior management needs to be committed to the ethics program and sincere about sharing their commitment with employees. Employees learn acceptable workplace behavior by taking cues from leadership. If management doesn’t believe in the ethics program and model leading with integrity themselves, employees are not likely to use the reporting tool to report any unethical conduct.

Employees may also be skeptical about coming forward to report perceived misconduct. Many people are concerned that even if they do make a report, no corrective action will be taken. But the biggest fear for employees is retaliation by co-workers and management. Ethics program best practices, as well as regulatory standards, call for ethics hotlines to ensure confidentiality for employees who report concerns and offer the option for anonymity.

External third-party ethics hotlines, which often include a case management database, can help. Third-party programs provide the ability for management and the reporter to communicate with each other about the allegation securely, within the system, enabling management to gather more information while protecting the whistleblower’s identity. This ensures a more thorough investigation of the alleged misconduct, getting to the bottom of any serious issues sooner, before they escalate.

Customizable third-party whistleblowing systems allow companies to create a program best suited to the needs of their organization, regardless of industry. They log and date-stamp every report and allow each case to be managed through to closure.

The ability to include a company’s national or global locations as part of the reporting process enables all incidents to be funneled into one system in an organized manner.

Every industry has its own unique risk concerns and customizable third-party systems help management spot and track issues and trends, no matter the location, the department or the issue.

A whistleblower who is not comfortable talking with their supervisor wants to know where they can go to report ethical concerns and remain anonymous. An anonymous hotline removes many of the obstacles to reporting inappropriate behavior and gives employees, suppliers and vendors the ability to raise genuine concerns about illegal or unethical behavior.

Ethics hotlines also reduce the risk of individuals going outside the organization with their concerns, potentially damaging an organization’s reputation and causing further financial harm.

Every employee wants to know that his or her voice matters in the organization. That’s why encouraging a speak-up culture is important. Employees want to know they are part of the success of the company. Encouraging them to speak up about wrongdoing and showing them that their concerns do matter and are taken seriously creates more motivated employees who truly want to participate in the company’s future.

Many companies believe they are too small to warrant an ethics reporting system. There’s a belief that there’s too much complexity and work involved. But putting in extra upfront effort to set up a customizable program that is right for the company is well worth it when the result is more open communication, happier employees, reduced risk, and future growth and success.

When an organization implements a confidential and anonymous third-party ethics hotline, it lets employees and stakeholders know that it is serious about adherence to its code of conduct, it takes all reports of misconduct seriously, and it does not tolerate retaliation towards anybody reporting perceived misconduct.

If company leaders truly want to promote a speak-up culture, and give employees a safe place to come forward to report ethics and compliance concerns, then one of the best ways is to provide employees security and comfort of anonymity and confidentiality via a whistleblower hotline.”

http://www.nationaldefensemagazine.org/articles/2017/7/17/whistleblower-hotlines-a-valuable-tool

Army Turns To Industry For Network Overhaul


Army Command Post

“BREAKING DEFENSE”

“The Army today has about 20 different software “baselines,” with different units and offices using inconsistent and often incompatible programs, often because their hardware is too old to handle anything better.

The resulting patchwork of networks is expensive to operate and difficult to secure against cyber attack. So the service wants to upgrade everyone to a single, consistent, up-to-date baseline within two years.

Want to sell information technology to the US Army? Then you need to write this down: Paul.A.Ostrowski.mil@mail.mil. That’s the email of the general seeking industry’s input — historically something of a struggle for the service — as the Army reviews and overhauls its networks.

The Army’s long-term goal: a single unified network connecting everything from the home base to the battlefield, easy for the service to upgrade, easy for soldiers to use amidst the stress of combat, and hard for enemies to take down. The Army’s immediate question for industry: Can you build it?

Lt. Gen. Ostrowski, the director of the Army Acquisition Corps, wants you to write him if you want in on a series of roundtables the Army is holding with selected companies, hosted by the federally funded Institute for Defense Analyses (IDA). One roundtable was personally led by the Army Chief of Staff, the hard-charging, wisecracking Gen. Mark Milley, who is taking a hands-on role in the review he launched in May.

“Who’s in charge? The Chief’s in charge…. he and the Secretary of the Army,” Ostrowski said at yesterday’s Association of the US Army conference on networks. Those top leaders have brought together the Army’s Chief Information Officer/G-6 (chief signals officer), the Army resourcing staff (G-8), the Training & Doctrine Command that brainstorms future warfare concepts and writes requirements for new systems, and the acquisition officials who buy them.

“What’s different is the involvement of the leadership,” said Army CIO Gary Wang, who’s leading the review for Gen. Milley. While the Pentagon bureaucracy does plenty of reviews, he told me, “oftentimes it’s delegated down to a much lower level.” This time, though, the severity of the Army’s “financial constraints” have gotten the Chief of Staff and Acting Army Secretary Robert Speer personally involved, Wang said.

There’s another reason Wang didn’t mention: the savage criticism in Congress of the Army’s flagship battlefield network, WIN-T. Gen. Milley himself said the network is too “fragile” and “vulnerable” for future battles against high-tech adversaries like Russia or China, because its transmissions are too easily detected and then jammed or hacked.

Beyond WIN-T

“WIN-T’s our current network,” Ostrowski said when I asked him about the system. “We’re an Army that has to fight tonight, and WIN-T will be very much part of that. Period. That gets that off the table.” Then he moved on to other topics — notably not saying what this review would mean for WIN-T in the future.

But this review goes well beyond WIN-T, Milley and Speer have emphasized. It covers all the Army’s networks, both for combat units and back-office business operations. The crucial issue, Ostrowski said, is “how do we simplify the network? Right now we have a lot of parts and pieces. We’ve gone out and bought a lot of stuff that’s incredible in terms of its capabilities, but we’ve got to simplify: We’ve got to make this soldier-intuitive; we’ve got to make it soldier-maintainable and soldier-operable.”

The Army today has about 20 different software “baselines,” with different units and offices using inconsistent and often incompatible programs, often because their hardware is too old to handle anything better. The resulting patchwork of networks is expensive to operate and difficult to secure against cyber attack. So the service wants to upgrade everyone to a single, consistent, up-to-date baseline within two years.

What’s more, cybersecurity in the narrow sense is not enough. The Army can’t just focus on hackers sending malicious code over the internet: It also has to worry about electronic warriors jamming, triangulating, or eavesdropping on radio transmissions. That’s a uniquely military problem. Yes, civilian mobile phones also rely on radio — that’s what “wireless” means — but only to reach the nearest cell tower, which is often plugged into fiber optic cable; battlefield wireless networks rely on long-distance radio, which is much more vulnerable.
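
A back-of-the-envelope free-space path-loss calculation (illustrative physics only, not an Army analysis, and with an assumed 300 MHz frequency) shows why distance matters so much here.

```python
import math

# Back-of-the-envelope free-space path loss: the farther a radio link must
# reach, the more power the transmitter must radiate, and the farther away
# an eavesdropper can still hear it.
def fspl_db(distance_km, freq_mhz):
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

for d_km, label in [(0.5, "phone to a nearby cell tower"), (30.0, "long-haul battlefield link")]:
    print(f"{label:29s} {d_km:5.1f} km  path loss at 300 MHz = {fspl_db(d_km, 300):.1f} dB")
```

The roughly 35 dB gap between those two links corresponds to several thousand times more radiated power for the same received signal strength, which is a large part of why long-range battlefield radio is so much easier to detect and direction-find than a short hop to a cell tower.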

A Daunting Task

So what does the Army want from its future network, and therefore from industry?

First and most fundamentally, Ostrowski told the AUSA conference, the review is driven by rapidly evolving threats, because the network needs to be ready to go to “fight and win our nation’s wars” against those threats. The Army must stand ready “to deploy rapidly, anywhere, anytime, to shape, prevent, and win, against any foe in any domain — domain being cyber, space, air, land, or maritime — and any environment — environment being megacity, desert, jungle, arctic.” So the network must be able to operate, and the soldiers using it must be able to reliably communicate, in all those conditions, under attack by any of those threats, and on the move, without stopping to set up radio antennas or lay fiber optic cables.

To that end, the network must be “simple and intuitive,” Ostrowski said, easy for soldiers to operate without extensive training or constant tweaking. Soldiers must be able to keep it running without relying on legions of industry Field Service Representatives (FSR), as was often the case in Afghanistan and Iraq.

The network must also be easy to upgrade as technology changes, without having to start the whole laborious procurement process over again, and without being locked into one company’s intellectual property that no other firm can touch. “I will tell you up front, that if you’re going to bring proprietary solutions to the table, don’t come,” Ostrowski said. Instead, the network must be built on open standards, allowing any company to offer upgrades just as any company that meets Apple’s standards can sell apps for the iPhone.

Just as the network has to be open to different companies’ products, Ostrowski continued, “it has to be accessible to our allied partners,” allowing friendly nations’ networks to connect with ours.

Finally, the network must be secure against cyberattack, resilient to the damage of those attacks that do get through, and able to transmit its wireless signals in a way the enemy cannot easily detect. (The technical terms are Low Probability of Detection (LPD) and Low Probability of Intercept (LPI)).

This is a daunting list of desiderata, but engineers from both the Army and “numerous companies” are already “whiteboarding” how they would achieve them, Ostrowski said. “My name and number (are) up there,” he said, pointing to his slides. “I need you to let me know if you want to play.”

Who’s facilitating all this interaction? The Institute for Defense Analyses (IDA), a federally funded research & development corporation that Congress had already chartered to study the Army network, said Maj. Gen. Peter Gallagher, who works for CIO Wang as director of architecture, operations, networks, and space. Gallagher told me he doubted if he’d ever seen a review this intensive, adding the full-court outreach to industry was “something Gen. Milley personally directed.”

“We rely on industry for everything we do,” Gallagher said simply.”

http://breakingdefense.com/2017/07/army-chief-milley-turns-to-industry-for-network-overhaul/


DoD Is Buying Fewer Commercial Items. Oops!


“BREAKING DEFENSE”

“One constant in the acquisition reform debate of the last two decades … “buy more commercial items in a commercial fashion, and do it quickly and cheaply.”

But a report by the Government Accountability Office analyzing a decade of data in the federal acquisition database finds that the Pentagon’s purchase of commercial items has declined since 2007.

Now, nobody argued that you could buy F-35s or ships that way, but as competitors such as China and Russia fielded weapons in double-quick time, and as software and computer hardware became increasingly important to a weapon’s effectiveness, speeding up purchases and lowering their costs grew in importance.

To build bridges with the commercial sector and to ensure the military sped up its adoption of technology advances — especially in software and commercial IT — former Defense Secretary Ash Carter created the Strategic Capabilities Office and the Defense Innovation Unit Experimental, fondly known as the DIUX. They were supposed to help accelerate the purchase of commercial technology, bolstered by a raft of legal and policy changes over the last decade.

“The data now supports what was long suspected — that the purchase of commercial items was declining. The question is why? The answer can likely be found in the overreaction to the perceived contracting abuses of the Iraq War.

“While commercial items and the Iraq War shouldn’t be linked, they became so in the so-called ‘war on profits’ that was initiated early on in the Obama Administration,” Greenwalt argues. “In a typical overreaction applied to a different set of circumstances … the DOD bureaucracy, instead of going after bloated cost-type contracts and move to a more fixed-price, commercial-like, performance-based contracting approach, decided to do the opposite and rein in commercial item contracts where profit margins are traditionally higher.”

Part of the problem appears to be that Pentagon acquisition officials just don’t know much about buying commercially. To cope with that, the Defense Contract Management Agency (DCMA) created six Commercial Item Centers of Excellence staffed with engineers and price/cost analysts to advise contracting officers in how to determine what can be bought commercially.

“According to DCMA officials,” the GAO report says, “experts at these centers began reviewing cases in June 2016 and since then have examined 437 cases that contained approximately 2300 items. They recommended that the contracting officer make a determination that an item was commercial in 94 percent of the cases reviewed.”

But Greenwalt isn’t really optimistic, even though he pushed hard to make sure the acquisition community had the policy and legal tools to buy more commercially.

“The linkage between higher profits and higher risks and performance that occurs on commercial item contracts was forgotten in order to keep as many traditional cost-type programs (with somewhat reduced fees) going during a budgetary downturn,” he says. “Congress acted in the last two NDAAs to try and roll back this situation, but since none of the rules to implement new commercial item legislation have been enacted yet, it is doubtful we will see much improvement soon in the statistics.”

http://breakingdefense.com/2017/07/dod-is-buying-fewer-yes-fewer-commercial-items-oops/

 

Pentagon Studies Weapons That Can Read Users’ Minds

still from DARPA video

DARPA’s Revolutionizing Prosthetics program is devising new kinds of artificial limbs — and new ways to control them.

“BREAKING DEFENSE”

“The troops of tomorrow may be able to pull the trigger using only their minds.

As artificially intelligent drones, hacking, jamming, and missiles accelerate the pace of combat, some of the military’s leading scientists are studying how mere humans can keep up with the incredible speed of cyber warfare, missiles and other threats.

One option: Bypass crude physical controls — triggers, throttles, keyboards — and plug the computer directly into the human brain. In one DARPA experiment, a quadriplegic first controlled an artificial limb and then flew a flight simulator. Future systems might monitor the users’ nervous system and compensate for stress, fatigue, or injury. Is this the path to what the Pentagon calls human-machine teaming?

This is an unnerving scenario for those humans, like Stephen Hawking, who mistrust artificial intelligence. If your nightmare scenario is robots getting out of control, “let’s teach them to read our minds!” is probably not your preferred solution. It sounds more like the beginning of a movie where cyborg Arnold Schwarzenegger goes back in time to kill someone.

But the Pentagon officials who talked up this research yesterday at Defense One’s annual tech conference emphasized the objective was to improve human control over artificial intelligence. Teaching AI to monitor its user’s level of stress, exhaustion, distraction, and so on helps the machine adapt itself to better serve the human — instead of the other way around. Teaching AI to instantly detect its user’s intention to give a command, instead of requiring a relatively laborious push of a button, helps the human keep control — instead of having to let the AI off the leash because no human can keep up with it.

Official Defense Department policy, as then-Secretary Ash Carter put it, is that the US will “never” allow an artificial intelligence to decide for itself whether or not to kill a human being. However, no less a figure than Carter’s undersecretary of acquisition and technology, Frank Kendall, fretted publicly that making our robots wait for human permission would slow them down so much that enemy AI without such constraints would beat us. Vice-Chairman of the Joint Chiefs, Gen. Paul Selva, calls this the “Terminator Conundrum.” Neuroscience suggests a way out of this dilemma: Instead of slowing the AIs down, make the humans’ orders come faster.

Accelerate Humanity

“We will continue to have humans on the loop, we will have human input in decisions, but the way we go about that is going to have to shift, just to cope with the speed and the capabilities that autonomous systems bring,” said Dr. James Christensen, portfolio manager at the Air Force Research Laboratory’s 711th Human Performance Wing. “The decision cycle with these systems is going to be so fast that they have to be sensitive to and responsive to the state of the individual (operator’s) intent, as much as overt actions and control inputs that the human’s providing.”

In other words, instead of the weapon system responding to the human operator physically touching a control, have it respond to the human’s brain cells forming the intention to use a control. “When you start to have a direct neural interface of this type, you don’t necessarily need to command and control the aircraft using the stick,” said Justin Sanchez, director of DARPA’s Biological Technologies Office. “You could potentially re-map your neural signatures onto the different control surfaces” — the tail, the flaps — “or maybe any other part of the aircraft” — say landing gear or weapons. “That part hasn’t really been explored in a huge amount of depth yet.”

Reading minds, even in this limited fashion, will require deep understanding and close monitoring of the brain, where thoughts take measurable form as electrical impulses running from neuron to neuron. “Can we develop precise neurotechnologies that can go to those circuits in the brain or the peripheral nervous system in real time?” Sanchez asked aloud. “Do we have computational systems that allow us to understand what the changes in those signals (mean)? And can we give meaningful feedback, either to the person or to the machine to help them to do their job better?”
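
The article doesn’t describe the decoding methods themselves, but the basic loop (read a window of neural signal, decide whether it carries an intended command, map that decision to a control input) can be sketched in a few lines. The toy detector below is purely illustrative, with made-up signals and a simple threshold rule standing in for the far richer decoding real brain-computer interfaces use.

```python
import math, random

# Toy "intent detector": decide whether a short window of simulated neural
# signal carries an intended command by comparing its mean power against a
# threshold calibrated on resting data. Real interfaces decode far more.
random.seed(0)

def mean_power(window):
    return sum(x * x for x in window) / len(window)

def detect_intent(window, threshold):
    return mean_power(window) > threshold

rest = [random.gauss(0.0, 1.0) for _ in range(256)]           # calibration: operator at rest
threshold = 2.0 * mean_power(rest)

command = [random.gauss(0.0, 1.0) + 2.0 * math.sin(0.5 * i)   # injected oscillation stands in
           for i in range(256)]                                # for a command's neural signature

print(detect_intent(rest, threshold))     # False: no command intended
print(detect_intent(command, threshold))  # True: would be mapped to a control input
```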

DARPA’s Revolutionizing Prosthetics program hooked up the brain of a quadriplegic — someone who could move neither arms nor legs — to a prosthetic arm, allowing the patient to control it directly with their thoughts. Then, “they said, ‘I’d like to try to fly an airplane,’” Sanchez recounted. “So we created a virtual flight simulator for this person, allowed this neural interface to interface with the virtual aircraft, and that person flew.”

“That was a real wake-up call for everybody involved,” Sanchez said. “We didn’t initially think you could do that.”

Adapting To The Human

Applying direct neural control to real aircraft — or tanks, or ships, or network cybersecurity systems — will require a fundamental change in design philosophy. Today, said Christensen, we give the pilots tremendous information on the aircraft, then expect them to adapt to it. In the future, we could give the aircraft tremendous information on its pilots, then have it use artificial intelligence to adapt itself to them. The AI could customize the displays, the controls, even the mix of tasks it took on versus those it left up to the humans — all exquisitely tailored not just to the preferences of the individual operator but to his or her current mental and physical state.

When we build planes today, “they’re incredible sensor platforms that collect data on the world around them and on the aircraft systems themselves, (but) at this point, very little data is being collected on the pilot,” Christensen said. “The opportunity there with the technology we’re trying to build now is to provide a continuous monitoring and assessment capability so that the aircraft knows the state of that individual. Are they awake, alert, conscious, fully capable of performing their role as part of this man-machine system? Are there things that the aircraft then can do? Can it unload gees (i.e. reduce g-forces)? Can it reduce the strain on the pilot?”

“This kind of ability to sense and understand the state and the capabilities of the human is absolutely critical to the successful employment of highly automated systems,” Christensen said. “The way all of our systems are architected right now, they’re fixed, they’re predictable, they’re deterministic” — that is, any given input always produces the exact same output.

Predictability has its advantages: “We can train to that, they behave in very consistent ways, it’s easier to test and evaluate,” Christensen said. “What we lose in that, though, is the real power of highly automated systems, autonomous systems, as learning systems, of being able to adapt themselves.”

“That adaptation, though, it creates unpredictability,” he continued. “So the human has to adapt alongside the system, and in order to do that, there has to be some mutual awareness, right, so the human has to understand what is the system doing, what is it trying to do, why is that happening; and vice versa, the system has to have some understanding of the human’s intent and also their state and capabilities.”

This kind of synergy between human and artificial intelligence is what some theorists refer to as the “centaur,” after the mythical creature that combined human and horse — with the human part firmly in control, but benefiting from the beast’s strength and speed. The centaur concept, in turn, lies at the heart of the Pentagon’s ideas of human-machine teaming and what’s known as the Third Offset, which seeks to counter (offset) adversaries’ advancing technology with revolutionary uses of AI.

The neuroscience here is in its infancy. But it holds the promise of a happy medium between hamstringing our robots with too-close control or letting them run rampant.”

http://breakingdefense.com/2017/07/pentagon-studies-weapons-that-can-read-users-mind/

The Business of National Cybersecurity


Business of Cyber Security

 

“FIFTH DOMAIN CYBER”

“With all the attention this subject is now receiving, one would think the business of national cyber security (commercial, government and defense) would be very robust.

Small and medium-sized businesses are not singing a happy, carefree tune. Delays in contracts, budget cuts and delayed payments seem to be the most common complaints.

It is hard to open a browser, look at a newspaper, or watch or listen to a news show without the topic of cybersecurity coming up. In mid-June, Microsoft received a lot of attention from headlines about the company’s warning of an elevated risk of cyberattacks. Another attention-grabbing headline came from Chris Childers, the CEO of the National Defense Group located in Germantown, Maryland, who shed light on the fact that many satellites in use today are dated and use old technology that was made before cyberthreats were a real issue and before cyber defenses were readily available.

With all of the headlines about cyberattacks, viruses, ransomware attacks (WannaCry) and so on, you would think cybersecurity business is booming. Odds are it is not as robust as many people think. Let’s not forget when the Department of Homeland Security said 20-plus states faced major hacking attempts during the 2016 presidential election.

Today, basic cybersecurity understanding and skills need to reach into every profession and every level of the workforce. Updating the skills of the workforce must be continuous, and this takes time and money.

Another interesting point was brought up during a recent cyber strategy thinking session: Could our adversaries be leveraging inexpensive cyberattacks and threats as economic warfare, knowing full well that we will move to identify, analyze and address the emerging threats — something that would cost us money? After all, what choice do we have?”

http://fifthdomain.com/2017/07/07/the-business-of-national-cybersecurity-commentary/


Cyber Training and Education Must Be Continuous

Cyber Training

(Photo Credit: Staff Sgt. Alexandre Montes/Air Force)

“C4ISRNET”

“Today, very few organizations require their employees and contractors to keep their cybersecurity knowledge up to date.

That must change immediately if we are to keep pace with the ever-changing cyber threat environment.

Times are certainly changing. Politics, regulation, threats, conflict and so much more are changing; but it can be difficult to adapt to all the new and emerging technologies and their applications. There is little doubt that the world’s reliance on computers and their use continue to increase rapidly. Arguably, digital transformation is the leading driver of change. All these are producing a significant amount of new data and data communication paths that are all potential targets for cyberattacks by our adversaries and criminals. The sum of all this equals changes to our knowledge base, education requirements and the cyberthreat environment. Let’s take a look at some of the stats.

Cyberattack surface area

In 2016, multiple figures showed just how large the cyberattack surface area has become. It was estimated that in 2016 the number of internet of things, or IoT, devices rose to 17.6 billion. In 2016, there were an estimated 12.5 million connected cars produced and put into operation. Also in 2016, an estimated 45 percent of Americans had either a smart home or invested in smart-home technology, according to a survey by Coldwell Banker. Now we should also include robots. In the fourth quarter of 2016, robot orders in North America surged by 61 percent. The increase in robot sales has led analysts to project that robots will take 6 percent of all U.S. jobs by 2021.

Here is something that provides a partially over-arching perspective: data storage. IDC projects data storage growth of 35 to 40 percent per year for external storage and 33 to 38 percent for internal storage. Finally, consider Gartner’s projection that “manufacturers, consumer goods companies, medical device providers and their supply chain vendors are expanding the use of 3-D printing.” Think of the data files flowing to those printers! Think about the value of those files. Theft of those files enables counterfeit products, for sure. Think about all the changes in technology and to the cyberattack surface area the above data represents.

Threats

Consider the following metrics as an indication of the current pace of change to the cyber environment. In just one quarter of 2016, PandaLabs reported 18 million new strains of malware identified and captured. That equates to roughly two and one-third new pieces of malware being identified every single second. And that is only what was found; it is anyone’s guess what was actually released. In 2016, ransomware attacks continued to grow in number; in fact, some place the growth rate at approximately 300 percent. That means in 2016 there were on average approximately 4,000 ransomware attacks occurring every day, or about two and three-fourths ransomware attacks per minute. We shouldn’t forget about the growing use of cryptocurrencies for payment in ransomware attacks. At the time of writing, there were more than 850 different cryptocurrencies with a total market capitalization of $97 billion or more. Think about all the nefarious activities that cryptocurrencies could be used to fund. The topic has proven so relevant that a recent cryptocurrency webinar drew approximately 3,000 professional attendees.
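
As a quick sanity check on the arithmetic quoted above, using only the article’s own figures:

```python
# Quick sanity check of the rates quoted above, using the article's own figures.
new_malware_per_quarter = 18_000_000
seconds_per_quarter = 365.25 / 4 * 24 * 3600
print(new_malware_per_quarter / seconds_per_quarter)  # roughly 2.28 new samples per second

ransomware_per_day = 4_000
print(ransomware_per_day / (24 * 60))                 # roughly 2.78 attacks per minute
```
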
Distributed denial of service, or DDoS, attacks in 2016 were up in frequency, intensity and the amount of flooding data. In fact, we saw the largest DDoS attack of its kind in history: one company reported DDoS traffic of 1.2 terabits per second. But hold on: think about the potential for a highly distributed IoT botnet. That is a distinct possibility evolving right before our eyes.

Impact on cyber training and education

The pace at which the cyberthreat environment is changing creates a huge challenge for our military and intelligence communities. Keeping up with these changes is a large and growing task. Considering the pace at which technology is advancing and being implemented, it is easy to see just how essential continuous education has become. With all of the changes that have taken place and continue to take place, updating the curriculum must be an ongoing activity; the same goes for the knowledge and skill-set requirements of professionals in the cybersecurity field. Today, very few organizations require their employees and contractors to keep their cybersecurity knowledge up to date. That must change immediately if we are to keep pace with the ever-changing cyberthreat environment.”

 http://www.c4isrnet.com/articles/cyber-training-and-education-must-be-continuous

 

Kill The Open Internet and Wave Goodbye to Consumer Choice

kill net neutrality

Image: Dan Wasserman Tribune Media Services “The Week”

“WIRED”

“By Terrell McSweeny (@TMcSweenyFTC) a commissioner of the Federal Trade Commission and Jon Sallet (@jonsallet), the former general counsel of the Federal Communications Commission. Both are alumni of the antitrust division of the Department of Justice.”


“Since the Bush administration, both Republican and Democratic FCC chairs have emphasized that they would take action to protect the open internet, and they have done so.

An Open Internet has worked for America, creating a virtuous circle of innovation, trust, adoption, and further innovation. That circle should not be broken.

The Net Neutrality debate can seem complicated. But at its heart, the issue rests on two simple realities: First, for more than a decade, the status quo in the US has been an open internet that supports thriving innovation among websites, apps, and new digital services. Second, innovators and consumers are dependent on a few large broadband providers that serve as gatekeepers to the internet.

In 2015, the FCC adopted its Open Internet Order to guarantee that consumers aren’t blocked or manipulated when they use their broadband connections and to ensure that competition from the internet isn’t artificially squelched. The two goals work hand in hand, because residential broadband connections are the pathways on which consumers travel to the modern world and through which the content and services of the internet reach residential users.

Two years later, the new majority at the FCC has announced that it intends to undo the 2015 order. That includes the prohibitions on blocking, throttling, and paid prioritization. But the FCC has also proposed eliminating the General Conduct rule, which protects competition.

The FCC would be mistaken to unravel a bipartisan approach that has worked. Since the Bush administration, both Republican and Democratic FCC chairs have emphasized that they would take action to protect the open internet, and they have done so. Like a police officer keeping a watchful eye at a busy intersection, the FCC’s presence has both stopped and deterred harm to consumers, competition, and innovation.

The threat to the open internet is real because competition in US broadband markets is limited, to the extent that it exists at all. About 90 million US households subscribe to the kind of broadband that runs on wires to their homes. The top four providers—two cable and two telecom—together claim three-quarters of all residential customers.

Of course, consumers can only choose among the broadband networks that reach them. Roughly 21 percent of US census blocks have no high-speed landline broadband provider, and 37 percent have only one option. This is no choice at all. For downloading data at 100 Mbps, 88 percent of the country has either no option or just one provider.

In rural America it’s much worse: More than half of rural census blocks have no choice of a high-speed broadband provider, which condemns them to slow speeds for any service they can get. Even where there are choices, the FCC has found that consumers face significant costs in switching between broadband providers. Moreover, broadband providers have the ability to target content creators selectively, making it harder for consumers to understand why they’re having trouble accessing certain content.

So it’s clear that most US consumers depend upon a few big players in order to access the internet. Therefore, the critical question is whether these companies have the incentive and ability to harm consumers and competition. That is, are they motivated to control what kinds of innovations come to consumers? And do they have the tools to do so? Both the FCC and the Department of Justice have recognized in recent proceedings that the answers are yes and yes.

Broadband providers have the power and the motivation to curb any competition that uses their networks in order to reach consumers. And we know that eliminating competition—via mergers, for example—risks consumers paying higher prices and receiving lower quality products and services. It doesn’t seem like a coincidence that the so-called new Golden Age of TV has flourished at a time when Amazon, Hulu, Netflix, and other services are producing popular, award-winning shows in direct competition with more established players.

Here’s why there’s a problem: The big broadband companies also supply video programming, which means that those firms’ revenues are directly threatened when consumers use their broadband connections to access competing video providers. The incentive for broadband companies to discriminate against online video providers will only grow stronger as the market becomes more competitive, as it has recently with the arrival of services that carry live television channels just like traditional cable operators.

When reviewing the proposed (and ultimately failed) merger of Comcast and TimeWarner Cable, economists at the Department of Justice concluded that the merged firm’s power would likely reduce competition in the video and broadband markets, leaving consumers with fewer choices, higher prices, and lower quality. And when the Department of Justice considered a proposed (and ultimately successful) merger of Charter Communications and TimeWarner Cable, it recognized the ability of cable and telephone companies to take action against new video competition and limited the new company’s ability to seek terms in programming contracts that could harm online video providers.

The 2015 Open Internet Order set forth 16 pages of economic and technological analysis to support the conclusion that “broadband providers (including mobile broadband providers) have the economic incentives and technical ability to engage in practices that pose a threat to Internet openness by harming other network providers, edge providers, and end users.”

Some argue that using traditional antitrust rules can get the same job done, and just as well. While the two of us both believe strongly in the importance of antitrust enforcement, these laws cannot duplicate the kind of prospective, industry-wide rules contained in the 2015 Open Internet Order.

Supreme Court Justice Anthony Kennedy faced precisely this argument when he wrote the majority opinion in the Supreme Court case upholding requirements that cable systems carry broadcast stations. He wrote that regulation could be preferred to antitrust because of “the considerable expense and delay inherent in antitrust litigation, and the great disparities in wealth and sophistication between [TV stations and cable systems],” as well as the burden of bringing a case, which would require “considerable expense and delay.” All of this is even more true in disputes between large broadband providers and their customers. That’s why open internet rules make sense: They let the industry know what is required while giving consumers an avenue of relief at the FCC that doesn’t require long and expensive antitrust litigation.

The economic facts are telling, but that’s not all. Consumers should be able to use their broadband connections to access the lawful content of their choosing. The FCC is reconsidering whether broadband providers should be given the new freedom to block or interfere with the ability of consumers to express their thoughts or to listen to the views they want to hear. And that threatens the kind of free speech on which America was built. In 1776, Thomas Paine didn’t need the permission of any other content creator or distributor to circulate Common Sense. But without rules prohibiting blocking, throttling, and the like, broadband providers would gain the power to limit what unpopular content flows over their networks—to the detriment of consumers and democracy. One challenger to the 2015 Open Internet Order argued exactly this to the DC Circuit: that the rules violated its right to block legal but unpopular content.

An Open Internet has worked for America, creating a virtuous circle of innovation, trust, adoption, and further innovation. That circle should not be broken.”

https://www.wired.com/story/kill-the-open-internet-and-wave-goodbye-to-consumer-choice/