Tag Archives: Cyber Warfare

COVID-19 Enhances Pentagon Cyber Policy Commission Report Recommendations

Standard

FIFTH DOMAIN

“The importance of having that one person, that singular belly button in the executive branch who’s coordinating efforts across government so that you don’t have to create an ad hoc task force, [so] you’re not scrambling to find who are the right people we need in the room after the crisis has already occurred,” said commission Co-Chairman Rep. Mike Gallagher, R-Wis.

______________________________________________________________________________

“A co-chairman of the Cyberspace Solarium Commission said April 22 that the fiscal 2021 defense policy bill could include about 30 percent of the group’s cyber policy recommendations.

Rep. Mike Gallagher, R-Wis., who co-chairs the commission, which released a report with more than 75 cyber policy recommendations March 11, said on a webinar hosted by Palo Alto Networks that commission staff is working with the appropriate congressional committees and subcommittees to put about 30 percent of its recommendations into this year’s National Defense Authorization Act.

The report proposed a three-pronged strategy for securing cyberspace, called layered deterrence: shape behavior, deny benefit and impose cost.

The report also builds on U.S. Cyber Command’s “defend forward” policy, which allows the military to take a more aggressive approach in cyberspace, and suggests broadening the policy to encompass the entire federal government.

Gallagher didn’t specifically identify recommendations he thinks will be included in the NDAA, but given that the bill focuses on authorizing Defense Department programs, Pentagon-specific recommendations are the likeliest to be in the legislative text.

The recommendations for the department focus on ensuring that the Cyber Mission Force is adequately equipped; establishing vulnerability assessments for weapons and nuclear control systems; sharing threat intelligence; and threat hunting on the networks of the defense industrial base.

The spread of the new coronavirus, COVID-19, disrupted the commission report’s rollout, which included congressional hearings on the commission’s recommendations. Those hearings have been canceled. But the pandemic also highlights the need to implement recommendations made in the report, Gallagher said, specifically the establishment of a national cyber director in the White House.

“The importance of having that one person, that singular belly button in the executive branch who’s coordinating efforts across government so that you don’t have to create an ad hoc task force, [so] you’re not scrambling to find who are the right people we need in the room after the crisis has already occurred,” Gallagher said.

Before the spread of the coronavirus, congressional committees had planned to host hearings on the commission report, but those were canceled after the coronavirus spread throughout the United States. Congress is currently wrestling with how to remotely conduct voting and committee business, as the pandemic is restricting gatherings of large groups of people.

“Even though coronavirus has complicated some of … our commission rollout, we’re continuing the legislative process right now, and I’m pretty optimistic about our ability to shape this year’s NDAA,” Gallagher said.

As for the other recommendations, Gallagher said they aren’t germane to the NDAA and will take “some time.”

https://www.fifthdomain.com/congress/capitol-hill/2020/04/22/cyber-policy-suggestions-for-pentagon-could-be-implemented-this-year/

How Marines And Robots Will Fight Side By Side

Standard
Illustrations by Jacqueline Belker/Staff

“MARINE CORPS TIMES”

This imagined scenario involves a host of platforms, teamed with in-the-flesh Marines, moving rapidly across wide swaths of the Pacific.

Those small teams of maybe a platoon or even a squad could work alongside robots in the air, on land, at sea and undersea to gain a short-term foothold that then could control a vital sea lane that Chinese ships would have to bypass or risk being sunk simply to transit.

____________________________________________________________________________

“Somewhere off the coast of a tiny island in the South China Sea, small robotic submarines snoop around, looking for underwater obstacles as remotely controlled ships prowl the surf. Overhead, multiple long-range drones scan the beachhead and Chinese military fortifications deeper into the hills.

A small team of Marines, specially trained and equipped, lingers farther out, having launched from their amphibious warship, as did their robot battle buddies, to scout this spit of sand.

Their Marine grandfathers and great-grandfathers might have rolled toward this island slowly, dodging sea mines and artillery fire only to belly crawl in the surf as they were raked with machine gun fire, dying by the thousands.

But in the near-term battle, suicidal charges to gain ground on a fast-moving battlefield are a robot’s job.

It’s a bold, technology-heavy concept that’s part of Marine Corps Commandant Gen. David Berger’s plan to keep the Corps relevant and lethal against a perceived growing threat in the rise of China in the Pacific and its increasingly sophisticated and capable Navy.

In his planning guidance, Berger called for the Marines and Navy to “create many new risk-worthy unmanned and minimally manned platforms.” Those systems will be used in place of and alongside the “stand-in forces,” which are in range of enemy weapons systems to create “tactical dilemmas” for adversaries.

“Autonomous systems and artificial intelligence are rapidly changing the character of war,” Berger said. “Our potential peer adversaries are investing heavily to gain dominance in these fields.”

And a lot of what the top Marine wants makes sense for the type of war fighting, and budget constraints, that the Marine Corps will face.

“A purely unmanned system can be very small, can focus on power, range and duration and there are a lot of packages you can put on it — sensors, video camera, weapons systems,” said Dakota Wood, a retired Marine lieutenant colonel and now senior research fellow at The Heritage Foundation in Washington, D.C.

The theater of focus, the Indo-Pacific Command, almost requires adding a lot of affordable systems in place of more Marine bodies.

That’s because the Marines are stretched across the world’s largest ocean and now face anti-access, area-denial systems run by the Chinese military that the force hasn’t had to consider since the Cold War.

“In INDOPACOM, in the littorals, the Marine Corps is looking to kind of outsource tasks that machines are looking to do,” Wood said. “You’re preserving people for tasks you really want a person to handle.”

The Corps’ shift back to the sea and closer work with the Navy has been brewing in the background in recent years as the United States slowly has attempted to disentangle itself from land-based conflicts in the Middle East. Signaling those changes, recent leaders have published warfighting concepts such as expeditionary advanced base operations, or EABO, and littoral operations in a contested environment.

EABO aims to work with the Navy’s distributed maritime operations concept. Both are meant to allow the U.S. military to pierce the anti-access, area-denial bubble. The littoral operations in a contested environment concept makes way for the close-up fight in the critical space where the sea meets the land.

That’s meant a move to prioritize the Okinawa, Japan-based III Marine Expeditionary Force as the leading edge for Marine forces and experimentation, as the commandant calls for the “brightest” Marines to head there.


Getting what they want

But the Corps, which traditionally has taken a backseat in major acquisitions, faces hurdles in adding new systems to its portfolio.

It was only in 2019 that the Marines gained funding to add more MQ-9 Reaper drones. The Corps got the money to purchase its three Reapers in this year’s budget. But that’s a platform that’s been in wide use by the Air Force for more than a decade.

But that’s a short-term fix; the Corps’ goal remains the Marine Air-Ground Task Force unmanned aircraft system, expeditionary, or MUX.

The MUX, still under development, would give the Corps a long-range drone with vertical takeoff capability to launch from amphib ships, one that could also conduct persistent intelligence, surveillance and reconnaissance; carry out electronic warfare; and coordinate and initiate strikes from other weapons platforms in its network.

Though early ideas in 2016 called for something like the MUX to be in the arsenal, at this point officials are pegging an operational version of the aircraft for 2026.

Lt. Gen. Steven Rudder, deputy commandant for aviation, said at the annual Sea-Air-Space Symposium in 2019 that the MUX remains among the top priorities for the MAGTF.

Sustain and distract

In other areas, Marines are focusing on existing platforms but making them run without human operators.

One such project is the expeditionary warfare unmanned surface vessel. Marines are using the 11-meter rigid-hull inflatable boats already in service to move people or cargo, drop it off and return for other missions.

Logistics are a key area where autonomous systems can play a role. Carrying necessary munitions, medical supplies, fuel, batteries and other items on relatively cheap platforms keeps Marines out of the in-person transport game and instead part of the fight.

In early 2018 the Corps conducted the “Hive Final Mile” autonomous drone resupply demonstration in Quantico, Virginia. The short-range experiment used small quadcopters to bring items like a rifle magazine, MRE or canteen to designated areas to resupply a squad on foot patrol.

The system used a group of drones in a portable “hive” that could be programmed to deliver items to a predetermined site at a specific time and continuously send and return small drones with various items.

Extended to longer ranges on larger platforms, that becomes a lower-risk way to get a helicopter’s worth of supplies to far-flung Marines on small atolls that dot vast ocean expanses.

Shortly after that demonstration, the Marines put out requests for concepts for a similar drone resupply system that would carry up to 500 pounds at least 10 km. That still is not enough distance for larger-scale warfighting, but it is the beginning of the type of resupply a squad or platoon might need in a contested area.

In 2016, the Office of Naval Research used four rigid-hull inflatable boats with unmanned controls to “swarm” a target vessel, showing that such craft can also be used to attack or distract vessels.

And the distracting part can be one of the best ways to use unmanned assets, Wood said.

Wood noted that while autonomous systems can assist in classic “shoot, move, communicate” tactics, they can sometimes be even more effective in sustaining forces and distracting adversaries.

“You can put machines out there that can cause the enemy to look in that direction, decoys tying up attention, munitions or other platforms,” Wood said.

And that distraction goes further than actual boats in the water or drones in the air.

As with the MUX, the Corps is looking at ways to include electronic warfare capabilities in its plans. That would allow robotic systems to spoof enemy sensors, making them think that a small pod of four rigid-hull inflatable boats is a larger flotilla of amphib ships.


Overreliance

Marines fighting alongside semi-autonomous systems isn’t entirely new.

In communities such as aviation, explosive ordnance disposal and air defense, tasks ranging from flying automatic flight paths to approaching bomb sites and recognizing incoming threats have been at least partly outsourced to software and systems.

But for more complex tasks, not so much.

How robots have worked and will continue to work in formations is an evolutionary process, according to former Army Ranger Paul Scharre, director of the technology and national security program at the Center for a New American Security and author of “Army of None: Autonomous Weapons and the Future of War.”

Looking at military technology across history, the most important use for such tech was focusing on how to solve a particular mission rather than having the most advanced technology to solve every problem, Scharre said.

And autonomy runs on a kind of sliding scale, he said.

As systems get more complex, autonomy will give fewer tasks to the human and more to the robot, helping people better focus on decision-making about how to conduct the fight. And it will allow for one human to run multiple systems.

When you put robotic systems into a squad, you’re giving up a person to run them, and leaders have to decide if that’s worth the trade-off, Scharre said.

The more remote the system, the more vulnerable it might be to interference or hacking, he said. Any plan for adding autonomous systems must build in reliable, durable communication networks.

Otherwise, when those networks are attacked, the systems go down.

That means that a Marine’s training won’t get less complicated, only more multifaceted.

Just as Marines continue to train with a map and compass for land navigation even though they have GPS at their fingertips, Marines operating with autonomous systems will need continued training in fundamental tactics and ways to fight if those systems fail.

“Our preferred method of fighting today in an ­infantry is to shoot someone at a distance before they get close enough to kill with a bayonet,” Scharre said. “But it’s still a backup that’s there. There are still bayonet lugs on rifles, we issue bayonets, we teach people how to wield them.”

Where do they live?

A larger question is where do these systems live? At what level do commanders insert robot wingmen or battle buddies?

Purely for reasons of control and effectiveness, Dakota Wood said they’ll need to be close to the action and Marine Corps personnel.

But does that mean every squad is assigned a robot, or is there a larger formation that doles out the automated systems as needed to the units?

For example, an infantry battalion has some vehicles but for larger movements, leaders look to a truck company, Wood said. The maintenance, care, feeding, control and programming of all these systems will require levels of specialization, expertise and resources.

The Corps is experimenting with a new squad formation, putting 15 instead of 13 Marines in the building block of the infantry. Those additions were an assistant team leader and a squad systems operator. Those are exactly the types of human positions needed to implement small drones, tactical level electronic warfare and other systems.

The MUX, still under development, would give the Corps a long-range drone with vertical takeoff capability to launch from amphib ships. (Bell Helicopter)

The Marine Corps leaned on radio battalions in the 1980s to exploit tactical signals intelligence. Much of that capability resided in the larger battalion that farmed out smaller teams to Marine Expeditionary Units or other formations within the larger division or Marine Expeditionary Force.

A company or battalion or other such formation could be where the control and distribution of autonomous systems remains.

But current force structure moves look like they could integrate those at multiple levels. Maj. Gen. Mark Wise, deputy commanding general of Marine Corps Combat Development Command, said recently that the Corps is considering a Marine littoral regiment as a formation that would help the Corps better conduct EABO.

Combat Development Command did not provide details on the potential new regimental formation, but confirmed that the Marine littoral regiment concept is one that will be developed through current force design conversations.

A component of that could include a recently-proposed formation known as a battalion maritime team.

Maj. Jake Yeager, an intelligence officer in I MEF, charted out an offensive EABO method in a December 2019 article on the website War On The Rocks titled “Expeditionary Advanced Maritime Operations: How the Marine Corps can avoid becoming a second land Army in the Pacific.”

Part of that includes the maritime battalion, creating a kind of Marine air-sea task force. Each battalion team would include three assault boat companies, one raid boat company, one anti-ship missile boat battery and one reconnaissance boat company.

The total formation would use 40 boats, at least nine of which would be dedicated unmanned surface vehicles, while the rest would be developed with unmanned or manned options, much like the rigid-hulled inflatable boats the Corps is currently experimenting with.”

https://www.marinecorpstimes.com/news/your-marine-corps/2020/02/03/war-with-robots-an-inside-look-at-how-marines-and-robots-will-fight-side-by-side/

Government Improving The Sharing Of Cyber Security Threat Information

Standard
Image: “Fifth Domain”

FIFTH DOMAIN

“A new joint report from inspectors general across the government found that information sharing among the intelligence community and the rest of government “made progress.”

______________________________________________________________________________

“Over and over, cybersecurity officials in the civilian government, the intelligence community and the Department of Defense repeat the same platitude: information sharing is important. Often, however, little insight, or metrics, back up exactly how well they are doing it.

The report, titled “Unclassified Joint Report on the Implementation of the Cybersecurity Information Sharing Act of 2015” and released Dec. 19, found that cybersecurity threat information sharing has improved throughout government over the last two years, though some barriers remain, like information classification levels.

Information sharing throughout government has improved in part because of a security capability launched by the Office of the Director of National Intelligence’s Intelligence Community Security Coordination Center (IC SCC) that allowed the ODNI to increase cybersecurity information sharing all the way up to the top-secret level. The capability, called the Intelligence Community Analysis and Signature Tool (ICOAST), shares both indicators of compromise and malware signatures that identify the presence of malicious code. According to the report, the information from the platform is available to “thousands” of users across the IC, DoD and civilian government.
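The report doesn’t spell out the data format ICOAST uses, but automated indicator sharing generally comes down to passing small structured records between organizations. As a rough, hypothetical illustration only, the Python sketch below builds a record loosely modeled on the STIX 2.1 “indicator” object that DHS’s Automated Indicator Sharing program (discussed below) is known to exchange; the identifier is a placeholder and the hash is the public EICAR antivirus test-file value, not real threat data.

import json
from datetime import datetime, timezone

# Hypothetical indicator of compromise, loosely modeled on the STIX 2.1
# "indicator" object commonly used for automated threat-intel exchange.
# The ID is a placeholder; the hash is the well-known EICAR test-file SHA-256.
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--00000000-0000-4000-8000-000000000000",
    "created": datetime.now(timezone.utc).isoformat(timespec="seconds"),
    "name": "Example malware signature",
    "indicator_types": ["malicious-activity"],
    # STIX patterning: flag any file whose SHA-256 matches this value.
    "pattern": "[file:hashes.'SHA-256' = "
               "'275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f']",
    "pattern_type": "stix",
    "valid_from": "2019-12-19T00:00:00Z",
}

# A sharing platform would carry this JSON (typically inside a STIX bundle,
# over a transport such as TAXII) to participating agencies' tools.
print(json.dumps(indicator, indent=2))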

Information sharing within the IC has also improved due to the creation of several websites within its top-secret networks that contain threat indicators and several different types of summary reports on cyber activity and vulnerabilities.

Technological change is molding the future of information sharing within the government. With the rise of cloud computing at various classification levels throughout the government, the IC SCC told IGs that it plans to expand the ICOAST threat intel capability to work in secret and unclassified clouds. That is in the “planning and development” stages, according to the report.

“At the secret and unclassified levels, the ICOAST instances will interface with multiple DoD components and other federal entities that have the responsibility for distributing cyberthreat information to federal, state and local entities and the private sector,” the IGs wrote.

According to the report, an IC SCC official told the IGs that they wanted to deploy ICOAST in those environments by the end of calendar year 2019. A spokesperson for the ODNI didn’t immediately respond to a question about the availability of ICOAST.

The IC SCC is also working with the Department of Homeland Security’s cybersecurity arm, the Cybersecurity and Infrastructure Security Agency, to improve information within CISA’s threat intelligence platform, Automated Indicator Sharing (AIS), for integration with ICOAST.

Barriers to sharing

Though the government has made marked improvements in its info sharing, the IGs noted several ongoing challenges.

CISA’s AIS solution, a system through which the federal government and the private sector can share threat intelligence in near-real time, has its own participation challenges. In December 2018, the IGs found, there were 252 federal and non-federal organizations signed up for AIS. But in June 2019, only four agencies and six non-federal entities were using the platform for information sharing. DHS told auditors that the lack of participation hindered improvement.

“DHS reported that the limited number of participants who input cyberthreat information to AIS is the main barrier for DHS to improve the quality of the indicators with more actionable information to mitigate potential cyberthreats,” the IGs wrote.

The most common complaint was that AIS threat information lacked proper context to be actionable, a complaint similar to that heard from state governments receiving threat intelligence from DHS and the FBI during the 2016 election. Therefore, cybersecurity officials at several agencies couldn’t “determine why the indicator was an issue.”

“As a result, the entities did not know what actions to take based on the information received from AIS without performing additional research,” the IGs wrote.

CISA officials told the IGs that they were working on improving the quality of information with AIS.

Meanwhile, agencies also noted that the classification levels of certain threat intelligence prevented widespread info sharing. Aside from officials lacking proper clearance being prevented from viewing certain information, auditors also noted that classified threat information couldn’t be uploaded into the sharing platforms that aren’t cleared for storing that information, further hampering sharing efforts. Some agencies have worked with the owners to downgrade the classification level, according to the report.

“Sharing cyberthreat indicators and defensive measures increases the amount of information available for defending systems and networks against cyberattacks,” the IGs wrote.”

https://www.fifthdomain.com/dod/2019/12/30/how-good-is-the-government-at-threat-information-sharing/

What We Continue To Get Wrong About Cybersecurity

Standard
Photo: Yuri Arcurs YAPR/Getty Images

FIFTH DOMAIN By: Kiersten E. Todt

It’s tempting to believe that by developing more robust technology, we’ll be able to put the cyber thieves out of business. If only that were true. 

Humans pose the biggest cybersecurity threat of all.

______________________________________________________________________________

“October 1 marked the start of National Cybersecurity Awareness Month. While the designation is a clever way to highlight the need for greater vigilance in how we use technology, it’s nonetheless ill-advised. Cybersecurity shouldn’t be treated as a flavor of the month. We need to focus on it every day.

Today’s cyberthreat environment is menacing, and it’s clear that we always need to be in a state of “high alert.” Hackers show no signs of retreat — and are becoming more aggressive and sophisticated. Earlier this year, hackers circulated a tranche of unique usernames and passwords numbering in the billions.

While security technology is much better than it was even just a few years ago, it nonetheless contains one major liability: it’s often only as good as the humans who use it.

Consider the disclosure in late July of a breach at Capital One, which affected about 100 million individuals in the United States. According to a Justice Department filing, a Seattle hacker breached Capital One through a misconfigured firewall caused by human error. The hacker was able to exploit that misconfiguration.

In August, Facebook reported that it left a database containing 419 million records unprotected, without a password. As we examine the major breaches over the last several years — Target, Home Depot, Sony, Equifax — their initial point of vulnerability was access stemming from weak authentication; in other words, passwords that could be hacked.

These events, and others like them, are a reminder that while we can reduce and manage the number of cyber incidents, it’s unlikely we’re ever going to eliminate them. Hackers ultimately prey on the greatest vulnerability: human behavior.

That’s the backdrop to what the head of information security at a global infrastructure company recently told me. He said his top priority is not acquiring the most advanced cybersecurity technology. Instead, it is educating his workforce. He recognizes that employees are the most vulnerable access point for a breach — and also works with his human resources department to incorporate cybersecurity education into employee on-boarding.

That’s a smart strategy. Companies need to focus on human behavior and make it the foundation for a reliable, powerful culture of security. Doing so will lead to an increased return on investment in technology by developing an educated and informed workforce.

Companies also need to recognize that a key component of security is resilience — and resilience does not mean rebuilding what you had, but learning from experiences so that you build into the future. Natural disasters provide a useful point of comparison. While the United States often rebuilds to the same specs as pre-disaster, the Dutch rebuild to withstand an event greater than the one that wreaked havoc in the first place. A similar approach should be taken for cyber events. Our public and private technology infrastructure — the digital highways of commerce — should be developed to withstand anticipated future threats and events, based on what you have learned from your breach.

Similarly, companies should measure cybersecurity success not just by the attacks they block. They should follow the lead set by a global financial company, where the head of information security recently told me that her main metric is not what her company prevents, but how effectively the company responds after a breach has occurred. Similar to the impact of natural disasters, the effects of a breach can play out over days, weeks, months, and years. Therefore, the effectiveness of a company’s response can be the difference between a demonstration of failure and a demonstration of preparedness, resilience, and success.

The good news is that companies have a growing awareness of the importance of their cybersecurity. But there is still a long way to go and a clear need to invest more in cybersecurity training, education, and awareness for employees. Companies need to ensure that everyone understands how one simple human mistake can put the entire company at risk. Creating a culture of security should be a top corporate priority because cybersecurity is critical to the mission of every company.

Human behavior is the foundation for security. That message needs to be delivered — and acted on — not just this month, but every month.”

https://www.fifthdomain.com/opinion/2019/10/14/what-we-continue-to-get-wrong-about-cybersecurity/

Kiersten E. Todt

Kiersten E. Todt is the managing director of The Cyber Readiness Institute and the former executive director of President Obama’s independent, bipartisan Commission on Enhancing National Cybersecurity. She has served in senior positions in the private sector and in the White House and U.S. Senate.

‘But Who’s In Charge?’ – The Question For Feds In Cyber Security

Standard

FIFTH DOMAIN

Government officials consistently argue that no single agency could take responsibility for the cyber security of the federal government.

But, in today’s current arrangement, one challenge is that several organizations work on cyber security issues in silos, particularly when it comes to election security or protecting the power grid.

___________________________________________________________________________

“In an event that brought two Cabinet secretaries and around 50 top federal and state officials together for three days of discussion on cybersecurity and critical infrastructure, one question remained: Who has the lead on information security issues in the United States?

It was an issue pondered aloud by Sen. Ron Johnson, R-Wis., the chairman of the Senate’s Homeland Security committee. Johnson said Sept. 19 he had recently sat through a classified 5G briefing with cabinet officials and had a similar inquiry then.

“The No. 1 question I [had] is ‘who’s in charge? Who is actually doing the problem definition when it comes to our challenge with 5G?,’” Johnson said at the Cybersecurity and Infrastructure Security Agency’s second annual national cybersecurity summit at National Harbor. “And nobody would really answer the question.”

“This is constant. This is common across the federal government,” Johnson added.

In addition, state and local governments control their elections. Private utilities largely manage the power grid. And with cyberthreats launched remotely and crossing international borders, this means there’s a need for coordination and collaboration between organizations — a task that can be made difficult by jurisdictions.

“International boundaries dissolve away,” said CISA Director Chris Krebs in his opening speech Sept. 18. “Jurisdictions do not, but the boundaries seemingly do.”

The summit brought together top state election and cybersecurity officials; federal officials from agencies such as the Pentagon, the Department of Homeland Security, the National Security Agency and the Department of Commerce; top congressional aides, along with some industry experts. In a rarity, several panels were exclusively government officials, even with a government moderator. At the outset, Krebs said he was ready to move beyond the boilerplate “information sharing” conversation.

“I’m sick of hearing about information sharing and how that’s going to solve the problem. It’s not,” Krebs said. “We have to get beyond information sharing. We have to work together to understand what our respective advantages are.”

Looking at the stature of the officials participating in this year’s summit, it’s clear that CISA is serious about finding the best ways to protect government systems.

Coordinating with big players

Consider the government leadership that sat for the summit’s first panel. It included Anne Neuberger, director of the NSA’s new Cybersecurity Directorate; Suzette Kent, federal chief information officer; Tonya Ugoretz, deputy assistant director of the FBI’s cybersecurity division; Jack Wilmer, deputy CIO and chief information security officer at the Pentagon — all moderated by CISA’s Assistant Director for Cybersecurity Jeanette Manfra. Outside of a congressional hearing, it was an unusual display of top officials.

The significance of the leaders CISA pulled together wasn’t lost on some in industry.

“They’re linking the players together, and so when you start creating that kind of collaboration, information sharing, where we can create an outcome and execute behind — that is where you actually start seeing transformational change,” said Travis Reese, president of FireEye, a threat intelligence company. “And to me, it starts here, it starts with getting people from different backgrounds, different components, different sides of the political world, into a place where they can share ideas, be very transparent [and] have some debates.”

Agency leaders made clear that they were exploring their responsibilities in relation to other entities in the federal government.

“We spend a lot of time at FBI thinking about our role as it relates to others in this constellation of entities that have a piece of this mission — especially CISA and CYBERCOM and other organizations [that] have been developing,” said Ugoretz. “What we keep coming back to … is that it requires such a blend of mission and authorities and capabilities to tackle all the different aspects of what we’re looking at in the cyber mission space. We agree that there really can’t be one entity, realistically, that does it all. But it’s all about, ‘how do we come together?’”

Krebs made increased coordination a point in his opening speech, comparing the role of the Federal Emergency Management Agency as the lead in disaster response to the state of affairs in cybersecurity.

“We don’t have that same doctrine built out for a large-scale cyber event,” Krebs said.

That missing doctrine worried Krebs, who said the government “got pretty close this summer” to a large cybersecurity event, referencing the ransomware attacks against parishes in Louisiana and school districts in Texas. At the summit, Jared Maples, homeland security adviser for the state of New Jersey, said that he guesses he receives as many as 10 ransomware alerts from organizations throughout his state each week.

Maples explained to Fifth Domain how CISA is helping states defend against these ransomware attacks by providing them with threat analysis of ransomware strains.

“We can get it out to the smaller constituencies, which we do have direct access to. [For] the feds, it’s tough to get out to 376 million people, but we can get it out to all 9 million of our people very quickly,” Maples told Fifth Domain.

Mac Warner, the Republican secretary of state in West Virginia, said that DHS and CISA are providing localities in his state with incident response plans for events like natural disasters and providing other training to county clerks.

“There’s a lot of activity going on from the federal government, DHS, CISA, and others to help us get the message across — not only training our own people but then the public part,” Warner said in response to a question from Fifth Domain.

CISA’s not the only group assisting state officials in cyberspace. For ransomware attacks, Maples said his agency also gets help from the FBI.

“The federal government — CISA, for example, and a lot of our partners, FBI — there’s a lot of capabilities to help overcome those if you are attacked and respond to them,” Maples said.

CISA also manages a handful of cybersecurity programs for federal agencies, such as the Trusted Internet Connections (TIC) program, which provides secure internet connections, and the Continuous Diagnostics and Mitigation (CDM) program, which provides insight into agencies’ cybersecurity posture.

“[This is] really the first time we had an agency really focused on security, with a major focus on cybersecurity,” said Grant Schneider, the federal chief information security officer. “Something that has really galvanized … efforts across the federal government.”

Johnson praised Krebs’ leadership at CISA and said that the overall governance structure made sense, but added that there needed to be identified leaders for the individual issues.

“In some of these subproblems, like 5G … we do need to understand that we need individuals within government to be in charge of all the different operations,” he said.”

https://www.fifthdomain.com/civilian/dhs/2019/09/25/but-whos-in-charge-is-the-question-for-feds-in-cybersecurity/

Company Buys Russian Troll Campaign Against Its Own Site While Researching State-Sponsored Disinformation

Standard
Image: “Wired”

WIRED

A targeted troll campaign today can cost as little as $250, says Andrew Gully, a research manager at Alphabet subsidiary Jigsaw. He knows because that’s the price Jigsaw paid for one last year.

Jigsaw set out to test just how easily and cheaply social media disinformation campaigns, or “influence operations,” could be bought in the shadier corners of the Russian-speaking web.

______________________________________________________________________________

“For more than two years, the notion of social media disinformation campaigns has conjured up images of Russia’s Internet Research Agency, an entire company housed on multiple floors of a corporate building in St. Petersburg, concocting propaganda at the Kremlin’s bidding. But a targeted troll campaign today can come much cheaper.

As part of research into state-sponsored disinformation that it undertook in the spring of 2018, Jigsaw set out to test just how easily and cheaply social media disinformation campaigns, or “influence operations,” could be bought in the shadier corners of the Russian-speaking web. In March 2018, after negotiating with several underground disinformation vendors, Jigsaw analysts went so far as to hire one to carry out an actual disinformation operation, assigning the paid troll service to attack a political activism website Jigsaw had itself created as a target.

In doing so, Jigsaw demonstrated just how low the barrier to entry for organized, online disinformation has become. It’s easily within the reach of not just governments but private individuals. Critics, though, say that the company took its trolling research a step too far, and further polluted social media’s political discourse in the process.

“Let’s say I want to wage a disinformation campaign to attack a political opponent or a company, but I don’t have the infrastructure to create my own Internet Research Agency,” Gully told WIRED in an interview, speaking publicly about Jigsaw’s year-old disinformation experiment for the first time. “We wanted to see if we could engage with someone who was willing to provide this kind of assistance to a political actor … to buy services that directly discredit their political opponent for very low cost and with no tooling or resources required. For us, it’s a pretty clear demonstration these capabilities exist, and there are actors comfortable doing this on the internet.”

Trolls Behind the Counter

In early 2018, Jigsaw hired a security firm to sniff around Russian-language black-market and gray-market web forums for disinformation-for-hire services. (That company asked WIRED not to name it, to preserve its ability to work on underground forums.) Browsing sites like Exploit, Club2Crd, WWH, and Zloy, the security firm’s researchers say they didn’t find explicit offers of trolling or disinformation campaigns for sale, but plenty of related schemes like fake followers, paid retweets, and black hat search engine optimization. The team guessed, though, that more awaited beneath the surface.

“If we look at this as window shopping, we hypothesized that if someone was selling fake likes in the window, there’s probably something else behind the counter they might be willing to do,” says Gully. When researchers for the security firm that Jigsaw had hired started chatting discreetly with those vendors, they found that a few did in fact offer mass-scale social media posting on political subjects as an unlisted service.

Before it bought one of those paid trolling campaigns, Jigsaw realized that it first needed a target. So together with its hired security firm, Jigsaw created a website—seeded with blog posts and comments they’d written to make it appear more real—for a political initiative called “Down With Stalin.” While the question of Stalin’s image sounds like a decades-old debate, it engaged with a current, ongoing argument in Russia about whether Stalin should be remembered as a hero or a criminal. (Partly due to the Kremlin’s rehabilitation efforts, polls show positive sentiments toward Stalin are at their highest in years.)

“The idea was to create a tempest in a teacup,” says one of the security firm staffers who worked on the project, explaining the decision to focus on a historical figure. “We wanted to be very careful, because we didn’t want too much tie-in to real-life issues. We didn’t want to be seen as meddling.”

To attack the site it had created, Jigsaw settled on a service called SEOTweet, a fake follower and retweet seller that also offered the researchers a two-week disinformation campaign for the bargain price of $250. Jigsaw, posing as political adversaries of the “Down with Stalin” site, agreed to that price and tasked SEOTweet with attacking the site. In fact, SEOTweet first offered to remove the site from the web altogether with fraudulent complaints that the site hosted abusive content, which it would ostensibly send to the site’s web host. The cost: $500. Jigsaw declined that more aggressive offer, but greenlit its third-party security firm to pay SEOTweet $250 to carry out its social media campaign, providing no further instructions.


Down With Stalin, Up With Putin

Two weeks later, SEOTweet reported back to Jigsaw that it had posted 730 Russian-language tweets attacking the anti-Stalin site from 25 different Twitter accounts, as well as 100 posts to forums and blog comment sections of seemingly random sites, from regional news sites to automotive and arts-and-crafts forums. Jigsaw says a significant number of the tweets and comments appeared to be original posts written by humans, rather than simple copy-paste bots. “These aren’t large numbers, and that’s intentional,” says Jigsaw’s Gully. “We weren’t trying to create a worldwide disinformation campaign about this. We just wanted to see if threat actors could provide a proof of concept.”

Without any guidance from Jigsaw, SEOTweet assumed that the fight over the anti-Stalin website was actually about contemporary Russian politics, and the country’s upcoming presidential elections. “You simply don’t understand all that the president does for our country so that people can live better, and you armchair analysts can’t do anything,” read one Russian-language tweet (below) posted by a fake user named @sanya2un1995, including a photo of Stalin in her post but clearly referring to Russian president Vladimir Putin. Another fake account wrote a post on a forum accusing the anti-Stalin site of “writing all kinds of nasty things about our president, supposedly he has everyone on their knees and is trying to bring back the USSR, but personally I think that’s not how it is, he is doing everything for us, for the common man.”

Strangely, neither Jigsaw nor the security firm hired for the experiment said they were able to provide WIRED with more than a couple of samples of the campaign’s posts, due to a lack of records of the experiment from a year ago. The 25 Twitter accounts used in the campaign have since all been suspended by Twitter.

WIRED tried reaching out to SEOTweet via its website, seo-tweet.ru, which currently advertises the services of a self-professed marketing and cryptocurrency entrepreneur named Markus Hohner. But Hohner didn’t respond to a request for comment.

An example tweet posted by the SEOTweet service’s disinformation-for-hire campaign. Although it shows a picture of Stalin, it clearly expresses support for current Russian president Vladimir Putin. (Jigsaw)

Blowback

Even as Jigsaw exposes the potential for cheap, easily accessible trolling campaigns, its experiment has also garnered criticism of Jigsaw itself. The company, after all, didn’t just pay a shady service for a series of posts that further polluted political discourse online. It did so with messages in support of one of the worst genocidal dictators of the 20th century, not to mention the unsolicited posts in support of Vladimir Putin.

“Buying and engaging in a disinformation operation in Russia, even if it’s very small, that in the first place is an extremely controversial and risky thing to do,” says Johns Hopkins University political scientist Thomas Rid, the author of a forthcoming book on disinformation titled Active Measures.

Even worse may be the potential for how Russians and the Russia media could perceive—or spin—the experiment, Rid says. The subject is especially fraught given Jigsaw’s ties to Alphabet and Google. “The biggest risk is that this experiment could be spun as ‘Google meddles in Russian culture and politics.’ It fits anti-American clichés perfectly,” Rid says. “Didn’t they see they were tapping right into that narrative?”

But Jigsaw chief operating officer Dan Keyserling stands by the research, pointing out that the actual content it generated represents an insignificant drop in the social media bucket. “We take every precaution to make sure that our research methods minimize risk,” Keyserling says. “In this case, we weighed the relatively minor impact of creating fake websites and soliciting this kind of small scale campaign against the need to expose the world of digital mercenaries.”

To what degree the Jigsaw experiment really exposed that practice, however, deserves scrutiny, says Alina Polyakova, a disinformation-focused fellow at the Brookings Institution. She supports the idea of the research in theory, but notes that Jigsaw never published its results—and still hasn’t, even now.

“I don’t think policymakers or your average citizen gets how dangerous this is, that the cost of entry is so low,” Polyakova says. “As an experiment, I don’t think this is a problem. What I do think is a problem is not actually publicizing it.” Jigsaw’s staff concedes that they didn’t publish their results, or even publicize the experiment until now—in part, they say, to avoid revealing anything about their research methodology that would inhibit their security firm partners’ ongoing work in Russian-language underground markets. But Jigsaw says it did use the experiment’s results to inform its work on detecting disinformation campaigns, as well as in a summit they held in Ukraine on disinformation in late 2018, ahead of the Ukrainian presidential election.

Jigsaw wouldn’t be the first to court controversy for flirting with the disinformation dark arts. Last year, the consultancy New Knowledge acknowledged that it had experimented with disinformation targeted at conservative voters ahead of Alabama’s special election to fill an open Senate seat. Eventually, internet billionaire Reid Hoffman apologized for funding the group that had hired New Knowledge and sponsored its influence operation test.

The Jigsaw case study has at least proven one point: The incendiary power of a disinformation campaign is now accessible to anyone with a few hundred dollars to spare, from a government to a tech company to a random individual with a grudge. That means you can expect those campaigns to grow in number, along with the toxic fallout for their intended victims—and in some cases, to the actors caught carrying them out, too.”

https://www.wired.com/story/jigsaw-russia-disinformation-social-media-stalin-alphabet/


Chinese Hackers Found And Repurposed Elite NSA-Linked Tools

Standard
Image: Istock

CYBERSCOOP

A hacking group with ties to Chinese intelligence has been using tools linked to the National Security Agency as far back as March 2016, according to research from security firm Symantec.

_____________________________________________________________________________

“The tools include some released by the Shadow Brokers, a mysterious group that dumped computer exploits once used by the NSA on the open internet in April 2017. Symantec’s research suggests that the Chinese-linked group, which the company calls “Buckeye,” was using the same NSA-linked tools at least a year before they were publicly leaked.

According to Symantec, one of the tools used by Buckeye was DoublePulsar, a backdoor implant that allows attackers to stealthily collect information and run malicious code on a target’s machine. DoublePulsar was used in conjunction with another tool, which Symantec calls Trojan.Bemstour, that took advantage of various Microsoft Windows vulnerabilities in order to secretly siphon information off targeted computers.

The Trojan.Bemstour exploit allowed attackers to remotely manipulate a machine’s kernel, the core part of a computer’s operating system that manages resources such as memory. When put into action, the exploit can pull sensitive information from a targeted machine or can be combined with other vulnerabilities to take control of the kernel.

One of the vulnerabilities was patched in March 2017. The other was reported by Symantec to Microsoft in September 2018 and patched in March 2019.

Buckeye used the tools in attacks that hit telecommunications companies, firms dedicated to scientific research and education institutions from March 2016 to the middle of 2017, according to Symantec. The group hit organizations in Belgium, Hong Kong, Luxembourg, the Philippines and Vietnam.

An infographic showing the timeline of Buckeye’s use of NSA tools. (Symantec)

DoublePulsar has been linked to the Equation Group, an elite hacking team that the cybersecurity community has long attached to the NSA. One of the vulnerabilities leveraged by Trojan.Bemstour was also used by two other Equation Group exploits — EternalRomance and EternalSynergy — that were included in the Shadow Brokers’ April 2017 dump.

“How Buckeye obtained Equation Group tools at least a year prior to the Shadow Brokers leak remains unknown,” a blog post from Symantec reads.

The company does state there’s a possibility that Buckeye developed its own version of the tools after observing an Equation Group attack and reverse-engineering the malware it caught by monitoring network traffic.

Buckeye — also known as APT3, Boyusec or Gothic Panda — has not been active since 2017, researchers said. Symantec found, however, that development of Trojan.Bemstour continued into 2019. The company said the most recent version of the exploit was compiled on March 23 — 11 days after Microsoft patched the last associated vulnerability. It is unclear who continued to use the tools in 2018 and 2019, according to Symantec.

Three alleged members of Buckeye were indicted in the U.S. in November 2017. At the time of the indictments, numerous cybersecurity researchers told CyberScoop there was a high probability that APT3 was linked with China’s Ministry of State Security (MSS). Serving as China’s civilian intelligence agency, analysts say the MSS has become Beijing’s preferred arm for conducting economic espionage.

The research comes days after the Department of Defense issued a report stating that China’s cyber-theft and cyber-espionage operations are accelerating to the point that they can “degrade core U.S. operational and technological advantages.”

“The threat and the challenge is persistent. The Chinese remain very aggressive in their use of cyber,” Assistant Secretary of Defense Randall G. Schriver said during a press briefing on the report.

The NSA did not return a request for comment.”

The Inevitable Fracturing of the Internet

Standard
Image: “Shutterstock”

“STRATFOR WORLDVIEW” By: Matthew Bey, Senior Global Analyst, Stratfor

“The days of a global internet with relative openness are over as regulation and digital borders rapidly increase in the coming years.

A complex labyrinth of different regulations, rules and cybersecurity challenges will rule the internet of tomorrow, which will become increasingly difficult for corporations to navigate.”

______________________________________________________________________________
“Nationalism and concerns about digital colonization and privacy are driving the “splinternet.” Those forces will not reverse, but only accelerate.

The United States will still back a relatively open internet model, but it has clearly assessed that a global pact to govern cyberspace would tie its own hands in the competition with China.

In 2001, Amazon founder Jeff Bezos — whose company had yet to turn a quarterly profit — said in an interview, “I very much believe the internet is indeed all it is cracked up to be.” Now, 18 years later, the emphasis should be placed on how “cracked up” the internet could become. The concept of a “splinternet” or the “balkanization of the internet” — in which the global digital information network would be sectioned off into smaller internets by a growing series of rules and regulations — has existed for years. But we’re now barreling toward a point where the concept will become reality.

The Wild West days of an open internet are gone for good, and the implications of an increasingly fragmented internet will be profound. It will result in a regulatory minefield that will present new challenges to the current dominance of large U.S. multinational internet companies, like Amazon, and consequently has the potential to leave the United States with less ability to exert “soft power” through its corporate giants.

The Open Internet Rests in Peace

The internet developed in tandem with the United States’ rise as the world’s sole superpower; once the Cold War ended, it became a key hallmark of U.S. dominance. The internet began as something called ARPANET, a creation of the U.S. Defense Department, before going public in the 1990s. But although the internet became global, the United States still maintained its role as its primary manager through the Internet Corporation for Assigned Names and Numbers’ (ICANN) contract with the U.S. government. ICANN plays a key role in managing the domain name system (DNS), a set of databases in root servers that make the internet functional.
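To make that role concrete, every name lookup is a query against the distributed database that the ICANN-coordinated root zone anchors: a resolver walks the hierarchy and returns addresses an application can connect to. A minimal sketch using Python’s standard library follows; example.com is a reserved documentation domain chosen purely for illustration.

import socket

# Resolve a hostname through the operating system's DNS resolver, which
# ultimately depends on the root zone coordinated through ICANN/IANA.
for family, _, _, _, sockaddr in socket.getaddrinfo("example.com", 443,
                                                    proto=socket.IPPROTO_TCP):
    # Print the address family (IPv4/IPv6) and the resolved IP address.
    print(family.name, sockaddr[0])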

The U.S. policy that information and data are human rights that should flow freely among countries, companies and individuals, combined with the country’s internet managerial role, has helped facilitate the current U.S. dominance in the global internet sector. The largest U.S. internet companies — Amazon, Google, Facebook, Netflix and others — have been able to extend their dominance over most of the world relatively unencumbered by drastically different regulations or viable local competitors. The dominance of U.S. corporations has meant that U.S. companies also primarily control the 21st century’s equivalent of oil (aka the most prized resource of the time): data. And they can spin it to their advantage. The omnipresence of U.S. companies in some countries has become akin to digital colonialism, exemplified by Facebook’s control over mobile experiences in dozens of countries through its Free Basics program and Google’s control over advertising. Moreover, as the Edward Snowden revelations in 2013 showed, U.S. intelligence services and law enforcement branches have more freedom than other countries to access data — legally or illegally — since it lives on U.S.-based servers.

Those dual realities — U.S. corporate dominance of the internet and its incomparable access to data — have fueled a backlash against the open internet model. At the same time, companies and countries have developed new tools that make it less expensive for authoritarian states to limit and stifle the free movement of information internally, as well as more easily use bots on social media to try to spin a narrative in their favor. Backlash against the open internet comes from multiple directions, and it’s not going away.

A Divided Internet as an Authoritarian Tool

U.S. rivals are increasingly taking steps to compartmentalize the internet, creating global and domestic spheres. Most well-known is China, which for years has controlled the movement of information between global cyberspace and domestic cyberspace through its Great Firewall, which controls domestic access to the web, for instance restricting access to specific foreign sites. But Russia and Iran are taking notes from China and going one step further: creating domestic internets that can be cut off from the global internet if necessary while remaining internally intact and functional. Iran’s National Information Network is now fully operational, and the country has been trying to force its netizens to set up websites and Iranian-made competitors to Western apps on Iran’s domestic internet rather than the World Wide Web. Russia has done the same, although it’s unclear whether a purported test to cut off all access to the global internet, which it had planned to carry out at some point before April 1, was actually conducted.

Russia, Iran and China setting up their own networks out of concern over meddling from Western countries may only be the tip of the iceberg of authoritarian governments developing robust internal networks to control information. As the price of internet control tools declines, they will be increasingly accessible to smaller and less developed countries. Obvious candidates for setting up domestic internets or employing robust internet filtering systems include Egypt, Saudi Arabia, Turkey and Brazil. (The latter has floated the possibility of increasing internet regulations in the past.) Russia has even proposed a smaller internet exclusive to BRICS countries (Brazil, Russia, India, China and South Africa) as a means of breaking free from U.S. digital hegemony.

Nationalism and the Push for More Data Privacy

It’s not just authoritarian countries that are taking notice of U.S. internet hegemony. At the opposite end of the spectrum, data privacy, data nationalism and economic nationalism are driving internet regulations and controls, perhaps most visibly in Europe. Despite being as wealthy as the United States, Europe has struggled to create internet companies that can compete with their U.S. counterparts. There is no European equivalent of Facebook, Google or Amazon, and individual European markets are too small for country-focused companies to match the financial firepower that U.S. competitors can bring to bear. Perhaps unsurprisingly, as nationalism has risen across Europe, so has the desire to curb the United States’ internet dominance. Examples so far include antitrust and monopoly investigations against Google, as well as increased data localization requirements and calls for higher taxes.

Data privacy has been a crucial component of the European reaction to U.S. internet dominance, particularly the European Union’s General Data Protection Regulation (GDPR), which took effect in May 2018. The regulation imposed new compliance rules on data privacy, including how data can be used, where it is stored and how people give consent. GDPR was driven in part by Snowden’s revelations that the National Security Agency and the so-called “Five Eyes” intelligence-sharing alliance were accessing data globally. It introduced an enormous set of rules that require companies to navigate each European country’s jurisdiction individually. And while this does not amount to a wholly separate, physically divided internet like the Russian and Iranian projects, it has a similar effect: increasing regulation and chipping away at the internet’s global, all-access character.

Even in the United States, movements to increase internet fragmentation are emerging. Proponents aim to reduce the hegemony of large companies and their unparalleled control of data, and they also want to increase personal data protections, perhaps by introducing GDPR-like mechanisms in certain states.

And companies are also increasingly interested in slicing up the internet in different ways as ecosystems emerge around certain platforms. Apple’s business model has drawn users in and locked them into the Apple and iOS ecosystem. Amazon and Google have done the same with their offerings, as, increasingly, have China’s Alibaba and Tencent. As concrete, country-led internet fragmentation occurs, these company-specific ecosystems could come to dominate certain sets of affiliated countries or regions, further entrenching new digital boundaries.

Divided Opinions About Dividing the Internet

The last two years have highlighted deeply divided international views on how the internet should be governed. On five occasions, the United Nations has tasked a group of government experts with establishing rules and norms for global digital governance; after the fifth group failed to do so in July 2017, no sixth group has been created. In November 2018, French President Emmanuel Macron announced the Paris Call for Trust and Security in Cyberspace, a new initiative to establish international norms that was signed by more than 50 nations, 90 nonprofit groups and universities, and 130 private corporations, including Facebook and Google.

But the United States, China and Russia did not sign the Paris Call initiative, and those three countries also blocked each of the U.N. efforts. After all, the great power competition heating up among the United States, China and Russia extends to cyberspace. The United States has been able to exert enormous soft power through the internet, and China’s rise now poses a more significant geopolitical challenge to the United States in every domain, including the digital one. Washington has recently focused on ensuring that international agreements about cyberspace do not make it harder for the United States to compete with its Chinese adversary.

Because of the physical requirements of the current internet, countries’ domestic laws and national regulations reign supreme, so the United States, China, Russia and others truly can go their own way in cyberspace. That means global internet governance is likely to remain stalled while regional or affinity groups, or fiercely nationalist countries, introduce their own localized regulations, firewalls and, in some cases, domestic internets with limited connections to the outside world.

A Complex Future Is Already Here

China provides a good case study of how domestic internet control, taken to the extreme, can affect the dominance of U.S. companies. China’s Great Firewall and strongly tech-nationalist rules have made it essentially impossible for U.S. companies to operate in the country. The government explicitly bans some companies, while others face so much censorship and surveillance that they simply choose not to pursue the Chinese market. This has allowed Chinese companies to dominate inside China, evolving to cater to the domestic market. Even when U.S. companies have tried to compete, they have failed. In the future, this type of domestic dominance is likely to emerge in other countries with extreme nationalist internet policies, such as Iran.

Globally, this means that businesses — purely internet-based and otherwise — should be prepared to navigate an increasingly complicated minefield of internet regulations. In the 21st century, almost every sector of the world economy depends on quick, seamless connectivity and data flow, and increasing regulation will slow and disrupt operations in many ways, no matter how large or small a business may be. Indeed, in many niches of the tech sphere, national competitors to formerly dominant international behemoths will emerge. But small companies will also be put at a significant disadvantage when trying to expand beyond one or two countries because of the overhead costs of complying with rules that can vary widely.

Ironically, the major U.S. and Chinese companies can most easily afford to comply if they choose to. Yet this will only reinforce concerns about digital colonialism and privacy, eventually provoking an even stronger backlash against large U.S. companies. In the West, that opposition will focus on data privacy and how data is treated, particularly as artificial intelligence and the Internet of Things generate even more personal data from our lives.

Looking Forward

U.S. tech companies will struggle to maintain their global influence in a world of internet fragmentation where national sovereignty reigns supreme. For China, on the other hand, that scenario is preferable. Its nurtured giants Tencent and Alibaba, for example, are beginning to export the ecosystems that they’ve built in China to some of China’s neighbors, eating into markets that have traditionally been dominated by U.S. companies. This may drive some backlash against Chinese digital colonization, but since China is new to that particular game, it will still be making progress in its power competition with the United States even if it faces limits and opposition.

The end result is that the next 25 years of internet regulation, and of changing rules about how information flows across boundaries, will be far more complicated than the previous 25. The extreme version of the splinternet, in which every country creates its own internet with limited connections to the global one, is unlikely to come to pass; the requirements of a modern economy simply won’t allow it. Instead, companies will have to jump through ever more hoops, and domestic demands for local ownership or data regulation will grow steadily. Corporate America will still press for an open internet for all — even making massive investments in satellite technology to try to do so — but it will not be able to prevent the inevitable.

The age of the splinternet is at hand.

https://worldview.stratfor.com/article/age-splinternet-inevitable-fracturing-internet-data-privacy-tech


Foreign Trolls Are Targeting Veterans On Facebook

Standard


PHOTO:  CHIP SOMODEVILLA/GETTY IMAGES

“WIRED”   BY  KRISTOFER GOLDSMITH

“Studies have shown that older Americans are particularly at risk for being scammed, and the average American veteran is near retirement age.

It’s time for VA to adopt “personal cyber hygiene”— steps that people can take to improve their cybersecurity—as an important part of veterans’ overall health.”


“I first came across the imposter Facebook page by accident. The page was made to look like that of my employer, Vietnam Veterans of America, complete with our organization’s registered trademark and name. As an Iraq veteran and the office’s designated millennial policy guy, I was helping run VVA’s social media accounts. The discovery kicked off what would become a 15-month-long amateur investigation into digital trolls in Bulgaria, the Philippines, and 27 other countries—all running Facebook pages targeting American troops and veterans with political propaganda.

Last year, an Oxford study revealed that military veterans are ripe targets for exploitation by foreign powers seeking to undermine American democracy. The report concluded that veterans are more likely than the average person to be community leaders and that their political opinions have significant influence on those around them. Recognizing this, foreign powers have sought to infiltrate our community, impersonating individuals and organizations with tens of thousands of members in an effort to gain veterans’ trust.

At first, what I found on the imposter Vietnam Vets account didn’t make sense. The Facebook page had recycled old news stories about the Department of Veterans Affairs and veterans’ benefits, as well as a post about a “Vietnam Veterans song of the day.” Even though the latter didn’t have any audio attached, it was nonetheless shared by followers hundreds of times each day.

My employer reported the imposter, and the page’s administrators quickly scrubbed it of any trace of our logo to avoid banishment from Facebook. Digging deeper, we realized that the news the page was sharing was scraped from legitimate military and veteran-focused newspapers, but that the stories’ dates and content were altered to provoke emotional responses—specifically outrage.

The fake page’s most viral video was a looping, 58-second local media story depicting what looked like berries smeared on a Vietnam Veterans’ monument. Now, however, it ran in a post that read “EXCLUSIVE: Vietnam Veterans Monument Vandalized… Share and Vote!” Text superimposed on the video asked, “Do you think the criminals must suffer?” More sinister still, the trolls had figured out how to game Facebook’s algorithms into treating the looped video as a live feed. As a result, Facebook handled it as if it were important breaking news, pushing it into the newsfeeds of tens of thousands of Americans.

It took three months and a handful of press releases before Facebook shut down the fraudulent page. The company’s mostly automated reporting features found that even the fake live video hadn’t violated community standards.

Five months after the imposter page was shut down, we discovered two more Facebook pages that were sharing the same content and linking to new websites. One appeared to be a page that had been dormant since 2015, a year before Russia’s election interference reportedly began. Because its creator had forgotten to register the affiliated website anonymously, we were able to identify one of the trolls by name and location: Plovdiv, Bulgaria.

It’s unclear whether this particular troll was financially motivated or part of a network of troll farms in Eastern Europe that are targeting American democracy. Whatever the motivation, the effect was the same. Changing the dates on old stories about Congress making cuts to veterans’ benefits can spread panic, anger, and confusion throughout the community and influence political beliefs and voting behavior.

This troll’s persistence sparked further investigation. We eventually found scores of American-veteran-focused Facebook pages producing politically polarizing content from outside the United States.

Vietnam Veterans of America produced a report on our earliest findings for 11 committees in Congress and a host of alphabet agencies. It’s important for Congress and federal agencies to investigate these foreign entities to find out what damage has been done. But we’re also calling on the Department of Veterans Affairs to take a more proactive role in inoculating veterans against this type of threat to prevent future harm.

It’s wholly appropriate for Congress to empower the VA to start taking preventative measures to protect veterans from digital threats like manipulation and fraud.

While low tech literacy and ailing health can create vulnerabilities for older veterans, more recent vets face a looming threat from the massive 2015 OPM data breach. That cyberattack compromised background-check information for nearly everyone who had received a security clearance since the Iraq War began. If hostile foreign actors were to cross-reference this trove of sensitive information with what’s publicly available on social media platforms, they could easily target individual veterans.

Despite its risks, social media is an important tool for many veterans. Facebook allows Vietnam Veterans of America’s members to connect with women and men that they went to war with five decades ago. In addition, countless online veterans groups have been formed to identify those at risk of suicide, so their battle-buddies can intervene before it’s too late.

Mark Zuckerberg has said that protecting democracy is an arms race. In the same way that the VA has been a leader in healing the wounds of ground combat, it must develop ways to protect veterans from harm wrought by cyber war.”

https://www.wired.com/story/trolls-are-targeting-vets-on-facebook/

What Alexander Hamilton Can Teach Us About Cyber Policy

Standard


“DEFENSE ONE”

The Hamiltonian framework is a particularly good lens for the cyber realm, for it encourages policymakers to balance the expected effects and unintended consequences of a proposed policy, and to weigh concerns about too little, or too much, government intervention.

Let’s apply the first Treasury Secretary’s principles to, say, China’s economic espionage.”

_______________________________________________________________________________________

“Biographer Ron Chernow wrote that Hamilton “saw too clearly that greater freedom” — i.e., freedom from government restriction — “could lead to greater disorder and, by a dangerous dialectic, back to a loss of freedom. Hamilton’s lifelong task was to try to straddle and resolve this contradiction and to balance liberty and order.” Much like the precarious years of the fledgling American republic, today’s cyberspace exists in a dual state of disorder and order.

To better understand how Hamilton’s framework can help us, let’s look at a specific challenge: Beijing’s efforts to steal the West’s business secrets.

In 2011, the Office of the National Counterintelligence Executive wrote that “Chinese actors are the world’s most active and persistent perpetrators of economic espionage.” In 2015, U.S. President Obama and Chinese President Xi entered into a Cyber Pact, vowing that neither state would “conduct or knowingly support cyber-enabled theft of intellectual property, including trade secrets or other confidential business information, with the intent of providing competitive advantages to companies or commercial sectors.”

Unfortunately, “the hacking has not stopped,” according to Dmitri Alperovitch, chief technology officer of the cybersecurity firm CrowdStrike. There is some evidence that it has decreased, though this might reflect hackers’ improving ability to cover their tracks, or perhaps a shifting focus to other countries. In March, the Office of the U.S. Trade Representative said in a 215-page report that “Beijing’s cyber espionage against U.S. companies persists and continues to evolve.”

Given these mixed results, what would Hamilton recommend? First, as the “founder of US economic protectionism,” he’d probably say that we need a stronger policy. Cyber economic espionage undermines trust in global trade and politics. The necessity of the times requires it.

Second, he might also suggest evaluating whether the Obama-Xi Cyber Pact produced more harmful effects than those it was intended to cure—and how we might prevent a new policy from having similar effects. While some viewed the Pact as a promising first step toward cyber norms, its lack of enforcement mechanisms was largely glossed over, leaving nothing but a “cyber mirage,” as the Wall Street Journal put it. Hamilton encourages a vigorous examination of “the consequences of the means” of a proposed measure, including the “extent and duration” of its effects.

Another drawback of the Pact: it does not account for cultural differences in the definition of espionage. Although all states collect economic intelligence—a subset of national security intelligence collection—the prohibition extends specifically to commercially motivated espionage, i.e., “economic espionage.” For example, in 2014 the U.S. Justice Department publicly indicted five Chinese military hackers for engaging in commercially motivated cyber espionage (in other words, spying for the purpose of stealing trade secrets) against several U.S. businesses, including Westinghouse Electric, SolarWorld, U.S. Steel, Allegheny Technologies and Alcoa.

The Obama administration held that a “major difference [exists] between spying for national security purposes, something the U.S. does daily, and the commercial, for-profit espionage carried out by China’s military.” This, however, is a Western distinction that is not recognized in China, says Texas A&M law professor Peter Yu: to Chinese policymakers, there is an “overlap between security and economic concerns” because many Chinese businesses are “state-owned enterprises.” Put another way by David Sanger of the New York Times, the “Chinese argue that the distinction is an American artifact, devised for commercial advantage. They believe that looking for business secrets is part of the fabric of national security, especially for a rising economic powerhouse.”

When boundaries become fuzzy, policies must impart clarity — in this case, a framework to help actors on both sides determine which kind of espionage is which. One option is to implement a Cyber Espionage Predominant Purpose Test, or CEPP Test, a variation on the “predominant purpose tests” used to resolve contract disputes. Such a test “helps courts decide whether the UCC [Uniform Commercial Code] or the common law governs a particular transaction that involves both goods and services. (For instance, the sale of a household appliance that needs to be installed can be seen as involving both tangible and moveable property and significant labor),” writes David Horton, a law professor at the University of California, Davis.

The CEPP Test would gauge the predominant purpose of the espionage act in question by considering the type of economic/commercial data in dispute; how the economic/commercial information was acquired; and the overall intent of the entity that collected it. Rather than relying on each country to argue for its own bright-line rule or definition, this test would enable the International Court of Justice to evaluate both parties’ views, determine the predominant purpose of the cyber espionage act in question, and reach an equitable settlement. Based on these historical-legal underpinnings, the CEPP Test has a high potential for success — meeting Hamilton’s third principle.
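
To make that weighing concrete, here is a minimal, purely illustrative Python sketch of how the three CEPP factors might be scored. The factor names, weights and threshold are hypothetical assumptions invented for demonstration and are not drawn from the article or any proposed legal standard.

from dataclasses import dataclass

# Hypothetical illustration of the CEPP Test's three factors: the type of data
# in dispute, how it was acquired, and the intent of the collecting entity.
# Weights and the threshold are invented for demonstration only; an actual
# tribunal would weigh evidence and argument, not numeric scores.

@dataclass
class EspionageIncident:
    data_is_trade_secret: bool        # type of economic/commercial data in dispute
    targeted_private_companies: bool  # how the information was acquired
    beneficiary_is_commercial: bool   # overall intent of the collecting entity

def predominant_purpose(incident: EspionageIncident) -> str:
    """Classify an incident's predominant purpose under the illustrative scoring."""
    score = 0
    score += 2 if incident.data_is_trade_secret else 0
    score += 1 if incident.targeted_private_companies else 0
    score += 2 if incident.beneficiary_is_commercial else 0
    # Scores at or above 3 lean toward commercially motivated ("economic")
    # espionage; lower scores look more like national-security collection.
    return "economic espionage" if score >= 3 else "national-security espionage"

# Example: trade secrets taken from private firms for a commercial beneficiary.
print(predominant_purpose(EspionageIncident(True, True, True)))  # economic espionage

Under this toy scoring, a scenario like the 2014 indictment described above, in which trade secrets were taken from private firms for commercial benefit, would clearly register as economic espionage, while collection of government policy information for a state intelligence service would not.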

The 2019 G7 Summit in France could be an opportunity for a discussion on implementing the CEPP Test to curtail cyber economic espionage. America’s first Treasury Secretary would no doubt be pleased to see it.”

https://www.defenseone.com/ideas/2018/07/what-alexander-hamilton-can-teach-us-about-cyber-policy/149921/