Category Archives: Computer Security

Public Opportunity To Comment – FAR Part 40 Addition – ‘Information Security And Supply Chain Security’

Standard

“REGULATIONS.GOV”

“Content

Action

Notice of request for information (RFI).

Summary

DoD, GSA, and NASA recently established Federal Acquisition Regulation (FAR) part 40, Information Security and Supply Chain Security. The intent of this RFI is to solicit feedback from the general public on the scope and organization of FAR part 40.

Dates

Interested parties should submit written comments to the Regulatory Secretariat Division at the address shown below on or before June 10, 2024 to be considered in the formation of the changes to FAR part 40.

Addresses

Submit comments in response to this RFI to the Federal eRulemaking portal at https://www.regulations.gov by searching for “RFI FAR part 40”. Select the link “Comment Now” that corresponds with “RFI FAR part 40”. Follow the instructions provided on the “Comment Now” screen. Please include your name, company name (if any), and “RFI FAR part 40” on your attached document. If your comment cannot be submitted using https://www.regulations.gov, call or email the points of contact in the FOR FURTHER INFORMATION CONTACT section of this document for alternate instructions.

Instructions: Response to this RFI is voluntary. Respondents may answer as many or as few questions as they wish. Each individual or entity is requested to submit only one response to this RFI. Please identify your answers by responding to a specific question or topic if possible. Please submit responses only and cite “RFI FAR part 40” in all correspondence related to this RFI. Comments received generally will be posted without change to https://www.regulations.gov, including any personal and/or business confidential information provided. Public comments may be submitted as an individual, as an organization, or anonymously (see frequently asked questions at https://www.regulations.gov/faq). To confirm receipt of your comment(s), please check https://www.regulations.gov approximately two to three days after submission to verify posting.

For Further Information Contact

For clarification of content, contact Ms. Malissa Jones, Procurement Analyst, at 571-882-4687 or by email at malissa.jones@gsa.gov. For information pertaining to status, publication schedules, or alternate instructions for submitting comments if https://www.regulations.gov cannot be used, contact the Regulatory Secretariat Division at 202-501-4755 or GSARegSec@gsa.gov. Please cite FAR Case 2023-008.

Supplementary Information

The final FAR rule 2022-010, Establishing FAR part 40, amended the FAR to establish a framework for a new information security and supply chain security FAR part, FAR part 40. The final rule does not implement any of the information security and supply chain security policies or procedures; it simply established FAR part 40. The final FAR rule was published in the Federal Register at 89 FR 22604, on April 1, 2024. Relocation of existing requirements and placement of new requirements into FAR part 40 will be done through separate rulemakings.

Currently, the policies and procedures for prohibitions, exclusions, supply chain risk information sharing, and safeguarding information that address security objectives are dispersed across multiple parts of the FAR, which makes it difficult for the acquisition workforce and the general public to understand and implement applicable requirements. FAR part 40 will provide the acquisition team with a single, consolidated location in the FAR that addresses their role in implementing requirements related to managing information security and supply chain security when acquiring products and services.

The new FAR part 40 provides a location to cover broad security requirements that apply across acquisitions. These security requirements include requirements designed to bolster national security through the management of existing or potential adversary-based supply chain risks across technological, intent-based, or economic means (e.g., cybersecurity supply chain risks, foreign-based risks, emerging technology risks). The intent is to structure FAR part 40 based on the objectives of the regulatory requirement (similar to how environmental objectives are covered in FAR part 23, and labor objectives are addressed in FAR part 22). Security-related requirements that include and go beyond information and communications technology (ICT) will be covered under FAR part 40. An example of requirements that include and go beyond ICT is cybersecurity supply chain risk management, such as the requirements related to section 889 of the John S. McCain National Defense Authorization Act for Fiscal Year 2019 (Pub. L. 115-232). Security-related requirements that only apply to ICT acquisitions will continue to be covered in FAR part 39. The test for whether existing regulations would be in FAR part 40 would be based on the following questions:

  • Question 1: Is the regulation or FAR case addressing security objectives?

○ If yes, move to question 2.

○ If no, the regulation would be located in another part of the FAR.

  • Question 2: Is the scope of the requirements limited to ICT?

○ If yes, the regulation would be located in FAR part 39.

○ If no, the regulation would be located in FAR part 40.
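The two-question test above amounts to a simple decision procedure. The sketch below is purely illustrative; the function name and boolean inputs are this post's shorthand, not anything defined in the FAR or the RFI:

```python
def far_part_for(addresses_security_objectives: bool, scope_limited_to_ict: bool) -> str:
    """Return where a regulation would land under the proposed two-question test."""
    # Question 1: is the regulation or FAR case addressing security objectives?
    if not addresses_security_objectives:
        return "another FAR part"
    # Question 2: is the scope of the requirements limited to ICT?
    if scope_limited_to_ict:
        return "FAR part 39"
    return "FAR part 40"

# Example: a security-focused requirement that goes beyond ICT acquisitions
print(far_part_for(True, False))  # FAR part 40
```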

The following are examples of the FAR subparts and regulations that are under consideration and could potentially be located in, or relocated to, FAR part 40:

Part 40—Information Security and Supply Chain Security

40.000 Scope of part.

○ General Policy Statements

○ Cross reference to updated FAR part 39 scoped to ICT

Subpart 40.1—Processing Supply Chain Risk Information

○ FAR 4.2302, sharing supply chain risk information

○ Cross reference to counterfeit and nonconforming parts (FAR 46.317)

○ Cross reference to cyber threat and incident reporting and information sharing (FAR case 2021-017)

Subpart 40.2—Security Prohibitions and Exclusions

○ FAR subpart 4.20, Prohibition on Contracting for Hardware, Software, and Services Developed or Provided by Kaspersky Lab

○ FAR subpart 4.21, Prohibition on Contracting for Certain Telecommunications and Video Surveillance Services or Equipment

○ FAR subpart 4.22, Prohibition on a ByteDance Covered Application, which covers the TikTok application, from FAR case 2023-010

○ Prohibition on Certain Semiconductor Products and Services (FAR case 2023-008)

○ FAR subpart 4.23, Federal Acquisition Security Council, except section 4.2302

○ Covered Procurement Action/agency specific exclusion orders (FAR case 2019-018)

○ FAR subpart 25.7, Prohibited Sources

○ Prohibition on Operation of Covered Unmanned Aircraft Systems from Covered Foreign Entities (FAR case 2024-002)

Subpart 40.3—Safeguarding Information

○ FAR subpart 4.4, Safeguarding Classified Information Within Industry

○ Controlled Unclassified Information (CUI) (FAR case 2017-016)

○ FAR subpart 4.19, Basic Safeguarding of Covered Contractor Information Systems

In this notice, DoD, GSA, and NASA are providing an opportunity for members of the public to provide comments on the proposed scope of FAR part 40. Feedback provided should support the goal of providing a single location to cover broad security requirements that apply across acquisitions. Providing the acquisition team with a single, consolidated location in the FAR that addresses their role in implementing requirements related to managing information security and supply chain security when acquiring products and services will enable the acquisition workforce to understand and implement applicable requirements more easily.

DoD, GSA, and NASA seek responses to any or all the questions that follow this paragraph. Where possible, include specific examples of how your organization is or would be impacted negatively or positively by the recommended scope and subparts; if applicable, provide rationale supporting your position. If you believe the proposed scope and subparts should be revised, suggest an alternative (which may include not providing guidance at all) and include an explanation, analysis, or both, of how the alternative might meet the same objective or be more effective. Comments on the economic effects including quantitative and qualitative data are especially helpful. In addition to the FAR parts and subparts proposed for relocation to FAR part 40, let us know:

1. What specific section(s) of the FAR would benefit from inclusion in FAR part 40?

2. What specific suggestions do you have for otherwise improving the proposed scope or subparts of FAR part 40?

William F. Clark,

Director, Office of Government-wide Acquisition Policy, Office of Acquisition Policy, Office of Government-wide Policy.”

https://www.regulations.gov/document/FAR-2024-0054-0001

Pentagon ‘Maturity Model’ For Generative AI Coming In June

Standard

“BREAKING DEFENSE” – By Sydney J. Freedberg Jr

“We have a gap between the science and the marketing, and one of the things our organization is doing, [through its] Task Force Lima, is trying to rationalize that gap.” – Craig Martell, Chief Digital and Artificial Intelligence Officer.

__________________________________________________________________________________________________________

“To get a gimlet-eyed assessment of the actual capabilities of much-hyped generative artificial intelligences like ChatGPT, officials from the Pentagon’s Chief Data & AI Office said they will publish a “maturity model” in June.

“We’ve been working really hard to figure out where and when generative AI can be useful and where and when it’s gonna be dangerous,” the outgoing CDAO, Craig Martell, told the Cyber, Innovative Technologies, & Information Systems subcommittee of the House Armed Services Committee. “We have a gap between the science and the marketing, and one of the things our organization is doing, [through its] Task Force Lima, is trying to rationalize that gap. We’re building what we’re calling a maturity model, very similar to the autonomous driving maturity model.”

That widely used framework rates the claims of car-makers on a scale from zero — a purely manual vehicle, like a Ford Model T — to five, a truly self-driving vehicle that needs no human intervention in any circumstances, a criterion that no real product has yet met.

For generative AI, Martell continued, “that’s a really useful model because people have claimed level five, but objectively speaking, we’re really at level three, with a couple folks doing some level four stuff.”

The problem with Large Language Models to date is that they produce plausible, even authoritative-sounding text that is nevertheless riddled with errors called “hallucinations” that only an expert in the subject matter can detect. That makes LLMs deceptively easy to use but terribly hard to use well.

“It’s extremely difficult. It takes a very high cognitive load to validate the output,” Martell said. “[Using AI] to replace experts and allow novices to replace experts — that’s where I think it’s dangerous. Where I think it’s going to be most effective is helping experts be better experts, or helping someone who knows their job well be better at the job that they know well.”

“I don’t know, Dr. Martell,” replied a skeptical Rep. Matt Gaetz, one of the GOP members of the subcommittee. “I find a lot of novices showing capability as experts when they’re able to access these language models.”

“If I can, sir,” Martell interjected anxiously, “it is extremely difficult to validate the output. … I’m totally on board, as long as there’s a way to easily check the output of the model, because hallucination hasn’t gone away yet. There’s lots of hope that hallucination will go away. There’s some research that says it won’t ever go away. That’s an empirical open question I think we need to really continue to pay attention to.

“If it’s difficult to validate output, then… I’m very uncomfortable with this,” Martell said.

The day before Martell testified on the Hill, his chief technology officer, Bill Streilein, shared details about the development and timeline of the forthcoming maturity model at the Potomac Officers Club’s annual conference on AI.

Since the CDAO’s Task Force Lima launched last August, Streilein said, it’s been assessing over 200 potential “use cases” for generative AI submitted by organizations across the Defense Department. What they’re finding, he said, is that “the most promising use cases are those in the back office, where a lot of forms need to be filled out, a lot of documents need to be summarized.”

“Another really important use case is the analyst,” he continued, because intelligence analysts are already experts in assessing incomplete and unreliable information, with doublechecking and verification built into their standard procedures.

As part of that process, CDAO went to industry to ask their help in assessing generative AIs — something that the private sector also has a big incentive to get right. “We released an RFI [Request For Information] in the fall and received over 35 proposals from industry on ways to instantiate this maturity model,” Streilein told the Potomac Officers conference. “As part of our symposium, which happened in February, we had a full day working session to discuss this maturity model.

“We will be releasing our first version, version 1.0 of the maturity model… at the end of June,” he continued. But it won’t end there: “We do anticipate iteration… It’s version 1.0 and we expect it will keep moving as the technology improves and also the Department becomes more familiar with generative AI.”

Streilein said 1.0 “will consist of a simple rubric of five levels that articulate how much the LLM autonomously takes care of accuracy and completeness,” previewing the framework Martell discussed with lawmakers. “It will consist of datasets against which the models can be compared, and it will consist of a process by which someone can leverage a model of a certain maturity level and bring it into their workflow.”

Why is CDAO taking inspiration from the maturity model for so-called self-driving cars? To emphasize that the human can’t take a hands-off, faith-based approach to this technology.

“As a human who knows how to drive a car, if you know that the car is going to keep you in your lane or avoid obstacles, you’re still responsible for the other aspects of driving, [like] leaving the highway to go to another road,” Streilein said. “That’s sort of the inspiration for what we want in the LLM maturity model… to show people the LLM is not an oracle, its answers always have to be verified.”

Streilein said he is excited about generative AI and its potential, but he wants users to proceed carefully, with full awareness of the limits of LLMs.

“I think they’re amazing. I also think they’re dangerous, because they provide the very human-like interface to AI,” he said. “Not everyone has that understanding that they’re really just an algorithm predicting words based on context.”

https://breakingdefense.com/2024/03/useful-or-dangerous-pentagon-maturity-model-for-generative-ai-coming-in-june

ABOUT THE AUTHOR:

Sydney J. Freedberg Jr. has written for Breaking Defense since 2011 and served as deputy editor for the site’s first decade, covering technology, strategy, and policy with a particular focus on the US Army. He’s now a contributing editor focused on cyber, robotics, AI, and other critical technologies and policies that will shape the future of warfare. Sydney began covering defense at National Journal magazine in 1997 and holds degrees from Harvard, Cambridge, and Georgetown.

Participants From 40-Plus Countries Meeting To Thrash Out ‘Responsible AI’ For Military Use

Standard

“BREAKING DEFENSE” By Sydney J. Freedberg Jr.

“The goal is to share best practices, discuss models like the Pentagon’s online Responsible AI Toolkit, and build their personal expertise in AI policy to take home to their governments.”

_________________________________________________________________________________________________________

“Thirteen months after the State Department rolled out its Political Declaration on ethical military AI at an international conference in the Hague, representatives from the countries who signed on will gather outside of Washington to discuss next steps.

“We’ve got over 100 participants from at least 42 countries of the 53,” a senior State Department official told Breaking Defense, speaking on background to share details of the event for the first time. The delegates, a mix of military officers and civilian officials, will meet at a closed-door conference March 19 and 20 at the University of Maryland’s College Park campus.

“We really want to have a system to keep states focused on the issue of responsible AI and really focused on building practical capacity,” the official said.

On the agenda: every military application of artificial intelligence, from unmanned weapons and battle networks, to generative AI like ChatGPT, to back-office systems for cybersecurity, logistics, maintenance, personnel management, and more. The goal is to share best practices, discuss models like the Pentagon’s online Responsible AI Toolkit, and build their personal expertise in AI policy to take home to their governments.

That cross-pollination will help technological leaders like the US refine their policies, while also helping technological followers in less wealthy countries to “get ahead of the issue” before investing in military AI themselves.

This isn’t just a talking shop for diplomats, the State official emphasized. Next week’s meeting will feature a mix of military and civilian delegates, with the civilians coming not just from foreign ministries but also the independent science & technology agencies found in many countries. The very process of organizing the conference has served a useful forcing function, the official said, simply by requiring signatory countries to figure out who to send and which agencies in their governments should be represented.

State wants this to be the first of an indefinite series of annual conferences hosted by fellow signatory states around the world. In between these general sessions, the State official explained, smaller groups of like-minded nations should get together for exchanges, workshops, wargames, and more — “anything to build awareness of the issue and to take some concrete steps” towards implementing the declaration’s 10 broad principles [PDF]. Those smaller fora will then report back to the annual plenary session, which will codify lessons, debate the way forward, and set the agenda for the coming year.

“We value a range of perspectives, a range of experiences, and the list of countries endorsing the declaration reflects that,” the official said. “We’ve been very gratified by the breadth and depth of the support we’ve received for the Political Declaration.

“53 countries have now joined together,” the official said, up from 46 (US included) announced just a few months ago in November. “Look carefully at that list: It’s not a US-NATO ‘usual suspects’ list.”

The nations that have signed on are definitely diverse. They include core US allies like Japan and Germany; more troublesome NATO partners Turkey and Hungary; wealthy neutrals like Austria, Bahrain, and Singapore; pacifist New Zealand; wartorn Ukraine (which has experimented with AI-guided attack drones); three African nations, Liberia, Libya, and Malawi; and even minuscule San Marino. Notably absent, however, are not only the Four Horsemen that have long driven US threat assessments — China, Russia, Iran, and North Korea — but also infamously independent-minded India (despite years of US courtship on defense), as well as most Arab and Muslim-majority nations.

That doesn’t mean there’s been no dialogue with those countries. Last November, just weeks apart, China joined the US in signing the broader Bletchley Declaration on AI across the board (not only military) at the UK’s AI safety summit, and Chinese President Xi Jinping agreed to vaguely defined discussions on what US President Joe Biden described after their summit in California as “risk and safety issues associated with artificial intelligence.” Both China and Russia participate in the regular Geneva meetings of the UN Group of Government Experts (GGE) on “Lethal Autonomous Weapons Systems” (LAWS), although activists aiming for a ban on “killer robots” say those talks have long since stalled.

The State official took care to say the US-led process wasn’t an attempt to bypass or undermine the UN negotiations. “Those are important discussions, those are productive discussions, [but] not everyone agrees,” they said. “We know that disagreements will continue in the LAWS context — but I don’t think that we are well advised to let those disagreements stop us, collectively, from making progress where we can” in other venues and on other issues.

Indeed, it’s a hallmark of State’s Political Declaration — and the Pentagon’s approach to AI ethics, from which it draws — that it addresses not just futuristic “killer robots” and SkyNet-style supercomputers, but also other military uses of AI that, while less dramatic, are already happening today. That includes mundane administration and industrial applications of AI, such as predictive maintenance. But it also encompasses military intelligence AIs that help designate targets for lethal strikes, such as the American Project Maven and the Israeli Gospel (Habsora).

All these various applications of AI can be used, not just to make military operations more efficient, but to make them more humane as well, US officials have long argued. “We see tremendous promise in this technology,” the State official said. “We see tremendous upside. We think this will help countries discharge their IHL [International Humanitarian Law] obligations… so we want to maximize those advantages while minimizing any potential downside risk.”

That requires establishing norms and best practices “across the waterfront” of military AI, the US government believes. “It’s important not to underestimate the need to have a consensus around how to use even the back office AI in a responsible way,” the official said, “such as [by] having international legal reviews, having adequate training, having auditable methodologies…. These are fundamental bedrock principles of responsibility that can apply to all applications of AI, whether it’s in the back office or on the battlefield.”

https://breakingdefense.com/2024/03/40-plus-countries-convening-next-week-to-thrash-out-responsible-ai-for-military-use

Don’t Forget About ‘Old-Fashioned’ Artificial Intelligence

Standard

“DEFENSE ONE” By Ross Wilkers

“For all of the hype about ChatGPT and its generative ilk, older machine-learning tools and techniques are still useful and getting better.”

____________________________________________________________________________________________________________

“For all the talk in 2023 about generative artificial intelligence’s potential promises and perils, what could be described as “old-fashioned AI” can do the job just fine in many instances.

One way to think about old-fashioned AI versus generative AI is that in the former, the machines mostly do not talk back to the operator. Ask questions of a generative AI tool, however, and those machines do talk back to the user.

The 2024 edition of Deloitte’s annual report on government technology trends to watch in the year ahead singles out generative AI as an area of interest for agencies, but the author also reminded me that the other category is advancing as well.

Scott Buchholz, chief technology officer for Deloitte’s government and public services practice, pointed out two categories where old-fashioned AI still comes into play: instances that are approved automatically because they are routine, and others that need further examination so humans can look at them more closely.

Predictive modeling functions in use cases such as foot traffic through a security checkpoint also remain suitable for AI as the world has known it, according to Buchholz.

“All of those other sorts of things that we’ve spent at least 10 or 20 years now trying to figure out and work through somehow, are incredibly important, and arguably in some cases more valuable than what can currently be done with generative AI,” Buchholz told me.

Deloitte’s report points out that generative AI does complicate the never-ending discussion over whether machines are capable of thought. ChatGPT and the others like it can at least imitate human cognition as they show promise for trawling through large volumes of content.

“There’s more experimentation going on than most people recognize; the government actually does have implementations of generative AI technologies in production,” Buchholz said. “They tend to do things like searching large bodies of policies and answering questions in English, which sounds a little mundane until you try reading government policies and then you might have some empathy for the reason why people would find that really exciting.”

Indeed, several pilots are already live across the public sector ecosystem in a push to better understand the full potential of generative AI tools, both the positive and negative.

Buchholz also pointed out one aspect of that technology that will surprise some but not others: the rate of adoption across commercial industries is not dramatically different than in government.

One aspect unique to government is the volume of legacy systems that accumulate large levels of technical debt over time, which Buchholz said naturally means more work is needed in that context than in a comparable commercial setting.

Computing power also continues to be a high-priority discussion item across the entire technology ecosystem, given both a strong desire by many enterprises to move into cloud environments and a universal realization that everything of importance in the world runs on chips.

Deloitte’s report says that standard cloud services still provide “more than enough functionality for most business-as-usual operations,” but more specialized hardware including custom chips are required for more cutting-edge use cases.

Buchholz said that, for the most part, microprocessor speeds have not dramatically improved over the past couple of years, and the industry is nearing the point where chips cannot be made any smaller to make systems go faster.

The global hyperscale cloud providers are realizing that and coming up with special purpose chips, which could be used for singular and higher-end use cases like machine learning.

Buchholz said that means options are increasing for what workloads can be run, albeit more slowly if the chips have greater specialization.

“What’s happening is there’s this increasing diversity of ways you can do things and different cost performance tradeoffs,” Buchholz said.

Which brings the discussion to quantum computing, which looks to be becoming more real as time goes on and is certainly taking up more of the spotlight. The Dec. 3 episode of CBS’ 60 Minutes program had a segment on the race to lead the world in quantum, for instance.

“Those may be starting to come online in the next two or three years if you believe vendors’ road maps and they can continue to meet them,” said Buchholz, who wears the dual hat of leading Deloitte’s global quantum efforts. “That becomes really exciting because at that point in time, we have new ways of solving problems that have new and interesting characteristics.”

https://www.defenseone.com/technology/2024/01/dont-foget-about-old-fashioned-ai/393058/

ABOUT THE AUTHOR:

Ross Wilkers covers the business of government contracting, companies and trends that shape the market. He joined WT in 2017 and works with Editor-in-Chief Nick Wakeman to host and produce our WT 360 podcast that features discussions with the market’s leading executives and voices. Ross is a native of Northern Virginia and is an alumnus of George Mason University.

Pentagon Reveals Updated Cost Estimates For CMMC Implementation

Standard

“DEFENSESCOOP” By Jon Harper

“DOD provided new projections for how much money contractors and other organizations will have to spend to implement the Pentagon’s CMMC program in a proposed rule for Cybersecurity Maturity Model Certification that was published in the Federal Register.”

__________________________________________________________________________________________________________

“The program would mandate that defense contractors and subcontractors who handle federal contract information and controlled unclassified information (CUI) implement cybersecurity standards at various levels — depending on the type and sensitivity of the information — and assess their compliance.

“The CMMC initiative will require the Department of Defense to identify CMMC Level 1, 2, or 3 as a solicitation requirement for any effort that will cause a contractor or subcontractor to process, store, or transmit FCI or CUI on its unclassified information system(s). Once CMMC is implemented in 48 CFR, DoD will specify the required CMMC Level in the solicitation and the resulting contract,” the proposed rule explains.

More than 200,000 companies in the defense industrial base could be affected by the rule.

The Pentagon is planning for a phased implementation. It intends to include CMMC requirements in all solicitations issued on or after Oct. 1, 2026, when applicable, although waivers could be issued in certain cases before solicitations are issued.

Depending on the required security level, contractors and subcontractors will have to do self-assessments or be evaluated by a third-party organization — known as a C3PAO — or government assessors.

Costs would be incurred for related activities such as planning and preparing for the assessment, conducting the assessment and reporting the results.

“In estimating the Public costs, DoD considered applicable nonrecurring engineering costs, recurring engineering costs, assessment costs, and affirmation costs for each CMMC Level,” per the proposed rule.

“For CMMC Levels 1 and 2, the cost estimates are based only upon the assessment, certification, and affirmation activities that a defense contractor, subcontractor, or ecosystem member must take to allow DoD to verify implementation of the relevant underlying security requirements,” it notes. “DoD did not consider the cost of implementing the security requirements themselves because implementation is already required by FAR clause 52.204–21, effective June 15, 2016, and by DFARS clause 252.204–7012, requiring implementation by Dec. 31, 2017, respectively; therefore, the costs of implementing the security requirements for CMMC Levels 1 and 2 should already have been incurred and are not attributed to this rule.”

An annual Level 1 self-assessment and affirmation would assert that a company has implemented all the basic safeguarding requirements to protect federal contract information as set forth in 32 CFR 170.14(c)(2).

For Level 1, the Pentagon estimates that the cost to support a self-assessment and affirmation would be nearly $6,000 for a small entity and about $4,000 for a larger entity.

Triennial Level 2 self-assessments and affirmations would attest that a contractor has implemented all the security requirements to protect CUI as specified in 32 CFR 170.14(c)(3). A triennial Level 2 certification assessment conducted by a C3PAO would verify that a contractor is meeting the security requirements.

“A CMMC Level 2 assessment must be conducted for each [organization seeking certification] information system that will be used in the execution of the contract that will process, store, or transmit CUI,” the proposed rule notes.

A Level 2 self-assessment and related affirmations are estimated to cost over $37,000 for small entities and nearly $49,000 for larger entities (including the triennial assessment and affirmation and two additional annual affirmations). A Level 2 certification assessment is projected to cost nearly $105,000 for small entities and approximately $118,000 for larger entities (including the triennial assessment and affirmation and two additional annual affirmations).

“Receipt of a CMMC Level 2 Final Certification Assessment for information systems within the Level 3 CMMC Assessment Scope is a prerequisite for a CMMC Level 3 Certification Assessment. A CMMC Level 3 Certification Assessment, conducted by [the Defense Contract Management Agency] Defense Industrial Base Cybersecurity Assessment Center (DIBCAC), verifies that an [organization seeking certification] has implemented the CMMC Level 3 security requirements to protect CUI as specified in 32 CFR 170.14(c)(4),” per the proposed rule.

A triennial Level 3 certification assessment would have to be conducted for each company information system that will process, store, or transmit CUI in the execution of the contract.

Level 3 certification would require “implementation of selected security requirements from NIST SP 800–172 not required in prior rules. Therefore, the Nonrecurring Engineering and Recurring Engineering cost estimates have been included for the initial implementation and maintenance of the required selected NIST SP 800–172 requirements,” according to the proposed rule.

The total cost of a Level 3 certification assessment includes the expenses associated with a Level 2 certification assessment as well as the outlays for implementing and assessing the security requirements specific to Level 3.

For a small organization, the estimated recurring and nonrecurring engineering costs associated with meeting the security mandates for Level 3 are $490,000 and $2.7 million, respectively. The projected cost of a certification assessment is more than $10,000 (including the triennial assessment and affirmation and two additional annual affirmations).

For a larger organization, the estimated recurring and nonrecurring engineering costs associated with Level 3 safeguards are $4.1 million and $21.1 million, respectively. The projected cost of a certification assessment and related affirmations is more than $41,000 (including the triennial assessment and affirmation and two additional annual affirmations).
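The rule's cost structure for Level 3 — the Level 2 certification assessment plus the Level 3 engineering and assessment outlays — can be rolled up from the figures quoted above. The sketch below is illustrative only: the dollar amounts are the proposed rule's estimates as reported here, but the simple summation ignores phase-in timing and discounting.

```python
# Illustrative roll-up of the per-entity CMMC Level 3 cost figures quoted
# above. Amounts are the proposed rule's estimates; the flat sum is a
# simplification (no phase-in schedule or discounting).

COSTS = {
    "small": {
        "level2_certification": 105_000,      # "nearly $105,000"
        "level3_nonrecurring_eng": 2_700_000,
        "level3_recurring_eng": 490_000,
        "level3_assessment": 10_000,          # "more than $10,000"
    },
    "large": {
        "level2_certification": 118_000,      # "approximately $118,000"
        "level3_nonrecurring_eng": 21_100_000,
        "level3_recurring_eng": 4_100_000,
        "level3_assessment": 41_000,          # "more than $41,000"
    },
}

def level3_total(entity: str) -> int:
    """Rough total Level 3 outlay for one entity size class."""
    return sum(COSTS[entity].values())

for size in COSTS:
    print(f"{size}: ~${level3_total(size):,}")
```

The roll-up makes the rule's point concrete: Level 3 is dominated by the engineering work of implementing the NIST SP 800-172 requirements, not by the assessment itself.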

Level 3 standards are expected to apply only to a “small subset” of defense contractors and subcontractors, the proposed rule states.

For the calculations, officials tried to account for organizational differences between small companies and larger defense contractors. For example, small firms are generally expected to have less complex, less expansive IT and cybersecurity infrastructures and operating environments. They are also more likely to outsource IT and cybersecurity to an external service provider, according to the proposed rule.

Additionally, officials anticipate that organizations pursuing Level 2 assessments will seek consulting or implementation assistance from an external service provider to help them get ready for assessments or to participate in assessments with the C3PAOs.

The annualized costs for contractors and other non-government entities to implement CMMC 2.0 will be about $4 billion, calculated for a 20-year horizon. For the government, they will be approximately $10 million, according to the projections.
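For readers unfamiliar with how a cost "annualized over a 20-year horizon" is derived, regulatory analyses typically convert the present value of a cost stream into a constant annual payment via a capital-recovery factor. The function below sketches that mechanic; the 7% discount rate is an assumption (the conventional OMB Circular A-4 rate), not a figure taken from the proposed rule.

```python
# Standard annualization used in regulatory impact analyses: convert a
# present value into an equivalent constant annual payment.
# The 7% rate and 20-year horizon are assumptions for illustration.

def annualized_cost(present_value: float, rate: float = 0.07, years: int = 20) -> float:
    """Constant annual payment equivalent to `present_value` over `years`."""
    crf = rate / (1 - (1 + rate) ** -years)  # capital-recovery factor
    return present_value * crf
```

At 7% over 20 years the capital-recovery factor is roughly 0.094, so each $1 of present-value cost corresponds to about 9.4 cents per year in annualized terms.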

The Pentagon is seeking public feedback on the proposed rule. Comments are due by Feb. 26, 2024.

The costs and procedural requirements associated with implementing CMMC have been a major concern for defense contractors and trade associations.

“Burdensome regulation has long been a hurdle, particularly for small and medium-sized businesses that contribute to the defense industrial base. It’s critical for defense companies to have the tools — and the standards — to keep our nation’s sensitive unclassified material secure while not deterring companies from contributing to the defense industrial base,” Eric Fanning, president and CEO of the Aerospace Industries Association, said in a statement Tuesday. “We look forward to reviewing the proposed rule and providing full feedback to ensure the Department has what it needs to implement a final rule that accounts for the complexities within the defense industrial base.”

defensescoop.com/2023/12/28/cmmc-implementation-cost-estimates/

ABOUT THE AUTHOR

Jon Harper

Jon Harper is Managing Editor of DefenseScoop, the Scoop News Group’s newest online publication focused on the Pentagon and its pursuit of new capabilities. He leads an award-winning team of journalists in providing breaking news and in-depth analysis on military technology and the ways in which it is shaping how the Defense Department operates and modernizes. You can also follow him on Twitter @Jon_Harper_

5 Steps To Better Artificial Intelligence (AI) Procurement

Standard

“FEDERAL TIMES” By Devaki Raj

“Artificial intelligence can help realize massive social, economic, and security achievements, but the federal government must modernize its acquisition rules and processes, lest they be left behind by both our Allies and adversaries.”

________________________________________________________________________________________________________

“Governments today face a growing issue: Artificial intelligence can help realize massive social, economic, and security achievements, but for them to proceed without AI-specific procurement processes is to fall increasingly behind. Governments are constrained by archaic regulations and practices that hinder rapid acquisition of cutting-edge technologies.

In my recent testimony before the U.S. Senate, representing CrowdAI—a small business I led that was recently acquired by Saab—I called attention to these complexities while advocating for the federal government to modernize its acquisition rules and processes, lest they be left behind by both our Allies and adversaries.

While it won’t be easy, change is critical. Here are five ways for government to build effective roadmaps for successful AI procurement:

1. Contextualize data

Despite what some businesses tout, commercial AI does not have plug-and-play solutions for governmental projects. In reality, there are only nascent constructs — building blocks that are adaptable (and potentially incredibly effective) but whose efficacy hinges on precise data for model conditioning. Without the correct data, AI models cannot produce the desired outcome.

While governments have protected, and should continue to protect, their data, sources, and methods, it is impossible to build effective AI technologies without access.

To make AI and automation achievable, governments must begin curating specialized, high-quality datasets that are not only compliant with security and privacy regulations but also conform to common standards for machine learning. This task is complex, especially considering international data-sharing agreements and the complications that arise from global partnerships. However, ignoring this step will lead to inefficiencies that will frustrate adoption of AI-powered solutions.

2. Perpetually improve

The contracting models of the past are fundamentally ill-equipped to cope with the dynamic nature of AI technology. Just like your smartphone needs regular updates to improve features, fix bugs, and prevent vulnerabilities, AI models are similarly iterative technologies that rely on dynamic, ongoing adjustments.

For instance, our collaborative projects with the California Air National Guard to map wildfires in real-time revealed the necessity for continuous model adaptation as fires moved from forested to urban areas.

While we know what needs to happen for long-term success, most government contracts fail to include ongoing updates and maintenance required for these projects.

Governmental contracts must go beyond making a simple, one-time purchase and entrench the iterative nature of AI into their frameworks. A contract must be a reflection of the technology it is purchasing: adaptive, agile, and perpetually improving.

3. Rigorously evaluate

The democratization of AI through open-source models presents a dual-edged sword. While it expands access, it also complicates vetting, particularly for government agencies that may not have the in-house technical acumen to validate a model’s suitability for intricate missions.

Compounded with contracts that fail to facilitate iterative development, the result is significant risk exposure for governments from unqualified contractors and failed projects at taxpayers' expense.

To mitigate this risk, governments need to invest in building or acquiring evaluation expertise. Procurement protocols should encompass not just initial benchmarks, but periodic reviews that scrutinize alignment with evolving mission objectives, data privacy and security norms, and social-ethical constraints.

4. Track and prioritize

Smaller businesses, with their nimble structures and innovative dynamism, often find themselves stymied when scaling their solutions within the confines of governmental procurement. While small businesses may be agile, they can also wither under lengthy government procurement timelines without support.

Contractual milestones within research and development contracts for transition to operations would help innovators scale up and provide lasting value to government organizations. Some programs, like the Small Business Innovation Research (SBIR) program, have initiated steps in this direction. And SBIR's sole-source award policy enables small businesses to compete with large prime contractors. However, it is not enough.

SBIR faces growing challenges. Over the last few years, we've observed the number of SBIR awardees balloon, the size of awards shrink, and the number of transitioned projects slip.

To support SBIR program success and the businesses that pursue these awards, governmental offices should track and prioritize the number of projects transitioned to programs of record, not simply the number of contracts or funds awarded. What's more, departments and agencies that participate in SBIR must update their transition mechanisms (the contracts and funding) to account for capabilities, such as AI, that need significant ongoing improvement.

5. Support international frameworks

Over the past 75 years or so, we have come simply to accept that policy lags technology. The pace of change is too fast, we say, for regulations to be contemplated, let alone promulgated. However, the gap has become too wide and too consequential to ignore. The evolving tapestry of international entanglements, underscored by technological dependencies and data alliances, has rendered traditional procurement frameworks inadequate to the point of being dangerous.

To move forward, we need more than a national AI strategy. We also need to support international collaborative frameworks that consider the multi-faceted impact of AI, from cooperative security to ethics, if we want to create and sustain successful frameworks for advanced technologies.

Conclusion

Governmental agencies need to drastically evolve their procurement methods to meet the unique challenges posed by AI.

For governments to transition to the effective procurement and use of AI technologies, which carry the potential for extraordinary benefits, we must look at the bigger picture. Yes, we need contract adaptations. But governments also need a more profound systemic overhaul that aligns technological advancements with strategic foresight, legal rigor, and ethical considerations.

By doing so, we can transition from the present state of disjointed engagements to a future where AI serves as a robust, reliable partner in achieving the collective aims of society.”

https://www.federaltimes.com/opinions/2023/11/22/5-steps-to-better-artificial-intelligence-procurement/

ABOUT THE AUTHOR:

Devaki Raj was the CEO and Co-Founder of CrowdAI. On September 7, 2023, Devaki joined Saab, Inc.’s newly established strategy office as the Chief Digital and AI Officer. On September 14, Devaki testified before the U.S. Senate Homeland Security and Governmental Affairs Committee on AI acquisition and procurement.

Five Things To Watch As Pentagon Prepares To Issue CMMC Formal Rule

Standard

“FEDERAL NEWS NETWORK” By Justin Doubleday

The program has been years in the making, and the rule is coming out approximately two years after the Pentagon massively reshaped the CMMC program to make it less of a burden for smaller businesses.

____________________________________________________________________________________________________

“The Pentagon will soon publish a rule to formally begin implementing the long-awaited Cybersecurity Maturity Model Certification regime.

CMMC is intended to provide the Defense Department with a way to assess whether tens of thousands of contractors in its industrial base are meeting cybersecurity requirements for protecting controlled unclassified information (CUI) on their networks.

“This is the most ambitious cybersecurity conformity initiative ever attempted,” Matt Travis, the chief executive officer of the Cyber Accreditation Body, noted during a “CMMC Ecosystem Summit” hosted by GovExec.

With the rule nearing publication, DoD officials did not discuss the program during the event. But several officials and experts close to the long-brewing CMMC program offered key things to watch as DoD prepares to issue what many expect will be a proposed rule.

Expect a lengthy document

The CMMC rule won’t make for light reading. Travis said he expects the rule will be in the “hundreds of pages” when you factor in supporting documents.

Bob Metzger, head of the Washington office for law firm Rogers Joseph O’Donnell, said he expects the rule to be “long and complex.”

“I’ve heard reports that it’s 150 pages, perhaps even more in the draft stage,” Metzger said today. “I’ve been told that there will be an extended treatment at the start of the rule that explains why they’re doing this and what it’s supposed to mean, and what the benefits will be and how it will impact industry.”

But the back part of the rule, Metzger said, will explain what will change in federal regulations under Title 32 “National Defense,” as well as under the Defense Acquisition Regulations System in Title 48 of the Code of Federal Regulations. Those changes will explain how the CMMC program will work in practice: key information for companies that will need to get assessed under the forthcoming requirements.

“That’s the stuff that actually will impact you when it becomes final,” Metzger said.

Comment period to be extensive

Once DoD publishes the rule, it will kick off a 60-day public comment period. Metzger noted the previous CMMC rule in 2020, before the program was revamped, received more than 800 public comments. “I would expect there’ll be more for this,” he said.

And it’s also possible, given the enormity of the program’s impact, that DoD extends the comment period beyond 60 days. Metzger pointed to how agencies recently extended the public comment period for a number of cybersecurity rules and requests for information, including an RFI on cyber regulatory harmonization.

“Probably they’ll extend it probably for another 60 days,” Metzger said. “But that would be it.”

While DoD officials are following protocol by staying tight-lipped about the rule ahead of its release, Travis expects officials will talk more once the rule is published.

“I would expect the department, at some point during that comment period, to say something publicly,” he said. “I don’t think we’ll hear anything out of the gate. But I would be surprised that they didn’t come out and kind of explain their work while the public comment period is still open.”

How will DoD address small business concerns?

The changes DoD announced in late 2021 were intended to streamline the program by reducing the different “levels” of CMMC from five to just three, while also making it easier for small businesses to comply with the certification requirements.

“A question yet unanswered is whether the rule will sort of set different expectations or demands for smaller businesses as opposed to larger ones,” Metzger said.

In CMMC regulatory documents that were accidentally posted online in August and subsequently pulled down, DoD estimated approximately 76,000 companies would be required to get a CMMC “level two” third-party certification, including more than 56,000 small businesses.

Travis said small business concerns are a key factor for DoD in the shaping of the CMMC 2.0 program.

“The small business concerns is one I know the department and the government has been working on because it’s so important to make sure that we hold them accountable, but hold them accountable in a way that’s not going to chase them out of the [defense industrial base],” Travis said.

Jack Wilmer, a former official in the DoD office of the chief information officer and now chief executive of cyber firm Core4ce, suggested the requirements should be tailored to the sensitivity of the information, not the size of the business.

“You can be manufacturing some really meaningful components for the department,” Wilmer said of small businesses. “It’s a slippery slope when you go down size being the determining factor in how secure you should be. I tend to fall back much more on the level of sensitivity of information that you are dealing with.”

Due to privity of contract rules, DoD will also lean on its big prime contractors to ensure the subcontractors in their supply chains are in line with CMMC.

“I would expect that the government will increase the pressure on and vigilance over the primes to make sure that they are in fact not just flowing down the clauses, but taking measures to assure that the subs are complying with the requirements,” Metzger said.

How will it be implemented?

The rule and other supporting documents could also shed more light on DoD’s plan for eventually implementing the requirements. DoD officials have previously said they don’t expect to finalize the CMMC rule until later in 2024.

The documents that were accidentally released in August pointed to a ramp-up of the CMMC certification requirements. They show that DoD projected a total of 517 entities would need a third-party assessment in the first year of CMMC, but that subsequent years would see a steep increase in the requirements.

“The rollout is going to start small, and they’re likely to look for companies who will be ready for it,” Metzger said. “They don’t want a lot of people to fail, because that disrupts or interrupts the supply chain. Not a good outcome. But that ramp is going to get pretty steep, pretty rapidly.”

What will other agencies do?

DoD contract spending dwarfs the rest of the federal government by a wide margin. But other agencies have not been keen to jump on the CMMC bandwagon, publicly at least, even though they also face concerns about the cybersecurity practices of their contractors.

The Department of Homeland Security earlier this month released its plan for evaluating contractor “cybersecurity readiness” by requiring companies to fill out a security questionnaire that will then be assessed by DHS officials.

DHS’s chief information security officer has said the CMMC requirements would be too arduous for its base of small businesses.

Metzger said he believes “DHS is avoiding dealing with CUI in the hands of contractors until they see what happens with [the CMMC] rule.”

ABOUT THE AUTHOR:

Justin Doubleday covers cybersecurity, homeland security and the intelligence community for Federal News Network. He previously covered the Pentagon for Inside Defense, where he reported on emerging technologies, cyber and supply chain security. Justin is a 2013 graduate of the University of New Hampshire, where he received his B.A. in English/Journalism. 

DOD Recognizes ‘FutureG’ As A Critical Technology Investment Area

Standard

Image: rt.cto.mil

“NATIONAL DEFENSE MAGAZINE” By Nick Maynard and Arun Seraphin  

“The lack of U.S. wireless equipment vendors leaves the Defense Department and domestic service providers vulnerable to price and production swings, creating national security threats.

It is critical that the government continue to invest in next-generation wireless research, prototyping and scale-up.”

_______________________________________________________________________________________________________

“The Defense Department has correctly identified FutureG as a critical technology area that will lay the groundwork for continued U.S. leadership in information technology, which is vital for maintaining our economic and national security.

While fifth-generation cellular network (5G) communications are becoming commercially available to the United States, the domestic wireless ecosystem is severely challenged by competition from foreign government-subsidized equipment vendors.

Over the past decade, the global wireless equipment market has consolidated into five vendors. Two of these companies, Huawei and ZTE, were designated by the Federal Communications Commission as national security risks due to connections with the Chinese government. The other leading 5G vendors — Samsung, Nokia and Ericsson — are all based overseas, but make up the core of the U.S. equipment market.

 Federal agencies have conducted many research and demonstration efforts on emerging 5G technologies over the past three years. These efforts are creating the technological foundation that will enable 5G to serve as the basis to create the next generation of wireless cellular networks and security technologies for military missions.

One exciting effort is the 2023 5G Challenge being run by the undersecretary of defense for research and engineering in partnership with the National Telecommunications and Information Administration.

For this effort to succeed and enable the nation to regain its spectrum leadership — including in sixth-generation systems and beyond — the domestic wireless ecosystem will need to both develop and adopt a standard architecture usable by industry, startups, academia and government organizations alike.

The early promise of Open Radio Access Network, or O-RAN, technology offers an opportunity for the United States to regain some lost ground and to establish leadership in 5G and beyond. More open and flexible wireless networks can also ultimately increase vendor diversity, increase innovation in wireless networking technology, lower deployment and operational costs and even increase security.

To respond to foreign subsidization, public-private collaborative efforts between government, industry and academia are needed to support a comprehensive effort for the department to develop, validate, and deploy new O-RAN technologies and systems that will be open and interoperable, while enabling a vibrant and competitive marketplace.

One key step will be to increase government collaborations with domestic suppliers to build a 5G consortium that would support the creation of a domestic market and further O-RAN technology in the United States. This emerging technology for next-generation wireless networks is a concept based on interoperability and standardization of Radio Access Network elements, including a unified interconnection standard for white-box hardware and open-source software elements from different vendors.

O-RAN architecture integrates modular base station software coupled with off-the-shelf hardware, allowing baseband and radio unit components from discrete suppliers to operate seamlessly together.

Further U.S. investment into O-RAN would incentivize domestic development and provide a more robust industrial base for next-generation wireless technology. Congress could establish a public-private partnership to accelerate domestically based O-RAN technologies and systems that enable open and interoperable networks.

This effort would benefit national security missions and economic competitiveness by expanding U.S. leadership in advanced technologies that allow access to, control of, and use of data across the electromagnetic spectrum.

However, one of the main barriers to the further development and adoption of O-RAN technology is the lack of at-scale, over-the-air, fully integrated test and development platforms in the United States. Currently, small and innovative equipment or software companies that create O-RAN products cannot conduct full-blown interoperability testing on their own. Due to financial or availability constraints, they often lack access to equipment made by their partners or competitors.

To address near-term market acceleration needs, there is a clear requirement for labs and testbeds to manage interoperability testing and product certification.

Without this capability, operators will not likely have the confidence to install O-RAN equipment in their networks. The Pentagon, civilian research agencies and industry organizations currently maintain a number of 5G testbeds, but a recent survey showed that no existing facility has the capabilities to support current industry and government requirements.

Large-scale, real-time demonstrations and prototypes are necessary next steps to encourage potential federal and carrier customers to have faith in this new technology. In particular, a Defense Department-funded test environment should be accessible to a broad user base with predictable availability to academic and industry partners to conduct their tailored experimentation.

It should also be supported by full-time research experts and operations staff tasked with testbed management and maintenance.

With these kinds of sustained investments in research, development and testing infrastructure, the nation can ensure that it will build the open FutureG ecosystem that will support national needs.”

https://www.nationaldefensemagazine.org/articles/2023/10/17/building-a-futureg-future-requires-investments

Nick Maynard is the co-founder and CEO of US Ignite. Arun Seraphin is the director of NDIA’s Emerging Technologies Institute.

Are You Seeking CMMC Certification? Here’s What You Need To Know

Standard

“WASHINGTON TECHNOLOGY” By Ola Sage and Dustin Siggins

“Contractors can’t sit back and wait for updated timelines or rule making guidance. Waiting can mean losing current contracts and not being able to bid on new requests for proposals which contain CMMC requirements.”

_________________________________________________________________________________________________________

“For two years, Department of Defense leaders have pushed hard to address private sector concerns about complexity, cost, and necessity surrounding the Cybersecurity Maturity Model Certification.

That has led to a projected finalized rulemaking for CMMC implementation in the first quarter of fiscal year 2025 – a deadline that DOD seems determined to hold onto even with a potential government shutdown still looming and the newly announced inspector general accreditation audit.

National security concerns aren’t going to wait for the government to fully re-open, the election results, or an IG audit. It takes 12 to 18 months to navigate the 110 security controls in the CMMC certification process. Once CMMC rulemaking is complete, contractors who aren’t certified won’t be able to contract with DOD as a prime or subcontractor.

Here’s what DOD contractors need to know to avoid the pitfalls of not being certified, and to turn the investment of certification into long-term growth opportunities. 

What CMMC certification means for your company

Companies that become certified will keep existing business and earn a well-deserved reputation within the industry. They’ll also be in the relatively select category of companies that can pursue a broader scope of contracts within DOD in the short term and within certain civilian agencies as the latter implement CMMC. 

Companies that fail to be certified will face two levels of problems. The first is that they won't be able to bid on new requests for proposals that contain CMMC requirements. The second is that they may have to start over on the certification process if they have critical flaws. Even worse, however, they risk losing current contracts – which will cost them revenue, key cybersecurity talent, and relationships with partners like banks and insurers.

Fiscal 2025 seems like a long way off, but it’s going to come fast. It’s in DOD contractors’ best interests to assume that CMMC is going to be reality sooner rather than later.

Communicate with your Supply Chain

Much of the CMMC attention has focused on prime contractors because of their role with DOD. But certification goes all the way down the supply chain, to the smallest subcontractor. All it takes is a wayward thumb drive or an employee with donuts instead of documents on his mind to blow up the entire system, taking down contracts worth hundreds of millions of dollars. DOD will require CMMC to be included in mandatory flow-down clauses.

Many prime contractors send letters to their supply chain partners informing them of upcoming CMMC requirements and their obligation to comply. But it's too easy for smaller contractors not plugged into the regulatory apparatus to dismiss it as just more paperwork from yet another compliance department.

Therefore, contractors higher in the supply chain have to show why compliance matters, and build trusted relationships so that everyone is ready to work together to be CMMC-certified.

One way to do that is to ensure that you and your contractors use only Cyber AB-authorized assessors and certified CMMC professionals. Be alert to companies that claim to be part of the ecosystem but are not officially registered.

Second: understand where your subcontractors stand in the approval process. Larger prime contractors may find it worth the investment to pay for, or subsidize the cost of, a mock assessment for their subcontractors so they know what level of risk they might be inheriting.

Also help your subs who don’t have the revenue, time, or other resources to become certified. Donating your learned best practices, volunteering key people’s expertise, or providing enclaves for storage may cost you a little now, but you’ll get yourself as close as anyone can reasonably expect to an impregnable company, safe from hackers as well as fines and other punishments which are common after breaches.  

CMMC certification ensures that there is a baseline of high-quality technology and processes for the protection of DOD Controlled Unclassified Information. What may separate your company from the competition is how well you choose to communicate, and invest, up and down your supply chain.

3 approaches to prepare for CMMC certification

The journey to CMMC compliance depends on factors like your expertise (do you have an internal cyber team?), time constraints (is your cyber team bogged down with other tasks?), and finances (what investments are needed in technology, processes, and policies?).

Step one is to download a copy of NIST SP 800-171A, which can be used to guide an internal team in implementing the 110 controls of NIST SP 800-171. That will require a certain level of technical knowledge from the internal team.

Second: hire a Registered Practitioner Organization to assist you in preparing your organization for certification. This may be an expensive option, but it offers the advantage of getting specialized experts on your team.

A hybrid approach can also come in handy. Look to your internal team for some matters and outside consultants for others. For example, you may decide to develop your policies and procedures in house but engage an RPO to help implement technical configurations in your network environment.

Whatever approach you choose, start now. It takes anywhere from 12 to 18 months to fully address all 110 controls of NIST SP 800-171. When you're ready, the next step is to sign up with an authorized CMMC Third Party Assessment Organization (C3PAO).
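Whichever path you take, the underlying work is tracking implementation status and evidence across the 110 controls. The sketch below is a hypothetical gap tracker, not any official tool: the control IDs follow the real NIST SP 800-171 3.x.y numbering, but the statuses, evidence fields, and the simple percent-complete metric are illustrative and are not the official DoD (SPRS) scoring methodology.

```python
# Minimal, hypothetical gap tracker for NIST SP 800-171 controls.
# Control IDs use the real 3.x.y numbering; everything else here
# (statuses, evidence, percent-complete metric) is illustrative.

from dataclasses import dataclass

@dataclass
class Control:
    control_id: str      # e.g. "3.1.1" (Access Control family)
    implemented: bool
    evidence: str = ""   # pointer to a policy/config artifact for the assessor

def readiness(controls: list[Control]) -> float:
    """Fraction of tracked controls marked implemented."""
    if not controls:
        return 0.0
    return sum(c.implemented for c in controls) / len(controls)

tracked = [
    Control("3.1.1", True, "access-control-policy.pdf"),
    Control("3.1.2", True, "rbac-config-export.json"),
    Control("3.5.3", False),   # MFA gap: remediation in progress
    Control("3.13.11", False), # FIPS-validated crypto not yet deployed
]

print(f"Readiness: {readiness(tracked):.0%}")  # prints: Readiness: 50%
```

A running tally like this, with an evidence pointer per control, is also exactly what a mock assessment will probe: the gaps it surfaces map directly onto the remaining remediation work before the official assessment.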

Most C3PAOs offer both a mock assessment, which simulates a real assessment and gives you insight into areas where you might still have gaps to fill, and the official assessment, where you receive an official result.

There’s also a near-term option which has been very beneficial to DOD contractors which seek CMMC certification. The Pentagon’s interim Joint Surveillance Voluntary Program is offering a voluntary NIST SP 800-171 assessment conducted by a C3PAO in conjunction with the Defense Industrial Base Cybersecurity Assessment Center (DIBCAC).

DOD’s stated intent is that contractors that pass this rigorous assessment will earn a Level 2 certification good for three years, but the voluntary program will only be available until rulemaking is finalized.

CMMC 2.0 is here – are you ready?

The Department of Defense has at least 300,000 contractors, with multiple potential cybersecurity breach points endangering every single one. National security concerns aren’t going to wait for the government to fully reopen, for election results, or for an IG audit.

That’s why DOD leadership is pushing full-bore for CMMC finalized rulemaking and why being CMMC-compliant is the best way for DOD contractors to turn serving the national defense into a good offense for your business’s future.”

https://washingtontechnology.com/opinion/2023/10/are-you-seeking-cmmc-certification-heres-what-you-need-know/390916/

ABOUT THE AUTHORS:

Ola Sage is founder and CEO of CyberRx, a cybersecurity risk and compliance firm and one of 48 certified third-party assessment organizations in the CMMC ecosystem.

Dustin Siggins is founder of Proven Media Solutions and a business writer with bylines at Insider, Forbes, USA TODAY, and elsewhere.

DARPA AI Cyber Challenge Aims To Secure Nation’s Most Critical Software

Standard

“DARPA”

“A call to top computer scientists, AI experts, software developers, and beyond to participate in the AI Cyber Challenge (AIxCC) – a two-year competition aimed at driving innovation at the nexus of AI and cybersecurity to create a new generation of cybersecurity tools.”

________________________________________________________________________________________________________

“In an increasingly interconnected world, software undergirds everything from financial systems to public utilities. As software enables modern life and drives productivity, it also creates an expanding attack surface for malicious actors.

This surface includes critical infrastructure, which DARPA experts say is especially vulnerable to cyberattacks given the lack of tools capable of securing systems at scale. Recent years have exposed the threats posed to society by malicious cyber actors exploiting this state of affairs, and have made plain the daunting attack surface cyber defenders are tasked to protect. Despite the scale of these vulnerabilities, advances in modern technology may provide a path toward solving them.

“AIxCC represents a first-of-its-kind collaboration between top AI companies, led by DARPA, to create AI-driven systems to help address one of society’s greatest challenges – cybersecurity,” said Perri Adams, DARPA’s AIxCC program manager. “In the past decade, we’ve seen the development of promising new AI-enabled capabilities. When used responsibly, we see significant potential for this technology to be applied to key cybersecurity issues. By automatically defending critical software at scale, we can have the greatest impact for cybersecurity across the country, and the world.”

AIxCC will allow two tracks for participation: the Funded Track and the Open Track. Funded Track competitors will be selected from proposals submitted to a Small Business Innovation Research solicitation. Up to seven small businesses will receive funding to participate. Open Track competitors will register with DARPA via the competition website and will proceed without DARPA funding.

Teams on both tracks will participate in a qualifying event during the semifinal phase, where the top-scoring teams (up to 20) will be invited to participate in the semifinal competition. Of these, the top-scoring teams (up to five) will receive monetary prizes and continue to the final phase and competition. The top three scoring competitors in the final competition will receive additional monetary prizes.

AIxCC brings together leading AI companies that will work with DARPA to make their cutting-edge technology and expertise available to challenge competitors. Anthropic, Google, Microsoft, and OpenAI will collaborate with DARPA to enable competitors to develop state-of-the-art cybersecurity systems.

The Open Source Security Foundation (OpenSSF), a project of the Linux Foundation, will serve as a challenge advisor to guide teams in creating AI systems capable of addressing vital cybersecurity issues, such as the security of our critical infrastructure and software supply chains. Most software, and thus most of the code in need of protection, is open-source software, often developed by community-driven volunteers. According to the Linux Foundation, open-source software is part of roughly 80% of modern software stacks, which comprise everything from phones and cars to electrical grids and manufacturing plants.[1]

Finally, AIxCC competitions will be held at DEF CON with additional events at Black Hat USA, both of which are internationally recognized cybersecurity conferences that draw tens of thousands of experts, practitioners, and spectators from around the world to Las Vegas every August. AIxCC will consist of two phases: the semifinal phase and the final phase. The semifinal competition and the final competition will be held at DEF CON in Las Vegas in 2024 and 2025.

“If successful, AIxCC will not only produce the next generation of cybersecurity tools, but will show how AI can be used to better society by defending its critical underpinnings,” said Adams.

For complete details about the competition, including the timeline to register, eligibility information, rules and more, visit AICyberChallenge.com.”

https://www.darpa.mil/news-events/2023-08-09

[1] www.linuxfoundation.org/research/addressing-cybersecurity-challenges-in-open-source-software