At the NITSL conference last week in Charlotte I had a great discussion with some really smart people from a US Nuclear Licensee. They were all former colleagues of mine from my past life as a consultant. One of them asked the group of us what we would do to improve or replace NEI 08-09 in order to make the cyber security program for nuclear power plants sustainable.
I love the challenge of the question, but feel that in order to answer it we must first ask: What is wrong with the current cyber security plan? I’m not talking here about the tactical issues, some of which I have addressed in previous posts, but about the strategic flaws that limit the program’s ability to remain appropriate, efficient, and effective at preventing adverse impact to the operation of a nuclear plant through a cyber attack.
I think the answer is very simple. There are three fatal flaws to the current RG 5.71 / NEI 08-09 based cyber security plans:
- RG 5.71 and NEI 08-09 lack any defined measurement of success
- RG 5.71 and NEI 08-09 lack any mechanism for adapting the program to changing threats
- RG 5.71 and NEI 08-09 lack defined guidance for the controls
Looking at these flaws in order:
No Measurement of Success
NEI 08-09 Rev 6 Appendix A section 3.1.6 contains the following guidance:
For CDAs, the information in Sections 3.1.3 – 3.1.5 is utilized to analyze and document one or more of the following:
1. Implementing the cyber security controls in Appendices D and E of NEI 08-09, Revision 6.
2. Implementing alternative controls/countermeasures that eliminate threat/attack vector(s) associated with one or more of the cyber security controls enumerated in (1) above by:
[Removed for Brevity]
3. Not implementing one or more of the cyber security controls by:
- Performing an analysis of the specific cyber security controls for the CDA that will not be implemented
- Documenting justification demonstrating **the attack vector does not exist** (i.e., not applicable) thereby demonstrating that those specific cyber security controls are not necessary
[Emphasis Added]
This seems as though it provides a measurement for success, until you realize that:
a. There is no definition of a Threat/Attack Vector in any official document
b. There is therefore no guidance on which Threat/Attack Vectors apply to which controls
c. There is therefore no guidance on how to perform the analysis to determine that alternate controls are appropriate or the specified controls are not necessary
This flaw has led to the current situation, where each licensee is initially (during inspection) subject to the whims of the particular cyber contractor provided by the NRC, and over the long term subject to whatever can be negotiated through the findings reconciliation process with the NRC. But make no mistake: none of this is done according to any defined standard. Such a standard does not exist, and the results of that reconciliation process have, at times, been sloppy and technically unsupported. See my post on Kiosk management and BadUSB for an egregious example of that.
But you may say: What about the big 5:
- Physical Access
- Wired Network
- Wireless Networking
- Portable Media
- Supply Chain
You may continue: Aren’t these the Threat Vectors?
No, they are not. They are defined officially as the 5 Attack Pathways that the NRC is concerned about.
You throw the Hail Mary: Can’t we use them as Threat Vectors anyway?
No, you may not. Not if Threat Vectors are the criteria to demonstrate “that those cyber security controls are not necessary.” The 5 Attack Pathways are wholly insufficient to demonstrate the necessity of most of the controls. A few do work: many E5 controls are necessary only if the CDA has wired or wireless connectivity (though not all), and D1.17 is necessary only if a CDA has wireless access. Beyond those, there are very few controls where the Attack Pathway alone is determinant.
Consider a very simple example: a badge card reader with no PIN or Hand Geometry capability. A simple badge card reader may have 3 of the 5 Attack Pathways associated with it: Physical Access, Wired Network, and Supply Chain; yet none of the D2 controls for audit logs are “necessary” to protect the simple badge card reader from cyber attack. I am, of course, assuming that those Attack Pathways are relevant to the D2 controls for audit logs, as there is no defined guidance on that.
The truth is, the Attack Pathways alone provide very little utility in any security analysis. These 5 Attack Pathways are insufficient to address all of the D and E controls anyway; at a minimum you’d have to add:
- Human Performance, the reason we provide training and document procedures; and
- Entropy (everything in the universe eventually moves from order to disorder, i.e., things break), the reason we make backups and document system configurations.
The first major change I would make to the Cyber Security Plan would be to define, explicitly, what a Threat Vector is; identify the Threat Vectors applicable to each control; and provide detailed guidance on how to perform the analysis required to determine the presence of those Threat Vectors for specific resources, and the capability of alternate controls to eliminate them.
I would apply the cmplid:// methodology, not simply for self-promotion, but because I see it as the best combination of simplicity and technical basis:
Threat Vector Definition:
The combination of an Attack Pathway, Security Objective, and an Exploitable Condition that necessitates application of a security control for a given resource.
The analysis process is as follows:
1. Document the Attack Pathways for each control
2. Document the Security Objectives of each control
3. Document the Exploitable Conditions the controls seek to mitigate
This may seem complicated, but it really is not; consider the following examples:
**Control:** D 1.1 A formal, documented, critical digital asset (CDA) access control policy is developed, disseminated, and reviewed in accordance with 10 CFR 73.55(m), and updated. This access control program addresses purpose, scope, roles, responsibilities, management commitment, and internal coordination; and formal, documented procedures that facilitate the implementation of the access control policy and associated access security controls.
**Attack Pathway:** Human Performance
**Security Objective:** Ensures resources within scope of the security program are developed, implemented, and maintained according to appropriate organizationally defined standards.
**Exploitable Condition:** Resources within scope of the security program will be inconsistently managed according to personnel-specific proclivities or ‘tribal knowledge.’

**Control:** D 1.5 a2 Restricts security functions to the least amount of users necessary to ensure the security of CDAs.
**Attack Pathway:** Physical Access, Wired Network, Wireless Network
**Security Objective:** Ensures logical access accounts have only the permissions required to fulfill their required business objectives.
**Exploitable Condition:** Logical user account permissions will exceed required business objectives.

**Control:** E 3.3 b Malicious code protection mechanisms (including signature definitions) are documented and updated whenever new releases are available in accordance with programs, procedures, and processes.
**Attack Pathway:** Physical Access, Wired Network, Wireless Network, Portable Media
**Security Objective:** Ensures malicious code cannot propagate or execute within the technology infrastructure.
**Exploitable Condition:** Malicious code will be used to compromise organizational resources.
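The three examples above lend themselves to a simple data structure. The sketch below is illustrative only; the class and field names are my own hypothetical choices, not part of NEI 08-09 or the cmplid:// methodology:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ControlCharacteristics:
    """The three documented characteristics that, combined, define a control's Threat Vector."""
    control_id: str
    attack_pathways: frozenset   # pathways through which the concern can be realized
    security_objective: str      # what the control ensures when implemented
    exploitable_condition: str   # what happens if the control is absent

CONTROLS = [
    ControlCharacteristics(
        control_id="D 1.1",
        attack_pathways=frozenset({"Human Performance"}),
        security_objective="In-scope resources are developed, implemented, and "
                           "maintained to organizationally defined standards.",
        exploitable_condition="Resources will be inconsistently managed according "
                              "to personnel-specific proclivities or 'tribal knowledge'.",
    ),
    ControlCharacteristics(
        control_id="D 1.5 a2",
        attack_pathways=frozenset({"Physical Access", "Wired Network", "Wireless Network"}),
        security_objective="Logical access accounts have only the permissions "
                           "required to fulfill their business objectives.",
        exploitable_condition="Logical user account permissions will exceed "
                              "required business objectives.",
    ),
    ControlCharacteristics(
        control_id="E 3.3 b",
        attack_pathways=frozenset({"Physical Access", "Wired Network",
                                   "Wireless Network", "Portable Media"}),
        security_objective="Malicious code cannot propagate or execute within "
                           "the technology infrastructure.",
        exploitable_condition="Malicious code will be used to compromise "
                              "organizational resources.",
    ),
]
```

Once every D and E control is captured this way, the question “is this control necessary for this CDA?” becomes a lookup-and-test rather than an ad hoc negotiation.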
If you’d like to learn more about this, contact us here; we have done this analysis for all of the D and E controls.
Documenting these characteristics of the controls is pretty simple, and applying the analysis is simple as well: each characteristic can be associated rather easily with one or more attributes of a resource through a logic-tree type analysis. The logic trees can then be used to determine the presence (and absence) of Threat Vectors. Implementing this would provide a defined mechanism for measuring the success of the Cyber Security Plans.
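As a rough illustration of that logic-tree analysis (all function and attribute names here are hypothetical, not the cmplid:// implementation itself): a Threat Vector is present only when the resource exposes at least one of the control's Attack Pathways *and* the control's Exploitable Condition can actually arise on that resource.

```python
# Minimal logic-tree sketch. A Threat Vector exists for a control only when the
# resource exposes one of the control's Attack Pathways AND the control's
# Exploitable Condition can actually arise on that resource.
# All names are hypothetical, for illustration only.

def threat_vector_present(resource_pathways, resource_attributes,
                          control_pathways, condition_predicate):
    """Return True if the control's Threat Vector is present for the resource.

    resource_pathways   -- set of Attack Pathways the resource exposes
    resource_attributes -- dict of attributes discovered during assessment
    control_pathways    -- set of Attack Pathways associated with the control
    condition_predicate -- callable testing whether the Exploitable Condition applies
    """
    pathway_open = bool(resource_pathways & control_pathways)
    condition_applies = bool(condition_predicate(resource_attributes))
    return pathway_open and condition_applies

# Example: a D2 audit-log control on a simple badge card reader. The reader
# exposes Physical Access, Wired Network, and Supply Chain, but it has no
# user accounts whose actions could be audited, so the Exploitable Condition
# never arises and the control is not necessary.
badge_reader = {"has_user_accounts": False}
necessary = threat_vector_present(
    resource_pathways={"Physical Access", "Wired Network", "Supply Chain"},
    resource_attributes=badge_reader,
    control_pathways={"Physical Access", "Wired Network", "Wireless Network"},
    condition_predicate=lambda attrs: attrs["has_user_accounts"],
)
print(necessary)  # False: pathway overlap alone does not make the control necessary
```

This is exactly the badge card reader argument from earlier: the Attack Pathway branch of the tree is satisfied, but the Exploitable Condition branch is not, so the control can be justified as not necessary on a defined, repeatable basis.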
No Mechanism to Adapt the Program to Changing Threats
The control library in RG 5.71/NEI 08-09 comes from a draft of NIST SP 800-53 that was finally published in August of 2009. Since then, SP 800-53 has been revised once (Rev 4, published in April 2013) and is currently in draft again. The controls in RG 5.71/NEI 08-09 have not been updated, and there is no mechanism to update them.
The standard explanation for the need to define a separate control library for the Cyber Security Plan is that of “tailoring” the controls to make them more relevant to application at nuclear power plants. However, for the vast majority of the controls, the “tailoring” amounted to no more than a find-and-replace operation, substituting “Critical Digital Asset” for “information system” and replacing the [organization-defined operations] with obvious specifics, e.g.:
AC-11 SESSION LOCK
The information system:
a. Prevents further access to the system by initiating a session lock after [Assignment: organization-defined time period] of inactivity or upon receiving a request from a user;

became

D 1.10 SESSION LOCK
CDAs are configured to:
- Initiate a session lock within 30 minutes of inactivity.
- Provide the capability for users to initiate session lock mechanisms.
There really is not much going on here; certainly not enough, in my estimation, to justify relying on an outdated control library. Additionally, some controls that were “tailored” more than this were essentially changed into something other than their original intent, or into something impossible to achieve, e.g.:
AU-10 NON-REPUDIATION
The information system protects against an individual falsely denying having performed a particular action.

became

D 2.10 NON-REPUDIATION
This Technical cyber security control ensures the protection of **CDAs and audit records** against an individual falsely denying they performed a particular action.
[Emphasis Added]
See my previous post for a detailed understanding of what’s wrong with D 2.10.
If I were tasked with developing the next cyber security program for nuclear power, I’d delete Appendices D & E and simply point to the latest version of SP 800-53. I’d allow the industry a year (maybe two) to update its threat vector analysis to incorporate the additional controls included in the revised SP 800-53 and to implement those deemed necessary.
Through this change, the Cyber Security Programs in place at nuclear power plants could be perpetually current (or at least only a year or two behind) with the changing threat landscape and the response thereto prescribed by NIST.
No Defined Guidance for the Controls
RG 5.71/NEI 08-09 simply contain a subset of the “tailored” controls from Appendix F of NIST SP 800-53. RG 5.71/NEI 08-09 unfortunately leave out almost all of the good stuff. The “Supplemental Guidance”, “References”, and “Related Controls” sections are completely absent. Had they been included, it would have been obvious that D 1.19 was the wrong control for the primary concern of Milestone 4, malicious code propagation within the plant, and countless other controls could have been much better understood.
My previous recommended change, pointing to SP 800-53 as the control library used, would solve this problem as well.
Using SP 800-53 would also allow the Nuclear Cyber Security Program to mature much more easily, as licensees could eventually compare it directly with the NIST Framework for Improving Critical Infrastructure Cybersecurity and the Risk Management Framework used by the Federal Government.
I think this provides a pretty good approach to fixing the fatal flaws present in RG 5.71/NEI 08-09, and I’d love to hear your thoughts.
To the extent possible under law, Richard Dahl has waived all copyright and related or neighboring rights to Fixing RG 5.71/NEI 08-09. This work is published from: United States.