Attestation Requirement
Attestation Requirement Details
Related SSDF Practices and Tasks
SSDF Notional Implementation Examples
Related EO 14028 Subsection
Requirement 1

The software was developed and built in secure environments.

Those environments were secured by the following actions, at a minimum:
Requirement 1a

Separating and protecting each environment involved in developing and building software;
PO.5.1: Separate and protect each environment
involved in software development.
Example 1: Use multi-factor, risk-based authentication and conditional access for each environment (see the illustrative sketch below).

Example 2: Use network segmentation and access controls to separate the environments from each other and from production environments, and to separate components from each other within each non-production environment, in order to reduce attack surfaces and attackers’ lateral movement and privilege/access escalation.

Example 3: Enforce authentication and tightly restrict connections entering and exiting each software development environment, including minimizing access to the internet to only what is necessary.

Example 4: Minimize direct human access to toolchain systems, such as build services. Continuously monitor and audit all access attempts and all use of privileged access.

Example 5: Minimize the use of production-environment software and services from non-production environments.

Example 6: Regularly log, monitor, and audit trust relationships for authorization and access between the environments and between the components within each environment.

Example 7: Continuously log and monitor operations and alerts across all components of the development environment to detect, respond, and recover from attempted and actual cyber incidents.

Example 8: Configure security controls and other tools involved in separating and protecting the environments to generate artifacts for their activities.

Example 9: Continuously monitor all software deployed in each environment for new vulnerabilities, and respond to vulnerabilities appropriately following a risk-based approach.

Example 10: Configure and implement measures to secure the environments’ hosting infrastructures following a zero trust architecture.
4e(i)(A)
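
Example 1 above calls for multi-factor, risk-based authentication and conditional access. Purely as a minimal sketch of one possible second factor, not a mechanism prescribed by the form or the SSDF, the following Python code shows a server-side check of a time-based one-time password (TOTP, RFC 6238); the function names and the 30-second step are illustrative assumptions, and a production deployment would typically rely on an established identity provider rather than hand-rolled code.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32, at_time=None, digits=6, step=30):
        # Compute an RFC 6238 time-based one-time password from a base32 secret.
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int((time.time() if at_time is None else at_time) // step)
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    def verify_second_factor(secret_b32, submitted_code, drift_steps=1):
        # Accept the code for the current 30-second step or an adjacent one,
        # comparing in constant time.
        now = time.time()
        return any(
            hmac.compare_digest(totp(secret_b32, now + i * 30), submitted_code)
            for i in range(-drift_steps, drift_steps + 1)
        )
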
Requirement 1b

Regularly logging, monitoring, and auditing trust relationships used for authorization and
access:

i) to any software development and build
environments; and

ii) among components within each
environment;
PO.5.1: Separate and protect each environment
involved in software development.
[See above]
4e(i)(B)
Requirement 1c

Enforcing multi-factor authentication and
conditional access across the environments
relevant to developing and building software in
a manner that minimizes security risk;
PO.5.1: Separate and protect each environment
involved in software development.
[See above]
4e(i)(C)
PO.5.2: Secure and harden development endpoints (i.e., endpoints for software designers, developers, testers, builders, etc.) to perform development-related tasks using a risk-based approach.
Example 1: Configure each development endpoint based on approved hardening
guides, checklists, etc.; for example, enable FIPS-compliant encryption of all
sensitive data at rest and in transit.

Example 2: Configure each development endpoint and the development
resources to provide the least functionality needed by users and services and to
enforce the principle of least privilege.

Example 3: Continuously monitor the security posture of all development
endpoints, including monitoring and auditing all use of privileged access.

Example 4: Configure security controls and other tools involved in securing and
hardening development endpoints to generate artifacts for their activities.

Example 5: Require multi-factor authentication for all access to development
endpoints and development resources.

Example 6: Provide dedicated development endpoints on non-production
networks for performing all development-related tasks. Provide separate
endpoints on production networks for all other tasks.

Example 7: Configure each development endpoint following a zero trust architecture.
4e(i)(C)
Requirement 1d

Taking consistent and reasonable steps to
document, as well as minimize use or inclusion
of software products that create undue risk,
within the environments used to develop and
build software;
PO.5.1: Separate and protect each environment
involved in software development.
[See above]
4e(i)(D)
Requirement 1e

Encrypting sensitive data, such as credentials,
to the extent practicable and based on risk;
PO.5.2: Secure and harden development endpoints
(i.e., endpoints for software designers, developers,
testers, builders, etc.) to perform development-related
tasks using a risk-based approach.
[See above]
4e(i)(E)
Requirement 1f

Implementing defensive cyber security
practices, including continuous monitoring of
operations and alerts and, as necessary,
responding to suspected and confirmed cyber
incidents;
PO.3.2: Follow recommended security practices to
deploy, operate, and maintain tools and toolchains.
Example 1: Evaluate, select, and acquire tools, and assess the security of each
tool.

Example 2: Integrate tools with other tools and existing software development
processes and workflows.

Example 3: Use code-based configuration for toolchains (e.g., pipelines-as-code,
toolchains-as-code).

Example 4: Implement the technologies and processes needed for reproducible builds (see the illustrative sketch below).

Example 5: Update, upgrade, or replace tools as needed to address tool
vulnerabilities or add new tool capabilities.

Example 6: Continuously monitor tools and tool logs for potential operational and
security issues, including policy violations and anomalous behavior.

Example 7: Regularly verify the integrity and check the provenance of each tool
to identify potential problems.

Example 8: See PW.6 regarding compiler, interpreter, and build tools.

Example 9: See PO.5 regarding implementing and maintaining secure
environments.
4e(i)(F)
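
Example 4 above mentions reproducible builds. As an illustration of the idea only, and not a technique the form mandates, the sketch below shows one way a build step could normalize archive metadata so that byte-identical inputs always yield a byte-identical (and therefore hash-identical) artifact; the build/output directory name is a hypothetical placeholder.

    import hashlib
    import io
    import tarfile
    from pathlib import Path

    def deterministic_archive(src_dir):
        # Pack src_dir into a tar archive with normalized metadata so that
        # identical inputs always produce an identical archive.
        buf = io.BytesIO()
        with tarfile.open(fileobj=buf, mode="w") as tar:
            for path in sorted(Path(src_dir).rglob("*")):
                info = tar.gettarinfo(str(path), arcname=str(path.relative_to(src_dir)))
                info.mtime = 0            # fixed timestamp
                info.uid = info.gid = 0   # fixed ownership
                info.uname = info.gname = ""
                if path.is_file():
                    with open(path, "rb") as f:
                        tar.addfile(info, f)
                else:
                    tar.addfile(info)
        return buf.getvalue()

    if __name__ == "__main__":
        digest = hashlib.sha256(deterministic_archive("build/output")).hexdigest()
        print(f"release digest: {digest}")  # identical digests across rebuilds => reproducible output
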
PO.3.3: Configure tools to generate artifacts of their support of secure software development practices as defined by the organization.
Example 1: Use existing tooling (e.g., workflow tracking, issue tracking, value
stream mapping) to create an audit trail of the secure development-related
actions that are performed for continuous improvement purposes.

Example 2: Determine how often the collected information should be audited,
and implement the necessary processes.

Example 3: Establish and enforce security and retention policies for artifact data.

Example 4: Assign responsibility for creating any needed artifacts that tools cannot generate.
4e(i)(F)
PO.5.1: Separate and protect each environment
involved in software development.
[See above]
4e(i)(F)
PO.5.2: Secure and harden development endpoints
(i.e., endpoints for software designers, developers,
testers, builders, etc.) to perform development-related
tasks using a risk-based approach.
[See above]
4e(i)(F)
4e(iii)
Requirement 2

The software producer has made a good-faith effort to maintain trusted source code supply chains by:
Requirement 2a

Employing automated tools or comparable
processes; and

Requirement 2b

Establishing a process that includes reasonable
steps to address the security of third-party
components and manage related vulnerabilities;
PO.1.1: Identify and document all security
requirements for the organization’s software
development infrastructures and processes, and
maintain the requirements over time.
Example 1: Define policies for securing software development infrastructures and
their components, including development endpoints, throughout the SDLC and
maintaining that security.

Example 2: Define policies for securing software development processes
throughout the SDLC and maintaining that security, including for open-source and
other third-party software components utilized by software being developed.

Example 3: Review and update security requirements at least annually, or sooner
if there are new requirements from internal or external sources, or a major
security incident targeting software development infrastructure has occurred.

Example 4: Educate affected individuals on impending changes to requirements.
4e(iii)
PO.3.1: Specify which tools or tool types must or should be included in each toolchain to mitigate identified risks, as well as how the toolchain components are to be integrated with each other.
Example 1: Define categories of toolchains, and specify the mandatory tools or tool types to be used for each category.

Example 2: Identify security tools to integrate into the developer toolchain.

Example 3: Define what information is to be passed between tools and what data formats are to be used.

Example 4: Evaluate tools’ signing capabilities to create immutable records/logs for auditability within the toolchain.

Example 5: Use automated technology for toolchain management and orchestration.
4e(iii)
PO.3.2: Follow recommended security practices to
deploy, operate, and maintain tools and toolchains.
[See above]
4e(iii)
PO.5.1: Separate and protect each environment
involved in software development.
[See above]
4e(iii)
PO.5.2: Secure and harden development endpoints
(i.e., endpoints for software designers, developers,
testers, builders, etc.) to perform development-related
tasks using a risk-based approach.
[See above]
4e(iii)
PS.1.1: Store all forms of code – including source code, executable code, and configuration-as-code – based on the principle of least privilege so that only authorized personnel, tools, services, etc. have access.
Example 1: Store all source code and configuration-as-code in a code repository, and restrict access to it based on the nature of the code. For example, open-source code intended for public access may need its integrity and availability protected; other code may also need its confidentiality protected.

Example 2: Use version control features of the repository to track all changes
made to the code with accountability to the individual account.

Example 3: Use commit signing for code repositories.

Example 4: Have the code owner review and approve all changes made to the
code by others.

Example 5: Use code signing to help protect the integrity of executables.

Example 6: Use cryptography (e.g., cryptographic hashes) to help protect file integrity (see the illustrative sketch below).
4e(iii)
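
Example 6 above suggests cryptographic hashes for file integrity. A minimal sketch of that idea follows, assuming a simple JSON manifest named SHA256SUMS.json (an illustrative choice, not a mandated format): one function records a SHA-256 digest for every file in a release directory, and another reports any file whose digest has since changed.

    import hashlib
    import json
    from pathlib import Path

    def hash_tree(root):
        # Return {relative_path: sha256_hex} for every file under root.
        digests = {}
        for path in sorted(Path(root).rglob("*")):
            if path.is_file():
                digests[str(path.relative_to(root))] = hashlib.sha256(path.read_bytes()).hexdigest()
        return digests

    def write_manifest(root, manifest="SHA256SUMS.json"):
        # Record the current digests alongside the release.
        Path(manifest).write_text(json.dumps(hash_tree(root), indent=2))

    def verify_manifest(root, manifest="SHA256SUMS.json"):
        # Return the paths whose current digest no longer matches the manifest.
        recorded = json.loads(Path(manifest).read_text())
        current = hash_tree(root)
        return [p for p, digest in recorded.items() if current.get(p) != digest]
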
PS.2.1: Make software integrity verification information
available to software acquirers.
Example 1: Post cryptographic hashes for release files on a well-secured website.

Example 2: Use an established certificate authority for code signing so that consumers’ operating systems or other tools and services can confirm the validity of signatures before use (see the illustrative sketch below).

Example 3: Periodically review the code signing processes, including certificate renewal, rotation, revocation, and protection.
4e(iii)
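
Example 2 above concerns signatures that acquirers can verify. The sketch below illustrates only the underlying sign/verify round trip with an Ed25519 key pair using the widely used Python cryptography package; it is a conceptual illustration, and real code signing would use certificates issued by an established certificate authority and platform signing tools rather than a raw key pair. The release.tar file name is a placeholder.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Producer side: sign the release artifact with the private key.
    private_key = Ed25519PrivateKey.generate()
    with open("release.tar", "rb") as f:   # hypothetical release file
        artifact = f.read()
    signature = private_key.sign(artifact)

    # Acquirer side: verify the artifact against the published public key.
    public_key = private_key.public_key()
    try:
        public_key.verify(signature, artifact)
        print("signature valid: artifact integrity confirmed")
    except InvalidSignature:
        print("signature invalid: artifact may have been altered")
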
PS.3.1: Securely archive the necessary files and supporting data (e.g., integrity verification information, provenance data) to be retained for each software release.
Example 1: Store the release files, associated images, etc. in repositories following the organization’s established policy. Allow read-only access to them by necessary personnel and no access by anyone else.

Example 2: Store and protect release integrity verification information and provenance data, such as by keeping it in a separate location from the release files or by signing the data.
4e(iii)
PW.4.1: Acquire and maintain well-secured software components (e.g., software libraries, modules, middleware, frameworks) from commercial, open-source, and other third-party developers for use by the organization’s software.
Example 1: Review and evaluate third-party software components in the context
of their expected use. If a component is to be used in a substantially different way
in the future, perform the review and evaluation again with that new context in
mind.

Example 2: Determine secure configurations for software components, and make
these available (e.g., as configuration-as-code) so developers can readily use the
configurations.

Example 3: Obtain provenance information (e.g., SBOM, source composition
analysis, binary software composition analysis) for each software component, and
analyze that information to better assess the risk that the component may
introduce.

Example 4: Establish one or more software repositories to host sanctioned and
vetted open-source components.

Example 5: Maintain a list of organization-approved commercial software
components and component versions along with their provenance data.

Example 6: Designate which components must be included in software to be
developed.

Example 7: Implement processes to update deployed software components to
newer versions, and retain older versions of software components until all
transitions from those versions have been completed successfully.

Example 8: If the integrity or provenance of acquired binaries cannot be
confirmed, build binaries from source code after verifying the source code’s
integrity and provenance.
4e(iii)
PW.4.4: Verify that acquired commercial, open-source, and all other third-party software components comply with the requirements, as defined by the organization, throughout their life cycles.
Example 1: Regularly check whether there are publicly known vulnerabilities in the software modules and services that vendors have not yet fixed (see the illustrative sketch below).

Example 2: Build into the toolchain automatic detection of known vulnerabilities in
software components.

Example 3: Use existing results from commercial services for vetting the software
modules and services.

Example 4: Ensure that each software component is still actively maintained and
has not reached end of life; this should include new vulnerabilities found in the
software being remediated.

Example 5: Determine a plan of action for each software component that is no
longer being maintained or will not be available in the near future.

Example 6: Confirm the integrity of software components through digital
signatures or other mechanisms.

Example 7: Review, analyze, and/or test code. See PW.7 and PW.8.
4e(iii)
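
Examples 1 and 2 above call for checking third-party components against publicly known vulnerabilities, ideally automatically within the toolchain. One possible approach (an illustrative assumption; the form does not prescribe a particular data source) is to query a public advisory database such as OSV.dev during the build and fail the pipeline when a dependency version has open advisories. The package name and version below are placeholders.

    import json
    import urllib.request

    OSV_QUERY_URL = "https://api.osv.dev/v1/query"  # public OSV.dev vulnerability API

    def known_vulnerabilities(name, version, ecosystem="PyPI"):
        # Return OSV advisory IDs affecting the given package version.
        payload = json.dumps({
            "version": version,
            "package": {"name": name, "ecosystem": ecosystem},
        }).encode()
        req = urllib.request.Request(OSV_QUERY_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req, timeout=30) as resp:
            result = json.load(resp)
        return [v["id"] for v in result.get("vulns", [])]

    if __name__ == "__main__":
        # Fail the build if a direct dependency has known, unresolved advisories.
        findings = known_vulnerabilities("requests", "2.19.0")  # placeholder dependency
        if findings:
            raise SystemExit(f"known vulnerabilities found: {findings}")
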
PW.7.1: Determine whether code review (a person looks directly at the code to find issues) and/or code analysis (tools are used to find issues in code, either in a fully automated way or in conjunction with a person) should be used, as defined by the organization.
Example 1: Follow the organization’s policies or guidelines for when code review should be performed and how it should be conducted. This may include third-party code and reusable code modules written in-house.

Example 2: Follow the organization’s policies or guidelines for when code analysis should be performed and how it should be conducted.

Example 3: Choose code review and/or analysis methods based on the stage of the software.
4e(iii)
PW.8.1: Determine whether executable code testing should be performed to find vulnerabilities not identified by previous reviews, analysis, or testing and, if so, which types of testing should be used.
Example 1: Follow the organization’s policies or guidelines for when code testing
should be performed and how it should be conducted (e.g., within a sandboxed
environment). This may include third-party executable code and reusable
executable code modules written in-house.

Example 2: Choose testing methods based on the stage of the software.
4e(iii)
RV.1.1: Gather information from software acquirers, users, and public sources on potential vulnerabilities in the software and third-party components that the software uses, and investigate all credible reports.
Example 1: Monitor vulnerability databases, security mailing lists, and other sources of vulnerability reports through manual or automated means.

Example 2: Use threat intelligence sources to better understand how vulnerabilities in general are being exploited.

Example 3: Automatically review provenance and software composition data for all software components to identify any new vulnerabilities they have.
4e(iii)
Requirement 3

The software producer maintains provenance data for internal and third-party code incorporated into the software;
PO.1.3: Communicate requirements to all third parties who will provide commercial software components to the organization for reuse by the organization’s own software.
Example 1: Define a core set of security requirements for software components, and include it in acquisition documents, software contracts, and other agreements with third parties.

Example 2: Define security-related criteria for selecting software; the criteria can include the third party’s vulnerability disclosure program and product security incident response capabilities or the third party’s adherence to organization-defined practices.

Example 3: Require third parties to attest that their software complies with the organization’s security requirements.

Example 4: Require third parties to provide provenance data and integrity verification mechanisms for all components of their software.

Example 5: Establish and follow processes to address risk when there are security requirements that third-party software components to be acquired do not meet; this should include periodic reviews of all approved exceptions to requirements.
4e(vi)
PO.3.2: Follow recommended security practices to deploy, operate, and maintain tools and toolchains.
[See above]
4e(vi)
PO.5.1: Separate and protect each environment involved in software development.
[See above]
4e(vi)
PO.5.2: Secure and harden development endpoints (i.e., endpoints for software designers, developers, testers, builders, etc.) to perform development-related tasks using a risk-based approach.
[See above]
4e(vi)
PS.3.1: Securely archive the necessary files and supporting data (e.g., integrity verification information, provenance data) to be retained for each software release.
[See above]
4e(vi)
PS.3.2: Collect, safeguard, maintain, and share provenance data for all components of each software release (e.g., in a software bill of materials [SBOM]).
Example 1: Make the provenance data available to software acquirers in accordance with the organization’s policies, preferably using standards-based formats.

Example 2: Make the provenance data available to the organization’s operations and response teams to aid them in mitigating software vulnerabilities.

Example 3: Protect the integrity of provenance data, and provide a way for recipients to verify provenance data integrity.

Example 4: Update the provenance data every time any of the software’s components are updated.
4e(vi)
PW.4.1: Acquire and maintain well-secured software components (e.g., software libraries, modules, middleware, frameworks) from commercial, open-source, and other third-party developers for use by the organization’s software.
[See above]
4e(vi)
PW.4.4: Verify that acquired commercial, open-source, and all other third-party software components comply with the requirements, as defined by the organization, throughout their life cycles.
[See above]
4e(vi)
RV.1.1: Gather information from software acquirers, users, and public sources on potential vulnerabilities in the software and third-party components that the software uses, and investigate all credible reports.
[See above]
4e(vi)
RV.1.2: Review, analyze, and/or test the software’s code to identify or confirm the presence of previously undetected vulnerabilities.
Example 1: Configure the toolchain to perform automated code analysis and testing on a regular or continuous basis for all supported releases.

Example 2: See PW.7 and PW.8.
4e(vi)
4e(iv)
Requirement 4

The software producer employed automated tools or comparable processes that check for security vulnerabilities. In addition:
a) The software producer ensured these processes operate on an ongoing basis and, at a minimum, prior to product, version, or update releases;

b) The software producer has a policy or process to address discovered security vulnerabilities prior to product release; and

c) The software producer operates a vulnerability disclosure program and accepts, reviews, and addresses disclosed software vulnerabilities in a timely fashion.
PO.4.1: Define criteria for software security checks
and track throughout the SDLC.
Example 1: Ensure that the criteria adequately indicate how effectively security risk is being managed.

Example 2: Define key performance indicators (KPIs), key risk indicators (KRIs), vulnerability severity scores, and other measures for software security.

Example 3: Add software security criteria to existing checks (e.g., the Definition of Done in agile SDLC methodologies).

Example 4: Review the artifacts generated as part of the software development workflow system to determine if they meet the criteria.

Example 5: Record security check approvals, rejections, and exception requests as part of the workflow and tracking system.

Example 6: Analyze collected data in the context of the security successes and failures of each development project, and use the results to improve the SDLC.
4e(iv)
PO.4.2: Implement processes, mechanisms, etc. to
gather and safeguard the necessary information in
support of the criteria.
Example 1: Use the toolchain to automatically gather information that informs security decision-making.

Example 2: Deploy additional tools if needed to support the generation and collection of information supporting the criteria.

Example 3: Automate decision-making processes utilizing the criteria, and periodically review these processes.

Example 4: Only allow authorized personnel to access the gathered information, and prevent any alteration or deletion of the information.
4e(iv)
PS.1.1: Store all forms of code – including source code, executable code, and configuration-as-code – based on the principle of least privilege so that only authorized personnel, tools, services, etc. have access.
[See above]
4e(iv)
PW.2.1: Have 1) a qualified person (or people) who were not involved with the design and/or 2) automated processes instantiated in the toolchain review the software design to confirm and enforce that it meets all of the security requirements and satisfactorily addresses the identified risk information.
Example 1: Review the software design to confirm that it addresses applicable security requirements.

Example 2: Review the risk models created during software design to determine if they appear to adequately identify the risks.

Example 3: Review the software design to confirm that it satisfactorily addresses the risks identified by the risk models.

Example 4: Have the software’s designer correct failures to meet the requirements.

Example 5: Change the design and/or the risk response strategy if the security requirements cannot be met.

Example 6: Record the findings of design reviews to serve as artifacts (e.g., in the software specification, in the issue tracking system, in the threat model).
4e(iv)
PW.4.4: Verify that acquired commercial, open-source, and all other third-party software components comply with the requirements, as defined by the organization, throughout their life cycles.
[See above]
4e(iv)
PW.5.1: Follow all secure coding practices that are appropriate to the development languages and environment to meet the organization’s requirements.
Example 1: Validate all inputs, and validate and properly encode all outputs (see the illustrative sketch below).

Example 2: Avoid using unsafe functions and calls.

Example 3: Detect errors, and handle them gracefully.

Example 4: Provide logging and tracing capabilities.

Example 5: Use development environments with automated features that encourage or require the use of secure coding practices with just-in-time training-in-place.

Example 6: Follow procedures for manually ensuring compliance with secure coding practices when automated methods are insufficient or unavailable.

Example 7: Use tools (e.g., linters, formatters) to standardize the style and formatting of the source code.

Example 8: Check for other vulnerabilities that are common to the development languages and environment.

Example 9: Have the developer review their own human-readable code to complement (not replace) code review performed by other people or tools. See PW.7.
4e(iv)
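
Example 1 above (input validation and output encoding) and Example 2 (avoiding unsafe constructions) can be illustrated with the short sketch below. The username format, the HTML context, and the users table are hypothetical; the point is allow-list validation, context-appropriate output encoding, and parameterized queries instead of string concatenation.

    import html
    import re
    import sqlite3

    USERNAME_RE = re.compile(r"^[A-Za-z0-9_.-]{3,32}$")  # allow-list, not a block-list

    def validate_username(raw):
        # Reject any input that does not match the expected format.
        if not USERNAME_RE.fullmatch(raw):
            raise ValueError("invalid username")
        return raw

    def render_greeting(username):
        # Encode output for the HTML context it will be embedded in.
        return f"<p>Hello, {html.escape(username)}!</p>"

    def find_user(conn, username):
        # Use a parameterized query instead of building SQL by concatenation.
        return conn.execute(
            "SELECT id, username FROM users WHERE username = ?", (username,)
        ).fetchone()
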
PW.6.1: Use compiler, interpreter, and build tools that
offer features to improve executable security.
Example 1: Use up-to-date versions of compiler, interpreter, and build tools.

Example 2: Follow change management processes when deploying or updating compiler, interpreter, and build tools, and audit all unexpected changes to tools.

Example 3: Regularly validate the authenticity and integrity of compiler, interpreter, and build tools. See PO.3.
4e(iv)
PW.6.2: Determine which compiler, interpreter, and build tool features should be used and how each should be configured, then implement and use the approved configurations.
Example 1: Enable compiler features that produce warnings for poorly secured code during the compilation process.

Example 2: Implement the “clean build” concept, where all compiler warnings are treated as errors and eliminated except those determined to be false positives or irrelevant (see the illustrative sketch below).

Example 3: Perform all builds in a dedicated, highly controlled build environment.

Example 4: Enable compiler features that randomize or obfuscate execution characteristics, such as memory location usage, that would otherwise be predictable and thus potentially exploitable.

Example 5: Test to ensure that the features are working as expected and are not inadvertently causing any operational issues or other problems.

Example 6: Continuously verify that the approved configurations are being used.

Example 7: Make the approved tool configurations available as configuration-as-code so developers can readily use them.
4e(iv)
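
Examples 1 and 2 above concern compiler warnings and the “clean build” concept. The sketch below shows a small build wrapper that always invokes the compiler with a warnings-as-errors, hardening-oriented flag set; the flag list is a common GCC/Clang baseline offered only as an assumption, and the authoritative set should come from the organization’s approved configuration (Example 7 above).

    import subprocess

    # Commonly recommended GCC/Clang hardening and warning flags; the exact set
    # should come from the organization's approved configuration.
    HARDENING_FLAGS = [
        "-Wall", "-Wextra", "-Werror",        # "clean build": warnings are errors
        "-D_FORTIFY_SOURCE=2", "-O2",
        "-fstack-protector-strong",
        "-fPIE", "-pie",
        "-Wl,-z,relro", "-Wl,-z,now",
    ]

    def build(sources, output="app", compiler="cc"):
        # Invoke the compiler with the approved hardening configuration.
        cmd = [compiler, *HARDENING_FLAGS, "-o", output, *sources]
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        build(["main.c"])  # hypothetical source file
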
PW.7.1: Determine whether code review (a person looks directly at the code to find issues) and/or code analysis (tools are used to find issues in code, either in
a fully automated way or in conjunction with a person) should be used, as defined by the organization.
Example 1: Follow the organization’s policies or guidelines for when code review should be performed and how it should be conducted. This may include third-party code and reusable code modules written in-house.

Example 2: Follow the organization’s policies or guidelines for when code analysis should be performed and how it should be conducted.

Example 3: Choose code review and/or analysis methods based on the stage of the software.
4e(iv)
PW.7.2: Perform the code review and/or code analysis
based on the organization’s secure coding standards,
and record and triage all discovered issues and
recommended remediations in the development
team’s workflow or issue tracking system.
Example 1: Perform peer review of code, and review any existing code review,
analysis, or testing results as part of the peer review.

Example 2: Use expert reviewers to check code for backdoors and other
malicious content.

Example 3: Use peer reviewing tools that facilitate the peer review process, and
document all discussions and other feedback.

Example 4: Use a static analysis tool to automatically check code for vulnerabilities and compliance with the organization’s secure coding standards with a human reviewing the issues reported by the tool and remediating them as necessary (see the illustrative sketch below).

Example 5: Use review checklists to verify that the code complies with the
requirements.

Example 6: Use automated tools to identify and remediate documented and verified unsafe software practices on a continuous basis as human-readable code is checked into the code repository.

Example 7: Identify and document the root causes of discovered issues.

Example 8: Document lessons learned from code review and analysis in a wiki that developers can access and search.
4e(iv)
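
Example 4 above describes automated analysis against the organization’s secure coding standards. Mature static analysis tools exist for this; purely as an illustration of the idea, the sketch below uses Python’s ast module to flag calls that a hypothetical coding standard bans outright (eval and exec) and to fail the check when any are found.

    import ast
    import sys
    from pathlib import Path

    # Calls that a hypothetical secure coding standard bans outright.
    BANNED_CALLS = {"eval", "exec"}

    def find_banned_calls(source, filename):
        # Return (file, line, name) findings for banned call sites.
        findings = []
        for node in ast.walk(ast.parse(source, filename=filename)):
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name) \
                    and node.func.id in BANNED_CALLS:
                findings.append((filename, node.lineno, node.func.id))
        return findings

    if __name__ == "__main__":
        all_findings = []
        for path in Path(sys.argv[1] if len(sys.argv) > 1 else ".").rglob("*.py"):
            all_findings += find_banned_calls(path.read_text(), str(path))
        for file, line, name in all_findings:
            print(f"{file}:{line}: banned call to {name}()")
        sys.exit(1 if all_findings else 0)  # non-zero exit fails the pipeline step
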
PW.8.2: Scope the testing, design the tests, perform
the testing, and document the results, including
recording and triaging all discovered issues and
recommended remediations in the development
team’s workflow or issue tracking system.
Example 1: Perform robust functional testing of security features.

Example 2: Integrate dynamic vulnerability testing into the project’s automated test suite.

Example 3: Incorporate tests for previously reported vulnerabilities into the project’s test suite to ensure that errors are not reintroduced.

Example 4: Take into consideration the infrastructures and technology stacks that the software will be used with in production when developing test plans.

Example 5: Use fuzz testing tools to find issues with input handling (see the illustrative sketch below).

Example 6: If resources are available, use penetration testing to simulate how an attacker might attempt to compromise the software in high-risk scenarios.

Example 7: Identify and record the root causes of discovered issues.

Example 8: Document lessons learned from code testing in a wiki that developers can access and search.

Example 9: Use source code, design records, and other resources when developing test plans.
4e(iv)
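
Example 5 above mentions fuzz testing. Coverage-guided fuzzers are the usual choice for real projects; as a minimal, dependency-free illustration of the concept only, the sketch below throws randomized inputs at a hypothetical parse_record function and treats any unhandled exception as a finding to record and triage.

    import random
    import string

    def parse_record(line):
        # Example target: parse "key=value;key=value" records.
        fields = {}
        for part in line.split(";"):
            if part:
                key, _, value = part.partition("=")
                fields[key] = value
        return fields

    def fuzz(iterations=100_000, seed=0):
        # Throw randomized inputs at the parser and report any crash-inducing case.
        rng = random.Random(seed)
        alphabet = string.printable
        for _ in range(iterations):
            candidate = "".join(rng.choice(alphabet) for _ in range(rng.randint(0, 64)))
            try:
                parse_record(candidate)
            except Exception as exc:  # any unhandled exception is a finding
                print(f"crash on input {candidate!r}: {exc!r}")
                raise

    if __name__ == "__main__":
        fuzz()
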
PW.9.1: Define a secure baseline by determining how
to configure each setting that has an effect on security
or a security-related setting so that the default settings
are secure and do not weaken the security functions
provided by the platform, network infrastructure, or
services.
Example 1: Conduct testing to ensure that the settings, including the default
settings, are working as expected and are not inadvertently causing any security
weaknesses, operational issues, or other problems.
4e(iv)
PW.9.2: Implement the default settings (or groups of
default settings, if applicable), and document each
setting for software administrators.
Example 1: Verify that the approved configuration is in place for the software (see the illustrative sketch below).

Example 2: Document each setting’s purpose, options, default value, security
relevance, potential operational impact, and relationships with other settings.

Example 3: Use authoritative programmatic technical mechanisms to record how
each setting can be implemented and assessed by software administrators.

Example 4: Store the default configuration in a usable format and follow change
control practices for modifying it (e.g., configuration-as-code).
4e(iv)
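
Example 1 above asks that the approved configuration be verified as in place. The sketch below expresses a secure baseline as configuration-as-code and reports drift in a running configuration; the setting names and values are invented for illustration and are not drawn from the SSDF.

    # Secure baseline expressed as configuration-as-code (assumed setting names).
    SECURE_DEFAULTS = {
        "tls_min_version": "1.2",
        "password_min_length": 14,
        "debug_mode": False,
        "session_timeout_minutes": 15,
    }

    def verify_configuration(deployed):
        # Return (setting, expected, actual) tuples for any drift from the baseline.
        drift = []
        for setting, expected in SECURE_DEFAULTS.items():
            actual = deployed.get(setting, "<missing>")
            if actual != expected:
                drift.append((setting, expected, actual))
        return drift

    if __name__ == "__main__":
        running = {"tls_min_version": "1.0", "password_min_length": 14,
                   "debug_mode": True, "session_timeout_minutes": 15}
        for setting, expected, actual in verify_configuration(running):
            print(f"{setting}: expected {expected!r}, found {actual!r}")
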
RV.1.1: Gather information from software acquirers,
users, and public sources on potential vulnerabilities in
the software and third-party components that the
software uses, and investigate all credible reports.
Example 1: Monitor vulnerability databases, security mailing lists, and other sources of vulnerability reports through manual or automated means.

Example 2: Use threat intelligence sources to better understand how vulnerabilities in general are being exploited.

Example 3: Automatically review provenance and software composition data for all software components to identify any new vulnerabilities they have.
4e(iv)
RV.1.2: Review, analyze, and/or test the software’s
code to identify or confirm the presence of previously
undetected vulnerabilities.
Example 1: Configure the toolchain to perform automated code analysis and testing on a regular or continuous basis for all supported releases.

Example 2: See PW.7 and PW.8.
4e(iv)
RV.1.3: Have a policy that addresses vulnerability
disclosure and remediation, and implement the roles,
responsibilities, and processes needed to support that
policy.
Example 1: Establish a vulnerability disclosure program, and make it easy for security researchers to learn about your program and report possible vulnerabilities.

Example 2: Have a Product Security Incident Response Team (PSIRT) and processes in place to handle the responses to vulnerability reports and incidents, including communications plans for all stakeholders.

Example 3: Have a security response playbook to handle a generic reported vulnerability, a report of zero-days, a vulnerability being exploited in the wild, and a major ongoing incident involving multiple parties and open-source software components.

Example 4: Periodically conduct exercises of the product security incident response processes.
4e(iv)
RV.2.1: Analyze each vulnerability to gather sufficient
information about risk to plan its remediation or other
risk response.
Example 1: Use existing issue tracking software to record each vulnerability.

Example 2: Perform risk calculations for each vulnerability based on estimates of its exploitability, the potential impact if exploited, and any other relevant characteristics (see the illustrative sketch below).
4e(iv)
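
Example 2 above calls for risk calculations per vulnerability. Established scoring systems such as CVSS are typically used in practice; the sketch below is a deliberately simplified stand-in showing how exploitability and impact estimates might be combined into a prioritization score, with placeholder vulnerability identifiers.

    from dataclasses import dataclass

    @dataclass
    class Vulnerability:
        identifier: str
        exploitability: float  # 0.0 (hard to exploit) .. 1.0 (trivially exploitable)
        impact: float          # 0.0 (negligible)      .. 1.0 (severe)
        exposed_to_internet: bool = False

    def risk_score(v):
        # Combine estimates into a single number used to prioritize remediation.
        score = v.exploitability * v.impact
        if v.exposed_to_internet:
            score = min(1.0, score * 1.5)  # weight externally reachable components higher
        return round(score, 2)

    findings = [
        Vulnerability("CVE-XXXX-0001", exploitability=0.9, impact=0.8, exposed_to_internet=True),
        Vulnerability("CVE-XXXX-0002", exploitability=0.3, impact=0.4),
    ]
    for v in sorted(findings, key=risk_score, reverse=True):
        print(v.identifier, risk_score(v))
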
RV.2.2: Plan and implement risk responses for
vulnerabilities.
Example 1: Make a risk-based decision as to whether each vulnerability will be
remediated or if the risk will be addressed through other means (e.g., risk
acceptance, risk transference), and prioritize any actions to be taken.

Example 2: If a permanent mitigation for a vulnerability is not yet available, determine how the vulnerability can be temporarily mitigated until the permanent solution is available, and add that temporary remediation to the plan.

Example 3: Develop and release security advisories that provide the necessary information to software acquirers, including descriptions of what the vulnerabilities are, how to find instances of the vulnerable software, and how to address them (e.g., where to get patches and what the patches change in the software; what configuration settings may need to be changed; how temporary workarounds could be implemented).

Example 4: Deliver remediations to acquirers via an automated and trusted delivery mechanism. A single remediation could address multiple vulnerabilities.

Example 5: Update records of design decisions, risk responses, and approved exceptions as needed. See PW.1.2.
4e(iv)
RV.3.3: Review the software for similar vulnerabilities
to eradicate a class of vulnerabilities, and proactively
fix them rather than waiting for external reports.
Example 1: See PW.7 and PW.8.
4e(iv)