Earlier this year, I began working through a series of posts to define full enterprise development processes that are inclusive of business concerns. The goal was to convey a complete picture of backlogs and how they can be used to address not only product or software development needs, but the business at large. While it is all clear in my mind, putting forth a cohesive explanation is much less straightforward.
"It is best to begin at the beginning," so I will start with providing some context.
Each industry has its own set of regulations and processes. I don't think that is groundbreaking information to anyone. Unfortunately, historically, [supposed] deep understanding has been limited to the obvious departments: security, risk, compliance, legal, etc. (I personally don't think this was ever a good approach.)
It is interesting that somewhere between the years of evolving desktop development and the move to building sites and then services online, those departments became entities unto themselves. When software development as a career became more mainstream, there were very few specializations to be had. One couldn't choose to "only do front-end or UI work." There wasn't enough of it to sustain a full-time job, and quite honestly, "UI" wasn't even *a thing*.
When my teams were writing an application to be run on users' desktops, there was no such thing as handing over libraries or executables to another team for packaging and distribution. You had to own the packaging and installation scripting yourself. In most cases, the registration process and licensing were built into the application itself.
You had to be aware of security concerns. You had to know how things would behave on different systems, where they had to be registered, how to install them, how to roll them back, and how to create logs that could be sent back to allow not only for remote support but also for general troubleshooting. You had to be aware of encryption and how to protect your work in various ways so people couldn't hack into your software without paying for it. You had to learn about licensing and cost models and obfuscation. The list goes on...
Somewhere over the last <cough> decades, the idea of limiting that understanding and assigning the accompanying accountability into different departments has gained traction. In the doing, a lot of information and collaboration has been lost. This may have appeared to be a logical separation –for a very short period of time. However, in this new age where every company is now a technology company, and platforms, security, networking, services, and UIs are all code to be managed, that loss of knowledge is a huge detriment, and simply not viable.
This is slowly coming to the forefront in unfortunate ways. Data breaches, new legislation around data governance, and the merging of concepts and departments as if these are new ideas: DevOps, DevSecOps, Information Security, etc., all say to me that "what is old is new again." For those of us who started our careers in the 1990s, it has become an exercise in equating new buzzwords from Star Trek or the SAT guide with the same concepts that were common years ago.
With that in mind, I can definitively say that it IS possible to know and operationalize all of the things that need to be done –IF you have stayed current and IF you have continued to learn and fully understand. (Those are giant "IF" statements though.) When I chose Computer Science as a field of study, I was cautioned that it would mean a lifetime of learning throughout my career. I believed that and embraced it. It is 100% true, and that commitment has now shown itself to be an advantage.
In our new world, data is king –and [digital] mountains of it are collected daily through the varied interactions that we all have with technology. In the discipline of security and regulations, in determining what set(s) of standards should be assessed and applied, the first thing to understand is this:
The kind of organization you are matters less than the kinds of data that you are handling –although there is often an overlap between the two.
Health data and patient data in the U.S. are subject to laws such as the Health Insurance Portability and Accountability Act (HIPAA) and the Health Information Technology for Economic and Clinical Health Act (the HITECH Act), as well as regulations such as the Clinical Laboratory Improvement Amendments (CLIA).
U.S. financial data must comply with consumer-protection laws like the Electronic Fund Transfer Act (EFTA) and FISMA, as well as a litany of regulations from the SEC (such as those implementing Sarbanes-Oxley), the CFTC, and numerous other financial regulatory bodies. Meanwhile, data collected online that might potentially involve minors –regardless of industry –may have to comply with certain requirements under the Children's Online Privacy Protection Act (COPPA).
The same goes for geography. The EU's General Data Protection Regulation (GDPR), which took effect in 2018, is its data privacy and "right to be forgotten" regulation. GDPR –and parallel regulations such as France's Digital Republic Bill –can expose multinational organizations to hefty financial penalties, additional rules for disclosing data breaches, and increased scrutiny of the adequacy of their data security.
The GDPR provision that keeps IT security teams busiest is Article 32, which requires "a process for regularly testing, assessing, and evaluating the effectiveness of technical and organizational measures for ensuring the security of the processing" of personal data.
Not only did GDPR introduce more rigor around the security of personal data, it redefined Personally Identifiable Information (PII) itself. Where the definition had previously been around basic "information about a person," it is now more subjective and nuanced to address the current technological landscape. With the wealth of information available, it no longer even takes a name to identify a person. A birthday and general location can be enough. A company where someone may work and a first name can also be sufficient. GDPR opened the door to acknowledge those contexts, and to enforce security around everything.
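To make that concrete, here is a minimal sketch (entirely illustrative data, no real records) of why a combination of quasi-identifiers can be as identifying as a name: if a birthday and postal code pair maps to exactly one record, that person is effectively identified.

```python
from collections import Counter

records = [  # illustrative data only
    {"birthday": "1984-03-07", "postal_code": "02139", "plan": "pro"},
    {"birthday": "1984-03-07", "postal_code": "02139", "plan": "free"},
    {"birthday": "1991-11-23", "postal_code": "98103", "plan": "pro"},
]

# Count how many records share each (birthday, postal_code) quasi-identifier pair.
counts = Counter((r["birthday"], r["postal_code"]) for r in records)
unique = [pair for pair, n in counts.items() if n == 1]
print(f"{len(unique)} combination(s) point to exactly one person")  # -> 1
```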
GDPR-style data privacy laws came to the U.S. with the California Consumer Privacy Act (CCPA), effective in 2020. But those aren't the only laws or regulations that affect IT security teams. There are plenty of others to worry those with words like "compliance," "privacy," and "security" in their job titles, from CSOs on down.
GDPR aside, it has become axiomatic that non-EU companies must comply with current EU regulatory schemes when they have EU users and customers –and the notion was driven home (albeit in a context other than data protection) when the EU fined Google a record €2.4 billion (then the equivalent of about $2.7 billion USD) for antitrust violations.
To put a finer point on it, GDPR's reach follows the data subject, not the company: it applies to the personal data of people in the EU, regardless of where the company with which they are interacting is located. For instance, if a resident of Ireland signs up for a U.S.-based streaming service such as Hulu (assuming the service is offered to them), Hulu becomes responsible for complying with GDPR in the handling of that person's data (in addition to all regulations to which the company is already bound).
The same holds true for other international or interstate situations involving users or customers in jurisdictions other than your own. Where your customers are located matters. In the U.S., for example, [as of this writing] all 50 states have their own data breach notification laws (and each state, accordingly, has its very own definition of such basic terms as "data" and "breach") –with Massachusetts' and California's respective breach-notification schemes viewed as the strictest.
Of note, as of January 2021, over 130 jurisdictions now have their own data privacy laws.
Not only is it incumbent on a company to be well-versed in the data privacy laws of the countries with which it might interact, it is equally important to be aware of the LACK of data privacy laws in others. Where a country has not agreed to comply with international data protection standards, a company is limited in its ability to contract with, or use the services of, providers within that jurisdiction. To do so fails to ensure the privacy of the data once it is outside the originating country's control, which is a violation in itself.
States also differ on other data privacy and IT security compliance laws. The states of Nevada, Minnesota, and Washington stand out for having their own laws on the books creating liability in certain situations for businesses that handle credit card transactions and are not in compliance with PCI-DSS.
Other industry standards too can have the force of “pseudo-law” –notably, the NIST Cybersecurity Framework, which federal regulators often apply to financial-services firms and government contractors. Speaking of data compliance at the federal level, data collectors transacting business in the U.S. still must comply with relevant federal laws and regulations on top of any state laws and regulations. The FTC, for example, has an extremely broad regulatory reach (sometimes having overlapping jurisdiction with other agencies) and enforces many laws not mentioned here affecting data practices.
Thus, it can be difficult for even small enterprises to keep up with information security and data privacy compliance. The challenge becomes even greater for large enterprises – particularly when employees (or even entire departments) go rogue with Shadow IT implementations. This is where compliance software can come in handy for keeping track of, maintaining, and enforcing IT-security and data-privacy policies.
Sometimes, however, information security, data privacy, and IT compliance overall are people problems more than they are pure data problems. While companies can better ensure their information security with a policy-management tool to track employee training and compliance, it is difficult to manage completely in any automated fashion. As with most things, overarching understanding and training is the only solution.
If you consider a normal enterprise organization's structure, there are potentially combinations of project management, design, security, legal, networking/infrastructure, sales, marketing, compliance, risk analysis/mitigation, development, operations, quality assurance, and support groups. It is necessary not only for the employees of the obviously-labeled-as-such departments to understand the larger picture, but for every employee in every department. Everyone must be familiar with potential pitfalls. To think otherwise is not only to be a poor steward of your customers'/clients' data and privacy, but to put the future of the company in jeopardy.
To illustrate for each of the departments above, consider the following situations. For some, to be concrete, I am using "passwords" as the topic, as broken access control was ranked the "fifth most concerning web security vulnerability" and asserted to have a "High" likelihood of exploitation. (According to Veracode's State of Software Security, Vol. 10, access control was among the categories most commonly involved in exploits and security incidents, despite being the least prevalent of those examined.)
Project management: A PM/PO may create a backlog item that includes a request to provide a reset-password flow. There are recommended patterns for password storage, forgotten passwords, reset passwords, input validation, multifactor authentication, error handling, authentication, authorization, and logging (among others) that are all relevant to that request (OWASP Cheat Sheet Series).
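As an illustration of the kind of pattern that backlog item pulls in, here is a minimal sketch of the forgot/reset-password step, loosely following the OWASP guidance of issuing a single-use, time-limited token and storing only its hash. The names (`create_reset_token`, `token_store`) and the 30-minute lifetime are illustrative assumptions, not prescriptions.

```python
import hashlib
import secrets
from datetime import datetime, timedelta, timezone
from typing import Optional

RESET_TOKEN_TTL = timedelta(minutes=30)  # short-lived and single use (assumed policy)

def create_reset_token(user_id: str, token_store: dict) -> str:
    """Generate a reset token; persist only its hash plus an expiry."""
    token = secrets.token_urlsafe(32)                        # unguessable value
    token_hash = hashlib.sha256(token.encode()).hexdigest()  # never store the raw token
    token_store[token_hash] = {
        "user_id": user_id,
        "expires_at": datetime.now(timezone.utc) + RESET_TOKEN_TTL,
        "used": False,
    }
    return token  # delivered out of band (e.g., an emailed link), never logged

def redeem_reset_token(token: str, token_store: dict) -> Optional[str]:
    """Return the user_id if the token is valid, unexpired, and unused."""
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    record = token_store.get(token_hash)
    if not record or record["used"] or record["expires_at"] < datetime.now(timezone.utc):
        return None  # respond to the caller with the same generic message either way
    record["used"] = True
    return record["user_id"]
```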
Security: In storing the data for the password (or any change), this group is responsible for ensuring the correct hashing algorithms (and communicating them to the team), encryption at rest as well as in transit, logging of the change at the database level, logging of the person requesting the change, forcing the session's expiration for the account after the reset, and audit logging of the change along with management of the audit log (separate from the logging of the change at the database and the logging by the development team). They also own password-length and complexity requirements, enforcing password expiration, ensuring that the new password is not the same as a specified number of previous passwords, logging anomalies in repeated attempts to change a password, locking accounts based on certain activities, and enforcing RBAC.
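For the hashing piece specifically, a minimal sketch using the standard library's PBKDF2 might look like the following. (OWASP's password storage guidance also lists Argon2id, scrypt, and bcrypt; the iteration count here is an assumption to be tuned to your own policy and hardware.)

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # assumption: tune to your own policy and hardware

def hash_password(password: str) -> tuple:
    """Return (salt, derived_key); store both, never the plaintext password."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key

def verify_password(password: str, salt: bytes, stored_key: bytes) -> bool:
    """Re-derive the key and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_key)
```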
Legal: In contracting with external companies or integrating with service providers, the legal team needs to be aware of all of the factors involved to ensure that everything is handled correctly. For instance, if part of the audit logging being implemented included the IP address of the requesting call, there may have been a clause in the contract prohibiting the storage of PII. The contract would have been violated without their knowledge. In most cases, your average developer would not even be aware of the contract language or that he/she had done anything wrong. (In case one didn't know, some EU interpretations include IP addresses as PII, which then also opens up the concepts of categorizing the data and retention policies.)
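One common mitigation, if a contract (or a GDPR reading) treats IP addresses as PII, is to truncate the address before it ever reaches the audit log. A minimal sketch, with the /24 and /48 masks as illustrative policy choices rather than a rule:

```python
import ipaddress

def anonymize_ip(raw_ip: str) -> str:
    """Zero out the host portion of an address prior to logging."""
    addr = ipaddress.ip_address(raw_ip)
    if addr.version == 4:
        network = ipaddress.ip_network(f"{raw_ip}/24", strict=False)
    else:
        network = ipaddress.ip_network(f"{raw_ip}/48", strict=False)
    return str(network.network_address)

# anonymize_ip("203.0.113.42")  -> "203.0.113.0"
# anonymize_ip("2001:db8::1")   -> "2001:db8::"
```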
Networking/infrastructure: These teams need to be equally aware of all of the above in order to support compliance. Device management, RBAC, auditing, encryption protocols and the availability of each would potentially be their responsibility. Further, for a password approach, the development team may choose to make use of a third-party package or open-source library. Infrastructure teams are responsible for ensuring that servers are single-purpose and that all unnecessary components or libraries are removed. They are further obliged to keep any packages current with security patches within 30 days of their release, so they must be in the loop. They are usually responsible for scheduled scans and penetration tests of the network. Updating those packages in turn affects the development team and could introduce breaking changes to other aspects of the code. Many of those packages or components are used via a key exchange, which then leads to secrets and key management, which have their own rules as well.
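On the secrets-management point, here is a minimal sketch of keeping third-party credentials out of source code: read them from the environment (populated by whatever secrets manager or CI vault the infrastructure team provides) and fail fast if they are missing. The variable names are hypothetical.

```python
import os
import sys

REQUIRED_SECRETS = ["SERVICE_API_KEY", "SERVICE_SIGNING_KEY"]  # hypothetical names

def load_secrets() -> dict:
    """Fetch required secrets from the environment; never fall back to literals in code."""
    missing = [name for name in REQUIRED_SECRETS if not os.environ.get(name)]
    if missing:
        # Fail closed rather than running with placeholder credentials.
        sys.exit(f"Missing required secrets: {', '.join(missing)}")
    return {name: os.environ[name] for name in REQUIRED_SECRETS}
```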
Sales: The sales team needs to be able to speak to any of the security concerns and regulations to which a potential client or company may be subject. There may be discussions of integrations and questions from the client's security teams, and the sales group needs to be comfortable and confident in addressing those concerns. They may be attempting to demonstrate a feature that is not yet released and have asked for a public endpoint or firewall port to be opened to allow for their sale. They may use production data in their demonstration. They may ask for their own testing accounts in a production environment in order to do their job. (All of those requests are violations.) Separate from password security or this situation overall, the sales team must also be aware of the risk in bringing clients into the office, allowing access to the Wi-Fi systems, potential documents on anyone else's desks, the security of their own laptops when joining client networks in the field, sharing files electronically, getting contractual documents off of printers as quickly as possible, and not printing things or exposing their laptops to client networks without a firewall, etc. (Those are just samples, but each has an actual regulation dictating the behavior expected and required.)
Marketing: Marketing team members may inadvertently reveal PII in the course of their jobs and interactions online. They may request that data be collected in a tool such as Google Analytics when that is not allowed. They may synchronize systems with external providers like MailChimp or HubSpot and allow data to be compromised in systems to which internal controls do not apply, with the responsible/accountable team completely ignorant of these integrations. They may not inform the Operations team (or a compliance team) of the fact that they are using an external site or tool, when that is also expected to be a line item in a software supply chain's asset inventory. They may grant permissions to users outside of the organization or share a common login within the department. When an employee leaves, there is no accounting for the potential data compromise, with the client information residing in an external system and thus vulnerable to attack. If the site or tool wasn't authorized and thus managed by the security team, there is no guarantee that the user will be decommissioned in a timely manner, which is another violation. An employee's use or viewing of any client information in one of those tools or sites would also be subject to an audit log, which is not always possible.
Compliance: Most compliance teams are focused on the movement or retention of data, without knowledge of how any of these other situations could happen or become a risk. Still, certain retention rules are configured at the server level, over which they have no control, knowledge, or monitoring.
Risk analysis/mitigation: Risk can only be analyzed and mitigated if it is known. Most risk teams are most knowledgeable about the business side of risk and do not know that there are development/technical details that also require analysis. They may not know that there is a requirement for risk analysis on every request being made for development as well. They can only trust that each person is doing his/her job correctly and completely. They may not know what tools are available to assist them in their jobs, as new ones become available daily.
Developers: Developers need to understand all of the patterns and best practices, not only for development, but also for the industry. They must fully understand separation of duties and separation of data; that they are not allowed to test with production data or to have test data in production; that they must enforce role-based access; and that any and all logs or auditing must be automated and "scrubbed" of PII. Add to that security aspects, tokenization, software library and open-source repository governance, replacement of configuration information and protection of access to each, scanning of source code for static analysis and coding issues, mitigating risks, responding to compromises or breaches, ensuring that enough information is logged to allow them to troubleshoot WITHOUT access to *real* information or production systems, adhering to correct branching patterns, traceability, logging of hours, refactoring of code, continuous integration, security scanning, penetration testing, regression testing, approvals, firewalls and virtual networks, encryption, updating any packages in use, etc.
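To illustrate the "scrubbed of PII" item, here is a minimal sketch of a logging filter that redacts a couple of obvious patterns before a record is emitted. A real deployment would cover far more patterns and structured fields; the regexes here are illustrative only.

```python
import logging
import re

PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "<email>"),   # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<ssn>"),          # US-style SSNs
]

class PiiScrubbingFilter(logging.Filter):
    """Redact known PII patterns before a log record is emitted."""
    def filter(self, record: logging.LogRecord) -> bool:
        message = record.getMessage()
        for pattern, replacement in PII_PATTERNS:
            message = pattern.sub(replacement, message)
        record.msg, record.args = message, ()
        return True

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("app")
logger.addFilter(PiiScrubbingFilter())
logger.info("Password reset requested by jane.doe@example.com")  # logs "<email>" instead
```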
Operations: By definition, operations teams are in the business of operationalizing the business and its processes. They are similar to development teams in having to know *all the things*, but in many cases with data retention added on top.
Quality assurance: Quality assurance professionals are another group that needs to know almost everything, and to think of how to break those same things, while ensuring that the overall solution works. They have the added difficulty of reproducing real-world-esque bugs to prove resolution without production data –when it is usually the outlier's specific production data that was problematic. The easiest way to accomplish their job is exactly the violation they must avoid, so they have to be committed.
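One way QA teams stay committed is to fabricate records that are merely shaped like the problematic outlier rather than copying it. A minimal sketch, with entirely made-up field names and values:

```python
import random
import uuid

def synthetic_customer(name_length: int = 255) -> dict:
    """Build a fake customer record shaped like a troublesome outlier (e.g., a very long, accented name)."""
    alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZÀÉÎÕÜ' -"
    return {
        "customer_id": str(uuid.uuid4()),  # never a real identifier
        "full_name": "".join(random.choice(alphabet) for _ in range(name_length)),
        "signup_country": random.choice(["FR", "DE", "US", "BR"]),
    }

# Reproduce the bug across a range of name lengths without touching production data.
edge_cases = [synthetic_customer(n) for n in (1, 64, 255, 1024)]
```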
Support: Support professionals are expected to do their work while seeing JUST ENOUGH information to do that job –without knowing from moment to moment what each incoming call or issue might be. In many cases, they are on a forensic search to help, but may or may not have access to the data that would. With least privilege, as advised, they have no option other than to escalate tickets. Where it gets tricky is when someone takes a screenshot and attaches it to a bug, Slack message, or email, and now there is PII (no matter how limited) floating around in an untracked manner. Even individuals who know not to do that will sometimes print a screenshot to walk over to someone else for input. That piece of paper is now a liability too. Understandable, but a violation nonetheless. (As someone who had her debit card compromised in a similar way, when a piece of paper was put into the garbage and not the shredder, I'm particularly sensitive to this.)
While those are scenarios around office-type jobs, do not think that more manual jobs, traditionally thought of as blue-collar roles, are immune. There are similar standards and controls for industrial technology –especially as embedded devices become more and more common.
At this point, you are potentially thinking that we should all just quit trying. Don't!
No, this does not mean that each of your employees must be a cybersecurity or data governance engineer or expert. (It might be possible though, since I wrote most of that to this point off the top of my head.) It does, however, mean that the prevalent approach of online training to provide 30 minutes of videos or multiple-choice questions and pretend that your employees are trained is a blatant misapplication of the laws. (It probably has taken that much time to even read this post to get to this sentence.)
Hoarding the details and information within a department hinders the ability of the entire company and invites unintentional risks by even the most well-meaning employees.
The best way to approach these requirements is to truly understand the purposes of each, and then embed compliance in the daily execution of the employees' job. While some may think that it is impossible to be aware of all regulations, at the heart of it, they all go back to a few basic points. Because of that, there are some "shortcuts" to be had.
<Create and insert image for Venn diagram of regulations here>
For a typical company (which, again, I am treating as a technology company), while there are many regulations of which one must be aware, they all go back to a few basic publications by NIST.
Software development regulations
While there are many regulatory bodies specific to an organization's industry, almost all have the same roots.
While NIST's name refers to technology, its standards are not limited to software. They are applied to all industries in which technology is involved –which is basically everything in today's world.
NIST's mission
To promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life.
NIST's vision
NIST will be the world's leader in creating critical measurement solutions and promoting equitable standards. Our efforts stimulate innovation, foster industrial competitiveness, and improve the quality of life.
NIST's core competencies
Measurement science
Rigorous traceability
Development and use of standards
NIST's core values
NIST is an organization with strong values, reflected both in our history and our current work. NIST leadership and staff will uphold these values to ensure a high performing environment that is safe and respectful of all.
Perseverance: We will take the long view, planning the future with scientific knowledge and imagination to ensure continued impact and relevance for our stakeholders.
Integrity: We are ethical, honest, independent, and provide an objective perspective.
Inclusivity: We work collaboratively to harness the diversity of people and ideas, both inside and outside of NIST, to attain the best solutions to multidisciplinary challenges.
Excellence: We apply rigor and critical thinking to achieve world-class results and continuous improvement in everything we do.
NIST publishes Standard Reference Materials and Standard Reference Data. Most of the reference information cites ISO and IEC guidelines, as all are bodies organized around specialized, industry-leading approaches. There are Special Publications (the SP 260 series) as well.
ISO develops and publishes International Standards; there are more than 23,862 of them, all included in its catalogue.
IEC develops international standards that represent a global consensus of state-of-the-art know-how. There are over 10,000 standards, more than a million certificates issued against them, and participation from 173 countries.
Parallel to ISO and IEC is ISA. It sets the standards for automation and is the source of technical resources for the management of industrial automation. Further, ISA created the ISA Global Cybersecurity Alliance to advance cybersecurity readiness and awareness in manufacturing and critical infrastructure facilities and processes. ISA also owns Automation.com, a leading online publisher of automation-related content, and is the founding sponsor of The Automation Federation. ISA speaks to both the standards and their implementation through the ISA Security Compliance Institute and the ISA Wireless Compliance Institute. Overall, there are more than 150 standards documents and 5,500 technical papers covering topics in effectively every area of automation.
Which one "wins"?
A common misconception is that an organization must choose between NIST, ISO, IEC, and ISA, and that one is better than the others. In fact, they are all useful, applicable in different ways, have multiple synergies, and ultimately standardize the same things.
Many organizations turn to Control Objectives for Information and Related Technology (COBIT) as a means of managing the multiple frameworks available. COBIT helps organizations bring standards, governance, and process to cybersecurity. The ultimate goal is to provide actionable risk management to an organization's critical infrastructure.
Both NIST and ISO are specifically useful for data security, risk assessments, and security programs. Of them all, most commonly, the NIST Cybersecurity Framework is compared to ISO 27001: the specification for an information security management system (ISMS).
Closely related is the ISO/IEC 20000 standard for information technology service management. (Much of the broader ISO/IEC 27000 family addresses software and information security specifically. For instance, ISO/IEC 27017 is "Information technology - Security techniques - Code of practice for information security controls based on ISO/IEC 27002 for cloud services.")
NIST 800-53 (Security and Privacy Controls for Information Systems and Organizations) is more security-control driven, with a wide variety of control families to facilitate best practices related to federal information systems. (For context, it is 492 pages long.) Its controls, along with other supporting NIST publications, while mandatory for federal information systems, are:
designed to help organizations identify the security and privacy controls needed to manage risk and to satisfy the security and privacy requirements in FISMA, the Privacy Act of 1974 [PRIVACT], OMB policies (e.g., [OMB A-130]), and designated Federal Information Processing Standards (FIPS), among others. It accomplishes this objective by providing a comprehensive and flexible catalog of security and privacy controls to meet current and future protection needs based on changing threats, vulnerabilities, requirements, and technologies. The publication also improves communication among organizations by providing a common lexicon that supports the discussion of security, privacy, and risk management concepts....
The control selection process can be part of an organization-wide risk management process, a systems engineering process [SP 800-160-1], the Risk Management Framework [SP 800-37], the Cybersecurity Framework [NIST CSF], or the Privacy Framework [NIST PF].
ISO 27001 is less technical and more risk-focused for organizations of all shapes and sizes.
For further specific context on multiple regulations and compliance regimes and their relationships with each other, below are four (4) examples:
Certification with PCI DSS (Payment Card Industry Data Security Standard) is required of any organization that accepts payments, and of any organization that shares information with, integrates with, or provides services to an organization that accepts payments. The standard is largely based on NIST guidance and additionally refers to ENISA, OWASP, SANS (the SysAdmin, Audit, Network, and Security Institute), ISO, and CIS (the Center for Internet Security), among others. It specifically cites:
NIST SP 800-53 (Security and privacy controls for information systems and organizations)
NIST SP 800-30 (Guide for conducting risk assessments)
NIST SP 800-63 (Digital identity guidelines: authentication and lifecycle management)
NIST SP 800-115 (Technical guide to information security testing and assessment)
NIST SP 800-52 (Guidelines for selection, configuration, and use of transport layer security [TLS] implementations)
NIST SP 800-57 (Key management)
ISO 27001 (Information security)
ISO 27005 (Information security risk management)
OWASP: Open Web Application Security Project and the Cheat Sheet Series
OCTAVE: Operationally Critical Threat, Asset, and Vulnerability Evaluation
FedRAMP (Federal Risk and Authorization Management Program) standardizes security assessment and authorization for cloud products and services used by U.S. federal agencies.
In order for a commercial cloud service offering (CSO) to be used by a federal agency, the CSO must demonstrate FedRAMP compliance.
FedRAMP compliance is the ability to substantiate adherence to the government security requirements outlined in NIST SP 800-53, supplemented by the FedRAMP Program Management Office (PMO).
The second step in obtaining FedRAMP authorization is to implement controls in accordance with the system's FIPS 199 categorization. FIPS 199 (Standards for Security Categorization of Federal Information and Information Systems) is part of the Federal Information Processing Standards publication series of the National Institute of Standards and Technology (NIST).
SOC (System and Organization Controls), types 1, 2, and 3
SOC is part of the AICPA (American Institute of Certified Public Accountants) framework.
SOC 1 reports use the SSAE 18 standard.
SOC 2 and 3 reports use the AT 101 standard.
SOC audits generate the reports based on a range of "common criteria controls" for "Trust Services Criteria (TSP)."
NIST 800-53 maps closely to the control criteria against which SOC audits are assessed.
FISMA (Federal Information Security Modernization Act) requires organizations to fully implement and adhere to the prescribed families of controls within NIST SP 800-53.
Not to leave the industrial and automation sectors out, there is also the ISA/IEC 62443 series, developed by the ISA99 committee and adopted by the IEC.
It specifies a framework to address and mitigate current and future security vulnerabilities in industrial automation and control systems (IACSs).
ISA-62443-4-2, Security for Industrial Automation and Control Systems: Technical Security Requirements for IACS Components, specifies requirements for embedded devices, network components, host components, and software applications.
ISA-62443-3-3, System Security Requirements and Security Levels, specifies security capabilities that enable a system to mitigate threats for a given security level without the assistance of compensating countermeasures.
ISA-62443-4-1, Product Security Development Life-Cycle Requirements, includes definition of a secure development life cycle, including security requirements, secure design, secure implementation (including coding guidelines), verification and validation, defect management, patch management, and product end of life.
ISA-62443-3-2, Security Risk Assessment, System Partitioning, and Security Levels, is based on understanding security as risk management.
ISA-TR62443-2-3, Patch Management in the IACS Environment, is a technical report addressing installation of patches, software updates, software upgrades, firmware upgrades, service packs, hot fixes, basic input/output system updates, and other digital electronic program updates that resolve bug fixes, operability, reliability, and cybersecurity vulnerabilities.
Sounding familiar yet?
No matter the industry or solution being created, there is guidance and standardization to match the same basic requirements in handling data. And at its core, that is all anything does: handle data. Whether it is to save it, encrypt it, act on it, display it, or move it, every regulation goes back to being intentional about definition, risk evaluation, and security at every step of the process, whatever that process may be.
If you understand the underlying reasons, the spirit and the letter of the requirements, you can satisfy the needs for compliance in the course of your daily work. You're already using the tools, but they are only as good as what you put into them.
Now that we've seen that the core is the same, I'll share what this looks like in practice...
Specifically, tools should be used to facilitate the SDLC, and I also detail a customized approach to Agile development methodologies in support of security, compliance, and best practices. This post details the baseline understanding that those other posts assume.
DISCLAIMER: In traditional fashion of how my mind works, there will be assumptions of understanding made here, with links provided for reference for those who need them. If I tried to cover all of that myself, the result would quite probably be impossible to comprehend.