Saturday, July 2, 2022

TSA Pipeline Security Directive Change

On June 29, 2022, the TSA announced changes to a cybersecurity directive for U.S. pipelines after receiving harsh criticism from operators, cybersecurity experts, and trade groups. In late May 2021, in the wake of the Colonial Pipeline ransomware attack, the TSA issued the first of two directives. This first directive imposed arduous breach-reporting obligations on operators affected by ransomware, malware, or cyber-attack. In the second directive, issued in mid-July 2021, the TSA worked with CISA and prescribed more “technical countermeasures”.

The details of the directives were not released publicly but the TSA said last year that they required owners to:

“Implement specific mitigation measures to protect against ransomware attacks and other known threats to information technology and operational technology systems, develop and implement a cybersecurity contingency and recovery plan, and conduct cybersecurity architecture design review.”

In this newly announced change to the directive, a spokesperson for the TSA stated that it is planning to reissue the second set of security guidelines next month but with changes that “afford greater flexibility to industry in achieving critical cybersecurity outcomes.”

This change to the more technical directive comes after considerable pushback and backlash from operators and industry experts. Some say the prescribed technical controls are counterproductive to securing OT and consequently weaken the security posture. Word of mouth suggests the directive was littered with IT-centric technologies and techniques like Zero Trust Architecture (ZTA) and Multifactor Authentication. While these are tried-and-true methods for Information Technology, given the nature of the equipment and the functions of Operational Technology, it is often impossible to apply these types of security controls. OT systems are open “by design” and allow commands to flow freely to maintain the efficiency and reliability of processes. These systems lack the capacity or the structure to make use of IT-centric security topologies and technologies. On top of this, in many cases, these entities are made up of systems that are the culmination of mergers and acquisitions, amassed from many differing yet interconnected control systems. Due to the nature of OT and this inevitable heterogeneity, an attempt to prescribe technical controls across the breadth of the pipeline, or any other Critical Infrastructure Key Resource (CIKR) industry, becomes virtually impossible to implement.

It is said the new directive will extend the reporting obligation for a breach from the original 12 hours to 24 hours. While this seems to give teams more time to figure out what just happened, it also leaves many questions unanswered, like:

(1) What constitutes a “breach”?

(2) What trigger starts the obligatory reporting clock?

a. Time of the incident?

b. Time of discovery there was an incident?

c. Time of the impact of an incident?

d. Time at which it was determined that the impact was truly a cyber incident?
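
The ambiguity is not academic: the same 24-hour window produces very different deadlines depending on which trigger starts the clock. A minimal sketch, using entirely hypothetical timestamps for one incident, makes the spread concrete:

```python
from datetime import datetime, timedelta

# Hypothetical incident timeline; every timestamp here is an illustrative
# assumption, not drawn from any real event or from the directive itself.
timeline = {
    "incident_occurred":   datetime(2022, 7, 1, 2, 15),   # malware executes
    "incident_discovered": datetime(2022, 7, 1, 14, 40),  # SOC flags anomaly
    "impact_observed":     datetime(2022, 7, 1, 18, 5),   # process disruption
    "confirmed_cyber":     datetime(2022, 7, 2, 9, 30),   # forensics confirm cause
}

REPORTING_WINDOW = timedelta(hours=24)

def reporting_deadline(trigger: str) -> datetime:
    """Deadline if the reporting clock starts at the given trigger event."""
    return timeline[trigger] + REPORTING_WINDOW

for trigger in timeline:
    print(f"{trigger:20s} -> report by {reporting_deadline(trigger)}")
```

In this made-up timeline, the earliest and latest interpretations of the trigger differ by more than 31 hours, which is longer than the reporting window itself.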

Cyber events in OT, in many cases, manifest as system failures. Subtle changes to the configuration of OT assets need to be examined to determine whether the changes were legitimate operational requirements or were injected by malware or a bad actor. It is no trivial feat for operators to stitch together the evidence of a cyber-attack and, in many cases, distinguish the Indications of Attack (IoA) in OT from normal system operations or routine system failure. To support operators with this massive challenge, Hexagon offers PAS Cyber Integrity® to help address risks and vulnerabilities and speed the response and recovery process better than any competing network-based asset management system.

CISA does have noteworthy resources to leverage ahead of an incident, available here: Cyber Resource Hub | CISA. Unfortunately, however, having technical controls that are a mismatch to the Operational Technology architecture does not afford operators a likelihood of successful defense. Likewise, having the clock ticking overhead negates the team’s ability to take the time to fully investigate an event. The larger question we all must ask is, “what value comes from simply ‘reporting’ a breach?” Focusing on infrastructure cybersecurity resiliency, continuity of operations, and contingency plans is far more beneficial than compliance-driven reporting, unless the TSA or the Government at large has plans and the capacity to jump in at “crisis time” to assist.


Tuesday, July 20, 2021

Best Practices

The “cringeworthy” phrase that has run its course

Let’s face it; we have all used this phrase “Best Practices.” There is even a definition in Webster's dictionary, "A best practice is a method or technique that has been generally accepted as superior to any alternatives because it produces results that are superior to those achieved by other means or because it has become a standard way of doing things, e.g., a standard way of complying with legal or ethical requirements." For Security, at one time, it was the battle cry for “what needs to be done.” Salespeople love to refer to “Best Practices,” and young practitioners are empowered because they feel they “know” these as absolute truths.

In many of my discussions with people, I often find myself either quietly biting my lip or actively challenging the use of this overused phrase. Some folks understand, yet some may wonder why, for security, I find this term inaccurate (I am not alone; I know others in the Security profession who agree).

I have spent the last 30+ years of my over four decades in IT practicing security. In that time, I have seen the evolution of our collective understanding and the growth of the craft, given the fluid nature of the development of security technologies. To illustrate my point as to why the term "Best Practices" does not accurately apply to the Security trade, I will look at log management as an example.

At an early stage in the evolution of our craft in what is now widely referred to as “Cybersecurity,” the focus was “Logging, Logging, Logging”: log everything. Then the explosion of storage requirements made log management, in general, cumbersome. The control evolved into “Log Aggregation,” which then evolved further to include “normalization.” With advances in Security Event and Incident Management (SEIM) technologies, we come to log aggregation, normalization, and event correlation. Side note: before my security purist friends call me out on the acronym choice and definition, I used SEIM on purpose, as SIEM alludes to a focus on Information and not the “incident as a whole.” The SEIM process and control now focus on collecting evidence of related events across the enterprise as evidence of an Indication of Compromise (IoC) or Indication of Attack (IoA) is being sequestered. Simultaneously, the SEIM analysts orchestrate the Incident Response (IR) workflow (or playbook) to ensure all stakeholders complete the tasks necessary to mitigate risk to the organization. With all this evolution, each iteration of log management was, at one time, considered “Best Practice.”
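
The aggregation, normalization, and correlation stages can be sketched in a few lines. This is a toy illustration with made-up log formats and IP addresses, not any real SEIM product's pipeline:

```python
import re
from collections import defaultdict

# Aggregation: syslog-style lines collected from two sources, each with its
# own vendor format. All lines and addresses here are invented examples.
raw_logs = [
    "fw01 2022-07-01T10:00:01 DENY src=10.0.0.5 dst=10.0.0.9",
    "auth: 2022-07-01 10:00:03 FAILED LOGIN user=admin from 10.0.0.5",
    "auth: 2022-07-01 10:00:04 FAILED LOGIN user=admin from 10.0.0.5",
]

def normalize(line):
    """Normalization: reduce each vendor format to one common schema."""
    m = re.match(r"fw01 (\S+) (\w+) src=(\S+) dst=\S+", line)
    if m:
        return {"time": m.group(1), "event": m.group(2), "src": m.group(3)}
    m = re.match(r"auth: (\S+ \S+) FAILED LOGIN user=\S+ from (\S+)", line)
    if m:
        return {"time": m.group(1), "event": "FAILED_LOGIN", "src": m.group(2)}
    return None

# Correlation: group normalized events by source address.
by_source = defaultdict(list)
for line in raw_logs:
    event = normalize(line)
    if event:
        by_source[event["src"]].append(event["event"])

# A source seen both probing the firewall and failing logins is suspicious.
suspicious = {ip for ip, events in by_source.items()
              if "DENY" in events and "FAILED_LOGIN" in events}
print(suspicious)  # {'10.0.0.5'}
```

Neither event is alarming alone; only the correlated view across both sources surfaces the pattern, which is the value each evolutionary step above was chasing.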

There’s that term again. What we meant was that these were practices that were best “at the time.” So, that answers one variable (“Best when?”), but now, how about “for what environment?”

Let’s look at this “log management evolution” example as it applies to the Operational Technology (OT) environment or Industrial Automation Control Systems (IACS). We find that not until very recently was centralized anomaly alerting a real possibility. Until technologies emerged like those offered by Dragos and Nozomi Networks, which allow “out of band” OT SIEM data management to be integrated into IT EDR platforms like Tanium, it was physically impossible to perform log management activities (short of minimal manual review on a device-by-device basis). The limitations of the endpoints, coupled with the minimalist operating systems and very limited memory capacity of the Human Machine Interfaces (HMIs), Programmable Logic Controllers (PLCs), and Field Input/Output (Field I/O) devices, would not tolerate the additional burdens associated with even rudimentary security tools. So, the answer to the “for what environment?” variable is that the practices may not be “Best” across the collective enterprise. On to the next logical variable: “Best according to what risk?”

If one were to look across the enterprise and find very low-risk assets, we must ask ourselves: are these assets worthy of the same level of scrutiny as higher-risk assets? With unlimited resources, one may conclude the question is inconsequential. However, in an enterprise that closely controls resource utilization and operational overhead, the answer may be that those low-risk assets do not need to be as closely monitored. This diminished rigor requirement leads to the conclusion that universally applied log management practices may not be “Best” under all risk profiles from the perspective of resource utilization or operational overhead.
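
One way to operationalize that conclusion is a tiered monitoring policy keyed to asset risk score. The thresholds, detail levels, and retention periods below are illustrative assumptions only, not recommended values:

```python
# Hypothetical risk tiers mapping asset risk scores to monitoring rigor.
# Every threshold and retention period here is an invented example.
MONITORING_TIERS = [
    # (minimum risk score, log detail collected, retention days, review cadence)
    (80, "full packet + host logs", 365, "continuous"),
    (50, "host + auth logs",        180, "daily"),
    (0,  "auth failures only",       30, "weekly"),
]

def monitoring_policy(risk_score: int):
    """Return the monitoring rigor commensurate with an asset's risk score."""
    for minimum, detail, retention, cadence in MONITORING_TIERS:
        if risk_score >= minimum:
            return {"detail": detail, "retention_days": retention,
                    "review": cadence}

print(monitoring_policy(92))  # high-risk asset gets full rigor
print(monitoring_policy(15))  # low-risk asset gets minimal overhead
```

The point is not the specific numbers but the shape: rigor scales with risk instead of being applied uniformly.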

The term “Best Practices” has run its course. It is a catchphrase that seasoned security professionals increasingly question because it is so subjective. “Best,” best for whom? Best under what circumstances? Best for what risk profile?

I prefer to use more defensible terms like “leading industry-accepted standards”: standards, frameworks, and practices developed and supported broadly across the security industry. There are emerging standards and technologies and established ones. The emerging technologies and standards are still on the initial climb up the Hype Cycle. Established standards and technologies have leveled out on the Hype Cycle and have been embraced broadly across the security industry. Another term I like to use is “necessary practices”: those required to defend commensurately against genuine threats and risks.

Because we have been saying the “BP” phrase for so long, it will be a learning curve to re-train our vocabulary, but, in the long run, we will be much more accurate in our discussions and leave less to interpretation.

Sunday, July 4, 2021

What does D3FEND from MITRE bring to us?

MITRE releases D3FEND framework as a complement to its existing ATT&CK structure

The National Security Agency (NSA) announced Tuesday that the MITRE project had released the D3FEND framework, funded by the agency. The new framework aims to improve the cybersecurity of national security systems, the Department of Defense, and the defense industrial base and add defensive cybersecurity techniques to the existing ATT&CK framework. (IndustrialCyber, 2021)

D3FEND IS ON THE STREET… NOW WHAT?

What does this new framework bring to the table? As the l33tSpeak name suggests, it looks at security control capabilities through a defensive lens. The D3FEND knowledge base estimates the operational applicability of those capabilities, identifying strengths and weaknesses while developing enterprise solutions consisting of multiple capabilities. The framework looks at these control capabilities because practitioners need to know not only what threats a capability claims to address but, more specifically, how those threats are addressed from an engineering standpoint and under what circumstances the countermeasures would work. A cybersecurity countermeasure is any process or technology developed to negate or offset offensive cyber activities. It is not enough to understand what a countermeasure does (what it detects, what it prevents); we must know how it does it. A security architect must understand their organization's countermeasures precisely (what they do, how they do it, and their limitations) if countermeasures are to be effectively employed. (Kaloroumakis, Smith, 2021)

The D3FEND framework is focused on a generic taxonomy rather than vendor-specific terminology or other technological colloquialisms. Just as ATT&CK uses categories of attack criteria, the D3FEND framework focuses on five major categories and looks at the defensive posture (control functions) recommended to effect the desired outcome. These categories are:

Harden

The harden tactic is used to increase the opportunity cost of computer network exploitation. Hardening differs from Detection in that it generally is conducted before a system is online and operational.
  • Application Hardening - Application Hardening makes an executable application more resilient to a class of exploits that either introduce new code or execute unwanted existing code. These techniques may be applied at compile-time or on an application binary.
  • Credential Hardening - Credential Hardening techniques modify the system or network properties to protect the system or network/domain credentials.
  • Message Hardening - Email or Messaging Hardening includes measures taken to ensure the confidentiality and integrity of user-to-user computer messages.
  • Platform Hardening - Hardening components of a Platform to make them more challenging to exploit.
    • Platforms include components such as:
      • BIOS UEFI Subsystems
      • Hardware security devices such as Trusted Platform Modules
      • Boot process logic or code
      • Kernel software components

Detect

The detect tactic is used to identify adversary access to or unauthorized activity on computer networks.
  • File Analysis - File Analysis is an analytic process to determine a file's status. For example, viruses, trojans, benign, malicious, trusted, unauthorized, sensitive, etc.
  • Identifier Analysis - Analyzing identifier artifacts such as IP addresses, domain names, or URLs.
  • Message Analysis - Analyzing email or instant message content to detect unauthorized activity.
  • Network Traffic Analysis - Analyzing intercepted or summarized computer network traffic to detect unauthorized activity.
  • Platform Monitoring - Monitoring platform components such as operating systems software, hardware devices, or firmware.
    • Platform monitoring consists of analyzing and monitoring system-level devices and low-level components, including hardware devices, to detect unauthorized modifications or suspicious activity.
    • Monitored platform components include system files and embedded devices such as:
      • Kernel software modules
      • Boot process code and load logic
      • Operating system components and device files
      • System libraries and dynamically loaded files
      • Hardware device drivers
      • Embedded firmware devices
  • Process Analysis - Process Analysis consists of observing a running application process and analyzing it to watch for certain behaviors or conditions which may indicate adversary activity. Analysis can occur inside of the process or through a third-party monitoring application. Examples include monitoring system and privileged calls, monitoring process initiation chains, and memory boundary allocations.
  • User Behavior Analysis - Analysis of user behavior and patterns to detect unauthorized user activity.

Isolate

The isolate tactic creates logical or physical barriers in a system that reduce adversaries' opportunities to create further accesses.
  • Execution Isolation - Execution Isolation techniques prevent application processes from accessing non-essential system resources, such as memory, devices, or files.
  • Network Isolation - Network Isolation techniques prevent network hosts from accessing non-essential system network resources.

Deceive

The deceive tactic is used to advertise, entice, and allow potential attackers access to an observed or controlled environment.
  • Decoy Environment - A Decoy Environment comprises hosts and networks to deceive an attacker.
  • Decoy Object - A Decoy Object is created and deployed to deceive attackers.

Evict

The eviction tactic is used to remove an adversary from a computer network.
  • Credential Eviction - Credential Eviction techniques disable or remove compromised credentials from a computer network.
  • Process Eviction - Process eviction techniques terminate or remove the running process.

HOW THIS ALL FITS TOGETHER

The D3FEND framework is the next logical step in standardizing the correlation of attack methods and vectors to the deployed or proposed defensive controls and capabilities. These controls are necessary to reduce the impact of events. The subcategories beneath Harden, Detect, Isolate, Deceive, and Evict include public comments to support the recommended capabilities. Using the ATT&CK framework to identify techniques and potential attack methods, a security team can match the methods against threat intelligence, thereby identifying potential threats and threat vectors. It is then possible to look critically at the defensive capabilities using the D3FEND framework to find any gaps based on the threat landscape. I recommend using the bow tie methodology to graphically depict the flow from the attack vector, through defensive controls, past the event, and through compensating controls to arrive at a more realistic potential impact.
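
The gap-analysis step can be sketched as a simple set difference. The ATT&CK technique IDs below are real, but the technique-to-countermeasure mappings and the "deployed" list are illustrative assumptions, not taken from the official D3FEND knowledge base:

```python
# Hypothetical mapping of expected ATT&CK techniques to D3FEND countermeasure
# categories; the mappings and the deployed set are invented for illustration.
attack_to_defend = {
    "T1110 Brute Force":     {"Credential Hardening", "User Behavior Analysis"},
    "T1566 Phishing":        {"Message Hardening", "Message Analysis"},
    "T1021 Remote Services": {"Network Isolation", "Network Traffic Analysis"},
}

# Capabilities this (hypothetical) organization has actually deployed.
deployed = {"Message Hardening", "Network Traffic Analysis"}

def coverage_gaps(mapping, deployed):
    """For each expected technique, list the countermeasures not yet deployed."""
    return {technique: countermeasures - deployed
            for technique, countermeasures in mapping.items()
            if countermeasures - deployed}

for technique, missing in coverage_gaps(attack_to_defend, deployed).items():
    print(f"{technique}: missing {sorted(missing)}")
```

The resulting gap list is exactly what feeds the defensive-controls leg of the bow tie: each missing countermeasure is a barrier that does not yet exist between the attack vector and the event.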




Tuesday, November 10, 2020

Jumping on the bandwagon

It seems everyone is making predictions for 2021. If we learned nothing else from 2020, it is that “The best-laid schemes of Mice and Men Often go awry, and leave us nothing but grief and pain, for promised joy” (Burns, Robert, 1785). Nevertheless, I will throw my two cents in the mix.

     

For 2021, I see cybersecurity presenting four very distinct challenge areas: cybersecurity hygiene, threat profiling and tuning, insider threat, and Advanced Persistent Threat (APT) activities. There will be an uptick in increasingly brazen attack patterns from various APTs around the world. Keeping up on our cybersecurity hygiene will be paramount to our success in thwarting these onslaughts. Threat profiling will prioritize the attacks against which we will most likely need to defend. We will require more visibility into insider activity and risk scoring to understand where information exposure may be (or may be becoming) at risk. And lastly, we must know our enemy: where they strike, why, and how. As critical infrastructure upgrades its Operational Technology (OT) environments, it will open these up to more common IT-based attacks that may complicate the security of these (increasingly) connected environments.

     

Cyber Hygiene

As most security professionals know, a large percentage of nefarious events are “crimes of opportunity”: access doors left open, S3 buckets mismanaged, application developers not accounting for proper IAM in their apps, and so on. We should give special attention to strengthening our IAM provisioning and de-provisioning programs, shoring up our testing and change control processes, and streamlining the processes that close the gap from “detection to remediation” in our vulnerability management programs. These process improvements, coupled with the standard tool deployments of SEIM, NextGen AV, EDR, rogue WAP prevention, and properly configured, maintained, and deployed firewall solutions, will help defend against the mounting threats of the coming year. Oh, and, no matter your development lifecycle, implement an AppDevSec program now; not later, but now.
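
One concrete de-provisioning hygiene check is comparing the IAM account list against the HR roster. A minimal sketch, using entirely invented usernames and dates:

```python
from datetime import date

# Hypothetical HR roster and IAM account inventory; all names and dates
# are illustrative assumptions for the sake of the example.
active_employees = {"asmith", "bjones"}
iam_accounts = {
    "asmith": date(2024, 6, 1),   # last login
    "bjones": date(2024, 6, 3),
    "cdoe":   date(2023, 11, 20), # left the company; account never removed
}

def orphaned_accounts(accounts, roster):
    """Accounts with no matching active employee: prime de-provisioning targets."""
    return sorted(user for user in accounts if user not in roster)

def stale_accounts(accounts, today, max_idle_days=90):
    """Accounts idle beyond the threshold, regardless of roster status."""
    return sorted(user for user, last in accounts.items()
                  if (today - last).days > max_idle_days)

print(orphaned_accounts(iam_accounts, active_employees))  # ['cdoe']
print(stale_accounts(iam_accounts, today=date(2024, 6, 10)))
```

Run on a schedule, a check like this closes exactly the “crime of opportunity” door that a forgotten account leaves open.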

     

Threat Profiling

The focus on excellent cyber hygiene at home brings us to a shifting landscape in global APT threats. According to the latest analysis, these groups will step up (and have stepped up) their “brazen” attack attempts on their target audiences; not, however, without notable intelligence on their “calling card” Indications of Compromise (IoCs).


APT10

This Chinese group (also known as Red Apollo) seems to infiltrate “supplier” networks to either disrupt the supply chain, steal intellectual property for the Chinese Government, or leverage the supplier to dive deeper into government or customer systems. Paraphrased from their FBI “wanted poster”: APT10 conducted extensive campaigns of global intrusions into computer systems, aiming to steal, among other data, intellectual property and confidential business and technological information from at least 45 commercial and defense technology companies in at least a dozen states, from managed service providers (“MSPs”), which are companies that remotely manage the information technology infrastructure of businesses and governments around the world, and from U.S. government agencies. The victim companies targeted by ZHU HUA and ZHANG SHILONG were involved in a diverse array of commercial activity, industries, and technologies, including aviation, space and satellite technology, manufacturing technology, oil and gas exploration and production technology, communications technology, computer processor technology, and maritime technology. In addition, for example, the APT10 Group’s campaign compromised the data of an MSP and certain of its clients located in at least 12 countries, including Brazil, Canada, Finland, France, Germany, India, Japan, Sweden, Switzerland, the United Arab Emirates, the United Kingdom, and the United States.


APT41

This Chinese group (also known as Double Dragon) is transitioning from outright attacks to “hacking for hire,” or “Crime as a Service” (CaaS). Paraphrased from their FBI “wanted poster”: they face D.C. Grand Jury charges, including Unauthorized Access to Protected Computers, Aggravated Identity Theft, Money Laundering, and Wire Fraud. These charges stemmed primarily from alleged activity targeting high-technology and video gaming companies and a United Kingdom citizen.

On August 11, 2020, a Grand Jury in the District of Columbia returned an indictment against Chinese nationals QIAN Chuan, FU Qiang, and JIANG Lizhi on charges including Racketeering, Money Laundering, Fraud, Identity Theft, and Access Device Fraud.

These charges stem from their alleged unauthorized computer intrusions while employed by Chengdu 404 Network Technology Company. The defendants allegedly conducted supply chain attacks to gain unauthorized access to networks throughout the world, targeting hundreds of companies representing a broad array of industries: social media, telecommunications, government, defense, education, and manufacturing. These victims included companies in Australia, Brazil, Germany, India, Japan, and Sweden. The defendants allegedly targeted telecommunications providers in the United States, Australia, China (Tibet), Chile, India, Indonesia, Malaysia, Pakistan, Singapore, South Korea, Taiwan, and Thailand. The defendants allegedly deployed ransomware attacks and demanded payments from victims. APT41 is now taking orders for your desired level of “Christmas Chaos” ahead of the holidays. Okay, that was an editorial comment meant as a joke, but often, jokes are founded in truth.


APT34

This Iranian group (also known as Helix Kitten) targets similar entities as its sister team APT33 (also known as Elfin Team or Refined Kitten). APT33 reportedly targeted aerospace, defense, and petrochemical industry targets in the United States, South Korea, and Saudi Arabia. APT34 seems to take on a societal-disruption focus, looking at organizations in the financial, energy, telecommunications, and chemical industries, as well as critical infrastructure systems. They leverage “not so creative” methods to infiltrate their targets, using things like Microsoft Excel macros, PowerShell-based exploits, and social engineering to gain access. This use of IT-born attack vectors exacerbates the security challenges. The shift in OT within our critical infrastructure toward more common IT components and networking will open the door to more common security exploits and challenges.

This focus on critical infrastructure concerns me. In 2002, I was privileged enough to be invited to participate in the “Digital Pearl Harbor” study at the Naval War College. The findings from the study were “downplayed” and “watered down” to avoid causing too much public concern so shortly after 9/11, but the results of the scenario testing were, to most of the 90 or so participants, nothing short of terrifying. I live by a theory that “surprise attack planning,” hard dates and target parameters notwithstanding, has flexibility in the project timeline.

So, my predictions are not all gloom and doom. My actual message is that we now have an exciting opportunity to look closely and critically at ourselves. We need to evaluate our cyber hygiene, our threat profiling, and our insider threat program (to include necessary operational checks and balances), and flesh out and drive our APT mitigation strategy and playbooks. “Knowledge is power” (Bacon, Francis, 1597). The more we know about ourselves, our defenses, our gaps, and our adversaries, the more effectively we may spread our security budget.

Sunday, November 8, 2020

So, now what?

In my recent posts, I have discussed the path to “getting there”: getting certified, getting a cybersecurity degree that lands your target job, and evaluating and acquiring products and technologies that help strive for more comprehensive security. So now what? What’s missing?

     

I hinted at it, and Debbie Gordon brought it up in a comment, citing “Simulation as a standard for Cybersecurity.” Practical application, practice, and proficiency are what’s missing from the previous topics. As I’ve discussed, the piece of paper saying you did well in passing a course, class, or test, or memorizing frameworks and theories, may work to get the interview. Still, if that is the only arrow in your quiver, all the good intentions will not land you the role. “Doing” is the best way to hone proficiency. But resume bullets of positions that fit the “doing” category alone may not be enough.

     

In the early days of the CMMI (Capability Maturity Model Integration), I had to remind my clients that just because you have a very mature process (complete with documentation and continuous process improvement) does not mean it is “effective” or, for that matter, even “good.” The same goes for citing your past work at an employer. While this shows “time in grade,” it is, by no means, an indicator of proficiency.

     

Proficiency develops through “live-fire events”: events that come from left field and must be addressed with urgency. The challenge here is that opportunities to respond to an event or situation that requires grace under fire come infrequently and sporadically in many corporate environments. Much of the time, a responder or security professional will be looking at potential Indications of Compromise (IoCs) and running down rat holes. This activity is much different from quickly and efficiently employing a broad range of tools, interacting with many other departments and stakeholders, or cordoning off malware and bad actors. Simulation helps keep responders sharp beyond the preparedness that daily “run and maintain” tasks provide. Simulation standards apply to operational staff as well as job seekers. Skills atrophy is a real degradation of proficiency, driven by a lack of practice (or opportunity to practice). If the team is only going through the motions of looking down rat holes for IoCs, it can be difficult to swing quickly into full alert mode when an actual event occurs.

     

For the job seeker, to add weight to the new degree or certification, I believe there is a real need to augment the experience, credentials, and formal study with simulation standards (and scoring). Being a “Missouri Boy,” I have lived my life insisting that folks don’t just “tell me how good they are” but that they “show me.” That is what simulation exercises will do. They will prove that your potential hire, team member, or entire team is prepared to execute, and the simulation will also benchmark and score efficiency and effectiveness.

     

Creating “muscle memory” is what regular simulation can accommodate. I will use a winter driving example. An inexperienced driver (or one who has not driven in the winter for a long time) may be caught off guard by hitting a patch of ice and losing traction. Conversely, the seasoned winter driver instinctively knows how to take corrective action and navigate the condition with ease. This “muscle memory” principle is the same for cybersecurity teams as they go from routine daily activities to stepping up to an actual event. By leveraging regular simulation exercises in a “live fire” environment, teams will gel under pressure and develop a “second nature” rhythm and cadence to their risk mitigation and remediation “dance.” It comes down to the desired preparedness: would you rather your cybersecurity and IR teams be watching their feet doing the box step, or gliding across the floor doing the tango?

     

But you may be wondering if I have created a catch-22 here. I stated that “real events are few and far between,” then I said simulations should be “live fire” events. It is recommended, and almost necessary, to look to third parties to help with the simulation standards and run a safe, isolated, yet real environment.

     

An organization should look toward a Cyber Range to facilitate the environment, attack scenarios, and proficiency measurement. Unfortunately, this “Cyber Range” space is defined differently by different companies. The term does not mean the same thing from one provider to the next, and many offerings do not truly develop the desired team dynamics or proficiency.

     

Years ago, when MSSPs were cropping up on the scene, there was a D.C.-based company, RipTech, that shone brightly very early on. RipTech was a managed SOC to which all other SOC providers aspired. Their secret was that they delivered what the client needed and expected. I have found a “RipTech-like” provider in the emerging Cyber Range space, one that sets the bar for what a real Cyber Range, offering Simulation as a Standard, should be. If you’re intrigued by the concept of increasing proficiency and effectiveness and developing repeatable “muscle memory” in your teams, I would be happy to assist.

     

Please send me a note on LinkedIn to discuss simulation options. I will be glad to help define success criteria, custom needs, and desired metrics while making introductions for your organization to the “cream of the (Cyber Range) crop.”

     

Saturday, November 7, 2020

The cream of the crop always rises to the top

Over my career, I have watched the cybersecurity vernacular normalize, abuse, and overuse the term “best.” Best is a very subjective term in cybersecurity. Best for whom, for what risk tolerance level, or for what threat profile? The same question, “of what does ‘Best’ consist?”, applies when looking at security products and technologies. Features and benefits outlined in the sales cycle may tempt one into believing one technology is superior. When it is all examined with clarity of expectations for a specific organization, it may become apparent that “Best for some may not be Best for the application at hand.”

It reminds me of an analogy a friend and colleague of mine made. As the VCR was building in popularity, features like multiple timer events could be programmed through a menu interface displayed on the playback TV screen (“on-screen display” or OSD). This feature allowed several programs to record at different times without further user intervention and became a central selling point. But, sadly, for those who paid dearly for these enhanced features, the reality was that most consumers only used “Play,” “Fast Forward,” “Stop,” and “Rewind.”

This phenomenon can happen to any organization sans a formal Proof of Concept (POC) product selection process. This repeatable and customizable process should first document the expectations of the organization, with questions like “what challenge will this technology or tool solve?” and “what are the threat vectors, threat profiles, and typical methods of exploitation this technology or tool will need to address?” This shortlist of questions is, by no means, exhaustive. The initial questions should frame and define the custom scope and expectations for organizational applicability and nuances.

    In my experience, many organizations put “technical blinders” on during their POC efforts. They look at technical superiority or ease of deployment, but that technological focus is less than half the story. The “operational sustainability” quotient is frequently overlooked: current staffing levels, current staff training and expertise, upkeep and maintenance, operational configuration, features under consideration that may require cultural adaptation, and any HR or legal concerns raised by a new capability. These and other organization-specific challenges need to be weighed and measured.

    Once comprehensive criteria are established and weighting is applied appropriately, the “cream of the crop” will naturally rise to the top. CISOs will want to consider leaving some scoring leeway to account for business-related differentiators such as “How easy is the supplier to work with?” or, of course, the almighty pricing (and negotiation experience).
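    Sketching the weighting mechanics makes the point concrete. The criteria, weights, and vendor scores below are purely hypothetical placeholders; the idea is that a weighted total can surface a winner different from the most technically impressive tool:

```python
# Hypothetical POC scoring sketch. Criteria names, weights, and vendor
# scores are illustrative placeholders, not a prescribed methodology.

# Weights reflect the organization's documented expectations (sum to 1.0).
weights = {
    "technical_fit": 0.30,
    "operational_sustainability": 0.30,
    "staff_expertise_required": 0.20,
    "vendor_ease_of_business": 0.10,
    "pricing": 0.10,
}

# Raw POC scores per vendor on a 1-5 scale, gathered during evaluation.
vendors = {
    "Vendor A": {"technical_fit": 5, "operational_sustainability": 2,
                 "staff_expertise_required": 2, "vendor_ease_of_business": 4,
                 "pricing": 3},
    "Vendor B": {"technical_fit": 4, "operational_sustainability": 4,
                 "staff_expertise_required": 4, "vendor_ease_of_business": 3,
                 "pricing": 4},
}

def weighted_score(scores, weights):
    """Combine raw criterion scores into a single weighted total."""
    return sum(scores[c] * w for c, w in weights.items())

ranked = sorted(vendors, key=lambda v: weighted_score(vendors[v], weights),
                reverse=True)
for v in ranked:
    print(f"{v}: {weighted_score(vendors[v], weights):.2f}")
```

    Note that in this toy example Vendor B rises to the top despite a lower raw technical score, precisely because operational sustainability carries real weight.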


    Thursday, November 5, 2020

    From the Trenches (episode one)

    Moving mountains cannot be accomplished alone. Most CISOs face similar questions when “pitching” an unfamiliar (to the organization) security control or technology. No matter which framework or standard the company has chosen to follow, the executive team will wonder “What do other organizations do?”, “Is this security challenge relevant to our industry?” and “Is this the most current or effective control or is there another way our peers have found for meeting the security challenge?” The next question may be “Is this an appropriate level of rigor for our size and market share?”

    Being able to answer these questions (or having the answers readily available) will be critical for a CISO in gaining support for budget and controls, which will ultimately add to a CISO’s security capital and potentially protect their job.

    Boiling the ocean is never a good strategy. Breaking security controls down into a “shortlist” of necessary practices (necessary for your organization’s size and risk profile) has proven to be the most effective way to initiate the implementation of any security control or program. Picking the top 20 categories and their subsequent controls can give any organization a significant leg up, with better control of the risk landscape while maintaining a manageable operational overhead.

    Knowing how these categories and controls map to the major standards will further help a CISO in making the case for recommended improvements. Leadership should keep several things in mind: risk tolerance and appetite (mapped to necessary practices), cultural perceptions and end-user experience, and operational sustainability. Addressing these concerns during the initial request for resources, coupled with peer statistics and metrics, may help fully flesh out a business case for process, policy, training, or technological improvements.
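    As a purely illustrative sketch, a simple crosswalk table can back up the “mapping to major standards” point. The specific control identifiers below are examples only and should be verified against the current published versions of each framework:

```python
# Illustrative mapping of shortlist controls to major frameworks.
# The control identifiers below are examples; verify them against the
# current published versions of each standard before citing them.
control_map = {
    "Asset inventory": {
        "CIS Controls": "Control 1 (Inventory of Enterprise Assets)",
        "NIST CSF": "ID.AM",
        "ISO/IEC 27001": "A.8 (Asset management)",
    },
    "Security awareness training": {
        "CIS Controls": "Control 14 (v8)",
        "NIST CSF": "PR.AT",
        "ISO/IEC 27001": "A.7.2.2",
    },
}

def crosswalk(control):
    """Return the framework references for one shortlist control."""
    return control_map.get(control, {})

print(crosswalk("Asset inventory")["NIST CSF"])  # ID.AM
```

    A table like this, kept current, lets a CISO answer “What do other organizations do?” with a specific framework citation instead of a generality.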

    From the trenches (episode two)

    A ubiquitous question posed to any longtime Cybersecurity practitioner is, “How can one get into the Cybersecurity field after focusing on a non-technical or non-cyber vocation?”

    This question is complicated to answer on many levels, because companies looking for Cybersecurity talent need people who can jump in and add value from the start.

    There are limited intern openings almost every spring, but these are impractical for a working professional with full-fledged obligations, and there is only a 50/50 chance of a follow-on offer.

    The Cybersecurity “boot camp” is a recent development. These boot camps typically prepare the student to pass an exam and rapidly obtain a certification. The challenge with this learning style is that a seasoned security professional will see right through the regurgitation of “frameworks” or “best practices.” The craft of cybersecurity is a comprehensive understanding of balancing risk and business objectives and applying necessary practices commensurate with both.

    More and more outstanding Cybersecurity degree programs have been cropping up that provide a good “ground up” view of the field, each with a unique focus and curriculum designed for various student end goals. Some push students toward industry certifications, some focus on audit, while others take a deep technical dive. Many universities employ professors who are active practitioners, like the Webster University Master’s Degree program. http://www.webster.edu/catalog/current/graduate-catalog/degrees/cybersecurity.html

    The Webster University Master of Science (MS) in Cybersecurity readies individuals for demanding positions in the public and private sectors overseeing, operating, or protecting critical computer systems, information, networks, infrastructures, and communications networks. Students entering the cybersecurity program should have knowledge of computer systems and digital networks, familiarity with internet and wireless applications, and solid mathematical skills (high school algebra and exposure to trigonometry) along with written and oral communication skills.

    Graduates will be capable of explaining the essential principles and theories used throughout the field of cybersecurity. Graduates will also be capable of applying knowledge in the field of cybersecurity to analyze real-world problems. And finally, graduates will be capable of effectively integrating cybersecurity knowledge to propose solutions to real-world problems.

    Yet one more shift is taking place in the Cybersecurity education landscape. Partnerships between businesses and universities have made higher education in Cyber more attainable for non-technical, career-changing learners. An exceptional program recently brought to my attention is the 100% online NYU Tandon Bridge program. Students accepted to this program are equipped with the tools to hit the Cybersecurity field running, no matter their previous occupation or course of study. It is an affordable, flexible way for students with a non-technical background to gain admittance into select graduate programs.

    https://engineering.nyu.edu/academics/programs/cybersecurity-ms-online/nyu-cyber-fellows

    https://engineering.nyu.edu/academics/programs/nyu-tandon-bridge

    Through the Tandon Bridge program, there are many STEM Master’s degree programs at NYU a student may shoot for, but for the focus of this discussion, I will simply highlight the Cybersecurity M.S. and the NYU Cyber Fellows program.

    The NYU Cyber Fellows program provides a 75% scholarship toward tuition for the elite online Cybersecurity Master’s Degree. Thanks to generous industry support, this first-of-its-kind program is offered at an affordable price of approximately $17,000 and includes access to a hands-on virtual lab, industry collaborations, an industry-reviewed curriculum, exclusive speaker events, and peer mentors. This multi-company industry input into the learning experience makes the program stand out. This junction between higher education and multi-company exposure is where “practical application” and operational sustainability skills are tested and shared.

    By closing the financial gap through endowments and scholarships, companies and higher education can help fill the cybersecurity talent gap through attention to structuring coursework along with affordability. If an employee can leverage a company’s tuition assistance program, break into a desirable field, and carry a minimal financial burden, the incentives for success are visible to all.

    Friday, October 16, 2020

    SEIM, SIEM and UEBA

    A critical component of securing an enterprise is a tool to concatenate and correlate the data from events across security logs and tools. This grew from what we used to call "Log Aggregation, Correlation, and Normalization". As the volume of data coming through these systems became unmanageable, the evolution of the tools started alerting on event types. 
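    The aggregation, correlation, and normalization step can be sketched in a few lines. The log formats and field names below are hypothetical, but they show the core idea: mapping heterogeneous raw lines into one common event schema and then correlating across sources:

```python
import re
from datetime import datetime

# Hypothetical raw lines from two different sources; real formats vary widely.
RAW = [
    "2020-10-16 09:01:12 sshd[411]: Failed password for admin from 10.0.0.7",
    "fw,2020-10-16T09:01:15,DENY,10.0.0.7,203.0.113.9,443",
]

def normalize(line):
    """Map a raw log line to a common {time, source, src_ip, action} schema."""
    m = re.match(r"(\S+ \S+) sshd\[\d+\]: Failed password for \S+ from (\S+)", line)
    if m:
        return {"time": datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S"),
                "source": "sshd", "src_ip": m.group(2), "action": "auth_fail"}
    if line.startswith("fw,"):
        _, ts, action, src, dst, port = line.split(",")
        return {"time": datetime.fromisoformat(ts), "source": "firewall",
                "src_ip": src, "action": action.lower()}
    return None

events = [e for e in map(normalize, RAW) if e]

# Correlation: group normalized events by source IP across tools, so one
# actor's activity in separate logs becomes a single picture.
by_ip = {}
for e in events:
    by_ip.setdefault(e["src_ip"], []).append(e["action"])

print(by_ip)  # {'10.0.0.7': ['auth_fail', 'deny']}
```

    Once events share a schema, the alerting layer the post describes becomes a matter of rules or analytics over that normalized stream.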

     The newer tools are classified as either a SEIM or a SIEM. While both acronyms identify a tool's basic functionality, I offer the opinion that there are subtle nuances that place a tool in one or the other category. While the terms are used almost interchangeably, looking at the acronyms separately shows they have focuses that fit differing objectives:

    • SEIM stands for Security Event and Incident Management, while 
    • SIEM stands for Security Information and Event Management 

    Both tools do the basics of a SIEM: gathering security information from the various tools and logs necessary for a holistic view of events across the security landscape. I believe a tool in the SEIM category would go further, not only identifying events but also including a workflow engine and evidence-handling capacity to facilitate incident management across the enterprise.

    One point that often gets overlooked by technologists is that responding to incidents across the enterprise includes many steps taken outside of IT that also need to be captured, recorded, and forensically held in a defensible chain of custody. This workflow may be managed by personnel outside of IT or IT Security.
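    One technical approximation of a defensible chain of custody is to hash-chain each custody record to the one before it, so tampering with any earlier entry becomes detectable. This is a toy sketch under simplified assumptions, not a substitute for a proper forensic evidence platform:

```python
import hashlib
import json

def add_record(chain, actor, action, item):
    """Append a custody record whose hash covers the previous record's hash,
    making after-the-fact edits to earlier entries detectable."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"actor": actor, "action": action, "item": item, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """Recompute every hash in order; any edited record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {k: rec[k] for k in ("actor", "action", "item", "prev")}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or recomputed != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

custody = []
add_record(custody, "analyst1", "collected", "disk-image-042")
add_record(custody, "legal", "reviewed", "disk-image-042")
print(verify(custody))   # True
custody[0]["action"] = "modified"
print(verify(custody))   # False: tampering detected
```

    The same idea is why the workflow and evidence handling belong inside the tool rather than in ad hoc spreadsheets: integrity of the record is part of defensibility.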

    This cross-organizational workflow is a piece that is evolving at best and has brought about the rise of Security Orchestration, Automation, and Response (SOAR) platforms. Most SEIM vendors are focused on User and Entity Behavior Analytics (UEBA), which includes machine learning of baseline activities and alerting on anomalous behaviors. This automation, while useful in expanding the potential to capture more suspect activities, does not help the "defensibility" of responding to an incident beyond the technical realm (more on UEBA in another post). As a consumer, I would want my SEIM to include SOAR functionality or, at minimum, an integration into my enterprise workflow management platform. There are so many disconnected choices out there, from SOAR at the firewall, to standalone products, to limited functionality in some SEIM solutions.

    I have found there are two types of SEIM vendors: mature security tools that are trying to incorporate UEBA, and mature UEBA tools trying to snap on SEIM functionality, both of which have their challenges. It really comes down to the applicability of the functionality in your particular use case. If your SOC team has the ability to "swivel chair" for event and incident management logging and tracking, then the more mature UEBA tool, which may have some limitations in SEIM and SOAR functionality, may be acceptable.
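    The per-entity baseline idea behind UEBA can be illustrated with simple statistics. Real products use far richer machine-learned models; the user names and activity counts below are invented for illustration:

```python
from statistics import mean, stdev

# Hypothetical daily login counts per user over a training window.
history = {
    "alice": [12, 10, 11, 13, 12, 11, 12],
    "bob":   [3, 4, 2, 3, 4, 3, 2],
}

def is_anomalous(user, todays_count, z_threshold=3.0):
    """Flag activity more than z_threshold standard deviations from the
    user's own baseline: the per-entity idea at the heart of UEBA."""
    counts = history[user]
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return todays_count != mu
    return abs(todays_count - mu) / sigma > z_threshold

print(is_anomalous("bob", 3))    # False: within bob's normal range
print(is_anomalous("bob", 40))   # True: far outside bob's baseline
```

    The key design point is that the baseline is per user or entity, so 40 logins may be normal for one account and a loud anomaly for another.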

    Friday, April 11, 2014

    Asset Managers Prepare: SEC Cybersecurity Sweep Audits Are a Reality

    With the recent announcement by Jane Jarcho, National Associate Director for the Securities and Exchange Commission's Asset Manager exam program, that examiners “will be looking to see what policies are in place to prevent, detect and respond to cyber attacks,” as well as at vendor access and vendor due diligence, the SEC has ramped up its Cybersecurity Assessment program out of its Chicago office.

    The practice of Asset management, broadly defined, refers to any system that monitors and maintains things of value to an entity or group. The context of Jarcho’s statements was directed towards Investment Managers.

    Investment management is the professional asset management of various securities (shares, bonds and other securities) and other assets (e.g., real estate) in order to meet specified investment goals for the benefit of the investors. Investors may be institutions (insurance companies, pension funds, corporations, charities, educational establishments etc.) or private investors (both directly via investment contracts and more commonly via collective investment schemes e.g. mutual funds or exchange-traded funds).

    The new focus is to "require" asset managers to disclose Cybersecurity breaches. The treatment of cyber events (which has traditionally been "guidance" but is now under pressure to become "rule") is being prepared to meet the registration filing process for "material changes" as described in the SEC's CF Disclosure Guidance, Topic No. 2: Cybersecurity.

    From the SEC CF Disclosure Guidance Topic Number 2: Cybersecurity 

    Disclosure by Public Companies Regarding Cybersecurity Risks and Cyber Incidents

    The federal securities laws, in part, are designed to elicit disclosure of timely, comprehensive, and accurate information about risks and events that a reasonable investor would consider important to an investment decision. Although no existing disclosure requirement explicitly refers to Cybersecurity risks and cyber incidents, a number of disclosure requirements may impose an obligation on registrants to disclose such risks and incidents. In addition, material information regarding Cybersecurity risks and cyber incidents is required to be disclosed when necessary in order to make other required disclosures, in light of the circumstances under which they are made, not misleading. Therefore, as with other operational and financial risks, registrants should review, on an ongoing basis, the adequacy of their disclosure relating to Cybersecurity risks and cyber incidents.

    The SEC is ramping up to perform sweep assessments: very targeted assessments against very succinct criteria. In this case the SEC is looking to see whether Asset Managers are "doing the right things." This comes as no shock or revelation to Cybersecurity professionals, but Asset Managers have never really been held "accountable" for Cybersecurity in the past. Many Investment Managers strive to follow standards and industry-accepted practices, but they may find they do not have tight control over their Broker/Dealer networks. Alliance-partner attestations of Cybersecurity control alone will not indemnify the brand and reputation following an incident. Due diligence and a focus on standards become more and more important as regulatory scrutiny increases.

    Some of the items the SEC will be asking Investment Managers to provide seem rudimentary to a Cybersecurity professional but may not be readily accessible to some organizations. Getting an early start on identifying and compiling the pieces of evidence being requested is key not only to a successful Cybersecurity-focused sweep assessment but also to double-checking, validating, or shoring up a comprehensive Cybersecurity program. Having a third party assist in interpreting this evidence and clarifying the objective of each request will ease the pain and confusion that may precede a formal assessment.

    Some highlights from the SEC’s list of items they will be asking for are:

    Relevant Policies and Procedures for:
    • Physical Device Inventory
    • Software platform and application inventory
    • Network Maps
    • Cataloging of External Connections to the network
    • Resource Classification based on risk
    • Logging capabilities
    • Written Security Policy
    • Risk Assessment Processes
    • Cybersecurity threats
    • Physical security threats
    • Systemic Cybersecurity roles and responsibility/accountability
    • Business Continuity Plan
    • Cybersecurity Ownership/Leadership (CISO or equivalent)
    • Cybersecurity Insurance maintenance and coverage
    • Cybersecurity Risk Management (adherence to Standards)
    • Cybersecurity Network Controls, testing, and Staff Training
    • Engineering
    • Application Development
    • Configuration Standards
    • Distributed Denial of Service resiliency
    • Data Retention and Destruction
    • Incident Response
    • Encryption
    • Audits
    • BCP and DR
    • Risks associated with Remote Customer Access and Asset Transfer Requests
    • Online Access
    • Identification, Authentication, and Access Management
    • User Credential protection
    • E-mail authentication
    • Risks associated with Vendors and Third Parties
    • Detection of Unauthorized Activity
    • Documentation of any incidents since January 1, 2013


    There is much more detail they will be requesting to support the topics highlighted above. Bottom line here is that the SEC is looking to see that the organization is following industry accepted practices for Cybersecurity. Identifying supporting evidence will greatly reduce the frustration and churn that may result from being subjected to these sweep assessments. My advice is to get a jump on this expectation by identifying and validating the supporting evidence early.
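    One low-tech way to get that jump is to track each requested item against the evidence on hand. The item names below are drawn from the list above; the status values and coverage are illustrative:

```python
# Minimal readiness tracker for a sweep-assessment request list.
# Items are drawn from the SEC list above; statuses are illustrative.
request_items = {
    "Physical Device Inventory": "collected",
    "Network Maps": "collected",
    "Written Security Policy": "in_review",
    "Incident Response": "missing",
    "Cybersecurity Insurance maintenance and coverage": "missing",
}

def readiness(items):
    """Summarize how much of the request list has supporting evidence
    and list the open gaps that still need attention."""
    done = sum(1 for s in items.values() if s == "collected")
    gaps = sorted(k for k, s in items.items() if s == "missing")
    return done / len(items), gaps

pct, gaps = readiness(request_items)
print(f"{pct:.0%} collected; open gaps: {gaps}")
```

    Even a simple tracker like this surfaces the gaps early, which is exactly the validation and shoring-up opportunity described above.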