Last week I participated in a panel on Continuous Monitoring at FOSE. Joining me were Mark Crouter from MITRE as the moderator, John “Rick” Walsh, chief of technology and business processes in the Cybersecurity Directorate of the Army’s Office of the CIO, and Angela Orebaugh, Fellow and Senior Associate at Booz Allen Hamilton. Auspicious company indeed.
For those not tuned into the federal government’s cybersecurity initiatives: the concept of continuous monitoring evolved from the original approach in FISMA (the Federal Information Security Management Act), which mandated annual reviews of federal agencies’ security programs. After a few years of implementation it was widely recognized that the reviews generated rooms full of paper, obsolete as soon as it was printed, without elevating information security programs to an acceptable level of effectiveness. Between 2006 and 2010, the number of security incidents rose by over 650%. The resulting strategy is embodied in FISMA 2012 (2.0), which is aimed at continuously monitoring security controls, determining gaps between current and accepted security baselines, and quantifying risk.
Rick has been facing the challenges of implementing continuous monitoring within the government. His experience has been that differing business processes, missions, and systems create obstacles, but once those are overcome, the solution yields financial and process efficiencies as well as improved security. One of the biggest challenges is enumerating assets, but doing so is sure to reveal duplicated systems and opportunities to consolidate both systems and software licensing.
Angela framed the conversation in her intro, which was appropriate since she co-authored NIST Special Publication 800-137, Information Security Continuous Monitoring for Federal Information Systems and Organizations. She has also been involved with the Security Content Automation Protocol (SCAP, pronounced ess-cap) project, which provides a set of standards for describing vulnerabilities (CVE, common vulnerabilities & exposures), systems (CPE, common platform enumeration), and configuration standards (CCE, common configuration enumeration), as well as a scoring system (CVSS), a checklist description format (XCCDF), and a vulnerability definition language (OVAL). Angela advocated use of SCAP as a foundation for continuous monitoring.
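To make the alphabet soup concrete, here is a minimal sketch of how those SCAP identifiers hang together in a single finding record. The `Finding` class and the sample CVE/CPE values are illustrative assumptions, not part of any SCAP tool; the severity bands do follow the well-known CVSS v2 ranges used by the NVD (7.0+ High, 4.0–6.9 Medium, below 4.0 Low).

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """Hypothetical record tying SCAP identifiers together (illustrative only)."""
    cve_id: str       # the vulnerability (CVE)
    cpe_id: str       # the affected platform (CPE)
    cvss_base: float  # severity score (CVSS, 0.0-10.0)

def cvss_severity(score: float) -> str:
    """Map a CVSS v2 base score to the NVD severity bands."""
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    return "Low"

# Illustrative values -- not a statement about any real advisory.
f = Finding("CVE-2012-0001", "cpe:/o:microsoft:windows_7", 9.3)
print(f"{f.cve_id}: {cvss_severity(f.cvss_base)}")
```

Because every scanner and asset database speaks the same identifiers, findings from different vendors can be correlated without translation, which is exactly why SCAP makes a workable foundation for continuous monitoring.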
Questions from the audience mainly focused on how to implement continuous monitoring, including getting buy-in from senior management and budgeting. The key is to show short-term results that are meaningful to business stakeholders. While continuous monitoring is in the process of being mandated, the danger is treating it as a checklist and doing the bare minimum to comply. Done right, continuous monitoring can be the cornerstone of real security improvements: interrupting the kill chain through early attack detection, providing total visibility (including for troubleshooting operational problems), and giving management a security dashboard with both technical and business gauges. The State Department was one of the first successful adopters of continuous monitoring and was able not only to reduce its high-risk vulnerabilities by 90%, but also to slash the cost of certification and accreditation by 62%.
One of the more amorphous questions was: how continuous is continuous? Does data need to be analyzed in real time or near real time? Does this apply to all systems? The answer depends on each agency’s goals and on the telemetry its systems can provide. Organizations don’t want to retool systems to emit events as they occur, unless those systems are critical enough to warrant the cost and effort and there is no other way to gain the needed visibility. The panelists agreed that some systems, such as vulnerability scanners, only need to report into a central monitoring solution occasionally, while network monitoring should report in near real time, which means one-minute intervals for most systems that create NetFlow records. Ultimately, there is no one-size-fits-all answer.
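One way to operationalize "it depends" is to give each telemetry source its own expected cadence and flag anything that falls silent for longer. This is a sketch under stated assumptions: the source names and intervals are illustrative (only the one-minute NetFlow cadence comes from the panel discussion above), not prescriptions.

```python
from datetime import timedelta

# Illustrative cadences per telemetry source; tune to your own agency's goals.
REPORTING_INTERVALS = {
    "netflow_collector": timedelta(minutes=1),    # near real-time network telemetry
    "config_audit": timedelta(days=1),            # daily configuration checks
    "vulnerability_scanner": timedelta(days=7),   # occasional batch reporting
}

def is_stale(source: str, since_last_report: timedelta) -> bool:
    """Flag a source whose silence has exceeded its expected cadence."""
    return since_last_report > REPORTING_INTERVALS[source]

print(is_stale("netflow_collector", timedelta(minutes=5)))     # stale
print(is_stale("vulnerability_scanner", timedelta(days=2)))    # fine
```

The point of the sketch is that "continuous" is a property of each feed, not of the whole program: the monitoring loop stays continuous even when individual sources report on very different clocks.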
My overall impression from the panel is that continuous monitoring in the federal sector is what we call Security Intelligence in private industry, and both need to be defined and implemented to fit the enterprise or agency’s specific needs. The primary difference is that continuous monitoring is focused on metrics: quantifying the delta between the expected and measured states of assets, and classifying those differences as vulnerabilities. The scorecard approach gives different organizations a common baseline for comparing themselves against one another, and gives management a way to understand the organization’s security posture at any given moment and to compare it against past performance.
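The metrics idea reduces to a simple computation: diff the accepted baseline against the measured state, treat each mismatch as a finding, and roll the result up into a score. The setting names and values below are hypothetical examples, not any agency's actual baseline.

```python
# Accepted security baseline vs. what a scan actually measured (illustrative values).
expected = {"ssh_root_login": "disabled", "password_min_length": "12", "telnet": "disabled"}
measured = {"ssh_root_login": "enabled",  "password_min_length": "12", "telnet": "disabled"}

# Each delta between expected and measured state becomes a finding.
findings = [
    {"setting": k, "expected": v, "measured": measured.get(k)}
    for k, v in expected.items()
    if measured.get(k) != v
]

# A scorecard-style rollup: fraction of controls matching the baseline.
compliance = 1 - len(findings) / len(expected)
print(f"Findings: {len(findings)}, compliance: {compliance:.0%}")
```

Run over time, the same rollup yields the trend line that lets management compare today's posture against past performance, which is what the scorecard approach is really for.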
I was asked at the GTRA conference how the public and private sectors differ. My view is that the government does more up-front analysis and planning, while the private sector sees a need and builds a solution. Between well-considered frameworks like FISMA 2.0 and tools like QRadar and OpenPages, the federal government and industry have an opportunity to collaborate on a complete Security Intelligence solution incorporating continuous monitoring and meaningful security scorecards and dashboards.
Click here to learn how Security Intelligence can help federal organizations address continuous monitoring requirements. Find out how QRadar Risk Manager addresses the need for configuration auditing, and for assessing the risk of configuration changes, across multi-vendor network environments (switches, routers, firewalls, and IDS/IPS).