October 29, 2014 | Written by: IBM Cloud Staff
By Matt Rutkowski
Can I trust an OpenStack cloud?
This question is probably not one that many of us developing in or on OpenStack ask ourselves, or even consider. With the recent announcement of OpenStack Juno, the milestone 10th release, and all of its compelling new features drawn from existing and new components, wide customer adoption of OpenStack might seem a foregone conclusion. The reality, however, is that a large number of enterprise customers still face challenges moving to an OpenStack cloud.
These challenges have nothing to do with OpenStack lacking infrastructure as a service (IaaS) functions such as exposing vCPU topologies to guest images in Nova or supporting Backup and Restore for Couchbase in Trove. Instead, customers face the challenge of auditing and monitoring their valuable workloads and data in the cloud in accordance with their strict corporate, industry or regional policies.
(Related: A guide to the OpenStack Juno release)
Yes, you can trust OpenStack by using its built-in, standardized CADF auditing
I am here to let you know that since the Havana release, the OpenStack community has been continuously adding auditing support to OpenStack using an open standard called Cloud Auditing Data Federation (CADF), which is developed by the Distributed Management Task Force (DMTF). The analysis of CADF audit records can be an effective way to help customers prove their various policies are being enforced within OpenStack.
The CADF model answers the critical questions about any activity or event that might happen in any software service (in OpenStack or at other layers) in a normative manner, using CADF's seven Ws of auditing:
- What activity occurred, and what its outcome was
- When it happened
- Who (which user or service) initiated it
- FromWhere it was initiated (the initiator's host or address)
- OnWhat resource the activity was performed
- Where the activity was observed (the reporting service)
- ToWhere the activity was directed
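To make the seven Ws concrete, here is a hand-written sketch of a CADF activity event as a Python dictionary. The overall shape (typeURI, eventType, action, outcome, initiator, target, observer) follows the CADF standard, but every identifier and address below is an invented, illustrative value:

```python
# Hand-written sketch of a CADF activity event (illustrative values only),
# showing how the standard's auditing questions map onto event fields.
cadf_event = {
    "typeURI": "http://schemas.dmtf.org/cloud/audit/1.0/event",
    "eventType": "activity",
    "id": "openstack:example-event-id",        # unique event identifier
    "eventTime": "2014-10-29T12:00:00+00:00",  # WHEN the action happened
    "action": "create",                        # WHAT action was attempted
    "outcome": "success",                      # WHAT the result was
    "initiator": {                             # WHO performed the action
        "id": "openstack:example-user-id",
        "typeURI": "service/security/account/user",
        "host": {"address": "10.0.0.5"},       # FROM WHERE it was initiated
    },
    "target": {                                # ON WHAT resource it acted
        "id": "openstack:example-server-id",
        "typeURI": "compute/server",
    },
    "observer": {                              # WHERE the event was observed
        "id": "openstack:nova-api",
        "typeURI": "service/compute",
    },
}

# The key questions can then be answered directly from the record:
print(cadf_event["action"], cadf_event["outcome"], cadf_event["initiator"]["id"])
```

Because every service emits the same normalized shape, downstream audit tooling can answer these questions uniformly regardless of which component produced the record.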
The ability to audit OpenStack started in Havana release
During the Havana release, a Python implementation of the CADF standard (pyCADF) was created and introduced as both a library for components that wished to generate CADF-formatted audit events directly and also as a pluggable Web Server Gateway Interface (WSGI) middleware filter that uses the library. This audit filter can be added to the application programming interface (API) processing pipeline of any OpenStack component, including Nova, Neutron, Cinder and so forth.
Using Nova as an example, the following diagram shows how the filter gets inserted into the pipeline (after being included in the service’s api-paste.ini file):
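For illustration, the api-paste.ini changes might look like the fragment below. The module path and pipeline names are representative of a Havana-era Nova configuration and may differ between releases and deployments, so treat this as a sketch rather than a copy-paste recipe:

```ini
# Illustrative api-paste.ini fragment (Havana-era names; adjust to your release).
# 1. Define the CADF audit filter backed by the pyCADF middleware:
[filter:audit]
paste.filter_factory = pycadf.middleware.audit:filter_factory
audit_map_file = /etc/nova/api_audit_map.conf

# 2. Insert "audit" into the API pipeline, after authentication, so every
#    authenticated API call is observed before it reaches the application:
[composite:openstack_compute_api_v2]
use = call:nova.api.auth:pipeline_factory
keystone = faultwrap sizelimit authtoken keystonecontext ratelimit audit osapi_compute_app_v2
```

Placing the filter after authtoken matters: the audit middleware can then record the authenticated initiator identity rather than an anonymous request.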
During the Havana release, the filter was designed to handle the majority of OpenStack v2 APIs, though it was primarily tested against the most popular Nova APIs.
Icehouse and Juno releases see CADF auditing support increase
The auditing goal of the Icehouse release was to expand support and testing of the AuditMiddleware filter to formally cover Glance, Cinder, Neutron and Swift, and to create customizable mapping files that give component owners better control over how OpenStack resource names are mapped to the CADF standard's normalized names. We also expanded the filter to handle nearly every Nova API, even ones that essentially "wrapped" other APIs in the body of the HTTP message. In recognition of this expanding support, pyCADF was promoted to a core library, and both the library and the filter were updated to use Oslo Messaging.
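The mapping file itself is a small ini-style configuration. A hypothetical fragment for Nova might look like the following; the section names follow the general shape of the audit map files, but the keys and values here are illustrative, so check the sample file shipped with your release:

```ini
# Hypothetical api_audit_map.conf fragment for Nova (illustrative only).
[DEFAULT]
# Fallback taxonomy type when a path cannot be classified
target_endpoint_type = None

[path_keywords]
# Map URL path segments to the normalized CADF resource names
servers = server
flavors = flavor

[custom_actions]
# Map non-CRUD API actions onto CADF action taxonomy names
reboot = start/reboot

[service_endpoints]
# Map the service type onto the CADF service taxonomy
compute = service/compute
```

This is what lets the generic filter translate component-specific URLs and actions into the normalized names that cross-service audit analysis depends on.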
Also during Icehouse, I was pleased that Keystone core developers took notice of CADF and its potential, and worked to directly emit CADF events for every user authentication request coming from every OpenStack service that uses Keystone. Now we had layered proof from CADF audit event records showing exactly what tenant administrators and users were doing at both the API and access control levels. At the Juno summit we were able to demonstrate Icehouse services sending CADF audit events to IBM Security QRadar SIEM (see this video). The standing-room-only audience of this general session was very receptive to seeing real customer policies and rules being enforced using CADF. We set up rules that alert administrators in real time when strange or anomalous uses of an OpenStack cloud occur.
In the Juno release, and with the incredible support of awesome Keystone developers such as Brad Topol, Brant Knudson, Dolph Mathews and Steve Martinelli, Keystone not only took ownership of the pyCADF library in order to assure quality of the codebase, but also expanded its CADF support. Now, Keystone emits CADF events that allow customers to track both federated identity management (FIM) authentication and role assignment (such as tenant or group) activities. This means that customers can monitor how their users are authenticated with any external identity provider (IdP) and can see all role-based access control (RBAC) permissions being granted to them as well.
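Once Keystone emits these records, the monitoring side becomes straightforward stream analysis. The toy sketch below is not how QRadar or Keystone actually implement this; the event dictionaries and threshold are invented purely to show the idea of a rule that flags initiators with repeated authentication failures:

```python
from collections import Counter

def flag_repeated_failures(events, threshold=3):
    """Return initiator ids with at least `threshold` failed authentications.

    `events` is an iterable of CADF-style dicts; only records whose action
    is "authenticate" and whose outcome is "failure" are counted.
    """
    failures = Counter(
        e["initiator"]["id"]
        for e in events
        if e.get("action") == "authenticate" and e.get("outcome") == "failure"
    )
    return sorted(who for who, count in failures.items() if count >= threshold)

# Toy stream: one user fails three times, another succeeds once.
stream = (
    [{"action": "authenticate", "outcome": "failure",
      "initiator": {"id": "user-a"}}] * 3
    + [{"action": "authenticate", "outcome": "success",
        "initiator": {"id": "user-b"}}]
)
print(flag_repeated_failures(stream))  # -> ['user-a']
```

Because the action and outcome fields are standardized by CADF, the same rule works unchanged whether the failing authentications were observed by Nova, Cinder or Keystone itself.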
What does the future hold for CADF in OpenStack?
As we prepare for the upcoming OpenStack Kilo summit, we are looking to continue getting the message out to new components that may not be aware of pyCADF or the CADF AuditMiddleware filter. We want these components to assist us in creating a custom mapping file that helps the filter accurately interpret their APIs and normalize the names of resources they manage to CADF standardized semantics that can be effectively analyzed. I have heard that Keystone is expanding its support to (security) policy management activities and that CADF can again be used to audit any new APIs or activities in this area.
CADF for real-time operational intelligence
From its current use in OpenStack, you may think CADF is all about traditional security auditing. It is, but CADF was designed for much more: it can also carry operational metric data and real-time measurements from the actual cloud data center hardware and software.
This means that the actual compute hosts, networks and storage devices that underlie and fulfill the OpenStack services can generate data for OpenStack to audit workloads and data performance to assure they adhere to service level agreements (SLAs). Additionally, this means CADF metric data could be used to immediately detect when servers, networks or storage assigned to customer workloads underperform or fail and allow OpenStack clouds to take automatic corrective actions to scale or failover.
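As a hedged sketch of what SLA checking over such records could look like, the snippet below scans CADF-style measurement records for hosts reporting below a required level. The record layout loosely follows the CADF measurement structure (a list of measurements, each with a result and a metric description), but the metric name, unit and threshold are invented for illustration:

```python
def underperforming_hosts(events, metric_name="cpu.frequency", minimum=2000):
    """Return observer ids whose reported measurement falls below `minimum`.

    Each event is assumed to carry CADF-style measurements:
    {"observer": {"id": ...},
     "measurements": [{"result": <value>,
                       "metric": {"name": ..., "unit": ...}}]}
    """
    flagged = set()
    for e in events:
        for m in e.get("measurements", []):
            if m["metric"]["name"] == metric_name and m["result"] < minimum:
                flagged.add(e["observer"]["id"])
    return sorted(flagged)

sample = [
    {"observer": {"id": "compute-host-1"},
     "measurements": [{"result": 1800,
                       "metric": {"name": "cpu.frequency", "unit": "MHz"}}]},
    {"observer": {"id": "compute-host-2"},
     "measurements": [{"result": 2600,
                       "metric": {"name": "cpu.frequency", "unit": "MHz"}}]},
]
print(underperforming_hosts(sample))  # -> ['compute-host-1']
```

In a real cloud, the output of a check like this would feed the automatic corrective actions described above, such as scaling out or failing over the affected workloads.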
To this end, a new project called Monasca—an acronym for “monitoring at scale”—will be discussed at the upcoming summit and the sponsoring companies (HP, Rackspace and IBM) are looking at supporting CADF events as a normative format for such use cases.
Indeed, it is an exciting time when we can effectively apply open standards such as CADF to deliver to customers an enterprise-worthy OpenStack platform that they can trust!
If you want to know more about how to use CADF in OpenStack to produce data that can be used to audit or analyze your customers’ security, operational or business policies, feel free to comment below or look for me at the upcoming OpenStack Summit in Paris. During the Design Summit on Thursday and Friday, you can likely find me hanging out in Heat or Heat-Translator sessions.