October 24, 2019 By jason-mcalpin 6 min read

I’m thrilled to announce that the CNCF CloudEvents project approved and released version 1.0 of the CloudEvents specifications today.

This announcement from the CNCF CloudEvents project is the result of two years’ worth of hard work done by one of the broadest community-based activities that I’ve had the pleasure of working with. For those who might not be familiar with the project, allow me to provide a quick overview.

CloudEvents: What and why?

At its core, the main CloudEvents specification does one very simple thing—it enumerates a small set of common metadata attributes that most events already define. It includes very basic attributes, such as a unique ID, where the event came from, and something to indicate the type of occurrence that caused the generation of the event.

If most events already have this information, you might be wondering what value CloudEvents brings. Well, while these attributes already exist, each system producing events will have its own way of representing this information. For example, some might name the unique ID as id, while some might call it identifier or recordID. It’s because of these differences that CloudEvents was created.

Obviously, the ultimate receiver of an event will need to understand all of the various attributes of each event in order to fully (and correctly) process it. However, often middleware and platform providers would like to perform basic event flow management (such as routing and filtering) of an event before it reaches its final destination.

Before CloudEvents, they would be required to understand all of the types, syntaxes, and semantics of every event they might see in order to properly process it. CloudEvents tries to address this pain point by taking the most common of these attributes and giving them a well-defined name and a well-defined location in a message so that middleware no longer needs to understand the specifics of each event or change each time a new event is sent through the system. If you think about how HTTP proxies work by examining a set of well-defined HTTP headers, then consider this as the same concept, but for events.

It is important to note that while CloudEvents defines these attributes, CloudEvents does not mandate that the event payload itself use them. By this I mean that the CloudEvents-defined attributes are meant to augment existing events/messages rather than force events to conform to a new specification (we did not create yet another “common event format” to rule them all).

This, I believe, is a critical aspect of why CloudEvents is starting to see interest from the community.

CloudEvents example

Let me elaborate on this by looking at a concrete example—let’s say that you have an existing event that is sent over HTTP and looks like this:

POST /event HTTP/1.0
Host: example.com
Content-Type: application/json

{
  "action": "newItem",
  "itemID": "93"
}
In order for this to become “CloudEvents” compliant, all we need to do is add a few extra HTTP headers:

POST /event HTTP/1.0
Host: example.com
Content-Type: application/json
ce-specversion: 1.0
ce-type: repo.newItem
ce-source: http://bigco.com/repo
ce-id: 610b6dd4-c85d-417b-b58f-3771e532

{
  "action": "newItem",
  "itemID": "93"
}

Let’s briefly look at each one:

  • ce-specversion indicates the version of the CloudEvents specification this event adheres to.
  • ce-type is the “reason” behind the generation of the event. This often indicates the “occurrence” behind the event notification.
  • ce-source is a unique identifier for the entity that produced the event.
  • ce-id is an ID that is unique within the scope of the producer, so that simple de-dupe logic can be performed.

That’s it! Notice a couple of things:

  • The amount of extra metadata that CloudEvents requires is just four attributes. The ones in this example are the only mandatory attributes that every CloudEvent must have, but the spec does define some additional optional ones that people can include if they wish.
  • The original event/message wasn’t really changed; it was just extended. This means that an existing receiver for this event should continue to work even if it doesn’t know about or understand CloudEvents.
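To make the split concrete, here is a minimal Python sketch that pulls the CloudEvents attributes out of a set of HTTP headers. The ce- prefix is the one used in the example above; the helper name and the headers dict are illustrative, not part of any SDK.

```python
# Illustrative sketch: separate CloudEvents attributes from other HTTP
# headers. Only the "ce-" prefix convention comes from the example above;
# the helper name and sample data are hypothetical.

REQUIRED = {"specversion", "type", "source", "id"}

def extract_cloudevent_attrs(headers):
    """Return the CloudEvents attributes found in a dict of HTTP headers."""
    return {
        name[len("ce-"):]: value
        for name, value in headers.items()
        if name.lower().startswith("ce-")
    }

headers = {
    "Content-Type": "application/json",
    "ce-specversion": "1.0",
    "ce-type": "repo.newItem",
    "ce-source": "http://bigco.com/repo",
    "ce-id": "610b6dd4-c85d-417b-b58f-3771e532",
}

attrs = extract_cloudevent_attrs(headers)
assert REQUIRED <= attrs.keys()  # all four mandatory attributes are present
print(attrs["type"])  # repo.newItem
```

Note that the body of the request never needs to be touched; everything a generic consumer needs is in the headers.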

CloudEvents and middleware

So, while what I’ve described might seem almost trivial in nature, think about what middleware can now do. Without actually understanding anything in the HTTP body (the event payload), it could allow for generic filtering on event types or on who the event producer is. It can route events to different backend systems based on this metadata. And this middleware processing can be specified without any semantic understanding of the values.

In other words, through something like simple regular expressions, middleware can be told to do things like route all events from “bigco.com” to a particular system without needing any specialized logic for bigco’s events.
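As a rough illustration of that idea, the routing could be nothing more than a regex match against the ce-source value. The routing table and backend names below are hypothetical:

```python
import re

# Hypothetical routing table: a regex on the ce-source attribute mapped to
# a backend name. The middleware never inspects the event payload.
ROUTES = [
    (re.compile(r"^https?://([^/]*\.)?bigco\.com(/.*)?$"), "bigco-backend"),
    (re.compile(r".*"), "default-backend"),  # catch-all
]

def route(ce_source):
    """Pick a backend based solely on the ce-source header value."""
    for pattern, backend in ROUTES:
        if pattern.match(ce_source):
            return backend

print(route("http://bigco.com/repo"))  # bigco-backend
print(route("http://other.example"))   # default-backend
```

Because the match is on a well-defined attribute rather than on the payload, the same rule works for every event bigco.com ever produces, including event types that didn’t exist when the rule was written.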

For those familiar with the Knative project, this is exactly what its “eventing” component does. It converts all incoming events into CloudEvents so that applications deployed into Knative can specify an event workflow based on the common CloudEvents metadata. This allows for the Knative eventing workflow logic to process all events that come into the system regardless of the message transport, the syntax of the original event, and, most importantly, without needing to understand the semantics of the events themselves. This ensures that the Knative eventing middleware can remain event-agnostic but still provide great value to the applications deployed into it by allowing people to define eventing workflow logic without custom code.

Other CloudEvents features and deliverables

The above describes the core features of the specification, but there are other features and deliverables from the project. I won’t go into them in extensive detail here; rather, I’ll just list them as a teaser:

  • The above example uses a “binary” format for a CloudEvent. This means it allows for augmentation of a message to minimize the impact on existing code. However, the project also defines a “structured” format in which the event payload is wrapped with a CloudEvent envelope for cases where it is easier if everything is grouped together into a single unit and not split between message headers and message payload.
  • The project defines how to serialize the CloudEvents metadata in a variety of transports and formats, including HTTP, MQTT, JSON, AMQP, and Avro.
  • CloudEvents also defines a few extension attributes. These are ones that the project felt were gaining in popularity but were not quite pervasive enough to be included within the specification itself—but one day might be.
  • There is a set of SDKs to help people produce and consume CloudEvents—supported languages are C#, Go, Java, JavaScript, and Python.
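To give a sense of the “structured” format mentioned above, here is a small Python sketch that packs the earlier example into a single JSON envelope. Per the CloudEvents JSON format, such a message would be sent with the media type application/cloudevents+json; the event values are the ones from the example earlier in this post.

```python
import json

# "Structured" content mode: the CloudEvents attributes and the event
# payload travel together in one JSON document, instead of being split
# between HTTP headers and the body.
event = {
    "specversion": "1.0",
    "type": "repo.newItem",
    "source": "http://bigco.com/repo",
    "id": "610b6dd4-c85d-417b-b58f-3771e532",
    "datacontenttype": "application/json",
    "data": {"action": "newItem", "itemID": "93"},
}

body = json.dumps(event)
print(body)
```

The trade-off is exactly the one described in the first bullet: binary mode minimizes the impact on existing receivers, while structured mode keeps everything in one self-describing unit.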

This was a very quick overview, and I would encourage you to look at the specifications to get more information, and definitely look at the “primer” since that will provide even more background information on the motivation behind the specs and some of the bigger decisions made.

The CloudEvents Project community

I wanted to use the rest of this blog post to talk about the people within the CloudEvents community who worked on the specifications. If you look at the attendance spreadsheet, you’ll notice that just about every major cloud provider has been involved to some extent. Yes, some more than others, but you’ll also notice that the list of participants includes quite a few companies that are considered “end users.”

I think that this diverse set of participants is somewhat unusual. We all talk about how we want input from as wide a range of people in the community as possible, but I find that it just doesn’t work out that way most of the time. For CloudEvents, it did.

Additionally, having worked on “standards” for many (many!) years, I’ve noticed that often these types of activities can be taken over by politics and endless battles that sometimes feel more based on company-specific requirements than good technical reasoning or what’s best for the end users. While I won’t claim that the CloudEvents project was perfect and everyone got along completely at all times, I do honestly believe that it was one of the least political projects I’ve had the pleasure of working with.

I can’t really explain why it worked out this way. Perhaps it’s because the scope of the project was so limited? Perhaps our rather unique governance model was a disincentive for game playing? Perhaps there’s something about being part of the CNCF and its goals of trying to bring the cloud native community together? Perhaps we just got lucky with the list of participants? I don’t know the answer, but I do want to thank all of the participants of the project who made this possible. Everyone (from the least to the most vocal) had a positive impact on the project and we would not have been able to produce such a great net result without you!

The CloudEvents project was an offshoot of the CNCF’s Serverless Working Group, and the WG will be deciding what to work on next. I hope that we continue to see the same level of participation in our next project, and if you’re not part of our community yet, please consider joining—we’d love to have your input and contributions.

Learn more about CloudEvents
