Serverless is an approach to computing that offloads responsibility for common infrastructure management tasks (e.g., scaling, scheduling, patching, provisioning, etc.) to cloud providers and tools, allowing engineers to focus their time and effort on the business logic specific to their applications or process.
The most useful way to define and understand serverless is to focus on the handful of core attributes that distinguish it from other compute models, namely:
The serverless model requires no management and operation of infrastructure, enabling developers to focus more narrowly on code/custom business logic.
Serverless computing runs code only on-demand on a per-request basis, scaling transparently with the number of requests being served.
Serverless computing enables end users to pay only for resources being used, never paying for idle capacity.
Serverless is fundamentally about spending more time on code, less on infrastructure.
Are there servers in serverless computing?
The biggest controversy associated with serverless computing is not around its value, use cases, or which vendor offerings are the right fit for which jobs, but rather the name itself. An ongoing argument around serverless is that the name is not appropriate because there are still servers in serverless computing.
The name “serverless” has persisted because it describes the end user’s experience: in a technology described as “serverless,” the management needs of the underlying servers are invisible to the end user. The servers are still there; you just don’t see or interact with them.
Serverless vs. FaaS
Serverless and Functions-as-a-Service (FaaS) are often conflated, but FaaS is actually a subset of serverless. As mentioned above, serverless describes any service category (compute, storage, database, etc.) where the configuration, management, and billing of servers are invisible to the end user. FaaS, while perhaps the most central technology in serverless architectures, is focused on the event-driven computing paradigm, wherein application code, or containers, run only in response to events or requests.
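To make the event-driven model concrete, a FaaS function is typically just a handler that receives an event payload and returns a result. The sketch below follows the Apache OpenWhisk convention of a Python `main` entry point; the `name` parameter in the payload is a hypothetical field used purely for illustration:

```python
# Minimal FaaS-style handler: it runs only when an event or request
# arrives, and the platform handles all scaling and infrastructure.
# Follows the Apache OpenWhisk convention: a `main` function that
# receives a dict of parameters and returns a dict.

def main(params):
    # The platform passes event data as key/value parameters;
    # "name" is a hypothetical field for this example.
    name = params.get("name", "world")
    return {"greeting": f"Hello, {name}!"}
```

Everything outside this function (provisioning, scaling, patching) is the platform's concern, which is the division of labor serverless is named for.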
Serverless architectures pros and cons
While there are many individual technical benefits of serverless computing, four stand out:
It enables developers to focus on code, not infrastructure.
It scales automatically and transparently with demand, including down to zero.
Pricing is done on a per-request basis, allowing users to pay only for what they use.
For certain workloads, such as ones that require parallel processing, serverless can be both faster and more cost-effective than other forms of compute.
While there is much to like about serverless computing, there are some challenges and trade-offs worth considering before adopting it:
Long-running processes: FaaS and serverless workloads are designed to scale up and down perfectly in response to workload, offering significant cost savings for spiky workloads. But for workloads characterized by long-running processes, these same cost advantages are no longer present and managing a traditional server environment might be simpler and more cost-effective.
Vendor lock-in: Serverless architectures are designed to take advantage of an ecosystem of managed cloud services and, in terms of architectural models, go the furthest to decouple a workload from something more portable, like a VM or a container. For some companies, deeply integrating with the native managed services of cloud providers is where much of the value of cloud can be found; for other organizations, these patterns represent material lock-in risks that need to be mitigated.
Cold starts: Because serverless architectures forgo long-running processes in favor of scaling up and down to zero, they also sometimes need to start up from zero to serve a new request. For certain applications, this delay isn’t much of an impact, but for something like a low-latency financial application, this delay wouldn’t be acceptable.
Monitoring and debugging: These operational tasks are challenging in any distributed system, and the move to both microservices and serverless architectures (and the combination of the two) has only exacerbated the complexity associated with managing these environments carefully.
Understanding the serverless stack
Defining serverless as a set of common attributes, instead of an explicit technology, makes it easier to understand how the serverless approach can manifest in other core areas of the stack.
Functions as a Service (FaaS): FaaS is widely understood as the originating technology in the serverless category. It represents the core compute/processing engine in serverless and sits in the center of most serverless architectures. See "What is FaaS?" for a deeper dive into the technology.
Serverless databases and storage: Databases and storage are the foundation of the data layer. A “serverless” approach to these technologies (with object storage being the prime example within the storage category) involves transitioning away from provisioning “instances” with defined capacity, connection, and query limits and moving toward models that scale linearly with demand, in both infrastructure and pricing.
Event streaming and messaging: Serverless architectures are well-suited for event-driven and stream-processing workloads, which involve integrating with messaging systems and event streaming platforms, most notably Apache Kafka.
API gateways: API gateways act as proxies to web actions and provide HTTP method routing, client ID and secrets, rate limits, CORS, viewing API usage, viewing response logs, and API sharing policies.
Comparing FaaS to PaaS, containers, and VMs
While Functions as a Service (FaaS), Platform as a Service (PaaS), containers, and virtual machines (VMs) all play a critical role in the serverless ecosystem, FaaS is the most central and most definitional, and because of that, it’s worth exploring how FaaS differs from other common models of compute on the market today across key attributes:
Provisioning time: Milliseconds, compared to minutes and hours for the other models.
Ongoing administration: None, compared to a sliding scale from easy to hard for PaaS, containers, and VMs respectively.
Elastic scaling: Each action is always instantly and inherently scaled, compared to the other models which offer automatic—but slow—scaling that requires careful tuning of auto-scaling rules.
Capacity planning: None required, compared to the other models requiring a mix of some automatic scaling and some capacity planning.
Persistent connections and state: Limited; state must be kept in an external service or resource. The other models can use HTTP, keep a socket or connection open for long periods of time, and store state in memory between calls.
Maintenance: All maintenance is managed by the FaaS provider. This is also true for PaaS; containers and VMs require significant maintenance that includes updating/managing operating systems, container images, connections, etc.
High availability (HA) and disaster recovery (DR): Inherent in the FaaS model with no extra effort or cost. The other models require additional cost and management effort. In the case of both VMs and containers, infrastructure can be restarted automatically.
Resource utilization: Resources are never idle—they are invoked only upon request. All other models feature at least some degree of idle capacity.
Resource limits: FaaS is the only model that has resource limits on code size, concurrent activations, memory, run length, etc.
Charging granularity and billing: Billed in blocks of 100 milliseconds, compared to by the hour (and sometimes minute) for the other models.
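The billing and utilization differences above can be made concrete with a little arithmetic. The sketch below compares FaaS billing in 100-millisecond blocks against hourly VM billing; the prices and workload numbers are hypothetical, chosen only to illustrate the granularity, not taken from any provider's price list:

```python
import math

# Hypothetical prices, for illustration only.
FAAS_PRICE_PER_GB_SECOND = 0.000017   # billed in 100 ms blocks
VM_PRICE_PER_HOUR = 0.10              # billed for every hour, idle or not

def faas_cost(invocations, duration_ms, memory_gb):
    # Each invocation is rounded up to the next 100 ms block.
    billed_seconds = math.ceil(duration_ms / 100) * 0.1
    return invocations * billed_seconds * memory_gb * FAAS_PRICE_PER_GB_SECOND

def vm_cost(hours):
    # A VM accrues charges whether or not it is serving requests.
    return hours * VM_PRICE_PER_HOUR

# A spiky workload: 10,000 short requests in a day vs. a VM running all day.
spiky_faas = faas_cost(invocations=10_000, duration_ms=120, memory_gb=0.25)
idle_vm = vm_cost(hours=24)
```

Under these illustrative numbers the per-request model costs a small fraction of the always-on VM, which is the "never pay for idle capacity" attribute in practice; for a workload that keeps a server busy around the clock, the comparison can flip, as noted under long-running processes above.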
Use cases and reference architectures
Given their unique combination of attributes and benefits, serverless architectures are well-suited for use cases around data and event processing, IoT, microservices, and mobile backends.
Serverless and microservices
The most common use case of serverless today is supporting microservices architectures. The microservices model is focused on creating small services that do a single job and communicate with one another using APIs. While microservices can also be built and operated using either PaaS or containers, serverless has gained significant momentum given its attributes around small bits of code that do one thing, inherent and automatic scaling, rapid provisioning, and a pricing model that never charges for idle capacity.
Any action (or function) in a serverless platform can be turned into an HTTP endpoint ready to be consumed by web clients. When enabled for the web, these actions are called web actions. Once you have web actions, you can assemble them into a full-featured API with an API gateway that brings additional security, OAuth support, rate limiting, and custom domain support.
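In OpenWhisk terms, a web action signals its HTTP response by returning a dict with the status code, headers, and body. A minimal sketch in Python (the message content is illustrative):

```python
# Minimal OpenWhisk-style web action: the returned dict maps directly
# onto the HTTP response that the platform (or an API gateway in front
# of it) sends back to the web client.

def main(params):
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": {"message": f"Hello, {name}!"},
    }
```

An API gateway placed in front of actions like this one is what adds the routing, rate limiting, and authentication concerns described earlier, so the function itself stays focused on business logic.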
Combining managed Apache Kafka with FaaS and database/storage offers a powerful foundation for building real-time data pipelines and streaming apps. These architectures are ideally suited for all sorts of data stream ingestion (for validation, cleansing, enrichment, and transformation), including:
Business data streams (from other data sources)
IoT sensor data
Financial (market) data
Get started with tutorials on serverless computing
Expand your serverless computing skills with these tutorials:
Go serverless with PHP: Experienced PHP developers will learn about serverless PHP. The tutorial covers using IBM Cloud Functions CLI to provision PHP actions, how to invoke PHP actions over HTTP, and how to integrate PHP actions with third-party REST APIs and IBM Cloud services.
Serverless and IBM
IBM Cloud Functions is a Function-as-a-Service (FaaS) platform that executes code in serverless environments.
IBM Cloud Functions is based on Apache OpenWhisk (an open source project developed by IBM) and works to speed and simplify the development process. Developers can leverage IBM Cloud Functions in combination with other products/tools in IBM Cloud, including IBM Watson APIs, Cloudant, Object Storage, Event Streams, and many others.