“The Right Tool for the Job”: An increasingly challenging decision

Choosing technology for applications—new or legacy—is hard work. Gone are the days of the 1990s, when a few vendors dominated the software market; and gone, too, are the days of the 2000s, when LAMP stacks signaled the incursion of open source into “enterprise” application stacks. Developers today face a bewildering proliferation of technologies to choose from, and these choices aren’t trivial:

  • Does this software do what it says it does?
  • Is this framework going to be dead in five years?
  • Can I reliably hire programmers that know this language?

Today’s software is largely built on open source, and open source—both the software itself, and the communities around it—moves quickly.

Furthermore, microservices architectures compound these concerns—now it’s not just one application’s worth of decisions that need to be made. Instead, many organizations find themselves in a Groundhog Day of tool selection, imposed on them by Conway’s Law and the newfound freedom to choose the right tool for the job.

If you’d like an even deeper dive into this subject, check out our webinar: “How to Choose a Database on IBM Cloud.”

Databases

Not all technical decisions are created equal—they vary widely both in difficulty and in impact. Programming language debates, for example, easily devolve into aesthetics and Turing completeness (i.e., there’s typically no right answer, and as long as the team is reasonably sharp, it’s not likely to be a make-or-break decision).

Choosing a database is a different class of challenge. The ability to make a rigorous, informed choice of database technology requires a trifecta of the following:

  1. Deep understanding of the application problem at hand.
  2. Knowledge of database semantics, including distributed computing fundamentals.
  3. Skill in seeing through enterprise software marketing.

This isn’t a set of talents easily found or obtained. Unfortunately, making a non-rigorous, uninformed choice of database technology can be a terminal mistake for an application.

It’s also common for developers to offload these decisions to abstractions—“the framework handles all of that for me”—and ignore the details of the database underneath. This, too, can be a terminal mistake; (good) abstractions are powerful tools, but unfortunately most database abstractions, such as ORMs, leak profusely. That’s not to say they’re not useful, but it does imply that they can’t be blindly trusted. Even when working through a framework, understanding the semantics of the underlying database is essential.
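To make that leak concrete, here’s a toy sketch (using Python’s built-in sqlite3 module, not any particular ORM or framework) of the classic “N+1 query” pattern that a lazy-loading abstraction can silently generate; the schema and data are invented for illustration:

```python
# A toy illustration (not any specific ORM) of the "N+1 query" leak: a
# lazy-loading abstraction issues one query per parent row, where a single
# JOIN would do. Uses Python's built-in sqlite3 so it runs as-is.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO books VALUES (1, 1, 'Notes'), (2, 2, 'Compilers'), (3, 2, 'COBOL');
""")

# What a naive lazy-loading layer effectively does: one query for the authors,
# then N more queries, one per author, to fetch their books.
queries = 0
authors = conn.execute("SELECT id, name FROM authors").fetchall()
queries += 1
for author_id, name in authors:
    books = conn.execute(
        "SELECT title FROM books WHERE author_id = ?", (author_id,)
    ).fetchall()
    queries += 1
print(f"lazy-loading style: {queries} queries")  # 3 queries for 2 authors

# What understanding the database lets you write instead: a single JOIN.
rows = conn.execute("""
    SELECT a.name, b.title FROM authors a JOIN books b ON b.author_id = a.id
""").fetchall()
print(f"explicit JOIN: 1 query, {len(rows)} rows")
```

The abstraction isn’t wrong to exist; the point is that its behavior is only predictable to someone who understands the queries it ultimately emits.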

Cloud databases

Modern cloud platforms provide a counterbalance here through managed database-as-a-service (“DBaaS”) offerings. By using DBaaS products, rather than self-hosting databases, application developers can limit the risk of this critical decision in two ways:

  1. Most of the databases available from the major cloud platforms are good databases. These vendors have already done most of the heavy lifting of eliminating low-quality software from the decision space. Cloud platforms have an incentive to make their customers successful, and offering them subpar software isn’t a good way to do that.
  2. By offloading operational management to a cloud platform, application developers offload a significant risk and cost. The price of a database-as-a-service covers not only hosting and 24×7 management, but also automated high availability, backup and restore, version upgrades, security, and so on. Many of the best databases in the world are “free,” but the knowledge to run a production-quality installation isn’t. By using a DBaaS platform, developers minimize that learning curve and can base their decisions on requirements and value, rather than cost.

At IBM, we present a wide range of DBaaS offerings, from traditional relational stores like Db2 and PostgreSQL, to more esoteric systems such as etcd and Redis. We do this because we believe these are good technologies, and each has its place in someone’s application architecture. We’ve narrowed the field, but the decision remains—how does one choose the right database for an application?

Josh Mintz does a great job of summarizing in “A Brief Overview of the Database Landscape.”

Lenses

Problems of this subtlety and complexity warrant more than a simple flowchart or decision-tree style process—the space is too complicated to wrangle without some sort of decomposition. What one needs are lenses: tools for looking at the problem from different angles and making rigorous, informed decisions about those local problems that can be combined to inform global decisions.

What follows are some example lenses for making decisions about databases. These are by no means exhaustive, but they’re far better than nothing. Again, the goal is to make better decisions, not perfect ones.

Zero trust and data protection

Databases store many different types of data. Depending on the type, different regulatory requirements come into play, and not all databases provide the same level of isolation or the capabilities to meet all of your requirements.

How strong is your need to isolate the data? Will encryption at rest and in transit be enough, or do you work with data that should be protected in use, as well?

Putting your database into secure enclaves can help you protect your data and move toward a zero trust architecture. Confidential computing platforms like IBM Cloud Hyper Protect DBaaS provide additional options on the database side, combining a known database—like PostgreSQL or MongoDB—with a confidential computing environment.
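As a minimal sketch of what “encryption in transit” looks like from the application side, the following assumes a managed PostgreSQL instance and the psycopg2 driver; the hostname, port, credentials, and certificate path are placeholders, not real endpoints:

```python
# Minimal sketch: enforcing TLS when connecting to a managed PostgreSQL
# instance with psycopg2. Host, credentials, and the CA bundle path are
# placeholders -- consult your provider's documentation for the actual values.
import psycopg2

conn = psycopg2.connect(
    host="example-postgres.databases.example.cloud",  # placeholder hostname
    port=31234,                                        # placeholder port
    dbname="appdb",
    user="app_user",
    password="********",
    sslmode="verify-full",         # require TLS and verify the server certificate
    sslrootcert="/path/to/ca.crt"  # CA certificate downloaded from the provider
)
with conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone())
conn.close()
```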

Primary and auxiliary data stores

First off, something that might be obvious: not all databases are general-purpose databases. Database engineering is like any other engineering practice; it’s mostly about tradeoffs. Some databases—call them “primary” data stores, for lack of a better term—really are “jack of all trades” systems, designed to solve 80% of any problem thrown at them, but they’re not all that common. These databases feature general-purpose data modeling tools (typically relational or document-based) and general-purpose query languages, such as SQL. They take data integrity seriously. They’re flexible enough to solve most problems, and that’s a powerful feature. As an application’s requirements evolve over time, “primary” data stores are more likely than not going to be able to support that evolution.

Most databases, on the other hand, are “T-shaped”—they excel at a few specialized tasks and fall flat when applied to problems outside of their domain. This isn’t to say they’re bad databases—again, this is about engineering tradeoffs. These T-shaped databases—call them “auxiliary”—typically have restricted data modeling tools (key-value, graph, columnar) and restricted query languages. They’re trading off generality, and often data integrity, for exceptional performance in a limited domain.

When an application’s requirements line up with the strong suit of an auxiliary-type database, the result can be tremendous. When they don’t, the result can be, well, tremendous. Without a clearly understood reason otherwise, primary-type databases should be the default choice for most applications.

Write-optimized vs. read-optimized stores

This is a very complex topic, with a rich literature, far beyond the scope of this article. In short: databases must make tradeoffs between write performance and read performance. To a first-order approximation this is usually obvious: Db2 is a write-optimized OLTP store; Db2 Warehouse is a read-optimized OLAP store. Don’t mix them up!
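As a toy illustration of why that distinction exists (pure Python, not how any real engine is implemented): a row-oriented layout keeps each record together, which favors cheap writes, while a column-oriented layout keeps each column together, which favors analytic scans:

```python
# Toy sketch of the row-store vs. column-store tradeoff behind OLTP and OLAP.

# Row-oriented layout: each record is kept together. Appending a new order
# touches one small object, which suits write-heavy OLTP workloads.
row_store = [
    {"order_id": 1, "customer": "acme", "amount": 120.0},
    {"order_id": 2, "customer": "globex", "amount": 75.5},
]
row_store.append({"order_id": 3, "customer": "acme", "amount": 42.0})  # cheap write

# Column-oriented layout: each column is kept together. An analytic scan like
# SUM(amount) reads one contiguous column and ignores the rest, which suits
# read-heavy OLAP workloads (at the cost of more work per individual write).
column_store = {
    "order_id": [1, 2, 3],
    "customer": ["acme", "globex", "acme"],
    "amount": [120.0, 75.5, 42.0],
}
total = sum(column_store["amount"])  # touches only the 'amount' column
print(f"total order amount: {total}")
```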

Indexing: Zero to full-text

Indexing isn’t free. One of the reasons Redis is so fast and simple is that it doesn’t have secondary indexing capabilities. From there, there’s a wide spectrum of options, from Elasticsearch’s full-text search capabilities, to PostgreSQL’s mishmash of B-tree, Hash, GiST, SP-GiST, and GIN, to Cloudant’s JavaScript-based map/reduce views.

Understanding the semantics of all of these different options and their tradeoffs is no easy feat, of course. The most important point here is to not reinvent the wheel. A home-grown indexing implementation is unlikely to be semantically correct, let alone as efficient as a database-native indexing mechanism. In short, use real indexes for applications that need indexes.
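Here’s a small sketch of what “use real indexes” looks like in practice, using Python’s built-in sqlite3 for portability (the same idea applies to PostgreSQL’s B-tree or GIN indexes); the table and data are invented for illustration:

```python
# Sketch: letting the database do the indexing instead of rolling your own.
# EXPLAIN QUERY PLAN shows whether the engine scans the table or uses the index.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, payload TEXT)")
conn.executemany(
    "INSERT INTO events (user_id, payload) VALUES (?, ?)",
    [(i % 1000, f"event-{i}") for i in range(10000)],
)

query = "SELECT count(*) FROM events WHERE user_id = 42"

# Before indexing: the planner has no choice but a full table scan.
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())

# A database-native secondary index; the planner now uses it automatically.
conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())
```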

Gigabytes and terabytes

How much data does an application need? How will it grow as the application grows? Does it really need “infinite” scalability, or can the data be logically partitioned in some way? There’s no avoiding the fact that distributed systems are hard; and, again, database engineering is about tradeoffs. Single-node systems are generally simpler to reason about and simpler to program against, and that’s important, even if someone else is operating the database.
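One hedged sketch of what “logically partitioned” can mean in practice: route each tenant’s data to one of a fixed set of ordinary single-node databases by hashing a stable key. The shard names below are placeholders:

```python
# Sketch: logical partitioning by tenant before reaching for a fully
# distributed database. Each tenant's rows live on exactly one shard,
# chosen deterministically from a stable key.
import hashlib

SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2", "db-shard-3"]  # placeholders

def shard_for(tenant_id: str) -> str:
    # SHA-256 is deterministic across processes (unlike Python's built-in hash()).
    digest = hashlib.sha256(tenant_id.encode("utf-8")).digest()
    index = int.from_bytes(digest[:8], "big") % len(SHARDS)
    return SHARDS[index]

for tenant in ["acme", "globex", "initech"]:
    print(tenant, "->", shard_for(tenant))
```

Each shard stays a simple single-node system to reason about; the cost is that cross-tenant queries and rebalancing become the application’s problem.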

What happens when it fails?

Consistency and availability

Twenty years on, Brewer’s CAP theorem is standard distributed systems lore, but understanding its implications up the software stack is still a nontrivial exercise. The reality is much more complicated than “C, A, P, pick two, and you have to pick P.” Many databases include flexible consistency controls that allow application developers to change CAP semantics on an operation-by-operation basis; others provide “CP” semantics on some operations and “AP” semantics on others.
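As one concrete example of operation-by-operation consistency controls, the DataStax Python driver for Apache Cassandra lets each statement carry its own consistency level; the contact points, keyspace, and tables below are placeholders, not a recommended schema:

```python
# Per-operation consistency with the DataStax Python driver for Apache Cassandra.
# Contact points, keyspace, tables, and keys are placeholders for illustration.
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement
from cassandra import ConsistencyLevel

cluster = Cluster(["10.0.0.1", "10.0.0.2"])   # placeholder contact points
session = cluster.connect("app_keyspace")     # placeholder keyspace

# A balance update where correctness matters: demand a quorum of replicas.
write = SimpleStatement(
    "UPDATE accounts SET balance = %s WHERE id = %s",
    consistency_level=ConsistencyLevel.QUORUM,
)
session.execute(write, (100.0, "acct-42"))

# A best-effort activity-feed read: accept whatever one replica has.
read = SimpleStatement(
    "SELECT * FROM activity WHERE user_id = %s LIMIT 20",
    consistency_level=ConsistencyLevel.ONE,
)
rows = session.execute(read, ("user-7",))
```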

Databases (or parts of them) are going to fail. Developers need to understand the impact of a partial database failure on their application and select a database with semantics that fit their needs. For example, a social network probably doesn’t need strong consistency, but a banking application probably does.

On the other hand, programming against an eventually consistent system is typically more challenging than the alternative. It’s not always natural for developers to internalize the idea that their application might not be able to read its own writes.
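A sketch of the defensive pattern this forces on application code follows; the client object and its get/put methods are hypothetical stand-ins for whatever eventually consistent store an application actually uses:

```python
# Sketch: after a write, a subsequent read may not see it yet, so poll with a
# bounded retry rather than assume read-your-writes. The `client` object and
# its get/put methods are hypothetical stand-ins, not a real driver API.
import time

def read_own_write(client, key, expected_value, attempts=5, delay=0.2):
    """Return True once the written value is visible, False if it never was."""
    for _ in range(attempts):
        if client.get(key) == expected_value:
            return True
        time.sleep(delay)  # back off and let replication catch up
    return False

# Hypothetical usage:
# client.put("profile:42", {"name": "Ada"})
# if not read_own_write(client, "profile:42", {"name": "Ada"}):
#     ...fall back, surface a "still propagating" state, or read from the primary
```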

Note, too, that distributed systems have a tendency to sneak into even “simple” single-node systems. PostgreSQL is a non-distributed database in the naive sense, but a highly available PostgreSQL deployment with some sort of replication between members is most definitely a distributed system.

Backups

Database high availability is a critical component of application high availability, but it’s not the complete story. Some situations (e.g., regional disasters or systemic corruption) require proper backup and restore mechanisms. And again, not all databases offer the same level of functionality here.

Elasticsearch, for example, exploits its immutable on-disk representation to implement an efficient incremental backup mechanism. This allows very frequent backups of the largest clusters with very little impact to the performance of the system.

Elasticsearch does not, however, represent data in a manner suitable for continuous archiving. PostgreSQL, on the other hand, does, which allows for even more frequent backups and point-in-time restore functionality. This comes at a cost, however—replaying transactions for a PostgreSQL point-in-time restore will likely take longer than restoring a snapshot of a similarly sized Elasticsearch cluster. In other words, there’s a tradeoff (surprise!) here between recovery point objectives (RPO: how “old” the restored data is) and recovery time objectives (RTO: how long the restore takes). Backup implementations vary widely based on platform and the implementation details of the database in question; RPO and RTO are generally good high-level abstractions to use when evaluating them.
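A back-of-envelope sketch of that tradeoff, with purely illustrative numbers (not measurements of any particular product), might look like this:

```python
# Back-of-envelope sketch of the RPO/RTO tradeoff. The numbers are
# illustrative assumptions, not measurements of any particular database.

# Strategy A: full snapshots every 24 hours, fast restore.
snapshot_interval_hours = 24   # worst-case data loss window (RPO)
snapshot_restore_hours = 1     # time to rehydrate from the snapshot (RTO)

# Strategy B: continuous archiving (e.g., WAL shipping), slower replay.
archive_lag_minutes = 5        # worst-case RPO: the last few minutes of log
wal_replay_hours = 6           # RTO: base backup restore plus log replay

print(f"Strategy A: RPO <= {snapshot_interval_hours} h, RTO ~ {snapshot_restore_hours} h")
print(f"Strategy B: RPO <= {archive_lag_minutes} min, RTO ~ {wal_replay_hours} h")
```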

Putting it all together

In the end, this all roughly boils down to Software Engineering 101 crib notes, specialized on databases:

  1. Engineering is about tradeoffs.
  2. Understanding the impact of these tradeoffs is critical to building software.
  3. Learning how to understand the impact is not trivial.

While there’s no exhaustive set of lenses for looking at these problems, it’s important to get a foothold in the space. Without one, (3) is insurmountable—databases are too complicated for intuitive reasoning. At IBM Cloud, we’ve done a lot of the work already through careful selection of our database portfolio, but there are still decisions to be made, and we’re ultimately better off when our customers are happy with the databases they’ve chosen for their applications.

Further reading

Most of the state of the art in databases derives from the academic literature, either directly or indirectly; anyone looking to learn more shouldn’t hesitate to dive in. Readings in Database Systems, or the “Red Book,” is a great resource for learning about databases from the ground up. More comprehensive treatments of transactional systems can be found in Gray and Date.

There are innumerable smart individuals blogging about databases and distributed systems on the web. To name a few: Kyle Kingsbury, Henry Robinson, and Martin Kleppmann are all worth reading, for starters.

Good luck!

Get started with Databases on IBM Cloud.
