Cloudant Best (and Worst) Practices — Part 2

By: Stefan Kruger

Cloudant advice to new users, from the people who design the product and run the service

As outlined in “Cloudant Best (and Worst) Practices—Part 1,” I’ve had the unique opportunity to see IBM Cloudant from all angles—the customers who use it, the engineers that run it, and the folks who support and sell it—and I’m here to summarize the best—and worst!—practices we see most often in the field.

Rules 0–8: Check out Part 1

Rule 9: In Cloudant Search (or CQ indexes of type text), limit the number of fields

Cloudant Search and CQ indexes of type text (both of which are Apache Lucene under the hood) allow you to index any number of fields. We’ve seen examples where this is abused, either deliberately or (most often) by fat-fingering. Plan your indexing to comprise only the fields required by your actual queries. Indexes take space and can be costly to rebuild if the number of indexed fields is large.

There’s also the issue of which fields you store in a Cloudant Search index. Stored fields are returned with the query results without needing include_docs=true, so the trade-off is similar to that in Rule 7.
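To make the restriction concrete, here is a sketch (field and database names hypothetical) of a Cloudant Query text-index definition that indexes only the two fields the application actually queries on, rather than falling back to indexing every field:

```python
import json

# Sketch: a Cloudant Query index of type "text" restricted to the two
# fields our queries actually use. Field names here are hypothetical.
index_definition = {
    "index": {
        "fields": [
            {"name": "surname", "type": "string"},
            {"name": "town", "type": "string"},
        ]
    },
    "name": "by-surname-and-town",
    "type": "text",
}

# This payload would be POSTed to ${database}/_index.
payload = json.dumps(index_definition)
```

Omitting the fields array entirely tells Cloudant to index everything, which is exactly the behaviour this rule warns against.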

Cloudant Search docs.

Rule 10: Resolve your conflicts

Cloudant is designed to treat conflicts as a natural state of data in a distributed system. This is a powerful feature that helps a Cloudant cluster be available at all times. However, the assumption is that conflicts are still reasonably rare. Keeping track of conflicts in Cloudant’s core has significant costs associated with it.

It is perfectly possible (but a bad idea!) to just ignore conflicts, and the database will carry on operating by choosing an arbitrary, but deterministic, revision of each conflicted document. However, as the number of unresolved conflicts grows, the performance of the database goes down a black hole, especially when replicating.

As a developer, it’s your responsibility to check for, and to resolve, conflicts—or even better, employ data models that make conflicts impossible.

If you routinely create conflicts, you should consider model changes.
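As an illustration of the mechanics, here is a minimal sketch of the simplest resolution strategy: fetch the document with ?conflicts=true, then delete every losing revision in one bulk request. The document ID and revisions below are hypothetical, and real applications often merge the conflicting bodies rather than discard them:

```python
def resolve_conflicts(doc):
    """Given a document fetched with ?conflicts=true, build a _bulk_docs
    payload that deletes every losing revision, keeping the current winner.

    A sketch of the "keep the winner" strategy only; merging the
    conflicting bodies is often the better choice.
    """
    losing_revs = doc.get("_conflicts", [])
    deletions = [
        {"_id": doc["_id"], "_rev": rev, "_deleted": True}
        for rev in losing_revs
    ]
    return {"docs": deletions}

# Hypothetical document, as returned by GET /db/order-1001?conflicts=true
doc = {
    "_id": "order-1001",
    "_rev": "3-aaa",                    # the winning revision
    "status": "shipped",
    "_conflicts": ["3-bbb", "2-ccc"],   # losing leaf revisions
}

# POSTing this payload to ${database}/_bulk_docs removes the losing branches.
payload = resolve_conflicts(doc)
```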

Cloudant guide to conflicts. Cloudant guide to versions and MVCC. Three-part blog series on conflicts.

Rule 11: Deleting documents won’t delete them

Deleting a document from a Cloudant database doesn’t actually purge it. Deletion is implemented by writing a new revision of the document under deletion with an added field _deleted: true. This special revision is called a tombstone. Tombstones still take up space and are also passed around by the replicator.

Models that rely on frequent deletions of documents are not suitable for Cloudant.

Cloudant tombstone docs.

Rule 12: Be careful with updates

It is more expensive in the long run to mutate existing documents than to create new ones, because Cloudant always needs to keep the document’s tree structure around, even though internal nodes in the tree are stripped of their payloads. If you create long revision trees, your replication performance will suffer. Moreover, if your update frequency goes above, say, once or twice every few seconds, you’re more likely to produce update conflicts.

Prefer models that are immutable.

The obvious question after Rules 11 and 12 is: Won’t the data set grow unbounded if my model is immutable? If you accept that deletes don’t fully purge the deleted data and that updates don’t happen in place, there is not much difference in terms of data volume growth. Managing data volume over time requires different techniques. The only way to truly reclaim space is to delete databases, rather than documents. You can replicate only winning revisions to a new database and then delete the old one to get rid of lingering deletes and conflicts. Or, if your use case allows, you can build it into your model to regularly start new databases (say, ‘annual data’) and archive off (or remove) outdated data.
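One way to sketch that clean-up replication is a _replicator document with a filter that leaves tombstones behind. Everything here (the database names, the ddoc, the filter name) is hypothetical, and the filter function must already be saved in a design document in the source database:

```python
# Sketch of a _replicator document that copies a database to a fresh
# target, using a filter function so tombstones stay behind. The names
# below are hypothetical; conflicts should be resolved before running
# this so that only winning revisions remain to copy.
replication_doc = {
    "_id": "cleanup-2019",
    "source": "https://ACCT.cloudant.com/orders",
    "target": "https://ACCT.cloudant.com/orders-2019",
    "create_target": True,
    "filter": "cleanup/no_deleted",  # ddoc "cleanup", filter "no_deleted"
}

# The filter function stored in the source database's design document:
# function(doc, req) { return !doc._deleted; }
```

Once the copy completes and is verified, the old database can be deleted, which is what actually reclaims the space.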

Rule 13: Replication isn’t magic

“So let’s set up three clusters across the world—Dallas, London, Sydney—with bidirectional synchronisation between them to provide real-time collaboration between our 100,000 clients.”

No.

Cloudant is good at replication. It’s so effortless that it can seem like magic, but note that it makes no latency guarantees. In fact, the whole system is designed with eventual consistency in mind. Treating Cloudant’s replication as a real-time messaging system will not end up in a happy place. For this use case, put a system in between that was designed for this purpose, such as Apache Kafka.

It’s difficult to put a number on replication throughput—the answer is always “it depends.” Things that impact replication performance include, but are not limited to, the following:

  • Change frequency

  • Document size

  • Number of simultaneous replication jobs on the cluster as a whole

  • Wide (conflicted) document trees

Blog post on replication topology. Cloudant guide to replication. Updates to the replication scheduler.

Rule 14: Use the bulk API

Cloudant has nice API endpoints for bulk loading (and reading) many documents at once. This can be much more efficient than reading/writing many documents one at a time. The write endpoint is:

${database}/_bulk_docs.

Its main purpose is to be a central part in the replicator algorithm, but it’s available for your use, too, and it’s pretty awesome.

With _bulk_docs, in addition to creating new docs, you can also update and delete. Some client libraries, including PouchDB, implement create, update, and delete this way even for single documents, to keep the number of code paths down.

Here is an example of creating one new document, updating a second (existing) document, and deleting a third:

curl -XPOST 'https://ACCT.cloudant.com/DB/_bulk_docs' \
  -H "Content-Type: application/json" \
  -d '{"docs":[{"baz":"boo"},
               {"_id":"463bd...","foo":"bar"},
               {"_id":"ae52d...","_rev":"1-8147...","_deleted":true}]}'

You can also fetch many documents at once by issuing a POST to _all_docs. (There is also a newer endpoint called _bulk_get, but it’s probably not what you want; it exists for a specific internal purpose.)

To fetch a fixed set of docs using _all_docs, POST with a keys body:

curl -XPOST 'https://ACCT.cloudant.com/DB/_all_docs' \
-H "Content-Type: application/json" \
-d '{"keys":["ab234....","87addef...","76ccad..."]}'

Note that Cloudant (at the time of writing) imposes a max request size of 1 MB, so _bulk_docs requests exceeding this size will be rejected.
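Given that limit, a client writing large batches needs to split them up. Here is a sketch of one way to do it, assuming no single document exceeds the cap on its own:

```python
import json

def batch_docs(docs, max_bytes=1_000_000):
    """Split docs into _bulk_docs payloads that each stay under max_bytes.

    A sketch only: it measures the serialised {"docs": [...]} payload and
    starts a new batch whenever adding a document would push it over the
    limit. Assumes no single document exceeds max_bytes by itself.
    """
    batches, current = [], []
    for doc in docs:
        candidate = current + [doc]
        if current and len(json.dumps({"docs": candidate})) > max_bytes:
            # Flush the current batch and start a new one with this doc.
            batches.append({"docs": current})
            current = [doc]
        else:
            current = candidate
    if current:
        batches.append({"docs": current})
    return batches

# Each element of batch_docs(docs) can be POSTed to ${database}/_bulk_docs.
```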

Cloudant bulk operations docs.

Rule 15: Eventual Consistency is a harsh taskmaster (AKA, don’t read your writes)

Eventual consistency is a great idea on paper and a key contributor to Cloudant’s ability to scale out in practice. However, it’s fair to say that the mindset required to develop against an eventually consistent data store does not feel natural to most people.

You often get stung when writing tests:

  1. Create a database.

  2. Populate the database with some test data.

  3. Query the database for some subset of this test data.

  4. Verify that the data you got back is the data you expected to get back.

Nothing wrong with that? That works on every other database you’ve ever used, right?

Not on Cloudant.

Or rather, it works 99 times out of 100.

The reason for this is that there is a (usually small) inconsistency window between data being written to the database and it becoming available on all nodes of the cluster. As all nodes in a cluster are equal in stature, there is no guarantee that a write and a subsequent read will be serviced by the same node, so in some circumstances the read may hit a node before the written data has reached it.

So why don’t you just put a short delay in your test between the write and the read? That will make the test less likely to fail, but the problem is still there.

Cloudant has no transactional guarantees, and whilst document writes are atomic (you’re guaranteed that a document can either be read in its entirety or not at all), there is no way to close the inconsistency window. It’s there by design.

A more serious concern that should be at the forefront of every developer’s mind is that you can’t safely assume that data you write will be available to anyone else at a specific point in time. This takes some getting used to if you come from a different kind of database tradition.
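If a test really must read its own writes against a clustered service, a pragmatic mitigation is to poll with a timeout rather than read once. A sketch, where fetch is any zero-argument callable the caller supplies, returning None while the read hits a node the write hasn’t reached:

```python
import time

def wait_for_doc(fetch, attempts=10, delay=0.2):
    """Poll fetch() until it returns a document or attempts run out.

    This narrows the inconsistency window for tests; by design,
    nothing can eliminate it.
    """
    for _ in range(attempts):
        doc = fetch()
        if doc is not None:
            return doc
        time.sleep(delay)
    return None
```

This keeps flakiness out of the test suite without pretending the window doesn’t exist: a None result after the timeout is still a possible (and legitimate) outcome.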

Testing Tip: What you can do to avoid the inconsistency window in testing is to test against a single-node instance of Cloudant or CouchDB running in Docker (docker stuff here). A single node removes the eventual consistency issue, but be aware that you are then testing against an environment that behaves differently to what you will target in production. Caveat Emptor.

Rule 16: Don’t mess with Q, R, and N unless you really know what you are doing

Cloudant’s quorum and sharding parameters, once you discover them, seem like tempting options to change the behaviour of the database.

Stronger consistency—surely just set the write quorum to the replica count?

No! Recall that there is no way to close the inconsistency window in a cluster.

Don’t go there. The behaviour will be much harder to understand, especially during network partitions. If you’re using Cloudant-the-service, the default values are fine for the vast majority of users.

There are times when tweaking the shard count for a database is essential to get the best possible performance, but if you can’t say why this is, you’re likely to make your situation worse.

Rule 17: Design document (ddoc) management requires some flair

As your data set grows and your number of views goes up, sooner or later you will want to ponder how to organise your views across ddocs. A single ddoc can be used to form a so-called view group: a set of views that belong together by some metric that makes sense for your use case. If your views are pretty static, grouping related views makes their query URLs semantically similar. It’s also more performant at index time, because the indexer loads each document once and generates all of the group’s indexes from it.

Ddocs themselves are read and written using the same read/write endpoints as any other document. This means that you can create, inspect, modify, and delete ddocs from within your application. However, even small changes to ddocs can have big effects on your database. When you update a ddoc, all views in it become unavailable until indexing is complete. This can be problematic in production. To avoid it, you have to do a crazy ddoc-swapping dance (see couchmigrate).

In most cases, this is probably not what you want to have to deal with. As you start out, it is most likely more convenient to have a one-view-per-ddoc policy.

Also, in case it isn’t obvious: views are code, and should be subject to the same source-code version management as the rest of your application code. How to achieve this may not be immediately obvious. You could version the JavaScript snippets and then cut and paste the code into the Cloudant dashboard to deploy whenever there is a change (and yes, we all resort to this from time to time).

There are better ways to do this, and this is one reason to use some of the tools surrounding the couchapp concept. A couchapp is a self-contained CouchDB web application that nowadays doesn’t see much use. Several couchapp tools exist to make the deployment of a couchapp (including, crucially, its views) easier.

Using a couchapp tool means that you can automate the deployment of views as needed, even when not using the couchapp concept itself.

See for example couchapp and situp. Cloudant guide to design doc management.

Rule 18: Cloudant is rate limited—let this inform your code

Cloudant-the-service (unlike vanilla CouchDB) is sold on a “reserved throughput capacity” model. That means that you pay for the right to use up to a certain throughput, rather than the throughput you actually end up consuming. This takes a while to sink in. One somewhat flaky comparison might be that of a cell phone contract where you pay for a set number of minutes regardless of whether you end up using them or not.

The cell phone contract comparison doesn’t really capture the whole situation, though. There is no constraint on the total number of requests you can make to Cloudant in a month; the constraint is on how fast you make them.

It’s really a promise that you make to Cloudant, not one that Cloudant makes to you. You promise to not make more requests per second than what you said you would up front. A top speed limit, if you like. If you transgress, Cloudant will fail your requests with a status of 429: Too Many Requests. It’s your responsibility to look out for this and deal with it appropriately, which can be difficult when you’ve got multiple app servers. How can they coordinate to ensure that they collectively stay below the requests-per-second limit?

Cloudant’s official client libraries have some built-in provision for this that can be enabled (Note: this is switched off by default to force you to think about it), following a “back-off and retry” strategy. However, if you rely on this facility alone, you will eventually be disappointed. Back-off and retry only helps in cases of temporary transgression, not a persistent butting up against your provisioned throughput capacity limits.

Your business logic must be able to handle this condition. Another way to look at it is that you get the allocation you pay for. If that allocation isn’t sufficient, the only solution is to pay for a higher allocation.
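A minimal sketch of the back-off-and-retry strategy, independent of any particular client library. Here request is any zero-argument callable returning an object with a status_code attribute, such as a requests.Response:

```python
import random
import time

def with_backoff(request, max_retries=5, base_delay=0.5):
    """Retry request() with exponential backoff and jitter on HTTP 429.

    A sketch: this only smooths over temporary spikes. Persistent 429s
    mean the workload needs a larger provisioned throughput allocation.
    """
    response = request()
    for attempt in range(max_retries):
        if response.status_code != 429:
            break
        # Wait longer after each rejection, with jitter so that multiple
        # app servers don't retry in lockstep.
        time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
        response = request()
    return response  # may still be a 429; the caller must handle that
```

Note that the function can still return a 429 after exhausting its retries, which is exactly the condition your business logic has to own.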

Provisioned throughput capacity is split into three different buckets:

  • Lookup: A “primary key” read—fetching a document based on its _id.

  • Write: Storing a document or attachment on disk.

  • Query: Looking up documents via a secondary index (any API endpoint that has a _design or _find in it).

You get different allocations of each, and the ratios between them are fixed, which can be used to optimise for cost. As you get 20 Lookups for every 1 Query (per second), if you find that you’re mainly hitting the Query limit but have plenty of headroom in Lookups, it may be possible to reduce your reliance on Queries by remodeling the data or doing more work client-side.

The corollary, though, is that you can’t assume that any third-party library or framework will optimise for cost ahead of convenience. Client-side frameworks that support multiple persistence layers via plugins are unlikely to be aware of this trade-off, and perhaps even incapable of making it.

Checking this before committing to a particular tool is a good idea.

It is also worth understanding that the rates aren’t directly equivalent to HTTP API endpoint calls. You should expect, for example, that a bulk update will be counted according to its constituent document writes.

Cloudant docs on plans and pricing on IBM Public Cloud.

Thanks

Thanks to Glynn Bird, Robert Newson, Richard Ellis, Mike Broberg, and Jan Lehnardt for helpful comments. Any remaining errors are mine alone and have nothing to do with them.
