One of the most significant questions to answer on the Netezza platform is that of "distribution" - what does it mean? how does it apply? and all that. If you are a Netezza aficionado, be warned that the following commentary serves as a bit of a primer for the un-initiated. However, feel free to make comments or suggestions to improve it, or pass it on if you like.
In the prior entry I offered up a simple graphic (right) as to how the Netezza architecture is laid out internally.
In this depiction, each CPU is harnessing its own disk drive. In the original architecture, this combination was called a SPU, because the CPU was coupled with a disk drive and an FPGA (field-programmable gate array) that holds the architectural firmware "secret sauce" of the Netezza platform. The FPGA is still present and accounted for in the new architecture as the additional blade(s) on the blade server. So as not to over-complicate things, let's look at the logical and physical layout of the data itself, and this will be a bit clearer.
When we define a table on the Netezza host, it logically exists in the catalog once. However, with the depicted CPU/disk combinations (right) we can see 16 of these available to store the information physically. If the table is defined with a distribution of "random", then the data will be evenly divided between all 16 of them. Let's say we have a table arriving that has 16 million records. Once loaded, each of these SPUs would be in control of 1 million records. So the table exists logically once, and physically multiple times. For a plain straight query on this configuration, any question we ask the table will initiate the same query on all SPUs simultaneously. They will each work on their portion of the data and return an answer. This Single-Instruction-Multiple-Data model is a bread-and-butter power strategy of Massively Parallel computing. It is powerful because at any given moment, the answer we want is only as far away as the slowest SPU. This means that however fast a SPU can scan 1 million records, that is the total duration of the query. All of the SPUs move at this speed, in parallel, so they all finish at the same time.
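For the uninitiated, a minimal sketch of that layout in DDL (the table and column names here are hypothetical, not from any real schema):

    CREATE TABLE sales_fact
    (
        transaction_id BIGINT,
        store_id       INTEGER,
        sale_date      DATE,
        sale_amount    NUMERIC(12,2)
    )
    DISTRIBUTE ON RANDOM;

    -- With 16 SPUs and 16 million rows loaded, the random (round-robin)
    -- distribution lands roughly 1 million rows on each SPU's disk.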
What if we put another table on the machine that contains, say, 32 million rows (keeping the numbers even here for clarity)? The machine will load this table such that each SPU will contain 2 million of its rows. If no other data is on the machine, we effectively have 3 million records per SPU loaded, but the data itself may be completely unrelated. Notice how we would not divide the data another way, like putting the 16 million records on 6 of the SPUs and the other 32 million records on the remaining 10 SPUs. No, in a massively parallel context, we want all the SPUs working together for every query. The more SPUs are working, the faster the query will complete.
Now some understand the Massively Parallel model to be thus: multiple servers, each containing some domain of the information, and when a query is sent out it will find its way to the appropriate server and cause it to search its storage for the answer. This is a more stovepiped model than the Netezza architecture and will ultimately have logistics and hardware scalability as Achilles' heels, not strengths, for parallel processing. Many sites are successful with such models. But they are very application-centric and not open for reuse by other unrelated applications. I noted above that the information we just loaded on the Netezza machine can be in unrelated tables, databases and application suites, because the Netezza machine is general-purpose when it comes to handling its applicability for data processing. It is purpose-built when we talk about scale.
But let's take the 16-million-record vs. 32-million-record model above and assume that one of the tables is a transactional table (the 32 million) and one is a dimensional table (the 16 million). The dimensional table contains a key called "store_id" that is also represented on the transactional table such that we can join them together in various contexts. Will the Netezza machine do this, and how will it? After all, the data for the tables is spread out across 16 SPUs.
Well, we have the option of joining these "as is" or we can apply an even more interesting approach, that of using a distribution key. Here's where we need to exercise some analysis, because it appears as though the STORE_ID is what we want for a distribution, but this might skew the data. What does this mean? When we assign a distribution, the Netezza machine will then hash the distribution key into one of 16 values. Every time that particular key appears, it will always be assigned the same value and land on the same SPU. So now we can redistribute both of these random tables on STORE_ID and be certain that for say, SPU #5, all of the same store_id's for the dimension table and for the transaction table are physically co-located on the same disk. You probably see where this is going now.
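A hedged sketch of what that looks like, using hypothetical stand-ins for the 16-million-row dimension and the 32-million-row transaction table; the distribution can be declared up front as shown, or an existing random table can be rebuilt with a CREATE TABLE ... AS SELECT carrying a new DISTRIBUTE ON clause:

    CREATE TABLE store_dim
    (
        store_id   INTEGER,
        store_name VARCHAR(100),
        region     VARCHAR(50)
    )
    DISTRIBUTE ON (store_id);

    CREATE TABLE sales_fact
    (
        transaction_id BIGINT,
        store_id       INTEGER,
        sale_date      DATE,
        sale_amount    NUMERIC(12,2)
    )
    DISTRIBUTE ON (store_id);

    -- Every row carrying store_id = 1234, from either table, hashes to the same
    -- SPU, so a join on store_id never has to leave that SPU's disk and CPU.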
Ahh, but what if we choose store_id and it turns out that a large portion of the IDs hash to a common SPU? What if we see, rather than 1 million records per SPU, around 500k per SPU with one SPU radically overloaded with the remainder? This will make the queries run slowly, because while all the SPUs but one will finish fast - in half the time as before - they will all be waiting on the one overworked SPU carrying all the extra data. This is called data skew and is detrimental to performance.
However, had we chosen a date or timestamp field, the outcome could have been even worse. We might see that the data is physically distributed across the SPUs in a very even form. But when we go to query the data, we will likely use a date as part of the query, meaning that only a few SPUs, perhaps even one SPU, will actually participate in the answer while all the others sit idle. This is called process skew and is also detrimental to performance.
Those are just some things to watch out for. If we stay in the model of using business keys for distribution and using dates for zone maps (which I will save for another entry) we will usually have a great time with the data and few worries at all.
Back to the configuration on store_id's. If we now query these two tables using store_id in the query itself, Netezza will co-locate the join on the SPU and will only allow records to emit from this join if there is a match. This means that the majority of the work will be happening in the SPU itself before it ever attempts to do anything else with it. What if we additionally clamped the query by using a date-field on the transaction table? The transaction table would then be filtered on this column, further reducing the join load on the machine and returning the data even faster.
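As a hedged illustration of that co-located, date-clamped join (the names are the hypothetical ones from the sketches above):

    SELECT d.store_name,
           SUM(f.sale_amount) AS total_sales
    FROM   sales_fact f
    JOIN   store_dim  d ON d.store_id = f.store_id   -- join on the distribution key, so the work is co-located
    WHERE  f.sale_date >= '2011-01-01'                -- the date clamp trims the fact rows before the join
      AND  f.sale_date <  '2011-04-01'
    GROUP BY d.store_name;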
So here is where we get lifting effects where other machines experience drag. For an SMP-based machine, adding joins can make the query run slower. On a Netezza machine, joins can serve as filter-effects, limiting the data returned by the query and increasing the machine's immediate information on where-not-to-look. Likewise the date-filter is another way we tell the machine where-not-to-look. As I have noted, the more clues we can give the machine as to where-not-to-look, the faster the query will behave.
I have personally witnessed cases where adding a join to a large table actually decreases the overall query time. Adding another one further decreases it. Adding another one even further and so on - five-joins later we were getting returns in less than five seconds where the original query was taking minutes. This filter-table and filter-join effect occurs as a free artifact of the Netezza architecture.
Distribution is simply a way to lay the data out on the disks so that we get as many SPUs working for us as possible on every single query. The more SPUs are working, the more hardware is being applied to the problem. The more hardware, the more the physics is working for us. Performance is in the physics, always and forever. Performance is never in the software.
While the above is powerful for read-query lift (co-located reading) there is another power-tool called the co-located write. If we want to perform an insert-select operation, or a create-table-as-select, we need to be mindful that certain rules apply when we create these tables. For example, if we want to perform a co-located read above and drop the result into an intermediate table, it is ideal for the intermediate table to be distributed on the same key as the tables we are reading from. Why is this? Because the Netezza machine will simply read the data from one portion of the disk and ultimately land it to another portion of the same disk. Once again it never leaves the hardware in order to affect the answer. If we were to distribute the target table on another key, the data will have to leave the SPU as it finds a new SPU home to align with its distribution key. If we actually need to redistribute, this is fine. But if we don't and this is an arbitrary configuration, we can buy back a lot of power just by co-locating the reads and writes for maximum hardware involvement.
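A minimal sketch of a co-located write, still assuming the hypothetical tables above: the intermediate table is declared on the same distribution key as its sources, so each SPU reads from and writes to its own disk.

    CREATE TABLE store_day_work AS
    SELECT f.store_id,
           f.sale_date,
           SUM(f.sale_amount) AS day_total
    FROM   sales_fact f
    JOIN   store_dim  d ON d.store_id = f.store_id
    GROUP BY f.store_id, f.sale_date
    DISTRIBUTE ON (store_id);      -- same key as the source tables: the result never leaves the SPU

    -- Had we written DISTRIBUTE ON (sale_date) instead, every result row would
    -- have to travel across the fabric to find its new SPU home.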
So that's distribution at a 50,000 foot level. We will look at more details later, or hop over to the Netezza Community to my blog there where some of these details have already been vetted.
It is important to keep in mind that while distribution and co-location are keys to high(er) performance on the machine, the true hero here is the SPU architecture and how it applies to the problem at hand. We have seen cases where applying a distribution alone has provided dramatic effects, such as a 30x or 40x boost, because of the nature of the data. This is not typical, however, since co-located reads will usually provide only a percentage boost rather than an X-level boost. This is why we often suggest that people clear their heads of jumping directly into a distribution discussion and instead put together a workable data model with a random distribution. Once loaded, profile the various key candidates to get a feel for which ones work the best and which ones do not. We have seen some users struggle with the data only because they prematurely selected a distribution key that - unbeknownst to them - had a very high skew and made the queries run too slowly. This protracted their workstreams and made all kinds of things take longer than they should have.
So at inception, go simple, and then work your way up to the ideal.
On multiple distribution keys:
Many questions arise on how many distribution keys to use. Keep in mind that this is as much a physical choice as a functional one. If the chosen key provides good physical distribution, then there is no reason to use more keys. More granularity in a good distribution has no value. However, a distribution key MUST be used in a join in order to achieve co-location. So if we use multiple keys, we are committing to using all of them in all joins, and this is rarely the case. I would strongly suggest that you center on a single distribution key and only move to more than one if you have high physical skew (again, a physical not functional reason). The distribution is not an index - it is a hash. In multi-key distributions, the join does not first look at one column then the next - it looks at all of them at once because they are hashed together for the distribution. Joined-at-the-hip if you will.
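For instance, a minimal multi-key sketch (hypothetical names): the two columns are hashed together, so the join must supply both of them to co-locate.

    -- Both tables distributed on the pair (customer_id, account_id)
    CREATE TABLE account_dim
    (
        customer_id INTEGER,
        account_id  INTEGER,
        open_date   DATE
    )
    DISTRIBUTE ON (customer_id, account_id);

    CREATE TABLE account_fact
    (
        customer_id INTEGER,
        account_id  INTEGER,
        txn_amount  NUMERIC(12,2)
    )
    DISTRIBUTE ON (customer_id, account_id);

    -- Co-location requires BOTH columns in the join; joining on customer_id
    -- alone will force a redistribution even though it is part of the key.
    SELECT f.customer_id, SUM(f.txn_amount) AS total_amount
    FROM   account_fact f
    JOIN   account_dim  a ON a.customer_id = f.customer_id
                         AND a.account_id  = f.account_id
    GROUP BY f.customer_id;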
On leaving out a distribution key:
One case had three tables joining their main keys to get a functional answer. They were all distributed on the same key, which was a higher level of data than the keys used in the join. The developer apparently thought that because the distribution keys are declared, the machine will magically use them behind the scenes with no additional effort from us. This is not the case. If the distribution key(s) (all of them) are not mentioned in the join, the machine will not attempt co-location. In this particular case, using the higher-level key would have extended the join somewhat but would not have changed the functional answer. Simply adding this column to the join reduced that query's duration by 90 percent. So even if a particular distribution key does not "directly" participate in the functional answer, it must directly participate in the join so as to achieve co-location. And if this does not change the functional outcome, we get the performance and the right answer.
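A hedged sketch of that fix, with hypothetical table and column names standing in for the case described: the higher-level key (region_id here) is the distribution key, and simply adding it to the join restores co-location without changing the answer.

    -- Both tables distributed on region_id, but originally joined only on order_id
    SELECT o.order_id, o.order_total, s.ship_date
    FROM   orders o
    JOIN   shipments s
      ON   s.order_id  = o.order_id
     AND   s.region_id = o.region_id;   -- adding the distribution key lets the join co-locate

    -- The extra predicate is redundant for the functional answer (an order's
    -- shipment is always in the same region), but it is what tells the machine
    -- it can keep the join local to each SPU.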
How it affects concurrency:
Many times people will ask: Why is it that the query runs fast when it's running alone, but when it's running side-by-side with another instance of itself they both slow to a crawl? This is largely due to the lack of co-location in the query. When the query cannot co-locate, it must redistribute the data across the inter-blade network fabric so that all the CPUs are privy to all the data. This quickly saturates the fabric, so that when another query launches, they start fighting over fabric bandwidth, not CPU bandwidth. In fact, some have noted that the box appears to be asleep because the CPUs and drives aren't doing anything during this high-crunch cycle. That's right, and it's a telltale sign that the machine's resources are bottlenecked at the fabric and not at the CPU or I/O levels. By co-locating the joins, we keep the data off the fabric and down in the CPUs where it belongs, and the query will close faster and can easily co-exist with other high-intensity queries.
For the past many months I have been diligently updating and upgrading the original 2008 Netezza Underground to address the many features of TwinFin, Striper and other offerings from IBM. I have recently been notified that it has passed final edit and is available on Amazon.com.
All I can say is "whew!" and many thanks to those who helped put it together. It's been a whirlwind.
Here is the URL
When I started the project I realized that a big part of the original book remains timeless. I didn't leave it "as is" though - practically every page and all the chapters have new material, case studies and such. I peppered the book with some additional graphics since the intrinsic points require a bit more reinforcement than mere words can provide.
The original chapter on "Distribution Stuff" is now "Performance Stuff" and is twice as long, covering the various aspects of setting up tables, troubleshooting, page-level zone maps and a lot more.
Fortunately, this time around there is a better mechanism to contact me if you have questions or want to report any errata (hey, it could happen!) - you can reach me through this blog, on LinkedIn or directly through my primary email address at Brightlight Consulting:
For a brief history of why ELT (that is, in-box-data-processing) is even a topic of discussion, we must recognize that the high-powered appliances such as Netezza have not only made such implementation viable, but even desirable.
Just so we level-set on what one of these looks like, it's a SQL statement. Usually an insert/select, but it can also include updates and the like. Many of you recognize these as multi-stage operations inside a stored procedure. The sentiment of course is that such an animal can perform better inside-the-machine than taking the data out of the machine (through an ETL tool), processing it, and putting it back. This may be true for smaller data sets, but you aren't reading this because you have smaller data sets, are you? Netezza users are big-game hunters. Pea-shooters are for small animals, but if we want big game, we'd better put on a bigger game face and bring bigger equipment. (For the record, the kind of statement we're talking about looks like the sketch below.)
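A minimal, hypothetical sketch of such an in-the-box transform - the staging and target table names are invented for illustration:

    -- A typical ELT transform: read, cleanse and load in one pass,
    -- entirely inside the appliance.
    INSERT INTO sales_fact (transaction_id, store_id, sale_date, sale_amount)
    SELECT s.transaction_id,
           s.store_id,
           CAST(s.sale_date_text AS DATE),
           s.sale_amount
    FROM   stage_sales s
    LEFT JOIN sales_fact f ON f.transaction_id = s.transaction_id
    WHERE  f.transaction_id IS NULL            -- simple de-dupe against the target
      AND  s.sale_amount IS NOT NULL;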
But is it really the size of the equipment? Netezza users know that the size and the architecture are keys to success. Dog-piling a bunch of CPUs together does not a power-frame make.
Okay, back to the storyline here - in-the-box SQL-transforms, in a traditional RDBMS platform, are the realm of small game. Once the game gets bigger, these transforms degrade in performance, and rather rapidly. Watch as a swarm of engineers tries to reconstruct and rebuild the procedures, the SQL, the tables and even upgrade the hardware in a futile attempt to achieve incrementally more power. Emphasis on incrementally-more. Not linearly more.
As they grow weary of this battle, the ETL tools start looking better and better. We reach a critical mass, and the ELT-in-the-box is summarily forsaken as we stand up the ETL tool.
Sometime after this transition, the data volumes once again overwhelm the environment. One thing leads to another, and one day the Netezza machine arrives on the floor. Hopes are high.
But notice the transition above - ELT was forsaken for ETL, likely never to return. But wait, now we really have the power to do the ELT SQL-transforms, but we've mentally and perhaps emotionally (yeah, verily even politically) moved away from SQL-transforms.
Some reading this might scratch their heads and think, What's he talking about? We've been doing ELT in the machines using (PLSQL, etc, name your approach here) for many years. Why would we shy away from it?
- Hand-crafted SQL statements. They are easy to deploy but difficult to maintain and re-deploy. The moment we have to change the data model, this becomes painfully obvious.
- Proliferation of one-off operations, in cut-and-paste form, increasing inefficiency and complexity of the overall implementation
- Lack of overarching control - stored procs run and go "black" for an hour or more without any clue as to their status
- They don't scale in a traditional (SMP-based) RDBMS - the larger the data gets, the more the implementation will suffer
- ETL tools don't support them in the quantity and logistical quality we really need, but they are getting there.
Why not use an ETL Tool? I mean, they handle push-down SQL-generation right?
I can perhaps summarize this situation with a single conversation I had with an ETL tool vendor who was hawking the capability for his own tool. After showing me the mechanics of making one of these little jewels operate with aplomb, I asked him, "So what about doing a few hundred of these in a sequence, or branching them into multiple sequences?" The vendor rep looked almost hurt. "Why would you want to do that?"
Well, if we're really talking about migrating transforms into the machine, this can grow into hundreds of operations rather quickly. This situation apparently overwhelms the logistical capability of even the most powerful ETL tool. But I have hope that they will solve this situation. Eventually.
I am not holding out hope that it will happen soon, or voluntarily. These tool vendors have invested millions in the performance boosting of their own products and will not likely toss this investment on the flames of the appliance movement, even if said flames are the exhaust flames of the appliance's rocket engines. This is why I say "eventually". They don't really have a marketable reason to embrace this approach.
Another problem of the ETL tool is that it is so divorced from the appliance's infrastructure that it cannot control the cross-environment logistics. This is especially true of the "virtual transaction" - that is - multiple flows arriving in multiple tables that are each in context of one another, yet are individually shared-nothing operations. If one of the flows should fail, how do we back out of this? Can we do a rollback of the tables where their flows succeeded? No we cannot. We could certainly implement an approach, but this is neither inherent nor intrinsic to the ETL tool. We need a shared-nothing virtual transaction that will control all of the flow in a common context, commit them in that context and faithfully rollback the results in that same context. ETL tools don't go there. Unless we implement the tool that way. As an application. Once implemented, how do we reuse this for the next application, and the next? We can see that it's not part of the tool itself.
If next-generation "ELT" scenarios are to be successful, they need several very important capabilities that are simply non-optional and non-trivial:
- Catalog-aware - The entire application must be coupled to the catalog definitions so that changes in definitions will automatically drive the resolution of their impact. We don't want to go on a submarine hunt to find every place where a particular column is used. The tool should already know.
- Logistical control - the ELT management needs to be able to support hundreds of transforms with arbitrary branches and synchronization
- SQL-generation from templates that are tied to the catalog - if the table changes, we can propagate the change automatically into the transforms.
- Proper handling of shared-nothing operations into the target, including virtual transactions that we can replicate to other databases, or rollback completely.
- Treat internal appliance databases as "waypoints" rather than "endpoints". That is, multiple databases are in play and used for various roles, such as intake, intermediate workspace, and persistence. These resources should be configurable and relatively transparent to the tool.
- Support of major patterns such as CDC, Dedupe, column-level scrubs and transforms, masking, column remapping, soft-delete, slowly-changing-dimensions and of course, analytic functions etc.
I am sure the visitors of this blog have even more aspects of a "wish list" that they have implemented (perhaps painfully so) and want more control over the data, its processing logistics and error control and recovery. Feel free to add your own comments and suggestions here.
Sometimes the average Netezza user gets a bit tripped-up on how an MPP works and how co-located joining operates. They see the "distribute on" phrase and immediately translate "partition" or "index" when Netezza has neither. In fact, those concepts and practices don't even have an equivalent in Netezza. This confusion is simply born of the notion that Netezza-is-like-other-databases-so-fill-in-the-blank. And this mistake won't lead to functional problems. They will still get the right answer, and get it pretty fast. But it could be soooo much faster.
As an example, we might have a traditional star-schema for our reporting users. We might have a fact table that records customer transactions, along with dimensions of a customer table, a vendor table, a product table etc. If we look at the size of the tables, we find that the product and vendor tables are relatively small compared to the customer, and the fact table dwarfs them all. A typical default would be to distribute each of these tables on their own integer ID, such as customer_id, vendor_id etc. and then putting in a transaction fact record id (transaction_id) that is separate from the others, even though the transaction record contains the ID fields from the other tables.
Then the users will attempt to join the customer and the transaction fact using the customer_id. Functionally this will deliver the correct answer but let's take a look under-the-covers what the performance characteristics will be. As a note, the machine is filled with SBlades, each containing 8 CPUs. For example, if we have a TwinFin-12, this is 12 SBlades with 8 CPUs, or 96 CPUs. They are interconnected with a proprietary, high-speed Ethernet configured to optimize inter-CPU cross-talk.
Also whenever we put a table into the machine, it logically exists in one place, the catalog, but physically exists on disks assigned to the CPUs. A simplistic explanation would be that if we have 100 CPU/disk combinations and load 100,000 rows to a table that is distributed on "random", each of the disks would receive exactly 1000 records. When we query the table, the same query is sent to all 100 CPUs and they only operate on their local portion of the data. So in essence, every table is co-located with every other table on the machine. This does not mean however, that they will act in co-location on the CPU. The way we get them to act in co-location (that is, joining them local to the CPU) is to distribute them on the same key.
But because our noted tables are not distributed on the same key, they cannot co-locate the join. This means that the requested data from the customer table will be shipped to the fact table. What does this look like? Because the customer table has no connection to the transaction_id, the machine must ship all customer records to all blades (redistribution) so that the CPUs there can attempt to join on the body of the customer table. We can see how inefficient this is. This is not a drawback of the Netezza machine. It is a misapplication of the machine's capabilities.
Symptoms: One query might run "fine". But two of them run slow. Several of them even slower. Results are inconsistent when other activities are running on the machine. We can see why this is the case, because the processing is competing for the fabric. Why is this important to understand? The inter-CPU fabric is a fairly finite resource and if we allow data to fly over it in an inefficient manner, it will quickly saturate the fabric. All the queries start fighting over it.
Taking a step back, let's try something else. We distribute the transaction_fact on the customer_id, not the transaction_id. Keep in mind that the transaction_id only exists on the transaction table so using it for distribution will never engage co-location. Once we have both tables distributed on the customer_id, let's look at the results now:
When the query initiates, the host will recognize that the data is co-located and the data will start to join without ever leaving the CPU where the two table portions are co-located. The join result is all that rises from the CPU, and no data is shipped around the machine to affect the answer. This is the most efficient and scalable way to deal with big-data in the box.
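For concreteness, a minimal sketch of that redistribution, re-using the fact and key names from the example above (customer_dim and sale_amount are hypothetical stand-ins for the customer table and measure column); in practice this is typically a one-time rebuild followed by a drop/rename swap:

    -- Rebuild the fact on the shared key rather than its own surrogate key
    CREATE TABLE transaction_fact_redist AS
    SELECT *
    FROM   transaction_fact
    DISTRIBUTE ON (customer_id);

    -- With both sides on customer_id, this join now runs entirely on each CPU's
    -- local data slice; only the joined result rises from the CPUs.
    SELECT c.customer_id, SUM(f.sale_amount) AS total_spend
    FROM   transaction_fact_redist f
    JOIN   customer_dim c ON c.customer_id = f.customer_id
    GROUP BY c.customer_id;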
Now another question arises: If the vendor and product dimensions are not co-located with the transaction_fact, how then will we avoid this redistribution of data? The answer is simple: they are small tables so their impact is negligible. Keep in mind that we want to co-locate the big-ticket-or-most-active tables. I say that because we have sites that are similar in nature where the customer is as large as two of the other dimensions, but is not the most active dimension. We want to center our performance model on the most-active datasets.
This effect can rear its head in counter-intuitive ways. Take for example the two tables - fact_order_header and fact_order_detail. These two tables are both quite monstrous even though the detail table is somewhat larger. Fact_order_header is distributed on the order_header_id and the fact_order_detail is distributed on the order_detail_id. The fact_order_detail also contains the order_header_id, however.
In the above examples, the order header was being joined to the detail, along with a number of other keys. This achieved the correct functional answer, but because they were not using the same distribution key, the join was not co-located. So we suggested putting the order_detail table on the same distribution as the order-header (order_header_id). Since the tables were already being joined on this column, this was a perfect fit. The join received an instant boost and was scalable, no longer saturating the inter-CPU fabric.
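Sketched concretely (assuming the column names as described), the change is only to the detail table's distribution; the join itself was already using order_header_id:

    -- fact_order_detail was distributed on order_detail_id; rebuild it on the
    -- header's key, which it already carries and already joins on
    CREATE TABLE fact_order_detail_redist AS
    SELECT *
    FROM   fact_order_detail
    DISTRIBUTE ON (order_header_id);

    -- The existing join is now co-located on every SPU
    SELECT h.order_header_id, COUNT(*) AS line_count
    FROM   fact_order_header h
    JOIN   fact_order_detail_redist d
      ON   d.order_header_id = h.order_header_id
    GROUP BY h.order_header_id;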
The problem was in how the data architects thought about the distribution keys. They were using key-based thinking (like primary and foreign keys) and not MPP-based thinking. In key-based thinking, functionality flows from parent-to-child, but in MPP-based thinking, there is no overriding functional flow of keys - it's all about physics. This is not to say that "function doesn't matter" but we cannot put together the tables on a highly physical machine and expect it to behave at highest performance unless we regard the physics and protect the physics as an asset. Addressing the functionality alone might provide the right functional answer, but not the most scalable performance.
After completing another migration from a traditional, general-purpose RDBMS to the Netezza technology, I visited a friend who had several artifacts in his home that had to be the strangest things I'd ever seen.
Now I'd heard of genetically altered cat fur, you know, buying a cat online while picking the fur color of your choice (blue, lavender, teal - etc). Seems like an odd thing to do to a cat. I like cats and dogs both, so don't imagine that this blog essay attempts to take sides. Some folks are downright serious on their choice of pet, so I'll smooth that fur wherever I can.
Back to the hairless cats. I asked him "Where did you find these? And what happened to your other cats?" And he laughed, "That's a funny story. These are my cats, but I had to shave them." To this, I rolled my eyes, wondering where this was about to lead. He told the tale:
"We took a vacation down south and put the cats in kennels in the back of the truck. The round-trip took a toll on their fur and matted up everything from head to toe. Weeks later, the cat fur had not smoothed out. The kids had been brushing it out but it wasn't working. And if you think they look ridiculous shaved, you have no idea how silly they looked with their hair matted."
"So you shaved them?" I asked.
"Sure" he said, "Seems practical right? Just get rid of the matted hair altogether. Teasing it apart would have taken, well, years of time. Their fur will grow back out soon enough"
"Aren't there, you know, shampoos and stuff for that? I mean, shaving seems a little extreme."
"Tried all that. Bad thing about it, mats are bad for cats - they cause infections and all kinds of nasty side effects. Best to just shave it all and be done with it."
Laughing on the inside, I thought a bit about how we have to decompose and de-engineer an organically-grown data warehouse. Some would suggest porting (forklifting) the whole thing over "as is". Like taking a matted-hair cat and moving them from one house to another. It changes the venue for the cat, but doesn't help the cat at all. It's still sick and getting sicker from the mats. Such folks "tell a tale" of the success of their migration derring-do. But they are like nomads. Hunting the game until there's no more, then pulling up stakes to find another place to burn out. Forklift-migrations have value only to the ones who are doing the migrating, not the recipients. No sooner will they tie a bow on it than someone will request a change, and we will discover what we already knew: The original data model (now the new data model) isn't very resilient to change no matter where it is hosted.
We realize we have ported both the good and the bad from the old system, when we had the opportunity to port the good and leave the bad behind. We essentially are agreeing that we are about to standardize on the past and then accommodate the future, rather than a better approach: standardize on the future and accommodate the past.
Many years ago, at one site we had to carefully tease apart the data and the stored procedures to find out what they were actually doing. Unfortunately we had carved up the work for several teams rather than reviewing it together. Had we reviewed it together, we would have discovered that the stored procs executed in chains of work, and that many of the chains were copy-pasted from one original chain that was too "matted" to risk breaking. So they copied-and-modified this chain to perform the new functionality. Enough of these and we see how the stored procedure doesn't benefit us (at all) for back-end data processing. In fact, we strongly suggest people use stored procs in Netezza for BI-adaptation and optimization, for the presentation layer. But not for the back end. Stored procs are not operationally viable for a wide range of reasons. It's even funny how folks move from one technology to another and try to replicate the stored procedure logic as a knee-jerk exercise, without realizing how flawed it really is. Perhaps the Netezza stored proc will run a lot faster. Trust me, performance is the least of your worries.
So once we converged the teams, these themes started popping out like rabbits. By the end of the first day we were all laughing at the sheer level of redundancy in the back end. But not particularly surprised at the outcome. We'd seen it in lots of places before, just never this bad.
Of course, it never once occurred to us that we would port these hundreds of stored procs over to the new system. Rather, we would functionally specify what they were doing, and leverage tools to accommodate the vast majority of the functionality, only building what was left over. I mean, this is a standard functional port, why complicate things? Forklifting into a Netezza machine will certainly yield 10x performance, so why the beef? Because without optimizing the data structures and processes to leverage Netezza's power, we might get 10x but leave 100x on the table. Is this a good tradeoff?
Well, true to form, someone had the capacity to complicate things. He whipped out a spreadsheet and calculated the cost of the hundreds-of-stored-procs in the original system, not realizing we were planning to reduce these to maybe fifteen operations at most. Spreadsheet calculator in hand, he estimated that it would take 24 people, 8 months, to handle these stored procs. I sat back in my seat, stunned, because he was costing a project we weren't about to undertake. Rather, building out 15 or so operations would require a handful of people and 90 days at the outside. But also true to form, the project principals saw visions of sugar plums (another word for sales-comp) that got in the way of their better judgment. They actually went to the client with these inflated numbers; the client rejected their proposal outright and gave the business to someone else. It's easy to lose a deal when the client sees the inflation on-the-page.
But what our "spreadsheet guy" missed, was that we weren't about to embark on a journey of finding a home for each stored proc (we already knew this had no value, and the client knew it too). He believed that we intended to bring the matted-cats into the house and put them on pillows, when we intended to pick the cats we wanted to keep, and shave them.
Okay, that's a strange analogy, but we had no intention whatsoever of accepting all that convoluted spaghetti as the foundation for the go-forward system.
Netezza gives us the capacity - to simplify. We keep the parts we consider valuable (the cat) and get rid of all the mess that keeps the cat sick and unhappy. Taking only the functions we want, we then reconstruct (let the hair grow back out) only what we want to keep, and take the opportunity to apply some solid architectural principles and likewise capitalize on the strengths of the Netezza platform.
In the end, if we really have a platform that is standardized on the future, but accommodates the past, we also have something else that is even more powerful: A simpler, stronger engine that is ready to grow in functionality, adapting to our changing needs. The old system was never built with this kind of vision or priority, because the power wasn't there to affect it anyhow.
When introducing the Netezza platform to a new environment, or even trying to leverage existing technologies to support it, very often the infrastructure admins will have a lot of questions, especially concerning backups and disaster recovery. Not the least of which are "how much," "how often" and such like. More often than not, every one of our responses will be met with a common pattern, a sentence starting with the same two words:
Case in point, when we had a casual conversation with some overseers of backup technology as a precursor to "the big meeting" we almost - quite accidentally - shut down the conversation entirely. Just the mention of "billions" of rows, or speaking of the database in "Carl Sagan" scaled terms, caused them to want to scramble for budget and market surveys of technologies that were more scalable than their paltry nightly tape-backup routines. In this particular conversation, we were talking about nightly backups that were larger than the monthly backups of all of their other systems combined. Clearly we were about to pop the seams of their systems and they wanted a little runway to head off the problem in the right way.
But what is the "right way" to perform a backup where one of the tables is over 20 terabytes in size, the entire database is over 40 terabytes in size and the backup systems require two or even three weeks to extract and store this information for just one backup cycle? Quipped one admin "It takes so long to back up the system that we need to start the cycle again before the first cycle is even partially done." and another "Forget the backup. What about the restore? Will it take us three weeks to restore the data too? This seems unreasonable."
Yes it does seem unreasonable precisely because it is - quite unreasonable. As many of you may have already discovered, the Netezza platform is a change-agent, and will either transform or crush the environment where it is installed, so voracious are its processing needs and so mighty is its power to store mind-numbing quantities of information.
The aforementioned admins simply plugged their backup system into the Netezza machine, closed their eyes and flipped the switch, then helplessly watched the malaise unfold. It doesn't have to be so painful. These are very-large-scale systems that we are attempting to interface with smaller-scale systems. We might think that the backup system is the largest scaled system in our entire enclosure, but put a Netezza machine next to it and watch it scream like a little girl.
So here's the deal: No environment of this size should be handled in a manner that is logistically unworkable for the infrastructure hosting it. We can say all day that these lower-scaled technologies should work better or that Netezza should pony-up some stuff to bridge the difference, but we all know that it's not that easy. Netezza has simplified a lot of things, but simplification of things outside the Netezza machine - aren't we asking a bit much of one vendor?
To avoid pain and injury, think about the things we need to accomplish that are daunting us, and solve the problem. The problem is not in the technology but in the sheer size of the information. We would have the same problem on our home computers if we had a terabyte of data to back up onto a common 50-gig tape drive. We would need twenty tapes to store the data. The backup/restore technology works perfectly fine and reasonably well for a variety of large-scale purposes. We simply need to be creative about adapting it to the Netezza machine. Don't plug it in and hope for the best. Don't do monolithic backups. The data did not arrive in the machine in a monolithic manner, so why are we trying to preserve it that way? Leave large-scale storage and retrieval to the Netezza machine and don't crush the supporting technologies with a mission they were never designed for.
Several equally viable schools of thought are in play here. What we are looking for is the most reliable one. Which one will instill the highest confidence with the least complexity? The more complex a backup/restore solution becomes, the less operational confidence we have in it. If it cannot back up and restore in a reasonable time frame, we exist in a rather anxious frontier, wondering when the time will come that the restoration may be required, and we put our faith in the notion that it either won't, or that when it does, all of the other collateral operational issues will eclipse the importance of the restoration. In other words, future circumstance will get us off the hook. There is a better way: a deterministic and testable means to truly back up and restore the system with high reliability and confidence.
On deck is the simplest form of the solution - another Netezza machine. Many of you already have a Disaster-Recovery machine in play. Trust me when I tell you that this should be fleshed out as a fully functional capability (discussed next) and then the need for a commodity backup/restore technology evaporates. Using another Netezza machine, especially when leveraging the Netezza-compressed information form, allows us to replicate terabytes of information in a matter of minutes. I don't have to point out that none of our secondary technologies can compete with this.
A second strategy requires a bit more thought, but it actually does leverage our current backup/restore technology in a manner that won't choke it. It won't change the fact that the restoration, while reliable, may be slow simply because moving many terabytes in and out of one of these secondary environments is inherently slow already.
A third strategy is a hybrid of the second, in that enormous SAN/NAS resources are deployed as the active storage mechanism for the data that is not on (or no longer on) the Netezza machine. This can be a very expensive proposition on its own. We know of sites that keep the data on SAN in load-ready form, and then load data on-demand to the Netezza machine, query the just-loaded data, return the result to the user and then drop the table. You may not have on-demand needs of this scale, but this shows that Netezza is ready to scale into it.
A fourth strategy is a hybrid of the first, in that we still use a Netezza machine to back up our other Netezza machine, but we use the more cost-effective Netezza High-Capacity server, which is less expensive than the common TwinFin (fewer CPUs, more disk drives) but otherwise behaves in every way identically to its more high-powered brethren. And honestly, if we were to put apples-to-apples in a comparison between the cost of a big SAN plant to store these archives versus the High Capacity Server, the server wins hands-down. It's cheaper, simpler to operate, doesn't require any special adaptation and we can replicate data in terms guided by catalog metadata rather than adapting one technology to another.
So let's take these from the least viable to the most viable and compare them in context and contrast, and let the computer chips fall where they may.
Commodity backup/restore technology
If we want to leverage this, we need to understand that it cannot be used to perform monolithic operations. These are unmanageable for a lot of reasons.
To mitigate the above, we need to adapt the large-scale database to the backup technology by decoupling and downsizing the operation into manageable chunks. This is a direct application of the themes surrounding protracted data movement in any environment. The larger the data set, the more the need for checkpointed operations so that the overall event is an aggregation of smaller, manageable events. If any single event fails, it is independently restartable without having to start over. Case in point, if I have 100 steps to complete a task and they are all dependent upon one another, and the series should die on step 71, I still have 29 steps remaining that may have completed without incident, but I cannot run them without first completing step 71. This is what a monolithic backup buys us - an all-or-nothing dependency that is not manageable and I would argue is entirely artificial.
To continue this analogy, let's say that any one of these 100 steps only takes one minute. In the above, I am still 30 minutes away from completion. I arrive at 6am to find that step 71 died, and now I have to restart from step 1 and it will cost me another 100 minutes. Even if I could restart at step 71, I am still 30 minutes away from completion.
Contrast the above to a checkpointed, independent model. If we have 100 independent steps and step 71 should die, the remaining 29 steps will still continue. We arrive at 6am to find that only one of the 100 steps died and we are only 1 minute away from full completion. The difference in the two models is very dramatic:
Monolithic means we are operationally reactive when failure occurs. The clock is ticking and we have to get things back on track and keep moving. Checkpointed means we are administratively responsive when failure occurs. We don't have to scramble to keep things going. In fact, in the above example, if step 71 should fail and the operator is notified, doesn't the operator have at least half an hour to initiate and close step 71 independently of the remaining 29 steps? Operators need breathing room, not an anxious existence.
Monolithic methods are supported de-facto by the backup/restore technology. If we want to perform a checkpointed operation, we have to adapt the backup/restore process to the physical or logical content of the information. We don't want to directly mate the backup technology itself, so we need to adapt it.
Logical Content Boundaries
This means we have to define logical content boundaries in the data. What's that? You don't have any logical content boundaries, not even for operational purposes? Well, per my constant harping on enhancing our solution data structures with operational content, such as job/audit IDs and other quantities, perhaps we need to take a step back and underscore the value of these things, because they exist for a variety of reasons. One of them is now upon us - the operational support of content-bounded backups. It is required for scalability and adaptability and is not particularly hard to apply or maintain.
A more important aspect of content boundary is the ability to identify old versus new data. If the data is carved out in manageable chunks, some will always be least-recent and some more-recent. Invariably the least-recent chunks will be identical in content no matter how many times we extract them for backup. This means we can extract/backup these only once and then focus our attention on the most active data. In a monolithic model, there is no distinction between least or most active, least or most recent. In large-scale databases, the least-recent data is the majority, so the monolithic backup is painfully redundant when it need not be.
Do we absolutely need content-bounded backups for all of our tables to function correctly? Of course not. But by applying this as a universal theme it allows us to treat all tables as part of a backup framework where all of them behave the same way. So part of this is in the capture of the data but a larger part is the operational consistency of the solution.
Many reference tables such as lookups will never grow larger and we know this. In fact, their data may remain static for many years. For the ones that are tied to the application and grow or change every day, these we will call solution tables. They are typically fed by an upstream source and are modified on a regular basis. Any of these tables can grow out of control. The reference data then represents a very low operational risk. Why then would we not simply fold the reference data into the larger body and treat all the tables the same? There is no operational penalty for it, but enormous benefit from being able to treat all tables the same way inside a common solution.
At this point, the backup/restore framework will address all of the tables the same way, but now we have the ability to leverage rules and conditions within the framework so that special handling is available if necessary. This is a common theme in large-scale processing: handle everything as though it will grow, but accommodate exceptions with configuration rules. I'll forego this aspect for now; let's take a look at what we need in basic terms:
- An intermediate landing zone: some off-machine storage location that can hold the data while in transit - e.g., a NAS or SAN volume. This will not be the same size as the database but some fraction of it. It is a workspace for intermediate files, not permanent storage
- Content boundaries: a means to define and manage content boundaries - these are tags or quantities in the data itself. They need to be consistent for all tables so that backup and restore operate the same way. Of course, if we don't have these, we need to apply them.
- A restoration database: a separate database that will be used for the restoration process, because we won't be restoring the data directly into the table from whence it came. Why not? There is an increasing likelihood that the data structures have changed since the last backup. If the structure has changed, we cannot reload the data into it. We need a way to shepherd the data back into the machine.
- Processes: scripts that capture / back up / restore the data using content boundaries and common utilities. These are typically control scripts that are easily configured, deployed and maintained
- Archival Database (next): Physical content replicas of the masters that hold all of the original information in a form useful for backup and restore. Whether they keep the data active/online is immaterial. Their role is to interface with the backup/restore mechanisms so that any data once made offline can easily be made online through this database.
Set up formal archival databases. Whether these reside on the same server, or on a DR / High Capacity server (below), is immaterial. The point is that the data in the master tables will be actively rolled into these tables, which will form the backbone for all backup and restoration operations. We therefore replicate the masters (with streaming replication) into the archival stores and then at some interval perform the backup of the archives, not the masters.
Foreshorten the Master Tables
Now that we have a means to define content boundaries (you did apply those boundaries, right?), we can look at the database holistically for optimizations based on active data.
At one site we had ninety billion records across three different fact tables, spanning over ten years of information. However, the end-users and principals claimed (and we verified) that the most active data on any given day is the most recent six months. Anything prior to that, they would query perhaps once or twice a year for investigative purposes, but had not tied any reports to it.
So now we have an opportunity to get agreement from the users to shorten the master tables. This is especially necessary for those fact tables. In the end, these fact tables (above) were shortened to less than four billion rows each and these are kept trimmed on a regular basis. The original long-term data is held in another archival database that is the foundation for the backups and restores.
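A hedged sketch of that trimming, using hypothetical table names and a date-based content boundary (a job/audit ID or month key works the same way); the archival table is the kind described earlier and is itself the source for backups:

    -- Roll everything older than the agreed window into the archival table...
    INSERT INTO sales_fact_archive
    SELECT * FROM sales_fact
    WHERE  sale_date < '2011-01-01';

    -- ...then trim the master so the active table stays short
    DELETE FROM sales_fact
    WHERE  sale_date < '2011-01-01';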
- On a per-table basis, use the content boundaries to manufacture/compile a set of checkpointed instructions for each table to support extraction and restoration
- Execute the extractions to create flat files
Each extraction will pull data to a Netezza-compressed flat file and also supply another file with the metadata instructions necessary to restore the file's contents. On a per-file basis, these instructions will include:
- The original table definition, used to construct a template of the table in the restoration database
- The file load instructions used to get the content back into the machine
- The transformation command, used to move data from the restoration database into its original home
- The files will be extracted (using content boundaries) onto the storage landing zone
- The smaller extractions can be performed in parallel, are independent and form their own checkpointed process
- The backup technology will begin its backup process once the second (metadata companion) file arrives for the set. The metadata companion file behaves as a trigger.
- Once the backup for the file-set is complete, the backup/restore deletes the file and its companion
- This decouples the backup/restore processing from the database (completely) so that it can focus solely on file-based backup and restore
- For restoration, both the content file(s) and companion metadata file(s) will be retrieved and placed on the storage landing zone
- The metadata will be used to reconstruct an empty intermediate version of the original table in the Restoration Database
- Load the table using nzload
- Then the metadata will be used (even a preformulated SQL statement) to copy the data into the original target
- Throw away the intermediate load table, delete the data file and its companion metadata file
- The restoration process can be run as many ways-parallel as supported by nzload
- The restoration process can also be surgical, with the more agile ability to restore data in smaller segments
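As a minimal sketch of one checkpointed file-set, using hypothetical names and the SQL external-table equivalent of the nzload step (exact external-table options can vary by NPS release, so treat this as illustrative rather than definitive):

    -- 1. Extract one content-bounded chunk to a Netezza-compressed flat file
    CREATE EXTERNAL TABLE '/backup/stage/sales_fact.2010-12.dat'
    USING (FORMAT 'internal' COMPRESS true)
    AS
    SELECT * FROM sales_fact
    WHERE  sale_date >= '2010-12-01' AND sale_date < '2011-01-01';

    -- (the companion metadata file - original DDL, load options and the final
    --  copy statement - is written alongside it and acts as the trigger for
    --  the backup software)

    -- 2. Restore: rebuild an empty template of the original structure in the
    --    restoration database, reload the file, then copy the rows home
    CREATE TABLE sales_fact_restore_201012
    (
        transaction_id BIGINT,
        store_id       INTEGER,
        sale_date      DATE,
        sale_amount    NUMERIC(12,2)
    )
    DISTRIBUTE ON RANDOM;

    INSERT INTO sales_fact_restore_201012
    SELECT * FROM EXTERNAL '/backup/stage/sales_fact.2010-12.dat'
    USING (FORMAT 'internal' COMPRESS true);

    INSERT INTO sales_fact SELECT * FROM sales_fact_restore_201012;
    DROP TABLE sales_fact_restore_201012;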
The above protocol, while likely easy to pull off and administer, still has a number of moving parts that the Netezza-based equivalent (later) will not have.
The only difference between the above protocol and one using a SAN-based storage mechanism, is the absence of a formal backup/restore technology. Rather the SAN is the long-term storage location and we perform the incremental extractions onto it. Rather than delete the files, we keep them.
This has significant implications for the cost of the SAN. After all, if we intend to interface this to the Netezza machine, we would not want common NAS storage because it is too slow and the vendors actively disclaim their technology from being viable for data warehousing. The primary reason is that the network and CPUs are set up for load-balancing, like a transactional database, but not bulk onload/offload of the data.
Not only will we need enough SAN to backup the environment, but to also carry fathers and grandfathers if need be (this is a policy decision). With checkpointed extracts, the father/grandfather issue is largely moot. This is because once a checkpointed extract of older data is pulled and stored, it won't be changing and capturing another one just like it has no value.
In this approach we leverage another Netezza machine, like a DR server, as our backup, archival and restore foundation. It can easily hold the information quantity. The difference here is in price, of course, since a fully-functional TwinFin is more expensive than most common SAN installations. However, the High Capacity Server (below) mitigates this pricing problem while delivering a consistent data experience.
One primary benefit of performing backups into the DR server is that it can automatically serve the role of a hot-swap server in case of failure in the primary server.
For this scenario to work however, we would want streaming replication between the active databases and the DR server so that the data is being reconciled while being processed. This allows us to have a fully functional hot-swap if the primary crashes, and we can continue uninterrupted while the primary is serviced. Word to the wise on this kind of scenario however, bringing back the primary means that it is out of sync, since the secondary took over for a span of time. So we would need to be able to reverse the streaming replication to make it whole.
Scenarios like this often embrace the practice of operationally swapping the two machines on defined boundaries, like once a month or once a quarter, where they actually switch roles each time. This allows the operations staff to gain confidence in the two machines as redundant to each other in every way. I have seen cases like this where the primary machine went down, the secondary machine kicked in seamlessly and all was well. I have also seen cases where the principals kept the DR server up to date, but when it came time to operationally switch, some important piece (usually in the infrastructure between the devices) was missing, causing the failover itself to fail. It is best to have a plan in place, but it's better to have tested the plan and confirmed that it actually works.
- On a per-table basis, use the content boundaries to manufacture/compile a set of checkpointed instructions for each table to support extraction and restoration
- Using a variation of the nzmigrate utility, perform the table-level extractions
- Extract data in Netezza-compressed form to a flat file
- Load the flat file into a restoration database in the second machine
- Perform the transform-execution to copy data from the restoration database into the target table
- The extractions, restorations and transformations can all leverage simple scripts and catalog metadata, not become dependent on deeply hand-crafted code
- The intermediate (SAN) landing zone requires less space because the data is being transferred in Netezza-compressed format and it is cleaned up on-the-fly
- Transfer is quicker because Netezza-compressed data is written to the data slices directly instead of coming through the host.
A word on Netezza-compressed transfer. I wrote about this in Netezza Transformation but it is important to highlight here. We performed an experiment moving half a terabyte scattered across a hundred or more tables. This data was moved from its original home to a database in another machine. The first method used simple SQL-extract into an nzLoad component. This process took over an hour. The second method used transient external tables with compression, coupled with an nzload in compressed mode. The entire transfer took less than six minutes. This was because the compressed form of the data was already 14x compressed.
In another experiment, using over 20x compression for the data, we were able to transfer ten terabytes in less than an hour. This kind of data transfer speaks well for the streaming replication necessary for DR server operation (above) and underscores the fact that even when transferring between Netezza machines, it's as though we haven't left the machine at all.
Netezza-based High-Capacity Server
This option is simply a form of the Netezza-based hybrid (above) but on a dedicated server designed to support backup and recovery.
The better part about this server is that it has more disk drives and fewer CPUs, making it far more cost-effective for storage than common SAN devices. Couple this with the minimal overhead required for transferring data between machines, and the ability to surgically control the content with the content boundaries and catalog metadata, and we get the best of all worlds with this device.
Not only that, but it is also scalable to support storage of all other Netezza devices in the shop as well as any non-Netezza device where we simply want to capture structured information for archival purposes. The High Capacity server is queryable also, meaning that even the ad-hoc folks will find some value in keeping the data online and available.
Lastly, in Spring 2010, as part of the safe-harbor presentation one of the principals at IBM Netezza announced plans for a replication server. I can only imagine that this device will deliver us from any additional hiccups associated with streaming replication that we might now be doing in script or other utility control language.
At Brightlight, our data integration framework (nzDIF) has the nz_migrate techniques built directly into the flow substrate of the processing controller, as well as the enforcement and maintenance of the aforementioned content boundaries. We are actively acquiring and applying the best, most scalable and simplest approaches as a solution framework firmly lashed to a purpose-built machine. I am a big proponent of encouraging Enzees to take on these things themselves, or at least let us coach you on how to make it happen. The solutions are simple because the Netezza platform itself is simplified in its operation. Stand on the shoulders of genius - the air is good up here.
Once again this entry will serve as an intro for the un-initiated. I get a lot of pings from folks who are new to the platform and want to know their way around the machine from the beginning. Zone maps are bread-and-butter power tools of the Netezza technology. The companion capabilities are the materialized view and the clustered base table (new to v6.0).
About three years ago I sat in on a special presentation to our firm from one of Netezza's competitors. They put up some PowerPoint slides to compare their technology to "the others". Their product was a data warehousing product and all the "others" were "OLTP" products. The usual suspects were in this list of course, but oddly so was Netezza. But Netezza is not an OLTP machine in any sense of the word. How odd that this company did not know enough about its primary rival to discern this simple truth. In addition, this vendor made certain odd claims about Netezza that were either architectural false-assumptions or just misinformation. One of these was: "In a Netezza machine, the more data you add to a table, the slower the table's response will be." This is of course completely false. Netezza tables are like any data storage with one major takeaway that should not be missed:
Netezza tables may behave inconsistently if the data is not properly configured and deployed. Notice I did not say "if the hardware is not properly ...." but the data itself. Netezza already has the hardware configuration under control and there's nothing we can do to affect it. We can however, align our data with the hardware architecture and will receive the benefit of a consistent experience no matter how much the data grows. It's hard to imagine this kind of consistency on any platform, so stay with me here.
A number of years ago I did some deep immersion in a health-care expert system for Value Health Sciences. The objective was to directly apply and enforce the codes found in the CPT code book used by physicians (those are the codes the doc will check on the sheet he hands to you before he leaves the office). Insurance companies have standard billing/invoicing arrangements with the physicians and these codes are the centerpiece. For VHS, their medical practitioners had set up rules for characterizing (and properly re-characterizing) a claim. For example, if a physician billed codes 1,2 and 4 but he was supposed to have billed these all as 1 and 5 (because code 5 bundles 2 and 4 to reduce cost) the system would have a rule in it that looked something like:
1,2,4 = 1, 5
The difficulty here of course, with some 100,000 codes and over 40,000 rules, is to organize them so that they are quickly accessible. By design, the first code in the rule (e.g. code "1" above) was considered an "index" code. In other words, any and all rules that would use the code "1" as an anchor would also make "1" the first code. This lent itself to the opportunity to "vector" the rules. That is, create an in-memory array, indexed by the given code, so that a list of rules would be handily available at that vector point. All we had to do was look at the claim (e.g. codes 1, 2, 4), fetch the various candidate rules from their vector lists (the list for code 1 might have 3 rules, likewise for the others), reducing our total candidate rules from 40,000 to only 9. We could then run the rules and dispatch the claim. When I had first received this model, the system was physically running through all 40,000 rules to find the right match. This vectored approach reduced the total rules, but also eliminated guesswork. All of that boosted the power and created a very consistent turnaround time for each patient claim.
The vector list above is a common way in programming to organize information so that, if we already have something in our hands, like a filter or a vector based on a natural key, we can quickly find where we need to start looking and eliminate the places we are not interested in. While this sounds a lot like a common index-structure approach, it's actually a process-of-elimination approach. The two have similar qualities, so I will delineate them here at a high level.
An index is a surgical approach to finding data by logging its record-based keys and a record locator into a separately maintained structure. By finding keys, we find physical records. This is important for transactional systems that need fast record-level turnaround. It is not ideal, however, for analytics or data warehousing, where the vast majority of queries don't care about individual records at all, but about answers that span records. This alternative approach of vectoring is a way to bucket the information so that we're not attempting to specifically locate the record itself, but the bucket where the record may be located; that is, derive a group of candidates upon which we will take further action.
When we think about this, the candidate approach, as opposed to the surgical approach, is the only way to scale anything that has to deal with multiple-record sets (set-based processing). Engines that are geared to examine one record at a time, even when aggregating or performing multi-record operations with the single records they retrieve, cannot compete with an engine that is designed to treat the records as multiples from the outset. Transactional versus bulk. The transactional engine is inherently slower and non-scalable compared to the bulk engine (for set-based operations).
I noted in a prior entry that Netezza's anti-indexing strategy heavily leverages "where not to look" because no matter how large a table gets, this "vectoring" or "candidate" approach will guarantee that our response time is very consistent. How does this apply to zone maps?
Recall the diagram with the CPU/disk combinations. Each of these disks is divided into its smallest actionable physical portion (an extent), and the extent then becomes the currency-of-the-realm when we're talking about searching for data. The machine automatically tracks the data that is added to an extent, and when we generate statistics, every extent is accounted for in a zone map. The map has basic statistics on the columns in the table at the extent level, such as the high/low values for a column in the given extent.
Extent ColA ColB ColC
1 200 A B
1 300 A B
1 400 C D
1 500 E F
2 200 F G
2 400 H I
2 600 J K
3 100 L M
3 200 J O
Note the above layouts of the extents. If we were to look at the high/low values of each, we would find zone maps like this:
Extent ColA High ColA Low
1 500 200
2 600 200
3 200 100
So now if we ask a question to the machine based on say, a ColA value of 200, the machine will search all three extents. It has no choice because the zone map is telling it that pertinent data exists in all three.
Now let's physically sort the data by ColA
Extent ColA ColB ColC
1 100 L M
1 200 A B
1 200 F G
1 200 J O
2 300 A B
2 400 C D
2 400 H I
3 500 E F
3 600 J K
Now if we take a look at the zone map, we would see something like:
Extent ColA High ColA Low
1 200 100
2 400 300
3 600 500
So that now if we query the table for values of 200, the machine searches only one extent, not all of them. (Now, this is a simple example, because a given value could extend across many extents). Likewise a given disk has thousands of extents upon which to store the data, so it can spread the information out into much smaller, manageable chunks.
The point, however, is that by organizing the data this way, the zone maps reduce the total searchable extents. Keep in mind that we don't have to sort the data each time we want to use it. For many warehouses, the data naturally arrives in this contiguous form (for example, transaction date in a retail model). The Netezza machine will track all integers and dates automatically without us having to do anything additional to enable it. So we might take a penalty once to sort the data at the outset, then benefit from the data's natural arrival and collation.
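That one-time sort penalty is a simple operation. A minimal sketch using the ColA example above (the table name is hypothetical, and Netezza's create-table-as-select accepts an ORDER BY for exactly this purpose):

    -- Rebuild the table once so ColA lands in contiguous, zone-map-friendly order
    -- (add a DISTRIBUTE ON clause if the original distribution key must be preserved)
    CREATE TABLE my_table_sorted AS
      SELECT * FROM my_table ORDER BY ColA;
    ALTER TABLE my_table RENAME TO my_table_old;
    ALTER TABLE my_table_sorted RENAME TO my_table;
    -- Refresh statistics so the zone maps reflect the new physical layout
    GENERATE STATISTICS ON my_table;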
This effectively creates the aforementioned vectored effect. It also guarantees a consistent experience because the data associated with the given value (say 200 above) isn't going anywhere. It will always be in the same location, tracked by the same zone map. Anytime we want to get back to the records associated with value 200, the zone map will serve them up. If the response time last week was 2 seconds, we could quadruple the size of the table and it will still be 2 seconds, because the data itself didn't go anywhere.
Contrast this to other heavy-lifting algorithms in software technologies. The Ab Initio platform, for example, will perform an in-memory Sort that will "intelligently" spill to disk when it runs out of memory. What this means is that it will take a predefined block of memory and attempt to perform all work inside it until it runs out. If so, it will manufacture 16 file locations within which to spill its overflow. These are organized with the same "vectored" approach as the sort keys would dictate. If these 16 files run out of space, they will further divide into 256 files to split the workload. Anyone working with the technology will tell you that this secondary split into 256 is to be avoided if at all possible because it negatively (and significantly) affects the performance of the sort. But we see the difference here. A general-purpose sort in software needs these 256 assets to functionally complete its mission, and adding more means less performance. Now compare this to Netezza which uses not just 256, but thousands of extents to divide the data out - and rather than experience the performance drag, will derive power and lift from this.
So let's apply something additional, especially for highly active tables. If we have a nightly batch window for processing, we likely have all the time we need to keep these zone maps healthy. But if we have an intraday model or even an aggressive micro-batch model, it is possible that the zone maps may be "violated". Note above that as long as the data arrives in contiguous chunks (like transaction date), the data will be set into the extents so that we minimize the total extents for a search. If a transaction date (for example) should appear outside of this contiguous range (say a record from last week's data has arrived), then the high/low range for that particular extent will be extended to include the errant record. It could mean that the given extent is also included in queries that don't relate to either date, simply because of the extent's widened high/low range. That is, every query looking for this extent's primary transaction date, the errant transaction date, or any date in between them, will include this extent in the candidate list. While this is inconsequential in most cases, if it repeats over time it can slowly degrade the table's performance. Highly active micro-batch and intraday models have a tendency to see these effects if they remain unchecked or unguarded. But not to worry - the solution is simple.
What is daunting about this is that our highly-active-table might be a multi-billion-row beast that does not lend itself to re-sorting and in an intraday model, there's no time to re-sort the data anyhow.
A materialized view has the quality of being able to apply a virtual zone map to a table without physically re-sorting the information. All we have to do is pick the columns we want to zone map, and then form the materialized view with an order-by on those columns. Rebuilding one of these is usually very fast. In addition, we can be rebuilding it even while its prior version is active. At one site, we set up a process to rebuild the view thus:
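The exact steps vary by site, but in broad strokes it is a simple rotation; a minimal sketch, with hypothetical table, view and column names:

    -- Build the replacement view while the existing one continues to serve queries
    CREATE MATERIALIZED VIEW trans_fact_mv_new AS
      SELECT tran_date, store_id, amount
      FROM trans_fact
      ORDER BY tran_date;     -- the column(s) we want virtually zone-mapped
    -- Drop the prior view; it waits for pending queries against it to finish
    DROP VIEW trans_fact_mv;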
Note that for the few moments that two materialized views are applied to the same table, there is no risk at all. And when we drop the original materialized view, it will wait for all pending queries against it to finish, meaning that this won't break anything the users are doing (another necessity for highly active intraday models).
Important here is that the materialized views are not being used as surrogate tables, but as a means to virtualize the zone maps for faster response time on the table associated with the view. Another important aspect of the materialized view is that it can zone-map additional data types that the regular zone maps do not (e.g. varchars).
Let's say that we have a very active table and even re-creating the materialized view is too much upkeep. Not to worry, the materialized view has another quality that means we don't have to refresh it each time the table changes. The materialized view maintains its contents until the underpinning table changes. We then have the option to rebuild the view or use it "as is". This "as is" is usually agreeable for the short term, since when data is added to the underpinning table, the materialized view creates an extension to itself. Now whenever we query the table, it will examine the materialized view for default zone-map information but will also examine the extension. Considering that this extra extension is very small, its overhead is also insignificant. So we can keep this in place for, say, all day long and then clean up the materialized view in the evening. The materialized view's performance will not drag over the course of a day unless we do some serious re-churning of the underpinning data (which is not normal for common micro-batch models).
And now - clustered base tables
With all the above preliminary stuff out of the way, a discussion on CBTs is fairly straightforward. As of the 6.0 release, Netezza has provided this as a multi-dimensional capability for even more powerful searches. The data in the CBT is physically organized into extents based on keys, and when we perform a groom operation, the data is physically reshuffled so that the keys are physically co-located on the fewest extents possible. Like a self-enforcing zone map, only a lot more powerful. In a zone-map approach, the primary way to get the data contiguous is to re-sort the data, and the zone maps fall out for free. In a CBT, we organize the data and then execute a groom. The initial groom will take a while, but each subsequent groom will take inconsequential time. The data in the CBT can be organized on more than one key, and the keys behave independently, giving a multi-dimensional effect.
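A minimal sketch of a CBT (names are hypothetical): the ORGANIZE ON clause declares the clustering keys, and the groom does the physical reshuffling:

    CREATE TABLE claim_fact (
       claim_id     BIGINT,
       member_id    INTEGER,
       service_date DATE,
       amount       NUMERIC(12,2)
    )
    DISTRIBUTE ON (claim_id)
    ORGANIZE ON (member_id, service_date);   -- multiple, independently usable organizing keys

    -- After data arrives, reshuffle the records onto the fewest possible extents
    GROOM TABLE claim_fact;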
Case in point - let's say we have 200 extents of data and the particular set of data we want spans only 10 of those extents (a zone map). We will ignore the other 190 extents and focus all the machine's energy on the 10 where the data is located. But wait, what if we have an additional key upon which to search, and the combination of the first key (10 extents) and second key reduces the total extents to only 2? That is, the first key appears in 10 extents but the first and second key appear in only 2 extents of those 10. Now we search only 2 extents. Again, we have organized and configured data, not hardware, to achieve the goal. We have told Netezza where-not-to-look and now we have zoomed in on a mere 2 extents out of 200.
But let's say that we add a lot more data to the table over the year, expanding these 200 extents to many hundreds if not thousands more. Can we see now that even if this happens, we can go back to the table again and again, ask the same question as before, and only 2 of the extents will be used to search for data? This of course dispels the vendor-promulgated propaganda that Netezza tables get slower when we add more data.
As the title suggests, one of the challenges of new Netezza users is in learning about the product, what it can (and doesn't) do, and how it applies in data warehousing. When I first published the book (Netezza Underground on Amazon.com) the impetus for the effort was just that - people asking me lots of questions leading to fairly repetitive and predictable answers. It's an appliance after all. We can apply it to a multiplicative array of solutions but on the inside, certain things stand out as immutable truths.
Of course, lots of folks are running around down here in the catacombs, convincing people that they need to read the glowing glyphs on the ancient stone walls as guides on their quest for more information. This is entirely unnecessary. The title of the book (and the blog) is a tongue-in-cheek nod to the way that some might spin the story on the machine in their own favor. Some might claim that we need lots of consulting hours to roll out the simplest solution. Consultants can help, of course (I'm one of them). But the spin and the tale that is told will determine the magnitude and expense of those consultants.
I'll try to provide a balance that delivers value without extraordinary expense. Yet another source of misinformation is Netezza's competitors, who like to toss "innocent" bombs here and there to direct people down the path toward their own product. All is fair in business and war, as they say, but as we bring these issues out of the darkness and into the light, the objective is to become more informed.
I am neither a Netezza nor IBM employee, so apart from compensation for actual work performed in rolling out solutions, for which people would pay me regardless of technology, I don't have any other relationship with these companies. I am a huge fan of the Netezza product and architecture and make no secret of this.
So some may have come here to ask questions about the technology or read what some seasoned experts have to say. We can do all that and a lot more. For now, let's look at the counter-intuitive nature of the product's internals.
I'll paint an imaginary picture first. Let's say we have two boxes. In one box we have thirty-two circles and in the second box we have thirty-two drums. The circles are CPUs and the drums are disk drives. They are in separate boxes, and now we draw a pipeline between them. Make it as large as you want. This depicts a standard SMP (Symmetric Multi-Processor) hardware architecture that is the common platform for data warehousing. With the exception of Teradata and Netezza, this hardware configuration is ubiquitous.
Now let's draw another mental picture. This time we'll have one box. Each of the circles will be mated with one of the drums. Now we have thirty-two circle/drum combinations. In the Netezza machine each of these is called a SPU (snippet processing unit) and represent the CPU coupled with a dedicated disk drive. Some additional hardware exists to coordinate and accelerate this combination, but this is a simplified mental depiction of the fundamental difference between the prior SMP configuration and Netezza's, an MPP (Massively Parallel Processor) configuration.
Now I used thirty-two as a simple example. In reality, Netezza can host hundreds of these cpu/disk combinations, the largest standalone frame containing over eight hundred of them, scaling to over a petabyte in storage capacity. Those of us who regularly operate on these machines are accustomed to loading, scanning and processing data by the terabyte, the smallest tables in the tens of billions of rows.
Some notes on the difference in their operation:
SMP: Typically used for transactional (OLTP) processing and is not purpose-built for data warehousing. It is a general-purpose platform. The typical bane of an OLTP engine is that it performs well on getting data inside, but is lousy on getting data out (in quantity, for reports and such).
MPP: (Netezza specifically) does not do transactional processing at all. It is purpose-built to inhale and exhale Libraries-of-Congress at a time.
SMP: Table exists logically in the database, but physically on the file system in a monolithic or contiguous table space.
MPP: Table exists logically in the database, but is physically spread out across all the SPUs. If we have 100 SPUs and want to load a table with 100,000 records, each SPU will receive 1,000 records.
SMP: SQL statements are executed by the "machine" as a whole, sharing CPUs and drives. While the SQL operations may be logically and functionally "shared nothing" - the hardware is "shared-everything". In fact, CPUs could have other responsibilities too, which have no bearing on completing a SQL statement operation. In the above example, the SMP has to access the file system, draw the data into memory nearest the CPUs, then perform operations on the data. Copying data, for example, would mean drawing all the data from the tablespace into the CPUs and then pushing back down into another tablespace, but both tablespaces are on the same shared disk array.
MPP: SQL statements are sent to all CPUs simultaneously, so all are working together to close the request. In the above example, each SPU only has to deal with 1,000 records. Unlike an SMP, the data is already nearest the CPU so it doesn't have to go anywhere. The CPU can act directly on the data and coordinate as necessary with other CPUs. Copying data, for example, means that the 1,000 rows are copied to a location on the local drive. If 100 CPUs perform this simultaneously, the data is copied 100 times faster and it never leaves the disks.
SMP: Lots of overhead, knobs and tuning to make it go and keep it going. From the verbosity of the DDL to the declaration of index structures, table spaces and the like.
MPP: (Netezza) the overhead of administration is hidden from the user. In the above example, the user need only declare, load and utilize the table, not be concerned about managing the SPUs or disk allocation. The user's only concern is in aligning the data content with the hardware architecture, an easy task to perform and an easy state to maintain.
SMP: SQL-transforms, that is, the insert/select operations, run slowly on an SMP platform and continue to run slower as data is added. They are not a viable option there, so it usually would not occur to us to leverage them.
MPP: SQL-transforms leverage the MPP and do not slow as the data grows.
In fact, many people purchase the Netezza platform in context of its competitors, who don't (nor can they) use SQL transforms to affect data after arrival.
If we purchase Netezza as a load-and-query platform only, we have missed a special capability of the machine that differentiates it from the other products. If we leverage this power, we re-balance the workload of the overworked ETL tools, in some cases eliminating them altogether. Netezza calls this practice "bringing everything under air", that is, bring the data as-is into the machine and do the heavy-lifting transforms after it's inside.
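As a simple illustration of transforming "under air" (all table and column names here are hypothetical), the heavy lifting is a set-based insert/select that runs entirely inside the machine:

    -- Land the raw feed as-is, then transform it in place with set-based SQL
    INSERT INTO sales_fact (sale_date, store_id, product_id, net_amount)
    SELECT s.sale_date,
           s.store_id,
           s.product_id,
           s.gross_amount - s.discount_amount   -- the transform runs on every SPU at once
    FROM stage_sales s
    JOIN dim_product p ON p.product_id = s.product_id
    WHERE p.active_flag = 'Y';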
The Brightlight Accelerator Framework is one example of a flow-based, run-time harness for deploying high-performance, metadata-driven SQL-transforms. While we have matured this capability over time, the consistency of the Netezza platform is the key to its success.
SMP: Scalability is through extreme engineering, leading to hardware upgrade.
MPP: Scalability is a function of data organization as it leverages the hardware's power. Upgrading hardware is therefore rarely the first option of choice.
SMP: Constrained by index structures and the shared-everything hardware architecture through which all data must pass.
MPP: Constrained by human imagination, not index structures (there aren't any), and no shared hardware.
We have noted that with traditional SMP architectures, when the machine starts to run out of power, engineers will swarm the machine and start to instantiate exotic data structures and concepts that serve only as performance props, not the 'elegant' model once installed by the master modelers. As the box continues to decline, more engineering ensues, eliciting even stranger approaches to data storage and management. Soon it becomes so functionally brittle that it cannot handle any more functionality, so people resort to workarounds and bolt-ons. Functionality that should be a part of the solution is now outside of it, and functionality that never should have been inside (mainly for performance) has taken up permanent residence.
In a Netezza machine we have so much power available that traditional modeling approaches (e.g. 3NF and Dimensional) may have a de facto home, but now we can examine and deploy other concepts that may be more useful and scalable: approaches that we never had the opportunity to examine before, because the modeling tools did not (and do not) support them, and the natural constraints of the SMP-based engine artificially confine us to index-based data management.
In addition, the Netezza machine operates on a counter-intuitive principle of finding information based on "where not to look". With this principle, let's say we have a terabyte of data and we know where our necessary information is, based on where-not-to-look. See how this won't change even if we add hundreds of terabytes to the system? If we add another 99 terabytes to the same system, the query will still return the same answer in the same duration, because the data we're looking for doesn't appear in those 99 terabytes and the machine already knows this.
For example, if we go over to Wal-Mart and we know where the kiosk is for the special-buys-of-the-day, we can find it easily. It's in a familiar location. What if tomorrow they expand the same Wal-Mart to ten times its current size? Will it affect how long it takes us to find that kiosk? Clearly not. Netezza operates the same way, in that no matter how much data we add, we don't have to worry about scanning through all of it to find the answer.
And when it boils down to basics, the where-not-to-look is the only way to scale. No other engine, especially not one that is completely dependent upon index structures to locate information, can scale to the same heights as Netezza.
Well, some of the stuff above is provocative and may elicit commentary. I'll continue to post as time progresses, so the data won't seem so much like underground information, after a while anyhow.
How many logical modelers does it take to screw in a lightbulb? None, it's a physical problem.
I watched in dismay as the DBAs, developers and query analysts threw queries at the machine like darts. They would formulate the query one way, tune it, reformulate it. Some forms were slightly faster, but they needed orders-of-magnitude more power. Nothing seemed to get it off the ground.
Tufts of hair flew over the cubicle walls as seasoned technologists would first yelp softly when their stab-at-improvement didn't work, then yelp loudly as they pulled out more hair. Yes, they were pulling their hair out trying to solve the problem the wrong way. Of course, some of us got bald the old-fashioned way: general stress.
I took the original query as they had first written it, added two minor suggestions that were irrelevant to the functional answer to the query, and the operation boosted into the stratosphere.
What they saw however, was how I manipulated the query logic, not how I manipulated the physics. I can ask for a show of hands in any room of technologists and get mixed answers on the question "where is the seat of power?". Some will say software, others hardware, others a mix of the two, while those who still adhere to the musings of Theodoric of York, Medieval Data Scientist, would say that there's a small toad or dwarf living in the heart of the machine.
To make it even more abstract, users sit in a chair in physical reality. They live by physical clocks in that same reality, and "speed of thought" analytics is only enabled when we can leverage physics to the point of creating a time-dilation illusion: where only seconds have passed in the analyst's world while many hours have passed in reality. After all, when an analyst is immersed in the flow of a true speed-of-thought experience, they will hit the submit-key as fast as their fingers can touch the keyboard, synthesize answers, rinse and repeat. And do this for many hours seemingly without blinking. If the machine has a hiccup of some kind, or is slow to return, the illusion-bubble is popped and they re-enter the second-per-second reality that the rest of us live in. Perhaps their hair is a little grayer, their eyes a little more dilated, but they will swear that they have unmasked the Will-O-The-Wisp and are about to announce the Next Big Breakthrough. Anticipation is high.
But for those who don't have this speed-of-thought experience, chained to a stodgy commodity-technology, they will never know the true meaning of "wheels-up" in the analytics sense. They will never achieve this time-dilated immersion experience. The clock on the wall marks true real-time for them, and it is maddening.
Notice the allusions to physics rather than logic. We don't doubt that the analyst has the logic down-cold. But logic will not dilate time. Only physics can do that. Emmett Brown mastered it with a flux capacitor. We don't need to actually jump time, but a little dilation never hurt anybody.
The chief factor in query-turnaround within the Netezza machine is the way in which the logical structures have been physically implemented. We can have the same logical structure, same physical content, with wildly different physical implementations. The "distribute on" and "organize on" govern this factor through co-location at two levels: Co-location of multiple tables on a common dataslice and co-location (on a given dataslice) of commonly-keyed records on as few disk pages as possible (zone maps). The table can contain the same logical and physical content, but its implementation on Netezza physics can be radically different based on these two factors.
Take for example the case of a large-scale fund manager with thousands of instruments in his portfolio. As his business grows, he crosses the one-million mark, then two million. His analytics engine creates two million results for each analytics run, with dozens of analytics runs every day, adding up quickly to billions of records in constant churn. His tables are distributed on his Instrument_ID, because in his world, everything is Instrument-centric. All of the operations for loading, assimilating and integrating data center upon the Instrument and nothing else. They are organized on portfolio_date, because the portfolio's activity governs his operations.
His business-side counterpart, on the other hand, sells products based on the portfolio. The products can serve to increase the portfolio, but many of the products are analytics results from the portfolio itself. This is a product-centric view of the data. Everything about it requires the fact table and supporting tables to be distributed on the Product_ID and organized on the product-centric transaction_date. This aligns the logical content of the tables to the physics of the machine. It also aligns the contents of the tables with the intended query path of the user base. One of them will enter and navigate via the Instrument, where the other will use the Product.
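In DDL terms, the two "copies" might differ only in these physics clauses; a hedged sketch with hypothetical names and columns:

    -- Instrument-centric physics: feeds the loading and integration processes
    CREATE TABLE analytics_fact_instr (
       instrument_id    BIGINT,
       product_id       BIGINT,
       portfolio_date   DATE,
       transaction_date DATE,
       result_value     DOUBLE PRECISION
    )
    DISTRIBUTE ON (instrument_id)
    ORGANIZE ON (portfolio_date);

    -- Product-centric physics: same columns, same content, radically different layout
    CREATE TABLE analytics_fact_prod (
       instrument_id    BIGINT,
       product_id       BIGINT,
       portfolio_date   DATE,
       transaction_date DATE,
       result_value     DOUBLE PRECISION
    )
    DISTRIBUTE ON (product_id)
    ORGANIZE ON (transaction_date);

    INSERT INTO analytics_fact_prod SELECT * FROM analytics_fact_instr;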
We can predict the product manager's conversation with the DBAs:
PM: "I need a version of the primary fact table in my reporting database."
DB: "You mean a different version of the fact? Like different columns?"
PM: "No, the same columns, same content."
DB: "Then use the one we have. We're not copying a hundred billion rows just so you can keep your own version."
PM: "Well, it's logically the same but the physical implementation is completely different."
DB: "Oh really? You mean that instead of doing an insert-into-select-from, we'll move the data over by carrier pigeon?"
PM: (staring) "No, the new table has a different distribution and organize, so it's a completely different physical table than the original"
DB: "You're just splitting semantic hairs with me. Data is data."
I watched this conversation from a healthy distance for nearly fourteen months before the DBA acquiesced and installed the necessary table. Prior to this, the PM had been required to manufacture summary tables and an assortment of other supporting tables in lieu of the necessary fact-table. The user experience suffered immensely during this interval, many of them openly questioning whether the Netezza acquisition had been a wise choice. But once the new distribution/organize was installed, user-queries that had been running in 30 minutes now ran in 3 seconds. Queries that had taken five minutes were now sub-second. Where before only five users at a time could be hitting the machine, now twenty or more enjoyed a stellar experience.
How does making a copy of the same data make such a difference? Because it's not really a copy. When we think "copy" we think "photocopy", that is "identical". A DBA will rarely imagine that using a different distribution and organize will create a version of the table that is, in physical terms, radically different from its counterpart. They see the table logically, just a reference in the catalog.
The physics of the Netezza machine is unleashed with logical data structures that are configured to leverage the physical power of the machine. Moreover, the physical implementation must be in synergy (and harmony) with how the users intend to consume them with logical queries. In the above case, the Instrument-centric consumer had a great experience because the tables were physically configured in a manner that logically dovetailed with his query-logic intentions. The Product-centric manager however, had a less-than-stellar experience because that same table had not been physically configured to logically dovetail with his query-logic intentions. The DBA had basically misunderstood that the Netezza high-performance experience rests on the synergy between the logical queries and physical data structures.
In short, each of these managers required a purpose-built form of the data. The DBA thinks in terms of general-purpose and reuse. To him, capacity-planning is about preserving disk storage. He would never imagine that sacrificing disk storage (to build the new table) would translate into an increase in the throughput of the machine. So while the DBA is already thinking in physical terms, he believes that users only think in logical terms. Physics has always been the DBA's problem. Who do those "logical" users think they are, coming to the DBA to offer up a lecture on physics?
In this regard, what if the DBA had built-out the new table but the PM's staff had not included the new distribution key in their query? Or did not leverage the optimized zone-maps as determined by the Organize-On? The result would be the same as before: a logical query that is not leveraging the physics of the machine. At this point, adding the distribution key to the query, or adding filter attributes, is not "tuning" but "debugging". Once those are in place, we don't have to "tune" anything else. Or rather, if the data structures are right, no query tuning is necessary. If the data structures are wrong, no query tuning will matter.
And this is why the aforementioned aficionados were losing their hair. They truly believed that the tried-and-true methods for query-tuning in an Oracle/SQLServer machine would be similar in Netezza. Alas - they are not.
What does all of this mean? When a logical query is submitted to the machine, it cannot manufacture power. It can only leverage or activate the power that was pre-configured for its use. This is why "query-tuning" doesn't work so well with Netezza. I once suggested "query tuning in Netezza is like using a steering wheel to make a car go faster." The actual power is under the hood, not in the user's hands. While the user can leverage it the wrong way, the user cannot, through business-logic queries, make the machine boost by orders-of-magnitude.
Where does the developer/user/analyst need to apply their labor? They already know how they want to navigate the data, so they need to work toward a purpose-built physical implementation, using a logical model to describe and enable it. Notice the reversal of roles: the traditional method is to use a logical model to "physicalize" a database. This is because in a commodity platform (and a load-balancing engine) the physics is all horizontal and shared-everything. We can affect the query turnaround time using logical query statements because we can use hints and such to tell the database how to behave on-demand.
We cannot tell Netezza how to "physically" behave on-demand in the same way. We can use logical query statements to leverage the physics as we have pre-configured it, but if the statement uses the tables outside of their pre-configured physics, the user will not experience the same capacity or turnaround no matter how they reconfigure or re-state the query logic.
All of this makes a case for purpose-built data models leading to purpose-built physical models, and the rejection of general-purpose data models in the Netezza machine. After all, it's a purpose-built machine quite unlike its general purpose, commodity counterparts in the marketplace. In those machines (e.g. Oracle, SQLServer) we have to engineer a purpose-built model (such as a star-schema) to overcome the physical limitations of the general-purpose platform. Why then would we move away from the general-purpose machine into a purpose-built machine, and attempt to embrace a general-purpose data model?
Could it be that the average Netezza user believes that the power of the machine gives it a magical ability to enable a general-purpose model in the way that the general-purpose machines could not? Ever see a third-normal-form model being used for reporting in a general-purpose machine? It's so ugly that they run-don't-walk toward engineering a purpose-built model, photocopying data from the general-purpose contents into the purpose-built form. No, the power of the Netezza machine doesn't give it magical abilities to overcome this problem. A third-normal form model doesn't work better in Netezza than a purpose-built model.
Enter the new solution aficionado who wants their solution to run as fast as the existing solution. They will be told, almost by reflex by the DBA, that they have to make their solution work with the existing structures, even though those structures don't leverage physics in the way the new solution will need it. And this is the time to make a case for another purpose-built model: one that faces the new user base with tables that are physically optimized to support that user base. Will all tables have to come over? Of course not. Will all of the data of the existing fact table(s) have to come over? Usually not, which is a silver lining of the approach.
But think about this: the tables in Netezza are 4x compressed already. If we make another physical copy of the table, itself being 4x compressed, the data is still (on aggregate) 2x compressed across the two tables. That is, the data is doubled at 4x compression, so the two copies together only use the space the original table would have consumed at 2x compression. In this perspective, it's still ahead of the storage-capacity curve. And in having their physics face the user base, we preserve machine capacity as well.
This is perhaps the one most-misunderstood tradeoff of requiring multiple user bases to use the same tables even though their physical form only supports one of the user-bases. And that is simply this: When we kick off queries that don't leverage the physics, we scan more data on the dataslices and we broadcast more data between the CPUs. This effectively drags the queries down and saturates the machine. The query drag might be something only experienced by the one-off user base, but left to itself the machine capacity saturation will affect all users including the ones using the primary distribution. Everyone suffers, and all for the preservation of some disk space. Trust me, if there is a question between happy and productive users versus burning some extra disk space, it's not a hard decision. Preserving disk storage in the heat of unhappy users is a bad tradeoff.
Or to make an absurd analogy, let's say we show up for work on day one, have a great day of onboarding and when we leave, we notice that our car is a little "whiney". When we take it to a shop, the mechanic tells us that someone has locked the car in first gear and he can't fix it. We casually make this complaint the next day (it took us a little longer to get to work).
DBA: Oh sure, all new folks have their car put in first gear. It's a requirement.
US: (stunned) What the?
DBA: Well, if you had been here first, you could keep all the gears, but everyone we've added since then has to be in first gear for everything.
DBA: Yes, first-gear for your car, your development machine, even your career ladder. About the only thing that they don't put in first-gear are your work hours. Those are unlimited.
US: That's outrageous!
DBA: We can't give everyone all the gears they want. It's just not scalable.
The problem with working with tables that aren't physically configured as we intend to use them is that using them will cause the machine to work much harder than it has to. Not only will our queries be slower, we can't run as many of them. And while we're running, those folks with high-gear solutions in the same machine will start to look like first-gear stuff too. The inefficiency of our work will steal energy from everyone. We cannot pretend that the machine has unlimited capacity. If our solution eats up a big portion of the capacity then there's less capacity for everyone else. Even workload management won't save us: whatever we throttle the poorly-leveraged solution down to will only make things worse, because if a first-gear solution needs anything, it is more capacity, not less.
Energy-loss is the real cost of a poor physical implementation. All solutions start out with a certain capacity limit (same number of CPUs, memory, disk storage) and it is important that we balance these factors to give the users the best possible experience. Throttling CPUs or disk space, or refusing to give up disk space merely to preserve disk capacity, only forestalls the inevitable. The solution's structures must be aligned with machine physics and the queries must be configured to leverage that physics.
The depiction (above) describes how the modeler's world (a logical world) in no wise intersects with the physical world, yet the physical world is what will drive the user's performance experience. The high-intensity physics of Netezza is not just something we "get for free"; it is a resource we need to deliberately leverage and fiercely protect.
In the above, the "Logical data structure" is applied to the machine catalog (using query logic to create it). But once created, it doesn't have any content, so we will use more logic to push data into the physical data structure. The true test of this pairing is when we attempt to apply a logical query (top) and it uses the data structure logic/physics to access the machine physics (bottom). Can we now see why a logical query, all on its own, cannot manufacture or manipulate power? It is the physical data structure working in synergy with the logical query that unleashes Netezza's power. And this is why some discussions with modelers may require deeper communication about the necessity to leverage the deep-metal physics while we honor and protect the machine's power.
In a Netezza shop experiencing some performance stress with their machine, we ask the usual questions as to the machine's configuration and its functional mission. Ultimately we pop the hood to find that the data structures and the queries are not in harmony. For starters, the structures don't look like Netezza structures; at least, they are not optimized for Netezza. We receive feedback that they "just" moved the data from their former (favorite technology here) and ran with it. They received the usual 10x boost as a door prize and thought they were done. Lurking in their solution, however, were latent inefficiencies that were causing the machine to work 10x to 20x harder to achieve the same outcome. And their queries were likewise 20x inefficient in how they leveraged the data structures.
More unfortunately, the power of the machine was masking this inefficiency. It's like the old adage: when a person first starts day-trading on the stock exchange, the worst thing that can happen to them is that they are successful. Why? It puts a false sense of security in their minds that gives them permission to take risks they would never take if they knew the real rules of the game. The 10x boost for moving the data over is a for-free door prize, not the go-to configuration.
What are the real rules of the Netezza game? The first rule is that extraordinary power masks sloppy work. Netezza can make an ugly duckling look like a swan without actually being one. It can make an ugly model into a supermodel without the necessary adult beverages to assist the transformation. It can make sloppy queries look like something even Mary Poppins would approve of, practically perfect in every way and all that.
What's lurking under-the-hood is nothing short of a parasitic relationship between the model, the queries and the machine. We received the 10x boost door-prize and think we have succeeded. But we have only succeeded in instantiating the model and its data into the machine. We have not succeeded in leveraging the entire machine. And, uh, we paid for the entire machine. So why aren't we using it?
In our old environment, the index structures worked behind-the-scenes, transparently assisting each join. Our BI environment is set up to leverage those joins so we get good response times. The Netezza machine has no indexes so the BI queries (whether we want to admit it or not) are improperly structured to take advantage of the machine's physics.
"But that's how we've always done it..." or "But we don't do it that way..."
The short version: the former solution and (favorite technology here) are casting a long shadow across the raised floor onto the Netezza machine. People are forklifting "what they know" to the new machine when very little of it applies.
For example, in a star schema, the index structures are the primary performance center. The query will filter the dimensions first, gather indexes from the participating dimensions and then use these to attack the fact table. The engine does all this transparently. The result is a fast turnaround born on index-level performance. These are software-powered constructs in a general-purpose engine. The original concept of the star-schema was borne on the necessity of a model that could overcome the performance weaknesses of its host platform. It is in fact an answer to the lack-of-power of commodity platforms. In short, just by configuring and loading a star schema on a commodity platform, we get boost from using it over a more common 3NF schema.
The Netezza machine doesn't have indexes. So the common understanding of how a star-schema works doesn't apply. At all. Don't get me wrong, the star-schema has a lot of functional elegance and utility. It does not however, inherently provide any form of performance boost for queries using it. It can simplify the consumer experience and certainly ease maintenance, but it is not inherently more performant than any other model. In fact, using such a model by default could hinder performance.
Why is this?
The primary performance boosters in Netezza are the distribution and the zone map. Where the distribution and co-location preserve resources so that more queries can run simultaneously with high throughput, zone maps boost query turnaround time. They work in synergy to increase overall throughput of the machine. How does installing a star-schema inherently optimize such things? It doesn't.
Can we use a star-schema? Sure, and we should also commit to distributing the fact table on the same key as the most-active or largest dimension (they are often one and the same). This will preserve concurrency for the largest majority of queries. A better approach, however, is to specifically formulate a useful dimensional model that leverages the same distribution key for all participating tables. Common star-schemas do not do this by default, and if only two tables are distributed on the same key, all joins to the other tables will be less performant: they will have to "broadcast" the dimensional data to the fact table. Clearly, having all tables distributed on the same key will preserve concurrency, but this doesn't give us the monster boost we're looking for. Distribution might get us up to 2x past the door-prize performance we get from moving to the machine. Zone maps are notorious for getting us 100x and 1000x boost.
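A minimal sketch of that commitment (all names hypothetical): the fact table and its largest dimension share a distribution key, so the join resolves dataslice-by-dataslice without broadcasting:

    CREATE TABLE dim_customer (
       customer_id BIGINT,
       customer_nm VARCHAR(100)
    )
    DISTRIBUTE ON (customer_id);

    CREATE TABLE sales_fact (
       customer_id BIGINT,
       product_id  INTEGER,
       tran_date   DATE,
       amount      NUMERIC(12,2)
    )
    DISTRIBUTE ON (customer_id);   -- same key as the largest/most-active dimension

    -- Co-located join: each SPU joins its own slice of fact and dimension rows
    SELECT d.customer_nm, SUM(f.amount) AS total_amount
    FROM sales_fact f
    JOIN dim_customer d ON d.customer_id = f.customer_id
    GROUP BY d.customer_nm;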
At one site I watched as several analytic operations remanufactured the star-schema data into several other useful structures, each of which was distributed on a common key. At the end of the operation, these were joined in a co-located manner and the final result came back orders of magnitude faster than the same query on the master tables. I asked where they had derived the key, and they explained that it was a composite key that they had reformulated into a single key, because their dimensional tables could all be distributed on it and maintain the same logical relationship. Looking over the table structures, they had a "flavor" of a star schema but certainly not a purist star. The question remained: if the existing star schema wasn't useful to them but their reformulated structure was, why weren't they using the reformulated one as the primary model and ditching the old one? The answer was simple, in that the existing star was seen as a general-purpose model and not to be outfitted or tuned for a specific user group. This is one of the commodity/general-purpose lines of thought that must be buried before entering the Netezza realm.
This is the primary takeaway from all that: the way we make an underpowered machine work faster is to contrive a star schema that makes the indexes work hard. We forget that the star schema is a performance contrivance in this regard. If we attempt to move this model to the Netezza machine because "it's what we do", then we may experience performance difficulties rather than a boost. A common theme exists here: people do what they are knowledgeable of, what they are comfortable with, what they find easy to explain, and do not naturally push the envelope for something more useful and performant.
In Netezza, the star schema has functional value but (configured wrong) is a performance liability. We can mitigate this problem by simply reformulating the star to align with the machine's physics, and by adapting our "purist" modeling practices to something more practical and adaptable. After all, many modeling practices are in place specifically because doing otherwise makes a traditional platform behave poorly. If we forklift those practices to Netezza, we participate in casting the long shadow of an underperforming platform onto the Netezza machine.
We have enormous freedom in Netezza to shape the data the way we want to use it and make it consumption-ready both in content and performance. We should not move from a general-purpose platform (using a purpose-built model like a star) into a purpose-built platform with a general-purpose model like a star. The odd part is that the star is an anomaly in a load-balancing, traditional database, but is seen as purpose-built for that platform. Exactly the opposite is true in Netezza. The machine is purpose-built and the star is only another general-purpose model that doesn't work as well as a model that is purpose-built for Netezza physics and for user needs.
The worst thing we can do of course is think-outside-the-box (the Netezza box). We really need to think-inside-the-box and shape the data structures and queries to get what we want. This mitigates the long-shadows. It's just a matter of adapting traditional thinking into something practical for the Netezza machine.
About a year ago I engaged to assess a Netezza-centric data processing environment. They had used stored procedures to build out their business processing inside the machine using SQL-transforms. As you know, I'm a big fan of the SQL-transforms approach, but I'm not a big fan of how they implemented it. Stored procedures for back-end processing are a bad idea on any platform. But even if they had done it without stored procs, the implementation was a "total miss". I mean, it could not have been more "off" if they'd done it outside the machine entirely.
I received word some months ago that while their shop remains a strong Netezza environment, for this particular application they intended to go in a different direction, with a different technology. This was unfortunate since I had told them exactly what the problem was - not the hardware but the way it had been deployed. But they were in denial! Forklifting their application onto the new machine, they attempted to tweak and tune it. They actually received marginally more lift at the outset, but then it rapidly degraded when more data started arriving. Now I'm in dialog with them to discuss "what went wrong".
What went wrong began, quite literally, many years prior.
It's like this: if rust starts to build in a water pipe, we won't know it until the water pressure starts to slowly degrade. Eventually it becomes a drip and then one day it's closed off altogether. We could attempt tracing it to a single cause, but it would only be another straw on the proverbial camel's back. What "really" happened was that we treated the machine in a sloppy manner. Or rather, we saw that it had incredible power but we weren't particularly good stewards of it. Netezza makes an ugly model look great, a good model look stellar, a marginal model look like a superstar, and can make the sloppiest query look like the most eligible bachelor in town. Power tends to make people starry-eyed.
Time and again we coach people on a migration. They say "Wow, we just went from a five-hour query on that Oracle machine to a five-minute query on the Netezza machine. Sold!" and they move everything over "as-is" from the other machine. Never mind that those old data structures were optimized for a different technology entirely and never mind that the data processes running against them were likewise optimized with the older structures in mind. They were both optimized in context of a machine that could not handle the load to begin with. They just didn't know it yet. Now standing in the Netezza machine's shadow, it's painfully obvious what shortcomings the old machine had. Not the least of which being a load-balancing transactional engine, which is always the wrong technology for anything using a SQL-transform.
The bottom line: what if just a little tuning of that five-minute query could make it a five-second query? What if we received a 10x boost moving data "as is" from the old machine, but could have received 100x with a little data-tuning? In short, how many "x" have we left on the cutting-room floor? Enzees have learned (some the hard way!) that performance in a Netezza machine is found in the hardware. This hardware has to actually arrive in massively parallel form, not marginally parallel form. So we know that expecting production performance from the Emulator or the TwinFin-3 is quixotic. This ultimately leads to two universal maxims:
We don't tune queries, we configure data structures. The data structures unlock hardware power.
We use queries to activate the data structures. In Netezza, "query-tuning" is a lot like using a steering wheel to make the car go faster. It just doesn't work that way.
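To make the maxims concrete, here is a minimal sketch of what "configuring data structures" looks like. The table and column names (fact_sales, customer_id, transaction_date) are hypothetical, not from any particular site:

CREATE TABLE fact_sales
(
    customer_id      BIGINT,
    store_id         INTEGER,
    transaction_date DATE,
    sale_amount      NUMERIC(12,2)
)
DISTRIBUTE ON (customer_id)      -- co-locates the rows we join and aggregate on
ORGANIZE ON (transaction_date);  -- feeds the zone maps with our favorite filter attribute

SELECT store_id, SUM(sale_amount)
FROM fact_sales
WHERE transaction_date BETWEEN '2014-01-01' AND '2014-01-31'
GROUP BY store_id;

The SELECT itself stays plain. The horsepower decisions (distribution and Organize) live in the DDL; the query merely points those structures at the question.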
This "additional boost" or "leftover power" is an important question, especially for the aforementioned Netezza-centric application. Even if we had kept the entire application in stored procedures, their implementation could not have been more wrong. They had of course, outsourced the whole development project to a firm in a distant country, who had given them a marginal development team at best. This team proceeded to treat the Netezza machine like "any other database" and completely missed the performance optimizations that make one of these machines a source of legend.
What that team did was pull two hundred million records from a master store and use this body as a working data set even though only twenty columns were being processed at any given point. Dragging over two hundred columns (90 percent dead weight) through every processing query (many dozens of them), without regard for distribution to manage co-located reads and co-located writes, turned a twenty-minute operation into a fifty-five-hour operation. We showed them with a simple prototype how a given fifteen-hour process could be reduced to four minutes. The point is, they were ridiculously inefficient in the use of the machine. Nobody in the leadership of the company would accept that they were as much as 20x inefficient.
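As a hedged illustration of what the repaired version looks like (the table and column names here are hypothetical), the fix is to carve a narrow, co-located working set instead of dragging all two hundred columns through every step:

CREATE TABLE work_customer AS
SELECT customer_id, transaction_date, status_code, sale_amount
FROM master_customer
WHERE batch_id = 20140101
DISTRIBUTE ON (customer_id);

Every downstream transform then reads and writes this narrow table on the same distribution key, so the reads and writes stay co-located on their own data slices and the ninety percent of dead-weight columns never leave the master store.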
A major "miss" is in believing that Netezza is a traditional database. It is not. It is an Asymmetric Massively Parallel Processor (AMPP) with a database "façade" on the front end. Anything resembling a "database" is purely for purposes of interacting with the outside world, "adapting" the MPP layer for common utility. This is why it is firmly positioned as an "appliance". If the internal workins' of the machine were directly exposed, it would cause more trouble than not. Interfacing to the MPP "as a database" is where the resemblance ends. This is the first mistake made by so many new users. They plug-in their favorite tools and such, all of which interface (superficially) just fine. Then they wonder why the machine doesn't do what they wanted. Or that they are experiencing the legendary performance.
When an inefficient model is first deployed, we could imagine that even in taking up 10x more machine resources than necessary, it still runs with extraordinary speed. But let's say we have 100 points of power available to us. The application requires at least five points of power but we are running at 50 points of power (10x inefficient). We still haven't breached any limits. As data grows and as we add functionality, those five points of power rise to eight points, where we are now using 80 points of power in the machine. Wow, that 100 points of power is getting eaten up fast. We're not all that far away from breaching the max. But data always grows and functionality always rises, and one day the real requirement breaches ten points of power. At 10x inefficiency, we have finally hit the ceiling of the machine. It spontaneously starts running slow. All the time. Nobody can explain it.
The odd thing is that the Netezza machine is a purpose-built appliance. Why then did we allow our people to migrate a schema optimized for a general-purpose machine into a purpose-built machine? Moreover, why did we continue to maintain that the so-called purpose-built model in the old machine was anything but a general-purpose model in disguise? Did we use general-purpose techniques? Why?
Did we load data into one set of structures expecting them to be the one-stop-shop for all consumers? A common mistake in Netezza-centric implementations is assuming that one data structure can serve many disparate constituents. The larger the data structure, the more we will need to configure it for a given constituent's use, and we may need redundant data stores to serve disparate needs. Is this reprehensible? Which is better? Just declare another identical table with a different distribution/organize and then load the two tables simultaneously (a sketch follows below)? If we go the summary-table route, the cost is in maintaining the special rules to build them, along with the latency penalty for their construction. It seems counter-intuitive to just re-manufacture a table, but the only cost is disk space. On these larger platforms, preserving disk space while the users are at the gates with spears and torches doesn't seem to be a good tradeoff.
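A sketch of the "just re-manufacture it" option (names hypothetical): declare a second copy of the large table with the distribution and Organize the other constituency needs, and refresh it from the first.

CREATE TABLE fact_sales_rpt AS
SELECT * FROM fact_sales
DISTRIBUTE ON (store_id);

ALTER TABLE fact_sales_rpt ORGANIZE ON (transaction_date);
GROOM TABLE fact_sales_rpt;

The back-end keeps its write-friendly distribution, the reporting users get one that closes their queries faster, and the only cost is disk.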
The point: Don't waste an opportunity to build exactly the data model you need to support the user base. Don't settle for a contrived, purist, general-purpose model. If the modelers say "we don't do that", this is a sure sign that we're leaving something very special on the cutting-room-floor. It's a purpose-built machine, so create a purpose-built model, and like the Enzees say, "give the machine some love."
When capacity seems to be topping-off, with a few, simply-applied techniques we can easily recover that capacity. It's just very annoying that it's caught up with us and nobody can seem to explain why. It's because they are looking in the wrong place. If we had concerned ourselves with the mechanics of the machine and its primary power-brokers, the distribution and the zone-maps, and avoided the most significant sources of drain, like nested views and back-end stored procedures, we might be closer to resolution. If not, it may be that resolution would require rework or retrofit. In the above 50+ hour operation, the only answer would be to overhaul the working queries end-to-end. We wouldn't need to do much to the functional mechanics, just streamline the way the queries perform them.
What does this streamlining look like? Well, if we already knew that, we wouldn't be having any problems. We would have been streamlining all along and most of our capacity would still be well-preserved. People do it all the time.
Symptoms of a PDA system under stress:
The thing is slowing down and we haven't changed anything. It's slowing down and we're doing what we've always done.
It's the last application/table/load we implemented. Things went south from there. But then we backed out the application and it's still bad. I think we broke something. It must be the hardware.
More users are querying the machine than ever before. I think we've reached a hardware limit.
We could not get the base tables to return fast enough, so we built summary tables, but these are a pain to maintain and we don't like the latency in their readiness. We were told that we would never have to use these.
It's sort of funny how deterministic the machine is. "We've been doing the same thing all along" and now it's not working right? Perhaps we weren't doing the right thing all along, regardless of how consistent our "wrongness" was? This is how we know it's not a hardware problem. In fact, if our folks are blaming the hardware first, it's the first sign of denial that the implementation itself is flawed. If our people build contrived structures like summaries, it's a sure sign that the data model is flawed. It's also a sure sign that we're trying to implement a general-purpose schema rather than a purpose-built one. If our people spend a lot of time swarming around query-tuning, it's a sure sign that our data structures aren't ready for consumption. Nested views never have a good ending in Netezza.
At one site, the users had to link together eight or nine different tables to get a very common and repeatable result. If the users must have the deep-knowledge of these tables and their results are repeatable, we need to take their work and manufacture a set of core tables that require less joining and are more consumption-ready. Consolidation, denormalization and shrinking the total tables in the model are actually performance boosters. Why is that? The more tables are involved in the mix, the more we have to deliberately touch the disk through joining. If we have fewer tables and more data on larger tables, zone maps allow us to exclude whole swaths of the tables altogether. We stay off the disks because the zone maps are optimized for it.
It sure "seems" right to put everything into a third-normal-form and make the model "look like the business", but nobody is reporting on the business. They're analyzing, not organizing. We should be the ones organizing the data for analysis, not requiring the analysts to organize the data "on demand" through piling tables together in their queries.
"We're charter members," claimed Bjorn, a tech from across-the-pond, "been working with the stuff for ages."
"Same here," asserted Jack, a DBA from Kansas, "You'd be surprised how fast this thing scoots around in first gear."
"Exactly," said Bjorn, "Those who master the roads in first gear are actually better off. You run at a speed that gets you around but not fast enough that you accidentally run into things."
"So it's a safe speed?" asked the interviewer.
"Indeed," said the pair, almost in unison.
"So what happens when you take it for a spin on the highway?"
The two glanced at each other, then to the interviewer, looked down or elsewhere and then one at a time eventually answered.
"Highway driving is problematic," said Jack. "We'd rather stay on the city roads."
"Yes, city roads," Bjorn agreed, "No need for the highway."
The Director of Integration watched the interview on video and made a smacking sound against his teeth, "Pathetic. We bought the car to drive on the highway most of the time. These guys are acting like a couple of children who are scared of the Big Bad Freeway."
"Well, it has too many lanes in it," smirked his assistant, "If they go on the highway, they might actually get somewhere."
"What's that logo on their shirts?" the Director noted, "Zoom in on that -"
"It's FGA," said the assistant.
"Is that a misspelling? I thought the machine used an FPGA?"
"Oh no, that's correct. It's the First Gear Association. But now they call themselves the First Gear Society. It's way past a club. Now it's a philosophy. Or maybe culture is a better word."
"You mean they actually run around justifying why they stay in first gear?"
"It's almost like a discipline sir," grinned the assistant, "They have rules and protocols for their followers. It's cultish if you ask me."
"I didn't ask you."
"So what are some of these so-called protocols?"
"Well for starters, they can't stand deep-metal. It's like garlic to a vampire."
"But deep-metal is - " he sighed, "Never mind, just keep going."
"If they find themselves going too slow, they try to get the machine to go faster in first gear. If they can't they blame the machine. They think that having a powerful machine is good enough. After all, a couple-hundred horsepower under the hood looks good no matter what gear you're in. And to the other chap's point. If you put it into a higher gear, it'll go so fast that it's hard to steer. They aren't very good at steering, so - "
"So it's a fear thing?"
"Not so much. They like rollercoaster rides, judging from some of their project schedules and deliverables anyhow. They like the whole livin-on-the-edge brinksmanship. As long as someone else is driving."
"Ahh, so just afraid to punch the accelerator themselves, eh?"
"They'll punch it, just in first gear. It's almost like any gears higher than that are a fearful thing."
"How do we get them into the deep-metal? It can't be that hard."
"Oh they go into the deep-metal on expeditions and stuff. They like popping the hood and exploring the architecture. But when they get behind the wheel, it's a first-gear-only driving experience."
"So tell me what to look for in a First-Gear-Society member. What can I expect?"
"Here's a short list," the assistant accessed his handheld and pulled an image onto its screen, showing it to the Director.
"Let me see if I understand this -"
Common symptoms are:
"We were doing just fine until a week or so ago, then everything started to slow down. Jobs are running a lot slower, we're missing all of our deadlines. Has the hardware worn out?"
"We just added some functionality to the solution and two days later the whole machine started to drag. We backed out the solution but the machine is still dragging. I think we broke something. There's a problem with the hardware."
"The machine is having a hardware problem. We've been running these applications for years and then one day everything slowed down. We upgraded to a bigger machine and we didn't get any lift, but a few days later things got worse. A lot worse. This is clearly a hardware problem."
"Wow, they think it's the hardware when they haven't even taken it out of first gear. Amazing."
"Here's the punch line - they honestly believe that they can get more power from the machine if they learn to steer it better."
"No kidding. Take a look at these quotes:"
"We reworked the logic in the queries to tighten things down. Every time we change the query, we get a few percentage points difference in performance, but we need a multiplied boost, like 10x or better."
"We put in summary tables to make things go faster but it didn't help. We have query engineers tightening down the queries so that the join logic is efficient."
"All of our query-tuning has failed. We are at a standstill."
"See all that? They really believe that they can make it go faster from the steering wheel. Like the machine they are sitting in is not the source of power."
"How do we convince them to take it out of first gear?"
"Not sure that there is a way, sir. They are so accustomed to believing that the steering wheel is the place to affect performance that they haven't bothered to examine where the power-plant actually resides."
"Under the hood."
"That's right, the power is in the hardware. The steering wheel can only direct the hardware to the right location, but cannot affect power."
"Well, they can certainly steer it poorly."
"Oh sure, like driving it through mud and all that. But that brings up another issue. Some of them like an extreme challenge, so rather than taking it out of first gear, they try to make the machine go faster by making the gears grind."
"Sure, a grinding gear just seems like it's working hard."
"But they don't see that the friction of the grinding is slowing them down?"
"Not in the least. Look here:"
"We built our tables to provide fast back-end data processing, but the distribution we use for back-end processing doesn't work so well for reporting. Every time they issue a query, it's terribly slow. If we copy the larger tables to another distribution, it makes back-end processing slow but the reporting fast. We can't seem to win here. So we're going to keep the distribution for the back-end and then find a way to build something else for the front end. They want us to make two versions of the big tables, each on different distributions. The experts tell us that this is common and the best thing to do, but it sounds crazy to us."
"See that? They don't even listen to the experts. They would rather grind the gears."
"But all they are saving is disk space. I mean, seriously?"
"Old ways die hard. They would rather preserve disk space and incur the wrath of the users."
"Or preserve disk space and then build-out a convoluted set of summary tables that is twice as hard to maintain."
"Yep, the simple, dumb photocopy of data into another distribution is just too simple. They have a need to engineer."
"Engineering in first gear is not really engineering. It's more like polishing the stick-shift."
"Here's another quote:"
"We have a lot of complex views, and those views join to other views. I've seen EXPLAIN plans for this hit fifty to one-hundred snippets deep. Netezza returns this stuff fast all the time. Only now it doesn't anymore.
"Ahh, so their underpinning tables finally reached a tipping point."
"Yeah, this is funny. They add a ten pound weight to the trunk of the car every day and three hundred days later, and 3000 pounds heavier, the car is starting to run slow. Imagine that."
"Running slow because they have it in first gear."
"Well of course, if they were to use a higher gear, all that weight would be practically weightless."
"Funny how the density of deep metal is a lot like anti-gravity."
"Well, the competition sees these things happening and they say, you know, with that particular vehicle, the more weight you add, the slower it gets. It's just the nature of the machine."
"But that's not true on deep-metal."
"Not at all. We know folks who have tons and tons of weight on the machine but it's light as a feather."
"They're using higher gears and the deep metal together."
"Absolutely. Nobody from the First Gear Society is allowed in their shop. They know better."
"Well that's a problem with nested views. The master query attempts to provide the filter attributes but if the nested view doesn't pass them along, the underpinning tables end up table-scanning."
"And as you know, Netezza is the best table-scanner in the business. It can scan tables, really big tables, in no time flat."
"But it's not supposed to be scanning the tables. It's supposed to be using the zone maps and filter attributes so that it doesn't have to scan the tables."
"Well, exactly, but the people in first gear don't know this. They can't really tell that their queries are getting fractionally slower with each passing day. Then one day it reaches a point of no return. Sort of like how rust slowly clogs up a pipe until one day the pipe just closes over. It doesn't happen in a single query or just because we added a new table or two. It's pervasive and pandemic across the entire implementation."
"I bet when they hear that, they go insane."
"Yeah it's because with any other machine, remediating something like this would be very hard and tedious. But with Netezza we can remediate it incrementally and get more lift with each change. It's just not hard to recover the capacity if you know what you're looking for."
"I think the bottom line is that a really powerful machine tends to hide inefficient work."
"Or like one aficionado put it: Netezza can make a really ugly data model look like a super-model, and can make really bad queries look great. They just don't realize that the ugly models and bad queries are sapping machine's strength like a giant parasite."
"Ugh, now there's a visual I can live without."
In this case, a tortoise-brained hare is an animal that is capable of going fast but wonders why he can't.
Over the last six months or so I have seen a "trending" situation with those who use the Organize-On. This is some pretty cool functionality so it's important that we get it right. Generally speaking, folks who understand how zone-maps work will have a splendid time with Organize. Others, not so much. The machine is supposed to be fast, so why are all my queries so slow?
All I have to do is add the Organize On keys and I'm good to go. Answer: No, you have to groom first, as this is what actually applies the Organize to the physical data. Response: Really? When we did this it took forever! Answer: Yes, the first groom may take a bit of time, but every groom after that will be painless (a sketch of this workflow follows the list below).
We CTAS a table every night for maintenance and the groom is always running slow. Answer: If you are using Organize On, don't use a CTAS for regular maintenance. With an Organize, the table is physically broken apart and spread across multiple additional extent-pages to provide for easier groom maintenance. A CTAS re-compresses it, so another groom is required to blow it back out again.
We like to use Materialized Views to optimize our tables but now the Organize doesn't let us. How can we get this back? Answer: You don't, because the Organize essentially replaces the Materialized View and does it so much better. Asking this question means you might not understand zone maps as well as you think you do! Just sayin'
We have included the distribution key in the Organize, along with some other join keys. Answer: (Sigh) -Remove them. Join-only keys do not belong in the Organize. The Organize is for filter attributes. Uh, perhaps you don't understand zone maps? Just sayin'
We have some lookup tables where we applied an Organize. But this didn't seem to matter. Answer: No, it won't, because if the table is not big enough the system won't use zone maps anyhow.
We were told to turn off Page-Level zone maps but this caused a 5x reduction in performance. Answer: Page-levels radically reduce disk I/O which is our number-one enemy. Turn them back on and go home happy.
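For the first two answers above, here is a minimal sketch of the intended workflow (hypothetical table and column names). The ALTER only records the Organize keys; the groom is what physically reorganizes the data and refreshes the zone maps:

ALTER TABLE fact_sales ORGANIZE ON (transaction_date, status_code);   -- filter attributes only
GROOM TABLE fact_sales;   -- the first groom applies the Organize and may take a while
GROOM TABLE fact_sales;   -- later grooms are incremental and painless

For nightly maintenance, groom the organized table in place rather than rebuilding it with a CTAS, since the CTAS re-compacts the data and the next groom has to blow it back out again.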
It is clear that our outlying cast of folks who have never been introduced to zone maps think that the Organize is either a clustered-index or some other kind of indexing scheme. Fair enough, is this because of the deceptive naming? Like the Materialized View? (okay, don't get me started). The Organize keys are not keys in the same sense as indexes and are certainly not used as join keys.
They are filter-attributes. By this we mean that we fully intend to use these columns with constant values, lists or selected-lists as constraints.
Zone Map Primer
Here I will use my favorite example because it is time-tested and describes the capability. If I pay a visit to Wal-Mart and I'm looking for batteries, I might visit the customer kiosk and the lady tells me that the batteries are "on the end-cap, Aisle Three". This is an indexing model because she told me exactly where to look. This does not scale because when we get to billions of rows we spend more time searching the index than retrieving records.
In a Netezza model however, she would say "It's on an end-cap but I'm not sure which one." So I go to Aisle 1, then 2, then 3 and jackpot, I buy the batteries and go home. The most important part of her answer however, was in what she "did not say". She did not say that the batteries were in Men's or Women's clothing, Automotive or Electronics. In constraining the search to a particular location, she also told me "where not to look". This aspect of "where not to look" is critical to understanding zone maps. It is also critical to stratospheric scalability. Clearly if this Wal-Mart were to quintuple in size, my battery-buying-duration experience would not change in the least.
Now for the geeky part:
Each Netezza disk has 120K physical extents. Each extent has 3MB of space, divided across 24 pages (blocks) of 128K each. If we are running prior to OS 7.x, we will be zone-mapping at the extent level. If we are on 7.x or higher, we will use the page level. Run, don't walk, to 7.x and page-level zone maps! They will radically reduce I/O on the extent and this is critically important. A set of records found only on a single page can return in 1/24th of the time it takes to scan the full extent (about 96 percent faster). In a real-world experiment, the zone maps on a TwinFin-18 were changed from page-level to extent-level and the same battery of queries executed against it. It performed 5x to 10x slower than with page-level turned on. Do not underestimate the influence of disk I/O on query turnaround. Netezza is a very physical machine.
Here's another internal example: Compression in Netezza is a nominal 4x. I have seen it much higher. Let's say we have an uncompressed record that is 10,000 bytes in size. When we read it, we will read all 10,000 bytes. If the data is compressed however, we will read only 2500 bytes, a 75% reduction in disk I/O. Netezza is the only platform where compression boosts the power in both reading and writing data because it reduces disk I/O tremendously.
Zone maps are small in-memory tables that Netezza maintains in support of each user table. Each table has its own set, describing the contents of the extents/pages allocated to the table on a given dataslice. The contents of the zone maps are built automatically when we load up our table. The machine will collect the information from the records stored on the given extent/page for all integers, dates and timestamps. It will store the high and low value of each, representing the table's record "ranges" for that extent.
Thus when we execute a query using one of these columns as a filter-attribute, it will go to the zone maps first and cherry-pick only those extents containing the data we want, excluding all others. This means that the data in those other extents won't even see the light of day for the query-in-play. If we use an Organize, we can use a wider range of data types and be more deliberate with zone-map management.
Let's say we want to search-on the transaction-date on our fact table. If we have not physically organized the data, the same transaction-date value may appear across many hundreds or thousands of extents, affecting the high/low range of many zone maps. If however, we were to physically co-locate the records with common dates into tighter physical groups, the records will physically appear inside fewer zone maps. These are the extents/pages that the machine will cherry-pick for scanning and will completely scan each one. We want it to scan as few of these as possible.
In times past, we had three primary ways of doing this.
A brute-force sort, which is pretty egregious when the table gets very large.
A software program that separates the key's distinct values and executes a block-select from the original table to the new, selecting only once for each distinct value. This physically co-locates like-keys. (the data does not have to be sorted, only like-keys co-located)
A materialized view, which would manufacture a virtual zone map (which is why it's not allowed with the Organize)
Let's say that our table's data takes up 400 extents on each data slice. If our transaction-date appears in 300 of these (it is poorly organized) then when a query runs like so:
Select count(*) from mytable where transaction_date = '2014-01-01';
All 300 extents will be searched for this information. However if we Organize on this transaction-date and then groom - the data will be physically shuffled around to co-locate the records with the same transaction-date on as few extents as possible. Let's further say that once this happens, the given date appears in only one extent. What we have just done is optimized the table 300x. We have eliminated 299 other locations to look for data. This is important because scanning a 3MB extent is a lot of work. If we are scanning 299 additional extents for each dataslice, we're really doing a lot of extra work for nothing. If we translate this to a page-level problem, we may have 24x300 pages originally containing the keys. If with Organize we reduce this to a single page, we have further reduced our scanning load by another 96 percent of the extent containing the page.
The important factor in the example is the "out-in-the-open" value of '2014-01-01'. It is a "filter attribute"; it is not being joined to a table with this value. Doing it this way means that the FPGA/CPU will discover which zone maps have a high/low boundary that contains this value and will retrieve a candidate list of them. If there is only one as opposed to 300, we have radically reduced our workload. Netezza will literally exclude those extents from being examined at all. We have told the machine where-not-to-look. If we join to this value however, such as applying the date value to a time dimension and joining the time dimension to our fact table, we will require the system to fetch each record in order to examine it, at which time it will determine whether to keep it or toss it. This sort of thing can initiate a full table scan, nullifying the zone map entirely.
We don't want this to happen. We want the data to stay on disk and never see the light of day if it's not participating in the query.
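To make the contrast concrete (the dimension and column names are hypothetical; mytable is the same example table as above), compare the zone-map-friendly form against the joined form of the same question:

-- Filter attribute out in the open: zone maps exclude extents before any scan
SELECT COUNT(*) FROM mytable WHERE transaction_date = '2014-01-01';

-- The same question asked through a time dimension: the date never reaches the
-- fact table as a constant, so fact rows must be fetched and examined
SELECT COUNT(*)
FROM mytable f
JOIN dim_time t ON f.transaction_date = t.calendar_date
WHERE t.fiscal_period = '2014-P01';

If the dimension route is unavoidable, one workaround is to resolve the dimension lookup first and hand the fact table a constant (or a short literal list) it can apply against its zone maps.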
To show how dramatic this can be, we were at one site hosting over 100 billion rows in the primary fact table. A full scan of this table took 8 minutes (which is not too shabby in itself, just sayin'). The reporting users knew that if a query ever exceeded a few minutes in duration, it was probably ignoring the zone maps. This is because, once Organized, this table would return the average query in sub-second response. Think about that: 100 billion rows in sub-second response.
This is why paying attention to zone maps is such a big deal. Optimizing distribution can get us a boost in the single digits (2x, 3x, etc.) on a given query. Optimizing zone maps can get us a 1000x boost and higher.
The Organize-On accepts one or more keys which will be applied to physically co-locate records of like-valued keys, then it will update the zone maps. Here is a test to see if we understand the application:
Take one of your largest tables. CTAS the table to another database, order by the distribution key, or by the hidden "rowid" column to make sure that the given filter key is not ordered. This could take a bit of time, of course. Then perform a query using one of your date parameters as in the example above, and time the query. Now perform an
Alter Table tabname organize on (date column name here).
Then perform groom. Once completed, execute the same query and get a timing. We can see that a several-minute query can go sub-second very easily.
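Pulled together, the test looks something like this (table and column names are hypothetical; the ORDER BY on the hidden rowid just guarantees the filter key is not already co-located):

CREATE TABLE test_fact AS SELECT * FROM fact_sales ORDER BY rowid;
SELECT COUNT(*) FROM test_fact WHERE transaction_date = '2014-01-01';   -- time this baseline
ALTER TABLE test_fact ORGANIZE ON (transaction_date);
GROOM TABLE test_fact;
SELECT COUNT(*) FROM test_fact WHERE transaction_date = '2014-01-01';   -- time it again

The second timing is the one that shows what the zone maps were waiting to do.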
What's more, the additional keys in the Organize are (more or less) independently organized. They will all enjoy a much faster turnaround than not using the Organize. If records arrive on the table out of order, no worries. Run the groom again. Subsequent runs of groom will always be shorter in duration than the first one.
Clearly however, if the zone-map is intended to apply filter-attributes to guarantee the exclusion of extents/pages completely, we cannot use a join-key. Or at least, not a join-only key. This also means that the distribution key is out (unless we plan to call-up an individual record based on the distribution key). Also, generally do not mix high cardinality keys with low cardinality keys. Netezza finds its strength somewhere in the middle. We will find that it disfavors the low-cardinality keys when high-cardinality ones are in-the-mix.
A good way to tell which of the filter attributes for a table are "high-traffic" is to turn on query history and then examine the view associated with column access statistics, $vhist_column_access_stats. This will provide the number of times each column participated in a query and which table(s) the base table interacted with. Perform a descending sort on the NUM_WHERE column and this will reveal all. In this short list we will see the filter-attributes that are most useful. Don't use any of the join-only columns or the distribution key. These may adversely affect the multi-key algorithm's output and might not optimize any zone maps for this table.
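A hedged sketch of the mining query follows; the history view name comes from the text above, and the exact view and column names can vary with the query-history schema version, so treat this as a pattern rather than gospel:

SELECT *
FROM $vhist_column_access_stats
WHERE tablename = 'FACT_SALES'    -- assumed column name; restrict to the table under review
ORDER BY num_where DESC;

The columns at the top of that descending sort are the high-traffic filter attributes, and these are the Organize candidates.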
At one site, we noted several inappropriate keys in the Organize, and simply by removing them and "grooming" again, the table experienced a 100x boost. The inappropriate keys were washing out the effectiveness of the other keys.
Many of us have seen the Manhattan skyline, with buildings of various heights stabbing toward the sky. Compare this to the distribution graph that is part of the Netezza Administration GUI application. Normally this should be very flat (but a jagged edge is usually okay). Clearly a Manhattan skyline is forbidden.
Or is it? If we have a table that is very skewed (like a Manhattan skyline) but the data can be easily "horizontally" sliced with zone maps, our round-trip time for a "tall" data slice is no different than a "short" one. All we need to look out for is process-skew (too many horizontal slices on one dataslice).
Measuring the madness
Okay, David, this is all very interesting, but how can I know which extents or pages are being used by the keys? Well, there are a couple of handy hidden columns on each row that can help tell the tale. One is the _PAGEID and one is the _EXTENTID.
select count(*), datasliceid dsid, _pageid pid from fact_customer group by datasliceid, _pageid;
will tell us how many distinct pages are being used for each data slice.
select count(*), datasliceid, _pageid from fact_customer where transaction_date = '2013-01-01' group by datasliceid, _pageid order by datasliceid, _pageid;
In the above, the "count" should be reasonably even for each dataslice.
If the number of distinct pages per data slice is radically more than one, then let's organize on transaction_date and then groom. Now try the metrics again to see whether it reduced the total pages.
I am sure with these two columns in-hand you can think of a variety of creative ways to use them. The conclusion of it all is to get the records with common key values packed as closely as we can so they take up as few extents / pages as possible.
It's a wrap
So now that the Organize seems a little better, you know, organized, maybe this will provide a bit of guidance on how to set up your own Organize and zone maps.
And don't forget to groom when changing the Organize keys.
We won't need to groom every time we do an operation. I would suggest a groom on both a schedule and a threshold. Pick a threshold (a lot of folks like five percent). When the total deleted rows gets above five percent of the total non-deleted rows, or the total pages per unique data point gets above an unacceptable threshold, it's time to groom. But grooming on every operation is expensive, has marginal value, and may actually throw away records we wanted to keep (in case of emergency rollback).
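A minimal sketch of that maintenance posture (hypothetical table name; the RECORDS ALL form reclaims logically deleted rows, assuming the site has no need to keep them around for rollback):

GROOM TABLE fact_sales RECORDS ALL;   -- run when deleted rows exceed the agreed threshold
GROOM TABLE fact_sales;               -- run on schedule to re-apply the Organize as new data lands

The threshold check itself (deleted rows versus live rows, or pages per distinct key) is left to the site's monitoring scripts.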
I recently did a Virtual Enzee presentation and listed the Top Ten requirements for scalable bulk data processing inside a Netezza machine.
I'll come back periodically and elaborate on them.
1. Platforms easily scale for increasing stress
We have a Netezza machine, so what could go wrong? I was asked a desperate question by an Enzee as to how to get more power out of their machine. After nearly two days of struggling with them I finally asked how big their machine was. It was a TwinFin-3. The answer I gave them they clearly did not like, and they even sought solace on the shoulder of another, who told them the same thing: get a bigger box. The TwinFin-3 is a dev box, not a production box.
Stress comes in many forms. Constantly changing requirements. The need for functional and physical agility. As these things increase, we need a platform that will work with us, not against us.
2. Human intervention eliminated wherever possible (no eyeball-based actions)
This means ALL aspects, not just operational ones. Everything from table maintenance to application development. AUTOMATE!
It is humorous to hear testers offer up their methods, with naive blurbs like "open the application and examine the contents". No, with billions of rows there is no such thing. We must use statistical checking that operates on sets, such as summaries, counts-of, etc. No longer can we "eyeball" the data.
Likewise with runtime processes. Define a table with 200 columns and try to put an ELT query against it. 200 columns in the insert phrase, 200 entries in the select phrase, and to maintain it we have to keep them in sync with "eyeballs". No, this doesn't scale.
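As a hedged sketch of taking the eyeballs out of a wide ELT statement, the column list can be generated from the catalog instead of being typed and re-typed. The _v_relation_column system view is an assumption here, and its exact columns can vary by release:

SELECT attname
FROM _v_relation_column
WHERE name = 'FACT_SALES'    -- the wide table in question (hypothetical name)
ORDER BY attnum;

Feed that list into the template that emits the insert-into-select-from statement and the columns stay in sync with the table by construction instead of by eyeball.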
3. Architecture-centric platforms express applications with patterns
Oddly, application developers, like those who develop using stored procedures, whip out a bunch of application-centric "code" and when the smoke clears, they see repeatable patterns all over it. Unfortunately, they can't take the patterns anywhere because they are hard-wired.
The more architectural approach is to harness the patterns as capabilities and allow our applications to express from them. The application is then an expression of the capabilities, not the center of gravity.
4. Deliberately simple to leverage and operate
Large-scale systems can have mind-numbing characteristics for the un-initiated. It is incumbent upon us to deliberately simplify the interface points to them. Simple utilities, fewer keystrokes to achieve mundane goals, automation for rote tasks.
5. Built for administrative recovery, not reactionary recovery
This can be as simple as: when data arrives and has errors, we don't come to a full stop. We cordon off the error records into an administrative/logical status and report them for later remediation. In systems of scale, we cannot halt the processing of tens of millions/billions of records just because a few stragglers are misbehaving. The time it will take to process the data is the problem. If we are 20 minutes away from the process being complete, then we are always 20 minutes away if we have fully stopped the flow for the sake of a few records. If we allow the process to proceed with error-capture, we will close the 20 minutes and then the admins have more breathing room to fix the problem without the scrutiny or pressure of the clock.
6. Data and metadata-driven
The environment can no longer be driven by application code. It has to be driven by an architectural harness that responds and adapts to data and metadata. This is a non-trivial endeavor, of course, but entirely possible to achieve.
What does this look like? The data model is arguably the most volatile component of the solution. Changes in it can destabilize a solution. We need ways and utilities to buffer ourselves from the impact of change all the while enabling the change. It won't do to tell the users that the data model is frozen for six months because we fear impact to our tightly-woven application code (e.g., stored procs).
7. Blended/hybrid approaches quickly adapt and scale
One doesn't have to make an exclusive choice between ETL and ELT. People really want to leverage the power inside the machine but feel constrained that doing so may obviate the ETL tool. Not so - both of these technologies have a major role to play and we should balance them for the best-of-breed solution.
8. Template-driven applications: SQL is an artifact, not the center-of-gravity
In the Virtual Enzee I offered several examples of templates for SQL transforms (insert-into-select-from), views (to avoid nesting) and stored procedures (to build from a template rather than editing them in a SQL tool).
Why do this? The developer puts application logic into the template. At run time, or installation time in case of the SP or View, we formulate the product from the template. This allows us to automatically include non-optional aspects like operational controls, inline status reporting and other elements that we don't want the developer to worry about, much less hand-craft on their own.
Need another bit of operational control? Add it to the template factory and don't worry about the application logic.
More importantly, we can generate a template from the catalog and by definition it is tied to the catalog. It is therefore easy to compare the already-deployed templates to changes in the data model. Since 90 percent of all new columns are invariably pass-through columns, running an impact analysis like this captures over 90 percent of the issues in one shot.
9. Inefficiencies are our number one enemy
One of our clients had a TwinFin 48 they were planning to use for their development phase and then cutover internally to production. I asked them to dial back the developers so that it had the effective power of a TwinFin 12. They were a bit stunned at my request until I noted: The TwinFin 12 has a lot of power for development, but a TwinFin 48 will hide bad data models and sloppy code. Lots of power can make any lousy code/data model look spectacular.
In many cases of a Netezza machine under stress, upon review we find that the inefficient practices have been going on for years, some since the box arrived. But the machine was so powerful that it masked the inefficiency, allowing the box to eat itself from the inside out.
Preserve capacity at the processing level, not by guarding the data storage level. Do not be afraid to spin off replica data structures (even large ones) just for a different distribution, if it means that the machine can close its queries faster.
10. Operational integrity drives functional integrity
We understand this as a matter of quality control. Hamburgers from a national chain should taste the same no matter where we buy them. This is not accidental. The end user data is only as good as the processes that are delivering it.
If we make it so the operators have a difficult time handling it, or the admins don't understand it, or the troubleshooters can't get things done, they will start to grouse about the quality of their existence.
On the flip side, I know folks for whom we radically simplified things, and when we showed them the various utilities they would need to keep things in order, they balked. "Do we have to know all this stuff? Why is there so much stuff to know?" And yet, we have reduced a thousand things down to one, but they cannot grasp how much more complex it would be without our having simplified it.
We know that Netezza embraces simplicity. We just have to be mindful to maintain this spirit when we build things around it.
At the functional/capability level, we need to drive operational integrity into the data itself, outfitting the tables and rows with additional columns for the sole purpose of operational control. Otherwise the functional model is pretty much out-in-the-open and we won't have a way to manage the tables in a consistent, harnessed, repeatable form.
I recently bumped up against a Proof-Of-Concept where "Two MPP Powerhouses went Toe-To-Toe" - and I was fairly excited to see that there might be a contender in the ring, stalking the PureData Analytics/Netezza machine. These POCs are always fun to watch. Not quite as gratuitous as gladiators in the Colosseum, I'm sure, but engaging nonetheless.
In this corner...
The contender, dressed in white, was an MPP, er - clustered servers posing as an MPP. Now let's level-set on what an MPP is, and what it is not.
As we can see with the above high-intensity graphics: six 2-cylinder Fiats versus a 12-cylinder Jag. Now here's the trick question so don't be shy: Which one really accelerates to close the distance faster? Take your time, I'll be right here.
It's no mystery that orchestrated, optimized and purpose-built hardware beats general-purpose commodity hardware every time it's tried. The contender was a cluster of commodity servers posing as an MPP. When we tried to launch scanning analytic queries on it, we could practically hear the whirrrrr-click of the machines as they quietly, well, went silent. For a very long time. I wondered if they would ever offer up the answer. Unlike the Hitchhiker's Guide where they had to wait a million years to get the answer to the ultimate question, we decided to kick off the same query on the PureData/Netezza machine.
I hit the "enter' button while I was standing at the keyboard, then recalled that I needed to check on something else and lowered myself into the chair, but before I could sit - Netezza had the answer. No, it wasn't "42" but something a bit more actionable.
We left the building that day satisfied that the Jaguar had in fact smoked the competition. I probably should mention that even as we left the building, the "other guys" still had not come back with an answer. Sad indeed.
Scalability for scanning/bulk operations is a result of strong architecture. It cannot be cobbled together with general-purpose parts. The cluster of servers posing as an MPP had failed. Cluster Failure. Send it back.