When processing in bulk at scale (that is, tens of billions of rows), we can trust Netezza to pull it off rather handily. With hardware and architecture in the bag for us, is there anything we can do wrong, or perhaps inefficiently, that would deny us entry to the soaring heights of performance so elusive on other platforms? After all, if the stratosphere is within reach, we want to go there as quickly as possible.
As it turns out, there are some things that Netezza doesn't do well. And all of them are transactional in nature. Not to worry, as we really don't want Netezza to foray into those realms. We can do it faster and simpler in set-based form. In fact, we find that many newbies have to shake off some transactional thinking that in later days they refer to as cobwebs. The trappings of transactional thinking are artificial constraints, necessary evils of our existence on an SMP-based RDBMS, but they are in our rear-view mirror and rapidly fading once we cross over to Netezza.
Now don't get me wrong, many people will deploy an SMP-based RDBMS for their warehouses and have a smashing time of it for many years, perhaps far into the future with no scalability issues at all. We could even venture to assume that over half of all warehouses rolled out this way will likely never see a capacity hiccup, simply because they are rolled out firmly in the center of the "bell curve". (A little statistics lingo there)
Recall many years ago, in the prob&stat class we may have slept through, that the bell curve gives us the 80-20 rule or something akin to it. It also gives us another rule: that twenty percent (or so) of warehouses will be at the small end, twenty percent at the top end, and the rest in the middle. So guess what? This puts eighty percent at or below the middle-of-the-road and not really on the Netezza radar for the moment. But perish the notion that Netezza intends to ignore that market. It plays very well in that zone too. In fact, Netezza can move-and-shake along the entire continuum of the solution spectrum.
But when it enters the upper twenty percent zone, the air starts to get thin. Nitrogen narcosis befalls the heroes-of-the-eighty-percent, and they start to fall away very quickly. When we get into the zone of searching, analyzing, crunching and even just summarizing data on the orders of billions, tens of billions, and hundreds of billions of rows, we have a number of additional rules that will impose themselves on our existence. These are no different than the laws of aerodynamics we would use to defeat the laws of gravitation. One set of rules has sway, while another set of rules is used to overcome the effect of the first. The rules we have to deal with are those concerned with scale. If we don't pay attention to scale, down to the lowest level possible, no amount of efficiency in software will bail us out. When it comes to scale, salvation is in the hardware. More importantly, the architecture of the hardware.
I often adjure prospective Enzees that when evaluating the TwinFin against other platforms, they should avoid the kool-aid softball questions that anyone can answer. When a proof-of-concept tournament is underway, the easy questions all float around in the eighty-percent zone where any of the technologies play in one form or another. Rather, take a tough problem, one that only exists in the twenty-percent zone, and drop it in the lap of the tournament players. Then take a look at three things: (1) what was the raw performance number, (2) what was the difficulty or complexity of the solution, (3) how many vendor engineers did it require, and as a bonus (4) how long did it take the vendor engineers to give you an answer? Minutes, hours, days? How long they took to formulate an answer, and how many of them were required, is second only to the performance of the solution itself. After all, anyone can test-or-evaluate in the eighty-percent zone where everyone is a hero. Get the problem domain into the twenty-percent zone (where we'll be on day one after the technology is installed) and get those answers now. Wouldn't it be a bit awkward to ask these questions when it's too late to do anything about it?
I have mentioned in other essays that the Netezza architecture, and its deep attention to details-of-scale, are what come alongside us as our ally when the other heroes-of-eighty-percent simply gasp for air. One of these is the architecture of the disk drives themselves. The drives are of course dedicated to their own CPU, RAM, FPGA and all that, but the layout of the disk platter itself is also intriguing. As an aside, each Netezza disk shares information with another sister drive so that if it should crash, it can be hot-swapped and rebuilt from its sister drive very quickly, easily and automatically. So where is this drive data stored on the sister-drive? As the design would have it, on the inner rim of the sister drive's platter. Otherwise, each drive's outer rim holds user data and the middle ring holds temp space and system data. But we can see the wisdom in this model: the outer rim is moving fastest under the heads, so it delivers data to the user much faster than the inner rims would.
Does our eighty-percent-hero do this for us? Well of course not. The SMP-based models of Netezza's competitors share their drives like any other SMP device. The odd state of affairs is that they are shared-nothing at the functional level, but shared-everything at the hardware level. This dissipates the strength of the machine. Conversely for Netezza, it's shared-nothing all-the-way-down. This bodes well for query support, because it means every query gets the full strength and undivided attention of the machine, even for the fractions of a second necessary to complete it, and thus each query is returned on a wave of strength, not a river of dissipation.
For those who want to roll out the eighty-percent models and be happy with it, hey, I don't begrudge anyone making a living. Functionally speaking, those warehouses have some of the most advanced features on the planet - in the eighty-percent zone. Netezza's customers by definition have already breached the eighty-percent mark and will never revisit it.
But true to form, when dealing with a non-scalable platform, our enemies rapidly reduce to two: size and complexity. That is, the larger the size of the solution, the less functionality we can derive from it. And the greater the complexity of the solution, the more its storage capacity is stunted from growth. Guess what happens when we migrate either of these solutions to a Netezza machine? When the complex functionality is ported, it suddenly finds a breath of fresh air and data volumes start to grow, sometimes exponentially. If the data sizes are already too unwieldy and we move to Netezza, the additional capacity allows us to build more functionality, which in turn leads to - you guessed it - even more growth of data volumes. The short answer is, when migrating or upgrading either of these animals, the new platform had better be able to immediately scale in orders of magnitude, not just incremental percentages. This is why so many migrations onto the eighty-percent platforms sputter and fail. The initial migration is successful, but they find themselves out of capacity in short order. This was never expected, so the migration is a bust.
But with Netezza, it has the capacity to take on the migrated functionality and grow with it - without any particular hiccups or worrisome meetings about its abilities. It just hums along, keeps all its promises and never complains.
Okay, so I'm a big fan of the technology. Mostly because it makes a migration easy and makes its customers happy. Who couldn't use a little more of that?
But for those heroes-of-the-eighty-percent, there's a place for them should they choose to don the necessary equipment for high-altitude flight. Of course, there aren't any heroes in the twenty percent zone, considering that the only true hero in that space is the TwinFin. And when flying above the clouds (with any technology), humility dictates that we set aside the hero-way - and we really have to trust the hardware, don't we?
For a brief history of why ELT (that is, in-box data processing) is even a topic of discussion, we must recognize that high-powered appliances such as Netezza have not only made such an implementation viable, but even desirable.
Just so we level-set on what one of these looks like, it's a SQL statement. Usually an insert/select, but it can also include updates and the like. Many of you recognize these as multi-stage operations inside a stored procedure. The sentiment of course is that such an animal can perform better inside-the-machine than taking it out of the machine (through an ETL tool), processing it, and putting it back. This may be true for smaller data sets, but you aren't reading this because you have smaller data sets, are you? Netezza users are big-game hunters. Pea-shooters are for small animals, but if we want big game, we'd better bring a bigger game with us, put on a bigger game face, and bring bigger equipment.
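To make this concrete, here is a minimal sketch of one such stage - a single insert/select that shapes data from one staging table into the next, entirely inside the machine - assuming a Python DB-API-style cursor (the table and column names are purely illustrative):

    # One ELT stage: the whole transform is a SQL statement the machine runs.
    # Table and column names are hypothetical.
    TRANSFORM_SQL = """
    INSERT INTO stage_orders_clean (order_id, customer_id, order_ts, amount)
    SELECT o.order_id,
           o.customer_id,
           o.order_ts,
           o.amount * x.rate        -- shape/convert on the way through
    FROM   stage_orders_raw o
    JOIN   stage_fx_rates x ON x.currency = o.currency
    """

    def run_stage(cursor):
        # Execute one checkpointed insert/select stage in-the-box.
        cursor.execute(TRANSFORM_SQL)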
But is it really the size of the equipment? Netezza users know that the size and the architecture are keys to success. Dog-piling a bunch of CPUs together does not a power-frame make.
Okay, back to the storyline here - in-the-box SQL-transforms, in a traditional RDBMS platform, are the realm of small game. Once the game gets bigger, these transforms degrade in performance, and rather rapidly. Watch as a swarm of engineers tries to reconstruct and rebuild the procedures, the SQL, the tables and even upgrade the hardware in a futile attempt to achieve incrementally more power. Emphasis on incrementally-more. Not linearly more.
As they grow weary of this battle, the ETL tools start looking better and better. We reach a critical mass, and the ELT-in-the-box is summarily forsaken as we stand up the ETL tool.
Sometime after this transition, the data volumes once again overwhelm the environment. One thing leads to another, and one day the Netezza machine arrives on the floor. Hopes are high.
But notice the transition above - ELT was forsaken for ETL, likely never to return. But wait, now we really have the power to do the ELT SQL-transforms, but we've mentally and perhaps emotionally (yeah, verily even politically) moved away from SQL-transforms.
Some reading this might scratch their heads and think, What's he talking about? We've been doing ELT in the machines using (PLSQL, etc, name your approach here) for many years. Why would we shy away from it?
Why not use an ETL Tool? I mean, they handle push-down SQL-generation right?
I can perhaps summarize this situation with a single conversation I had with an ETL tool vendor who was hawking this capability for his own tool. After showing me the mechanics of making one of these little jewels operate with aplomb, I asked him, "So what about doing a few hundred of these in a sequence, or branching them into multiple sequences?" The vendor rep looked almost hurt. "Why would you want to do that?"
Well, if we're really talking about migrating transforms into the machine, this can grow into hundreds of operations rather quickly. This situation apparently overwhelms the logistical capability of even the most powerful ETL tool. But I have hope that they will solve this situation. Eventually.
I am not holding out hope that it will happen soon, or voluntarily. These tool vendors have invested millions in the performance boosting of their own products and will not likely toss this investment on the flames of the appliance movement, even if said flames are the exhaust flames of the appliance's rocket engines. This is why I say "eventually". They don't really have a marketable reason to embrace this approach.
Another problem of the ETL tool is that it is so divorced from the appliance's infrastructure that it cannot control the cross-environment logistics. This is especially true of the "virtual transaction" - that is - multiple flows arriving in multiple tables that are each in context of one another, yet are individually shared-nothing operations. If one of the flows should fail, how do we back out of this? Can we do a rollback of the tables where their flows succeeded? No we cannot. We could certainly implement an approach, but this is neither inherent nor intrinsic to the ETL tool. We need a shared-nothing virtual transaction that will control all of the flows in a common context, commit them in that context, and faithfully roll back the results in that same context. ETL tools don't go there. Unless we implement the tool that way. As an application. Once implemented, how do we reuse this for the next application, and the next? We can see that it's not part of the tool itself.
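To illustrate the idea (a hypothetical sketch, not a feature of any ETL tool), a shared-nothing virtual transaction can be approximated by landing each flow in its own scratch table and promoting the results only when every flow has succeeded:

    def virtual_transaction(cursor, flows):
        # flows: list of (scratch_table, final_table, transform_sql).
        # Populate every scratch table first; any failure aborts before
        # promotion, so the final tables are never left half-written.
        for scratch, final, sql in flows:
            cursor.execute(sql)
        # the "commit": promote all results in one common context
        for scratch, final, sql in flows:
            cursor.execute("INSERT INTO %s SELECT * FROM %s" % (final, scratch))
        # cleanup of the scratch assets
        for scratch, final, sql in flows:
            cursor.execute("DROP TABLE %s" % scratch)

A real implementation needs more (the promotion phase itself must be guarded), but the shape of the answer - a common context wrapped around individually shared-nothing flows - is the point.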
If next-generation "ELT" scenarios are to be successful, they need several very important capabilities that are simply non-optional and non-trivial: virtual-transaction control across parallel flows, dynamic dependency and flow-path management, minimal inter-query latency, and administrative error capture and recovery.
I am sure the visitors of this blog have even more aspects of a "wish list" that they have implemented (perhaps painfully so) and want more control over the data, its processing logistics and error control and recovery. Feel free to add your own comments and suggestions here.
I have noted in prior posts several of the "banes" of in-the-box data processing, not the least of which is harnessing the mechanics and nuances of the SQL statement itself. After all, the engine of in-the-box is a series of insert/select SQL statements. I've also noted that we need to squeeze the latency out of the inter-query handoff and management. These are important factors for efficiency, scalability and adaptability.
But this article deals primarily with "adaptive" SQL, that is, the ability to surgically and dynamically control the SQL, the paths of flow between SQL statements, their timings and the ability to conditionally execute them.
I am drawing a contrast between this approach and the common "wired" ETL application. In the wired application of an ETL tool, all components are known and flow-paths predefined. If we want to shut off a particular component or flow, we'd better make that decision at startup because we won't get to do this later. A benefit here is that if we add or change a flow-path, the ETL tool's dependency analysis will (usually) detect it and give it a thumbs-up or thumbs-down. We can (and do) perform this kind of design-time analysis, but what of dynamic run-time analysis?
Case in point: One group performs a trickle-feed of data from a change-data-capture source, so on any given loading cycle, we don't know which files will show up. Not to worry in an ETL tool, since we would just build a separate mini-app to deal with the issues. The mini-app would key on the arrival of a specific file, process the file and present results to the database. This is a very typical implementation. But with hundreds of potential files, it's also logistically very daunting and hard to get the various streams to inter-operate. In fact, an ETL tool quickly reduces to "spaghetti-graphics" and the graphical user interface is just in-our-way at that point.
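One alternative, sketched below with an illustrative handler registry, is a single generic intake loop that routes whatever files arrive by name pattern, instead of hundreds of file-specific mini-apps:

    import fnmatch, pathlib

    # Whatever files show up this cycle get matched to a registered stream.
    # Registry contents, patterns and the landing directory are illustrative.
    HANDLERS = {
        "cust_*.dat": "load_customers",
        "ord_*.dat":  "load_orders",
    }

    def dispatch(landing_dir):
        for f in sorted(pathlib.Path(landing_dir).glob("*.dat")):
            for pattern, loader in HANDLERS.items():
                if fnmatch.fnmatch(f.name, pattern):
                    print("routing %s -> %s" % (f.name, loader))
                    break
            else:
                print("no handler for %s; quarantining" % f.name)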
Case in point: One group has multiple query paths/flows where SQL statements build one-to-the-next for the final outcome. These can follow a wide range of paths, not unlike a labyrinth, depending on a variety of different factors. The problem is, these factors aren't known until run-time and only appear in fleeting form as the data is processed. How do we capture these elements and use them as steering logic? In an ETL tool, our options are limited to none. In this particular case, three primary paths of logic were available each time the flows ran. Sometimes all three paths ran end-to-end. Sometimes only one or two would run, or perhaps none at all. The starting conditions and unfolding data conditions determined the execution path.
But we have another name for this, don't we? Isn't this just plain vanilla "computer programming"? Where the data shows up and we use the encountered data and encountered conditions to guide the IF-THEN-ELSE logic to conclusion? The problem, you see, is that we are so accustomed to using IF-THEN-ELSE at the ROW/COLUMN level, we cannot imagine what this would look like at the SET level. Ahh, the conditional logic driving SETS is unique and distinct from that which drives basic elements. But then again, we can only scale in sets, not in basic elements. This is where the dynamic nature of conditional-sets is invaluable.
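As a small sketch of what set-level IF-THEN-ELSE might look like (the table names and steering rule are illustrative), the decision is made from a set-wide measurement, and the action taken is itself a set-based statement:

    def conditional_set_step(cursor):
        # The steering decision comes from a set-wide measurement (a count)...
        cursor.execute("SELECT COUNT(*) FROM stage_late_arrivals")
        (late_count,) = cursor.fetchone()
        if late_count > 0:
            # Path A: fold the late arrivals in before summarizing
            cursor.execute(
                "INSERT INTO stage_work SELECT * FROM stage_late_arrivals")
        # Path B (always): summarize whatever the set now contains
        cursor.execute("""
            INSERT INTO daily_summary (biz_date, total)
            SELECT biz_date, SUM(amount) FROM stage_work GROUP BY biz_date
        """)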
But this isn't really about conditional sets, either. Only that conditional sets are a necessary capability and we have to account for them along with many other subtle nuances. Let's follow:
We have an external file and we load this into an intermediate/staging table (TABLE-A) in preparation for processing.
Now we build another target intermediate table (TABLE-B) and an insert/select statement to move / shape the data logically and physically from TABLE-A to TABLE-B.
From here we have several more similar operations, so we build intermediate tables for their results as well, such as TABLE-C, TABLE-D and TABLE-E
TABLE-A >>> TABLE-B >>> TABLE-C >>> TABLE-D >>> TABLE-E
Now let's say we have another chain of work starting from TABLE-V:
TABLE-V >>> TABLE-W >>> TABLE-X >>> TABLE-Y >>> TABLE-Z
Now something interesting happens, in that the developers sense a pattern that allows them to reuse certain logic if they only put these quantities into a couple of working tables, which we will call TABLE-G and TABLE-H, and now the flows look like this:
TABLE-A >>> TABLE-B >>> TABLE-C >>> TABLE-D >>> TABLE-E (with TABLE-G also feeding TABLE-C)
TABLE-V >>> TABLE-W >>> TABLE-X >>> TABLE-Y >>> TABLE-Z (with TABLE-H also feeding TABLE-X)
Notice how TABLE-G is feeding TABLE-C and TABLE-H is feeding TABLE-X, so that each of them has a 2-table dependency.
Now we get to the end of the chain of work and learn that TABLE-Z has to leverage some data in TABLE-C! We don't want to rebuild TABLE-C just for TABLE-Z, but in an ETL tool this data would be bound/locked inside a flow. We could redirect the flow to TABLE-Z, unless the flow to TABLE-Z is entirely conditional and we don't know it until we encounter TABLE-C. What if, for example, the results of TABLE-C are conditional and, if the condition is realized, none of the components following TABLE-C are executed? However, we could have TABLE-Z see this absence as acceptable and continue on.
Okay, that's a lot of stuff that might have your head spinning about now, but the simplicity in resolving the above is already in our hands. In any flow model, upstream components essentially have a "parent" relationship to a downstream "child" component. This parent-child relationship pervades flows (and especially trees) and as we can readily see, the above chain-of-events looks a lot like a tree (more so than a flow).
More importantly, each node of the tree is a checkpointed stop. We must build the intermediate table, process data into it and move on, but once we persist the data, we have a checkpointed operation. This is why it behaves so beautifully as a flow and a tree.
Now let's say over the course of the SDLC (regular maintenance), a developer needs to add some more operations and connect other existing operations to their results. This is essentially just introducing new source tables in the where/join clause, but the table has to exist. In short, if we add a new table to the logic of TABLE-X, it will now be dependent upon its original tables and the new ones. (Its query will break if they are not present at run time.)
It is easy enough (honestly) to perform a quick dependency-check over all of our queries to make sure that their various source tables are accounted for. In other words, an operation actually exists that will produce the table. What if we picked the wrong table or even misspelled it? At run-time we would know, but we would rather know before execution because it's a design-time issue. This may verify that logically we have a plan to create the dependent table, but it does not deal with the simple fact that conditional circumstances may forego the physical instantiation of the table. Transforms ultimately do not operate on intent, but on the presence of physical assets.
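A bare-bones version of such a dependency check might look like this (the regex is a deliberate simplification of SQL parsing, and all names are illustrative):

    import re

    def check_dependencies(transforms, base_tables):
        # transforms: dict of target_table -> SQL text.
        # Flag any source table that no operation produces; a misspelled
        # source shows up here, before run time.
        produced = set(transforms) | set(base_tables)
        for target, sql in transforms.items():
            sources = set(re.findall(r"(?:FROM|JOIN)\s+(\w+)", sql, re.I))
            for missing in sorted(sources - produced):
                print("%s: source table '%s' is never produced" % (target, missing))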
As another nuance, this creates a disparity between the design-time flow of data, and the run-time flow of data. If the run-time is governed (e.g. ETL tools) so that the dependencies and conditions are all evaluated at the start of the application, the design-time and run-time are more easily mated for review by an auditor or analyst. But if any part of it is dynamically conditional, we can see how this could practically nullify the design-time form of the flow. They would simply say, "I know what the flow would do by design, but I want to see what it actually did at run time, because the data isn't matching up". Aha - so "intent" counts for design review, but "intent" is not what puts physical data into the tables. Operational processes do that.
As noted above with the necessity for conditionality and reduction of inter-transform latency, we now have a need to weave together at run-time what the flows will actually do. The "source tables" for a given transform are found in the where/join phrase and these had better be present when the SQL launches or it will be a short ride indeed.
And now, what you did not expect - one of the most powerful ways to use a Netezza machine is to forego the "serialization" of these flows and allow them to launch asynchronously. We can certainly throttle how many are "live" at a time, but if any or all of them can launch independently, how on earth are we supposed to manage the case where one or two of them really are dependent on another one or more? Do we put these in a separate flow? Do we really want our developers to have to remember that if they put an additional dependency in a transform, they have to regard whether that preceding transform has actually executed successfully?
So that's the real trick, isn't it? If I have forty transforms and all of them could run asynchronously except for about ten that can only run after their predecessors complete, I have several options to see to it that these secondary operations do not fail (because their predecessor has not executed yet).
I can serialize them by putting them into separate flows (or branches). One of them kicks off and runs to completion while the next one waits. This is logically consistent but also inefficient. If those secondary transforms are co-located with the original set, the optimizer can run them when there is bandwidth rather than waiting until the end. It is also logistically unwieldy because a developer has to remember that if a transform should gain a dependency, it has to be moved to the second flow.
I can fully serialize them into a list, but this is the most inefficient since it "boxcars" the transforms and does not leverage the extra machine cycles we could have used to shrink the duration.
I can link them via their target table and source tables, such that this relationship is dynamically identified and the flow path dynamically realized. If a given transform does not run (conditional failure) or simply fails to execute, the dependency breakage is dynamically known. What does this do? What if a given transform is supposed to use an incoming (intake) table if it is present (data was loaded), otherwise use a target-table's contents (e.g. the trickle-feed, change-data-capture problem)? This allows the transform to do its work with consistency but also have the ability to dynamically change its sources based on availability.
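A sketch of what this asynchronous, dependency-released launching could look like (illustrative only; a real implementation needs per-connection cursors and richer error handling):

    from concurrent.futures import ThreadPoolExecutor

    def run_flows(transforms, run_sql, max_live=8):
        # transforms: dict of target -> (source_tables, sql).
        # run_sql(sql) returns True if the transform ran, False if its
        # steering condition said "skip this cycle".
        built, absent = set(), set()
        pending = dict(transforms)
        with ThreadPoolExecutor(max_workers=max_live) as pool:
            while pending:
                # release a transform only when every table it reads is
                # resolved: either built, or deliberately absent
                ready = [t for t, (srcs, _) in pending.items()
                         if set(srcs) <= built | absent]
                if not ready:
                    raise RuntimeError("unsatisfiable: %s" % sorted(pending))
                futures = {t: pool.submit(run_sql, pending.pop(t)[1])
                           for t in ready}
                for t, fut in futures.items():
                    (built if fut.result() else absent).add(t)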
Now, we know ETL tools don't do this. Other tools may attempt to rise to this level of dynamic pathing, but the bottom line is that if those tools don't provide this kind of latency-reduction, high-throughput, dynamically adaptable model, they will not be able to leverage the full bandwidth of the machine. Trust me on this - the difference is between using 90 percent of the machine or only 10 percent at a time. That machine packs the virtual joules to make it happen, so let's make it happen.
When we originally developed our framework to wrap around some of these necessary functions, we had not considered these nuances of dynamic interdependence and frankly, ELT was so new that it didn't really matter. The overhead to execute "raw" SQL was zero, but we could not effectively parallelize/async the queries without losing control. Running async chains of transforms necessitated detailed control, but nobody had a decent algorithm for it, so once again Brightlight had to pioneer this capability. Our architecture allowed us to easily integrate these things into the substrate of the framework as a transparent function. This is the primary benefit of a framework, that the developers can continue to build their applications without disruption, but we can upgrade and enhance the framework to provide stronger and deeper functionality. Whether our framework is right for all applications is not the issue, but whether the complete implementation is right for Netezza. It's a powerful machine and we should not arbitrarily leave any cycles on the table.
Imagine slowly running out of steam because of latent implementation inefficiencies, then ultimately asking for a Netezza upgrade that wouldn't be necessary if the inefficiencies weren't present. This has happened with more than one of our sites, and rather than upgrading to all-new hardware, we installed, converted and bought back an enormous amount of capacity. They eventually upgraded the hardware much later on, but for the right reasons.
As with all analysis of implementations, please accept the following as a composite commentary (much like the Case Studies in Netezza Transformation). The names have been changed largely to protect the guilty. The innocent have already been punished.
So for those of you who may recognize shadows of your own environments in the discussion below, you now have plenty of time to get them cleaned up before anyone finds out about it! But honestly, don't admit to anyone that you are "doing it this way". Just fix it. What happens in the underground, stays in the underground!
I cannot (today) count how many on-site assessments I have executed or the variety of their outcomes. I have to say that on balance, most technology folks are pretty sharp and have things on track. I can usually advise them on how to make things better. This is of course exactly what they are hoping for. What manager wants to hear that they've done most of it wrong? Or that their investment in the technology and the people is a bust? No managers I know take their responsibilities so lightly. Some, however, inherit a mess from their predecessor and are flummoxed as to how to unravel it. They don't want me to "put lipstick on the pig", so to speak, but to provide a roadmap on how to dig out of the ditch (or hole, or rat's nest) and move things forward in a healthy direction.
Working with a 10400 Mustang, pre-TwinFin Era, one of our recently arrived data warehouse aficionados took our leadership aside and said, "What they are describing is a reporting system, like a data mart. But we aren't using any technologies to help them with this. We need to have a talk with them about standing up Microsoft SQLServer so we can put a data mart on it and..."
Stop right there. Yikes. He was so full of passion! It was really, really hard to talk him back from the ledge. So I finally said, "If you mention this plan to the client, even once, we will have to remove you from the project." And his eyes went wide like he'd been hit with a two-by-four across the forehead. "Why?" was his impassioned plea. Time to educate him on what Netezza does, right?
Netezza is a data warehouse appliance. It circumscribes and simplifies the data warehouse disciplines. It also makes some strong assumptions about the potential users of the appliance, not the least of which is what-problems-it-solves-well and what-problems-it-does-not-solve-at-all. (World Peace, Global Warming, Time Travel, Cloning of IT Staff Members, and getting the Dallas Cowboys to the Super Bowl, to name a few.)
Example: What if you were going about doing some-regular-task manually-and-tediously, and someone then showed you a device that would automate it? You might count your blessings and move forward with a skip in your step. But when you share the device's features with someone unfamiliar with the manual, tedious nature of an existence without it, they scratch their heads and say "I don't get it."
I am reminded of a joke where a lumberjack is in the market for a new saw. A powered-chain-saw salesman asks him how many trees he cuts down in a day with his manual saws, and the man says "30". "Ahh," says the salesman, "with one of these you could cut 100 or more in a single day." The lumberjack doesn't believe him, so the salesman tells him, "Look, take this one for a test drive. Use it all day tomorrow and if it doesn't at least double your output, bring it back, no harm done, no questions asked." The lumberjack agrees but returns two days later, clearly disgruntled about the chain-saw's performance. "I was only able to cut down 10 trees with this lousy thing." To which the salesman balked, and wondered if it might not be defective. So he beckoned the lumberjack to follow him outside to their testing area, where he threw a log across two sawhorses and pulled the starter cord on the chain saw. When it roared to life, the lumberjack took a step back and shouted over the sound of the motor - "WHAT'S THAT NOISE?"
Clearly a little product-orientation was in order, no?
A CTO once lamented to me, "Well, we did the best we could with what we had." - Well sure. Don't we all? I don't know of anyone who borrows or rents help to do it poorly. Nor do they take their best people to deliberately make something sub-standard. The problem is, without a baseline knowledge of what the machine can do, how it is typically deployed, and what to embrace or avoid about it, then it's really no different than the lumberjack's problem. He did the best with what he had, didn't he?
What were the outcomes? Poor perception of the product by the user. An objective lack of productivity. General grousing about something that is not well-understood. Where have we seen this before? Give a call to practically any help desk of any product, especially a technology product, and they will bend your ear with "howling" examples of users who mis-applied the product - and some would say - were just plain stupid about it.
Underwriters Laboratory (UL) has a standard policy of quickly adjudicating claims against them no matter how frivolous. Seems that just having the "UL" on the product makes them a lightning rod for litigation. One man took his name-brand lawn mower, which also sported the UL sticker, picked it up while it was still running, and attempted to use it as a hedge-trimmer. He slipped, the lawnmower fell on him, and he sued the maker of the lawnmower and UL. Three boys found a giant bullfrog and decided to kill it by setting it on fire. They grabbed a gas can from the shed, doused the bullfrog with it and tossed a lighted match on the hapless creature. Someone should have told the lads about the volatility of gasoline fumes, because the flames climbed the can's fume-trail directly into its mouth and detonated the can's contents, seriously wounding and severely burning all three boys. They sued the makers of the gas can and UL, which also had a label on the can. Perhaps someone should send this one in to A Thousand Ways to Die. For bullfrogs.
But this is not a dissertation on A Thousand Ways to Fail with Netezza because frankly, it's really hard to fail with a machine this powerful. This is why I say that when we encounter howlers like the guy with the lawnmower or the frog immolation, we're clearly off the beaten path. Why is it, then, that these off-the-beaten-path cases pop up more often than they should? Or for that matter, pop up at all? Aren't data warehousing folks a little smarter than that?
Of course they are. In fact, I don't recall encountering any experienced data warehousing folks who have had a bad experience - quite the opposite. However, as for the folks who have never built a data warehouse but have a lot of experience in "applications" - well in this zone it can get a little choppy.
The point is, across the fruited plain we have exceptions to every rule. My sincere hope is that your project is not inadvertently caught in the "crosshairs of fate".
Rat's Nest Number One
Upon arrival on site I knew something was wrong. People were squeezed into their cubes, boxes were stacked against walls in every room. The whole office just felt so crowded. And then they introduced me to "the machine". In this case, their production machine was the lowest-powered machine that Netezza had to offer, just short of a Skimmer. The admin at the desk barked at me for parking in the street and not in the garage underground. It was my first day, and nobody had said anything about parking. The difference in cost was exactly $1, and if you're like I am and travel a lot, this kind of difference is not worth discussing. Except for here. It's tongue-lashing time. All of these things added up to some significant red flags, pointing down a road lined with still more red flags.
Case in point, this is not a company doing things expediently or frugally. They are cheap. They will do things according to the lowest denominator of cost and skill, not because they have balanced priorities. They would rather save a few dollars on training or even a rent-an-architect, and allow the least-of-their-staff to painfully slog through the nuances of data warehousing on an immutable deadline. It's the immutable deadline I can't fathom. Here's why:
In a solution implementation, we have cost, duration and quality. Pick two. Whichever two you pick will shortchange the third. Every time it's tried. Well, these folks were shortchanging all three in the blind naivete that it was valid and workable. Without time or resources, quality is always the first, most expedient of the three to fall on its face. Doing it on-the-cheap? Well, what does this say of their readiness for prime-time? Data warehouses have an ongoing cost-of-ownership. It's not trivial. Those who want to play cheap should find another profession, one that does not cherish quality.
I was told by the client that their current back-end data processing environment used Netezza stored procedures. Another big red flag. Stored procs invite black-boxed code and we cannot capture data lineage through them. Netezza stored procs are ideal for the front-end. Never for the back-end. They are hand-crafted and rather ugly to maintain (this would be true on any platform).
On this particular platform, they had decided not to use monolithic stored procs (a proc with a lot of serialized operations in it) but modular ones. So modular, in fact, that each inbound data stream had its own dedicated "receiver" stored proc, followed by three more role-based stored procs plus another two - one to validate and one to push the data into the final target table. All 150 incoming record descriptions/filestreams had these 6 stored procedures assigned to them. With one catch - they were the same "roles" of stored proc, but all of them were different. That's right, to intake 150 tables we saw 6 stored procs each, for a grand total of 900 stored procedures, and this was for just one of several data sources!
Many of you OO aficionados see something screaming out at you, that this should have been one general-purpose loader with six phases of operation, serving all 150 streams. Adding another source and another 100 streams, no problem, they go through the same loader and phases. Need more phases of operation? No problem, just add them to the loader and everyone benefits. Forever. It's a beautiful thing.
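In sketch form (the phase names are illustrative, mirroring the six roles above), the general-purpose loader reduces to a single metadata-driven routine:

    # One loader serving every stream, instead of six hand-crafted
    # procedures per stream.
    PHASES = ["receive", "stage", "conform", "enrich", "validate", "publish"]

    def load_stream(stream_meta, run_phase):
        # stream_meta: metadata for one inbound stream (names, columns, rules).
        # run_phase(phase, meta): the single shared implementation of a phase.
        for phase in PHASES:
            run_phase(phase, stream_meta)

    # Adding source 151 means adding a metadata entry, not six more procedures.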
Of course, this means a deliberate instantiation of some reusable infrastructure. Many app-developer folks are not familiar with how to do this. After all, with 150 tables incoming, we could expect those definitions to remain pretty stable. But if the same stored procedures are facing the internal data model(s) (and they must), then we have worse than a cut-and-paste rat's nest, we have a hand-crafted rat's nest. If the data model must change, we may effectively invalidate most if not all of the stored procedures. Can you even imagine having to review - and re-review - 900 stored procedures so -- oh never mind.
It therefore did not surprise me to learn that they had "frozen" the data model so that the stored procedures could have higher durability. We know this isn't realistic either, because the business will start to drive more requirements into the solution and the model must change to accommodate it, even if it's just attribution of existing tables. How do we keep these things from impacting the existing code? We have no choice but to freeze the data model. But this isn't really a choice, it's more like an unnecessary evil. Their stored procedure implementation only guarantees one thing: their functional code base will be in flux and unstable for the duration of the solution's lifetime.
I made a valiant attempt to explain the rather problematic issues concerning their implementation. (Problematic here, is a polite and professional term for rat's nest without having to say so). I have to admit however, that "rat's nest" may have done a disservice to the rats. They also wanted me to "jot down" a list of "enhancements" that would make their solution better, stronger, faster - all that stuff. I could not think of a professional term for "burn it to the ground and start over".
Perhaps I could have told them the bullfrog story.
Rat's Nest Number Two
In keeping with our theme of stored procedures - recall - stored procedures in Netezza were originally conceived for supporting the front-end BI tools. Not back-end data processing. In fact, pushing the back-end data processing under higher programmatic control is something new - even to ETL tools. That Netezza supports it very well is a bonus. Actually, Netezza does it better than any of the other databases, because the ease of manufacturing an intermediate table, using it and tossing it, is amazingly simple and easy to manage. Other machines, not so much.
But when I say "ELT" or back-end processing, it's still a SQL statement. We have options, like hand-crafting the SQL in script. Been there, done that. Or hand-crafting a stored procedure. Not really interested in doing that again. And then we have generated-SQL from a template or metadata-driven framework.
ETL tools have pushdown, but it's still pretty weak. At least, too weak for power-users like moi. I have no doubt that they will step up, eventually.
In this example, we have the opposite problem from the first. Just as much of a rat's nest, it is a monolithic stored procedure rather than a gaggle of modular ones. The monolithic stored procedure often runs for an hour or more, executes hundreds of SQL statements along the way, and has a lot of detailed steering logic embedded amongst them. It is a veritable nightmare to code and debug, and even worse to troubleshoot. I hear that some developers have been invited to padded cells afterwards, but I think those are just exaggerations. It can't be that bad, can it?
Given a choice between only the two, I would choose the gaggle-of-modular over the monolithic. I mean, if you were implementing it and not me. I always have the choice to say no. I don't work for your company, after all. You may not have a good choice. Your uber-architects and their hired guns have told you it's stored-procedures-or-nothing. So it's time to pick a poison, I suppose. I'll take hemlock for $400, Alex.
As these stored-proc programmers stared back at me with hollow eyes, I thought I had entered some macabre Tim Burton flick and all we needed was some spooky music, fog-machines and strange howling in the distance to make it complete. They spoke in muted, muffled tones and their questions seemed to drift. Had they slept at all in, like, the last 48 hours? They all looked sooo tired. This is what a monolithic stored procedure does to your staff. Now watch it drain the lifeblood from your operations staff. It is the virtual/technical equivalent of leeches, and you thought we'd left those behind in the Middle Ages (for technology, that would be the 1990's).
Stored procedures don't have single-step capability. When we add another function to it, we have to test all of the functions at once, because it has to run end-to-end. We can creatively work around this in the beginning, but eventually we have to integrate it. When one test takes over an hour, or two, and the answer is buried in the mountain of carefully crafted NZPL-SQL code, at some point we have to wonder what we signed up for. (that would be, we signed up to do it wrong). Ouch.
Stored procedures cannot be parallelized (unlike their more modular counterparts) and as such they represent a glaringly missed opportunity. They are doomed to be serialized forever.
Now, our framework (that we consult with and use as a problem-solving platform for Netezza - nzDIF) handled 100 percent of all data lineage no matter how many intermediate tables, databases or machines are involved in the overall flows and handoff of work. You won't get this with any other product, nor with anything a stored procedure has to offer. This would be true of any stored procedure on any platform.
This is because on a transactional platform, procs are meant to handle multiple operations on singleton entities so data lineage simply is not an issue. On a Netezza platform, procs are meant to serve the BI platform, not the back end, so likewise data lineage is not much of an issue. Stored procs for the front end are largely summary/filters for pre-existing datasets. We want the lineage on those datasets, not the on-demand operations that consume them. "Could" we expect data lineage from stored procedures in Netezza? Why? The only reason would be to support back-end processing, and stored procedures are not for back-end processing. It's sort of a Catch-22.
Rule #10 in play here
And let's not forget Rule #10, shall we? Recently emblazoned in glowing letters on the catacomb walls of the Underground, Rule #10 is very simple: Never do bulk data processing in a general-purpose RDBMS engine running on a general-purpose platform.
Now, I just had to get Rule #10 into the forefront because this underscores the primary reason why stored procedures are bad for back-end data processing. If we have a rule in place against using SQL for bulk processing on a general-purpose platform/engine, then any experience we may have with bulk processing through stored procedures on such a platform is itself a violation and not a marketable skill in the Enzee Universe. More importantly, it institutionalizes the violation and makes it so much worse. We could "maybe" dig ourselves from a ditch if we're using hand-crafted out-in-the-open SQL (also not recommended) but when ensconced behind the fortress of stored procedures, we have to first storm the fortress before we can loot it. Easier said than done.
All that said, folks who have instantiated stored-procedure-based data processing on general purpose platforms have already been doing it the wrong way, so why bring those practices into the Netezza machine? Just sayin'
Rat's Nest Number Three
Ahh, you thought we were done and coming into the home stretch, eh? Well, we're almost there.
This particular rat's nest only appears in places where people have churned a lot of contractors, consultants and other aficionados and hired guns through the company's various revolving doors. And as the person who inherits it rightly recognizes it as a rat's nest, or a hairball, a thought rushes through their brain like a river of water: "Wow, and I deliberately signed up for this. What could I possibly have been thinking?"
Ahh, not to worry, this syndrome is rare, and shall pass. Breathing deeply will override the spooky breathing cadence of the Dark Lord of Expectations, and shall give you extraordinary confidence on how to resolve this problem.
This condition is entirely severable from the technology itself. Any shop that allows the contractors to establish their own standards without oversight is just signin' up for a world-of-hurt somewhere down the line. Fortunately for us, the Netezza machine is like a monster truck with gumbo-mudder tires. No matter how mired-in-clay it may be, we need only fire the engines and punch the accelerator to regain control and be underway in no time.
The first step, as in any 12-step program, is to recognize that a problem exists. Bandaging a hemorrhaging wound will not heal it. This will only forestall the inevitability of bleeding out. If we are to be proper stewards, bandaging has its benefits while we treat the larger wounds.
First and foremost, commit to some form of data management logistics. And this is not by purchasing a data backup tool. This is a commitment to a flow-based, insert-only architecture as the rule, with updates and deletes as the exceptions. After all, if we were using an ETL tool, we wouldn't be able to update or delete a flow of work. We can only integrate and filter the data while it's on its way elsewhere, but that elsewhere will always be an insert-only target - because it's a file set and not a database. Only when we reach the book-end of the database can we perform updates and deletes, and these are largely to support things like constraints and slowly-changing dimensions. We just need to avoid invoking an insert/delete/update protocol for all tables at all times. Center on a theme and accommodate the exceptions. We must have rules, and this is one of them.
Commit to some form of rules-driven architecture. That is, when we encounter a new condition or potential fork in the logic, consider shaping it with a rule (one that we can switch on/off or modify from afar) rather than hard-wired SQL or hard-coded solutions. Is this easy to do? Of course not. Nothing is easy about data warehousing or large-scale flow mechanics, silly rabbit.
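As a small illustration (the etl_rules table and rule names are hypothetical), a rule can live in a table and steer the fork at run time, switched on/off from afar without touching code:

    def load_rules(cursor):
        # Rules live in a table so they can be modified from afar.
        cursor.execute("SELECT rule_name, enabled FROM etl_rules")
        return {name: bool(flag) for name, flag in cursor.fetchall()}

    def apply_late_arrival_rule(cursor, rules):
        # The fork in the logic is shaped by the rule, not hard-wired SQL.
        if rules.get("fold_late_arrivals"):
            cursor.execute(
                "INSERT INTO stage_work SELECT * FROM stage_late_arrivals")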
Netezza has simplified the harder, tedious and repeatable parts so that we can actually address the issues we never had time for before. The "next level" was never in view, or even on the radar because we were always immersed in the operational weeds of the implementation. With a Netezza machine, lots of that is behind us, but before us stands the new challenge. It goes something like this:
If I were to give you a 400-slice toaster, your problem is no longer toasting bread, but bread management. Keeping the toaster busy has now become a daunting problem of bread logistics, not machine capacity. The problem domain has shifted into a zone that lots of folks don't have any experience with. Time to step up.
I recently did a Virtual Enzee presentation and listed the Top Ten requirements for scalable bulk data processing inside a Netezza machine.
I'll come back periodically and elaborate on them.
1. Platforms easily scale for increasing stress
We have a Netezza machine, so what could go wrong? I was asked a desperate question by an Enzee as to how to get more power out of their machine. After nearly two days of struggling with them I finally asked how big their machine was. It was a TwinFin-3. The answer I gave them they clearly did not like, and they even sought solace on the shoulder of another, who told them the same thing: get a bigger box. The TwinFin-3 is a dev box, not a production box.
Stress comes in many forms. Constantly changing requirements. The need for functional and physical agility. As these things increase, we need a platform that will work with us, not against us.
2. Human intervention eliminated wherever possible (no eyeball-based actions)
This means ALL aspects, not just operational ones. Everything from table maintenance to application development. AUTOMATE!
It is humorous to hear testers offer up their methods, with naive blurbs like "open the application and examine the contents". No, with billions of rows there is no such thing. We must use statistical checking that operates on sets, such as summaries, counts-of, etc. No longer can we "eyeball" the data.
Likewise with runtime processes. Define a table with 200 columns and try to put an ELT query against it: 200 columns in the insert phrase, 200 entries in the select phrase, and to maintain it we have to keep them in sync with "eyeballs". No, this doesn't scale.
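The scalable alternative is to generate the column lists from the catalog. A sketch, using Netezza's _v_relation_column catalog view (verify the exact view and column names on your release; this assumes pass-through columns with matching names):

    def build_insert_select(cursor, src, tgt):
        # Generate the N-column insert/select from the catalog instead of
        # maintaining 200 column names in sync by eyeball.
        cursor.execute(
            "SELECT attname FROM _v_relation_column "
            "WHERE name = '%s' ORDER BY attnum" % tgt.upper())
        cols = ", ".join(row[0] for row in cursor.fetchall())
        return "INSERT INTO %s (%s) SELECT %s FROM %s" % (tgt, cols, cols, src)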
3. Architecture-centric platforms express applications with patterns
Oddly, application developers, like those who develop using stored procedures, whip out a bunch of application-centric "code" and when the smoke clears, they see repeatable patterns all over it. Unfortunately, they can't take the patterns anywhere because they are hard-wired.
The more architectural approach is to harness the patterns as capabilities and allow our applications to express from them. The application is then an expression of the capabilities, not the center of gravity.
4. Deliberately simple to leverage and operate
Large-scale systems can have mind-numbing characteristics for the uninitiated. It is incumbent upon us to deliberately simplify the interface points to them. Simple utilities, fewer keystrokes to achieve mundane goals, automation for rote tasks.
5. Built for administrative recovery, not reactionary recovery
This can be as simple as: when data arrives and has errors, we don't come to a full stop. We cordon off the error records into an administrative/logical status and report them for later remediation. In systems of scale, we cannot halt the processing of tens of millions/billions of records just because a few stragglers are misbehaving. The time it will take to process the data is the problem. If we are 20 minutes away from the process being complete, then we are always 20 minutes away if we have fully stopped the flow for the sake of a few records. If we allow the process to proceed with error-capture, we will close the 20 minutes and then the admins have more breathing room to fix the problem without the scrutiny or pressure of the clock.
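In sketch form (the tables and the error rule are illustrative), the stragglers are cordoned off with an administrative status while the bulk flow proceeds:

    def split_good_and_bad(cursor):
        # Cordon off error records for later remediation...
        cursor.execute("""
            INSERT INTO stage_errors (order_id, err_status)
            SELECT order_id, 'BAD_AMOUNT'
            FROM   stage_orders_raw WHERE amount < 0
        """)
        # ...and keep the bulk of the flow moving without a full stop.
        cursor.execute("""
            INSERT INTO stage_orders_clean
            SELECT * FROM stage_orders_raw WHERE amount >= 0
        """)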
6. Data- and metadata-driven
The environment can no longer be driven by application code. It has to be driven by an architectural harness that responds and adapts to data and metadata. This is a non-trivial endeavor, of course, but entirely possible to achieve.
What does this look like? The data model is arguably the most volatile component of the solution. Changes in it can destabilize a solution. We need ways and utilities to buffer ourselves from the impact of change all the while enabling the change. It won't do to tell the users that the data model is frozen for 6 months because we fear impact to our tightly-woven application code (e.g. stored procs).
7. Blended/hybrid approaches quickly adapt and scale
One doesn't have to make an exclusive choice between ETL and ELT. People really want to leverage the power inside the machine but feel constrained that doing so may obviate the ETL tool. Not so - both of these technologies have a major role to play and we should balance them for the best-of-breed solution.
8. Template-driven applications: SQL is an artifact, not the center-of-gravity
In the Virtual Enzee I offered several examples of templates for SQL transforms (insert-into-select-from), views (to avoid nesting) and stored procedures (to build from a template rather than editing them in a SQL tool).
Why do this? The developer puts application logic into the template. At run time, or installation time in the case of the SP or view, we formulate the product from the template. This allows us to automatically include non-optional aspects like operational controls, inline status reporting and other elements that we don't want the developer to worry about, much less hand-craft on their own (see the sketch at the end of this item).
Need another bit of operational control? Add it to the template factory and don't worry about the application logic.
More importantly, we can generate a template from the catalog and by definition it is tied to the catalog. It is therefore easy to compare the already-deployed templates to changes in the data model. Since 90 percent of all new columns are invariably pass-through columns, running an impact analysis like this captures over 90 percent of the issues in one shot.
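A bare-bones sketch of such a template factory (the control columns and all names are illustrative): the developer supplies only the application logic, and the factory splices in the operational elements:

    # Non-optional operational elements (batch_id, load_ts) are part of the
    # template, not something the developer hand-crafts per transform.
    TEMPLATE = (
        "INSERT INTO {target} ({columns}, batch_id, load_ts)\n"
        "SELECT {expressions},\n"
        "       {batch_id},            -- operational control\n"
        "       CURRENT_TIMESTAMP      -- inline status/lineage\n"
        "FROM   {source} {where}"
    )

    def render_transform(target, source, columns, expressions, batch_id, where=""):
        return TEMPLATE.format(target=target, source=source, columns=columns,
                               expressions=expressions, batch_id=batch_id,
                               where=where)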
9. Inefficiencies are our number one enemy
One of our clients had a TwinFin 48 they were planning to use for their development phase and then cutover internally to production. I asked them to dial back the developers so that it had the effective power of a TwinFin 12. They were a bit stunned at my request until I noted: The TwinFin 12 has a lot of power for development, but a TwinFin 48 will hide bad data models and sloppy code. Lots of power can make any lousy code/data model look spectacular.
In many cases of a Netezza machine under stress, upon review we find that the inefficient practices have been going on for years, some since the box arrived. But the machine was so powerful it masked the inefficiency, all while allowing the box to eat itself from the inside out.
Preserve capacity at the processing level, not by guarding the data storage level. Do not be afraid to spin off replica data structures (even large ones) just for a different distribution, if it means that the machine can close its queries faster.
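For example (the table names are illustrative; DISTRIBUTE ON is the Netezza CREATE TABLE AS clause for choosing a distribution key), a replica on a different distribution might be spun off like this:

    # Spin off a replica of a large table on a different distribution key,
    # purely so co-located joins close faster; spend storage to buy back
    # processing capacity.
    REPLICA_SQL = """
    CREATE TABLE orders_by_customer AS
    SELECT * FROM orders
    DISTRIBUTE ON (customer_id)
    """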
10. Operational integrity drives functional integrity
We understand this as a matter of quality control. Hamburgers from a national chain should taste the same no matter where we buy them. This is not accidental. The end user data is only as good as the processes that are delivering it.
If we make it so the operators have a difficult time handling it, or the admins don't understand it, or the troubleshooters can't get things done, they will start to grouse about the quality of their existence.
On the flip side, I know folks that we radically simplified things for, and when we showed them the various utilities they would need to keep things in order, they balked. "Do we have to know all this stuff? Why is there so much stuff to know?" And yet, we had reduced a thousand things down to one, and they could not grasp how much more complex it would be without our having simplified it.
We know that Netezza embraces simplicity. We just have to be mindful to maintain this spirit when we build things around it.
At the functional/capability level, we need to drive operational integrity into the data itself, outfitting the tables and rows with additional columns for the sole purpose of operational control. Otherwise the functional model is pretty much out-in-the-open and we won't have a way to manage the tables in a consistent, harnessed, repeatable form.