One of the most significant questions to answer on the Netezza platform is that of "distribution" - what does it mean, how does it apply, and so on. If you are a Netezza aficionado, be warned that the following commentary serves as a bit of a primer for the uninitiated. However, feel free to make comments or suggestions to improve it, or pass it on if you like.
In the prior entry I offered up a simple graphic (right) showing how the Netezza architecture is laid out internally.
In this depiction, each CPU is harnessing its own disk drive. In the original architecture, this combination was called a SPU, because the CPU was coupled with a disk drive and an FPGA (field-programmable gate array) that holds the architectural firmware "secret sauce" of the Netezza platform. The FPGA is still present and accounted for in the new architecture as the additional blade(s) on the blade server. Not to over-complicate things, let's look at the logical and physical layout of the data itself and this will become a bit clearer.
When we define a table on the Netezza host, it logically exists in the catalog once. However, with the depicted CPU/disk combinations (right) we can see 16 of these available to store the information physically. If the table is defined with a distribution of "random", then the data will be evenly divided among all 16 of them. Let's say we have a table arriving that has 16 million records. Once loaded, each of these SPUs would be in control of 1 million records. So the table exists logically once, and physically multiple times. For a plain straight query on this configuration, any question we ask of the table will initiate the same query on all SPUs simultaneously. They will work on their portion of the data and return an answer. This Single-Instruction-Multiple-Data model is a bread-and-butter power strategy of Massively Parallel computing. It is powerful because at any given moment, the answer we want is only as far away as the slowest SPU. That means the query takes only as long as it takes one SPU to scan its 1 million records. All of the SPUs move at this speed, in parallel, so they finish at the same time.
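As a rough sketch of what this looks like in DDL (the table and column names here are hypothetical, and exact syntax can vary by Netezza release), a table with a random distribution simply spreads its rows evenly across the dataslices:

```sql
-- Hypothetical transaction table spread evenly across all SPUs/dataslices.
-- With 16 million rows and 16 SPUs, each SPU owns roughly 1 million rows.
CREATE TABLE sales_txn (
    txn_id    BIGINT,
    store_id  INTEGER,
    txn_date  DATE,
    amount    NUMERIC(12,2)
)
DISTRIBUTE ON RANDOM;
```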
What if we put another table on the machine that contains, say, 32 million rows? (trying to keep even numbers here for clarity). The machine will load this table such that each SPU contains 2 million of its rows. If no other data is on the machine, we effectively have 3 million records per SPU loaded, but the data itself may be completely unrelated. Notice how we would not divide the data another way, like putting the 16 million records on 6 of the SPUs and the other 32 million records on the remaining 10 SPUs. No, in a massively parallel context, we want all the SPUs working together for every query. The more SPUs are working, the faster the query will complete.
Now some understand the Massively Parallel model to be this: multiple servers, each containing some domain of the information, and when a query is sent out it will find its way to the appropriate server and cause it to search its storage for the answer. This is a more stovepiped model than the Netezza architecture and will ultimately have logistics and hardware scalability as Achilles' heels, not strengths, for parallel processing. Many sites are successful with such models. But they are very application-centric and not open for reuse by other, unrelated applications. I noted above that the information we just loaded on the Netezza machine can be in unrelated tables, databases and application suites, because the Netezza machine is general-purpose when it comes to data processing. It is purpose-built when we talk about scale.
But let's take the 16-million-record vs. 32-million-record model above and assume that one of the tables is a transactional table (the 32 million) and one is a dimensional table (the 16 million). The dimensional table contains a key called "store_id" that is also represented on the transactional table such that we can join them together in various contexts. Will the Netezza machine do this, and how will it? After all, the data for the tables is spread out across 16 SPUs.
Well, we have the option of joining these "as is" or we can apply an even more interesting approach, that of using a distribution key. Here's where we need to exercise some analysis, because it appears as though the STORE_ID is what we want for a distribution, but this might skew the data. What does this mean? When we assign a distribution, the Netezza machine will then hash the distribution key into one of 16 values. Every time that particular key appears, it will always be assigned the same value and land on the same SPU. So now we can redistribute both of these random tables on STORE_ID and be certain that for say, SPU #5, all of the same store_id's for the dimension table and for the transaction table are physically co-located on the same disk. You probably see where this is going now.
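A sketch of that redistribution in DDL, reusing the hypothetical names from above (hedged the same way - syntax details can vary by release): both tables hash STORE_ID, so any given store's rows land on the same dataslice in both tables.

```sql
-- Both tables hash STORE_ID to a dataslice, so the dimension rows and the
-- transaction rows for a given store are physically co-located on one disk.
CREATE TABLE store_dim (
    store_id    INTEGER,
    store_name  VARCHAR(100),
    region      VARCHAR(50)
)
DISTRIBUTE ON (store_id);

CREATE TABLE sales_txn_dist (
    txn_id    BIGINT,
    store_id  INTEGER,
    txn_date  DATE,
    amount    NUMERIC(12,2)
)
DISTRIBUTE ON (store_id);
```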
Ahh, but what if we choose store_id and it turns out that a large portion of the IDs hash to a common SPU? What if we see, rather than 1 million records per SPU, around 500k per SPU with one SPU radically overloaded with the remainder? This will make the queries run slowly, because while all the SPUs but one will finish fast - in half the time they took before - they will all be waiting on the one overworked SPU holding the extra data. This is called data skew and is detrimental to performance.
However, had we chosen a date or timestamp field, our skew may have been even worse. We might see that the data is physically distributed on the SPUs in a very even form. But when we go to query the data, we will likely use a date as part of the query, meaning that only a few SPUs, perhaps even one SPU, will actually participate in the answer while all the others sit idle. This is called process skew and is also detrimental to performance.
Those are just some things to watch out for. If we stay in the model of using business keys for distribution and using dates for zone maps (which I will save for another entry) we will usually have a great time with the data and few worries at all.
Back to the configuration on store_id's. If we now query these two tables using store_id in the query itself, Netezza will co-locate the join on the SPU and will only allow records to emit from this join if there is a match. This means that the majority of the work will be happening in the SPU itself before it ever attempts to do anything else with it. What if we additionally clamped the query by using a date-field on the transaction table? The transaction table would then be filtered on this column, further reducing the join load on the machine and returning the data even faster.
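A minimal sketch of such a query against the hypothetical tables above: because both sides are distributed on store_id and the join uses it, each SPU joins only its own slice, and the date predicate trims the fact-table scan even further.

```sql
-- Co-located join: each SPU joins its local slice of both tables, and the
-- date predicate further limits how much of the transaction table is scanned.
SELECT d.region,
       SUM(t.amount) AS total_sales
FROM   sales_txn_dist t
JOIN   store_dim      d ON d.store_id = t.store_id
WHERE  t.txn_date BETWEEN '2012-01-01' AND '2012-01-31'
GROUP BY d.region;
```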
So here is where we get lifting effects where other machines experience drag. For an SMP-based machine, adding joins can make the query run slower. On a Netezza machine, joins can serve as filter-effects, limiting the data returned by the query and increasing the machine's immediate information on where-not-to-look. Likewise the date-filter is another way we tell the machine where-not-to-look. As I have noted, the more clues we can give the machine as to where-not-to-look, the faster the query will behave.
I have personally witnessed cases where adding a join to a large table actually decreases the overall query time. Adding another one further decreases it. Adding another one even further and so on - five-joins later we were getting returns in less than five seconds where the original query was taking minutes. This filter-table and filter-join effect occurs as a free artifact of the Netezza architecture.
Distribution is simply a way to lay the data out on the disks so that we get as many SPUs working for us as possible on every single query. The more SPUs are working, the more hardware is being applied to the problem. The more hardware, the more the physics is working for us. Performance is in the physics, always and forever. Performance is never in the software.
While the above is powerful for read-query lift (co-located reading) there is another power-tool called the co-located write. If we want to perform an insert-select operation, or a create-table-as-select, we need to be mindful that certain rules apply when we create these tables. For example, if we want to perform a co-located read as above and drop the result into an intermediate table, it is ideal for the intermediate table to be distributed on the same key as the tables we are reading from. Why is this? Because the Netezza machine will simply read the data from one portion of the disk and ultimately land it on another portion of the same disk. Once again it never leaves the hardware in order to affect the answer. If we were to distribute the target table on another key, the data would have to leave the SPU as it finds a new SPU home to align with its distribution key. If we actually need to redistribute, this is fine. But if we don't and this is an arbitrary configuration, we can buy back a lot of power just by co-locating the reads and writes for maximum hardware involvement.
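A hedged sketch of a co-located write, again using the hypothetical tables from above: the intermediate table keeps the same distribution key as its source, so each SPU reads from and writes to its own disk without shipping rows across the fabric.

```sql
-- Co-located read and write: the result lands on the same dataslice it was
-- read from, because the target keeps the same distribution key (store_id).
CREATE TABLE store_month_sales AS
SELECT store_id,
       DATE_TRUNC('month', txn_date) AS sales_month,
       SUM(amount)                   AS total_amount
FROM   sales_txn_dist
GROUP BY store_id, DATE_TRUNC('month', txn_date)
DISTRIBUTE ON (store_id);
```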
So that's distribution at a 50,000 foot level. We will look at more details later, or hop over to the Netezza Community to my blog there where some of these details have already been vetted.
It is important to keep in mind that while distribution and co-location are keys to high(er) performance on the machine, the true hero here is the SPU architecture and how it applies to the problem at hand. We have seen cases where applying a distribution alone has provided dramatic effects, such as a 30x or 40x boost, because of the nature of the data. This is not typical, however, since co-located reads will usually provide only a percentage boost rather than an X-level boost. This is why we often suggest that people clear their heads of jumping directly into a distribution discussion and instead put together a workable data model with a random distribution. Once loaded, profile the various key candidates to get a feel for which ones work best and which do not. We have seen some users struggle with the data only because they prematurely selected a distribution key that - unbeknownst to them - had very high skew and made the queries run too slowly. This protracted their workstreams and made all kinds of things take longer than they should have.
So at inception, go simple, and then work your way up to the ideal.
On multiple distribution keys:
Many questions arise on how many distribution keys to use. Keep in mind that this is as much a physical choice as a functional one. If the chosen key provides good physical distribution, then there is no reason to use more keys. More granularity in a good distribution has no value. However, a distribution key MUST be used in a join in order to achieve co-location. So if we use multiple keys, we are committing to using all of them in all joins, and this is rarely the case. I would strongly suggest that you center on a single distribution key and only move to more than one if you have high physical skew (again, a physical, not functional, reason). The distribution is not an index - it is a hash. In multi-key distributions, the join does not first look at one column and then the next - it looks at all of the columns at once because they are hashed together for the distribution. Joined-at-the-hip if you will.
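A sketch of the commitment this implies (hypothetical names, hedged as before): with two distribution columns, the hash is computed over the pair, so a join co-locates only when it supplies both.

```sql
-- The distribution hash is computed over (store_id, product_id) together,
-- so co-location requires BOTH columns in every join against this table.
CREATE TABLE sales_by_store_product (
    store_id    INTEGER,
    product_id  INTEGER,
    txn_date    DATE,
    amount      NUMERIC(12,2)
)
DISTRIBUTE ON (store_id, product_id);

-- Co-locates:     ... ON a.store_id = b.store_id AND a.product_id = b.product_id
-- Does NOT:       ... ON a.store_id = b.store_id   (forces redistribution/broadcast)
```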
On leaving out a distribution key:
One case had three tables joining on their main keys to get a functional answer. They were all distributed on the same key, which was a higher level of data than the keys used in the join. The developer apparently thought that because the distribution keys are declared, the machine would magically use them behind the scenes with no additional effort from us. This is not the case. If the distribution key(s) (all of them) are not mentioned in the join, the machine will not attempt co-location. In this particular case, using the higher-level key would have extended the join somewhat but would not have changed the functional answer. Simply adding this column to the join reduced that query's duration by 90 percent. So even if a particular distribution key does not "directly" participate in the functional answer, it must directly participate in the join so as to achieve co-location. And if this does not change the functional outcome, we get the performance and the right answer.
How it affects concurrency:
Many times people will ask: why is it that the query runs fast when it's running alone, but when it's running side-by-side with another instance of itself they both slow to a crawl? This is largely due to the lack of co-location in the query. When the query cannot co-locate, it must redistribute the data across the inter-blade network fabric so that all the CPUs are privy to all the data. This quickly saturates the fabric, so that when another query launches, they start fighting over fabric bandwidth rather than CPU bandwidth. In fact some have noted that the box appears to be asleep because the CPUs and drives aren't doing anything during this high-crunch cycle. That's right, and it's a telltale sign that your machine's resources are bottlenecked at the fabric and not at the CPU or I/O levels. By co-locating the joins, we keep the data off the fabric and down in the CPUs where it belongs, and the query will close faster and can easily co-exist with other high-intensity queries.
I hear a lot of feedback on the use of CDC to put data into a PureData for Analytics (Powered by Netezza Technology) device. In other machines (traditional database engines), they say, the data flies into the box and the CDC is on-and-off the machine in seconds. But in my Netezza machine, the CDC seems to grind. I have it running every fifteen minutes, and the prior CDC instance is still running when the next instance kicks off. This is totally unacceptable. Maybe we shouldn't be using CDC for this?
Or maybe they just don't have it configured correctly?
There are two major PDA principles in play here. One is strategic and the other is tactical. Many people can look at the tactical principle and accept it because it is testable, repeatable and measurable. The strategic one however, they will hold their judgment on because it does not fit their paradigm of what a database should do. I'll save the strategic one for last, because its implications are further reaching.
The CDC operation will accumulate records into a cache and then apply these at the designated time interval. This micro-batch scenario fits Netezza databases well. The secondary part of this is that the actual operation will include a delete/insert combination to cover all deletes, updates and inserts. So when the operation is complete, the contents of the Netezza table will be identical to the contents of the source table at that point in time (even though we expect some latency, that's okay).
The critical piece is this: An update operation on a Netezza table is under-the-covers a full-record-delete and full-record-insert. It does not update the record in place. A delete operation is just a subset of this combination. This is why the CDC's delete/insert combination is able to perfectly handle all deletes, updates and inserts. The missing understanding however, is the distribution key.
If we have a body of records that we need to apply as a delete operation against another, larger table, and the larger table is distributed on RANDOM, think about what the delete operation must do in a mechanical sense. It must take every record in the body of incoming records and ship it to every SPU so that the data is visible to all dataslices. It must do this because the data is random and it cannot know where to find a given record to apply the operation - it could literally be anywhere and, if the record is not unique, could exist on every dataslice. It's random after all. This causes a delete operation (and by corollary an update operation) to grind as it attempts to locate its targets.
Contrast this to a table that is distributed on a key, and we actually use the key in the delete operation (such as a primary key). The incoming body of records is divided by key, and only that key's worth of data is shipped to the dataslice sharing that key - the operation is lightning-fast. This is why we say - never, ever perform a delete or update on a random table, or on a table that doesn't share the distribution key of the data we intend to affect. Deletes and Updates must be configured to co-locate, or they will grind.
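A minimal sketch of the co-located case (hypothetical names; the delete is written in plain SQL and could be expressed other ways on Netezza): the big table and the incoming change set share the same distribution key, so each dataslice resolves its deletes and inserts locally.

```sql
-- Target and change set are both distributed on the primary key, so each
-- dataslice deletes and re-inserts only against its own local rows.
CREATE TABLE orders (
    order_id    BIGINT,
    status      VARCHAR(20),
    updated_at  TIMESTAMP
)
DISTRIBUTE ON (order_id);

CREATE TABLE cdc_order_changes (
    order_id    BIGINT,
    status      VARCHAR(20),
    updated_at  TIMESTAMP
)
DISTRIBUTE ON (order_id);

-- Delete/insert pair, mirroring what a CDC apply step does under the covers.
DELETE FROM orders
WHERE EXISTS (SELECT 1
              FROM   cdc_order_changes c
              WHERE  c.order_id = orders.order_id);

INSERT INTO orders
SELECT order_id, status, updated_at
FROM   cdc_order_changes;
```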
Now back to the CDC operation. Whenever I hear that the CDC operation is grinding, my first question is: Do you have the target Netezza tables distributed on the same primary keys of the source table? The answer is invariably no (we will discover why in a moment). So then I ask them, what would it take to get the tables distributed on the primary key? How much effort would it be? And they invariably answer, well, not much, but it would break our solution.
Why is that?
Because they are reporting from the same tables that the CDC is affecting. And when reporting from these tables, the distribution key has to face the way the reporting users will use the tables, not the way CDC is using the tables. This conversation often closes with a "thank you very much" because now they understand the problem and see it as a shortcoming of Netezza or CDC, but not a shortcoming of how they have implemented the solution.
Which brings us to the strategic principle: There is no such thing as a general purpose database in Netezza.
What are we witnessing here? The CDC is writing to tables that should be configured and optimized for its use. They are not so, because the reporting users want them configured and optimized for their own use. They are using the same database for two purposes because they are steeped in the "normalization" protocol prevalent in general-purpose systems - that the databases should be multi-use or general-purpose.
But is this really true in the traditional databases? If we were using Oracle, DB2, SQLServer - to get better performance out of the data model wouldn't we reconfigure it into a star schema and aggressively index the most active tables? This moves away from the transactional flavor of the original tables to a strongly purpose-built flavor.
Why is it that we think this model is to be ignored when moving to Netezza? Oddly, Netezza is a Data Warehouse Appliance - it was designed to circumscribe and simplify the most prevalent practices of data warehousing, not the least of which is the principle that there is no such thing as a general-purpose database. In a traditional engine we would never attempt to use transactional tables for reporting - they are too slow for set-based operations and deep analytics. Yet over here in the Netezza machine, somehow this principle is either set aside or ignored - or perhaps the solution implementors are unaware of it - and so these seemingly inexplicable grinding mysteries arise and people scratch their heads and wonder what's wrong with the machine.
And again, they never wonder what's wrong with their solution.
If we take a step back, what we will see are reports that leverage the CDC-based tables, but we will also see a common theme, which I will cover in a part 2 of this article. The theme is one of "re-integration" versus "pre-integration". That is, integration-on-demand rather than data that is pre-configured and pre-formulated into consumption-ready formats. What is a symptom of this? How about a proliferation of views that snap together five or more tables with a prevalence of left-outer-joins? Or a prevalence of nested views (five, ten, fifteen levels deep) that attempt to reconfigure data on demand (rather than pre-configure data for an integrate-once-consume-many approach)? Think also about the type of solution that performs real-time fetches from source systems, integrates the data on-the-fly and presents it to the user - this is another type of integration-on-demand that can radically debilitate source systems as they are hit-and-re-hit for the very same set-based extracts dozens or hundreds of times in a day.
I'll take a deep-dive on integration-on-demand in the next installment, but for now think about what our CDC-based solution has enticed us to do: we have now reconfigured the tables with a new distribution key that helps the reports run faster, but because this deviates from the primary-key design of the source tables (which CDC operates against), the CDC operation will grind. And when it grinds, it will consume precious resources like the inter-SPU network fabric. The grinding isn't just a duration issue - it's inappropriately using resources that would otherwise be available to the reporting users.
What's missing here is a simple step after the CDC completes. It's a really simple step. It will cause the average "purist" data modeler and DBA to retch their lunch when they hear of it. It will cause the admins of "traditional" engines to look askance at the Netezza machine and wonder what they could have been thinking when they purchased it. But when the ultimate users of the system see the subsecond response of the reports and the way their queries return in lightning fashion compared to the tens of minutes, or even hours, of the prior solution, these same DBAs, admins and modelers will want to embrace the mystery.
The mystery here is "scale". When dealing with tables that have tens of billions, or hundreds of billions, of records, the standard purist protocols that rigorously and faithfully protect capacity in the traditional engines actually sacrifice capacity and performance in the Netezza engine. It's not that we want to set aside those protocols. We just want to adapt them for purposes of scale.
The "next step" we have to take is to formulate data structures that align with how the reporting users intend to query the data, then use the CDC data to fill them. It's not that the CDC product can do this for you. It gets the data to the box. This "next step" in the process is simply forwarding the CDC data to these newly formulated tables. When this happens, the pre-integration and pre-calculation opportunities are upon us, and we can use them to reduce the total workload of the on-demand query by including the pre-integration and pre-calculation into the new target tables. These tables are then consumption-ready, have far fewer joins (and the need for left-outer joins often fall by the way-side). After all, why perform the left-outer operations on-demand if we can perform them once, use Netezza's massively parallel power for it, and then when the users ask a question, the data they want is pre-formulated rather than re-formulated on demand.
This necessarily means we need to regard our databases in terms of "roles". Each role has a purpose - we deliberately embrace the notion of purpose-built schemas, and deliver our solution from the enslavement of a general-purpose model. The CDC-facing tables will support CDC - we won't report from them. The reporting tables face the user - we won't CDC to them.
Keep in mind that this problem (of CDC to Netezza) can rear its head with other approaches also - such as streaming data with a replicator or ETL tool to simulate the same effect as CDC. Either way, the data arrives in the Netezza machine looking a lot like the source structures and isn't consumption-ready.
I worked with a group some years ago with a CDC-like solution, and they took the "next step" to heart, formulated a set of target tables that were separate from staging and then used an ETL tool to fill them. The protocol was simply this: the ETL tool sources the data and fills the staging tables, then the ETL sources the staging tables and fills the target tables. This provided the necessary separation, so it functionally fulfilled the mission. The problem with the solution, however, was that for the transformation leg, the ETL tool was yanking the data from the machine into the ETL tool's domain, reformulating it and then pushing it back onto the machine. The data actually met itself coming-and-going over the network. A fully parallelized table was being serialized, crunched and then re-parallelized into the machine. As the data grew, this operation became slower and slower. That's what we would expect, right? The bottleneck now is the ETL tool. The proper way to do this, if an ETL tool must be involved, is to leverage it to send SQL statements to the machine and keep the data inside the box. The Netezza architecture can process data internally far faster than any ETL tool could ever hope to - so why take it out and incur the additional penalty of network transportation?
The ETL tool aficionados will balk at such a suggestion because it is such a strong deviation from their paradigm. But this is why Netezza is a change-agent. It requires things that traditional engines do not because it solves problems in scales that traditional engines cannot. In fact, performing such transformations inside a traditional engine would be a very bad idea. The ETL tools are all configured and optimized to handle transformation one way - outside the box. This is because it is a general-purpose tool and works well with general-purpose engines. There is a theme here: the phrase "general purpose" has limited viability inside the Netezza machine. If we embrace this reality with a full head of steam, the Netezza technology can provide all of our users with a breathtaking experience and we will have a scalable and extensible back-end solution.
How many logical modelers does it take to screw in a lightbulb? None, it's a physical problem.
I watched in dismay as the DBAs, developers and query analysts threw queries at the machine like darts. They would formulate the query one way, tune it, reformulate it. Some forms were slightly faster, but they needed orders-of-magnitude more power. Nothing seemed to get it off the ground.
Tufts of hair flew over the cubicle walls as seasoned technologists would first yelp softly when their stab-at-improvement didn't work, then yelp loudly as they pulled out more hair. Yes, they were pulling-their-hair-out trying to solve the problem the wrong way. Of course, some of us went bald the old-fashioned way: general stress.
I took the original query as they had first written it, added two minor suggestions that were irrelevant to the functional answer to the query, and the operation boosted into the stratosphere.
What they saw however, was how I manipulated the query logic, not how I manipulated the physics. I can ask for a show of hands in any room of technologists and get mixed answers on the question "where is the seat of power?". Some will say software, others hardware, others a mix of the two, while those who still adhere to the musings of Theodoric of York, Medieval Data Scientist, would say that there's a small toad or dwarf living in the heart of the machine.
To make it even more abstract, users sit in a chair in physical reality. They live by physical clocks in that same reality, and "speed of thought" analytics is only enabled when we can leverage physics to the point of creating a time-dilation illusion: where only seconds have passed in the analyst's world while many hours have passed in reality. After all, when an analyst is immersed in the flow of a true speed-of-thought experience, they will hit the submit-key as fast as their fingers can touch the keyboard, synthesize answers, rinse and repeat. And do this for many hours seemingly without blinking. If the machine has a hiccup of some kind, or is slow to return, the illusion-bubble is popped and they re-enter the second-per-second reality that the rest of us live in. Perhaps their hair is a little grayer, their eyes a little more dilated, but they will swear that they have unmasked the Will-O-The-Wisp and are about to announce the Next Big Breakthrough. Anticipation is high.
But for those who don't have this speed-of-thought experience, chained to a stodgy commodity-technology, they will never know the true meaning of "wheels-up" in the analytics sense. They will never achieve this time-dilated immersion experience. The clock on the wall marks true real-time for them, and it is maddening.
Notice the allusions to physics rather than logic. We don't doubt that the analyst has the logic down-cold. But logic will not dilate time. Only physics can do that. Emmett Brown mastered it with a flux capacitor. We don't need to actually jump time, but a little dilation never hurt anybody.
The chief factor in query-turnaround within the Netezza machine is the way in which the logical structures have been physically implemented. We can have the same logical structure, same physical content, with wildly different physical implementations. The "distribute on" and "organize on" govern this factor through co-location at two levels: Co-location of multiple tables on a common dataslice and co-location (on a given dataslice) of commonly-keyed records on as few disk pages as possible (zone maps). The table can contain the same logical and physical content, but its implementation on Netezza physics can be radically different based on these two factors.
Take for example the case of a large-scale fund manager with thousands of instruments in his portfolio. As his business grows, he crosses the one-million mark, then two million. His analytics engine creates two million results for each analytics run, with dozens of analytics runs every day, adding up quickly to billions of records in constant churn. His tables are distributed on his Instrument_ID, because in his world, everything is Instrument-centric. All of the operations for loading, assimilating and integrating data center upon the Instrument and nothing else. They are organized on portfolio_date, because the portfolio's activity governs his operations.
His business-side counterpart on the other hand, sells products based on the portfolio. The products can serve to increase the portfolio, but many of the products are analytics-results from the portfolio itself. This is a product-centric view of the data. Everything about it requires the fact-table and supporting tables to be distributed on the Product_id plus being organized on the product-centric transaction_date. This aligns the logical content of the tables to the physics of the machine. It also aligns the contents of the tables with the intended query path of the user-base. One of them will enter-and-navigate via the Instrument, where the other will use the Product.
We can predict the product manager's conversation with the DBAs:
PM: "I need a version of the primary fact table in my reporting database."
DB: "You mean a different version of the fact? Like different columns?"
PM: "No, the same columns, same content."
DB: "Then use the one we have. We're not copying a hundred billion rows just so you can keep your own version."
PM: "Well, it's logically the same but the physical implementation is completely different."
DB: "Oh really? You mean that instead of doing an insert-into-select-from, we'll move the data over by carrier pigeon?"
PM: (staring) "No, the new table has a different distribution and organize, so it's a completely different physical table than the original"
DB: "You're just splitting semantic hairs with me. Data is data."
I watched this conversation from a healthy distance for nearly fourteen months before the DBA acquiesced and installed the necessary table. Prior to this, the PM had been required to manufacture summary tables and an assortment of other supporting tables in lieu of the necessary fact-table. The user experience suffered immensely during this interval, many of them openly questioning whether the Netezza acquisition had been a wise choice. But once the new distribution/organize was installed, user-queries that had been running in 30 minutes now ran in 3 seconds. Queries that had taken five minutes were now sub-second. Where before only five users at a time could be hitting the machine, now twenty or more enjoyed a stellar experience.
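A sketch of what that "copy" amounts to in DDL terms, using the hypothetical column names from the fund-manager example (and assuming a release that supports the ORGANIZE ON clause): the logical shape is identical, but the distribution and organization make the two tables physically very different.

```sql
-- Instrument-centric version: serves the loading/integration path.
CREATE TABLE fact_analytics_instr (
    instrument_id    BIGINT,
    product_id       BIGINT,
    portfolio_date   DATE,
    transaction_date DATE,
    result_value     DOUBLE PRECISION
)
DISTRIBUTE ON (instrument_id)
ORGANIZE ON (portfolio_date);

-- Product-centric version: same columns, same content, different physics.
CREATE TABLE fact_analytics_prod (
    instrument_id    BIGINT,
    product_id       BIGINT,
    portfolio_date   DATE,
    transaction_date DATE,
    result_value     DOUBLE PRECISION
)
DISTRIBUTE ON (product_id)
ORGANIZE ON (transaction_date);

INSERT INTO fact_analytics_prod SELECT * FROM fact_analytics_instr;
```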
How does making a copy of the same data make such a difference? Because it's not really a copy. When we think "copy" we think "photocopy", that is "identical". A DBA will rarely imagine that using a different distribution and organize will create a version of the table that is, in physical terms, radically different from its counterpart. They see the table logically, just a reference in the catalog.
The physics of the Netezza machine is unleashed with logical data structures that are configured to leverage the physical power of the machine. Moreover, the physical implementation must be in synergy (and harmony) with how the users intend to consume them with logical queries. In the above case, the Instrument-centric consumer had a great experience because the tables were physically configured in a manner that logically dovetailed with his query-logic intentions. The Product-centric manager however, had a less-than-stellar experience because that same table had not been physically configured to logically dovetail with his query-logic intentions. The DBA had basically misunderstood that the Netezza high-performance experience rests on the synergy between the logical queries and physical data structures.
In short, each of these managers required a purpose-built form of the data. The DBA thinks in terms of general-purpose and reuse. To him, capacity-planning is about preserving disk storage. He would never imagine that sacrificing disk storage (to build the new table) would translate into an increase in the throughput of the machine. So while the DBA is already thinking in physical terms, he believes that users only think in logical terms. Physics has always been the DBA's problem. Who do those "logical" users think they are, coming to the DBA to offer up a lecture on physics?
In this regard, what if the DBA had built-out the new table but the PM's staff had not included the new distribution key in their query? Or did not leverage the optimized zone-maps as determined by the Organize-On? The result would be the same as before: a logical query that is not leveraging the physics of the machine. At this point, adding the distribution key to the query, or adding filter attributes, is not "tuning" but "debugging". Once those are in place, we don't have to "tune" anything else. Or rather, if the data structures are right, no query tuning is necessary. If the data structures are wrong, no query tuning will matter.
And this is why the aforementioned aficionados were losing their hair. They truly believed that the tried-and-true methods for query-tuning in an Oracle/SQLServer machine would be similar in Netezza. Alas - they are not.
What does all of this mean? When a logical query is submitted to the machine, it cannot manufacture power. It can only leverage or activate the power that was pre-configured for its use. This is why "query-tuning" doesn't work so well with Netezza. I once suggested "query tuning in Netezza is like using a steering wheel to make a car go faster." The actual power is under the hood, not in the user's hands. While the user can leverage it the wrong way, the user cannot, through business-logic queries, make the machine boost by orders-of-magnitude.
Where does the developer/user/analyst need to apply their labor? They already know how they want to navigate the data, so they need to work toward a purpose-built physical implementation, using a logical model to describe and enable it. Notice the reversal of roles: the traditional method is to use a logical model to "physicalize" a database. This is because in a commodity platform (and a load-balancing engine) the physics is all horizontal and shared-everything. We can affect the query turnaround time using logical query statements because we can use hints and such to tell the database how to behave on-demand.
We cannot tell Netezza how to "physically" behave on-demand in the same way. We can use logical query statements to leverage the physics as we have pre-configured it, but if the statement uses the tables outside of their pre-configured physics, the user will not experience the same capacity or turnaround no matter how they reconfigure or re-state the query logic.
All of this makes a case for purpose-built data models leading to purpose-built physical models, and the rejection of general-purpose data models in the Netezza machine. After all, it's a purpose-built machine quite unlike its general purpose, commodity counterparts in the marketplace. In those machines (e.g. Oracle, SQLServer) we have to engineer a purpose-built model (such as a star-schema) to overcome the physical limitations of the general-purpose platform. Why then would we move away from the general-purpose machine into a purpose-built machine, and attempt to embrace a general-purpose data model?
Could it be that the average Netezza user believes that the power of the machine gives it a magical ability to enable a general-purpose model in the way that the general-purpose machines could not? Ever see a third-normal-form model being used for reporting in a general-purpose machine? It's so ugly that they run-don't-walk toward engineering a purpose-built model, photocopying data from the general-purpose contents into the purpose-built form. No, the power of the Netezza machine doesn't give it magical abilities to overcome this problem. A third-normal form model doesn't work better in Netezza than a purpose-built model.
Enter the new solution aficionado who wants their solution to run as fast as the existing solution. They will be told, almost by reflex, by the DBA that they have to make their solution work with the existing structures, even though those don't leverage physics in the way the new solution will need it. And this is the time to make a case for another purpose-built model - one that faces the new user base with tables that are physically optimized to support that user base. Will all tables have to come over? Of course not. Will all of the data of the existing fact table(s) have to come over? Usually not, which is a silver lining of the approach.
But think about this: the tables in Netezza are 4x compressed already. If we make another physical copy of the table, itself being 4x compressed, the data is still (on aggregate) 2x compressed across the two tables. That is, the data is doubled at 4x compression, so it only uses the same amount of space as the original table would have if it were only 2x compressed. In this perspective, it's still ahead of the storage-capacity curve. And in having their physics face the user base, we preserve machine capacity as well.
This is perhaps the one most-misunderstood tradeoff of requiring multiple user bases to use the same tables even though their physical form only supports one of the user-bases. And that is simply this: When we kick off queries that don't leverage the physics, we scan more data on the dataslices and we broadcast more data between the CPUs. This effectively drags the queries down and saturates the machine. The query drag might be something only experienced by the one-off user base, but left to itself the machine capacity saturation will affect all users including the ones using the primary distribution. Everyone suffers, and all for the preservation of some disk space. Trust me, if there is a question between happy and productive users versus burning some extra disk space, it's not a hard decision. Preserving disk storage in the heat of unhappy users is a bad tradeoff.
Or to make an absurd analogy, let's say we show up for work on day one, have a great day of onboarding and when we leave, we notice that our car is a little "whiney". We take it to a shop, where the mechanic tells us that someone has locked the car in first gear and he can't fix it. We casually make this complaint the next day (it took us a little longer to get to work).
DBA: Oh sure, all new folks have their car put in first gear. It's a requirement.
US: (stunned) What the?
DBA: Well, if you had been here first, you could keep all the gears, but everyone we've added since then has to be in first gear for everything.
DBA: Yes, first-gear for your car, your development machine, even your career ladder. About the only thing that they don't put in first-gear are your work hours. Those are unlimited.
US: That's outrageous!
DBA: We can't give everyone all the gears they want. It's just not scalable.
The problem with working with tables that aren't physically configured as-we-intend-to-use-them is that using them will cause the machine to work much harder than it has to. Not only will our queries be slower, we can't run as many of them. And while we're running, those folks with high-gear solutions in the same machine will start to look like first-gear stuff too. The inefficiency of our work will steal energy from everyone. We cannot pretend that the machine has unlimited capacity. If our solution eats up a big portion of the capacity then there's less capacity for everyone else. Even if we use workload management, whatever we throttle the poorly-leveraged solution down to will only make it worse, because a first-gear solution, if anything, uses more capacity than it would normally require.
Energy-loss is the real cost of a poor physical implementation. All solutions start out with a certain capacity limit (same number of CPUs, memory, disk storage) and it is important that we balance these factors to give the users the best possible experience. Throttling CPUs or disk space, or refusing to give up disk space merely to preserve disk capacity, only forestalls the inevitable. The solution's structures must be aligned with machine physics and the queries must be configured to leverage that physics.
The depiction (above) describes how the modeler's world (a logical world) in no wise intersects with the physical world, yet the physical world is what will drive the user's performance experience. The high-intensity physics of Netezza is not just something we "get for free"; it is a resource we need to deliberately leverage and fiercely protect.
In the above, the "Logical data structure" is applied to the machine catalog (using query logic to create it). But once created, it doesn't have any content, so we will use more logic to push data into the physical data structure. The true test of this pairing is when we attempt to apply a logical query (top) and it uses the data structure logic/physics to access the machine physics (bottom). Can we now see why a logical query, all on its own, cannot manufacture or manipulate power? It is the physical data structure working in synergy with the logical query that unleashes Netezza's power. And this is why some discussions with modelers may require deeper communication about the necessity to leverage the deep-metal physics while we honor and protect the machine's power.
Sometimes the average Netezza user gets a bit tripped-up on how an MPP works and how co-located joining operates. They see the "distribute on" phrase and immediately translate "partition" or "index" when Netezza has neither. In fact, those concepts and practices don't even have an equivalent in Netezza. This confusion is simply borne on the notion that Netezza-is-like-other-databases-so-fill-in-the-blank. And this mistake won't lead to functional problems. They will still get the right answer, and get it pretty fast. But it could be soooo much faster.
As an example, we might have a traditional star-schema for our reporting users. We might have a fact table that records customer transactions, along with dimensions of a customer table, a vendor table, a product table etc. If we look at the size of the tables, we find that the product and vendor tables are relatively small compared to the customer, and the fact table dwarfs them all. A typical default would be to distribute each of these tables on its own integer ID, such as customer_id, vendor_id etc., and then to put in a transaction fact record id (transaction_id) that is separate from the others, even though the transaction record contains the ID fields from the other tables.
Then the users will attempt to join the customer and the transaction fact using the customer_id. Functionally this will deliver the correct answer but let's take a look under-the-covers what the performance characteristics will be. As a note, the machine is filled with SBlades, each containing 8 CPUs. For example, if we have a TwinFin-12, this is 12 SBlades with 8 CPUs, or 96 CPUs. They are interconnected with a proprietary, high-speed Ethernet configured to optimize inter-CPU cross-talk.
Also whenever we put a table into the machine, it logically exists in one place, the catalog, but physically exists on disks assigned to the CPUs. A simplistic explanation would be that if we have 100 CPU/disk combinations and load 100,000 rows to a table that is distributed on "random", each of the disks would receive exactly 1000 records. When we query the table, the same query is sent to all 100 CPUs and they only operate on their local portion of the data. So in essence, every table is co-located with every other table on the machine. This does not mean however, that they will act in co-location on the CPU. The way we get them to act in co-location (that is, joining them local to the CPU) is to distribute them on the same key.
But because our noted tables are not distributed on the same key, they cannot co-locate the join. This means that the requested data from the customer table will be shipped to the fact table. What does this look like? Because the customer table has no connection to the transaction_id, the machine must ship all customer records to all blades (redistribution) so that the CPUs there can attempt to join on the body of the customer table. We can see how inefficient this is. This is not a drawback of the Netezza machine. It is a misapplication of the machine's capabilities.
Symptoms: one query might run "fine". But two of them run slowly. Several of them, even slower. Results are inconsistent when other activities are running on the machine. We can see why this is the case, because the processing is competing for the fabric. Why is this important to understand? The inter-CPU fabric is a fairly finite resource, and if we allow data to fly over it in an inefficient manner, it will quickly saturate the fabric. All the queries start fighting over it.
Taking a step back, let's try something else. We distribute the transaction_fact on the customer_id, not the transaction_id. Keep in mind that the transaction_id only exists on the transaction table so using it for distribution will never engage co-location. Once we have both tables distributed on the customer_id, let's look at the results now:
When the query initiates, the host will recognize that the data is co-located and the data will start to join without ever leaving the CPU where the two table portions are co-located. The join result is all that rises from the CPU, and no data is shipped around the machine to affect the answer. This is the most efficient and scalable way to deal with big-data in the box.
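A sketch of the co-locating layout just described (hypothetical names, hedged as before): the fact table is distributed on customer_id rather than its own surrogate key, matching the customer dimension.

```sql
CREATE TABLE dim_customer (
    customer_id   BIGINT,
    customer_name VARCHAR(200)
)
DISTRIBUTE ON (customer_id);

CREATE TABLE fact_transaction (
    transaction_id BIGINT,
    customer_id    BIGINT,
    vendor_id      INTEGER,
    product_id     INTEGER,
    txn_amount     NUMERIC(14,2)
)
DISTRIBUTE ON (customer_id);

-- This join co-locates: each CPU joins its local slice of both tables and
-- only the join result rises off the blade.
SELECT c.customer_name,
       SUM(f.txn_amount) AS total_amount
FROM   fact_transaction f
JOIN   dim_customer     c ON c.customer_id = f.customer_id
GROUP BY c.customer_name;
```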
Now another question arises: If the vendor and product dimensions are not co-located with the transaction_fact, how then will we avoid this redistribution of data? The answer is simple: they are small tables so their impact is negligible. Keep in mind that we want to co-locate the big-ticket-or-most-active tables. I say that because we have sites that are similar in nature where the customer is as large as two of the other dimensions, but is not the most active dimension. We want to center our performance model on the most-active datasets.
This effect can rear its head in counter-intuitive ways. Take for example the two tables - fact_order_header and fact_order_detail. These two tables are both quite monstrous even though the detail table is somewhat larger. Fact_order_header is distributed on the order_header_id and the fact_order_detail is distributed on the order_detail_id. The fact_order_detail also contains the order_header_id, however.
In the above examples, the order header was being joined to the detail, along with a number of other keys. This achieved the correct functional answer, but because they were not using the same distribution key, the join was not co-located. So we suggested putting the order_detail table on the same distribution as the order-header (order_header_id). Since the tables were already being joined on this column, this was a perfect fit. The join received an instant boost and was scalable, no longer saturating the inter-CPU fabric.
The problem was in how the data architects thought about the distribution keys. They were using key-based thinking (like primary and foreign keys) and not MPP-based thinking. In key-based thinking, functionality flows from parent-to-child, but in MPP-based thinking, there is no overriding functional flow of keys - it's all about physics. This is not to say that "function doesn't matter" but we cannot put together the tables on a highly physical machine and expect it to behave at highest performance unless we regard the physics and protect the physics as an asset. Addressing the functionality alone might provide the right functional answer, but not the most scalable performance.
As far back as the 2007 Enzee Universe conference in Boston, people have scratched their heads at the notion of a large-scale database that actually boasts about the absence of index structures. After all, indexes are the mainstay of relational databases and we simply can't get by without them, right? This is a simple example of how the power and architecture of the technology frees us to think about data loading and storage, data retrieval and in-the-box processing in a completely different way.
Firstly, it's no rumor that Netezza has no indexes. And for those of us who can't stand dealing with them, this is a huge plus. One of the Enzees at the 2011 conference asked point-blank, "What does Netezza have that Oracle does not?" The clear answers will arise in this and later blog entries, but for now the absence of index-structure maintenance will do just fine. What does Netezza have?
Or rather, the vendor has poured enormous thought and labor into the simplification of complex matters so that we don't have to deal with them. We can get to the business of - well - the business: the application data and information that we want to spend more time with, if only we weren't dealing with the technical administration of the database engine. This is one of the greatest weaknesses of the traditional relational database when it comes to data warehousing in general and bulk processing in particular. It is also one of their greatest weaknesses in the space of analytics. After all, we want to apply the analytics by selecting subject areas and data sets to analyze, but if we have to stand up lots of infrastructure to support this mission we are regressing toward functional hardening rather than functional flexibility. Brittleness ensues and someone asks for a refactor, a re-architecture or whatever. When this request arrives in relation to a complex, engineered, multi-terabyte (or tens-of-terabytes) data store, many will see it as a mind-numbing proposal. For most infrastructures, the weight and complexity of the installation has overwhelmed the logistical capacity of the humans to ever hope of reeling it back in. There's not enough power in the machine to help. This could take a very long time to remediate.
Not so with an appliance. Such big-data issues are its bread-and-butter zone. If we have issues with a particular data model or information construct, the machine has the power (and then some) to get us out of the bad direction and into the new one. How did the direction get "bad" in the first place? The business changed its information priorities since the inception of the original model, and now the original no longer serves it well. The business has moved, because that's what businesses do. The analysts found opportunities in the nuggets of information-gold, and now want direct access to that gold rather than having to navigate to it or salivate while waiting for it.
One of the core features of the Netezza platform is how the data is distributed on the disk drives. Because it is an MPP, each of the disk drives is its own self-contained, shared-nothing hardware structure. Contrast this to the common SMP-based platform where the CPUs exist in one duck pond and the disk drives exist in another duck pond. The data is then couriered between duck ponds through throttled estuaries. If we issue a query, all of the data has to be pulled through this estuary and presented to the CPU ducks so that the data can be manipulated and examined in software. It is a software engine on a general-purpose platform.
However, imagine the Netezza platform where each Intel CPU is mated with special hardware (an FPGA), a bounty of RAM to manipulate and cache data, a self-contained Linux operating system, and its own dedicated disk drive. Imagine also that the disk drive is itself divided into multiple sections, where the inner sections are used for internal processes like RAID, temp space and system data but the outermost ring is where user data is stored, offering the benefit of the fastest disk speed for the oft-accessed information. All of these little attention-to-detail aspects cause each of the CPUs to run at a much more powerful factor than their common SMP counterparts. Why? Because their data is co-located with the CPU, whereas with the SMP we have to drag the data out of one duck pond to get it close to the CPUs that may operate on it. And with SMP there is no such thing as dedicated CPU-to-disk access. The SMP CPUs are shared-everything, and so are the disk drives - shared-everything at the hardware level but shared-nothing at the functional-logical level. Netezza's CPUs are shared-nothing at the functional-logical level and shared-nothing at the CPU/disk level.
In Netezza, let's imagine putting 100 of these CPU/disk combinations to work. When we form a table on the host, the table exists logically in the catalog in one place. But it exists physically in 100 places. If we load 100 million records into the machine distributed randomly, then each of the CPUs will "own" 1 million of those records. If we issue a query to the table, each of the CPUs will operate only on the million records it can see. So for any given query, we are not serially scanning 100M records. We are scanning 1M records 100 times.
Now some may object that we're still scanning the data end-to-end, and for plain-vanilla queries, this is true. However, I recall the first time I performed a join on two tables of 100M records each, doing a one-to-one join on unique keys, on a machine with 24 of these CPUs, and the answer returned in 13 seconds. This was a plain vanilla test, so your actual mileage may vary. However, I did the same test on the same machine with 1B records joined to 1B records and the answer returned in less than 300 seconds. Imagine attempting this kind of join on a traditional relational database and expecting a return in time to actually use the result. (And no, I did not use co-location for this test - a matter for another blog entry.)
We have an additional "for free" aspect of the machine called a zone map. If data is laid down in contiguous form (the way transactions arrive in a retail point-of-sale model for example) then the machine will automatically track these blocks and keep their locations in a "zone map". If we then query the database with a given date, the machine will ignore the places where it knows the data is not, and use the zone maps to locate the range of data where it needs to search.
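A hedged sketch of how this shows up in a query (hypothetical names again): if the fact data arrives in rough date order, a date-restricted query lets the zone maps skip every storage extent whose recorded min/max dates cannot contain matches, with no index anywhere.

```sql
-- Zone maps track min/max values per storage extent automatically.
-- Extents whose txn_date range falls outside this week are never read:
-- the machine knows "where not to look".
SELECT store_id,
       SUM(amount) AS week_sales
FROM   sales_txn
WHERE  txn_date BETWEEN '2012-06-01' AND '2012-06-07'
GROUP BY store_id;
```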
As an example of this, we know of a 150-billion-row table that is over 25 terabytes in size, distributed and zoned such that over 95 percent of its queries return in 5 seconds or less. For a traditional RDBMS, it would take more time than that just to get its index scans squared away so that it could even approach the storage table. This is also why the Netezza machine itself can be scaled into the petabyte zone while maintaining the same performance. No indexes are in the way to load it. No indexes are necessary to search it. Imagine now: the two most daunting aspects of data warehousing and analytics - loading the data and keeping up with the user base - have been washed away with the elimination of indexes. (Don't we turn indexes off in a traditional SMP database so it will load faster, and don't we chase a rainbow trying to index and re-index the tables to keep up with user needs?) Not with Netezza. Without indexes on the columns, all columns are fair game for searching. Without indexes in the way for loading, we can deliver information into the machine, and reprocess it while there, with no penalty from the use of indexes.
By this measure, this is an anti-indexing strategy because Netezza operates on the basic principle of where-not-to-look. In other words, if we can tell it where the data is not, it can zone-in and find the data. When we think about it, this is how a common brick-and-mortar warehouse works. If we showed up and asked for a box of nails, the attendant knows that he doesn't have to look in the parts of the warehouse that carry lawn equipment, hammers, saws or window draperies. He knows where the nails don't exist.
Contrast this to the common SMP-based indexed database, which uses exactly the opposite approach. The indexes are searched for the specific key and then this key is applied to the primary data store. This is why indexed structures in general cannot scale in the same manner as an anti-indexed structure. Keep in mind, with a Netezza platform it won't matter how much data we put on that 25-terabyte table. We could double, triple it or more - and it would always provide the answer in a consistent time frame. This is because from query-to-query - it's still not-looking in the same places, no matter how big those places get. It will still continue to ignore them because the data's not there and it already knows that.
I've had people tell me that there is no real difference in the SMP versus the MPP. The SMP, "properly configured" (they claim) is just as good as any old MPP. However, there is no "proper" way to configure a general purpose hardware architecture so that it will scale. The only way we could hope to coordinate these CPUs is in software, and the only way we can get data into the CPUs is by accessing the shared disk drives in another duck pond on completely different hardware. The SMP configuration is by definition the wrong configuration for scalable bulk processing of any kind, so there is no way to "properly configure" something that is already the inappropriate storage mechanism. This would be no different than claiming that a "properly configured" VW Bug (Herbie notwithstanding) could be just as fast as a stock car. The VW Bug is not the wrong platform for general purpose transportation. But it's the wrong platform for a model requiring high scale and high performance, just as an SMP-based RDBMS cannot scale with the same factor (for set-based bulk processing) as a machine (Netezza) specifically built for this mission. Only a purpose-built machine can ultimately harness this level of data, and only an appliance can remove the otherwise mind-numbing complexity of keeping it scalable.
In the next weeks leading up to IOD (where I will be speaking on most-things Netezza) I'll offer up some additional insights on the internals of the architecture and how it differs from the traditional platforms.
As the title suggests, one of the challenges of new Netezza users is in learning about the product, what it can (and cannot) do, and how it applies in data warehousing. When I first published the book (Netezza Underground on Amazon.com) the impetus for the effort was just that - people asking me lots of questions leading to fairly repetitive and predictable answers. It's an appliance after all. We can apply it to a multiplicative array of solutions but on the inside, certain things stand out as immutable truths.
Of course, lots of folks are running around down here in the catacombs, convincing people that they need to read the glowing glyphs on the ancient stone walls as guides on their quest for more information. This is entirely unnecessary. The title of the book (and the blog) is a tongue-in-cheek nod to the way that some might spin the story on the machine in their own favor. Some might claim that we need lots of consulting hours to roll out the simplest solution. Consultants can help, of course (I'm one of them). But the spin and the tale that is told will determine the magnitude and expense of those consultants.
I'll try to provide a balance that delivers value without extraordinary expense. Yet another source of misinformation is Netezza's competitors, who like to toss "innocent" bombs here and there to direct people down the path toward their own product. All is fair in business and war, as they say, but as we bring these issues out of the darkness and into the light, the objective is to become more informed.
I am neither a Netezza nor IBM employee, so apart from compensation for actual work performed in rolling out solutions, for which people would pay me regardless of technology, I don't have any other relationship with these companies. I am a huge fan of the Netezza product and architecture and make no secret of this.
So some may have come here to ask questions about the technology or read what some seasoned experts have to say. We can do all that and a lot more. For now, let's look at the counter-intuitive nature of the product's internals.
I'll paint an imaginary picture first. Let's say we have two boxes. In one box we have thirty-two circles and in the second box we have thirty-two drums. The circles are CPUs and the drums are disk drives. They are in separate boxes, and now we draw a pipeline between them. Make it as large as you want. This depicts a standard SMP (Symmetric Multi-Processor) hardware architecture that is the common platform for data warehousing. With the exception of Teradata and Netezza, this hardware configuration is ubiquitous.
Now let's draw another mental picture. This time we'll have one box. Each of the circles will be mated with one of the drums. Now we have thirty-two circle/drum combinations. In the Netezza machine each of these is called a SPU (snippet processing unit) and represents a CPU coupled with a dedicated disk drive. Some additional hardware exists to coordinate and accelerate this combination, but this is a simplified mental depiction of the fundamental difference between the prior SMP configuration and Netezza's, an MPP (Massively Parallel Processor) configuration.
Now I used thirty-two as a simple example. In reality, Netezza can host hundreds of these CPU/disk combinations, the largest standalone frame containing over eight hundred of them, scaling to over a petabyte in storage capacity. Those of us who regularly operate on these machines are accustomed to loading, scanning and processing data by the terabyte, with even the smallest tables in the tens of billions of rows.
Some notes on the difference in their operation:
SMP: Typically used for transactional (OLTP) processing and is not purpose-built for data warehousing. It is a general-purpose platform. The typical bane of an OLTP engine is that it performs well on getting data inside, but is lousy on getting data out (in quantity, for reports and such).
MPP: (Netezza specifically) does not do transactional processing at all. It is purpose-built to inhale and exhale Libraries-of-Congress at a time.
SMP: Table exists logically in the database, but physically on the file system in a monolithic or contiguous table space.
MPP: Table exists logically in the database, but is physically spread out across all the SPUs. If we have 100 SPUs and want to load a table with 100,000 records, each SPU will receive 1,000 records.
SMP: SQL statements are executed by the "machine" as a whole, sharing CPUs and drives. While the SQL operations may be logically and functionally "shared nothing" - the hardware is "shared-everything". In fact, CPUs could have other responsibilities too, which have no bearing on completing a SQL statement operation. In the above example, the SMP has to access the file system, draw the data into memory nearest the CPUs, then perform operations on the data. Copying data, for example, would mean drawing all the data from the tablespace into the CPUs and then pushing back down into another tablespace, but both tablespaces are on the same shared disk array.
MPP: SQL Statements are sent to all CPUs simultaneously, so all are working together to close the request. In the above example, each SPU only has to deal with 1,000 records. Unlike an SMP, the data is already nearest the CPU so it doesn't have to go anywhere. The CPU can act directly on the data and coordinate as necessary with other CPUs. Copying data, for example, means that the 1,000 rows are copied to a location on the local drive. If 100 CPUs perform this simultaneously, the data is copied 100 times faster and it never leaves the disks.
SMP: Lots of overhead, knobs and tuning to make it go and keep it going. From the verbosity of the DDL to the declaration of index structures, table spaces and the like.
MPP: (Netezza) the overhead of administration is hidden from the user. In the above example, the user need only declare, load and utilize the table, not be concerned about managing the SPUs or disk allocation. The user's only concern is in aligning the data content with the hardware architecture, an easy task to perform and an easy state to maintain.
SMP: SQL-transforms - that is, insert/select operations - run slowly on an SMP platform and get slower as data is added. They are not a viable option there, so it usually would not occur to us to leverage them.
MPP: SQL-transforms leverage the MPP and do not slow as the data grows.
In fact, many people purchase the Netezza platform in the context of its competitors, who don't (nor can they) use SQL transforms to affect data after arrival.
If we purchase Netezza as a load-and-query platform only, we have missed a special capability of the machine that differentiates it from the other products. If we leverage this power, we re-balance the workload of the overworked ETL tools, in some cases eliminating them altogether. Netezza calls this practice "bringing everything under air", that is, bring the data as-is into the machine and do the heavy-lifting transforms after it's inside.
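A minimal sketch of what such an in-the-box transform looks like (the staging and dimension names are hypothetical): land the feed as-is, then let a set-based insert/select do the heavy lifting, with every SPU working its own slice in parallel:

insert into customer_dim (customer_key, customer_name, region_code)
select  s.cust_id,
        upper(trim(s.cust_name)),                 -- scrub once, here, not in every report
        coalesce(s.region, 'UNKNOWN')
from    stage_customer s
left join customer_dim d on d.customer_key = s.cust_id
where   d.customer_key is null;                   -- only the rows we have not loaded yet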
The Brightlight Accelerator Framework is one example of a flow-based, run-time harness for deploying high-performance, metadata-driven SQL-transforms. While we have matured this capability over time, the consistency of the Netezza platform is the key to its success.
SMP: Scalability is through extreme engineering, leading to hardware upgrade.
MPP: Scalability is a function of data organization as it leverages the hardware's power. Upgrading hardware is therefore rarely the first option of choice.
SMP: Constrained by index structures and the shared-everything hardware architecture through which all data must pass.
MPP: Constrained by human imagination, not index structures (there aren't any), and no shared hardware.
We have noted that with traditional SMP architectures, when the machine starts to run out of power, engineers will swarm the machine and start to instantiate exotic data structures and concepts that serve only as performance props, not the 'elegant' model once installed by the master modelers. As the box continues to decline, more engineering ensues, eliciting even stranger approaches to data storage and management. Soon it becomes so functionally brittle that it cannot handle any more functionality, so people resort to workarounds and bolt-ons. Functionality that should be a part of the solution is now outside of it, and functionality that never should have been inside (mainly for performance) has taken up permanent residence.
In a Netezza machine we have so much power available that traditional modeling approaches (e.g. 3NF and Dimensional) may have a de facto home, but now we can examine and deploy other concepts that may be more useful and scalable. These are approaches we never had the opportunity to examine before, because the modeling tool did not (and does not) support them, and the natural constraints of the SMP-based engine artificially confine us to index-based data management.
In addition, the Netezza machine operates on a counter-intuitive principle of finding information based on "where not to look". With this principle, let's say we have a terabyte of data and we know where our necessary information is, based on where-not-to-look. See how this won't change even if we add hundreds of terabytes to the system? If I add another 99 terabytes to the same system, the query will still return with the same answer in the same duration, because the data I'm looking for doesn't appear in those 99 terabytes and the machine already knows this.
For example, if we go over to Wal-Mart and we know where the kiosk is for the special-buys-of-the-day, we can find this easily. It's in a familiar location. What if tomorrow they expand the same Wal-Mart to ten times its current size? Will it affect how long it takes me to find that kiosk? Clearly not. Netezza operates the same way, in that no matter how much data we add, we don't have to worry about scanning through all of it to find the answer.
And when it boils down to basics, the where-not-to-look is the only way to scale. No other engine, especially not one that is completely dependent upon index structures to locate information, can scale to the same heights as Netezza.
Well, some of the stuff above is provocative and may elicit commentary. I'll continue to post as time progresses, so the data won't seem so much like underground information, after a while anyhow.
A "little" blurb about IBM Integrated Analytics System- IBM has had this version of the Analytics MPP in beta for a bit, and has recently announced (today).
Our team at Sirius was privileged to get an orientation and a little more than a sneak peek: access to our own beta machine to give it a go. The Enzee community will be pleased about a number of things in this release.
But even at the orientation, a number of things popped out that would "take your feet off the desk" so to speak. This is no ordinary release. IBM has integrated the Db2 BLU engine with the Netezza MPP, so now we can build and issue portable queries with a common, consistent experience. This is quite an achievement and many kudos to the engineers who have participated.
Power Systems vs Blade Servers
Under the hardware covers we now have Power Systems. This matters for a lot of good reasons, the best one being horizontal elasticity. Most Enzees know that whenever we want to upgrade our hardware for more power, say from one rack to two, we have to bring the two-rack into the shop, copy the data from the one-rack to the two-rack, and off we go.
With IIAS, we simply add-a-frame, perform some simple configuration commands, the data rebalances to the new capacity, and off-we-go. Simple and painless. How cool is that?
In this hardware scenario, dataslices and zone maps (now synopsis tables) behave the same. Distribution and organization follow the same rules. Enzees will experience no changes in these core areas.
Columnar storage gave Db2 BLU this kind of radical speed also, and with a Netezza-oriented MPP it's even more profound. Netezza tables are row-oriented, so the "read" operation takes a page from disk into the FPGA, where the desired columns are stripped from the rows, and undesired rows are filtered from the stream.
In Columnar mode, the pages don't store rows, but columns. When we ask for certain columns, the other columns' pages aren't touched. So now we read even less of the disk itself, and even less data meets the CPU.
Oh, yeah, you heard that right - the new machine also moves to solid-state storage. In current Netezza, an electro-mechanical drive will not only burn out, but even in operation the read-head has to seek a page on the disk, fetch it, and multi-task with other read operations to optimize the physical read-head over the spinning disk. Even the spinning disk is optimized to carry the user data on the outer third of the platter, where it moves fastest under the head. This can serialize queries at the read-head and is a potential source of concurrency bottlenecks.
With solid-state drives, the read-time is radically faster, and memory is fair-game by any process, without waiting. It's solid-state so mechanical breakdown is not even on the radar. More importantly, it reads and writes data at phenomenal speed compared to a hard drive.
Where before, the disk read speed was the number-one drag on a query, now we will likely see this re-balanced to other areas of the machine. Don't get me wrong, it still takes time to read memory, so we should not forego the use of zone maps - er - synopsis tables now - to reduce and filter the total amount we read. This is just good stewardship of the CPUs and other resources. Just because we "can" read a lot of data faster, doesn't mean we "should" - we should filter and reduce the total data arriving into the CPU for the most efficient query utility.
What it does not have is the FPGA. IBM is taking the step of removing this hardware-filtration workhorse because the disk drives are so powerful, columnar tables are so data-efficient, and the CPUs are so much stronger that the hardware filtration may well be superfluous. Let's shake that tree. We're all scientists, right? In retrospect, the purpose of the FPGA is to simulate column-oriented streams from row-based storage. But if we have columnar tables, the FPGA is less of a player.
Common SQL Engine
This means SQL statement portability and consistency across various platforms, and the libraries such as SQL Toolkit, Inza, Fluid Query etc are now baked-in to the engine and available at our fingertips when we power-up.
We'll have a more consistent experience, and less guesswork about functional and operational behaviors.
And we'll experience seamless integration with the Data Science Experience (DSX), machine learning, and a bunch of other offerings IBM has already announced or has in the queue.
Ease of configuration
As we were moving through our POC, the IBM team assigned to us was amazingly helpful. They knew where all the hooks were to tweak this or tune that, because the common engine and underpinning metadata are pervasive and well-understood. Anytime we had a question or thought we'd encountered an issue, the solution was invariably a simple configuration change.
This speaks volumes for the level of effort, thought and insight poured into this release. IBM has thought-through the wide variety of priorities between these platforms and has provided the ones that matter most, seamlessly.
Expect to hear good things as this rollout proceeds.
Synopsis Tables vs Zone Maps
They essentially work the same way.
DashDB and DashDB Local have been rolled up into Db2 Warehouse. Same SQL engine. Same built-in analytic libraries. Db2 Warehouse with the MPP hardware is the IIAS.
If we plan to roll out Db2 Warehouse first and later upgrade to the MPP hardware, we need to enable the queries and tables for this, which simply means leveraging synopsis tables rather than indexes. Do this, and we'll be 90-plus percent of the way to leveraging the MPP hardware when we go there. If not, and the tables/queries are index-dependent, we'll need to review the environment (for the largest tables) to make sure we're leveraging the MPP correctly.
These are not hard to do and much of it can be automated.
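As a rough, hedged sketch of what "synopsis tables rather than indexes" looks like on the Db2 Warehouse side (the table name is invented and exact clauses vary by release): declare the big tables column-organized and declare no secondary indexes, since BLU maintains the synopsis metadata on its own:

create table txn_fact             -- hypothetical table
(
    txn_id    bigint not null,
    store_id  integer,
    txn_date  date,
    amount    decimal(12,2)
)
organize by column;               -- column-organized; synopsis tables are maintained automatically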
A while back we bought a new house and decided that we would keep our first house for a few additional months to muse about whether we would keep it for a rental home. Well, shortly afterward was Sept 11th, 2001 and we were boxed into keeping it for quite a while longer. In all this, opportunists came out of the woodwork to take our house for a "song".
We tired of the songbirds quickly, but some of them were just bizarre. One couple looked at the house and said they would buy it if we would put $20k of upgrades into it. New this, new that. Our answer was no, we're selling "as is". But they persisted and felt that their requests were reasonable. No, I said, you can spend your money fixing it up like you want it, because I'm not spending my money to fix it up like you want it. They were confused. We weren't "playing the game" you see. Oddly, they had a realtor working with them who kept chanting, "Ask for what you want. It never hurts to ask."
Well, when you're asking the wrong questions, and it's obvious, it definitely hurts to ask. When on site with a large financial services firm, we were being examined for the replacement of their current consulting firm. Their lament: "They ask all the wrong questions." Apparently we were asking the right questions, because they gave the business to us.
Recently I walked through a demo of some things we've done for other clients. Two minutes into the demo, they started asking questions. This or that? Those or these? But nothing whatsoever to do with the core problems the solution was geared for, or the core questions that everyone asks.
It's like this: I have a list of questions on the left side of the whiteboard and another list on the right side. The questions on the right side deal with things like very-large-scale backup and high-speed recovery, disaster recovery, hot-failover between two Netezza machines, real-time replication, continuous processing, and of course, heavy-lifting processing inside-the-box.
These are things that accelerate business logic, the rapid turnaround of business rules and features, catalog-aware design-time and run-time modules, developer productivity, testing accuracy and turnaround, rapid-testing and rollout, release management and patched releases - operational integrity, a firm harness around the functional data, and radical simplification of very complex problems - in short, a high wall of protection around a high-speed delivery model. Something that shrinks the time-to-delivery as well as shrinks operational processing time and protects capacity, while protecting the data itself.
I knew better than to hold my breath for a question on migration -
Whew! Whenever I give this short list to a CIO, they are usually asking in terms of having rolled out a large-scale environment with Netezza and are experiencing pain in one or more of those areas. Of course, we focus on those areas. A painkiller is not enough. You have to remove the source of the pain.
How do I know that these are the questions Enzees are asking? Because they are - to me personally, to the Netezza community forums, and are the hot-topics at every Enzee Universe. Good grief, that's the short list of the lightning-round question-and-answer sessions at the Best Practice forums.
But they did not hear any of this. They were asking all of the wrong questions. The reason, methinks, is that they weren't in any pain. They felt the freedom to ask about the inconsequential because they had never experienced the consequences, or even foreseen the consequences. In other words, I am solving problems they don't have.
Or do they? Anyone who buys a large-capacity storage device is by definition biting off a data management challenge. One of our clients has over 200 TB (uncompressed) on their TwinFin. They currently do their backups to another TwinFin of the same size. They would like to perform their backups offline, but no commodity backup/recovery system on the planet can handle this kind of load with any aplomb. Some claim to, of course, but have not considered the true needs of the Netezza user. It's all Texas-Sized Data Warehousin'
For example, one of the off-the-shelf products claims that once the data is offloaded, they have "hooks" allowing the user to query the offline archives. That's right, the user query will be dispatched to their proprietary engine and it will fetch the answer back for you. But wait a second. That protocol is for a transactional system. Going out onto the file system to fetch a transaction might have some value, since the only alternative is to bulk-load the entire dataset back into the device to query it, when all we wanted was one transaction.
However, for a Netezza "scanning analytic" query, potentially spanning tens of terabytes in scale, such a mechanism is no more than a child's toy. What we need is the ability to rapidly reload the data back into the machine so we can query it there, inside Netezza's MPP. More importantly, once it's back online we can join it to other tables, that's right, down on the MPP where the CPUs burn brightest.
But this is simply an example of how one vendor has answered a question that no Netezza user is asking. Netezza users need to scan and analyze the data in-the-box. Those tools provide data access outside-the-box, without considering that having the data outside-the-box is the problem we were trying to solve in the first place!
Another issue is how the ETL tools interact with Netezza. All of them do, some better than others. Some are certainly getting better than the others. The game is on, they know that data processing is migrating back into the machine. Netezza is leading the way, and they want a piece of the action. Can you blame them?
But hold on. Is bulk-data processing in an ETL tool the same as bulk-data processing inside a Netezza box? No, not really. "Going parallel" in an ETL tool means spreading work out across CPUs for a select set of operations. Not only this, we will compete with other processes for those same CPUs. The net is, we cannot put all the CPUs on the machine to work for us. Clearly some of the ETL tools are otherwise proud of their scalability. And they should be. Add a few more CPUs to that rack and watch the ETL tool scale with it. Nothing bad about that. Capture those business rules in a point-click and off-we-go. They have spent millions of dollars tuning their performance models, pouring the brain power of their brightest people into making their product go-parallel and push that data hard. Imagine them handing off this hard-won, multi-million-dollar performance capability to an upstart appliance? Hmm, no, they'll be the last who are dragged kicking and screaming into the inexorable future.
Anyone who is kicking the tires of a large-scale storage and processing device like Netezza is also in for a subtle surprise: Once you are migrated, you will unleash the creativity of your staff to add functionality that was never possible before. This will generate business, which will generate more data. The extra capacity will naturally offer confidence that all-new-business-great-and-small are not only do-able, but with strength that isn't available with other platforms. No worries, sez aye, the Netezza machine will scale with you. And you have chosen well, grasshopper.
Then one day they look around and say - have we backed up this data recently? How about disaster recovery? What about archiving old data to make room for new? What about the ability to make the offline data available for the ad-hoc folks? That is, available in time for them to actually use it? These simple questions raise fear in the hearts of the mightiest of IT champions when they know it should have been asked, and applied, a long time before - before the locomotive was moving at 90 miles an hour dragging 500 boxcars. Before the locomotive even left the station. The sheer logistics of performing a backup of data this size, and this transient, is mind-numbing. I've noted that one client, hosting over 150 TB (uncompressed), naively plugged their commodity backup tool into the Netezza machine. After over a week of whirr-click backup activity with no end in sight, someone finally said "Kill it. If it takes a week to back it up, it will take even longer to restore it". This is a wise observation, but also tells us something:
The commodity tools expect us to accept that the restore-process will be slower than the backup-process. In the Netezza world, the backup and restore can both happen faster, and take up less storage than the commodity backup tool. If the commodity backup has to offload in uncompressed form, we need to provide generous workspace for it. If it's a Netezza-compressed backup, we only need to provide for the amount relevant to our compression ratio. Some sites get a 16x, others get almost twice that. The mileage varies because of the nature and compressibility of the data. Either way, offload is fast because it comes right off the disk without passing through the host. Likewise the reload, directly to the disk without the host. In a traditional RDBMS, offload/reload has to pass through the host to get on and off the device. For bulk analytic data, the missing middle-man is just the ticket for rapid-reload.
But they weren't asking this question either. The problems we had already solved and were bringing to the table as Netezza-centric solutions, the bread-and-butter, core-mission capabilities that people ask for all the time, weren't even on their radar. The disconnect is simple: They have read white-papers on what people are doing after they get the back-end squared away. The nice-to-haves. The critical parts, you know, the failure points that could mean the demise of the business, or perhaps their own paychecks? Not even on the table.
This is akin to someone about to engage in the construction of a large cargo ship. To be sure, some folks are concerned about the utility of the ship's interior. But when putting pen to paper, which is more important, that the break rooms have contemporary wallpaper, or that the ship can master tempestuous seas and high swells? Methinks the crew of said ship couldn't care less about the amenities if they got wind that the architects gave short shrift to things like hull stress and multiple watertight compartments. You know, things to keep the ship from being claimed by the sea when stress is high. Boy, those light fixtures sure look cool, don't they?
Back to basics. Netezza is an appliance. It can perform as a load-and-query device just like all the other load-and-query devices. A primary differentiator, one that more customers are experiencing all the time, is that Netezza is a powerful data processing platform. When leveraged this way, it also becomes a problem-solving platform. We simply wrapped some additional logic around these core capabilities in order to harness the logistics of multiple sequential (or asynchronous) queries being fired off to the device, managing its workload, intermediate tables and whatnot. The appliance makes such an endeavor simple, and further simplifies the user's interaction with the machine for purposes of pattern-based utilization. Change-data-capture, referential integrity checking etc are all far more effective inside the machine than outside the machine.
For the shops that would rather master these aspects in the ETL tool, hey, no harm done. Lots of people do it that way. But they all eventually get to the same point in the game: the ETL tool runs out of gas. Or to give it more gas will require a significantly larger power-plant. They then realize that all those CPUs in the Netezza machine are just sitting there, doing nothing most of the time. There are enough CPUs to handle peak demand, which occurs about five percent of the time. The remaining 95 percent of the time, the box is running at less than half capacity with a big chunk of that completely idle. Wouldn't it be great to get a handle on all that power? Recover it as part of an ongoing capacity model?
The most important part of this approach is to decide you want to do it. Lots of options naturally come your way when you try to get creative in the direction of power - because power drives the creativity further. Ideas are given wings that can fly higher and farther than any other traditional RDBMS could even dream about. Managers start having ideas too. By golly, if that machine can do this, then why not that? And why not, indeed?
In fact, most of the time when dealing with a standard RDBMS, the managers will ask why-not? in the sincerest of spirits. Then forth from the mouths of IT come the exact, precise reasons why-not. They are good reasons, logical reasons, and effectively put a wide moat around the management's idea-factories. Soon their ideas fade. They forget how good the ideas were. They will never see the light of day. The underpowered nature of the machines constrain them.
Contrasted with the Netezza machine, the question of why-not is more rhetorical in nature. The person asking the question is not expecting an answer either, but not for the same reasons as the manager above. He's not expecting an answer because the IT folks are already on it. His answer to the question will not be verbal, but will be positively expressed in real functional terms. Likely in a very short period of time.
Why-not? becomes more of a water-cooler phrase. Almost like, "I'm trying to decide whether we should eat at the club or eat at the diner. How about today we eat at the club?" To which the respondent says - "why not?" This is a very different and rarefied existence than the one borne on the constraints of an underpowered machine.
Once you gather a head of steam on all this, you realize that not only is Netezza an enabler for large-scale, complex problem solving, it provides the impetus for us to construct a problem-solving platform upon it. The ability to capture larger patterns of set-based processing, express them in simpler terms, and have them available to all - as capabilities that leverage the capabilities of the Netezza machine.
In your own domain, if you have a Netezza machine in-house and you're using it for data processing, even if it's the most inefficient model on the planet, it still beats the socks off the plain-vanilla ETL tool's work for the same operations. If you have coupled an ETL tool with this approach and are getting the gains out of it, even though this process may have initially been painful, you have answered the question with the right answers. The tools may not be quite up to speed, but that's okay. Your compass has not betrayed you. Stay the course and the benefits will be magnanimous indeed.
And for those who continue to use the machine as a load-and-query device, and have not forayed into the radically rewarding realm of ELT and data processing inside the machine, there is only one question to ask, and it's the right question: Why not?
So I published this book last Spring ('11) on how the Netezza machine is a change-agent. It initiates transformation upon people or products that happen to intersect with it. Most of the time this transformation makes the subject better. Sort of like how heavy lifting of weights will make the body stronger. Or the pressure can crush the subject. Stress works that way. We could imagine the Netezza machine as the change-agent entering the environment. Everything brushing against it or interacting with it will have to step up, beef up or adapt. I sometimes hear the new players say things like "But if the Netezza machine could only.." That's like a Buck Private saying of his drill sergeant, "If he could only ..." No, the subject must consider that the Netezza machine is never the object of transformation but rather is the initiator of it. But it's not a harsh existence by any means. Products that can adapt are far-and-away better than before. Those that cannot adapt now, will eventually, or remain in their current tier.
Having been directly or indirectly alongside these sorts of product integrations and proof-of-concepts (POCs) numerous times, it's always an interesting ride. The vendor shows up ready-to-go with visions-of-sugarplums in their head. And the suits who show up with them are salivating for the ink on the license agreement. In less than an hour into the POC, all of them have a very different opinion of their product than when they arrived. Their bravado is reduced to a shy, sort of sheepish spin. Throw them a bone, not everyone walks out of this ring intact. Some of them shake their fist at the Netezza machine. It is unimpressed. Others shake their fist at their own product. Alas, it is but virtual, inanimate matter. What is transforming now? The person in the seat.
So I have watched them scramble to make the product hit-the-mark. Patches? We don't need no stinkin' patches. Except for today, when they will be on the phone in high-intensity conversations with their "engine-room" begging for special releases while on-site. Alas such malaise could have been avoided if only they had connected their product - at least once - to a Netezza machine. In so many cases, they will claim that they have Netezza machines in-the-shop so they are prepared-and-all-that. It is revealed, sometimes within the first hour, that the product has never been connected to a Netezza machine. It doesn't even do the basics, or address the machine correctly. It is especially humorous to hear them speak in terms of scalability as though a terabyte is a high-water mark for them. One may well ask, why are we wasting our time with underpowered technology? Well, in point of fact, when placed next to the Netezza machine it's all underpowered, so really it's just a matter of degree.
Case in point, Enzees know that in order to copy data from one database to another, we have to connect to the target database (we can only write to the database we are connected to). And then use a fully-qualified database/tablename to grab data from elsewhere - in the "select" phrase. Forsooth, their product wants to do it like "all the others do" and connect to the source, pushing data to the target. Staring numbly at the white board in realization of this fundamental flaw, they mutter "If only Netezza could....". But that's not the point. They arrived on site, product CD in hand, without ever having performed even one test on a real Netezza machine, or this issue (and others) would have hit them on the first operation. They would have pulled up a chair in their labs, started the process of integration, and perhaps called the potential customer: "Can we push the POC off until next week? We have some issues (insert fabricated storyline here) and need to do this later."
Cue swarming engineers. Transformation ensues.
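For the record, the copy pattern their product stumbled over is not exotic. A minimal sketch with hypothetical names looks like this:

-- Connected to the TARGET database (we can only write to the database
-- we are connected to); the source is named with its fully-qualified
-- form inside the select.
insert into target_table
select *
from   SOURCEDB..SOURCE_TABLE;    -- database..table, default schema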
Another case in point, many enterprise products are built to standards that are optimized for the target runtime server. That is, they fully intend to bring the data out of the machine, process it and send it back. One of my colleagues made a joke about Jim Carrey's "The Grinch" and the mayor's lament for a "Grinch-less" Christmas. Well, didn't the Grinch tell Cindy-Lou Who that in order to fix a light on the tree, he would take the tree, fix it and bring it back? Seems like a lot of hassle for one light? Why can't you fix it here and not take it anywhere? Enzees see the analogy unfolding. No, we don't want to take the data out, process it and put it back. We want "Grinch-less" processing, too. Fix the data where it already is.
Why do this? Well, in the 6.0 version of the NPS host, the compression engine could easily give us up to 32x compression on the disk. Or even a nominal 16x compression, meaning that our 80 terabytes is now 5TB of storage. And while we may have to de-compress it on the inside of the machine to process it, the machine is well-suited to moving these quantities around internally. Woe unto the light-of-heart who would pull the data out into-the-open, blooming it to its full un-compressed glory, on the network, CPU, the network again - just to process it and put it back.
Unprepared for the largesse of such data stores, our vendor contender's product centers on common scalar data types. Integer, character, varchar, date. No big deal. Connect to the Netezza machine and find out that the "common" database size is in the many billions and tens of billions of rows. A chocolate-and-vanilla software product without regard to a BigInt (8 byte) data type cannot exceed the ceiling of 2 billion (that's the biggest a simple integer can hold). This does not bode well for integrating to a database with a minimum of ten billion records, and that's just the smallest table. Having integers peppered throughout the software architecture by default would require a sweeping overhaul to remediate. As the day wears on, we see them struggle with singleton inserts (a big No-No in Netezza circles) and lack of process control over the Netezza return states and status. These are not exotic or odd, but no two databases behave the same way. The moment that Netezza returned the row-count that it had successfully copied four billion rows, we watched the product crash because it could not store the row-count anywhere - the product had standardized on integers, not big integers, so the internal variable overflowed and tossed everything overboard. Quite unfortunately, this was a data-transfer product and performed destructive operations on the data (copy over there, delete the original source over here). So any hiccup meant that we could lose data, and lots of it.
Cue announcer: "And the not-ready-for-prime-time-players..."
Oh, and that "lose data and lots of it" needs to be underscored. In a database holding tens of billions of rows (hundreds of terabytes) of structured data, that is, each record in inventory, with fiducial, legal, contractual, perhaps even regulatory wrappers around it, and we're way, way past the coffin zone. Some of you recall the "coffin zone" is the point-of-no-return for an extreme rock-face climber. Cross that boundary and you can't climb down. But we're not climbing a rock face are we? The principle is the same. Lose that data and we'll get a visit from the grim reaper. He doesn't hold a sickle, just a pink slip in one hand and a cardboard box in the other (just big enough for empty a desk-full of personal belongings).
One test after another either fails or reveals another product flaw. When the smoke clears, the "rock solid offering" complete with sales-slicks and slick-salesmen, is beaten and battered and ready for the showers. The product engineers must now overhaul their technology (transform it) and fortify it for Netezza, or remain in their tier. The Netezza machine has spoken, reset itself into a resting stance, presses a fist into a palm, graciously bows, and with the terse, guttural challenge of a sensei master, says: "Your Kung Fu is not strong!"
Now it's transformation-fu.
Superficially, this can look like a common product-integration firefight. But this kind of scramble tells a larger tale: They weren't really ready for the POC. This would be similar to an "expert" big-city fireman, supremely trained and battle-hardened in the art of firefighting and all its risks, joining Red Adair's oil-well-firefighting team (a niche to be sure) and finding that none of the equipment or procedures he is familiar with apply any longer. He will have to unlearn what he knows in order to be effective on a radically larger scale. He might have been a superhero back home, faster than a speeding bullet, able to leap tall (burning) buildings in a single bound, but when he shows up at Red Adair's place, they will tell him to exchange his clothing for a fireproof suit and get rid of the cape. Nobody's a hero around an oil-field fire. Heroes leave the site horizontally, feet-first. No exceptions.
Enzees have experienced a similar transformation (with a different kind of fire). The most-oft-asked questions at conferences are just that flavor: How do we bring newbees into the fold? How do we get them from thinking in row-based/transactional solutions into set-based solutions? How do we help them understand how to use sweeping-query-scans to process billions of rows? Or use one-rule-multiple-row approaches versus cursor-based multiple-rule-per-row? How do we get testers into a model of testing with key-based summaries instead of eyeballs-on-rows (when rows are in billions)?
We were dealing with a backup problem at one site because of a lack of external disk space. Commodity tools often use external disk space for this purpose, until they are connected to a Netezza machine and their admin tool complains that they need to add "another hundred terabytes" of workspace. We gulp, realizing that the workspace is only today a grand total of ten terabytes in size. And you need another hundred! Yeesh, you big-data-people!
Most of the universe outside the Enzee universe will never have to address problems on this scale. It is not the machine itself that is the niche. It is the problem/solution domain. Most of the commodity products that are stepping up are doing so only because it's clear that Netezza is here to stay and they need to step into Netezza's domain. I suppose at some point they expected Netezza to give them a call to start the integration process, but the Netezza Enzee Universe already had all that under control. It's amazing how lots-of-power can simplify hard tasks to the end of ignoring commodity products entirely.
Another case in point, a product vendor "popped over" with a couple of his newbee product guys and spent two weeks trying to get their product to play in-scale with Netezza. Before throwing in the towel, they offered up the common litany of observations. "No indexes? What the?" and "Netezza needs to change X", or the favorite "Nobody stores this much data in one place." The short version is, you brought a knife to a gun fight, as Sean Connery would assert, or perhaps, you brought a pick-axe and a rope to scale Mt. Everest. What were you thinking? You see, most people who have never heard of Netezza (I know, there really are folks out there who don't know about it, strange as it seems) do not understand the scale of data inside its enclosure. Billions of records? Tens of billions of records? A half-trillion records? Is that all you got?
We will watch a switch flip over in their brains as they assess what they are trying to bite off. A small group will embrace the problem and work toward harnessing the Netezza machine in every way possible. Another group will provide a bolt-on adapter for Netezza to interface to their core product engine. While another, larger group will assess the expense of such things, the marketplace they currently address, and conclude that they will for now remain in their current tier. This is like a 180-lb fighter climbing into the ring with a heavyweight, and walking away realizing that they need to add some muscle, some speed, and some toughness, or just stay in their own weight class and be successful there.
Another case-in-point is the need for high-intensity data processing in-the-box in a continuous form, coupled with the need for the reporting environment to share the data once-processed, likewise coupled with the need for backup/restore/archive and perhaps a hot-swap failover protocol. We can do these things with smaller machines and their supporting vendor software products. But what about Netezza, with such daunting data sizes, adding the complexity of data processing?
At one site we had a TwinFin 48 (384 processors) and two TwinFin 24's (192 processors) with the '48 doing the heavy-lifting for both production roles. When it came time to get more hardware, the architects decided to get another '48 and split the roles, so that one of the machines would do hard-data-processing and simply replicate-final-results to the second '48, limiting its processing impact for any given movement. This was not the only part of their plan. They then set up replicators to make "hot" versions of each of these databases on the other server. This allowed them to store all of the data on both, providing a hot DR live/live configuration, but it would only cost them storage, not CPU power. Configured correctly, neither of the live databases would know the difference. Our replicators (nzDIF modules) seamlessly operated this using the Netezza compressed format to achieve an effective 6TB/hour inter-machine transfer rate, plenty of power for incremental/trickle feeds across the machines.
Some say "I want an enterprise product that I can use for all of my databases". Well, this is the problem isn't it? Netezza is not like "all of our other databases". Products that have a smashing time with the lower-volume environments start to think that a "big" version of one of those environments somehow qualifies their product to step-up. I am fond of noting that Ab Initio, at one site loading a commodity SMP RDBMS, was achieving fifteen million rows in two hours. Ab Initio can load data a lot faster than that (and is on record as the only technology that can feed Netezza's load capacity). So what was the problem? The choice of database hardware? Software? Disk space? Actually it was the mistaken belief that any of those can scale to the same heights as Netezza. I could not imagine, for example, that if fifteen million rows would take two hours, what about a billion rows (1300 hours? ). Netezza's cruising-speed is over a million rows a second from one stream, and can load multiple streams-at-a-time.
Many very popular enterprise products have not bothered to integrate with a Netezza machine, and many of those who have provide some form of bolt-on adapter for it. It usually works, but because the problem domain is a niche, it's not on their "product radar". It's not "integrated as-one". What does this mean? Netezza's ecosystem, now assimilated by IBM, through IBM's product genius and sheer integration muscle, will ultimately have a powerful stack for enterprise computing such that none of the other players will be able to catch up. If those vendors have not integrated by now, the goal-line to achieve it is even now racing ahead of them toward the horizon. Perhaps they won't catch up. Perhaps they won't keep up. Some products (e.g. nzDIF) are at the front-edge, but nzDIF is not a shrink-wrapped or download-and-go kind of toolkit. We use it to accelerate our clients and differentiate our approaches. It's a development platform, an operational environment and an expert system (our best and brightest capture Netezza best practices directly into the core). This has certainly been a year where we've gotten the most requests for it. But there's only one way to get a copy.
Cue Red Adair.
"No capes!" - Edna Mode, clothing-designer-for-the-gods, Disney/Pixar's The Incredibles
I'm wont to say the Netezza technology does a bit more than suggest it will run fast - it's an explicit promise practically nailed to the cabinet.
One may well ask (I'm glad you did): we did everything we were supposed to do, but it still runs slow. Why?
At many sites I visit, I hear something similar, and the irony is, they aren't far from the mark. It's not like we'll have to revamp everything, or overhaul the model, or perform some massive architectural rework threatening the vast spacetime continuum-
No, it's usually a tweak here and there. Sometimes not.
Here's the deal. It's a data warehouse appliance. It encapsulates decades of tried-and-true data warehousing, making it easier than ever to roll out a fairly standardized warehouse design. Problem is, many folks who buy one have never been exposed to data warehouse principles. No wonder when we pop-the-hood, we don't find what we expect to find - an implemented data warehouse. What we find is an oddity.
Stay with me, it'll be painless, I promise. In the data warehouse world, we embraced a common, simple truth: 3NF (Third Normal Form, transaction-style) schemas are lousy for reporting. They require too many joins and drag the transactional operation to its knees. Formulating optimized tables in the same machine won't help, because reporting is set-based, and transactional is entity-based, and the very engine we're running on favors the latter and disfavors the former. Hence the need to move the data off the machine and reformulate it for better performance.
The spirit in play here, is the acknowledgement that the 3NF model doesn't work, and the embracing of a model that does. This typically involves a dimensional-style model. It means the tables of the transactional model are merged, consolidated etc into fewer tables. It also means data types, data values and the like are all scrubbed to a known value, and their types aligned, to avoid any kind of on-demand drag. After all, would we rather zap a field into upper case on the back-end, or run the upper() function in the on-demand where-clause? Running it in the where-clause creates egregious drag. Zapping it in the back-end costs us nothing.
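To make that cost concrete, here is a hedged before-and-after sketch (names invented). The first form runs the function against every candidate row on every execution; the second compares a value that was scrubbed once during the load:

-- On-demand scrubbing: upper() is evaluated for every row, every run
select * from customer_dim
where upper(customer_name) = 'ACME CORP';

-- After the transform has already upper-cased the column once:
select * from customer_dim
where customer_name = 'ACME CORP';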
But once these sorts of schemas are moved into Netezza technology, something seems to happen to the mind. One might think it's okay to take those source tables, CDC-them over to the Netezza machine, slap some reports on top of these tables, and call it a day.
Not so fast. (Oops, that's what we're trying to fix - the system that's not so fast - I make myself laugh sometimes)
Bringing the source's 3NF model into Netezza is fine for Staging. Even for a central Hub schema to consolidate all sources. What the 3NF model is notoriously poor at, is set-based operations such as analytic queries for reporting. These were poor in our source system and they are poor over here, too. They are always poor for reporting. No amount of high-powered hardware will change this.
But that's okay. If we have the data inside the machine, we're not far from finishing up the next step. It's just that a lot of people aren't taking the next step. They leave the original source structures "as is" and wonder why they can't get a faster query. I mean, the high-powered hardware's there, right?
Pre-integration vs re-integration
Pre-calculation vs re-calculation
Pre-formulation vs re-formulation
Pre-scrub vs re-scrub
See a pattern here?
I saw a chat between several folks describing the following configuration:
Sources -> Hadoop -> Atomic EDW -> Dimensional Analytics
They claimed that the Atomic EDW and the Dimensional model were no longer necessary. We can go straight to the interface to Hadoop and push all our queries against it. While this might be functionally true, how does it dovetail with the above assertions about pre-integration and pre-calculation?
A configuration without the Atomic EDW and Dimensional (data marts) - requires the user queries to re-calculate and re-integrate the very same data, on demand, repeatedly.
For example, if it takes three minutes to bring all the "raw" data together, what if we brought the data together and took the three-minute hit just once? Then the user queries are benefitting from this pre-integration because they are not experiencing integration-on-demand.
It goes like this: We walk into a bike shop and take a look at all the cool bikes, various frames and options, and pick one. Rather than take it off the rack, the owner says he will have one ready in a week. What the? Just sell me that one. No, he says, we will build it from scratch, shower it with love, have reruns of the Dick Van Dyke show playing in the background, and it's all good. You'll love it. So you come back a week later to pick up the bike, and a lackey does the transaction for you while the owner is having the same speech with another customer. Doesn't seem very efficient, does it?
A 3NF model in Netezza practically requires a complete data rebuild every time we issue the query. The same tables are completely re-scanned, their data re-joined, their quantities re-calculated - hmm - we just did this five minutes ago when we ran the query last time, didn't we? On-demand re-integration/re-calculation is where we end up dissipating our power, and wasting our time.
If we were to take the next step with the raw data, reformulate new tables that pre-integrate and pre-calculate the necessary values, we have a consumption-ready model. When our queries hit it, the data streams from the machine faster, because all the heavy-lifting is already done. Integrate-once/consume-many.
At a site where a "raw" model like this prevailed, a very complex view was placed on the tables, joining dozens of them and performing a wide variety of work. As an experiment, we simply persisted the contents of the view without any filters. The same raw data as experienced by all on-demand queries would be pre-integrated into a single table. Where queries on the original view would take ten minutes or longer, the queries on this persisted form took under twenty seconds. We can see why - all the heavy lifting had been persisted - integrate once / consume many. This of course is not operationally practical since all those source tables were being trickle-fed every five minutes.
But the spirit of the operation remains. We should trickle-feed the inbound data to staging, and then forward the changes to a persisted location optimized for reporting.
I didn't mention that this view was used for over 200 "location" reports. Meaning that the same query ran over 200 times - that's 200 times 10 minutes - or 2000 minutes - 33 hours if running linearly. Because this query saturated the machine with data, they couldn't run anything in parallel. Their reports were always behind. After we converted it to the 20-second model, they could run the reports in a little over an hour if running linearly. Once we applied zone maps for their dates and their LOCATION_ID, the queries became sub-second, and they could run dozens of them in parallel. The entire reporting cycle, over 200 reports, finished in less than three minutes.
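For readers wondering what applying zone maps for the dates and LOCATION_ID means mechanically, here is a sketch (invented names and columns, assuming a release that supports clustered base tables and GROOM): the quick persisted table above can be declared with the clustering up front, so the zone maps on the filter columns become surgically selective:

create table location_report_ready
(
    location_id  integer,
    report_date  date,
    net_sales    numeric(14,2)
)
distribute on (location_id)
organize on (report_date, location_id);   -- zone maps on these columns do the pruning

-- After loading, re-sequence the rows into the clustered order:
groom table location_report_ready records all;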
Isn't this the sort of performance we were looking for? The pre-integration and the attention to machine physics reduced a 33-hour job to under 3 minutes.
Now can we see how not-addressing inefficiency will dissipate the machine's power rather than harness it?
A symptom of a poorly implemented model can often be found in the shape of the query. If the query is complex, has a lot of work happening on-demand, or is performing a lot of left-outer joins, it simply means the data isn't consumption-ready. If we open our BI tool and find that a lot of columns are wrapped with NVL or COALESCE into known defaults, it's time to apply those defaults to the data and not clutter the business logic with scrubbing that should have already been done.
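A small illustration of that point, with hypothetical column names: apply the defaults once, on the way into the consumption-ready table, so the BI layer never has to.

-- Scrub once, at load time...
INSERT INTO location_sales_rpt (location_id, region_cd, net_sales, tran_date)
SELECT location_id,
       COALESCE(region_cd, 'UNKNOWN'),   -- the defaults now live in the data
       COALESCE(net_sales, 0),
       tran_date
FROM location_sales_stage;

-- ...so the report query stays free of scrubbing logic:
SELECT region_cd, SUM(net_sales)
FROM location_sales_rpt
GROUP BY region_cd;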
Let's look at a query template, or rather, the structure of a query attempting to leverage these transaction-facing tables.
Select (some stuff here)
from (sub-select here)
left outer join (another sub-select here)
join master (join columns, where-clause stuff)
join secondary (join columns, where-clause stuff)
It's the presence of deep sub-selects, multi-nested views, and left-outer joins that are the telltale signs - the data is in a raw state and not ready for mass consumption.
Our teams have remediated countless models from this malaise. It's not particularly hard to do, but at all these sites, their folks were too busy to engage the problem full-time, or with the necessary confidence to get it right. Others took our advice to create "sandbox" databases, pull some data inside and formulate tables that worked for them, as a prototype. Once the prototype showed promise, they had evidence to discuss changes with the data modelers and eventually make it operational. This is exactly the process you'd see us follow on your site, because nothing works better than your own data to answer your performance questions.
By the same token, "some" of these sites knew what to do, but didn't do it. One site's leader said to me, "We went to Enzee Universe and took a lot of notes. We hired a guy to put this together and he said we didn't need any of that." Followed by a long awkward pause. I said "Is that why I'm here now?" and he said, "Exactly." He was worried I was going to tell him he had to rebuild from scratch, but we know that's never the case. Performance optimization is centered in the big-ticket, heavy-lifting tables. If we reduce the noise around those, everything starts to hum like a sewing machine.
The takeaway in all this - if performance is an issue (you don't feel you have a lot of data, but the system seems stressed) the problem isn't your queries - it's your data model. Performance in Netezza is solely derived from hardware, and the hardware's physics is unlocked by optimized data structures, not efficient queries. Tuning a query in Netezza is a lot like using a steering wheel to make a car go faster. A query is logical, but performance is physical.
Get the data structures in a place where they leverage machine physics, and get the queries in a place where they unlock that physics, and cut 'em off the chain.
Hey, I just couldn't resist!
Please note that in the following commentary, I am attempting to unpack the POTUS 2016 election analytics via the polls, not take political sides. I am one of those "independent/undecideds" that everyone complains about, partly because no particular party "represents" me. Call me a rogue!
Storytime: can Mind have an effect over Matter - is telekinesis possible?
Set up a random number generator in a computer so that it generates hundreds of numbers per second, all of them integers from 0 through 99. Have it collect as many as it can in one minute. When it's done, the average of all numbers generated should hover around 50, since the generator draws uniformly and the law of large numbers pulls the sample mean toward the middle of the range. Once this is working, set up an observation and then focus your mind on something in the computer, something imaginary even, and "declare" this object in your mind to be the CPU or the generator itself.
Now start the recording and focus your mind and think - or even say out loud - "aim high". Continue these thoughts with as much intensity as possible. Perform this observation many times to get a lot of data. When the experiment is done and the observations are collated, the majority if not all of the averages will be above the value of 50. Conversely, when it's repeated with the phrase "aim low" the final results will show a downward skew below 50.
How is this possible? Is the mind truly able to affect an electronic random number generator? Does the concentration of the mind somehow force this outcome?
When independent observers watched the researchers repeat this and collect data both for "aim high" and "aim low" - an interesting pattern emerged. At the end of the test, an observer said, "I noticed you kept resetting the test until you got it calibrated. What was that all about?" "Well," said the researcher, "when I'm trying to take a reading I have to make sure everything is the same. If I see that the average isn't moving, I reset and refocus." "How many times do you do this?" "Well, I don't really keep track of that."
The problem here is that the researcher was throwing out valuable information. The discarded runs showed that the returned numbers were really no better than random chance, but the researcher wanted to believe so badly that a lack of rigor in the test protocols seemed a-okay. The researcher was throwing out observations that proved the hypothesis wrong and keeping those that agreed with it.
So now that the election is done and the outcome is known, let's have a little fun with the analytics. You know, the pollsters. They claimed to have the answer (predictive analytics) - but most of them were dead wrong. Why was that?
Many of the news stories today start out with "The <winning candidate name here>, in a surprising victory" - or "stunning upset", or "the nation changed its mind in the eleventh hour" - all just weak ways of failing to admit that they got it wrong. Not only did they have it wrong in the end, they had it wrong all along.
The Electoral College complicates the analytics. A lot of readers outside the US follow this blog, and I have received "casual" questions about the Electoral College in the past, so before we get started, here's a quick primer on it.
Ironically, the Broadway show "Hamilton" has been in post-election news, but the players in this production have all admitted that they didn't vote. Alexander Hamilton was the Electoral College's chief defender (he argued for it in Federalist No. 68), also advocated that the POTUS be called a "king", and favored steep immigration restrictions and even closed borders. This is ironic in the light of recent statements from the "Hamilton" cast. I wonder if they know these things about the play's namesake?
The election of a President Of The United States (POTUS) is unlike many other elections, in that it's not a popular vote. It is a collection of results of fifty-one separate elections (50 states and the District of Columbia).
There's a strong reason for this. The POTUS is elected by states that are each independently sovereign of one another. The United States is just that - a union of sovereign states, each with their own governments and laws.
In America, state sovereignty is protected in a variety of ways. The FBI cannot insinuate itself into every criminal investigation. There are strong lines of jurisdiction that determine whether it's a state or federal crime, all borne on the sovereignty of the state. The federal government can't arbitrarily send troops into a state without the governor's permission. The same is true for a variety of federal agencies. For example, when Katrina hit Louisiana, President Bush was on-site the next day with an offer to the governor to send in troops to keep order and distribute food and provisions for survivors. The governor hesitated on this offer. A day or two later the levees in New Orleans broke and stranded tens of thousands. The fact remains, the troops could not enter Louisiana without permission. The states have rights.
The Constitution protects the rights of the states to maintain their sovereignty. A popular vote for "anything federal" would violate this sovereignty and put all voters, regardless of their geography, into a common pool. A candidate that made a lot of promises to the population centers would win hands-down. Moreover, election fraud in one location would affect the whole. The POTUS election is this way by design: to keep democracy at bay, to make cheating more complicated, to honor the sovereignty of each state, and to keep population centers from gaining the unbalanced power that would leave the majority of the country out of the race entirely.
Even as this is written, the state of California has laws that allow non-citizens to vote in local elections, and their totals are driving a "popular vote" for the loser because some 4 million votes were illegally cast for POTUS. Those haven't been excluded yet; excluding them would likely line up the popular vote with the electoral vote.
"Popular vote" would become a way for "big city dwellers" to tyrannize the country. Our founding fathers understood tyranny quite well, and the many forms it can take.
But what if the outcome were decided by popular vote? Consider that in states that are lopsided, like California, New York and Texas, voters for one candidate are always represented while voters for the other are not. Many voters for the other candidate don't go to the polls at all because they know it's a waste of time. But if this were a popular vote - those folks would have a vote that counted and be encouraged to go vote. Such things come into play when the votes are counted differently and would significantly affect how many additional votes are cast.
What if the candidates tie in Electoral votes or don't reach the necessary 270? The contest goes to the House of Representatives. Many people mistakenly believe that the House would cast 435 votes for President. This is not the case. The representatives of each state must "caucus" and have their own internal election, and only one vote from that state is cast for POTUS - 50 votes in all. Why? Because the states elect the President, not the people. It's a running theme.
This is a bit of a startling realization for some, that even the Father of the Constitution, James Madison, commented that "democracy is evil" - primarily because it allows the will of the many to trample the rights of the few. The founders deliberately put in place a variety of checks and balances that keep democracy at bay. As such, America is a Representative Republic, not akin to any form of democracy at all. This is confusing to some, who've been told America is a democracy and the Electoral College is "undemocratic" and if it's not democratic it must be bad.
History has proven that "the majority is often wrong" - and this was borne out in the POTUS pre-election polling - all but a few polls were completely wrong.
Recall that many hundreds of years ago, a chap named Galileo challenged the majority "opinion" on celestial mechanics. Oddly, the Aristotelians in the universities - all of academia - stood in opposition to him. Even the leaders of his own faith disowned him. He was a lone voice in a sea of consensus - and all they had to do was look into the telescope. In all walks of life, we often find that a small number of people "have it right" while the majority is wrong. The founders of America wisely recognized this "herd effect" of humanity and didn't want it to have any power. One of the reasons for this is that the "herd effect" is often driven by a majority of the least-informed and most-fearful.
Should population centers control the presidency? This question was asked and answered by the founding fathers in the form of the Electoral College.
This system, however, radically complicates the prediction of an overall outcome. Each state's activity has to be predicted and modeled, and depends on a lot of very dynamic factors.
Even further - in the 2016 election - the media, the newspapers, the consultants and a wide range of politicians, including many politicians in the same party as the eventual winner - all claimed that the eventual winner would lose in a landslide, the most devastating loss in history. All of them were wrong. Instead, the very opposite became reality. The naysayers were proven wrong, the "favored" candidate lost and that candidate's party was decimated.
The problem here isn't that the nation "suddenly turned at the last moment" as some have suggested, but that the pollsters themselves fell victim to a common problem in these kinds of analytics. They had it wrong from the beginning and their polls failed to reflect reality. There's an obvious reason we'll get to a little later.
Here was a chance for predictive analytics to shine like no other - a time-boxed, measurable event and outcome. This is why those of us who champion analytics are so appalled by the epic fail. It could have been a shining moment for analytics, but instead it was just a fizzle. The worst part is that people without any predictive computing also predicted what would happen in the aftermath of the election - and they were also right. No computer algorithms required. Perhaps the nature of a "learning machine" needs to be couched in terms of what is being learned and how we know it's true?
Many of us who want excellence in analytics were absolutely appalled at the abject lack of accuracy of the presidential polling. It seemed a lot like a circus and the election night seemed like their version of damage control more so than reporting an expected outcome.
After all, if the polls were scientific and on-the-money, when the polls closed and the votes were reported, the science should have already reflected the reality of the outcome. No surprises at all. Why was it such a shocker? An upset? A Cinderella story?
Think about that for a moment. We hire someone to do analytics for us, find market opportunities and reduce risk, and when we act on their carefully crafted predictions, it goes south and we lose millions of dollars. Would this be a good testimony to the accuracy of our analysts, or an abject failure? Would we ever trust them again? Would we throw out the baby with the bathwater, so to speak, and forsake the value of analytics altogether? Some have done this, and later revisited it with a bit more scrutiny (and are happy with their results now).
When it comes to our companies, our livelihoods, etc - trusting the data and the analysts is hard because it's personal. It's not like a video game where we get unlimited retries. The one failure may put us out of business. The one success may make us handsomely solvent for decades.
Let's get some things out of the way - we have a few broad reasons why they got it so wrong.
- On the dark side - they knew the results and were lying. I don't see any benefit for the pollster or the candidate in this. In fact, showing a candidate artificially high in the polls could spur the opposition voters to the polls!
- On the lighter side - they were incompetent - I can't accept that so many were off because of this - that's just me talkin'
- On the analytic side - they were victims of "confirmation bias". This explains it much better. They were sincere, but sincerely wrong. They were simply too personally invested in the outcome.
- Reuters and their partner IPSOS, recently shared that much of their polling data favored the eventual winner, including the most controversial issues - by over 83%. They don't see their failure to report these known facts as having any influence on the election. In fact, they completely changed their time-honored measurement system in a manner that favored the eventual loser. Why would they do this? It's really simple: they could not bring themselves to believe that they were wrong.
Many who have attempted to find a reason for the failure fall into one of two major buckets - those who look at it scientifically and those who look at it politically. The political view has a problem in that the presence of bias - of the modelers and analysts - tends to taint the model and inject a "confirmation bias". That is, the polls say what the analyst expected them to say, but the analyst may not realize that their bias has already affected the outcome. Many who look at it scientifically are using politics first, so the confirmation bias creeps in again.
Case in point, an analyst may ask the question, "If the election were held today, who would you vote for?" And it's interesting how many of those voters polled were dishonest in their response. How we know this will follow shortly. Taking these responses at face value, the analyst sees that the numbers trend as they expected, so they report the results. Another form of confirmation bias is when the analyst has a hypothesis in hand, such as "We know that nobody could possibly want to vote for Candidate A for so many important reasons, thus..." and this hypothesis is what drives their questions and likewise the rest of the analysis. They only ask questions in this context and only hear answers in this context.
This is a lot like the circular question of "Do you still kick your dog?" If someone answers "yes", they're a reprobate. If they answer "no", it implies they used to kick their dog...
Unbeknownst to many, this circularity is one of the major flaws of the scientific method. This is why the scientific method can't be used to "prove" anything. It can be used to falsify, but not to prove. Bias rears its head in common scientific experiments and even in police forensics.
"What happened to the evidence you collected in the bathroom of the crime scene?"
"We analyzed it and threw it out as irrelevant."
"You threw it out as irrelevant? Why?"
"Because it didn't match the suspect."
(real conversation between a prosecutor and CSIs in a triple murder)
In scientific circles - a professor at an Ivy League university had an intern who reported the following: the professor had collected a wide array of samples from a recent field trip, and the intern expected to spend the weekend collating them as part of his duties. But when the labeled/bagged samples arrived, only a fraction of them made it into triage. He asked the professor what happened to the rest, and the professor said they had to be discarded because they "didn't fit the profile".
This "doesn't match the suspect" or "doesn't fit the profile" is a problem because it means the analysts have a hypothesis concerning a specific outcome and have thrown out evidence suggesting, or even proving, that their hypothesis is wrong. The professor, in throwing out evidence that didn't match a profile, is aligning to his original hypothesis and discarding evidence that disagrees with it. In the case of the CSI, if they throw out all but the data that "matches the suspect" they are by definition throwing out the case against the real criminal. If any of that evidence could exonerate the "suspect" but the "suspect" is falsely convicted, the real criminal goes free.
In the case of Reuters/IPSOS - their hypothesis was that the eventual loser would win - has to win - in a total landslide. Any data that contradicted this simply had to be erroneous. Yet it was not - it was speaking the truth and they ignored it.
So the flaw of the scientific method is that the scientist can inadvertently use the hypothesis as the filter through which all evidence is examined, rather than its intended purpose - a springboard for investigation, to be either confirmed, rejected or modified based on the evidence discovered. Since many researchers apply for grants based on "the hypothesis", they must have a reasonable confidence that the hypothesis has merit in order to influence donors for funding. If the scientist has enough failures, those funding sources will dry up.
Thomas Edison claimed to have failed over 1000 times in inventing the lightbulb. His explanation was that in each case, he eliminated a candidate with the expectation that a final candidate would succeed - claiming genius is one percent inspiration and ninety-nine percent perspiration. Unfortunately, donors these days aren't so forgiving. Scientists like Edison, Newton and Lavoisier had independent means of supporting themselves while performing science. Today, scientists expect to be financially supported even while they do the work. That's reasonable, but it increases the cost of science, and tends to tempt the scientist into fudging the data to match the hypothesis. He has bills to pay, kids in college, vacations to pay for. A dependency on money, and lots of it, has been shown to affect their judgment.
As for polling, the polling firms sell their results to the candidates and it's collectively worth billions of dollars. Down-ballot candidates in 2012, 2014 and 2016 spent over a billion dollars in polling alone.
Some reason this way: if you're a pollster in the business of selling polls, and the first poll you take shows your "guy" waaay ahead of the pack, do you share this or do you show the race to be "very close"? This "very close" will cause your client to be nervous and want to purchase another polling result as soon as possible. You have a vested interest - even a conflict of interest - in whether or not you deliver the right polling results because it's your job to sell more results. Since everyone knows that the only polls that count - are the polls in the final week - hey - why not massage the numbers? Who's it gonna hurt?
In the book "Wrong: Why Experts Can't be Trusted" - the author cites one case after another of scientists eliminating, adding or fudging data. One researcher went so far as to use a magic marker to put stripes in the fur of a lab rat so that it could pass certain acceptance criteria. He notes that even though doctors have formally accepted a corrected way to administer cardio-pulmonary resuscitation (CPR), the average training manual - and even the Red Cross - still teaches the former method.
Periodically in the news will appear a case of fraud or collusion in the area of climate science. If the science is there, why the fraud? For example, the IPCC announced that global warming has been in a "pause" since 1997. The current trend is more toward cooling, as exemplified by the glacier forming in Mt. St. Helens' caldera. Many scientists are asking the impertinent question: Is the "pause" really a "pause" or is it something else, like "reality"?
In May of 2015, NASA announced that the polar ice caps are not receding; in fact, the Antarctic sea ice is expanding.
For folks to claim that "the majority of climate scientists believe..." isn't relevant, because many centuries ago the majority of scientists believed the Sun revolved around the Earth. Still, science is science, and not subject to consensus or democratic vote. One "uncovered" memo after another has revealed that the "result" of climate science can be purchased regardless of the contradicting data. So why does anyone trust it? After all, doesn't Earth itself do more damage to itself, than mankind could possibly keep up with?
In the case of Presidential polling, there's no compelling reason to deliberately get it wrong. Some like Reuters may withhold information, but this isn't the same as false reporting. Statistics show that false polls don't do a lot to suppress or affect voter turnout or sentiment. In fact, one could make the case that a false poll strongly in favor of one candidate could strengthen the resolve of a person voting for the opponent. Likewise if a candidate is seen as strong in the polls, some of the candidate's supporters may not go vote under the assumption that their vote wouldn't count all that much. Both of these dynamics were partially in play in 2016, but nobody can measure them, so nobody knows their impact for certain.
I worked with a firm some ten years ago that had so much cash rolling in, they could have wallpapered the offices with 100-dollar bills and not missed any of it. Their business was so dramatically profitable that their investors loved them, their customers loved them, but one love was lost and had not returned - working for the corporate offices was drudgery. They had not upgraded their computing systems in many years, so many of the employees spent countless hours, every day, pulling data and collating spreadsheets.
In the meantime, a lack of visibility across their corporation made their expenses invisible. They were hemorrhaging cash in most departments and could not see it, and didn't care because so much more cash arrived to replace it, and then some. Such situations always have a day of reckoning. We were onsite because that day had arrived, and the senior management may as well have been in witness protection, they were so frightened.
Analytics matters. It tells you where you stand.
Unless it's presidential polling. Which is just bizarre. I hear that it's - uh - like the most powerful position in the world? I could be wrong about that, but for the presidential polling to be so completely off?
The reason we as voters don't particularly care about the presidential polling is that it's rarely accurate. We're sort of "inoculated" to it by now. We hear all sorts of stories of pollsters using the polls to shape opinion rather than reflect it. We want to believe that they're doing it for the right reasons - to be accurate and regarded as reliable, not as agenda-driven political hacks. Hope springs eternal, but at the end of every election season we see how completely wrong they were. At the beginning of every election season, the lessons of four-years-prior are forgotten and we find ourselves watching the polls ever-so-hopefully.
Pardon the analogy, but isn't this a lot like Lucy and Charlie Brown, where she holds the football for him and rips it away at the last moment? She promises each time she won't do that, but always does. Why does Charlie keep coming back? Well, because it's funny, and it's Peanuts, and we know it's make-believe.
But presidential polling isn't make-believe, and the outcome has real-world consequences.
Only three of the mainstream pollsters were even remotely accurate. All the rest (dozens of them) couldn't have been more off if they had just manufactured the numbers from thin air. In fact, any of us could lick a finger, test the political wind, and produce a poll that was more accurate than ninety percent of those claiming to use analytical science. Just embarrassing.
And if it's science, why were they so wrong? I mean, so completely wrong?
This is why the final election outcome was such a "shocker". Expectations. Just the setting of false expectations is enough to set someone off. Tell your wife you have a romantic weekend planned and then at the last minute get called into a non-optional emergency meeting at a client site - uh - yeah - set those expectations carefully! In one particular case, I had set my client's expectations that I would be unavailable. They didn't even bother to call.
Nate Silver, famed analytics guru of many past elections, put his private formula alchemy to work and, at the beginning of election night, already had one candidate favored with a 70 percent chance of winning. No margins, you see, just whether the candidate would win. As the polls closed in each time zone, Silver changed his percentages. They went to 60, then 50, and dipped below 50, moving downward "as the world turned" and polls closed by the hour. The opponent likewise rose in the other direction. Betting odds were a reversal of fortune for many.
People look at Silver's messaging in real-time, analyze it and proclaim, "The frontrunner is losing ground" or "the underdog is gaining ground". Why doesn't anyone see such sentiments as odd? The reason I say this is simply:
Friends of mine go to the tracks on occasion, probably more often than their wives would like, and spend time betting on dogs or horses. At the beginning of the day, the track officials publish the betting odds, much like Silver published "odds" during the course of the campaign. But those odds at the track are only good before the race begins. The officials don't change the odds after the starting bell.
And what happens when the gate opens and the horses charge forth? Seems to me they're just like Olympic runners in a starting block. They all have the same starting point - zero - and all of them have to gain ground faster than the opponents - to break the ribbon first. I mean, everyone gets that, it's why we watch races. I still recall many Summer Olympics ago, one of my favorite runners (Gail Devers) was in the 100-meter hurdles. She was at least five hurdles ahead when she hit the last hurdle and tripped over it. She landed on her knees and tried to recover, but finished third. She was favored to win, too. That really was a case of gaining ground only to lose it later. Many recall the Winter Olympic snowboarder who was favored to win. The frontrunner and backrunner tangled right out of the gate and the hero ran well ahead of them. When she hit the last hill, she decided to "hot dog" and brought her board up to touch it, lost her balance and spilled out. She hurriedly tried to make it right but the other two sped past her, an opportunity lost.
The lesson in all this, is that once the polls close - the outcome is prescribed. They can't gain or lose ground. At the voting precincts, there is no frontrunner or underdog after the polls close. It's all over but the counting.
The odd part about this being applied to the 2016 presidential race, is that when the polls closed and votes were reported, one jumped out in front and stayed out in front, and ultimately won the contest. Anything prior to the votes being counted - predictions, polling, exit polling - didn't matter any longer - because most of them got it completely wrong. The only question people have later is - if most of the pollsters were completely wrong, how do we know the others aren't just a fluke?
Isn't that what we'd say about any other kind of contest? If fifty people make a prediction for an outcome and only three get it right, we chalk it up to random chance, not the skills of the predictor. The point being - if science is directly applied, we move closer to the expected outcome. If no science is applied, we can't expect better than random chance.
So for Silver to claim "gaining ground" or "losing ground" after the polls close is ridiculous. It's Silver's form of "damage control" after being proven so completely wrong. Moreover, he was repeatedly proven completely wrong about the winning candidate from the time the candidate announced a presidential bid. Every prediction he made - from the polling to the primaries - crashed and burned. He and others expected their front-runner to win in a landslide of 500 or more electoral votes. Epic fail. How embarrassing is that?
Not for him, per se, but for analytics in general. It is because of Silver's past success that advanced analytics has gained ground in the marketplace, but when folks like Silver so completely and visibly fail, it sets back analytics and in some ways can bring shame to those who sold their analytics based on Silver's success with it.
To Silver's credit, part of his "damage control" was his explanation of how close he called the "popular vote". Well, Nate, that's reeeeeal nice. But as noted above, the popular vote doesn't mean squat. This is the United States; each state is a sovereign entity not beholden to the other states, and does not participate in a popular-vote-based election. It is in their best interests, and always has been, to avoid being pooled with the popular vote.
A similar effect happened with the 2004 POTUS election, where one candidate was handily whipping the incumbent - based on exit polls alone - but when they started counting votes, the incumbent immediately jumped in front and never fell behind even once. Various groups in favor of the challenger cried foul - but strangely did not point a finger at the exit pollsters. The outcome of the election would have been the very same without their reports. And since their reports were so completely wrong, why report them at all? The more sinister among us would claim they were trying to affect the outcome. Voters are a little smarter than that, so it's hard to swallow. Especially if it's across fifty sovereign states, each with their own vested interests - this just makes it a lot harder to cheat.
Another strange effect happened with the 1980 election, where the polls leading into election night had the incumbent ten points ahead, but as the night unfolded, the challenger took the race in a total landslide. Only many years later was it learned that in the week prior to the election, the incumbent was taken aside and told that he wasn't ten points ahead, but ten points behind. No way, no how would he recover those ten points in a few days. This was a "brace yourself" moment, so that nothing about election night was unexpected for the candidate. I suspect that the same numbers were available to the challenger as well. This was the first time that the loser conceded the race even before the polls closed on the west coast.
In the 2016 election, one candidate was behind in most of the prior predictive polls, and led in only a few, while the other candidate enjoyed a comfortable "lead" throughout the campaign season. The underdog claimed that the polls could not be trusted. People laughed. The polls that showed him ahead were ridiculed. They however, were closer to the mark than anyone realized. Oddly, these same polls were the most accurate ones four years ago in 2012. Why weren't they trusted this time? Confirmation bias. They disagree with our hypothesis - so there's just no way they can be right.
Both candidates were "seeking low ground" in their rhetoric - in the most bizarre and tumultuous race ever, the candidates dished out their fair share of mud. As a result, many voters were uncomfortable openly committing to either candidate. Of course, each candidate had their own "openly loyal base" but knew they could not win with the base alone. They had to reach the "independents" and "undecideds" - but how to find them? How to know what they really think?
One of the pollsters used an interesting question - "How are your neighbors voting?" - and this seemed to unlock a wealth of information. In the final analysis, this one question unlocked the hidden information. He had found the true sentiment of "undecided" voters. While a person may not feel comfortable sharing their own opinion, it was easy to share the opinion of an "imaginary neighbor". This pollster's numbers were more accurate, state by state, than any of the other pollsters'. He called it, but he had to use a little subterfuge to make it happen.
This subterfuge it seems, is the hallmark theme of politics. The politicians keep a public face and a private face. One candidate was revealed to have told donors to expect public responses to the voters that were incongruous with the private responses to the donors - not to worry, it's just politics.
On a personal note, my father was a District Attorney (an elected office) for over 30 years in East Texas, and was in office at the time of my wedding (to my wife of now 30 years) - and my wife's parents had taken on the expense of the wedding reception. Dad told them to invite an additional 300 guests to the wedding and reception, which would have blown their budget sky-high. They objected but Dad said - no worries, it's just politics. None of them will actually show up, but it's bad form not to invite them. Sure enough, none of them showed up. But this is a bit of a subterfuge in itself, is it not?
Politics is an art of partial truths and partial subterfuge. Anyone who reveals their agenda from the outset is considered a poor political player. One must have a public agenda and a private agenda, if they want to "get anything done". At least, that's the "common wisdom".
This is why many technologists often divorce themselves from politics entirely. If I took a poll of technologists nationwide, of those eligible to vote I would find that only a small percentage are actually registered to vote. Technologists often watch politics like a sporting event, if they watch sporting events at all.
But this politics-as-usual problem - the subterfuge and hidden agendas - had apparently wearied the American voter. So when a candidate stepped forward with no political experience at all - nobody knew how to measure it. They still don't. Even now they are attempting to describe the candidate's victory within a political paradigm, and nothing they come up with is accurate. I read one dissertation that attempted the same old shoe-horning of analysis based on demographics, when the winning candidate had clearly appealed to a populist voter base that cross-sectioned a wide array of disparate demographics.
No wonder they were so completely wrong - they were looking in the wrong place, and asking the wrong questions. Conspiracy theories emerge. Are they that incompetent? Are they deliberately lying? Either way, how can we trust them?
Silver moved into "damage control" in the days that followed with "we were only 2 percentage points off" - well no, claiming that one candidate had over a 70 percent chance of success is a lot more than 2 percentage points off - but since he doesn't do polls himself, and bases his information on existing polls and other information, he was effectively drinking from a poisoned well. If he had taken even a superficial look at the candidates he would have seen why. One was a career politician and one had never run for office, and didn't understand the first thing about politics - so didn't conform to the common model. This not only confounded the opponent, it confounded the media - and the analytics.
I kept after my kids to pay attention to this election season because they'll tell their grandchildren about it - this will never happen again in our lifetimes.
In large part, this was unlike any other presidential political season for these very reasons, but the pollsters treated the second candidate like the first, attempting to shoe-horn everything into a common "career politician" model. Time after time, the non-politician candidate beat the predictions and nobody could understand why. In the end, the "political class" of consultants and the "establishment" were very afraid. Here they had set up a system through which all political candidates had to arrive, but this candidate proved that none of it was necessary. It relegated the established political engine to irrelevancy.
Moreover, the winner is about to enter office without any obligations to donors or other influencers - and no fingerprints on any of the problems taking place in government now. This was not the case with any of the other candidates.
Has anyone ever witnessed something so strange? The candidate stuck to several core issues that resonated with voters in all demographics and gathered more diversity under the candidacy than anyone prior. The losing candidate on the other hand, kept going through one re-invention after another, as if a phoenix rising from flames. Voters saw this as phony.
How does this apply to our internal corporate issues? Does dirty politics play a role in how numbers are reported? Do we seriously think that if the eeevil political player down-the-hall is able to manipulate numbers to his/her advantage, that they will be altruistic and avoid the urge to cheat? No doubt many avoid the urge, but there is an ever-present propensity to cheat, to capture numbers and spin the story to one's favor. It's just human nature.
When Inmon coined the phrase "Single version of the truth" - this is exactly the problem it addressed - to make sure, in certain, objective and scientific terms, that everyone was reporting from the same place, same totals, same everything, so that nobody could cheat. It's bad enough that someone would cheat to pad their numbers and look better; it's even worse when an underperformer pads their numbers to look even marginally acceptable. Corporate heads wanted neither, but a single place to go where everything was laid bare - the good, the bad and the ugly.
In Red Storm Rising, Tom Clancy tells a story of the Russian Politburo and their analysts, who would arrive with three reports in-hand. One was the worst-case, one the best-case and one the middle-ground. When the analysts arrived, they would attempt to "read" the sentiment of the Politburo members - before choosing which report to proffer. The Politburo was known for being harsh with people who disagreed with them. So the analysts would attempt to discern the sentiment and intersect it with a report that aligned with Politburo sentiment rather than challenged it. In this particular storyline of Clancy's, this sentiment was ill-placed, the report was the "best case" and the outcome was disastrous.
Sometimes the folks asking the questions are their own worst enemy. They ask strongly biased questions in the wake of harsh outcomes for dissenters. If a person wants to keep from getting their head lopped off, they do whatever, say whatever to avoid this outcome, but it's not doing the decision-maker any good. If anything, it's misleading the decision-maker down a disastrous path. Unless of course, the decision-maker is just so good at what they do, they don't care about the opinions of others anyhow. Except to see who is loyal to them, of course.
This is the dichotomy presented to us by presidential polling versus our internal analysts. The pollsters and the analysts both have a loyalty or bias, so it is incumbent upon us to either determine that bias, or guide that bias in our favor. In business we want our analysts loyal to our goals and success. We want their honest answer as to where we're headed and whether or not it's a good idea, how to steer toward success and how to avoid danger, and the most effective way is to join-at-the-hip. Their fate is our fate - they have a vested interest in helping us get it right.
Consider the story of the king who went to visit a wise old soothsayer, who told the king, "You will cross a river, and a great king will be defeated." So the king mustered his troops, crossed the river and his entire army was routed. The soothsayer was right - a king had been defeated - but I'll bet the king asking the question would have wanted a more specific answer.
Conversely, who are the pollsters loyal to? Clearly at least one pollster was chasing "the answer" and found it - and reported it regardless of how many others laughed at him for it. The others, who had it wrong, were loyal to something else. We don't need to know what that something else was, just that they weren't pursuing the truth. And if they were pursuing the truth but were that-far-off - they certainly weren't pursuing it well. With the stakes so high, wouldn't we want the person to pursue it well? Don't we want them to be loyal to us?
And "loyal to us" isn't the Politburo gambit of "reading" us to tell us what we want to hear. We want them to tell us what we need to know.
Or are we really okay with analysts who "found what they were looking for" and upon "finding it" proclaimed "see I told you so" even as the company was entering Chapter 11? One particular very-large energy company in Texas (Enron) had one and only one analyst telling them they were on the wrong path. He was right, and could say "see I told you so" - but the decision-makers weren't listening.
This is a lesson for the analysts out there - just like only a few 2016 pollsters got it right while the others laughed at them - you as an analyst might be up against similar odds - and feel like Galileo in conflict with the greatest academics of his time. Your chief analysts may tell you that you're wrong - that you're making a bad career move to disagree with them. What if they are using the "common metrics" and you have found an outlier, a significant anomaly that creates tension in all the common answers? If the data is on your side, you have some decisions to make.
Many years ago I worked with a chap who built hardware parts for computers. One IEEE-certified schematic for a device specified a much larger wire than was necessary. The wire could carry more power than common house current, but the device was powered by a nine-volt battery. The larger wire seemed like overkill, but the specification called for it. He took his case for the larger wire to the boss, who told him that he needed to use a smaller wire. The engineer stood his ground on the side of the schematic - and claimed that he'd taken an engineering oath not to follow instructions from people in opposition to a schematic specification. A battle ensued that lasted for many days until the young engineer tendered his resignation. A contractor was called in to fill his shoes, and he noted the same issue with the size of the wire. He called the vendor, who said "This has already been published in errata. Do you not have a copy of it?" The contractor said no, so they faxed it over. Lo and behold, the new schematic specified a smaller wire. Why didn't the first engineer think to do this instead of sticking to the original data - in the face of such a glaring anomaly?
What does all this mean? We can stand our ground with bad data in our hands and be sincerely wrong. Or we can look at other aspects of the problem and regard glaring inconsistencies as problems to solve rather than ignore. Nate Silver ignored the "poisoned well" of the data he was using, as did the other pollsters. The ones who got it right stuck to their answers even through ridicule - because they knew this race was different in too many ways to count, and required more than just the common metrics.
An old joke goes like this: Some analysts got together to determine the "meaning" of "two plus two" - and brought in a mathematician. His answer was "four - what's your point?"
They brought in a philosopher, who answered with "Well, "two" in one universe might mean different things than in ours, the same for the meaning of "plus" or even "four", so can you be more specific?" They thanked him for his time.
Then they brought in the attorney. Upon hearing the question, he rose from his seat, shut the door, seated himself and leaned into them, "What do we want it to be?"
Many years ago we encountered an environment where the client wanted the old system refactored into the new. The "new" here being the Netezza platform and the "old" being an overwhelmed RDBMS that couldn't hope to keep up with the workload. So the team landed on the ground with all hopes high. The client had purchased the equivalent of a 4-rack Striper for production and a 1-rack Striper for development. Oddly, the same thing happened here as happens in many places. The 4-rack was dispatched to the protected production enclave and the 1-rack was dropped into the local data center with the developers salivating to get started. And get started they did.
The first team inherited about half a terabyte of raw data from the old system and started crunching on it. The second team, starting a week later, began testing on the work of the first team. A third team entered the fray, building out test cases and a wide array of number-crunching exercises. While these three teams dogpiled onto and hammered the 1-rack, the 4-rack sat elsewhere, humming with nothing to do.
We know that in any environment we encounter, with any technology we can name, the development machines are underpowered compared to the production environment. And while the production environment has a lot of growing priorities for ongoing projects, we don't have this scenario for our first project, do we? Our first project has a primary, overarching theme: it is a huge bubble of work that we need to muscle-through with as much power as possible. That "as much as possible" in our case, was the 4-rack sitting behind the smoked glass, mocking us.
And this is the irony - for a first project we have a huge "first-bubble" of work before us that will never appear again. The bubble includes all the data movement, management and backfilling of structures that we will execute only once, right? Really? I've been in places where these processes have to be executed dozens if not hundreds of times in a development or integration environment as a means to boil out any latent bugs prior to the maiden - and only - conversion voyage. But is this a maiden-and-only voyage? Hardly - typically the production guys will want to make several dry runs of the stuff too. We can multiply their need for dry runs by ours, because we have no intention of invoking such a large-scale movement of data without extensive testing.
And yet, we're doing it on the smaller machine. No doubt the 1-rack has some muscle - but I've seen cases where it might take us two weeks to wrap up a particularly heavy-lifting piece of logic. If we'd done this on the larger 4-rack, we would have finished it in days or less. Double the power, half the time-to-deliver (when the time to deliver is governed by testing).
In practically every case of a data warehouse conversion, the actual 'coding' and development itself is a nit compared to the timeline required for testing. I've noted this in a number of places and forms, in that the testing load for a data warehouse conversion is the largest and most protracted part of the effort. And if testing (as in our case) is largely loading, crunching and presenting the data, we need the strongest possible hardware to get past the first bubble. A data conversion project is a "testing" project more so than a "development" project, and with the volumes we'll ultimately crunch, hardware is king.
But I've had this conversation with more people than I can count. Why can't you deploy the production environment with all its power, for use in getting past the first bubble, then scratch the system and deploy for production? What is the danger here? I know plenty of people, some of them vendor product engineers, who would be happy to validate such a 'scratch' so that the production system arrives with nothing but its originally deployed default environment.
Yet another philosophy is that we would pre-configure the machine for production deployment, but nobody likes developers doing this kind of thing in a vacuum. They would rather see deployment/implementation scripts that "promote" the implementation. I'm a big fan of that, too, for the first and every following deployment. That's why I would prefer we used the production-destined system to get past the first-bubble-blues, then scratch it, and get the original environment standing up straight, and only then treat it as an operational production asset.
Most projects like this have a very short runway in their time-to-market, and we do a disservice to the hard-working folks who are doing their best to stand up this environment. They need all the power they can get, especially when they enter the testing cycle.
And for this, it's an 80/20 rule for every technical work product we will ever produce. Take a look sometime at what it takes to roll out a simple Java Bean, or a C# application, or a web site. Part of the time is spent in raw development, and part of it in testing. If I include the total number of minutes spent by the developer in unit testing, and then by hardcore testers in a UAT or QA environment, it becomes clear that the total wall-clock hours spent in producing quality technology break into the 80/20 rule - 20 percent of the time is spent in development, and 80 percent in testing.
And if the majority of the time is spent in testing, what are we testing in Enzee space? The machine's ability to load, internally crunch and then publish the data. On a Netezza machine, this last operation is largely a function of the first two. But we have to test all the loading, don't we? And when testing the full processing cycle we have to load-and-crunch in the same stream, no? What does it take to do this? Hardware, baby, and lots of it.
I can say that multiple small teams can get a lot of "ongoing" work done on a 1-rack, no doubt a very powerful environment. I can also say that multiple teams sharing a machine like this for the first-bubble effort will gaze longingly at the 4-rack in the hopes they can get to it soon, because so much testing is still before them, and they need the power to close.
What are some options to make this work? Typically the production controllers and operators don't like to see any "development" work in the machines that sit inside the production enclosure. They want tried-and-tested solutions that are production-ready while they're running. At the same time, they have no issues with allowing a pre-production instance into the environment because they know a pre-production instance is often necessary for performance testing. Here's the rub: the entire conversion and migration is one giant performance test! So designating the environment as pre-production isn't subtle, nuanced, disingenuous or sneaky - it accurately defines what we're trying to do. It's a performance-centric conversion of a pre-existing production solution, now de-engineered for the Netezza machine. As I noted, development is usually a nit, where the testing is the centerpiece of the work.
With that, Netezza gives us the power to close, to handily muscle-through this first-bubble without the blues - we only hurt ourselves with "policies" for the environment that are impractical for the first-bubble.
This brings us full-circle yet again to a common problem with environments assimilating a Netezza machine. The scales and protocols put pressure on policies, because those policies are geared for general-purpose environments. There's nothing wrong with the policies; they protect things inside those general-purpose environments. But the same policies that protect things in the general-purpose realm actually sacrifice performance in the Netezza realm. Don't toss those policies - adapt them.
Within a few minutes, Neon and Ruth Guardian arrived at the office of the Architect. The label "Most Recent" was alongside the open door. Inside, they could see a high-back leather chair and the back of a man's head. Beyond him, an array of computer screens in a grid, filled and scrolling with operational information.
Neon knocked and entered. He wondered why such a complex command console was required for such a simple data warehouse. "Mr. Recent?"
The Architect turned around to greet him, "Hello, Neon - I've been expecting you." He clicked his remote mouse toward the wall and several screens blinked. "But I'm afraid that Most Recent is a status, not a name." He then pointed to the floor, "Please remain standing. You won't be long."
Neon glanced toward Ruth and wanted to roll his eyes, but refrained. "I have a few questions," Neon said, pulling up a chair.
The Architect confidently smiled and said, "I suspect that you do, but keep in mind that you are still irrevocably a consultant. Some of my answers will make sense, and some won't."
Guardian leaned into Neon's ear and whispered, "Actually none of his answers will make sense."
The Architect continued with the tiniest smile, "The question you ask first will be the most important to you, but is also the most irrelevant."
Neon paused, then asked, "Why didn't you protect the company and its interests when you designed the data warehouse?"
"Okay, I was wrong - that is a very good question. Let me think about that for a moment."
"Take your time."
"I suppose the question is not about the data warehouse itself, but the outcome of its applications."
"No, it's the data warehouse itself." Neon focused.
"But the data warehouse is practically perfect in every way." He was noticeably uncomfortable.
"It's full of junk," Guardian sighed.
The Architect continued, "Look at the algorithms, the flows, the data management, the sheer muscle in the machinery. Infinitely capable and infinitely scalable." He grinned and laughed to himself, "Null, missing and invalid values are simply scattered anomalies within what is otherwise a harmony of mathematical precision."
"But you're supposed to correct those," Neon challenged, "the data warehouse process has a responsibility to scrub bad data and either exclude it or make it right."
"What if the data doesn't want to be excluded? What if we don't know how to make it right? What if it cannot be made right? What if excluded data finds a way to come back?"
"The data isn't like an animal," Neon huffed, "it doesn't have a personality. It doesn't feel pain or make decisions. It doesn't - " he paused, leaning backward.
"It doesn't," The Architect agreed, "and it isn't." He shared a long, intense eye-to-eye gaze with Neon, "you still don't understand, do you? All data is merely binary one's and zero's recorded on media, and the metadata to describe it is no different. Data or metadata - what is the difference? - it isn't real. It is a collection of magnetic signals that record the semblance of something in the real world"
There is no data, Neon recalled. And if metadata itself is data, then it, too, is not real. There is no data, no metadata. What then - is reality? "This is just a philosophical crock," he concluded.
"When data flows through the systems," The Architect explained, "it is not really flowing, only signals representing the data are changing in memory, organized such that the mathematical states cascade in value from one machine to the next - but it's not like water. It doesn't have - physicality."
It's not real in the physical sense, Neon's brain was on fire, but it represents reality, and affects reality - like - sorcery?
"He's doing it too," Guardian said, "talking in circles and not accepting responsibility."
"Don't reduce," Neon focused, "the data is junk, no matter how you choose to describe it, or how capable the technology is. Magnetic state leaves one place and lands in another, and you took no responsibility to make it right - is that about it?"
"Anomalies in the source data reduce its quality," the Architect asserted, "to a level that is beyond correction, but not beyond control."
"They are beyond neither," Neon corrected, "correction and control are simply a function of diligence."
"Diligence and control are easy," he said, "but correction is the realm of the data stewards. What if I correct the data? From what incorrect state to what correct state? And who decides? Who controls?"
It suddenly dawned on Neon that a major missing piece had nothing to do with the architecture, data, or technologies involved. It had to do with governance. No controlling authority existed to control quality of process or result.
"But you're not controlling it at all. Data quality is not beholden to anomalies," Guardian pointed out, "Increasing the quality - eliminating junk - is a deliberate act of will."
"Choice?" the Architect clarified, "is not relevant. The data flows. The programs run. Everything is doing what it is supposed to do. Everything is fulfilling its purpose."
"And that would be?"
"This data mart here," Neon pointed to a screen, "What is it's purpose?"
"Interesting - ", the Architect raised an eyebrow," you asked that question much sooner than any of the others. Impressive." He cleared his throat and continued, "the mart was designed to deliver business facts and dimensions."
"He didn't answer your question," Guardian said, "ask it a different way."
"I don't decide purpose any more than you choose functionality," said the Architect, "Only the users know the purpose. Why do you need to know?"
Neon was growing impatient, "And who are the users of this data mart?"
"The warehouse has developed over the years through several iterations," the Architect continued. "In each case, the basic data remained the same, only the architects and technology changed. We stabilized the staff and the technology, so now all things are safe from harm."
"All things," Neon said - "like - "
"Jobs, careers," the Architect said, "and their security. So we are not beholden to fate. You do understand fate, don't you Neon?'
"But the data -"
"The data supports our existence, but is not the reason for it."
"The technology is about to change. We're tossing out the hand-crafted scripts and replacing them with a framework environment." Neon asserted.
"Netezza?" the Architect rubbed his chin, "will change nothing. We will implement it in the same manner as all before it."
"The changing of the technology might change more than you think. It may change how you approach the problem. And reduce anomalies to zero."
"A problem is simply an opportunity without a solution. The technology serves us, and we are its master. What is the significance of technology without the technologists?"
Neon recalled his conversation with Miro the Virginian, and muttered to himself, "the slaves."
"Pardon?" the Architect queried.
"Change," Neon observed, crossing his arms, "the problem is change. Your architecture is not adaptive, so cannot account for change. It exists with the fear of change, not the adaptation to it. Your technologists are slaves to their own creation - and you are their slavemaster."
Ruth's eyes widened. She'd never seen a consultant speak to the Architect this way before.
Neon continued, "Hmm. From where I'm sitting, the only thing more significant than the data warehouse itself is -"
"Me," interjected the Architect. "Data changes shape, is delivered different ways, but we continue."
"I was about to say that the only thing more significant than the warehouse is its failure," Neon sighed, "to preserve the security and integrity of the data and fulfill known purposes of the users."
"Users, schmoozers," the Architect snickered, "Developers implement a data warehouse. Users get what they ask for."
"And not what they need?"
"Who can say what they need?"
"I can," Guardian offered, "they need to be successful. The data isn't helping them. You seem to think the data is secondary."
"Secondary only to my own importance, which cannot be questioned."
"I am an engine of rhetoric. I ask rhetorical questions that are not meant to be answered, and I give rhetorical answers that are not meant to be questioned," he stared down his inquisitor, irritated and unamused. "I am the Architect."
"Do you really?", the Architect leaned forward, now oblivious to the screens flashing all around him, "and what exactly do you see?"
"Someone on his way out," Neon said, rising, "you have a choice. Pick the door that cooperates with me and save your skin. Or pick the door that defends this nonsense and - "
"The loss will be for Red Corporation either way," the Architect said, "the loss of a sublime data architecture in favor of something simple - and maintainable by simpletons," he cleared his throat, "or the loss of very important staff and the business knowledge between their ears."
"It can be better - "
"Optimism," The architect observed, "your greatest strength - and your greatest weakness."
Neon sighed, "I'm a realistic optimist. I think things can't get any worse than they are now - "
Neon turned to leave and Ruth followed.
"I've built a lot of these," the Architect called after him, "and I will build many more. I've gotten very efficient at it!"
Neon and Ruth made a quick exit. Walking down the hallway back to their offices, Neon wondered what Ruth's role in this mess had been. "Why didn't you stop them?"
"I wasn't here," Ruth said, "to guide the effort." Ruth thought about it for a moment and said, "The leaders need good data to make sound decisions. It protects their careers, the company and ultimately my job and the reason for it. I hunt for the bad data and kill it. I hunt for the good data and preserve it."
"So your something of a data hunter?"
"More like a data protector."
"We shouldn't borrow lines from another script."
"Oh, sorry, couldn't resist. In any case, I protect things."
"I protect, " she paused for effect, "that which matters most."
Neon browsed the list of strange and cryptic entries rolling on the screen. "These are back doors" he said under his breath.
"Yes," said Ruth Guardian, the data quality expert for Red Corp, "Each one represents a potential hole in security for private information."
"It's worse than that."
"If this data content gets to the desktop of a decision maker," he sighed, "it could take the company in a dangerous direction."
Her eyes widened.
"And if the wrong or incomplete data appears on the customer website, the customer will assume - and rightly so - that you're not protecting their information."
"Who's in charge of applications here?"
"His name is Miro," Ruth informed, rising, "Follow me." They exited and moved quickly down the hallway. "He's was born in Virginia, but I think he spent some time overseas in his youth."
"Miro the Virginian doesn't talk like a Southerner, then?" Neon smiled.
"Maybe the South of France," she laughed.
Miro was standing on a platform watching three teams of application developers, slinging code and user screens with abandon. Miro looked like the conductor of - some strange orchestra.
"Need some of your time," Neon approached behind him.
"I can take time," Miro said casually, "if I don't take the time, how will I ever have the time?" He laughed at his own joke.
Neon peered over the shoulder of an application programmer - the complexity of the environment was mind-crushing. But something was odd. The complexity seemed to be - artificial.
"You know why I'm here."
"Yes, your arrival was heralded in company newsletters," Miro said, "but the question is - do you know why you are here?"
"I am here to reduce instability," Neon said.
"But that is a task, not a goal. What is the reason for you to do this?" He smiled and turned away, "You are here because you were sent here by the company leaders. They sent you, and you obeyed. This is the reality."
"The reality is that the environment is unstable."
"No, the only reality - is causality. Dirt breaks the screens, non?" Miro waved his hand, "We have dirt. We have screens. We have to clean the dirt. More dirt, more breakage. More code, less breakage. It's a cause and effect. Dirt is the cause. It creates one of two effects - more code or more breakage. Cause and effect."
Neon held up his hands, "I get it, alright? You have to harden the presentation layer because of all the dirt and disconnection in the data layer - "
"Watch this," Miro smirked as delivery people entered the room with boxes of pizza and two portable soda fountains. They quickly refilled the tankers on each developer's desktop and left the pizza in the middle of the room. "On the hour - every hour."
One of the developers spun around, popped open the first pizza box and hungrily wolfed down two big pieces. He washed it down by gulping his soda loudly.
"You see, " Miro pointed out, "the food and soda are like opiates. It doesn't matter how much work they have, as long as they are fed and watered." He held his hand to his ear and cupped it as though expecting to hear -
BUUUURP! The developer belched and returned to work.
"Satisfaction comes in many forms," Miro grinned, "Cause and effect. The food is the cause and the work is the effect. They don't have to understand it."
Neon recognized them as slaves of the same opiates that once were his master. "All you're doing is dealing with symptoms, not root causes," Neon asserted, "The root cause is bad data - and you've just found ways to manage the symptoms - like over-coding."
"Overcoding?" Miro corrected, "We only code what is needed to make things work. It fulfills its purpose."
Neon held up a finger as if to say hold that thought - then reached down, hit three keys - and all of the screens in the room went BSOD - Blue Screen of Death.
Miro gasped, the developers screamed and began a recovery fire drill. Miro quickly composed himself, raised an eyebrow and said "Okay, you have some skills."
"The environment is brittle because of the overcoding. Less code, fewer failure points. Cause and effect. The extra code is not necessary - it's a symptom -"
"Oh? How will you solve the problem otherwise?" Miro challenged.
Neon rolled his eyes and muttered, "Clean the data first."
"It is so simple for you, consultant," Miro huffed, "But you know nothing. The data is what it is. If you think you can change it - you must approach the Architect. He will not suffer the likes of you - "
"But all of the code, and all of the work," Neon observed, "gets in the way of serving what the users really need."
"You know what the users need?"
"Yes, well -"
"No, I don't think you do." Miro asserted mysteriously, "You think you know what they need, but do you really?"
"He's talking in circles," Ruth observed, "welcome to my world."
Neon turned to leave with Ruth on his heels.
"This isn't over," Miro snapped.
"Oh yes it is, " Neon retorted, "The leaders turned this over to me and I will put an end to it."
"I survived your predecessors, consultant " Miro called after him, "and I will survive you!"
Yep, Neon thought, I've heard that before, too.
Ruth quickly joined Neon in the hallway, "Now what?"
"Can I speak to the data architect?"
"Which one? There have been six before you. The most recent -"
"Just let me talk to him."
Neon strode through the crowd, half-gliding as his long black raincoat floated behind him. Orpheus had called - and now it was time to meet with the database vendor. He made his way confidently toward the large black building.
Orpheus joined Neon outside the building and both of them entered. Orpheus led him to a large lobby full of people, signed him into the visitor list, turned and said, "You're here to see Oracle. Whatever they tell you is only for you to know and nobody else. It's a non-disclosure thing." With that, he departed and left Neon amidst the others.
Neon found his way to a chair and sat down to wait quietly. Next to him was a young man who looked like he'd just graduated college. He was tapping feverishly on the keyboard of a well-worn laptop.
"Why are you here?" the kid asked.
"To see Oracle," Neon muttered, closing his eyes, "and you?"
"Metadata," the kid whispered, almost hypnotized by his own work. "If you notice," he tapped several keys and filled the screen with data, "how the information moves when the metadata changes..."
He was right. As he moved his mouse, the metadata changed, controlling the information on the screen almost like water in a pool.
"That's interesting," Neon said, "how are you doing that?"
"Making the information move like that?"
"There is no information," the kid said, "only metadata."
"But there has to be -"
"Think about it," said the kid mysteriously, "When we see an "A" on the screen, it's not really an "A", but the way the screen has been programmed to represent the digital code for "A". And below that, the "A" is a hexadecimal value, which is another representation for binary digits. And we know that binary digits are representations of electrical signals that can only be "on" or "off". Ultimately, each level is a representation of the one below it, so there is no true data, just representations of electrical signals. It's all metadata."
"There is no information," Neon whispered.
"Mr. Neon," an admin called out, and Neon left the young man to his own mysterious devices. "Oracle is down the hall to the left."
Neon entered the conference room where only a small, unassuming woman stood pouring herself some coffee. "Oracle?"
"Shortly," she said, not turning around, "they're in the next room wrapping up. But we can talk for now."
Neon waited in the uncomfortable silence while the woman slowly stirred her coffee.
"You don't drink coffee do you?" she asked nicely.
"Not really, no."
"How did I know that?" she asked, motioning for him to sit down. She answered her own question, "Your eyes are naturally bright. You don't need the stuff."
They moved toward the large conference table while she continued, "Ever wanted to predict the future?" She sipped her coffee and paused for effect. "Or see things others cannot?"
"What do you mean?"
"Predictive analytics. Remember the little girl on the beach in Indonesia? The water rushed out to sea, and everyone thought it was odd, but still they ignored it. She knew a tsunami was coming because she'd studied it in school. She sounded the warning, saved lives - and I noticed - that none of the other tabloid prophets had anything to say about it."
Neon smiled, "That's because you can't really predict the future."
"What if you could?" she smiled, "in the same way that she did - she obviously knew what was about to happen before it happened."
"That's different. That's not predicting, that's analytics."
"In the business world, and in technology, there's no difference."
Neon paused for a moment, "Go on."
"People go about their business, everything is routine. Then something odd happens one day. Maybe it's deja-vu. Maybe something more. Ever been somewhere that the odd things were so glaringly out of place to you, but nobody else seemed to notice?"
Neon was waiting for her to make her point. Perhaps like information, there is no point, he thought.
"Have a seat," she invited, "I've got something to show you."
She produced a laptop, popped it open and said, "Look." Filling the screen were various utility scripts.
Neon's eyes got wide, "Is this what I think it is?"
"Yes," she said, "From the underground." She went on to explain that while all of the scripts had been built for utility, she'd done it under the license of a prior customer. The customer had received value for their money, and she had walked out with the reusable collateral.
"So these weren't developed with a bootleg copy of Linux?" Neon queried.
"Would it matter?" she said, "Either way, they're from the underground. How much would something like this be worth to you?"
Neon thought for a moment, "Functionally I'm sure there's value, but we'd have to retrofit these into our current naming conventions, configuration management - I mean, it would be easier to build them from scratch than accepting them from you."
"But you have to know what to build," she said mysteriously," you have to know ahead of time what you're going to need. What if these contain ideas about things you didn't know you needed, but really do?"
"Cut to the chase, just give me the insights and I'll apply them."
"Insights are valuable too," she said, "like seeing into the future. You will need some of these functions, and you won't know it until it's too late."
"It will never be too late," Neon said, "we can always add it later."
"And if you never do, you'll suffer when there could be a better way. Ignorance will enslave you."
"I'll take that risk."
"That's not the Netezza way," she said, standing, "the Netezza way - adaptive architecture - is to add it to the mix - keep it in the arsenal - knowing that you may need it because so many people often do."
"That's a little extreme - "
"Is it?" she leaned forward, "What must you have in place before developing? What are the top three pitfalls of development? What are the top three missed opportunities if you start wrong? Do you know how to apply metadata so that data doesn't really matter anymore?"
Neon shifted his weight in the chair.
"Like the little girl - predicting the future from prior knowledge. Harness the information, and harness the future."
"Nobody can amass that much information," he shook his head.
"It's not about the information," she said, "it's about how the information is used."
Neon whispered in recollection, "There is no information... only -"
"What transforms raw data into actionable information? Metadata - Behavioral and structural metadata are not the realm of tacticians," she said, "they only understand raw data. Rise above the data - connecting it with metadata - and become an information alchemist. You'll have the persona of a shaman, or a sorcerer, not simply a developer - once you master the power of metadata, and its power over data to create information."
"But data drives the processes - "
"But metadata determines which processes to drive - "
Neon's eyes widened, "I'll take that cup of coffee now."
She retrieved the coffee, using the time to explain, "In the corporate layoffs of earlier years, many IT professionals developed a deep distrust for corporate IT, to the point that most of them had gone underground, to a "1099-culture" of contractors that popped on and off the grid like ghosts. This created an underground of disavowed professionals who sometimes take extreme measures to be successful."
"What do you mean, ghosts?" Neon asked.
"You see those folks out there?" she pointed out a small window to a view of a hallway, "they work here and fulfill a purpose. Sometimes they are replaced by someone who is better or different than they are - happens all the time." She took a long draw of her coffee, "then there are those who don't really have a purpose, or their purpose is lost. They might leave the company and reappear as consultants or contractors."
"Happens all the time," Neon observed.
"Yes, but the legends that you hear - of vampire firms that suck the life blood and money from a company, or ghosts that appear on and off the books to fix spot problems, or aliens that come from far away places", she shook her head, "are part of an underground - or an underworld. In many cases these are people who have lost their original purpose - or who are trying to find one - some are even outcasts who live in the underground and supplant or assist people - " she pointed to the hallway, "like them."
Neon reflected, "Go on -"
"So there's nothing really wrong with being a ghost," she said, "I'm something of one myself. The point is - the underground is a dangerous place for unprotected hex," she grinned.
Neon raised an eyebrow.
"I mean software - and software licenses. There are bootleggers and pirates, crackers and hackers all over the globe. Vendors have wisely chosen to lock down their own licensing model to keep the vampires at bay. A form of garlic, if you will."
"And the vampires reciprocate with a model of entry by invitation only," Neon smiled.
"Or we'd have bootleg AI products," Neon finished, "But this makes it even harder for a consultant to succeed - they always have to be working for a licensed customer in order to get exposure to the product - making them even more desperate to get a copy of it themselves."
"Bingo," she said, taking another sip, "it's a pickle, no doubt about it."
"So how do we solve it?"
"What's to solve? People work their way around it," she said, then explained how one consulting firm put their daytime workforce in play and then replaced them with a night-time workforce to do unit testing for the daytime work force. The night work force was "free" to the company, but they were getting trained in a live environment.
"So they found a way," Neon leaned back in his chair.
"They always do," she said, sipping her coffee and maintaining a long silence.
"I interviewed a fellow the other day," she began, "for the third time."
"Why so many -"
"Wasn't my intention," she smiled, "he just used different names each time. The third attempt he had someone in the room with him coaching him on the answers. Desperate times call for desperate measures." she took a long sip of the coffee. "It's one thing to have the answers - it's another thing to actually know them."
"Seems like it would be easier to get a job somewhere else doing something else?"
"Where else?" she said, "and what else? The tactical underground lives for the information - bootleg or otherwise. Anything involving information will always give them work to do. They live for the "I", so to speak, the "I" in IT".
"But you said there is no information- "
"Bingo. They are living for something that is not real. The information will always change, faster than you can blink."
"So when we say there is no information, it's not that the information isn't there, only that it's passing through so quickly - "
"Like water in the plumbing. The plumbing is the metadata - it carries the water - regulates it - manages it. Which is more important, the water or the plumbing - and who makes more impact - the plumber or the one who is constantly focused on the water?"
"Here's another dilemma," she continued, "a team with ill-trained Netezza people arrives on site and does damage trying to understand how to make the technology fit into how they do things. It's how they do things that's more important to them, because they are predicting they may need a backup - like another technology and another team - to backfill if they fail."
"They are predicting their own failure," Neon said.
"Predicting is hard to do," she smiled, "but planning it is not. You can't fool me - some teams do such a bad job that I can't be convinced their failure wasn't deliberate. Netezza is so easy to use it would be like failing to cross a country road. You have to trip yourself a bunch of times - deliberately - you see what I mean?"
"Sounds almost - insidious."
She sipped her coffee, "You've seen the so-called support systems in the underground - Is this sort of like the blind leading the blind - or perhaps the blind misleading the blind?"
"Never thought about it that way, " Neon mused.
"Think about where you are now," she continued, "I bet you'd like to learn more - can you share information like: How much did you spend on your development staff? Your consulting staff? That much? That little? What was the outcome? How long did it take? that long? That quick? How do you schedule work and scope projects? etc. etc. " she waved her hand in the air, "the list goes on and on - "
"What's the answer?"
"Collaboration," she said slowly, then paused, "and someone to make it happen." She leaned forward - "a facilitator. In gaming systems, it's a sprite - something that exists only to bridge the player into the game." She leaned forward, "the real game."
Neon's eyes sparkled, "So if end users collaborate on each other's work, they could trade ideas and watch each other's backs." He stopped himself, "But that is a huge amount of information -"
"You don't learn very fast, do you?" She smiled. "It's not about the information, it's about the facilitation. As far as you're concerned, they need a broker. Someone to bridge them into the game."
Neon's cell phone beeped and he turned around to check the display. It was Ruth Guardian, and it was time to leave. He turned to the woman to apologize, but she and her laptop were gone. On her chair was a business card with the Oracle logo, and on the back was scribbled a cell phone number.
He looked around quickly, and noted that the door on the far end of the conference room was slowly closing, telling him that she'd exited through it. He darted toward the door and opened it quickly.
The hallway was empty.
Musing over his recent adventure with the white rabbit, Neon sat silently, thinking Dilbertesque thoughts - still steaming from a reprimand by his pointy-haired boss (someone who strangely looked like an elf from Lord of the Rings, but without whispering most of his words).
A courier dropped off a package containing a cell phone (with a design that not even Mr. Spock's agent would have agreed to).
The cell phone rang -
"Neon do you know who this is?"
Neon whispered, disbelieving - "Orpheus?"
"Yeeesss. I've been wanting to meet you, but we haven't much time."
"What do you mean?"
"You must quickly take this phone across the building. Don't let anyone see you."
Neon complied, ducking low as he darted through and between the cubicles.
"When you get to cube 4184, look inside. There's a consultant there from one of my competitors working on a project that should have been awarded to me".
"Yes I see him."
"Take him out."
"What?" Neon whispered as the consultant looked into his eyes.
"Okay, tell him to go to the nearest window and jump. I'll be waiting to catch him."
"Wha -?" Neon said again in disbelief.
"Okay, tell him to take a red pill, then stick his finger in the nearest mirror. We'll drop a harness in the water and lift him out."
"You've lost me."
"Just make sure he takes the pill, we'll do the rest"
"Wait - "
"The pill, Neon, and wash your hands afterward."
"Look - "
"No you look, do I have to spell it out? He's a competitor. Keywords are jump. Water. Long walk off a short pier. I'm throwing you some bones here."
"You want to get rid of him?"
"No, not get rid of him, just cancel his contract. Punch his ticket, you know - he'll find plenty of work to do elsewhere."
"Like where else?"
"Where do you want him to go?"
"Out? Work with me here..."
"I don't have time for this - he's got work to do and so do I"
"Yes but his work should be my work - and that's the way it works."
"I'm still lost. He's giving them the same thing you would - in fact exactly the same thing."
"Yes but with a major difference, mine is half the cost for the same value."
Neon thought about this for a moment. "That's impossible," he whispered.
"And in half the time, or less," Orpheus added.
"So where are you?"
"Everywhere and anywhere, but mostly in Chicago."
"No, I mean - forget it. Look, Orpheus, I'm uncomfortable with this conversation..."
BZZZZT! Orpheus sent an electric shock through the phone.
"Ow!" Neon dropped the phone, shook his hand and retrieved it from the floor.
"How's that for discomfort? Now get to it."
"Okay - look - I can't do this -"
"What - next thing you'll tell me is that someone becomes 'special' by doing an action flick with a runaway bus opposite Dennis Hopper and Sandra Bullock. Is that what I'm about to hear?"
Neon was silent. That one really hurt.
"Look, if you hang up the phone, a bunch of those same guys will lock you in a room, put fake flesh-colored putty in your mouth and then let an electronic shrimp-creature crawl into your navel -"
"Your only other way to escape is to jump out on a rickety scaffold in high winds - but there's an Infinity with the motor running waiting for you - 20 stories down."
"We're done here"
"We'll see...." Orpheus snorted, "Just wait until you meet the Mad Hatter".
"What is with you and this Alice In Wonderland thing?"
"Seeya, Alice - "
So Neon slowly walked back to his desk, not understanding just how competitive this business could be. The words of Orpheus rang in his mind as he flopped into his chair.
Charging more money for no better value? Less time and less money for the same result? Could that be possible? If so - the stakes were very high indeed. More importantly, the more that others knew that the value was the same, the less traction the competitor would have. The less traction, the less money, and they would implode. They would have a vested interest in shutting down and silencing anyone who got in their way - or who could reveal their secret.
Within moments, several members of the competitive firm appeared in Neon's cubicle, one of them holding a ball of putty. The following ordeal was horrific.
BEEP BEEP BEEP
Neon awoke from his dream state clutching his stomach. Electronic shrimp creature, indeed. It was all just a dream, wasn't it?
The phone rang, "Yes?" Neon asked. His head hurt and his mouth tasted like - putty.
"Infinity, is that you?"
"Yes, You must meet me downstairs, I have a statement of work for you, and a new contract"
"Does it involve surfing?"
"No, it - "
"Time travel? I don't want to do another time travel - "
"Shut up and meet me downstairs - "
Neon met Infinity at the coffee shop on the first floor. They stood in line, made small talk, ordered their respective cups of coffee and found a table toward the rear. Neon broke the ice, "Whatcha got?"
Infinity took a light sip of her brew and smiled. "There's something you need to know," she said, "that thing about the shrimp creature may have felt like a dream, but just in case, eat some creole tonight to wash it all down. Can't be too careful."
Neon just stared at her. What a strange person.
"Someone wants to meet you," Infinity said.
Neon felt a light touch as he looked up and heard "Hello, Neon," - it was a voice he instantly recognized -
"Yes, and it's a pleasure to meet you, Neon," he smiled, "I've been waiting a long time." He pulled up a chair next to Infinity, who simply smiled, rose, and departed without a word.
"I have two contracts here," Orpheus said, pulling out the portfolios and laying them side by side. "They are both very difficult projects, with especially difficult managers in charge of them. Both of them are a real pill."
Neon scanned the first several pages of each statement of work.
"These two projects, they are very similar, yes?" Orpheus asked.
"Practically identical, except for one thing," Neon flipped to the back pages of each - "The Blue project says it will take 10 months with 10 people. Yet the second one, for Mr. Red, will take 5 months and 5 people. How is this possible?"
"Which one do you believe?" Orpheus asked, "The Red or Blue?"
"Do you believe in fate, Neon?"
"I know precisely what you mean. Why not?"
"Because I am the master of my destiny."
"Is that really true? Think of the project you are on now - are you its master, or its servant?"
"Well, I - "
"Do you answer support calls in the middle of the night? Is the environment unstable? Do you feel the wheels could fly off at any moment?"
"Well, sometimes but - "
"And this feels like control to you? It sounds like the warehouse is your master, and you are its servant. You are not in control - your fate is tied to the health of the fact tables and the Service Level Agreement."
Neon sank in his chair, "go on."
"There is a question in your mind, Neon," Orpheus said ominously, "like a splinter - a question you cannot answer yourself. What is the question?"
"There is only one question," Neon affirmed, "and only one way to solve the same problem in half the time - without killing myself."
"What is the question?"
"How do I effectively apply an adaptive approach?"
"Almost - "
"Okay, how do I effectively apply an adaptive approach with PureData For Analytics?"
"Bingo," Orpheus laughed, "Next, we'll go up on the rooftop and jump from building to building."
Neon's eyes widened, "what?"
"PSYCHE!" Orpheus pointed his finger, "are you always this gullible? No - but you're right. The only way to produce the same quality in the half the time and half the people - creating a more stable and resilient environemnt - is with an adaptive model."
Neon glared. What a strange person.
"So which will it be?" Orpheus added, "the Red or the Blue?"
Neon held both projects in his hands, "If I pick the Blue, I get more revenue and put more people to work. If I pick Red I get less revenue and fewer people working."
"Yes, yes," Orpheus was excited - Neon was catching on.
"So as a consultant I am predisposed to pick the Blue project."
"But as a master of destiny you are predisposed to pick the project that will lead to mastery of the data warehouse, rather than being its servant."
Neon stared at the two projects, then took a big gulp of coffee.
"The other approaches - we'll call them application-centric - are a dream world," Orpheus continued, "designed to control and master the architect and all his developers. It turns a good developer into this," he held up a tiny cage with a rock inside it.
"A pet rock?"
"It was the best I could do on short notice - work with me. It means you are the servant of the warehouse, instead of its master. The warehouse should be in the cage, not you. Take the Blue project, and you will remain a servant."
"And the Red project?"
"I'll show you how deep the rabbit hole goes," Orpheus grinned.
"Again with the Alice in Wonder -"
"Okay, Okay, I'll show you how the adaptive approach builds automatic resilience into the design."
"Red or Blue, my choice?"
"Free will, Neon."
"No, that's a line from another script I'm looking at."
"Oh, okay, well then - you are free to choose. And remember," Orpheus leaned close and whispered, "I am only offering you productivity." He finished his coffee, "and the opportunity to be the master of your destiny."
Neon downed his coffee also, snatched up the Red folder and said "I'm in."
"Good," said Orpheus, "follow me please."
They made their way to the street where a stretch limousine awaited. As they approached, Neon noted his distorted image on the vehicle's smoked glass window.
Once inside the vehicle, the driver said "Hello, Neon."
"He knows me," Neon quizzed Orpheus.
"Zephyr is my best architect," Orpheus laughed, "he knows the adaptive approach better than anyone."
"Stick with me Dorothy," said Zephyr, "and Kansas will be bye-bye."
As the car moved forward, it meandered through traffic, passing a black van, and Neon could see his reflection again in the car window glass - from the inside. It was crystal clear now. "Can you see that?" He said.
"The reflection?" Orpheus smiled, "depends on which side of the glass you are on. From the outside, it's distorted - but on the inside it's a clear picture." He leaned forward to Neon, "More to the point - the image on the outside was of a servant. Now its an adaptive architect. Things will be clearer every time you view the looking glass."
"Again with the Alice in - "
"Okay, okay - I'll put it like this: One day you'll look in the mirror and see what you've become - an adaptive architect - and you'll no longer be a servant."
Neon was fixated on the window.
"CopperTop?" Orpheus asked, handing him a battery for his beeper.
"Energizer," Neon said flatly, "Rabbits are a theme with me. It's personal."
Neon had followed the girl called White Rabbit to the rave party. Now standing against a wall, alone and his paranoia rising, he wondered what he could have been thinking. If she's the rabbit, then I'm in the rabbit hole. There's no telling how far down this goes.
A dark-haired athletic woman with a "don't try it" expression sauntered towards him and said, "You're Neon."
"How do you know that name?" Neon crossed his arms and glanced around, feeling exposed.
"I'm Infinity," she said with a curt smile.
"The one that cracked the NSA mainframe?"
"That's the one."
"I thought you were a guy."
"I get that a lot," she moved closer, and closer still, then leaned into his face and said with a sultry tone, her breath against his cheek, "Why are you here, Neon?"
"I'm not sure."
"Of course you are," she whispered into his ear, "You want the answer to the question."
"Do you know the question?"
"What the heck is a zone map?"
BEEP BEEP BEEP the alarm clock awoke Neon from his restless nightmare.
- excerpt from The Waitrix
I fell asleep dreaming, even as I hit the pillow, glad to be free of that whining analyst on the third floor. Every time he sends me a report, it turns out to be worse than wrong. It's too wrong to be worth the paper it's printed on, yet his supervisors say he's the next Einstein. When I ask him personally about the paper's details, he just scratches his head. When I ask about his conclusions, he scratches it again. Whenever other people ask him for things, he just bobs his head like everything is under control. So between bobbing his head and scratching it, I nickname him Bob Scratchit. Many can appreciate the irony.
But I had acted on his analysis. Woe came to me in waves for many weeks. Nobody wanted to sit with me in the lunch room, return my phone calls or give me access to the data necessary to see what-went-wrong. I became a scourge and a pariah. I blame Scratchit. He started it. My partner had also listened to and acted on Scratchit's advice. He was practically perp-walked out the door and doesn't look particularly good in an orange jumpsuit. I need to watch myself, because Scratchit always has an agenda, and even if his conclusions are wrong, I get the impression that he's toying with me. And that's kinda scary.
Later that evening I startled awake as my laptop screen flashed from the other side of the room. I rose to inquire, and on the screen was the smiling face of my partner Jake, his carefully coiffed Bob Marley locks draping his shoulders. He hadn't changed much, but looked a little tired, and I wondered how he was able to find me and impose himself on my evening, like a ghost from my workplace.
"Knock, knock!" Jake said with a grin, "Anyone home?"
I touched the camera-On button, "Hi Jake, yes I'm here."
"Ahh, good to see you," he said, "Knees'er shakin'?"
"I can tell your knees-er shakin'. Don't be scared."
"Not scared. Just annoyed. It's late, can't this wait?"
"You haven't changed. Still keep puttin' things off 'till later. That's gonna change, my friend."
"I'm sending you three WebEx appointments."
"What are they about?"
"You'll see," Jake said mysteriously. "Come here every night by 8pm for the appointments." He
seemed to pause, pain written on his face, "Don't put it off, and don't take my word for it." he beeped out.
In that moment, the clock chimed for 8pm and startled me. The WebEx session fired up to reveal
the smiling face of-
"Jack Blaines," I said before he could get a word out.
"That's right, mate," the native Australian grinned broadly.
"But, I recall you from what, twenty years ago, when data warehousing was barely a concept and -"
"That's right mate, we were all part of the inception."
"You haven't aged a day - "
"Nope, and where I am, data warehousing hasn't moved an inch either. Here, we had to make the most of what we had."
"The most of data warehousing past, you mean."
"That's right, do more with less. In this time, we don't have any network bandwidth because, well, we don't have any networks. But we still have to do bulk processing and analytics."
"The hard way."
"Here's a takeaway - software only guides the hardware, but the hardware does the work. I'll bet that never changes even for you."
"Well, if I'm honest - "
Just then a series of pictures started flashing across the screen, of machines and technologies that had long ago entered the fossil graveyard of computing history.
"Look - gotta go - hard stop at midnight. see-ya."
"Wait, I have some more quest - "
But the WebEx evaporated, leaving only a screen with the usual icons. Why would Jake connect me with Blaines? Data warehousing of yesteryear wasn't going to help me.
As I drifted off to sleep, the screen fired-up again and another WebEx started. This time it was the smiling image of Hunter Hardwick, an aficionado of all-things-warehousing.
"Hell0 my friend," he called to me, "I'm so happy we could be together for this celebration." he then raised a glass filled with a bubbly drink.
"What are we celebrating?"
"A toast!" Hardwick ignored me, "A toast to the present-state of data warehousing."
"Hasn't data warehousing sort of - you know - been eclipsed by Big Data?"
"Oh not hardly! The Big Data technologies are still a valuable part of the warehouse, but the warehouse itself isn't going away."
"Wait a second, you're speaking of the warehouse as something larger, not just a data store."
"Well of course, the warehouse is a living environment, not unlike a brick-and-mortar warehouse, with workers, forklifts, loading docks, storage carrels and operational protocols to tie it all together."
"Okay, I see that, but why are you making a toast to data warehousing's present-state? Wouldn't you want to toast to something more profound?"
"What's not to like? Data warehousing appliances, fluid queries, various options for large-scale storage, big-ticket network bandwidth - the world is my oyster. Glasses up!"
Something overcame me and my eyes grew heavy. As much as I wanted to celebrate the present, my body could not go on. I must have slept for hours before the computer screen fired-up again and started flashing. The third WebEx foretold by my former partner Jake, now attempted to materialize on the display.
"Are you there?" I heard my own voice, am I still dreaming? "Is this thing on?"
"Yes I'm here," I said groggily, wondering if this was a recording of me.
"That's what the bad comedians say in Vegas, you know," my voice jested.
"Is this thing on?" my voice chuckled, "Hey, I'm calling you from your future," but the signal was breaking up severely, like a bad cell-phone connection, and the image on the screen was full of static. I could see a silhouette of what might be an image of me, but too fuzzy to make out.
"You mean, I'm calling me from my future," I corrected, trying to gather my senses.
"Whatever," my voice laughed, "You're gonna love it here."
"In the future?"
"Sure, you have a lot to look forward to."
"Tell me more," I sat up.
"Well, networks here are all wireless, with practically unlimited bandwidth."
"Uh, no kidding -?"
"Absolutely, With wireless networks, everything is infinitely elastic. I can transfer terabytes in micro-seconds. Wires are pretty much a thing of the past."
"Well, except for power I suppose."
"Nope, wireless power too. Tesla finally won the argument. Changed the entire architecture of data centers. Now we don't have raised floors, we have raised machines. We can use the horizontal and vertical space in the building. IBM's Websphere machine really is a sphere now, most machines now use their entire outer-cabinet real-estate for something. No more boxes and racks."
"And storage is all in the cloud, nobody has real data centers hosting storage anymore."
"So centralized data centers all over the world?"
"No, as storage became denser and faster, we can put a petabyte on a flash stick. No big deal."
"To your point though, with the use of quantum entanglement and basically tossing out all of Einstein's constraining math, we can transfer data instantly. Storage isn't really a priority anymore."
"Pretty cool," I grinned, "But unlimited power, how -"
"No gas-powered cars anymore either. They all run on electromagnetic quantum engines. Basically generate their own fuel by capturing particles that leap-in-and-out of existence."
"That's - well, incredible."
"Yeah, when I go to work, I fly-by-wireless in my hovercar. No need to refuel - and it gets all its power wirelessly too," my voice yawned, "So yesterday."
"How far away is work?"
"I still live in Texas and can be at our Colorado center within minutes."
"But that's - "
"And I visit our site at Maria Tranquility within the hour."
"Funny, naming a site after a place on the Moon."
"You named a site after a location on the Moon."
"That's because it's actually on the Moon."
"Oh, no way!"
"YES way, dude. Dozens of shuttles and private craft leave every hour. Our biggest problem now is traffic control to the lunar surface."
"And data warehousing, well, it has undergone several facelifts. What was impossible in your time is literally yesterday's news."
"That's a pretty big claim."
"The boast of data warehousing's future was for real," my voice said, "actually, the folks in your time thought they were being visionary, but actually underplayed, or should I say, underestimated the innovators."
"What are your saying?"
"Their predictions came true," my voice said, "but fell short. Their boast of data warehousing's future fell radically short of what actually came to be."
"Like I said, you have a lot to look forward to. But when you think about it, even with predictive analytics, if you can see the future, or even accurately predict it, you've already changed it."
"If you know the outcome, doesn't it affect your behavior? If a corporate exec is very sure of the outcome, won't this affect how he markets his products, or would be just market them the same-old-way because its always been that way? And once he makes this change, it can radically affect the outcome of the prediction, with one problem - the future's not here yet so we won't know if we were right. And if it turns out wrong, we can't go back and replay it because we were the ones who changed the future by acting on it."
"So you're saying that if I can see the future, it's already in my past?"
"Something like that."
"I'd like to think - "
"Stop just thinking about it and start knowing it. We know that if someone tells us that a certain lottery number is a winner, won't we go out and buy a ticket when we wouldn't before? And when that corporate exec unleashes his campaign upon the masses, won't it affect the customer behavior? His predictive model cannot tell anyone what will happen when he acts on the predictions."
"You're right, it's not a problem here. Not yet anyhow."
"Look," my voice said, holding a square object against his silhouette, "Can you see this?"
"No, it's too fuzzy."
"It's a book," the voice said, "Changes a lot of things."
"Who's the author?"
"Someone you know," my voice said mysteriously.
"When will it arrive on bookshelves?" I implored, "And what's the title? I'd like to get a copy."
"That's kind of up to you, But there are people nipping at your heels even now. Those ideas you had for the future really work. Don't let someone else publish them before you do."
"Wait, what are you saying? Is that my book?"
"It could have been. You trained the author on everything in it. You just delayed publication because you thought you had more time."
"But I don't, you're saying."
"These ideas are already in your head, and this author was over a decade late in delivering them," my voice paused, "Why don't you just deliver them now?"
"Where do I start?"
"Start where you would start," my voice was breaking up even more, "Good luck."
The image faded and my voice trailed off. Now the screen was filled with static. I stared at it for long moments, trying to assimilate what I had just experienced.
The most of data warehousing's past, the toast of its present, and the boast of its future - were these just a dream, or should I -
The phone rang.
"Hello, this is Jim Stone, how are you today?" said the pleasant voice of the robo-caller.
I know that if I say anything except "fine, how are you?" the robot will politely say "Goodbye" and hang up.
So I scream into the receiver, "They have me trapped in a box filled with scorpions! You have to help me!"
"Goodbye," it said pleasantly, and disconnected.
BEEP BEEP BEEP
The alarm clock woke me up.
For the past several months I have been diligently updating and upgrading the original 2008 Netezza Underground to address the many features of TwinFin, Striper and other offerings from IBM. I have recently been notified that it has passed final edit and is available on Amazon.com.
All I can say is "whew!" and many thanks to those who helped put it together. It's been a whirlwind.
Here is the URL
When I started the project I realized that a big part of the original book remains timeless. I didn't leave it "as is" though - practically every page and all the chapters have new material, case studies and such. I peppered the book with some additional graphics, since the intrinsic points require a bit more reinforcement than mere words can provide.
The original chapter on "Distribution Stuff" is now "Performance Stuff" and is twice as long, covering the various aspects of setting up tables, troubleshooting, page-level zone maps and a lot more.
Fortunately, this time around there is a better mechanism to contact me if you have questions or want to report any errata (hey, it could happen!) - you can reach me through this blog, on LinkedIn, or directly through my primary email address at Brightlight Consulting:
In a Netezza shop experiencing some performance stress with their machine, we ask the usual questions about the machine's configuration and its functional mission. Ultimately we pop the hood to find that the data structures and the queries are not in harmony. For starters, the structures don't look like Netezza structures - at least, not optimized for Netezza. We receive feedback that they "just" moved the data from their former (favorite technology here) and ran with it. They received the usual 10x boost as a door-prize and thought they were done. Lurking in their solution, however, were latent inefficiencies that were causing the machine to work 10x to 20x harder to achieve the same outcome. And their queries were likewise 20x inefficient in how they leveraged the data structures.
More unfortunately, the power of the machine was masking this inefficiency. It's like the old adage: when a person first starts day-trading on the stock exchange, the worst thing that can happen to them is to be successful. Why? It puts a false sense of security in their minds that gives them permission to take risks they would never take if they knew the real rules of the game. The 10x boost for moving the data over is a "for-free" door prize, not the go-to configuration.
What are the real rules of the Netezza game? The first rule is that extraordinary power masks sloppy work. Netezza can make an ugly duckling look like a swan without actually being one. It can make an ugly model into a supermodel without the necessary adult beverages to assist the transformation. It can make sloppy queries look like something even Mary Poppins would approve of, practically perfect in every way and all that.
What's lurking under-the-hood is nothing short of a parasitic relationship between the model, the queries and the machine. We received the 10x boost door-prize and think we have succeeded. But we have only succeeded in instantiating the model and its data into the machine. We have not succeeded in leveraging the entire machine. And, uh, we paid for the entire machine. So why aren't we using it?
In our old environment, the index structures worked behind-the-scenes, transparently assisting each join. Our BI environment is set up to leverage those joins so we get good response times. The Netezza machine has no indexes so the BI queries (whether we want to admit it or not) are improperly structured to take advantage of the machine's physics.
"But that's how we've always done it..." or "But we don't do it that way..."
The short version: the former solution and its (favorite technology here) are casting a long shadow across the raised floor onto the Netezza machine. People are forklifting "what they know" to the new machine when very little of it applies.
For example, in a star schema, the index structures are the primary performance center. The query will filter the dimensions first, gather indexes from the participating dimensions and then use these to attack the fact table. The engine does all this transparently. The result is a fast turnaround borne on index-level performance. These are software-powered constructs in a general-purpose engine. The original concept of the star-schema was borne of the necessity of a model that could overcome the performance weaknesses of its host platform. It is in fact an answer to the lack-of-power of commodity platforms. In short, just by configuring and loading a star schema on a commodity platform, we get a boost from using it over a more common 3NF schema.
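To make that concrete, here is a sketch of the kind of star-join described above, with hypothetical table and column names (f_sales, d_date and d_store are illustrative, not from any particular site). On an index-driven platform the optimizer satisfies the dimension predicates first and uses the collected keys to probe the fact table's indexes. The SQL is held in a small Python script only so it can be printed or handed to whatever client you prefer:

```python
# Hypothetical star-join; table and column names are illustrative only.
# On an index-based engine, the dimension filters below are resolved first,
# and the resulting keys drive index probes against the fact table.
star_query = """
SELECT d_date.calendar_month,
       d_store.region,
       SUM(f_sales.sale_amount) AS total_sales
FROM   f_sales
JOIN   d_date  ON f_sales.date_key  = d_date.date_key
JOIN   d_store ON f_sales.store_key = d_store.store_key
WHERE  d_date.calendar_year = 2012
  AND  d_store.region = 'WEST'
GROUP  BY d_date.calendar_month, d_store.region
"""

print(star_query)  # submit through nzsql, ODBC, or any other SQL client
```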
The Netezza machine doesn't have indexes. So the common understanding of how a star-schema works doesn't apply. At all. Don't get me wrong, the star-schema has a lot of functional elegance and utility. It does not, however, inherently provide any form of performance boost for queries using it. It can simplify the consumer experience and certainly ease maintenance, but it is not inherently more performant than any other model. In fact, using such a model by default could hinder performance.
Why is this?
The primary performance boosters in Netezza are the distribution and the zone map. Where the distribution and co-location preserve resources so that more queries can run simultaneously with high throughput, zone maps boost query turnaround time. They work in synergy to increase overall throughput of the machine. How does installing a star-schema inherently optimize such things? It doesn't.
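As a thought experiment (not Netezza code), the little Python sketch below mimics the two mechanisms: rows are spread across data slices by hashing a distribution key, and each extent keeps a min/max "zone map" for a column so a scan can skip extents that cannot contain the filter value. The slice count, extent size and column names are made-up illustrations, but the skipping behavior is the point - a restrictive predicate on a naturally ordered column touches only a handful of extents per slice.

```python
# A toy illustration (not Netezza code) of hash distribution plus zone maps.
# Slice count, extent size, and column names are hypothetical.
from collections import defaultdict

NUM_SLICES = 16      # e.g., one data slice per CPU/disk pair
EXTENT_SIZE = 1000   # rows per extent, purely illustrative

# Fake fact rows; sale_date increases with load order, as it often does in practice.
rows = [{"customer_key": i % 5000, "sale_date": 20200101 + (i // 100)}
        for i in range(160_000)]

# 1) Distribution: hash the distribution key to choose a data slice.
slices = defaultdict(list)
for row in rows:
    slices[hash(row["customer_key"]) % NUM_SLICES].append(row)

# 2) Zone map: per extent, record the min/max of a column as the data lands.
def build_zone_map(slice_rows, column):
    zone_map = []
    for start in range(0, len(slice_rows), EXTENT_SIZE):
        extent = slice_rows[start:start + EXTENT_SIZE]
        values = [r[column] for r in extent]
        zone_map.append((min(values), max(values), extent))
    return zone_map

# 3) A restrictive predicate only scans extents whose min/max straddle the value.
target = 20201501
scanned = skipped = 0
for slice_rows in slices.values():
    for lo, hi, extent in build_zone_map(slice_rows, "sale_date"):
        if lo <= target <= hi:
            scanned += len(extent)   # extent must be read
        else:
            skipped += len(extent)   # extent skipped entirely via the zone map

print(f"rows scanned: {scanned:,}  rows skipped: {skipped:,}")
```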
Can we use a star-schema? Sure, and we should also commit to distributing the fact table on the same key as the most-active or largest dimension (they are often one-and-the-same). This will preserve concurrency for the large majority of queries. A better approach, however, is to specifically formulate a useful dimensional model that leverages the same distribution key for all participating tables. Common star-schemas do not do this by default, and if only two tables are distributed on the same key, all joins to the other tables will be less performant. They will have to "broadcast" the dimensional data to the fact table. Clearly, having all tables distributed on the same key will preserve concurrency, but this alone doesn't give us the monster-boost we're looking for. Distribution might get us up to 2x past the door-prize performance we get from moving to the machine. Zone maps are known for getting us 100x and 1000x boosts.
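As a minimal DDL sketch of what that co-location looks like (with hypothetical table and column names), both the fact table and its largest dimension name the same column in their DISTRIBUTE ON clause, so their join keys land on the same data slice and the join runs co-located instead of broadcasting the dimension:

```python
# Minimal DDL sketch; table and column names are hypothetical.
# Distributing both tables on customer_key makes their join co-located on each
# data slice. A dimension distributed on anything else gets broadcast or
# redistributed at query time.
colocated_ddl = """
CREATE TABLE d_customer (
    customer_key   BIGINT NOT NULL,
    customer_name  VARCHAR(100),
    region         VARCHAR(20)
) DISTRIBUTE ON (customer_key);

CREATE TABLE f_sales (
    customer_key   BIGINT NOT NULL,
    date_key       INTEGER,
    sale_amount    NUMERIC(12,2)
) DISTRIBUTE ON (customer_key);
"""

print(colocated_ddl)  # run through nzsql or an ODBC/JDBC session against the appliance
```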
At one site I watched as several analytic operations remanufactured the star-schema data into several other useful structures, each of which was distributed on a common key. At the end of the operation, these were joined in a co-located manner and the final result came back orders-of-magnitude faster than the same query on the master tables. I asked where they had derived the key, and they explained that it was a composite key that they had reformulated into a single key because their dimensional tables could all be distributed on it and maintain the same logical relationship. Looking over the table structures, they had a "flavor" of a star schema but certainly not a purist star. The question remained: if the existing star schema wasn't useful to them but their reformulated structure was, why weren't they using the reformulated one as the primary model and ditching the old one? The answer was simple, in that the existing star was seen as a general-purpose model and not to be outfitted or tuned for a specific user group. This is one of the commodity/general-purpose lines-of-thought that must be buried before entering the Netezza realm.
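One way to picture that kind of remanufacturing step (again with hypothetical names, and assuming Netezza's CREATE TABLE ... AS form with a trailing DISTRIBUTE ON clause) is a CTAS that collapses the old composite key into a single derived key and distributes the new structure on it, so every sibling structure built the same way joins co-located:

```python
# Hypothetical CTAS sketch: collapse a composite key into one derived key and
# distribute the remanufactured structure on it. Names are illustrative only.
ctas = """
CREATE TABLE f_sales_by_acct AS
SELECT  CAST(a.account_id AS VARCHAR(20)) || '-' || a.region_code AS acct_region_key,
        f.date_key,
        f.sale_amount
FROM    f_sales   f
JOIN    d_account a ON f.account_key = a.account_key
DISTRIBUTE ON (acct_region_key)
"""

print(ctas)  # the same derived key would be used for each sibling structure
```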
This is the primary takeaway from all that: the way we make an underpowered machine work faster is to contrive a star schema that makes the indexes work hard. We forget that the star schema is a performance contrivance in this regard. If we attempt to move this model to the Netezza machine because "it's what we do," then we may experience performance difficulties rather than a boost. A common theme exists here: people do what they are knowledgeable of, what they are comfortable with, and what they find easy to explain, and do not naturally push-the-envelope for something more useful and performant.
In Netezza, the star schema has functional value but (configured wrong) is a performance liability. We can mitigate this problem by simply reformulating the star to align with the machine's physics, and by adapting our "purist" modeling practices to something more practical and adaptable. After all, many modeling practices are in place specifically because doing otherwise makes a traditional platform behave poorly. If we forklift those practices to Netezza, we participate in casting the long shadow of an underperforming platform onto the Netezza machine.
We have enormous freedom in Netezza to shape the data the way we want to use it and make it consumption-ready both in content and performance. We should not move from a general-purpose platform (using a purpose-built model like a star) into a purpose-built platform with a general-purpose model like a star. The odd part is that the star is an anomaly in a load-balancing, traditional database, but is seen as purpose-built for that platform. Exactly the opposite is true in Netezza. The machine is purpose-built and the star is only another general-purpose model that doesn't work as well as a model that is purpose-built for Netezza physics and for user needs.
The worst thing we can do, of course, is think outside the box (the Netezza box). We really need to think inside the box and shape the data structures and queries to get what we want. This mitigates the long shadows. It's just a matter of adapting traditional thinking into something practical for the Netezza machine.