IBM Db2 Warehouse is a data warehousing solution for the private cloud, delivered via Docker containers. It offers the speed and ease of the cloud while letting you deploy wherever you need it, including private clouds and existing cloud infrastructures, as long as Docker containers are supported. Learn how to get started with the trial in a blog by Mitesh Shah, the product manager for this solution.
Database Development with IBM Hybrid Data Architecture
CindyRussell 2700016DC0 9,331 Views
Software development is one of the hot technology topics these days. Let’s face it … software encodes the data and analytics transformation that our world is currently undergoing, and software applications drive database designs. Not only do you need technologies that make it fast and simple to build new applications, but you also need technologies that let you easily store varied data types, such as structured relational data and unstructured JSON data.
Db2 has just released the free Db2 Developer Community edition, which stores relational and JSON data in the same database to reduce the need for multiple databases and make your coding life easy! This new developer edition is fast and easy to download for non-production use. What’s more, Db2 is enterprise-ready, with the key scalability and reliability features you will need when you deploy to production in the enterprise.
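To make the idea of relational and JSON data living side by side concrete, here is a minimal, runnable sketch. It uses Python's built-in sqlite3 as a stand-in database (Db2 has its own native JSON SQL functions, which differ from this); the table and column names are made up for illustration.

```python
import json
import sqlite3

# Stand-in illustration: one database holding relational columns and JSON
# documents together. sqlite3 is used only so the sketch runs anywhere;
# Db2's native JSON support uses its own SQL functions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, details TEXT)")
conn.execute(
    "INSERT INTO orders VALUES (1, 'Acme', ?)",
    (json.dumps({"items": [{"sku": "A1", "qty": 2}], "express": True}),),
)

# One query returns both the relational column and the JSON document,
# so the application does not need a second, document-only database.
row = conn.execute("SELECT customer, details FROM orders WHERE id = 1").fetchone()
customer, details = row[0], json.loads(row[1])
print(customer, details["items"][0]["sku"])  # Acme A1
```

The point is the single storage target: the same table (and the same connection, transaction, and backup story) serves both shapes of data.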
IBM will be presenting a free workshop on October 5, 2017 in Lisbon, Portugal in conjunction with the International Db2 User Group (IDUG) meeting that is being held there. You are invited to join us to expand your knowledge and skills around Db2.
This is an excellent opportunity to expand your skills for free! Learn more and register here: ibm.biz/db2wkshp.
On a recent data warehousing call, the interest in Apache Spark was apparent. dashDB Local includes a deeply embedded Spark engine that provides both fast in-memory processing and key analytics capabilities such as machine learning. We were asked about demos, and we shared this YouTube playlist, a full series of tutorials on using dashDB Local with Spark. It is worth checking out, and you should know it was created by our technical architect for the dashDB Local Spark engine. Enjoy!
We all realize we live in a "now" kind of world, and this is true for analytics. Data has been referred to as the "new oil," a basis of competitive advantage, and therefore insights are needed sooner rather than later. It used to be that data was moved in overnight batch jobs from the transaction system to the analytics system, and reports followed later. However, in a growing list of use cases, insights are needed faster than this. Data volumes have also grown such that some batch jobs run out of night!
Enter HTAP ... Hybrid Transaction / Analytic Processing. HTAP makes real-time analytics possible in the transaction or operational database, providing analytics across events as they are happening so you can affect the outcome rather than learn in hindsight what happened. In addition, HTAP can help simplify the IT environment, because you need fewer separate systems and copies of data to support both transaction and analytics processing.
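A toy sketch can show the core HTAP idea (this is an illustration of the concept, not Db2 internals): the same store serves transactional writes and analytic reads, so an aggregate is current the moment an event lands, with no nightly ETL step and no second copy of the data.

```python
from collections import defaultdict

sales = []                    # transactional event log
totals = defaultdict(float)   # analytic aggregate, refreshed per event

def record_sale(region, amount):
    """Transactional write: append the event and update the aggregate."""
    sales.append((region, amount))
    totals[region] += amount

record_sale("EU", 120.0)
record_sale("US", 80.0)
record_sale("EU", 40.0)

# Analytic read over live data: which region leads on running revenue?
print(max(totals, key=totals.get))  # EU
```

In a real HTAP database the analytic side is backed by columnar, in-memory structures rather than a Python dict, but the payoff is the same: the query sees events as they happen.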
HTAP works with a combination of technologies to achieve these results, including:
You will see these and other technologies in the IBM analytics portfolio, including DB2, with more on the way. You can learn more about HTAP in DB2 in these blogs by our experts:
Machine learning is an advanced analytics technique in which models improve themselves over time, providing continually better analytics and support for decision making. Studies show that companies that embrace data and advanced analytics techniques gain a competitive edge. For technical professionals, this means you must build skills in this hot area -- across both structured and unstructured data.
IBM will be offering the Winning with Machine Learning events in the following cities.
Dates and Registration
These events are recommended to jump start your use of Machine Learning with practical tips on how to get started. You will also hear about IBM’s plans to infuse cognitive capabilities via Machine Learning across our private cloud data and analytics portfolio. There are two tracks in the event. One is for Data Management and the other is for Data Science. Register for one of these events near you.
Software-defined environments can change the way we think about the world of application, integration and middleware. But what are they and how can you get started? Here are a few Q&As to help you quickly understand the technology and what it can do for you.
What is an SDE and why use one when you have traditional environment approaches?
Software-Defined Environments (SDEs) help bring value to your organization through responsive and adaptive solutions. SDEs optimize the entire computing infrastructure (compute, storage and network resources) and can automatically tailor themselves to the needs of the workload. An SDE is a service-based model that abstracts the computing infrastructure, managing and adjusting it based on policies. Without an SDE, you must manually allocate compute, storage and network resources to different workloads, which can be a time-consuming and repetitive task.
The SDE removes manual steps by taking application characteristics, best-available resources and service-level policies into account to dynamically allocate available resources to the various workloads. It can also deliver continuous, “on the fly” optimization and reconfiguration to address changing workload and infrastructure needs. We call this scale out and scale in, and it is achieved through policy-based compliance checks and updates.
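The scale out / scale in decision described above can be sketched as a simple policy check. This is a hypothetical illustration: the thresholds, function name and single-metric policy are made up, and a real SDE controller weighs many more signals.

```python
# Hypothetical policy: scale out when a workload runs hot, scale in when
# it idles, otherwise leave the allocation alone.
POLICY = {"scale_out_above": 0.80, "scale_in_below": 0.30}

def desired_workers(current_workers, cpu_utilization, policy=POLICY):
    """Return how many workers a workload should get under the policy."""
    if cpu_utilization > policy["scale_out_above"]:
        return current_workers + 1    # add capacity for a hot workload
    if cpu_utilization < policy["scale_in_below"] and current_workers > 1:
        return current_workers - 1    # release idle capacity
    return current_workers            # within policy: no change

print(desired_workers(4, 0.92))  # 5  (scale out)
print(desired_workers(4, 0.10))  # 3  (scale in)
print(desired_workers(4, 0.55))  # 4  (no change)
```

Running such a check continuously against observed metrics is what turns manual capacity planning into policy-based, "on the fly" reconfiguration.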
How do containers compare to VMs and SDEs?
Containers, such as Docker, are designed to simplify the packaging and distribution of software. They work above the operating system level of the computing stack and are fast and light at startup. Because they are portable, they can bring together all elements of an application so they deploy quickly, operate independently from other containerized applications and can be easily ported to new locations, for example from development to production or from a vendor like IBM to you.
Containers work within IT policies and allow you to embrace your own data center standards. For example, existing monitoring and security overlays can remain on the host operating system while the application is isolated inside its container. You can fully utilize existing infrastructure capabilities like storage copy and replication services. Existing monitoring tools such as systems management, network monitoring, and even popular cloud management tools, including OpenStack and Kubernetes, can continue to be used. Thus, containers work within the existing processes and guidelines you already have established.
Do containers sound like a virtual machine (VM)? Well, not really. VM virtualization abstracts at the hardware level and creates an entire virtual operating system. You can use VM and container technologies together; for example, Docker containers can be created inside VMs to make a solution ultra-portable.
So what does IBM have to offer in the SDE space?
IBM dashDB Local is the IBM data warehouse offering for SDEs such as private clouds, virtual private clouds and other infrastructures that leverages Docker container technology. It deploys to any environment that supports Docker and is designed to provision a full data warehouse stack in minutes. You can manage dashDB Local on the server or cloud environment of your choice, while maintaining existing operational and security processes.
dashDB Local can be deployed on any supported Docker platform on Linux or Windows with minimal prerequisites. Entry-level hardware requirements start at 8 GB of RAM and a single CPU core, which is suitable for a development/test environment or QA work on your laptop; this comes in handy for data scientists and business analysts as well. On larger servers, such as a 48-core machine with 3 TB of RAM, the dashDB container auto-configures to make the best use of the available resources with minimal tuning. To summarize, dashDB Local lets you make efficient use of the hardware you already have in your data center or in the cloud environment of your choice.
For more information on dashDB Local, visit the dashDB Local page on the public Docker repository to access the free trial. Start by creating a Docker Hub ID, which you will need in order to gain access to the dashDB Local trial.
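As a rough sketch, pulling and starting the container follows the usual Docker shape shown below. Treat the image name, tag, mount point and flags as placeholders, not the product's documented defaults; confirm the exact command on the dashDB Local page on Docker Hub before running it.

```shell
# Sketch only: image name, tag, and flags below are assumptions; confirm
# them on the dashDB Local page on Docker Hub before running.
docker login                                   # log in with your Docker Hub ID
docker pull ibmdashdb/local:latest-linux       # pull the dashDB Local image

# Run detached; a host volume keeps the warehouse data outside the container
docker run -d --name dashdb-local --privileged=true --net=host \
  -v /mnt/clusterfs:/mnt/bludata0 \
  ibmdashdb/local:latest-linux
```

Because the data lives on the mounted host volume, the container itself stays disposable: you can replace it with a newer image without losing the warehouse.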
The data warehouse remains a key part of any data architecture, but as always, the challenge is managing a growing volume of data types and sources and responding quickly to new needs for analytics. This often means using the best database or data warehouse for each set of workloads, data and application needs. Think about the tradeoffs between data, applications and workloads: Hadoop vs. structured data; web clickstreams vs. financial data; marketing insights vs. quarterly reporting.
dashDB Local, part of the IBM Hybrid Data Architecture
IBM Data Server Gateway for OData version 1.0.0 enables you to quickly create OData RESTful services to query and update data in IBM DB2 LUW.
Learn more about this new gateway and how to download and install it here in a blog by Arvind Gupta and Sujan Ghosh.
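To give a feel for what an OData RESTful service over a DB2 table looks like, here is a sketch that builds a typical OData query URL. The host, service path, entity name and columns are all hypothetical placeholders, not the gateway's actual defaults; only the `$filter`/`$select`/`$top` query options are standard OData.

```python
from urllib.parse import quote, urlencode

# Hypothetical endpoint and entity; a real deployment defines its own
# host, service root, and entity set names.
base = "http://localhost:8080/ODataService/odata"
entity = "EMPLOYEES"

params = urlencode({
    "$filter": "SALARY gt 50000",   # server-side predicate on the table
    "$select": "EMPNO,LASTNAME",    # project only two columns
    "$top": "5",                    # limit the result set
})
url = f"{base}/{quote(entity)}?{params}"
print(url)
```

A client then issues a plain HTTP GET against this URL and receives the matching rows as a RESTful response, without writing any SQL itself.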
It's no secret that data volumes and data sources are proliferating. If you think back even 12 months, you probably have more data sources and greater volumes now than you did then. These data types are varied -- more structured data collected from your systems, third-party sources delivered as a service on the cloud, text or data born on the cloud, and more. The traditional data warehouse excels at core reporting and analytics, yet it probably is not the most cost-effective way to store every type of data. In the case of text, your storage choices are more limited, and solutions such as Hadoop are a common fit.
Right workload on the right platform: hybrid gets you there
A hybrid data architecture helps you store all of this varied data based on its importance, its privacy and governance requirements, its original source and the available capacity on your systems. Such an architecture supports a varied data placement strategy by handling data on premises, in the cloud, or both, and it enables management of a full range of data: structured and unstructured, at rest and in motion. These traditional and next-generation technologies are designed to meet a broadened range of analytics and transaction processing needs.
Query across the platform and even to other stores
IBM recognizes the need to store all types of data and has developed a hybrid solution so you can place the right workload on the right platform for each need. IBM Fluid Query (in PureData System for Analytics, dashDB Local and more platforms coming soon) and IBM Big SQL (in IBM BigInsights) provide the ability to query data where it resides, without moving it around just for the purpose of analysis. This integrated capability can query remote data in platforms across the IBM data architecture, as well as popular sources including Oracle, Hortonworks and others. For example, PureData System for Analytics can query into BigInsights, DB2, dashDB, Oracle databases and others. dashDB Local can query into PureData System for Analytics, BigInsights and other platforms.
An example: get deeper insights
Here is an example of how your data may be stored and why you gain deeper insights when you are querying more data:
If you combine all of this data, you can tell how weather affects sales of your products, which sellers have the most success under those conditions, and which products are selling. By looking at social media, you can tell what need a product is meeting under particular weather conditions and whether a trend is taking hold. By using IBM Fluid Query or IBM Big SQL, you can query all of these data sources together to get this deep insight. You have a choice of which platform to use to begin the query; it is often recommended to start on the platform that contains the most, or the most important, data. Because it is a common SQL engine, you are not required to move all of this data around just to satisfy the needs of this one query.
If you query these sources individually, you do not get the same rich insights, but rather simple pieces of a picture. When you combine all of this data, the picture of what is happening becomes richer and more actionable.
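As a toy stand-in for that combined view, the sketch below joins three made-up "sources" (warehouse sales, weather data, social sentiment) in one pass, the way a federated query lets a single statement span platforms. All of the data and field names are invented for illustration.

```python
# Three hypothetical sources that, in a real deployment, would live on
# different platforms (warehouse, Hadoop, a social feed).
sales = [
    {"day": "2017-01-09", "product": "umbrella", "units": 340},
    {"day": "2017-01-10", "product": "umbrella", "units": 90},
]
weather = {"2017-01-09": "rain", "2017-01-10": "clear"}
sentiment = {"umbrella": 0.7}

# One combined pass: enrich each sale with weather and sentiment.
combined = [
    {**s, "weather": weather[s["day"]], "sentiment": sentiment[s["product"]]}
    for s in sales
]
rainy = [r for r in combined if r["weather"] == "rain"]
print(rainy[0]["units"])  # umbrella sales on the rainy day: 340
```

Queried individually, each source yields only its own slice (units sold, a forecast, a sentiment score); joined, they answer the richer question of how weather drives sales of a product people feel positive about.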
In summary, most organizations require a hybrid data architecture to handle the breadth of data coming at them. Add a common analytics engine, and you can extract richer, deeper insights that provide a business advantage. Now that you understand the hybrid data architecture, use this community to explore and learn about its member products.
by Anson Kokkat, product manager for Db2 Event Store
Introducing IBM Db2 Event Store Enterprise Edition Open Beta
I was able to attend the Strata Data Conference last week, and everywhere I found customers trying to get the most out of Spark and data science in their data projects. Every customer is trying to get a handle on the large amounts of information streaming into their organization and to run analytics on that data as it comes in.
>> Bookmark and visit the IBM Db2 Event Store Community: https://developer.ibm.com/data/db2/eventstore/
It's no secret that data volumes and workloads are growing. This is driving a hybrid data architecture, where you select the right deployment platform (cloud, private cloud or on premises) to match the data, workload and application requirements. For a hybrid data architecture, the key question becomes … how are you going to manage it all? End-to-end visibility is essential if you are going to keep your environment in top shape.
What is IBM Data Server Manager?
IBM Data Server Manager (DSM) provides a single view across your hybrid data architecture, back through time, and across your private and public clouds. It works with a variety of IBM data repositories and data servers, including:
This is the end-to-end visibility you need to simplify and manage complexity across a data environment that may have hundreds of data servers installed in on-premises, public cloud and private cloud deployments. Refer to the “About Data Server Manager” section below if you are not currently using Data Server Manager and would like more information or wish to download the free version.
What’s new for Data Server Manager?
Data Server Manager version 2.1.3 is now available. Here are a few highlights:
To learn more about this release, and to get started with an upgrade from Fix Central (for DSM Enterprise Edition customers), see the details on these enhancements here.
Three New Best Practices Guides for Data Server Manager
Three new best practices guides are also available for Data Server Manager and you should be aware of them:
About Data Server Manager
IBM Data Server Manager is a simple, scalable and smart solution to help administer, monitor, manage and optimize IBM databases across the enterprise.
For those who are new to DSM, there is a free edition that you can download and use at no charge. It offers database development, administration and basic performance monitoring capabilities for the platforms listed above.
There is also Data Server Manager Enterprise Edition, available for a fee. It offers all of the capabilities of the free edition, plus advanced capabilities including enhanced monitoring history, centralized configuration management, and refined performance management and query tuning with expert advice for DB2 for Linux, UNIX and Windows, as well as a growing list of the platforms above.