Managing the data lifecycle
Sorry for the late notice. Cliff Leung, chief architect of Optim Query Workload Tuner, and I will be presenting a webcast on Optim Performance Management this Thursday via the DB2 Tech Talk series. The 90-minute format gives us a chance to drill down a little deeper, and we're hoping it will be informative with plenty of time for questions. You can register here. I hope you will join us.
Take Charge of your Data Environment: IBM Performance Management Solutions
Performance issues are often at the root of declining productivity, increasing capital expenditures, and lost revenue. This Thursday, November 10, we’re hosting a webinar, IBM Performance Management Solutions, Featuring OPM and OQWT, to walk you through IBM Performance Management offerings. I'm joined by Bob Harbus, DB2 Technical Specialist, and Danny Zilio, Software Analyst – Optim Query Workload Tuner. We'll be discussing how you can use InfoSphere Optim Performance Manager and InfoSphere Optim Query Workload Tuner to manage performance, both during development and in production. Join us to share your questions, challenges, and insights.
Hope to hear you on the call!
Holly Hayes, Optim Tools Product Manager
I talk to many, many people about pureQuery and the benefits it can bring to performance, security, and problem determination of database applications. A key part of being able to deliver these enhancements to the wide variety of existing Java applications is the ability to layer pureQuery optimization onto existing applications without making changes to the application code. This process is called client optimization.
Several enterprise customers are already reaping the benefits of client optimization, and their requirements shaped this delivery, which further improves security, manageability, and scalability to support complex enterprise environments. The centerpiece is a shared repository that can be used to securely store pureQuery artifacts. Not only does this provide added security, it gives the DBA centralized control of the client optimization process. In other words, after the initial pureQuery Runtime setup within the application server by the administrator, you, the DBA, can modify the application’s pureQuery Runtime environment configuration without requiring further changes to the application server.
In addition, the centralized repository makes it possible to do continuous capturing, providing a step toward the vision of a more automated capture, configure, and bind process.
Another enhancement based on customer requests is the ability to group SQL into packages based on string tokens or special register values to streamline and simplify package management. An example of where this can be useful is if you are currently using special registers to set the schema qualifier during dynamic execution. Now you can package up the appropriate SQL statements into separate packages (one per schema, for example) and do the same thing statically using the QUALIFIER BIND option.

To help make sense of these enhancements, the product documentation for client optimization has been updated with scenario-based guidance on running, deploying, managing, and troubleshooting the client optimization process.
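To make the "no application changes" point concrete, here is a minimal sketch (in plain JDBC) of the kind of existing application code that client optimization can be layered onto. The class, table, and connection details are invented for illustration, and the pureQuery Runtime property names mentioned in the comments (such as captureMode and executionMode) are cited from memory as assumptions; check the pureQuery documentation for the exact configuration in your release.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class OrderLookup {
    public static void main(String[] args) throws Exception {
        // Ordinary dynamic JDBC: nothing pureQuery-specific appears in the code.
        // Client optimization is layered on from outside the application, e.g. by
        // putting the pureQuery Runtime on the classpath and setting runtime
        // properties (illustrative names: captureMode=ON to capture the SQL into a
        // pureQueryXml file, then executionMode=STATIC after the captured SQL has
        // been bound into DB2 packages).
        try (Connection con = DriverManager.getConnection(
                "jdbc:db2://dbhost:50000/SAMPLE", "user", "password");
             PreparedStatement ps = con.prepareStatement(
                 "SELECT ORDER_ID, STATUS FROM ORDERS WHERE CUSTOMER_ID = ?")) {
            ps.setInt(1, 1234);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt(1) + " " + rs.getString(2));
                }
            }
        }
    }
}
```

Because only configuration changes, the DBA can capture in test, bind the captured statements into packages (qualified per schema if desired), and switch production to static execution without asking the application team to rebuild anything.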
If you are interested in hearing more about the capabilities in this release from product experts, I invite you to join us on June 24th for the Virtual Tech Briefing entitled pureQuery Deep Dive Part 3: Client optimization administration. My colleagues Patrick Titzler and Chris Farrar, who some of you may remember from the first pureQuery deep dive tech briefing, will provide an overview of the new capabilities and how they fit into the overall direction we’re going, and will be available to answer questions. Patrick also plans to demo some of the tooling support for these features delivered in the Optim Development Studio Fix Pack 3.
Hope you have the upcoming Virtual Tech Briefing on pureQuery by Information Champion Dan Galvin on your calendars. We had a meeting to discuss it today, and hearing about pureQuery (a topic I spend a fair amount of time on) from someone who is not an IBMer is refreshing; I hope you feel the same. I think it’s important for people to hear a perspective from someone who is both really knowledgeable on the topic and has real-world implementation experience.
Anyway, I hope you can join me, Dan, and Sonali (who will demo some cool tooling features) on March 11th at 10 AM Pacific. Please register, and even if you cannot attend that day, you can at least access the replay.
I'm on the plane today returning from two customer visits in the D.C. area where I was talking about the advantages of pureQuery in DB2 for z/OS environments for DBAs (as opposed to application developers). One uses WAS on z/OS and the other uses WAS on Linux for System z. Both customers were very familiar with how to manage and monitor CICS applications accessing DB2 data, and they felt out of control with Java application access to their mainframe. One customer even said, "we tried to tell our application teams to stop using Hibernate" because they couldn't see or review the SQL or know which application was running the dynamic SQL in their system. They said the application developers ignored them(!). I told them (hopefully, I wasn't too preachy) that they needed to stop resisting these contemporary application architectures (Java, .NET) and instead take back control by transforming those dynamic SQL statements into static SQL statements via pureQuery. I told them that if they keep up the "no Java will access MY mainframe" stance, they are headed down a path that ends with getting rid of their mainframe. The application development teams don't care what DBMS is used for their application, and if met with resistance from the z/OS team, they will simply deploy the applications on some other DBMS.
Related to this, I just watched a new, short 7-minute video on pureQuery for DB2 for z/OS, and it's a great quick summary of the value pureQuery can deliver. In my mind, it's a no-brainer to implement pureQuery when you have Java or .NET applications accessing DB2 for z/OS.
If you want to get a good idea of what the process is like in more detail, check out the upcoming virtual tech briefing on this very topic. It's entitled pureQuery Deep Dive Part 1: Client optimization, scheduled for February 4th. Patrick Titzler, one of the authors of the original tutorial on this topic, and Chris Farrar, client optimization technical lead, will be presenting. It should be good.
We recently returned from IOD, which was quite an intense experience for those of you who have never been. We both gave lots of demos and talked to lots of customers about the Optim query tuning solutions. And as you can imagine, any session on query tuning, with or without a tools focus, was really packed. It seems as if people can never hear enough about query tuning, because it’s actually pretty interesting to do, and because it can have such an impact on the day-to-day life of a DBA (or whoever in your organization is tasked with reviewing and tuning queries and query workloads).
Ray blogged earlier about how important it is that developers and DBAs collaborate more in the query tuning process. Not only can developers build up their skills, they can hopefully come to the DBA with some of the basic stuff taken care of, or at least a better understanding of what the issues are. The earlier in the cycle that issues are discovered, the less expensive and labor-intensive the tuning process is.
Anyway, we wanted to share with everyone who could not make the conference some scenario-based demonstrations of how the query tuning solutions can work together. We are co-presenting at an upcoming Virtual Tech Briefing (complimentary!) on November 19th. The focus is on z/OS, so we’ll talk about query tuning from both a development and a DBA perspective, and discuss how to use Optim tools (such as Optim Development Studio, Optim Query Tuner, and Optim Query Workload Tuner) and other z/OS tools to work through some typical scenarios. The scenarios we are planning to cover include:
We hope you can join us. Register today!
Ray and Saghi
Today I was talking with a customer who has Java applications that call COBOL and SQL stored procedures to do some additional business logic. My customer asked me if IBM offers a tool that can assist programmers and DBAs to do seamless debugging between Java applications and stored procedures.
This isn’t the first time that I have heard this request from customers, because it can be a real pain point. The scenario is this: during development, developers need to debug a Java application that invokes a COBOL stored procedure and a SQL procedure. To debug that, you might need multiple tools - one that debugs the Java application, one for debugging COBOL stored procedures, and another for the SQL. In fact, by co-installing (shell sharing) Optim Development Studio with Rational Developer for System z, you actually can do this end-to-end debugging without switching contexts. ODS provides the SQL procedure debugging capability, and RDz provides the Java application and COBOL stored procedure debugging capabilities.
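For context, the Java side of that scenario is usually just a JDBC CALL like the sketch below; the procedure name, parameters, and connection details are made up for illustration. The pain point is that stepping from this call into the COBOL or SQL procedure body normally means switching to a different debugger, which is exactly what the shell-shared ODS/RDz setup avoids.

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;

public class InvokeProcedure {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:db2://zhost:446/DB2LOC", "user", "password");
             // Hypothetical procedure: takes an account ID, returns a balance.
             CallableStatement cs = con.prepareCall("CALL FINANCE.GET_BALANCE(?, ?)")) {
            cs.setInt(1, 4711);                        // IN parameter
            cs.registerOutParameter(2, Types.DECIMAL); // OUT parameter
            cs.execute();
            System.out.println("Balance: " + cs.getBigDecimal(2));
        }
    }
}
```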
Next week, please join me for a follow-up to our previous session on building and deploying SQL stored procedures for DB2 for z/OS. In next week's session, Marichu Scanlon and I will go into more detail on debugging, including some hints and tips. And we have our Rational friends on board to demonstrate how to debug COBOL stored procedures.
May the demo gods be smiling upon us.
On the heels of Vijay's virtual tech briefing on Optim Development Studio 101 (you can register for the replay here, if you missed it!), I'm going to be taking you on a deep dive into one aspect of using the product: SQL stored procedure development (with a focus on z/OS). Marichu Scanlon from our continuous engineering team will be on board to help answer questions.
There has been a lot of interest in this topic because many people use stored procedures to encapsulate business logic and improve performance, and with DB2 for z/OS v9 native SQL procedures, they can also reduce cost because those procedures are zIIP-eligible.
The first of the two sessions of this briefing will be on September 24. It will last about one hour and will cover creating, deploying, running, and working with existing stored procedures. During this session I will also answer a lot of the commonly asked questions I have seen both on the forums and in customer interactions, and of course I will also be taking your questions during the event itself. Keep in mind that during the September 24 session I'll only touch briefly on how debugging fits into the stored procedure development life cycle, since we will have a follow-up deep dive session on October 22 that focuses on how to enable debugging in a z/OS environment. The event is free, and you can register for it here to get the details on accessing the event.
Look forward to seeing you at the tech briefing!
In case you missed the DB2 Chat with the Lab a couple of weeks back, which starred our own Deb Jenson and Manas Dadarkar discussing and demoing Data Studio and other capabilities for database administration in the Optim portfolio, the replay is now available from ChannelDB2, where you can also download the charts. This is a great introduction to Data Studio for those of you who may not have taken the plunge yet or are interested in hearing more about specific support for DB2 9.7. Even better - download Data Studio and try it out yourself! You can find links to both downloads (the stand-alone and the IDE packages that Srini blogged about a while back) on the Integrated Data Management Community Space download tab.
As lead product manager for the Optim and Data Studio administration capabilities, I have been getting lots of questions about Data Studio ever since DB2 announced the deprecation of the DB2 Control Center. I think some people haven’t gotten the word yet about the capabilities in Data Studio 2.2 that fill big gaps in the administrative capabilities that were available from Control Center. I did blog about some of these in a previous posting, but I’d now like to give you more information and a demo, as well as talk about how easy it is to add on more capabilities to help you get your job done.
To that end, I’d like to invite you to join me and a Data Studio architect, Manas Dadarkar, for a DB2 Chat with the Lab event on August 27th.
Looking forward, it’s true that as of today Data Studio does not have all the capabilities that Control Center does, but I think it covers a large percentage of what most people need to do on a day-to-day basis. I recommend that you install Data Studio in one of its flavors (IDE or stand-alone) alongside Control Center so you can get familiar with using it. In the meantime, we’ll be working hard on our side to fill the gaps that affect most users.
Hi, everyone. Glad to be back after some travelling. I wanted to give you a heads up on a virtual tech briefing that I will be giving next week focusing on an overview of Optim Development Studio. We have a special guest speaker - Nik Teshima - who is the product manager for Rational Application Developer for WebSphere Software (RAD). If you're wondering why we invited Nik along to this briefing, it's to help address questions we hear a lot from people who don't quite understand the complementary relationship between RAD and Optim Development Studio.
I will walk you through some of the capabilities that Optim Development Studio adds to RAD’s robust Java development environment and explain why it moves into the "must have" category if you are doing any kind of database access development.
You will also see how the two products shell-share seamlessly (along with MANY other Rational and Optim products) -- you can install them alone or together in a modular fashion to include the capabilities you need. Anyway, please join Nik and me next week on Thursday, August 20 at 1 PM Eastern for the Integrated Data Management Virtual Technical Briefing on Optim Development Studio 101 where I do plan to demo some of these capabilities and will be happy to answer your questions.
You may have read about the new capabilities in Optim Development Studio in my developerWorks article entitled What's new and cool in Optim Development Studio 2.2.
Now you have the opportunity to see it in action. My colleague, Zeus Courtois, has put together a video series for ChannelDB2 that walks through, step by step, the features I described in the article. Here's the link to the first video in the series: http://www.channeldb2.com/video/whats-new-in-optim-development-1
The 5 parts correspond to the following topics:
Also, don't miss the Optim Development Studio 101 virtual tech briefing coming up on August 20th at 1 PM Eastern, 10 AM Pacific to get a tour of Optim Development Studio from Vijay Bommireddipalli, whom you may know as an occasional blogger on this site. We also have a surprise guest for this briefing. You'll see how Optim Development Studio can extend the capabilities in Rational Application Developer to turbo-charge the development and optimization of data persistence layers.
You can register for this briefing here. See a schedule of upcoming briefings here.
I'm looking forward to next Wednesday, July 22nd, when I get to participate in one of the virtual technical briefings that Kathy Z blogged about recently. The topic is InfoSphere Data Architect 101, and I'm planning to team up with one of the technical architects for a combination of presentation and demonstration, so hopefully we'll keep it interesting for you.
If you want to get a little background before coming, you can check out this great introductory video. Also, Holly covered some of the new privacy capabilities in the first virtual tech briefing, Data Studio becomes Optim: What does it mean for you, which will be available for replay for a limited time.
Just sign in with your computer and email address!
Date: July 22nd
Time: 10 AM Pacific, 1 PM Eastern (but sign in 30 minutes early if you can)
The whole thing is done via the computer, so you may want to go to the web site ahead of time and click on the system check link.
Talk with you soon.
-- Anson Kokkat
This is my first time 'showing my face' among the illustrious bloggers on this site, but know that I am the one behind the scenes flogging them with a big stick to blog frequently, write articles, make videos, etc.
I wanted to tell you about something a little new we're trying this year. We toyed around with what to call them - technical chats, lunch and learns, technical webinars - but since we ended up partnering with developerWorks to do these, we are using the term they use for these interactive sessions: 'Virtual Technical Briefing'.
The goal is to provide you with access to some of our technical experts, who will present and demo on various technical topics related to integrated data management (of course involving our products). And we wanted to keep it short so you'd be motivated to squeeze it in during your lunch hour (if the time is right) or catch the replay when you can.
These sessions are pretty cool - everything is done over VoIP and your network connection. You just need an email address to sign in and join us. (Join early, or sign in a few days in advance to go through the system check.)
Our kickoff session is called Data Studio becomes Optim: What does it mean for you?
The live event is Tuesday, June 30th at 10 AM Pacific, 1 PM Eastern. Holly Hayes and Kevin Foster will present.
We have a tentative schedule and list of topics posted on developerWorks. But it's not too late to influence that. Let me know if there's something you are just dying to hear more about, and we'll see what we can do. You can add a comment here or send an email to firstname.lastname@example.org...
See you there!
I was looking at Scott Ambler’s surveys on IT project success rates. It is very interesting how Scott’s surveys present a more hopeful picture of project success than the Standish Group’s Chaos Report, which in its 2006 refresh reported a 35% success rate and a 46% “challenged” rate. (There is a nice entry summarizing a variety of research on the topic on Dan Galorath’s blog, and the 2006 Standish numbers come from an SD Times article.) Standish defined success as “on time, on budget, meeting the spec”, while challenged means the project had cost or time overruns or didn’t fully meet the user’s needs. But I digress…
Scott’s data indicates that projects that use evolutionary development methodologies, e.g. Agile or Rational Unified Process, fare better than those using traditional waterfall or ad-hoc processes. That’s not surprising given the emphasis on tight collaboration among stakeholders and continuous evolution and validation. Really, it’s pretty intuitive. So I was thinking about key characteristics of iterative methodologies and how they relate to database and data access development. (I know, Scott has already thought about this too.
See his Agile Data site. And Rafael did a Webcast on it earlier in the year.) But more specifically, I wanted to look at how our Data Studio portfolio supports evolutionary development methodologies. Yes, there’s more to do, but I think what we offer goes a long way toward accelerating solution delivery with high-quality results. Vijay and I are going to do a Webcast on this on April 28th, titled Accelerating Solution Delivery for Data-Driven Applications. Hope you’ll join us.
In some ways, this is also the companion Webcast to Rafael’s Performance Optimization webcast. In his blog, he talked about how, from a lifecycle perspective, performance optimization can be broken down into doing it right the first time or fixing it after the fact. His Webcast focused on the latter; this one focuses on the former.
What are your stories about evolutionary methodologies and database development? Have you used Data Studio software in this context?