Here are the key new features in Data Studio Developer 1.2 and Data Studio pureQuery Runtime 1.2:
- Optimize your existing JDBC applications (client optimization) without changing a line of Java code.
- Problem determination made easier for developers with the ability to drill up into the application code to identify and display the source code location for problem queries
- Enhanced impact analysis within database applications with the ability to drill down into the SQL execution for “where used” analysis
- Additional Data Studio pureQuery Runtime features:
- Heterogeneous batching for improved performance of updates, even across different tables
- Ability to run natively on z/OS
- Collaborate effectively with the DBA for static SQL development:
- Improved bind tools to facilitate the bind process from design to bind and verify
- Improved package management
- Enable the developer to complete pureQuery projects faster
- Check the quality of SQL and share SQL easily with the DBA or team members
- Migrate existing Java™ database applications to pureQuery
- Turn your Java methods into a database stored procedure with a single click
- Static SQL support for stored procedures
- Advanced and customizable code generation
- Improved Data Web Services for rapid SOA-enablement of data assets
- Static SQL support for Data Web Services
- Web Services over JMS for highly reliable Web services applications
- Create and deploy Data Web Services onto IBM WebSphere DataPower
Static SQL can provide more consistent or even better performance and a better authorization model for DB2 applications. For a great overview of static SQL benefits, see the “No Excuses” article in Resources.
Previously, the decision to leverage DB2’s static SQL for a Java program was a design-time decision. The developer chose a particular API or Java persistence framework, and that choice implicitly determined the SQL execution mode. Unless SQLJ was the chosen API, all other access choices used dynamic execution, so there was no way to reap the benefits of static SQL in existing JDBC or framework-based applications. Even with pureQuery, you needed to use the annotated-method style API to be able to switch between static SQL and dynamic SQL.
Now, using pureQuery’s client optimization feature, you can get the benefits of static SQL with any existing DB2 JDBC application. The application can use a Java persistence framework such as Hibernate, JPA, or iBatis, or it can use plain JDBC for database access. In fact, you don’t even need access to the source code, and you don’t need to change any code. Client optimization works by binding SQL that you have previously captured from the running application: you capture the dynamic SQL calls, select which statements to run statically, bind the selected statements, and switch the runtime SQL execution mode for those statements from dynamic SQL to static SQL.
In order to leverage this feature from the Data Studio Developer tooling, follow these high-level steps.
- Step 1. Enable the project for client optimization
To tell Data Studio Developer that a particular project is eligible for client optimization, follow these steps:
- Right click on the Java project that contains your application code or binaries. Then select pureQuery -> Add pureQuery Support.
- Check the box for Enable SQL capturing and binding for JDBC applications as shown below:
Figure 1. Enable SQL capturing and binding for JDBC applications
Alternately, for a project that already has pureQuery support added, follow these steps:
- Right click on the project and select Properties.
- From the left menu of the Properties page, expand pureQuery and select Properties.
- Check the box for Enable SQL capturing and binding for JDBC applications.
- Step 2. Capture the SQL
To capture the SQL, you must execute the application in capture mode. Capture mode acts like a JDBC driver interceptor to collect all the SQL statements coming from the application, through the driver and to the database. When you turn capture mode ON, pureQuery collects all the successfully executed SQL from the application into a file. When the application is executed, the capture file contains all the SQL statements that could potentially be switched to static SQL. If all the data access paths in the application are executed, then the captured information contains the complete list of SQL statements that are issued to the database.
The SQL can be captured in any of the following scenarios:
- Within the development environment, run unit tests to exercise all parts of your application that generate SQL. This is the recommended approach for those cases in which you have appropriate test cases.
- Within the QA or production environment, you can use the command line version of the capture capability to capture the SQL.
If you are using the command line version of the capture facility, you can import the captured contents into the development environment to use the problem isolation and dependency analysis benefits offered by the new pureQuery outline view described in Problem isolation.
Within the development environment, to enable capture, you must turn it on in the DB2JccConfiguration.properties file, as shown below. Content assist with information about each option and colorization is provided to make this easy.
Note that Step 1 above sets up the project with capture mode turned ON.
Figure 2. Set capture mode ON
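As a concrete illustration (the property keywords below follow the pureQuery pdqProperties convention; the capture file name is just an example), the DB2JccConfiguration.properties entry for capture mode might look like this:

```properties
# Illustrative entry: intercept SQL and record it into capture.pdqxml,
# while still executing dynamically during the capture run
db2.jcc.pdqProperties=captureMode(ON),executionMode(DYNAMIC),pureQueryXml(capture.pdqxml)
```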
- Step 3. Bind the captured SQL
To run any SQL statement statically, it must have a DB2 SQL package associated with it. Bind is the process that saves the access plan associated with an SQL statement into a DB2 SQL package, and you create packages by using the bind process. Now that you have captured your SQL, you need to bind the SQL in that capture file to the target database, which creates the package. (Optionally, you can customize your packages to remove certain packages or SQL statements within packages. More details about pureQuery’s client optimization will be described in future developerWorks articles and tutorials.) Select the captured file (in this case, capture.pdqxml), right click, and select pureQuery -> Bind.
Figure 3. Bind captured SQL
Once the packages are created, the DBA may need to grant appropriate authorization privileges for the packages. You can do this from the Database Explorer in Data Studio Developer.
- Step 4. Execute using static SQL
You are now ready to switch the SQL execution mode for your selected SQL statements from dynamic SQL to static SQL execution. In your DB2JccConfiguration.properties file, use content assist to switch capture mode to OFF, set the execution mode to STATIC, and then run your application again to reap static SQL benefits.
Figure 4. Execute using static SQL
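As a hedged illustration (property keywords follow the pureQuery pdqProperties convention; the capture file name is an example), the properties entry for static execution might look like this:

```properties
# Illustrative entry: stop capturing and execute the previously bound
# statements statically
db2.jcc.pdqProperties=captureMode(OFF),executionMode(STATIC),pureQueryXml(capture.pdqxml)
```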
Important: Not all JDBC applications are good candidates for client optimization without modifications. For example, use of DDL or special registers may require special handling. Upcoming developerWorks articles and tutorials will delve deeper into this topic.
Previously, database application developers and DBAs faced numerous pain points when trying to isolate poorly performing SQL or trying to determine all the SQL statements being issued against the database for audit purposes. Finding and making the correlation between the SQL statements and the related Java source code was tedious, time-consuming, and sometimes nearly impossible.
Correlating SQL executed on the database to the actual line of code triggering it included gathering and wading through traces from database drivers and different data access components accessed by the application. The process had to be repeated every time any problem occurred in an application. The ability to correlate depended on the underlying components to provide appropriate traces and was a continuous burden on developers to add traces and keep them correct.
The DBA was also limited in identifying which Java classes were issuing the SQL, due to the limited information found in database traces. Because the developers chose JDBC or a JDBC-based framework, the DBA had few tools to help the developer determine which application the SQL was coming from.
The correlation gets more complex with the three-tier architectures and when frameworks are used. Applications using frameworks like Hibernate or JPA generate SQL on the fly, and it's difficult for the developer to trace back a particular SQL statement (or set of statements) to the query language of the framework that generated it, even when the source code is available. When the source code is not available, it's even more difficult. Therefore, if an end user, developer, or DBA complains about a poorly performing SQL statement, it can be a huge effort to try and locate that SQL in the originating Java source.
Data Studio Developer 1.2 makes it easier to correlate SQL statements to a particular Java class and line number, even when a framework generates the SQL. You can see this using the new pureQuery outline view. Further, in this view, you can easily drill up into the application source code that triggered the generation of that SQL.
To take advantage of the pureQuery outline view you need to do one of the following:
- Write your application using the pureQuery annotated-method style API. This proactively makes information for troubleshooting available during development and testing.
- Capture the SQL from the running application (or import SQL that you previously captured) as described in client optimization. After you do this, you can take advantage of the view as well.
From the list of SQL statements in the outline view, you can double click on any SQL statement to see which line of code created the SQL, or you can right click on the SQL and select Show in source.
Figure 5 below shows how the pureQuery outline view lists the SQL statements that were generated from an OpenJPA application and how each SQL links to its source in the application:
Figure 5. Link SQL to Java source
Previously, there was no easy way to gain insight into which database objects were referenced by which parts of a Java data access application. Team members working on different parts of the application had no way to see what SQL the other parts would issue to the database. In addition, on the database side, schemas are continuously changed as part of the application development process. The inability to gauge how much a change would impact an application made such changes risky, and developers and DBAs could not easily work together to understand the impact. As a result, the complicated process of determining the impact of a change slowed down development, delaying delivery of the final product, or sometimes even leading to the decision not to make changes at all.
With the Data Studio Developer 1.2 pureQuery outline view described above, you can more easily gain insight into what parts of your application use which tables, columns, or views from the database. This allows the developer and DBA to more easily correlate individual SQL statements issued by a particular Java class and, at the same time, determine which tables and columns are referenced in the statement.
Just as the pureQuery outline lets you drill up into the application code, it also lets you drill down into the database objects an SQL statement uses. Developers can now view which database objects their SQL affects. DBAs and developers can work together and iteratively to come up with database changes that would have the least impact on the application -- thus reducing or eliminating risks. Effective team development is made possible by having information about all SQL used in the application and database affected by each SQL at the developer’s fingertips.
Use the Database tab in the pureQuery outline view to see which database objects are used by your application. When you drill down into each schema and table or view, you’ll see the SQL statements that use the objects. Drilling further into the SQL, you will see which columns are used by the SQL. Double clicking on the SQL -- or right clicking on the SQL and selecting Show in source -- shows the approximate line number in the application that triggered the creation of this SQL.
The Database tab in the pureQuery outline view also provides insight into how much your application might need to change if a given table or view were to change. To get such information for columns, you can use the filter menu on the pureQuery outline view and provide the column name. The resulting filtered outline view lists only SQL statements that use this column. Alternately, if Java source code is available, you can right click inside a column in the SQL statement and select pureQuery -> Show In -> pureQuery outline.
Each team member can now easily assess which parts of their application would get impacted by a proposed schema change by the DBA. By working together, DBAs and developers can determine which changes can bring the most benefit with the least amount of risk.
The Database tab shows schemas and tables used by the SQL statements in the application. The filter shown in the figure below restricts the view to just SQL statements that use the FIRSTN column.
Figure 6. Correlating SQL statements to database columns
Data Studio Developer 1.1 supported many JDBC best practices, such as executing SQL statements in batch. JDBC supports collecting SQL statements going to one table and executing the set of statements in batch: the SQL statements are sent over the wire in bulk, in one network trip. The pureQuery inline style facilitates leveraging this JDBC best practice by using the updateMany API. The pureQuery annotated-method style also supports this implicitly when a set of objects is passed into a method with an associated INSERT, UPDATE, or DELETE.
However, JDBC does not support batching a set of SQL operations that act on more than one table. For example, assume you want to do a bulk insert into a table that also requires inserting data into a set of child tables due to foreign key constraints. Previously, the only alternative was to add the logic to a stored procedure and then send the data over in bulk to the stored procedure.
With pureQuery heterogeneous batching, you can batch INSERT, UPDATE, and DELETE statements that refer to different tables. These heterogeneous batch updates enable all associated tables to be updated in one network round trip to the server. pureQuery adds methods for heterogeneous batch updates that you call to indicate to pureQuery that you are starting and ending a batch update. Heterogeneous batching can potentially improve the performance of INSERT, UPDATE, and DELETE statements that operate on multiple tables. It can also improve the cohesiveness of your Java database logic with its ability to indicate that a set of INSERT, UPDATE, and DELETE operations are related.
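The idea can be sketched in plain Java. The class and method names below (HeterogeneousBatch, flush) are illustrative stand-ins, not the pureQuery API: statements against different tables are queued together and then sent in a single round trip, instead of one JDBC batch per table.

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual sketch of heterogeneous batching (not the pureQuery API):
// INSERT/UPDATE/DELETE statements against different tables are queued
// together and flushed in a single "network trip", whereas standard JDBC
// batching can only batch statements for one prepared statement at a time.
public class HeterogeneousBatch {
    private final List<String> queued = new ArrayList<>();
    private int networkTrips = 0;

    // Queue a statement regardless of which table it targets.
    public void add(String sql) {
        queued.add(sql);
    }

    // Send everything queued so far in one round trip.
    public int flush() {
        networkTrips++;               // one trip for the whole mixed batch
        int executed = queued.size();
        queued.clear();
        return executed;
    }

    public int trips() {
        return networkTrips;
    }
}
```

In a real pureQuery application, the runtime performs the queueing and the single round trip for you; this sketch only models the bookkeeping to show why the approach saves network trips.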
IBM Data Studio pureQuery Runtime for z/OS enables you to deploy the pureQuery runtime on z/OS servers and access DB2 for z/OS data using either Type 2 or Type 4 JDBC access. For more information on the possible performance benefits and CPU savings using pureQuery on z/OS, see the performance study article in the Resources.
Previously, the data developer and DBA experiences were disconnected. If the developer used tools to provide bind options for binding packages, the DBA had to start from scratch to provide the same bind options on the command line. More commonly, DBAs created the bind options and developers needed to use them to bind to development or test databases, but bind options created by the DBAs could not be reused by developers in their development environments. In addition, developers and DBAs using Data Studio Developer got no feedback on the packages created on the database as a result of a bind, and could not easily determine the packages associated with a particular Java class.
Numerous features in Data Studio Developer 1.2 enhance developer and DBA collaboration. Bind options are contained in common files that can be used in the command line or in the Java integrated developer tools. When using developer tools, content assist with information for bind options allows novice developers to productively create bind options.
Let’s look at some scenarios that are enhanced by these collaboration features:
- DBA and developer working separately.
Typically, DBAs are responsible for bind operations and have the knowledge to provide and tune the appropriate bind options. Developers or quality assurance professionals are expected to simply use the options suggested by the DBA and run binds for testing.
The DBA can simply create the bind options file and send it to the developers or quality assurance professionals to reuse in the development environment.
- Developer sharing roles with DBAs
Content assist for bind options, with information associated with each option, empowers novice developers to create complex bind options without having to open documentation or have prior knowledge about the options. The developer is now empowered to perform some DBA-like tasks easily, thus potentially reducing the burden on the DBAs.
- DBA sharing roles with the developer
DBAs who have Data Studio Developer tools available can use the productivity features in the editors to build complex bind options easily. They can test drive on test or production databases without changing the development projects. Greatly improved package browsing in Database Explorer allows the DBA to now validate the packages deployed on the database.
Data Studio Developer 1.2 now makes static SQL development and deployment easier even for more complex scenarios. Some of the key new features include:
- Content assist and colorization in editors to build generation options to design the package
- Content assist and colorization in editors to build complex bind options
- Ability to bind one, many, or all artifacts within the development environment
- Ability to bind to development, test, production or any other database of choice, without changing your development project settings
- Preview packages and SQL in packages before deploying them
- Connect from the design view to the database view to verify packages created on the database
- Remove, replace or add new versions for packages
- Create single packages from many pureQuery interfaces
The next section goes into more detail on these features.
Data Studio Developer gives more control and feedback to the developers and DBAs on static SQL development and the bind process, from design to bind.
- Step 1. Design your packages
Using the content assist and colorization in new editors in Data Studio Developer 1.2, you can specify the package name, collection ID or package version. When working with existing JDBC or framework-based applications, you can also remove certain SQL from packages.
The figure below shows the content assist for designing packages.
Figure 7. Design your packages
An example of the properties to design packages is shown here:
com.demo.ActData=-collection myNewColl -rootPkgName myPkg -pkgVersion V1
- Step 2. Preview the packages
With Data Studio Developer, you can preview the packages in design mode to ensure that they have the right names, the correct SQL, and so on. Repeat the design-preview cycle until the package is ready for binding.
To preview the packages before you bind them, go to the SQL tab of the pureQuery outline view and click on the potential package. The Properties view provides more information such as the version name, collection ID and so forth.
To see the SQL that would be included in the package, expand the package node. If you have specified that certain packages or SQL should not be bound (as part of client optimization), then use this view to ensure they are not included in the pureQuery outline SQL tab.
To repeat the design-preview cycle, simply change the default.genprops file and refresh the pureQuery outline view.
The figure below shows the packages in preview. You can see that the myPkg package is bound into the myNewColl collection ID and has V1 for the version ID. You can also see which statements will be included in the package.
Figure 8. Preview your packages
- Step 3. Bind your package
New editors make specifying complex bind options much easier. The combination of content assist, descriptive information for each option, and an integrated editing experience enable you to build complicated bind scripts without requiring you to wade through manuals or remember complicated syntax. Colorization makes working with complex sets of options easier.
The figure below shows content assist for the bind options, per server.
Figure 9. Content assist for bind options
An example of the properties to bind packages is shown here:
com.demo.ActData=-isolationLevel CS -bindOptions "QUALIFIER SSURANGE"
When the package is ready for binding, you can bind to the development, test, or production database without changing your development project environment.
The figure below shows how you can right click on a pureQuery interface and select the pureQuery->Bind menu. Select a database to bind to and select OK.
Figure 10. Bind your package
- Step 4. Verify the bind
After the bind succeeds, you can easily navigate to the bound package on the database by switching to the SQL tab in the pureQuery outline view and double clicking on the package name. (Alternatively, right-click on the package and select Show in Database Explorer.)
This gives you instant feedback on the packages created on the database as a result of the successful bind, as shown below.
Figure 11. Link to packages in the Database Explorer after bind
The next section describes improvements in the Database Explorer that allow you to get more information about packages.
The Database Explorer is enhanced for packages to give you most of the information you want to know about your packages within the development environment.
Data Studio Developer lets you explore existing packages easily. By looking at the Properties view, you can see information such as the SQL statements in each package, the bind options that were used, last bound time, package version, collection, isolation level, user privileges and so forth. Figure 12 shows this improved support for DB2 package navigation.
Figure 12. Improved DB2 package navigation
Data Studio Developer 1.2 advances its focus on seamlessly integrating SQL development tools into the Java development environment. The ability to have all the SQL issued from the application readily available to the developers empowers the developer in many scenarios as described in the next few sections.
The previous sections describe how you can use the pureQuery outline view to see all the SQL in an application.
By right clicking on the SQL from the pureQuery outline, you can run Visual Explain to see the access path of the SQL (Figure 13). The developer can now improve the SQL (or work with the DBA to do so) and repeat the cycle until the SQL uses a desired access path.
Figure 13. Actions on SQL from pureQuery outline
The ability to export SQL from an application lets you compare changes in SQL between application versions. This can provide significant benefits when investigating deteriorating performance over releases or issues introduced in newer releases.
If developers want to send the SQL to the DBA for feedback, they can do this from the pureQuery outline view. Select one or more SQL statements, right click and select Export SQL. The SQL from the application as a whole can now be shared with the DBA for regular feedback and improvements during the application development cycle. Exporting all of the SQL in a Java class lets the DBA easily review all the SQL being issued by a Java class without having to look at the Java source code and the developer can quickly and easily generate this list.
Team members can be aware of the SQL issued by other parts of the application, which gives them an overall view of the SQL and database objects used.
If the DBA also has Data Studio Developer, they can review all the SQL in the Java class files directly from the pureQuery outline view, without having to export it. The DBA can run the SQL and run explain on it, all from within the pureQuery outline view, which provides an easy-to-read summary of all the SQL in a Java class.
Many times, it is desirable to migrate existing applications to pureQuery so that they can get the full benefit of the pureQuery platform’s ease-of-use programming model. This process is much easier in Data Studio Developer 1.2: export the existing SQL from the outline view, and then generate a pureQuery interface from that exported SQL.
- From the pureQuery outline, select one or more SQL statements, right-click, and select Export SQL to File, as shown below.
Figure 14. Export SQL to file
You can now generate pureQuery code from one or more SQL statements.
- From File -> New->Other, expand Data and select pureQuery Annotated-method interface.
- You can now generate a pureQuery data access layer for all SQL statements in the *.sql file in one shot using the wizard.
The wizard steps include importing the *.sql file and optionally changing the default bean characteristics proposed by the tools. The pureQuery data access layer, with the specified interface name and suggested APIs for each SQL statement, is generated all at once.
The figure below shows the actions in the wizard.
Figure 15. Generate pureQuery code for a list of SQL statements
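The generated interface pairs each SQL statement with a Java method through annotations. The sketch below mimics that shape with a stand-in annotation, because the real pureQuery annotation ships with the pureQuery runtime and is not reproduced here; the interface name, method name, and SQL are all hypothetical.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Stand-in for pureQuery's query annotation (the real one is provided by
// the pureQuery runtime); shown only to illustrate the generated shape.
@Retention(RetentionPolicy.RUNTIME)
@interface Select {
    String sql();
}

// Hypothetical generated annotated-method interface: each method is tied
// to one SQL statement from the exported *.sql file.
interface ActData {
    @Select(sql = "SELECT ACTNO, ACTDESC FROM ACT WHERE ACTNO = ?")
    String getActivityDescription(int actno);
}
```

At runtime, pureQuery supplies the implementation of such an interface, so the developer works only with typed methods rather than hand-written JDBC plumbing.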
Data Studio Developer provides the ability to tweak the generated code using user interface wizard dialogs and settings.
You have more flexibility in customizing the generated code outside of the tools. Data Studio Developer 1.2 exposes the Eclipse JET templates used for generating all the SQL create, read, update and delete (CRUD) operations, the pureQuery database access layer, and test code. You can treat these templates as your own, customize them and share them with all developers. When your team uses Data Studio Developer 1.2, the generated code now follows the changes from the customized templates. You can use this functionality to create new files, change existing generated files, and more.
Frameworks such as Hibernate often make assumptions about mappings between the Java objects and underlying relational data and do not provide control to the developer to change those mappings. For example, the framework might assume that any CHAR(1) field should be mapped to a Boolean data type. If your business application actually stores several different status codes in that field, that can mean the database will choose a suboptimal access path based on the assumption that it is looking for a Boolean value.
Data Studio Developer 1.2 now gives you the ability to specify default database to Java type mappings at a workspace level. You can customize these settings to match your development guidelines and business requirements. You can then export these preferences and share with developers to ensure consistent usage of type mappings.
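To see why the default mapping can be harmful, consider a hypothetical CHAR(1) STATUS column that stores several codes. Mapping it to a boolean can only distinguish two values, whereas an application-side enum keyed by the character preserves every code. The column and codes below are invented for illustration:

```java
// Hypothetical status codes stored in a CHAR(1) column. A default
// CHAR(1)-to-boolean mapping would lose the distinction between the
// three values below; mapping the column to a character (surfaced here
// as an enum) keeps all of them.
enum Status {
    ACTIVE('A'), INACTIVE('I'), PENDING('P');

    private final char code;

    Status(char code) {
        this.code = code;
    }

    // Translate the CHAR(1) database value into the Java-side type.
    static Status fromCode(char c) {
        for (Status s : values()) {
            if (s.code == c) {
                return s;
            }
        }
        throw new IllegalArgumentException("Unknown status code: " + c);
    }
}
```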
Here are some additional features that can help with productivity and flexibility in large projects:
- Organize generated code for pureQuery interfaces and tests in different DB2 SQL packages.
- Format SQL when code is generated or after SQL is modified.
- Generate complete data access layers from one or more SQL statements in one shot.
Java database routine/stored procedure development has never been easier. Simply point to your Java method and click to deploy.
In addition, you can now use pureQuery annotated-method style static SQL in your procedures and gain the same performance benefits as you have been getting from COBOL using static SQL, but with the programming ease of pureQuery and Java. Right click anywhere inside the Java procedure and select Create Stored Procedure.
Figure 16. Create procedure
The first release of Data Studio introduced a slick new solution for exposing database data as Web services. This solution, aptly named Data Web Services, lets you create Web services from database operations like SQL statements (SELECT, UPDATE, INSERT, DELETE, XQueries) and stored procedures without requiring any programming! Some of the salient features of Data Web Services are:
- A no-coding approach to developing Web services
- Built-in support for both SOAP and REST-styled binding
- Ability to deploy Web services into an existing SOA infrastructure with just a few clicks of the mouse
- Auto-generation of a WSDL file based on exposed Web services
- Integrated data access development and Web services development environment with built-in best practices
Data Studio 1.2 further enhances Data Web Services with a number of features that improve security and manageability and provide a wider range of deployment options. Let's look at some of the key enhancements.
Data Studio Developer now enables pureQuery use inside Data Web Services. The benefits of pureQuery, such as better performance, security, and manageability, now extend to Data Web Services as well.
For J2EE Web servers, there is a new deploy option that allows you to specify whether the Web service accesses the database using JDBC or pureQuery. The package-based authorization model of static SQL enabled via pureQuery also gives the DBA better control over what data is exposed via Web services.
Figure 17. Static SQL support for Data Web Services
For provisioning Web services that require guaranteed delivery, you can use SOAP over JMS instead of SOAP over HTTP. As shown in Figure 18, you can now pick SOAP over JMS as the underlying transport when you choose WebSphere Application Server as the target platform. JMS applications interact using either a point-to-point or a publish/subscribe messaging model; Data Web Services in this release support only the point-to-point model, using queues. Data Web Services developed using the no-coding approach in Data Studio can be deployed into a reliable messaging environment by defining the necessary artifacts in WebSphere Application Server to enable JMS.
Figure 18. Data Web Services using SOAP over JMS
Data Studio Developer adds the WebSphere DataPower SOA appliance as a target platform for deploying Web services. DataPower is a high performance, purpose-built, and easy-to-deploy network device that simplifies, helps secure, and accelerates XML and Web services (see Figure 19). You can now follow the same familiar development model as Data Studio 1.1 -- that is, drag and drop Web service assembly from database operations -- and choose to deploy the generated artifacts onto DataPower instead of a J2EE server. When deploying onto J2EE servers like WebSphere Application Server, Data Studio Developer generates a J2EE Web application (*.war) file. When you choose to deploy onto DataPower, Data Studio generates XSLT files (instead of a *.war file) that can be deployed onto DataPower.
This solution combines the best of both worlds: the simplified development model of Data Web Services with the wire-speed processing of DataPower for high throughput of Web services.
Figure 19. Deploy Data Web Services on WebSphere DataPower appliance
We've only scratched the surface of the key new features in Data Studio Developer 1.2. The best way to really get a feel for the enhancements described here is to download the trial version and play with it yourself. We also recommend that you take a look at a video one of our engineers put together that reviews many of the key features described here.
Table 1 summarizes the Version 2.1 feature support per database platform:
Table 1. Feature support per database platform
| Feature | DB2 for LUW | Informix Dynamic Server | DB2 for z/OS | DB2 for i |
| --- | --- | --- | --- | --- |
| Client optimization to enable static execution for existing JDBC applications | | | | |
| Problem determination for developers | Yes | Yes | Yes | Yes |
| Impact analysis for database applications | Yes | Yes | Yes | Yes |
| Web services-based access to DB2 using WebSphere DataPower SOA appliances | Yes | No | Yes | Yes |
| All other features | Yes | Yes | Yes | Yes |
Special thanks to Kathryn Zeidenstein for her contributions to this article.
- "IBM Data Studio software: The big picture" (developerWorks, July 2008): Get an overview of the IBM Data Studio software. See how you can use this new product to improve productivity, increase quality of service, and encourage greater alignment across IT roles.
- "'No Excuses' Database Programming for Java" (IBM Database Magazine, May 2008): Make your programs fly with static SQL and pureQuery.
- "IBM Data Studio pureQuery Runtime for z/OS Performance" (IBM Database Magazine, July 2008): Get a brief overview of pureQuery technology and the new client optimization features in Version 1.2 that extend the benefits of static SQL to any existing Java application.
- "What’s new and exciting in IBM Data Studio Developer 2.1" (developerWorks, December 2008): See how the new implementation of Data Studio Developer can improve your productivity and enable better collaboration between developers and DBAs.
- Demo of IBM Data Studio Developer 1.2 features: Watch a lead tools developer for Data Studio walk you through the new Data Studio 1.2 features.
- Participate in the discussion forum.
- Data Studio Community Space
- Data Studio Team Blog
Sonali Surange leads IBM Data Studio pureQuery tools in the IBM Data Server Tools organization at the IBM Silicon Valley Lab in San Jose, Calif. Previously at IBM, Sonali led the development of several key components of the Visual Studio .Net tools for IBM databases.
Rafael Coss is a Solution Architect from the IBM Data Studio Enablement team based in the IBM Silicon Valley Lab in San Jose, CA, where he is responsible for the technical development of partnerships and customer advocates for the IBM Data Studio portfolio. Previously, he held several other positions in IBM, including responsibilities for evangelizing DB2 pureXML capabilities to customer and partners. He joined IBM in 1998 and received his master's degree in Computer Science from California Polytechnic State University in San Luis Obispo, CA.
Vijay Bommireddipalli is a Solution Architect in the Data Studio Enablement team at IBM's Silicon Valley Lab in San Jose, CA, where he enables customers and partners on IBM Data Studio technologies. Prior to joining this team, Vijay was part of the pureXML enablement team. He joined IBM in 2000, after finishing his master's degree in Electrical and Computer Engineering at University of Massachusetts - Dartmouth.