While I was in Rome for IDUG EMEA, I had a chance to talk to a lot of customers about Data Studio and Optim products. One of the customers expressed some concern about the difficulty of maintaining the Optim software on each desktop that runs our Eclipse-based software offerings. I'm sure a lot of other customers are worried about this topic, and it caused me to realize that this is something we don't discuss much in our conference presentations or in our blog/Web content.
The good news is that we've actually done quite a lot in this area and are continuing to do more. Our install process for the Optim products allows you to operate in one of three modes:
- An install image can be uniquely installed on each desktop. This does require each user to install the product, but we support the Rational Installer update process, which lets users easily check whether updates are available on the internet and install them automatically, without having to manually download updates or feed CDs into their PCs.
- You can also choose to install the Optim products on a file server that acts as the production image of the product. A small launcher file is invoked on the end user's desktop, which will load the Optim product into the end user's PC from the file server. With this approach, you only have to maintain the copy of the product that is stored on the file server, and individual users are upgraded instantly each time you apply maintenance to the file server image.
- You can also choose to package the installer using tools like Microsoft's Systems Management Server (SMS) product to deploy the installation to multiple end user desktops. Here is a link to the documentation on this mode of installation in the Installation Manager Information Center. We’re working on getting this information into the Information Management installation documentation as well. We’re also working on creating sample scripts to aid in the packaging and deployment, reducing the development time that is typical when using these tools.
All three modes require less manual labor than the traditional technique of using CDs or DVDs to install and upgrade products. The last two modes fully automate the process for the end users, so that an organization only has to perform maintenance on the centralized file server image (mode 2) or maintenance occurs automatically as a deployable update (mode 3).
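For mode 3, silent installation is typically driven by a response file. Here is a rough sketch of what an IBM Installation Manager response file can look like; the repository URL, profile ID, offering ID, and install location below are placeholders for illustration, not actual product values:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hedged sketch of a silent-install response file. All locations
     and IDs are placeholders; consult the Installation Manager
     Information Center for the real values for your product. -->
<agent-input>
  <server>
    <!-- The repository can be the file server image from mode 2. -->
    <repository location="http://fileserver.example.com/optim/repo"/>
  </server>
  <profile id="Optim Products"
           installLocation="C:\Program Files\IBM\Optim"/>
  <install>
    <offering profile="Optim Products"
              id="com.example.optim.offering"/>
  </install>
</agent-input>
```

A packaging tool such as SMS can then push this file to each desktop and run the installer silently against it.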
In my dual role as CTO and VP of Data Servers, I spend a fair amount of time on the road talking to people about what's going on both with database server technology and with Integrated Data Management, from both a business and a technical perspective. Anyway, I like these road trips because they help me validate the business reasons behind the technical offerings we make. Because I meet with so many different people in different job roles (CTOs, CIOs, DBAs, architects, developers...), they help me calibrate the tradeoffs and priorities involved in getting a product or new functionality out the door, and to place that new capability in the context of real customer problems. In addition, no matter how many times I give the "IDM vision" pitch, I almost always get feedback that helps me make it just a little better the next time, such as finding out what part of the strategy might not be clear when people hear it for the first time. To be honest, these road trips also help me keep my technical chops sharp, since I have to make sure I'm up to date on the latest functionality and be prepared to answer some tough questions from the audience.
For this reason, I'm really looking forward to my all-day appearance at the upcoming Tridex DB2 Users Group on October 15. I start off with the vision pitch, because I find that helps to place the more technical talks in the correct perspective. Then, I'll go into detail on the integration work we are doing with WebSphere. This work is critically important in making the whole database/application server layer more efficient and easier to manage, as well as in giving DBAs more control. Then I go into a deep dive on Query Workload Tuner, our follow-on product to DB2 Optimization Expert. Finally, I'll close with a talk on using pureQuery for high-volume applications, in which I discuss the performance aspects of using pureQuery, such as heterogeneous batching, static support, and more.
If you're in the New York area, I think you would get a lot of value from attending this free event. Note that they do require you to register ahead of time, so go to the web site, download the invitation, fill out the registration form attached to that invitation, and email it in.
Hope to see you there.
I want to start by introducing myself, as this is my first of hopefully many entries on this blog. My name is Eric Naiburg, and I am responsible for product marketing strategy for the Optim solutions. I recently celebrated my one-year anniversary with the Optim group and my return to IBM. Prior to this role, I spent more than two years away from IBM, working for Dr. Ivar Jacobson as VP of Sales and Marketing for Ivar Jacobson Consulting, plus a brief stint at CAST Software as Director of Product Marketing.
In my previous role at IBM, I worked for Rational Software, spending four years at Rational pre-acquisition and three years as part of IBM. I held many roles at Rational, including product manager for the modeling solutions and product marketing for solutions and desktop products, just to name a few. Before joining Rational, I was the product manager for the ERwin data modeling tool at Logic Works, which was acquired by Platinum Technologies and later by CA.
I have also published two books with Addison-Wesley: UML for Mere Mortals and UML for Database Design. Both books were fun to write and provided great learning experiences for me and my co-author, Robert Maksimchuk.
So, now to the Blog….
Data Privacy - "The Untold Story"
Data protection and privacy continue to be a tremendous focus, and risk, for the IT community today. While organizations are making great strides to protect data privacy in production application environments, the "untold story" of implementing similar strategies in non-production (testing, development, and training) environments is often overlooked. When I talk to people at conferences and in meetings, I ask them, "How are you protecting the privacy of your data in development and test environments?" The scary result is that they often look to the floor, smile nervously, and say, "We know it is an issue and we know we need to do something, but it isn't at the top of our priority list at this time."
That is why I call it the "untold story." We know of the threat, but we don't do much about it until it is too late and a breach or loss happens. There is significantly more non-production data floating around organizations than there is data in production; it is used for testing, development, training, and more. Additionally, when the data is used, it may be copied into spreadsheets for use in automated testing tools or as manual testing inputs, exposing the data outside the database itself and creating an even bigger risk.
Because the data is being used and moved, it needs to be protected. Since testers, developers, and others can see the data, encryption just isn't enough; the data must be de-identified, or masked. Non-production data does not have to be real; it does, however, have to be realistic. The process of masking creates realistic data from your production data and, if done correctly, preserves referential integrity across a single database or an entire system. Masking does not prevent the loss or theft of data, but it makes the data worthless if that occurs.
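To make the "realistic, but not real" idea concrete, here is a minimal sketch of deterministic masking; this is my own illustration, not the Optim masking engine. Hashing each value with a secret salt means the same input always produces the same masked output, which is what preserves referential integrity across tables:

```java
// Illustrative sketch only; not the Optim masking engine.
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class MaskDemo {

    // Mask a 9-digit identifier into another realistic-looking 9-digit value.
    static String maskNineDigitId(String id, String secretSalt) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] hash = md.digest((secretSalt + id).getBytes(StandardCharsets.UTF_8));

        // Fold the first 8 hash bytes into a non-negative number.
        long n = 0;
        for (int i = 0; i < 8; i++) {
            n = (n << 8) | (hash[i] & 0xFF);
        }
        n = (n & Long.MAX_VALUE) % 1000000000L; // keep it to 9 digits

        return String.format("%09d", n); // realistic, but not real
    }

    public static void main(String[] args) throws Exception {
        String salt = "keep-this-secret";
        // Same input -> same output, so joins across tables still match.
        System.out.println(maskNineDigitId("123456789", salt));
        System.out.println(maskNineDigitId("123456789", salt));
        System.out.println(maskNineDigitId("987654321", salt));
    }
}
```

A real masking tool does much more (format preservation for names and addresses, lookup tables, and so on), but the deterministic property shown here is the heart of keeping masked data consistent system-wide.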
So, to keep your organization from becoming the next negative data-theft headline, mask your non-production data, making it realistic but not real.
My kids told me the other day that they can tell it's Fall because my calendar is full of travel for work. The good part about this travel is that there are lots of opportunities to meet with customers as well as other IBMers at two big conferences: IDUG Europe and Information on Demand (IOD).
IDUG Europe is in Rome this year, and the only bad part is that I can't stay in Italy longer! I have to leave Thursday morning so I can attend some Boy Scout leader training Friday through Sunday. Hopefully, I can sneak out in the early morning and see some of the cool sights again, like the Forum. IOD is in... SURPRISE, Beautiful Las Vegas! OK, I guess that's no surprise. It's interesting to hear the reactions from folks about Las Vegas; it seems like a love-or-hate relationship. As artificial and overindulgent as Vegas is, I enjoy it. I enjoy the food (ask me for restaurant suggestions), the Fall weather there, the safety of walking along The Strip at any time of the day or night, and, most of all, the people watching. There's nothing better than getting a good seat with my favorite beverage and watching the crowd parade past.
OK, back to the topic of technical conferences. I have a friend who works as a developer for a DBMS competitor (no need to ask ;-)), and he's always jealous when he hears how much my fellow developers and I get to go and present at these conferences. At his company, they mostly let only product managers present to customers and leave the geeky developers back at the lab. There is very little that can substitute for face-to-face contact with the people who are working on the products. I'm glad that IBM lets us geeky folks out of our cages for these events. Conferences seem to be suffering from lower attendance (no surprise, considering the economy), but that's really unfortunate when you consider the value that can be gained. The conference web sites usually have an ROI justification to help with this, and I really think that the focus, the proximity to knowledgeable experts, and the hands-on labs do justify the costs.
I just plotted out the sessions that I plan to attend at IDUG. I always like how IDUG has a high number of customer presentations; I always enjoy listening to those. IOD also has a lot of customer talks this year. Curt mentioned some of them in his blog post.
Well, I hope to see you in Rome or in Vegas. In both places, I have a "new" session called "Why are DB2 for z/OS DBAs interested in Data Studio and Optim Tooling?" (IDUG Rome: Monday, Oct 5th; IOD: Monday, Oct 26th -- as always, check the monitors for last-minute gate changes when you arrive at the airport). I felt this tailoring of a session for DB2 for z/OS DBAs was needed because we have a lot of stuff labeled Optim, and not all of it is of interest to a DB2 for z/OS DBA as of today. I also take a no-glitz look at the free Data Studio offering and show exactly what it can be used for on a DB2 for z/OS system. Holger Karn will join me at the IOD session to talk about some performance monitoring futures that I know will get you excited. At IOD, I have another session with Jay Bruce called "Query Tuning on IBM DB2 for z/OS" that discusses the task and shows tooling that can help you do it.
Hope to see you there!
I know you are all probably getting bombarded with news about the upcoming Information on Demand NA conference in Las Vegas, but I wanted to make sure you were aware of some highlights from an Integrated Data Management perspective. I'll be joining my colleague Al Smith, originally from Princeton Softech, in presenting our strategy and vision (I need to earn my flight there, after all), but I think what's even more interesting is the customer speakers who are lined up. There's a broad representation across industries of companies that are focused on different aspects of our solutions. For example:
- Scotiabank Speeds Application Testing and Protects Data Privacy with IBM Optim (BFM-2345)
- State Farm Optimizes App Performance with Optim Development Studio, pureQuery (TDM-1718)
- Efficient Test Data Management: An ROI Success Story (Travelport) (TDM-2841)
- Evaluating Java Data Access Technologies at Handelsbanken (TDM-2177)
- A Day in the life of a DBA - how to keep your sanity (Blue Cross Blue Shield) (TDM-1499)
Also, if you want to get your hands dirty, please reserve your seat in some of the hands-on labs. I know our technical enablement team has some great labs lined up that include integration of some of the products into solutions, such as:
- Accelerating Java Applications (HOL-1276) which talks about using Optim Development Studio, Optim Query Tuner, Optim pureQuery Runtime, and Performance Expert Extended Insight together. This is a great use case that DBAs should be aware of. (It actually reflects pretty closely the Java Acceleration Solution demo.)
- Model-driven data governance using InfoSphere Data Architect and Optim (HOL-1277) includes the Optim Test Data Management and Data Privacy Solutions together. If any of you are responsible for data privacy or are looking for ways to help 'bake in' privacy safeguards starting at design time, then this is the lab for you.
If you want to talk to me, be sure to get your seat reserved at the Meet the Experts on Tuesday afternoon (you can enroll through the SmartSite link below). It will be a very busy week, but I think you'll find it worthwhile.
To help you with planning, here are the key links:
- Integrated Data Management roadmap to the conference (print it out and take it along with you)
- IOD Conference SmartSite, to help you plan your agenda and enroll
- IBM Optim on Twitter (this is already active, but we'll use it to keep you informed of updates at the conference)
- IOD 2009 on Twitter (for the bigger picture on conference activities)
Look forward to seeing you.
It’s only been a few months since our last release, and my team and I have been busy taking our products closer to realizing the Integrated Data Management vision for integrated lifecycle development and heterogeneous database support.
So what's different in this release? With the announcement and release of Optim Development Studio 2.2 (formerly Data Studio Developer), we added:
- Features that help developers and DBAs work together to productively create high-performing Java data access code on Oracle, DB2, and Informix Dynamic Server databases.
- Speedy iterative testing
- Enhanced impact analysis
- Support for building and debugging SQL/PL procedures to run against either DB2 9.7 or Oracle databases.
As the architect for pureQuery tools, I wrote a new developerWorks article that goes into some detail on the new capabilities, mostly from a pureQuery perspective. (You can see more about the other Oracle support capabilities in Venkatesh's blog.) I'll give you some highlights of this release from a pureQuery perspective, in hopes that it will convince you to go read the article and download the trial code!
Some of the important features:

Oracle pureQuery support

The big news, of course, is support for pureQuery capabilities on Oracle databases: pureQuery code generation; SQL content assist, validation, and all the editing capabilities (now also available for JDBC, and for native SQL in JPA and Hibernate applications); client optimization; dependency analysis; and hot spot analysis. If you aren't familiar with these capabilities, my article reviews them.
The screenshot below shows a sample pureQuery application that was run against Oracle. Using the SQL outline (formerly called the pureQuery outline), you can see the performance metrics from the Oracle queries. You can also see predicted cost using the EXPLAIN Data option. (That EXPLAIN Data option is new for DB2 and IDS as well.)

Visibility of data privacy attributes to developers
The other interesting integration work that was done is the ability to maintain data privacy attributes from modeling through development and test. Anson Kokkat touched on this in his recent blog. Production databases often contain sensitive information such as credit card numbers or social security numbers. When data architects create data models for such databases using InfoSphere Data Architect, they can identify which attributes or columns contain sensitive information and specify appropriate privacy policies to be used with them.
By associating this model with their database, developers can easily see which columns are identified as containing sensitive information and this can help them maintain compliance in how they handle that data in their applications. This protection also extends further to their applications - they can see how those private columns are being used in context to ensure they are not doing something inappropriate, such as printing out data in those columns.
The screenshot below shows the private columns in the SQL outline (a little padlock icon is used to indicate a private column), their privacy properties, and how you can navigate to the model to get more information.

Other key capabilities include:
- Ability to copy and paste objects or data subsets as an aid to developers. If you need something more heavy duty, the copy/paste wizard can generate a script that can be used with Optim Test Data Management solutions. (Note that this capability is currently available only with DB2 LUW 9.7 and Oracle.)
- For DB2, you can now specify that literals be replaced with parameter markers as part of the SQL capture process. Because this makes the statement less 'unique', it is now eligible to be bound statically (sketched below). This was a requirement we had from several customers. Also, we've made a lot of enhancements in package management to give you more granularity in identifying and rebinding only the packages that have been impacted by a change. In addition, you can bind in the background.
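To illustrate the idea, here is a generic sketch (hypothetical table and column names, not the exact capture syntax):

```sql
-- As issued by the application: each distinct literal makes the
-- statement "unique," so these are captured as different statements.
SELECT NAME FROM CUSTOMER WHERE ID = 1001
SELECT NAME FROM CUSTOMER WHERE ID = 2002

-- With literals replaced by a parameter marker during capture, both
-- collapse into a single statement that can be bound statically:
SELECT NAME FROM CUSTOMER WHERE ID = ?
```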
There are really a lot more enhancements, but you’ll need to read the article – I don’t have room to list everything here. We’re also working on some videos that will show these features in action.
Did you know it's easy to upgrade to the newest IBM Optim Database Administrator 2.2.1? It just became available at the beginning of September, and there's a web install available here. If you already have the product installed, we recommend upgrading; by choosing the Web install (circled below), you'll only need to download about 20 MB.
If you haven't tried the product yet, what are you waiting for?
On the heels of Vijay's virtual tech briefing on Optim Development Studio 101 (you can register for the replay here, if you missed it!), I'm going to take you on a deep dive into one aspect of using the product: SQL stored procedure development, with a focus on z/OS. Marichu Scanlon from our continuous engineering team will be on board to help answer questions.
There has been a lot of interest in this topic because many people use stored procedures to encapsulate business logic and improve performance, and, with DB2 for z/OS V9 native SQL procedures, to reduce cost, because those procedures are zIIP-eligible.
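For context, here is a minimal sketch of what a native SQL procedure looks like (DB2 for z/OS V9 style; the EMP table and WORKDEPT column are just sample names):

```sql
-- A native SQL procedure: the body is SQL PL and runs in DB2 itself,
-- with no external load module.
CREATE PROCEDURE GET_EMP_COUNT
  (IN  P_DEPT  CHAR(3),
   OUT P_COUNT INTEGER)
  LANGUAGE SQL
BEGIN
  SELECT COUNT(*)
    INTO P_COUNT
    FROM EMP
   WHERE WORKDEPT = P_DEPT;
END
```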
The first of the two sessions of this briefing will be on September 24. It will last about one hour and will cover creating, deploying, running, and working with existing stored procedures. In this session I will also answer many of the commonly asked questions I have seen in the forums and in customer interactions, and of course I will take your questions during the event itself. Keep in mind that during the September 24 session I'll only touch briefly on how debugging fits into the stored procedure development life cycle, since we will have a follow-up deep dive session on October 22 that focuses on how to enable debugging in a z/OS environment. The event is free, and you can register here to get the details on accessing the event.
Look forward to seeing you at the tech briefing!
In case you missed the DB2 Chat with the Lab a couple of weeks back, which starred our own Deb Jenson and Manas Dadarkar discussing and demoing Data Studio and other capabilities for database administration in the Optim portfolio, the replay is now available from ChannelDB2, from which you can also download the charts. This is a great introduction to Data Studio for those of you who may not have taken the plunge yet or are interested in hearing more about specific support for DB2 9.7. Even better, download Data Studio and try it out yourself! You can find links to both downloads (the stand-alone and the IDE packages that Srini blogged about a while back) on the Integrated Data Management Community Space download tab.
Today's entry is inspired by a recent Dilbert cartoon where the pointy-haired boss tells Dilbert that he needs to get better at anticipating problems. While we'd all like to see problems before they happen, we need a little help here, and inspiring words from the pointy-haired boss just don't cut it.
Today's DBAs have a lot of responsibility, arguably more than they have had in the past in terms of the number and complexity of the systems they manage. Most DBAs have implemented early-detection mechanisms for production systems, but what about non-production or less-critical systems, like development or test systems? These are often called "non-critical systems" until a severe issue occurs, at which point they suddenly become critical because they are preventing new work from being implemented on schedule. Sometimes it may be difficult to justify the cost of robust monitoring software like DB2 Performance Expert, Tivoli OMEGAMON for DB2, or IBM Tivoli Monitoring for these so-called "less-critical" systems, so what's a proactive DBA to do?
One solution is the Data Studio Administration Console (DSAC). It is a no-charge offering with your data server license that supports DB2 for z/OS and DB2 for Linux, UNIX, and Windows with an "at-a-glance" view of the health and availability of these systems. It is not a full-blown performance monitor, but it does show several key indicators, such as whether the system is up or down, locking rates, and resource utilization.

In related news, although DSAC used to be the delivery vehicle for the Q Replication Dashboard, we have just made available a new and improved Q Replication Dashboard. One of our Gold Consultants, Frank Fillmore, will be discussing this dashboard in a webcast with IBM on September 15 (two sessions, to accommodate different time zones). Get the details from his blog.
With this change, you might be asking what other changes are in store for DSAC. You may have heard us talking about our next-generation performance manager. It has a new architecture along with a web browser interface that will support DB2 and, eventually, other DBMSs. Once we roll out this performance manager (be sure to attend IOD to find out more), we plan to use this new architecture for the next release of DSAC. It will still provide the same high-level health and availability capabilities that DSAC 1.2 provides today, but the Web user interface will be refreshed and consistent with our other Web UI offerings.
So, don't let the pointy-haired boss get you down the next time they ask you to anticipate problems better -- just smile, thank them for their leadership, and go take a look at DSAC to prevent those critical situations.
Many of us on this blog talk about what IBM is doing to improve the Java database applications space, both from an application development and database administration point of view.
Here is the way I like to categorize the areas in which these improvements are seen:
- A comprehensive monitoring capability that gives you a broad, yet contextual perspective on database and application performance, and early warning of emergent problems.
- Better insight into applications, and the ability to stabilize performance and improve resource utilization in a non-invasive way.
- Building performance into the applications early on in the development cycle.
Performance Expert with Extended Insight, pureQuery, Optim Development Studio, and Query Tuner are all important components of the solution that we call the Java Acceleration Solution. But if you are a big-picture person and want to see how these components work together, we put together a 15-minute video on developerWorks that we hope you'll find entertaining, as well as a good illustration of that bigger picture, as you follow along with our friends at the fictional Great Outdoors Company.
By the way, we also developed some hands-on lab material that walks you through these scenarios so that you can explore and try it out yourself. If you're interested, we'll be offering this as a lab at the Information on Demand Conference in Las Vegas this year. If you are attending and would like to try it out, be sure to look for the Java Acceleration Solution Hands-on Lab (HOL-1276) in the Integrated Data Management Track of IOD.
Our team has been working on some new articles detailing Oracle support in our Optim Development Studio offering. The article by Sonali covers the pureQuery (development, runtime, and tooling) aspects for Oracle databases. A new article by Thomas Sharp details the SQL development aspects with Oracle: connecting to Oracle using the Oracle JDBC and DataDirect JDBC drivers, and working with (creating, deploying, running, and debugging) PL/SQL stored procedures, functions, and PL/SQL package entities. The article also has information on using PL/SQL artifacts with an Oracle-compatible DB2 9.7 database. The PL/SQL-based enhancements in ODS make it convenient for you not only to develop against Oracle but also to switch easily between Oracle and DB2 9.7.
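If you just want a quick sanity check before diving into the tooling, a minimal JDBC connection to Oracle looks like the sketch below; the host, service name, and credentials are placeholders of my own, not anything from the articles:

```java
// Minimal Oracle JDBC connection sketch; assumes the Oracle thin
// driver JAR is on the classpath. All connection details are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;

public class OracleConnectDemo {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@//dbhost.example.com:1521/ORCL";
        try (Connection con = DriverManager.getConnection(url, "appuser", "secret")) {
            System.out.println("Connected to: "
                    + con.getMetaData().getDatabaseProductVersion());
        }
    }
}
```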
Give the article a read and try out the scenarios described (for some live demos, refer to this video recording). As usual, stay tuned for more updates.
As lead product manager for the Optim and Data Studio administration capabilities, I have been getting lots of questions about Data Studio ever since the deprecation of the DB2 Control Center was announced. I think some people haven't gotten the word yet about the capabilities in Data Studio 2.2 that fill big gaps in the administrative capabilities that were available from Control Center. I did blog about some of these capabilities in a previous posting, but I'd now like to give you more information and a demo of some of these capabilities, as well as to talk about how easy it is to add on more capabilities to help you get your job done.

To that end, I'd like to invite you to join me and a Data Studio architect, Manas Dadarkar, for a DB2 Chat with the Lab event on August 27th.
Looking forward, it's true that, as of today, Data Studio does not have all the capabilities that Control Center does, but I think it covers a large percentage of what most people need to do on a day-to-day basis. I recommend that you install Data Studio in one of its flavors (IDE or stand-alone) alongside Control Center so you can get familiar with using it. In the meantime, we'll be working hard on our side to fill the gaps that affect most users.
Hi, everyone. Glad to be back after some traveling. I wanted to give you a heads-up on a virtual tech briefing that I will be giving next week, focusing on an overview of Optim Development Studio. We have a special guest speaker, Nik Teshima, who is the product manager for Rational Application Developer for WebSphere Software (RAD). If you're wondering why we invited Nik to this briefing, it's to help address questions we hear a lot from people who don't quite understand the complementary relationship between RAD and Optim Development Studio.
I will walk you through some of the capabilities that Optim Development Studio adds to RAD's robust Java development environment, and explain why it moves into the "must have" category if you are doing any kind of database access development.
You will also see how the two products shell-share seamlessly (along with MANY other Rational and Optim products) -- you can install them alone or together in a modular fashion to include the capabilities you need. Anyway, please join Nik and me next week, on Thursday, August 20 at 1 PM Eastern, for the Integrated Data Management Virtual Technical Briefing on Optim Development Studio 101, where I plan to demo some of these capabilities and will be happy to answer your questions.
Hi, I'm the lead architect for Optim pureQuery Runtime, and I want to start using this blog to help address questions that I get from people as they learn about or use pureQuery capabilities. In this first post, I'll discuss the new SQL replacement capability.
By using client optimization, an administrator can modify the SQL from a captured application. The enhanced tooling to support this capability is described in Sonali's article, What's new and cool in Optim Development Studio 2.2. The intended usage of this feature is to let a DBA change an SQL statement without needing to edit and recompile the application. This could be useful, for example, in late-night or weekend emergencies when an application can't easily be changed. It is also useful in cases where a third-party application embeds or generates sub-optimal SQL and a change to the application is not possible without contacting the vendor. In any of these cases, you should still aim to change the application itself to use the improved SQL at the first practical opportunity.
When I talk about this capability to people, there are two questions that frequently come up:
- What is the extent of the change I can make to the SQL?
- Isn't there a security risk to allow editing of the SQL? How can we control access?
Great questions. I'll discuss each of them separately.

What can I change in the captured SQL?

There are restrictions on what you can change when creating the replacement SQL. The Optim Development Studio pdqxml editor will prevent many of the restricted changes, which is why it is strongly recommended that you use this editor to create the replacement SQL. The primary restrictions on the replacement SQL are:
- You may not change the SQL statement type. For example, you can’t change a SELECT to an INSERT.
- The number and types of any input parameters or output result columns must be unchanged. For example, if your SELECT statement expects a result of two columns, CHAR and INT, your changed SELECT statement must also expect a result of two columns, CHAR and INT.
At first glance, these restrictions may seem to greatly narrow the capability. But when you think about it, changes like those described above couldn't work very well anyway without corresponding application changes, at which point you would likely just change the SQL directly in the application.
Nevertheless, there are quite a number of useful changes you can make that do not violate the restrictions (see the sketch after this list). You can:
- Influence access path / index usage:
- Add an ORDER BY clause
- Add OPTIMIZE FOR 1 ROW
- Other "tricks" to influence the DB2 Optimizer (like adding OR 0 =1 to a predicate)
- For Oracle: add a comment hint to the statement
- Influence fetch size for distributed queries:
- Add FETCH FIRST n ROWS ONLY clause
- Add an OPTIMIZE FOR n ROWS clause
- Add FOR FETCH ONLY or FOR UPDATE clause
- Add a predicate that narrows the data returned - just be sure it uses literals and not parameters
- Change the locking behavior:
- Add WITH ISOLATION clause
- Add SKIP LOCKED DATA clause
- Add FOR UPDATE clause
- Directly manage the schema name for object references:
- Add or change the schema qualifier on a table or other object reference.
- Help manage EXPLAIN DATA.
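Here is the sketch promised above, with a hypothetical ORDERS table. The replacement obeys the restrictions: it is still a SELECT, with the same single input parameter and the same two result columns; only the optimizer and fetch behavior are influenced:

```sql
-- Original captured statement:
SELECT ORDER_ID, STATUS
  FROM ORDERS
 WHERE CUST_ID = ?

-- A valid replacement: same statement type, parameter, and result
-- shape, with clauses added to influence the access path and limit
-- the rows fetched.
SELECT ORDER_ID, STATUS
  FROM ORDERS
 WHERE CUST_ID = ?
 ORDER BY ORDER_ID
 FETCH FIRST 10 ROWS ONLY
 OPTIMIZE FOR 10 ROWS
```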
How can we control access?
Some people have expressed concern that there is a security risk involved with the ability to change captured SQL. While there is some potential for abuse, there are means for controlling changes. I’ll discuss some control points within the context of how this feature can be used with either static or dynamic SQL.
Client optimization is most frequently used to convert dynamic SQL execution to static execution. To use SQL replacement here, you must modify the SQL before performing the bind operation; any changes made to the SQL in the capture file after the bind are ignored. So control over the capture file contents needs to be managed across the capture/configure/bind process. In many instances, this is done by the same administrator. Ultimately, the bind is performed by an administrator with the authority to perform all the SQL contained in the file. This is not very different from a traditional 3GL program deployment, and any bound SQL is then visible for inspection in the DB2 catalog.
There are scenarios where client optimization is used (for example, to gather performance metrics) even when the eventual execution mode is dynamic SQL. In these cases, the capture file is examined at execution time for the existence of replacement SQL; if present, that SQL is used in the prepare and execute. But you can disable that execution-time replacement: a pureQuery configuration property, enableDynamicSQLReplacement, controls whether it is allowed. The default is false, so you have to explicitly turn on execution-time replacement. This gives you control at a data source or application level.
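As a sketch, turning this on might look like the following in a pureQuery properties file. The file layout and the companion option names here are my assumptions about typical pureQuery option syntax; only enableDynamicSQLReplacement itself comes from the discussion above:

```
# Hedged sketch of pureQuery runtime options; verify the exact syntax
# against the pureQuery Runtime documentation for your release.
defaultOptions = pureQueryXml(capture/myApp.pdqxml) executionMode(DYNAMIC) enableDynamicSQLReplacement(TRUE)
```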
An important point here is that all the basic building blocks of security remain in place. That is, SQL privileges are still necessary to execute the bound packages or the dynamic statements. But even so, additional care must be taken. To prevent unexpected changes to the capture file, you must control write access to it; it can be locked down on the executing server by making it read-only. It is also important to control the updateability of execution-time properties that can affect the application's execution. The capture file and any application properties files need to be thought of, along with any executables, as a collection of related resources, all of which need protection.
I hope you’ve found this useful. Let me know if there are other questions you have about pureQuery, and I’ll do my best to answer them.
-- Bill Bireley