“Who moved my cheese?” If you are a Maximo Asset Management or Industry Solutions customer who has added SmartCloud Control Desk to your environment, you might feel like that cheese-less mouse. SmartCloud Control Desk makes many global changes to the user interface during its installation. For instance, there is now a Navigation bar on the left side of the user interface. All of these global changes can be reversed by updating the system properties of your Maximo Asset Management server.
Here is a document that describes all of the new user interface changes and how to control them, so you can decide whether to keep them or revert to your previous look and feel. You change these properties in the System Configuration -> Platform Configuration -> System Properties application. You can perform a Live Refresh after you update these values (it does not require a server restart), but you will need to log out and log back in before your updates take effect.
Link to document:
Starting with release 8.6, a command-line utility is available that extracts reports in HTML, PDF, or CSV format from the TWS database.
It is simple to install and simple to use. You can just untar the package and configure a few properties files to point to the database.
The command line comes with some templates for extracting historical data. These templates are properties files in which you configure the filters for jobs, workstations, the time interval, and so on.
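As an illustration, a template might contain entries along the lines of the following sketch. Only REPORT_OUTPUT_FORMAT appears elsewhere in this post; the filter key names below are hypothetical, so check the templates shipped in the package for the real ones.

```properties
# Hypothetical excerpt of a report template (.properties file).
# Output format can be html, pdf or csv:
REPORT_OUTPUT_FORMAT=pdf
# Hypothetical filter keys (job name pattern, workstation, time interval):
JOB_NAME=FC_CLU*
WORKSTATION_NAME=*
DATE_FROM=2014-01-01
DATE_TO=2014-01-31
```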
Documentation link on how to untar the reportcli image. Note: it is important to assign the permissions and ownership to a user as explained.
Documentation link on how to run the CLI with some templates:
After untarring the reportcli image, all the available templates are under:
The reportcli.sh (or reportcli.bat) script is under <REPORTCLI_DIR> and can be run directly in a shell (or command prompt), or it can be scheduled with a TWS job (a classic or dynamic job, with an FTA or a Dynamic Agent respectively).
When you run reportcli.sh (or reportcli.bat), ensure you are logged on to the shell as the user that owns all the directories and files. This needs to be the same user you will use when you schedule a job to run reportcli.sh.
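As a sketch of the scheduling option, a classic job definition for an FTA could look like the following. The workstation name REPCLI_WS, the job name REPORTCLI_JRS, and the logon user twsuser are placeholders invented for this example; the script invocation mirrors the report commands shown later in this post.

```
# Hypothetical classic TWS job definition (composer syntax).
# REPCLI_WS, REPORTCLI_JRS and twsuser are placeholders for your environment.
REPCLI_WS#REPORTCLI_JRS
 SCRIPTNAME "/opt/IBM/reportcli/reportcli.sh -o /tmp/ -r jobrunstatists -p /opt/IBM/reportcli/reports/templates/jrs.properties -k REPORT_OUTPUT_FORMAT=pdf"
 STREAMLOGON twsuser
 DESCRIPTION "Extract the job run statistics report"
 RECOVERY STOP
```

Remember that twsuser must be the same user that owns the reportcli directories and files, as noted above.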
There is one report template for each report type. Note that some report layouts contain charts, while others contain only tables.
All reports can be extracted to all the output formats (HTML, PDF, CSV) and the output can be sent via email too.
All the key/value pairs contained in the templates can be overridden by passing key/value pairs on the command line.
Run a custom SQL report by passing your own query on the fly. The following example extracts all job definitions into the file jobdefinitions.pdf in the /tmp directory:
./reportcli.sh -o /tmp/ -r jobdefinitions -p /opt/IBM/reportcli/reports/templates/sql.properties -k QuerySQL="select JOD_NAME,JOD_TASK_STRING from MDL.JOD_JOB_DEFINITIONS where JOD_NAME like '%FC_CLU%'" -k REPORT_OUTPUT_FORMAT=pdf
Run a job run statistics report:
./reportcli.sh -o /tmp/ -r jobrunstatists -p /opt/IBM/reportcli/reports/templates/jrs.properties -k REPORT_OUTPUT_FORMAT=pdf
An example of a workstation workload summary report:
/opt/IBM/reportcli/reportcli.sh -o /tmp/ -r workloadsummary -p /opt/IBM/reportcli/reports/templates/wws.properties -k REPORT_OUTPUT_FORMAT=pdf
If you have questions, please ask on this blog.
Hi, this is to inform you that there is a new PDF document on the TWS WIKI (path Tivoli Workload Scheduler > TWS Distributed > Distributed-driven scheduling on JES) that compares the two solutions available to schedule workload from TWS distributed on JES of z/OS. In fact, there are two products that can accomplish this task: Tivoli Workload Scheduler for Applications for z/OS and the Tivoli Workload Scheduler distributed Agent for z/OS. If you have Tivoli Workload Scheduler and a z/OS system where you would like to schedule workload that does not require the use of the Tivoli Workload Scheduler for z/OS apparatus, either may provide an interesting solution. The TWS distributed Agent for z/OS, being of a newer design, is better integrated with the concept of dynamic scheduling. For this reason, the document will soon be extended with information on how to migrate from the extended agent of TWS for Applications to the newer agent for z/OS.
FlorianaFerrara
or go to registration link
dplantz
It is officially live! The IT Service Management (Tivoli) support and development staff are contributing Q&A daily in IBM's dWAnswers forum. Check it out at https://developer.ibm.com/answers/questions/index.html Simply type in a "tag", which is usually your product's acronym, to see what topics exist. For example: IBM Tivoli Monitoring version 6 is simply ITMv6
dplantz
Looking for answers for your Tivoli (now ITSM) products? Try dWAnswers, where customers and support staff help each other!
nicochillemi Tags:  iwsz hclautomation plugin iws batch_applications workloadautomation batch_processing batch_modernization
Ahead with innovation!
I think workload automation is probably one of the biggest challenges we must engage with if we want to move with the times in information technology. I strongly believe we are witnessing a major innovation in our IBM Workload Automation solutions developed by our partner HCL.
When I presented this topic for the first time, one of my colleagues, @OzgunOdabasi, said something as I finished talking that I really liked. He said, “Why don’t you begin from the end with your presentation?” I started thinking about this and eventually convinced myself that Ozgun’s suggestion would be totally effective! Therefore I decided to start this series of blog posts from the conclusion, so let’s begin by looking at this picture:
I am pretty sure some of you saw this and thought, “Oh my goodness!” This thought was probably the same whether you work in the mainframe or distributed systems area. The first thing we notice is the name of the file, JOB3FTP, which makes us think of a file transfer job even without specific skills. The second thing is the content, which requires some skill to understand. This is a file transfer batch job, started from an IBM z/OS system, written in XML and designed to be executed outside of z/OS.
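To give the flavor of such a job where the picture may not render, here is a heavily simplified, purely illustrative fragment. This is not the real IWS job definition schema; every element name, host, and file below is invented for illustration:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative only: element names and values are invented,
     not the actual IWS schema for file transfer jobs -->
<jobDefinition name="JOB3FTP">
  <fileTransfer>
    <source host="zos.example.com" file="PROD.OUTPUT.DATA"/>
    <destination host="linux.example.com" file="/data/in/output.dat"/>
    <protocol>sftp</protocol>
  </fileTransfer>
</jobDefinition>
```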
It isn’t easy to find people with both mainframe and XML skills, which raises a question: who is going to write such a job to be initiated from z/OS and executed outside of it? A young graduate will probably enjoy this kind of work in a z/OS job library, using hexadecimal characters and being careful with continuations, because he is learning something and doesn’t mind the challenge. But the reality in IT is quite different, since the number of jobs to write can be very large, and writing them can be tedious work. So we need a way to automate this process.
End-to-end batch processing
In my next post I will get back to the role of XML, which illustrates the power of the workload automation plug-in innovation. But first it is important to give a picture of the end-to-end batch processing concept.
Traditionally, batch processing in computer science refers to the ability to run complex processing “in silence,” without any human interaction. This year at IBM we are celebrating the 50th anniversary of the mainframe, and we could also say that batch processing was born in 1964 with the mainframe.
Batch processing has solved a lot of problems and saved mainframe customers a great deal of time thanks to its unattended nature (meaning that humans do not have to oversee it). IBM Workload Scheduler (IWS) automates the scheduling, taking into account all the prerequisites, so that there is no need to schedule batch jobs manually.
For more than 30 years batch processing was confined to the mainframe, but in the 1990s there was a rapid explosion of distributed platforms, from UNIX to Linux, from Windows to proprietary applications, extending the need for batch processing to distributed environments as well.
IBM and HCL offer IWS to completely automate batch processing independently of the target platform. It automates, monitors and controls workflow throughout the enterprise IT infrastructure.
IWS supports many types of configurations, so a job can run in any type of platform: z/OS, Linux, Windows and so on. So end-to-end batch processing is the ability to start jobs from any chosen platform and execute them on other platforms. The most common end-to-end batch configuration is based on an engine located in IBM z/OS and several agents installed in the target z/OS and distributed platforms. This is IBM Workload Scheduler for z/OS.
Between 1990 and 2000 the IBM Workload Scheduler was enhanced in a multiplatform direction for the first time. That enhancement was a big innovation for the period, because it made it possible to start simple scripts or batch commands from z/OS and run them on UNIX, Linux, Windows and OS/400 systems, and of course in the cloud.
In the second part of this series, we will continue our discussion on the evolution of IBM Workload Scheduler and further innovations in workload automation. We will talk about the application batch processing workloads, XML and the innovation of plug-ins.
Please stay tuned, and share your thoughts with me on Twitter @nicochillemi.
by Simone Guedes
Authorized ComputerSystem assets and configuration items can be created in a few different ways, such as procurement, promotion, the UI, and the integration framework. Duplicate assets and configuration items might be created in any of these processes, and these duplicates can cause problems in managing the assets and configuration items.
In SmartCloud Control Desk 7.5.1, we solved this problem. An integration identifier (DIS GUID) is generated for ComputerSystem assets and configuration items whenever new ones are created or when naming attributes are modified. We can check whether other assets and configuration items exist that have the same DIS GUID, and take appropriate action when duplicates are discovered.
When an authorized ComputerSystem asset or configuration item is saved, the system cleanses any naming attributes, computes an integration identifier (DIS GUID) for the object, and uses this GUID to check whether there is a duplicate of the asset or configuration item in the system.
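As a rough sketch of the idea (not the actual SmartCloud Control Desk algorithm; the cleansing rules and the hash function are assumptions chosen only for illustration), duplicate detection based on an identifier derived from cleansed naming attributes could look like this:

```python
import hashlib

def compute_guid(naming_attributes):
    # Illustrative stand-in for the DIS GUID: cleanse the naming
    # attributes (trim whitespace, fold case, drop empties) and hash
    # them into a deterministic identifier.
    cleansed = sorted(str(v).strip().lower() for v in naming_attributes if v)
    return hashlib.md5("|".join(cleansed).encode("utf-8")).hexdigest()

def find_duplicates(records):
    # records: list of (record_name, naming_attributes) pairs.
    # Returns identifier -> record names, for identifiers seen more than once.
    by_guid = {}
    for name, attrs in records:
        by_guid.setdefault(compute_guid(attrs), []).append(name)
    return {guid: names for guid, names in by_guid.items() if len(names) > 1}
```

Because the attributes are cleansed before hashing, records named "Server01 " and "server01" map to the same identifier, so saving the second one would be flagged as a duplicate.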
If a duplicate is detected, then the system consults configuration properties to determine whether to:
Configuring enhanced reconciliation
Note: Out of the box values of these system properties are:
On July 27, 2012, IBM Software released the Base Services 22.214.171.124 fix pack for Maximo Asset Management, Maximo Asset Management Essentials, and Tivoli Asset Management for IT.
L2 Support and Development strongly recommend that you apply this maintenance to ensure your system has the latest fixes and enhancements.
For more information on this fix pack, including links for download, prerequisites, and considerations, see the download document located here: http://ibm.co/mx71111
Do not install Tivoli's process automation platform 126.96.36.199 with SmartCloud Control Desk.
Fix pack 188.8.131.52 for Tivoli's process automation platform is now available. This fix pack is not backwards compatible with prior releases. SmartCloud Control Desk version 7.5 is based on Tivoli's process automation platform 184.108.40.206. If you install the 220.127.116.11 fix pack, several product functions will be broken.
This information is also available in a flash from IBM support.
A new module has been released, titled: Running level one IBM Tivoli Monitoring scope sensor
Has anyone checked out the Tivoli presence for IEA training lately? We have >380 modules across 53 products!
Tivoli IEA Education Assistant
nicochillemi Tags:  batch_applications plugin iwsz iws hclautomation batch_processing batch_modernization workloadautomation
In my previous post on innovation in workload automation, I talked about the challenges of job definitions and how end-to-end batch processing has helped over the last 50 years. IBM Workload Scheduler (IWS) can automate batch processing independent of the target platform to monitor and control workflow throughout the enterprise IT infrastructure. In this post I want to continue talking about the evolution of IBM Workload Scheduler and how new innovations have helped with the challenges of job definition.
Application batch processing
In the early 2000s, the ability to schedule simple scripts or commands was no longer enough. In fact, there was a small number of typical common applications, like SAP, Oracle, IBM Storage Manager and PeopleSoft, that had internal batch processing through jobs often written in a proprietary language.
In order to include this type of batch processing in the whole workload automation production environment, the IBM Rome Lab created specific agents, one for each type of application. So, we had the IBM Workload Scheduler for SAP extended agent together with three other IBM Workload Scheduler extended agent software components.
But information technology evolution never stops, so from the end of the last millennium until today we have seen exponential growth in vendor applications, like Java, database, Cognos, VMware, IBM InfoSphere DataStage, Structured Query Language (SQL), Urban Code Deploy, file transfer, and many others, most of which also require scheduling coverage.
Can you imagine how much development effort would be required if the HCL Lab had to develop an extended agent for every new vendor application that came into the market?
It would be an enormous challenge, but there is a lucky fact! Almost all new applications that come onto the marketplace support the XML language. This means that an XML source file can be used to run proprietary application jobs. This is valid both when the job starts from a distributed engine and when it starts from an IBM z/OS scheduling engine. In the second case, which is more common, it is important to be able to produce XML jobs if we want to schedule application vendors’ jobs from z/OS while also dealing with the lack of XML skills (as discussed in my earlier post).
The importance of XML
The XML language has reached such high penetration in the world of IT that it is now impossible to give up using it, since as I said almost all existing and emerging applications today support XML for executing a sequence of their native commands. Think of how a typical end-to-end batch processing environment, independent of its size, can gain an advantage from this language. Many executions of a batch workload can be automated without caring which applications are targeted. In this way different types of jobs are able to run in sequence or in parallel, remaining in the same batch flow.
Now we can come back to the initial unresolved question from part 1: Who is going to write these XML jobs to be initiated from z/OS and executed outside it in order to run proprietary application batch jobs? This issue can be solved with the Workload Automation Plug-in feature, which is in my opinion one of the most innovative enhancements we had in our IBM workload automation solution during the last 10 years.
The workload automation plug-in innovation
I started this series of blog posts from the conclusion by showing you an XML job source member in z/OS. Finally now I can show you the beginning of the story by looking more closely at the heart of this very strong innovation: workload automation plug-ins.
Considering that a new extended agent for every emerging application has a high cost in effort and is no longer practical, the solution now is to interface with an application’s batch processing through XML executions. The bottleneck of this solution is writing such jobs. The workload automation plug-in feature eliminates this bottleneck.
Creating an XML file transfer job using the related IBM Workload Scheduler application plug-in
As you can see in the previous example, the IWS workload console provides a very user-friendly screen where we can specify the library in which the XML job will be placed and the parameters that will be used to build the XML sequence of commands, including security information. The result of this task is the complete creation of the XML job. The example shows a file transfer plug-in. Even though a plug-in is valid for every type of platform, imagine how important this can be for a mainframe-centric workload automation environment.
There is no longer a need to develop an extended agent for every emerging application. It is enough to provide the new plug-in, which requires a very small development effort and at the same time provides a much easier implementation for customers.
If you work with z/OS, I know you have needed to write JCL (Job Control Language) at least once, for example to copy one library to another. If you work with UNIX or Linux environments, I am sure you have had to prepare a scripting-language source file in order to execute several commands at once. But have you ever tried to write an XML source file to execute a job, especially if you have a typical mainframe background?
It would be great if you would share with us your experiences with different types of jobs, especially if you had to deal with emerging applications. Is there anything more you would suggest to IBM laboratories to enhance innovation in workload automation? Please share your ideas with me on Twitter @nicochillemi.
drenigma Tags:  throughtput garbage_collection generational oltp ism tpae batch_processing gencon responce_time performance
Software systems workloads can always be articulated as a mix of two flavors of computing tasks: online transaction processing (OLTP) and batch processing.
OLTP is the computing activity of serving requests originated by interactive users waiting for an answer in an acceptable time, whereas batch processing refers to programs running in the background trying to finish a given amount of work in a designated window of time. The key performance metric for an OLTP system is the response time experienced by the end user, while for batch systems the reference metric is the throughput: the number of tasks completed per unit of time. See H. H. Liu, Software Performance and Scalability: A Quantitative Approach (J. Wiley & Sons), for a more detailed discussion.
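The two metrics can be made concrete with a toy calculation (all of the numbers below are invented for illustration):

```python
# Response time: what an interactive OLTP user experiences, per request.
latencies_s = [0.8, 1.1, 0.9, 1.3, 0.9]   # seconds per request (invented)
avg_response_time = sum(latencies_s) / len(latencies_s)

# Throughput: what a batch window delivers, in tasks per unit of time.
tasks_completed = 5400                     # invented batch workload
window_s = 2 * 3600                        # a two-hour batch window
throughput = tasks_completed / window_s    # tasks per second

print(avg_response_time)  # 1.0
print(throughput)         # 0.75
```

Tuning for one metric often trades off the other, which is exactly the tension the GC policies below are designed around.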
One of the benefits that the Java platform introduced is that it manages memory for you, through a garbage collection (GC) algorithm. Unfortunately, a poorly tuned GC can result in sub-optimal performance and scalability. The initial step of GC tuning is choosing the GC policy according to the characteristics of the workload. Today's JVMs support multiple GC algorithms that you can control through command-line options. The IBM JVM supports four different policies through the -Xgcpolicy: option:
Ideally optavgpause should be the best choice for pure OLTP systems, while optthruput is better suited for batch processing (take a look at the Java technology, IBM style: Garbage collection policies, Part 1 and Part 2 articles on developerWorks for a deeper analysis of garbage collection policies).
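Whichever policy you pick, it is selected when the JVM is launched; a minimal sketch, where the application jar name is a placeholder (in an application server the flag would typically go in the server's generic JVM arguments instead):

```shell
# Select the GC policy at JVM startup with the IBM JVM -Xgcpolicy: option;
# myapp.jar is a placeholder for the real application.
java -Xgcpolicy:gencon -jar myapp.jar
```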
TPAe-based systems are actually a mix of the two workload profiles, and their behavior is characterized by the presence of a lot of transient objects. Based on these considerations, the Tivoli performance team decided to test the gencon policy, which is able to provide a good trade-off between short pause times and good overall throughput. The results achieved in all the experiments that we did were very encouraging. As you can see in the following graph for one of the scenarios we ran, the response time improved and became more stable, as shown by a reduced standard deviation:
The gencon policy was also beneficial for CPU utilization. From this point of view, too, we observed a lower and more stable demand, as you can clearly see in the following graphs (default policy and gencon):
As a consequence of the good results obtained in all the experiments that we ran, we decided to include the gencon policy in the performance best practices for IBM Tivoli Service Management Software Products.
If you need a quick answer to your question about a process automation product, try posting it on the Maximo and Process Automation Solutions forum. The forum features a large community of active users that include IBM product developers, customers, and business partners. Discussion topics include configuration, implementation, and general sharing of product knowledge and experience.
The forum covers all products built on Tivoli's process automation engine, including: