IBM Tivoli Software is pleased to announce a pilot program that you can use to create and manage your own Requests For Enhancement (RFEs) from a web portal. RFEs are your way to tell the Tivoli team what features you would like to see within Tivoli products. From the Tivoli RFE Community, you can collaborate with Tivoli Development teams and other product users through your ability to search, view, comment on, submit, and track RFEs. The pilot starts with 26 Tivoli products, with the goal of transitioning 100% of Tivoli products to the RFE Community tool in Q1 2011.
What is the Tivoli RFE Community?
The Tivoli RFE Community is a place where you can collaborate with Tivoli development teams and other product users through your ability to search, view, comment on, submit, and track product requests for enhancement (RFEs).
The community acts as a front-end database bridged to the back-end databases used by Tivoli Development and Product Management. New submissions, user comments, status changes, and developer comments are bridged between the Tivoli RFE Community and back-end systems several times a day. By using this bridge, the Tivoli development teams can work with the processes and tools they are most comfortable with, while allowing you to collaborate in an open community environment.
How will this change help you?
By using the Tivoli RFE Community as your preferred method for RFE submission, you can gain the following advantages:
Eliminate multiple touch points that slow down communication in the RFE process.
Increase the transparency in the development process.
Provide predictable response times for feature requests:
Initial triage in 30 days
Disposition in 90 days
Empower you to influence Tivoli product direction and road maps as well as break down barriers between users and development.
Provide a better tool for you to review other RFE ideas for the same product and cast a vote for your favorites.
Some popular features of the Tivoli RFE Community
Online submission of RFEs
Browse RFEs by product
Top 20 Watched RFEs
Top 20 Voted RFEs
Online searching for RFEs
Vote on RFEs
Set email or RSS feed notifications
Comment on RFEs
Start or join a group to discuss RFEs
How can I get started?
Go to the Tivoli RFE Community home page at Tivoli RFE Community (updated link on Jun 9, 2014)
After a while of looking at moving BIRT reports to Cognos, I thought I would share some of my experiences and lessons learned. One of my toughest lessons was that I did not ask for help getting started with Cognos, and I should have. I am not implying Cognos is complicated, but I wandered off in the wrong direction and tried to do BIRT with Cognos. You need to re-think some aspects when you use Cognos, and that will either take a while or need to be explained to you. If there is someone in your company who knows Cognos, ask for help getting started. You can of course also ask your local IBM representative for our Tivoli Common Reporting Quick Start Service Offering. There is also plenty of material on the TCR developerWorks Web site.
First let's demystify the concept of a Cognos Data Model and whether you need to model data to be able to run reports. The answer is no, you do not necessarily need to create a Cognos Data Model if you are using the reports Tivoli provides. If you use the reports for the Tivoli Monitoring Agent for Operating System shipped with Tivoli Monitoring V6.2.2 Fix Pack 2, a Cognos Data Model and a small set of reports are provided. That data model exposes sufficient information from the Tivoli Data Warehouse, so creating a new model may not be necessary.
A Cognos Data Model is an abstraction layer between the database and Cognos Report Studio and Query Studio, transforming data into information. The whole purpose of a data model is to make sense out of cryptic table, view, and column names, providing structure (folders or namespaces) so that the report user understands what the data is without having to worry about where it originates. You can of course expose the raw tables and views, but that will not make the end user any less confused. A good example is the concept of providing self-service reporting capabilities to your end users. Report Studio and Query Studio are components shipped with Tivoli Common Reporting that the report users will be working with. When a user starts Report Studio or Query Studio, the data model is displayed and he or she can drag and drop the data you exposed in the model onto the report. You only have to go through the process of creating and tuning a Cognos Data Model if one is not provided to you or if the model does not expose the data you need. If you want to know more about the technical and functional differences between BIRT and Cognos, you can read this article.
When looking at moving BIRT reports to Cognos reports, I tend to categorize source reports into two groups:
Custom BIRT Reports
Standard BIRT Reports
The standard BIRT reports are typically those where you have used the BIRT-provided properties and functions to create the reports. The custom BIRT reports are those where Java code has been embedded in the report, mostly to format the output or control queries at run-time. I have not seen a large number of the custom type, but what I have seen so far is that most of what required Java code in BIRT can be configured in Cognos using existing properties. The key here is to understand what a particular piece of Java code does and then, where possible, transform that into a standard Cognos function.
When attempting to manually convert these two types of reports, you will need to look at the following aspects of the BIRT reports:
1. The BIRT Data Sources (database connections)
2. The BIRT Data Sets
3. The BIRT Report Parameters (user interactions)
4. The graphical presentation of the data
5. Report automation (store to disk or email)
The Data Sources are no longer in the reports themselves but embedded in the Cognos model and managed in the Cognos Content Store. When you log on to Tivoli Common Reporting, you create these database connections using the IBM Cognos Administration tool.
The Data Sets can be approached in two ways. The easiest way is to create a new report using an existing query subject in the Cognos data model. You then provide the necessary prompts on a prompt page for the WHERE clauses in the BIRT Data Set. If you do not see a way to use a query subject provided by the Cognos Data Model, you can create your own model and cut and paste the Data Set SQL into a query subject just as it is. You will need to configure the Cognos query subject as Pass-Through and replace the question marks for the BIRT parameters with the prompt macro. The prompt macro can then be tied to a prompt page where the users provide the necessary input.
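To illustrate the pass-through approach, here is a sketch of what such a query subject could contain. The table and column names are hypothetical; the only Cognos-specific part is the #prompt()# macro, which replaces the literal question mark a BIRT Data Set would use:

```sql
-- Hypothetical BIRT Data Set SQL pasted into a pass-through query
-- subject: the "?" placeholder is replaced by a Cognos prompt macro
-- that is answered on a prompt page at run-time.
SELECT "Server_Name",
       "Timestamp",
       "AVG_%_Processor_Time"
FROM   "ITMUSER"."KNT_CPU"
WHERE  "Server_Name" = #prompt('ServerName', 'string')#
```

When the report runs, Cognos substitutes the value the user entered for the ServerName prompt before passing the statement through to the database.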
The Report Parameters are implemented using a prompt page in Cognos. The prompts that need to be created are the parameters in the WHERE clause of the BIRT Data Set. As mentioned earlier, if you cannot change the SQL statement in the BIRT Data Set, you can work with Pass-Through query subjects and prompt macros, and combine those with a prompt page. This does however require you to provide your own Cognos Data Model for that specific report.
The graphical presentation of the data is not much different in Cognos, except that you have much more built-in control by leveraging properties for each object on the report. One example is that, should you need to, you can embed a tiny 50-by-50-pixel graph inside a table using Cognos properties. Getting the look and feel similar to your BIRT report is likely to be the most time-consuming task, as you do not want to surprise your users. I recommend working with a power user and having the converted reports approved before they are released into production. Involving the users early will allow a smoother transition from BIRT to Cognos. What I have learned is that once the power of the self-service reporting feature becomes clear to the end user, there are few obstacles left to moving away from BIRT. There are of course other use cases where this approach may not apply.
Report automation was a daunting task using the BIRT technology, and a lot of post-report processing had to be done, e.g. renaming the reports, compressing them, and sending an email. If you have scheduled BIRT reports, you can use the Cognos GUI to implement this. There is no scripting necessary, and you can even let your end users schedule, store, and email reports. Of course you need to exercise this with care, or you will end up with your TCR solution sending frequent emails ranging in size from kilobytes to megabytes. The current Tivoli Common Reporting V1.3 does not provide a command line interface to all these features, but that will change in Tivoli Common Reporting V2.1. The automation capabilities in Cognos provide for storing the reports to a disk location on the TCR server or emailing the report to one or more users. When saving a report to a file system, Cognos provides an after-script property where you can specify any additional processing you need.
I hope this has shed some light on what is required to manually move a BIRT report to a Cognos report leveraging Tivoli Common Reporting V1.3 or above, when those versions become available. If you decide to embark on the journey of moving your BIRT reports to Cognos, you will be spending a lot of time learning Cognos. There is a large Cognos user community, and you can pretty much Google any question you have and find answers on the web. Although I work for IBM, I am trying to be neutral when I say "Cognos is much more fun than BIRT". As always, should you have questions, please feel free to contact me.
Bjoern W. Steffens
IBM announced on Sep 15, 2010, a definitive agreement to acquire OpenPages, a privately-held company based in Waltham, MA. OpenPages provides software that helps companies more easily identify and manage risk and compliance activities across the enterprise through a single management system. Financial terms were not disclosed.
The acquisition of OpenPages expands IBM's business analytics capabilities to support compliance and risk management processes. OpenPages software allows businesses to develop a comprehensive compliance and risk management strategy across a variety of domains including operational risk, financial controls management, IT risk, compliance and internal audits. The result is an aggregated, enterprise-wide picture of all exposures, helping CFOs and CIOs understand how these risks can impact the organization's future performance.
For example, organizations are under increased pressure to ensure their compliance mandates are not geographically siloed. A manufacturer may have aggressive revenue goals for an emerging market, but those goals may generate business risks such as not aligning to regional regulations and unintended costs associated with extending the necessary financial, IT and business controls to a remote location. OpenPages software instantly highlights any inconsistencies in risk and performance goals, giving business leaders a comprehensive view of the business opportunities and risks associated with the expansion.
The full press release can be found here.
The OpenPages Platform serves as the foundation for a company's enterprise risk management (ERM) efforts by unifying enterprise-wide risk and compliance initiatives into a single management system.
OpenPages solutions include:
Policy Management - Provides the capability for organizations to develop organizational policy, in a granular fashion, many levels deep. The result is more agility and focus in the organization – key stakeholders can be automatically alerted to the pieces of policy with which they need to be concerned. As policy evolves, discrete sections of organizational policies can evolve as well, ensuring policy is always up to date and all changes are tracked over time.
Issues Management - Provides a full-featured issue remediation component to manage enterprise-wide issue tracking and resolution. Root causes can be identified, with common issues grouped together to provide management visibility. At a lower level of detail, multi-step action plans can be created to drive issues to completion, ensuring closed loop validation of progress towards risk management goals.
Risk and Control Self Assessment - Best practice risk and control self-assessment (RCSA) capabilities out of the box, aimed at delivering business value to its customers. Taking into account the many variables that make each leading organization unique, OpenPages enables its customers to configure process and data according to the organization’s business structure and assessment objectives.
Key Risk Indicators - KRIs act as early warning signals by providing the capability to indicate changes in an organization’s risk profile. KRIs are a fundamental component of a full featured risk and control framework and sound risk management practice. Their usefulness stems from potentially helping the business to reduce losses and prevent exposure by proactively dealing with a risk situation before an event actually occurs.
Loss Event Database - Capture all pertinent data to support root causal analysis and provide consistent data for regulatory and economic capital modeling. The events can then be automatically driven through root-cause analysis processes where the information gathered can drive the event through workflow depending on conditional values, such as the size of the loss. OpenPages loss events can also integrate with third party loss event databases.
IBM and Netezza Corporation announced they have entered into a definitive agreement for IBM to acquire Netezza, a publicly held company based in Marlborough, Mass., in a cash transaction at a price of $27 per share or at a net price of approximately $1.7 billion, after adjusting for cash. Netezza will expand IBM's business analytics initiatives to help clients gain faster insights into their business information, with increased performance at a lower cost.
The acquisition, which is subject to Netezza shareholder approval, applicable regulatory clearances and other customary closing conditions, is expected to close in the fourth quarter of 2010.
Netezza provides a family of data warehouse appliances and software products that allow you to easily deploy high-performance data analytics across your enterprise.
The product family comprises:
- Skimmer: Ideal as an application specific appliance, a small data mart or a development system.
- TwinFin: High-performance data warehouse for business analytics
- Cruiser: Built for queryable archives and DR consolidation. Available late 2010.
- Mantra: Provides real-time auditing, monitoring, reporting and theft detection for critical data assets.
Netezza provides an alternative to large teams of database administrators tuning enterprise databases to make them respond better to complex queries, by deploying purpose-built data warehouse appliances that solve business problems. Netezza data warehouse appliances are installed quickly and easily, integrate with your preferred ETL and BI tools, and can be largely left alone to get on with the job.
The IBM press release announcing the acquisition can be found here.
When you install Tivoli Common Reporting V1.3 you will see that there are some Cognos components available next to the BIRT engine. A few of our customers frown when they see the word Cognos, because Cognos itself introduces a fairly large number of components. The fact, though, is that Tivoli Common Reporting V1.3 only provides a small subset of the Cognos architecture components, namely:
- The Query Studio for simple and ad-hoc queries
- The Report Studio for more advanced and printed reports
- The Framework Manager to create custom Cognos models to be used by the Report and Query Studio.
The three components above have been encapsulated and install on a single box with little effort. These components can also be installed on several machines or be integrated with an existing Cognos environment, and there has not been much change to the Tivoli Integrated Portal implementing Tivoli Common Reporting. There is however one additional component, and that is the Derby server holding the Cognos Content Store. Tivoli Common Reporting V1.3 also installs the BIRT engine separately, so any existing BIRT reports can still be run. There are no plans at this point in time to abandon the BIRT technology in Tivoli Common Reporting going forward.
So what is the difference between BIRT and Cognos? BIRT delivers the Report Designer, an IDE based on Eclipse that is required to create reports. The Report Designer leverages the database tables or views from the source database. All the SQL logic, page formatting, and graphical layout are stored in the report definition file (XML). Once a BIRT report has been deployed to the Tivoli Common Reporting server, it cannot be changed or modified by the end user. Changes and modifications must be made by a report developer and then re-deployed to the Tivoli Common Reporting server.
Cognos reporting with Tivoli Common Reporting delivers Query Studio and Report Studio, which are online tools available through the Tivoli Integrated Portal. These tools allow users to create ad-hoc reports (Query Studio) and more advanced reports (Report Studio) leveraging data models, using a supported browser. There is no additional tool needed to create or modify reports. A data model is a layer between the database and Query Studio and Report Studio that consolidates data, making it more user friendly. One example is that the Windows CPU utilization metric in the Tivoli Data Warehouse is named "AVG_%_Processor_Time", which may not make sense to many technicians. In the Cognos model this metric can be renamed to, e.g., "Average % Processor Time", and you can also hide the remaining 49 metrics in that particular table from the report user. A model can also contain OLAP definitions (a dimensional model) allowing for advanced analysis of the data. The OLAP part of a model is not necessary but can be included. Cognos modeling is also another aspect that needs to be grasped, which can be a daunting task unless you keep it simple. A Cognos model is basically a three-layer view of the database:
- A database view (the raw database tables). This is the placeholder or container where you pull the raw database tables needed for the consolidation view. You typically hide all these components in the Cognos model.
- A consolidation view (relational model). This is the place where metrics are selected to be disclosed to the end users. Here links and joins are provided between tables e.g. between CPU, Memory and Disk tables allowing the user to select metrics from these tables without realizing they come from 3 or more different database tables.
- An analytical view (dimensional model). This is the step where you can create OLAP cubes and dimensional drill-up and drill-down features. This is where things can get complex. If you need to delve into the analytic part of a Cognos model, you need to plow through the Internet, perhaps study some books, or even attend formal education. Note that this final step is not necessarily needed to provide a custom Cognos model.
I hope this short article has shed some light on what the added value of Cognos is. If you are a Tivoli customer and have not yet started using your Tivoli Data Warehouse data for reporting, my recommendation is to explore the Cognos technology delivered with Tivoli Common Reporting V1.3. As always, should you have questions or comments, please do not hesitate to contact us.
One of the most exciting projects on IBM alphaWorks is Many Eyes. With the goal to "democratize" visualization and to enable a new, social kind of data analysis, and developed in CUE's Visual Communication Lab, it has an impeccable pedigree, and it is available for anyone on the Internet to give it a try.
Some of the featured visualisations that caught my eye were:
Obama's speech in Cairo - the emphasis on "we":
US Government expenses 1962-2004
Creating a new visualisation is easy. For example, obtain the data set from a publicly available data source (such as Wikipedia). I picked up a list of all IBM acquisitions since 1999. After uploading, Many Eyes shows you the data to confirm it has understood what was uploaded:
The next page then provides a list of options to visualise the data. For example, the matrix chart looks like the following:
A Treemap view of this data looks like the following:
The same data can be visualised as a Bubble Chart:
Clearly there are exciting ways complex data can be visualised. Even better is the fact that we can soon expect such visualisations in Tivoli products.
For example, here's a visualisation of the BGP Event Data in a Treemap (which could be provided in Tivoli Network Manager at some time in the future):
Or, Event Data could be visualised as following for debugging root causes by NOC operators:
When it comes to IBM Tivoli Directory Server (ITDS), most of the time the underlying DB2 component can be considered a black box, as the database is created, tuned, and managed by ITDS processes and tools. There are cases, however, when an administrator needs to tune performance or troubleshoot slow-running queries that cannot be resolved using conventional ITDS methods. On such occasions it is necessary to delve down into DB2, and at this point it is often useful to have a good idea of how ITDS is implemented from a DB2 perspective. In particular, when starting to investigate dynamic SQL snapshots or explain plans, it helps to understand the commonly used tables and indexes.
Each LDAP attribute that can be searched by the user is mapped to an attribute relation, meaning that there is a table per attribute name. The entries in these attribute tables consist of a unique entry identifier (EID), a normalised attribute value, and a truncated attribute value (used for indexing if the length of the column is greater than a certain length).
EIDs are used to implement the hierarchy through a parent EID (PEID). In the diagram below, O=IBM has an EID of 4 and a PEID of 3.
In addition to attribute tables, there are two special tables used during LDAP searches – ldap_entry and ldap_desc. The ldap_entry table holds the data for an LDAP entry as well as hierarchical information. It is used to obtain the EID of an entry and to support single-level searches. Each row of the table includes a PEID-EID pair, i.e. one parent-child relationship. If an entry has three children, there will be three rows with that entry as the PEID in each.
To put this in context, if I perform a one-level LDAP search, it will be mapped to SQL similar to the following:
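The original SQL listing is no longer available here, but based on the table layout described above, a one-level search such as ldapsearch -b "o=ibm" -s one "(cn=John Doe)" would translate into something along these lines. The exact column and table names are an assumption derived from the description, not taken from a live trace:

```sql
-- Hypothetical translation of a one-level search under the entry
-- with EID 4 (o=ibm), filtering on cn. CN is the per-attribute
-- table; ldap_entry supplies the parent-child (PEID-EID) pairs.
SELECT entry.EID
FROM   LDAP_ENTRY entry,
       CN attr
WHERE  entry.PEID = 4          -- children of o=ibm only
  AND  attr.EID  = entry.EID
  AND  attr.CN   = 'JOHN DOE'  -- normalised attribute value
```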
The purpose of the ldap_desc table is to support the subtree search feature of LDAP. For each LDAP entry with a unique ID (AEID), this table contains the unique identifiers (DEID) of the descendant entries. For every entry in the directory, a row exists in this table for each of its ancestors, including itself. A parent-child relationship is single-level, whereas an ancestor-descendant relationship covers all entries below any entry. Each row in the descendant table therefore includes an AEID-DEID pair, i.e. one ancestor-descendant pair. Thus if an entry A has two children (A1 and A2) and each of those children has two children (A11, A12, A21, A22), then there will be:
Six entries for A in the ancestor table (A:A1, A:A2, A:A11, A:A12, A:A21, A:A22),
Two entries for A1 in the table (A1:A11, A1:A12),
Two entries for A2 in the table (A2:A21, A2:A22).
This allows the implementation of a "multi-level" search.
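Following the same pattern as the one-level case, a subtree search rooted at the entry with AEID 3 could map to SQL roughly like the sketch below. Again, the concrete table and column names are assumptions based on the descriptions above:

```sql
-- Hypothetical translation of a subtree-scope search under AEID 3:
-- ldap_desc yields every descendant (including the root itself),
-- and the CN attribute table applies the search filter.
SELECT entry.EID
FROM   LDAP_ENTRY entry,
       LDAP_DESC  descs,
       CN         attr
WHERE  descs.AEID = 3
  AND  descs.DEID = entry.EID
  AND  attr.EID   = entry.EID
  AND  attr.CN    = 'JOHN DOE'
```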
Each indexed attribute normally has three indexes:
tablename – full attribute value index
tablenameI – short attribute name index (for wildcard-at-end searches such as ABC*)
rtablename – short reverse attribute name index (for wildcard-at-start searches such as *DEF)
It is possible to check whether indexes are correctly defined by using the DB2 "describe indexes" command:
db2 describe indexes for table <table-name>
For example, to see the indexes for attribute cn, describe its attribute table (which, following the one-table-per-attribute mapping above, is named CN):
db2 describe indexes for table CN
By default the ldap_entry table has three indexes:
ldap_entry_peid - with columns EID and PEID
ldap_entry_peid2 - with column PEID ascending
ldap_entry_trunc - with column DN_TRUNC
As the data distribution in the ldap_entry table may not be linear, it is sometimes beneficial to add an index similar to:
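The original index definition has been lost from this post. Purely as an illustration, one plausible shape is a composite index combining the truncated DN with the entry identifier, so that DN lookups can be satisfied from the index alone; the index name and column choice here are assumptions:

```sql
-- Hypothetical composite index: pairs the truncated DN (already
-- covered alone by ldap_entry_trunc) with the EID it resolves to.
CREATE INDEX LDAP_ENTRY_DNT_EID
    ON LDAP_ENTRY (DN_TRUNC, EID)
```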
By default the ldap_desc table has an index called ldap_desc_aeid, with columns DEID and AEID ascending. This index tunes ITDS for subtree-scope searches. If single-scope LDAP searches are being run extensively, it may be beneficial to add an index as follows:
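The original statement is also missing here. Since single-scope searches resolve children through the PEID column of ldap_entry, one candidate, offered only as an assumption, is a composite index leading on PEID:

```sql
-- Hypothetical index for single-scope (one-level) searches:
-- leading on PEID lets DB2 find all children of an entry directly.
CREATE INDEX LDAP_ENTRY_PEID_EID
    ON LDAP_ENTRY (PEID, EID)
```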
Of course, there are many more aspects to the ITDS DB2 implementation, apart from just tables and indexes. More information on tuning ITDS can be found in the "Performance Tuning for IBM Tivoli Directory Server" Redpaper or in the DeveloperWorks articles "Troubleshooting IBM Tivoli Directory Server performance Part 1" and "Part 2".
Netcool/Impact Operator Views is a very neat feature where you can use policies to retrieve additional information from various sources and display a Web page providing a consolidated, enterprise-wide view of an IT asset. An Operator View basically consists of three logical building blocks:
- One or more data sources
- An Impact Policy publishing variables from the defined data sources
- A Web Page containing tags where the variables from the Impact Policy are being inserted at run-time by the Impact GUI Server.
Operator Views can also be customized, and you can easily build your own branded Operator Views.
Kevin Morris has written a very good guide on how to get started with Operator Views. The document is based on Impact V4, but the theory and basics are still the same for Impact V5. It can be found on OPAL with the NavCode 1TW10NI07 (search for this code on OPAL). You do need a fairly good understanding of XHTML and CSS to digest this material, or you may need to go through it several times.
Please feel free to contact me should you have questions about this or any other topic, or need more information on how to create a Netcool/Impact Operator View. Thanks for reading this post, and do let us know if there are any topics of special interest you would like to know more about.
On July 1, 2010, IBM announced its intention to acquire BigFix. The press release of this announcement can be found here.
This is exciting news, and there has already been a lot of discussion and questions about what this means. Here are a few highlights that I have been able to garner about BigFix.
BigFix's philosophy is to put as much intelligence as possible at the agent, so agents have the ability to detect problems and remediate them immediately, without waiting for input from the central server. This also means that remediation can take place without the computer being connected to the network. How does it do this? There are approximately 4,300 pre-defined policies that help define best-practice configuration and security posture and that are made available to the agents. Custom policies and Fixlets can be defined as required.
The broad capabilities of BigFix include -
- Find all IP devices
- Enables remote deployment of agents
- Devices with agents installed can sense other devices in the same subnet
- You can define what you want to know about and ignore the rest, thereby reducing the amount of information passed back to the server.
- Software Distribution
- Seems to be based on native packaging.
- Patch Management
- BigFix currently creates the policies for new vendor patches as they are made available by the OS vendors, and makes them available on internet-facing servers
- Customer servers download the new policies from these BigFix servers and are then able to add these policies to their agent configurations.
- Licence Management
- Adds visibility of what is installed and what is being used.
- Sometimes users install software and don't use it. This visibility helps determine if the licence is being used appropriately.
- Power Management
- Provides estimates of current power consumption
- Provides 'green' profiles that users can adopt on their systems. User can choose by interacting with a BigFix icon on their desktop.
- Provides estimates of available savings from implementing power management profiles.
- Security Configuration Management
- Provides pre-canned configurations that are compliant with federal guidelines
- Provides associated reports for auditing
- Endpoint Protection
- Provides protection against virus and malware threats.
- Execution of remote tasks via Fixlets
- It is possible to execute commands or scripts using Fixlets
Platform support, as managed to targets, includes Linux, UNIX, MacOS and Windows, covering laptops, desktops and servers.
Pre-canned reports, graphs and pie charts are available to help with auditing and reporting requirements, and on screen dashboards facilitate real time snapshot views of the estate.
Scalability estimates are that 250,000 computers can be managed using a single server.
The list of discrete products from BigFix is -
Watch this space for more information about what this acquisition means with respect to IBM's device lifecycle management strategy.
Instructions for Help have always been an essential component of systems management, as it is key that operators know what action is required when an event is raised on an enterprise event management console.
I recently looked at a few Wiki solutions and Joomla (a Content Management System) to see if there was something that I could provide to my customers while deploying our solutions. I was looking for a solution for customers who do not have something in place already and need a solution for this topic, or at least an idea of how this topic could be approached. I got stuck on Joomla, as there is a fair amount of good online documentation available; I could also find a large number of books at the bookstore in town and on Amazon. The data structure of Joomla is simple, and you can also brand a Joomla site if you have a little bit of XHTML and CSS skill. The Joomla articles, or database content, are divided into sections and categories. The figure below illustrates the data structure of the prototype I experimented with in the lab.
Once your content has been organized in the database and you have found a suitable layout and screen design for your environment, you can make it publicly available on the Intranet. Should you have wider requirements, most web hosting companies provide Joomla deployments, hence this approach is not limited to Intranet solutions. You may also need to revisit the security parameters around Joomla in order to restrict access to content and articles, should that be required. In general, when a Joomla appliance (see links below) is started, the Web server is completely open on the Intranet; you do of course need to log in to manage the content. The figure below illustrates what the user interface could look like after applying a few swift modifications to the design. The content shown is a sample Instruction for Help for the IBM Tivoli Composite Application Manager (ITCAM) for Transactions, where a particular Web site is no longer responding as required.
The value of Joomla is that information is available online, it can be updated in real time, and the design can be modified with mouse clicks. This is important because the instruction for help must be up to date at all times. I am provoking a little bit here, but I consider an MS Office document to be outdated once it has been saved. The other aspect is that the necessary information is not tied into, or dependent on, a particular piece of technology or vendor. It becomes a true component that can be re-used over and over again, reachable through a simple URL. Another aspect to keep in mind is that the content, or articles, are stored in a database. This allows for certain manipulation through database clients, adding the aspect of automation should that be necessary. The only gotcha here is that each article is stored in the database including its HTML tags.
How can this be leveraged with Tivoli software, and in particular with IBM Tivoli Monitoring V6 (ITM6)? Each document in the Joomla database can be accessed through a unique URL, which can be pasted into the expert help for an ITM6 Situation. Once that Situation fires and you right-click the event on the Situation Event Console and select "Situation Event Results ...", that particular article is shown together with the enterprise alert. In this example, the ITM6 Situation was triggered by the ITCAM for Transactions Web Response Time Agent (WRT). Note: You could also potentially use Netcool/Impact to look inside the Joomla database and enrich an OMNIbus event with this URL.
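To make the URL part concrete: a Joomla article's standard (non-SEF) URL follows a predictable pattern, so the link you paste into a Situation's expert help can be built from the article ID alone. A small sketch, assuming a hypothetical Intranet host name and article ID:

```python
from urllib.parse import urlencode

def joomla_article_url(base_url, article_id):
    """Builds the standard (non-SEF) URL for a Joomla article."""
    query = urlencode({
        "option": "com_content",  # Joomla's core article component
        "view": "article",
        "id": article_id,
    })
    return f"{base_url}/index.php?{query}"

# Example: a link suitable for pasting into an ITM6 Situation's expert help
url = joomla_article_url("http://intranet.example.com", 42)
print(url)
```

If your site uses search-engine-friendly (SEF) URLs, the pattern differs, so check a real article link from your own installation before wiring it into expert help.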
For more information about Joomla, including where appliances and TurnKey solutions can be found, visit the links below.
Please feel free to contact me should you have questions about this or any other topic, or need more information on how to brand a Joomla Web site. Thanks for reading this post, and do let us know if there are any topics of special interest you would like to know more about.
I have recently discovered the IBM Institute for Advanced Security which "helps clients, academics, partners and other businesses more easily understand, address and mitigate the issues associated with securing cyberspace by providing access to IBM's thought leadership and know-how on cybersecurity".
What I can say is that they have had some pretty thought-provoking webinars. The ideas and concepts discussed in these presentations sometimes form a superset of those currently resolved by Tivoli solutions alone, which is a good thing, as it prompts us to start contemplating the next security challenge and how we would broaden our solutions to mitigate risk. A couple of recent presentations are:
"Security Analytics for a Smarter Planet"
Presented by J.R. Rao, Senior Manager, Secure Software and Services, Thomas J. Watson Research Center. This presentation examines stream computing as a basis for performing predictive security analytics to help you detect symptoms of cyber attacks before they strike; move beyond basic sense-and-respond capabilities towards tracking patterns, identifying failures and preventing disruptions; and comprehensively monitor risk over time to improve information assurance.
You can download recordings from here
and the slides (PDF) here
"Cyber Insider Threat: Hunting The Weak Link Within"
Presented by Jeff Jonas, Chief Scientist and IBM Distinguished Engineer. Criminals go to great lengths to minimize and/or obscure their information signal. When these criminals are located inside your work force they know your policies, procedures, and people. This knowledge positions them to even further reduce their signals. When such employees are embedded in your systems and network the cyber consequences are extraordinary. How do you even the odds in your favour?
You can download recordings from here
and the slides (PDF) here
More information on the Institute can be found here
, which also gives access to their white papers, webinars, podcasts, etc.
One of the most captivating Security presentations I have attended recently was presented by Dr. Jean Paul Ballerini at the European IBM Tivoli Security User Group
. His talk focussed on X-Force research.
The IBM X-Force research and development teams study and monitor the latest threat and protection issues. These include vulnerabilities, exploits and active attacks, viruses and other malware, spam, phishing, and other emerging trends. Every quarter they release a trend and risk report that can make fascinating reading.
A few points from the 2009 Trend and Risk report that I found interesting are:
- By the end of 2009, patches were unavailable for over half of all disclosed vulnerabilities
- More than 50% of vulnerabilities are in readers and multimedia applications
- Four of the top 5 web-based exploits target Adobe products, with number 2 being an Adobe Acrobat and Reader vulnerability from 2007 (which has a fix)
- Malicious web links have increased by 345%
- Web applications are the most vulnerable (67% have no patch)
- Messages with misspellings are found to be more trusted than word-perfect emails with malicious links
You can find out more about X-Force Trends Reports here
Pictures sometimes convey information much better than text, so here are a select few images from the report:
Maybe we first need to spend a minute on the "classic edition" of The Tivoli Advisor. The Tivoli Advisor aims to bring information, knowledge assets and factoids to the Tivoli and IBM software community. As software strives to embrace new trends in technology, it becomes increasingly challenging to keep up with the multitude of changes and functionality being provided. The Tivoli Advisor aims to bridge this gap and to provide articles that help its readers harness these changes. By providing the information in high-level snapshots, we aim to deliver easily digestible and comprehensible articles. Much of the content will be provided not only by The Tivoli Advisor team, but also by respected individuals in the IBM Tivoli Software organisation and our peers. Coverage will extend over a wide range of topics, including Automation, Storage, and Security.
In the past, we have leveraged a magazine-style publication of The Tivoli Advisor - you can download the first 15 issues here: The Tivoli Advisor in PDF. However, in today's ever more dynamic world, a quarterly publication is not the best way to publish content. In addition, there is already so much information and collateral out there: whitepapers, assets on OPAL (the IBM open process automation library), forums and articles on developerWorks, and more. And there are conferences, user groups and business partners that provide tremendous value around our solutions as well.
Thinking about this for a while, we decided to revitalize The Tivoli Advisor in a new way, using a group blog. This allows us not to be dependent on publication cycles (and funding, by the way), and to dynamically link content that already exists in other places. Obviously, we don't want to replicate material - we would rather act as an Information Broker for the wider Tivoli community. This is where we feel we can provide value to you, our reader.
Finally, a blog style should also allow our readership - you - to communicate more closely with us, the editorial board. We have tried to solicit feedback on the magazine (letters to the magazine), but honestly we haven't received a lot of it. On the other hand, we received many encouraging emails and had lots of discussions during events like user group meetings. Commenting on blog entries should reduce the barrier to engaging with The Tivoli Advisor. Again, we feel that such interaction amongst the technical community has a lot of value - for customers and partners as well as for Tivoli software. We are looking forward to hearing from you!
Hope this works out for all of us.
If there's anyone out there interested in finding out about the latest and recent - or even most (if not all) - acquisitions by IBM, the best resource can be found here
Among the recent acquisitions, I have to admit I hadn't even heard of the Lombardi and Initiate acquisitions!
Right. After a detailed discussion about the magazine and what we'd like to do, I'm pleased to announce that there is a lot of interest in maintaining a group blog!
So, here's a warm welcome to the new authors for this blog:
- Ingo Averdunk
- Lesley Malone
- Bjoern Steffens
- Gary Hamilton
- Manav Gupta