I use social networking sites day in and day out, both personally and professionally, to connect with my friends and colleagues. As a result, I have multiple accounts, and I really don't want to log in to each of them just to post the same status update for all my friends and colleagues. Instead, I wish I could log in to just one social networking site and have the same message published to my other social networking accounts without explicitly logging in to them. That wasn't really possible until a technology called OAuth was introduced some time back.
To give a real-world example: assume I am logged in to my LinkedIn account and update my status, and I want the same status message published to my Twitter account without logging in to Twitter. OAuth allows my status update on LinkedIn to be published to my Twitter account. I no longer have to log in to both (LinkedIn and Twitter) to post the same update.
Similarly, there are several other examples which show how OAuth is being implemented across social networking sites to give you the ability to share your data with other sites. Some real examples:
1. I like to get my horoscope updates from an online horoscope website posted to my Facebook wall every day.
2. I can share pictures uploaded on Facebook with an online printing service to get them printed and delivered to my home address.
3. I like to share the content/news/videos that I read on the Internet with my friends on my social networking page.
OAuth is a federated identity technology which provides an open standard for authentication and authorization. It allows users to share their personal data (e.g. status messages, photos, videos, contact lists) from one site with other sites without explicitly providing their credentials.
I talked about a similar technology called OpenID in my previous blog, so how is OAuth similar to or different from OpenID? The similarity lies in the fact that both technologies provide an authentication framework from one site to other sites without explicitly sharing credentials; however, there are significant differences between them. OAuth is complementary to OpenID, though it is sometimes used and implemented in lieu of OpenID for authentication. Some of the differences are as follows:
OpenID provides only an authentication method, whereas OAuth provides both authentication and authorization. Extensions to OpenID allow the Relying Party to access the user's attributes stored with the OpenID provider with the user's approval; OAuth, by contrast, allows the Client Application (aka Relying Party) to access the user's private data (such as photos, videos, contact lists) with the user's approval. OpenID authentication works on the basis of an OpenID URL, whereas OAuth authentication works on the basis of an OAuth token (the valet key).
OAuth works on a concept called the valet key. The idea comes from an additional key provided by luxury car manufacturers. Some luxury cars come with an extra key, known as a valet key, that can start the ignition and open the doors but prevents anything beyond that limited access, such as a joyride. So if you own such a car and hand it over for valet parking, as you often must, you do not want your car used for joyriding. You hand over the valet key instead: the valet parking attendant can access your car and park it on your behalf, but is prevented from driving it more than a limited distance.
Each technology uses its own terminology to explain its concepts, and OAuth is no exception. I will be using the terminology explained below while discussing OAuth.
1. Resource Owner - A person/user who owns an account with a trusted Identity provider that supports OAuth
2. Client Application - A web application which provides its services online
3. Resource Server - A server at the Identity provider's end that contains the user's private data, such as photos, videos etc.
4. Authorization Server - A server at the Identity provider's end which implements authentication and authorization
A high-level flow of OAuth authentication and authorization is as follows:
1. The user accesses the Client application on the web.
2. The Client application presents a list of Identity providers which support the OAuth protocol.
3. The user chooses a preferred Identity provider; the Client application redirects the authentication request to that Identity provider, and the user provides their credentials.
4. The Identity provider validates the user's credentials and redirects the request back to the Client application, including an authorization code.
5. The user accesses the redirected Client application URL carrying the authorization code.
6. The Client application sends this authorization code along with the Client ID and Client secret it received when it initially registered with the Identity provider (see the sketch after this list).
7. The Identity provider validates the authorization code, Client ID and Client secret and returns an access token (the valet key), which is a long-lived token.
8. This completes the authentication process, and the Client application logs the user in.
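To make steps 6 and 7 concrete, here is a minimal Python sketch of the token exchange. The token URL, redirect URI and client credentials below are made-up placeholders, not any specific provider's values; the parameter names follow the standard OAuth 2.0 authorization-code grant.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical values -- a real Client App receives these when it
# registers with the Identity provider (authorization server).
CLIENT_ID = "my-client-id"                                # assumption
CLIENT_SECRET = "my-client-secret"                        # assumption
TOKEN_URL = "https://idp.example.com/oauth/token"         # assumption
REDIRECT_URI = "https://clientapp.example.com/callback"   # assumption

def exchange_code_for_token(auth_code: str) -> dict:
    """Steps 6-7: send the authorization code plus client ID and secret,
    and receive an access token (the 'valet key') in return."""
    body = urllib.parse.urlencode({
        "grant_type": "authorization_code",  # standard OAuth 2.0 grant
        "code": auth_code,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "redirect_uri": REDIRECT_URI,
    }).encode()
    req = urllib.request.Request(TOKEN_URL, data=body, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())  # e.g. {"access_token": "...", ...}

# token = exchange_code_for_token(code_from_redirect)
# The Client App then presents token["access_token"] to the Resource
# Server to act on the user's behalf, without ever seeing the password.
```

Note that the user's password never passes through the Client application; only the short-lived code and the resulting token do.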
The OAuth Client application and OAuth Provider go through a registration process before the Client application allows its users to provide their identity through a trusted Identity provider. The Client application must first register itself with the Identity provider (authorization server). Registration is typically a one-time task; once done, it remains valid unless the Client application's registration is revoked by the Identity provider. At registration time, the client application is assigned a client ID and a client secret (password) by the Identity provider (authorization server). The client ID and secret are unique to the client application on that Identity provider. The client application also registers a redirect URI, which is used to redirect the user after authentication by the Identity provider (Authorization Server).
Here are the detailed steps of the entire authentication and authorization flow, including access to the shared data stored at the resource server on the Identity provider's end. Each step is self-explanatory.
For those with an inquisitive mind, I will soon come up with a Part 2 entry on this blog. Till then, keep watching this space.
Whenever I want to access services like SlideShare, Zoomin etc. on the Internet, they require me to sign up for a new account, without which I cannot use their services to the fullest. If I registered with all of these online services, I would end up with several hundred user accounts and passwords to remember. Nowadays, these online services give you the flexibility to use your e-mail ID as your username. This gives some relief, since using your e-mail ID as the username across all of these services saves you the trouble of multiple usernames.
Now, being human, I tend to use the same single password for all of these different online accounts as for my personal e-mail account. That basically means I am sharing my e-mail/username and password with online services that I do not really trust much. To safeguard your identity on the Internet, it is not good practice to use your personal e-mail ID's password on various online services that you do not trust; yet that is exactly what I end up doing, which could have hazardous results if the password is compromised.
Now what do I do? Can't I log in to these services without sharing my password? The answer is YES, and it is achieved by a technology called OpenID. Using OpenID authentication, I can log in to these online services (SlideShare, Zoomin) using my Facebook account, Google (Gmail) account or any other preferred OpenID provider in the market, and I do not have to share my password with these online services.
OpenID is an open standard for authentication and is based on the concept of a Federated Identity solution. Federated Identity allows a Service Provider (SP) to offer a service without implementing its own authentication system, and to instead trust another entity, an Identity Provider (IdP), to provide authenticated users to it.
Some of the benefits of using OpenID: you do not have to remember hundreds of usernames/passwords, and it eliminates the sign-up process at your favorite websites. From the application developer's perspective, it saves the time and effort of developing and maintaining a log-in (authentication) system.
OpenID provides a framework for communication between the Identity Provider and the Identity Consumer (Service Provider). OpenID provides decentralized authentication, which means you can assert your identity through your choice of multiple Identity Providers. It uses only standard HTTP(S) requests and responses for communication between the Service Provider and the Identity Provider. Some of the industry-leading Identity Providers are Google, Yahoo, AOL, LiveJournal, MySpace, Facebook, Twitter etc.
Some of the terminology used when talking about OpenID technology:
User-Agent: User's Web browser
Relying Party (RP): A Web application (aka Service Provider) that accepts OpenID authentication
OpenID Provider (OP): A trusted Identity Provider which provides OpenID Authentication, and on which a Relying Party relies to authenticate the user
OpenID Provider (OP) Endpoint URL: The URL of the OpenID provider, obtained by performing discovery on the User-Supplied Identifier
OpenID Provider (OP) Identifier: An Identifier for an OpenID Provider.
User-Supplied Identifier: An Identifier presented by the user to the Relying Party while selecting their preferred OpenID provider.
Claimed Identifier: An Identifier that the user claims to possess; the overall aim of the protocol is verifying this claim. The Claimed Identifier is either:
• The User-Supplied Identifier, if it was a URL.
• The CanonicalID (XRI and the CanonicalID Element), if it was an XRI (Extensible Resource Identifier)
The OpenID authentication flow basically involves communication between the User, the Relying Party (Service Provider) and the OpenID provider. The basic flow is as follows:
1. The user accesses the Relying Party web application URL.
2. The user selects their preferred OpenID Provider from the list provided by the Relying Party and presents the User-Supplied Identifier representing that OpenID Provider.
3. After normalizing the User-Supplied Identifier, the Relying Party performs discovery of the OpenID provider URL based on the identifier supplied by the user, by requesting the XRDS document.
4. The OpenID provider responds with the XML-based XRDS document, which contains one or more sets of OpenID endpoint URL and protocol version.
5. The Relying Party redirects the user to the selected OpenID Provider Endpoint URL (see the sketch after this list).
6. The user accesses the OpenID Provider Endpoint URL.
7. The user provides their credentials in the form of username/password.
8. The OpenID Provider verifies the user's credentials.
9. Once the credentials are validated, the OpenID provider redirects the user back to the Relying Party URL, including a signed authentication assertion in the URL.
10. The user accesses the Relying Party with the assertion.
11. The Relying Party verifies the assertion and allows the user to access its services.
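As an illustration of step 5, here is a minimal sketch of how a Relying Party could build the redirect to the OP Endpoint URL. The parameter names come from the OpenID 2.0 specification; the endpoint, realm and return URL are invented placeholders (in practice the endpoint comes out of discovery).

```python
import urllib.parse

# Placeholder URLs; a real RP obtains the OP endpoint via discovery.
OP_ENDPOINT = "https://op.example.com/openid/auth"     # assumption
RETURN_TO = "https://rp.example.com/openid/return"     # assumption

def build_checkid_setup_url(claimed_id: str) -> str:
    """Step 5: construct the redirect that sends the user to the
    OpenID Provider with a standard OpenID 2.0 checkid_setup request."""
    params = {
        "openid.ns": "http://specs.openid.net/auth/2.0",
        "openid.mode": "checkid_setup",
        "openid.claimed_id": claimed_id,
        "openid.identity": claimed_id,
        "openid.return_to": RETURN_TO,          # where the OP sends the user back
        "openid.realm": "https://rp.example.com/",  # the RP's trust root
    }
    return OP_ENDPOINT + "?" + urllib.parse.urlencode(params)

# print(build_checkid_setup_url("https://user.example.org/"))
```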
OpenID performs three major operations during the authentication flow: Initiation, Normalization and Discovery.
Initiation - This is the process where the Relying Party initiates authentication by presenting a form to the user with a field for entering the user's preferred OpenID Provider. The form field's "name" attribute should have the value "openid_identifier", so that User-Agents (typically browsers) can automatically determine that this is an OpenID form.
Normalization - The user's input regarding the preferred OpenID Provider must be normalized by the Relying Party: the identifier's content is retrieved, any redirects are followed, and the protocol's syntax rules are applied to arrive at the final destination URL.
Discovery - The user's selection of a preferred OpenID Provider lets the Relying Party redirect the request to that specific OpenID Provider. Since the Relying Party does not keep OpenID Provider URLs on hand, it needs to discover the Identity provider on the fly. Based on the OpenID Provider name, the Relying Party performs discovery of the URL by requesting the XRDS document that contains the necessary information. This is an XML-based document which contains one or more sets of OpenID endpoint URL and protocol version.
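For illustration, here is a representative XRDS document and a sketch of how a Relying Party might pull the endpoint URL and protocol version out of it. The document contents are illustrative, not a real provider's response; only the XRDS namespaces and element names follow the specification.

```python
import xml.etree.ElementTree as ET

# Representative XRDS document, of the kind an OpenID Provider returns
# during discovery (the URI below is a made-up placeholder).
XRDS = """<?xml version="1.0" encoding="UTF-8"?>
<xrds:XRDS xmlns:xrds="xri://$xrds" xmlns="xri://$xrd*($v*2.0)">
  <XRD>
    <Service priority="0">
      <Type>http://specs.openid.net/auth/2.0/server</Type>
      <URI>https://op.example.com/openid/auth</URI>
    </Service>
  </XRD>
</xrds:XRDS>"""

NS = "{xri://$xrd*($v*2.0)}"  # default XRD namespace
root = ET.fromstring(XRDS)
for service in root.iter(NS + "Service"):
    proto = service.findtext(NS + "Type")    # protocol version URI
    endpoint = service.findtext(NS + "URI")  # OP Endpoint URL
    print(proto, "->", endpoint)
```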
These three operations by the Relying Party play a major role in achieving the whole process called the OpenID authentication flow.
In many use cases, the Relying Party requires access to the user's information (such as name, gender, e-mail, mobile number etc.) stored with the OpenID Provider, with the user's approval. OpenID Attribute Exchange facilitates the transfer of user attributes (such as name and gender) from the OpenID identity provider to the Relying Party. Each Relying Party may request a different set of attributes, depending on its requirements.
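As a sketch, an Attribute Exchange fetch request for the user's e-mail and full name adds the parameters below to the authentication request shown earlier. The parameter names and type URIs follow the Attribute Exchange 1.0 specification and the axschema.org schema; the aliases ("email", "fullname") are the Relying Party's own choice.

```python
# OpenID Attribute Exchange 1.0 fetch_request parameters, appended to
# the checkid_setup request. The type URIs follow axschema.org.
ax_params = {
    "openid.ns.ax": "http://openid.net/srv/ax/1.0",
    "openid.ax.mode": "fetch_request",
    "openid.ax.type.email": "http://axschema.org/contact/email",
    "openid.ax.type.fullname": "http://axschema.org/namePerson",
    "openid.ax.required": "email,fullname",  # RP marks these as required
}
```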
There are more internals to OpenID for those who want to dig deeper. We will go on exploring this further in the next blog entry - Demystifying OpenID - Part 2.
The term "Cloud" is talked so much but when it comes to explain it to someone what it means, it becomes a little hard. Through this blog entry I am trying to make it easier to understand the concept called "Cloud Computing".
It is composed of two words, "Cloud" + "Computing": the word "Cloud" comes from the traditional diagrams where any Internet-based resource is represented by a cloud-like symbol, and "Computing" means utilizing the computing power of shared system resources on demand. Combined together, "Cloud Computing" is a way of delivering the computing capability of hosted applications as a service, rather than a product, over the Internet; and this is not limited to hosted applications but also applies to hardware, software, infrastructure etc.
When we talk about "utlizing the computing power of these shared system resource" Does it sound similar to Grid Computing ? Wondering how is this different than Grid Computing ? Cloud Computing has evloved from Grid Computing which provides on-demand resource provisioning. Cloud computing can be considered as a hybrid of Grid computing, and Utility computing. Grid computing utilise disparate computers to form one large infrastructure, harness the unused system resources where as Utility computing is paying for what you use on shared resources like you pay for a public utility (such as electricity, gas, and so on).
In the traditional approach, any organization that wants to deploy an application for its customers (users) has to follow the old application-hosting method: first set up the infrastructure, such as networks, firewalls, load balancers etc.; then procure the platform (e.g. application servers, databases etc.) to host the applications and data. On top of that, significant time is spent implementing the applications on these platforms. This approach is a cumbersome process that takes quite a long time (months, even years) to actually make an application available to end users.
Considering the dynamic nature of business, an organization needs to compete in the market and survive, and so needs to adopt low-cost hardware with more efficiency in time to delivery, backed by a highly scalable, on-demand infrastructure; the Cloud is what fits these dynamic requirements today. With Cloud, cloud consumers really do not have to worry about buying hardware and software or setting up networks and firewalls. They can simply approach a Cloud Provider which caters to these Infrastructure, Platform and Software requirements, and the consumer pays on a per-use model.
Some of the essential characteristics of Cloud Computing which differentiate it from the traditional approach of hosting applications are:
1. All system resources (whether hardware or software) are exposed as services to consumers over the Internet.
2. Since services are provided over the Internet, a wide range of client access methods is supported (e.g. mobile phones, tablets, laptops, and workstations).
3. It provides on-demand utilization of system resources, catering to multiple clients' requirements using a multi-tenant model.
4. The infrastructure is maintained and controlled by the Cloud Provider, while the Cloud Consumer has an agreement with the Cloud Provider to host its services.
We will look at other aspects of Cloud in my next blog entry…
In the era of high-end and Cloud computing, every organization uses information technology to make its work fast, efficient, manageable and scalable as business growth happens, but they tend to forget one essential thing until something impacts the business infrastructure and data negatively: a breach of SECURITY. The two most essential things on which an organization can never compromise are DATA and NETWORK. These are highly critical, as they form the backbone of the business. Data can be anything from press releases and employee records to inventions, product architectures or blueprints, etc., all of which relate in one way or another to confidentiality, integrity and availability, which an organization needs to maintain for both internal and external use while staying compliant with industry standards.
HACK is a term that frightens people when spoken, as it suggests some kind of terror attack. And yes, it is a kind of terror attack: a cybercrime that happens everywhere in the world, with an impact one cannot imagine in the worst of nightmares. If someone tells you that your device (PC, laptop, smartphone, etc.) has been hacked, you panic, and a question crops up in your mind about every application and piece of data you use and store on the device: has it been compromised? Maybe, or maybe not. Not everything. Compromising any number of devices connected to a network and then interconnecting them forms a BOTNET, which is run by a group of black hat hackers. A botnet is a collection of compromised computers/hand-held devices connected to the Internet (each compromised device is known as a 'bot'). When a device such as a computer is compromised by an attacker, there is often code within the malware that commands it to become part of a botnet. The "botmaster" or "bot herder" controls these compromised computers via standards-based network protocols such as IRC and HTTP.
Every day multiple breaches happen and hundreds of vulnerabilities are reported, some of which are exploited in the wild, causing serious repercussions for infrastructure, operations and services; sectors such as telecom, banking, government, transportation and many other industries can come to a halt and break down, and eventually your business may shut down. This happens for multiple purposes, such as monetary gain, revenge, political matters, nationwide social security, curiosity and many others. An attacker can be a disgruntled employee, a teenager/student, a business competitor, a black hat hacker, a biased geek and many more.
In the present scenario, attacks have become more sophisticated and very complicated to identify. To be secure, defense cannot be the only option a corporation works on: if you are not aware of, or have not identified, the risks associated with your business, you make it more vulnerable to both internal and external attacks, potentially bringing your business down to GROUND ZERO. Every organization utilizes software from multiple vendors and third parties, which exposes a large attack surface, out of which a small number of successful attacks is sufficient to bring the whole infrastructure down. Top product vendors like Microsoft, Sony, Oracle, Adobe, Mozilla, Apple, etc. are not the only ones in whose products vulnerabilities critical enough to open a big window for a breach are found. Third-party and open-source applications account for a big share of the bugs, around 78% in 2011, as reported by top security research teams, including X-Force, which maintains one of the world's largest threat and vulnerability research databases.
The X-Force Research and Development Team's annual report for 2011, based on trend analysis of reported vulnerabilities, shows that the number of flaws found in software doubled compared with 2010, of which more than half were critical enough to enable serious attacks. An X-Force Threat Analysis Service study shows that more than half of the software an organization uses is vulnerable, and half of what was not vulnerable last year may become vulnerable this year. A billion-dollar question may come to every CEO's/CTO's/CISO's mind about creating a security framework for their business that safeguards and monitors network/user/application activities, in an environment where every application, employee, piece of confidential data and item of business infrastructure is tightly coupled, interconnected and integrated with the others.
The answer is IBM's ISS.
ISS has created a security framework which, when implemented, takes care of every aspect from people to endpoints, helping an enterprise's business flourish. Products such as Enterprise Scanner, Network Intrusion Prevention System and QRadar provide a Security Intelligence approach that offers preemptive protection to ensure the availability of your revenue-producing services and to protect your corporate data by identifying where risk exists, prioritizing and assigning protection activities, and reporting on results. Overall, it provides the risk, log and event analytics essential for an enterprise, helping reduce the window for any attack to happen.
Implementation of Effective Vulnerability Management with Enterprise Scanner (ES):
For organizations that prefer to manage security operations in-house, IBM offers vulnerability management scanners that conduct automated and continuous scanning to identify potentially damaging vulnerabilities in your network infrastructure. Vulnerabilities evolve and will continue to evolve as long as old legacy applications with security loopholes are not taken care of and the people creating new software applications lack awareness of secure coding practices. Vulnerability management is an ongoing process that protects your valuable data, customer information, critical network assets and intellectual property. Scanners from IBM are designed to identify vulnerabilities quickly and accurately, as well as to provide remediation steps and blocking techniques. IBM vulnerability management solutions track and communicate risk reduction efforts from initial identification through remediation. Vulnerability management is a key component of an effective information security strategy, providing comprehensive, preemptive protection against threats to your enterprise security. In today's scenario, effective vulnerability management is a cyclic practice for any enterprise: identify, classify, remediate and mitigate vulnerabilities. Enterprise Scanner manages both known vulnerabilities (those reported by the security community and already fixed by the relevant vendors) and unknown (zero-day) vulnerabilities. The IBM X-Force research and development team designed the IBM Common Assessment Module (CAM) and provides the content updates that maintain "ahead of the threat" protection, along with the following features:
1. Passive/active asset identification, with the inclusion of the IBM Proventia Network Anomaly Detection System (ADS). Asset identification techniques used are ping sweep, UDP probe, asset fingerprinting, NetBIOS-based discovery, TCP/UDP port discovery, OS fingerprinting, and an integrated Nmap (Network Mapper) database.
2. Asset classification - A hierarchical group structure that mirrors your organizational structure, providing context for both scanning and reporting.
3. Vulnerability assessment - Discovery-based assessment and scripted assessment; allows new content without updating product binaries; provides smaller content updates (IBM X-Press Update product enhancements) powered by X-Force; supports faster time to market with security content; automated security intelligence updates on the newest electronic threats.
4. Attack emulation - Performs specific tests in a non-impacting manner (posing no danger to your network) to analyze the effects of a real attack. The renowned vulnerability database from the ISS X-Force Research and Development team recognizes vulnerabilities and programming errors that could compromise an asset, and new vulnerabilities are detected automatically based on X-Force expert recommendations.
5. Scan windows - Automated scanning during open scan windows; auto-pause/auto-resume, i.e. automatic scan suspension upon closure of scan windows, resuming when the scan window reopens; a configurable refresh period refreshes data automatically during open scan windows, helping to ensure up-to-date vulnerability information; emergency scans provide quick results, such as ad hoc scans of your network on request.
6. Reporting - Reports that present information in the context of your organization: group and report on risk in the applicable business context, by geography, network layout, business system or any other useful grouping of assets; flexible event analysis; enterprise-level multiscan support and multiscanner reports; preconfigured report templates; reports exportable to PDF, CSV and HTML formats.
7. Easy to install, configure and manage - Integrated with the SiteProtector management system (centralized command, reporting and analysis for ES and IPS), designed to unify the protection of network, server and desktop assets. ES also has its own Local Management Interface with Proventia Manager.
Prevention is Better Than Cure: Preemptive Protection with Intrusion Prevention System - Protection that Works Ahead of the Threat
Organizations need to stay ahead of the latest threats and keep business-critical applications secure. In today's environment, companies are required to do more with fewer resources, all while maintaining a secure environment. Organizations need improved protection against the issues facing businesses today. Proventia Network IPS helps stop malicious Internet attacks before they impact your organization, which is the only effective way to preserve network availability, reduce the burden on your IT resources and prevent security breaches. Deployed in-line on your network, Proventia Network IPS helps stop threats before they impact your business and delivers easy-to-use data security and Web application protection policies to help businesses prevent data loss and attacks targeting Web applications. The core capabilities of the IPS are:
1. IBM Virtual Patch technology - Shields vulnerabilities from exploitation, independent of a software patch.
2. Client-side application protection - Protects end users against attacks targeting applications used every day, such as Microsoft Office files, Adobe PDF files, multimedia files and web browsers.
3. Advanced network protection - Advanced intrusion prevention, including DNS protection.
4. Data security - Monitoring and identification of unencrypted personally identifiable information (PII) and other confidential data.
5. Web application security - Protection for web apps, Web 2.0 and databases (the same protection as a web application firewall).
6. Application control - Reclaim bandwidth and block peer-to-peer networks and tunneling.
With the whole world becoming virtual, security has become an important factor like never before. Until recently, 'hacking' worried only prime websites with large online business transactions. But nowadays, even users and owners of social networking sites (the CEO of Facebook isn't spared either) can be victims of profile hacking.
Even the biggest IT player, IBM, felt the need to build a new security business group almost overnight, dedicated solely to this purpose.
Here's a look at various reports that were doing the rounds in the media on this topic: "90% of sites are vulnerable to application attacks" (Watchfire®); "78% of easily exploitable vulnerabilities affected Web applications" (Symantec™); "80% of organizations will experience an application security incident by 2020" (Gartner).
Well, that's a lot of vulnerable web sites, isn't it?
But still, one may ask: why invest in testing now instead of just responding to an attack after it happens? To answer this question, let's look at all the negative impact an attack can have, such as:
- Loss of customer confidence, and hence harm to your brand
- Disruption to your online means of revenue collection
- Related legal fees
- Unwanted media attention
Hence, in software design, security is becoming an increasingly important parameter as applications become more frequently accessible over networks (and are therefore vulnerable to a wide variety of threats). But not all of this vulnerability testing can be done manually, as there are countless permutations and combinations by which a hack or attack can happen. In this scenario, the IBM® Rational® AppScan Enterprise tool is a real life-saver. This tool is used for security assessment: it can test a Web application or a Web service from a security perspective.
AppScan Enterprise Edition scans for vulnerabilities by traversing an application similarly to the way a user browses a Web site. It starts from the home page or some other entry point, as defined by the user, and follows all of the links. Each page is analyzed and, based on the characteristics of the page, AppScan sends a number of tests. The tests are sent in the form of HTTP requests. AppScan determines the presence of vulnerabilities based on the responses from the Web server. The application is treated as a black box, and AppScan communicates with it just like a browser does. AppScan has thousands of built-in tests and checks for hundreds of different types of vulnerabilities.
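To picture the black-box idea (and only the idea; AppScan's real tests are far more extensive and sophisticated), here is a toy Python sketch: collect links the way a crawler would, then inject a marker payload into a query parameter and check whether the server reflects it back unescaped, which would suggest a reflected XSS flaw. All names and payloads here are illustrative.

```python
import urllib.parse
import urllib.request
from html.parser import HTMLParser

# Marker payload: if it comes back verbatim in the response body,
# the parameter is likely reflected without escaping.
PAYLOAD = "<script>alert(1)</script>"

class LinkCollector(HTMLParser):
    """Collects href attributes, the way a crawler follows links."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [v for k, v in attrs if k == "href" and v]

def crawl(url: str) -> list:
    """Fetch one page and return the links found on it."""
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode(errors="replace")
    collector = LinkCollector()
    collector.feed(html)
    return collector.links

def probe_reflected_xss(url: str, param: str) -> bool:
    """Send one HTTP test request and judge the response, black-box style."""
    parts = urllib.parse.urlsplit(url)
    query = dict(urllib.parse.parse_qsl(parts.query))
    query[param] = PAYLOAD
    test_url = parts._replace(query=urllib.parse.urlencode(query)).geturl()
    with urllib.request.urlopen(test_url) as resp:
        return PAYLOAD in resp.read().decode(errors="replace")
```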
IBM® Rational® AppScan has various other editions too. To name the basic ones:
- Enterprise Edition: for web application and web services security testing
- Source Edition: scans the source code itself and accordingly recommends secure coding practices
- Build Edition: designed to find security holes while building or packaging the code files
Considering the various dimensions involved in this type of testing, AppScan provides very comprehensive reporting for the user's assessment. The user can view the report in three different views:
- Security Issues: lists all the issues found
- Remediation Tasks: provides the remediation steps that need to be taken
- Application Data: lists the test data used during testing
It also has a 'Delta Analysis' feature, by which a report is compared between two sets of scan results and the differences in the security issues discovered are highlighted.
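The idea behind delta analysis can be pictured as a simple set difference over issue identifiers. The tuple fields below are illustrative, not AppScan's actual data model.

```python
# Illustrative delta analysis: compare two sets of scan findings and
# report what is new and what was fixed between scans.
def delta(previous: set, current: set) -> dict:
    return {
        "new_issues": current - previous,    # appeared since last scan
        "fixed_issues": previous - current,  # no longer reported
        "unchanged": previous & current,
    }

scan_1 = {("XSS", "/search"), ("SQLi", "/login")}
scan_2 = {("XSS", "/search"), ("CSRF", "/transfer")}
print(delta(scan_1, scan_2))
```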
I was part of the IBM Security Role and Policy Modeler 1.0 product development team, and I am proud to say AppScan really helped us make our product less vulnerable to such security threats.
To know more about the latest AppScan V8.5, please visit: http://www-01.ibm.com/software/awdtools/appscan/
Tivoli Directory Integrator (TDI), also referred to as 'Blue Glue', is an easy-to-use integration tool that can integrate various IBM products. It can be used to integrate different IBM products in critical scenarios, providing value-add to businesses that want to leverage the capabilities of their product portfolio.
TDI can be a very useful, handy and effective tool for identity management functions. It can be used to:
Provide batch and real-time synchronization between identity data sources, so that enterprises can establish an up-to-date identity data infrastructure
Build a metadirectory or identity data warehouse, or provision directly into existing systems
Synchronize all instances of identity data and other data across the enterprise to the authoritative source, which increases accuracy and decreases administrative costs. The synchronization can be done in real time to ensure data integrity and effectively meet compliance and governance requirements (a minimal sketch of the batch case follows this list).
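A batch synchronization of this kind boils down to upserting records from an authoritative source into each target store, keyed on a shared identifier. The record fields below are invented for illustration; TDI itself expresses this kind of flow as AssemblyLines built from connectors.

```python
# Minimal batch-sync sketch: push records from the authoritative source
# into a target store, keyed on a shared 'uid'. Field names are invented.
def sync(authoritative: list, target: dict) -> None:
    for record in authoritative:
        uid = record["uid"]
        if target.get(uid) != record:
            target[uid] = dict(record)  # create or update (upsert)

source = [{"uid": "jdoe", "mail": "jdoe@example.com", "dept": "IT"}]
directory = {}
sync(source, directory)
print(directory)  # the target now mirrors the authoritative source
```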
A specific example of TDI being used with IBM Tivoli Identity Manager (ITIM) is the development of ITIM adapters. ITIM adapters are software components that facilitate communication between the ITIM server and a managed resource, such as Tivoli Directory Server. These adapters can be developed utilizing the wide range of connectors provided by TDI. See the ITIM Adapter Architecture diagram below:
TDI can also be used to aid with specific administration tasks in ITIM. ITIM has both LDAP and database data stores containing useful data, and TDI has connectors for both of these resources. For example, using specific connectors we can query simple things like orphan accounts directly from the LDAP data store and get real-time information, rather than the stale data provided by ITIM's built-in reporting tools, which require a time-consuming data synchronization to move the data from LDAP into DB2 database tables. It can also be used to identify disallowed accounts as well as reconciliation failures.
As described above, TDI is a very nifty tool that can be used for performing many tasks with IBM Identity and Access Management (IAM) products. TDI is bundled with Tivoli Identity Manager, Tivoli Access Manager, Tivoli Directory Server and other IAM products.
IBM Tivoli Identity Manager now features role and policy modeler functionality.
I am proud to have been part of the development team, as its manager, that developed this functionality of TIM from scratch over the last 18 months. We went through the most rigorous schedule possible. I am happy it is seeing the light of day during the festival of lights in India - Diwali! The product has really turned out extremely well, with excellent quality. I am dying to hear the real customer feedback once it starts being deployed.
Over the next few weeks, SMEs from my team and I will write in more detail about the new features and offerings coming out with the Role and Policy Modeler.
IBM Tivoli® Identity Manager is a policy-based identity and access governance solution that helps automate lifecycle management of user roles, identities and access rights.
New IBM Security Role and Policy Modeler (RaPM) functionality built into Tivoli Identity Manager V5.1 provides role mining and role lifecycle management capabilities. A role hierarchy helps simplify and reduce the cost of user administration by enabling the use of an organizational role structure.
I have worked as both a manual and an automation tester. My experience involved working on multiple projects from varied domains. This enabled me to gain more knowledge of, and insight into, the different QA processes and models followed by different teams depending upon their development cycles, for example Waterfall, Iterative, Rapid Application Development and Agile. Each project and its development life cycle has helped me analyze the real issues, and how and where real improvements can help, irrespective of the development model.
Based on my experience, I am going to write about one of the most effective processes: "Defect Analysis". If done at the right time and with the right perspective, it enhances the team's deliverables and eventually the product quality.
Challenges faced by the QA team:
1. Quality of defects
2. Lengthy defect cycle before closure
3. Number of defects
4. Quality of code
5. Pressure on the team towards the end
6. Lack of consistency in QA and development deliverables
Proposal based on experience: Historically, Defect Analysis is a process done after the release, with lots of metrics derived from the defects to evaluate the QA and development teams' performance. The proposal is to do Defect Analysis during the QA cycle, to ensure ongoing quality improvement when it matters.
Project phases for Defect Analysis:
* Phase I - Initial project development (for agile projects, sprint 1 of QA)
* Phase II - Integration testing stage
* Phase III - Regression cycle
Stakeholders in Defect Analysis:
* Tester
* QA Lead
* QA Manager
* Developer
* Development Lead
* Project Manager
Defect Analysis, when done during the release, helps us:
* understand what is going on
* know what can be expected
* see what needs to be worked upon
The following sections cover the phases mentioned above in more detail.

Phase I - Project development has just started, and builds have been made available to QA for testing. If we do things right the first time, it saves a lot of trouble later for all of us. A lot of defects at this point could be due to the following reasons:
1. No unit testing done
2. Requirements not frozen, or not clear between the QA and development teams

Action: The Development Lead needs to act here. If the defects are valid, the lead should ensure that:
* Code is unit tested.
* Code reviews are done.
Result: The development team will catch as many defects as possible at their end through self-review. If there are a lot of invalid defects, the QA Lead should ensure there is a streamlined process guaranteeing that the QA team has access to the functional specification and is updated every time there is any change in design or functional requirements.
The QA Lead should also check that QA members are entering defects in the correct format. The minimum information that should go into a defect (captured as a template in the sketch below) is:
1. Clear, precise title
2. Steps to reproduce
3. Test data, if any
4. Test environment details
5. Build number
6. Logs, snapshots, etc. to enable easy debugging
7. Correct priority and severity
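For convenience, the checklist above can be captured as a simple template that testers fill in when logging a defect. The field names are just one possible layout, not tied to any specific defect tracker.

```python
# One possible defect template capturing the minimum information above;
# field names are illustrative, not tied to any specific tracker.
defect_template = {
    "title": "",               # clear, precise one-liner
    "steps_to_reproduce": [],  # numbered steps
    "test_data": None,         # if any
    "environment": "",         # OS, browser, server details
    "build_number": "",
    "attachments": [],         # logs, snapshots for easy debugging
    "priority": "",            # correct priority
    "severity": "",            # correct severity
}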
Having all the information in the first go helps faster closure of the defect. In addition:
1. Testers should read all the defects being logged, including those logged by peers; this should be a daily activity. It ensures that team members don't end up verifying/debugging already-affected components and are also spared from logging duplicate defects. This saves time for both the QA team and the development team, since duplicate defects carry a time cost.
2. Testers should try new scenarios around the affected area, before and after the fix, to catch more defects.
Result of action: By giving all possible information and symptoms, QA helps locate the problem and its location, which helps developers act faster. Moreover, QA's understanding of the product's internal workflows becomes clearer.
Phase II - Halfway through the project development cycle. The defect count is definitely going to be high at this time, which means everybody is in action.
This is the time when the Development Lead and QA Lead should ensure that timely action is taken on defects. Usually Development Leads ensure that blocker, critical and high-severity defects are immediately resolved, while minor and normal defects may keep piling up. That increases the pressure on the team as the defect count rises towards the end of the release.
A strategy like targeting 2 minor and 2 normal defects per developer per week ensures application cleanup and helps keep the defect count low. The QA Lead should ensure that the QA team is not taking long to respond to defects waiting for action on their side; defects should be verified and closed as soon as possible. Immediate defect verification is important because it helps in the following ways: if the defect has to be reopened, the developer who worked on it still has everything fresh in mind, whereas reopening a defect after a long duration makes it time-consuming for the developer to resolve again; general testing around the fix confirms that the fix has not broken anything new; and the QA metrics on response time look great!
Confirm that every defect has an associated test case; this helps the QA team working on the next version of the product. In case of tight schedules, a test case with a brief title should be written, and the detailed test cases can be taken up after the release.
Result of action: The defect count continues to be manageable.
Phase III - Thrashing, regression testing. Any Severity 1 defect at this time indicates the need for more testing:
* It is a sign of more defects in hiding; we need to catch them all.
* After a fix, confirm that the defect and all related subroutines are working as expected and that no new defect has been introduced.
Ensure faster closure: pull in free developers to regress bugs and run positive test cases. Rule of thumb - any defect analysis done post-release will not be as effective as analysis done before release.
Benefits: Being proactive and staying on top of the problem from the start helps both the QA and development teams deliver quality products.
Next steps and recommendations:
* All projects following Agile processes should make defect analysis a mandatory activity along with their other sprint exit activities.
* Performance and security defects generally crop up towards the end; if the code review checklist includes evaluation of the code for these two factors as well, it is the icing on the cake.
* The Project Lead should continuously monitor defects to track trends and confirm that the team's strategies are in line.
IBM is holding its annual Software Universe event in Mumbai on 19th and 20th October 2011. This event is attended by industry and technology leaders from India. IBM, on its side, is lining up some of its best speakers and innovators.
One of the focused tracks in the event is on Security, Risk Management and Compliance.
Some of the major security concerns today are:
1. Greater complexity and increased attacks
2. Growing threats to critical infrastructure/cyber security
3. Compliance regulations and organization-level practices that increase system complexity
IBM's strategy and security framework to deal with these concerns is summarized in the image below.
Key deliverables from Tivoli for IBM Security Solutions are categorized as (1) IAM and Compliance products and (2) Data Center and Operations Security products.
IBM has offerings such as TIM, TDS and PIM in the Identity family. There are products such as TAM, TFIM, etc. in the Access family and TSIEM in the Compliance family. IBM offers products in network intrusion prevention and security server protection, among many others. IBM also offers an extensive and proven portfolio of market-leading software, consultancy and services to help clients with cloud security.
The company has gathered extensive input from customers and has come out with the key themes that are driving its security roadmap:
* Proliferation of smart and mobile devices
* Cyber security and threat advancements
* B2C expansion and secure collaboration
* Securing cloud ... and as a service
* Application vulnerabilities and attacks
* Enabling governance and analytics
Participants can look forward to a more elaborate discussion of these important offerings from IBM by the speakers and presenters at Software Universe.
I found this diagram an easy way to explain what additional security challenges the cloud introduces to an organization. What is so different about it?
To me, it is important to understand the holistic picture before one dives deep into the specifics of cloud security. The IBM security framework has helped achieve this understanding by explaining the host of security requirements in a cloud computing environment.
In this first blog on the topic, I would like to start with this perspective as a simple and understandable overview. To continue the chain, readers can comment with specific solutions in each of these areas.
Security Governance, Risk Management and Compliance
In a cloud scenario, it is critical and important to demonstrate compliance with the laws of the land, to ensure data is stored and accessed within regulatory constraints, and to ensure encryption is applied to the data as permitted by the country/jurisdiction.
Since public clouds are by definition a black box to the subscriber, potential cloud subscribers need the provider to demonstrate regulatory compliance for change, image and incident management, as well as incident reporting for tenants and tenant-specific log and audit data. In addition, providers are sometimes required to support third-party audits, and their clients can be directed to support e-Discovery and forensic investigations when a breach is suspected.
People and Identity
Cloud environments usually support a large and diverse community of users. In addition, clouds introduce a new tier of privileged users: administrators working for the cloud provider. Privileged-user monitoring, including logging activities, becomes an important requirement.
How do you control passwords and access tokens in the cloud?
How do you federate identity in the cloud?
How can you prevent user IDs/passwords from being passed around and exposed in the cloud unnecessarily, increasing risk?
Data and Information
Typical concerns include the way in which data is stored and accessed, compliance and audit requirements, and business issues involving the cost of data breaches. All sensitive or regulated data needs to be properly segregated on the cloud storage infrastructure, including archived data. Increased control over the data is needed, especially for the privileged users administering the cloud environment.
Encrypting data in transit to the cloud or at rest in the service provider's data center, and managing the associated encryption keys, is critical to protecting data privacy and complying with regulatory mandates. The encryption of mobile media, and the ability to securely share those encryption keys between the cloud service provider and the consumer, is an important and often overlooked need. It is critical that the data is encrypted and that only the cloud provider and consumer have access to the encryption keys.
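One pattern that follows from this is client-side encryption: the consumer encrypts the data before it ever reaches the provider and holds the key (sharing it with the provider only when needed). Here is a minimal sketch using the Python cryptography package; the storage call is a stand-in, not a real provider API.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Consumer-held key: the cloud provider stores only ciphertext, so the
# data at rest in the provider's data center stays private.
key = Fernet.generate_key()  # keep this in the consumer's key store
f = Fernet(key)

ciphertext = f.encrypt(b"employee records: ...")  # encrypt before upload
# upload_to_cloud(ciphertext)  # stand-in for the provider's storage API

plaintext = f.decrypt(ciphertext)  # only a key holder can recover this
```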
Application and Process
Typical application security requirements are carried over to the images that host those applications. In addition, cloud users demand support for image provenance and for licensing and usage control. Suspension and destruction of images must be performed carefully, ensuring that sensitive data contained in those images is not exposed.
Organizations need to ensure that the Web services they publish into the cloud are secure, compliant, and meet their business policies.
Network, Server and Endpoint
In the shared cloud environment, subscribers need to ensure that all tenant domains are properly isolated and that no possibility exists for data or transactions to leak from one tenant domain into the next. To help achieve this, clients need the ability to configure trusted virtual domains or policy-based security zones. As data moves further from the client's control, they expect capabilities like Intrusion Detection and Prevention systems to be built into the environment.
Protecting the hypervisor, which interacts with and manages multiple environments in the cloud, is critical and important, since the hypervisor is a potential target for gaining access to more systems and hosted images.
The cloud's infrastructure, including servers, routers, storage devices, power supplies, and other components that support operations, should be physically secure. Safeguards include the adequate control and monitoring of physical access using biometric access control measures and closed circuit television (CCTV) monitoring.
Further details can be found in the IBM Redbooks and white papers.
When the invitation mail for 'Develothon 2010' hit my mailbox, that very moment I decided I just could not afford to miss this eminent event, because the experience I had at the 'developerWorks Unconference' at Hotel Orchid, Mumbai on Dec 11, 2009 compelled me to attend this event as well. (Add-on: I had won the "Best Speaker" trophy there :)) But, as usual, I was hard at work and did not realise it until a day before the event. And then I thought I'd give it a whirl. To my surprise, I got a consent mail around 11.45 pm. Phew! What next? I started scribbling the presentation around midnight and finally came up with its finished version around 2 hours later. Now my clock warned me that I had just a few hours to rest before reaching the venue. But, honestly, I was so excited about attending this event that I was unperturbed.
When I reached the venue, I met Bharathi Muthu, IBM developerWorks Manager, who greeted me with a big smile. Trust me, this enthusiastic lady creates a special aura; I was so impressed and glad to meet her. She asked me, "Ankita, you're presenting again?" Oh my God! My brain worked out in less than a moment that it wasn't that she did not want me to present, but that she was scared I would grab a second trophy at the unconference. LOL
Then, as envisioned, the event was just exemplary. The agenda for the day was "Information Management". It was a day of learning, because the speakers were all my peers, yet we had hardly ever shared expertise on those topics, since I come from the security and identity management domain. So this event gave me a chance to get acquainted with a next-door domain and technology: IM and Cognos.
And then came the last hour: the 'Unconference'. I am sure the name must have baffled you as it did me, but Bharathi clarified: "An unconference is a conference with no specific agenda. Just come up with any topic which holds the attention of the audience."
When the audience was asked to vote on the order of the presentations they would like to hear, I was astonished to see that I got the fewest votes :( But then I decided that maybe the audience was simply unaware of the topic I was presenting, and yes, it was the right thing to share with them. That motivated me to hold on. Finally my turn came to present on "Federated Repositories in WebSphere Application Server". I gave a 10-minute presentation and shared my experience as a developer and also as an end user of this product. I am sure I must have come across as the product's advocate :). But yes, it was good to hear from other IBMers as well as customers, and it was an amazing experience to carry away while leaving the place. Oops, I forgot to mention the most important thing: the results. It was an open voting scheme where the audience voted for all the speakers, and guess what, I got the "Best Speaker" trophy again! :) Yeppie, it was the second developerWorks unconference trophy I brought home :):) What more could I ask for!
I am looking forward to more such unconferences in the future (you guessed it right... to grab the trophies again :)) because they provide a platform to share diversified technologies with customers and also to learn from them how they are using our products and technologies as well as others'. To summarise, 'Develothon 2010' was a very well organised event, which itself speaks to the best efforts put in by the developerWorks team. It was a job well done, and I really appreciate it!
(This video is hosted at https://download.boulder.ibm.com/ibmdl/pub/software/dw/ibm/mydw_demo/mydw_demo.html and gives a very good overview of creating profiles on mDW. A 5-minute video with sufficient information to get you started.)
(This video is hosted at http://www.youtube.com/watch?v=GsSjXiQCF-k&feature=player_embedded and gives a very good overview of how mDW helps us work better in a Smarter World community. A 5-minute video with sufficient information that you may love to watch.)