Modified on by SamReynolds
In our first DevOps blog we talked about how we began our journey and touched briefly on the three main pieces of a DevOps movement: Culture, Process, Tools.
You may ask how we got started with changing the culture, and how we know it's working.
What if I told you that if you carve out two hours a week for the next month to automate a task, you'd then free up 4 hours a month for yourself and 3 of your co-workers? Would you do it?
First, you have to realize that any change requires people to do some work. With the workloads we all carry these days, we have to evaluate everything we do today for its priority and usefulness, always looking at our return on investment. Can we spend 10 hours over the next month to make a process better so that we save 5 people 4 hours every month for the foreseeable future? What's the trade-off for those original 10 hours? Can we push a deliverable date out by a month in order to spend those hours?
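The back-of-the-envelope math here is simple enough to sketch (the numbers are the illustrative ones from this post, not measurements):

```python
# Break-even arithmetic for an automation investment. The figures are
# the illustrative ones from this post, not real measurements.
def months_to_break_even(invest_hours, people, hours_saved_each_per_month):
    """How many months of savings repay the up-front time investment."""
    monthly_savings = people * hours_saved_each_per_month
    return invest_hours / monthly_savings

# Spend 10 hours once so that 5 people each save 4 hours every month:
print(months_to_break_even(10, 5, 4))  # 0.5 -> the work pays for itself in two weeks
```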
We began by asking ourselves questions about pain points and areas of frustration. What we found is that at the heart of the DevOps transformation is culture change. How do we stop doing the tasks that no longer matter? How do we evaluate what really needs to be done? Can we push lower-priority work out to give us time to change the processes that are time consuming? How do we change our mindset as an organization?
These are the kinds of changes in our thought process that have to be fostered in order to make DevOps work. It's not easy and it takes time to get everyone on board with shifting our focus and thinking in a DevOps way.
How do you get the buy-in to change the culture? It's pretty simple: prove it.
You make a small change that saves an hour of time or reduces frustration which proves that the time spent to evaluate and change the process is, in fact, worthwhile and provides real benefits.
- One of the first things we did was take a deep dive into a process that had 140 steps and reduced it to 81. This had been a widely shared pain point, so while fairly simple it proved management's commitment and showed real benefits.
- Another pain point was the upgrade of a tool we all use and are completely dependent upon. We had to upgrade the tool, but in the past it had taken over 4 hours per user, and the process was clunky and prone to errors. One person was assigned the task of creating an automated script that would reduce the process to 30 minutes. Management worked to free up cycles to allow him to spend two days on it, which saved everyone else in the organization 4 hours each.
- We have another tool that is used by a large portion of our organization that takes nearly 30 seconds to complete a transaction. This has been a pain point for several years that users have just been dealing with in (mostly) silence. We have a small task force that is spending approximately an hour a week to find a solution to this problem. Once it is resolved, we know it will save users time and a lot of frustration.
These were relatively small pain points, but they are helping to change the culture by establishing credibility that long-standing pain points can change and people will see real benefits from the work put into transformation. We have other items that are larger which we are working to automate that will save people time as well as increase quality.
We have opened the door to change and as an organization we are saying to ourselves "we don't have to do a task just because it's always been done". When people realize that the leaders of your organization are serious about letting the people who do the work change the work as they see fit, it empowers a person to look at every process and procedure in a new light.
That says to me that it's working.
With z/OS V2R2, Communications Server brings enhancements in a number of areas, including scalability, simplification, and autonomics.
If there is a single, overarching theme in z/OS V2R2 CS, it is scalability. By enabling the TCP/IP stack and its strategic device drivers to utilize 64-bit (above the bar) storage, a substantial inhibitor to workload growth is relieved. Extremely large Enterprise Extender implementations benefit from internal optimizations that allow scaling to tens of thousands of connections per LPAR. And a substantially restructured IKE daemon provides a significant reduction in the time necessary to establish a large number of VPN connections.
Shared Memory Communications over RDMA (SMC-R), first introduced in V2R1 and on the zEC12/zBC12, is also enhanced in V2R2. Most significant is the ability to share the RoCE adapter, with a single adapter shareable by up to 31 virtual servers (LPARs or second-level guests under z/VM®). (This enhancement requires an IBM z13, and is also available via PTF on z/OS V2R1.) Are you unclear on how SMC-R will benefit your environment? A new SMC Applicability Tool provides a projection of what percentage of your current traffic could benefit from SMC-R enablement. (This tool is also available via PTF on V1R13 and V2R1.)
The IBM Configuration Assistant for z/OS Communications Server has long been a valuable tool for configuring policy-based networking functions such as AT-TLS, IPSec, and Intrusion Detection Services. In V2R2, the Configuration Assistant gains an entirely new discipline: the ability to configure a TCP/IP profile, allowing a graphical interface and wizard-driven approach to configuring a TCP/IP stack.
There are additional enhancements in many other areas, such as security, TCP/IP autonomic tuning, and support for CICS transaction tracking. For more information, consider downloading the “z/OS V2R2 Communications Server Technical Update” presentation from the Winter 2015 SHARE Conference.
When source VIPA is in effect, you can control which static VIPA is used for your IPv4 dynamic XCF interfaces by using the new SOURCEVIPAINTERFACE parameter on IPCONFIG DYNAMICXCF in z/OS V2R1. This designation removes the need to manage the order of the VIPAs in the HOME list. For the syntax of this new parameter, see the V2R1 IP Configuration Reference.
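As a sketch, the relevant profile statements might look like this (the interface name, addresses, and cost are hypothetical; see the V2R1 IP Configuration Reference for the authoritative syntax):

```
; Static VIPA defined with the INTERFACE statement
INTERFACE STATVIPA1 DEFINE VIRTUAL IPADDR 10.1.1.1

; Dynamic XCF naming that VIPA as its source interface
IPCONFIG SOURCEVIPA
   DYNAMICXCF 10.1.2.1 255.255.255.0 1
   SOURCEVIPAINTERFACE STATVIPA1
```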
If your TCP/IP profile contains DEVICE/LINK/HOME statements for HiperSockets and static VIPA and you want to migrate these to the preferred (and much simpler) IPv4 INTERFACE statement, try this tip on z/OS V2R1: Take a dump of your TCP/IP stack. Use IPCS against the dump and issue the command TCPIPCS PROFILE(CONVERT). The resulting output will represent these IPv4 definitions as INTERFACE statements (and also show you what the updated HOME statement would look like.). You can then review this output to assist in modifying your TCP/IP profile. For more details on TCPIPCS PROFILE(CONVERT), see the V2R1 IP Diagnosis Guide.
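For example, a conversion might look something like the following hypothetical before/after (device names, CHPID, and addresses are made up for illustration; the CONVERT output is authoritative for your configuration):

```
; Before: DEVICE/LINK/HOME definitions
DEVICE IUTIQDF1 MPCIPA
LINK   HIPERL1  IPAQIDIO IUTIQDF1
DEVICE VIPADEV  VIRTUAL 0
LINK   VIPAL1   VIRTUAL 0 VIPADEV
HOME
   10.1.1.1 VIPAL1
   10.1.3.1 HIPERL1

; After: equivalent IPv4 INTERFACE statements
INTERFACE STATVIPA1 DEFINE VIRTUAL IPADDR 10.1.1.1
INTERFACE HIPERIF1  DEFINE IPAQIDIO CHPID F1 IPADDR 10.1.3.1/24
```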
Modified on by NatashaLishok
Do you want to bookmark topics in IBM Knowledge Center (KC) that are useful to you and be able to read them without Internet access? Did you know that you can create a PDF file with your selected KC topics for offline use? With the new "My Collections" function in KC, you can add topics to a collection, organize their order, and create a PDF file that includes these topics so that you can read them offline and on mobile devices.
In less than three minutes you can learn how to use this function. For more details, watch this how-to demo: Creating a PDF on IBM Knowledge Center. It takes only a few clicks to add a topic to a collection, and another few clicks to set the order and create the PDF file. We are pretty sure you will find it useful.
If you want to learn more about how to use IBM Knowledge Center, check out the IBM Knowledge Center demo video that introduces basic and advanced search, creating collections, sharing, and language choice of KC.
We also recommend reading this recent post Navigating and searching for the z/OS V2R2 Communications Server documentation in IBM Knowledge Center for more information on navigating and searching for documentation in IBM Knowledge Center.
Modified on by SamReynolds
One of the things that was easy to do with the older Bookshelf (BookManager) publications was searching within a specific z/OS component such as Communications Server. There is also a way to accomplish this with the z/OS V2R1 Information Center, available at: http://pic.dhe.ibm.com/infocenter/zos/v2r1/index.jsp
Here are the steps to create a search scope of only the z/OS Communications Server books:
1. Access the search area and select Scope.
2. Select New.
3. Name your scope and select the z/OS Communications Server library. Optionally, select any additional criteria. Press OK.
Now when you click Scope, your new scope appears in the list of available search scopes.
You can establish similar "book shelves" for other product publications you want to search, perhaps UNIX or LE. You can also create more granular scopes for just one book.
Happy searching and reading!
Are you thinking about enabling FIPS140 mode for your AT-TLS connections? If so, check out the new techdoc entitled "Setting up AT-TLS for FIPS 140 mode." This techdoc provides one-stop shopping regarding the components you'll need to configure (and in which order) as well as verification steps along the way.
Modified on by NatashaLishok
ENS is a very mature enterprise z/OS product team that is responsible for z/OS Communications Server, ISPF, and IBM Multi-site Workload Lifeline. We have been around for 40+ years and already have good processes in place for design, development, build, test, information development, and service. We made the transformation to agile several years ago and feel that our current processes allow us to deliver on 4-week sprints, innovate on items that don't work well, and deliver with high quality.
When the team started our DevOps journey back in August of 2014 there was push back in the form of two questions:
1) Why DevOps ? We already use automation.
2) What problem are we trying to solve?
In order to answer these questions for a group of well-seasoned software engineers, we had to demonstrate WHY DevOps and show small wins that made their daily tasks easier to do.
How did we do this? We started by doing two things in parallel:
First, we created a core DevOps work group which consisted of engineers from each of the disciplines of our product development life cycle (design, development, build, FVT, SVT, performance, IDD and service). Technical leaders who were open to change were selected for the core work group.
Second, we asked the entire organization two questions:
1) What are your top two pain points?
2) If you could change one thing in the organization what would it be and why?
While we were waiting on the answers to our two survey questions we did the following with the core work group:
1) We discussed terms that were overloaded and created definitions for those terms that were relevant to the team, specific to our organization and product. For example:
DevOps = maximize the predictability, efficiency, security and maintainability of operational processes - this objective is supported by automation
Continuous Integration = merging developer code into the product stream in regular, repeatable, short intervals and rapidly propagating that code to all test systems automatically and quickly
Pre-integration = work done before developer code is merged into the product stream, for example, unit and regression testing and peer review
Continuous Deployment = after passing all the automated delivery tests, each code commit is deployed to end users as soon as it is available. Because changes are delivered quickly and without human intervention, continuous deployment can be seen as risky. It requires a high degree of confidence both in the existing application infrastructure and in the development team.
Continuous Test = continuous testing adds manual testing to the continuous delivery model. With continuous testing, the test group will constantly test the most up-to-date version of available code. Continuous testing generally adds manual exploratory tests and user acceptance testing. This approach to testing is different from traditional testing because the software under test is expected to change over time, independent of a defined test-release schedule.
Continuous Monitoring = monitoring the continuous testing and getting defects reported in real time
Continuous Delivery = a software development discipline where you build software in such a way that the software can be released to end users at any time
Production = today it means deploying to our SVT enterprise customer environment daily and in the future will mean deploying to an environment that can be accessed by our external customers to provide early feedback on pre-GA product code features
2) We defined the purpose and objective of the core work group: Our high level focus areas would be Culture, Process and Tools.
3) We appointed a manager as the owner, project manager, and technical lead for the work group.
4) We agreed to meet bi-weekly for one hour.
5) We created an online community to track and store our meeting agendas, actions and collateral.
6) We agreed to use value stream mapping to document our end-to-end pipeline and processes.
By our second core work group meeting we had the results to our survey questions from the organization. We were surprised by the feedback and how easy it was to identify one or two pain points that were pervasive across the organization. The key is to act fast to solve these first few pain points to demonstrate to the organization "Why DevOps" and get buy-in to the DevOps journey.
The answer to "What problem are we trying to solve?" becomes clearer as the team defines the overloaded terms, states the purpose and objective of the work group, documents the first pipeline of the product development life cycle, and implements the first couple of small changes that prove the value of DevOps.
We will continue to share our strategies and experiences with this blog series and welcome your feedback!
You can also reach out to the author (Frank Varone) of this blog at firstname.lastname@example.org
Modified on by NatashaLishok
IBM Education Assistance (IEA) for z/OS V2R2 Communications Server is now available!
z/OS V2R2 Communications Server delivers enhancements in a number of areas, including security, scalability, simplification, and platform efficiency. The Communications Server IEA material contains a high-level overview of the Communications Server functions in V2R2. For each function, it describes the problem statement, the solution, and the benefits of enabling that function. Understanding these enhancements can help you use them more effectively to meet your business requirements.
For each function, you will gain a better understanding of that enhancement in the following aspects:
- Usage and innovation
- Interactions and dependencies
- Migration and coexistence considerations
Download the PDF today and learn about all the enhancements included in z/OS V2R2 Communications Server!
Modified on by SamReynolds
In 2013, the IBM zEC12 (zBC12) introduced the 10GbE RoCE Express feature, an RDMA-capable network adapter that provides access to RDMA over Converged Ethernet (RoCE). RoCE provides an optimized network interconnect for System z communications. Along with RoCE Express, z/OS provided a new RDMA-based solution called Shared Memory Communications over RDMA (SMC-R), a sockets-based solution providing TCP sockets applications transparent access to RoCE over standard Ethernet.
The IBM System z13 introduces the capability to share (virtualize) the 10GbE RoCE Express feature among multiple (up to 31) LPARs (or z/VM guest virtual machines) using standardized PCIe virtualization (SR-IOV) technology.
When RoCE Express is exploited by z/OS with SMC-R, the combined solution provides two key value points:
- Improved latency, which can raise transaction rates for latency-sensitive transactional workloads
- Lower CPU cost for workloads that transfer larger payloads (e.g. analytics, streaming, FTP, big data, data replication, web services)
SMC-R does this while preserving the critical qualities of service (load balancing, security, isolation, reuse of IP topology, etc.) required by System z clusters in enterprise data center networks, without requiring any application or middleware changes and with zero or minimal operational changes.
When customers enable SMC-R they should immediately experience the benefits, and longer term the benefits can be extended as they expand their exploitation of RDMA technology on System z.
With the IBM System z13 RoCE virtualization capability, users can now share RoCE Express features, which:
- Extends access to RoCE to additional workloads across multiple z/OS instances (LPARs), reducing the number of required physical RoCE features
- Expands (effectively doubles) the bandwidth of your RoCE Express features by enabling concurrent use of both 10GbE RoCE Express physical ports
Customers who have multiple CPCs in a single site (or an extended LAN among multiple sites) with z/OS centric workloads (e.g. SYSPLEX, DB2, WAS, CICS, MQ, IMS etc.) will be natural candidates for benefiting from RoCE Express and SMC-R.
So, if you understand the technology but you’re not sure your environment would benefit from it, we can offer some help. A new tool, the Shared Memory Communications Applicability Tool (SMCAT), has been created to assist with assessing how your specific application workloads might be applicable to SMC-R and with projecting the potential level of benefit for your environment. SMCAT is now available via PTF for z/OS V1R13 (UI24872) and z/OS V2R1 (UI24762). SMCAT does not require SMC-R or any special hardware; instead, it monitors your existing TCP/IP workloads and produces a summary report to help you understand how your workloads might be eligible for and benefit from SMC-R and RoCE Express.
If you have additional questions about SMC-R, RoCE Express, RoCE virtualization or using SMCAT then your next step is exploring the reference materials provided at:
If you are just getting started then the FAQ document might be a good first step. You can also reach out to the author (Jerry Stevens) of this blog at this email: email@example.com
Modified on by JeffHaggar
The QDIO Accelerator function can boost performance of IPv4 traffic forwarded over OSA-Express QDIO and HiperSockets interfaces including sysplex distributor traffic which is routed to a target stack. The optimized packet forwarding provided by QDIO Accelerator improves latency and reduces CPU consumption.
The function applies to traffic which arrives inbound over an OSA-Express QDIO or HiperSockets interface and is forwarded outbound over OSA-Express QDIO or HiperSockets. With QDIO Accelerator, the first time such a packet is forwarded using a given route in the stack routing table, the z/OS stack creates a QDIO Accelerator route. Subsequent eligible packets which would normally be forwarded by the stack on this route instead get processed at the DLC layer without having to traverse the forwarding stack. This provides a much more efficient path for that traffic.
QDIO Accelerator can be especially valuable for sysplex distributor traffic being forwarded to a target stack when you do at least one of the following:
- use HiperSockets to provide dynamic XCF connectivity between stacks on the same CPC
- use VIPAROUTE to route packets to a target stack over an OSA-Express QDIO interface
To enable the QDIO Accelerator function, specify QDIOACCELERATOR on the IPCONFIG statement in the TCP/IP profile of any stack which will perform IP forwarding or which is serving as a sysplex distributor stack. You can enable VIPAROUTE to a specific target stack by using the VIPAROUTE statement in the VIPADYNAMIC block. With VIPAROUTE, the stack forwards packets from the distributor stack to a target stack using a route from the stack routing table rather than using the dynamic XCF connectivity.
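Put together, the profile fragments might look like this sketch (stack addresses are hypothetical; check the IP Configuration Reference for the full syntax):

```
; On the forwarding / sysplex distributor stack
IPCONFIG QDIOACCELERATOR

; Route distributed traffic to the target stack over an OSA-Express
; QDIO interface rather than dynamic XCF (addresses are illustrative:
; first the target's dynamic XCF address, then its target IP address)
VIPADYNAMIC
   VIPAROUTE DEFINE 10.1.2.2 10.20.1.2
ENDVIPADYNAMIC
```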
To display the QDIO Accelerator routes, use the Netstat ROUTE/-r report option with the QDIOACCEL modifier. You can use the Netstat VCRT/-V report with the DETAIL modifier to see which sysplex distributor connections are eligible for acceleration.
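From the MVS console, the displays might look like this (TCPIP1 is a hypothetical stack procedure name; see the IP System Administrator's Commands for the exact syntax):

```
D TCPIP,TCPIP1,NETSTAT,ROUTE,QDIOACCEL
D TCPIP,TCPIP1,NETSTAT,VCRT,DETAIL
```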
With VTAM tuning statistics, you can display information such as the number of packets and bytes accelerated for each interface. Because the accelerated packets do not traverse the forwarding stack, these packets are not included in a packet trace on that stack. However, these are included in an OSA-Express Network Traffic Analyzer (OSAENTA) trace.
Beginning with z/OS CS V2R1, QDIO Accelerator can coexist with IP security. IP-forwarded packets can be accelerated as long as all routed traffic is permitted by your IP filter policy and is not subject to logging. Sysplex distributor traffic is always eligible for acceleration using QDIO Accelerator because these packets are subject to IP filtering at the target stack rather than the distributor stack.
z/OS V2R1 is the last release that will support the GATEWAY statement. To migrate your static routing definitions to the BEGINROUTES block, try this tip: Take a dump of your TCP/IP stack. Use IPCS against the dump and issue the command TCPIPCS PROFILE(CONVERT). The resulting output will represent your GATEWAY definitions in BEGINROUTES format. You can then review this output to assist in modifying your TCP/IP profile. For more details on TCPIPCS PROFILE(CONVERT), see the IP Diagnosis Guide.
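The converted static routes come out in BEGINROUTES form, something like this hypothetical fragment (addresses and interface names are illustrative):

```
BEGINROUTES
   ROUTE 10.2.0.0/16 10.1.1.254 OSAIF1 MTU 1500
   ROUTE DEFAULT     10.1.1.254 OSAIF1 MTU DEFAULTSIZE
ENDROUTES
```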
Modified on by NatashaLishok
IBM Knowledge Center Overview
IBM Knowledge Center is one central repository that contains ALL IBM product documentation. Compared with the previous Information Center, IBM Knowledge Center has the following advantages:
- Unified and Comprehensive: IBM Knowledge Center brings together IBM hardware and software product information in a single location. Now you can find your products more easily, or scan multiple versions of a product to compare their features.
- Personalized and Customizable: IBM Knowledge Center brings personalization and customization to our documents. Knowledge Center remembers your profile preferences and search queries, allows you to sort search results easily, and can help you create and publish custom documents.
- Easy to Use: IBM Knowledge Center lets users filter out extraneous content so they can focus on what matters to them. They can easily build their own personalized library, save search queries, and create persistent, personalized collections. Knowledge Center also promotes continuous improvement of information by letting customers rate topics and comment on their user experience.
- Information currency: We can update our content continually – our new goal is quarterly.
IBM Knowledge Center link: https://www.ibm.com/support/knowledgecenter/
Navigating to the z/OS V2R2 Communications Server documentation
z/OS documentation is located in the Table of Contents under IBM Operating Systems > System z Operating Systems > z/OS. The z/OS product page lists the available content by release and highlights z/OS-specific content. From the Table of Contents, click z/OS 2.2.0, and then you can see the z/OS Communications Server section in the Table of Contents. Click z/OS Communications Server, and then you can see the z/OS Comm Server page with the links to details for each book. To navigate to z/OS Comm Server information more quickly in the future, bookmark the z/OS Comm Server page or the abstract pages of each Comm Server book.
IBM Knowledge Center provides breadcrumbs that reflect the path to a topic, starting with the z/OS release. To return to a parent topic, click the corresponding link in the breadcrumb trail.
Searching for the z/OS V2R2 Communications Server information
To perform a simple search, enter your keywords into the search field and press Enter. As you type, IBM Knowledge Center suggests possible keywords, product pages, and collections related to your search.
By default, the search filter is initially set to search all products and releases. You can click Add Products to set z/OS 2.2.0 as the filter.
IBM Knowledge Center allows you to use Boolean operators (AND/+, OR/|, NOT/-, and so on) to create complex queries.
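For example, illustrative queries might look like the following (the `+` and `-` forms are shorthand for AND and NOT):

```
"sysplex distributor" AND VIPAROUTE
SMC-R OR SMC-D
+HiperSockets -IPv6
```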
After you sign in with your IBM ID, you can also add comments, rate pages, and communicate with others on IBM Knowledge Center. Your feedback about the Comm Server documentation is greatly welcomed.
To enjoy the rich Web experience that this “one-stop shop” provides, check out IBM Knowledge Center!
Modified on by SamReynolds
Colocation, colocation, colocation! Does colocating your application workloads on the same z Systems physical machine (CPC) really matter? In some cases colocation really can make a big difference. When application workloads are network intensive, meaning they either communicate frequently (exchanging many messages to complete a single transaction, as in multi-tiered workloads) or exchange large amounts of data (bulk, streaming, or other big-data solutions such as analytics workloads), the physical location or proximity of the applications can make a difference. That difference can affect both your cost and your overall results.
The IBM System z13™ and z13s™ introduced new technology that offers an opportunity for clients to take a closer look at this aspect of colocation of IBM z/OS application workloads. IBM introduced z Systems technology called Internal Shared Memory (ISM). The ISM technology allows one z/OS instance to directly access (share) virtual memory within another z/OS instance (e.g. LPAR or guest virtual machine) within the same physical machine. The ISM architecture enables direct memory access (DMA) capability for software exploitation.
With ISM, IBM also announced Shared Memory Communications – Direct Memory Access (SMC-D). SMC-D exploits ISM to enable applications to communicate directly and transparently with applications executing in other z/OS instances running in other logical partitions on the same physical z13 system. The direct communication is provided transparently for applications using TCP sockets.
Some history will help give perspective. Prior to ISM, z Systems provided a very efficient technology called HiperSockets. HiperSockets provides an internal logical LAN within z Systems, allowing the operating system to communicate using numerous protocols such as TCP/IP, UDP, and SNA. Communication with HiperSockets is accomplished by creating, exchanging, and processing standard IEEE 802.3 packets (frames) in software. HiperSockets provides a very efficient memory-to-memory transfer (of standard packets) without requiring physical networking hardware.
SMC-D with ISM goes beyond HiperSockets by eliminating all packets, along with all of the TCP/IP protocol and packet-related processing. SMC-D provides a direct socket-to-socket transfer of data. This model yields significant savings in host network processing, which translate into lower CPU cost, lower latency, and higher throughput.
In addition to HiperSockets, z/OS instances on the same CPC can use other network technology to communicate with same-CPC z/OS instances, such as Ethernet using the IBM OSA-Express family of adapters. While there are several options, HiperSockets is typically the optimal one. Although HiperSockets will continue to be an important technology (due to its versatility), the benefits of SMC-D are compelling.
Shared Memory Communications architecture now has two variations:
- Shared Memory Communications – RDMA (SMC-R for cross platform using RoCE)
- Shared Memory Communications – DMA (SMC-D for same platform using ISM)
Both forms of SMC can be used concurrently. The protocol dynamically selects the appropriate variation based on proximity of the peer hosts (i.e. same CPC instances use SMC-D).
So what are the benefits or differences of SMC-D? Benchmark results comparing the technologies have shown that SMC-D using ISM provides significant savings in CPU and latency along with higher throughput. Here is a quick performance summary of request/response (transactional) and streaming (bulk) workloads, comparing SMC-D to HiperSockets:
- Request/Response Summary for Workloads with 1k/1k – 4k/4k Payloads:
- Latency: Up to 48% reduction in latency
- Throughput: Up to 91% increase in throughput
- CPU cost: Up to 47% reduction in network-related CPU cost
- Request/Response Summary for Workloads with 8k/8k – 32k/32k Payloads:
- Latency: Up to 82% reduction in latency
- Throughput: Up to 475% (~6x) increase in throughput
- CPU cost: Up to 82% reduction in network-related CPU cost
- Streaming Workload:
- Latency: Up to 89% reduction in latency
- Throughput: Up to 800% (~9x) increase in throughput
- CPU cost: Up to 89% reduction in network-related CPU cost
As you can see, the benefits of SMC-D with ISM are compelling. If you currently exploit HiperSockets, the applicability of SMC-D is easy to evaluate. If you are not sure whether your environment has the applicable z/OS network traffic patterns, you can evaluate your workload patterns using the SMC Applicability Tool (SMCAT).
With the potential for this type of savings it is easy to see how colocation of network intensive workloads on the IBM z13 or IBM z13s using SMC-D with ISM can make a difference.
Benchmark results shown here are from a controlled IBM internal lab using standard tools. Your actual results may vary. Performance information is provided “AS IS” and no warranties or guarantees are expressed or implied by IBM.
Modified on by NatashaLishok
Although an unplanned failure of a production site is a highly unlikely occurrence, IT departments need to spend resources to ensure that their business-critical data and applications can be successfully recovered if one occurs. Minimizing data loss is the highest priority, but it can come at a trade-off with application availability. Typically, following an unplanned outage, the entire data center can be restarted at the disaster recovery (DR) site, but this can take several hours or longer before the applications are available.
What is sometimes overlooked is how to maintain application availability for a far more likely scenario, a planned outage for a maintenance activity. IT departments schedule maintenance windows in order to apply software fixes or perform application upgrades. Their goal is to minimize the number of maintenance windows required as well as ensure the duration of each window is as short as possible. Despite these efforts, these maintenance windows can still last for several hours and occur multiple times per year. Since recovering the data and applications on the DR site could take several hours, it makes little sense to attempt to utilize the DR site during planned maintenance activities, as the time it takes to switch to the DR site and back to the production site could be longer than the maintenance window itself. As a result, IT departments try to schedule these maintenance windows with the aim of minimizing the impact to their customers, usually on a weekend night.
What if there was a way to quickly switch access to business critical applications and their data from one site to another in a few minutes, rather than a few hours? With application unavailability for maintenance windows shrinking down to several minutes, these windows could be scheduled more frequently, ensuring systems and applications are always running with the most up-to-date fixes. So how can this be accomplished? By using a software data replication product to keep data sources used by the applications in sync across two sites, and IBM Multi-site Workload Lifeline to distribute connections for these applications, such a reduction in site switch times can be achieved.
IBM Multi-site Workload Lifeline, or Lifeline for short, provides the ability to perform a graceful switch of the applications and their data sources, called workloads by Lifeline, during planned outages. By using simple Lifeline commands, workload migration from one site to another can be easily performed, minimizing the down time for planned events such as scheduled maintenance activities. So what makes Lifeline different from existing disaster recovery solutions? First, Lifeline is not an all-or-nothing solution. Rather than initially planning for, and providing system resources for, the recovery of all workloads in the production site, IT departments can focus on their most critical workload first and gradually roll out the solution to additional workloads as needed. A second differentiator is that Lifeline requires no changes to the applications or to the clients accessing the applications and data. Following a planned outage, no manual changes to the network topology are necessary before the workload can be accessed on the alternate site.
As mentioned earlier, a key component to ensure a quick switch of applications and data to the alternate site is software data replication. Depending on the data source being used by the application, a different software replication product would be used to keep the data source in sync across the sites. For example, for applications utilizing DB2, IBM InfoSphere Data Replication for DB2 would be used to keep DB2 data in sync. Lifeline ensures connections for a workload are distributed to only one site at a time, to make certain that updates to the data source are occurring on only one site at any point in time.
Lifeline enables the graceful switch of a workload from one site to the other by:
- First, preventing new connections for the workload from being distributed to either site, while allowing existing connections to the production site a chance to complete their work.
- Next, resetting any connections on the production site that have not completed their work. This guarantees that no additional updates to the workload's data source can occur on this site.
- Finally, allowing new connections for the workload to be distributed to the alternate site.
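The three steps above can be sketched as a toy model (all names here are hypothetical; Lifeline itself is driven by operator commands, not a Python API):

```python
# Toy model of the graceful workload switch described above. Lifeline is
# driven by operator commands; this sketch only mirrors the sequencing.
class Workload:
    def __init__(self, name, active_site):
        self.name = name
        self.active_site = active_site
        self.accepting_connections = True

    def quiesce(self):
        """Step 1: stop distributing new connections to either site,
        giving existing connections a chance to finish."""
        self.accepting_connections = False

    def reset_stragglers(self):
        """Step 2: reset connections that did not finish, guaranteeing no
        further updates to the data source on the old site."""
        pass  # placeholder; a real switch would force-close connections

    def activate(self, site):
        """Step 3: distribute new connections to the alternate site."""
        self.active_site = site
        self.accepting_connections = True

wl = Workload("payments", active_site="SITE1")
wl.quiesce()
wl.reset_stragglers()
wl.activate("SITE2")
print(wl.active_site)  # SITE2
```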
In subsequent blogs, I'll cover more topics, such as how Lifeline can also be used to quickly recover from unplanned outages, and the different types of workloads for which Lifeline can provide recovery during both planned and unplanned outages. In the meantime, you can learn more about Lifeline by going to the following link: