Modified on by SamReynolds
Are you tired of hunting for new function APARs among the many z/OS Communications Server APARs? Do you want a summary of all z/OS Communications Server new function APARs with clear information on release, theme, closed date, and how to enable each new function? It's time to say goodbye to the old way of finding new function APARs and embrace the new one.
With the new function APAR summary for V2R1 and V2R2, you have a one-stop experience for finding all new function APARs, categorized by release and theme. Closed dates, short descriptions, and other reference materials are also included so you can see at a glance how the new function APARs can help your business.
The z/OS Comm Server V2R1 & V2R2 New Function APAR Summary pages are now available on the IBM Support Portal – z/OS Communications Server product support page. You can find the V2R1 and V2R2 New Function APAR Summary pages listed under “Documents”:
Taking the V2R1 New Function APAR Summary as an example, all APARs are categorized into five themes, and within each theme APARs are listed in order of closed date. All APARs are labeled with dates, short descriptions, and direct links to the APARs and to how-to documents (if there are any).
As soon as a V2R1 new function APAR is closed, it can be found via the V2R1 New Function APAR Summary. If you want to read about more V2R1 new functions after reading the APARs, a link to the book “z/OS V2R1 Communications Server: New Function Summary” is attached on the upper-right side of the page for your reference.
For direct links to the z/OS Comm Server V2R1 & V2R2 New Function APAR Summary pages, see:
IBM offers rich technical documentation resources for users. In addition to PDF books and resources in Knowledge Center, the following useful documentation-related resources are also available:
Design thinking is a framework that guides designers through a series of activities to facilitate the creation of new and innovative solutions. Design thinking is not new to the IT industry, and it certainly is not new to IBM. My first exposure to design thinking concepts within IBM occurred well over 3 years ago when I was working on an IBM PureApplication System (IPAS) development team. I have to admit that my initial exposure to design thinking left me dazed and slightly confused.
I was working on a disaster recovery solution. I worked closely with the lead architect to define our disaster recovery lifecycle. We created a 50+ page document describing the lifecycle. The document described all possible disaster recovery states, transitions from those states, processing associated with each state transition, as well as the user interface. When our user experience (UX) group invited me to a walkthrough of our disaster recovery solution I was very excited.
In the past we had similar groups in IBM, but we called them something like the human factors group or usability group. My first clue that something different was about to happen should have been the name: UX group.
I walked into the UX walkthrough thinking we were going to cover the details of the life cycle document I worked on. Much to my surprise we did not. The walkthrough consisted of a story about the people that had to use our solution. It focused far more on them than the solution. It described their roles and associated a name and face to each role. I vividly remember sitting in this meeting wondering what was going on. To me the lifecycle was a series of transitions between states. Why would I care how Adam, the disaster recovery administrator, feels when he gets called at 3:00 AM to recover from a catastrophic hardware failure? Well it took some time, but I now understand why I should care.
IBM has always been a great innovator of new technology. Heck, we pride ourselves on being at the top of the U.S. patent recipients list for each of the last 23 years. In 2015 alone, IBM received 7,355 US patents. There is no doubt in IBM’s ability to develop leading edge technology, but as Millennials and Generation Zers advance in the workforce and the number of Baby Boomers and Gen Xers in the workforce declines, technology is not enough. Millennials and Generation Zers grew up taking technology for granted. They crave products that have a sexy design and are intuitive to use. IBM needs to develop solutions that meet the demands of today’s and tomorrow’s workforce.
It may seem obvious that we need to develop products that are easy to use and fulfill our customers' demands, but exactly how to do that is a little less obvious. Today in IBM there is a subtle shift from an emphasis on building technology to an emphasis on building solutions. When I walked into that IPAS disaster recovery walkthrough 3+ years ago, I was focusing on technology – how to build it and how to wire it. The members of the UX group were focusing on a solution.
IBM utilizes design thinking in concert with an Agile development process to deliver to customers an exceptional user experience. About 18 months ago I had the opportunity to attend a design thinking workshop. As I participated in the workshop it occurred to me that we had been using design thinking principles back in my IPAS days. I just did not know it by that name.
Here in the z/OS Communications Server organization we are using design thinking to drive the content for our next release. I am fortunate to be the technical lead on the development of our very first design thinking hill, although at times I feel like the crew of the starship Enterprise - boldly going where no man has gone before.
A hill is a piece of work stated in terms of a ‘who, what, and wow’. In my case, the ‘who’ is a z/OS network security administrator. Unfortunately I can’t disclose the ‘what’ at this time, but I assure you it will wow a z/OS network security administrator. I know this because we are using the design thinking framework to develop the hill.
My hill, like all design thinking hills, was born at a hills workshop. At a hills workshop a list of candidate hills is created. Each hill is ranked based on cost to implement and value to customer. A set of hills to capture in an upcoming release is agreed to. The intent is to pick hills that will provide maximum value to customers within the allotment of available resources. Keep in mind that hills focus on a ‘who, what, and wow’. Some enhancements don’t have that wow factor, but need to be done in a release nonetheless. A "foundations" bucket is created for items like this.
Customer validation of selected hills is important. After all, what good is creating a solution to a problem that our customers do not experience? To that end, my team surveyed a group of customers, asking them if our hill statement addressed a real need and if they were interested in our solution. 83% of the 23 respondents agreed our hill statement addressed a real need, and 90% expressed interest in our solution. A few months later we followed up with a second survey to help us prioritize the aspects of our solution. To put this in design thinking terms, we were interested in understanding what our customers viewed as their minimum viable experience.
Once our hill statement was validated we focused on understanding our customers' pain points. My team interviewed a small group of customers to discern the people involved with our hill, what those people do today, and how our hill can improve that. Based on the information we obtained from the interviews, we defined a set of personas exemplifying the typical roles, responsibilities, and concerns of people involved with our hill. We then documented our understanding of the ‘as-is’ scenario and created a proposed ‘to-be’ scenario.
In parallel we started to solicit customers to become sponsor users. A sponsor user partners with us to shape the solution of our hill. Being a sponsor user requires a significant time commitment on the part of the sponsor. Typically we engage with a sponsor user once every month or two to solicit their input on our solution. The first activity we did with our sponsor users was to walk them through our understanding of the personas, ‘as-is’ scenario, and ‘to-be’ scenario. Feedback from this and all future sponsor user activities is incorporated into the design of our solution. Currently our hill has 4 sponsor users. We will continue to engage with them until our solution ships.
This is a significant change in the way we develop new function. I think this is a good change and I am excited to continue working with my sponsor users so we really can wow z/OS network security administrators in the next release of z/OS Communications Server. I hope you agree and are equally excited. I know our sponsor users are.
The fantastic Summer 2016 SHARE Conference is now in the books! Six speakers from the Enterprise Networking Solutions organization presented 14 sessions on z/OS Communications Server and 3 on ISPF, including two-part hands-on labs for Configuration Assistant for z/OSMF, and the ISPF editor. We also participated in a panel session where our attendees brought in plenty of great questions for discussion.
At this conference we continued our focus on z/OS V2R2, with special attention given to the Shared Memory Communications protocol, including sessions on SMC-R, SMC-D, and SMC security considerations. Other topics discussed included Enterprise Extender, sysplex networking technologies, network encryption technologies, IDS, FTP security, z/OS CS performance, and z/OS CS storage usage. Attendance at our sessions was very good, and we would like to thank all of those who attended our sessions for the great feedback and dialogue.
For those that couldn't be at the conference last week, you can download most of the charts for the topics we presented by going to the following link:
Please plan to join us for the Winter 2017 SHARE Conference in San Jose, California - March 5-10, 2017.
IBM Doc Buddy, a no-charge mobile application for retrieving support documentation for IBM z products, reduces the time you spend resolving problems and improves the overall information experience.
A key strength of IBM Doc Buddy is that, after initial setup, it lets you look up message documentation without an Internet connection. You can download the message documentation that applies to your systems and retrieve messages by entering message IDs. The application also includes links to the relevant product Support Portals and contact information. After reading a message, you can go to the Support Portal of the product that you are working with or call a relevant contact for further debugging information.
IBM Doc Buddy supports both iOS and Android devices. Search for "IBM Doc Buddy" in the Google Play Store or the Apple App Store, or go to http://ibmdocbuddy.mybluemix.net/ to download the application.
IBM Knowledge Center was updated with a new appearance in June, starting with a redesigned homepage. To learn more about the new IBM Knowledge Center, take a look at the IBM Knowledge Center Version 2 video tour.
To get the right information, you can search for a keyword or select a product from an alphabetical list. For z/OS Communications Server documentation, you can use the following links to quickly locate the information.
Compared with the former version of IBM Knowledge Center, the new IBM Knowledge Center has the following enhancements for search and navigation:
Show the table of contents by clicking on the TOC icon
By default, IBM Knowledge Center opens a full-page content view with a breadcrumb and previous topic / next topic controls on every page. When reading the information, you may wonder where the Table of Contents is. In fact, the Table of Contents is always available, but hidden by default. Click the TOC icon to show the Table of Contents.
Select the same topic in other versions of your product
If you find the topic you want but not the version you want, you can switch versions. For example, if you search by using Google and land on the wrong version of a topic, click the drop-down in the topic title area to switch to the version you need.
See related documents from other IBM technical content sites
When you search, the Knowledge Center also returns results related to your keywords from IBM developerWorks, IBM Redbooks, and IBM Support technotes to the right of the main results.
Search scope and filter
In the new IBM Knowledge Center, you can search all of IBM Knowledge Center, a set of product versions, or just within a single version by changing the "search in" drop-down next to the search entry field. By default, IBM KC search selects the product scope you are currently browsing (auto-context search), but you need to click on the content itself for the filter to activate.
We've all heard and experienced how valuable social media can be. Most of us cannot deny that social media touches us on a daily basis both in our personal lives and professional lives.
One of the greatest values of social media is the ability to engage with a large community of people. "Engage" is the key word, for engagement is the key to social media. Without engagement, you're just speaking into an empty room. Through active engagement, you can reach people all over the world and build relationships that go beyond the normal service and support process and our usual transactions with customers.
Through Facebook (IBM Communications Server page), Twitter (@IBM_Commserver), and YouTube (zOSCommServer channel), we strive to provide content that helps customers fully utilize our products and explore new features, along with technical notes to help customers in their day-to-day management of their complicated networks.
In addition to these channels, we've established a presence on IBM developerWorks to provide additional blogs and answers to further meet our customers' support needs. Topics range from how to use a feature and best practices to simple diagnosis steps and quick questions.
Just like a tool in a toolbox, social media is another tool that we intend to use to engage our customers, provide value, and build relationships. Come engage with us on social media. Search for information on developerWorks Answers using the tags zoscs, commserver, and zos.
As Jerry Stevens wrote in his March 8 blog post, the IBM System z13™ and z13s™ introduced an exciting new technology called Internal Shared Memory (ISM) which allows one z/OS instance to directly access (share) virtual memory within another z/OS instance (e.g. LPAR or guest virtual machine) within the same physical machine via DMA. At the same time, z/OS Communications Server introduced Shared Memory Communications – Direct Memory Access (SMC-D), which uses ISM so that TCP sockets applications can directly and transparently communicate with applications executing in other z/OS instances running in other Logical Partitions on the same physical z13 System. Put simply, SMC-D provides SMC-R semantics within a CPC, essentially by substituting RDMA and RoCE with DMA using ISM. (Check out Jerry's blog post for a more complete description.)
Both of the SMC technologies can provide some serious CPU reduction and equally serious throughput boosts. But what about security? How do SMC-D and SMC-R fit in with existing security domains like VLANs? What sort of isolation is available across different RoCE and ISM interfaces? And what happens to all of those security features in Communications Server when the vast majority of the application data is being passed between the communication peers using some form of DMA? We're talking about features like:
- SAF-based access controls like PORTACCESS and NETACCESS
- IP packet filtering
- Cryptographic security protocols like TLS/SSL, SSH and IPsec
- Intrusion Detection Services
How do those security features play (or not play) together with SMC-R and SMC-D?
The short answer is generally "just fine." The main reason for this happy coexistence is that SMC technologies preserve the TCP semantics for managing the sessions -- again, SMC is completely transparent to the TCP applications programs. Since many of the Communications Server's security features operate on or within some aspect of the TCP session semantics, they can continue to operate as usual. Of course, the TCP/IP stack has a little more to keep track of, and has to ensure that when a significant change occurs in the state of the TCP session (for example, access controls change on the fly, preventing access to a port that was previously permitted), the stack needs to reflect that same change on the related SMC-R or SMC-D session. But again, all of this is transparent to the applications.
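To make that last point concrete, here is a purely illustrative Python sketch (all names are hypothetical; this is not z/OS code) of the idea that access-control state is tracked at the session level, so an on-the-fly change applies uniformly whether a session's data path is TCP packets or shared memory:

```python
# Illustrative model only; all names are hypothetical, not z/OS internals.
class Session:
    def __init__(self, port, transport):
        self.port = port
        self.transport = transport  # "TCP", "SMC-R", or "SMC-D"
        self.active = True

class Stack:
    def __init__(self):
        self.sessions = []

    def connect(self, port, transport="TCP"):
        session = Session(port, transport)
        self.sessions.append(session)
        return session

    def revoke_port_access(self, port):
        """An on-the-fly access-control change is applied to every session
        on the port, regardless of whether its data path is TCP packets or
        shared memory (SMC-R / SMC-D)."""
        for session in self.sessions:
            if session.port == port and session.active:
                session.active = False
```

Because the control hooks into session state rather than the packet path, revoking access to a port shuts down an SMC-D session just as it shuts down a classic TCP session.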
For the complete security story around Shared Memory Communications, check out this newly revised white paper. Originally published after V2R1 to explain the SMC-R security considerations, this paper has now been expanded to cover SMC-D as well.
So remember, colocation, colocation, colocation, but also make sure you lock the doors and engage that security system!
Last year I wrote the first blog in this series about our approach to getting started with adopting DevOps for z/OS. Let's take a closer look at some of the DevOps challenges for large operating system products and how we are addressing them.
How does operating system level code get to a DevOps continuous delivery paradigm? Most successful continuous delivery implementation examples are SaaS-type products. Operating system level code has significant differences from SaaS products/services:
- Not easily decomposable into smaller deliverables
- Customers value stability over having the "latest and greatest feature set on a rapid delivery cycle"
- Multiple releases in support, each with its own service stream, complicates delivery
How to discourage yourself. Comparing "canonical" DevOps principles and examples to operating system level code is discouraging. We're so different from a SaaS product. This is z/OS, not Gmail! We can't decompose large, monolithic code into small enough customer deliverables. We have to maintain service streams on three releases in the field. We don't have any resources to create new automation. Our customers are very conservative about putting changes into production, and most of them won't accept continuous deliveries. There is so much control and process over what we can disclose to which customer. And so on.
The approach we used. Stop comparing ourselves to canonical SaaS continuous delivery products (like Gmail or Facebook) and finding ourselves wanting. Instead, work on improving internal processes across the organization and moving them toward continuous delivery through automation. Even if it's just to an internal test group, this would be a prerequisite to achieving continuous delivery anyway. Internal improvements are something that the development group controls and can show positive results early, even if small, to help get the right buy-in from the team.
How we moved forward. Thoroughly document existing delivery pipelines with an emphasis on identifying hand-offs and pain points. This information is often not fully documented in one place. This is across the organization, not just development. Formally documenting this has the following benefits:
- Requires the teams to think end to end about how they are delivering to customers today
- Provides opportunity to question “why are we doing it this way”
- Identifies pain-points and bottlenecks that can be addressed
- Identifies opportunities for automation and removing the waste out of the system
- Will help the team answer the question "What problem are we trying to solve?"
Make it graphical and clear. Pipeline documentation should be graphical and clear, which will make it easier to identify bottlenecks and pain points. The next two diagrams show parts of the z/OS Communications Server pipeline as an example. This is not the only way to do it, but it shows what’s meant by graphical and clear, and shows the value gained by doing it that way. We started with a very high level overview of the release cycle shown in diagram 1.
Then we zoomed in on specific phases for the more detailed view shown in diagram 2.
Look for bottlenecks and items to shift left. Graphical pipeline documentation should show hand-offs and bottlenecks. In the example in diagram 2, we saw that the build starts at 4:00pm but the SMP/E apply was scheduled for 8:00pm. Most builds completed in less than four hours so there was significant idle time. The result was we moved the build start time back to 5:00, which gave developers more opportunity to get fixes in that had “just missed” previous builds. Lots of small improvements like this can improve your development processes and efficiency.
Look for opportunities to automate. Pipeline documentation should also make clear what’s automated and what’s not. It should be easy to identify which processes are manual and should be candidates for automation. You need to assess the value of doing that automation so you can prioritize automation activities by the return on investment (ROI). Pick out the 1-2 highest value opportunities for automation and pursue those, and don’t try to do everything at once. It’s important to show positive results early to get buy-in from your development community.
Look for pain points to remove waste and inefficiencies. Pipeline documentation is a source for identifying pain points, but not the only one. While developing the pipeline documentation, the developers should ask and document "what are the pain points?" A general question to the development, test, build, ID, and support community about their pain points can also be very beneficial. We were surprised by the amount of process-related, low-hanging fruit that was identified, yielding better efficiencies for the development team. These small changes can go a long way toward getting buy-in with your development and test community. This is especially true for the ones that are rooted in "we've always done it this way".
What about continuous delivery to customers? This goal can seem far-fetched for enterprise operating system level code, but we looked at it differently. It's true that our customers can be very conservative about what they will put into production. But what if you were able to transform your current product delivery pipeline to continuously deliver code to SVT? Then you could think about other possibilities for giving code to customers earlier for early feedback. Our thinking needs to go beyond our traditional Beta and ESP programs. This plays into the Design Thinking principles of sponsor users and early, regular feedback.
How can ENS afford to do DevOps with just our existing resources? It's important to acknowledge that DevOps is an investment and that creating pipeline workflows, automation and tooling work requires resources.
- We should collaborate with other z products where it is efficient and makes sense
- Treat DevOps as an investment that is weighted with other investments we make, e.g., new function development (including test)
- To make the necessary investments for DevOps in ENS, capacity for new function in releases under development is reduced. We go into this with our eyes open and make the right trade offs based on ROI
Conclusion. DevOps is a journey that involves the entire organization:
- People – You need the right people to drive the culture change
- Process – Stop doing things that have no value (challenge the process)
- Technology – Standardize your tools and automation on current technology and limit the number of tools the team has to learn and use
- You start by understanding what you're doing now and looking for improvements, not by comparing yourself to the ideal case and getting discouraged over how far you are from your goals
- Lots of small victories and improvements are the path to DevOps in mainframe operating systems
- Pipelines and improvements need to be developed from the bottom up (by the people doing the work) instead of the top down
- Use existing resources to make the investment for the future
If you are interested in our other blogs on our DevOps journey, we have one on changing the culture and one on our process focus. We will continue to share our strategies and experiences in this blog series and welcome your feedback! You can also reach out to the authors Frank Varone (email@example.com) and Mike Fox (firstname.lastname@example.org).
If you had a chance to check out my previous blog on changing our culture then you may recall that we identified the three pieces of DevOps as Culture, Process and Tools. This blog will discuss how we addressed the second prong of the DevOps journey for ENS.
When the ENS journey began, we had to look at the process focus areas that made sense for our organization. We decided on Continuous Improvement, Continuous Integration, and Continuous Delivery. Once we identified the target areas, then we had to define what those mean for ENS and what initiatives we could manage with our current workload. Here are some examples of the initiatives we tackled as part of the process piece of our journey.
Continuous Improvement - The initiatives we decided on were based on pain points and areas of improvement that we could only discover by going through the exercise of value stream mapping. This process is time consuming and even a little bit tedious, but it really is crucial for kicking off any DevOps journey. Most of us are too ingrained in our own daily processes to see where improvement is needed, and it often takes someone only indirectly related (or not related at all) to the process to ask the necessary questions.
- Performance issue with the publications review tool: When going through the publication reviews process from the developers' stand point, we learned there was a performance issue with the tool that developers use to review publications. This is something that was considered part of the Information Development (ID) process but the issue was on the developer/reviewer's side. Had we not looked at it from all the parties involved, we might have overlooked this issue. We engaged the team that supports this tool to upgrade their database version, significantly improving end-user performance.
- Some items were obvious, such as upgrading tools and migrating to newer versions of our internal source control manager to allow better collaboration and increased functionality.
- After reviewing the daily build process in our core workgroup, we realized we were starting our builds over an hour earlier than was necessary, so we moved the build time out and this gave the development team more time every day to check in code.
Continuous Integration - The first step was to identify what CI meant to us, the ENS organization. Where are we integrating continuously? With whom? How will we do it? We know we are agile, and we know that Systems Verification Test (SVT) doesn't need to wait until we complete every line item to begin testing. So when we looked at our product and our customers' needs, we knew that SVT was our "with whom," and a daily code delivery was how we would do it.
The biggest goal here is to prevent build failures, which slow the availability of code to test. Our organization has a build verification test (BVT), or smoke test, that every new build must pass before it's made available to test. We determined that we could reduce the number of BVT failures by making that BVT available to all developers, so they can run it on their patch before integrating it into the build. We are also working on beefing up our collection of automated test buckets and making them easily available for developers to run on their patches before integrating them into the build. This emphasizes a key DevOps principle, which is to fail early and cheaply.
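As a rough illustration of that fail-early principle (hypothetical names only; our real BVT is a full build-and-smoke-test pipeline, not a string check), the gate works conceptually like this:

```python
# Purely illustrative sketch of a pre-integration gate; run_bvt stands in
# for a real build verification (smoke) test.
def run_bvt(patch):
    """Pretend smoke test: a patch 'passes' unless it carries a breakage
    marker. A real BVT would build the code and exercise basic function."""
    return "breaks_build" not in patch

def integrate(patch, build_queue):
    """Only patches that pass the BVT reach the daily build."""
    if not run_bvt(patch):
        # The failure surfaces on the developer's patch, cheaply,
        # instead of breaking the shared daily build for everyone.
        return False
    build_queue.append(patch)
    return True
```

The payoff is the same as in the real pipeline: a bad patch is rejected at the developer's desk, and the daily delivery to SVT keeps flowing.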
Continuous Delivery - Again we had to decide what this meant to ENS. (See a theme?) We know who we are and who we are NOT. We aren't writing applications for a mobile device or updating a search engine. We are a strictly on-premises product that is built on security and stability and our customers do not want frequent new versions. But we need to be able to get feedback earlier in the development cycle and to do that we cannot wait until a beta program that starts near the end of a two-year release cycle to get customer feedback. So we took the Configuration Assistant piece of our product and made pre-Beta development code available to select users. We provide drops and ask customers to try certain items out to get their feedback that will help guide us in our work to architect a really great end-user experience. This is not a simple task and there was overhead with getting it set up and there is overhead with maintaining it, but what we get out of it is really beneficial to creating what the customer wants and needs.
As our journey continues we are always looking for new initiatives that fall into our process focus areas that will enable us to become more efficient in our day-to-day execution and eliminate wasted cycles. If you are interested in how we got started with DevOps, check out our first blog in the series, Enterprise Network Solutions approach to getting started with adopting DevOps for z/OS.
Colocation, colocation, colocation! Does colocating your application workloads on the same z Systems physical machine (CPC) really matter? In some cases colocation really can make a big difference. When your application workloads have network-intensive communication patterns - either communicating frequently (exchanging many messages to complete a single transaction, as in multi-tiered application workloads) or exchanging large amounts of data (bulk, streaming, or other big data solutions such as analytics-related workloads) - then the physical proximity of the applications can make a difference. The difference can affect both your cost and your overall results.
The IBM System z13™ and z13s™ introduced new technology that offers an opportunity for clients to take a closer look at this aspect of colocation of IBM z/OS application workloads. IBM introduced z Systems technology called Internal Shared Memory (ISM). The ISM technology allows one z/OS instance to directly access (share) virtual memory within another z/OS instance (e.g. LPAR or guest virtual machine) within the same physical machine. The ISM architecture enables direct memory access (DMA) capability for software exploitation.
With ISM, IBM also announced Shared Memory Communications – Direct Memory Access (SMC-D). SMC-D exploits ISM which enables applications to directly and transparently communicate with other applications executing in other z/OS instances running in other Logical Partitions on the same physical z13 System. The direct communications is provided transparently for applications using TCP sockets.
Some history will help give perspective. Prior to ISM, z Systems provided a very efficient technology called HiperSockets. HiperSockets provides an internal logical LAN within z Systems, allowing operating systems to communicate using numerous protocols such as TCP/IP, UDP, and SNA. Communications with HiperSockets is accomplished by creating, exchanging, and processing standard IEEE 802.3 packets (frames) in software. HiperSockets provides a very efficient memory-to-memory transfer (of standard packets) without requiring physical networking hardware.
SMC-D with ISM goes beyond HiperSockets by eliminating all packets, along with all of the TCP/IP protocol and packet-related processing. SMC-D provides a direct socket-to-socket transfer of data. This model provides significant savings in host network processing, which translates to significant reductions in CPU cost and latency and significant gains in throughput.
In addition to HiperSockets, z/OS instances on the same CPC can use other network technologies to communicate with same-CPC z/OS instances, such as Ethernet using the IBM OSA-Express family of adapters. Of these options, HiperSockets is typically the most efficient. While HiperSockets will continue to be an important technology due to its versatility, the benefits of SMC-D are compelling.
Shared Memory Communications architecture now has two variations:
- Shared Memory Communications – RDMA (SMC-R for cross platform using RoCE)
- Shared Memory Communications – DMA (SMC-D for same platform using ISM)
Both forms of SMC can be used concurrently. The protocol dynamically selects the appropriate variation based on the proximity of the peer hosts (for example, instances on the same CPC use SMC-D).
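On z/OS, both SMC variations are enabled through the GLOBALCONFIG statement in the TCP/IP profile. The sketch below is illustrative only: the PFID, port number, and memory values are placeholders, and the exact parameters for your system should be taken from the z/OS Communications Server IP Configuration Reference for your release.

```
; Illustrative TCP/IP profile fragment enabling both SMC variations.
; PFID 0018 and the memory limit are placeholder values, not recommendations.
GLOBALCONFIG
   SMCR PFID 0018 PORTNUM 1    ; SMC-R over a RoCE Express feature
   SMCD FIXEDMEMORY 256        ; SMC-D over ISM, limiting fixed memory use
```

Once the stack is active, a Netstat display (for example, a DEvlinks report) can be used to verify which SMC links are in use; consult the IP System Administrator's Commands reference for the display options available on your release.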
So what are the benefits of SMC-D? Benchmark results comparing the technologies show that SMC-D using ISM provides significant reductions in CPU cost and latency, along with significant gains in throughput. Here is a quick performance summary for request/response (transactional) and streaming (bulk) workloads, highlighting the differences when comparing SMC-D to HiperSockets:
- Request/Response workloads with 1k/1k - 4k/4k payloads:
- Latency: up to 48% reduction
- Throughput: up to 91% increase
- CPU cost: up to 47% reduction in network-related CPU cost
- Request/Response workloads with 8k/8k - 32k/32k payloads:
- Latency: up to 82% reduction
- Throughput: up to 475% (~6x) increase
- CPU cost: up to 82% reduction in network-related CPU cost
- Streaming workloads:
- Latency: up to 89% reduction
- Throughput: up to 800% (~9x) increase
- CPU cost: up to 89% reduction in network-related CPU cost
As you can see, the benefits of SMC-D with ISM are compelling. If you currently exploit HiperSockets, then the applicability of SMC-D is easy to evaluate. If you are not sure whether your environment has (or could have) these z/OS network traffic patterns, you can evaluate your workload's network patterns using the SMC Applicability Tool (SMC-AT).
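As a minimal sketch of how SMC-AT is typically started, the tool is driven from the operator console against a configuration data set that describes the monitoring interval and the candidate peer addresses. The procedure name and data set name below are placeholders, and the statements inside the data set vary by release, so see the IP System Administrator's Commands reference for the exact syntax:

```
V TCPIP,TCPIPPROC,SMCAT,USER1.SMCAT.CONFIG   start monitoring with the
                                             configured peer addresses
```

At the end of the monitoring interval, SMC-AT produces a report estimating how much of your TCP traffic to the configured peers would be eligible for SMC, which is the input you need to decide whether SMC-D (or SMC-R) is worth enabling.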
With the potential for this type of savings it is easy to see how colocation of network intensive workloads on the IBM z13 or IBM z13s using SMC-D with ISM can make a difference.
Benchmark results shown here are from a controlled IBM internal lab using standard tools. Your actual results may vary. Performance information is provided "AS IS" and no warranties or guarantees are expressed or implied by IBM.
The Winter 2016 SHARE Conference was a great educational event, with a wealth of excellent customer interaction! Six speakers from the Enterprise Networking Solutions organization presented 13 sessions on z/OS Communications Server, three on ISPF, and one on IBM Multi-Site Workload Lifeline, including two-part hands-on labs for Configuration Assistant for z/OSMF and for the ISPF editor. We also participated in a panel session where our attendees brought in plenty of great questions for discussion.
Given its recent availability, there was a focus on z/OS V2R2, with special attention given to the new Shared Memory Communications - Direct Memory Access (SMC-D) protocol. Other topics discussed included Enterprise Extender, sysplex technologies, network security, z/OS CS performance, FTP security, and z/OS CS storage usage. Attendance at our sessions (and across the board at SHARE in San Antonio) was very good, and we would like to thank all of you who attended our sessions for the great feedback and dialogue.
For those who couldn't be at the conference last week, a reminder that you can download most of the charts for the topics we presented at the following link:
Please plan to join us for the Summer 2016 SHARE Conference in Atlanta, Georgia, July 31 - August 5, 2016.
The Winter 2016 SHARE Conference is in San Antonio, Texas next week (February 29th - March 4th). As always, there will be a good selection of content focused on z/OS Communications Server, including the following sessions from six of our team here in Research Triangle Park, NC:
- z/OS V2R2 Communications Server Technical Update, Part 1 of 2 (Gus Kassimis and Sam Reynolds)
- z/OS V2R2 Communications Server Technical Update, Part 2 of 2 (Gus Kassimis and Sam Reynolds)
- Shared Memory Communications over RDMA (SMC-R) - Optimized TCP communications over Ethernet (Gus Kassimis)
- New Shared Memory Communications protocol - Direct Memory Access (SMC-D) - Going beyond HiperSockets (Gus Kassimis)
- z/OS Communications Server Performance: Updates and Recommendations (Dave Herr)
- Sysplex and Network Technologies and Considerations (Gus Kassimis)
- Understanding z/OS Communication Server Storage Usage (Mike Fitzpatrick)
- z/OS Communications Server Network Security Overview (Lin Overby)
- z/OS Communications Server Intrusion Detection Services (Lin Overby)
- Safe and Secure Transfers with z/OS FTP (Lin Overby and Sam Reynolds)
- Enterprise Extender on z/OS Communications Server: SNA Hints and Tips (Sam Reynolds)
- TCP/IP Stack Configuration with Configuration Assistant for z/OS V2R2 CS: Hands-on Lab Part 1 of 2 (Mike Fox)
- TCP/IP Stack Configuration with Configuration Assistant for z/OS V2R2 CS: Hands-on Lab Part 2 of 2 (Mike Fox)
- Enabling Continuous Availability and Reducing Downtime with IBM Multi-Site Workload Lifeline (Mike Fitzpatrick)
Also, there will be a panel session for open discussion of mainframe networking topics:
- z/OS Communications Server Free-for-All (Matthias Burkhard, Mike Fitzpatrick, Dave Herr, Gus Kassimis, Lin Overby, and Sam Reynolds)
Lastly, I will be presenting the following ISPF topics:
- ISPF Hidden Treasures and New z/OS 2.2 Features
- ISPF Editor - Beyond the Basics Hands-on Lab, Part 1 of 2 (with Tom Conley and Liam Doherty)
- ISPF Editor - Beyond the Basics Hands-on Lab, Part 2 of 2 (with Tom Conley and Liam Doherty)
We hope to see you there! For those who can't join us, I'll be tweeting (IBM_Commserver on Twitter) and posting updates to Facebook (Facebook.com/IBMCommserver) throughout the week.
Do you want to bookmark topics in IBM Knowledge Center (KC) that are useful to you and be able to read them without Internet access? Did you know that you can create a PDF file with your selected KC topics for offline use? With the new "My Collections" function in KC, you can add topics to a collection, organize their order, and create a PDF file that includes those topics so that you can read them offline and on mobile devices.
In less than three minutes you can learn how to use this function. For more details, watch this how-to demo: Creating a PDF on IBM Knowledge Center. It takes only a few clicks to add a topic to a collection, and a few more clicks to set the topics' order and create the PDF file. We are pretty sure you will find it useful.
If you want to learn more about how to use IBM Knowledge Center, check out the IBM Knowledge Center demo video that introduces basic and advanced search, creating collections, sharing, and language choice of KC.
We also recommend reading this recent post Navigating and searching for the z/OS V2R2 Communications Server documentation in IBM Knowledge Center for more information on navigating and searching for documentation in IBM Knowledge Center.
The z/OS V2R2 Communications Server New Function Documentation provides a single point of entry for you to easily retrieve documentation on V2R2 new functions. If you want to know all about what's new in z/OS V2R2 Communications Server, the lightweight New Function Documentation is your best quick-start guide.
In this document, you will find the following information for each V2R2 new function:
- A brief introduction to the new function
- A table of topics in IBM Knowledge Center with hyperlinks
Speaking of topics in IBM Knowledge Center, did you know that you can create a PDF file from topics you select? This function is called "My Collections". You can follow the How to use z/OS V2R2 Communications Server New Function Documentation demo video at https://youtu.be/Ral2pixkEOk to bookmark a collection of IBM Knowledge Center topics and create a PDF file for each new function.