The IBM z/OS Communications Server TCP/IP Implementation Redbook series provides understandable, step-by-step guidance about how to enable the most commonly used and important functions of z/OS Communications Server TCP/IP. Final versions of the four V1R13 volumes are now available for you to download and enjoy:
IBM z/OS V1R13 Communications Server TCP/IP Implementation: Volume 1 Base Functions, Connectivity, and Routing (SG24-7996-00)
IBM z/OS V1R13 Communications Server TCP/IP Implementation: Volume 2 Standard Applications (SG24-7997-00)
IBM z/OS V1R13 Communications Server TCP/IP Implementation: Volume 3 High Availability, Scalability, and Performance (SG24-7998-00)
IBM z/OS V1R13 Communications Server TCP/IP Implementation: Volume 4 Security and Policy-Based Networking (SG24-7999-00)
In 2013 the IBM zEC12 and zBC12 introduced the 10GbE RoCE Express feature, an RDMA-capable network adapter that provides access to RDMA over Converged Ethernet (RoCE). RoCE provides an optimized network interconnect for System z communications. Along with RoCE Express, z/OS provided a new RDMA-based solution called Shared Memory Communications over RDMA (SMC-R). SMC-R is a sockets-based solution that gives TCP sockets applications transparent access to RoCE over standard Ethernet.
The IBM System z13 introduces the capability to share (virtualize) the 10GbE RoCE Express feature among multiple (up to 31) LPARs (or z/VM guest virtual machines) using standardized PCIe virtualization (SR-IOV) technology.
When RoCE Express is exploited by z/OS with SMC-R, the combined solutions provide two key value points:
Reduced latency, which can improve transaction rates for latency-sensitive transactional workloads, and
Lower CPU cost for workloads that transfer larger payloads (for example, analytics, streaming, FTP, big data, data replication, and web services)
SMC-R does this while preserving the critical qualities of service (load balancing, security, isolation, reuse of IP topology, and so on) required by System z clusters in enterprise data center networks, without requiring any application or middleware changes and with zero or minimal operational changes.
When customers enable SMC-R they should experience the benefits immediately, and over the longer term the benefits can be extended as they expand their exploitation of RDMA technology on System z.
With the IBM System z13 RoCE virtualization capability, users can now share RoCE Express features. Sharing a feature:
Extends access to RoCE to additional workloads across multiple z/OS instances (LPARs), reducing the number of required physical RoCE features
Expands (effectively doubles) the bandwidth of your RoCE Express features by enabling concurrent use of both 10GbE RoCE Express physical ports
Customers who have multiple CPCs in a single site (or an extended LAN among multiple sites) with z/OS-centric workloads (for example, sysplex, DB2, WAS, CICS, MQ, and IMS) are natural candidates for benefiting from RoCE Express and SMC-R.
So, if you understand the technology but you're not sure you have an environment that might benefit from it, we can offer some help. A new tool, the Shared Memory Communications Applicability Tool (SMCAT), has been created to help you assess how your specific application workloads apply to SMC-R and the level of benefit you might anticipate for your environment. SMCAT is now available via PTF for z/OS V1R13 (UI24872) and z/OS V2R1 (UI24762) customers. SMCAT does not require SMC-R or any special hardware. Instead, SMCAT monitors your existing TCP/IP workloads and produces a summary report to help you understand how your workloads might be eligible for and benefit from SMC-R and RoCE Express.
If you have additional questions about SMC-R, RoCE Express, RoCE virtualization, or using SMCAT, then your next step is exploring the reference materials provided at:
If you are just getting started, the FAQ document might be a good first step. You can also reach out to the author (Jerry Stevens) of this blog at this email: firstname.lastname@example.org
ENS (Enterprise Networking Solutions) is a very mature enterprise z/OS product team that is responsible for z/OS Communications Server, ISPF, and IBM Multisite Workload Lifeline. We have been around for 40+ years and already have good processes in place for design, development, build, test, information development, and service. We made the transformation to agile several years ago and feel that our current processes allow us to deliver on four-week sprints, innovate on items that don't work well, and deliver with high quality.
When the team started our DevOps journey back in August of 2014, there was pushback in the form of two questions:
1) Why DevOps? We already use automation.
2) What problem are we trying to solve?
To answer these questions for a group of well-seasoned software engineers, we had to demonstrate WHY DevOps and show small wins that made their daily tasks easier to do.
How did we do this? We started by doing two things in parallel:
First, we created a core DevOps work group which consisted of engineers from each of the disciplines of our product development life cycle (design, development, build, FVT, SVT, performance, IDD and service). Technical leaders who were open to change were selected for the core work group.
Second, we asked the entire organization two questions:
1) What are your top two pain points?
2) If you could change one thing in the organization what would it be and why?
While we were waiting on the answers to our two survey questions we did the following with the core work group:
1) We discussed terms that were overloaded and created definitions for those terms that were relevant to the team, specific to our organization and product. For example:
DevOps = maximize the predictability, efficiency, security and maintainability of operational processes - this objective is supported by automation
Continuous Integration = merging developer code into the product stream in regular, repeatable, short intervals and rapidly propagating that code to all test systems automatically and quickly
Pre-integration = work done before developer code is merged into the product stream, for example, unit and regression test and peer review
Continuous Deployment = after passing all the automated delivery tests, each code commit is deployed to end users as soon as it is available. Because changes are delivered quickly and without human intervention, continuous deployment can be seen as risky. It requires a high degree of confidence both in the existing application infrastructure and in the development team.
Continuous Test = continuous testing adds manual testing to the continuous delivery model. With continuous testing, the test group will constantly test the most up-to-date version of available code. Continuous testing generally adds manual exploratory tests and user acceptance testing. This approach to testing is different from traditional testing because the software under test is expected to change over time, independent of a defined test-release schedule.
Continuous Monitoring = monitoring the continuous testing and getting defects reported in real time
Continuous Delivery = a software development discipline where you build software in such a way that the software can be released to end users at any time
Production = today it means deploying to our SVT enterprise customer environment daily and in the future will mean deploying to an environment that can be accessed by our external customers to provide early feedback on pre-GA product code features
2) We defined the purpose and objective of the core work group: Our high level focus areas would be Culture, Process and Tools.
3) We appointed a manager as the owner, project manager, and technical lead for the work group.
4) We agreed to meet bi-weekly for one hour.
5) We created an online community to track and store our meeting agendas, actions and collateral.
6) We agreed to use value stream mapping to document our end to end pipeline and processes.
By our second core work group meeting we had the results of our survey questions from the organization. We were surprised by the feedback and by how easy it was to identify one or two pain points that were pervasive across the organization. The key is to act fast to solve these first few pain points to demonstrate to the organization "Why DevOps" and get buy-in to the DevOps journey.
Answering the question "What problem are we trying to solve?" will become clearer as your team defines the overloaded terms, states the purpose and objective of the work group, documents the first pipeline of your product development life cycle, and implements your first couple of small changes that prove the value of DevOps.
We will continue to share our strategies and experiences with this blog series and welcome your feedback!
You can also reach out to the author (Frank Varone) of this blog at email@example.com
PTF test for PI38376: A TCP connection can use the wrong maximum segment size (MSS) on V2R1
A test role here at Communications Server for z/OS involves more than just testing the latest and greatest code. Customers run into issues, and fixes need to be made available for them, but not before they are internally tested. Today I describe my experience testing this PTF as a still relatively new member of the z/OS System Test team. A lot of frustration was had, but fortunately I still have all of my hair and had the chance to learn some new things.
Gathering Information - I know some stuff, maybe?
To start, I look at the PTF record and corresponding web pages in our internal source control tool to gather some initial information. I end up with a Notepad++ document full of haphazardly pasted notes from various resources to sift through and make some sense of. Fortunately there are a ton of details, which makes for a happy tester. I'll leave the majority of the nitty-gritty out and summarize the situation:
- A distributed DVIPA (DRVIPA) is defined for at least two systems in a sysplex: one being the primary distributor and the other a backup.
- If the backup stack is started before the primary distributor and takes over the DVIPA, an implicit (host) route from the backup to the primary distributor is created for that DRVIPA with an MTU size of 576 during the SYN stage of a TCP handshake. In the failing case the multipath routing algorithm is used, which chooses the smallest MTU value among all possible routes to the DRVIPA but ignores the default host route. Although OMPROUTE uses OSPF to advertise host routes with larger MTU sizes, the MTU for this particular route remains "stuck", resulting in an MSS (maximum segment size) of 536 for outbound TCP connection setup requests. No bueno.
"The problem occurs when an implicit host route for the DRVIPA is generated with the default MTU 576 instead of 65535 on a backup system. This is accomplished by starting the backup system first before the distributor."
To make matters a bit more convoluted, the conglomerate of notes informs me that I will not be able to view the incorrect MTU with a simple netstat route display. Instead, I'll have to dump the TCP/IP stack after recreating the scenario and scour through the raw memory. Staring at hexadecimal. Looking for something called an "RTE" in something else called an "RTOP."
Additionally, the customer's error description included steps to recreate the error:
1. Define DRVIPA to be used
-Backup definition must be defined without the MOVEABLE IMMED
2. Start TCPIP on a backup system without OMPROUTE
3. Start and stop the primary distributor without OMPROUTE to force the DRVIPA takeover on the backup stack
-At this point the host route for the DRVIPA with the MTU of 576 is created on the backup stack
4. Restart the primary distributor with OMPROUTE to take back the DRVIPA
5. Start OMPROUTE on the backup so OSPF host routes will be learned from the distributor
-At this point the MTU value set at 576 will get "stuck"
6. If a connection is established from the backup to the DRVIPA on the distributor, a netstat display on this connection will show the MSS set to 536
7. Dump the TCP/IP address space
-Examine the dump to find the MTU value of 576 from RTOP in RTE
So far, I know our environment has, at least, bits and pieces of this customer configuration. The SVT environment has DVIPAs defined with their corresponding VIPADISTRIBUTE and VIPABACKUP definitions. Since these DVIPA definitions were built to be highly customizable to suit a customer's needs, the number of possible options and parameters, combined with ensuring correct syntax, can be overwhelming at times. For this reason my preferred method is to work from an example - there are already so many different kinds of configuration files saved over the years in our test environment that there is likely one I can use as a template for this test.
As a tester, however, I could have saved a decent amount of time if all of the test information I needed were in a single location, instead of a number of separate records/web pages. I had to dig around to find useful pieces of information, and the first place that I looked (the PTF test record directly assigned to me) did not contain detailed error recreation instructions.
Research - I figure out some stuff
Terms and concepts
From the information I've gathered so far, I need to define some acronyms and understand some concepts not previously encountered.
- RTOP: A Google search didn't come up with anything, and neither did the two 'Terminology' bots on Sametime, so I went to the V2R1 Knowledge Center. A search gave me RTOPTS, the run-time options for Language Environment, which doesn't seem related, so I asked a more experienced tester who wasn't familiar with the acronym either. Luckily I was eventually able to find someone who was familiar with it, and it turns out that RTOP is an identifier (an "eye catcher" as we call it) for a control block representing a group of routes to a given IP address destination in a dump of the TCP/IP address space. Internal stuff, so that's why it wasn't publicly searchable.
- RTE: Another hopeless Google search, but 'RTE', it turns out, is related to a TCPIPCS ROUTE report in IPCS. All of that happens to be in the Knowledge Center. Looking at a sample TCPIPCS ROUTE report gives me more clues - it looks like RTE is just a shortened name for the "Route" field in the report. RTOP has something to do with this report, but I don't quite see where it fits in just yet. Eventually, the same person who explained what RTOP was explained that RTE is an eye catcher for the route control block in a memory dump; there can be multiple RTE entries for a single RTOP.
- MOVEABLE IMMED parameter for VIPABACKUP definitions: sticking to the Knowledge Center for this one, the MOVEABLE IMMEDIATE parameter refers to the behavior of a DVIPA during stack takebacks. So, if the stack that owned the DVIPA goes down, transfers ownership of that DVIPA to the defined backup stack, and then comes back up, it regains ownership of the DVIPA and all new connections for it. In this test scenario, the customer does not have MOVEABLE IMMEDIATE defined, so I will need to ensure that it is also not defined on my test systems.
- Implicit host route/routing, and how to spot it: I thought way too hard about this one and should have just asked someone right away because, as it turns out, it is simply a host route or a route to an IP address on the HOME list (signified by the H flag in a netstat display). It appears that the term "implicit" is older language exclusive to the mainframe crowd. That happens quite a bit around here.
- Multipath, including where it's defined and how to tell that it is enabled: What I want to know is where and how multipath is defined on a stack or router, or at least how to tell that it is being used. Back at the Knowledge Center I came across the Routing section with an OSPF overview. Just as the name suggests, multipath allows a routing table to contain multiple routes ("paths") to a destination. There is also an IPCONFIG MULTIPATH or NOMULTIPATH statement for the TCP/IP profile, which enables or disables multipath routing for outbound traffic, respectively. When testing I will need to verify that the MULTIPATH statement is configured for the TCP/IP stack I will use; I can also confirm it is being used by looking at a routing table netstat display. If multiple routes exist for a single destination, multipath is in use, at least for inbound traffic.
- How to examine a dump of the TCP/IP address space to find the IP routing table that should show the DRVIPA implicit host route with MTU 576: I know this part is specific to our environment and what we already have set up, but to find the MTU value buried in raw hex could be a formidable undertaking.
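Since I will be staring at that dump no matter what, here is a minimal sketch of how I expect to get started in IPCS once I have it. The dump data set name and the TCPCS stack name are placeholders for whatever my test systems end up using, so treat this as an outline rather than a recipe:
- SETDEF DSNAME('SVT.TCPIP.DUMP01') - point IPCS at the dump (hypothetical data set name)
- TCPIPCS ROUTE - format the stack routing information from the dump; per the notes above, this report is where the RTE entries should show up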
The combined PTF records already provide a lot of information that will save me a significant amount of time by outlining the recreation steps in good detail - high-quality testing relies on having as much relevant information as possible. The challenge at this point, however, is determining how I will mimic the customer's configuration and recreation steps in our own shared test environment.
For this test I will need at least two systems in the same sysplex: one as the primary distributor of the DRVIPA and one as a backup. Likewise, I'll need a method to "stop the distributor," which can be done in a few different ways, such as forcing the stack to leave the sysplex, using a VARY DEACTIVATE command on the DRVIPA itself, or stopping the stack entirely. I'll need to define this DRVIPA similar to the customer's configuration and be able to access and modify TCP/IP profiles. Finally, I'll need to be able to stop and start OMPROUTE on either stack.
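Before I start editing real profiles, here is a rough sketch of the TCP/IP profile statements I expect to end up with, modeled on the customer's description above. The addresses, mask, rank, and port are made up, and the backup stack's VIPABACKUP statement deliberately omits MOVEABLE IMMEDIATE, just like the customer's configuration:
; Primary distributor stack (hypothetical addresses and port)
VIPADYNAMIC
  VIPADEFINE MOVEABLE IMMEDIATE 255.255.255.0 10.1.1.1
  VIPADISTRIBUTE DEFINE 10.1.1.1 PORT 8000 DESTIP ALL
ENDVIPADYNAMIC
; Backup stack - note that MOVEABLE IMMEDIATE is intentionally not coded here
VIPADYNAMIC
  VIPABACKUP 100 10.1.1.1
ENDVIPADYNAMIC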
At this point I think it's time to bring up some systems and see all of this for myself.
Stay tuned for Part Two where I dive in and actually do some testing!
Ever wonder why z/OS Communications Server support asks for multiple traces for network issues? If so, here is the reason why.
The z/OS packet trace is collected from the perspective of the z/OS host that is sending or receiving data. The trace is collected before the data reaches the physical network, a.k.a. the OSA NIC (network interface card). So in the case of outbound (sent) packets, trace records are collected before the data is processed by the VTAM DLC layer and sent to the OSA NIC. Conversely, inbound (received) packets are traced after they arrive over the OSA NIC and are processed by the DLC layer in VTAM, just before TCP/IP processes them.
Note that for the majority of network throughput issues the full packet application data is not needed in the packet trace, so feel free to use the ABBREV=100 option when you collect it!
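If you have never started a z/OS packet trace, here is a minimal sketch of the usual sequence. TCPCS and PKTWRT are placeholder names for your own stack and external writer procedure:
- TRACE CT,WTRSTART=PKTWRT - start the external writer
- TRACE CT,ON,COMP=SYSTCPDA,SUB=(TCPCS) - activate the packet trace (SYSTCPDA) component trace for the stack, then reply R nn,WTR=PKTWRT,END to the prompt to connect the writer
- V TCPIP,TCPCS,PKTTRACE,ON,ABBREV=100 - turn on the packet trace, keeping only the first 100 bytes of each packet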
The next possible choice for a z/OS "network" type trace is the OSAENTA trace. This trace captures packets from the OSA NIC perspective. This means that packets sent from TCP/IP are captured in the OSAENTA trace once they are sent over the OSA NIC. Conversely, packets arriving over the OSA NIC are collected in the OSAENTA trace before they reach VTAM and TCP/IP. Hopefully, the picture is now clearer!
Note that the OSAENTA trace does not collect the entire packet application data contents. It is truncated to 200 bytes, so keep that in mind!
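Starting an OSAENTA trace looks similar; the port name below is a placeholder for your own OSA-Express interface, and the component trace uses SYSTCPOT instead of SYSTCPDA:
- TRACE CT,WTRSTART=OSAWRT - start an external writer for the OSAENTA trace (placeholder procedure name)
- TRACE CT,ON,COMP=SYSTCPOT,SUB=(TCPCS) - activate the SYSTCPOT component trace, then reply R nn,WTR=OSAWRT,END to the prompt
- V TCPIP,TCPCS,OSAENTA,PORTNAME=OSAQDIO4,ON - enable the OSA-Express network traffic analyzer for that OSA port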
When diagnosing a network performance issue it is imperative to have the full network picture, or as close to one as possible. Peer hosts are often multiple router hops apart, which adds multiple points to look at. Any delays captured with just a z/OS packet trace don't always tell the full story, so corresponding traces outside of z/OS are often requested. You may want to go ahead and collect an "external network" trace somewhere between the z/OS and remote host endpoints. This additional trace, when collected simultaneously with the z/OS packet trace, provides greater insight into where delays or packet loss may be occurring.
There are many workstation-based tools available for viewing network traces. The z/OS packet trace and OSAENTA traces are designed to be viewed with IPCS. I know not everyone is comfortable or familiar with IPCS, so consider the SNIFFER option, which can be used to format both of these traces into binary files that can be loaded into one of those other trace-viewing products - making your life simpler by avoiding IPCS altogether.
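As a hedged sketch of the IPCS side, the commands below format a packet trace from a stack named TCPCS (a placeholder). The SNIFFER keyword is the option mentioned above; check the IP Diagnosis Guide for the exact sub-parameters your trace viewer expects:
- CTRACE COMP(SYSTCPDA) SUB((TCPCS)) FULL - format the packet trace records directly in IPCS
- CTRACE COMP(SYSTCPDA) SUB((TCPCS)) OPTIONS((SNIFFER)) - request output in a format that workstation trace viewers can load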
It's looking like I'm seeing some sort of storage creep, and it seems to be related to z/OS CommServer TCP/IP. Sometimes I'm not quite sure how to approach identifying if the storage increase is related to TCP/IP, or even how to get to the root cause. Over time, I've learned it's helpful to use some simple commands to gain a general knowledge of my system's TCP/IP storage use. This has allowed me to not only protect my system, but also to quickly get to the bottom of any issues.
There are several commands I use to periodically monitor and collect information regarding TCP/IP's storage usage for current use, high-water mark, and limit (if I have it configured). Analyzing the data, I have learned over time what's typical for TCP/IP storage use during normal and acceptable peak workloads. My automation issues these commands every 15 minutes so they are recorded in the system log. This way there's historical information to pinpoint a problem area and time frame should TCP/IP's storage usage appear to be abnormal.
- D TCPIP,,STOR - issue this command for each of your TCP/IP stacks
- D NET,CSM - this command can be used to determine overall CSM ECSA and CSM Fixed storage utilization
- D NET,CSM,OWNERID=ALL - use this command to identify what application is using CSM storage and how much it is using
TCP/IP Storage -
TCPCS STORAGE CURRENT MAXIMUM LIMIT
ECSA 2858K 3313K NOLIMIT
PRIVATE 8631K 8634K NOLIMIT
ECSA MODULES 9671K 9671K NOLIMIT
HVCOMMON 1M 1M NOLIMIT
HVPRIVATE 1M 1M NOLIMIT
TRACE HVCOMMON 2579M 2579M 2579M
- Limits for ECSA and Private storage can optionally be configured in the TCP/IP Profile (GLOBALCONFIG statement, parameters ECSALIMIT and POOLLIMIT).
- There are no recommended limits that will work for every system because every system is different.
- This output does not include CSM storage.
ECSA storage - The use of common storage can be controlled with the ECSALIMIT parameter of the TCP/IP GLOBALCONFIG profile statement. ECSALIMIT can be set to keep TCP/IP from monopolizing all common storage on a system. This protects other subsystems' ability to access common storage in the event TCP/IP hits a situation where it consumes too much ECSA. This parameter is intended to improve system reliability by limiting TCP/IP's common storage use.
Private storage - The amount of storage TCP/IP uses in its user region. There are several ways in which you can limit TCP/IP's use of private storage:
- with the TCP/IP GLOBALCONFIG profile statement - POOLLIMIT
- via the REGION keyword in TCP/IP's start up JCL. This can also be overridden by installation exits such as IEFUSI. If you choose to limit the region size you should also set a POOLLIMIT in the profile.
ECSA modules - This is common storage used by TCP/IP load modules.
HVCommon - 64-Bit common area size
HVPrivate - 64-Bit private area size
Trace HVCommon - 64-Bit common storage used for tracing
Here are some considerations you should make when choosing to define ECSA or private (pool) storage limits.
- Accommodate for temporary application "hang" conditions, where TCP/IP must buffer large amounts of inbound or outbound data. Add a reasonable fudge factor to the observed maximum usage values. It is not uncommon to set limits that are 50% over the peak usage.
- Care should be taken when coding the ECSALIMIT parameter. Setting it too low can cause TCP/IP to terminate prematurely.
- The benefit of specifying limits is that you will receive warning messages before storage obtain calls start failing when there is not enough storage available to satisfy the requests.
- ECSALIMIT does not include any of your CSM storage used by TCP/IP.
- When choosing to limit Private storage, make sure you don't use a value that is lower than or equal to what your installation exit (IEFUSI) enforces.
- Remember that the values set for ECSALIMIT and POOLLIMIT can be changed via OBEYFILE command processing (VARY TCPIP,,OBEYFILE).
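Putting those considerations together, here is a hedged example of what the limits might look like in the profile, plus the command to activate an updated member without recycling the stack. The values, stack name, and data set name are illustrative only - size the limits from your own observed peaks:
GLOBALCONFIG ECSALIMIT 100M POOLLIMIT 250M
V TCPIP,TCPCS,OBEYFILE,USER.TCPIP.PARMS(GLOBLIM) - activate the updated GLOBALCONFIG statement dynamically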
CSM storage - The Communications Storage Manager (CSM) is a VTAM component that allows authorized host applications to share data with VTAM, TCP/IP, and other CSM users without the need to physically copy the data.
- Your CSM storage will be located in either ECSA or data space storage, and can be fixed or pageable.
- CSM storage definitions are controlled by SYS1.PARMLIB member IVTPRM00 which is read by VTAM during initialization.
- The limits you set can be dynamically changed with a MODIFY CSM command. This allows you to control the amount of CSM storage that can be used in ECSA or can be FIXED at any point in time.
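As a hedged illustration, an IVTPRM00 member and a later dynamic change might look something like this (the 120M values simply match the maximums shown in the display output below; use numbers appropriate for your own system):
- FIXED MAX(120M) - IVTPRM00 statement setting the maximum CSM storage that can be page fixed
- ECSA MAX(120M) - IVTPRM00 statement setting the maximum CSM storage allowed in ECSA
- F NET,CSM,ECSA=150M,FIXED=150M - dynamically raise those limits later without an IPL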
D NET,CSM - This command provides a quick overview of how much storage has been allocated by CSM, and how much of it is in-use or free for use by a CSM user. You'll find that CSM can be in either ECSA or data space storage. The command output also lets you know what you have defined as the maximum in your IVTPRM00 parmlib member.
SIZE SOURCE INUSE FREE TOTAL
4K ECSA 144K 112K 256K
16K ECSA 16K 240K 256K
32K ECSA 0M 512K 512K
60K ECSA 0M 0M 0M
180K ECSA 0M 360K 360K
TOTAL ECSA 160K 1224K 1384K
4K DATA SPACE 31 0M 256K 256K
16K DATA SPACE 31 0M 0M 0M
32K DATA SPACE 31 0M 0M 0M
60K DATA SPACE 31 0M 0M 0M
180K DATA SPACE 31 0M 0M 0M
TOTAL DATA SPACE 31 0M 256K 256K
4K DATA SPACE 64 4352K 128K 4480K
16K DATA SPACE 64 0M 256K 256K
32K DATA SPACE 64 96K 416K 512K
60K DATA SPACE 64 0M 0M 0M
180K DATA SPACE 64 0M 360K 360K
TOTAL DATA SPACE 64 4448K 1160K 5608K
TOTAL DATA SPACE 4448K 1416K 5864K
TOTAL ALL SOURCES 4608K 2640K 7248K
FIXED MAXIMUM = 120M FIXED CURRENT = 6877K
FIXED MAXIMUM USED = 6877K SINCE LAST DISPLAY CSM
FIXED MAXIMUM USED = 6877K SINCE IPL
ECSA MAXIMUM = 120M ECSA CURRENT = 1633K
ECSA MAXIMUM USED = 1633K SINCE LAST DISPLAY CSM
ECSA MAXIMUM USED = 1633K SINCE IPL
CSM DATA SPACE 1 NAME: CSM64001
CSM DATA SPACE 2 NAME: CSM31002
D NET,CSM,OWNERID=ALL - Use this command to see how much CSM storage each of the CSM 'users' are currently using. If you want to see only the CSM usage of TCP/IP, you can also specify the user by "OWNERID=TCP ASID". (An example of OWNERID command output is not shown.)
Considerations to make when choosing to set your CSM limits:
- ECSA CSM can't be larger than your system ECSA limit which you defined in your system parameters in parmlib member IEASYSnn (CSA).
- When setting FIXED CSM, ensure that you have enough real frames to back the FIXED allocation.
So, now that you've learned to monitor TCP/IP's ECSA, private, and CSM storage usage, you may be thinking "what's next" if you think you're seeing a TCP/IP storage-related problem. If no SVC dumps are generated for the issue, when you see that storage use is on the rise, take a console dump of the TCP/IP address space. If you think the problem is related to your CSM storage usage by TCP/IP, include the CSM dataspace in the dump. Here's a sample console dump command you can use:
DUMP COMM=(tcpip storage growth)
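The DUMP command prompts you with a WTOR, and a typical reply might look like the following. TCPCS is a placeholder for your TCP/IP job name; if CSM is suspect, also add the CSM data space names shown by D NET,CSM (with their owning job) to the DSPNAME list:
R nn,JOBNAME=(TCPCS),DSPNAME=('TCPCS'.*),SDATA=(RGN,CSA,SQA,TRT),END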
If needed, the IBM Support Center can assist you with identifying the cause of your TCP/IP storage growth.
The QDIO Accelerator function can boost performance of IPv4 traffic forwarded over OSA-Express QDIO and HiperSockets interfaces, including sysplex distributor traffic that is routed to a target stack. The optimized packet forwarding provided by QDIO Accelerator improves latency and reduces CPU consumption.
The function applies to traffic which arrives inbound over an OSA-Express QDIO or HiperSockets interface and is forwarded outbound over OSA-Express QDIO or HiperSockets. With QDIO Accelerator, the first time such a packet is forwarded using a given route in the stack routing table, the z/OS stack creates a QDIO Accelerator route. Subsequent eligible packets which would normally be forwarded by the stack on this route instead get processed at the DLC layer without having to traverse the forwarding stack. This provides a much more efficient path for that traffic.
QDIO Accelerator can be especially valuable for sysplex distributor traffic being forwarded to a target stack when you do at least one of the following:
- use HiperSockets to provide dynamic XCF connectivity between stacks on the same CPC
- use VIPAROUTE to route packets to a target stack over an OSA-Express QDIO interface
To enable the QDIO Accelerator function, specify QDIOACCELERATOR on the IPCONFIG statement in the TCP/IP profile of any stack which will perform IP forwarding or which is serving as a sysplex distributor stack. You can enable VIPAROUTE to a specific target stack by using the VIPAROUTE statement in the VIPADYNAMIC block. With VIPAROUTE, the stack forwards packets from the distributor stack to a target stack using a route from the stack routing table rather than using the dynamic XCF connectivity.
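As an illustrative sketch, the relevant profile statements on a sysplex distributor stack might look like the following. The addresses are placeholders: the first VIPAROUTE operand is the target's dynamic XCF address and the second is an IP address on the target that is reachable over an OSA-Express QDIO interface:
IPCONFIG QDIOACCELERATOR
VIPADYNAMIC
  VIPAROUTE DEFINE 10.1.7.21 172.16.1.21
ENDVIPADYNAMIC
D TCPIP,TCPCS,NETSTAT,ROUTE,QDIOACCEL - display the accelerated routes once traffic is flowing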
To display the QDIO Accelerator routes, use the Netstat ROUTE/-r report option with the QDIOACCEL modifier. You can use the Netstat VCRT/-V report with the DETAIL modifier to see which sysplex distributor connections are eligible for acceleration.
With VTAM tuning statistics, you can display information such as the number of packets and bytes accelerated for each interface. Because the accelerated packets do not traverse the forwarding stack, these packets are not included in a packet trace on that stack. However, these are included in an OSA-Express Network Traffic Analyzer (OSAENTA) trace.
Beginning with z/OS CS V2R1, QDIO Accelerator can co-exist with IP security. IP forwarded packets can be accelerated as long as all routed traffic is permitted by your IP filter policy and is not subject to logging. Sysplex distributor traffic is always eligible for acceleration using QDIO Accelerator because these packets are subject to IP filtering at the target stack rather than the distributor stack.
The Summer 2013 SHARE Conference is now in the history books! Five speakers from the z/OS Communications Server organization presented 16 sessions on various topics, with a significant focus on the just-announced z/OS V2R1 release. The V2R1 focus included an overview of z/OS V2R1 Communications Server, a detailed review of the new Shared Memory Communications for RDMA (SMC-R) capability, a V2R1 performance update, and a look at the newly-rewritten IBM Configuration Assistant for z/OS CS. Other topics discussed included Enterprise Extender, sysplex technologies, z/OS mail, OSA, and network security. Attendance at our sessions (and across the board at SHARE in Boston) was fantastic, and we would like to thank all of you who attended our sessions for the great feedback and dialogue.
Congratulations to Gus Kassimis for receiving a best session award for the session "Sysplex Networking Technologies and Considerations" that he presented at the Winter 2013 SHARE conference in San Francisco.
For those that couldn't be at the conference this week, I will remind you that you can download most of the charts for the topics we presented by going to the following link:
I especially recommend downloading and looking over the charts for the z/OS V2R1 CS Technical Update, and the sessions on SMC-R. Here are direct links for those downloads:
Please plan to join us for the Winter 2014 SHARE Conference in Anaheim, California, March 9-14, 2014.
The Summer 2014 SHARE Conference is in Pittsburgh, Pennsylvania next week (August 4th-8th). As always, there will be a good selection of content focused on z/OS Communications Server, including the following sessions from six of our team here in Research Triangle Park, NC:
Introduction to z/OS Communications Server
z/OS Communications Server Technical Update, Part 1
z/OS Communications Server Technical Update, Part 2
z/OS V2R1 CS: Shared Memory Communications - RDMA (SMC-R), Part 1
z/OS V2R1 CS: Shared Memory Communications - RDMA (SMC-R), Part 2
z/OS V2R1 CS Performance Update
Sysplex Networking Technologies and Considerations
Leveraging z/OS Communications Server Applications Transparent Transport Layer Security (AT-TLS) for a Lower Cost and More Rapid TLS Deployment
z/OS Communications Server Intrusion Detection Services
Enterprise Extender on z/OS Communications Server: SNA Hints and Tips
Change is Coming: Motivation and Considerations for Migrating from SMTPD/Sendmail to CSSMTP
z/OS Communications Server Hints and Tips
Additionally, Kim Bailey from our group will be presenting the following ISPF topics:
ISPF Hidden Treasures and New Features, Part 1
ISPF Hidden Treasures and New Features, Part 2
ISPF Editor - Beyond the Basics Hands-on Lab, Part 1
ISPF Editor - Beyond the Basics Hands-on Lab, Part 2
Last, but not least, Kim Bailey and Lin Overby will be assisting with the "z/OSMF Hands-on Labs - Choose Your Own" that run three times during SHARE week, and where you can pick from several topics, including the z/OSMF Configuration Assistant for z/OS Communications Server.
We hope to see you there! For those that can’t join us, I’ll be tweeting (IBM_Commserver on Twitter) and posting updates to Facebook (Facebook.com/IBMCommserver) throughout the week.
Colocation, colocation, colocation! Does colocating your application workloads on the same z Systems physical machine (CPC) really matter? In some cases colocation really can make a big difference. When you have application workloads with network-intensive communication patterns, meaning they either communicate frequently (exchanging many messages to complete a single transaction, as in multi-tiered application workloads) or exchange large amounts of data (bulk, streaming, or other big data solutions such as analytics-related workloads), then the physical location or proximity of the applications can make a difference. The differences can impact your cost and your overall results.
The IBM System z13™ and z13s™ introduced new technology that offers an opportunity for clients to take a closer look at this aspect of colocation of IBM z/OS application workloads. IBM introduced z Systems technology called Internal Shared Memory (ISM). The ISM technology allows one z/OS instance to directly access (share) virtual memory within another z/OS instance (e.g. LPAR or guest virtual machine) within the same physical machine. The ISM architecture enables direct memory access (DMA) capability for software exploitation.
With ISM, IBM also announced Shared Memory Communications – Direct Memory Access (SMC-D). SMC-D exploits ISM which enables applications to directly and transparently communicate with other applications executing in other z/OS instances running in other Logical Partitions on the same physical z13 System. The direct communications is provided transparently for applications using TCP sockets.
Some history will help give perspective. Prior to ISM, z Systems provided a very efficient technology called HiperSockets. HiperSockets provides an internal (logical) LAN within z Systems, allowing operating systems to communicate using numerous protocols such as TCP/IP, UDP, and SNA. Communications with HiperSockets is accomplished by creating, exchanging, and processing standard IEEE 802.3 packets (frames) in software. HiperSockets provides a very efficient memory-to-memory transfer (of standard packets) without requiring physical networking hardware.
SMC-D with ISM goes beyond HiperSockets by eliminating packets along with all of the TCP/IP protocol and packet-related processing. SMC-D provides a direct socket-to-socket transfer of data. This model provides significant savings in host network processing, which translates to significant reductions in CPU cost and latency and significant gains in throughput.
In addition to HiperSockets, z/OS instances on the same CPC can use other network technology to communicate with same-CPC z/OS instances, such as Ethernet using the IBM OSA-Express family of adapters. While there are several options, HiperSockets typically provides the most efficient one. While HiperSockets will continue to be an important technology (because of its versatility), the benefits of SMC-D are compelling.
Shared Memory Communications architecture now has two variations:
- Shared Memory Communications – RDMA (SMC-R for cross platform using RoCE)
- Shared Memory Communications – DMA (SMC-D for same platform using ISM)
Both forms of SMC can be used concurrently. The protocol dynamically selects the appropriate variation based on proximity of the peer hosts (i.e. same CPC instances use SMC-D).
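As a hedged sketch (not taken from this article), enabling the two variations comes down to GLOBALCONFIG parameters in the TCP/IP profile. The PFID value below is a placeholder for the RoCE Express function defined for your system, and SMCD requires a z/OS release and z13/z13s hardware level that provide ISM:
GLOBALCONFIG SMCR PFID 0018 PORTNUM 1 ; SMC-R over a 10GbE RoCE Express feature (placeholder PFID)
GLOBALCONFIG SMCD ; SMC-D over ISM for peers on the same CPC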
So what are the benefits or differences of SMC-D? Benchmark results comparing the technologies have shown that SMC-D using ISM provides significant reductions in CPU cost and latency along with significant gains in throughput. Here is a quick performance summary for request/response (transactional) and streaming (bulk) workloads highlighting the differences when comparing SMC-D to HiperSockets:
- Request/Response Summary for Workloads with 1k/1k – 4k/4k Payloads:
- Latency: Up to 48% reduction in latency
- Throughput: Up to 91% increase in throughput
- CPU cost: Up to 47% reduction in network-related CPU cost
- Request/Response Summary for Workloads with 8k/8k – 32k/32k Payloads:
- Latency: Up to 82% reduction in latency
- Throughput: Up to 475% (~6x) increase in throughput
- CPU cost: Up to 82% reduction in network-related CPU cost
- Streaming Workload:
- Latency: Up to 89% reduction in latency
- Throughput: Up to 800% (~9x) increase in throughput
- CPU cost: Up to 89% reduction in network-related CPU cost
As you can see, the benefits of SMC-D with ISM are compelling. If you currently exploit HiperSockets, then the applicability of SMC-D is easy for you to evaluate. If you are not sure whether you have (or could have) z/OS network traffic patterns to which SMC-D applies, you can evaluate your workload network patterns using the SMC Applicability Tool (SMC-AT).
With the potential for this type of savings it is easy to see how colocation of network intensive workloads on the IBM z13 or IBM z13s using SMC-D with ISM can make a difference.
Benchmark results shown here are from a controlled IBM internal lab using standard tools. Your actual results may vary. Performance information is provided "AS IS" and no warranties or guarantees are expressed or implied by IBM.
The Summer 2014 SHARE Conference was a great educational event and continued celebration of the 50th anniversary of the mainframe! Six speakers from the Enterprise Networking Solutions organization presented 12 sessions on z/OS Communications Server and 4 on ISPF, and also participated in 3 z/OSMF hands-on labs. As with Winter SHARE, there continued to be a focus on z/OS V2R1, including an overview of z/OS V2R1 Communications Server, a detailed review of the Shared Memory Communications for RDMA (SMC-R) capability, and a V2R1 performance update. Other topics discussed included Enterprise Extender, sysplex technologies, network security, z/OS mail strategy, and z/OS CS hints and tips. We also had our first zNextGen session with our "Introduction to z/OS Communications Server." Attendance at our sessions (and across the board at SHARE in Pittsburgh) was very good, and we would like to thank all of you who attended our sessions for the great feedback and dialogue.
SHARE loves user experience sessions, and we were fortunate to have a z/OS CS-centric user experience session at this conference: Jim Darby of Nordstrom and Tom Cosenza of IBM Lab Services presented their experience with implementing IPSec on z/OS. Thanks to both of these gentlemen for their presentation!
For those that couldn't be at the conference last week, I will remind you that you can download most of the charts for the topics we presented by going to the following link:
Please plan to join us for the Winter 2015 SHARE Conference in Seattle, Washington, March 1 - 6, 2015.
The Winter 2013 SHARE Conference is now in the history books! Five speakers from the z/OS Communications Server organization presented 16 sessions on various topics, including an overview of the z/OS V2R1 Communications Server release, Enterprise Extender, IPv6, zEnterprise architecture, sysplex technologies, IBM Multi-site Workload Lifeline, OSA, and network security. We would like to thank all of you who attended our sessions for the great feedback and dialogue.
We would also like to thank two presenters for providing user-experience sessions at the SHARE conference. Heinz Kluemper of Finanz Informatik presented the topic "Roadmap to Securing Enterprise Extender Traffic over an APPN Global Connection Network", and Sig Perdomo presented the topic "Getting Started with IPv6 at the DTCC". Both sessions were well-received and much appreciated.
Congratulations to Mike Fitzpatrick for receiving a best session award for the session "Getting the Most Out of Your OSA Adapter with z/OS CS" that he presented at the Summer 2012 SHARE conference in Anaheim.
For those that couldn't be at the conference last week, I will remind you that you can download the charts for the z/OS V2R1 Communications Server Technical Update presentation by going to the following link:
z/OS CS Technical Update
The technical update presentation provides a preview of some of the expected content of the z/OS V2R1 Communications Server release planned for the second half of this year.
Please plan to join us for the Summer 2013 SHARE Conference in Boston, Massachusetts, August 11-16, 2013.
The Winter 2013 SHARE Conference is in San Francisco, California next week (February 3rd-8th). As always, there will be a good selection of content focused on z/OS Communications Server, including the following sessions from five of our team here in Research Triangle Park, NC:
- z/OS Communications Server Technical Update, Part 1
- z/OS Communications Server Technical Update, Part 2
- zEnterprise System - Network Architecture and Virtualization
- zEnterprise System - Network Design & Implementation
- IPv6 on z/OS
- Getting the Most Out of Your OSA Adapter with z/OS CS
- Sysplex Networking and Technology Considerations
- Multi-site Workload Lifeline
- z/OS CS Network Security Overview
- z/OS CS IPSec and IP Packet Filtering
- Safe and Secure Transfers with z/OS FTP
- Application-Transparent Transport Layer Security
- z/OS Intrusion Detection Services
- SNA Strategy and Migration Considerations
- z/OS CS EE Hints & Tips
We hope to see you there! For those that can’t join us, I’ll be tweeting (IBM_Commserver on Twitter) and posting updates to Facebook (Facebook.com/IBMCommserver) throughout the week.
In our first DevOps blog we talked about how we began our journey and touched briefly on the three main pieces of a DevOps movement: Culture, Process, Tools.
You may ask how we got started with changing the culture. And how do we know it's working?
What if I told you that if you carve out two hours a week for the next month to work on automating a task that you'd then free up 4 hours a month for yourself and 3 of your co-workers? Would you do it?
At first you have to realize that any change will require people doing some work. With the workloads we all carry these days we have to evaluate everything we do today for its priority and usefulness, always looking at our return on investment. Can we spend 10 hours over the next month to make a process better so that we save 5 people 4 hours every month for the foreseeable future? What's the trade off for those original 10 hours? Can we push out a deliverable date by a month in order to spend those hours?
We began by asking ourselves questions about pain points and areas of frustration. And what we found is that at the heart of the beginning of the DevOps transformation is the culture change. How do we stop doing the tasks that no longer matter? How do we evaluate what really needs to be done? Can we push lower priority work out some to give us time to change the processes that are time consuming? How do we change our mind set as an organization?
These are the kinds of changes in our thought process that have to be fostered in order to make DevOps work. It's not easy and it takes time to get everyone on board with shifting our focus and thinking in a DevOps way.
How do you get the buy in to change the culture? It's pretty simple: Prove it.
You make a small change that saves an hour of time or reduces frustration, which proves that the time spent to evaluate and change the process is, in fact, worthwhile and provides real benefits.
- One of the first things we did was take a deep dive into a process that had 140 steps and reduced it to 81. This had been a widely shared pain point, so while fairly simple, it proved management's commitment and showed real benefits.
- Another pain point was the upgrade of a tool we all use and are completely dependent upon. We had to upgrade the tool, but in the past it had taken over 4 hours per user and the process was clunky and prone to errors. One person was assigned the task of creating an automated script that would reduce the process to 30 minutes. Management worked to free up cycles to allow him to spend two days on it, which saved everyone else in the organization 4 hours each.
- We have another tool that is used by a large portion of our organization that takes nearly 30 seconds to complete a transaction. This has been a pain point for several years that users have been just dealing with in (mostly) silence. We have a small task force that is spending approximately an hour a week to find a solution to this problem. Once it is resolved we know it will save users time and a lot of frustration.
These were relatively small pain points, but they are helping to change the culture by establishing credibility that long-standing pain points can change and that people will see real benefits from the work put into transformation. We have other, larger items that we are working to automate, which will save people time as well as increase quality.
We have opened the door to change and as an organization we are saying to ourselves "we don't have to do a task just because it's always been done". When people realize that the leaders of your organization are serious about letting the people who do the work change the work as they see fit, it empowers a person to look at every process and procedure in a new light.
That says to me that it's working.
Did you know that z/OS V1R13 now supports TLS v1.2? This TLS protocol version offers a number of new cipher suites - many of which use the SHA2-based hashing that you've been asking for! APAR OA39422 enables this function in System SSL and APAR PM62905 allows you to access most of the new System SSL function through AT-TLS.
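As a hedged illustration (the statement and parameter names below reflect AT-TLS policy syntax as I understand it, and the fragment is deliberately minimal), turning on TLS v1.2 with a SHA2-based cipher in an AT-TLS policy might look something like this once the APARs are applied; the eAdv1 and cipher1 names are made up and would be referenced from your existing TTLSEnvironmentAction:
TTLSEnvironmentAdvancedParms eAdv1
{
  TLSv1.2 On
}
TTLSCipherParms cipher1
{
  V3CipherSuites TLS_RSA_WITH_AES_256_CBC_SHA256
}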