Modified by SamReynolds
In 2013, the IBM zEC12 and zBC12 introduced the 10GbE RoCE Express feature, an RDMA-capable network adapter that provides access to RDMA over Converged Ethernet (RoCE). RoCE provides an optimized network interconnect for System z communications. Along with RoCE Express, z/OS provided a new RDMA-based solution called Shared Memory Communications over RDMA (SMC-R). SMC-R is a sockets-based solution that provides TCP sockets applications with transparent access to RoCE over standard Ethernet.
The IBM System z13 introduces the capability to share (virtualize) the 10GbE RoCE Express feature among multiple (up to 31) LPARs (or z/VM guest virtual machines) using standardized PCIe virtualization (SR-IOV) technology.
When RoCE Express is exploited by z/OS with SMC-R, the combined solution provides two key value points:
- Improved latency, which can improve transaction rates for latency-sensitive, transactional workloads
- Lower CPU cost for workloads that transfer larger payloads (e.g. analytics, streaming, FTP, big data, data replication, web services)
SMC-R does this while preserving the critical qualities of service (load balancing, security, isolation, reuse of IP topology, etc.) required by System z clusters in enterprise data center networks, without requiring any application or middleware changes and with zero or minimal operational changes.
When customers enable SMC-R they should immediately experience the benefits, and longer term the benefits can be extended as they expand their exploitation of RDMA technology on System z.
With the IBM System z13 RoCE virtualization capability, users can now share RoCE Express features, which:
- Extends access to RoCE to additional workloads across multiple z/OS instances (LPARs), reducing the number of required physical RoCE features
- Expands (effectively doubles) the bandwidth of your RoCE Express features by enabling concurrent use of both 10GbE RoCE Express physical ports
Customers who have multiple CPCs in a single site (or an extended LAN among multiple sites) with z/OS centric workloads (e.g. SYSPLEX, DB2, WAS, CICS, MQ, IMS etc.) will be natural candidates for benefiting from RoCE Express and SMC-R.
So, if you understand the technology but you're not sure your environment would benefit from it, we can offer some help. A new tool called the Shared Memory Communications Applicability Tool (SMCAT) has been created to help you assess how your specific application workloads might be applicable to SMC-R, and to estimate the level of benefit you might anticipate for your environment. SMCAT is now available via PTF for z/OS V1R13 (UI24872) and z/OS V2R1 (UI24762) customers. SMCAT does not require SMC-R or any special hardware. Instead, it monitors your existing TCP/IP workloads and produces a summary report to help you understand how your workloads might be eligible for and benefit from SMC-R and RoCE Express.
If you have additional questions about SMC-R, RoCE Express, RoCE virtualization or using SMCAT then your next step is exploring the reference materials provided at:
If you are just getting started, the FAQ document might be a good first step. You can also reach out to the author of this blog (Jerry Stevens) at this email: firstname.lastname@example.org
The IBM z/OS Communications Server TCP/IP Implementation Redbook series provides understandable, step-by-step guidance about how to enable the most commonly used and important functions of z/OS Communications Server TCP/IP. Final versions of the four V1R13 volumes are now available for you to download and enjoy:
IBM z/OS V1R13 Communications Server TCP/IP Implementation: Volume 1 Base Functions, Connectivity, and Routing (SG24-7996-00)
IBM z/OS V1R13 Communications Server TCP/IP Implementation: Volume 2 Standard Applications (SG24-7997-00)
IBM z/OS V1R13 Communications Server TCP/IP Implementation: Volume 3 High Availability, Scalability, and Performance (SG24-7998-00)
IBM z/OS V1R13 Communications Server TCP/IP Implementation: Volume 4 Security and Policy-Based Networking (SG24-7999-00)
Modified by RaquelPrieto
PTF test for PI38376: A TCP connection can use the wrong maximum segment size (MSS) on V2R1
A test role here at Communications Server for z/OS involves more than just testing the latest and greatest code. Customers run into issues and fixes need to be made available for them, but not before they are internally tested. Today I describe my experience testing this PTF as a still relatively new member of the z/OS System test team. A lot of frustrations were had, but fortunately I still have all of my hair and had the chance to learn some new things.
Gathering Information - I know some stuff, maybe?
To start, I look at the PTF record and corresponding web pages in our internal source control tool to gather some initial information. I end up with a Notepad++ document full of haphazardly pasted notes from various resources to sift through and make some sense of. Fortunately there are a ton of details, which makes for a happy tester. I'll leave the majority of the nitty-gritty out and summarize the situation:
- A distributed DVIPA (DRVIPA) is defined for at least two systems in a sysplex: one being the primary distributor and the other a backup.
- If the backup stack is started before the primary distributor and takes over the DVIPA, an implicit (host) route from the backup to the primary distributor is created for that DRVIPA with an MTU size of 576 during the SYN stage of a TCP handshake. In the failing case the multipath routing algorithm is used, which chooses the smallest MTU value among all possible routes to the DRVIPA but ignores the default host route. Although OMPROUTE uses OSPF to advertise host routes with larger MTU sizes, the MTU for this particular route remains "stuck", resulting in an MSS (maximum segment size) of 536 for outbound TCP connection setup requests. No bueno.
"The problem occurs when an implicit host route for the DRVIPA is generated with the default MTU 576 instead of 65535 on a backup system. This is accomplished by starting the backup system first before the distributor."
To make matters a bit more convoluted, the conglomerate of notes informs me that I will not be able to view the incorrect MTU with a simple netstat route display. Instead, I'll have to dump the TCP/IP stack after recreating the scenario and scour through the raw memory. Staring at hexadecimal. Looking for something called an "RTE" in something else called an "RTOP."
Additionally, the customer's error description included steps to recreate the error:
1. Define DRVIPA to be used
-Backup definition must be defined without the MOVEABLE IMMEDIATE parameter
2. Start TCPIP on a backup system without OMPROUTE
3. Start and stop the primary distributor without OMPROUTE to force the DRVIPA takeover on the backup stack
-At this point the host route for the DRVIPA with the MTU of 576 is created on the backup stack
4. Restart the primary distributor with OMPROUTE to takeback the DRVIPA
5. Start OMPROUTE on the backup so OSPF host routes will be learned from the distributor
-At this point the MTU value set at 576 will get "stuck"
6. If a connection is established from the backup to the DRVIPA on the distributor, a netstat display on this connection will show the MSS set to 536
7. Dump the TCP/IP address space
-Examine the dump to find the MTU value of 576 from RTOP in RTE
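The DRVIPA layout described in the recreation steps can be sketched with TCP/IP profile statements along these lines. This is only an illustrative sketch; the addresses, mask, port, and rank are placeholders, not the customer's actual values:

```
; Primary distributor stack profile (illustrative addresses)
VIPADYNAMIC
  VIPADEFINE 255.255.255.0 10.1.1.1
  VIPADISTRIBUTE DEFINE 10.1.1.1 PORT 8000 DESTIP ALL
ENDVIPADYNAMIC

; Backup stack profile - note there is no MOVEABLE IMMEDIATE,
; matching step 1 of the recreation instructions
VIPADYNAMIC
  VIPABACKUP 100 10.1.1.1
ENDVIPADYNAMIC
```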
So far, I know our environment has, at least, bits and pieces of this customer configuration. The SVT environment has DVIPAs defined with their corresponding VIPADISTRIBUTE and VIPABACKUP definitions. Since these DVIPA definitions were built to be highly customizable to suit a customer's needs, the number of possible options and parameters, combined with ensuring correct syntax, can be overwhelming at times. For this reason my preferred method is to work from an example - there are already so many different kinds of configuration files saved over the years in our test environment that there is likely one I can use as a template for this test.
As a tester, however, I could have saved a decent amount of time if all of the test information I needed were in a single location, instead of a number of separate records/web pages. I had to dig around to find useful pieces of information, and the first place that I looked (the PTF test record directly assigned to me) did not contain detailed error recreation instructions.
Research - I figure out some stuff
Terms and concepts
From the information I've gathered so far, I need to define some acronyms and understand some concepts not previously encountered.
- RTOP: A Google search didn't come up with anything, and neither did the two 'Terminology' bots on Sametime, so I went to the V2R1 Knowledge Center. A search gave me RTOPTS, the run-time options for Language Environment, which doesn't seem related, so I asked a more experienced tester, who wasn't familiar with the acronym either. Luckily I was eventually able to find someone who was, and it turns out that RTOP is an identifier (an "eye catcher," as we call it) for a control block representing a group of routes to a given IP address destination in a dump of the TCP/IP address space. Internal stuff, so that's why it wasn't publicly searchable.
- RTE: Another hopeless Google search, but 'RTE', it turns out, is related to a TCPIPCS ROUTE report in IPCS. All of that happens to be in the Knowledge Center. Looking at a sample TCPIPCS ROUTE report gives me more clues - it looks like RTE is just a shortened name for the "Route" field in the report. RTOP has something to do with this report, but I don't quite see where it fits in just yet. Eventually, the same person who explained RTOP explained that RTE is an eye catcher for the route control block in a memory dump; there can be multiple RTE entries for a single RTOP.
- MOVEABLE IMMED parameter for VIPABACKUP definitions: Sticking to the Knowledge Center for this one, the MOVEABLE IMMEDIATE parameter refers to the behavior of a DVIPA during stack takebacks. So, if the stack that owned the DVIPA goes down, transfers ownership of that DVIPA to the defined backup stack, and then comes back up, the stack will regain control of the DVIPA and all new connections for it. In this test scenario, the customer does not have MOVEABLE IMMEDIATE defined, so I will need to ensure that it is also not defined on my test systems.
- Implicit host route/routing, and how to spot it: I thought way too hard about this one and should have just asked someone right away because, as it turns out, it is simply a host route or a route to an IP address on the HOME list (signified by the H flag in a netstat display). It appears that the term "implicit" is older language exclusive to the mainframe crowd. That happens quite a bit around here.
- Multipath, including where it's defined and how to tell that it is enabled: What I want to know is where and how multipath is defined on a stack or router, or at least how to tell that it is being used. Back at the Knowledge Center I came across the Routing section with an OSPF overview. Just as the name suggests, multipath allows a routing table to contain multiple routes ("paths") to a destination. There is also an IPCONFIG MULTIPATH or NOMULTIPATH statement for the TCP/IP profile, which enables or disables multipath routing for outbound traffic, respectively. When testing I will need to verify that the MULTIPATH statement is configured for the TCP/IP stack I will use; I can also confirm it is in use by looking at a netstat routing table display. If multiple routes exist for a single destination, multipath is in use, at least for outbound traffic.
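As a sketch, the stack-level pieces of this look something like the following (the stack name TCPCS is a placeholder for whatever my test system uses):

```
; In the TCP/IP profile: enable multipath routing for outbound traffic,
; distributing connections round-robin across the available routes
IPCONFIG MULTIPATH PERCONNECTION

; From the console, confirm multiple routes exist to a destination:
; D TCPIP,TCPCS,NETSTAT,ROUTE
```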
- How to examine a dump of the TCP/IP address space to find the IP routing table that should show the DRVIPA implicit host route with MTU 576: I know this part is specific to our environment and what we already have set up, but to find the MTU value buried in raw hex could be a formidable undertaking.
The combined PTF records already provide a lot of information that will save me a significant amount of time by outlining steps for recreation with a good amount of detail - high quality test relies on the presence of as much relevant information as possible. The challenge at this point, however, is determining how I will mimic the customer's configuration and recreation steps in our own shared test environment.
For this test I will need at least two systems in the same sysplex: one as the primary distributor of the DRVIPA and one as a backup. Likewise, I'll need a way to "stop the distributor," which can be done in a few different ways, such as forcing the stack to leave the sysplex, using a VARY DEACTIVATE command on the DRVIPA itself, or stopping the stack entirely. I'll need to define this DRVIPA similarly to the customer's configuration and be able to access and modify TCP/IP profiles. Finally, I'll need to be able to stop and start OMPROUTE on either stack.
At this point I think it's time to bring up some systems and see all of this for myself.
Stay tuned for Part Two where I dive in and actually do some testing!
Modified by NatashaLishok
ENS is a very mature enterprise z/OS product team that is responsible for z/OS Communications Server, ISPF, and IBM Multisite Workload Lifeline. We have been around for 40+ years and already have good processes in place for design, development, build, test, information development and service. We made the transformation to agile several years ago and feel like our current processes allow us to deliver on 4 week sprints, innovate on items that don't work well and deliver with high quality.
When the team started our DevOps journey back in August of 2014 there was push back in the form of two questions:
1) Why DevOps? We already use automation.
2) What problem are we trying to solve?
In order to answer these questions for a group of well seasoned software engineers we had to demonstrate WHY DevOps and be able to show small wins which made their daily tasks easier to do.
How did we do this? We started by doing two things in parallel:
First, we created a core DevOps work group which consisted of engineers from each of the disciplines of our product development life cycle (design, development, build, FVT, SVT, performance, IDD and service). Technical leaders who were open to change were selected for the core work group.
Second, we asked the entire organization two questions:
1) What are your top two pain points?
2) If you could change one thing in the organization what would it be and why?
While we were waiting on the answers to our two survey questions we did the following with the core work group:
1) We discussed terms that were overloaded and created definitions for those terms that were relevant to the team, specific to our organization and product. For example:
DevOps = maximize the predictability, efficiency, security and maintainability of operational processes - this objective is supported by automation
Continuous Integration = merging developer code into the product stream in regular, repeatable, short intervals and rapidly propagating that code to all test systems automatically and quickly
Pre-integration = work done before developer code is merged into the product stream, for example, unit and regression test and peer review
Continuous Deployment = after passing all the automated delivery tests, each code commit is deployed to end users as soon as it is available. Because changes are delivered quickly and without human intervention, continuous deployment can be seen as risky. It requires a high degree of confidence both in the existing application infrastructure and in the development team.
Continuous Test = continuous testing adds manual testing to the continuous delivery model. With continuous testing, the test group will constantly test the most up-to-date version of available code. Continuous testing generally adds manual exploratory tests and user acceptance testing. This approach to testing is different from traditional testing because the software under test is expected to change over time, independent of a defined test-release schedule.
Continuous Monitoring = monitoring the continuous testing and getting defects reported in real time
Continuous Delivery = a software development discipline where you build software in such a way that the software can be released to end users at any time
Production = today it means deploying to our SVT enterprise customer environment daily and in the future will mean deploying to an environment that can be accessed by our external customers to provide early feedback on pre-GA product code features
2) We defined the purpose and objective of the core work group: Our high level focus areas would be Culture, Process and Tools.
3) We appointed a manager as the owner, project manager and technical lead for the work group
4) We agreed to meet bi-weekly for one hour.
5) We created an online community to track and store our meeting agendas, actions and collateral.
6) We agreed to use value stream mapping to document our end to end pipeline and processes.
By our second core work group meeting we had the results to our survey questions from the organization. We were surprised by the feedback and how easy it was to identify one or two pain points that were pervasive across the organization. The key is to act fast to solve these first few pain points to demonstrate to the organization "Why DevOps" and get buy-in to the DevOps journey.
Answering the question "What problem are we trying to solve?" will become clearer as the team defines the overloaded terms, states the purpose and objective of the work group, documents the first pipeline of its product development life cycle, and implements its first couple of small changes, which prove the value of DevOps.
We will continue to share our strategies and experiences with this blog series and welcome your feedback!
You can also reach out to the author (Frank Varone) of this blog at email@example.com
Modified by NatashaLishok
Ever wonder why z/OS Communications Server support asks for multiple traces for network issues? If so, here is the reason why.
The z/OS packet trace is collected from the perspective of the z/OS host that is sending or receiving data. The trace is collected before the data reaches the physical network, a.k.a. the OSA NIC (network interface card). So in the case of outbound (sent) packets, trace records are collected before the data is processed by the VTAM DLC layer to be sent to the OSA NIC. Conversely, inbound (received) packets are traced after they arrive over the OSA NIC and are passed up by the DLC layer in VTAM to TCP/IP.
Note that for the majority of network throughput issues, the full application data payload is not needed in the packet trace, so feel free to use the ABBREV=100 option when you collect it!
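If you're starting the trace yourself, a minimal command sequence might look like this. The stack name TCPCS and the writer procedure name are placeholders for your own environment:

```
; Start the CTRACE component for the packet trace
; (reply to the prompt with your external writer, e.g. R nn,WTR=ctwtr,END)
TRACE CT,ON,COMP=SYSTCPDA,SUB=(TCPCS)

; Turn on the packet trace, truncating each record to 100 bytes
V TCPIP,TCPCS,PKTTRACE,ON,ABBREV=100,IP=*
```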
The next possible choice for a z/OS "network" type trace is the OSAENTA trace. This trace captures packets from the OSA NIC perspective. This means that once packets sent from TCP/IP are sent over the OSA NIC, they are captured in the OSAENTA trace. Conversely, once packets arrive over the OSA NIC, before they reach VTAM and TCP/IP, they are collected in the OSAENTA trace. Hopefully, the picture is now more clear!
Note that the OSAENTA trace does not collect the entire packet application data contents. It is truncated to 200 bytes, so keep that in mind!
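To turn on an OSAENTA trace for a given OSA, commands along these lines can be used. The stack name TCPCS, the port name OSAPORT1, and the filter address are examples, not fixed values:

```
; Enable the OSAENTA trace for the OSA identified by its port name
V TCPIP,TCPCS,OSAENTA,PORTNAME=OSAPORT1,ON

; Optionally filter the trace to a single IP address of interest
V TCPIP,TCPCS,OSAENTA,PORTNAME=OSAPORT1,IPADDR=10.1.1.1
```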
When diagnosing a network performance issue it is imperative to have the full network picture, or as close to one as possible. Peer hosts are often multiple router hops apart, which adds multiple points to examine. Delays captured with just a z/OS packet trace don't always tell the full story, so corresponding traces outside of z/OS are often requested. You may want to go ahead and collect an "external network" trace somewhere between the z/OS and remote host endpoints. These additional traces, when collected simultaneously with the z/OS packet trace, provide greater insight into where delays or packet loss may be occurring.
There are many workstation-based tools available for viewing network traces. The z/OS packet trace and OSAENTA traces are designed to be viewed with IPCS. I know not everyone is comfortable or familiar with IPCS, so consider the SNIFFER option, which formats both of these traces into binary files that can be loaded into one of these other trace viewing products, making your life simpler without having to use IPCS to look at the traces.
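For example, a packet trace can be formatted into SNIFFER format from within IPCS with something like the following (the stack name TCPCS is an example):

```
; IPCS command: format the SYSTCPDA packet trace for stack TCPCS
; into SNIFFER format for use with workstation trace viewers
CTRACE COMP(SYSTCPDA) SUB((TCPCS)) FULL OPTIONS((SNIFFER))
```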
Modified by SamReynolds
Get connected with z/OS Communications Server, ISPF, and Multi-site Workload Lifeline!
The Communications Server team continues to build and expand the content we offer over social media outlets. Follow our online discussions, find online presentations, and participate in online virtual Q&A sessions. You can find us on the following venues:
z/OS Communications Server
You can find Chinese content in the following venues:
Modified by SamReynolds
It's looking like I'm seeing some sort of storage creep, and it seems to be related to z/OS CommServer TCP/IP. Sometimes I'm not quite sure how to approach identifying if the storage increase is related to TCP/IP, or even how to get to the root cause. Over time, I've learned it's helpful to use some simple commands to gain a general knowledge of my system's TCP/IP storage use. This has allowed me to not only protect my system, but also to quickly get to the bottom of any issues.
There are several commands I use to periodically monitor and collect information regarding TCP/IP's storage usage for current use, high-water mark, and limit (if I have it configured). Analyzing the data, I have learned over time what's typical for TCP/IP storage use during normal and acceptable peak workloads. My automation issues these commands every 15 minutes so they are recorded in the system log. This way there's historical information to pinpoint a problem area and time frame should TCP/IP's storage usage appear to be abnormal.
- D TCPIP,,STOR - issue this command for each of your TCP/IP stacks
- D NET,CSM - this command can be used to determine overall CSM ECSA and CSM Fixed storage utilization
- D NET,CSM,OWNERID=ALL - use this command to identify what application is using CSM storage and how much it is using
TCP/IP Storage -
TCPCS STORAGE CURRENT MAXIMUM LIMIT
ECSA 2858K 3313K NOLIMIT
PRIVATE 8631K 8634K NOLIMIT
ECSA MODULES 9671K 9671K NOLIMIT
HVCOMMON 1M 1M NOLIMIT
HVPRIVATE 1M 1M NOLIMIT
TRACE HVCOMMON 2579M 2579M 2579M
- Limits for ECSA and Private storage can optionally be configured in the TCP/IP Profile (GLOBALCONFIG statement, parameters ECSALIMIT and POOLLIMIT).
- There are no recommended limits that will work for every system because every system is different.
- This output does not include CSM storage.
ECSA storage - The use of common storage can be controlled with the ECSALIMIT parameter of the TCP/IP GLOBALCONFIG profile statement. ECSALIMIT can be set to keep TCP/IP from monopolizing all common storage on a system, protecting other subsystems' ability to obtain common storage in the event TCP/IP hits a situation where it consumes too much ECSA. This parameter is intended to improve system reliability by limiting TCP/IP's common storage use.
Private storage - The amount of storage TCP/IP uses in its user region. There are several ways in which you can limit TCP/IP's use of private storage:
- with the TCP/IP GLOBALCONFIG profile statement - POOLLIMIT
- via the REGION keyword in TCP/IP's start up JCL. This can also be overridden by installation exits such as IEFUSI. If you choose to limit the region size you should also set a POOLLIMIT in the profile.
ECSA modules - This is common storage used by TCP/IP load modules.
HVCommon - 64-bit common area storage used by TCP/IP
HVPrivate - 64-bit private area storage used by TCP/IP
Trace HVCommon - 64-bit common storage used for tracing
Here are some considerations you should make when choosing to define ECSA or private (pool) storage limits.
- Allow for temporary application "hang" conditions, where TCP/IP must buffer large amounts of inbound or outbound data. Add a reasonable fudge factor to the observed maximum usage values; it is not uncommon to set limits that are 50% over the peak usage.
- Care should be taken when coding the ECSALIMIT parameter. Setting it too low can cause TCP/IP to terminate prematurely.
- The benefit of specifying limits is that you will receive warning messages before storage obtain calls start failing when there is not enough storage available to satisfy the requests.
- ECSALIMIT does not include any of your CSM storage used by TCP/IP.
- When choosing to limit Private storage, make sure you don't use a value that is lower than or equal to what your installation exit (IEFUSI) enforces.
- Remember that the values set for ECSALIMIT and POOLLIMIT can be changed via OBEYFILE command processing (VARY TCPIP,,OBEYFILE).
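Putting that together, the profile statement might look like the following sketch. The limit values and data set name are illustrative only; derive your own values from your observed peaks plus a fudge factor:

```
; In the TCP/IP profile: cap TCP/IP's ECSA and private (pool) storage use
GLOBALCONFIG ECSALIMIT 140M POOLLIMIT 300M
```

A change to these limits can then be activated without restarting the stack, for example with V TCPIP,TCPCS,OBEYFILE,USER.TCPPARMS(GLOBCFG), where the stack and data set names are again placeholders.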
CSM storage - The Communications Storage Manager (CSM) is a VTAM component that allows authorized host applications to share data with VTAM, TCP/IP, and other CSM users without the need to physically copy the data.
- Your CSM storage will be located in either ECSA or data space storage, and can be fixed or pageable.
- CSM storage definitions are controlled by SYS1.PARMLIB member IVTPRM00 which is read by VTAM during initialization.
- The limits you set can be dynamically changed with a MODIFY CSM command. This allows you to control the amount of CSM storage that can be used in ECSA or can be FIXED at any point in time.
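As a sketch, the IVTPRM00 definitions might look like the following. The 120M values simply mirror the maximums shown in the sample display; they are not a recommendation:

```
FIXED MAX(120M)
ECSA MAX(120M)
```

Either value can later be adjusted on the fly, for example with F NET,CSM,ECSA=140M,FIXED=140M (NET is VTAM's procedure name here; yours may differ).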
D NET,CSM - This command provides a quick overview of how much storage has been allocated by CSM, and how much of it is in-use or free for use by a CSM user. You'll find that CSM can be in either ECSA or data space storage. The command output also lets you know what you have defined as the maximum in your IVTPRM00 parmlib member.
SIZE SOURCE INUSE FREE TOTAL
4K ECSA 144K 112K 256K
16K ECSA 16K 240K 256K
32K ECSA 0M 512K 512K
60K ECSA 0M 0M 0M
180K ECSA 0M 360K 360K
TOTAL ECSA 160K 1224K 1384K
4K DATA SPACE 31 0M 256K 256K
16K DATA SPACE 31 0M 0M 0M
32K DATA SPACE 31 0M 0M 0M
60K DATA SPACE 31 0M 0M 0M
180K DATA SPACE 31 0M 0M 0M
TOTAL DATA SPACE 31 0M 256K 256K
4K DATA SPACE 64 4352K 128K 4480K
16K DATA SPACE 64 0M 256K 256K
32K DATA SPACE 64 96K 416K 512K
60K DATA SPACE 64 0M 0M 0M
180K DATA SPACE 64 0M 360K 360K
TOTAL DATA SPACE 64 4448K 1160K 5608K
TOTAL DATA SPACE 4448K 1416K 5864K
TOTAL ALL SOURCES 4608K 2640K 7248K
FIXED MAXIMUM = 120M FIXED CURRENT = 6877K
FIXED MAXIMUM USED = 6877K SINCE LAST DISPLAY CSM
FIXED MAXIMUM USED = 6877K SINCE IPL
ECSA MAXIMUM = 120M ECSA CURRENT = 1633K
ECSA MAXIMUM USED = 1633K SINCE LAST DISPLAY CSM
ECSA MAXIMUM USED = 1633K SINCE IPL
CSM DATA SPACE 1 NAME: CSM64001
CSM DATA SPACE 2 NAME: CSM31002
D NET,CSM,OWNERID=ALL - Use this command to see how much CSM storage each of the CSM 'users' are currently using. If you want to see only the CSM usage of TCP/IP, you can also specify the user by "OWNERID=TCP ASID". (An example of OWNERID command output is not shown.)
Considerations to make when choosing to set your CSM limits:
- ECSA CSM can't be larger than your system ECSA limit which you defined in your system parameters in parmlib member IEASYSnn (CSA).
- When setting FIXED CSM, ensure that you have enough real frames to back the FIXED allocation.
So, now that you've learned to monitor TCP/IP's ECSA, private, and CSM storage usage, you may be wondering what's next if you suspect a TCP/IP storage-related problem. If no SVC dumps are generated for the issue, take a console dump of the TCP/IP address space when you see that storage use is on the rise. If you think the problem is related to TCP/IP's use of CSM storage, include the CSM data spaces in the dump. Here's a sample console dump command you can use:
DUMP COMM=(tcpip storage growth)
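The DUMP command prompts for a reply specifying what to capture. A reply along these lines captures the TCP/IP address space, its data spaces, and the CSM data spaces, assuming TCPCS is your stack's job name and NET is VTAM's; substitute your own names:

```
DUMP COMM=(tcpip storage growth)
R nn,JOBNAME=(TCPCS,NET),DSPNAME=('TCPCS'.*,'NET'.CSM*),SDATA=(RGN,CSA,SQA,TRT),END
```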
If needed, the IBM Support Center can assist you with identifying the cause of your TCP/IP storage growth.
Modified by JeffHaggar
The QDIO Accelerator function can boost performance of IPv4 traffic forwarded over OSA-Express QDIO and HiperSockets interfaces including sysplex distributor traffic which is routed to a target stack. The optimized packet forwarding provided by QDIO Accelerator improves latency and reduces CPU consumption.
The function applies to traffic which arrives inbound over an OSA-Express QDIO or HiperSockets interface and is forwarded outbound over OSA-Express QDIO or HiperSockets. With QDIO Accelerator, the first time such a packet is forwarded using a given route in the stack routing table, the z/OS stack creates a QDIO Accelerator route. Subsequent eligible packets which would normally be forwarded by the stack on this route instead get processed at the DLC layer without having to traverse the forwarding stack. This provides a much more efficient path for that traffic.
QDIO Accelerator can be especially valuable for sysplex distributor traffic being forwarded to a target stack when you do at least one of the following:
- use HiperSockets to provide dynamic XCF connectivity between stacks on the same CPC
- use VIPAROUTE to route packets to a target stack over an OSA-Express QDIO interface
To enable the QDIO Accelerator function, specify QDIOACCELERATOR on the IPCONFIG statement in the TCP/IP profile of any stack that performs IP forwarding or serves as a sysplex distributor stack. You can enable VIPAROUTE to a specific target stack by using the VIPAROUTE statement in the VIPADYNAMIC block. With VIPAROUTE, the distributor stack forwards packets to a target stack using a route from the stack routing table rather than the dynamic XCF connectivity.
To display the QDIO Accelerator routes, use the Netstat ROUTE/-r report option with the QDIOACCEL modifier. You can use the Netstat VCRT/-V report with the DETAIL modifier to see which sysplex distributor connections are eligible for acceleration.
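A minimal sketch of the distributor-side configuration, with hypothetical addresses and stack name:

```
; Distributor stack profile: enable QDIO Accelerator
IPCONFIG QDIOACCELERATOR

VIPADYNAMIC
  ; VIPAROUTE DEFINE <dynamic XCF address of target> <target destination IP>
  VIPAROUTE DEFINE 10.1.2.2 10.1.3.2
ENDVIPADYNAMIC
```

Then D TCPIP,TCPCS,NETSTAT,ROUTE,QDIOACCEL shows the accelerated routes, and D TCPIP,TCPCS,NETSTAT,VCRT,DETAIL shows which sysplex distributor connections are eligible.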
With VTAM tuning statistics, you can display information such as the number of packets and bytes accelerated for each interface. Because the accelerated packets do not traverse the forwarding stack, these packets are not included in a packet trace on that stack. However, these are included in an OSA-Express Network Traffic Analyzer (OSAENTA) trace.
Beginning with z/OS CS V2R1, QDIO Accelerator can co-exist with IP security. IP forwarded packets can be accelerated as long as all routed traffic is permitted by your IP filter policy and is not subject to logging. Sysplex distributor traffic is always eligible for acceleration using QDIO Accelerator because these packets are subject to IP filtering at the target stack rather than the distributor stack.
Modified by SamReynolds
Colocation, colocation, colocation! Does colocating your application workloads on the same z Systems physical machine (CPC) really matter? In some cases colocation really can make a big difference. When you have application workloads with network-intensive communication patterns, meaning they either communicate frequently (exchanging many messages to complete a single transaction, as in multi-tiered application workloads) or exchange large amounts of data (bulk, streaming, or other big data solutions such as analytics workloads), the physical location or proximity of the applications can make a difference. That difference can impact your cost and your overall results.
The IBM System z13™ and z13s™ introduced new technology that offers an opportunity for clients to take a closer look at this aspect of colocation of IBM z/OS application workloads. IBM introduced z Systems technology called Internal Shared Memory (ISM). The ISM technology allows one z/OS instance to directly access (share) virtual memory within another z/OS instance (e.g. LPAR or guest virtual machine) within the same physical machine. The ISM architecture enables direct memory access (DMA) capability for software exploitation.
With ISM, IBM also announced Shared Memory Communications – Direct Memory Access (SMC-D). SMC-D exploits ISM which enables applications to directly and transparently communicate with other applications executing in other z/OS instances running in other Logical Partitions on the same physical z13 System. The direct communications is provided transparently for applications using TCP sockets.
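Because SMC-D sits beneath the sockets layer, an ordinary TCP application needs no changes at all to benefit. As a minimal sketch (using Python's portable socket API purely for illustration, not a z/OS-specific interface), note that nothing in the code below refers to SMC; on a z/OS system with SMC enabled, the stack would transparently shift the data path after the TCP handshake:

```python
import socket
import threading

# An ordinary TCP sockets application: nothing here mentions SMC.
# With SMC-D (or SMC-R) enabled on both peers, the stack moves the
# data transfer to shared memory after connection setup; the
# application code is identical either way.

def echo_server(listener):
    conn, _ = listener.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)          # echo the payload back unchanged

def run_demo(payload=b"transaction"):
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("127.0.0.1", 0))  # ephemeral port
    listener.listen(1)
    server = threading.Thread(target=echo_server, args=(listener,))
    server.start()
    with socket.create_connection(listener.getsockname()) as client:
        client.sendall(payload)
        reply = client.recv(1024)
    server.join()
    listener.close()
    return reply

print(run_demo())  # b'transaction'
```

The point of the sketch is what is absent: no SMC-specific calls, options, or configuration appear in the application.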
Some history will help give perspective. Prior to ISM, z Systems provided a very efficient technology called HiperSockets. HiperSockets provides an internal logical LAN within z Systems, allowing operating systems to communicate using numerous protocols such as TCP/IP, UDP, SNA, etc. Communication with HiperSockets is accomplished by creating, exchanging, and processing standard IEEE 802.3 packets (frames) in software. HiperSockets provides a very efficient memory-to-memory transfer (of standard packets) without requiring physical networking hardware.
SMC-D with ISM goes beyond HiperSockets by eliminating all packets, along with all of the TCP/IP protocol and packet-related processing. SMC-D provides a direct socket-to-socket transfer of data. This model yields significant savings in host network processing, which translates to lower CPU cost, lower latency, and higher throughput.
In addition to HiperSockets, z/OS instances on the same CPC can communicate using other network technologies, such as Ethernet with the IBM OSA-Express family of adapters. While there are several options, HiperSockets has typically been the optimal choice. HiperSockets will continue to be an important technology (due to its versatility), but the benefits of SMC-D are compelling.
Shared Memory Communications architecture now has two variations:
- Shared Memory Communications – RDMA (SMC-R for cross platform using RoCE)
- Shared Memory Communications – DMA (SMC-D for same platform using ISM)
Both forms of SMC can be used concurrently. The protocol dynamically selects the appropriate variation based on proximity of the peer hosts (i.e. same CPC instances use SMC-D).
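The dynamic selection described above can be sketched as a simple preference order. This is an illustrative model only (the capability names and fields below are hypothetical, not the actual protocol exchange): peers that share SMC-D capability on the same CPC use SMC-D, peers that share SMC-R capability use SMC-R, and anything else falls back to plain TCP.

```python
# Illustrative-only model of SMC transport selection (not z/OS code).
# Each peer is described by a dict of hypothetical capability flags.

def select_transport(local, peer):
    """Prefer SMC-D, then SMC-R, then fall back to plain TCP."""
    if local["smcd"] and peer["smcd"] and local["cpc"] == peer["cpc"]:
        return "SMC-D"   # same CPC: direct memory access via ISM
    if local["smcr"] and peer["smcr"]:
        return "SMC-R"   # cross-CPC: RDMA via RoCE
    return "TCP"         # no common SMC capability: plain TCP

host_a = {"smcd": True, "smcr": True, "cpc": "CPC1"}
host_b = {"smcd": True, "smcr": True, "cpc": "CPC1"}
host_c = {"smcd": True, "smcr": True, "cpc": "CPC2"}

print(select_transport(host_a, host_b))  # SMC-D (same CPC)
print(select_transport(host_a, host_c))  # SMC-R (different CPCs)
```

Applications never see this decision; the stack makes it during connection setup.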
So what are the benefits of SMC-D? Benchmark results comparing the technologies have shown that SMC-D using ISM provides significant reductions in CPU cost and latency, along with gains in throughput. Here is a quick performance summary of request/response (transactional) and streaming (bulk) workloads, highlighting the differences when comparing SMC-D to HiperSockets:
- Request/Response Summary for Workloads with 1k/1k – 4k/4k Payloads:
- Latency: Up to 48% reduction in latency
- Throughput: Up to 91% increase in throughput
- CPU cost: Up to 47% reduction in network-related CPU cost
- Request/Response Summary for Workloads with 8k/8k – 32k/32k Payloads:
- Latency: Up to 82% reduction in latency
- Throughput: Up to 475% (~6x) increase in throughput
- CPU cost: Up to 82% reduction in network-related CPU cost
- Streaming Workload:
- Latency: Up to 89% reduction in latency
- Throughput: Up to 800% (~9x) increase in throughput
- CPU cost: Up to 89% reduction in network-related CPU cost
As you can see, the benefits of SMC-D with ISM are compelling. If you currently exploit HiperSockets, then the applicability of SMC-D is easy to evaluate. If you are not sure whether your environment has (or could have) the relevant z/OS network traffic patterns, you can evaluate your workload's network patterns using the SMC Applicability Tool (SMCAT).
With the potential for this type of savings it is easy to see how colocation of network intensive workloads on the IBM z13 or IBM z13s using SMC-D with ISM can make a difference.
Benchmark results shown here are from a controlled IBM internal lab using standard tools. Your actual results may vary. Performance information is provided “AS IS” and no warranties or guarantees are expressed or implied by IBM.
What's New in z/OS V2R2 Communications Server?
Throughout the journey to the new digital enterprise, z/OS network capability supports a fully featured Communications Server with integration of SNA and TCP/IP protocols, making the mainframe a large server capable of serving worldwide clients simultaneously. Through its unique design and qualities of service, z/OS Communications Server offers unmatched availability, scalability, and security to meet the emerging business challenges of cloud, data analytics, and the security demands of mobile and social applications.
This page provides an overview of selected enhancements that are provided by z/OS V2R2 Communications Server.
Capture the potential of the mobile enterprise via scalability, economics, and platform efficiency
In the new competitive market, it is essential for you to understand customer sentiment, analyze information for more targeted insights, conduct transactions with a mobile device, and serve customers across the globe. In z/OS V2R2, enhancements to Communications Server can help you reduce the time to respond, even more critical in the new mobile landscape. Communications Server delivers improved scalability and performance for outstanding throughput and service within your existing environment. Smarter scalability can better prepare you to handle growth and spikes in workloads while maintaining the qualities of service at the same time.
- Shared Memory Communications over RDMA adapter (RoCE) virtualization
The enhanced Communications Server support for RDMA over Converged Ethernet (RoCE), which is designed to reduce communications latency and lower CPU cost for many workloads, can now deliver improved economics with as many as 31 z/OS images sharing each RoCE adapter. It also supports automatically selecting between the TCP/IP and RoCE transport layer protocols based on traffic characteristics, and MTU sizes up to 4K for RoCE adapters.
This enhancement requires IBM z13 or later systems, and is also available with the PTF for APARs OA44576 and PI12223 on z/OS V2R1.
- SMC Applicability Tool (SMCAT)
In z/OS V2R2, the SMC Applicability Tool provides the capability to evaluate TCP/IP network traffic for potential applicability for exploiting SMC-R. SMCAT can be utilized without requiring enablement of the SMC-R function on any system or requiring any special hardware. You can use SMCAT to monitor a TCP/IP stack for a set of configured destination IP addresses or subnets, and to provide a report in the TCP/IP stack job log. The report provides details of the amount of TCP workload that can potentially use SMC-R if SMC-R is available.
This monitoring tool is also available on z/OS V2R1 with the PTF for APAR PI29165 and z/OS V1R13 with the PTF for APAR PI27252.
- 64-bit enablement of the TCP/IP stack
By enabling the TCP/IP stack and its strategic device drivers (including OSA-Express QDIO, HiperSockets, and RoCE) to utilize 64-bit (above the bar) storage, a substantial inhibitor to workload growth is relieved. These z/OS V2R2 Communications Server enhancements provide performance improvements and virtual storage constraint relief by significantly reducing ECSA use.
- Enhanced Enterprise Extender scalability
z/OS V2R2 Communications Server improves the scalability of Enterprise Extender connections. Internal optimizations are intended to improve performance for installations with thousands of Enterprise Extender connections per LPAR.
- Enhanced IKED scalability
z/OS V2R2 Communications Server provides increased scalability by improving the Internet Key Exchange daemon (IKED) to concurrently negotiate IPSec tunnels with a large number of remote IKE peers. This enhancement significantly reduces the amount of time needed to establish a large number of IPSec tunnels, while also reducing CPU utilization.
- Increase single stack DVIPA limit to 4096
z/OS V2R2 Communications Server supports an increased number of application-instance dynamic virtual IP addresses (DVIPAs) for a single TCP/IP stack, raising the previous limit of 1024 to 4096. With this enhancement, up to 4096 application instance DVIPAs that are defined by VIPARANGE statements can be defined on a single TCP/IP stack. This improves scalability within a Parallel Sysplex, particularly when the sysplex is operating with a smaller number of systems than usual, as might be the case during planned outages for one or more LPARs.
- VIPAROUTE fragmentation avoidance
z/OS V2R2 Communications Server enhances its support for VIPAROUTE by automatically adjusting the TCP Maximum Segment Size (MSS) for each IPv4 route to prevent fragmentation within the sysplex. This new support simplifies VIPAROUTE configuration and helps improve VIPAROUTE performance by eliminating packet fragmentation issues that can arise for some routes.
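The arithmetic behind this MSS adjustment is straightforward: a TCP segment avoids fragmentation when it fits, together with its headers, within the smallest MTU on the route. A simplified IPv4 sketch (assuming 20-byte IP and TCP headers with no options):

```python
# Simplified IPv4 MSS calculation: the MSS is the path MTU minus the
# IP and TCP headers (20 bytes each here, assuming no options).

IPV4_HEADER = 20   # bytes
TCP_HEADER  = 20   # bytes

def mss_for_route(path_mtu):
    """Largest TCP payload that fits in one IP datagram on this route."""
    return path_mtu - IPV4_HEADER - TCP_HEADER

# A segment sized for an 8992-byte jumbo-frame MTU would be fragmented
# on a route whose smallest link MTU is 1500; capping the MSS per route
# keeps every segment within one datagram.
print(mss_for_route(1500))  # 1460
print(mss_for_route(8992))  # 8952
```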
This enhancement is also available on z/OS V2R1 with the PTF for APAR PI39519.
- TCP autonomic tuning enhancements
z/OS V2R2 Communications Server offers new autonomic features to provide for smarter self-monitoring and tuning of the TCP/IP stack, with a focus on performance-related functions such as dynamic right sizing (DRS) and delayed acknowledgements (DELAYACKs). The enhancements are based on real-time data and can improve overall performance of TCP connections.
Today's enterprise environment accesses data from many untrusted network sources, such as from mobile devices, social computing sites, and new cloud environments. Therefore, security of critical information assets remains a top priority, including defending your networks, protecting your data, and authenticating users and business partners. z/OS V2R2 Communications Server can help you meet this security challenge by strengthening the use of z/OS as a secured networking hub that helps protect your most valuable information, and helps you to develop innovative applications while reducing operational risk.
- AT-TLS certificate processing enhancements
z/OS V2R2 Communications Server enhances Application Transparent Transport Layer Security (AT-TLS) to support new System SSL enhancements for OCSP (online certificate status protocol), CRL retrieval over HTTP and LDAP, and certificate validation as described by RFC 5280.
- TLS session reuse support for FTP and AT-TLS applications (AT-TLS)
With z/OS V2R2 enabling SSL sessions to be reused across different TCP ports, Communications Server provides FTP support that allows new data connections to reuse associated SSL sessions for better compatibility, security, and performance with compatible FTP servers and clients. This enhancement is available for both AT-TLS and native System SSL users of FTP.
- Simplified access permissions to ICSF cryptographic functions for IPSec
z/OS V2R2 Communications Server is enhanced to help simplify security configuration for IPSec. You are no longer required to permit all network applications that are sending or receiving IPSec protected traffic to the relevant SAF resources in the CSFSERV class. Only the user ID that is associated with the TCP/IP stack must be permitted to those SAF resource profiles.
- TCP/IP profile IP security filter enhancements
With z/OS Communications Server, you can define a set of limited default IP filters in the TCP/IP Profile to help you protect the TCP/IP stack during initialization before Policy Agent installs an IPSec policy. In z/OS V2R2, you can specify additional default filter parameters, including source and destination address ranges, and source and destination port ranges. This enhancement enables greater flexibility in configuring the default filter rules.
Simplification and usability
IBM continues to simplify z/OS administration and management, and extends the reach of your existing skills. By improving administrative ease, the Configuration Assistant for z/OS Communications Server can help your company gain quality and productivity improvements while reducing opportunities for error.
- TCP/IP stack configuration with Configuration Assistant for z/OS Communications Server
As a valuable tool for configuring policy-based networking functions such as AT-TLS, IPSec, and Intrusion Detection Services, z/OS V2R2 Communications Server further extends its functions in Configuration Assistant. With an entirely new discipline, you can now configure TCP/IP profiles with an integrated graphical interface and wizard-driven help. These new functions, which build on existing capabilities for the policy agent, can make it faster and easier to create and maintain TCP/IP configurations.
With the PTFs for APARs PI66143 and PI63449, Configuration Assistant also provides a function to import existing TCP/IP profile data.
Availability and business resilience
- Activate Resolver trace without restarting applications
z/OS Communications Server includes a Trace Resolver function to provide a variety of diagnostic information that can be used by application programmers and network administrators. In z/OS V2R2, Communications Server provides a new component trace (CTRACE) option to capture the same information recorded by the Trace Resolver in CTRACE records, and to view formatted trace data using IPCS. With this new function, you can dynamically enable and disable tracing without an application restart.
- Reordering of cached resolver results
On systems where the system resolver cache has been implemented, z/OS V2R2 Communications Server can help improve load balancing by allowing you to request system-wide round-robin reordering of the IP address lists associated with each cached host name.
Standards and statements of direction
- z/OS V2R2 Communications Server supports a number of capabilities intended to meet the requirements of the United States National Institute of Standards and Technology (NIST) Special Publication SP800-131A.
- IBM plans to further extend the capabilities of the Configuration Assistant for z/OS Communications Server, a plug-in for z/OSMF, in z/OS V2R2. Additional planned enhancements will be designed to support making dynamic configuration changes to an active TCP/IP configuration.
- z/OS V2R2 is planned to be the last release to include the Trivial File Transfer Protocol Daemon (TFTPD) function in z/OS Communications Server.
- As previously announced in Hardware Announcement 114-009, dated February 24, 2014, the Simple Mail Transport Protocol Network Job Entry (SMTPD NJE) Mail Gateway and Sendmail mail transports are planned to be removed from z/OS. IBM now plans for z/OS V2R2 to be the last release to include these functions. If you use the SMTPD NJE Gateway to send mail, IBM recommends that you use the existing CSSMTP SMTP NJE Mail Gateway instead. In that same announcement, IBM announced plans to provide a replacement program for the Sendmail client that would not require programming changes. Those plans have changed, and IBM now plans to provide a compatible subset of functions for Sendmail in the replacement program and to announce those functions in the future. Programming changes or alternative solutions to currently provided Sendmail functions might be required. No replacement function is planned in z/OS Communications Server to support using SMTPD or Sendmail as a (SMTP) server for receiving mail for delivery to local TSO/E or z/OS UNIX System Services user mailboxes, or for forwarding mail to other destinations.
- To help you plan for migration to CSSMTP functions for sending SMTP mail and to other solutions for receiving SMTP mail, z/OS V2R2 Communications Server includes migration health checks designed to help you determine whether the mail functions planned to be withdrawn are in use. Also, z/OS V2R2 Communications Server provides a test mode for CSSMTP along with a utility program that copies JES email job output to both CSSMTP and SMTPD, allowing the two daemons to be run simultaneously. When run in this mode, CSSMTP is designed only to log errors while SMTPD continues to serve as the production mail program.
For more information about what's new in z/OS V2R2 Communications Server, see z/OS V2R2 Communications Server: New Function Summary and z/OS Communication Server V2R2 New Function APAR Summary.
All statements regarding IBM's plan, directions, and intent are subject to change or withdrawal without notice.
As Jerry Stevens wrote in his March 8 blog post, the IBM System z13™ and z13s™ introduced an exciting new technology called Internal Shared Memory (ISM) which allows one z/OS instance to directly access (share) virtual memory within another z/OS instance (e.g. LPAR or guest virtual machine) within the same physical machine via DMA. At the same time, z/OS Communications Server introduced Shared Memory Communications – Direct Memory Access (SMC-D), which uses ISM so that TCP sockets applications can directly and transparently communicate with applications executing in other z/OS instances running in other Logical Partitions on the same physical z13 System. Put simply, SMC-D provides SMC-R semantics within a CPC, essentially by substituting RDMA and RoCE with DMA using ISM. (Check out Jerry's blog post for a more complete description.)
Both of the SMC technologies can provide some serious CPU reduction and equally serious throughput boosts. But what about security? How do SMC-D and SMC-R fit in with existing security domains like VLANs? What sort of isolation is available across different RoCE and ISM interfaces? And what happens to all of those security features in Communications Server when the vast majority of the application data is being passed between the communication peers using some form of DMA? We're talking about features like:
- SAF-based access controls like PORTACCESS and NETACCESS
- IP packet filtering
- Cryptographic security protocols like TLS/SSL, SSH and IPsec
- Intrusion Detection Services
How do those security features play (or not play) together with SMC-R and SMC-D?
The short answer is generally "just fine." The main reason for this happy coexistence is that SMC technologies preserve the TCP semantics for managing the sessions -- again, SMC is completely transparent to the TCP applications programs. Since many of the Communications Server's security features operate on or within some aspect of the TCP session semantics, they can continue to operate as usual. Of course, the TCP/IP stack has a little more to keep track of, and has to ensure that when a significant change occurs in the state of the TCP session (for example, access controls change on the fly, preventing access to a port that was previously permitted), the stack needs to reflect that same change on the related SMC-R or SMC-D session. But again, all of this is transparent to the applications.
For the complete security story around Shared Memory Communications, check out this newly revised white paper. Originally published after V2R1 to explain the SMC-R security considerations, this paper has now been expanded to cover SMC-D as well.
So remember, colocation, colocation, colocation, but also make sure you lock the doors and engage that security system!
The Winter 2016 SHARE Conference was a great educational event, with a wealth of excellent customer interaction! Six speakers from the Enterprise Networking Solutions organization presented 13 sessions on z/OS Communications Server, three on ISPF, and one on IBM Multi-Site Workload Lifeline, including two-part hands-on labs for the Configuration Assistant for z/OSMF and the ISPF editor. We also participated in a panel session where our attendees brought in plenty of great questions for discussion.
Given the recent availability of z/OS V2R2, there was a focus on the new release, with special attention given to the new Shared Memory Communications - Direct Memory Access (SMC-D) protocol. Other topics discussed included Enterprise Extender, sysplex technologies, network security, z/OS CS performance, FTP security, and z/OS CS storage usage. Attendance at our sessions (and across the board at SHARE in San Antonio) was very good, and we would like to thank all of you who attended for the great feedback and dialogue.
For those that couldn't be at the conference last week, I will remind you that you can download most of the charts for the topics we presented by going to the following link:
Please plan to join us for the Summer 2016 SHARE Conference in Atlanta, Georgia, July 31 - August 5, 2016.
In our first DevOps blog we talked about how we began our journey and touched briefly on the three main pieces of a DevOps movement: Culture, Process, Tools.
You may ask how we got started with changing the culture. And how do we know it's working?
What if I told you that if you carve out two hours a week for the next month to work on automating a task that you'd then free up 4 hours a month for yourself and 3 of your co-workers? Would you do it?
At first you have to realize that any change will require people doing some work. With the workloads we all carry these days we have to evaluate everything we do today for its priority and usefulness, always looking at our return on investment. Can we spend 10 hours over the next month to make a process better so that we save 5 people 4 hours every month for the foreseeable future? What's the trade off for those original 10 hours? Can we push out a deliverable date by a month in order to spend those hours?
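The trade-off in the questions above is simple break-even arithmetic. As a sketch (using the numbers from the example in the text):

```python
# Break-even calculation for investing time in automation:
# how many months until the one-time investment pays for itself?

def payback_months(invest_hours, people, hours_saved_per_person_per_month):
    monthly_savings = people * hours_saved_per_person_per_month
    return invest_hours / monthly_savings

# Spend 10 hours once so that 5 people each save 4 hours every month:
print(payback_months(10, 5, 4))  # 0.5 -> the work pays for itself in two weeks
```

After the payback point, every subsequent month is pure savings, which is why even modest automation investments tend to be worthwhile.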
We began by asking ourselves questions about pain points and areas of frustration. And what we found is that at the heart of the beginning of the DevOps transformation is the culture change. How do we stop doing the tasks that no longer matter? How do we evaluate what really needs to be done? Can we push lower priority work out some to give us time to change the processes that are time consuming? How do we change our mind set as an organization?
These are the kinds of changes in our thought process that have to be fostered in order to make DevOps work. It's not easy and it takes time to get everyone on board with shifting our focus and thinking in a DevOps way.
How do you get the buy in to change the culture? It's pretty simple: Prove it.
You make a small change that saves an hour of time or reduces frustration which proves that the time spent to evaluate and change the process is, in fact, worthwhile and provides real benefits.
- One of the first things we did was take a deep dive into a process that had 140 steps and reduce it to 81. This had been a widely shared pain point, so while the change was fairly simple, it proved management's commitment and showed real benefits.
- Another pain point was the upgrade of a tool we all use and are completely dependent upon. We had to upgrade the tool, but in the past the upgrade had taken over 4 hours per user, and the process was clunky and error-prone. One person was assigned the task of creating an automated script that reduced the process to 30 minutes. Management worked to free up cycles to allow him to spend two days on the script, which saved everyone else in the organization 4 hours each.
- We have another tool that is used by a large portion of our organization that takes nearly 30 seconds to complete a transaction. This has been a pain point for several years that users have been just dealing with in (mostly) silence. We have a small task force that is spending approximately an hour a week to find a solution to this problem. Once it is resolved we know it will save users time and a lot of frustration.
These were relatively small pain points, but they are helping to change the culture by establishing credibility that long-standing pain points can change and people will see real benefits from the work put into transformation. We have other items that are larger which we are working to automate that will save people time as well as increase quality.
We have opened the door to change and as an organization we are saying to ourselves "we don't have to do a task just because it's always been done". When people realize that the leaders of your organization are serious about letting the people who do the work change the work as they see fit, it empowers a person to look at every process and procedure in a new light.
That says to me that it's working.
The Summer 2014 SHARE Conference is in Pittsburgh, Pennsylvania next week (August 4th-8th). As always, there will be a good selection of content focused on z/OS Communications Server, including the following sessions from six of our team here in Research Triangle Park, NC:
Introduction to z/OS Communications Server
z/OS Communications Server Technical Update, Part 1
z/OS Communications Server Technical Update, Part 2
z/OS V2R1 CS: Shared Memory Communications - RDMA (SMC-R), Part 1
z/OS V2R1 CS: Shared Memory Communications - RDMA (SMC-R), Part 2
z/OS V2R1 CS Performance Update
Sysplex Networking Technologies and Considerations
Leveraging z/OS Communications Server Applications Transparent Transport Layer Security (AT-TLS) for a Lower Cost and More Rapid TLS Deployment
z/OS Communications Server Intrusion Detection Services
Enterprise Extender on z/OS Communications Server: SNA Hints and Tips
Change is Coming: Motivation and Considerations for Migrating from SMTPD/Sendmail to CSSMTP
z/OS Communications Server Hints and Tips
Additionally, Kim Bailey from our group will be presenting the following ISPF topics:
ISPF Hidden Treasures and New Features, Part 1
ISPF Hidden Treasures and New Features, Part 2
ISPF Editor - Beyond the Basics Hands-on Lab, Part 1
ISPF Editor - Beyond the Basics Hands-on Lab, Part 2
Last, but not least, Kim Bailey and Lin Overby will be assisting with the "z/OSMF Hands-on Labs - Choose Your Own" that run three times during SHARE week, and where you can pick from several topics, including the z/OSMF Configuration Assistant for z/OS Communications Server.
We hope to see you there! For those that can’t join us, I’ll be tweeting (IBM_Commserver on Twitter) and posting updates to Facebook (Facebook.com/IBMCommserver) throughout the week.
The Winter 2016 SHARE Conference is in San Antonio, Texas next week (February 29th - March 4th). As always, there will be a good selection of content focused on z/OS Communications Server, including the following sessions from six of our team here in Research Triangle Park, NC:
- z/OS V2R2 Communications Server Technical Update, Part 1 of 2 (Gus Kassimis and Sam Reynolds)
- z/OS V2R2 Communications Server Technical Update, Part 2 of 2 (Gus Kassimis and Sam Reynolds)
- Shared Memory Communications over RDMA (SMC-R) - Optimized TCP communications over Ethernet (Gus Kassimis)
- New Shared Memory Communications protocol - Direct Memory Access (SMC-D) - Going beyond HiperSockets (Gus Kassimis)
- z/OS Communications Server Performance: Updates and Recommendations (Dave Herr)
- Sysplex and Network Technologies and Considerations (Gus Kassimis)
- Understanding z/OS Communication Server Storage Usage (Mike Fitzpatrick)
- z/OS Communications Server Network Security Overview (Lin Overby)
- z/OS Communications Server Intrusion Detection Services (Lin Overby)
- Safe and Secure Transfers with z/OS FTP (Lin Overby and Sam Reynolds)
- Enterprise Extender on z/OS Communications Server: SNA Hints and Tips (Sam Reynolds)
- TCP/IP Stack Configuration with Configuration Assistant for z/OS V2R2 CS: Hands-on Lab Part 1 of 2 (Mike Fox)
- TCP/IP Stack Configuration with Configuration Assistant for z/OS V2R2 CS: Hands-on Lab Part 2 of 2 (Mike Fox)
- Enabling Continuous Availability and Reducing Downtime with IBM Multi-Site Workload Lifeline (Mike Fitzpatrick)
Also, there will be a panel session for open discussion of mainframe networking topics:
- z/OS Communications Server Free-for-All (Matthias Burkhard, Mike Fitzpatrick, Dave Herr, Gus Kassimis, Lin Overby, and Sam Reynolds)
Lastly, I will be presenting the following ISPF topics:
- ISPF Hidden Treasures and New z/OS 2.2 Features
- ISPF Editor - Beyond the Basics Hands-on Lab, Part 1 of 2 (with Tom Conley and Liam Doherty)
- ISPF Editor - Beyond the Basics Hands-on Lab, Part 2 of 2 (with Tom Conley and Liam Doherty)
We hope to see you there! For those that can’t join us, I’ll be tweeting (IBM_Commserver on Twitter) and posting updates to Facebook (Facebook.com/IBMCommserver) throughout the week.