Predicting the direction of the computer industry is always fun but risky, so in 2010 I picked ten technologies and let hundreds of people vote. Here are the top ten technologies and the voting results.

Here are the selected top ten technologies that were new in 2010 and that I predicted might become "business as usual" within 5 to 10 years.

Power Systems already deliver the following technology - but what next?
- Reliability, Availability, Serviceability
- From the guys that do mainframes
- Scaling for consolidation of hundreds of workloads
- Powerful POWER processors
- Power 795 - 256 core with SMT=4
- Virtualisation
- Virtual CPU with high utilisation
- Virtual Disks with reduced costs, hardware independence
- Virtual Network with reduced costs, hardware independence
- Virtual Optical & Tape for flexibility
In no particular order, the ten technologies:
FCoCEE = Fibre Channel over Converged Enhanced Ethernet
Merging Fibre Channel and Network packets on one fabric.

We are running two networks that both encapsulate data in a packet, with details of the destination, on a cable with switches to direct traffic. We don't need both: that adds complexity and cost to every computer, plus two teams of administrators to manage the network and the SAN.
For:
- Reduced costs
- 50% fewer adapters, PCIe slots & remote I/O drawers
- 50% fewer cables & switch ports
- Less floor space, less electricity & less heat
- Higher RAS - 50% fewer failures
Against:
- Switches cost ~20% more than 10Gb + 8Gb
- Infrastructure changes happen slowly
- Politics & interdepartmental fighting!
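The 50% figure above is simple arithmetic: with a converged fabric each server needs one set of converged adapters instead of one Ethernet set plus one Fibre Channel set. A minimal sketch of the counting (the per-server adapter counts are illustrative assumptions, not figures from this article):

```python
# Illustrative adapter/cable/switch-port count for N servers, before and
# after converging Ethernet + Fibre Channel onto one FCoCEE fabric.
# "2 adapters per fabric per server" is an assumption (dual ports for
# redundancy), not a number from the article.

def infrastructure_count(servers, adapters_per_fabric=2, fabrics=2):
    """Total adapters (and matching cables & switch ports) needed."""
    return servers * adapters_per_fabric * fabrics

before = infrastructure_count(100, fabrics=2)  # separate LAN + SAN fabrics
after = infrastructure_count(100, fabrics=1)   # one converged fabric
saving = 100 * (before - after) // before
print(before, after, f"{saving}% fewer")       # 400 200 50% fewer
```

The same halving applies to cables, switch ports and the failure rate of that hardware, which is where the RAS claim comes from.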
Solid State Drives
SSDs are fast
- 150 to 250 times faster than a hard disk
- Fairly expensive, but the price trend is downward
Three ways of connecting them:
- 1 Within SAN disk subsystem
- New option in the “fancy” SAN storage sub-systems
- Speed & density fits between RAM cache & disks
- Main advantage = SAN aids availability
- 2 SAS 2.5 or 3.5 inch hard disk drive replacement
- Main "issue": it is too fast! SAS controller saturation
- Size = 69 GB, supported on POWER6 & POWER7

- 3 PCIe adapter with SSD on-board storage

For:
- Extremely fast I/O
- Low electrical power and not getting hot like a spinning disk
- Use for "hot" data like critical database tables, sort area, temporary area
- Use for "money is not an issue" problems
- Will not go round corners! (A Star Trek joke: the gyroscopic forces of spinning disks would stop a star-ship from changing direction.)
Against:
- Expensive but already reducing as sizes rise
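The "150 to 250 times faster" claim is about random I/O operations per second, where a hard disk is limited by seek and rotational latency. A rough sketch using ballpark 2010-era figures that are my assumptions (about 200 IOPS for an enterprise hard disk, roughly 30,000 to 50,000 for an SSD):

```python
# Rough random-I/O comparison; the IOPS figures are ballpark assumptions:
#   enterprise 15k RPM hard disk: ~200 IOPS (seek + rotation bound)
#   enterprise SSD:               ~30,000-50,000 IOPS (no moving parts)
HDD_IOPS = 200
SSD_IOPS_LOW, SSD_IOPS_HIGH = 30_000, 50_000

speedup_low = SSD_IOPS_LOW // HDD_IOPS    # -> 150
speedup_high = SSD_IOPS_HIGH // HDD_IOPS  # -> 250
print(f"SSD is roughly {speedup_low}x to {speedup_high}x faster "
      f"on random I/O")
```

This is also why "hot" data such as busy database tables and sort/temporary areas benefit most: they are dominated by small random I/O, not sequential streaming.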
Active Memory Expansion (AME)
- POWER7 & AIX 6 TL04 SP2 or later
- Activation Key by machine or purchased with the server
- Set at LPAR level & dynamic Expansion Factor
- Memory shrinks (releasing RAM for other workloads) or memory grows (for improved performance)
- Tradeoff: more memory but some CPU use
- Watch Nigel's YouTube video or see the AIXpert Blog on AME
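The expansion-factor tradeoff can be sketched numerically: with an expansion factor of, say, 1.5, an LPAR with 64 GB of real memory appears to have 96 GB, at the cost of CPU cycles spent compressing and decompressing pages. The numbers below are illustrative assumptions; the real CPU cost depends on the workload (the AIX `amepat` tool estimates it):

```python
# Active Memory Expansion: effective memory = real memory * expansion factor.
# AIX compresses less-used pages so the LPAR looks bigger than it is.

def ame_effective_memory_gb(real_gb: float, expansion_factor: float) -> float:
    """Memory the LPAR appears to have under AME (factor >= 1.0)."""
    if expansion_factor < 1.0:
        raise ValueError("expansion factor must be >= 1.0")
    return real_gb * expansion_factor

# 64 GB of real RAM with a 1.5 factor looks like 96 GB; equivalently, the
# same workload could release a third of its real RAM to other LPARs.
print(ame_effective_memory_gb(64, 1.5))  # 96.0
```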
For:
- Simple to operate
- No-brainer performance improvements
- Fight the Java bloat
- Large benefits for RAM limited applications
Against:
Automatic Cluster Load Balancing
I would argue
- No IBM client should buy a single computer from IBM
- Nor just 2 computers for high availability
- IBM clients should buy six servers at a time!
OK, perhaps four growing up to six, or multiple "six packs" (that term has a nice ring to it).

Think about operating a cluster of servers and not just one:
- Zero single point of failure
- Smaller boxes are cheaper
- Plenty of HA options
- Islands of compute cycles
- Good for isolation
- Bad for hot spots
- Ugly if hot spots move around
- So flow workload across the cluster
- Need to flow to new servers too
- You already have this technology . . .
Live Partition Mobility since 2007
- Field tested and approved
- Dumb to manually monitor & move = man-power intensive
Systems Director + VMControl Systems Pools*
- System Pool fancy name for cluster!
- Monitor across the pool
- Automated LPM to level workload
- Optional: Recommend & ask permission mode
- Note: from 2020 - this function was taken over by PowerVC
For:
- Think Clusters
- Cost-effective HW purchase
- We have the proven tools to load balance
- Maximising investment & flexibility from day 1
Against:
- LPM pre-reqs
- Software costs, but these are relatively low
Machine Evacuation for Maintenance
Maintenance like:
- Adding CPUs or memory
- Replacing parts
- Updating system firmware
I have performed a "hot node add" to my POWER7 based Power 770 with near zero training and a Principal Engineer watching.
Is there a better way to do CEC Hot Node Add & Repair Maintenance (CHARM), with:
- No risks of a hardware accident or incident causing an outage
- No risks of config change requiring a power off
- No need for disruptive firmware upgrades before the non-disruptive upgrade.
Go Five-Pack
If you remove one node from a six pack, you still have 83% performance
- Live Partition Mobility work to other nodes
- Remove complete node
- IBM Service Representative can take his or her time
- Add improved node back
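The 83% figure is just (N-1)/N for a six-pack of equally sized nodes. A quick sketch of the capacity left while one node is evacuated for maintenance, and how a six-pack compares with a simple HA pair:

```python
# Remaining cluster capacity while nodes of an N-node "pack" are
# evacuated for maintenance (all nodes assumed equal size).

def remaining_capacity_pct(nodes: int, removed: int = 1) -> int:
    """Percentage of total capacity left after evacuating `removed` nodes."""
    if removed >= nodes:
        raise ValueError("cannot evacuate every node at once")
    return round(100 * (nodes - removed) / nodes)

print(remaining_capacity_pct(6))  # six-pack minus one node -> 83
print(remaining_capacity_pct(2))  # an HA pair loses half    -> 50
```

This is the core of the "six pack" argument: the more, smaller nodes you have, the less taking one out for maintenance hurts.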
Live Partition Mobility since 2007
- Field tested and approved
Systems Director + VMControl Systems Pools*
- Define "empty node X" plan
- Execute the plan - it automatically decides where
- Allows hassle free maintenance
- VMControl decides load balancing reuse
- Can be used for weekend powering off of the servers
For:
- Simple and zero risk maintenance
Against:
- LPM prerequisites
- Software costs, though relatively low
Active Energy Manager (AEM)
A Systems Director plug-in - the oldest and cheapest!
Four reasons
- Less electricity - Near your building maximum?
- Reduced costs - €$£
- Green credentials
- Save the planet + Annual Report
- POWER Server Over-clocking
- Serious “Street credibility”
- EnergyScale monitoring allows higher GHz
- Payback in 6 months (depends on the Power model)
For:
- No brown out, €$£, Green, Over-clock
Against:
GHz Barrier
Different chips hit different frequency barriers:
Technology barrier by platform:

| Vendor / Platform | Chip | GHz used in servers |
| --- | --- | --- |
| IBM | POWER | ~ 4 |
| AMD / Intel | AMD / x86_64 | 3 -> 2.5 |
| Sun / Oracle | Sparc | 3 |
| HP | Itanium | 2 |
The conclusion is that performance gains come from having more cores & threads - not higher GHz.

Worst case is the “single thread application”
- If you have one, start planning a change of vendor - that vendor is doomed, and you can quote me on that.
- Note: Symmetric Multi Processing (SMP) arrived in 1994
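The point can be put as one line of arithmetic: throughput scales roughly with (cores used) x (per-core speed), so a chip that adds cores at flat GHz still gets faster for multithreaded work, while a single-threaded application only sees the flat per-core term. A sketch with illustrative numbers (my assumptions, not benchmark data):

```python
# Simple throughput model: parallel work scales with cores, single-threaded
# work only with per-core speed (GHz). All numbers are illustrative.

def relative_throughput(cores: int, ghz: float, threads_used: int) -> float:
    """Work per unit time, arbitrary units, ignoring memory effects."""
    return min(cores, threads_used) * ghz

old_chip = relative_throughput(cores=2, ghz=3.0, threads_used=64)  # 6.0
new_chip = relative_throughput(cores=8, ghz=3.0, threads_used=64)  # 24.0
single = relative_throughput(cores=8, ghz=3.0, threads_used=1)     # 3.0

print(new_chip / old_chip)  # 4.0: parallel work 4x faster at the same GHz
print(single)               # 3.0: single-threaded work does not move at all
```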
For:
- It has already happened and is not optional.
Against:
- Being stuck with old single-threaded applications that will never run faster is a dead end.
Versioned WPARs for AIX 5.2
Micro AIX running inside AIX since 2008
Standard WPAR Reminder:
- Application encapsulated in each WPAR
- WPARs share the AIX 6 + AIX 7 kernel; default memory is just 60 MB
- Private file systems /, /tmp, /home & /var plus read-only /usr & /opt
- Network alias for each WPAR
- Resource constraints per WPAR
- Live Application Mobility jumps a WPAR to another global AIX
AIX 5.2
- Came out in 2002
- Functionally stabilised 2007
- Last fix in 2008
- Runs on AIX 7 with POWER7
- Separate LPP = £$€
For:
- Get AIX 5.2 supported
- Ditch ancient hardware, free up space
- Lower HW maintenance
- Move up to the virtual shared everything world
- High performance
Against:
- Small Cost
- Application support
Prebuilt Solutions
Reason:
- IBM does the integration & not thousands of customers
- Sold as whole package not parts
- Upgradeable as workloads grow
For:
- Lower costs by reducing man-hours
- Speed: solution implemented in days
- Higher utilisation of resources
- Managed Solution (not sum of the parts)
Against:
Systems Director

One tool to rule them all!
Higher function plug-ins covered in other items
- Active Energy Manager (AEM)
- VMControl Cluster Load Balancing & Evacuation
- Workload Partition Manager (WPAR)
Systems Director Base
- Inventory & Topology views for relationships
- Automation Plans for self healing
- Automated Updates for HMC, Firmware, AIX, VIOS
R E S U L T S
Total voters came to 630
- Technical IBMers - 40 voters
- Customer Technical Universities: 300 voters from the USA conference and 90 voters from the French conference
- AIX Virtual User Group session - 200 voters from the USA and worldwide
They all got votes so they are all winners.
In order:
- Solid State Drives
- In 2020, this proved correct. SSD and Flash technology dominate high performance servers and is used widely.
- The predicted larger SSD units and lower costs came true.
- Fibre Channel over Converged Enhanced Ethernet = FCoCEE
- In 2020 a complete failure to get adopted regardless of the obvious costs savings.
- The comment on tricky computer room politics came true.
- The SAN and network teams refused to co-operate to make massive cost savings and increase the reliability of large servers, specifically RDBMS servers.
- Pre-Built Solutions
- Pre-built solutions are happening, particularly for new emerging workloads like SAP HANA, Artificial Intelligence, Hyper-Converged Nutanix, and Red Hat OpenShift / Containers.
- AIX Active Memory Expansion
- Doubles your effective memory at low cost.
- A popular feature, now with 10 times better "bang per buck" as server memory sizes have grown since 2010.
The next four items were all Systems Director functions (the yellow bars in the results chart), but the Systems Director project was cancelled.
- 5 Systems Director base functions like inventory, monitoring and alerting worked well; there is no direct replacement from IBM. AIX real-time alerts (which are part of PowerSC) perform the alerting function.
- 6 Systems Director Active Energy Manager - no direct replacement is available for this function. The HMC REST API can do some monitoring.
- 7 Systems Director Cluster Balancing - this function is available with PowerVC
- 8 Systems Director Cluster Evacuation - this function is available with PowerVC and also, the Lab Services LPAR Manager offering.
9. Ignore GHz barrier
- In retrospect, this was always going to happen and is not optional. The frequency barrier was not a good category.
- Note: applications that are single-threaded have no place in a modern computer room.
10. Versioned Workload Partitions (vWPAR)
- These were popular, allowing clients to keep running AIX 5.2 and 5.3 for years after the natural end of those operating system releases, by using vWPAR technology.
- WPAR is still used by many clients with great success - ironically, the industry preferred the lower-tech Docker and containers. In an interesting twist of fate, AIX WPAR technology can now be managed by Docker and container management software!
- - - The End - - -
Other places to find content from Nigel Griffiths IBM (retired)