Continuing coverage of the [Systems Technical University 2014] conference, we had an early morning awards ceremony to celebrate the top sellers who led big wins in Europe for FlashSystem, XIV, Power Systems, and PureSystems.
Afterwards, there were several breakout sessions on day 2.
- Storage Technology Futures -- fresh from IBM research labs, tomorrow in your datacenter
Axel Koester presented several projects from IBM Research labs that have contributed to actual products, including the incredible scalability of [PERCS] that was incorporated into IBM General Parallel File System (GPFS).
- Cloud Storage and Active Cloud Engine
My presentation started off explaining the taxonomy of cloud storage. There are basically four kinds of cloud storage: persistent storage, ephemeral storage, hosted storage, and reference storage. Each of these has unique access patterns and service level requirements.
IBM has three distinct cloud storage offerings, so I covered IBM XIV Storage Systems, SONAS and Storwize V7000 Unified with Active Cloud Engine, and Linear Tape File System (LTFS) Enterprise Edition (LTFS-EE).
- FlashSystem competitive overview
Henrik Wilken provided an excellent presentation comparing IBM FlashSystem to the dozen or more competitors that offer all-flash or hybrid flash-and-disk combinations.
- IBM Tivoli Storage Productivity Center
From 2001 to 2003, I was the chief architect for what is now called Tivoli Storage Productivity Center. It continues to be the most-requested topic for briefings at the IBM Tucson Executive Briefing Center.
I presented an overview of Tivoli Storage Productivity Center, with a brief update on what's new in TPC 5.2.1 and the SmartCloud Virtual Storage Center v5.2.1 releases.
- IBM Archive Storage Solutions - Data Retention for Government Compliance and Industry Regulations
I can't believe it has been nine years since I was on the Product Development Team for the IBM DR550 Data Retention storage solution!
In this session, I explained the lessons we learned from the DR550, its successor the Information Archive, and how we now position System Storage Archive Manager (SSAM) software as their replacement. SSAM was recently certified by KPMG to meet a variety of US, European and International laws.
technorati tags: IBM, GPFS, Axel Koester, PERCS, XIV, SONAS, Storwize V7000 Unified, Linear Tape File System, LTFS, LTFS-EE, Henrik Wilken, Tivoli Storage Productivity Center, TPC, SmartCloud, Virtual Storage Center, VSC, DR550, Information Archive, SSAM, KPMG
Continuing coverage of the [Systems Technical University 2014] conference, we had several breakout sessions on day 1.
- IBM Smarter Storage Strategy
I presented IBM's Smarter Storage Strategy. This is focused on three key areas:
- Data-intensive Solutions. Storage is needed for Big Data analytics. IBM is focused on efficiency in all dimensions: capacity efficiency with data footprint reduction techniques, energy efficiency, administrator efficiency with ease-of-use interfaces, and reduced complexity.
- Business-critical workloads. Storage needs to allow business to prioritize which applications and workloads are most critical, and automate Quality of Service (QoS) for each application based on its business importance. The result is a balance between performance and cost across the spectrum of applications.
- Start quickly and add value. IBM is committed to support private, hybrid and public cloud deployments. Storage needs to support not just VMware, but also Hyper-V, KVM, PowerVM and z/VM. That is why IBM is a platinum sponsor for the OpenStack foundation.
- Demystifying OpenStack
Eric Aquaronne presented an excellent session on OpenStack foundation, an open source collaboration of various companies to bring a consistent Cloud-management standard across compute, storage and network resources.
- Replication for Business Continuity and Disaster Recovery
I have been involved with Business Continuity and Disaster Recovery my entire 28-year career at IBM System Storage, so when I was asked to cover BC/DR in 75 minutes, I focused just on aspects related to disk-to-disk replication.
I divided the presentation into three sections:
- Business priorities. You need to prioritize which business processes are most important, and prioritize your recovery accordingly.
- Technical implementation. Once priorities are set, there are seven "Business Continuity Tiers" to choose from. BC Tier 1 is the least expensive, recovering from physical tapes stored in an off-site vault. The fastest recovery is BC Tier 7, which automates the storage, server and network fail-over to a secondary site in as little as 30 minutes.
- Ongoing management. Just setting up a BC/DR implementation is not enough. It needs to be monitored to ensure that it continues to provide the protection you expect. BC/DR exercises should be performed one or more times per year to ensure that everyone has the skills and procedures documented to succeed in the event of a real disaster.
Of these seven BC tiers, BC Tier 6 is focused on storage replication, such as Metro or Global mirror available on our DS8000, XIV Storage System, SONAS and SAN Volume Controller. BC Tier 7 involves system automation, such as Tivoli Distributed Disaster Recovery Manager and GDPS.
- What is Big Data? Architectures and Practical Use Cases
This session was an expanded version of the one I gave in Belgium last year. Big Data is a big topic, and there are a variety of "big data" related sessions at this conference. I focused on three key areas:
- The change in the role of Storage Administrator. In the past, most of the data was structured and stored in databases, managed by database administrators. However, in today's environment, over 80 percent of the data is unstructured, outside of traditional relational databases, so either the database administrators need to learn new skills, or storage administrators will need to step up and help manage this unstructured data content.
- The change in the role of Business Analyst. We are no longer just looking at the financial consequences of patterns and trends. The new role of Data Scientist needs to apply statistical models, show some business acumen, and be able to "tell a story" that is supported by the data when communicating findings to Business and IT leaders.
- The change in the role of Decision Maker. In the past, Decision Support Systems were available only to the top-level business executives. Now, empowered employees have access to real-time analytics that can help them make decisions and take immediate actions.
This session packed the house, with standing room only. I would like to offer a special thanks to IBM VP Bob Sutor, Stephen Brodsky, Linton Ward, and Ralph McMullen in helping me finalize my presentation.
This is shaping up to be an awesome conference!
technorati tags: IBM, #ibmtechu, Smarter Storage Strategy, Data-intensive, Business-critical, QoS, VMware, Hyper-V, KVM, PowerVM, z/VM, OpenStack Foundation, Business Continuity, Disaster Recovery, BC/DR, Big Data, storage administrator, DBA, Business Analyst, Data Scientist, Decision Maker, Empowered Employee, Bob Sutor, Stephen Brodsky, Linton Ward, Ralph McMullen
The first official day of the [Systems Technical University 2014] conference had keynote sessions in the morning. The conference features experts from IBM Power Systems, IBM System x, IBM PureSystems, and IBM System Storage.
The keynote sessions were started with Amy Purdy, IBM Director of Technical Training Services, the group that is running this conference.
Amy offered a special [Happy 50th Birthday to the IBM System zEnterprise mainframe]. Fifty years ago this week, [IBM announced its famous S/360] mainframe, which helped raise IBM's revenues from $3.6 billion USD in 1965 to $8.3 billion in 1971.
This conference is not focused on System z solutions, as many of the System z clients were in New York City for the birthday event, but the mainframe came up several times during the keynote sessions.
(FTC Disclosure: I work for IBM, and this blog post may be considered a paid, celebrity endorsement of IBM products and services. IBM has business relationships with both Intel and Amazon, which are mentioned during the course of the keynote sessions, but I have no financial stake in either company. I was the chief architect for DFSMS, the storage management component of the z/OS mainframe operating system, and was part of the team that ported Linux to the System z mainframe.)
Nicolas Sekkaki, IBM Vice President of Systems and Technology Group in Europe, discussed IBM's commitment to clients' privacy, the x86 and POWER server platforms, and a variety of mind-boggling announcements. He is focused on three trends: Big Data, Cloud, and Mobile.
IBM is focusing its hardware efforts on high-value, high-margin solutions such as System Storage, POWER Systems and System zEnterprise mainframe environments. Did you know that 65 percent of the world's business transactions are processed by either POWER systems or System zEnterprise mainframe?
IBM is also extending its continued focus on Linux and Open Source initiatives. For the System zEnterprise mainframes, 78 percent of our clients run Linux on System z. Over 290 clients have added the "zBX" option that allows them to run Windows and AIX on the mainframe as well. It is now less expensive to run workloads on System zEnterprise -- about 1 dollar per day per server -- than public cloud offerings from Amazon Web Services. Linux on POWER also has lower Total Cost of Ownership (TCO) than Linux-x86.
Nicolas also mentioned major changes for the POWER Systems, starting with the [OpenPOWER Consortium], formed by IBM, Google, Mellanox, NVIDIA and Tyan.
The move makes POWER hardware and software available to open development for the first time as well as making POWER Intellectual Property licensable to others, greatly expanding the ecosystem of innovators on the platform. The consortium will offer open-source POWER firmware, the software that controls basic chip functions. By doing this, IBM and the consortium can offer unprecedented customization in creating new styles of server hardware for a variety of computing workloads.
IBM POWER has switched from being "Big Endian" to being "Bi-Endian", allowing operating systems to choose between "Big Endian" or "Little Endian" modes. The Big Endian mode allows for Linux compatibility with the System zEnterprise mainframe, and the Little Endian mode for compatibility with Linux-x86.
Thorston Kahrmann, Intel Account Director for EMEA, presented Intel's rich history of collaboration with IBM, from technologies like Bluetooth and PCIe Generation 3, to platforms like BladeCenter and NeXtScale, to industry standards.
IBM had a lot of "firsts" in the x86 server area, including the first 16-processor server, the first to offer hot-swap memory, and over 100 leading performance benchmarks.
The latest Intel Xeon chip is the E7 v2. For example, moving from DB2 v10.1 on the previous E7 to DB2 BLU columnar acceleration on the new E7 v2 resulted in a 148-fold performance improvement: a query on a 10 TB database that previously took four hours completed in under 90 seconds.
Thorston also reminded the audience that nearly every System Storage product from IBM, from the high-end XIV, SAN Volume Controller, SONAS and FlashSystem V840, to the midrange and entry-level Storwize products, is based on Intel x86 processors.
Louise Hemond-Wilson, IBM CTO and Distinguished Engineer for Lab Services, reminded everyone today was also the [International "Draw-a-Bird" day].
Louise covered the findings from the 2012 CEO Study, which gathered insights from 1,709 CEO interviews. The major focus areas for CEOs are:
- Empowering employees through company-wide values
- Engaging customers as individuals, rather than via demographics
- Amplifying innovation with strategic and tactical partnerships
With smartphones, tablets and ubiquitous Internet access, everyone is now a technologist, and IT is becoming a competitive differentiator. IT projects and Business projects are no longer separate. If your IT department is seen as an expense, it will continue to get its budget cut. If, however, your IT department is part of your revenue stream, then it can be viewed as an asset.
Sadly, over 75 percent of IT projects fail: they run way over budget, deliver late, or both. Business leaders are pushing for IT improvements, but CIOs are often too afraid to take the risks needed to move the business forward. Louise cited three reasons for this, which she called the three C's:
- The IT and Business leaders did not fully understand the context of the project.
- The content of the project was not properly defined between IT and Business architects.
- The collaboration between IT and Business personnel was not properly established.
Louise wrapped up her session by asking a simple question: What is the true cost of a light bulb? Some might focus on the price of the bulb itself, others might add the cost of maintenance, such as ladders and the personnel to replace bulbs as needed, and still others might include the electricity consumed. Both Business and IT leaders need to focus on Total Cost of Ownership (TCO) in their planning.
technorati tags: IBM, #ibmtechu, Amy Purdy, Technical Training Services, mainframe50, zEnterprise, mainframe, Nicolas Sekkaki, OpenPOWER, Linux, zBX, Amazon Web Services, Thorston Kahrmann, Intel, E7v2, EMEA, CEO Study, TCO, Louise Hemond-Wilson, STG Lab Services
Modified by TonyPearson
I have arrived safely to Istanbul, Turkey for the [Systems Technical University 2014] conference. The conference will feature experts from IBM Power Systems, IBM System x, IBM PureSystems, and IBM System Storage.
Here is the view from my hotel window. Up until the 19th century, this was open countryside. Around 1890, the Bomonti brothers from Switzerland set up a brewery, which was moved to this section of town in 1902, becoming the first Turkish brewery. In 1934, the brewery was nationalized and became the Istanbul Tekel Beer Factory. The Hilton Bomonti hotel where the conference is being held is named after these brothers.
Since this is my first time in Istanbul, and I did not have meetings until later in the afternoon for the conference, I decided to do a bit of sightseeing.
(A special thanks to Gail Godbey of [Encounter Tours/Kaletours] who organized this entire tour of sightseeing for me on such short notice!)
The hippodrome was a stadium for horse and chariot racing, but now is just a square with a few obelisks. This one is the Thutmosis Obelisk from Egypt. The word hippodrome comes from the Greek hippos, meaning horse, and dromos, meaning path or way. Hippodromes were common features of Greek cities in the Hellenistic, Roman and Byzantine eras. My tour guide Erol Azor did a great job explaining everything.
My favorite stop of the day was the Blue Mosque, named after the blue tiles used on the dome. It is 43 meters high, making it one of the tallest mosques in the city. There are over 3,000 mosques here in Istanbul. In Turkish, this place is called Sultan Ahmet Camii, after Sultan Ahmet, who had it built from 1609-1616. There are six minarets. The legend goes that the Sultan asked for a "gold" minaret, but the word for "gold" in Arabic sounds a lot like the number six in Turkish, so that is why there are six of them.
Right next to the Blue Mosque is the Hagia Sofia, which was a Christian church first, then converted to a mosque, and is now a museum. It was closed on Mondays, so all I could do was take pictures from the outside. Tulips are in full bloom throughout the city this month of April. If you notice, the minaret on the right is a different color. Often, new sultans would add a minaret to an existing mosque, using whatever materials were available at the time. Kind of like adding a bedroom to an existing house.
Underneath the ground is the Basilica Cistern, which held the drinking water for the city. The water arrived via aqueduct and was kept underground. Today it holds only a foot of water, and some fish, while visitors admire the architecture employed.
Of course, no visit to Istanbul is complete without stopping at the Grand Bazaar. With over 4,000 tiny shops, it is a madhouse of gold and silver jewelry, blue jeans, leather goods, scarves, persian rugs, and antiques. Some places offered me free samples of Turkish delight, which are delicious cubes of flavored gelatin.
My day ended at the Topkapi palace. The word Topkapi is Turkish for "Cannon Gate", as this castle sits overlooking the peninsula and Bosphorus strait that separates the European side from the Asian side of the city. Like the palace of Versailles in France, or Buckingham palace in England, the Topkapi palace was home to 36 sultans from 1299 to 1922.
You can spend hours here. There are beautiful gardens and various buildings surrounded by five kilometers of castle wall. Inside the buildings are displays of the family jewels, the clothes the sultans wore, their weapons, and religious relics.
It was good to get a flavor of the city, and a sense of the Turkish culture.
technorati tags: IBM, #ibmtechu, Istanbul, Turkey
Modified by TonyPearson
Next week, April 8-11, I will be presenting a variety of topics at the [Systems Technical University 2014] conference in Istanbul, Turkey. The conference will feature experts from IBM Power Systems, IBM System x, IBM PureSystems, and IBM System Storage.
Here are the titles and abstracts of the eight topics that I will be presenting next week, in chronological order, along with some related sessions for each topic:
IBM Smarter Storage Strategy
Do you want to understand more about IBM's initiatives for building a smarter planet and how that relates to the data economics of your organization? This session will explain it all, including how IBM's design approach and strategy for its various storage products and solutions for efficiency for data intensive solutions, optimization of business critical workloads, and agility to start quickly and add value. I will also position the features and capabilities of IBM's various disk and tape systems in this context.
Clod Barrera will present IBM Storage Strategy - Traditional and New Methods for Storage Deployment. My session is Tuesday morning and will focus on how IBM Storage Strategy is aligned with IBM's business initiatives including Cloud, Analytics, Mobile and Social Business (CAMS). Clod's presentation will be more technical in nature, featuring Flash storage, scale-out grids, object storage directions, and Software Defined Environments.
Axel Koester will present Storage Technology Futures - fresh from IBM research labs, tomorrow in your datacenter. Axel's presentation will focus on what IBM Research is working on, based on industry trends.
Pat O'Rourke will present Power Systems Trends and Direction, which will focus on IBM's strategy for the POWER Systems product line.
Replication for Business Continuity and Disaster Recovery (BC/DR)
Replication of disk storage systems can be used as part of an overall Business Continuity and Disaster Recovery plan. This session will provide an overview of the technologies involved, and other considerations.
Markus Oscheka and Ralf Wohlfarth will present IBM Storage Systems integration into VMware Site Recovery Manager, a more focused session that offers Business Continuity and Disaster Recovery for VMware environments.
Deniz Erguvan will present Disaster Recovery Solution Design with PowerVM and Storage Virtualization.
Thomas Vogel and Torsten Rothenwaldt will present Native IP replication with SVC / Storwize v7.2. This new feature was announced in October 2013.
Thomas Vogel and Torsten Rothenwaldt will also present New HA and DR concepts with SVC enhanced stretched cluster, focused on data federation across data centers.
What is big data? Architectures and Practical Use Cases
Do you understand the storage implications of big data analytics? This session will explain what big data is, and cover the Information Infrastructure and practical use cases.
Ajay Dholakia will present Taming Big Data: An overview of key technologies and architectures. Ajay will focus more on the hardware components (servers, networks, storage), whereas my presentation will focus on the roles of the storage administrator, data scientist and decision maker.
Axel Koester will present BIG DATA at CERN : Analyzing petabytes in seconds(!) at the European particle collider facility, a specific use case.
Jean-Armand Broyelle will present Big Data on Power: come and touch reality!, which will focus on the capabilities to process big data on POWER systems.
Cloud Storage and the Active Cloud Engine™
This session will cover private and public cloud storage options, including XIV, SONAS, Storwize V7000 Unified and Linear Tape File System (LTFS) Enterprise Edition. The use of Active Cloud Engine for local space management and global WAN caching to access files, SmartCloud Storage Access for self-service provisioning, and file-and-sync solutions will also be explained.
Eric Aquaronne and Jeff Borek will present Storage Cloud to energize your company. My session will focus on the technologies involved, whereas theirs will provide a product demo and practical implementation advice.
Mo McCullough will present XIV Overview and Update, Thomas Luther will present SONAS overview and Updates, and Nils Haustein will present Linear Tape File System Enterprise Edition (LTFS-EE) explained. These sessions will each go into more depth on their product than my high-level overview.
IBM Tivoli Storage Productivity Center
Why is Tivoli Storage Productivity Center (TPC) the #1 most requested topic at the IBM Tucson Executive Briefing Center? One of the chief architects of this product will cover the latest features, and why this product will greatly help your storage admin staff.
Clod Barrera will present Software Defined Storage - Storage for Software Defined Environments which will provide a broader view, while mine is focused specifically on how TPC plays a role in SDS.
Thomas Luther will present TPC for Replication 5.2 Overview and updates, which will focus specifically on the replication support in the latest release.
IBM Archive Storage Solutions - Data Retention for Government Compliance and Industry Regulations
This session will cover the various offerings IBM has for archive solutions, including IBM System Storage Archive Manager (SSAM), N series, and WORM tape storage systems.
Nils Haustein will present Next generation archive storage solutions, which will focus specifically on SSAM software, including migration procedures from other archive solutions.
New Generation of Storage Tiering: Less management, lower investment and increased performance
Confused on how to implement storage tiering between Flash, Disk, Tape storage system resources? This session will cover the various techniques and technologies available.
Levi Norman will present IBM FlashSystem Overview, focused on this particular tier of storage.
Axel Koester will present Storage Portfolio Selection Guide: What (not) to use when, providing an overview of the IBM System Storage portfolio, whereas I am focused more on the technologies that make up each tier of storage, and how to take advantage of them to balance cost and performance.
Data Footprint Reduction
Data Footprint Reduction is the catchall term for a variety of technologies designed to help reduce storage costs. This session will cover four techniques for data footprint reduction: thin provisioning, space-efficient snapshots, data deduplication and real-time compression. It will also discuss the IBM storage products that provide these capabilities. Come to this session to learn how these technologies work, and how they will benefit your data center.
Antoine Maille will present Demonstrate the TurboCompression Effect, a live demo of the technologies I will be discussing.
Johann Weiss will present The Storwize family - easy to manage, function rich and cloud ready, which will include a discussion of Real-time compression.
Mathias Defiebre and Erik Franz will present ProtecTIER with IBM FlashSystem (or maybe with Storwize). ProtecTIER is IBM's strategic data deduplication solution, which can act as a gateway in front of a variety of back-end storage options.
If you will be at this conference all week, look for me and say "Hello!"
technorati tags: IBM, #ibmtechu, Systems Technical University, POWER Systems, PureSystems, System x, System Storage, Istanbul, Turkey, Smarter Storage, CAMS, Clod Barrera, Axel Koester, Pat O'Rourke, Replication, Business Continuity, Disaster Recovery, BCDR, Markus Oscheka, Ralf Wohlfarth, VMware, Site Recovery Manager, SRM, Deniz Erguvan, PowerVM, Storage Virtualization, Thomas Vogel, Torsten Rothenwaldt, SAN Volume Controller, SVC, stretched cluster, big data, BigInsights, hadoop, analytics, data scientist, Ajay Dholakia, CERN, Jean-Armand Broyelle, Cloud storage, XIV, SONAS, Storwize, Storwize Family, Storwize V7000, Storwize V7000 Unified, Linear Tape File System, LTFS, LTFS-EE, Tivoli Storage, Productivity Center, TPC, Eric Aquaronne, Jeff Borek, Software Defined Storage, Software Defined Environment, SDS, SDE, Thomas Luther, TPC-R, Archive Storage, Government Compliance, SSAM, NENR, N series, WORM tape, Nils Haustein, DR550, Information Archive, Storage Tiering, Easy Tier, Flash, FlashSystem, Intelligent ILM, ISTA, Levi Norman, Data Footprint Reduction, Antoine Maille, TurboCompression, Johann Weiss, Mathias Defiebre, ProtecTIER
Modified by TonyPearson
Well, it's Tuesday again, and you know what that means! IBM Announcements!
Starting today, April 1, 2014, the IBM Executive Briefing Centers (EBC) are adopting a new self-hosted model. In the past, each briefing was assigned a "Briefing Host", a member of the EBC staff, who acted as [master of ceremonies] for the day (or more) for the clients. At some locations, if there were three rooms, there would be three or more briefing hosts so that concurrent briefings could be held.
However, this method does not scale: having a host per briefing limits the total number of concurrent briefings you can run. Inspired by the self-service provisioning and scalability of the Cloud, IBM has adopted a new methodology.
In the new model, the visiting client rep, sales rep, or IBM Business Partner will be handed instructions and a map. This will include the agenda, the schedule, biographies of each speaker, the locations of the nearest restrooms, and so on.
I can take partial credit for the idea. In 2012, I made the analogy that having briefing centers at each development lab made a lot of sense, because it allowed clients to interact directly with the engineers and executives who made development decisions. I also made the analogy that having a fully-staffed EBC was like a fire department: whether you have five briefings per month or fifty, you need a team that is ready, staying abreast of the latest technological changes.
In my post, [Like animals in the zoo], I argued there are two kinds of zoos, the self-guided kind, where visitors are handed a map, versus the docent-guided kind, where a member of the zoo staff introduces you to each animal.
The EBC briefing hosts in this analogy were the docents, and the animals that people came to visit were the engineers and executives.
Just as zoo docents are highly trained about every animal so they can answer every conceivable question, briefing hosts at IBM went through extensive training by [Mandel Communications] to achieve the certification requirements of the [Association of Briefing Program Managers], or ABPM for short.
As for the fire department, IBM management flipped the analogy around. They argued that many smaller communities have "volunteer fire departments", eliminating the need to keep full-time employees doing nothing but playing cards and sliding down brass poles in between fire fighting sessions. When a fire happens, phone calls go out to notify everyone who needs to get involved.
In my past 28 years at IBM, I have to say that you know you have good analogies when they can be used in both directions. The zoo analogy was used to prevent management from consolidating all of the EBC staff to Austin, TX. The fire department analogy helped us keep all of our lab equipment to run demonstrations.
The new self-hosted model will address both scheduling and scalability issues. We often had two-day and three-day briefings, and scheduling the rooms and the briefing hosts based on their availability was quite challenging.
There are three advantages to the new method:
- A coordinator will merely assign rooms, no longer worrying whether a briefing host is available for those days. Each EBC location can now run at full capacity, limited only by real estate and floor space.
- Subject matter experts like myself, who often did double duty serving as briefing hosts as needed, will have more free time. I personally will be doing more "outbound briefings" to attend conferences and visit clients at their locations, eliminating the time I need to be in Tucson to host "inbound" briefings.
- The awkward silence that happens when the client rep, sales rep, or IBM Business Partner invites all the clients and presenters, but forgets to invite the briefing host, is completely eliminated.
technorati tags: IBM, Executive Briefing Center, EBC, self-hosted, zoo, docent, volunteer, fire department, Cloud, scalability
Modified by TonyPearson
March 31 is [World Backup Day]!
Recently, a client asked how to back up their IBM PureData System for Analytics devices. IBM had [acquired Netezza in November 2010], and later renamed its TwinFin devices the IBM PureData System for Analytics, powered by Netezza.
The [IBM PureData System for Analytics] is incredibly fast for performing deep, ad-hoc analytics. However, the people who use them are "data scientists", not backup experts.
Likewise, there are backup administrators who may not be familiar with the unique characteristics of this expert-integrated system to know what backup options are available.
As with the rest of the IBM PureSystems line, the IBM PureData System for Analytics (or, PDA for short) has a combination of servers, storage and switches inside.
In a full-frame PDA, there are two host servers in Active/Passive mode. These coordinate activity across FPGA-based blade servers, which have parallel access to hundreds of disk drives storing nearly 200 TB of compressed database data. A system can span up to four frames.
But what do you back up? And why? You don't need to worry about backing up the Linux operating system or NPS server code; that is considered firmware, and if it ever got corrupted, IBM would help restore it for you. System-wide metadata, such as the host catalog and global users, groups, and permissions, should be backed up periodically to protect against data corruption.
There are a number of reasons to back up your user databases:
- As part of firmware upgrade/downgrade
- To transfer data to another system
- Protect against hardware failure / disaster
- Protect against data corruption
The PDA has three backup formats. You can back up the entire user database in compressed format, back up individual tables in compressed format, or export to a text-format file.
Compressed format is faster, but can only be restored to the same PDA, or a PDA that has the same or higher level of NPS firmware. The text-format is slower, but can be used to restore to lower levels of NPS firmware, or to other database systems.
There are basically two methods to back up your PDA. The first is called the "Filesystem" method: attach an external storage device to the NPS host server, and use the built-in command-line interface (CLI) to store the backups onto its file system.
On NPS version 6, the nzhostbackup command backs up the /nz/data directory, which stores the system tables, database catalogs, configuration files, query plans, and cached executable code for the SPU blade servers.
(I have heard that the nzhostbackup will get deprecated in NPS version 7, but I only have access to version 6. As always, [RTFM] for your specific NPS code level.)
- nzbackup -users
The nzbackup command with the users parameter backs up the global users, groups and permissions. These are included in the /nz/data contents captured by the nzhostbackup command, but you may want to back up and restore them separately.
- nzbackup -db
The nzbackup command with the db parameter backs up a user database in compressed format. To back up individual tables, use the CREATE EXTERNAL TABLE command, which can create compressed or text-format exports.
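Putting the filesystem method together, here is a minimal sketch. The database name, mount point, table name, and delimiter are hypothetical, and exact options vary by NPS level, so the script only assembles and prints the command lines for review rather than running them:

```shell
# Hypothetical values -- substitute your own database and backup path.
DB=salesdb
DIR=/mnt/extbackup

# Host catalog and system metadata (NPS 6):
echo "nzhostbackup ${DIR}/nzhostbackup.tar.gz"

# Global users, groups and permissions:
echo "nzbackup -users -dir ${DIR}"

# One user database, in compressed format:
echo "nzbackup -db ${DB} -dir ${DIR}"

# One table exported to text format, via SQL (syntax varies by level):
echo "CREATE EXTERNAL TABLE '${DIR}/orders.txt' USING (DELIMITER '|') AS SELECT * FROM orders;"
```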
You may find that your databases are so large that they exceed the limits of the filesystem on the external storage device. For SAN or NAS deployments, I recommend the IBM Storwize V7000 Unified with IBM General Parallel File System (GPFS). However, if you are using something else, you may need the provided "nz_backup" scripts, which split the backup images into smaller pieces that most other filesystems can handle.
The PDA comes with 10GbE Ethernet ports through which you can attach a NAS storage device over a Local Area Network (LAN), or you can add Fibre Channel Protocol (FCP) ports and connect over a Storage Area Network (SAN). To keep things simple, I will refer to whichever network you choose as the "Backup Network" in the drawings.
The second method for backup is called the "External Backup Software" method. As you have probably guessed, it involves sending the backups to a supported software product like IBM Tivoli Storage Manager (or, TSM for short).
In this case, the PDA acts as a client node, similar to a laptop, desktop, or application server with internal disk. Backup data is sent over the LAN to the designated TSM server, and the TSM server in turn writes over the SAN to its storage hierarchy of disk, virtual tape and/or physical tape resources.
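Since the PDA host registers as an ordinary TSM client node in this arrangement, it would carry a standard TSM client configuration. A hypothetical dsm.sys stanza, where the server name, address, and node name are placeholders for your own environment:

```
SErvername         TSM1
  COMMMethod       TCPip
  TCPPort          1500
  TCPServeraddress tsm1.example.com
  NODename         pda01
```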
Backups can be done by command "on demand", or automated on a schedule. For the /nz/data directory, direct the nzhostbackup command to send the backup copy to local disk, then use TSM's dsmc archive command to transfer this backup copy to the TSM server.
For nzbackup with the users or db parameters, you can send the data directly to the appropriate TSM server by specifying the connector and connectorArgs parameters.
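The two TSM paths described above can be sketched as follows. The staging file path and the connectorArgs string are assumptions (consult the nzbackup and dsmc manuals for your levels); the script only prints the commands so you can adapt them:

```shell
DB=salesdb
HOSTBK=/tmp/nzhostbackup.tar.gz   # hypothetical local staging file

# 1. Stage the /nz/data backup locally, then archive it to the TSM server:
echo "nzhostbackup ${HOSTBK}"
echo "dsmc archive ${HOSTBK} -description='nz host catalog backup'"

# 2. Send a user database straight to the TSM server via the connector:
echo "nzbackup -db ${DB} -connector tsm -connectorArgs 'TSMSERVERNAME=tsm1'"
```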
To reduce traffic on the TSM Server, an intermediary "TSM Proxy Node" can be placed in between. In this case, the PDA sends the backup to the Proxy Node, the Proxy Node uses a "LAN Free Storage Agent" to send the backups directly to the virtual tape and/or physical tape, and then notifies the TSM Server to update its system catalog to record which tape holds these new backups.
Another configuration involves installing the TSM LAN Free storage agent directly on the PDA. While this will require FCP ports to be added and consume more CPU resources on the NPS server, it eliminates most of the LAN traffic, allowing the PDA to send its backups directly to virtual or physical tape.
To learn more about this, see my full presentation [Backup Options: IBM PureData System for Analytics, powered by Netezza] on the IBM Expert Network powered by SlideShare, or attend the upcoming [IBM Edge 2014] conference in Las Vegas, May 19-23. I will be there!
technorati tags: IBM, Netezza, PureData, PureData for Analytics, PDA, World Backup Day, Backup, NPS, nzhostbackup, nzbackup, expert-integrated, Tivoli, Tivoli Storage Manager, TSM, dsmc, #ibmedge, Slideshare
Have you signed up for the [IBM Edge2014] conference yet? This is IBM's premiere conference on System Storage and related products, to be held in Las Vegas, NV, May 19-23. I plan to be there!
technorati tags: IBM, Edge, Edge2014, Sheryl Crow
Modified by TonyPearson
My how time flies! It has been nearly a year since our new Tucson Executive Briefing Center had its [Ribbon Cutting Ceremony].
To celebrate this achievement, IBM asked me to write and direct a short film to remind everyone we are here to help clients solve problems, determine an appropriate strategy and make solid purchase decisions.
I have produced other videos for IBM. See my October 2013 blog post [Incorporating Videos] for other examples. This was my first time as writer/director for a project.
This video won't win any Oscars, but I would still like to thank the Academy, and my colleagues IBM VP Calline Sanchez, Lee Olguin, Joe Hayward and Kris Keller for agreeing to be filmed on camera. Behind the scenes, I want to thank IBM Fellow John Cohn for his superb narration, Andrew Greenfield as cinematographer and editor, Shelly Jost as creative consultant for selecting the musical tracks, and Denise White for reviewing the screenplay. Finally, I want to thank our producer, Bill Terry, for funding this effort.
What do you think? Will it go viral? Enter your comments below!
technorati tags: IBM, Tucson, EBC, Joe Hayward, Calline Sanchez, Kris Keller, Lee Olguin, John Cohn, Andrew Greenfield, Shelly Jost, Denise White
Modified by TonyPearson
IBM Cloud announcements at Pulse 2014
Well, it's Tuesday again, and you know what that means? IBM announcements! Many of the announcements were made by IBM executives at the [IBM Pulse 2014 conference].
IBM BlueMix is the newest cloud offering from IBM, a Platform-as-a-Service (PaaS) based on the Cloud Foundry open source project that promises to deliver enterprise-level features and services that are easy to integrate into cloud applications.
In partnership with Pivotal and others, [IBM is a founding member of the Cloud Foundry foundation] to create an open platform that avoids vendor lock-in. Many PaaS stacks, such as [LAMP] or [Microsoft IIS], are typically limited to a single programming language, database and web application server, but not Cloud Foundry! Here is what is supported:
Development Frameworks: Cloud Foundry supports Java™ code, Spring, Ruby, Node.js, and custom frameworks.
Application Services: Cloud Foundry offers support for MySQL, MongoDB, PostgreSQL, Redis, RabbitMQ, and custom services.
Clouds: Developers and organizations can choose to run Cloud Foundry in Public, Private, Hybrid, VMware-based and OpenStack-based clouds.
To learn more, see this article on developerWorks [What is BlueMix?]
POWER and PureApplication Patterns of Expertise on SoftLayer
IBM is investing over $1.2B to have [40 Cloud centers across five continents] for SoftLayer by the end of 2014.
This week, my fifth-line manager Tom Rosamilia, IBM Senior Vice President, IBM Systems & Technology Group and Integrated Supply Chain, made two announcements at Pulse. First, in addition to x86-based servers, SoftLayer will also offer POWER-based servers to run AIX, IBM i and [Linux on POWER] applications.
Second, SoftLayer will support PureApplication Patterns of Expertise. What is a pattern of expertise? It can range from a virtual machine encapsulated in [Open Virtual Format] to more dynamic architectures, packaged with required platform services, that are deployed and managed by the system according to a set of policies.
Patterns simplify and automate tasks across the lifecycle of the application. Customers and partners alike are [seeing significant reductions in cost and time] across the application lifecycle with the deployment of a PureApplication System.
Also, this week at Pulse, Robert LaBlanc, IBM Senior Vice President of Software and Cloud Solutions, announced [IBM plans to Acquire Cloudant] which offers an open, cloud Database-as-a-Service (DBaaS) that helps organizations simplify mobile, web app and big data development efforts.
Why not just use a Relational Database Management System [RDBMS], like [IBM DB2 database software]? Where DB2 is SQL-based, CouchDB is known as NoSQL. DB-Engines has a great side-by-side comparison [CouchDB vs. DB2].
IBM SmartCloud Virtual Storage Center offerings
When I introduced [SmartCloud Virtual Storage Center] back in October 2012, I mentioned that it was a great solution for large enterprise that have all of their disk behind SAN Volume Controller (SVC).
To reach smaller accounts, IBM has announced two new offerings:
IBM SmartCloud Virtual Storage Entry for customers that have less than 250TB of disk behind two or four SVC nodes. It is priced per terabyte, by the amount of capacity that is virtualized.
IBM SmartCloud Virtual Storage for Storwize Family for customers that have other Storwize family products (Storwize V7000 or V5000, for example). It is priced per the number of storage enclosures that are managed by the Storwize family hardware.
To learn more about Virtual Storage Center, see the [IBM Announcement page]
I am not at Pulse 2014 this year, but I managed to watch many of these announcements on the [IBM Pulse livestream].
Continuing my series on building a Desktop computer for a kindergarten class, I look at Fedora with Sugar mentioned in the article [Top 6 Linux Distributions for Children (Ages 2 and Up)].
(This series started with my post [Kindergarten desktop - The Challenge]: I have a 512MB RAM system with a 40GB disk drive on which I will install Linux and educational software for a class full of kindergarten children. My previous post covered three other Linux distributions [LinuxKidX, Qimo, and Foresight for Kids].)
I am no stranger to the Sugar learning platform, developed as part of the One Laptop per Child [OLPC] project.
As I mentioned in my post [Helping Young Students - part 1], I was part of the OLPC development team back in 2008, helped local volunteers deploy laptops to children in Nepal and Uruguay, mentored a college student in India, and learned a lot of Python programming language in the process.
Sugar is now developed by Sugar Labs, a nonprofit spin-off of OLPC. The code is a free and open source desktop environment available for many other machines, including as a "Desktop Environment" for Fedora Linux.
I kept my 40GB hard drive partitioned as follows. On the extended partition, sda5 will hold my system utilities, like Clonezilla and SystemRescue, and sda6 is my swap space, increased to 1500MB. Partition sda1 has Edubuntu 12.04 on it, and I will use sda2 to install Fedora with Sugar.
[Sugar-on-a-stick] is so named because it is designed so that each child has their own LiveUSB. It can run on a PC with Windows or on Mac OS without affecting those operating systems, allowing a child to use Sugar in the classroom, then take the stick home and continue on their home PC.
A 2GB or greater USB memory stick can hold both Fedora and Sugar, and you can use it to boot your desktop. Unfortunately, it requires 1GB of RAM, and I have only 512MB.
Can I just run Sugar natively on a Fedora install? Yes, thanks to the [Sugar not "on a stick"] instructions, I can install Fedora first, then just:
#yum groupinstall "Sugar Desktop Environment"
Unfortunately, the latest Fedora release (F20) recommends 1GB of RAM. Fortunately, I found Dean Howell's rant [Fedora Irresponsibly Lowers Memory Requirement To 512MB] about the Fedora F17 release. I gave this a try.
There are three ways to install Fedora:
- Fedora Desktop Edition - this is a LiveCD that requires 1GB RAM.
- Fedora Network Install - this is a bootable CD that then uses the Internet to download the rest of the files required. Use this if you (a) have a fast Internet connection, or (b) do not have a DVD drive on your system.
- Fedora Install DVD - this has all the software on the DVD itself.
I chose method 3 and downloaded the appropriate ISO file. While F17 only requires 512MB of RAM to run, the graphical installer requires 768MB, as fully explained in this [29-step F17 installation guide].
To get around this, select "Troubleshooting", which then lets you choose a low-graphics/text-mode installation that ran well under 512MB. I installed both LXDE and Sugar, and everything worked fine!
Why both LXDE and Sugar? Well, Sugar is quite a different environment, and I wanted LXDE as an alternative for the admin and teacher to use.
The article on [Sugar software on Wikipedia] sums it up well:
"Unlike most other desktop environments, Sugar does not use the 'desktop', 'folder' and 'window' metaphors. Instead, Sugar's default full-screen activities require users to focus on only one program at a time. Sugar implements a novel file-handling metaphor (the Journal), which automatically saves the user's running program session and allows him or her to later use an interface to pull up their past works by date, activity used or file type."
Now that I have that working, it is time to upgrade from non-supported F17 to a supported level. Ravi Saive explains the [Four Ways to Upgrade from Fedora 17 to Fedora 18]:
- Clean install of F18
- Fedora Upgrader tool (FedUp) command line interface
- Yum upgrade
- Fedora upgrade script
As you can probably guess from the title of this post, I chose method 2, "FedUp", as it seemed to be the least invasive. I was unsure if method 1, a clean install of F18, would work with 512MB of RAM, and I have been through enough horrors of failed yum upgrades on my own Red Hat Enterprise Linux [RHEL] system at work to avoid method 3. Method 4 is just a script to automate the steps of method 3.
The steps are fairly straightforward. First, install the FedUp package, run "yum update" to ensure you have all the latest kernel and F17 packages for everything else, and reboot.
Then run the fedup-cli command, which upgrades all the packages to F18 level and creates a special kernel level that will then finish the install after the second reboot. It took a while, so I let it run unattended. I put the debug log on partition sda5 in case anything went wrong.
#fedup-cli --reboot --network 18 --debuglog=/rescue/fedupdebug.log
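Pulling the steps above together, the whole F17-to-F18 sequence looks roughly like this. Each real step needs root and there is a reboot in the middle, so the sketch just prints the commands:

```shell
LOG=/rescue/fedupdebug.log   # debug log kept on the sda5 rescue partition

echo "yum install fedup"     # install the FedUp package
echo "yum update"            # bring all F17 packages current, then reboot
echo "fedup-cli --network 18 --debuglog=${LOG} --reboot"
```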
What could go wrong? Well, it turns out that fedup works by updating the Grub2 boot loader configuration, but my Grub2 resides on the sda1 partition instead, owned by my existing Edubuntu install. The reboot did not give me the option to run the specialized kernel to finish the process.
Fixing this was a hot mess, but I managed to configure Grub2 on Fedora, complete the upgrade, and get everything working as before. However, even though it came out just last year, the [F18 version is already out of support]! This means I get a second chance to do FedUp, this time to the F19 release. Oh boy! Fun!
While the second time went smoother, the problem was that F19 doesn't seem to run well in 512MB of RAM, and chances are F20 won't either.
So what have I learned from this?
- Fedora is fully supported, has been around over 10 years, with a vibrant and helpful community.
- Sugar is designed for kids, so adding a traditional desktop environment like XFCE or LXDE can be useful for administrator or teacher.
- Offering multiple Linux versions in a dual-boot or triple-boot approach may complicate the Grub2 loader configuration and maintenance.
- Fedora's "rolling upgrade" approach means that someone will need to consider upgrading to later versions at least every school year or semester to maintain support. Running fedup-cli or any of the other upgrade methods may be too complicated for your average teacher.
If you have any experience with Fedora or Sugar in the classroom, comment below!
technorati tags: OLPC, Nepal, Uruguay, Sugar, Sugar-on-a-Stick, Sugar Labs, Fedora, Linux, Clonezilla, SystemRescue, Edubuntu, LXDE, XFCE, RHEL, FedUp, Grub2, rolling upgrade
Next week, thousands will convene in Las Vegas for [IBM Pulse 2014], an IBM conference that will focus on Cloud, Service and Storage Management.
To lead up to this event, my colleague Steve Wojtowecz, or 'Woj' as we like to call him, IBM VP of Storage and Network Management Software Development, has written a five-part series that is worth a read. Here are some excerpts:
- Part 1: The Ities
In [Predictions for storage management in 2014], Woj introduces his five-part series with a discussion of the "..ities", namely Utility, Commodity, Simplicity, and Availability.
- "Storage-as-a-utility will pick up momentum. Call it [storage-as-a-service], or a storage / back-up cloud, or whatever name you prefer, deployments of this capability will ramp up dramatically."
- "Making something simple look complex is easy, making something complex look simple is hard. Like it or not, we all like things simple and easy to grasp."
- "Any data that a company is willing to store should be important enough to (1) be protected and backed up as part of a disaster recovery (DR) plan and (2) used for analytics for new business opportunities."
- Part 2: Software Defined Environments
In [Predictions.. Part 2], Woj covers the broad and deep impacts of [Software Defined Environments], abbreviated to just SDE.
- "SDE represents a deep form of change; instead of tying fixed resources to particular IT domains, you centralize and virtualize resources, then govern them with software policies."
- "With Software Defined Compute (SDC...), worldwide spend on x86 stuff since 2000 has declined from $70B to about $56B."
- "That doesn't suggest that fewer workloads are on x86 now (quite the opposite we know), it suggests a massive commoditization of the hardware and revenue shift to SDC."
- Part 3: Impatience
In [Predictions.. Part 3: Impatience], Woj discusses society's impatience with technology reaching the data center.
- "All types of admins (server, storage, network, VM, etc) want the big red EASY button."
- "This level of capability will require technologies to be implemented in an open and collaborative way. "
- "[OpenStack] will progress and be adopted at a much faster rate than other historical open source innovations--such as Linux when it was released--in an effort to deal with SSD and Flash sprawl."
IBM is a [platinum sponsor of OpenStack].
- Part 4: Hybrid Clouds
In [Predictions.. Part 4: Hybrid], Woj discusses Hybrid clouds.
- "Hybrid (specifically hybrid storage and data protection clouds) is no longer hype. Nearly every IT shop speculated that hybrid cloud storage was the future of enterprise storage and in 2014 the future is here."
- "... the industry will see accelerated adoption in enterprises (private cloud), as an off-premise managed service (public cloud), and across both (hybrid cloud) based on cost, compliance, security and criticality of data to the enterprise."
- "IT teams used to thinking of enterprise data as “their baby” are going to have to get comfortable with the idea that the baby is now living somewhere else."
- Part 5: Analytics
In [Predictions...Part 5: Analytics], Woj explains the benefits of analytics for data center operations.
- "Line of business organizations have been using analytics to uncover new revenue streams and business opportunities for years. Now, this technology is being turned inward and applied to the data center itself to drive operational efficiency."
- "This level of insight and predictability starts to dabble into the notion of cognitive computing as applied to storage and the data it holds."
- "Operational analytics will also be applied for productivity / performance gains for the infrastructure itself, like auto-tiering data for priority applications across heterogeneous hardware platforms."
For more insights into these predictions, attend [IBM Pulse 2014] in Las Vegas, next week, February 23-26.
Sadly, I won't be there in person. Although I helped launch the original IBM Pulse back in 2008, I have only been invited once to come back, and that was as a last minute replacement for another speaker in 2012. Unfortunately, I could not accept because of my [near-death experience].
technorati tags: IBM, Steve Wojtowecz, Woj, Pulse2014, Storage trends, Predictions, Hybrid Cloud, Analytics, Software Defined, SDE, SDS, SDC, x86, Cloud
Modified by TonyPearson
Last month, my post [IBM System Storage Announcements for January 2014] introduced the IBM FlashSystem 840. Last week, I wrote the blog post [Fall in Love with IBM FlashSystem V840 Enterprise Performance Solution]. The similarity in names has raised some confusion.
The first, "without V", is a 2U storage array that uses Flash to offer 90-135 microsecond latency. Several IBM Redbooks provide guidance.
The second solution, "With V" (for Valentine's Day, of course) is a storage virtualization solution that not only contains the technology from the FlashSystem 840 above, but also borrows technology from our SAN Volume Controller to provide added functionality, like Real-time Compression, Remote Mirroring and Thin provisioning.
We don't have an IBM Redbook specifically yet for the V840, so for now, consider using the [Implementing FlashSystem 840 with SAN Volume Controller] solution guide to get you started.
(Update: Now available! [IBM FlashSystem V840 Enterprise Performance Solution - IBM Redbooks Product Guide])
To learn more about new IBM Redbooks as they get published, follow Burt Dufrasne and team on the [IBM System Storage Redbooks blog]!
technorati tags: IBM, FlashSystem, 840, V840, SAN Volume Controller, Redbook, virtualization, Flash
Modified by TonyPearson
Well, it's Tuesday again, and you know what that means? IBM Announcements!
This week we also have [Valentine's Day], so it is the perfect time for you to fall in love with the new [IBM FlashSystem V840 Enterprise Performance Solution]! The "V" stands for Valentine.
From the photo, the marketing people staggered the various components to give it a stylized [Dagwood Sandwich] effect. I can assure you that these are just standard 19-inch rack components that fit into 6U of space in standard IT racks.
Starting top to bottom, we have the first FlashSystem V840 Control Enclosure, its 1U-high UPS, a second FlashSystem V840 Control Enclosure and its UPS, and finally a 2U-high FlashSystem V840 Storage Enclosure.
You can have up to a dozen Flash modules, either 2TB or 4TB size, for a maximum of 40TB usable RAID-protected capacity. These can be protected with AES 256-bit encryption. The FlashSystem modules are front-loaded, and slide in and out for easy maintenance.
The system is fully redundant and hot-swappable with concurrent code load to ensure high availability.
(Update: In the comments, readers thought that this was nothing more than a two-node SVC with a FlashSystem 840. There are differences, so I have summarized them below.)
Compared to an SVC with FlashSystem 840, the FlashSystem V840 differs in:
- Cabling from controllers to storage: the SVC combination connects through SAN fabric ports, while the V840 Controllers attach directly to the V840 Storage Enclosures
- Call Home support
- GUI screen branding
The system is fully VMware-certified, supporting VAAI interfaces, and an SRA for VMware's Site Recovery Manager (SRM). With Real-time Compression, you can get up to 80 percent capacity savings for workloads like Virtual Desktop Infrastructure (VDI). That in effect gives you up to 5x (200TB) of virtual capacity in 6U of rack space!
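The 5x figure follows from the compression ratio and the 40TB maximum usable capacity mentioned earlier; a quick arithmetic check:

```shell
usable=40    # TB of usable RAID-protected Flash capacity
savings=80   # percent capacity savings from Real-time Compression

# 80% savings means each stored TB represents 5 TB of uncompressed data:
virtual=$((usable * 100 / (100 - savings)))
echo "${virtual}TB"   # prints 200TB
```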
You can either keep it as an All-Flash array, or you can virtualize external IBM and non-IBM disk systems, and use the Flash capacity in the Storage Enclosure for IBM's Easy Tier automated sub-volume tiering and data migration. With or without external storage, the FlashSystem V840 can provide local and remote mirroring and point-in-time copies.
technorati tags: IBM, Valentine, FlashSystem, V840, VMware, VAAI, SRA, SRM, Real-time Compression, VDI, All-Flash, storage, virtualization
Continuing my series on building a Desktop computer for a kindergarten class, I look at three other Linux systems mentioned in the article [Top 6 Linux Distributions for Children (Ages 2 and Up)].
(This series started with my post [Kindergarten desktop - The Challenge]: I have a 512MB RAM system with a 40GB disk drive on which I will install Linux and educational software for a class full of kindergarten children.)
First, I re-partitioned the 40GB hard drive as follows. On the extended partition, sda5 will hold my system utilities, like Clonezilla and SystemRescue, and sda6 is my swap space. This gives me three primary partitions to install three flavors of Linux to try out.
The first was [LinuxKidX], which actually started out as a Portuguese-language effort in Brazil and was then translated into English to extend its reach. It is based on the KDE desktop familiar to users of OpenSUSE Linux.
Much of the educational software was similar or identical to what I mentioned from Edubuntu in my last post. However, not everything was translated, so unless you are able to read Portuguese, you may not want this one.
Next, I wanted to look at [Qimo for Kids], but first I had to hunt for the distribution, as the mirrors listed seemed to be unavailable. I was able to find a qimo-2.0-desktop.iso on CNET.com.
Unlike Edubuntu, Qimo fits on a CD-ROM for older PCs that may not have DVD drives. Based on the lightweight XFCE desktop, the LiveCD runs comfortably in 512MB, with a kid-friendly app launcher at the bottom of the screen. However, Qimo 2.0 is based on Ubuntu 10.04 (Lucid Lynx) LTS, with long term support expiring in May 2013. Its Firefox 3.6.3 was too old to run Gmail.
Why hasn't Qimo been enhanced since 2010? It looks like you can just install the packages qimo-session and qimo-wallpaper on newer levels of Ubuntu.
Third, I tried Foresight Linux for Kids 1.0 release. The most recent Foresight is 2.5.3, but Linux for Kids is still at the 1.0 level. The "installer" was very outdated, so the website suggested following the [power-user install HOWTO].
The HOWTO can be a bit intimidating, but I was able to install just fine in 512MB of RAM. Foresight detected I had pre-configured a swap space, and used that to help finish the install process.
Like the others, it had much of the same educational software as before. A key difference is the [Conary package management]. Most systems use either Debian (DEB) or Red Hat Package Manager (RPM) packages, but this one is different, and the use of Conary may reduce the number of software applications available.
So what have I learned from these?
All of them seemed to have the same set of educational software: gCompris, eToys, and the Tux programs for math and typing.
I want a Linux that uses traditional package management, either DEB or RPM.
The 512MB RAM does not seem to be a difficult limitation. While installation may have been more complicated, they all ran well in 512MB.
If you have had any experience with any of these three distros, please comment below.
technorati tags: Linux, LinuxKidX, Qimo, Foresight, Debian, Redhat, DEB, RPM, gCompris, eToys, Tux
Modified by TonyPearson
Well, it's Thursday again, and you know what that means? IBM Announcements!
(OK, OK, my long-time readers already know that [storage announcements are usually on Tuesdays], not Thursdays.
However, I was speaking to various clients in Winnipeg, Canada Tuesday and Wednesday this week, so marketing moved the announcement date to today to accommodate my schedule. Sometimes, being the #1 most influential IBM employee in storage comes in handy!)
Here, then, is a quick review of the storage portion of today's announcements.
IBM FlashSystem 840
The [IBM FlashSystem 840] offers twice the capacity of its predecessors, the 810 and 820, with up to 48TB in a dense 2U package.
(Quick recap of previous models: Both the FlashSystem 810 and 820 supported ECC-protected memory and Variable Stripe RAID (VSR). The [FlashSystem 810] supported RAID-0 striping across the modules, and the [FlashSystem 820] supported two-dimensional 2D-RAID across modules for higher availability. Fellow blogger Jim Kelly (IBM) on his Storage Buddhist blog has a great post on this: [IBM FlashSystem: Feeding the Hogs].)
The new FlashSystem 840 in effect replaces both, so you can choose RAID-0 striping or 2D-RAID, along with ECC-protected memory and Variable Stripe RAID. It offers hot-swappable Flash modules, redundant components, and non-disruptive concurrent code load (CCL).
The FlashSystem 840 also introduces military-grade AES-XTS 256 bit encryption to provide added protection to your data.
For host attachment, you have some great choices: 16Gb/8Gb/4Gb auto-negotiated Fibre Channel (FCP), 40Gb InfiniBand QDR, and 10Gb FCoE. Whatever you decide, you get 90 microsecond writes, and 135 microsecond reads.
Since its introduction just over a year ago, IBM has sold FlashSystem to over 1,000 clients! For more on how this compares to other all-flash arrays, read my previous post about [IBM FlashSystem].
IBM FlashSystem Enterprise Performance Solution
The [IBM FlashSystem Enterprise Performance Solution] combines the incredible feature set of the [IBM SAN Volume Controller] with the FlashSystem 840 announced above. About 25 percent of FlashSystem customers use them in conjunction with SVC, so this offering makes a lot of sense.
Adding SAN Volume Controller provides some key advantages, including Real-time compression, Thin provisioning, FlashCopy point-in-time copies, Stretched Cluster support, Easy Tier sub-LUN automated tiering, and remote copy services like Metro Mirror (synchronous) and Global Mirror (asynchronous).
Adding the SVC also changes the host attachment options: 8Gb/4Gb/2Gb Fibre Channel (FCP), 1Gb and 10Gb iSCSI, and 10Gb FCoE. Depending on the options and features you choose, the SVC layer adds a modest 60 to 100 microseconds to each read and write.
Each SVC node dedicates four of its six cores, and 2GB of its 24GB cache, to compression. Those interested in beefing up compression performance, either with FlashSystems or with any other disk, can choose the "Compression Hardware Upgrade Boosts Base I/O Efficiency" (affectionately known as the CHUBBIE) RPQ 8S1296 for SVC systems at supported software levels. Basically, this RPQ adds another 6-core CPU and another 24GB of cache, so that each node can dedicate 8 cores and 26GB of cache to compression processing. Initial test results show this can increase performance 3x!
IBM Network Advisor
The [IBM Network Advisor v12.1] management software provides comprehensive management for data, storage and converged networks. This single application can deliver end-to-end visibility and insight across different network types--it supports Fibre Channel SANs (including Gen 5 Fibre Channel platform), IBM FICON and IBM b-type SAN FCoE networks--and provides new features to manage your Brocade and IBM b-type SAN switches.
Cisco MDS 9710 Multilayer Director
The [Cisco MDS 9710 Multilayer Director] is mainframe-ready, with full support for System z FICON and Fibre Channel protocol (FCP) environments. This director supports eight module slots for a maximum of 384 ports.
In other news, IBM has once again filed [the most U.S. Patents for the 21st year in a row], and our brothers and sisters over in server land introduced [the X6 architecture for x86 servers] for the System x and PureSystems product lines, optimized for Cloud and Analytics.
technorati tags: IBM, FlashSystem, FlashSystem 810, FlashSystem 820, FlashSystem 840, Jim Kelly, FCP, InfiniBand, FCoE, iSCSI, SVC, SAN Volume Controller, FlashSystem Solution, RPQ, Network Advisor, Brocade, SAN, Cisco, MDS, FICON, Mainframe, Patents, IBM X6, x86 servers, Cloud, Analytics
Modified by TonyPearson
Last week, in my post [IT Support for the Holidays], I mentioned that I was scrubbing computers in preparation to give them to charity. A local reader asked if I would be willing to donate one of the computers to her kindergarten class. She teaches a class of 20 kids, at the very same elementary school that I went to when I was that age.
So here is the beefiest machine of the set.
Make/Model: Sony PCV-RC850
Processor: 2.4GHz Intel 32-bit
Hard disk: 40GB
Removable media: CD/DVD-ROM and CD/DVD-RW
Keyboard/mouse: standard PS/2
Sound: headphone jack
Ethernet port: 100Mbps
USB ports: two
IBM likes grand challenges, like the [Deep Blue computer] that played chess against Grandmaster Garry Kasparov, and the [Watson computer] that played against two experts on the game show Jeopardy! My "Kindergarten Desktop" challenge is certainly on a smaller scale: to install software on this machine that will meet the following requirements.
Have age-appropriate educational software and games for the students to learn reading, writing and math. This will also help them be more technology-savvy, learn the [QWERTY keyboard], and be more computer literate.
Have software for the teacher to use for her own job, after the kids have gone for the day, including submitting grades, sending email to parents, typing up lesson plans, data collection, and researching the latest trends in education.
Require minimal maintenance, be easy to rescue, repair and recover if necessary.
The 512MB of RAM is not enough to run Microsoft Windows 7, but certainly enough to run some flavors of Linux. Inspired by this review of [Top 6 Linux Distributions for Children], I thought I would give a few a spin.
Many of these have LiveCD/LiveDVD/LiveUSB versions that can be booted directly to try them out, and install directly to hard disk if you like it. Unfortunately, this often requires 1GB of memory or more, so I will need a different approach.
I had already scrubbed the [Windows XP] installation and replaced it with [Linux Mint 12 LXDE]. Could I just install the Edubuntu desktop on Linux Mint? While Linux Mint is Ubuntu-based, it is not binary compatible, so I will need to install fresh.
The [Edubuntu] LiveDVD requires 1GB of memory to try out, so to get this installed, I used the "Alternate Ubuntu 12.04" installer DVD.
Why the 12.04 release of Ubuntu? The current release, 13.10, will only be supported for nine months, and in keeping with "Requirement #3 Minimal Maintenance", the [Edubuntu team recommends installing a Long Term Support (LTS) release]; 12.04.3 is the most recent LTS and will be supported through 2017.
Edubuntu recommends 20GB of disk space to run, so I split the 40GB drive into two partitions: sda1 for the operating system, and sda2 for system utilities.
For this machine, I will have three users configured:
admin - Administrator (that would be me for now) assigned to the "wheel" group to allow special privileges
teacher - Teacher will have her own userid/password, so that she can do her own work
student - One userid/password shared by all students. This spares the kindergarten students from having to remember a userid and password unique to each of them. They are only five and six years old, after all!
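For reference, here is roughly what that account setup looks like from the command line. This is a sketch, run as root; note that on Ubuntu the administrators group is actually named "sudo" rather than the traditional "wheel".

```shell
# A sketch of the three-account setup (run as root).
useradd -m admin               # administrator account, with a home directory
usermod -aG sudo admin         # grant the special privileges ("sudo" on Ubuntu)
useradd -m teacher             # teacher's personal account
useradd -m student             # one shared login for the whole class
# Passwords are then set interactively: passwd admin, passwd teacher, passwd student
```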
Ubuntu's [Alternate Installer] uses a basic graphics mode that can run in 512MB of RAM. Once installed, I was then able to install the Edubuntu Desktop and both the preschool and primary-level educational software, to account for all learning ability levels of the children.
admin-$ sudo bash
admin-# apt-get install edubuntu-desktop
admin-# apt-get install ubuntu-edu-preschool
admin-# apt-get install ubuntu-edu-primary
I am not a big fan of Ubuntu's "Unity" panel on the left, and was hoping that the Edubuntu desktop would remove it, but no luck, so I removed it manually.
On the second partition, sda2, I put a few system utilities, including [Clonezilla] and [SystemRescueCD].
This system cannot boot from USB natively, and getting the Grub2 boot loader to boot ISO files was more difficult than I imagined, so I extracted the necessary files over to the sda2 hard disk partition to get them to work. From there, I took "Clonezilla" full system backups to a separate SSH server over my local subnet.
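For reference, a custom GRUB2 entry for files extracted from a LiveCD to sda2 looks roughly like the sketch below. The paths and kernel parameters are my assumptions for Clonezilla Live, not the exact ones used here -- check the files you extract from the ISO for the real names.

```shell
# Append a menu entry to GRUB2's custom-entry script, then regenerate grub.cfg.
# Paths and boot parameters are assumptions; adjust to match the extracted files.
cat >> /etc/grub.d/40_custom <<'EOF'
menuentry "Clonezilla Live (files on sda2)" {
    set root=(hd0,2)                 # sda2 in GRUB2 device naming
    linux /clonezilla/vmlinuz boot=live union=overlay live-media-path=/clonezilla
    initrd /clonezilla/initrd.img
}
EOF
update-grub
```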
Well, that's my start. Any suggestions? Has anyone done this before? Please enter comments below.
Happy new year, everyone!
Are you looking for new storage for 2014? Time to replace that old gear on your IT floor?
The decisions you make about your IT infrastructure affect everything -- from database and business analytics to cloud and virtualization. That's why it's more important than ever to choose wisely.
If you are currently running on storage from HP, HDS, EMC or one of IBM's many other competitors, you might want to take a fresh new look at IBM storage which...
performs faster with greater throughput and lower latency,...
and is easier to use, ...
AND costs less over the next three to five years!
Next week, on January 16, senior IBM executives will share news about breakthrough technologies, featuring Intel® processors, that enhance Smarter Computing servers and storage.
(This webcast will be available worldwide. I, myself, will be in Winnipeg, Canada, freezing my [tuque] off!)
In this webcast, you will learn how to improve decision support and data processing for your mission-critical applications, drive higher performance on analytics and increase agility and flexibility through scalable solutions.
Here is the link to the [Registration Page].
Welcome back everyone! Were you the IT Support for your friends and family during the holidays?
Last year, in my infamous "Laptop for Grandma" blog post series, I discussed my week exploring various Linux distributions (aka "distros") to find one that would re-purpose Grandma's laptop into an MP3 player. Here is the entire series for your reference.
With Microsoft [dropping support for Windows XP this April], many people got new PCs for the holidays.
(Why not just upgrade to a newer version of Windows in place? Well, [Microsoft Windows 7 requires a minimum of 1GB of RAM, with 4GB recommended], and these old machines simply do not have enough memory. If the motherboard could support the hardware and software upgrades, the cost of Windows 7 license and 4GB of RAM might get into hundreds of dollars!)
So what happens to the old machines? They come to me, of course, with three requests:
If possible, rescue existing documents and photos from the old PC
Wipe the hard drive clean, what we in the IT storage industry call a "Secure Erase"
Give the old PC to charity or appropriate recycling facility
I had six old machines to work on this year. Generally, I only get the towers, as most people keep their mouse, keyboard and monitor for their next machine.
For five of them, the process was fairly straightforward. First, I would boot up the system to see what it was running, typically Windows XP or Windows Vista, and simply transfer the "My Documents" folder to an external USB drive.
If the system doesn't boot on its own, perhaps because the OS on the hard drive is corrupted or infected by a virus, then I would boot a Linux-based LiveCD, such as my favorite [SystemRescueCD], and copy the data over to an external USB drive that way.
Second, from the SystemRescueCD, I would run [fdisk] to delete all the existing partitions and create a new partition, and then run [shred] or [scrub] to perform a secure erase.
(The shred tool is more thorough, but I prefer scrub for its ease-of-use. Its default National Nuclear Security Administration (NNSA) method writes over the entire disk four times with different random patterns of data.)
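To make the wipe step concrete, here is the same technique demonstrated on a scratch file rather than a real disk; on an actual machine the target is the device itself, which is destructive and unrecoverable, so triple-check the device name first.

```shell
# Stand-in for a disk: 4MB of random data
dd if=/dev/urandom of=scratch.img bs=1M count=4 status=none
# shred: three random passes, then a final zero pass (-z)
shred -n 3 -z scratch.img
# Count the non-zero bytes left behind -- prints 0 after the zero pass
tr -d '\0' < scratch.img | wc -c
# Real-disk equivalents (destructive!):
#   shred -v -n 3 -z /dev/sda
#   scrub -p nnsa /dev/sda
```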
Third, I would do a fresh install of the now outdated Linux Mint 12 LXDE from CD. Why Linux Mint 12 LXDE? I don't have to worry about any licensing issues with Linux. Linux Mint is the [fourth most widely used home operating system] in the world.
The latest version of Linux Mint is 16, and version 13 has Long Term Support through 2017, but version 12 is the last release small enough to fit on a 700MB CD for the old machines that cannot read the higher capacity DVD media.
Linux Mint comes with various graphical interfaces, but the Lightweight X11 Desktop Environment [LXDE] edition runs in as little as 256MB of memory, the minimum that Windows XP requires. Many newer operating systems expect 1GB or more. The machine is then ready to give to charity. Whoever gets it can certainly install a different OS if they prefer.
So, the process went smoothly for the first five, but the sixth machine gave me an interesting challenge. Here are the specs:
Operating System: Windows 98
Processor: AMD-K6 (Pentium II-class) 150 MHz
Memory: 32MB RAM
Hard disk: 10GB
Removable media: 3.5-inch floppy and CD-ROM drive
Keyboard port: standard PS/2
Mouse port: 6-pin DIN
Ethernet NIC: 10Mb
USB ports: none
Yikes! Windows 98? 32MB of RAM? Even a [Raspberry Pi] has more than this!
My keyboard fits, but my mouse doesn't, so I had to look up Windows 98 keyboard shortcuts to navigate the system. The age of the files indicates this machine was actively used from 1999 to 2005. While most people only keep a PC for 3-5 years, this hardware is 14 years old! It has been sitting in Judy's closet collecting dust the rest of the time.
Without a USB port or CD burner, there were only two ways to get data off this system: the 1.44MB floppy drive, or the Ethernet card. I was able to configure TCP/IP and connect via FTP back to my FTP server, allowing me to copy the files over.
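From the Windows 98 side, the transfer looked roughly like the session below. The server address and folder name are illustrative, and the old Windows ftp client's mput does not recurse into subfolders, so each folder has to be sent separately.

```
C:\> ftp 192.168.1.10
ftp> binary          (transfer files byte-for-byte, not as text)
ftp> prompt          (turn off the per-file confirmation)
ftp> lcd C:\MyDocs   (local folder to send; repeat per folder)
ftp> mput *
ftp> bye
```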
Most of the LiveCDs I tried just froze mid-boot without sufficient memory. Not even my SystemRescueCD would boot. I was able to use [Basic Linux BL3 version 3.5], which boots from two floppy diskettes and requires only 12MB of RAM.
Basic Linux has neither the shred nor scrub utilities, so I used the old-school "dd" command, which was painfully slow.
dd if=/dev/zero of=/dev/hda1
While this was not as secure as the NNSA, Department of Defense (DoD), or Gutmann methods of erasure, I figured it was good enough for a 14-year-old machine that had not been used since 2005.
While BL3 includes an install-to-hd script to copy the files over to the hard drive, I could not get LILO to boot natively from /dev/hda1. So, I switched to booting from Damn Small Linux [DSL] LiveCD. Using the "dsl 2" boot cheat code, I was able to boot directly to a superuser text-based prompt, allowing me to create two partitions, a 128MB swap and the rest for an ext2 file system.
DSL only requires 8MB of RAM, but having the extra 128MB swap ensures success. I was able to install DSL on the hard drive, fix up lilo.conf, and boot directly from it.
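For reference, the formatting step can be sketched as follows. It is shown here against image files, since running it against a real disk is destructive; on this machine the actual targets were /dev/hda2 (the 128MB swap) and /dev/hda1 (the ext2 root).

```shell
# Image files standing in for the two partitions
dd if=/dev/zero of=swap.img bs=1M count=8 status=none
dd if=/dev/zero of=root.img bs=1M count=16 status=none
mkswap swap.img          # real machine: mkswap /dev/hda2 && swapon /dev/hda2
mke2fs -q -F root.img    # real machine: mke2fs /dev/hda1  (ext2 file system)
```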
What a great way to start a new year! Happy New Year everyone!
technorati tags: laptop, Linux, Microsoft, Windows, Windows XP, Windows 98, LiveCD, Secure Erase, NNSA, DoD, Gutmann, BasicLinux, BL3, Damn Small Linux, LowRam
Are you trying to find the right way to explain Storage Management concepts to your friends and family at the next holiday cocktail party?
One of my readers made the following request:
Having been around IBM Storage for some time, I was wondering if by chance you might recall an old recording about the "Hierarchical Sock Manager". I have a vague recollection, but I can't remember who did it or when, which means that I have no way to ask if anyone has a copy. This was an analogy comparing levels of storage of socks (i.e. footwear) to dresser drawers and boxes in the garage. Sound familiar?
I had mentioned this video in my 2007 blog post [Re-arranging the Sock Drawer], so I felt I needed to at least make an effort to track it down.
As it turns out, the IBM sales executive in the video, Charles "C.D." Larson, now works for another company (Hitachi Data Systems). Thanks to social media, I was able to get in contact with him, and he sent me a copy of this 1989 video, and granted me permission to post it on YouTube.
To put it on YouTube, I had to convert the VOB file to something YouTube could understand. Since I run Linux, I was able to use the [ffmpeg] utility to do this. The result is now an [18-minute video], uploaded for all to enjoy.
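The conversion itself is a short command. The sketch below is my assumption of the invocation, not the exact command used -- the file names are placeholders, and H.264 video with AAC audio is one reasonable choice of output that YouTube accepts.

```shell
# Transcode the DVD-format VOB file into an MP4 suitable for YouTube.
# File names are placeholders; codec choices are one reasonable option.
ffmpeg -i sock_manager.vob -c:v libx264 -c:a aac sock_manager.mp4
```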
The concepts discussed back then still apply today. Yes, we still have DFSMS for the mainframe mentioned in the video, but we have also extended these concepts to the Active Cloud Engine in the SONAS and Storwize V7000 Unified, as well as the hierarchy management included in the Linear Tape File System (LTFS) Enterprise Edition (LTFS-EE) solutions.
Happy Winter Solstice, or whatever holiday you may choose to celebrate this season!
technorati tags: IBM, DFSMS, DFHSM, DFDSS, RACF, ISMF, ABARS, DFSMShsm, HSM, CD Larson, HDS, ffmpeg, YouTube, SONAS, Storwize+V7000+Unified, LTFS, LTFS-EE