Well, it's Tuesday again, and you know what that means? IBM announcements!
I was afraid IBM was going to pile up all the announcements on one day at Edge next week, so I am glad that our new General Manager, Jamie Thomas, has agreed to spread them out a bit. Last week, IBM [announced new SAN Volume Controller and Storwize models], and yesterday, IBM [announced Elastic Storage].
Today is all about the [enhancements to the IBM System Storage DS8870], one of IBM's Enterprise-class high-end disk systems.
High Performance Flash Enclosure
When IBM designed the DS8870, it replaced the bulk power supplies and batteries of the previous DS8800 model with highly energy-efficient [DC-UPS] units. In addition to reducing the overall energy consumption of the DS8870, this also gave the engineers space above the units for 4U of standard 19-inch rack equipment.
The High Performance Flash Enclosure provides an ultra-dense, ultra-high-performance option. Each HPFE can deliver up to 250,000 IOPS and up to 3.4 GB/s of bandwidth.
Up to thirty 387 GB Enterprise Multi-Level Cell (eMLC) flash cards provide up to 11.6 TB of raw capacity, about 9.2 TB usable, in only 1U of 19-inch rack space. A pair of very powerful integrated SAS RAID engines manage RAID-5 across the flash cards. The HPFE attaches directly to GX++ slots in the two DS8870 POWER7+ controllers, rather than using the Device Adapter (DA) loops.
You can have up to four of these HPFEs in the "A" frame of your DS8870. Each HPFE can have either 16 or 30 flash cards. With 16 cards, you have two spares plus two 6+P RAID-5 ranks; with 30 cards, you add another two 6+P RAID-5 ranks.
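For those who like to check the math, here is a quick sketch in Python. The 387 GB card size and 6+P rank layout come straight from the description above; the usable figure here is simple arithmetic and ignores formatting overhead, which is why the announcement quotes roughly 9.2 TB rather than the raw 9.3 TB:

```python
CARD_GB = 387  # eMLC flash card capacity from the announcement

def hpfe_capacity(cards):
    """Return (raw_gb, usable_gb) for an HPFE with 16 or 30 flash cards.

    16 cards -> 2 spares + two 6+P RAID-5 ranks
    30 cards -> 2 spares + four 6+P RAID-5 ranks
    """
    if cards == 16:
        ranks = 2
    elif cards == 30:
        ranks = 4
    else:
        raise ValueError("HPFE holds 16 or 30 cards")
    raw = cards * CARD_GB
    usable = ranks * 6 * CARD_GB  # 6 data cards per 6+P rank
    return raw, usable

print(hpfe_capacity(30))  # -> (11610, 9288): ~11.6 TB raw before overhead
```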
Easy Tier Enhancements
Easy Tier is IBM's market-leading sub-volume automated tiering inside the DS8870 disk system. There were several enhancements in this announcement.
The first enhancement is to "Easy Tier Server", a feature that coordinates caching of active blocks of data inside the server's own internal Flash. This previously supported Power Systems with EXP30 Ultra drawers, and has now been expanded to support the IBM [Flash Adapter 90].
The IBM Flash Adapter 90 was announced last October as part of the [IBM Power Systems feature new I/O enhancements] announcement.
The second enhancement is to the three-level (Flash, Enterprise, Nearline) tiering algorithm. Inside the DS8870, the new HPFE flash cards are part of the "Flash Tier" along with solid state drives (SSD) attached to the DA loops. Internal inter-tier load balancing takes into account the faster nature of the flash cards in the HPFE, and moves the busiest blocks accordingly. We refer to this as "micro-tiering" within the Flash Tier.
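Conceptually, this kind of placement can be pictured as sorting extents by recent I/O heat and assigning the hottest ones to the fastest media. The toy sketch below is NOT IBM's actual Easy Tier algorithm; the tier names and capacities are invented for illustration, but they show the general idea:

```python
# Toy illustration of heat-based extent placement, hottest-first.
# Tiers are listed fastest to slowest; capacities are in extents.
# This is NOT the Easy Tier algorithm, just the general concept.
def place_extents(extent_heat, tiers):
    """extent_heat: {extent_id: recent IOPS}; tiers: [(name, capacity), ...]"""
    placement = {}
    hottest_first = sorted(extent_heat, key=extent_heat.get, reverse=True)
    i = 0
    for name, capacity in tiers:
        for ext in hottest_first[i:i + capacity]:
            placement[ext] = name
        i += capacity
    return placement

heat = {"e1": 900, "e2": 15, "e3": 420, "e4": 2}
tiers = [("hpfe_flash", 1), ("ssd", 1), ("enterprise_hdd", 2)]
print(place_extents(heat, tiers))
# e1 lands on hpfe_flash, e3 on ssd, e2 and e4 on enterprise_hdd
```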
Broader Solid State Drive options
Not everybody likes the 400GB solid state drives IBM offered for the DA loops, so IBM is now offering smaller 200GB and larger 800GB options as well.
Enhanced Concurrent Code Load
The new DS8870 R7.3 firmware release cuts the activation time of concurrent code load in half.
Nobody likes warmstarts either. These are a necessary evil for some error conditions, but the clever engineers upstairs have figured out ways to reduce the number of warmstarts, and to eliminate the warmstart entirely after certain events, preventing any application impact on the attached hosts.
Multi-Target Remote Mirror
By now you know that IBM has the market-leading remote mirroring services for high-end disk systems, using less bandwidth and maintaining better concurrency than high-end systems from other vendors.
The DS8870 R7.2.7 firmware release can now support multi-target remote mirror. In previous releases, if you wanted three-site disaster recovery, you relied on Metro/Global Mirror, where site "A" had a Metro Mirror to a bunker site "B", and then site "B" had a Global Mirror to site "C". Not everybody liked this.
Some clients have asked for a "star" configuration, where "A"-to-"B" and "A"-to-"C" are independent of each other. A SCORE request is available for the following configurations:
Two Metro Mirror relationships
One Metro Mirror and one Global Copy relationship
Two Global Copy relationships
While Metro Mirror can support up to 300km distance, and Global Copy can go any distance around the planet, there is no reason why you can't have one or both copies in the same building, or on campus nearby, for use with HyperSwap.
OpenStack Cinder interface support
Last but not least, the DS8870 now offers full support for OpenStack Havana and Icehouse releases. Support is provided through the OpenStack Cinder driver currently available for download. IBM is a platinum sponsor of the OpenStack foundation.
To learn more about the IBM [DS8870 disk system], or any other IBM Storage System solution for that matter, attend next week's [IBM Edge 2014 conference]. Look for me, I'll be there!
technorati tags: IBM, DS8870, #IBMEdge, High Performance Flash Enclosure, HPFE, DS8800, DC-UPS, eMLC, SSD, Flash card, RAID-5, Easy Tier, automated tiering, HyperSwap, OpenStack, Cinder, Havana, Icehouse
International Technology Group [ITG] has just published a series of papers about IBM SmartCloud Virtual Storage Center (VSC) and SAN Volume Controller/Storwize storage hypervisor virtualization technology detailing the cost benefit advantages over EMC and VMware.
IBM delivers up to 72% lower storage TCO than EMC storage virtualization and management solutions in large enterprises ... and up to 35% lower storage TCO than VMware tools in mid-sized environments
Here are the reports:
To learn more, check out the [SmartCloud VSC Wiki], which is full of resources.
Also, you can watch an interview with the study's author, International Technology Group Managing Director, Brian Jeffery, live from next week's IBM Edge Conference in Las Vegas. Brian will be interviewed on [TheCUBE by Wikibon] on Monday afternoon. Watch it live on May 19!
I will be at Edge next week. If you plan to be there, I would be glad to discuss these ITG findings with you and your clients in person.
technorati tags: IBM, #IBMEdge, TCO, SmartCloud, Virtual Storage Center, VSC, SAN Volume Controller, SVC, Storwize, EMC, VMware, ITG, Brian Jeffery
Today, I attended the IBM Fast Data Forum. This was a special announcement event for press, analysts and IBM employees.
My fifth-line manager, [Tom Rosamilia], IBM Senior Vice President of Systems Technology Group, kicked off the ceremonies.
The world is changing fast, and technology has changed the way we live, and the way we work. For example, nearly [80 percent of people use their smart phone 22 hours a day]. Tom then introduced our first speaker, Jamie Thomas.
Jamie Thomas, IBM General Manager of Storage and Software Defined Environments
Jamie announced [IBM Elastic Storage], a new offering that is available as a software defined storage solution, based on IBM's General Parallel File System (GPFS) technology already deployed at 45,000 installations.
IBM Elastic Storage provides a global namespace across data center locations. It can manage up to a Yottabyte of information, combining Flash, disk and tape resources. It supports OpenStack interfaces, Hadoop and standard POSIX file system conventions.
IBM Elastic Storage provides automated tiering to move data between different storage media types. Infrequently accessed files can be migrated to tape and automatically recalled back to disk when required. Unlike traditional storage, it allows you to smoothly grow or shrink your storage infrastructure without application disruption or outages.
IBM Elastic Storage software can run on a cluster of x86 and/or POWER-based servers, and can be used with internal disk, commodity storage, or advanced storage systems from IBM or other vendors.
IBM partnered with various clients in different industries in a special beta program. Jamie led a client panel to discuss their experiences with IBM Elastic Storage:
Alan Malek, Director of IT, Cypress Semiconductor.
"Total cycle time is key". Over the past 31 years, they bought whatever file storage was available. Now, with IBM Elastic Storage, the performance was very consistent for their engineering workloads with full load balancing.
Russell Schneider, Principal Storage Consultant, Jeskell.
Russell's company works with a lot of federal agencies, "Big Data has become Bigger Data". For example, research on Global Warming and Climate Change requires a large amount of storage across agencies.
In another example, when the tsunami hit Japan a few years ago, an agency here in the USA realized they had 14PB of data stored as a single copy in a data center at sea level less than a mile from the coast. They realized they needed to have a secondary copy, and an option to cache to a third location depending on regional disasters.
Matthew Richards, Products, OwnCloud.
For those not familiar with OwnCloud, it provides a Dropbox-like file sharing service, but in the Enterprise, with on-premise storage. It has been fully tested and certified with IBM Elastic Storage to provide a secure file sharing platform.
With IBM Elastic Storage, they were able to scale linearly up to 20,000 users, and are now testing 100,000 users. The need to have intelligent access to files at scale is what Matthew likes about IBM Elastic Storage.
Dr. Michael Factor, IBM Distinguished Engineer at IBM Research
Michael started out explaining there are three areas for storage: block, file and object. The fastest growing type of data is unstructured fixed content with associated metadata. This is ideal for object storage. Michael has been working with OpenStack Swift, an open source interface defined for object storage. He defined "storlets" as follows:
Storlets extend an object store by moving computation to the data -- filtering, transforming, analyzing -- instead of bringing data to the computation.
Storlets have been deployed on a variety of European Union research projects. For example, in partnership with Philips, a pathology storlet can count the number of cancer cells in an image. By bringing the computation to the data, it eliminates having to transfer large amounts of data over the network.
Storlets can run on-premise and on IBM's SoftLayer IaaS cloud offering.
Bruce Hillsberg, IBM Director of Storage Systems at IBM Research
Bruce led another panel discussion, this time of IBM storage experts:
Vincent Hsu, IBM Fellow and CTO of Storage.
The problem is the isolation of data into "storage silos". Isolation causes problems in managing large amounts of data at scale, and costs more as storage is not fully utilized. IBM Elastic Storage brings everything together, eliminating storage silos.
IBM Elastic Storage can scan [10 billion files on a system in 43 minutes].
Dr. Michael Factor, IBM Research.
Michael explained how IBM works with clients all over the world to ensure that storage solutions meet client requirements. For example, storlets can be used to use rich metadata to manage photographs, and display them based on GPS satellite location, or other content that makes it easier to manage these images.
IBM Elastic Storage will support OpenStack Cinder and Swift interfaces. IBM is a platinum sponsor of OpenStack foundation, and is now its second most prolific contributor, with hundreds of full-time employees working on this.
Tom Clark, IBM Distinguished Engineer, Chief Architect, Storage Software, Cloud & Smarter Infrastructure.
Storage Management is a critical piece of Software Defined Storage. This is done in three ways:
The use of analytics to optimize the deployment of storage, based on workload requirements. Storage admins set policies, and then IBM Elastic Storage analytics gather metrics and optimize data placement and movement based on these policies. IBM Elastic Storage has 70 percent lower TCO than competitive offerings.
The focus on backup services. Backups are not just for data protection, but rather can be used to duplicate or replicate data for testing, for training, and for other purposes. IBM Elastic Storage is fully supported by IBM Tivoli Storage Manager.
Being able to support Hybrid Cloud environments, where some data can be on-premise, and other data off-premise. Storage Management challenges will need to deal with this possibility. IBM Elastic Storage is well positioned for this.
Carl Kraenzel, IBM Distinguished Engineer, Director of Watson Cloud Technology and Support.
Watson is ground-breaking technology, and IBM Elastic Storage technology was at the heart of the Watson that was first introduced in 2011.
To consider IBM Elastic Storage only in terms of lower cost and higher scalability misses the full picture. Rather, it is an important platform for Cognitive Computing, which we are only beginning to explore. IT systems need to be aware of the context of what we are doing.
While the Grand Challenge demonstration on Jeopardy! was exciting, it is time we stop playing games and apply IBM Elastic Storage to business, to help with health care and medical research, and other problems in society. IBM has already deployed this at MD Anderson Cancer Center and Memorial Sloan Kettering Cancer Center, for example.
Tom Rosamilia provided closing remarks. IBM Elastic Storage is not just for new workloads in Cloud, Analytics, Mobile and Social (CAMS), but for traditional workloads as well. IBM Elastic Storage provides "data democracy" and allows for "better rested storage administrators" who make fewer mistakes.
Tom opened the floor for questions from the audience:
Q1. Data integrity, not just security but also quality? IBM Elastic Storage has end-to-end data integrity checking built-in.
Q2. How does IT transition from full control to auto-pilot? IBM allows you to tap into existing storage. This is not rip-and-replace. With storage virtualization, IBM hides the complexity that normally requires full control over specific assets.
Q3. Storage admins would rather have a root canal without Novocaine than move their data. What is IBM doing to offer automation to help storage admins move to this new infrastructure? IBM storage virtualization breaks that hard link between applications and specific storage devices. IBM Elastic Storage eliminates application downtime previously associated with data movement.
Tom Rosamilia assured the audience that IBM is fully committed to its storage portfolio. IBM Elastic Storage is not just about the profoundness of what IBM announced today, but also where IBM is investing in the future of storage.
technorati tags: IBM, Fast Data Forum, #fastdata, Tom Rosamilia, STG, Jamie Thomas, Software Defined Storage, Software Defined Environment, Elastic Storage, Alan Malek, Cypress Semiconductor, Russell Schneider, Jeskell, Matthew Richards, OwnCloud, Michael Factor, storlets, Bruce Hillsberg, IBM Research, Anderson Cancer Center, Memorial Sloan Kettering, Tom Clark, Carl Kraenzel, Novocaine, data democracy
Wow! It has been six years already since [IBM acquired Diligent] and launched the [IBM ProtecTIER® data deduplication storage solutions]! My how time flies.
Marking the occasion, here is an important letter from our Vice President, Laura Guio:
May 6, 2014
To Whom It May Concern
Subject: ProtecTIER Development Update:
This year marks the sixth anniversary of IBM's acquisition of Diligent Technology. Over the past six years IBM has emerged as a leader in enterprise class data deduplication. Our highly scalable, dual node hardware redundancy and gateway design are unique characteristics in the industry. IBM fundamentally believes in the importance of cost saving data deduplication technology and continues to enhance our solution, improve value and increase investment protection for our installed base.
First, it is important to note what IBM has done most recently. IBM is among the first to integrate flash technology along with deduplication to boost performance and lower cost. Integration of the IBM FlashSystem 840 for metadata was completed the day the system was publicly announced. The speed of technology integration is a result of our flexible gateway design which simplifies technology adoption. It also is enabled by our global development team providing a 24x7 system design, product test and integration environment.
Secondly, IBM has recently released ProtecTIER Mainframe Edition which enables the same enterprise class deduplication capability now for IBM System z. Another distinctive feature of ProtecTIER is its ability to sustain high throughput for both read and write operations. Most deduplication methodologies have an inherent read performance penalty. Since mainframe tape operations are much more read intensive than distributed systems, we were one of the first to market with a practical deduplication offering for all mainframe tape applications.
That's just what we've done getting out of the starting blocks in 2014. Our development team continues to enhance ProtecTIER. We're also working on refreshing the entire ProtecTIER product line with new model enhancements. A new gateway design is underway which will improve performance of the existing DD5. We expect this to be available as an upgrade, providing investment protection for existing ProtecTIER clients. The SM2 product family is also being redesigned to extend its capacity range. Along with hardware changes, we will widen the disk support matrix offering enhanced flexibility and new levels of price performance.*1*
We expect 2014 to be a busy year for IBM deduplication. We have development facilities around the world in Europe, North America, Central America and Asia, working on ProtecTIER. IBM continues to market, sell, and support ProtecTIER as our strategic offering for cost-reducing deduplication technology. Any suggestion that ProtecTIER is fading away is wishful thinking by our competitors. We are working to expand our markets as we have demonstrated by our recent introduction of ProtecTIER into the mainframe. Furthermore, we are looking to expand the use cases for ProtecTIER, which can now be attached as a NAS file system, to other areas besides pure backup. We're excited about what we are delivering today and where we can provide leadership by leveraging deduplication for customer storage environments.
Vice President, Business Line Executive Storage Systems
IBM Systems and Technology Group
*1* IBM's statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM's sole discretion. The development, release, and timing of any future features or functionality described for our products remains at our sole discretion.
To learn more about IBM ProtecTIER, consider attending the [IBM Edge conference], May 19-23, 2014 at the Venetian Hotel in Las Vegas. I'll be there to explain Data Deduplication technology as part of my "Data Footprint Reduction" presentation!
technorati tags: IBM, IBM acquisitions, Diligent Technology, ProtecTIER, DD5, SM2, FlashSystem, FlashSystem 840, ProtecTIER Mainframe Edition, NAS, Laura+Guio, #ibmEdge
Well, it's Tuesday again, and you know what that means? IBM announcements!
Today's announcements are all about the Storwize family, IBM's market-leading Software Defined Storage offerings. Having sold over 55,000 systems, and managing over 1.6 Exabytes of data, IBM continues to be the #1 leader in storage virtualization solutions. The Storwize family consists of the SAN Volume Controller (SVC), Storwize V7000, Storwize V7000 Unified, Flex System V7000, Storwize V5000, Storwize V3700 and V3500.
SAN Volume Controller 2145-DH8
The new 2145-DH8 model is a complete repackaging of this popular storage system. The previous model, the 2145-CG8, was a 1U-high x86 server per node, and each node required a separate 1U-high UPS to provide battery protection for its cache. Nobody liked this. The new 2145-DH8 is instead a 2U-high node with two hot-swappable batteries, eliminating the need for a UPS altogether. Thus, an SVC node-pair using 2145-DH8 models takes up the same 4U of space, but with fewer cables. The SVC can now also support standard office 110/240 volt power sources.
The new model sports an 8-core processor with 32GB RAM. Since these are 2-socket servers, IBM offers the option to add a second 8-core processor and an additional 32GB of RAM to help boost Real-time Compression. Each node can optionally have one or two hardware-assisted compression cards, which use the Intel QuickAssist chip to boost compression performance.
Real-time Compression was always, in fact, real-time: compression is performed in-line with the read/write I/O process, at latency comparable to uncompressed data for applications. On older models, however, the compression process was entirely software-based, consuming some of the CPU resources, which lowered the maximum IOPS of the solution. With the added cores, added RAM, and hardware-assisted compression chips, IBM resolves that concern. In fact, the new 2145-DH8 with compression can provide more IOPS than an older 2145-CG8 without compression.
The previous model 2145-CG8 allowed you to put up to four small SSD drives in the node itself, which were treated the same as external Flash drives for purposes of having a high-speed storage pool for select volumes, or automated sub-LUN tiering with Easy Tier. The new model 2145-DH8 allows you to attach up to 48 Solid State Drives (SSD) via 12Gb SAS cables. These are housed in the new 2U-high 24F enclosures, which can offer up to 38.4 TB of Flash per SVC I/O group.
IBM also re-designed the host/device ports to use Hardware Interface Card (HIC) slots. On the 2145-CG8, you had four FCP ports and two 1GbE Ethernet ports, with options to add two 10GbE Ethernet ports or four additional FCP ports. If you had mostly an FCoE or iSCSI environment, you didn't need the FCP ports, and if you had mostly an FCP Storage Area Network (SAN) environment, then most of the Ethernet ports went unused. To solve this, the 2145-DH8 allows up to six HIC cards that are either FCP, Ethernet, or SAS. There are also three fixed 1GbE Ethernet ports, which can be used for iSCSI and administration.
If you have SVC today, you can upgrade non-disruptively by either swapping out your current SVC engines with the new 2145-DH8 engines, or you can add the new 2145-DH8 engines to your existing SVC cluster. Either way, there is no outage to your applications!
To learn more, see the [Announcement letter: SAN Volume Controller Storage Engine DH8].
New Storwize V7000 hardware
This is the next generation of the popular Storwize V7000. The previous generation had a 4-core processor and 8GB RAM per canister. The new model has an 8-core processor with 32GB of RAM per canister, with the option to double these to boost Real-time compression. There are two canisters per control enclosure, which gives you 64GB to 128GB of RAM per Storwize V7000 I/O group.
The new Storwize V7000 comes with one hardware-assisted compression chip on the mother board of each canister, with the option to add a second chip per canister.
Each canister offers three HIC slots, which can be used for the additional hardware-assist compression chip, FCP or Ethernet ports.
To accommodate these HIC slots, new canisters were needed. Instead of the flat, wide canisters stacked top and bottom, we now have taller, thinner canisters that sit side by side. This side-by-side design is similar to our existing Storwize V5000 and V3700 models.
The previous model could support up to nine expansion enclosures per control enclosure. The new Storwize V7000 can have up to 24 drives in its control enclosure, and can now attach up to 20 expansion enclosures, which allows up to 504 drives per control enclosure, and up to a maximum of 1,056 drives per Storwize cluster.
If you have previous models of Storwize V7000, you can add the new Storwize V7000 into the same cluster, or virtualize the previous storage for migration purposes.
To learn more, see the [Announcement letter: New Storwize V7000].
IBM Storwize Family Software V7.3.0
The new software brings new capabilities to both the new generation of hardware and the older models, so people with existing gear can benefit as well.
In prior releases, the sub-LUN automated tiering was limited to two levels: Flash and HDD. This lumped all 15K, 10K and 7200 RPM drives into a common HDD category. In the new v7.3.0 code, you can now have three levels: Flash, Enterprise HDD, and Nearline HDD, or two HDD levels: Enterprise and Nearline. The Enterprise level combines 15K and 10K RPM drives, similar to what is done on the IBM System Storage DS8000 disk systems.
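The tier classification above is easy to express in code. This is a minimal sketch of the mapping as described in the text, not IBM's internal implementation; the dictionary keys are invented for illustration:

```python
def drive_tier(drive):
    """Map a drive to its tier under the v7.3.0 three-tier scheme.

    Illustrative only: tier names follow the text above, where 15K and
    10K RPM drives share the Enterprise tier and 7200 RPM is Nearline.
    """
    if drive["type"] == "flash":
        return "Flash"
    if drive["rpm"] >= 10000:   # 15K and 10K RPM drives
        return "Enterprise HDD"
    return "Nearline HDD"       # 7200 RPM drives

print(drive_tier({"type": "hdd", "rpm": 10000}))  # -> Enterprise HDD
```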
The new code is also able to balance your storage pools, and can be used with uniform or mixed storage pools to eliminate performance hot spots.
The new code has been enhanced to detect the hardware-assisted compression chip on the new SVC and Storwize V7000 models, and use those if available.
For the Storwize V3700 and V5000 models, the new code allows up to nine expansion enclosures per control enclosure. Previously, the V3700 allowed only four expansions, and the V5000 only six expansions per control enclosure. The V3700 can now support up to 240 drives, and the V5000 up to 480 drives.
To learn more, see the [Announcement letter: Storwize Family Software v7.3.0].
IBM Storwize V7000 Unified File Module software v1.5
For Storwize V7000 Unified clients, there is new software for the File Modules, which provide NFS, CIFS, FTP, HTTPS and SCP protocol capability. The new v1.5 code adds support for NFS v4 and SMB 2.1. Most NFS users are still on NFSv3, but about 20 percent are using NFS v4, which offers stateful access. SMB 2.1 for CIFS was introduced by Microsoft in Windows 7 and Windows Server 2008 R2.
Deterministic ID mapping allows you to map Windows userids to UNIX/Linux group and owner id numbers. In the past, the problem was that this mapping was different on each machine, so people often had to stand up a Windows Services for UNIX (SFU) server to provide consistent ID mapping. With the v1.5 code, you no longer have to do this: the deterministic ID mapping can now replicate the mapping to each machine without an SFU server.
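The key property of a deterministic mapping is that the UNIX id is a pure function of the Windows identifier, so every node computes the same answer independently. The sketch below is purely illustrative; the base offset and derivation scheme are made up, not the product's actual mapping rules:

```python
# Illustrative deterministic mapping of a Windows SID to a UNIX uid.
# Because the result is a pure function of the SID, every machine
# computes the same uid independently -- no central SFU server needed.
# The UID_BASE offset and scheme here are invented for illustration.
UID_BASE = 10_000_000

def sid_to_uid(sid):
    """Derive a uid from the RID (the last component) of a Windows SID."""
    rid = int(sid.rsplit("-", 1)[1])
    return UID_BASE + rid

print(sid_to_uid("S-1-5-21-3623811015-3361044348-30300820-1013"))
# -> 10001013, the same on every machine, every time
```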
Active Cloud Engine allows up to ten Storwize V7000 Unified systems to be connected across distance to form a single global name space. Previously, WAN caching was restricted to a single site having write capability, while the others were read-only. In the v1.5 release, IBM now supports multiple independent writers at different locations on the same fileset.
Security enhancements include multi-tenancy, configurable password policies, session policies, and hardened boot and SSH configurations. With NFS v3/v4, you can now use [Kerberos] for security.
Finally, I am pleased to see that we now have Cinder support for files on the Storwize V7000 Unified with the OpenStack Havana release that just came out last month. The OpenStack Cinder interface can assign LUNs to virtual machines, but the new Havana release allows NAS systems to dole out files that act as LUNs, such as OVA or VMDK files. The advantage is that these files can be managed by Active Cloud Engine, cached locally across the global name space, placed on appropriate storage tiers by policy, and inactive virtual machine images can be migrated to less expensive disk or tape.
To learn more, see the [Announcement letter: Storwize Family Software v7.3.0].
You can learn more about the Storwize family at the [IBM Edge Conference], May 19-23, at Las Vegas. I'll be there!
technorati tags: IBM, Announcements, SAN Volume Controller, SVC, Storwize, Storwize V7000, Flex System V7000, Storwize V5000, Storwize V3700, 2145-DH8, hardware-assisted compression, Real-time Compression, Intel QuickAssist, New Storwize, HIC, Easy Tier, Storwize V7000 Unified, File Modules, OpenStack, OpenStack Havana, OpenStack Cinder, multiple-writer, independent-writer, Active Cloud Engine, Windows SFU, Kerberos, Storwize family, #ibmEdge, Las Vegas
Systems Technical University 1001 Arabian Nights
Wrapping up my coverage of the [Systems Technical University 2014] conference, we had a special dinner with entertainment on Wednesday evening.
Before dinner, I was able to catch up with my colleagues from across the pond. Here I am pictured with Ola Surowiec, a Power Systems sales specialist from Scotland.
The dinner was set up as a self-service buffet, with choices of European, Asian, and Middle Eastern cuisine. This largely reflects the heritage of the Ottoman Empire, which fused flavors from its neighbors.
The city of Istanbul is considered the border between Europe and Asia, with one side of the city on the "European" side, and the other side of the Bosphorus strait being the "Asian" side.
With a population of over 14 million, Istanbul forms one of the largest urban agglomerations in Europe, second largest in the Middle East and the third-largest city in the world by population within its city limits.
The entertainment started with two [belly dancers], one male and one female. (IBM is an equal opportunity employer!) For those not familiar with this particular form of performance art, it is improvised folk dances based on torso articulation and abdominal movements.
I have seen dancers before in Egypt, the country that most people associate with the origin of belly dancing, but the Turkish version is considered more energetic and athletic. Certainly both of our dancers were quite flexible.
This was followed by a live cover band that played the latest English-language hits. Several Americans at the table asked "Wait? We come all the way to Turkey and the local band sings the songs in English?"
I had to explain that [the Beatles made their start playing in Germany]. This let the band hone their performance skills, widened their reputation, and led to their first recording.
Today, what music tops the charts throughout Europe, including countries like Turkey that are predominantly not English-speaking residents, are mostly from American musicians. Emmanuel Legrand has a great article on this titled [Europe's music scene -- A mosaic of talent united by one language].
In the corner, attendees were invited to dress up as their favorite sultan and have a photograph taken. Here, for example, are some members of the STU event team: Mo McCullough, Don Meyer, Marlin Maddy, Glenn Anderson and Alex Abderrazag pose with two lovely local ladies in full costume.
The word "sultan" derives from the Arabic word for "strength", "authority" or "power". Sultans ruled the Ottoman Empire from 1299 to 1922.
The [Topkapi palace], which I visited earlier in the week, has on display clothing of the sultans and princes from the second half of the 15th century to the early 20th century.
A fun time was had by all!
technorati tags: IBM, #ibmtechu, Systems Technical University, Istanbul, Ottoman Empire, Ola Surowiec, Power Systems, Emmanuel Legrand, Mo McCullough, Marlin Maddy, Glenn Anderson, Topkapi palace
Continuing coverage of the [Systems Technical University 2014] conference, we had our last set of breakout sessions on day 4.
- New Generation Storage Tiering: Less Management, Lower Investment and Increased Performance
This was not just an update to my session last year in Brussels, Belgium. Rather, I decided to start over, focusing on I/O density as the key metric, armed with real data from Intelligent Storage Tiering Analysis (ISTA) studies done at various clients. From that, I was able to talk about storage tiering on three fronts:
- Storage tiering between Flash and disk. IBM FlashSystem and IBM Easy Tier on DS8000 and Storwize family for hybrid Flash-and-disk configurations.
- Storage tiering between disk and tape. HSM and Information Lifecycle Management (ILM) on SONAS, Storwize V7000 Unified and LTFS-EE.
- Storage tiering automation across your entire environment. ISTA studies can help identify a target mix of Tier 0, Tier 1, Tier 2 and Tier 3 storage. SmartCloud Virtual Storage Center can recommend or perform the movement of LUNs to more appropriate tiers, based on age and I/O density measurements.
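The I/O density metric above is simply IOPS per unit of allocated capacity. Here is a hedged sketch of how a LUN might be bucketed by that metric; the thresholds are invented for illustration and are not ISTA's actual cutoffs:

```python
def io_density(iops, capacity_gb):
    """I/O density: IOPS per GB of allocated capacity."""
    return iops / capacity_gb

def suggest_tier(density):
    """Bucket a LUN into a tier by I/O density.

    The thresholds below are illustrative only, not actual ISTA guidance.
    """
    if density >= 1.0:
        return "Tier 0 (Flash)"
    if density >= 0.1:
        return "Tier 1 (Enterprise disk)"
    if density >= 0.01:
        return "Tier 2 (Nearline disk)"
    return "Tier 3 (Tape/archive)"

d = io_density(iops=500, capacity_gb=200)  # 2.5 IOPS/GB
print(suggest_tier(d))                     # -> Tier 0 (Flash)
```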
- Next Generation FlashSystem 840 and V840, Architecture Deep Dive
Detlef Helmbrecht, from the IBM Advanced Technical Skills team in Germany, presented this deep dive into our latest IBM FlashSystem offerings. He started with an analogy. Latency is like a single car driving down an empty highway. IOPS, on the other hand, is like a lot of cars stuck in slow traffic, with all lanes filled on the autobahn. While more cars are transported on a full highway, the individual cars are not driving very fast. Flash versus disk invites similar comparisons.
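Detlef's highway analogy is essentially Little's Law: sustained throughput equals the number of outstanding I/Os divided by the latency of each. A quick Python sketch with illustrative numbers (my own, not measured FlashSystem figures) shows why low-latency Flash needs far less queuing to reach the same IOPS:

```python
# Little's Law: throughput = concurrency / latency.
# The latency and queue-depth figures are illustrative, not measured values.

def iops(outstanding_ios: int, latency_seconds: float) -> float:
    """Sustained IOPS for a given queue depth and per-I/O latency."""
    return outstanding_ios / latency_seconds

# Flash at ~100 microseconds: one fast car on an empty highway
print(iops(1, 100e-6))   # about 10,000 IOPS with a single outstanding I/O
# Disk at ~5 milliseconds: slow cars, so every lane must be full to keep up
print(iops(50, 5e-3))    # about 10,000 IOPS, but only with 50 outstanding I/Os
```

Both workloads deliver the same IOPS, but the Flash case does it with one-fiftieth the queuing, which is exactly the latency advantage the analogy captures.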
Detlef explained the differences between the previous FlashSystem 810/820 and the new 840, and also talked about the Flash Adapter 90, now available as a PCIe card.
Finally, we talked about SAN Volume Controller combined with Flash, and the new FlashSystem V840 which combines SVC and FlashSystem 840 to have an incredibly function-rich, robust solution.
- Data Footprint Reduction - Understanding IBM Storage Efficiency Options
My last session of the week! This session covered all of the various technologies for data footprint reduction, including Thin Provisioning, Space-efficient FlashCopy and snapshots, Real-time Compression and data deduplication. Frankly, I wasn't expecting many people to attend the last session of the last day, but nearly 50 percent of the seats were filled, so I was quite pleased with the turnout.
Fun Fact: Istanbul is considered by TripAdvisor in 2014 as the #1 most popular city to visit in Europe!
technorati tags: IBM, #ibmtechu, STU, Istanbul, TripAdvisor, storage tiering, FlashSystem, HSM, ILM, SONAS, Storwize, ISTA, SmartCloud, Virtual Storage Center, data footprint reduction, FlashCopy, Thin Provisioning, Real-time Compression, Data Deduplication, Detlef Helmbrecht
Modified by TonyPearson
Continuing coverage of the [Systems Technical University 2014] conference, I participated in a "Meet the Experts" session on day 3.
Johann Weiss, Jim Blue and I joined several other local experts to answer questions and respond to comments and suggestions attendees had about IBM System Storage products and solutions. Here is a sample:
I would like to add 1TB of Flash to our FlashSystem 810 and have the system automatically re-stripe across this new capacity non-disruptively.
How can I have XIV systems at two datacenters in an active/active configuration that would allow me to vMotion from one location to the other non-disruptively?
Put them behind the SAN Volume Controller in Stretched Cluster mode.
What about a similar active/active but for NAS?
IBM N series.
I would like HyperSwap on the SVC/Storwize family, like the DS8000 offers for AIX.
When will IBM offer a multi-frame XIV?
The "Hyper-Scale" set of features lets you logically connect 144 XIV frames together and treat them as a single system. There is no need to physically bolt them together, since the communication is done over standard network switches.
When will IBM devices have native FCoE support?
All IBM System Storage products work within an FCoE framework today, either with native FCoE support, or through Top-of-Rack switches that split out the traffic between IP and traditional FCP networks. IBM Storwize and N series products support FCoE natively, and any disk virtualized behind SAN Volume Controller or Storwize can be accessed by FCoE hosts because of this support.
What is FLAPE?
FLAPE is the combination of Flash and tape. Both of these technologies are improving over 40 percent year-to-year, while disk improvement is slowing to 20 percent. It is possible to combine Flash and tape with offerings such as IBM LTFS-EE or the IBM ProtecTIER TS7600 series.
Only the Storwize V7000 Unified supports file modules to add NAS capabilities. What can IBM offer us that is smaller for NAS deployments, perhaps a Storwize V5000 Unified or Storwize V3700 Unified?
Consider the IBM N3000 series.
Other storage vendors indicate that RAID-5 and RAID-6 are running out of steam and are no longer practical to protect ever-growing disk capacities. What is IBM planning in this area?
IBM XIV Storage System was one of the first to offer a distributed RAID that addresses many of the RAID-5/RAID-6 drive rebuild concerns. IBM DCS3700 and DCS3860 also have Dynamic Disk Pooling to reduce drive rebuild impact. Lastly, IBM GPFS now offers Native RAID support, used in the IBM GPFS Storage Server.
Is it true that GPFS is NFS only?
Do not confuse GPFS the file system with the various storage offerings that are based on GPFS. IBM SONAS and Storwize V7000 Unified, both based on GPFS, support CIFS, NFS, HTTPS, SCP and FTP. IBM GPFS Storage Server can be configured to access GPFS natively, or you can run an NFS v3/v4 server to make those protocols available. With Microsoft [Windows Storage Server], you can provide CIFS access to any GPFS-based storage solution.
LTFS-EE sounds like an exciting alternative to IBM Tivoli Storage Manager HSM space management for moving data from disk to tape. Do you agree?
Yes, we agree. However, TSM HSM space management supports a broader set of file systems. LTFS-EE only provides disk-to-tape movement for IBM GPFS.
Why does the DS8000 implementation of Easy Tier sub-LUN automated tiering support three tiers, but SVC/Storwize only support two tiers?
The same software engineering team works on both, but develops new features for the DS8000 first, gets them working, then ports them over to the Storwize family. At times, there might be gaps between what is supported on the latest DS8000 version and what is available on Storwize family products.
In an SVC Stretched Cluster, I would like to have the third quorum disk connected over the IP network, rather than FCP.
Personally, I enjoy these interchanges. They are sometimes called "Birds-of-a-Feather" or BOF at some conferences, "Free-for-All" at others. At IBM conferences, they are often titled "Meet the Experts". Whatever you call it, the questions and feedback on what clients are thinking are quite useful for product planning and prioritization of future planned features.
technorati tags: IBM, FlashSystem, SAN Volume Controller, SVC, Stretched Cluster, Storwize, Multi-frame XIV, HyperSwap, Hyper-Scale, N3000, DS8000, RAID-5, RAID-6, Distributed RAID, Dynamic Disk Pooling, RAID rebuild, GPFS, GPFS Native RAID, GNR, SONAS, Storwize V7000 Unified, TSM, LTFS, LTFS-EE, BOF, Free-for-All, Meet the Experts
Continuing coverage of the [Systems Technical University 2014] conference, I attended several breakout sessions on day 3.
Step Right Up! Take your presentation skills to the next level
Glenn Anderson presented this session under the guise of "Professional Development". Whether you are new to public speaking and looking for some guidance, or are an experienced A-list celebrity looking to gain a few pointers, this session covered it all.
Some of my favorites:
Presentations are not Documentation! If a presentation had all the information to stand on its own, nobody would bother to listen to the speaker. Many new presenters use 3-4 line titles and cram in too many words in small fonts to make sure every detail they plan to speak on is covered. Don't do it. My rule of thumb is that 50 percent of the information is conveyed verbally, and the other 50 percent visually from the presentation.
Simplicity is the ultimate sophistication. I couldn't agree more. I try to focus on my core message in my presentations. I am a big fan of the [KISS principle], which stands for "Keep it simple, stupid!"
VOICE - Victory over inconsistent conscious energy! There is nothing more painful than hearing a public speaker who talks too softly, too loudly, or in a monotone manner. Mix it up! If you want to capture someone's attention, whisper! Vary your volume for effect.
Presenting is like Pouring Wine. At cocktail parties, the hosts will walk around with the bottle, and refill the glasses of those who are actively drinking the wine, but leave alone those who haven't sipped a drop. Public speakers need to focus on the needs of those in the audience paying close attention, and ignore people who are asleep, paying attention to their laptops and smartphones, or otherwise distracted.
Don't memorize - Extemporize. Too often, new speakers try to memorize their entire presentation. This doesn't go well, and can end up looking like an actor on live stage forgetting his next line. Instead, focus on getting the general idea across in a more natural conversational tone.
Building Open Clouds on POWER Systems
Mandie Quartly presented the excitement of building a cloud using IBM's new Linux-only line of PowerLinux™ servers, KVM, virsh, virtio and OpenStack interfaces. Jeff Scheel was on hand to interject bits of wisdom throughout her session.
IBM is investing heavily into the Linux side of all of its servers, and the latest investments have been focused on the POWER systems.
Storage Clouds in the Big Blue Sky
Dick Vogelsang presented this session focused mostly on the "Self-service" aspect of Cloud Storage. While this sounded like it would be similar to my session from yesterday, it was actually quite different.
Vogelsang explained SmartCloud Storage Access, and compared this to how competitors are providing (or not providing) self-service provisioning of file spaces and LUNs. He gave examples based on VMware, Hyper-V, and OpenStack Foundation.
It is interesting the angle or spin that each speaker gave to each topic!
technorati tags: IBM, #ibmtechu, STU2014, Istanbul, Glenn Anderson, presentation skills, Mandie Quartly, PowerVM, KVM, Power Systems, OpenStack, PowerLinux, storage cloud, Jeff Scheel, Dick Vogelsang, SmartCloud Storage Access, SCSA, VMware, Hyper-V, self-service provisioning
Continuing coverage of the [Systems Technical University 2014] conference, we had an early morning awards ceremony to celebrate top sellers who led big wins in Europe for FlashSystems, XIV, Power Systems, and PureSystems.
Afterwards, there were several breakout sessions on day 2.
- Storage Technology Futures -- fresh from IBM research labs, tomorrow in your datacenter
Axel Koester presented several projects from IBM Research labs that have contributed to actual products, including the incredible scalability of [PERCS] that was incorporated into IBM General Parallel File System (GPFS).
- Cloud Storage and Active Cloud Engine
My presentation started off explaining the taxonomy of cloud storage. There are basically four kinds of cloud storage: persistent storage, ephemeral storage, hosted storage, and reference storage. Each of these has unique access patterns and service level requirements.
IBM has three distinct cloud storage offerings, so I covered IBM XIV Storage Systems, SONAS and Storwize V7000 Unified with Active Cloud Engine, and Linear Tape File System (LTFS) Enterprise Edition (LTFS-EE).
- FlashSystem competitive overview
Henrik Wilken provided an excellent presentation comparing IBM FlashSystems to the dozen or more competitors that offer all-flash or hybrid flash-and-disk combinations.
- IBM Tivoli Storage Productivity Center
From 2001 to 2003, I was the chief architect for what is now called Tivoli Storage Productivity Center. It continues to be the most requested topic for briefings at the IBM Tucson Executive Briefing Center.
I presented an overview of Tivoli Storage Productivity Center, with a brief update on what's new in TPC 5.2.1 and the SmartCloud Virtual Storage Center v5.2.1 releases.
- IBM Archive Storage Solutions - Data Retention for Government Compliance and Industry Regulations
I can't believe it has been nine years since I was on the Product Development Team for the IBM DR550 Data Retention storage solution!
In this session, I explained the lessons we learned from the DR550, its successor the Information Archive, and how we now position System Storage Archive Manager (SSAM) software as their replacement. SSAM was recently certified by KPMG to meet a variety of US, European and International laws.
technorati tags: IBM, GPFS, Axel Koester, PERCS, XIV, SONAS, Storwize V7000 Unified, Linear Tape File System, LTFS, LTFS-EE, Henrik Wilken, Tivoli Storage Productivity Center, TPC, SmartCloud, Virtual Storage Center, VSC, DR550, Information Archive, SSAM, KPMG
Continuing coverage of the [Systems Technical University 2014] conference, we had several breakout sessions on day 1.
- IBM Smarter Storage Strategy
I presented IBM's Smarter Storage Strategy. This is focused on three key areas:
- Data-intensive Solutions. Storage is needed for Big Data analytics. IBM is focused on efficiency in all dimensions: capacity efficiency with data footprint reduction techniques, energy efficiency, administrator efficiency with ease-of-use interfaces, and reduced complexity.
- Business-critical workloads. Storage needs to allow business to prioritize which applications and workloads are most critical, and automate Quality of Service (QoS) for each application based on its business importance. The result is a balance between performance and cost across the spectrum of applications.
- Start quickly and add value. IBM is committed to support private, hybrid and public cloud deployments. Storage needs to support not just VMware, but also Hyper-V, KVM, PowerVM and z/VM. That is why IBM is a platinum sponsor for the OpenStack foundation.
- Demystifying OpenStack
Eric Aquaronne presented an excellent session on OpenStack foundation, an open source collaboration of various companies to bring a consistent Cloud-management standard across compute, storage and network resources.
- Replication for Business Continuity and Disaster Recovery
I have been involved with Business Continuity and Disaster Recovery my entire 28-year career at IBM System Storage, so when I was asked to cover BC/DR in 75 minutes, I focused just on aspects related to disk-to-disk replication.
I divided the presentation into three sections:
- Business priorities. You need to prioritize which business processes are most important, and prioritize your recovery accordingly.
- Technical implementation. Once priorities are set, there are seven "Business Continuity Tiers" to choose from. BC Tier 1 is the least expensive, recovering from physical tapes stored in an off-site vault. The fastest recovery is BC Tier 7, which automates the storage, server and network fail-over to a secondary site in as little as 30 minutes.
- Ongoing management. Just setting up a BC/DR implementation is not enough. It needs to be monitored to ensure that it continues to provide the protection you expect. BC/DR exercises should be performed one or more times per year to ensure that everyone has the skills and procedures documented to succeed in the event of a real disaster.
Of these seven BC tiers, BC Tier 6 is focused on storage replication, such as Metro or Global mirror available on our DS8000, XIV Storage System, SONAS and SAN Volume Controller. BC Tier 7 involves system automation, such as Tivoli Distributed Disaster Recovery Manager and GDPS.
- What is Big Data? Architectures and Practical Use Cases
This session was an expanded version of the one I gave in Belgium last year. Big Data is a big topic, and there are a variety of "big data" related sessions at this conference. I focused on three key areas:
- The change in the role of Storage Administrator. In the past, most of the data was structured and stored in databases, managed by database administrators. However, in today's environment, over 80 percent of the data is unstructured, outside of traditional relational databases, so either the database administrators need to learn new skills, or storage administrators will need to step up and help manage this unstructured data content.
- The change in the role of Business Analyst. We are no longer just looking at the financial consequences of patterns and trends. The new role of Data Scientist needs to apply statistical models, show some business acumen, and be able to "tell a story" that is supported by the data when communicating findings to Business and IT leaders.
- The change in the role of Decision Maker. In the past, Decision Support Systems were available only to the top-level business executives. Now, empowered employees have access to real-time analytics that can help them make decisions and take immediate actions.
This session packed the house, with standing room only. I would like to offer a special thanks to IBM VP Bob Sutor, Stephen Brodsky, Linton Ward, and Ralph McMullen in helping me finalize my presentation.
This is shaping up to be an awesome conference!
technorati tags: IBM, #ibmtechu, Smarter Storage Strategy, Data-intensive, Business-critical, QoS, VMware, Hyper-V, KVM, PowerVM, z/VM, OpenStack Foundation, Business Continuity, Disaster Recovery, BC/DR, Big Data, storage administrator, DBA, Business Analyst, Data Scientist, Decision Maker, Empowered Employee, Bob Sutor, Stephen Brodsky, Linton Ward, Ralph McMullen
The first official day of the [Systems Technical University 2014] conference had keynote sessions in the morning. The conference features experts from IBM Power Systems, IBM System x, IBM PureSystems, and IBM System Storage.
The keynote sessions were started with Amy Purdy, IBM Director of Technical Training Services, the group that is running this conference.
This conference is not focused on System z solutions, as many of the System z clients were in New York City for the mainframe's 50th birthday event, but System z came up several times during the keynote sessions.
Amy offered a special [Happy 50th Birthday to the IBM System zEnterprise mainframe]. Fifty years ago this week, [IBM announced its famous S/360] mainframe, which raised IBM's revenues from $3.6 billion USD in 1965 to $8.3 billion in 1971.
(FTC Disclosure: I work for IBM, and this blog post may be considered a paid, celebrity endorsement of IBM products and services. IBM has business relationship with both Intel and Amazon mentioned during the course of the keynote sessions, but I have no financial stake in either company. I was the chief architect for DFSMS, the storage management component of the z/OS mainframe operating system, and was part of the team that ported Linux to the System z mainframe.)
Nicolas Sekkaki, IBM Vice President of Systems and Technology Group in Europe, discussed IBM's commitment to clients' privacy, the x86 and POWER server platforms, and a variety of mind-boggling announcements. He is focused on three trends: Big Data, Cloud, and Mobile.
IBM is focusing its hardware efforts on high-value, high-margin solutions such as System Storage, POWER Systems and System zEnterprise mainframe environments. Did you know that 65 percent of the world's business transactions are processed by either POWER systems or System zEnterprise mainframe?
IBM is also continuing its focus on Linux and Open Source initiatives. For the System zEnterprise mainframes, 78 percent of our clients run Linux on System z. Over 290 clients have added the "zBX" option that allows them to run Windows and AIX on the mainframe as well. It is now less expensive to run workloads on System zEnterprise -- about 1 dollar per day per server -- than on public cloud offerings from Amazon Web Services. Linux on POWER also has a lower Total Cost of Ownership (TCO) than Linux-x86.
Nicolas also mentioned major changes for the POWER Systems, starting with the [OpenPOWER Consortium], formed by IBM, Google, Mellanox, NVIDIA and Tyan.
The move makes POWER hardware and software available to open development for the first time as well as making POWER Intellectual Property licensable to others, greatly expanding the ecosystem of innovators on the platform. The consortium will offer open-source POWER firmware, the software that controls basic chip functions. By doing this, IBM and the consortium can offer unprecedented customization in creating new styles of server hardware for a variety of computing workloads.
IBM POWER has switched from being "Big Endian" to being "Bi-Endian", allowing operating systems to choose between "Big Endian" or "Little Endian" modes. The Big Endian mode allows for Linux compatibility with the System zEnterprise mainframe, and the Little Endian mode for compatibility with Linux-x86.
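For readers unfamiliar with the term, "endianness" is simply the order in which the bytes of a multi-byte value are laid out in memory. A generic Python sketch (not POWER-specific) illustrates the two modes:

```python
# Generic illustration of byte order, using Python's struct module.

import struct
import sys

value = 0x01020304  # a 32-bit value whose four bytes are easy to tell apart

big = struct.pack(">I", value)     # Big Endian: most significant byte first
little = struct.pack("<I", value)  # Little Endian: least significant byte first

print(big.hex())     # 01020304
print(little.hex())  # 04030201
print(sys.byteorder) # this host's native byte order, e.g. "little" on x86
```

A "Bi-Endian" processor can run with either layout, which is what lets one Linux build match the mainframe's convention and another match x86's.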
Thorston Kahrmann, Intel Account Director for EMEA, presented Intel's rich history of collaboration with IBM, from technologies like Bluetooth and PCIe Generation 3, to platforms like BladeCenter and NeXtScale, to Industry Standards.
IBM had a lot of "firsts" in the x86 server area, including the first 16-processor server, the first to offer hot-swap memory, and over 100 leading performance benchmarks.
The latest Intel Xeon chip is the E7 version 2. For example, moving from DB2 v10.1 on the old E7 to DB2 BLU columnar acceleration on the new E7 version 2 resulted in a 148-fold increase in performance. A query on a 10TB database that previously took four hours completed in under 90 seconds.
Thorston also wanted to remind the audience that nearly every System Storage product from IBM, from the high-end XIV, SAN Volume Controller, SONAS and FlashSystem V840, to midrange and entry-level Storwize products, is based on Intel's x86 processors.
Louise Hemond-Wilson, IBM CTO and Distinguished Engineer for Lab Services, reminded everyone today was also the [International "Draw-a-Bird" day].
Louise covered the findings from the latest 2012 CEO study, gathering insight from 1709 CEO interviews. The major focus areas for CEOs are:
- Empowering employees through company-wide values
- Engaging customers as individuals, rather than via demographics
- Amplifying innovation with strategic and tactical partnerships
With smartphones, tablets and ubiquitous Internet access, everyone is now a technologist, and IT is becoming a competitive differentiator. IT projects and Business projects are no longer separate. If your IT department is seen as an expense, it will continue to get its budget cut. If, however, your IT department is part of your revenue stream, then it can be viewed as an asset.
Sadly, over 75 percent of IT projects fail: they are way over budget, delivered late, or both. Business leaders are pushing for IT improvements, but often CIOs are too afraid to take the risks to move the business forward. Louise cited three reasons for this, which she called the three C's:
- The IT and Business leaders did not fully understand the context of the project.
- The content of the project was not properly defined between IT and Business architects.
- The collaboration between IT and Business personnel was not properly established.
Louise wrapped up her session by asking a simple question: how much does a light bulb cost? Some might focus on the cost of the bulb itself, others might add the cost of maintenance, with ladders and personnel to replace bulbs as needed, and others might include the electricity consumed. Both Business and IT leaders need to focus on Total Cost of Ownership (TCO) in their planning.
technorati tags: IBM, #ibmtechu, Amy Purdy, Technical Training Services, mainframe50, zEnterprise, mainframe, Nicolas Sekkaki, OpenPOWER, Linux, zBX, Amazon Web Services, Thorston Kahrmann, Intel, E7v2, EMEA, CEO Study, TCO, Louise Hemond-Wilson, STG Lab Services
I have arrived safely to Istanbul, Turkey for the [Systems Technical University 2014] conference. The conference will feature experts from IBM Power Systems, IBM System x, IBM PureSystems, and IBM System Storage.
Here is the view from my hotel window. Up until the 19th century, this was open countryside. Around 1890, the Bomonti brothers from Switzerland set up a brewery, which was moved to this section of town in 1902, becoming the first Turkish brewery. In 1934, the brewery was nationalized and became the Istanbul Tekel Beer Factory. The Hilton Bomonti hotel where the conference is being held is named after these brothers.
Since this is my first time in Istanbul, and I did not have meetings until later in the afternoon for the conference, I decided to do a bit of sightseeing.
(A special thanks to Gail Godbey of [Encounter Tours/Kaletours] who organized this entire tour of sightseeing for me on such short notice!)
The hippodrome was a stadium for horse and chariot racing, but now is just a square with a few obelisks. This one is the Thutmosis Obelisk from Egypt. The word hippodrome comes from the Greek hippos, meaning horse, and dromos, meaning path or way. Hippodromes were common features of Greek cities in the Hellenistic, Roman and Byzantine eras. My tour guide Erol Azor did a great job explaining everything.
My favorite stop of the day was the Blue Mosque, named after the blue tiles used on the dome. It is 43 meters high, making it one of the tallest mosques in the city. There are over 3,000 mosques here in Istanbul. In Turkish, this place is called Sultan Ahmet Camii, after the Sultan Ahmet who had it built from 1609-1616. There are six minarets. The legend goes that the Sultan asked for a "gold" minaret, but the word for "gold" in Arabic sounds a lot like the number six in Turkish, so that is why there are six of them.
Right next to the Blue Mosque is the Hagia Sofia, which was a Christian church first, then converted to a mosque, and now is a museum. It was closed on Mondays, so all I could do was take pictures from the outside. Tulips are in full bloom throughout the city this month of April. If you notice, the minaret on the right is a different color. Often, new sultans would add a minaret to an existing mosque, using whatever materials were available at the time. Kind of like adding a bedroom to an existing house.
Underneath the ground is the Basilica Cistern, which held the drinking water for the city. The water came in on an aqueduct and was kept underground. Today, it has a foot of water, and some fish, for people to admire the architecture employed.
Of course, no visit to Istanbul is complete without stopping at the Grand Bazaar. With over 4,000 tiny shops, it is a madhouse of gold and silver jewelry, blue jeans, leather goods, scarves, persian rugs, and antiques. Some places offered me free samples of Turkish delight, which are delicious cubes of flavored gelatin.
My day ended at the Topkapi palace. The word Topkapi is Turkish for "Cannon Gate", as this castle sits overlooking the peninsula and the Bosphorus Strait that separates the European side from the Asian side of the city. Like the palace of Versailles in France, or Buckingham palace in England, the Topkapi palace was home to 36 sultans from 1299 to 1922.
You can spend hours here. There are beautiful gardens and various buildings surrounded by five kilometers of castle wall. Inside the buildings are displays of the family jewels, the clothes the sultans wore, their weapons, and religious relics.
It was good to get a flavor of the city, and a sense of the Turkish culture.
technorati tags: IBM, #ibmtechu, Istanbul, Turkey
Next week, April 8-11, I will be presenting a variety of topics at the [Systems Technical University 2014] conference in Istanbul, Turkey. The conference will feature experts from IBM Power Systems, IBM System x, IBM PureSystems, and IBM System Storage.
Here are the titles and abstracts of the eight topics that I will be presenting next week, in chronological order, along with some related sessions for each topic:
IBM Smarter Storage Strategy
Do you want to understand more about IBM's initiatives for building a smarter planet and how that relates to the data economics of your organization? This session will explain it all, including IBM's design approach and strategy for its various storage products and solutions: efficiency for data-intensive solutions, optimization of business-critical workloads, and agility to start quickly and add value. I will also position the features and capabilities of IBM's various disk and tape systems in this context.
Clod Barrera will present IBM Storage Strategy - Traditional and New Methods for Storage Deployment. My session is Tuesday morning and will focus on how IBM Storage Strategy is aligned with IBM's business initiatives including Cloud, Analytics, Mobile and Social Business (CAMS). Clod's presentation will be more technical in nature, featuring Flash storage, scale-out grids, object storage directions, and Software Defined Environments.
Axel Koester will present Storage Technology Futures - fresh from IBM research labs, tomorrow in your datacenter. Axel's presentation will focus on what IBM Research is working on, based on industry trends.
Pat O'Rourke will present Power Systems Trends and Direction, which will focus on IBM's strategy for the POWER Systems product line.
Replication for Business Continuity and Disaster Recovery (BC/DR)
Replication of disk storage systems can be used as part of an overall Business Continuity and Disaster Recovery plan. This session will provide an overview of the technologies involved, and other considerations.
Markus Oscheka and Ralf Wohlfarth will present IBM Storage Systems integration into VMware Site Recovery Manager, a more focused session that offers Business Continuity and Disaster Recovery for VMware environments.
Deniz Erguvan will present Disaster Recovery Solution Design with PowerVM and Storage Virtualization.
Thomas Vogel and Torsten Rothenwaldt will present Native IP replication with SVC / Storwize v7.2. This new feature was announced in October 2013.
Thomas Vogel and Torsten Rothenwaldt will also present New HA and DR concepts with SVC enhanced stretched cluster, focused on data federation across data centers.
What is big data? Architectures and Practical Use Cases
Do you understand the storage implications of big data analytics? This session will explain what big data is, and cover the Information Infrastructure and practical use cases.
Ajay Dholakia will present Taming Big Data: An overview of key technologies and architectures. Ajay will focus more on the hardware components (servers, networks, storage), whereas my presentation will focus on the roles of the storage administrator, data scientist and decision maker.
Axel Koester will present BIG DATA at CERN : Analyzing petabytes in seconds(!) at the European particle collider facility, a specific use case.
Jean-Armand Broyelle will present Big Data on Power: come and touch reality!, which will focus on the capabilities to process big data on POWER systems.
Cloud Storage and the Active Cloud Engine™
This session will cover private and public cloud storage options, including XIV, SONAS, Storwize V7000 Unified and Linear Tape File System (LTFS) Enterprise Edition. The use of Active Cloud Engine for local space management and global WAN caching to access files, SmartCloud Storage Access for self-service provisioning, and file-and-sync solutions will also be explained.
Eric Aquaronne and Jeff Borek will present Storage Cloud to energize your company. My session will focus on the technologies involved, whereas theirs will provide a product demo and practical implementation advice.
Mo McCullough will present XIV Overview and Update, Thomas Luther will present SONAS overview and Updates, and Nils Haustein will present Linear Tape File System Enterprise Edition (LTFS-EE) explained. Each of these sessions will go into more depth on its product than my high-level overview.
IBM Tivoli Storage Productivity Center
Why is Tivoli Storage Productivity Center (TPC) the #1 most requested topic at the IBM Tucson Executive Briefing Center? One of the chief architects of this product will cover the latest features, and why this product will greatly help your storage admin staff.
Clod Barrera will present Software Defined Storage - Storage for Software Defined Environments which will provide a broader view, while mine is focused specifically on how TPC plays a role in SDS.
Thomas Luther will present TPC for Replication 5.2 Overview and updates, which will focus specifically on the Replication support in the latest release.
IBM Archive Storage Solutions - Data Retention for Government Compliance and Industry Regulations
This session will cover the various offerings IBM has for archive solutions, including IBM System Storage Archive Manager (SSAM), N series, and WORM tape storage systems.
Nils Haustein will present Next generation archive storage solutions, which will focus specifically on SSAM software and on migration procedures from other archive solutions.
New Generation of Storage Tiering: Less management, lower investment and increased performance
Confused about how to implement storage tiering across Flash, disk, and tape resources? This session will cover the various techniques and technologies available.
Levi Norman will present IBM FlashSystem Overview, focused on this particular tier of storage.
Axel Koester will present Storage Portfolio Selection Guide: What (not) to use when, providing an overview of the IBM System Storage portfolio, whereas I am focused more on the technologies that make up each tier of storage, and how to take advantage of them to balance cost and performance.
Data Footprint Reduction
Data Footprint Reduction is the catchall term for a variety of technologies designed to help reduce storage costs. This session will cover four techniques for data footprint reduction: thin provisioning, space-efficient snapshots, data deduplication and real-time compression. It will also discuss the IBM storage products that provide these capabilities. Come to this session to learn how these technologies work, and how they will benefit your data center.
Related sessions:
Antoine Maille will present Demonstrate the TurboCompression Effect, a live demo of the technologies I will be discussing.
Johann Weiss will present The Storwize family - easy to manage, function rich and cloud ready, which will include a discussion of Real-time compression.
Mathias Defiebre and Erik Franz will present ProtecTIER with IBM FlashSystem (or maybe with Storwize). ProtecTIER is IBM's strategic data deduplication solution, which can act as a gateway in front of a variety of back-end storage options.
If you will be at this conference all week, look for me and say "Hello!"
technorati tags: IBM, #ibmtechu, Systems Technical University, POWER Systems, PureSystems, System x, System Storage, Istanbul, Turkey, Smarter Storage, CAMS, Clod Barrera, Axel Koester, Pat O'rourke, Replication, Business Continuity, Disaster Recovery, BCDR, Markus Oscheka, Ralf Wohlfarth, VMware, Site Recovery Manager, SRM, Deniz Erguvan, PowerVM, storage Virtualization, Thomas Vogel, Torsten Rothenwaldt, SAN Volume Controller, SVC, stretched cluster, big data, BigInsights, hadoop, analytics, data scientist, Ajay Dholakia, CERN, Jean-Armand Broyelle, Cloud storage, XIV, SONAS, Storwize, Storwize Family, Storage V7000, Storwize V7000 Unified, Linear Tape File System, LTFS, LTFS-EE, Tivoli Storage, Productivity Center, TPC, Eric Aquaronne, Jeff Borek, Software Defined Storage, Software Defined Environment, SDS, SDE, Thomas Luther, TPC-R, Archive Storage, Government Compliance, SSAM, NENR, N series, WORM tape, Nils Haustein, DR550, Information Archive, storage Tiering, Easy Tier, Flash, FlashSystem, Intelligent ILM, ISTA, Levi Norman, Data Footprint Reduction, Antoine Maille, TurboCompression, Johann Weiss, Mathias Defiebre, ProtecTIER
Modified by TonyPearson
Well, it's Tuesday again, and you know what that means! IBM Announcements!
Starting today, April 1, 2014, the IBM Executive Briefing Centers (EBC) are adopting a new self-hosted model. In the past, each briefing was assigned a "Briefing Host", a member of the EBC staff, who acted as [master of ceremonies] for the day (or more) for the clients. At some locations, if there were three rooms, there would be three or more briefing hosts so that concurrent briefings could be held.
However, this method does not scale. Assigning a person to each briefing limits the number of concurrent briefings you can hold. Inspired by the self-service provisioning and scalability of the Cloud, IBM has adopted a new methodology.
In the new model, the visiting client rep, sales rep, or IBM Business Partner will be handed instructions and a map. This will include the agenda, the schedule, biographies of each speaker, the locations of the nearest restrooms, and so on.
I can take partial credit for the idea. In 2012, I made the analogy that having briefing centers at each development lab made a lot of sense, because it allowed clients to interact directly with the engineers and executives that made development decisions. I also made the analogy that having a fully-staffed EBC was like a fire department: whether you have five briefings per month or fifty, you need a team that is ready and stays abreast of the latest technological changes.
In my post, [Like animals in the zoo], I argued there are two kinds of zoos, the self-guided kind, where visitors are handed a map, versus the docent-guided kind, where a member of the zoo staff introduces you to each animal.
The EBC briefing hosts in this analogy were the docents, and the animals that people came to visit were the engineers and executives.
As with zoo docents who are highly trained about every animal to answer every conceivable question, briefing hosts at IBM went through extensive training by [Mandel Communications] to achieve the certification requirements of the [Association of Briefing Program Managers], or ABPM for short.
As for the fire department, IBM management flipped the analogy around. They argued that many smaller communities have "volunteer fire departments", eliminating the need to keep full-time employees doing nothing but playing cards and sliding down brass poles in between fire fighting sessions. When a fire happens, phone calls are made to notify everyone who needs to get involved.
After 28 years at IBM, I can say that you know you have a good analogy when it can be used in both directions. The zoo analogy was used to prevent management from consolidating all of the EBC staff to Austin, TX. The fire department analogy helped us keep all of our lab equipment for running demonstrations.
The new self-hosted model will address both scheduling and scalability issues. We often had two-day and three-day briefings, and scheduling the rooms and briefing hosts around their availability was quite challenging.
There are three advantages to the new method:
A coordinator will merely assign rooms, no longer worrying if a briefing host is available for those days. Now, each EBC location can run at full capacity, limited only by real estate and floor space.
Subject matter experts like myself, who often did double duty as briefing hosts, will have more free time. I personally will be doing more "outbound briefings", attending conferences and visiting clients at their locations, eliminating the time I need to be in Tucson to host "inbound" briefings.
The awkward silence that happens when the client rep, sales rep, or IBM Business Partner invites all the clients and presenters, but forgets to invite the briefing host, is completely eliminated.
technorati tags: IBM, Executive Briefing Center, EBC, self-hosted, zoo, docent, volunteer, fire department, Cloud, scalability
March 31 is [World Backup Day]!
Recently, a client asked how to back up their IBM PureData System for Analytics devices. IBM had [acquired Netezza in November 2010], and later renamed their TwinFin devices as the IBM PureData for Analytics, powered by Netezza.
The [IBM PureData System for Analytics] is incredibly fast for performing deep, ad-hoc analytics. However, the people who use them are "data scientists", not backup experts.
Likewise, there are backup administrators who may not be familiar enough with the unique characteristics of this expert-integrated system to know what backup options are available.
As with the rest of the IBM PureSystems line, the IBM PureData System for Analytics (or, PDA for short) has a combination of servers, storage and switches inside.
In a full-frame PDA, there are two servers in Active/Passive mode; these coordinate activity to FPGA-based blade servers, which have parallel access to hundreds of disk drives, storing nearly 200 TB of compressed database data. A system can span up to four frames.
But what do you back up? And why? You don't need to worry about backing up the Linux operating system or NPS server code; that is considered firmware, and if anything ever got corrupted, IBM would help restore it for you. System-wide metadata, such as the host catalog and global users, groups, and permissions, should be backed up periodically to protect against data corruption.
There are a number of reasons to back up your user databases:
- As part of firmware upgrade/downgrade
- To transfer data to another system
- Protect against hardware failure / disaster
- Protect against data corruption
The PDA has three backup formats: you can back up the entire user database in compressed format, back up individual tables in compressed format, or export to a text-format file.
Compressed format is faster, but can only be restored to the same PDA, or to a PDA that has the same or higher level of NPS firmware. Text format is slower, but can be used to restore to lower levels of NPS firmware, or to other database systems.
There are two basic methods to back up your PDA. The first is the "Filesystem" method: attach an external storage device to the NPS server, and use the built-in command-line interface (CLI) to store the backups on its file system.
On NPS version 6, the nzhostbackup command backs up the /nz/data directory, which stores the system tables, database catalogs, configuration files, query plans, and cached executable code for the SPU blade servers.
(I have heard that the nzhostbackup will get deprecated in NPS version 7, but I only have access to version 6. As always, [RTFM] for your specific NPS code level.)
- nzbackup -users
nzbackup with the -users parameter backs up the global users, groups and permissions. These are included in the /nz/data contents saved by the nzhostbackup command, but you may want to back up and restore them separately.
- nzbackup -db
nzbackup with the -db parameter backs up a user database in compressed format. To back up individual tables, use the CREATE EXTERNAL TABLE command, which can create compressed or text-format exports.
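Putting the Filesystem method together, a minimal sketch looks like the following. The mount point /mnt/nzbackup and the database name SALESDB are hypothetical, and the helper defaults to a dry run, so the commands are only printed rather than executed; check the exact options for your NPS level.

```shell
#!/bin/sh
# Sketch of the "Filesystem" backup method, assuming an external
# storage device mounted at /mnt/nzbackup (a made-up path).
# DRYRUN defaults to 1, so this only prints the commands it would run.
BACKUP_DIR=/mnt/nzbackup
run() { if [ "${DRYRUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi; }

# 1. Back up the /nz/data host catalog (NPS version 6; check your level).
run nzhostbackup "$BACKUP_DIR/hostbackup.tar.gz"

# 2. Back up global users, groups, and permissions separately.
run nzbackup -users -dir "$BACKUP_DIR"

# 3. Back up a user database in compressed format.
run nzbackup -db SALESDB -dir "$BACKUP_DIR"
```

Set DRYRUN=0 (on a real NPS host, with credentials) to actually execute the commands.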
You may find that your databases are so large that they exceed the file-size limits of the filesystem on the external storage device. For SAN or NAS deployments, I recommend the IBM Storwize V7000 Unified with IBM General Parallel File System (GPFS). However, if you are using something else, you may need to use the "nz_backup" scripts provided, which split the backup images into smaller pieces that most other filesystems can handle.
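The split-and-reassemble idea is simple enough to show in the spirit of those helper scripts. The file names below are made up, and a real backup image would be cut at a few GB per piece rather than the tiny 100KB used in this demonstration; the numeric-suffix -d flag is GNU coreutils split.

```shell
#!/bin/sh
# Cut an oversized backup image into pieces the target filesystem can
# hold, then reassemble and verify. Stand-in image, not real NPS data.
dd if=/dev/urandom of=backup.img bs=1024 count=300 2>/dev/null  # 300KB stand-in
split -b 100k -d backup.img backup.img.part.   # numbered pieces: .00 .01 .02
cat backup.img.part.* > backup.restored        # glob sorts numerically, so order is right
cmp -s backup.img backup.restored && echo "reassembled copy matches original"
```

Because the numeric suffixes sort lexically, a plain shell glob reassembles the pieces in the correct order on restore.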
The PDA comes with 10GbE Ethernet ports to which you can attach a NAS storage device over a Local Area Network (LAN), or you can add Fibre Channel Protocol (FCP) ports and connect over a Storage Area Network (SAN). To keep things simple, I will refer to whichever network you choose as the "Backup Network" in the drawings.
The second method for backup is called the "External Backup Software" method. As you have probably guessed, it involves sending the backups to a supported software product like IBM Tivoli Storage Manager (or, TSM for short).
In this case, the PDA acts as a client node, similar to a laptop, desktop, or application server with internal disk. Backup data is sent over the LAN to the designated TSM server, and the TSM server in turn writes over the SAN to its storage hierarchy of disk, virtual tape and/or physical tape resources.
Backups can be run on demand from the command line, or automated on a schedule. For the /nz/data directory, direct the nzhostbackup command to send the backup copy to local disk, then use TSM's dsmc archive command to transfer this backup copy to the TSM server.
For nzbackup with the -users or -db parameters, you can send the data directly to the appropriate TSM server by specifying the -connector and -connectorArgs parameters.
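A sketch of both paths of the External Backup Software method follows. The server stanza TSMSERV1, the database name SALESDB, and the /tmp staging path are hypothetical, and the exact -connectorArgs string varies by NPS release, so check your documentation; the helper defaults to a dry run and only prints the commands.

```shell
#!/bin/sh
# Sketch of the "External Backup Software" method with TSM.
# DRYRUN defaults to 1: commands are printed, not executed.
run() { if [ "${DRYRUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi; }

# Host catalog: nzhostbackup writes to local disk, then dsmc archives
# the copy to the TSM server.
run nzhostbackup /tmp/hostbackup.tar.gz
run dsmc archive /tmp/hostbackup.tar.gz

# User database: nzbackup streams straight to TSM via the connector.
run nzbackup -db SALESDB -connector tsm -connectorArgs "TSMSERVERNAME:TSMSERV1"
```

On a real NPS host with a configured TSM client, set DRYRUN=0 to execute.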
To reduce traffic on the TSM Server, an intermediary "TSM Proxy Node" can be put in between. In this case, the PDA sends the backup to the Proxy Node, the Proxy Node uses a "LAN Free Storage Agent" to send the backups directly to virtual and/or physical tape, and then notifies the TSM Server to update its system catalog, recording which tape holds these new backups.
Another configuration involves installing the TSM LAN Free storage agent directly on the PDA. While this will require FCP ports to be added and consume more CPU resources on the NPS server, it eliminates most of the LAN traffic, allowing the PDA to send its backups directly to virtual or physical tape.
To learn more about this, see my full presentation [Backup Options: IBM PureData System for Analytics, powered by Netezza] on the IBM Expert Network powered by SlideShare, or attend the upcoming [IBM Edge 2014] conference in Las Vegas, May 19-23. I will be there!
technorati tags: IBM, Netezza, PureData, PureData for Analytics, PDA, World Backup Day, Backup, NPS, nzhostbackup, nzbackup, expert-integrated, Tivoli, Tivoli Storage Manager, TSM, dsmc, #ibmedge, Slideshare
Have you signed up for the [IBM Edge2014] conference yet? This is IBM's premier conference on System Storage and related products, to be held in Las Vegas, NV, May 19-23. I plan to be there!
technorati tags: IBM, Edge, Edge2014, Sheryl Crow
My how time flies! It has been nearly a year since our new Tucson Executive Briefing Center had its [Ribbon Cutting Ceremony].
To celebrate this achievement, IBM asked me to write and direct a short film to remind everyone we are here to help clients solve problems, determine an appropriate strategy and make solid purchase decisions.
I have produced other videos for IBM. See my October 2013 blog post [Incorporating Videos] for other examples. This was my first time as writer/director for a project.
This video won't win any Oscars, but I would still like to thank the Academy, and my colleagues IBM VP Calline Sanchez, Lee Olguin, Joe Hayward and Kris Keller for agreeing to appear on camera. Behind the scenes, I want to thank IBM Fellow John Cohn for his superb narration, Andrew Greenfield as cinematographer and editor, Shelly Jost as creative consultant for selecting the musical tracks, and Denise White for reviewing the screenplay. Finally, I want to thank our producer, Bill Terry, for funding this effort.
What do you think? Will it go viral? Enter your comments below!
technorati tags: IBM, Tucson, EBC, Joe Hayward, Calline Sanchez, Kris Keller, Lee Olguin, John Cohn, Andrew Greenfield, Shelly Jost, Denise White
IBM Cloud announcements at Pulse 2014
Well it's Tuesday again, and you know what that means? IBM announcements! Many of the announcements were made by IBM Executives at the [IBM Pulse 2014 conference].
IBM BlueMix is the newest cloud offering from IBM, providing a Platform-as-a-Service (PaaS) offering based on the open source Cloud Foundry project. It promises to deliver enterprise-level features and services that are easy to integrate into cloud applications.
In partnership with Pivotal and others, [IBM is a founding member of the Cloud Foundry foundation] to create an open platform that avoids vendor lock-in. Many PaaS stacks, such as [LAMP] or [Microsoft IIS], are typically limited to a single programming language, database and web application server, but not Cloud Foundry! Here is what is supported:
Development Frameworks: Cloud Foundry supports Java™ code, Spring, Ruby, Node.js, and custom frameworks.
Application Services: Cloud Foundry offers support for MySQL, MongoDB, PostgreSQL, Redis, RabbitMQ, and custom services.
Clouds: Developers and organizations can choose to run Cloud Foundry in Public, Private, Hybrid, VMware-based and OpenStack-based clouds.
To learn more, see this article on developerWorks [What is BlueMix?]
POWER and PureApplication Patterns of Expertise on SoftLayer
IBM is investing over $1.2B so that, by the end of 2014, SoftLayer will have [40 Cloud centers across five continents].
This week, my fifth-line manager Tom Rosamilia, IBM Senior Vice President of IBM Systems & Technology Group and Integrated Supply Chain, made two announcements at Pulse. First, in addition to x86-based servers, SoftLayer will also offer POWER-based servers to run AIX, IBM i and [Linux on POWER] applications.
Second, SoftLayer will support PureApplication Patterns of Expertise. What is a pattern of expertise? It can range from something as simple as a virtual machine encapsulated in [Open Virtual Format] to more dynamic architectures, packaged with required platform services, that are deployed and managed by the system according to a set of policies.
Patterns simplify and automate tasks across the lifecycle of the application. Customers and partners alike are [seeing significant reductions in cost and time] across the application lifecycle with the deployment of a PureApplication System.
Also, this week at Pulse, Robert LaBlanc, IBM Senior Vice President of Software and Cloud Solutions, announced [IBM plans to Acquire Cloudant] which offers an open, cloud Database-as-a-Service (DBaaS) that helps organizations simplify mobile, web app and big data development efforts.
Why not just use a Relational Database Management System [RDBMS], like [IBM DB2 database software]? Cloudant is based on Apache CouchDB, which, in contrast to SQL-based systems like DB2, is known as NoSQL. DB-Engines has a great side-by-side comparison [CouchDB vs. DB2].
IBM SmartCloud Virtual Storage Center offerings
When I introduced [SmartCloud Virtual Storage Center] back in October 2012, I mentioned that it was a great solution for large enterprises that have all of their disk behind SAN Volume Controller (SVC).
To reach smaller accounts, IBM has announced two new offerings:
IBM SmartCloud Virtual Storage Entry for customers that have less than 250TB of disk behind two or four SVC nodes. It is priced per terabyte, by the amount of capacity that is virtualized.
IBM SmartCloud Virtual Storage for Storwize Family for customers that have other Storwize family products (Storwize V7000 or V5000, for example). It is priced per the number of storage enclosures that are managed by the Storwize family hardware.
To learn more about Virtual Storage Center, see the [IBM Announcement page].
I am not at Pulse 2014 this year, but I managed to watch many of these announcements on the [IBM Pulse livestream].
Continuing my series on building a desktop computer for a kindergarten class, I look at Fedora with Sugar, mentioned in the article [Top 6 Linux Distributions for Children (Ages 2 and Up)].
(This series started with my post [Kindergarten desktop - The Challenge]. I have a 512MB RAM system with a 40GB disk drive, on which I will install Linux and educational software for a class full of kindergarten children. My previous post covered three other Linux distributions: [LinuxKidX, Qimo, and Foresight for Kids].)
I am no stranger to the Sugar learning platform, developed as part of the One Laptop per Child [OLPC] project.
As I mentioned in my post [Helping Young Students - part 1], I was part of the OLPC development team back in 2008, helped local volunteers deploy laptops to children in Nepal and Uruguay, mentored a college student in India, and learned a good deal of the Python programming language in the process.
Sugar is now developed by Sugar Labs, a nonprofit spin-off of OLPC. It is a free and open source desktop environment available for many other machines, including as a "Desktop Environment" for Fedora Linux.
I kept my 40GB hard drive partitioned as follows. On the extended partition, sda5 will hold my system utilities, like Clonezilla and SystemRescue, and sda6 is my swap space, increased to 1500MB. Partition sda1 has Edubuntu 12.04 on it, and I will use sda2 to install Fedora with Sugar.
[Sugar-on-a-stick] is so named because it is designed so that each child has their own LiveUSB. It can run on a Windows PC or a Mac without affecting the installed operating system, allowing a child to use Sugar in the classroom, then take the stick home and continue on the home PC.
A 2GB or larger USB memory stick can hold both Fedora and Sugar, and you can use it to boot your desktop. Unfortunately, it requires 1GB of RAM, and I have only 512MB.
Can I just run Sugar natively on a Fedora install? Yes, thanks to the [Sugar not "on a stick"] instructions, I can install Fedora first, then just:
#yum groupinstall "Sugar Desktop Environment"
Unfortunately, the latest Fedora release (F20) recommends 1GB of RAM. Fortunately, I found Dean Howell's rant [Fedora Irresponsibly Lowers Memory Requirement To 512MB] about the Fedora F17 release. I gave this a try.
There are three ways to install Fedora:
- Fedora Desktop Edition - this is a LiveCD that requires 1GB RAM.
- Fedora Network Install - this is a bootable CD that then uses the Internet to download the rest of the files required. Use this if you (a) have a fast Internet connection, or (b) do not have a DVD drive on your system.
- Fedora Install DVD - this has all the software on the DVD itself.
I chose method 3 and downloaded the appropriate ISO file. While F17 only requires 512MB of RAM to run, the graphical installer requires 768MB, as is fully explained in this [29-step F17 installation guide].
To get around this, select "Troubleshooting", which then lets you choose a low-graphics/text-mode installation that ran well under 512MB. I installed both LXDE and Sugar, and everything worked fine!
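Since the RAM ceiling drives every choice in this series, here is a quick way to check installed memory before picking an installer path. It reads the standard /proc/meminfo on Linux; the 768MB/512MB thresholds come from the installer notes above, and MemTotal will read slightly below the physical RAM because the kernel reserves some.

```shell
#!/bin/sh
# Report whether this machine can run the graphical installer (768MB)
# or should fall back to the text-mode installer (512MB).
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
if [ "$mem_kb" -ge 786432 ]; then          # 768MB in KB
    echo "graphical installer should fit ($mem_kb KB available)"
else
    echo "use the text-mode installer ($mem_kb KB available)"
fi
```

Run this from any live CD shell before committing to an install method.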
Why both LXDE and Sugar? Well, Sugar is quite a different environment, and I wanted LXDE as an alternative for the admin and teacher to use.
The article on [Sugar software on Wikipedia] sums it up well:
"Unlike most other desktop environments, Sugar does not use the 'desktop', 'folder' and 'window' metaphors. Instead, Sugar's default full-screen activities require users to focus on only one program at a time. Sugar implements a novel file-handling metaphor (the Journal), which automatically saves the user's running program session and allows him or her to later use an interface to pull up their past works by date, activity used or file type."
Now that I have that working, it is time to upgrade from non-supported F17 to a supported level. Ravi Saive explains the [Four Ways to Upgrade from Fedora 17 to Fedora 18]:
- Clean install of F18
- Fedora Upgrader tool (FedUp) command line interface
- Yum upgrade
- Fedora upgrade script
As you can probably guess from the title of this post, I chose method 2, "FedUp", as it seemed the least invasive. I was unsure if method 1, a "Clean Install" of F18, would work with 512MB of RAM, and I have been through enough horrors of failed yum upgrades on my own Red Hat Enterprise Linux [RHEL] system at work to avoid method 3. Method 4 is just a script that automates the steps of method 3.
The steps are fairly straightforward. First, install the FedUp package, run "yum update" to ensure you have all the latest kernel and F17 packages for everything else, and reboot.
Then run the fedup-cli command, which upgrades all the packages to F18 level and creates a special kernel level that will then finish the install after the second reboot. It took a while, so I let it run unattended. I put the debug log on partition sda5 in case anything went wrong.
#fedup-cli --reboot --network 18 --debuglog=/rescue/fedupdebug.log
What could go wrong? Well, it turns out that fedup works by updating the Grub2 boot loader configuration, but my Grub2 resides on the sda1 partition, owned by my existing Edubuntu install. The reboot did not give me the option to run the specialized kernel to finish the process.
Fixing this was a hot mess, but I managed to configure Grub2 on Fedora, complete the upgrade, and get everything working as before. However, even though it came out just last year, the [F18 version is already out of support]! This means I get a second chance to do FedUp, this time to the F19 release. Oh boy! Fun!
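For reference, a typical way to put Fedora in charge of Grub2 looks like the following sketch. It is not necessarily the exact sequence I used, and the device name /dev/sda assumes the partition layout above; the helper defaults to a dry run, so the commands are only printed.

```shell
#!/bin/sh
# Hand control of the boot loader to Fedora so fedup's special boot
# entry appears in the menu. DRYRUN defaults to 1 (print only);
# the real commands need root on the Fedora install.
run() { if [ "${DRYRUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi; }

run grub2-install /dev/sda                  # put Fedora's Grub2 in the MBR
run grub2-mkconfig -o /boot/grub2/grub.cfg  # regenerate the menu; os-prober
                                            # should pick up Edubuntu on sda1
```

After this, the other distributions on the disk boot from Fedora's menu instead of Edubuntu's.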
While the second upgrade went more smoothly, the problem is that F19 doesn't seem to run well in 512MB of RAM, and chances are F20 won't either.
So what have I learned from this?
- Fedora is fully supported, has been around over 10 years, with a vibrant and helpful community.
- Sugar is designed for kids, so adding a traditional desktop environment like XFCE or LXDE can be useful for administrator or teacher.
- Offering multiple Linux versions in a dual-boot or triple-boot approach may complicate the Grub2 loader configuration and maintenance.
- Fedora's "rolling upgrade" approach means that someone will need to consider upgrading to later versions at least every school year or semester to maintain support. Running fedup-cli or any of the other upgrade methods may be too complicated for your average teacher.
If you have any experience with Fedora or Sugar in the classroom, comment below!
technorati tags: OLPC, Nepal, Uruguay, Sugar, Sugar-on-a-Stick, Sugar Labs, Fedora, Linux, Clonezilla, SystemRescue, Edubuntu, LXDE, XFCE, RHEL, FedUp, Grub2, rolling upgrade