HealthAlliance Hospital has implemented an IBM System Storage Grid Medical Archive Solution (GMAS) to make patient records available to clinicians anytime, anywhere. IBM has a [Case Study] on this implementation. Here is an excerpt from the IBM [Press Release]:
HealthAlliance Hospital, a member of UMass Memorial Health Care, serves the communities of north-central Massachusetts and southern New Hampshire with acute care facilities, a cancer center, outpatient physical therapy facilities and a remote home health agency. As an investment in continued high-quality patient care, the hospital has implemented a picture archiving and communication system (PACS) from Siemens Medical Solutions so that it can move toward digital health records while eliminating traditional paper and film.
HealthAlliance is now able to make all of their data, including PACS images, available instantly, using the IBM GMAS, a cross-IBM offering comprised of storage, software, servers and services. The GMAS solution provides hospitals, clinics, research institutions and pharmaceutical companies with an automated and resilient enterprise storage archive for delivering medical images, patient records and other critical healthcare reference information on demand.
"Fast, easy access to diagnostic images is a priority," said Rick Mohnk, Vice President and Chief Information Officer of HealthAlliance. "Being paperless not only helps our staff improve their productivity and the quality of patient care, but also lowers our costs and improves our competitiveness. The IBM GMAS has helped us stay competitive and offer the leading edge technology that attracts top physicians to our staff and keeps patients feeling comfortable and well cared for."
Normally when you read or hear the term "grid", you might think of supercomputers, but in this case we are talking about information that is accessible from different interconnected locations. I've mentioned GMAS before in my posts [Blocks, Files and Content Addressable Storage] and [What Happened to CAS?], but I thought I would provide more detail on the elements of the solution.
Medical imaging devices are called "modalities", which is just fancy hospital talk for "method of treatment". These have Ethernet connections designed to write to any storage with a CIFS or NFS interface. For example, press the button on the X-ray machine, and the digitized version of the X-ray is stored as a file on whatever NAS storage is at the other end.
A [Picture Archiving and Communication System] refers to the application and computer equipment used to manage these medical images, often stored in DICOM format and indexed with HL7 metadata headers. There are many PACS vendors: GE Medical Systems, Siemens Medical, Agfa, Fuji, Philips, Kodak, Stentor, Emageon, Brit Systems, McKesson, Amicus, Cerner, Medweb and Teramedica, to name a few. Many PACS providers embedded specific storage as part of their solution, but are now starting to realize that they need to be part of a larger storage infrastructure.
IBM System Storage [Multi-Level Grid Access Manager] is software on IBM System x servers that manages access across the grid of interconnected hospitals, clinics and imaging facilities. It provides the NFS and CIFS interfaces to the modalities, and places the data into a GPFS file system on DS4000 series disk.
- GPFS and DS4000 series disk
IBM [General Parallel File System] has all the Information Lifecycle Management (ILM) capabilities to move data from one disk storage level to another, automate deletion based on expiration date, and provide concurrent access from multiple requesters. The IBM System Storage DS4000 series disk products can support both high-speed FC disk and low-cost SATA disk. For large medical images, SATA disk is often a good fit. The advantage of GPFS is that you can have policies to decide which images are placed on FC disk and which on SATA, and then later move these files based on access patterns. Images that are accessed most frequently can be on FC disk, and those that haven't been accessed in a while on SATA disk.
- TSM space management
IBM [Tivoli Storage Manager for Space Management] supports moving files out of the GPFS file system and onto tape, based on policies. For example, keep the most recent 18 months on disk, and anything older than that gets moved to tape. This is similar to the migrate/recall technology used in DFSMShsm on the mainframe.
- Tape Library automation
Before GMAS, paper and film images had to be retrieved manually from shelves and filing cabinets. The massive amounts of data being stored, and for such long periods of time, make it impractical to store all of it on disk. With tape automation, any medical image more than 18 months old can be retrieved in minutes. Patients with an appointment can have all of their medical images retrieved in bulk the night before. Emergency room patients can have previous images retrieved while admission clerks check for insurance coverage and perform triage.
- Display Screen
Images archived on the IBM GMAS are accessible in numerous ways. For example, all clinicians can access GMAS through the hospital record system, which provides complete paperless and filmless access to the patient record including medical images, lab results, radiology reports, and pharmacy records. Medical workers at any location can also access the grid using their Web browsers. This allows each employee to use the display systems they are already familiar with.
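Putting the elements above together, the tiering policy (FC disk for frequently accessed images, SATA for cooler ones, tape after 18 months) can be sketched in a few lines. The 90-day FC window is my own assumed threshold for illustration; the 18-month disk window comes from the example above. In the real solution these decisions are made by GPFS placement/migration policies and TSM for Space Management, not application code; this just makes the rules concrete:

```python
from datetime import datetime, timedelta

FC_WINDOW = timedelta(days=90)         # assumed "frequently accessed" window
DISK_WINDOW = timedelta(days=18 * 30)  # roughly the 18 months mentioned above

def choose_tier(last_access: datetime, now: datetime) -> str:
    """Pick a storage tier for an image based on how recently it was accessed."""
    age = now - last_access
    if age <= FC_WINDOW:
        return "fc"    # hot image: keep on fast Fibre Channel disk
    if age <= DISK_WINDOW:
        return "sata"  # cooler image: low-cost SATA disk
    return "tape"      # older than ~18 months: migrate to the tape library
```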
Unlike disk-only NAS systems, IBM's blended disk-and-tape approach makes this a much more cost-effective solution. For more details on IBM GMAS, read this six-page [Frost & Sullivan whitepaper].
technorati tags: HealthAlliance, IBM, GMAS, Grid, Medical, Archive, Solution, disk, tape, storage, PACS, CAS, Siemens, DICOM, HL7, Grid Access Manager, NFS, CIFS, GPFS, DS4000, FC, SATA, ILM, TSM, HSM, DFSMShsm, paperless, filmless, images, Frost Sullivan, whitepaper
Today, IBM announced a software/server/storage combo that out-performed both HP and Sun. Here is an excerpt from the [IBM Press Release]:
IBM today announced that its recently introduced E7100 Balanced Warehouse(TM), consisting of the IBM POWER6(TM) processor-based System p(TM) 570 server, the IBM System Storage(TM) DS4800 and DB2(R) Warehouse 9.5, is already lapping the field in performance. The new data warehousing solution is now ranked number one in both performance and price/performance in the TPC-H benchmark:
- 2x speed-up over the HP system with Oracle 10g and an equal number of cores;
- 3.17x speed-up over the Sun system with Oracle 10g, with a 38 percent price advantage;
- A new world record, loading 10 terabytes (TB) of data at six TB per hour (TB/hr).
"These latest benchmark results further prove IBM's strength and leadership in the business intelligence arena," said Scott Handy, vice president of marketing and strategy, IBM Power Systems. "The E7100 Balanced Warehouse is a complete data warehousing solution comprised of pre-tested, scalable and fully integrated system and storage components, designed to get customers up and running quickly to get to the real benefit of unprecedented business insight and intellect."
For those not familiar with the [IBM Balanced Warehouse], it is the productized version of DB2's ["Balanced Configuration Unit" or BCU] reference configuration. The IBM Balanced Warehouse presents a pre-tested, pre-configured solution for Business Intelligence (BI) applications. These come in the form of "building blocks" that can be combined to reach the size you need, with incremental growth as your business expands. Each building block expertly matches the CPU processor and RAM memory of the server with the appropriate I/O bus, cabling, and capacity of the disk system, resulting in optimal performance.
IBM DB2 software is designed to allow you to combine multiple building blocks into a single system image. This greatly simplifies your data warehouse deployment, and can help ensure success. For example, for a 50TB deployment, you can take a base 2TB building block, add 24 more, each with 2TB of disk capacity, and have a completely balanced environment. IBM clients have built systems over 300TB in this manner with these building blocks.
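The sizing arithmetic in the 50TB example can be sketched as follows; the function name and the ceiling-based rounding are mine, for illustration:

```python
import math

def blocks_needed(target_tb: float, block_tb: float = 2.0) -> int:
    """How many identical building blocks are needed to reach the target capacity."""
    return math.ceil(target_tb / block_tb)

# A 50TB target with 2TB blocks works out to the base block plus 24 more, 25 in all.
```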
The IBM Balanced Warehouse is offered in several configurations:
The [C-class models] are designed for SMB customers, employing an IBM System x server with internal or direct attached EXP3000 disk.
The [D-class models] are the next step up, offering department-level data marts and data warehouses for larger deployments, employing an IBM System x server with EXP3000 or System Storage DS3400 entry-level disk.
The [E-class models] represent our top-of-the-line configurations for our largest enterprise deployments. The [E6000] runs Linux on an IBM System x server with System Storage DS4800 disk. The [E7000] runs AIX on an IBM System p575 server with DS4800 disk. The new [E7100] mentioned above runs AIX on a POWER6-based IBM System p570 with DS4800 disk.
As I have mentioned before, in my post [Supermarkets and Specialty Shops], companies are looking for complete solutions, preferably from a single vendor like IBM, HP or Sun, rather than buying piece-part components from different vendors and hoping the combined ["Frankenstein"] configuration meets business requirements.
The DS4800 is an obvious choice for this solution, providing an excellent balance of cost and performance in modular packaging that is ideal for the incremental growth design inherent in the IBM Balanced Warehouse philosophy. To learn more about this disk system, see the official [DS4800 website] for details, descriptions and specifications.
technorati tags: IBM, HP, Sun, Balanced Warehouse, balanced, configuration, unit, BCU, Oracle, 10g, EXP3000, DS3400, DS4800, disk, storage, system, datamart, data, warehouse, Business Intelligence, BI, Frankenstein, supermarket, specialty shop, E6000, E7000, E7100
It's official! My "blook" Inside System Storage - Volume I is now available.
This blog-based book, or "blook", comprises the first twelve months of posts from this Inside System Storage blog, 165 posts in all, from September 1, 2006 to August 31, 2007. Foreword by Jennifer Jones. 404 pages. Topics include:
- IT storage and storage networking concepts
- IBM strategy, hardware, software and services
- Disk systems, Tape systems, and storage networking
- Storage and infrastructure management software
- Second Life, Facebook, and other Web 2.0 platforms
- IBM’s many alliances, partners and competitors
- How IT storage impacts society and industry
You can choose between hardcover (with dust jacket) or paperback versions:
This is not the first time I've been published. I have authored articles for storage industry magazines, written large sections of IBM publications and manuals, submitted presentations and whitepapers to conference proceedings, and even had a short story published with illustrations by the famous cartoonist [Ted Rall].
But I can say this is my first blook, and as far as I can tell, the first blook from IBM's many bloggers on DeveloperWorks, and the first blook about the IT storage industry. I got the idea when I saw [Lulu Publishing] run a "blook" contest. The Lulu Blooker Prize is the world's first literary prize devoted to "blooks"--books based on blogs or other websites, including webcomics. The [Lulu Blooker Blog] lists the winners from past years. Lulu is one of the new innovative "print-on-demand" publishers. Rather than printing hundreds or thousands of books in advance, as other publishers require, Lulu doesn't print them until you order them.
I considered cute titles like A Year of Living Dangerously, An Engineer in Marketing La-La Land, or Around the World in 165 Posts, but settled on a title that closely matched the name of the blog.
In addition to my blog posts, I provide additional insights and behind-the-scenes commentary. If you go to the Lulu website above, you can preview an entire chapter before purchase. I have added a hefty 56-page Glossary of Acronyms and Terms (GOAT) with over 900 storage-related terms defined, which also doubles as an index back to the post (or posts) that use or further explain each term.
So who might be interested in this blook?
- Business Partners and Sales Reps looking to give a nice gift to their best clients and colleagues
- Managers looking to reward early-tenure employees and retain the best talent
- IT specialists and technicians wanting a marketing perspective of the storage industry
- Mentors interested in providing motivation and encouragement to their proteges
- Educators looking to provide books for their classroom or library collection
- Authors looking to write a blook themselves, to see how to format and structure a finished product
- Marketing personnel that want to better understand Web 2.0, Second Life and social networking
- Analysts and journalists looking to understand how storage impacts the IT industry, and society overall
- College graduates and others interested in a career as a storage administrator
And yes, according to Lulu, if you order soon, you can have it by December 25.
technorati tags: IBM, blook, Volume I, Jennifer Jones, system, storage, strategy, hardware, software, services, disk, tape, networking, SAN, secondlife, Web2.0, facebook, Lulu, publishing, Blooker Prize, articles, magazines, proceedings, Ted Rall, insights, glossary, early-tenure, mentors, library, classroom, administrator, print, publish, on demand
For those in the US, last Friday, the day after Thanksgiving, marked the official start of the Holiday shopping season. This has been called [Black Friday], as some stores open as early as 4 a.m., when it is still dark outside, to offer special discount prices. Some shoppers camp out in sleeping bags and lawn chairs in front of stores overnight to be the first to get in.
Not surprisingly, some folks don't care for this approach to shopping, and prefer instead to shop online. Since 2005, the Monday after Thanksgiving (yesterday) has been called [Cyber Monday]. USA Today newspaper reports [Cyber Monday really clicks with customers]. Many of the major online shopping websites indicated a 37 percent increase in sales yesterday over last year's Cyber Monday.
On Deadline dispels the hype on both counts: [Cyber Monday: Don't Believe the Hype?], indicating that Black Friday is not the peak shopping day for bricks-and-mortar shops, and that Cyber Monday is not the busiest online shopping day of the year, either.
Despite the controversy, all of this increased use of the internet could lead to what is now being termed an "Internet Brown-out" in the next few years. Margaret Rouse of [IT Knowledge Exchange] points to this MacWorld article by Grant Gross titled [Study: Internet could run out of capacity in two years]. Here's an excerpt:
A flood of new video and other Web content could overwhelm the Internet by 2010 unless backbone providers invest up to US$137 billion in new capacity, more than double what service providers plan to invest, according to the study, by Nemertes Research Group, an independent analysis firm. In North America alone, backbone investments of $42 billion to $55 billion will be needed in the next three to five years to keep up with demand, Nemertes said.
Internet users will create 161 exabytes of new data this year, and this exaflood is a positive development for Internet users and businesses, IIA says.
If the "161 Exabytes" figure sounds familiar, it is probably from the IDC whitepaper [The Expanding Digital Universe], which estimated that the 161 Exabytes created, captured or replicated in 2006 will increase six-fold to 988 Exabytes by the year 2010. This is not just video captured for YouTube by internet users, but also corporate data captured by employees, and all of the many replicated copies. The IDC whitepaper built on the University of California, Berkeley's often-cited 2003 [How Much Info?] study, which looked not only at magnetic storage (disk and tape), but also at optical, film, print, and over-the-air transmissions like TV and radio.
A key difference was that while UC Berkeley focused on newly created information, the IDC study focused on digitized versions of this information, and included the added impact of replication. It is not unusual for a large corporate database to be replicated many times over. This is done for business continuity, disaster recovery, decision support systems, data mining, application testing, and IT administrator training. Companies also often make two or three copies of backups or archives on tape or optical media, to store them in separate locations.
Likewise, it should be no surprise that internet companies maintain multiple copies of data to improve performance. How fast a search engine can deliver a list of matches can be a competitive advantage. Content providers may offer the same information translated into several languages. Many people replicate their personal and corporate email onto their local hard drives, to improve access performance as well as to work offline.
The big question is whether we can assume that an increased amount of information created, captured and replicated will have a direct linear relation to the growth of what is transmitted over the internet. Three-fourths of U.S. internet users watched an average of 158 minutes of online video in May 2007; is this also expected to grow six-fold by 2010? That would be nearly sixteen hours a month at current video quality, or, more likely, it would be the same 158 minutes of much higher quality video.
On the other hand, much of what is transmitted is never stored, or stored only for very short periods of time. Some of these transmissions are live broadcasts: you are either there to watch and listen when they happen, or you are not. Online video games are a good example. The internet can be used to allow multiple players to participate in real time, but much of this is never stored long-term. An interesting feature of the Xbox 360 is that it lets you replay "highlight" videos of the game just played, but I do not know if these can be saved or transferred to longer-term storage.
Of course, there will always be people who will save whatever they can get their hands on. Wired Magazine has an article [Downloading Is a Packrat's Dream], explaining that many [traditional packrats] are now also "digital packrats", which might account for some of this growth. If you think you might be a digital packrat, Zen Habits offers a [3-step Cure].
In any case, the trends for both increased storage demand, and increased transmission bandwidth requirements, are definitely being felt. Hopefully, the infrastructure required will be there when needed.
technorati tags: Thanksgiving, Christmas, Black Friday, Cyber Monday, MacWorld, Nemertes, IDC, whitepaper, UC Berkeley, How Much Info, study, Xbox 360, video, YouTube
I hope everyone had a great weekend!
Technology Review has a great 6-minute video showing how the PowerTune system works in the ['self-tuning' guitar].
As with any self-tuning equipment, there are three essential parts.
- Measurement. In the case of the guitar, small sensors identify the current note based on string tension.
- Response. Based on the measurement, the self-tuning system either decides that there is no more to do, or to take specific action. In the case of this guitar, the action would be to loosen or tighten the string.
- Action. The action taken is expected to move closer to the desired result. In this case, tiny motors inside the handle turn the thumbscrews to loosen or tighten the strings accordingly.
These three parts form a "closed-loop design", as it is called in [Control Theory]. After the action in step 3 is taken, the system goes back to step 1, takes a new measurement, and determines a new response. This could mean that the string is tightened and loosened by ever smaller amounts until it is close enough to the desired accuracy, in this case an impressive two [cents].
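The measure-respond-act loop can be sketched in a few lines. Everything here is hypothetical except the two-cent tolerance from the article: the simulated string and the 0.8 damping factor are my own stand-ins, since the real PowerTune system does this in hardware:

```python
def tune_string(read_cents, adjust, tolerance=2.0, max_steps=50):
    """Closed-loop tuner: measure, respond, act, then repeat.

    read_cents() returns the current offset from the target pitch, in cents;
    adjust(delta) turns the (simulated) motor by delta cents.
    """
    error = read_cents()              # 1. Measurement
    for _ in range(max_steps):
        if abs(error) <= tolerance:   # 2. Response: close enough, stop
            break
        adjust(-error * 0.8)          # 3. Action: damped correction against the error
        error = read_cents()          # back to step 1
    return error

class SimulatedString:
    """Stand-in for the sensor and motor on one string."""
    def __init__(self, offset_cents):
        self.offset = offset_cents
    def read(self):
        return self.offset
    def adjust(self, delta):
        self.offset += delta
```

Starting 35 cents sharp, for example, the loop converges 35 → 7 → 1.4 cents, landing within the two-cent tolerance after two corrections.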
On the server side, IBM has offered this for years. For example, for z/OS applications on System z mainframes, the [Workload Manager (WLM) offers a "goal mode"] that allows you to set desired results for your business applications, for example how quickly they respond in processing transactions. WLM measures the response time of the transactions, determines an appropriate response, if any, and takes action to shift processor cycles (MIPS) or RAM to help the workloads with the highest priority, in some cases stealing cycles and RAM away from lower-priority tasks.
For storage, we have IBM TotalStorage Productivity Center. It can scan for file systems over 90 percent full, for example, determine an appropriate response based on policies, and take action to expand the file system to a larger size. This may involve dynamically expanding the LUN that the file system sits on, a feature available on IBM SAN Volume Controller, DS8000 series, DS4000 series and N series disk systems. This is the kind of closed-loop design that can help eliminate those pesky phone calls at 3am.
But why focus on storage alone? Combining servers and storage into a higher-level closed-loop design is accomplished with [IBM Tivoli Intelligent Orchestrator] and [IBM Tivoli Provisioning Manager]. In this combo, Orchestrator measures and responds, and can invoke Provisioning Manager workflows to take action. Workflows are like scripts on steroids. Unlike normal scripts, which run on a single machine, workflows can communicate with multiple servers, storage and even networking gear to take the appropriate actions on each of those machines, like install updated software, carve a new LUN, or define a new SAN zone.
The products are well integrated with TotalStorage Productivity Center for the storage aspects.
technorati tags: PowerTune, self-tuning, guitar, closed loop, design, IBM, z/OS, WLM, goal mode, TotalStorage, Productivity Center, LUN, SAN Volume Controller, SVC, DS8000, DS4000, N series, disk, storage, Tivoli, Intelligent Orchestrator, TIO, Provisioning Manager, TPM, workflows, zone