JeffHebert 060001UEQ2 Tags:  technology it storage marketing data social media 1 Comment 2,748 Views
#1 The SiliconAngle / Wikibon Cube
You couldn’t miss it. You walked onto the show floor and there it was, larger than life: the SiliconAngle / Wikibon Cube, broadcasting live from VMworld 2011. Guests on the Cube included Tom Georgens (NTAP), Pat Gelsinger (EMC), David Scott (HP) and Rick Jackson (VMware), as well as many more. The Cube also ran 12 Industry Spotlights; the most interesting dealt with storage optimization, especially for VMware.
Oh, the times they are a-changin’. Now that HD TV can be delivered live over the internet, the Cube has broadcast from a number of industry shows and user conferences. The great part is that it’s like watching a sporting event covered by ESPN, but for tech. The Cube brings all of the highlights of these events right to your screen. If you can’t make an event, no problem: you can catch the most important messages from the Cube. The Cube is the new mechanism for delivering content to users the way they want to receive it: as TV. For more, check out www.siliconangle.tv
#2 Storage Optimization – Industry Spotlight
In the Storage Optimization industry spotlight, Dave Vellante and his co-host John Furrier spent the first 15 minutes teeing up the concept. They discussed storage optimization, where it has come from and where it is going, especially in VMware environments. We are hearing more and more about storage efficiency technologies. During the next 15 minutes, Dave and I discussed the 5 essential storage efficiency technologies: virtualization, thin provisioning, deduplication, real-time compression and automated tiering.
We also discussed the fact that IBM Real-time Compression is not only the most efficient and effective compression technology in the industry; we also learned that IBM really acquired not just a real-time “compression” technology but a platform that can do a number of things in real time. In fact, all 5 of the IBM storage efficiency technologies operate in real time, which is the most effective approach for customers.
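IBM’s Real-time Compression algorithm is proprietary, but the general idea of inline compression — compressing each chunk of data as it flows toward storage, so uncompressed data never lands on disk — can be sketched. The following is a toy illustration using Python’s standard `zlib` stream interface, not IBM’s actual technology:

```python
import zlib

def compress_stream(chunks, level=6):
    """Compress data inline, chunk by chunk, as it flows to storage.

    Toy illustration of real-time (inline) compression: each chunk is
    compressed as it arrives rather than in a post-process pass.
    zlib stands in here; IBM's actual algorithm is proprietary.
    """
    comp = zlib.compressobj(level)
    for chunk in chunks:
        out = comp.compress(chunk)
        if out:
            yield out
    yield comp.flush()  # emit whatever the compressor is still buffering

# Hypothetical workload: highly repetitive VM log writes compress well inline.
data = [b"virtual machine log entry\n" * 4096 for _ in range(10)]
compressed = b"".join(compress_stream(data))
original_size = sum(len(c) for c in data)
print(original_size, len(compressed))
```

The point of the sketch is the shape of the pipeline, not the ratio: because the compressor runs in the write path, the space savings are realized immediately, which is what distinguishes real-time compression from after-the-fact post-processing.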
We have been hearing a great deal about storage optimization in VMware environments because server virtualization, while successful on the server side of the house, didn’t do all it set out to do: it didn’t fix the overall IT budget.
Virtualizing servers only pushed the financial problem to the storage side of the house. Users have told us that when they virtualize their servers, storage grows as much as 4x. By leveraging the right storage optimization technologies together, users can get their budgets back under control and deliver on the promise that server virtualization set out to keep.
#3 More Free Time for “Real-life”
While on the Cube as a panelist with my good friend Marc Farley (HPsisyphus, formerly @3ParFarley), Dave asked us what was the most interesting thing we had seen walking around the show floor. I didn’t hesitate in my response; there were two in my mind. First, it couldn’t be more obvious how fast data is growing. Over 50% of the 19,000 people there had cameras and were taking pictures and video. That data is going to be stored somewhere. And they had those cameras for a reason: either we have more bloggers and tweeters than we know about, more marketing people are going to these events, or more people are using social media to inform and educate others. The way users want to receive data is always changing and evolving, and at VMworld 2011 we were delivering content in a number of ways, especially photos and video. All that data will end up in the “cloud” somewhere.
The second thing I noticed was the amount of free time VMware has given back to the IT user. On more than one occasion I heard end users talking about family, vacations and travel instead of the usual banter about how challenging their jobs are and the issues they have with their vendors, which is the normal thing I hear at these shows. This was not an anomaly. I am chalking it up to the fact that VMware makes people’s lives easier.
#4 Proximal Data
These “most interesting things” are not in any particular order. I say this because I believe Proximal Data was THE most interesting thing I saw at the show. Proximal Data just came out of “stealth” in early August. They didn’t have a booth at VMworld, but they did have a “whisper suite.” I have to confess: since I used to be an analyst, people sometimes ask me to take a look at their technology and their message to see if it is in line with what is going on in the industry, so I got to hear the pitch.
Proximal Data’s message is right on. It hits a very important and growing topic in VMware environments these days, the I/O bottleneck on virtual servers, and they solve the problem in a unique and intelligent way.
First, the problem. One of the issues facing VMware today is the number of virtual machines that can be hosted on one physical machine: the more users can get on one system, the more efficient they can be. The problem is that today’s systems are running into I/O workload bottlenecks that limit the number of virtual machines one system can run.
One way to solve this is to add more memory to the host, but that can be very expensive. You can add more HBAs or NICs, but that can also be expensive and difficult to manage. You can add more flash cache to your storage to relieve the I/O bottleneck, but that only solves half the problem; you still need to address the host side, again with memory or host adapters.
The solution: Proximal Data combines advanced I/O management software with PCI flash cards on the host, at a very reasonable price per host. The software and card are 100% transparent to both the virtual servers and the storage, which to me is one of the most important features of the implementation. Transparency is the key to any new technology. IT has a ton of challenges and has done a great deal of work to get its environment to where it is today; implementing a technology that undoes all of that work is very painful. Remember, the hardest thing to change in IT is process, not technology, so it’s important to preserve the process. That is what Proximal Data does. Proximal Data can increase the I/O capability of a VMware server with just a 5-minute installation of the PCI card and their software. This technology can double or even triple the number of virtual machines on a physical server, and that is a tremendous ROI. A new win for efficiency.
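Proximal Data hasn’t published implementation details, but the core idea of a transparent host-side read cache can be sketched: intercept block reads, serve hot blocks from local flash, and fall through to the array on a miss, with neither the guest VMs nor the storage aware the cache exists. A minimal LRU sketch under those assumptions (the block numbering and backing store here are hypothetical, not Proximal’s design):

```python
from collections import OrderedDict

class HostReadCache:
    """Toy transparent read cache: hot blocks are served from "flash",
    misses fall through to the backing array. Neither the guest VMs
    nor the storage array needs to know the cache exists."""

    def __init__(self, backing_store, capacity_blocks):
        self.backing = backing_store          # dict: block number -> data
        self.capacity = capacity_blocks
        self.cache = OrderedDict()            # stands in for the PCI flash card
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)     # mark as most recently used
            return self.cache[block]
        self.misses += 1
        data = self.backing[block]            # slow path: go to the array
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used
        return data

# Hypothetical workload: repeated reads of a small hot set mostly hit "flash".
array = {n: f"block-{n}".encode() for n in range(1000)}
cache = HostReadCache(array, capacity_blocks=100)
for _ in range(10):
    for n in range(50):                       # hot working set of 50 blocks
        cache.read(n)
print(cache.hits, cache.misses)
```

The sketch also shows why the transparency claim matters: the cache sits between the existing read path and the existing array, so nothing above or below it has to change.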
There are a number of folks entering this market these days; however, Proximal does it transparently, with no agents, making it the most user-friendly implementation. While they won’t have product until 2012, I am sure it will be very successful when it hits the market.
#5 Convergence to the Cloud
Are we seeing the coming of the “God Box”? A number of vendors are talking more and more about, and investing in, public/private cloud. More systems are popping up that put servers, networking, high availability and storage all in one floor tile. These systems are designed to integrate, scale, manage VMs simply, increase productivity and ease the management of virtually any application deployment in any business. These boxes also help you connect to the cloud to ease the cost burden. Is the pendulum swinging back to an “open systems” mainframe? Only time will tell.
One more for fun. The first meeting I had at VMworld was with a potential OEM prospect for the IBM Real-time Compression IP. I have always said that this technology could revolutionize the data storage business much like VxVM did for Veritas many years ago. Creating a standard way to do compression across a number of systems can help users with implementation as well as ease the storage cost burden. I hope this moves forward, and I hope more folks step up to OEM the technology.
There is a new wave in open systems data protection. Joining forces, IBM and Innovation lead the way with z/OS Distributed Data Protection and non-disruptive disaster recovery using high-speed System z mainframe TS1130 tape, ProtecTIER deduplication, the TS7700 VTL, DS8000 disk storage and high-performance System z FICON channels — NOT your TCP/IP network.
New Wave in Open Systems Data Protection
Smart Computing: The Next Era in IT
We are seeing dramatic shifts as our planet becomes smarter. These shifts are changing the way the world works. Cities are becoming smarter by transforming traffic systems, water systems, security—every possible form of municipal infrastructure. Business process is evolving across every industry—banking, trading, manufacturing. And we're seeing changes in the way people live, enjoying advancements ranging from reduced congestion and pollution to new ways to communicate and collaborate. Every aspect of life is benefiting from the instrumentation, interconnection and infusion of intelligence into the systems of the world.
Smart storage solutions let you have it all
Some items are just bound together: salt and pepper, a horse and carriage, or even smoke and fire. While some may argue it’s hard to grow a data center without adding cost and complexity, IBM begs to differ. Its smarter approach to data storage means increased capacity goes hand in hand with cost efficiency and ease of use.
Capacity and Simplicity
Cisco’s apparently going to try to simplify its sales, services and engineering organizations in the next 120 days
Faced with a nasty loss of credibility, a string of poor financial results, shrinking market share in its core business, an unwieldy and alienating bureaucracy blamed for the top executive exodus it has been experiencing, and a stock price that’s plunged into the toilet, Cisco, once an economic bellwether, is promising to do more than simply kill off its once-popular Flip video camcorder business and lay off 550 people, an admission that its foray into the consumer segment had largely failed.
It said in a press release issued Thursday morning that it’s going to a “streamlined operating model” focused on five areas, not, apparently, the literally 30 different directions it’s been going in — although, come to think of it, it did say something about “greater focus,” so maybe it’s not really cutting back.
These focus areas are, it said, "routing, switching, and services; collaboration; data center virtualization and cloud; video; and architectures for business transformation."
Nobody seems to know what that last one is. The Wall Street Journal criticized Cisco for not being able to explain in plain English what it’s doing, and Barron’s complained that it needed a Kremlinologist to decrypt the jargon in the press release.
Anyway, Cisco’s apparently going to try to simplify its sales, services and engineering organizations in the next 120 days, or by July 31, when its next fiscal year begins. Well, maybe not everything, it warned, but sales ought to be reorganized by then.
This streamlining seems to mean that:
It's unclear whether any of this means layoffs.
Cisco piped in a quote credited to Moore, saying: "Cisco is focused on making a series of changes throughout the next quarter and as we enter the new fiscal year that will make it easier to work for and with Cisco, as we focus our portfolio, simplify operations and manage expenses. Our five company priorities are for a reason - they are the five drivers of the future of the network, and they define what our customers know Cisco is uniquely able to provide for their business success. The new operating model will enable Cisco to execute on the significant market opportunities of the network and empower our sales, service and engineering organizations."
Cloud security: the grand challenge
In addition to the usual challenges of developing secure IT systems, cloud computing presents an added level of risk because essential services are often outsourced to a third party. The externalized aspect of outsourcing makes it harder to maintain data integrity and privacy, support data and service availability, and demonstrate compliance.
In effect, cloud computing shifts much of the control over data and operations from the client organization to its cloud providers, much in the same way organizations entrust part of their IT operations to outsourcing companies. Even basic tasks, such as applying patches and configuring firewalls, can become the responsibility of the cloud service provider, not the user. This means that clients must establish trust relationships with their providers and understand the risk in terms of how these providers implement, deploy, and manage security on their behalf. This “trust but verify” relationship between cloud service providers and consumers is critical because the cloud service consumer is still ultimately responsible for compliance and protection of its critical data, even if that workload has moved to the cloud. In fact, some organizations choose private or hybrid models over public clouds because of the risks associated with outsourcing services.
Other aspects of cloud computing also require a major reassessment of security and risk. Inside the cloud, it is difficult to physically locate where data is stored. Security processes that were once visible are now hidden behind layers of abstraction. This lack of visibility can create a number of security and compliance issues.
In addition, the massive sharing of infrastructure with cloud computing creates a significant difference between cloud security and security in more traditional IT environments. Users spanning different corporations and trust levels often interact with the same set of computing resources. At the same time, workload balancing, changing service level agreements, and other aspects of today’s dynamic IT environments create even more opportunities for misconfiguration, data compromise, and malicious conduct.
Infrastructure sharing calls for a high degree of standardization and process automation, which can help improve security by reducing the risk of operator error and oversight. However, the risks inherent in a massively shared infrastructure mean that cloud computing models must still place a strong emphasis on isolation, identity, and compliance.
Cloud computing is available in several service models (and hybrids of these models), each of which presents different levels of responsibility for security management. Figure 1 depicts the different cloud computing models.
"As the world becomes more interconnected, instrumented and intelligent, more and more information is created. This influx of information creates both challenges and opportunities. Companies must build smarter information infrastructures that can handle all of this information and manage it intelligently. IBM has invested billions of dollars developing smart storage solutions that embody a set of essential technologies: virtualization, thin provisioning, deduplication, compression and automated tiering that will enable you to manage the influx of information and unlock new business opportunities."
In many IT departments, increased user demand has led to haphazard storage growth, resulting in sprawling, heterogeneous storage environments. These environments make it difficult to achieve optimal utilization and to provision storage capacity for new users and applications. Storage virtualization can put an end to these problems. It enables companies to logically aggregate disk storage so capacity can be efficiently allocated across applications and users.
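The aggregation idea can be illustrated with a toy capacity pool (all names and sizes below are hypothetical, not any specific IBM product): physical arrays contribute capacity to one logical pool, and volumes are provisioned from the pool rather than tied to a particular box.

```python
class StoragePool:
    """Toy model of storage virtualization: heterogeneous arrays are
    aggregated into one logical pool, and volumes draw from the pool
    rather than from any single physical array."""

    def __init__(self):
        self.arrays = {}      # array name -> capacity in GB
        self.volumes = {}     # volume name -> allocated GB

    def add_array(self, name, capacity_gb):
        self.arrays[name] = capacity_gb

    @property
    def total_gb(self):
        return sum(self.arrays.values())

    @property
    def free_gb(self):
        return self.total_gb - sum(self.volumes.values())

    def provision(self, volume, size_gb):
        if size_gb > self.free_gb:
            raise ValueError("pool exhausted")
        self.volumes[volume] = size_gb

# Hypothetical example: two mismatched arrays behave as one 15 TB pool,
# so a volume can be larger than either array's leftover free space alone.
pool = StoragePool()
pool.add_array("array-A", 10_000)   # 10 TB
pool.add_array("array-B", 5_000)    # 5 TB
pool.provision("erp-data", 6_000)   # bigger than array-B could hold by itself
print(pool.free_gb)
```

The design point the sketch captures is exactly the one in the paragraph above: once capacity is pooled, utilization and provisioning decisions are made against one logical number rather than per-array silos.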
Originating Author: Wikibon Daemon
This paper was written and submitted by NetApp and is being republished with permission.
Flexible Choices to Optimize Performance
November 2008 | WP-7061-1008
Solid state drives (SSDs) based on flash memory are generating a lot of excitement. The enthusiasm is warranted because flash SSDs demonstrate latencies that are at least 10 times lower than those of the fastest hard disk drives (HDDs), often enabling response times more than 10X faster. For random read workloads, SSDs may deliver the I/O throughput of 30 or more HDDs while consuming significantly less power per disk. The performance of SSDs can reduce the number of fast-spinning hard disk drives you need in a storage system, and fewer disk drives translate into significant savings in power, cooling, and data center space. This performance benefit comes at a premium; flash SSDs are far more expensive per gigabyte of capacity than HDDs. Therefore, SSDs are best applied in situations that require the highest performance.
The underlying flash memory technology used by SSDs has many advantages, particularly in comparison to DRAM. In addition to storage persistence, these advantages include higher density, lower power consumption, and lower cost per gigabyte. Because of these unique characteristics, NetApp is focusing on the targeted use of flash memory in storage systems and within your storage infrastructure in ways that can deliver the most performance acceleration for the minimum investment.
We are implementing flash memory solutions using SSDs for persistent storage, and we will also use flash memory directly to create expanded read caching devices. Caching can deliver performance that is comparable to or better than SSDs. Because you can complement a large amount of hard disk capacity with a relatively modest amount of read cache, caching is more cost effective for typical enterprise applications. As a result, more people can benefit from the performance acceleration achievable with flash technology.
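The cost-effectiveness argument for caching reduces to simple arithmetic: average read latency is the hit rate times the cache latency plus the miss rate times the disk latency, so a modest cache with a high hit rate captures most of the benefit of all-flash storage. A quick sketch (the latency figures below are illustrative assumptions, not NetApp measurements):

```python
def effective_latency_ms(hit_rate, cache_ms, hdd_ms):
    """Average read latency for a flash read cache in front of HDDs:
    hits are served at cache latency, misses at disk latency."""
    return hit_rate * cache_ms + (1.0 - hit_rate) * hdd_ms

# Illustrative numbers only: ~0.2 ms for a flash read, ~8 ms for an
# HDD random read. A 90% hit rate brings average latency near flash speed.
for hit_rate in (0.0, 0.5, 0.9):
    avg = effective_latency_ms(hit_rate, 0.2, 8.0)
    print(f"hit rate {hit_rate:.0%}: {avg:.2f} ms average read latency")
```

This is why complementing a large HDD capacity tier with a relatively small read cache can be more cost-effective for typical enterprise workloads than buying SSD capacity outright.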
You get even more flexibility and value from flash technology by combining it with the NetApp® unified storage architecture, which enables you to leverage your investment in flash memory to simultaneously accelerate multiple applications, whether they use SAN or NAS. Storage efficiency features such as deduplication for primary storage further increase your power, cooling, and space savings.
This white paper is an overview of NetApp’s plan to deliver SSDs (both native and virtualized arrays) plus flash-based read caching and of our ability to further leverage both of these technologies in caching architectures. Selection guidelines are provided to help you choose the right technology to reduce latency and increase your transaction rate while taking into consideration cost versus benefit.