Two solid days at VMworld 2011 and I got to do and see a lot. Here is a breakdown of the top 5 things I saw at VMworld.
#1 The SiliconAngle / Wikibon Cube
You couldn’t miss it. You walked onto the show floor and there it was, larger than life: the SiliconAngle / Wikibon Cube, broadcasting live from VMworld 2011. Guests on the Cube included Tom Georgens (NTAP), Pat Gelsinger (EMC), David Scott (HP) and Rick Jackson (VMware), among many others. The Cube also hosted 12 Industry Spotlights. The most interesting spotlight dealt with storage optimization, especially for VMware.
Oh, the times they are a-changing. Now that HD TV can be delivered live over the internet, the Cube has broadcast from a number of industry shows and user conferences. The great part is that it is like watching a sporting event covered by ESPN, but for tech. The Cube brings all of the highlights of these events right to your computer screen. If you can’t make an event, no problem: you can catch all the most important messages from the Cube. The Cube is the new mechanism for delivering content to users the way they want to receive it: TV. For more, check out www.siliconangle.tv
#2 Storage Optimization – Industry Spotlight
In the Storage Optimization industry spotlight, Dave Vellante and his co-host John Furrier spent the first 15 minutes teeing up the concept. They discussed storage optimization, where it has come from and where it is going, especially in VMware environments. We are hearing more and more about storage efficiency technologies. During the next 15 minutes, Dave and I discussed the five essential storage efficiency technologies.
We also discussed the fact that IBM Real-time Compression is the most efficient and effective compression technology in the industry, and we learned that IBM really acquired not just a real-time “compression” technology but a platform that can do a number of things in real time. In fact, all five IBM storage efficiency technologies operate in real time, which is the most effective approach for customers.
We have been hearing a great deal about storage optimization in VMware environments because server virtualization, while successful for the server side of the house, didn’t do all it set out to do: it didn’t fix the overall IT budget.
Virtualizing servers only pushed the financial problem to the storage side of the house. Users have told us that when they virtualize their servers, storage grows as much as 4x. By combining the right storage optimization technologies, users can get their budgets back under control and deliver on the promise that server virtualization set out to fulfill.
#3 More Free Time for “Real-life”
While on the Cube as a panelist with my good friend Marc Farley (@HPsisyphus, formerly @3ParFarley), Dave asked us what was the most interesting thing we saw while walking the show floor. I didn’t hesitate in my response; there were two in my mind. First, it couldn’t be any more obvious how fast data is growing. Over 50% of the 19,000 people there had cameras and were taking pictures and video. All of that data is going to be stored somewhere. And they had these cameras for a reason: either we have more bloggers and tweeters than we know about, more marketing people are going to these events, or more people are using social media to inform and educate others. The way users want to receive data is always changing and evolving, and at least at VMworld 2011 we were delivering content in a number of ways, especially photos and video. All that data will end up in the “cloud.”
The second thing I noticed was the amount of free time VMware has given back to the IT user. On more than one occasion I heard end users talking about family, vacations and travel instead of the usual banter about how challenging their jobs are and the issues they have with their vendors, which is the normal thing I hear at these shows. This was not an anomaly. I am chalking it up to the fact that VMware makes people’s lives easier.
#4 Proximal Data
These “most interesting things” are not in any particular order. I say this because I believe that Proximal Data is THE most interesting thing I saw at the show. Proximal Data just came out of “stealth” in early August. They didn’t have a booth at VMworld, but they did have a “whisper suite.” I have to confess: since I used to be an analyst, people sometimes ask me to take a look at their technology and their message to see whether it is in line with what is going on in the industry, so I got to hear the pitch.
Proximal Data’s message is right on. It hits a very important and growing topic with VMware these days, the I/O bottleneck on virtual servers, and they solve this problem in a very unique and intelligent way.
First, the problem. One of the issues facing VMware today is the
number of virtual machines that can be hosted by one physical machine.
The more users can get on one system, the more efficient they can be.
The problem is that today’s systems are running into I/O workload bottlenecks that limit the number of virtual machines one system can run.
One way to solve this problem is to add more memory to the host, but that can be very expensive. You can add more HBAs or NICs, but that can be expensive and difficult to manage. You can add more flash cache to your storage to relieve the I/O bottleneck, but that only solves half the problem; you still need to address the host side, again with memory or host adapters.
The solution: Proximal Data. It combines advanced I/O management software with PCI flash cards on the host, at a very reasonable price per host. The software combined with the card is 100% transparent to both the virtual servers and the storage, which to me is one of the most important features of the implementation.
Transparency is the key to any new technology. IT has a ton of
challenges and has done a great deal of work to get their environment to
where it is today. To implement a technology that causes all of that
work to be undone is very painful. Remember, the hardest thing to
change in IT is process, not technology. It’s important to preserve the
process. That is what Proximal Data does. Proximal Data can increase the I/O capability of a VMware server with just a five-minute installation of the PCI card and their software. This technology can double or even triple the number of virtual machines on any physical server, and that is a tremendous ROI. A new win for efficiency.
A number of folks are entering this market these days; however, Proximal does it transparently with no agents, making it the most user-friendly implementation. While they won’t have product until 2012, I am sure it will be very successful when it hits the market.
#5 Convergence to the Cloud
Are we seeing the coming of the “God Box”? A number of vendors are talking about, and investing in, public/private cloud more and more. More systems are popping up that have servers, networking, high availability and storage all in one floor tile. These systems are designed to integrate, scale, manage VMs simply, increase productivity and ease the management of all possible application deployments in any business. Additionally, these boxes help you connect to the cloud to ease the cost burden. Is the pendulum swinging back to the “open systems” mainframe? Only time will tell.
One more for fun. The first meeting I had at VMworld was with a potential OEM prospect for the IBM Real-time Compression IP. I have always said that this technology could revolutionize the data storage business much like VxVM did for Veritas many years ago. Creating a standard way to do compression across a number of systems can help users with implementation as well as ease the storage cost burden. I hope this moves forward, and I hope more folks step up who want to OEM the technology.
WASHINGTON - 01 Mar 2011: IBM (NYSE: IBM) today announced a major expansion of its Institute for Electronic Government (IEG) in Washington, D.C., adding cloud computing and analytics capabilities for public sector organizations around the world.
IBM has moved and expanded the facility in order to meet the growing demand from Government, Health Care and Education leaders who recognize the potential of cloud computing environments and business analytics technologies to improve efficiencies, reduce costs and tackle energy and budget challenges.
According to recent IBM surveys of technology leaders globally, 83 percent of respondents identified business analytics -- the ability to see patterns in vast amounts of data and extract actionable insights -- as a top priority and a way in which they plan to enhance their competitiveness. In addition, an overwhelming majority of respondents -- 91 percent -- expect cloud computing to overtake on-premise computing as the primary IT delivery model by 2015.
by Steve Kenniston
After landing in Warsaw, I got into a car with the local sales leader for Poland and we drove two hours to the event location. The roads and the land in Poland reminded me very much of my hometown in Maine: scenic and rural, but beautiful and peaceful. We talked storage for two hours, and I am always fascinated by the thirst for knowledge I find when I travel. It was a great ride, followed by a customer reception and some local Polish brew.
Thursday I spent the day in Sterdyn, Poland, for IBM Storage University. There were 30 customers at the event, and it went very well. The event was at Palac Ossolinski, used today as an event center but with a very rich history; at one point it served as a medical facility in WWII. The photo is of the building where we had the event. We covered a range of storage topics.
The customers were very interactive and provided a lot of insight into their environments. Interestingly enough, I learned during the customer reception that IBM storage is #1 in Poland, with HP second and EMC third. This is a true testament to the IBM sellers and to the customers who use IBM products every day to drive their business. I also learned that the data breakdown in Poland is 90% block, 10% file, which I found interesting; it would be worth checking back 12 months from now to see how it has changed.
I did learn something very interesting in Poland. The question was asked, “Why XIV? What is so special about XIV?” The answer was awesome, and it started with two questions:
1) How old is RAID?
2) How old is your iPhone?
The reality is that data growth is outpacing what traditional RAID can handle, and data profiles are changing as well. Combined, these have driven new technologies like Cleversafe, cloud computing, Hadoop and XIV. Just as the iPhone is a new approach to the smartphone based on new knowledge about how smartphones are being used, we now know more about how data and storage are being used. New ways to deliver capacity and performance are needed to keep up with the changing times. I thought it was a very good answer, put in terms that make sense.
Thursday evening I traveled back to Warsaw, where I got in a bit late and just went to a local pub, Sketch. I grabbed a small bite and some local mead and then headed back to the hotel. I did get to see the Palace of Culture and Science in the middle of Warsaw; very impressive, built as a gift from Russia to Poland.
I have an early flight to Prague. I am very excited about this part
of the journey as I have always wanted to travel to Prague. Press
meeting right when I land. Stay tuned.
Businesses continue to search for storage solutions that save money without sacrificing performance. Last year, IBM introduced Scale Out Network Attached Storage (SONAS), the industry’s first network-attached storage (NAS) offering to address this business need. SONAS is an enterprise-class NAS system that provides extreme scalability, availability and security, and does so with record-breaking performance. It’s designed as a single global repository to manage multiple petabytes of storage and billions of files, all under one file system.
In April, IBM announced significant performance enhancements to SONAS: improved information lifecycle management (ILM) and hierarchical storage management (HSM), as well as ease of deployment and antivirus integration.
Todd Neville, SONAS program leader at IBM, says SONAS is unique in that it can scale nearly linearly to almost any performance level.
With SONAS, he says, “You can build a system that’s as fast as you want
it to be; but it’s not just about absolute size, it’s also about bang
for your buck. We’ve significantly increased the software performance in
our upcoming release 1.2, so customers see a significant performance
increase on their current platform with no additional costs.”
Funda Eceral, SONAS market segment manager at IBM, says SONAS is the
only true scale-out NAS system available in the marketplace. “While you
can nondisruptively add capacity with storage building blocks,” Eceral
says, “you can also still continue to independently scale out your I/O
performance with interface nodes. It brings operational efficiency and
extraordinary utilization rates for each customer.”
Three Key Features
This version of SONAS offers three key features, according to Neville:
Ease of deployment. Using Network Data Management Protocol
(NDMP), a SONAS device can be easily integrated into existing
data-center backup infrastructures. “If you have an enterprise backup
deployment using NDMP, you will be able to take SONAS and quickly
connect with a wide variety of popular backup systems,” Neville says.
Built-in antivirus integration. Scalable NAS storage devices
must have a way for an antivirus function to perform scans on files
intelligently, such as when they’re opened or closed. SONAS includes a
built-in functionality that lets a third party like Symantec integrate
into the SONAS device to perform antivirus operations, as simple “full
file-system scans” become cumbersome at enterprise scales.
Physical size. Neville says customers asked IBM to make the SONAS device more compact, although it supports almost a full petabyte in a single rack, making it the only offering in IBM’s NAS portfolio that can do so. It’s now 10 inches shorter than the original device, can scale up to 14.4 petabytes (with 2 TB drives) and has a single point of management, which can significantly reduce storage-administration costs.
“Everyone says, ‘We do tiering, HSM and ILM,’ but design matters—IBM does it differently.” —Todd Neville, SONAS program leader, IBM
"As the world becomes more interconnected, instrumented and intelligent,
more and more information is created. This influx of information creates
both challenges and opportunities. Companies must build smarter
information infrastructures that can handle all of this information and
manage it intelligently. IBM has invested billions of dollars developing
smart storage solutions that embody a set of essential technologies:
virtualization, thin provisioning, deduplication, compression and
automated tiering that will enable you to manage the influx of
information and unlock new business opportunities." http://www-03.ibm.com/systems/storage/news/announcement/20101007.html
In many IT departments, increased user demand has led to haphazard
storage growth, resulting in sprawling, heterogeneous storage
environments. These environments make it difficult to achieve optimal
utilization and to provision storage capacity for new users and
applications. Storage virtualization can put an end to these problems.
It enables companies to logically aggregate disk storage so capacity can
be efficiently allocated across applications and users.
There has been significant discussion in the industry about
storage optimization and making better use of storage capacity. A number
of storage vendors have successfully marketed data de-duplication for offline/backup applications, reducing the volume of backup data by a factor of 5-15:1, according to Wikibon user input.
Data de-duplication as applied to backup use cases differs from compression in that compression actually changes the data, using algorithms to encode it in fewer bits. With de-duplication, the data is not changed; rather, copies 2-N are deleted and pointers are inserted to a 'master' instance of the data. Single-instancing can be thought of as synonymous with de-duplication.
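The distinction can be sketched in a few lines of Python (purely illustrative; real products use far more sophisticated pipelines): compression rewrites the bytes themselves, while de-duplication leaves one master copy intact and turns the duplicates into pointers.

```python
import zlib

# Compression changes the data: the stored bytes differ from the original.
original = b"ABABABAB" * 128
compressed = zlib.compress(original)
assert compressed != original
assert zlib.decompress(compressed) == original   # lossless round trip

# De-duplication leaves the data unchanged: copies 2-N are replaced
# by pointers to a single 'master' instance.
store = {}      # content -> master id (the physical copies)
pointers = []   # one entry per logical copy written

def dedup_write(data: bytes) -> int:
    """Store data once; identical later writes just get a pointer."""
    master_id = store.setdefault(data, len(store))
    pointers.append(master_id)
    return master_id

for _ in range(5):                      # five identical logical writes...
    dedup_write(b"nightly backup block")

print(len(store), len(pointers))        # 1 physical instance, 5 pointers
```

Note that the de-duplicated bytes are still readable as-is, while the compressed bytes must be decompressed first; that difference is exactly why the two techniques have different performance profiles.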
Traditional data de-duplication technologies, however, are generally unsuitable for online or primary storage applications, because the overhead of the algorithms required to de-duplicate data unacceptably elongates response times. As an example, popular data de-duplication solutions such as those from Data Domain, ProtecTIER (Diligent/IBM), FalconStor and EMC/Avamar are not used to reduce the capacity of online storage.
There are three primary approaches to optimizing online storage,
reducing capacity requirements and improving overall storage
efficiencies. Generally, Wikibon refers to these in the broad category
of on-line or primary data compression, although the industry will often
use terms like de-duplication (e.g. NetApp A-SIS) and single
instancing. These data reduction technologies are illustrated by the
following types of solutions:
NetApp A-SIS and EMC Celerra which employ either “data de-duplication light” or single-instance technology embedded into the storage array;
Each of these approaches has certain benefits and drawbacks. The obvious benefit is reduced storage costs. However, each solution places another technology layer in the network and increases complexity.
Array-based data reduction
Array-based data reduction technologies such as A-SIS operate in-line as data is written, reducing primary storage capacity. The de-duplication feature of WAFL (NetApp’s Write Anywhere File Layout) identifies duplicates of a 4K block at write time: a weak 32-bit digital signature of the 4K block is created and placed in a signature file in the metadata, and candidate matches are then compared bit-by-bit to ensure that there is no hash collision. The work of identifying the duplicates is similar to the snap technology and is done in the background if controller resources are sufficient. The default is once every 24 hours and whenever the percentage of changed data reaches 20%.
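A toy version of that write-time check, sketched in Python (the names and structure here are my own illustration, not NetApp's actual implementation): a weak 32-bit signature narrows down the candidates cheaply, and a full byte-by-byte comparison confirms each match, so a hash collision can never corrupt data.

```python
import zlib

BLOCK = 4096  # A-SIS deduplicates fixed 4K blocks

signatures = {}  # weak 32-bit signature -> list of (block_id, data)
blocks = {}      # logical block id -> id of the block holding the data

def write_block(block_id: int, data: bytes) -> None:
    data = data.ljust(BLOCK, b"\0")[:BLOCK]
    sig = zlib.crc32(data)                    # weak 32-bit signature
    for master_id, master in signatures.get(sig, []):
        if master == data:                    # bit-by-bit check: no collisions
            blocks[block_id] = master_id      # duplicate -> reference only
            return
    signatures.setdefault(sig, []).append((block_id, data))
    blocks[block_id] = block_id               # first instance keeps the data

write_block(0, b"A" * BLOCK)
write_block(1, b"A" * BLOCK)   # identical: stored as a pointer to block 0
write_block(2, b"B" * BLOCK)   # new content: stored in full
print(blocks)                  # {0: 0, 1: 0, 2: 2}
```

The fixed 4K block size is also why co-residency matters: two identical blocks can only be matched if they pass through the same signature table, which in A-SIS means the same flex-volume.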
In addition, there are three main disadvantages of an A-SIS solution, including:
With A-SIS, de-duplication can only occur within a single flex-volume (not a traditional volume), meaning candidate blocks must be co-resident within the same volume to be eligible for comparison. The de-duplication is based on 4K fixed blocks, rather than the variable block size of (say) IBM/Diligent. This limits the de-duplication potential.
There is a complicated set of constraints when A-SIS is used together with different snaps, depending on the level of software. Snaps made before de-duplication will overrule de-duplication candidacy in order to maintain data integrity. This limits the space-saving potential of de-dupe. Specifically, NetApp's de-dupe is not cumulative with space-efficient snapshots.
The performance overhead of de-duplication as described above means that A-SIS should not be applied to a highly utilized controller (where the most benefit is likely to be achieved);
There is a metadata overhead (up to 6%);
To exploit this feature, users are locked into NetApp storage.
IT Managers should note that A-SIS is included as a no-charge
standard offering within NetApp's Nearline component of ONTAP, the
company's storage OS.
Host-managed offline data compression solutions
Ocarina is an example of a host-managed data reduction offering, or what it calls 'split-path.' It consists of an offline process that reads files through an appliance, compresses those files and writes them back to disk. When a file is requested, another appliance re-hydrates the data and delivers it to the application. The advantage of this approach is much higher levels of compression, because the process is offline and can use more robust algorithms. A reasonable planning assumption is that reduction ratios will range from 3-6:1, and sometimes higher for initial ingestion and read-only Web environments. However, because of the need to re-hydrate when new data is written, classical production environments may see lower ratios.
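A rough sketch of that split-path flow in Python (my own illustration of the pattern, not Ocarina's actual design) makes the trade-off visible: reads are re-hydrated transparently, but a rewritten file lands back on disk uncompressed until the next offline pass.

```python
import zlib

disk = {}  # path -> (is_compressed, stored bytes)

def write_file(path: str, data: bytes) -> None:
    """New or modified files land on disk uncompressed."""
    disk[path] = (False, data)

def offline_compress(path: str) -> None:
    """Batch pass: offline, so slower, more robust algorithms are affordable."""
    is_compressed, data = disk[path]
    if not is_compressed:
        disk[path] = (True, zlib.compress(data, level=9))

def read_file(path: str) -> bytes:
    """Reads pass through the appliance, which re-hydrates on the fly."""
    is_compressed, data = disk[path]
    return zlib.decompress(data) if is_compressed else data

write_file("report.txt", b"quarterly numbers " * 500)
offline_compress("report.txt")            # capacity savings appear here
assert read_file("report.txt") == b"quarterly numbers " * 500

write_file("report.txt", b"updated numbers " * 500)
print(disk["report.txt"][0])              # False: savings gone until next pass
```

This is why update-heavy workloads see lower effective ratios: every rewrite temporarily undoes the savings until the batch process runs again.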
In the case of Ocarina, the company has developed proprietary
algorithms that can improve reduction ratios on many existing file types
(e.g. jpeg, pdf, mpeg, etc), which is unique in the industry.
The main drawbacks of host-managed data reduction solutions are:
The expense of the solution is not insignificant due to
appliance and server costs needed to perform compression. In
infrequently accessed, read-only or write-light environments, these
costs will be justified.
To achieve these benefits, all files must be ingested, which is
a slow process. Picking the right use cases will minimize this issue.
After a file is read and modified, it is written back to disk uncompressed. To achieve savings, files must be re-compressed, again limiting use cases to infrequently accessed files.
Ocarina currently supports only files, unlike NetApp A-SIS, which supports both file- and block-based storage. However, Ocarina's implementation offers several advantages over A-SIS.
The solution is not highly scalable, because the processes related to backup, re-hydration and data movement are complicated.
On balance, solutions such as Ocarina are highly suitable and
cost-effective for infrequently accessed data and read-intensive
applications. High update environments should be avoided.
In-line data compression
IBM Real-time Compression offers in-line data compression whereby a device sits between servers and the storage network (see Shopzilla's architecture). Wikibon members indicate a compression ratio of 1.5-2:1 is a reasonable rule-of-thumb.
The main advantage of the IBM Real-time Compression approach is
very low latency (i.e. microseconds) and improved performance. Storage
performance is improved because compression occurs before data hits the
storage network. As a result, all data in the storage network is
compressed, meaning less data is sent through the SAN, cache, internal
array, and disk devices, minimizing resource requirements and backup
windows by 40% or more, according to Wikibon estimates.
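As a sketch of that placement (purely illustrative; the real appliance is hardware in the data path, not a Python function), compressing in the write path means everything downstream of the appliance only ever handles the compressed bytes.

```python
import zlib

san_traffic = 0   # bytes actually sent through the SAN

def inline_write(lun: dict, key: str, data: bytes) -> None:
    """Compress before the SAN: network, cache, array and disk
    all see only the compressed payload."""
    global san_traffic
    payload = zlib.compress(data, level=1)   # fast level: latency matters in-line
    san_traffic += len(payload)
    lun[key] = payload

def inline_read(lun: dict, key: str) -> bytes:
    return zlib.decompress(lun[key])

lun = {}
raw = b"log line: status=OK\n" * 1000
inline_write(lun, "block0", raw)
assert inline_read(lun, "block0") == raw
print(san_traffic < len(raw))   # True: less data crosses the SAN
```

Because less data crosses the SAN and lands on disk, backup jobs also move fewer bytes, which is where the reduced backup windows come from.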
There are two main drawbacks of the IBM Real-time Compression approach:
Costs of appliances and network re-design to exploit the compression devices. The Wikibon community estimates clear ROI will be realized in shops with greater than 30 TB;
Complexity of recovery: users need to plan for re-hydration of data when performing recovery of backed-up files (i.e. they need to have a Storwize engine or software present to recover from a data loss).
On balance, the advantage of an Ocarina or IBM Real-time Compression approach is that it can be applied to any file-based storage (i.e. heterogeneous devices). NetApp and other array-based solutions lock customers into a particular storage vendor but have certain advantages as well; for example, they are simpler to implement because they are built into the array.
An Ocarina approach is best applied in read-intensive environments, where it will achieve better reduction ratios due to its post-process/batch ingestion methodology. IBM Real-time Compression will achieve the highest levels of compression and ROI in general-purpose enterprise data centers of 30 TB or greater.
Action Item: On-line data reduction is rapidly coming to
mainstream storage devices in your neighborhood. Storage executives
should familiarize themselves with the various technologies in this
space and demand that storage vendors apply capacity optimization
techniques to control storage costs.
Backups are a necessity. They’re important in any computing environment, and you would be hard pressed to find anybody who would disagree with the criticality of having backup copies of their data. In the event that primary systems or data sets are unavailable, backups are designed to provide the assurance that significant amounts of work, time or money aren’t lost.
To protect the partners, customers and constituents of organizations from risks associated with potential data loss, the U.S. federal government has established various compliance requirements that must be met and maintained. In addition to general business-compliance requirements, many industries have additional regulations that must be met. Examples include Sarbanes-Oxley Act of 2002 (SOX), Payment Card Industry Data Security Standard (PCI DSS), the U.S. Health Insurance Portability and Accountability Act of 1996 (HIPAA), Gramm-Leach-Bliley Act (GLBA), and the Federal Information Security Management Act (FISMA); it’s easy to see why compliance is often referred to as regulatory alphabet soup (which is not far off from the storage industry, I would add).
Depending on the industry, the mandated data-retention timeframe can vary from as few as seven years to as many as 100 years. At the upper end of that spectrum, a significant amount of infrastructure investment and planning is necessary. Unfortunately, systems complexity becomes a byproduct of trying to solve these challenges, and that complexity evolves over time until it becomes unmanageable.
Just as the specific requirements for these regulations vary, so do the consequences of being non-compliant, which is often discovered during periodic industry audits or following a breach. Failure to meet compliance requirements could result in warnings or fines, and in extreme cases, termination of operations and prison time. The trouble is: compliance testing can be difficult to do, and it can come down to having confidence in whether or not systems will perform adequately under trial.
IBM Real-time Compression appliances reduce storage capacity utilization by up to 80% without performance degradation. They increase the capacity of existing storage infrastructure, helping organizations meet the demands of rapid data growth while also enhancing storage performance and utilization. The result is unprecedented cost savings and ROI, along with operational and environmental benefits.
The IBM Real-time Compression appliances address data optimization on primary storage, so your capacity is optimized across all tiers of storage. The IBM Real-time Compression Appliance STN6500 and STN6800 align with your existing storage networking configuration for easy installation. The appliances install transparently in front of your existing NAS storage and, through patented real-time compression, reduce the size of every file created.
A quick summary of the latest announcements by Nick Harris
In the cover story this month, Lee Cleveland, Distinguished Engineer, Power Systems direct-attach storage, and Andy Walls, Distinguished Engineer and chief hardware architect for DS8000 and solid-state drives (SSDs), sat down to talk about all of the new storage technologies IBM has been releasing lately. What I didn’t have room for in the article was a nice summary of the technologies that can help you improve access, manage growth, protect data, reduce costs or reduce complexity. Whatever your goals, IBM has an integrated storage option for every need.
Here are the quick highlights of the
latest storage announcements:
New advanced software functions
New easy-to-use, Web-based GUI
RAID and enclosure RAS services and diagnostics
Additional host, controller and ISV interoperability
Integration with IBM Systems Director
Enhancements to Tivoli Storage Productivity Center (TPC), FlashCopy Manager
(FCM) and Tivoli Storage Manager (TSM) support
Proven IBM software functionalities
Easy Tier (dynamic HDD/SSD management)
RAID 0, 1, 5, 6, 10
Storage virtualization (local and external disks)
Non-disruptive data migration
Global and Metro Mirror
FlashCopy up to 256 copies of each volume
IBM Storwize Rapid Application
Runs on: AIX 7.1-5.3, IBM i 7.1-6.1
(with VIOS), Red Hat and SUSE Linux, z/VSE, Microsoft Windows, Mac OS X
IBM System Storage TS7610 ProtecTIER Deduplication Appliance Express
The TS7610 is a powerful new addition to the IBM ProtecTIER solution set, bringing the reliability and performance benefits of disk-based data protection to mid-sized businesses that need to ensure their backups complete successfully and on time. The TS7610 adds inline data deduplication, which can squeeze up to 25 TB or more into a single terabyte of storage. The TS7610 also reduces costs (such as downtime and time spent managing and supporting systems) by up to 45% over standard non-deduplicated virtual tape library systems.
Cisco’s apparently going to try to simplify its sales, services and engineering organizations in the next 120 days
Faced with a nasty loss of credibility, a string of poor financial results, shrinking market share in its core business, an unwieldy and alienating bureaucracy blamed for the top executive exodus it has been experiencing, and a stock price that's plunged into the toilet, Cisco, once an economic bellwether, is promising to do more than simply kill off its once-popular Flip video camcorder business and lay off 550 people, an admission that its foray into the consumer segment had largely failed.
It said in a press release issued Thursday morning that it's moving to a "streamlined operating model" focused on five areas, not, apparently, the literally 30 different directions it's been going in, although it did say, come to think of it, something about "greater focus," so maybe it's not really cutting back.
These focus areas are, it said, "routing, switching, and services;
collaboration; data center virtualization and cloud; video; and
architectures for business transformation."
Nobody seems to know what that last one is, and the Wall Street Journal criticized Cisco for not being able to explain in plain English what it's doing, while Barron's complained that it needed a Kremlinologist to decrypt the jargon in the press release.
Anyway, Cisco's apparently going to try to simplify its sales, services and engineering organizations in the next 120 days, or by July 31, when its next fiscal year begins. Well, maybe not everything, it warned, but sales ought to be reorganized by then.
This streamlining seems to mean that:
Field operations will be organized into three geographic regions
for faster decision making and greater accountability: the Americas,
EMEA and Asia Pacific, Japan and Greater China still under sales chief
Services will follow key customer segments and delivery models still under its multi-tasking COO Gary Moore;
Engineering, still reporting to Moore, will now be led by
two-in-a-box Pankaj Patel and Padmasree Warrior and aside from the
company's five focus areas there will be a dedicated Emerging Business
Group under Marthin De Beer focused on "select early-phase businesses"
"with continued focus on integrating the Medianet architecture for video
across the company."
Lastly, it's going to "refine" - but apparently not dismantle - its hydra-headed, decision-inhibiting Council structure, blamed for frustrating and running off key talent, down to three "that reinforce consistent and globally aligned customer focus and speed to market across major areas of the business: Enterprise, Service Provider and Emerging Countries. These councils will serve to further strengthen the connection between strategy and execution across functional groups. Resource allocation and profitability targets will move to the sales and engineering leadership teams which will have accountability and direct responsibility for business results."
It's unclear whether any of this means layoffs.
Cisco piped in a quote credited to Moore, saying: "Cisco is focused on making a series of changes throughout the next quarter and as we enter the new fiscal year that will make it easier to work for and with Cisco, as we focus our portfolio, simplify operations and manage expenses. Our five company priorities are for a reason - they are the five drivers of the future of the network, and they define what our customers know Cisco is uniquely able to provide for their business success. The new operating model will enable Cisco to execute on the significant market opportunities of the network and empower our sales, service and engineering teams."