Last week, IBM announced a new SmartCloud Enterprise Trial. This is an exciting option for customers and partners interested in trying IBM’s IaaS and PaaS offerings before they buy.
Under the trial, which is for new customers only, users create an account, build images, and use the cloud for up to 60 days at no charge. At the end of the second billing cycle, users’ trial accounts automatically convert to fully featured SmartCloud Enterprise accounts. Their images, data, and settings all remain intact.
In years past, IBM would offer limited-time, 90-day trials, usually in the spring and fall. The 90 days were fixed to the calendar, so developers had to be watching for the promotion in order to take advantage of the full 90 days. It was easy to enroll, but feedback was clear: clients and partners wanted an ongoing try-and-buy option. The SmartCloud Enterprise Trial addresses this longstanding requirement.
At the same time, many of our partners have taken advantage of the developerWorks Trial, which has been available since May 2012. It is a no-charge, 90-day offering for individual developers. The 90-day window begins when they create an account. At the end of the 90 days, their assets are purged from the system and there is no option to migrate the images to a permanent account. It truly is a “sandbox” for developers.
Both the SmartCloud Enterprise Trial and the developerWorks Trial provide a subset of the services available on SmartCloud Enterprise. The SmartCloud Enterprise Trial is available immediately in major markets, and will roll out to other countries throughout June and July. The developerWorks Trial is available in all countries where SmartCloud Enterprise is sold.
So why have two trials? The key difference is the preservation of the images and data upon completion of the trial. The SmartCloud Enterprise Trial is designed for anyone who intends to be a long-term cloud user but would like to try the service before buying. Thus IBM maintains the images and data created during these trials. When the Trial account converts to a Pay As You Go (PAYG) account, all services and images become available for purchase, and the images created during the Trial persist.
Developers can use the developerWorks Trial for a variety of purposes that often don’t need to persist past 90 days. For example, developers building application patterns for PureApplication System can test their patterns through the SmartCloud Application Services.
In providing two trial options, IBM is addressing the needs of different user types. Please let us know how these options are working for you and your company. We love to share success stories.
In June, I attended three cloud events in 10 days: Cloud Expo East in New York City, Cloud Ecosystem in Frankfurt, Germany, and IBM’s ISV Executive Summit in Stuttgart, Germany. Prior to these events, I prepared to be inundated with questions about IBM’s intent to acquire SoftLayer. But recent events related to the PRISM scandal turned almost every conversation into a discussion about data privacy, security, and cloud computing.
The Europeans interpreted the events very differently from the Americans. In the US, the conversation focused on who was responsible and the whereabouts of Edward Snowden. In Europe, the conversation was about the outrage of learning that data—really any data, anywhere, anytime—is accessible by people with the right credentials. At least in my meetings and conversations, Americans were less focused on individual freedoms than the Europeans.
In Europe, there is strong consensus that the best way to control access is to maintain the data in country. Germany, with some of the strictest laws around data privacy, is viewed as a good location for any data center that supports the European Union. But sometimes, even when the data sits on a disk in a data center in a given country, the owner of that data center might employ administrators from other countries, such as India, to maintain that system on a regular basis. So while it’s important to know where the data sits, it’s equally important to know where the administrator sits.
More importantly, there are basic data security policies that apply regardless of where the data resides. Is the data encrypted? If so, at what level and who holds the encryption keys? These are the same questions organizations should be asking, whether the data is on the CEO’s laptop, in a secured data center, or somewhere in the cloud. Good data security policy transcends any technology infrastructure, even cloud computing.
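The key-custody question above can be made concrete with a small sketch. This is purely illustrative (the XOR “cipher” is a toy stand-in; a real system would use AES through a vetted library): the point is that when data is encrypted before it leaves the organization, the cloud provider only ever stores ciphertext and never holds the key.

```python
import secrets

def xor_stream(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'cipher' -- illustrative only; applying the same key
    a second time reverses the transformation."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The organization generates and keeps the key; the provider never sees it.
key = secrets.token_bytes(32)
record = b"quarterly revenue: confidential"

# This ciphertext is all the cloud provider ever stores.
ciphertext = xor_stream(record, key)

# Only the key holder can recover the original record.
assert xor_stream(ciphertext, key) == record
```

Whoever holds `key` controls access, regardless of which country the disk sits in; that is the policy question the paragraph above is asking.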
There are articles telling us why the Prism program portends the end of cloud computing, and there are articles telling us the very same program will lead to growth in cloud computing. And there are articles telling us that we shouldn’t worry about government spying anyway when we so freely give our personal information away to social media sites and online marketers.
Whether we’re talking about government sponsored surveillance programs, online marketing, or illegal hacking, the warning is the same: as individuals we need to be vigilant about who is maintaining our online data and how. And as technology vendors, we need to ensure that our policies and practices are well designed, transparent and audited.
In December 2012, IBM published the 2012 Tech Trends Report, an annual research study conducted jointly by IBM developerWorks and the IBM Center for Applied Insights. In this study, IBM surveyed more than 1,200 IT professionals and more than 700 students and academics from 13 countries. Not surprisingly, for both the IT and academic communities, four primary technologies are transforming business: social, mobile, analytics, and cloud.
This research dovetails with IBM’s CEO study, published in May 2012. In that study, CEOs reported for the first time that technology was the most important force affecting their organizations, ahead of people skills, market factors, and macroeconomic factors. This is significant, because when technology is top of mind for the world’s top CEOs, it indicates that we are in the midst of a major technology shift.
And cloud computing is fundamental to that technology shift. It provides a viable platform for the compute requirements of social, mobile, and analytics. All of these technology trends require fast response times, vast stores of data, and a highly elastic backbone of networks and servers. Not only can cloud deliver on the technology requirements, but it can also serve an important financial model: funding through operational expenses instead of capital expenses.
It is the combination of a technology shift and a financing shift that puts cloud computing at the forefront of CEOs’ minds, because it opens up new possibilities to reinvent business. We call the CEOs at the forefront of this technology shift pacesetters, and they are creating new business opportunities, moving into new markets, and driving higher efficiencies in their businesses. These pacesetters are treating cloud computing as a strategic opportunity, not a threat to the status quo.
Social, mobile, analytics, and cloud are each interesting in their own right, but when treated strategically and as a whole, reinventing business is not just a possibility but a concrete business plan. IBM is working with companies that are using cloud computing as the flexible platform for new applications that use analytics to comb through social media to more precisely target customers with the products and services they need. In this way, customers derive immediate value and never feel “spammed.” They are consuming this value through their mobile devices, and increasingly have little patience for more traditional channels.
At the same time, pacesetting CEOs are using cloud computing to drive higher efficiencies within their own businesses. Companies that are dabbling in cloud computing often presume that the only value of cloud is a lower cost of IT operations. But pacesetting organizations have learned that cloud computing allows for more nimble operations, faster time to market, and ultimately a way to expand the business.
Although the Tech Trends study reveals that we are witnessing an exciting shift in technology, it also exposes a looming skills gap at a worldwide level. IBM is ready to help with our expansive cloud computing resources right here on developerWorks. Please use and share our materials, and of course, let us know what else you need to make your business a pacesetter.
In high tech circles, the term “ecosystem” is used to maddening effect. Leaders toss the term around as though it is the answer to every business problem: Can’t build the solution your clients need? Just tell them you’ll deliver the extra capabilities through your ecosystem. Can’t reach all markets? Just tell your investors that you’ll get there through your ecosystem. Need to expand your sales force? Just add “the channel” to your ecosystem.
But what is an ecosystem, how do you know when you have one, and how do you ensure its ongoing success? The term ecosystem is borrowed from the biological sciences, where it refers to a collection of living and nonliving organisms that are “linked together through nutrient cycles and energy flows.” There are three important features of this definition when we transfer it to the high tech business: assessing living and nonliving organisms, understanding the difference between nutrient cycles and energy flows, and appreciating the whole ecosystem as separate from the sum of its parts.
These features are especially insightful when we apply the ecosystem to the industry disruption created by Cloud Computing. We’ll look at each feature separately, starting with living and nonliving organisms.
A Collection of Living and Nonliving Organisms
First, the ecosystem is a collection of living and nonliving organisms. In biology, the ecosystem includes both plants and rocks, or water and sand, for example. In high tech, the corollary is people and assets. When building ecosystems, it’s often easy to focus on either people or assets in a vacuum, an approach that will rarely succeed.
For example, several years ago, we were building an offering for the small business market. This was a new venture for IBM, since at the time we didn’t have offerings that reached below the midmarket and cloud computing wasn’t commercially viable. We had an appliance for small business, and we needed applications to ride on top of that platform. A very large ERP vendor had an application that they were building for the small business market and they were looking for a viable platform on which to deliver it.
The match seemed ideal. Both companies flew enthusiastic architects between locations to meet and build presentations that displayed the elegance of the combined solution. Marketing teams built plans on how to roll out the new offering. Executives agreed over dinner that the partnership would be fruitful for both sides.
Everyone was focused on the nonliving organisms—the platform and the application. But the project never got out the door because once we started looking at the living organisms—the sales teams—we realized that neither side had a channel that could reach our new target audience. We lacked the people who were most crucial in turning the asset into mutual profit.
At least as frequently, partnerships are formed when two companies realize great synergies between their teams. The companies might share a common mission or have similar organizational cultures. Often they have a competitor in common. Highly optimistic conversations about the boundless possibilities of a strong alliance reverberate up and down the organizational chains of both companies.
But without assets, these partnerships are what one pundit called “Barney Relationships.” Barney, the friendly purple dinosaur from a long running children’s show, sings, “I love you, you love me….” While this is charming for kids, in business it’s all just talk. Partnerships, and ultimately ecosystems, need assets that drive revenue for all the parties involved.
An ecosystem starts with very basic building blocks: people and assets. In Part 2, we’ll look at how variations of those building blocks feed on and interact with each other.
In this three part series, we're looking at what the phrase "ecosystem" means and how it applies to the IT industry. The term ecosystem is borrowed from the biological sciences, where it refers to a collection of living and nonliving organisms that are linked together through nutrient cycles and energy flows. In Part 1, we applied the notion of living and nonliving organisms to a partner ecosystem. Next, let's focus on nutrient cycles and energy flows. In a partner ecosystem, these two systems for sustainability translate to two units of measure: one is money, and the other is influence. The two systems function separately, but in harmony with each other.
In a natural ecosystem, a nutrient cycle is a nice way of describing how the bigger creatures eat the smaller creatures. In partner ecosystems, the notion of the big devouring the small seems impolite at best and counter-productive at worst. In fact, the whole metaphor of an ecosystem arguably breaks down when a hierarchical nutrient cycle is applied to a network of partnerships.
But if we look at nutrient cycles as feeding systems, the metaphor becomes useful again. Even if the participants in an ecosystem are not devouring each other, they must find ways to feed each other. In the world of IT, money is the food, and if we can follow the money, we can watch the ecosystem in action.
In a traditional IT business model, with perpetual licenses and renewable maintenance streams, it’s easier to follow the money because “feeding times,” if you will, happen at regular intervals. Just as bears scavenge campgrounds at roughly the same time every night, renewal license models based on long-term capital expense investments result in regular feeding times. And the resulting revenue flows are relatively simple to track.
But in a world of subscription-based licensing funded by operational expense investments controlled by a number of Line of Business managers, following the money is like following a cardinal, watching it eat small amounts all day long from a number of feeders scattered over a wide range of locations, and often sharing the feeders with several other species.
Nutrient systems imply that the participants feed each other. And certainly it is true that in order to sustain the partner ecosystem, every participant needs to contribute to the sustainability of the whole. And every participant needs to derive benefit from the system. So the members don’t need to be devouring each other in order to feed off of each other.
Energy flows in the natural world correlate nicely to the flow of content and ideas between influencers in a partner ecosystem. All participants are influencers to some degree, but analysts, online communities, and pundits are primarily focused on tracking and documenting the exchange of ideas between participants. These people and organizations facilitate the flow of content throughout the ecosystem. How these participants get paid is independent of the flow of money that we follow in the “nutrient system” previously described.
Although energy ebbs and flows naturally, one of the challenges in maintaining a vibrant ecosystem is sustaining a high level of content flow. It’s easy to create buzz with a product launch or large event, but it’s far more challenging to sustain interest over time. To keep everyone engaged, the content exchange has to be fresh and relevant. Developer communities have demonstrated how to sustain interest through constant innovation, candid online discussions, and a willingness to share content.
As the IT community moves from a traditional, in-house delivery of applications to an open, cloud-based delivery of integrated solutions, the ecosystems that support these communities must expand and transform. Companies like IBM need to move away from “feeding bears” and learn how to “feed birds.”
August 12 was the 30th anniversary of the PC. While some of our colleagues won't remember the Charlie Chaplin ads and the sleek design of those first PCs, the machine was revolutionary for anyone who had been working with "real" computers. (And yes, boys and girls, the first PCs seemed sleek to us!) Pundits and analysts predicted that these machines would change the world. Curmudgeons grumbled that the PC was just a flash in the pan, that PCs could never do the work of the multi-million-dollar systems running on raised floors.
Today we find ourselves in a revolution with similar commentary: pundits are talking about how cloud coupled with mobility changes everything, while so-called curmudgeons in the data center caution against blind faith in an unproven technology. The realities of this emerging technology will play themselves out over the next few years, but one thing does seem certain: cloud computing is taking us out of the PC era.
Cloud computing is inextricably linked to the mobile devices that untether us from the ubiquitous laptop. This new computing environment is not simply a replacement technology for PCs. It represents a new attitude toward technology, where the humans--with all our propensity for social interaction and non-linear thinking--are driving technology, rather than the other way around. Applications are linked and mashed and delivered to suit the needs of individuals and groups, whereas previously people had to adjust their behavior in order to access the application.
Both in Mark Dean's article and at the cloud conference I attended in June, experts have been calling this the "post-PC era." But that's only because we haven't thought of a better name. This era of computing is about more than just cloud or mobility technologies, and it's more comprehensive than social networking. It's about a people-centric, on-demand approach to computing. As with other eras of computing, a better name than post-PC will eventually surface. I, for one, would not deign to name an entire era, seeing as how I have a hard time naming cats. But regardless of what we call this, it is a great time to be in the computing industry.
And finally, the system needs to be viewed as a dynamic, interconnected network of constituents. Although it doesn’t need to be difficult to decipher, an ecosystem is necessarily more complex than a simple partnership.
Partnerships are one-to-one relationships. They can be as structured as a strategic alliance, where the two parties make formal revenue commitments and drive sales jointly, or as casual as two sales reps meeting for coffee occasionally. But it’s driven by two organizations with common objectives and a belief that mutual revenue will result from interacting with each other.
Effective partnerships happen when the two companies are highly aligned with each other. Corporate cultures, organizational styles, and compensation programs are complementary, making it easy to work with each other. Distributors and resellers have built their businesses around aligning to the vendors with whom they partner, epitomizing the approach to partnering.
Partner programs are one-to-many relationships. Vendors build programs to provide benefits to groups of partners. Programs help categorize partners by how they specialize and by their level of commitment to the vendor. Partners derive value through financial benefit, influence with the vendor, and non-monetary support such as technical enablement.
When a vendor develops a set of partner programs, it’s easy to mistake that collection of programs for an ecosystem. But unless the constituents in the programs have independent and systematic access to each other, there is no ecosystem.
A true ecosystem is a set of many-to-many relationships between partners in the “feeding system” and the influencers in the “energy flows.” As described in Part 2, the flow of money and the exchange of ideas are critical to sustaining a vibrant ecosystem.
And as the ecosystem matures, there is less need for a single orchestrator to direct the activities of each constituent. Two parties may work together on a project or deal whose benefits reverberate through the ecosystem. This is fundamentally what differentiates an ecosystem from a set of partner programs.
Open source computing models provide the basis for a mature ecosystem. Open systems have long been heralded as the antidote to vendor lock-in, but in reality, the true value of open computing is the fluidity of ideas, and increasingly the opportunity for new economic models.
For example, when a vendor makes a service available as an API, there are several ways to monetize that service, either through direct sales, revenue sharing, or advertising. Highly mature online communities, fueled by social business, create the platform for this exchange that benefits the entire ecosystem.
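Whichever monetization model applies, it starts with knowing who called which service, and how often. A minimal sketch of that kind of per-key metering (the class name, pricing model, and method names are illustrative assumptions, not any vendor's actual API) might look like this:

```python
from collections import Counter

class ApiMeter:
    """Toy per-key usage meter -- the bookkeeping that underpins
    direct-sale, revenue-share, or ad-funded API monetization."""

    def __init__(self, price_per_call: float = 0.01):
        self.price_per_call = price_per_call
        self.calls = Counter()  # (api_key, service) -> call count

    def record(self, api_key: str, service: str) -> None:
        """Log one call made by a partner's key against a service."""
        self.calls[(api_key, service)] += 1

    def invoice(self, api_key: str) -> float:
        """Total owed by one key holder, across all services."""
        n = sum(count for (key, _), count in self.calls.items() if key == api_key)
        return n * self.price_per_call

meter = ApiMeter(price_per_call=0.5)
meter.record("partner-a", "geocode")
meter.record("partner-a", "translate")
meter.record("partner-b", "geocode")
print(meter.invoice("partner-a"))  # 1.0
```

Under a revenue-sharing model, the same call counts would be split between the API vendor and the partner rather than billed directly; the metering is identical either way.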
Summarizing all three parts

In summary, any organization looking to build an ecosystem needs to start with a three-pronged plan:

1. Define the whole and the sum of the parts: Who are the constituents? How do they interact?

2. Identify the people and assets required from each constituent: Who is creating the assets? What are the assets? Who is doing the selling?

3. Map the money and content: How does the money flow through the ecosystem? What kind of content needs to flow through the ecosystem, and what channels will the content require?
Answering these questions is a lot harder than it sounds. But as the industry moves into this next paradigm of computing, getting these answers right will be critical to every vendor’s success.
By creating a cognitive computing system that could play Jeopardy, IBM charmed the world with Watson, the system that beat two top champions of the quiz show. Since that famous game in February 2011, IBM has engaged Watson in more practical pursuits, including solutions for the healthcare and finance industries.
But we’re also using Watson to solve our own problems. In an organization as large as IBM, one of the biggest challenges is knowledge sharing. In IBM, it is generally true that for any technology question, there is at least one person in the company with the correct answer. But finding that person is too often a challenge in itself.
Over the years, there have been several solutions to this knowledge-sharing problem. The proliferation of wikis and online communities is the most recent attempt to provide a repository of knowledge and expertise. While these tools are immensely helpful and go a very long way toward solving the knowledge-sharing problem, users still struggle to navigate this vast data source. Successful navigation requires some prior knowledge of who the experts are, and of their ontology, or how they logically structure and organize their knowledge.
So we know we have experts, data, and answers to just about every question. But we can’t find a tool to help us sift through that vast store of information.
Watson excels at culling through petabytes of information and deriving meaning from disparate sources. Watson can associate people with areas of expertise, and can place information in a historical context. Watson, therefore, is the ideal cognitive system for IBMers trying to solve problems for clients.
But Watson requires care and feeding to get to that seemingly magical state of expertise. The data store is built by submitting thousands of questions and providing links to the correct answers. It also helps to give Watson a data corpus for a subject area, even something as broad as cloud computing.
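To make the question-plus-answer-links idea concrete, here is a hypothetical sketch of what one training pair might look like. The field names, function, and URL are illustrative assumptions for this article, not Watson's actual ingestion format:

```python
def make_training_pair(question: str, answer_sources: list, topic: str) -> dict:
    """Bundle a question with links to its correct answers, ready to be
    added to a subject-area corpus such as 'cloud computing'.
    (Hypothetical record shape -- not Watson's real format.)"""
    if not question.strip() or not answer_sources:
        raise ValueError("a pair needs both a question and at least one answer link")
    return {
        "question": question.strip(),
        "answer_sources": list(answer_sources),
        "topic": topic,
    }

# A tiny corpus of one pair, using a placeholder link.
corpus = [
    make_training_pair(
        "What is cloud computing?",
        ["https://example.com/what-is-cloud-computing"],
        "cloud computing",
    ),
]
```

The value comes from volume: thousands of such pairs, plus the source corpus itself, are what let the system associate questions with answers and people with expertise.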
Imagine, if you will, that you had the opportunity to submit questions and answers for Watson about cloud computing. What questions would you want Watson to be able to answer? There are obvious questions, like “What is cloud computing?” and there are thousands of questions related to the technical depths below the umbrella phrase of “cloud computing.” But what about less obvious questions? For example, is there agreement on who first coined the term “cloud computing”?
In the next few weeks, I’ll be contributing to the database of cloud computing questions for Watson. If you have questions you think I should include, feel free to post them in the comments.
Today is an exciting day for IBM's Cloud Computing initiative. Hopefully, you've confirmed your registration for one of our 40 partner events that we're hosting worldwide. Twenty of those events will take place in one 24-hour period, starting at 9 a.m. on the East Coast of the US. The rest will be rolling out over the next three weeks. Just in case you missed the invitation, it's not too late to register:
In all of these events, we'll be introducing you to a great new way of describing our cloud computing strategy to your clients: the cloud adoption patterns. It's widely accepted that IBM's breadth of cloud offerings is unmatched in the industry, but it's not always easy to make sense of it all for your clients. With these adoption patterns, you now have a way to quickly home in on what your clients need most and then identify a project that delivers results for them and revenue for you.
Based on more than 2,000 cloud computing engagements with clients, IBM has determined that when clients approach cloud computing, they typically adopt it in one of four patterns:

Cloud Enabled Data Center: Service management, automation, provisioning, and self-service capabilities for private and hybrid clouds.

Cloud Platform Services: An integrated stack of middleware, optimized for automated deployment and management of heterogeneous workloads, that dynamically adjusts.

IT as a Service: Capabilities provided to consumers for using a provider’s applications running on a cloud infrastructure.

Cloud Service Provider: A reliable, highly secure, and scalable platform for creating, managing, and monetizing cloud services.
When a client sees these four adoption patterns, it's a straightforward discussion to determine which pattern most closely matches their business goals. In this way, we avoid a technology-led discussion and instead focus on the client's business needs.
Once we establish the most logical adoption pattern, the next step is to identify a project to get started. Under each of the adoption patterns, we have created 3-7 projects that a client can undertake for cloud computing. The projects are discrete and tangible, and designed to deliver near-term results for the client. It is only after we identify a project that we start talking about offerings and products from IBM. In this way, we can leverage the breadth and depth of our offerings without overwhelming a client.
As part of our cloud launch today, we've opened our Cloud Computing Virtual Briefing Center. Please visit the center for video presentations on the client adoption patterns, webcasts on the projects, and a host of papers, brochures, and podcasts that explain all aspects of the products and services that we're launching today.

I look forward to working with all of you in delivering cloud solutions to our clients.
Two years ago, IBM launched the Cloud Specialty. From the beginning, we declared the Specialty—like all IBM Specialties—to be an elite program for partners interested in a deeper investment with IBM. The press and analyst communities received the program with enthusiasm.
The Specialty is built around five partner models for cloud. These models address all partner types, such as ISVs, SIs, VADs, and VARs, and focus on what partners want to do with IBM and cloud. Rather than create separate programs for different types of partners, we created a single program with multiple paths, thus allowing each partner to find the right fit for their business’s cloud strategy.
Like any IBM Specialty, partners demonstrate skills, revenue, and references related to a particular technology, and in exchange, IBM provides marketing benefits. Because the program is elite, the requirements for partners are meant to be challenging, and the benefits from IBM are meant to be generous.
Two years ago, private cloud was in full swing and public cloud—at least for large enterprise clients—was in the early adopter stage. So not surprisingly, the Cloud Builder path, which targets private cloud builders, was immediately popular. For the most part, Cloud Builders came out of traditional VADs and VARs and were accustomed to the requirements of a specialty. We assumed that paths designed for public cloud partners would soon follow in popularity and adoption.
But public cloud partners, by and large, have a different heritage. These partners are traditional ISVs and SIs, and many of them resisted the rigors of the requirements. Many found the certification tests onerous and irrelevant, and still others struggled to publicly identify clients who saw their cloud implementations as a competitive advantage. And too often, they concluded that our generous benefits were not worth the cost of qualification. As a result, the application provider and technology provider paths never took off the way we expected.
Meanwhile, entry-level programs built around the Ready For concept have flourished with these same partners. In a Ready For program, the partner documents their solution in production, and in return, IBM provides a badge, or mark, and makes the solution available in a catalog. The Ready For SmartCloud Services program in particular has had very broad appeal.
Our partners have voted with their keyboards, and IBM is responding. We are in the process of revising our Cloud Specialty to focus on Cloud Builders and MSPs. The MSP Initiative is a great place to start working with IBM in a variety of geos and industries. Partners interested in working with IBM as a SaaS provider should pursue the Ready For SmartCloud Services program.
Most importantly, these program changes do not diminish the viability of the five partner paths. We will continue to use these paths as the basis for our discussions about what we can do together and how we can jointly drive success for our clients.
How have the five paths helped you? Please let us know how our programs and partner models are working for you.