Modified by johnswan
Containers have become a critical tool in the modern developer's toolbox, and they are particularly useful in hybrid cloud environments, an area where IBM excels. At DockerCon 2015 today, IBM announced that it will be adding production support for IBM Containers to Bluemix, which effectively brings Cloud Foundry and Docker together for developers into one native platform. If you've been waiting for an enterprise-class, Docker-based container service, your wait is over.
IBM Containers is a full runtime and management environment that enables developers to host Docker containers in the cloud. By incorporating container services into Bluemix, IBM has created a more efficient environment that enables faster integration and access to analytics, big data, and security services.
Key features include:
- Integrated DevOps functionality such as log analytics, performance monitoring, image build services, and delivery pipeline
- Elastic scaling groups with auto recovery
- Zero downtime deployments utilizing Active Deploy
- Full networking with private overlays, load balancing, and automated routing
- Integration with Cloud Foundry services
- Support for persistent storage volumes
- Automated container image security and vulnerability scanning with Vulnerability Advisor
We can hardly contain ourselves
All joking aside, this is a huge announcement that merits a deep dive, and there's no one better to do that than IBM Fellow Jason McGee, whose blog post, "Containers: The answer to modern development demands," offers a detailed overview of today's announcement.
Ready to get your hands dirty using containers now? These tutorials from developerWorks will give you some excellent experience:
More are on the way, so keep your eyes peeled and your browser bookmarked to the developerWorks Bluemix page.
Additionally, check out the replay of this webcast, which demonstrates how the service works and details how you can get started with IBM Containers on Bluemix. You'll learn how the IBM Container service lets developers launch Docker containers directly onto the cloud with Docker-native features, standardized interfaces, and orchestration services. And you'll see how this powerful development solution can help you create and manage a new generation of portable distributed applications that have a dynamic lifecycle and can scale to run in concert anywhere from the developer’s laptop to hundreds of hosts in the cloud.
Email. Social apps. Mobile devices. Collaboration technologies like these are now an integral part of our lives, fostering connections like never before and putting a universe of information at our fingertips. It's truly empowering ... and for many of us, truly overwhelming.
Well, IBM wants to help, and toward that end we're rolling out IBM Verse -- a transformative, cloud-based solution that's optimized for web and mobile, enabling users to focus on top priorities, find and share content, and take control of action items. IBM Verse redefines how we do work by:
- Understanding your needs, working style, and priorities -- and anticipating what you need and when you need it.
- Providing less clutter and more clarity with an interface that shows only what you need, so you can make quicker, smarter decisions.
- Enabling a shift from “me” to “we” through intelligent, secure, and engaging social apps that foster real-time understanding, cooperation, and communication.
Naturally, we're eager to tell the world about IBM Verse, so here's what we're doing this week to build awareness:
On Tuesday, 18 November -- launch day -- we're hosting a live event in New York City, "A New Way To Work," where we'll officially launch IBM Verse, and an esteemed panel of experts will share their visions for the future of work. Can't make it to NYC? Check out the livestream for the event.
On Thursday, 20 November, catch our videocast, "Extreme mail makeover: Re-imagined for the future of work," to hear IBM and industry experts explore how the IBM Design Studio developed an entirely new experience from the ground up, why information from our social networks is crucial for collaboration and innovation, and why email must evolve to accommodate it.
Want more details? Visit the IBM Verse overview page, or get more background on the IBM Design blog.
We look forward to connecting with you!
See how the controller runs the steps needed to instantiate and configure the machines on SoftLayer and store their information in its database
In a workload-automation-as-a-service design, the controller is the machine used to provision the tenants for all the offering instances
in the environment. The controller handles:
- Deploying new-offering instance machines when needed.
- Receiving and managing tenant creation, deletion, suspension, and resume requests.
- Persisting tenant-related information on a relational database.
- Receiving and managing agent download requests; in other words, the controller creates the download package specifically for the tenant
and then sends the package to the client.
- Sending license contents to users who want to download the agents.
For this scenario, the experts describe the sequence of operations for a controller that is the core automation component of the IBM
Workload Automation SaaS offering and is based on a Tivoli Workload Scheduler installation. The controller is composed of two servers
configured in a high-availability setup. It implements specific web services to receive provisioning and deprovisioning requests, keeps
track of user subscriptions in a DB2 database, and starts provisioning flows, implemented as workflows automated by Tivoli Workload
Scheduler, that provision and configure new virtual machines to create new offering instances and to allocate and deallocate tenancies
for the subscriptions.
In this setup, the controller runs all the steps needed to instantiate the machines on SoftLayer, configure them (in terms of network
configuration, security compliance, and installed appliances), and store their information on its database.
When a new tenant request arrives, the controller finds the best available offering instance. Two policies can currently be configured on
the controller to make this choice:
- Balance: Distributes load among all available offering instances.
- Concentrate: Fills partially used offering instances first, avoiding empty ones.
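To make the two policies concrete, here's a minimal Python sketch of how such a selection might work. This is an illustration only, not IBM's actual implementation; the instance structure and function name are invented for the example.

```python
def choose_instance(instances, policy="balance"):
    """Pick an offering instance; each instance is a dict like
    {"name": ..., "tenants": <current count>, "capacity": <max tenants>}."""
    candidates = [i for i in instances if i["tenants"] < i["capacity"]]
    if not candidates:
        raise RuntimeError("no offering instance has free capacity")

    def load(i):
        return i["tenants"] / i["capacity"]

    if policy == "balance":
        # Spread load evenly: pick the least-loaded instance.
        return min(candidates, key=load)
    if policy == "concentrate":
        # Fill partially used instances first; fall back to empty ones.
        used = [i for i in candidates if i["tenants"] > 0]
        return max(used or candidates, key=load)
    raise ValueError("unknown policy: " + policy)
```

For example, with one empty, one half-full, and one nearly full instance, "balance" selects the empty one while "concentrate" selects the nearly full one.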
The sequence of operations performed by the controller is:
1. The controller receives an HTTP request.
2. The request is analyzed to guarantee syntactic and semantic conformance.
3. The internal database is updated to persist information related to the request.
4. A job stream is submitted to perform the low-level operations needed by the specific request.
5. When the job stream completes, an HTTP request is made against a specific “notification” servlet; this request contains all the
information related to the status of the performed operation.
6. The notification request is analyzed to guarantee syntactic and semantic conformance.
7. The internal database is updated to change the status of the request according to the status of the low-level operations.
8. A response is sent to the caller from step 1 to indicate that the operation completed.
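The steps above can be sketched end to end. The following Python stand-in is purely illustrative: the database is a dict, the job stream and notification servlet are plain functions, and all names are hypothetical.

```python
db = {}  # request_id -> status; stands in for the relational database

def validate(request):
    # Conformance checks on the incoming request.
    if "id" not in request or "action" not in request:
        raise ValueError("malformed request")

def submit_job_stream(request):
    # The real controller submits a Tivoli Workload Scheduler job stream;
    # here we just pretend the low-level operations succeeded.
    return {"id": request["id"], "result": "SUCCEEDED"}

def notification_servlet(notification):
    # The completed job stream reports its status, which is validated
    # and persisted.
    if "id" not in notification or "result" not in notification:
        raise ValueError("malformed notification")
    db[notification["id"]] = notification["result"]

def handle_request(request):
    validate(request)                          # receive and validate
    db[request["id"]] = "PENDING"              # persist the request
    notification = submit_job_stream(request)  # run low-level operations
    notification_servlet(notification)         # record the final status
    return db[request["id"]]                   # respond to the caller

status = handle_request({"id": "tenant-42", "action": "create"})
```

After the call, `status` holds the final state of the tenant request, mirroring the response the controller sends back to the original caller.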
Integration with SoftLayer
The IBM Workload Automation SaaS server infrastructure can leverage virtual machines running on SoftLayer; the agents that you
install in your environment communicate with the Workload Automation server over the Internet using HTTPS, so you don't need to
open an incoming port in your organization's firewall.
About IBM Workload Automation
IBM Workload Automation -- a combination of IBM Tivoli Workload Scheduler with the additional provisioning of cloud resources and
infrastructures -- enables a dynamic environment that lets you run unattended workloads and applications in the cloud and monitor
them via centralized, web-based portals. It provides scalability features and performance boosts to help eliminate idle time and improve
data throughput. It also allows for workload simulation, so you can build forecast models of workflows before your project reaches the
production stage.
Thank you to ...
I'd like to thank IBM's Ilaria Gorga, Luigi Presti, Francesca Ziantoni, Francesca Liliana Pasceri, Riccardo Pizzutilo, Domenico
Agostinacchio, Fabio Barillari, Franco Mossotto, and Alessandro Scotti -- the information for this post came from an article
proposal submitted by this talented group of IT professionals.
Cloud scenario for managing workloads in dynamic environments
Standardize your workload process through template reuse
Explore top-level details of IBM Workload Automation
Examine working details of IBM Workload Automation SaaS mechanisms
See how IBM Workload Automation interacts
with other workload tools
These resources will provide a basic understanding of where cloud and data technologies intersect
Databases, big data, and analytics -- regardless of the shape of your data resources environment, cloud infrastructure is designed to
contain, move, and use that data to make your development, administration, and business tasks more efficient.
Since the beginning of the cloud era, developerWorks has been illuminating the technical relationship between data and cloud. Here is a
sample of the knowledge we've supplied to make your work in the data cloud more successful:
- Processing large datasets: One of the best tools for large-scale dataset processing in the cloud is Hadoop. Follow
along as Martin Brown explains how to
overcome the challenges and maximize the advantages of a cloud-based Hadoop deployment.
- Database interoperability: Usable data can be found in any type of storage, so it is critical that all the data repositories you
use can communicate with one another. These two articles -- Creating tables, loading data and Handling queries, reporting business
intelligence -- demonstrate database interoperability between Big SQL and HBase.
- Integrating powerful analysis tools: In the example Using R with databases, you
see how to bring the strength of an efficient data-analysis programming language, R, to bear on data that's housed in relational databases.
- The role of security: Data access cannot be mentioned without considering the security of that data. The developerWorks security site is an excellent place to start to understand
how to provide data security. In this article on machine data analytics, you get a tour of how to deliver drop-in-place safety and security monitors,
combined with cloud-based data analytics, that make it possible to quickly deploy monitoring in areas without power.
- Hands-on tool and app building: Best of all, developerWorks offers lots of hands-on documentation to show you
how others have built the cloud data application or tool they needed; often, the apps were built in a cloud development environment such as IBM Bluemix. For example:
Visit the developerWorks big data and analytics site for more insight on
the relationship between data resources and the cloud environment.
You should try Bluemix
You should try SoftLayer
Join the Bluemix developers' community
Join the SoftLayer developers' community
Jerry Yuan offers up a blog full of Bluemix user tips
Alexei Karve explores several ways to eliminate virtual machine overcrowding on SoftLayer and explains how to find the
right balance between scale up and scale out
A longtime favorite cloud expert of mine, Alexei Karve (Twitter @aakarve) is a
Senior Software Engineer who has worked with various cloud technologies, including extensions to OpenStack Compute and Glance,
deployment on SoftLayer, and multiple virtual environment implementations (OpenVZ, Xen, KVM, VMware ESX, AIX LPARs). In a recent
post on the Thoughts on Cloud
blog, he describes the noisy neighbor problem, in which shared hosting and overcommitment increase the
density of an environment's larger virtual machines, which in turn enables a small number of applications to monopolize resources
(network bandwidth, CPU, RAM, and disk I/O). This can cause performance problems because disk capacity increases but read/write
throughput does not keep pace.
Karve proposes a joint solution between provider and client and explains some of the capabilities the provider (in this case, SoftLayer)
may offer that both parties can use to ease this resource overcrowding dilemma, including:
- Resizing for when your app encounters resource limits and doesn't scale out (add more nodes to a
system), causing you to scale up (add resources to a single node in a system). SoftLayer allows you to scale up, but it can be
costly and may require downtime. Karve discusses some of the OpenStack mechanisms that let you resize ephemeral storage (without
losing data) and move the VM to another hypervisor.
- Cloning for when you might need to change the architectural configuration supporting your application. For some
periods of time, your application may need more compute horsepower and less resource sharing, so the ability to capture custom
images and replicate them to instances in differently configured environments comes in handy. Karve highlights some tools and
techniques to enable this autoscaling process.
- Karve explores the OpenStack migration scheme accessible on SoftLayer that lets you move instances from one
compute node to another, thereby redistributing the load among the available hypervisors. He examines both live and non-live migration.
- In the case of a hypervisor rebuild (in which you need to empty the source hypervisor by moving virtual machines
to other target hypervisors), a combination of the OpenStack compute service and a shared file system can preserve the user disk data
on the evacuated server.
- Finally, Karve introduces the concept of snapshots, capturing VM instances while they are working correctly in
order to restore to that point in case of a disaster.
As a supplement to this post, Karve has also discussed horizontal and vertical scaling in relationship to the types of resources --
virtualized and shared. In "Achieving flexibility, control with SoftLayer," he explains how the SoftLayer architecture makes it almost as easy to scale
up as it is to scale out. He highlights capabilities such as compute size granular control (instead of the "several sizes fit all" method), the
ability to rapidly resize the resource pool for current conditions, and the ability to move images quickly between virtual and physical
environments based on ever-changing conditions.
To dive even deeper into the subject of horizontal and vertical scaling, Karve explains how to find the right balance between scale up and
scale out in a cloud. He introduces the concepts of reactive and proactive scaling and creates a matrix that maps them against
horizontal and vertical scaling:
- "Reactive scaling decisions are based on rules with thresholds on resource utilization and response times."
- "Proactive scaling can be based on historical usage data, modeling, analytics, and tracking social media sites."
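As a toy illustration of the reactive rule quoted above, here is a Python sketch; the metric names and thresholds are invented for the example, not taken from any SoftLayer or OpenStack API.

```python
def reactive_decision(metrics, cpu_high=0.80, cpu_low=0.30, max_latency_ms=500):
    """Rule-based (reactive) scaling: compare current metrics to thresholds."""
    if metrics["cpu"] > cpu_high or metrics["latency_ms"] > max_latency_ms:
        return "scale-out"  # horizontal: add nodes to absorb the load
    if metrics["cpu"] < cpu_low:
        return "scale-in"   # horizontal: release underused nodes
    return "hold"
```

A proactive scaler would replace these instantaneous checks with forecasts built from historical usage data, as Karve describes.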
Workload scaling: The OpenStack component
Lots of the scaling and balancing techniques Karve describes on SoftLayer belong to OpenStack (supported on SoftLayer) and can be
tremendously helpful in scaling operations. For more introductory information on OpenStack, try these resources:
- "Discover OpenStack:
Architectures, functions, and interactions" starts a series that examines each of the OpenStack components in detail. OpenStack is
an excellent technology if you need to leverage powerful cloud task tools that automate lots of the processes.
- For a deeper dive into the individual OpenStack component APIs, Cholleti Ramyasree has provided a glossary of OpenStack REST
APIs using PowerVC as the target system. This should give you a glimpse of the breadth of OpenStack interfaces available to you.
Workload balancing: The SoftLayer component
To finish out this post on balancing in a SoftLayer environment, SoftLayer's Wissam Elriachy provides an excellent, code-laden tutorial on
"Getting Started with SoftLayer
Local Load Balancers" that explains how to use the SoftLayer API to provision, configure, and manage local load balancers. It
includes information on grabbing details of local load balancers, creating and configuring them, setting up routing and checking types
and methods, and then cancelling the balancer when you are finished.
You should try SoftLayer, free for one month
Thoughts on Cloud: More on SoftLayer
Thoughts on Cloud: More from Alexei Karve
This new blog series demonstrates how to perform the tasks associated with creating virtual servers in SoftLayer
Shelbee Eigenbrode (Twitter @shelbee_se) is an IBM Cloud Services IT Architect who focuses a lot of her time on defining cloud services in order to effectively deliver them to users. In a new four-part series on the Thoughts on Cloud blog, she explains in a step-by-step fashion how to order and configure a new virtual server instance on SoftLayer:
- Billing options. I know, not of much interest to a developer EXCEPT that some of the choices make you examine the access requirements you need, such as whether you want to be billed hourly (for when your server may have a limited or unpredictable amount of use) or monthly (for a more stable usage pattern). It also introduces the concept of replicating your original server instance in case your server use pattern changes.
- Server configuration. Eigenbrode explains choices such as server quantity and location; number and speed of cores per instance; RAM required; operating system; how the SAN disk can be configured; networking options such as bandwidth, port speeds, and IP addresses; and system, storage, and services add-ons.
- Verification and deployment. She highlights the tasks of verifying your configuration, identifying host and domain names, and setting up your customer account.
The best thing about this series -- Eigenbrode has her nine-year-old daughter follow the instructions and do the work. Read more about SoftLayer from Eigenbrode on Thoughts on Cloud.
Server architecture at SoftLayer
Here's a note on server configuration in SoftLayer data centers. Each data center features one or more pods, each built to the same specifications, with methodologies designed to support up to 5,000 servers; the pods are intended to optimize key performance variables, including space, power, network, personnel, and internal infrastructure. Each rack has 40Gbps of connectivity built in (20Gbps each to the private and public networks) for bandwidth and performance needs; each data center has continuous on-site security and is SOC 2 compliant.
SOC is a series of accounting standards that measure the control of financial information for a service organization; SOC 2 controls encompass these principles:
- The system is protected against unauthorized physical and logical access.
- The system is available for operation and use as committed or agreed.
- The system processing is complete, accurate, timely, and authorized.
- Information designated as confidential is protected as committed or agreed.
- Personal information is collected, used, retained, disclosed, and destroyed in conformity with the commitments in the entity’s privacy notice and with criteria set forth in generally accepted privacy principles issued by the AICPA and CICA.
You can explore the capacity and configuration of pods at the various SoftLayer data centers here (it includes bandwidth and ping tests).
You should try SoftLayer, free for one month
Thoughts on Cloud: More on SoftLayer
These five broadly defined concepts can be a starting point to expand the developer's thinking on cloud security
Since security is, was, and always will be one of the top concerns of developing and deploying your assets in a cloud environment, I'd like to introduce a few conceptual models that can define how you think about cloud security at a high level. Although they might not be classified as "models" in the traditional computer-technology sense, I like to call these approaches "models" because a model is an architecture you use to pass on sets of experience in order to build a custom result; models are not specific instructions nor are they the end result you're seeking. Models don't tell you what to do -- they make you think about what you're doing.
When it comes to cloud security issues, five models stand out from the noise of the volumes of information in the technosphere:
1. Turning an existing cloud model into a security testing environment.
2. Using existing meta-paradigm practices to address cloud security issues.
3. Safeguarding the data transmission routes with advanced concepts.
4. Integrating all the components into an end-to-end concept.
5. Employing a collaborative, testable development environment designed for that purpose.
There are probably dozens more, but these five resonate for me. As you move through the numbers, they go from using existing practices in new ways to using new practices and tools; and maybe along the way, capturing what you learn and developing new methods.
Let's take a look at each in depth.
Existing cloud delivery models can be security tools
According to longtime systems architect and engineer Judith Myerson, you can eliminate many of the difficult tasks of building a cloud vulnerability testbed by using the existing PaaS environment structure as a basis for a security testing model.
To do this, Myerson compares the standard PaaS model with a generic security testing model. She explores the interrelationships among the PaaS model's three lifecycle structures, focusing on their security testing attributes:
- "In the risk-management lifecycle, PaaS testers identify risks to application development, then create a risk-based approach to security testing."
- "In the application development lifecycle, testers apply the risk-based approach."
- "In the business process lifecycle, the testers use spreadsheets and documents to record the results of the risk-based security testing, including vulnerability assessments."
Using a generic security model, she identifies three types of security issues of concern:
- Security flaws at the application-design level.
- Security bugs at the application-implementation level.
- Resource outages at the platform level.
Then she weaves the two models together to create a new seven-part tool from existing parts; its components include:
- Automated vulnerability scan
- Vulnerability assessment process
- Security assessment process
Bring programming practices to bear on cloud security
DevOps is the method that responds to the interdependence of application and system software development and IT operations by stressing collaboration among the camps of IT professionals -- developers, administrators, integrators, and designers. The DevOps concept brings agile-based iterative and incremental application-development techniques to bear on solving security issues, mainly by inserting security (that will integrate into the entire system's security profile) at every step of every development, deployment, and maintenance process. Due to its iterative and incremental nature, DevOps also brings a certain amount of automation to the security process, especially for the simpler tasks.
Bob Aiello and Leslie Sachs show you how to apply DevOps practices to help mitigate the potential pitfalls and risks associated with cloud-based computing. They identify a few key areas where DevOps can help improve security:
- By building your essential infrastructure components with automated, programmatic procedures, you can avoid introducing security defects into your cloud environment.
- Through operating system APIs, you can automate server provisioning that comes with security built-ins, like cryptographic hashes to verify that the correct packages have been obtained and installed, and security consensus standards that can be automated to help ensure that the system is configured as securely as possible.
Automation is one of the strongest tools you can use in your fight to ensure cloud security. Using the DevOps principles, automation can help you detect unauthorized changes that are the result of human or malicious intent; automated, integrated security knowledge and threshold triggers help you weed out unauthorized access. Automation can also help you deal with incidents by making it easy to rebuild your systems when necessary.
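As one hedged example of the package-verification idea mentioned above, here is a short Python sketch; the function name and workflow are illustrative, not taken from any specific IBM or DevOps tool.

```python
import hashlib

def verify_package(package_bytes, expected_sha256):
    """Refuse to proceed unless the package matches its known-good hash."""
    actual = hashlib.sha256(package_bytes).hexdigest()
    if actual != expected_sha256:
        raise RuntimeError("package hash mismatch; refusing to install")
    return True
```

In an automated provisioning pipeline, a check like this would run after every download and before every install, so a tampered or corrupted package halts the build rather than silently entering the environment.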
Think of the network as part of your development plan
Some of you may not be old enough to remember when the network was mostly about moving physical parts in switches and routers, then reconfiguring the software for each of your firewalls. But cloud environments require nimble, more dynamic resource allocation; since clouds already have virtualized applications and storage, why not virtualize the network too? Emerging approaches such as software-defined networking (SDN) do just that and provide the platform to allow network virtualization.
Unfortunately, the great advantages of SDN come with a price: a host of software-related security issues for network protection.
Paul Ashley and Chenta Lee have provided a detailed look at how to deploy network security on software-defined networks. They define SDN as:
... a new architectural approach that aims to provide a highly flexible network suitable for today's dynamic environment. Existing networking technology is inherently static and difficult to change because minor network alterations often require substantial reconfiguration across many switches, routers, and firewalls. This process is time-consuming for administrators and inherently error-prone.
Ashley and Lee explain that the traditional way of thinking about network management is that:
- The control plane manages where traffic is sent on the network.
- The data plane forwards the traffic.
- Both planes are within the network device, and configuration of the control plane is proprietary to the vendor's product.
With SDN, though, the control and data planes are separate and the control plane is centrally managed across the network equipment within the enterprise (independent of vendor equipment).
This setup makes it simpler and faster to manage the flow of traffic. But what about security?
Ashley and Lee describe an approach (through real-life examples) that explains how to integrate automated, sophisticated user-based and IP reputation-based application control in an SDN -- features that:
- Analyze each connection to identify the web or non-web application in use, the action being taken, and the reputation of the application.
- Decide to allow or block each individual connection.
- Record connection information, including user and application context.
- Use recorded information for local policy refinement, including bandwidth management.
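The four capabilities above can be sketched as a toy per-connection decision loop in Python; the reputation table and function are invented stand-ins for the real application-identification and IP-reputation services.

```python
# Hypothetical stand-ins for the application-identification and
# reputation services an SDN security layer would consult.
REPUTATION = {"crm-app": "good", "file-sharing-app": "bad"}
CONNECTION_LOG = []

def handle_connection(user, app, action):
    # Analyze the connection, decide allow/block, and record full context
    # so the log can later drive policy refinement (e.g., bandwidth rules).
    verdict = "allow" if REPUTATION.get(app) == "good" else "block"
    CONNECTION_LOG.append(
        {"user": user, "app": app, "action": action, "verdict": verdict})
    return verdict
```

Unknown applications default to "block" here, which is one conservative policy choice; a production controller would also weigh user identity and connection history.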
Judith Myerson also has some advice to control the vulnerabilities that SDN can introduce to the cloud. She says to follow these four steps to mitigate SDN risks:
- Identify vulnerabilities and threats.
- Fix with safeguards.
Chenta Lee blogged an entry on using deployment patterns for security services in a software defined environment (SDE) that offers ideas on how to define the composition of your deployment pattern to match the security levels you want in your deployment.
Stop thinking of security, cloud, etc., as separate entities
Security intelligence is the concept that as the IT security landscape grows increasingly complex, threats become more sophisticated, and new technologies emerge, IT security has to get smarter, faster, and more nimble. Smarter means you think ahead of where your attacker is; faster means you match (and exceed) the speed with which you can respond to threats; nimble means you can apply security methods precisely where they are needed in order not to impede the other main objectives of your business -- innovation, sales, and growth.
How does this translate into secure cloud deployment mechanisms? Ravi Muthukrishnan and Sreekanth Iyer show you a working model of how to build security intelligence into cloud and virtualized environments and create proactive threat protection and detection of anomalies. They will explain such security components as:
- Collecting task and event logs from system components and classifying them under several categories.
- Performing intelligent correlation on the collected logs to detect any anomalies or incidents.
- Alerting the security manager (who can then visually analyze logs and prioritize their actions).
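To make those three components concrete, here is a deliberately simplified Python sketch; the event format, classifier, and failure threshold are invented for illustration and are not part of any IBM product.

```python
from collections import Counter

def classify(event):
    # Component 1: put each collected event into a category.
    return "auth" if "login" in event["msg"] else "system"

def correlate(events, fail_threshold=3):
    # Component 2: correlate events to spot anomalies; component 3:
    # produce alerts the security manager can act on.
    alerts = []
    failures = Counter()
    for e in events:
        if classify(e) == "auth" and "failed" in e["msg"]:
            failures[e["user"]] += 1
            if failures[e["user"]] == fail_threshold:
                alerts.append("possible brute force against " + e["user"])
    return alerts
```

A real security-intelligence platform replaces the substring checks with parsers and the counter with statistical or rule-based correlation, but the collect-classify-correlate-alert shape is the same.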
Explore an environment designed for cloud deployment
IBM Bluemix is an open-standards, cloud-based platform for building, managing, and running apps of all types, such as web, mobile, big data, and smart devices. As such, integrated security is mostly built in. For example, take a look at these services you can engage to help automate your application security needs:
In fact, Carl Osipov shows you how to secure your IBM Bluemix web app with OAuth 2.0 using the IBM ID Single Sign-on Service.
Finally, Chris Brealey will use the Bluemix platform to demonstrate how to build a mobile app that isn't perfect in order to test to see how the Bluemix Mobile Quality Assurance service performs in "fixing" the app's imperfections. (MQA is a cloud-hosted, multi-tenant service designed to collect and present information about the quality of mobile apps.)
You should try Bluemix
Keep up to date with Bluemix changes and enhancements
Join the Bluemix developers' community
Explore IBM's strategy for SDN
Video: Integrate DevOps with Bluemix
Modified by allenkane
IBM Bluemix has been open to developers in beta; here's what they've been doing with it
Here's a sampling of the more than 38 documented projects developers have built using IBM Bluemix.
Jump-start your hackathon efforts with DevOps Services
Millard Ellingsworth shows you how to create a container for a perfect hackathon -- a brief, intense period of collaborative development, generally around a particular cause or topic -- by combining components from the Hackathon Starter Project, Bluemix DevOps Services (as the collaborative on-the-web development environment and continuous delivery pipeline), and Bluemix for the cloud hosting. You see how to create new instances and how to automatically deploy them to the Bluemix PaaS after each change in coding. Explore further
Build and deploy a mobile-friendly calorie counter
Using PHP, MySQL, AngularJS, and the Nutritionix API, Vikram Vaswani demonstrates the steps to create and deploy an application on Bluemix that allows the user to search for food items by name and retrieve the results through an API to the online nutrition database Nutritionix; group selected food items together to create meal records; save these records to a MySQL database together with their calorie counts by using a PHP/AngularJS app; retrieve reports of total calories that are consumed for varying periods; and access the app from mobile devices. One of the main concepts of this instruction details how deployment to a scalable, flexible platform (Bluemix) can provide the round-the-clock access that an app like this needs to be successful. Explore further
How to quickly send a mobile push notification
Extend an iOS app so it integrates with Worklight
Salim Zeitouni and Ramakrishna Boggarapu will show you how to combine Bluemix and Worklight to provide a personalized user experience in an iOS app through the use of authentication. They will demonstrate how to extend a BlueList application running on iOS in order to leverage Worklight by defining an HTTP adapter that will simulate authentication against a customer server that returns a user identity. The user identity will personalize the interaction with Bluemix Push, MobileData, and CloudCode services.
Since the BlueList application (see how to build it here) leverages native APIs, Worklight can enable the native iOS application to communicate by using the Worklight native API library. You will learn how to configure an iOS native API environment on the server to consume the client requests and communicate with an HTTP adapter. Explore further
Create a natural language question answering system with IBM Watson
IBM Watson may be a first real step toward artificial intelligence, so it's probably a "smart" idea to start incorporating its capabilities, such as its natural language abilities, into your development projects. Swami Chandrasekaran and Carmine DiMascio show you how to create a natural language question answering system with IBM Watson on Bluemix. The Watson Films app is a simple demonstration of how to build an application that interacts with Watson by using the Watson QAAPI (Question and Answer API); users can ask questions about AFI films. The demo is built on Node.js with Express, and it teaches the concepts behind building an advanced natural language application. Explore further
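At its core, a QAAPI interaction is just a JSON question posted to the service. The sketch below shows the kind of request body such an app might build; the payload shape and field names are assumptions for illustration, so check the tutorial for the service's actual API:

```javascript
// Build the JSON body for a question to a Watson-style QA service.
// The payload shape ({ question: { questionText, items } }) is an
// assumption for this sketch, not the documented QAAPI contract.
function buildQuestion(text, itemCount) {
  return {
    question: {
      questionText: text,
      items: itemCount, // assumed field: how many ranked answers to return
    },
  };
}

const body = JSON.stringify(buildQuestion('Which film won Best Picture in 1942?', 3));
```

In the Node.js/Express demo, a request like this would be POSTed to the bound Watson service, and the ranked answers rendered back to the user.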
Improve scalability with session caching
Abelard Chow, Paul Chen, and Brian Martin understand that a well-designed session persistence framework is required for performance and scalability, but you don't always have the time to construct one properly. So, using the Bluemix SessionCache service, they show you how to easily and quickly build such a framework into your app. You will learn about HTTP sessions and session persistence. Explore further
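The core idea of session persistence is that session state lives in a store any app instance can reach, keyed by session ID. Here is a minimal in-process sketch of that interface; in a real Bluemix deployment the backing store would be the Session Cache service, not a Map:

```javascript
// Minimal session store sketch: get/set/destroy keyed by session ID.
// An in-process Map stands in for the external cache service, which
// is what actually makes sessions survive instance restarts.
class SessionStore {
  constructor() { this.sessions = new Map(); }
  set(id, data) { this.sessions.set(id, { ...data, touchedAt: Date.now() }); }
  get(id) { return this.sessions.get(id) || null; }
  destroy(id) { this.sessions.delete(id); }
}

const store = new SessionStore();
store.set('abc123', { userId: 42, cart: ['widget'] });
console.log(store.get('abc123').userId); // 42
```

Because every instance reads and writes the same external store, a user's session follows them even when the load balancer routes requests to a different instance.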
Enable a photo finder with location services
The team of Jay Allen, Rachel Reinitz, Srikant Varadarajan, and Robert Vila demonstrates how easy it is to use Pitney Bowes's powerful geocoding and address-lookup APIs (as Bluemix services) to build an app that combines the latitude and longitude of any U.S. street address with a media search in Instagram. You'll also explore creating service instances and cloning applications. Explore further
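The glue between the two services is small: take the latitude/longitude that the geocoder returns and feed it to a location-based media search. The endpoint and parameter names below mirror the Instagram API of that era but should be treated as assumptions for this sketch:

```javascript
// Turn a geocoded latitude/longitude into an Instagram media-search
// URL. Endpoint and parameter names are assumptions for illustration.
function mediaSearchUrl(lat, lng, accessToken, distanceMeters = 1000) {
  const params = new URLSearchParams({
    lat: String(lat),
    lng: String(lng),
    distance: String(distanceMeters),
    access_token: accessToken,
  });
  return `https://api.instagram.com/v1/media/search?${params}`;
}

const url = mediaSearchUrl(40.7128, -74.006, 'TOKEN');
```

The app's flow is then: address in, geocoder out, media search in, photos out.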
Use a custom Go buildpack with IBM Bluemix
Bluemix added the ability to bring your own buildpacks; in this article, Michele Crudele answers the question: "What if Bluemix doesn't support your preferred language and web development framework?" A buildpack is a collection of scripts that examines the application you're deploying and then downloads and configures the dependencies it needs. See how you can use the Bluemix PaaS pluggable model to link in support for your runtime. Explore further
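A Cloud Foundry-style buildpack is organized around three executable scripts under `bin/`. The layout below is a simplified sketch of that convention (see the article for the full contract):

```
go-buildpack/
└── bin/
    ├── detect    # decides whether this buildpack applies to the pushed app
    ├── compile   # downloads and configures the runtime and dependencies
    └── release   # tells the platform how to start the app
```

When you push an app, the platform runs `detect` for each buildpack until one claims the app, then `compile` and `release` do the rest.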
What if you can't get your mobile app right?
Chris Brealey says it quite well: "I don't know about you, but with the exception of writing 'Hello World,' I hardly ever get my code right the first time." With that thought, he takes you on a journey to write a "bad" mobile app, then shows you how the Bluemix Mobile Quality Assurance service can interact with that app and suggest fixes to make it perfect. Experience automated, interactive, iterative testing. Explore further
Scalability is a Bluemix built-in service
For the last sample project in this article, we'd like to thank Ryan Baxter for reminding developers that, almost regardless of which Bluemix service your application uses, scalability is built into your application. As he demonstrates how to build highly scalable applications with the Bluemix Node.js runtime and Redis service, he notes that "One of the most compelling reasons to use Bluemix to run your application is its ability to quickly and easily scale your application." This is an opportunity for you to examine the scalability built into the Bluemix experience. Explore further
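The reason Redis matters for scaling is that shared state must live outside any one instance: every instance talks to the same store, so the data survives instances coming and going. The sketch below shows the pattern with an in-memory stub that mimics the `incr`/`get` shape of a Redis client; a real app would use an actual Redis client bound to the Bluemix Redis service:

```javascript
// A shared page-hit counter. Any app instance calling hit() increments
// the same key, so the count is correct no matter how many instances run.
function makeCounter(client) {
  return {
    hit: () => client.incr('page:hits'),
    total: () => client.get('page:hits'),
  };
}

// In-memory stand-in for a Redis connection (an assumption for this
// sketch; swap in a real client for production).
const stubRedis = (() => {
  const data = new Map();
  return {
    incr: (k) => { const v = (data.get(k) || 0) + 1; data.set(k, v); return v; },
    get: (k) => data.get(k) || 0,
  };
})();

const counter = makeCounter(stubRedis);
counter.hit();
counter.hit();
console.log(counter.total()); // 2
```

Because the counter logic only depends on the client's interface, the same code runs unchanged against one instance or fifty.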
You can explore more documented projects using this search view, or you can get a more annotated view here.
About IBM Bluemix
IBM Bluemix -- a key technology in the IBM Cloud environment that rolled out early in 2014 -- is a single-solution environment with instant resources for developing and deploying apps quickly across multiple domains. You can use this open, standards-based platform to build, run, and manage web, mobile, big data, and smart-device apps. Bluemix supports many popular programming languages and frameworks. Java technology, mobile back-end development support, application monitoring, open source technologies, and much more are available through an as-a-service model in the cloud.
You should try Bluemix
Keep up to date with Bluemix changes and enhancements
Join the Bluemix developers' community
Learn more about Bluemix through documented projects
See what tech experts think about Bluemix
IBM Cloud Manager with OpenStack lets developers build common cloud workload task automation into an application -- Kane Scarlett, developerWorks
DevX.com editor Jason Bloomberg once wrote a good definition of cloud workloads: "The best way to think about a Cloud workload is [as] all the individual capabilities and units of work that make up a discrete application." I'd like to take that thought further and say that a key component of successful cloud computing is the ability to dynamically service varied workload requests. A data-analysis application requires a differently configured workload than a simple communication-oriented app, and so on. The service tasks I'm talking about include performing the input analysis needed to determine the changes to make and the resources to use, and then executing those decisions. By "dynamically," I mean "real-time and automated."
Some of the most cloud-friendly types of workloads are those that require:
1. Unpredictable scaling. Web apps have the potential for unpredictable spikes in traffic.
2. Processing power. Batch processing and tasks like encoding and decoding data streams concentrate a great deal of processing power on huge workloads, and a short time to completion is often critical.
3. Round-the-clock data availability. It is a delicate balancing act to keep the same set of resources at the ready in a different location and data center without incurring costs when they are not needed.
Of these three, only number two is a task you might contemplate handling manually. Even then, automation can ease the setup and processing of this sort of task by managing a potentially complex schedule of processor resource timing.
We are reaching the point at which the entire span of IT infrastructure is controlled by software; the Software Defined Environment (SDE) depends on automation and on integrated models of expertise and experience. And application developers can no longer be content with just the concerns of application development -- as a habitual practice, they should also consider the workloads that support their applications.
That area seems like a perfect place for some expert automated assistance, so that developers can focus most of their effort on what matters most to them: developing an award-winning application.
Why developers should be familiar with IBM Cloud Manager with OpenStack
As companies increasingly migrate to SDEs, automating hybrid cloud infrastructure across multiple platforms through open technologies becomes essential. Open means it's easier for you to adopt a cloud model and integrate it with your existing applications.
You probably know this product as SmartCloud Entry, but its latest incarnation is called IBM Cloud Manager with OpenStack. Regardless of what it's called, it is solid cloud management software based on OpenStack, integrated with IBM enhancements and support. It supports the latest OpenStack release, Icehouse, and provides full access to the complete core OpenStack API set to help clients ensure application portability and avoid vendor lock-in. It also extends cloud management support to System z, in addition to Power Systems, PureFlex/Flex Systems, System x, and other x86 environments. The new solution also supports IBM z/VM on System z and PowerVC for PowerVM on Power Systems, designed to add more scalability and security to Linux environments.
IBM Cloud Manager with OpenStack is a self-service portal designed to simplify cloud management for the cloud user. It enables you to work with virtual appliances and workloads focusing on the user’s perspective, delivering such self-service capabilities as provisioning and de-provisioning servers, drafting and cloning deployments, taking deployment snapshots, starting up and shutting down servers, and resizing existing servers.
I mentioned two cloud-friendly workloads: one that required unpredictable scaling and one that required additional CPU processing. In this demo, you can see how IBM Cloud Manager with OpenStack automates scaling for a workload of this type, one in which the presenter has overtaxed the system with the need for about 50 percent more CPU processing resources than are currently available. It takes a bare-metal system, provisions an operating system to it, and then expands the GPFS shared storage file system onto the node. It also deploys OpenStack onto the node and expands the OpenStack cloud. At the end of the demo, the visual display has changed -- the system has scaled up to an additional hypervisor and has rebalanced the virtual CPU utilization; it has also automatically added the additional needed storage. (Another video provides a more administrator-oriented view of the process; it demonstrates how to use the product to deploy a ready-to-go OpenStack-based cloud from a cluster of bare-metal servers.)
One of the powerhouse pieces of Cloud Manager is IBM Platform Resource Scheduler; this software product delivers enterprise-class dynamic resource management for OpenStack cloud environments that enables:
- Intelligent resource pooling and policy-based infrastructure resource management
- Dynamic reconfiguration of heterogeneous resources according to application requirements and real-time demands
The previous demo includes the roles that PRS plays in the system.
IBM is offering a beta program for Cloud Manager that lets your on-premise cloud reach out to a public SoftLayer cloud for resources during utilization spikes. As I get an opportunity to speak with the designers of this feature and understand the mechanics of how it works, I'll report back to you.
Try IBM Cloud Manager with OpenStack
Download the trial version (or hosted trial) of IBM Cloud Manager with OpenStack, including IBM Platform Resource Scheduler, and try all of the product's functions at no charge for up to 90 days. Cloud Manager with OpenStack trial code is available for IBM Power Systems, IBM PureFlex and Flex Systems, and x86 servers.
Download IBM Cloud Manager with OpenStack
Video: IBM's Jeff Borek details Cloud Manager
Explore the IBM Cloud Manager with OpenStack community
Visit the OpenStack Marketplace
Series: Learn about OpenStack components
See what IBM IT experts think about OpenStack
Modified by allenkane
Explore how the PureApplication Service on SoftLayer can help you deploy
a complex BPM, portal, analytics, mobile, and database or application workload using patterns --
Kane Scarlett, developerWorks
You've built an enterprise cloud application. In that experience, you've taken great pains to build in
interoperability so that your application is accessible for most devices under most circumstances.
You've been careful to build in portability so that your app can be deployed to as many different
environments as proves to be productive. And you've probably used virtual application patterns to
capture experience and expertise to make your app the best it can be.
Have you ever considered the rest of the deployment process, deployment onto the environment?
Don't laugh; some developers and programmers still see application development and deployment
as ending once the app is "launched" into the cloud environment. But master developers realize
that controlling environmental parameters is as important as a well-crafted application in a cloud deployment.
That's where IBM PureApplication System, an expert integrated cloud system, comes in. Actually,
it's the PureApplication Service for the SoftLayer infrastructure that I'm
focused on. PureApplication Service on SoftLayer allows you to deploy complex environment
components -- portal, analytics, mobile, business process, database, or application workloads -- on
SoftLayer using patterns; you can then export and deploy these patterns on premise for production
with IBM PureApplication System.
In other words, this service enables on-premise-to-off-premise and off-premise-to-on-premise use
cases. It helps you save time and effort by enabling you to build complex deployments that are
guaranteed to run in hybrid environments.
In this video, Jason McGee, IBM
Fellow and CTO of PureApplication System, explains how PureApplication Service on SoftLayer
can help you build deployments that operate successfully in hybrid systems. In a follow-up demo, you can see how
to deploy a mobile application to a public cloud using a pattern and the service, as well as how to
use the service to move an application to an on-premise cloud.
In a series of further videos, he answers the following questions:
- What exactly will PureApplication Service on SoftLayer allow me to do? Run patterns designed on premise in an off-premise environment, so you can choose the right target for the workload you're trying to run.
- When is it best to use on-premise or off-premise environments (or both) for my applications? The business cases are less about workload and more about what you're trying to accomplish. For example, you might choose PureApplication Service on SoftLayer for development and testing and establish your production environment on premise. If your project is starting out small relative to your overall business, it might make sense to start out off premise, then move it back on premise once it becomes larger or more important. If you're running on premise but find you need more capacity, you might switch to off premise. [VIDEO]
- What components of PureApplication System are actually running on the SoftLayer infrastructure? You'll find the PureApplication System pattern technology and some parts of its infrastructure management capabilities available on SoftLayer. [VIDEO]
- Will custom patterns developed on PureApplication System be deployable on PureApplication Service on SoftLayer? Yes! [VIDEO]
And a host of other experts explain the connection between PureApplication Service on SoftLayer
and IBM Bluemix in this short video.
Sign up for PureApplication Service for SoftLayer
PureApplication Service for SoftLayer
Try SoftLayer for free
Join the IBM Bluemix community
Understand current hybrid cloud industry trends