Innovative uses for WebSphere sMash, Part 3: Managing cloud operations using sMash-ing Assemble flows

This series of articles has focused on actual examples of where IBM® WebSphere® sMash was selected and used to perform innovative and valuable tasks to aid in the operations of IBM's Green Innovation Data Center (GIDC) in Southbury, CT, USA. Part 1 looked at how WebSphere sMash was used to build a flexible framework for constructing data center dashboards, and Part 2 showed how WebSphere sMash can be used to wrap external system management tools with easy to use APIs to aid in the automation of data center operations. The conclusion to this series reveals how you can leverage the WebSphere sMash Assemble flow capabilities to rapidly construct task and activity workflows that can be easily altered through editing and configuration rather than coding changes. This content is part of the IBM WebSphere Developer Technical Journal.


Aaron Kasman (akasman@us.ibm.com), Advisory Software Engineer, IBM

Aaron Kasman is an advisory software engineer in the Innovation Engineering team in the IBM CIO Office where he focuses on internal Platform-as-a-Service (PaaS) offerings, with an interest toward supporting innovators and situational application development. Prior to this role, he was part of the IBM Software Services for WebSphere team developing IBM.com’s e-commerce presence. His interests include WebSphere sMash, Platform and Software-as-a-Service technologies, content and community management with Drupal and CiviCRM, and visual design.



Andrew J. F. Bravery (abravery@us.ibm.com), Senior Technical Staff Member, Executive IT Architect, IBM

Andy Bravery is a Senior Technical Staff Member, Executive IT Architect, and Manager of the Innovation Engineering team in the IBM CIO Office. Mr. Bravery has a rich background in the emerging technology field. His recent work has been around exploiting cloud technologies to support situational applications in the enterprise. Mr. Bravery is a member of the British Computer Society and an Open Group Certified Master IT Architect, and holds an honours degree in Physics from the University of Birmingham, UK.



10 November 2010

Also available in Chinese

Introduction

A key attribute of the cloud computing story is that of self-service: putting the control in the hands of the user, rather than having system administrators perform tasks on behalf of those users. One consideration with this, however, is that the operations to be performed often involve tasks that can take several minutes or several hours to complete. For this reason, cloud management systems need to have the ability to handle long running tasks and maintain asynchronous communications with the user.

Part 3 of this series shows how the IBM WebSphere sMash Assemble flow capabilities were leveraged to give our own cloud management system these features, and how WebSphere sMash Assemble offers multiple ways to create flows of tasks from many different types of base artifacts that should fit nicely with the common component types of your own service-oriented system. Tips will be provided along the way that will hopefully enable you to find WebSphere sMash Assemble easy to adopt and productive to use.


Overview of the GIDC cloud management system

The key feature of the Green Innovation Data Center (GIDC) cloud is that it is able to control server "instances" across three discrete resource pools. This enables users to request an instance upon which they can perform development tasks. They can then copy that instance as an image and re-instantiate it in one of the other pools to perform pre-production testing, or deploy the instance in the live production environment. All of this can be controlled through a self-service user interface.

The process of copying (or capturing) a cloud instance into an image and then moving it between pools is a very long running task. Depending on the size of the instance it can take up to 30 minutes to capture it, and then a similar amount of time to copy it from one pool to another and re-instantiate it.

Because of the time involved, a sub-system needed to be created that could handle these long running tasks, keep the user informed of progress, and choreograph the next steps that needed to be taken. It was in this sub-system that we implemented the WebSphere sMash Assemble component. Assemble lets you string together discrete automated and manual tasks into simple workflows, and triggers the next actions based on the events it detects.

Figure 1 illustrates a conceptual architecture of our environment, showing the resource pools, the cloud management components, and the Assemble component providing the glue to pull it all together.

Figure 1. Conceptual architecture of the GIDC cloud

We implemented our cloud infrastructure with a management layer, exposed as a REST API, that lets a user (or system) perform basic cloud operations.

The cloud management API handles the details of the capture, but the flows that we wrote chain the capture steps with other pieces. The example described in this article will send out a confirmation e-mail indicating the success or failure of the capture process.

An advantage of packaging the steps together in a flow is that they can take place in the context of a system-generated call; you don't need to have a user clicking through individual steps of what could be a lengthy process.

In detailing this example, a variety of the techniques available in the WebSphere sMash Assemble flow will be demonstrated, resulting in a script that executes the capture and e-mail flow. As mentioned earlier, one challenge for this flow is the fact that the capture operation might take many minutes. As a result, the operation is treated as asynchronous. You need to build a script that will kick off the capture process, detect when it is complete, and then proceed on to the next steps.


Introducing WebSphere sMash Assemble

First, let's cover a few flow basics and terms:

  • A flow is a lightweight process specific to WebSphere sMash that is defined in an XML structure.
  • A flow is composed of one or more activities.
  • A flow is most easily edited in WebSphere sMash's AppBuilder tool, in either the visual editor or the text editor. When editing flows, you will generally switch back and forth between the two modes, depending on the task at hand.
  • A flow should be saved in a .flow file, with only one flow per flow file.
  • An activity is a building block designed for use in a flow. A developer can use existing activities, or contribute new ones to their palette.
  • Each activity has a type named by its creator, such as GET or while or action.
  • Each instance of an activity in a flow must have a unique name. By default, the name is set to the type, but the author of the flow should rename it to reflect the activity's function in the flow. An activity can also have optional and required inputs and attributes, as defined by the activity's creator.
  • After a particular activity has executed, the name of that activity can be used to access data that resulted from its execution. Members of the activity's return data can be accessed using dot notation.
  • Basic sequencing of activities can be implicit or explicit. When a later activity uses the name of a previous activity as an input or attribute, WebSphere sMash computes an implicit order of activities. If the activity has dependencies on more than one previous activity, the flow will wait to execute that activity until the dependencies have been executed. To specify an explicit sequence between two activities in the flow, use the syntax <control source="previousActivity"/> inside your current activity, where previousActivity refers to the name of the activity that the current activity should follow.
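The implicit-ordering rule above can be pictured as a small dependency resolver: an activity becomes runnable once every activity it references has executed. The sketch below is not sMash's implementation, just an illustration of the idea; the activity names are hypothetical.

```java
import java.util.*;

public class ImplicitOrder {
    // Each activity maps to the list of activities whose outputs it references.
    // Compute an execution order by repeatedly running any activity whose
    // dependencies are all done (assumes no cycles).
    static List<String> order(Map<String, List<String>> deps) {
        List<String> result = new ArrayList<>();
        Set<String> done = new HashSet<>();
        while (done.size() < deps.size()) {
            for (Map.Entry<String, List<String>> e : deps.entrySet()) {
                if (!done.contains(e.getKey()) && done.containsAll(e.getValue())) {
                    result.add(e.getKey());
                    done.add(e.getKey());
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, List<String>> deps = new LinkedHashMap<>();
        deps.put("rcvparam", List.of());
        deps.put("getInstanceInfo", List.of("rcvparam"));  // references ${rcvparam...}
        deps.put("showResponse", List.of("getInstanceInfo"));
        System.out.println(order(deps)); // prints [rcvparam, getInstanceInfo, showResponse]
    }
}
```

An explicit `<control source="..."/>` element simply adds one more entry to an activity's dependency list.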

You will see all of this in action shortly.


Outbound calls

Because your flow depends on your cloud's API, you will first focus on approaches in the WebSphere sMash Assemble flow to make calls out to external services. The flow provides several ways to make such external requests. We'll take a look at four of them, from simple to complex as you build your toolkit toward creating the final example.

As a matter of configuration, assume that your cloud manager URLs are defined in your zero.config file as /config/cloudpool1url="http://localhost:8080". This will be referenced in your code rather than addressing explicit URLs.

Also, for simplicity, we have detailed only a subset of the cloud management operations available in our system. The two operations you need to be familiar with are shown below, using the variables to point to the cloud management server.

  • GET
    ${config.cloudpool1url[]}/resources/instance

    returns instance details in JSON form.
  • POST
    ${config.cloudpool1url[]}/resources/instance/<instanceID>/image

    kicks off the instance capture.
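The two endpoint shapes above can be captured as plain string builders, which is handy when testing a client against the API. This is only a sketch; the base URL stands in for ${config.cloudpool1url[]}.

```java
public class CloudUrls {
    // URL for the GET that returns instance details in JSON form.
    static String instanceListUrl(String base) {
        return base + "/resources/instance";
    }

    // URL for the POST that kicks off the capture of one instance.
    static String captureUrl(String base, String instanceId) {
        return base + "/resources/instance/" + instanceId + "/image";
    }

    public static void main(String[] args) {
        String base = "http://localhost:8080";
        System.out.println(instanceListUrl(base)); // GET: instance details
        System.out.println(captureUrl(base, "1")); // POST: kick off capture
    }
}
```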

The instance metadata takes the form shown in Listing 1. If the instance has been captured to an image, the name of that image is reflected in the imageId field.

Listing 1. Sample instance metadata
{
  "instanceName": "myInstance",
  "instanceId": "1",
  "state": "captured",
  "owner": "akasman@us.ibm.com",
  "imageId": "img-1"
}

a. HTTP call (GET/POST/PUT)

If you want your flow to make a simple HTTP call, you can use a GET, PUT, or POST activity from the flow activity palette. You can then edit the attributes of the GET activity to point to the desired resource. For example, suppose that you want to get the details of an instance with a given ID in your environment:

  1. The first activity in your flow handles retrieving the value of the URL parameter, instanceId. You supply the REST URL as an attribute of your GET activity.
  2. You add a second activity, reply, to return the results of the flow to the browser. That's all there is to it.

Figure 2 shows the flow diagram.

Figure 2. Simple HTTP GET flow

Listing 2 shows the corresponding flow code that would be saved as /public/simple-get/index.flow.

Listing 2. Flow markup for simple HTTP GET flow
<process name="simple-get" persistPolicy="on">
    <receiveGET name="rcvparam"/>
    <GET name="getInstanceInfo"
        target="${config.cloudpool1url[]}/resources/instance/${rcvparam.instanceId[0]}">
        <control source="rcvparam"/>
    </GET>
    <replyGET name="showResponse">
        <input name="body" value="${getInstanceInfo}. The instance state is ${getInstanceInfo.state}"/>
    </replyGET>
</process>

Here's a tip: Don't use standard <!-- --> comments in your flow; they will be deleted when you switch to the visual editor. You can, however, annotate the visual layout, but those annotations will not appear in the flow text view.

Listing 3 shows the response in the browser when you hit the flow at http://localhost:8080/simple-get/?instanceId=2.

Listing 3. Response to flow, simple HTTP GET
"[instanceId:2, instanceName:mySecondInstance, state:open, imageId:null]. 
The instance state is open"

In the result, notice that you can easily extract any member of the JSON response using dot notation inside a ${} expression. As you build up your capture instance example, you will see how to reuse this snippet to retrieve instance attributes within a more complex flow.

b. Java-wrapped code that is called by the flow

As an alternative to making the GET call, consider a scenario where you have a Java™ API that handles both authentication against the cloud manager and retrieval of instance details. We won't worry about the specific implementation details of the Java code here; assume that this is code you can readily prepare in Java and include in a .jar file placed in the /lib directory. Alternatively, you can place source code under the /java directory structure and use the zero compile command from the command line to compile it.

To access this Java code from your flow, add an Action activity from the flow palette. The Action activity is designed to enable you to hit any static Java method.

Figure 3 shows some sample attributes for an action activity. An action's operation is the method of the target class. So, in this example, the action will be bound at run time to a class that you'll write, called com.ibm.CloudManagerHelper, and specifically to the getInstanceDetailsAuthHelper method.

Literal or variable arguments can be passed into the action, and they are bound in sequence to the target Java method's parameters. In other words, in this example, getInstanceDetailsAuthHelper would need to take two String arguments in order to accept the values you're passing in. Assume you're passing in the instanceId and the userid of the user logged in to access the flow. Listing 4 shows a stub of the corresponding Java code that would be executed by this activity.
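The run-time binding just described can be pictured as a reflective call: the action's operation names a static method on the target class, and the arguments are bound positionally to its parameters. The following is only an illustrative sketch, not the sMash implementation; the Helper class is a stand-in for com.ibm.CloudManagerHelper.

```java
import java.lang.reflect.Method;

public class ActionDispatch {
    // Hypothetical stand-in for com.ibm.CloudManagerHelper.
    public static class Helper {
        public static String getInstanceDetailsAuthHelper(String instanceId, String userId) {
            return "Owner " + userId + " has requested " + instanceId;
        }
    }

    // Bind the arguments in sequence to the named static method,
    // as the action activity does at run time.
    static Object invoke(Class<?> target, String operation, Object... args) {
        try {
            Class<?>[] types = new Class<?>[args.length];
            for (int i = 0; i < args.length; i++) types[i] = args[i].getClass();
            Method m = target.getMethod(operation, types);
            return m.invoke(null, args); // null receiver: static method
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(invoke(Helper.class, "getInstanceDetailsAuthHelper",
                "1", "akasman@us.ibm.com"));
        // prints "Owner akasman@us.ibm.com has requested 1"
    }
}
```

If the method's parameter list does not match the supplied arguments, the failure surfaces only at run time, which is why the action activity offers no design-time validation.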

Figure 3. Attributes of an action activity
Listing 4. Stub of corresponding Java method
package com.ibm;

public class CloudManagerHelper {

    public static void getInstanceDetailsAuthHelper(String instanceId, String userId) {

        System.out.println("Owner " + userId + " has requested " + instanceId);
        // code to authenticate against the cloud manager
        // code to request instance details
    }
}

The action activity makes it easy to reuse Java code in your flow. Because it is so generic, there is little programmatic overhead in setting up the call, which makes it ideal for one-off access to existing Java code.

c. Call a script

Instead of calling Java code that needs to be manually compiled, a useful alternative is to call a Groovy or PHP script. The script activity takes parameters too, but as with the Java example, the parameters are not validated until run time.

A Groovy script provides the richness of Groovy syntax and string handling, and gives you easy access to the WebSphere sMash libraries, such as the WebSphere sMash Connection API. In the flow's script activity instance, you provide the name of the Groovy script to call, along with any other arguments (Figure 4).

Let's follow the same example as in sections a and b above, but this time use a Groovy script. The arguments are available as variables in the context of the script. In this case, the referenced script would be located at /app/scripts/getInstanceDetailsHelper.groovy. Listing 5 shows a script that uses the arguments.

Figure 4. Script activity properties
Listing 5. Sample Groovy script called by the script activity
// The arguments are available as variables
print("Owner " + userId + " has requested " + instanceId);

// make calls to REST APIs using these values...

d. Custom flow activities

The last, most robust, and most recommended option that will be covered here is to create a custom, reusable, extension activity. After you've taken this step, you'll see a new activity that you can drag and drop into your flow. Most importantly, in structuring an extension activity this way, you create a well-defined interface, unlike the generic ones that you've seen up to now. The metadata that describes the interfaces defines expected inputs and attributes, and can provide validation rules, such as whether fields are required. If required inputs aren't provided, the flow will indicate that it is not ready to be run by displaying an exclamation mark (!) on the offending activity in the flow editor.

The Project Zero documentation discusses the process of creating flow activities in detail (see Resources below). Listing 6 shows the code for an activity that performs an instance capture. As the Project Zero tutorial notes, the naming conventions for custom activities are key. For an activity named captureInstance:

  • The activity definition should be located in a file named /app/assemble/activities/captureInstance.groovy.
  • The code is contained in a method named onCaptureInstanceActivity.

If you prefer to use an alternate scheme, you can manually bind your activity definition in zero.config, as described in the product documentation.

It's worth noting that custom activities can also be built in Java and PHP; we will stick to Groovy for this example.

Listing 6. captureInstance.groovy
import zero.core.connection.Connection
import zero.json.Json

def onCaptureInstanceActivity() {

    def instanceId = event.inputs[0]
    def imageDescription = event.attributes['imageDescription']

    Map metaData = new HashMap()
    metaData.name = null
    metaData.description = imageDescription
    metaData.state = 'initial'
    metaData.imageId = ''

    def status = null
    def responseData = null

    def urlPrefix = config.cloudpool1url[]

    try {
        Connection conn = new Connection(
            "${urlPrefix}/resources/instance/${instanceId}/image",
            Connection.Operation.POST)
        conn.addRequestHeader('Content-Type', 'text/json; charset="UTF-8"')
        conn.setRequestBody(metaData)
        Connection.Response resp = conn.send()

        // work with response headers and body
        status = resp.getResponseStatus()

        if (status == 202) {   // capture accepted
            responseData = Json.decode(resp.getResponseBodyAsString())
        } else {
            // handle errors
        }
    } catch (Exception e) {
        print(e)
    }

    event.result = responseData
}

// activity definition metadata
metadata = [
    'attributes' : [['name': 'imageDescription', 'required': false]],
    'inputs' : [['name': 'instanceId', 'required': true]]
]

The section at the end of Listing 6, following the comment //activity definition metadata, defines the interface (the inputs and attributes) for the activity.
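The kind of validation this metadata enables can be sketched as a simple check of the supplied inputs against the declared interface. This is illustrative only, not sMash code; the declaration shape mirrors the metadata structure in Listing 6.

```java
import java.util.*;

public class ActivityValidator {
    // Return the names of required inputs that were not supplied.
    // Each declaration is a map with a 'name' and a 'required' flag,
    // mirroring the metadata block at the end of the activity script.
    static List<String> missingInputs(List<Map<String, Object>> declared,
                                      Set<String> suppliedNames) {
        List<String> missing = new ArrayList<>();
        for (Map<String, Object> decl : declared) {
            boolean required = Boolean.TRUE.equals(decl.get("required"));
            if (required && !suppliedNames.contains((String) decl.get("name"))) {
                missing.add((String) decl.get("name"));
            }
        }
        return missing;
    }

    public static void main(String[] args) {
        List<Map<String, Object>> inputs =
            List.of(Map.<String, Object>of("name", "instanceId", "required", true));
        System.out.println(missingInputs(inputs, Set.of()));             // prints [instanceId]
        System.out.println(missingInputs(inputs, Set.of("instanceId"))); // prints []
    }
}
```

A non-empty result is what makes AppBuilder flag the activity with an exclamation mark until the required inputs are wired up.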

At this point, you have your custom capture activity and its metadata describing its attributes and inputs, but if you want it to appear on your palette, you must create two icons for it and tell AppBuilder which category to place it in.

Let's walk through these steps:

  1. Create two icons (a larger one and a smaller one) to provide the visual representation of the activity on the palette and in the editor. The icon specifications are listed below; the file names should match the activity name.
    • Place a 16x16 icon file at this path: public/tooling/icon-activity/captureInstance.png.
    • Place a 48x48 icon file at this path: public/tooling/figure-activity/captureInstance.png.
    Figure 5 illustrates these icons for the custom captureInstance activity.
    Figure 5. 48x48 and 16x16 icons for custom activity
  2. Next, you need to specify a category where the activity will reside. See Listing 7 for an example of how our Cloud Management category was defined for the palette. Again, the file name should match the required category name, and the JSON file should be placed in the public/tooling/categories directory.
    Listing 7. Cloud Management.json
    {
        "name" : "Cloud Management",
        "description" : "Cloud management tasks",
        "item" : [
            {
                "name" : "captureInstance",
                "type" : "captureInstance"
            }
        ]
    }

    Figure 6 shows how the custom captureInstance activity looks on the palette in the Cloud Management category that is defined in Listing 7.
    Figure 6. Custom activity on palette
  3. Now you're ready to use your capture process by dragging it from the palette into your flow. As mentioned earlier, you have a challenge in that captureInstance is an asynchronous process; when the REST call is made to capture an instance, the capture is kicked off immediately, but it can take many minutes for the captured instance to become available.

    When triggered, the REST call to capture an instance returns success via an HTTP 202 response code. It also changes the state attribute of the instance to "capturing." To detect when the capturing process is complete, you can use a while loop activity and poll until the state has changed to "captured." At that point, you know that your image is ready. Although it won't be demonstrated in this article, you would employ a similar technique when creating an instance from your image, since, like capturing, restoring an instance can take many minutes.

    In Listing 8, you can see the WebSphere sMash while activity in action. Figure 7 shows the visual flow, and Figure 8 drills down to illustrate the flow inside the while loop. Double-clicking the while activity in the editor reveals this sub-flow.

    Figure 7. Capture instance flow diagram, first pass
    Listing 8. Capture flow with while loop
    <process name="captureInstanceFlowSimple" persistPolicy="on">
        <variable name="captureProbe" value="_"/>
        <receiveGET name="rcvparam"/>
        <replyGET name="replyGET">
            <input name="body" value="${FLOW_ID}" content-type="text/plain"/>
            <control source="rcvparam"/>
        </replyGET>
        <assign name="assignSourceInstance">
            <copy from="${rcvparam.instanceId[0]}" to="sourceInstanceId"/>
        </assign>
        <captureInstance name="captureInstance" imageDescription="targetImage">
            <input name="instanceId" value="${sourceInstanceId}" 
    		content-type="text/plain"/>
            <control source="assignSourceInstance"/>
        </captureInstance>
        <while name="whileCapturing" condition="captureProbe=='_' || 
    		captureProbe== 'pending' || captureProbe=='capturing'">
            <control source="captureInstance"/>
            <waitXSeconds name="pollingDelay" xSeconds="10"/>
            <GET name="getInstanceInfo" target="${config.cloudpool1url[]}/resources/instance/${sourceInstanceId}"/>
            <assign name="assignProbe">
                <copy from="${getInstanceInfo.state}" to="captureProbe"/>
            </assign>
        </while>
    </process>

    Let's walk through the code:

    1. First, use the variable activity to define the captureProbe variable, which acts as a placeholder for the current status of the instance from the flow's perspective. Default the value to an underscore ("_"); any value will do.
    2. Define the sourceInstanceId based on a URL parameter passed into the flow.
    3. Add a replyGET activity at the start of the flow to display the flow's run time ID in the browser.
    4. Kick off the capture by executing the activity that you defined in the previous section.
    5. With capture kicked off and your captureProbe set to the default value, you will always enter your while loop. The while loop declaration describes the conditions under which the loop will continue: you stay in the loop while the state is the default ("_"), pending, or capturing.
    6. Next, execute another custom activity, waitXSeconds, to pause for 10 seconds between probes. This avoids bombarding the cloud manager with requests and keeps the flow from running a continuous loop unnecessarily. The code for waitXSeconds is shown in Listing 9.
    7. Reuse the code from Listing 2 to perform a simple GET that retrieves the instance details. From there, use the copy element inside the assignProbe activity to copy the value of the instance's state to captureProbe.
    At this point, the while condition loops until an exit condition is reached -- ideally, when the capture has completed successfully.
    Figure 8. Contents of while loop
    Listing 9. Groovy script for waitXSeconds activity
    // file /app/assemble/activities/waitXSeconds.groovy
    def onWaitXSecondsActivity() {

        def xSeconds = event.attributes["xSeconds"]

        logger.INFO {"enter waiting..."}
        Thread.sleep(Long.parseLong(xSeconds) * 1000) // wait for x seconds
        logger.INFO {"exit waiting..."}
    }

    metadata = [
        "attributes" : [["name":"xSeconds"]]
    ]

Branching and e-mailing

Once execution exits the loop, assume that the instance state is one of two values:

  • If the capture succeeded, the state will be "captured".
  • If the capture failed for any reason (for example, the system volume is full), the state will be "failed".

You want to send an e-mail alert to the instance owner to inform them of the outcome of the flow and the name of the new instance.

Listing 10 builds on Listing 8 but adds two sendMail activities: the first for the failure case and the second for the success case. Figure 9 illustrates the visual flow. Notice the control element in the sendMail blocks. The source attribute states that each of the sendMail activities should follow the while loop, while the transitionCondition further specifies the conditions under which each branch is taken. Based on the instance state when the while loop completes, the appropriate branch is taken, and thus the correct e-mail is sent.

Listing 10. Branch and send the appropriate email
<process name="captureInstanceFlow" persistPolicy="on">
    <variable name="captureProbe" value="_"/>
    <receiveGET name="rcvparam"/>
    <replyGET name="replyGET">
        <input name="body" value="${FLOW_ID}" 
		content-type="text/plain"/>
        <control source="rcvparam"/>
    </replyGET>
    <assign name="assignSourceInstance">
        <copy from="${rcvparam.instanceId[0]}" to="sourceInstanceId"/>
    </assign>
    <captureInstance name="captureInstance" imageDescription="targetImage">
        <input name="instanceId" value="${sourceInstanceId}" 
		content-type="text/plain"/>
        <control source="assignSourceInstance"/>
    </captureInstance>
    <while name="whileCapturing" condition="captureProbe=='_' || 
		captureProbe== 'pending' || captureProbe=='capturing'">
        <control source="captureInstance"/>
        <waitXSeconds name="pollingDelay" xSeconds="10"/>
        <GET name="getInstanceInfo" target="${config.cloudpool1url[]}/resources/instance/${sourceInstanceId}"/>
        <assign name="assignProbe">
            <copy from="${getInstanceInfo.state}" to="captureProbe"/>
        </assign>
    </while>
    <GET name="getInstanceDetails" target="${config.cloudpool1url[]}/resources/instance/${sourceInstanceId}">
        <control source="whileCapturing"/>
    </GET>
    <sendMail name="sendFailureMail" address="${getInstanceDetails.owner}" 
		sender="akasman@us.ibm.com" subject="Instance capture - failure">
        <input name="body" value="Instance ${getInstanceDetails.instanceId} 
		did not capture successfully"/>
        <control source="whileCapturing" transitionCondition="'failed' 
		== captureProbe"/>
    </sendMail>
    <sendMail name="sendSuccessMail" address="${getInstanceDetails.owner}" 
		sender="akasman@us.ibm.com" subject="Instance capture - success">
        <input name="body" value="Instance ${getInstanceDetails.instanceId} 
		captured successfully to image ${getInstanceDetails.imageId}"/>
        <control source="whileCapturing" transitionCondition="'captured' 
		== captureProbe"/>
    </sendMail>
</process>
Figure 9. Capture instance flow diagram, complete version
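The two transitionConditions amount to a simple branch on the final state. As a plain-Java sketch of that branch (the message text mirrors the sendMail bodies in Listing 10; this is illustrative, not flow code):

```java
public class CaptureNotifier {
    // Choose the notification the flow's sendMail branches would produce,
    // based on the captureProbe value when the while loop exits.
    static String messageFor(String state, String instanceId, String imageId) {
        if ("captured".equals(state)) {
            return "Instance " + instanceId + " captured successfully to image " + imageId;
        }
        return "Instance " + instanceId + " did not capture successfully";
    }

    public static void main(String[] args) {
        System.out.println(messageFor("captured", "1", "img-1"));
        // prints "Instance 1 captured successfully to image img-1"
        System.out.println(messageFor("failed", "1", null));
        // prints "Instance 1 did not capture successfully"
    }
}
```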

Security, persistence, and cleanup

With the logic of your flow complete, there are some configuration steps worth considering:

  • Security

    Flows can be secured in a manner similar to any other WebSphere sMash application resource. Consult the previous article in this series, as well as the Project Zero documentation, for more detail (see Resources).

  • Persistence

    By default, flow run time data is stored in the application's GlobalContext. This requires no configuration by the developer and is well suited to a development environment. The downside is that the GlobalContext does not persist through an application restart, so this metadata is lost whenever the application restarts. In a production environment, you will likely want this data to persist; to enable that, you can configure JDBC-based database persistence for your flows. Configurations for a number of popular databases are listed in the Project Zero documentation.

  • Cleanup

    WebSphere sMash enables you to control timeouts and cleanup for the flows that you design. It's possible that your flow could be long-running and you might want an unfinished flow to stay alive for a couple of days. On the other hand, perhaps you have a very short process that you want to make sure is cleaned up within 10 minutes. It's worth taking a few minutes to explore the options for enabling, disabling, or adjusting the cleanup parameters for automatic cleanup of flows in their various states to ensure you select the right settings for your scenario.


Flow monitor

Flow support in WebSphere sMash includes a basic Web-based monitor that enables an administrator to access information about a flow in progress, such as:

  • Viewing the state of your running flow instances.
  • Viewing the values of variables.
  • Cleaning up the data from the global context.
  • Canceling a running flow.

To set up the monitor, you need to add a dependency to your application. For example, add this statement to your ivy.xml file:

<dependency org="zero" name="zero.assemble.flow.management" rev="[1.0.0.0, 2.0.0.0["/>

Having done this, you can view the flow monitor by accessing http://localhost:8080/flowadmin. Figure 10 shows output in the flow monitor after the execution of the flow.

Figure 10. Variable values on completion of capture instance flow

Visit the flow management document in Resources to learn how to configure, secure, and use the monitor tools.


Conclusion

This concludes the series on innovative uses of WebSphere sMash. Hopefully, you have seen how the simplified programming model and the well-thought-out tooling can help you rapidly create sophisticated Web applications in an iterative manner, building up from a robust service-oriented base into a set of elegant tools that you can leverage.

Our experience in using WebSphere sMash has been a highly productive one, and we have found that in each of the three projects described in this series, it has been easy to get started and then straightforward to extend our ideas as more functionality was required. It has also been refreshing to work with an environment where early prototypes can be developed into proper running applications by, for example, adding in the necessary security further down the line.

Hopefully, these articles will inspire you to try WebSphere sMash for your own projects, reproduce these experiences, and expand on these ideas for yourself.


Acknowledgements

The authors thank Steve Ims, IBM STSM, for his valuable feedback and attention to detail in reviewing these articles.

Resources
