We're just finishing up the XForms face-to-face meeting in Amsterdam. Given my prior post and the approach of XForms 1.1 toward last call status, it seemed a good idea to talk about the improvements to XForms 1.1 that make the repeat construct easier to work with.
It's about being able to say more exactly what you mean, really. For example, on my first night in Amsterdam, my hotel room became quite cool, but it wasn't clear how to turn up the heat. Apparently I'm just not old school enough to relate to a radiator, so I called down to the front desk. I was informed that in order to turn the heat up in my room I had to "squeeze the knob". On the radiator was implied. Despite being pretty good with a pun, it seems I still needed a friend to help me fully appreciate how important my complaint about the advice truly was, especially in Amsterdam. In fine propeller head form, I complained that the proper advice was to "turn" the knob. On the radiator was again implied. Oblivious to any possible alternate interpretations, I proceeded through the explanation that while squeezing the knob was a necessary component of turning it, it was also an implicit part of the process which created the friction necessary to ultimately turn up the heat. From the radiator was implied.
Anyway, the point is that a lot of trouble can be avoided when one is able to say exactly what one means to say. For XForms 1.1 authors, there are two new attributes on the insert action that make it a lot easier to manage repeat constructs. In XForms 1.0, we add an extra subelement to the container element of a sequence to act as the prototypical data. We must add the prototype as a child of the container element because the insert action in XForms 1.0 can only obtain the prototype from the last node of the sequence of items over which it operates. This limitation forces you to adjust the repeat to omit the last node, and it forces you to add application logic that removes the last node when the data is submitted, or else before it is processed on the server side.
In XForms 1.1, there is a new origin attribute on the insert action. This attribute lets you give an XPath expression that locates the prototype for insertion. The prototype can therefore be placed in another instance rather than being stored in the container element of the sequence, which eliminates the need to adjust the repeat and to remove the prototype later (it's already in a separate instance).
The second issue that we solved in XForms 1.0 by putting the prototype data at the end of the container element is the problem of the container element becoming empty. It is easy to understand the need for an empty shopping cart, but if the data container becomes empty, then the insert's nodeset attribute produces an empty nodeset, so you don't know where to insert the copy of the prototype node. To be clear, if nodeset="/cart/item" returns an empty nodeset, it's like walking off a cliff: you don't know that the XPath expression visited the cart element right before it found zero elements named item within it. So in XForms 1.0, to insert into an empty repeat, the parent of the elements being repeated must be non-empty.
In XForms 1.1, we solved this by adding a context attribute to insert. The context attribute sets the context for evaluating the nodeset attribute. The expected use of this attribute is to choose the parent or container element of the nodeset. So, when the nodeset resolves to an empty nodeset, the context node is used as the container into which the prototype node is inserted. So here's the shopping cart example in XForms 1.1.
<!-- The live data starts empty; the prototypical item lives in a separate instance -->
<xf:instance id="liveData" xmlns="">
  <cart/>
</xf:instance>
<xf:instance id="protoCart" xmlns="">
  <cart>
    <item>
      <name/>
      ...
    </item>
  </cart>
</xf:instance>
...
<!-- repeat operates over the live data -->
<xf:repeat nodeset="item" id="repeat-cart"> ...</xf:repeat>
<!-- Add new row after any current row, but do it in a way that can also handle zero rows. -->
<xf:trigger>
  <xf:label>Insert</xf:label>
  <xf:insert ev:event="DOMActivate" context="/cart" nodeset="item"
             at="index('repeat-cart')" position="after"
             origin="instance('protoCart')/item"/>
</xf:trigger>
<!-- Delete a row from the repeat. -->
<xf:trigger>
  <xf:label>Delete</xf:label>
  <xf:delete ev:event="DOMActivate" context="/cart" nodeset="item"
             at="index('repeat-cart')"/>
</xf:trigger>
To complete the example from XForms 1.0 in XForms 1.1, we should now look at an example in which you want a repeat that always keeps at least one row. If the user hits delete for the last remaining row, then the row stays, but the data is cleared from it.
<xf:trigger>
  <xf:label>Delete</xf:label>
  <xf:action ev:event="DOMActivate">
    <xf:delete context="/cart" nodeset="item" at="index('repeat-cart')"/>
    <xf:insert context="/cart" origin="instance('protoCart')/item" if="not(item)"/>
  </xf:action>
</xf:trigger>
In the final insert, we see the appearance of the new if attribute, which communicates precisely that the action is conditionally run. In XForms 1.0, you have to make an action conditional by crafting an XPath predicate that produces an empty nodeset, which is a bit like saying "squeeze the knob". It gets the job done, but it ain't pretty.
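For comparison, here is a sketch of that XForms 1.0 predicate trick (illustrative markup, not from the spec): to keep at least one row without an if attribute, you make the delete a no-op on the last row by writing a nodeset that goes empty when only one item remains.

```xml
<!-- XForms 1.0 style: the predicate empties the nodeset when only one
     item is left, so the delete silently does nothing on the last row -->
<xf:trigger>
  <xf:label>Delete</xf:label>
  <xf:delete ev:event="DOMActivate"
             nodeset="/cart/item[count(../item) > 1]"
             at="index('repeat-cart')"/>
</xf:trigger>
```

It works, but the reader has to reverse-engineer the predicate to discover that the author's real intent was "don't delete the last row".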
The final insert also shows off one of the new things decided at this face-to-face meeting of the XForms team. In the latest working draft, the nodeset binding is listed as required, but we are changing that to optional in order to make writing inserts like the final one above look more natural. The insert just identifies the container element as the destination, the origin as the source, and a condition that says when to do the insert. And now my room is too hot!
It was a great year for XForms at XML 2007. There were quite a number of presentations that featured XForms as a component, though in this blog I want to focus on the big event for XForms, which of course was the "XForms Everywhere" special session held on Monday evening.
It is no exaggeration to say it was an unqualified success. This was a 2-hour event consisting of six 15-minute presentations by some of the XForms community leaders, including Mark Birbeck, John Boyer (yours truly), Erik Bruchez, Dan McCreary, Keith Wells, and Charles Wiecha. This was followed by a 30-minute keynote from Elliotte Rusty Harold on "How XForms can Win". You can find program details here.
Naturally, we were a bit concerned that the evening slot (7:30-9:30) would test the stamina of even the most eager conference-goers, but the room was filled to capacity (about 60 chairs), and then some: roughly 20 more people stood or sat at the back of the room.
And the content was exquisite. The format of the session turned out to be a great idea. With only 15 minutes, each of us was required to refine our content to the cream of the cream. The order of speakers was defined by Leigh Klotz (Xerox) based on the most natural flow of thought about forms design, run-time case studies, general application architectures, and futures. Since I was talking about design, I went first. This created a happy coincidence because it made the most sense to start by announcing the most recent accomplishment of the Forms Working Group, which is the transition of XForms 1.1 to Candidate Recommendation. Not only did my own design-time presentation depend upon XForms 1.1, but most of the other presenters used XForms 1.1 features as well.
With such a huge turnout, it occurred to me that quite a large fraction of people might not know much of what XForms is capable of. I started thinking back to the Compound Documents workshop of 2004 where Bert Bos, the chair of the CSS working group, said "Forms, I know what forms are. Name, address, pepperoni, extra cheese. I want to talk about more sophisticated web applications. The kind that can play games. Not necessarily Doom, but you know what I mean." When my turn came to talk at that conference, I started by launching my BlackJack form, and I played a few hands. Lady Luck was with me that day, and she was with me again on the XForms evening at XML 2007. I showed that form for effect, but then I showed a much more sophisticated "Mortgage Pre-approval Form" in both English and Chinese. Next I showed how much of the complexity of that form could be seen as simple aggregation of the concepts one can find in a reasonably small purchase order form. And at last, I launched into a 7-minute demonstration of the XForms design experience for that purchase order form. You can see a video clip of that design experience here.
I enjoyed the presentations by Dan McCreary and Keith Wells because they demonstrated sophisticated applications of XForms that would be an order of magnitude harder to build with existing web application technology. During the breaks between speakers, I took the time to reiterate to the audience the business value here in terms of being able to win deals based on lower RFP bids.
I also enjoyed the two more forward-looking presentations by Mark Birbeck and Charlie Wiecha. Mark demonstrated and spoke about his experiences building a desktop application development environment called Sidewinder, which combines XForms and XHTML to allow very efficient creation of desktop applications, not just web applications. Charlie demonstrated and spoke about the applicability of the XForms architecture to creating robust mash-up applications out of components that actually are reusable.
Perhaps my overall favorite talk of the evening was given by Erik Bruchez. You can find out more about it here. Erik presented yet another application with high business value and low development effort that he created by combining XForms with the eXist database. The main point of his overall presentation was to illustrate that the XForms-based web application architecture consists of a rich client tier that can speak directly to server tier applications like databases with no coding in the middle tier. Clearly given my blog entry of Oct. 18, I couldn't agree more.
Whenever I explain that the XForms architecture is "rich client", I always stop to remind the reader that this is a mindset, not a deployment strategy. You can deploy an XForms rich client, or you can deploy XForms functionality to a thin client (a web browser only) using a server product such as the Lotus Forms Webform Server. The point is that the XForms mindset allows you to factor out the deployment issue and focus on what the application is supposed to do.
And as an IBMer, I am duty bound to my colleagues over in Information Management to point out that the database involved in the above-mentioned example could instead be the scalable and robust DB2 9.x with pureXML support. Using a package like Lotus Forms that supports the XForms 1.1 submission enhancements for web services and REST-like services, forms applications can directly consume web services set up by the database administrator without the need to bring in a Java developer to write the code in the middle. This is an important point that I'll return to very shortly in the conclusion.
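To give a flavor of those submission enhancements (the service URI here is hypothetical), XForms 1.1 lets a form read and write a REST-style service directly, with no middle-tier code:

```xml
<!-- XForms 1.1 submission: fetch XML from a REST-like service and
     replace an instance with the response -->
<xf:submission id="load-order" method="get"
               resource="http://example.com/orders/42"
               replace="instance" instance="order"/>

<!-- ...and PUT the completed data straight back -->
<xf:submission id="save-order" method="put"
               resource="http://example.com/orders/42"
               replace="none"/>
```

The form author declares where the data comes from and where it goes; the XForms processor does the plumbing.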
The final presentation was the keynote by Elliotte Rusty Harold, who is a gifted speaker and thinker. He began by explaining how hard it has been over the past 15 years to make do with today's web architecture for building web apps. He was more eloquent at saying this, but it's a bit like trying to incrementally build a 15 storey building on the foundation for a single family dwelling. Of course, the point was that XForms can do the job that today's web just can't, and it can do it really well, but... As Elliotte explains it, XForms is currently a "Cambridge" technology built by really smart people, and that it needs to build the bridge to New Jersey where all the fast-and-cheap, quick-and-dirty... and successful... technologies live.
This is a fancy way of saying that while XForms is now hardened for the most demanding of enterprise applications, more focus is needed on making XForms more accessible to the much wider audience of people who perhaps start out with simpler problems that work their way up into being the complicated beasts that become those unmaintainable piles of jumble-fuss (my word; Elliotte would've come up with a better one) that cause so much trouble today.
XForms can win, Elliotte says, if three things happen. First, native support in the browser. Second, good design tools. Third, XForms needs a killer app.
Regarding browser support, he points out that it is well known that IE will be hard, but that we should start by getting the Firefox/Mozilla XForms plugin moved into the core browser rather than remaining an 'additional' plugin. To some extent, I agree, but in the spirit of New Jersey thinking, XForms can already run in all browsers because we have server-side products that can boil it down to the HTML and AJAX that all browsers, including IE, natively understand. Elliotte felt that was a bit of a hack, but we have to remember that he's also a professor, so he sometimes falls into the Cambridge trap. Still, the point is well taken, as long as we view things in terms of transition. The server products show that XForms is a viable technology for all browsers even if you don't want to deploy rich clients. A great next step, of course, is to garner direct browser support, and we're working on it, but its absence is not impeding our ability to provide the business value of XForms to customers today.
Finally, there's Elliotte's point about needing a killer app. Let me start with a Cambridge thought: XForms doesn't need a killer app because XForms is the killer app. The first killer app of computing was the word processor; the second was the spreadsheet. In a conversation I had with Elliotte the next day, he commented that the spreadsheet was even more of a killer app because it had to move the coder out of the way in order to put the computing power in the hands of the people. Sound familiar? XForms is the killer app of Web 2.0. The New Jersey reality check is that it has to walk like a killer app and quack like a killer app in order to be a killer app. The spreadsheet only became the killer app once the people saw what they could do with it and how to do it. The 'what' and the 'how', that's what made the XForms + database presentation and the design tool presentation so important in our XForms special session at XML 2007. The call-to-arms is simple: make more what, make more how, ... make XForms Everywhere.
Well, I haven't blogged for the last couple of weeks because, as it turns out, I was swimming with the sharks!
Well, actually, I tried my best not to be in the water when a pair of hammerheads and a trio of grays came callin', but one morning my wife and I had to rush into the water to help an older man who was merrily floating along, ears below the water line and a pair of hammerhead fins vectoring his way.
Otherwise, a peaceful and restful vacation, and I'm back in the XForms saddle. And while I was away, the W3C publication process did indeed unfold as expected (with a little help from Steven Pemberton) to allow delivery of the latest working draft of XForms 1.1.
Notice that the above link takes you to the diff-marked version of the spec. It shows the differences between the current working draft and its predecessor, which was published in December of 2005. If you are familiar with the prior spec, this allows you to see the new things that have made it into XForms 1.1, as well as the changes to existing features carried over from XForms 1.0.
There are quite a few smaller tweaks here and there, but I'll focus a bit here on the big ticket items. One is the effort the XForms team has been putting into the access that XForms authors will have to event context information in XForms actions. This is going to be a very powerful addition to the language. The other major area of change involves the addition of features to the XForms submission module. The goal of these changes is to open up XForms to a wider variety of sympathetic web technologies. Frankly, there is an interplay between the changes we made to submission and to event context. Expect some further changes and additions in the next working draft.
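To sketch what event context access looks like (a hedged example based on the draft; property names were still in flux at the time), an action handler can pull information about the event that triggered it, such as reporting why a submission failed:

```xml
<!-- sketch: reading event context information inside an action handler -->
<xf:action ev:event="xforms-submit-error">
  <xf:message>
    Submission failed: <xf:output value="event('response-reason-phrase')"/>
  </xf:message>
</xf:action>
```

Before this, a form had no standard way to ask "what just happened?" when an event fired; the event() function opens that door.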
A small change that has, perhaps, the most important and far-reaching implications is the namespace URI that will be used for XForms 1.1. This is a very challenging issue, but the net is that XForms 1.1 will use the same namespace URI as XForms 1.0. Like those sharks, this really bites in some ways, but also like the sharks, it turns out to be unavoidable. Stay tuned, and I'll tell you why in an upcoming blog entry.
A lot of talks about XForms are a bit technical in nature because the people who manufacture XForms processors tend to be technical people who understand the business value of XForms in terms like "model-view-controller" architectures, a superset of AJAX, and software engineering benefits like abstraction. But the C-level executive cares not about these things. It is important to connect these technical benefits to what the C-level executive does care about.
The C-level exec is about efficiency, flexibility and accountability.
The CEO wants
- efficiency via reduced operating costs and decreased time to close deals
- flexibility to react to new business processes and changes of business partners
- accountability for control of expenditures and compliance with regulations
The CIO wants to achieve
- efficiency through end-to-end business process integration and ability to leverage business objects as IT assets
- flexibility through a malleable system architecture that can be rapidly reconfigured with replaceable, reusable components
- accountability through transaction record auditability
We need to speak about how XForms helps the organization to achieve better results along these metrics. This is where the global picture of what XForms does against the backdrop of the classic 3-tier architecture comes in handy.
The C-level exec is not up at night worrying about the server tier, where databases, content managers and workflow engines live. These may be expensive systems, but they are designed to be robust and highly scalable systems whose metrics relative to the size of the anticipated user base of a system are easier to quantify. In other words, they are low risk numbers that are easy to budget for.
The C-level exec is not as concerned about the client tier, where the browser and OS live, again because the metrics are easy and stable.
The C-level exec has the most uncertainty and risk at the middle tier. There are two parts to it. There is the part that uses APIs to talk to DBs, CMs and workflow engines to implement custom application logic as needed. This part is not as scary because we have reliable, robust, scalable, stable APIs, and because the components we're talking to are those reliable, robust, scalable, stable, expensive systems sold by companies with important people's ties to yank when there is a support problem. Then there's the scary, assembly-language-of-the-web part of the middle tier. This is the part that has to juggle a dynamic, multistep end-user experience on the anything-goes client side, where the web browsers are free and you get what you pay for. The upfront cost of the thin client is low, but the time to market with new offerings is the most highly affected because a lot of unpredictable, time-consuming work lives here.
XForms takes the risk out of this part of the middle tier in several ways.
- It allows you to standardize and consolidate the end-user experience into a single business object that represents the overall transaction.
- It allows that business object to access the web services of an SOA as a natural part of the process of going from empty transaction data to completed transaction data.
- It fully represents the transaction and therefore can function as an integral part of the records needed for auditability.
As a final thought, it should be clear from the exponential scales of complexity and cost in the diagram above that XForms-based systems deliver business value through complexity containment, and that said value is rightly reflected in software product cost.
At the specification level, the W3C Forms working group is working on modularization of XForms as one means of promoting incremental adoption. This is following the simple business axiom that "embrace and extend" is economically preferable to "rip and replace". If you have existing web applications, and you want to start incorporating selected features of XForms to get at certain benefits, it helps to be able to get at just those features without having to change your entire application over to XForms in one development cycle. Modularization supports agile development.
The Forms working group is now also working on XForms 1.2, even as XForms 1.1 continues toward full W3C Recommendation status. In XForms 1.2, the second approach the Forms working group is using to promote incremental adoption is to streamline web application authoring through emphasis on attribute decoration and exploitation of UI element nesting as a means of expressing the simpler classes of MVC applications. The attributes are attached directly to UI elements, but they imply a proper XForms MVC architecture. In other words, the XForms processor can read the attributes and auto-generate a proper model for you. To get an idea of what the streamlined vocabulary might look like, and what canonical XForms it might correspond to, look here.
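As a purely illustrative sketch (the XForms 1.2 vocabulary was still being designed, so the element and attribute names here are hypothetical), the idea is that constraints decorate the UI control directly, and the processor derives the model:

```xml
<!-- hypothetical streamlined markup: no model authored by hand -->
<input name="age" required="true()" constraint=". >= 18">
  <label>Age</label>
</input>

<!-- roughly the canonical XForms the processor could auto-generate -->
<xf:model>
  <xf:instance><data><age/></data></xf:instance>
  <xf:bind nodeset="age" required="true()" constraint=". >= 18"/>
</xf:model>
<xf:input ref="age"><xf:label>Age</xf:label></xf:input>
```

The author writes only the first form; the second is what the processor would understand it to mean.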
The approach of providing streamlined syntax is simpatico with modularization for two reasons. First, it means we can provide the streamlined vocabulary and processing model as a module separately from modules that provide the components of the core XForms engine (e.g. the instance data module, the submission module, and so forth). Second, the streamlined syntax approach means that authors can start with attribute decoration and UI nesting, and then scale up in more complex functional areas as the need arises by adopting the core XForms modules appropriate to the problem at hand.
The streamlined syntax approach is also interesting because techniques such as attribute decoration, UI nesting, and progressive elaboration of a document are familiar to AJAX developers and have been made popular by AJAX libraries such as Dojo. In fact, I am really coming to the conclusion that AJAX libraries are a viable alternative to browser-maker adoption as a means of delivering the most web-centric components of the W3C technology stack. It should not be necessary for Microsoft, Mozilla, Apple, Opera and other browser makers to build a W3C technology into their browsers before it becomes widely adopted and deployed. More importantly, this means that the W3C, not the browser makers, defines the web and its standards.
A few posts ago I promised to get back around to the discussion of the namespace and versioning of XForms 1.1.
In the early days of XForms 1.1 (late 2004), we expected to upgrade the namespace URI as the means of indicating which processor should be used to interpret the XForms vocabulary within a document. There were, after all, not just new elements but also new attributes for existing elements and in some cases slightly adjusted behaviors for the same elements.
Well, a lot can happen in 18 months! For one thing, we originally intended to publish XForms 1.1 as the upgrade to XForms 1.0, but since then we decided to focus more heavily on the excellence of XForms 1.0, resulting in the publication of a significant body of errata. Those have since appeared in the new W3C Recommendation for XForms 1.0 in March of 2006. We have since published some further errata, and I expect we will publish an updated errata list in the next few weeks. This has allowed XForms 1.1 to become much more about the new features and less about the behaviors of existing features.
The second thing that changed is that we decided that an important aspect of increasing XForms adoption was to make it easier for web content authors to write. For this reason, the HTML working group communicated the need to import the XForms module directly into the XHTML2 namespace rather than leaving it in the native XForms namespace. To accommodate this requirement, we created the notion of a chameleon namespace in the schema for XForms 1.1. This concept allows us to define XForms in the XForms namespace, but it lets a host language like XHTML more easily import the XForms vocabulary by substituting its own namespace for the default XForms namespace.
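In XML Schema terms, a chameleon module is a schema with no targetNamespace; when a host schema includes it, the included components take on the host's namespace. A minimal sketch of the general technique (file names hypothetical; the actual XForms 1.1 schema machinery is more involved):

```xml
<!-- xforms-module.xsd: chameleon module, note there is no targetNamespace -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="model"/>
  <!-- ...remaining XForms components... -->
</xs:schema>

<!-- host.xsd: the host language adopts the module into its own namespace -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://www.w3.org/2002/06/xhtml2">
  <xs:include schemaLocation="xforms-module.xsd"/>
</xs:schema>
```

The same module can thus surface in the XForms namespace, the XHTML2 namespace, or any other host's namespace without being rewritten.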
Of course, any host language that attempts such an import has to deal with possible name conflicts, and we did encounter some, like the src attribute. We had to do some jiggling of both XHTML2 and XForms to smooth the integration. This is interesting to note because you can't expect XForms to change for every host language. We did this more as a one-off for XHTML to help increase adoption. Generally, keeping XForms in the XForms namespace is preferable because it is easier for a wider array of XML-based tools (like XSLT) to find the XForms content if it stays in its own namespace. However, there is a special heritage for web content, so adding the chameleon namespace to XForms and making other technical adjustments was part of our effort to cater to backward compatibility needs and ease of authoring needs.
So, to be clear, XForms content should remain in the XForms namespace whenever possible, such as when used with host languages other than XHTML, because there are significant technical benefits to doing so.
Now, how does this relate to the versioning of XForms, or the fact that XForms 1.1 will use the same namespace as XForms 1.0? Well, along separate lines, the XForms working group had received a number of communiques indicating that the change of namespace made it harder to write software that processed XForms content in ways that were not necessarily affected by the vocabulary changes. Basically, the change of namespace was viewed as costly to the community. While mulling over this feedback several months ago, I realized that people who were dissatisfied with our change of namespace URI between XForms 1.0 and XForms 1.1 could simply use the chameleon namespace feature to put the XForms 1.1 markup into the XForms 1.0 namespace. They didn't have to use the feature to put the markup in the namespace of a host language.
This thought was very good news for solving the problem of those who wanted us not to change the namespace of XForms in 1.1. Since the chameleon namespace feature could defeat the intent to use the namespace URI to control the language version, clearly we had to change to versioning 'within' the XForms vocabulary. Hence, you can see in the latest working draft of XForms 1.1 that we now have reverted to the one and only XForms namespace URI originally allocated in 2002, and we have added an internal version attribute on the XForms model, defaulting to 1.0, to indicate the version of the language.
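Concretely, a sketch of what that looks like (hedged against the working draft of the time):

```xml
<!-- XForms 1.1 markup in the original 2002 namespace; the version attribute
     on the model, not the namespace URI, signals the language level -->
<xf:model xmlns:xf="http://www.w3.org/2002/xforms" version="1.1">
  <xf:instance>...</xf:instance>
</xf:model>
```

Processors that only understand 1.0 can detect the mismatch from the attribute, while all tooling keyed to the namespace URI keeps working unchanged.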
Well, no blogs from me next week, so seems a good idea to knock off another one this week...
The XML namespaces rec (pardon the pun) states that an attribute which is namespace unqualified is 'local' to an element and is uniquely identified by a combination of the attribute local name and the type and namespace URI of the containing element.
The word identify really should be used more sparingly, as here is a case where its misuse has caused years of confusion and acrimony in the XML community. An identifier is something that establishes the identity of something else. You cannot have two things associated with the same identifier unless they are identical things.
I am often frustrated by seasoned W3C folks who say that "this depends on your definition of identify" and honestly believe this is a defense of the confusion in the namespaces rec. This is like saying to me, "Well you're right unless you have a definition for identify that doesn't identify things, which is what we did at the W3C."
To see the problem, you have to look earlier in the spec, where a prefix n1 and the default namespace are both bound to the same URI, and then the following code appears:
<good a="1" n1:a="2" />
The problem is that the element 'good' is in the same namespace as the one bound to n1. So now you have a local attribute a that is essentially given meaning by an element residing in the very same namespace as the 'global' attribute n1:a. And yes, the two attributes are allowed to have different values.
From this we have to infer that, although a local attribute is given meaning based on the containing element and its namespace, a global attribute with the same local name and namespace qualified into the same namespace can actually mean something totally different.
In other words, we have two attributes with the same local name, contained by the same element, and given meaning by the same namespace URI, but they are not identical. This is the local attribute not being 'identified' (in my sense) by local name and containing element type and namespace.
The technically subtle W3Cer will tell you that there was no reason to spell out, in words in the normative part of the spec, the fact that the two attributes are different things, because the spec says that local attributes are in a different partition than global ones. The problem is, this partitioning info is in a non-normative part of the spec, and in particular in the same part that has the language about how local attributes are 'identified' by local name and containing element type and namespace.
Anyway, the upshot is that an XML vocabulary does not need to but is allowed to say that local and global attributes with the same local name can mean different things. If they're supposed to mean the same thing, then the XML language has to define a precedence rule for what happens if the two attributes differ in value. Here's an example:
<data xmlns="http://example.org" xmlns:ex="http://example.org">
  <price currency="USD" ex:currency="EUR">10</price>
</data>
Question: Is the price in USD or Euros?
Answer: Depends on who designed the language.
Second answer: Don't do that.
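To make the ambiguity concrete, here is a small Python sketch using the standard library's ElementTree, which follows the namespaces rec: the unprefixed attribute carries no namespace, so it survives as a separate attribute alongside the prefixed one, even though the default namespace and the ex prefix are bound to the same URI.

```python
import xml.etree.ElementTree as ET

# The example above: default namespace and the ex prefix are bound to
# the *same* URI, yet the local (unprefixed) attribute and the global
# (prefixed) attribute remain two distinct attributes.
doc = """<data xmlns="http://example.org" xmlns:ex="http://example.org">
  <price currency="USD" ex:currency="EUR">10</price>
</data>"""

root = ET.fromstring(doc)
price = root.find('{http://example.org}price')

# ElementTree keeps the local attribute under its bare name and the
# global one under Clark notation ({uri}name) -- two separate keys.
assert price.attrib['currency'] == 'USD'
assert price.attrib['{http://example.org}currency'] == 'EUR'
```

So the parser happily hands an application both values, and only a language-level precedence rule can say which one wins.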
Interestingly, the XHTML working group came up with a fascinating example where it is legitimate and sensible to have a global and local attribute with the same local name but completely different meanings. It has to do with the next version of XML events. After hours of discussion, the decision was that they weren't going to do that (second answer above). Not because it's illegal, but because it's too subtle for most XML people.
Well, the XHTML group may change their minds, but even if they don't, the example is really worth understanding because it actually makes sense why you'd want the two attributes to mean something different. Stay tuned, I'll tell you all about it when I get back...
A new release of our forms software, version 4.0, is now only a few weeks from shipping. And as of this release, the product line will be known as IBM Forms! This is an incredibly important indicator of the strategic value IBM sees in the Forms business as a key component in building a Smarter Planet. The feature set coming in this new major release combines the best of Web 2.0 client application behaviors and design experience with the traditional strength of interactive XML data collection for which the prior releases of our product line are well known. IBM Forms documents are interactive web application instances that have many traditional capabilities that we have been building into them since 1993.
- They can always be serialized to provide a user with their own saved copy of their web application experience, to archive, to email, or to pass to the next step of a business process or workflow. They can be reinstantiated at any time to continue the interactive fill experience.
- They can be instantiated directly into a web browser without using a plugin, and they can also be instantiated in a client application for an offline fill experience, dramatically simplifying IT maintenance and platform support.
- They provide multipage, high-precision visual layout with comprehensive accessibility and localization support.
- They operate over XML payloads that directly conform to the needs of back-end business functions, enabling straight-through XML processing systems.
- They enable rich web application interaction and behaviors based on the W3C XForms 1.1 standard.
- They allow users to attach supporting documents into the form in support of their business function, including spreadsheets, word processing files, images, videos, etc.
- They secure the business function implemented by the form with a comprehensive interactive document digital signature system that includes sectional signing, multiple signatures, and overlapping signatures, all of which protect not just the form data, but the form behaviors, the form appearance and the attached documents as well.
Now let's add to that what's coming in the new release:
- The ability to drive customer satisfaction and employee productivity through more efficient, compelling interfaces that can include
- gradients, border styling, thematic color control, combined text and images
- AJAX-driven XML web service updating without web page refreshing
- formatted HTML text for bold, italic, underline, text colors, font control, lists, hyperlinks, etc.
- Embedded custom HTML extensions and popup dialogs that can show maps or movies or even interact with the running form.
- The ability to deliver integrated solutions faster and spend less time and money maintaining them through a comprehensive new set of Design Experience features including
- Form Parts that make it easy to reuse and maintain common components consisting of one or many user interface controls and corresponding behavioral markup
- Data-driven form design with XML schemas that define back-end data formats; data-type-sensitive automatic user interface creation, including tables and label annotations
- Web service integration wizard that provides point-and-click, drag-and-drop web service consumption and user interface mapping.
- The "wizard of wizards" that enables rapid development or addition of a step-by-step wizard experience for a form
- SVN source code repository interaction, Integrated Design Time spell-checking, etc.
- The ability to use your Forms solutions how you want and where you want, including
- An automatic forms view portlet and support for Portal 7.0 and Mashup 2.0
- Updated browser and OS support and improved integrations
- Improved workflow system integrations with FileNet and WebSphere BPM
- Webform Server support for the iPad
The real achievement here is not just these new features, but getting them to work well with all the traditional features. IBM Forms 4.0, coming soon to a Smarter Web Application near you.
The title of this blog has changed to begin "IBM Lotus Forms..." yet many of the prior entries on this blog refer to "IBM Workplace Forms...". This is because, as of the next major release, Workplace Forms is transitioning to "Lotus Forms".
To me, this transition positions the Forms product line squarely among the top-of-the-heap offerings by the Lotus division. Based on the overall technical direction announced at Lotusphere with Quickr, Connections, Expeditor, and Notes, as well as the clear place that Forms has in that technical direction, you can see that great things are coming for IBM Lotus Forms.
In "enterprise" service-oriented architectures (SOAs), XML is a clearly entrenched format for server-to-server communications. But there's a movement afoot to define a "Web SOA" as a separate entity that describes server-to-client communications... based on JSON as the format.
The rationale is that JSON parses "a hundred times faster" in a web browser than does XML. This is like the tail wagging the dog. It is a myopic, accept-the-status-quo approach to web application computing, and it just won't do.
First of all, the difference between parsing different serializations is negligible (when done right), and nodes are nodes are nodes, so if there is a 100-fold difference, it is because one parsing method is direct and the other is doing something phenomenally suboptimal and probably easy to fix. One way to solve this is to alter all the system architectures on the web and introduce the complexity of a data format transformation to cross the illusory chasm between a "Web SOA" and the "Enterprise SOA". Another idea would be to put the pressure where it rightly belongs: on web browser makers to implement efficient XML processing.
Particularly as vertical industry standardization continues, such as is happening in both insurance and healthcare, this latter solution is really the only viable one in the long term. XForms technologies like Lotus Forms and Ubiquity XForms wrap a logical client application around the exact XML that is needed by the server side because this is the best way to minimize system development and maintenance costs. Once there is a defined XML data model, there should not be some other second data model introduced into web applications for some reason as vapid as poor implementation of the required feature.
This conversion is not a trifling matter either; have you seen how ugly the JSON is that is actually capable of preserving XML fidelity (i.e. supporting round-tripping)? Trying to build a reasonable client-side application around that JSON is not nearly such a pretty sight as when the data representational limitations of JSON are accepted in an application. If the full representational power of XML Schema is used to define an application data model, then we need to be able to carry that out to the client without significant alteration so that we can make references to the data transparently, not through the lens of some horrendous transformation. And make no mistake, XML schemas for vertical industry standards are full-featured indeed.
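To make the contrast concrete, here is a small hand-built illustration; the element names are invented, and the JSON shown (in a comment) uses a JsonML-style array convention, which is just one of several possible round-trip mappings:

```xml
<!-- A natural XML fragment from a hypothetical industry payload -->
<policy effective="2009-01-01">
  <insured>Jane Doe</insured>
  <coverage type="auto">50000</coverage>
</policy>

<!-- A round-trippable JSON encoding must preserve element order and
     attributes, so it ends up looking something like:

       ["policy", {"effective": "2009-01-01"},
         ["insured", "Jane Doe"],
         ["coverage", {"type": "auto"}, "50000"]]

     versus the "natural" but lossy
       {"insured": "Jane Doe", "coverage": 50000}
     which discards attributes, ordering, and namespace information. -->
```

Client-side code addressing the faithful encoding has to navigate positional arrays rather than named properties, which is exactly the kind of horrendous lens described above.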
Finally, I would be interested in hearing about any research into whether the overall throughput of a web application is increased or decreased by the introduction of JSON down to the client. It's nice that parsing gets a hundred times faster on the client, but when your server is trying to handle more than a hundred concurrent users, and the server has to translate from XML to JSON (and back again), then it sure seems like the use of JSON is pushing processing burden in the wrong direction.
We usually think of a document as a file. A file is, at the operating system level, a unit of information. The OS can associate properties with it (like who can read it or write it), and the file can easily be moved from directory to directory or computer to computer with standard tools like FTP, email, or flash drive.
The XML recommendation defines a document as that which conforms to BNF rule #1 of the XML syntax. Of course it is useful if an XML document is the sole content of the unit of information being transferred, i.e. if the XML is the only thing in a file. This is because XML tools tend to be built with programming languages that expose services of the operating system.
However, the most important part of an XML document is capable of being contained within another document. In fact, XML canonicalization strips away the part of an XML document that would prevent the result from appearing as element content.
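A tiny worked example (the document content is invented) shows what stands in the way:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- A standalone XML document: the declaration above may appear only
     at the very top of an entity, so this document cannot be pasted
     into another document as-is. -->
<order id="42"><item>widget</item></order>
```

Canonicalization drops the declaration (and any DOCTYPE), leaving just `<order id="42"><item>widget</item></order>`, which can legally appear as the content of an element in a larger document.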
This is the idea that XForms uses when it expresses XML instance data within a larger document, and specifically within the <xforms:instance> element. Of course, the <xforms:instance> element can also reference a whole XML document using the src attribute, but in both cases, the result is parsed into the live running instance data. The user interacts with the XML data document, and the XML data document is what is returned to the server by an XForms submission.
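Both styles can be sketched as follows; the purchase vocabulary is invented for illustration, and only the model portion of the containing document is shown:

```xml
<xforms:model xmlns:xforms="http://www.w3.org/2002/xforms">
  <!-- Inline: the XML data document is embedded as element content -->
  <xforms:instance id="inline">
    <purchase xmlns="">
      <item/>
      <quantity/>
    </purchase>
  </xforms:instance>

  <!-- By reference: the data document lives in a separate file -->
  <xforms:instance id="external" src="purchase.xml"/>
</xforms:model>
```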
This paradigm works well enough if you think of a form as an ephemeral view of a back-end database or content repository. However, people find it easier to work with a document in the file sense of the word. We like to save our documents, email them, and reload them the next day or on a new computer, especially on a computer that has become disconnected from the net or from the original web application context that would be associated with the ephemeral-view kind of form.
Enter the document-centric approach of Workplace Forms, the technological basis of which is an XML document whose element names belong to the XFDL and XForms vocabularies. Moreover, the underlying XML data document is still incorporated by the <xforms:instance> element, just like in all other XForms-based documents.
A key difference is that being a document implies that the form will eventually have to be re-serialized, e.g. to save or sign the form. XForms doesn't really talk about how to reintegrate the live, modified XML data back into the document, but in truth it's really not that hard. The main technical hiccup on the XForms side is that the src attribute takes precedence over the content of <xforms:instance>, so when the live data is synchronized back into the form, the src attribute has to be eliminated.
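A before-and-after sketch of that synchronization (file name and data invented):

```xml
<!-- As served: the instance is loaded from an external data document -->
<xforms:instance id="data" src="purchase.xml"
                 xmlns:xforms="http://www.w3.org/2002/xforms"/>

<!-- As re-serialized after user edits: the live data is inlined, and
     the src attribute is removed because it would otherwise take
     precedence over the saved content the next time the document
     is opened -->
<xforms:instance id="data" xmlns:xforms="http://www.w3.org/2002/xforms">
  <purchase xmlns="">
    <item>widget</item>
    <quantity>3</quantity>
  </purchase>
</xforms:instance>
```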
The conclusion, though, is that a Workplace Form, or XFDL document, is a cross between an office document and a mobile intelligent agent. It can be saved, reloaded, printed, emailed, archived, signed, and submitted to web application and portal servers, and it can participate in document management, records management and content repository processes. This is how the notion of document makes a form more than just an ephemeral view of data.
A Lotus Form is a document currently expressed in an XML vocabulary called XFDL (html version, pdf version).
This is significant enough, in and of itself, to be set off in a paragraph. XML is a de facto industry standard from the W3C, and this means that widely deployed, interoperable, industry standard software toolkits can be used to introspect and manipulate the content of XFDL, i.e. Lotus Forms documents, thus mitigating issues of vendor lock-in.
But the story gets better...
The XFDL format internally employs W3C standard XForms, so without further reference to any vendor-specific documents, the standard indicates where in the XFDL document to look for the data content created by an end-user who fills in the Lotus Forms document. Anyone with access to a Java reference manual could write code to prepopulate data into an XFDL (Lotus Forms) document before it is delivered to an end-user, and they can write code to extract data from the document when it is returned to the server for processing.
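In skeleton form, the shape looks roughly like this; the XFDL container elements are simplified (namespace declarations omitted) and the application data vocabulary is invented, but the key point is that the end-user data is just the subtree inside <xforms:instance>:

```xml
<XFDL>
  <globalpage sid="global">
    <global sid="global">
      <xforms:model xmlns:xforms="http://www.w3.org/2002/xforms">
        <xforms:instance>
          <!-- Prepopulate this subtree before serving the form, or
               extract it when the completed form returns to the server -->
          <application xmlns="">
            <name>Jane Doe</name>
            <amount>100</amount>
          </application>
        </xforms:instance>
      </xforms:model>
    </global>
  </globalpage>
  <page sid="PAGE1">
    <!-- visual form items bound to the data above -->
  </page>
</XFDL>
```

Any standard XML toolkit can locate and rewrite that subtree; nothing vendor-specific is needed to do it.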
But the story gets even better...
XFDL incorporates the W3C XML Signatures standard, so widely available industry standard tools are available from multiple vendors for validating the security of digital signatures in XFDL documents. In addition to the Apache XML Security implementation, it is notable that Java itself now natively contains support for XML Signatures (JSR 105).
These standards mean that the entire server-side lifecycle of the Lotus Form can be achieved without being locked into using any particular vendor's API. Any application server, any portal server, any server-side environment that can receive HTTP POSTs and process XML in the POST data can be used in combination with Lotus Forms.
And if you ever hear a vendor try to play up high availability of a client-side browser plugin for processing their favorite file format, ask them "Plugin?? What plugin??" With Lotus Forms, you don't need a browser plugin at all because the Lotus Forms Web Form Server converts the XFDL into dynamic HTML that can be processed directly by the browser with no plugins at all. Best of all, when the user finishes interacting with the Lotus Form, the resulting XFDL document is delivered to the application server endpoint, where it can be processed as XML using the standard APIs.
Well, my friends, it seems a good time to draw your attention to a collection of available resources that can help you understand how you and your customers can address a full spectrum of computing requirements, from the enterprise information application all the way down to the simple design-deploy-collect-analyze pattern for situational applications that arise throughout the enterprise.
First of all, check out the Lotus Forms Enablement videos on YouTube. There are videos that show Lotus Forms in action in government, financial, and HR scenarios where enterprise-level functionality is a must. But you can also see videos on the new Lotus Forms Turbo product, which enables business users to design and deploy simpler survey style forms for themselves in a matter of minutes.
I want to make a pitstop here to plug the value of the XForms standard in creating this array of products. XForms as a language has a strong declarative component. This means that non-trivial relationships between data and sophisticated interactional and presentational objects are simply declared by the form developer, and it is up to the form run-time system to enforce the relationships. This maps very naturally to how we human beings think and work. We say things like "Whenever I get an email from domain X, put it in my super special high priority folder". We don't directly write batch jobs that run loops over our inbox to categorize our email, and even if we did, the time at which those jobs would run would be specified declaratively! In fact, I don't think it's hyperbole to say that the universe itself operates mostly on declarative laws. Gravity just happens. If you let go of something, it drops. It's the law; get used to it ;-).
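As a concrete (if simplified) instance of this declarative style, here is what a calculated relationship looks like in XForms markup; the order vocabulary is invented:

```xml
<xforms:model xmlns:xforms="http://www.w3.org/2002/xforms">
  <xforms:instance>
    <order xmlns="">
      <quantity>2</quantity>
      <price>10</price>
      <total/>
    </order>
  </xforms:instance>
  <!-- Declared once; the runtime recomputes total whenever quantity
       or price changes, with no event handlers or loops to write -->
  <xforms:bind nodeset="total" calculate="../quantity * ../price"/>
</xforms:model>
```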
The point is that the declarative language constructs of XForms are what enable Lotus Forms to accelerate enterprise-level application design, to enable end-user design of simpler applications, and to ensure interoperability between these two types of applications so that the simpler situational applications can be scaled up to the enterprise level as their value becomes more broadly established. The declarative constructs of XForms accomplish all of this due to the simple fact that it is far easier to create software that maps point-and-click/drag-and-drop design experience gestures onto high-level declarative language constructs than it is to map the same things onto huge blobs of imperative scripted code from a general purpose programming language.
And don't just take my word for it. When you finish watching some of the videos at the YouTube link above, how about going for a test drive? You can get free trial downloads of Lotus Forms, including the client-side runtime and design environments. If you like the enterprise-level capabilities, but you don't want to deploy a client-side runtime, you can also purchase the Lotus Forms Web Forms Server, which translates Lotus Forms into standard web pages. But once you're in test-drive mode with the client-side runtime viewer, check out the Lotus Forms catalog of sample forms. Finally, you can also test drive the new offering for non-technical users directly, without having to install a thing, by visiting Lotus Forms Turbo on Greenhouse, or if you'd prefer to test drive it in-house, then just download it from the free trial downloads page.
Either way, once you've gotten Turbo, watch a couple of the videos, then think of your own simple survey app. Design and deploy it right onto Greenhouse and see how long it doesn't take!
Despite all the excitement about the possibilities that HTML5 offers for a better world wide web, I believe there is a critical flaw in the go-forward plan of those who are feeling the momentum. The problem is that they still haven't got ~80% of the web browser makers on board! By which I mean they haven't got you-know-who.
There's really only one way to break the logjam and move forward with advancing the state of the web. We need to get you-know-who out of the way of progress by showing that we can innovate on the web with or without their participation. We have now shown that this is feasible by way of the ubiquity strategy, based on our ability to deliver an interactive web application language in multiple web browsers with no client-side plugins and no server-side processing of the language. Click here to visit the project.
For all web advancement technologies, including HTML5, the longer term effect of the ubiquity strategy should be that the late bloomers are shamed into implementing the advancements, thereby obviating the need for the ubiquity code. But meanwhile the ubiquity strategy is the way to enable web advancement technologies including HTML5 to emerge (in the proactive verb sense of the word).
How would you like to be able to construct, deploy and get results from IT solutions using only your web browser?
Don't believe me? Well, how about coming to the IBM Forms wiki, where you can watch a few short videos that show you.
You'll be intrigued and want to go the next step. One of the prominently available wiki pages is a community article that gives you a starter pack of prebuilt solutions like the ones you see in the video. You can download any one or all of them because they're just single files that describe the forms, access control, workflow stages and other resources of each solution. You can import any of them into your own IBM Forms Experience Builder server, and then deploy them, use them, get results from them, and of course edit them to see how they work or to change them and redeploy them. All from your web browser.
Don't have an IBM Forms Experience Builder server to try it out? Well, now we've gotten to the main topic of this blog article. You can get your own free public access to an IBM Forms Experience Builder server. You can try out any of these starter pack solutions as well as build and deploy any of your own solutions.
Since you will be a builder of forms experience solutions, we will need to be able to present your solutions to you, distinguished from everyone else's solutions. So, you'll have to start by registering yourself with the system that hosts the IBM Forms Experience Builder server. The system is called Lotus Greenhouse, so click the link and then choose "Sign up" to get your account.
Once you're able to log in to Greenhouse, you'll get access to a number of software products including IBM's social business software (Connections), IBM WebSphere Portal Server, and of course IBM Forms Experience Builder. However, you don't really need to log in to Greenhouse and navigate the menus to IBM Forms Experience Builder when you can just bookmark the direct link to IBM Forms Experience Builder on Greenhouse.
Once you log in with your Greenhouse user id and password, you'll see the "Manage" solutions page, which lists all of the Forms Experience Builder (FEB) applications that you have designed. This is the page that gives you the ability to create a "New Application" or "Import" one of those starter pack applications, all at the press of a button.
So, try out and evaluate IBM Forms Experience Builder now to see for yourself that there really is a smarter web where you can construct valuable solutions without coding. If you are building IT solutions for your organization, you owe it to yourself to see how much more effective you'll be at satisfying your organization's IT solution demands. But even more importantly, if you're competing for IT solution services contracts, you owe it to yourself to become an IBM business partner or to expand your partnership to include IBM Forms Experience Builder. And finally, if you like to build industry-specific data management products, then you should consider becoming an IBM value-added reseller (VAR) so you can build your products more efficiently with IBM Forms Experience Builder and go to market with IBM to sell the bundle. In all these cases, you now have the access you need above so you can learn more and get started today.
Modified by John M. Boyer
In a recent video interview, the IBM CEO Ginni Rometty comments that Watson 2.0 will understand images that it sees, and that Watson 3.0 will be able to debate, i.e. to understand what it is talking about with another party. It is an impressive roadmap; each of these is an incredible leap forward from its predecessor.
It is, however, worth qualifying the term 'understand'. It is being used figuratively, not literally, to communicate the rough order-of-magnitude improvement in capability. When such a leap is made, it seems analogous to sentient understanding, even though it isn't. Imagine for a moment what Archimedes would have thought at first of a hand-held calculator, given that he had only the numerals of his day with which to calculate pi to several digits. And yet, we would not now interpret such a device as artificial intelligence. As soon as the mechanical nature of a level of capability becomes clear, so too does the fact that it does not constitute sentient intelligence (Hofstadter's exposition of Tesler's "theorem").
You can see this assertion play out in multiple levels of Bob Sutor's scale of cognitive computing. There are levels that are clearly not cognitive intelligence, as Sutor points out, but if you lay out the scale on a timeline of decades or centuries, it is clear that each level might once have been interpreted as being indistinguishable from magic.
So where on Sutor's scale is Watson? And what implications does that have for development best practices?
Watson is clearly not on the "Sentient (we can do without humans) systems" level. As sentient beings, we don't just know things with a certain calculated accuracy or confidence level, or determine that we don't know if our confidence is low. We experience desire to know more, and we experience fear of the unknown. We are teetering bulbs of dread and dream (Hofstadter's delightful invocation of a Russell Edson poem). I urge you to let that characterization of us sink into your mind. In Watson technology, IBM has modeled a certain class of knowledge and mechanical reasoning, and in other research, IBM is doing so by simulating some of the known structure of biological brains. However, we don't yet know how to model fear and desire, dread and dream. In my opinion, these are inextricably bound together in sentient intelligence, separating it from simulated intelligence. In other words, intelligent behavior is a construct that works for the dread and dream engine of the sentient, and in the absence of dread and dream, seeming intelligent behavior is but a mechanical simulation of understanding. As an aside, I hope we only manage to model desire and fear around the same time we figure out how to model ethics (as Asimov cautions).
Does this characterization of Watson as a mechanical simulation of understanding detract from its value? Does it detract from the order-of-magnitude improvement it heralds as an usher of the era of cognitive computing? Of course not; quite the opposite. It is simply fantastic that this level of "Learning, Reasoning, Inference Systems" (Sutor's scale) is now computationally and economically feasible at the scale needed to help sentient intelligence (that's us) solve real world problems. Quick, what is the square root of 7? Can't do it? No problem. Even if you're Arthur Benjamin, you'd be better off just hitting a few keys on a calculator. Quick, what are the most likely diagnoses for the patient's presenting symptoms? An "expert advisor" like Watson can be just what it takes to help determine the next best action, especially when time is of the essence because a life hangs in the balance.
The term "expert advisor" is appropriate. It conveys that the system is a "Learning, Reasoning, Inference System" that does not have sentient understanding and is therefore made available to advise and guide the actions of an expert. This is analogous to the way spreadsheets guide the results reported by accountants and chief financial officers. That being said, we also know not to put spreadsheets in the hands of toddlers. From a development practice standpoint, it is crucial to keep in mind that "expert advisor" means that the deployed system should be advising someone who is a qualified expert in the exact domain in which the "expert advisor" system was trained. Especially when a life hangs in the balance, access to the "expert advisor" system needs to be performed by those with expert qualifications in the domain because only they can reasonably be expected to use sentient understanding to interpret and follow up on the advice. In other words, the term 'expert' in 'expert advisor' should apply to the user more so than the advisor.
Now, given an enterprise workforce of those with qualified sentient understanding of their topic areas, Watson-style expert advisors are just the type of technological advancement that will help them work smarter, not harder, to meet the needs of customers and colleagues and to produce a competitive advantage for the business.
The upcoming release of Workplace Forms 2.7 is very exciting for our team due to performance improvements across the board as well as a wealth of improvements to the design time experience. However, the Workplace Forms product line has also benefited from some refinements to the XFDL and XForms layers, and that's what this blog entry is about.
The focal point for the XFDL presentation layer changes has been higher precision layout of tables. The form author can now control the amount of padding between table "rows" to the point where one can essentially create a grid appearance in which there are absolutely no unused pixels. Of course, the use case is not only to be able to achieve zero unused space, but to have fine-grained control that allows the form author to get down to zero or to leave a little bit of breathing room for effects like row highlighting. Finally, XFDL now has the ability to create collapsible content such as "details" panes as well as collapsible table columns.
At the XForms level, more of the features from the XForms 1.1 last call working draft are now supported. The focus was on simplifying the handling of repeated data, which again is aligned with the focus on tables at the presentation level. The origin attribute is now supported on the insert action, which means that the prototype for repeated data can be copied from any instance of the XForms model. So, rather than copying an existing row of data then running a pile of setvalue actions to clear it out, the insert action can simply grab a new empty prototypical row of data, which means forms can be faster and take up less memory. The context attribute is also supported for insert and delete actions, which makes it easier to implement a table that is allowed to be empty. Finally, the while attribute is now supported for all XForms actions and sets of actions (the if attribute was already supported). This allows lots of data processing abilities for repeated data, including more complex results obtained from web services.
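A sketch of the new style follows; the instance names and row vocabulary are invented. The empty prototype row lives in its own instance, origin copies it into the main data, and context lets the insert land in the right place even when the table is empty:

```xml
<xforms:model xmlns:xforms="http://www.w3.org/2002/xforms">
  <xforms:instance id="data">
    <rows xmlns=""/>   <!-- the table data, which may start out empty -->
  </xforms:instance>
  <xforms:instance id="proto">
    <rows xmlns=""><row><name/><qty/></row></rows>
  </xforms:instance>
</xforms:model>
<!-- elsewhere, in the user interface -->
<xforms:trigger xmlns:xforms="http://www.w3.org/2002/xforms"
                xmlns:ev="http://www.w3.org/2001/xml-events">
  <xforms:label>Add row</xforms:label>
  <xforms:insert ev:event="DOMActivate"
                 context="instance('data')"
                 nodeset="row" at="last()" position="after"
                 origin="instance('proto')/row"/>
</xforms:trigger>
```

Because the prototype never appears in the main data, the repeat can bind to all the rows, and no server-side cleanup of a dummy last row is required.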
And last but not least, we made some small but meaningful additions to the language that were implemented across the product line but primarily motivated by the desire to make XFDL forms fit better into a corporate web site when the form is delivered via conversion by Web Form Server. These include additional styling for XFDL button items to help them look and feel more like anchor links, and the ability to hide the usual form toolbelt (which has open, save, and other form operation buttons on it). Hiding the toolbelt is useful when the form is intended to be in a portlet or a component of a larger web application user interface.
I am proud of every release we do, and yet as each release nears, I just can't wait to get it out there into your hands because of how much better it will make your life developing forms applications. Once again, Workplace Forms 2.7 is going to be that kind of release.
Last blog I told you how you could get a "Greenhouse" account that gives you access to a cloud-hosted build-without-coding environment for constructing and deploying data-centric IT solutions. The product is called IBM Forms Experience Builder (FEB), and I want to spend some time on this blog giving you a better sense of the solutions you can create.
The solution I'd like to cover is a bit more advanced than a "Hello, World" solution; maybe I'll do one of those in the next blog. But the first thing I wanted to be able to do was use my account to enable myself to share solution files with you over time. Every FEB solution has a body of code that represents it. You can export the serialization of any solution as a file so that it can be imported into another account or onto another FEB server. So, I wanted to create a FEB solution that allowed me to share FEB solution files with anyone. I created that "Simple File Sharer" solution and then used it to share the "Simple File Sharer" solution file. You can get the file here (requires Greenhouse account login).
Once you have the solution file, you can import it into your own FEB account within Greenhouse and then use it to share files, especially any sample FEB solutions.
The Simple File Sharer solution uses ordinary features in the form (user interface). I just dragged and dropped a table item, and then put a name, description and upload component into the table. I also dragged and dropped a textual label so I could provide some basic instructions for users of the solution.
The Simple File Sharer solution uses some more interesting FEB capabilities to implement behavioral features around the form user interface.
- I used the Access panel to indicate that only one user, yours truly, could create a new database record or update a database record.
- Often, once a user has used a Form to create a database record, subsequent visits to that Form should allow the user to update the record rather than creating a new record in the database. An example would be a vote or a survey response. You don't want the same person voting or answering the survey more than once. In my case, I wanted to restrict myself to one database record because I always want to update a single list of shared files, not create a new list of shared files each time I fill out the form. Regardless of your use case, you can do this by going to the Forms tab, clicking the properties of the form, going to the Advanced tab and then checking "Limit to single submission per Authenticated user".
- I used the Stages panel to add an Update stage to the solution so that database records can be updated after their initial creation.
- I used the Access panel to add a "Reader" role, and I assigned "all authenticated users" to that role. Then, in the Update stage, I gave only Read access to the Reader role. So, as soon as you do a Greenhouse account login, you can use the link I provided to read from the database record I created that includes the table of files that I've shared.
In conclusion, the Simple File Sharer solution may only allow me to create a record, but it still ends up demonstrating a number of FEB features that you would commonly use in solutions that allow multiple users to create and edit records. In the future, I'll share more solutions with you that highlight various FEB features that contribute to creating interesting solutions. Remember, it's not about just forms; it's really about the whole solution you can wrap around forms, including automatic database storage, access control, lightweight workflow stages, and even configuring web services, all without coding. And that's what makes IBM Forms Experience Builder a platform-as-a-service capability that everyone from line-of-business users up to IT professionals can appreciate!
When I started on Java Server Pages (JSP) as a topic, I had intended it to be a blog topic. But it grew quite beyond blog size, so now that the technical work is finished, I can give you the meta-level view of using JSP with Enterprise IBM Forms.
The work I'm telling you about here is intended to make it easy for you to exploit the powerful, simplifying JSP technique within the XFDL+XForms markup of IBM Forms documents. It took some work to sort it all out, but with that done, it is easy for you to replicate what I did and gain the benefits. I wrote this wiki page on the IBM Forms product wiki to help you get set up, and the page references the developerWorks article I put together to show how to use JSP in your XFDL+XForms forms.
It was pretty challenging to get the JSP to talk to the Webform Server Translator module, so I was quite happy when that started working for me. It's one of those cases of only needing a line or two of code, but it being really hard to get exactly the right line or two. As Mark Twain once said, it's like the difference between the lightning and the lightning bug. Anyway, now that we know the smidge of code, it's easy for you to copy and use in your XFDL-based JSPs.
At first I thought, OK, I have a good blog topic, but then I realized we weren't covering the full Forms information lifecycle. Put simply, a form is possibly prepopulated and then served; it collects data; and then it comes back and you have to do something with the data collected. So it was back to work, sorting out how to receive a completed form into a JSP and use its values in JSP scriptlet code that helps prepopulate the next outbound form. This was a fair bit less challenging, as it maps very closely to how you start up the IBM Forms API in a regular Java servlet. Remember, JSP is just a convenient notation that the web application server knows how to turn into a Java servlet. JSP just makes it easier for you to focus on your special sauce application code.
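To make the shape of that receiving step concrete, here is a minimal sketch in plain Java, the kind of code that would sit in a JSP scriptlet or the servlet it compiles to. Everything here is illustrative: the element names and the `readField` helper are my own, not the IBM Forms API, and in a real JSP the posted XML would come from `request.getInputStream()` rather than an inline string.

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

/**
 * Minimal sketch of the server-side half of the lifecycle: receive a
 * completed form's XML instance data and pull out values to prepopulate
 * the next outbound form. Names are illustrative, not the IBM Forms API.
 */
public class FormDataReader {

    // Parse the posted XML and read one field by XPath.
    public static String readField(String formXml, String xpathExpr) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(false); // illustrative; real instance data is namespaced
        Document doc = dbf.newDocumentBuilder()
                          .parse(new InputSource(new StringReader(formXml)));
        XPath xpath = XPathFactory.newInstance().newXPath();
        return xpath.evaluate(xpathExpr, doc);
    }

    public static void main(String[] args) throws Exception {
        // In a JSP, the XML would come from request.getInputStream();
        // here we inline a tiny example instance.
        String posted = "<purchase><approver>Jane Smith</approver></purchase>";
        System.out.println(readField(posted, "/purchase/approver"));
    }
}
```

Once a value like this is in a Java variable, a JSP expression can write it straight into the instance data of the next form being served.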
Well, now that I could handle the whole Forms information lifecycle, I realized I hadn't covered the software development lifecycle. Back to the salt mines again. The problem was that JSP annotations are incompatible with XML. Although there is an alternative XML syntax for JSP, I devote a section in the article to explaining why it's a bit of a train wreck, and I focus instead on the normal JSP annotations. By representing them as XML processing instructions, we're able to maintain the XFDL and the JSP logic together using the IBM Forms Designer, and then use an XSLT to convert to actual JSP when it's time to deploy the IBM Form. This was really important to me because, quite frankly, if a new feature does not work in the Design environment for a language, then the feature essentially does not exist in the language.
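To make the conversion idea concrete, here is a minimal sketch of that deploy-time step. The article uses an XSLT for this; the regular-expression version below only illustrates the mapping from processing instructions to JSP syntax, and the `jsp` PI target is my own illustrative choice, not the article's actual convention.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Sketch of the deploy-time conversion idea: JSP directives and scriptlets
 * are kept as XML processing instructions (so the form stays well-formed
 * XML in the Designer), then rewritten to real JSP syntax on deployment.
 * The "jsp" PI target is illustrative; the article does this with an XSLT.
 */
public class PiToJsp {

    private static final Pattern JSP_PI =
            Pattern.compile("<\\?jsp\\s(.*?)\\?>", Pattern.DOTALL);

    public static String convert(String xml) {
        Matcher m = JSP_PI.matcher(xml);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            // <?jsp @ page ... ?>  becomes  <%@ page ... %>
            // <?jsp = expr ?>      becomes  <%= expr %>
            // <?jsp code ?>        becomes  <% code %>
            m.appendReplacement(out,
                    Matcher.quoteReplacement("<%" + m.group(1).trim() + " %>"));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        String xfdl = "<label><?jsp = user.getName() ?></label>";
        System.out.println(convert(xfdl)); // prints "<label><%= user.getName() %></label>"
    }
}
```

The payoff of keeping the source form as pure XML is exactly the point made above: the form remains editable in the Designer right up until the conversion runs.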
Now, that's a wrap! I hope you like the article and get accelerated development benefit from it. JSP is really for building quick prototypes and demos, and also for solving simpler problems much more simply than using straight Java servlet coding. It's even a really nice complement to using Java servlet coding within a larger project. So don't delay, get ready to use JSP with XFDL today.
Back in March, I wrote about the open standards basis of Lotus Forms documents. This entry included comments on the use of the XML Signatures standard in combination with XForms within the XFDL markup of Lotus Forms.
Now I'd like to draw your attention to a developerWorks article we've published on the technical details of Verifying Lotus Forms XML Signatures with Java. This article explains how a JSR 105 compliant implementation, such as the one found in the Apache XML Security library or in Java 6, can be used to validate the XML signatures created by the Lotus Forms client software (either the Webform Server or the client-side Forms Viewer plugin).
Generally, a Lotus Forms document consolidates the client side of a business process function. This could be a many-step process for an individual, or it could be a process that spans many individuals who are collaborating to perform a business transaction. Applying an XML signature to a Lotus Form protects the markup of the consolidated client experience, not just the transactional data created by users. Users don't "see" the XML markup of the data for a transaction. They visually see (or aurally sense, with accessibility software) the whole "contract" that gives context to the data content. An XML signature applied by Lotus Forms client software signs the whole agreement. The above-mentioned article explains how open standards-based software can be used to complete the server-side function of validating the XML signatures in order to secure the transactions of a business process. Since Lotus Forms documents are XML documents based on XForms, this means that the entire business process workflow on a Lotus Form can be achieved with open standards-based software.
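For readers who want a feel for the JSR 105 API before diving into the article, here is a self-contained round trip that signs a toy document with a throwaway RSA key and then validates the enveloped signature, using only the `javax.xml.crypto.dsig` classes shipped with Java 6 and later. The toy markup and the algorithm choices are mine, not the article's; real Lotus Forms signatures involve the specifics the article covers.

```java
import java.io.StringReader;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Collections;
import javax.xml.crypto.dsig.CanonicalizationMethod;
import javax.xml.crypto.dsig.DigestMethod;
import javax.xml.crypto.dsig.Reference;
import javax.xml.crypto.dsig.SignedInfo;
import javax.xml.crypto.dsig.Transform;
import javax.xml.crypto.dsig.XMLSignature;
import javax.xml.crypto.dsig.XMLSignatureFactory;
import javax.xml.crypto.dsig.dom.DOMSignContext;
import javax.xml.crypto.dsig.dom.DOMValidateContext;
import javax.xml.crypto.dsig.spec.C14NMethodParameterSpec;
import javax.xml.crypto.dsig.spec.TransformParameterSpec;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.xml.sax.InputSource;

public class SignatureCheck {

    // Sign a toy document with a fresh RSA key, then validate the
    // resulting enveloped signature with the JSR 105 API.
    public static boolean signAndValidate(String xml) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true); // required for XML Signature processing
        Document doc = dbf.newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));

        XMLSignatureFactory fac = XMLSignatureFactory.getInstance("DOM");
        // URI "" means "sign the whole document"; the enveloped transform
        // excludes the Signature element itself from the digest.
        Reference ref = fac.newReference("",
                fac.newDigestMethod(DigestMethod.SHA256, null),
                Collections.singletonList(fac.newTransform(
                        Transform.ENVELOPED, (TransformParameterSpec) null)),
                null, null);
        SignedInfo si = fac.newSignedInfo(
                fac.newCanonicalizationMethod(CanonicalizationMethod.INCLUSIVE,
                        (C14NMethodParameterSpec) null),
                fac.newSignatureMethod(
                        "http://www.w3.org/2001/04/xmldsig-more#rsa-sha256", null),
                Collections.singletonList(ref));

        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair kp = kpg.generateKeyPair();
        fac.newXMLSignature(si, null)
           .sign(new DOMSignContext(kp.getPrivate(), doc.getDocumentElement()));

        // Validation side: locate the ds:Signature element and verify it.
        Node sigNode = doc.getElementsByTagNameNS(
                XMLSignature.XMLNS, "Signature").item(0);
        DOMValidateContext valCtx = new DOMValidateContext(kp.getPublic(), sigNode);
        return fac.unmarshalXMLSignature(valCtx).validate(valCtx);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(signAndValidate("<form><data>42</data></form>"));
    }
}
```

In a real deployment, the validation half runs on the server against a form signed by the Lotus Forms client, with the signer's public key taken from the signature's KeyInfo or from a trusted store rather than generated in place.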