I'd like to make this extra posting this week to draw your attention to a series of excellent articles recently completed by Kurt Cagle. The series is entitled Understanding XForms and consists of the following entries:
- Why XForms Matter, Revisited
- The Model
- Events and Actions
- AJAX, XBL and XForms.org
These articles lay out the value proposition that XForms brings to projects, and they're worth your time.
Ubiquity XForms version 0.7 is now available. In addition to contributing to the recent advancement of XForms 1.1 to Proposed Recommendation, this new version brings many new features and fixes. Personally I'm happiest with the progress on implementing submissions, but that may be a bit of bias because I contributed some code to deal with aspects of submissions such as serialization, validation and relevance pruning. On the other hand, I contributed code in other areas too, so maybe it's just that I'm biased toward any improvements to our ability to support the XRX architecture and connect the XForms client to server-side services.
And this blog entry would not be complete without a special word of appreciation for the folks at webBackplane for all their contributions in general, but especially for the Ubiquity XForms rollup system, which consolidates the many files of the processor into a single file to be deployed on your server. I have an internal project right now that uses the rollup, and it provides us with the high level of performance we expected and required in our applications.
Full details of the version 0.7 release can be found here: <http://code.google.com/p/ubiquity-xforms/wiki/Release0Point7>
There are lots of scenarios in which additional work must be performed on an electronic form document after a digital signature is affixed. For example, suppose you are filling out a form that has an "office use only" section in it. When you're done, you want to sign the document to authenticate yourself and authorize the signed content. But those "office" people still have work to do: they fill out that office-use-only section after you sign and submit the form to them. Another easy example is the multiple signer scenario. For a simple signer/co-signer loan, the loan applicant has to sign the document, and then the co-signer signs it.
In cases like those above, it is clear that the office-use-only section of the form was blank when the signer signed, or that the co-signer had not yet signed when the loan applicant signed. But if you modify the document after the first signature is affixed to fill out the office-use-only section or to add another signature, this seems indistinguishable from tampering with the document. The document hash in the digital signature would detect the change, and the user would be told not to trust the document any more.
Fortunately, the XML digital signature world has a solution for this problem. It is called a digital signature filter. A filter allows the document hash in the signature to cover part of the document instead of the whole document. This makes it easy to create a digital signature which logically covers the whole document except for the office-use-only section or the co-signer signature.
Unfortunately, there is a bit of a tendency in the wild to misuse digital signature filters, and the result is forms that do not provide nearly as much security. A proper digital signature filter should almost always use exclusive logic. The digital signature filter should say "sign the whole document except ...". The elided part that forms the exception should, as precisely as possible, indicate the part of the document over which additional work must be performed. For example, "sign the whole document except for the office-use-only section" or "sign the whole document except the co-signer signature".
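In W3C XML Signature terms, this exclusive logic is naturally expressed with the XPath Filter 2.0 transform using Filter="subtract". Below is a minimal sketch of a Reference that signs the whole document except an office-use-only section; the /form/officeUseOnly path is an illustrative assumption, not markup from any particular form:

```xml
<Reference URI="">
  <Transforms>
    <!-- subtract the office-use-only section from the signed node set -->
    <Transform Algorithm="http://www.w3.org/2002/06/xmldsig-filter2">
      <dsig-xpath:XPath xmlns:dsig-xpath="http://www.w3.org/2002/06/xmldsig-filter2"
                        Filter="subtract">/form/officeUseOnly</dsig-xpath:XPath>
    </Transform>
  </Transforms>
  <DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
  <DigestValue>...</DigestValue>
</Reference>
```

The URI="" means "the whole document," and the subtract filter then carves out the exception, which is exactly the "sign everything except ..." shape recommended here.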
Almost always, a digital signature filter that uses inclusive logic is evil. People have a natural tendency to want to say "sign this and this and this" rather than "sign everything except for this and this and this", but the former inclusive logic statement has far less power than the latter exclusive logic statement. The reason has to do with the fundamental principle of digital signatures: What you see is what you sign. For the visually impaired, the word 'see' is being used metaphorically, of course, but the point is that users do not care about octet stream security. They care about whether or not the technology has secured the on-the-glass appearance of the document.
So, when an inclusive signature filter identifies markup for what must be signed, it implicitly omits anything else that might be in the document. This means an attacker can add, change or delete any other content before or after the signature creation occurs. The problem is that such arbitrary content can include commands like 'position me over top of the real terms and conditions of this contract'. So, even though the digital signature continues to validate, the end-user would no longer be looking at the same terms and conditions that the signer saw when he signed the document.
An exclusive signature filter does not allow arbitrary content to be added, changed or deleted. The only additions, changes or deletions allowed are those that are explicitly spelled out in the exclusive signature filter. It is easy to only allow the data and not the layout of the office-use-only section to be changed in the document after a signature is affixed. And it is easy to have the loan applicant signature sign the whole document except for allowing the co-signer signature to be added, again with no changes of the overall form layout. Everybody knows that these things were empty when the first signature was affixed, so the exclusive filter allows only the changes that are reasonably expected of the document.
There is a viable use case for an inclusive signature filter. It is useful as an optimization when a signature is co-signing over a second signature. For example, in the signer/co-signer scenario, the first signature covers the whole document less the second signature. The second signature could cover the whole document, but it really only needs to cover itself plus the first signature because the first signature already covers everything else with an exclusive signature filter. However, the whole signature validation step is fast enough that it is rare for this optimization to make a meaningful improvement, and it will certainly never be more than twice as fast as just signing the whole document a second time. This is why I recommend that people should simply never use inclusive signature filters.
To be honest, the greatest disservice of inclusive signature filters is not even to signers who may be duped by the unscrupulous into authorizing transactions that they don't really approve of. The greater disservice is to the organization that deploys a security system with the understanding that it will reduce the risk of transaction repudiation. A signer can simply demonstrate that a signature containing an inclusive filter is attackable; nobody even has to change a single byte before or after the signing event, and the signature can still be repudiated! Again, this is why I recommend that people simply should never use inclusive signature filters. By which I mean, just don't ever use inclusive signature filters. And Happy Holidays.
As an interesting possible counterexample to my last blog about MLR models not understanding the knowledge they learn, consider the neural network. Our brains are neural networks, and we are capable of learning at all levels of Bloom's Taxonomy, not just the knowledge level. Shouldn't artificial neural networks be able to achieve the same things?
The answer is no, not really. Our brains biologically, chemically and physically perform in ways that we scarcely understand, so our name for the thing we call "artificial neural network" is no less anthropomorphizing than when we say that a computer program of today "understands" anything.
Still (again), this is not to say that they aren't incredibly useful and effective. It's just that they are based on straightforward and well-understood mechanical methods such as feed forward activation of neural outputs via sigmoidal threshold functions applied to inputs and back propagation of synaptic weight adjustments based on easily quantified classification errors. Before going any further, let's have a quick look at a diagram of an artificial neural network (ANN):
The ANN has an output layer on the right that is a classifier for input patterns received on the left. For example, an ANN for optical character recognition could have an input layer of an 8x8 matrix of bits, and the output layer could be an 8-bit code that indicates an ASCII character. The hidden layer(s) of neurons help the ANN to represent more sophisticated phenomena, though there is seldom need for more than one hidden layer. The "synaptic" connections between the neurons in the layers are weighted numbers, and the neurons apply the weights to the inputs and then feed the results into a Sigmoid function that essentially decides, like a transistor or switch, whether or not to fire the output.
An ANN is "trained" by giving it a sequence of input patterns for which the correct output pattern is known. The input pattern feeds forward through the ANN to produce an output. If there is a difference between the ANN output and the correct output, then the differential error is back propagated through the ANN to adjust the weights so that future occurrences of that input pattern are more likely to produce the correct output.
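The mechanics described above can be written down compactly. In the standard textbook formulation (not specific to any one ANN implementation), neuron j feeds the weighted sum of its inputs x_i through the sigmoid, and training nudges each weight by the back-propagated error signal delta_j scaled by a learning rate eta:

```latex
% feed forward: a neuron's output is the sigmoid of its weighted inputs
y_j = \sigma\!\left(\sum_i w_{ij}\, x_i\right),
\qquad
\sigma(z) = \frac{1}{1 + e^{-z}}

% back propagation: each weight moves against the error gradient
w_{ij} \leftarrow w_{ij} + \eta\, \delta_j\, x_i
```

The sigmoid is the "transistor or switch" mentioned above, and the weight update is the entire content of "learning" in this model, which is why the knowledge ends up stored in the weights.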
The synaptic weights, then, essentially represent the knowledge that the ANN "learns" from the input patterns. This is analogous to the constants that are "learned" by an MLR model. In fact, all elements of the ANN and MLR model architectures are analogous. The ANN input layer maps to the independent X variables, the ANN output layer maps to the dependent Y variable, and the transition from input X values to the Y value, achieved in MLR by multiplication and addition, is achieved in an ANN by a feed forward through synaptic connections, hidden layer neurons and sigmoid functions.
With such a one-to-one architecture mapping between ANNs and MLR models, it is easier to see them as having similar intellectual power. That's not to say they're equivalent, as ANNs are far more powerful. It's just that they're roughly the same (low) order of magnitude with respect to human intellect, and in terms of Bloom's Taxonomy, we call that order of magnitude "knowledge storage/retrieval".
Despite being in the lowest order of magnitude of intellect, the realm of today's artificial intelligence includes many interesting knowledge storage/retrieval techniques that are worth comparing and contrasting to see the range and limits of their power and the use cases they address. Stay tuned!
Check out this developerWorks article for a step-by-step guide to deploying a DB2 web service and then consuming that web service using the Lotus Forms Designer.
Once the WSDL for a particular DB2 web service is pulled into your Lotus Forms Designer, you select and autogenerate a data instance and a specific service, drag-and-drop the data instance onto the design canvas to autocreate the user interface, and then generate the run-time XForms submission for the service. The article above shows exactly how to do each step.
A smart XForms client and a smart interface to the server database mean no custom code in the middle. Thus, computing power is made available to a broader class of IT knowledge workers who require only forms and database skills. Process democratization in action.
Throughout the history of mathematics, new ideas have tended to be labeled in a demeaning way. The first few that come to mind are negative numbers, irrational numbers, and imaginary numbers. It's always as if the person who invents a new concept to solve a problem is somehow losing his mind because all the sane people use numbers that are positive, rational and real!
The same thing seems to be happening in the web world. I've recently heard the assertion that it is illegal to use XML constructs to upgrade the HTML language because it is illegal to send content over the wire as anything but tag soup if the content-type is text/html. On the one hand, I could be losing my mind, but on the other, maybe the assertion doesn't hold up to scrutiny. The argument seems to be that the user agent would have to commit the sin of content-sniffing in order to determine whether or not the text/html content can be fed to an XML parser. See how the label 'content-sniffing' just oozes negative connotations?!
Well, all of those clever mathematicians of old won out in the end because, quite frankly, their ideas worked. And so it will be with content-sniffing.
To be honest, I don't see the logical difference between testing a document for XML well-formedness versus checking a version attribute value to determine which aspects of a processor for the document will be active and which will be inactive. Before you rush to the comment link to tell me that the version attribute didn't work in HTML and was therefore removed, let me answer that it didn't work only because competing browser makers ignored it; they let features work that should not have worked by version. Had they implemented version checking, we would have had one less mess to clean up in today's web content. But that's water under the bridge now, and also beside the technical point that it can work.
In fact, all features that are placed into IBM Workplace Forms are activated by version. So, if you write a form using, say, version 6.0 of the XFDL language, the form continues to work in the same way in the latest version of the product. Changes to the way features work, and even most bug fixes, are implemented in a version-specific manner so that forms generally don't change behavior as the user agent is updated.
In essence, the Workplace Forms viewer products do a form of content-sniffing by activating certain subprocessors based on some of the content in the document. The point is that the document is still of content type application/vnd.xfdl, but we don't assume that a document processor must be dumb except for the information it derives from HTTP content negotiation. Perhaps this comes from our more document-centric focus-- after all, you don't get HTTP content negotiation when loading a form previously saved to the local disk drive!
The same kind of thing needs to happen with HTML processors. The html tag could bear a boolean attribute that indicates whether or not the content is well-formed XML. Or, a version attribute could be added whose value would imply whether XML was used. This way, new features of HTML could be added to the XML-based variant, and user agents could be required in the RFC 2119 sense to only provide access to the higher-level features by version. The result is that we could begin using new features as the carrot to encourage more XML-compliant content on the web without having user agents lose the ability to process existing content.
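A sketch of the two options just described (these attribute names and values are hypothetical illustrations, not part of any published HTML specification):

```html
<!-- Option 1: a boolean attribute declaring the content to be well-formed XML -->
<html wellformed="true">...</html>

<!-- Option 2: a version value from which XML processing can be inferred -->
<html version="xhtml-2.0">...</html>
```

Either marker would let a user agent choose the XML parser up front, by declaration rather than by guessing, while leaving unmarked legacy content on the tag-soup path.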
But either way, serving out well-formed XHTML with the content type of text/html is not only a valid thing to do, the simple truth of the matter is that people do it all the time. Why? Because it works.
Continuing with the last blog, you saw autotabbing at work, and clearly three XForms controls were working together to collect some common data. But here we're going to refine the aggregation of the controls in both the user interface and the model layers.
For starters, suppose a larger example in which the phone number being collected is not the only piece of data, but rather is nested deep within a larger XML data structure:
<xforms:instance xmlns="" id="X">
   <data>
      ...
      <customer>
         ...
         <phone>
            <areaCode/>
            <exchange/>
            <local/>
         </phone>
         ...
      </customer>
      ...
   </data>
</xforms:instance>
In the prior user interface, the three input controls relied on the fact that the default evaluation context for user interface bindings is the root element node of the instance. Even though the phone element is no longer the root element above, we can keep the same input controls by setting the context using an XForms group to more properly communicate the intent to combine these controls together in the user interface layer:
<xforms:group ref="customer/phone">
   <xforms:input ref="areaCode" incremental="true" id="areaCode">
      <xforms:label>Area Code</xforms:label>
      ...
   </xforms:input>
   <xforms:input ref="exchange" incremental="true" id="exchange">
      <xforms:label>Exchange:</xforms:label>
      ...
   </xforms:input>
   <xforms:input ref="local" incremental="true" id="local">
      <xforms:label>Local:</xforms:label>
      ...
   </xforms:input>
</xforms:group>
The second thing you may want to do is to combine the three components of the phone number together into a single result. There are quite a few ways to do this, so here we'll just pick a simple way and run with it. Say you were to add an attribute called number to the phone element, like so:
<xforms:instance xmlns="" id="X">
   <data>
      ...
      <customer>
         ...
         <phone number="2505551212">
            <areaCode>250</areaCode>
            <exchange>555</exchange>
            <local>1212</local>
         </phone>
         ...
      </customer>
      ...
   </data>
</xforms:instance>
It's then an easy matter to combine the three phone number components together to form the full result using an XForms calculate formula:
<xforms:bind nodeset="customer/phone/@number" calculate="concat(../areaCode, ../exchange, ../local)"/>
As an example of the powerful data-driven dynamism available in Lotus Forms thanks to features of XForms, I'd like to take you on a brief conceptual tour through the focused example of creating a Lotus Form template for a Questionnaire or Survey. This template is able to handle not just any number of questions and any amount of question text, but also any kind of answer type. And all of this is controlled by the data, so the actual design of the Lotus Form template stays the same.
The power of being purely data-driven should not be glossed over. You can easily have web application servlet code that obtains the questionnaire template and then prepopulates it with specific questionnaire data so that the client side receives a specific questionnaire selected in a previous step of the web application. But, XForms-based Lotus Forms also have that AJAX property of being responsive during run-time to new data obtained by a form via web services or other http submissions. So, you could even have a Lotus Form that obtains and adds new questions on the fly in response to answers provided to initial questions.
This post will focus on the main repeating template that provides the dynamic presentation layer for each question of a questionnaire or survey. As this is an example of a purely data-driven questionnaire, let's start by looking at a sample data format. Suppose you have a survey consisting of any number of items, each of which can contain a question text, an indication of the type of question being asked, a place for an answer, and optionally some possible choices for those answers. Something like this:
<survey>
   <item>
      <question type="yesno">Do you like apples?</question>
      <answer/>
   </item>
   <item>
      <question type="likertscale">It is OK for apples to have a powdery texture.</question>
      <answer/>
   </item>
   <item>
      <question type="closedselection">What is your physical gender?</question>
      <answer/>
      <choice label="Female" code="F"/>
      <choice label="Male" code="M"/>
   </item>
</survey>
In the XFDL presentational language that Lotus Forms combines with the XForms data processing layer, every XForms user interface element has a container XFDL element for presentation. The survey format consists of a number of <item> elements, so an XFDL <table> containing an <xforms:repeat> is the correct top-level presentation element:
... <!-- UI for showing one item of data -->
... <!-- More XFDL options for styling the whole table -->
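Putting those pieces together, the table markup might look roughly like this; the sid value and the nodeset path are illustrative assumptions, and the exact path depends on where the survey data sits in the instance:

```xml
<table sid="surveyTable">
   <xforms:repeat nodeset="item" id="surveyRepeat">
      ... <!-- UI for showing one item of data -->
   </xforms:repeat>
   ... <!-- More XFDL options for styling the whole table -->
</table>
```

The repeat binds to all of the <item> elements, and everything inside it is the per-item template discussed below.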
The table has a scope identifier (sid) attribute that allows the table to be programmatically referenced, but we won't be using that feature in this example. The table can also have XFDL options outside of the <xforms:repeat> to control presentational aspects like borders and background colors, and we aren't focusing on that either.
The <xforms:repeat> has an attribute called "nodeset" which uses an XPath expression to make a reference to however many <item> elements are in the <survey>. This is an automatic or "declarative" loop construct. For each <item> node in the data, no matter how many there are, the template content of the <xforms:repeat> is generated to present that <item> to the user. Even if new <item> elements are added at run-time, e.g. by a web service or an <xforms:insert> action, the XFDL table in the Lotus Form will dynamically grow to present the new <item> elements. And even if some <item> elements are removed from the data, e.g. by an <xforms:delete> action associated with an XFDL <button> by an <xforms:trigger>, the XFDL table will dynamically and automatically remove the corresponding user interface elements that were presenting those removed <item> elements.
So, the magic really happens in the template inside the <xforms:repeat>. In Lotus Forms, you can put any and all kinds of XFDL items in the <xforms:repeat>, including more XFDL table items. In this example, we will be showing a few variations that present different kinds of user interface controls for collecting a few different kinds of answers to questions.
First off, though, presenting the actual question text for an <item> is a simple matter of using an XFDL label item with an <xforms:output>, like this:
... <!-- more XFDL options for presentational styling -->
... <!-- more XFDL items for collecting answers -->
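Filled in, the question-text portion of the repeat template might look like this (the sid is an illustrative assumption):

```xml
<label sid="questionLabel">
   <xforms:output ref="question"/>
   ... <!-- more XFDL options for presentational styling -->
</label>
... <!-- more XFDL items for collecting answers -->
```

Inside the repeat, the binding context is the current <item>, so ref="question" reaches the question text of that item.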
For each survey <item>, an XFDL <label> item is generated, and it binds to the <question> child element of that associated <item> using the "ref" attribute. The XFDL label item presents the text of the bound <question> node, and other XFDL options can be used to provide styling such as the block layout flow as well as alternative font color, background color, font selection and so forth.
More XFDL items can be added to the <xforms:repeat> to collect the answer for the given question. In many cases of XFDL tables, each XFDL item within the <xforms:repeat> template is actually presented to the user. An example would be using each XFDL item in the <xforms:repeat> to represent one column of a purchase order table. However, it is not necessary to show all of the XFDL items within the <xforms:repeat> template. In fact, XForms user interface controls have a selective binding feature that XFDL items support, since the XFDL items are wrappers for the XForms user interface elements.
The selective binding feature of XForms will be used to help easily choose one XFDL item from among many to collect the user's answer to the question. Each question can have a different type of answer, so each "row" of the table can make a different choice of user interface control used to collect the answer. The selective binding feature uses an XPath predicate to decide whether or not the XForms user interface element binds to a node of data or not, and the control is invisible if it is not bound to a data node.
In the example survey data above, the first <item> contains a <question> whose type attribute indicates it is a "yes/no" question. Inside the <xforms:repeat> we can create a checkbox item that can collect a (schema valid boolean) true/false answer, as follows:
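The markup for that checkbox might look like this; the sid and the label text are illustrative assumptions:

```xml
<check sid="yesnoAnswer">
   <!-- the predicate makes this control bind only when the question type matches -->
   <xforms:input ref="answer[../question/@type='yesno']">
      <xforms:label>Yes</xforms:label>
   </xforms:input>
</check>
```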
The above checkbox widget binds to <answer>, and is therefore visible, only if the corresponding question type is 'yesno'. Otherwise, the XPath in the ref attribute of the <xforms:input> does not select any nodes, so the XFDL <check> item is not visible.
The second <item> of sample data above has a type of 'likertscale', so we would like to show a 5-point radio button group rather than a checkbox. As explained above, the check box on the second row of the survey table automatically hides itself due to selective binding, so all we have to do is add an XFDL <radiogroup> item to the <xforms:repeat> to provide the interface for collecting the 'likertscale' type of answer, as follows:
<xforms:select1 ref="answer[../question/@type='likertscale']" appearance="full">
<xforms:label>Neither agree nor disagree</xforms:label>
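Restoring the elided pieces around the fragments above, the full radio group might look like this; the labels for the other scale points and the stored values are illustrative assumptions based on a conventional 5-point Likert scale:

```xml
<radiogroup sid="likertAnswer">
   <xforms:select1 ref="answer[../question/@type='likertscale']" appearance="full">
      <xforms:item>
         <xforms:label>Strongly disagree</xforms:label>
         <xforms:value>1</xforms:value>
      </xforms:item>
      <xforms:item>
         <xforms:label>Disagree</xforms:label>
         <xforms:value>2</xforms:value>
      </xforms:item>
      <xforms:item>
         <xforms:label>Neither agree nor disagree</xforms:label>
         <xforms:value>3</xforms:value>
      </xforms:item>
      <xforms:item>
         <xforms:label>Agree</xforms:label>
         <xforms:value>4</xforms:value>
      </xforms:item>
      <xforms:item>
         <xforms:label>Strongly agree</xforms:label>
         <xforms:value>5</xforms:value>
      </xforms:item>
   </xforms:select1>
</radiogroup>
```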
The third survey <item> in the sample data above provides a closed selection of choices. That could be styled using a pair of radio buttons, a pair of mutually exclusive checkboxes, a list box, or a popup control that provides a simple dropdown list. The answer types in the survey format could be made to distinguish these possibilities using more keywords, but for this example we'll just assume that a <popup> control is the desired presentation for a closed selection. The XFDL markup below shows how this can be done, and it is also interesting because it shows that the data can also dynamically control the choices, rather than having only static choices as shown in the <radiogroup> above.
<xforms:select1 ref="answer[../question/@type='closedselection']" appearance="minimal">
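Completed, the popup control might look like this (the sid is an illustrative assumption). The itemset pulls its choices from the <choice> elements of the current <item>, which is what makes the available answers data-driven too:

```xml
<popup sid="closedAnswer">
   <xforms:select1 ref="answer[../question/@type='closedselection']" appearance="minimal">
      <!-- relative to the bound answer node, ../choice selects the sibling choices -->
      <xforms:itemset nodeset="../choice">
         <xforms:label ref="@label"/>
         <xforms:value ref="@code"/>
      </xforms:itemset>
   </xforms:select1>
</popup>
```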
It seems useful, now, to round out this blog post by presenting a few more examples for other common types of input, such as single-line strings, multiline text, and dates. Here's what the data would look like:
<question type="oneline">What's your name?</question>
<question type="multiline">Do you have any other comments?</question>
<question type="date">What is your date of birth?</question>
The corresponding XFDL items that would be added to the <xforms:repeat> content template for these types of questions would be:
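The corresponding items might look like the sketch below. The XFDL wrapper item names and sids are illustrative assumptions; the selective binding predicates follow the same pattern as the checkbox and radio group above:

```xml
<field sid="onelineAnswer">
   <xforms:input ref="answer[../question/@type='oneline']"/>
</field>
<field sid="multilineAnswer">
   <xforms:textarea ref="answer[../question/@type='multiline']"/>
</field>
<field sid="dateAnswer">
   <xforms:input ref="answer[../question/@type='date']"/>
</field>
```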
So, hopefully you now have the idea that a completely dynamic and completely data-driven survey or questionnaire can be created using the features of XForms in XFDL (Lotus Forms). Any number of XFDL items can be added to the <xforms:repeat>, XPath predicate selection can be used to choose one XFDL item from among many to collect an answer for a survey question, and, most importantly, a different user interface control can be dynamically selected for each survey question.
A customer asked me recently how they could use XForms constructs to create a form that dynamically populates a table with available products that can be ordered based on selection of a product provider. In this case, the customer also wanted to have the product order list be editable once provided, allowing the user to delete rows or even to add more rows to the product order table initially provided for the selected store. Seemed like another good example for the blog.
So, suppose you have a main data instance for a form that looks something like this:
<xforms:instance id="data" xmlns="">
<product name="" code="" cost="0" qty="0" total="0"/>
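Pieced together from the references later in the form (the root element name is an illustrative assumption), the main instance might look like:

```xml
<xforms:instance id="data" xmlns="">
   <data> <!-- root element name assumed -->
      <storeID/>
      <products>
         <store>
            <product name="" code="" cost="0" qty="0" total="0"/>
         </store>
      </products>
   </data>
</xforms:instance>
```

The lone empty <product> is the placeholder row that the table shows before a store is selected.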
Each product has a name, code, and cost, and the end-user will indicate the quantity of the product they desire. The above data corresponds to showing an empty table until the user chooses a "storeID". Here is the XForms user interface markup for showing a four column table having as many rows as there are "product" elements in the data.
<xforms:repeat nodeset="products/store/product" id="orderTable">
<xforms:label>Desired Quantity of Product</xforms:label>
<xforms:label>Calculated Line Total</xforms:label>
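With the elided column controls restored, the table markup might look like the sketch below; the choice of output versus input controls and the first two labels are illustrative assumptions, while the nodeset, id, and the two label fragments above come from the original markup:

```xml
<xforms:repeat nodeset="products/store/product" id="orderTable">
   <xforms:output ref="@name">
      <xforms:label>Product Name</xforms:label>
   </xforms:output>
   <xforms:output ref="@cost">
      <xforms:label>Unit Cost</xforms:label>
   </xforms:output>
   <xforms:input ref="@qty">
      <xforms:label>Desired Quantity of Product</xforms:label>
   </xforms:input>
   <xforms:output ref="@total">
      <xforms:label>Calculated Line Total</xforms:label>
   </xforms:output>
</xforms:repeat>
```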
Initially, the table will have just one row of four user interface elements containing essentially empty or zero values. However, now let's hook up something that allows us to pick a store ID so we can fill the table with an initial order of products. It would be reasonable in a full form application to obtain the list of stores from a web service and then get the starting list of products for the selected store from another web service. Getting data from web services is an important but orthogonal point, though, so in this mock-up I'm going to shorten all of that down to just having the data available in a format that looks like this:
<xforms:instance id='storeLists' xmlns="">
   <stores> <!-- root element name assumed; store groupings inferred from the row counts described below -->
      <store ID="A">
         <product name="Widget" code="W1" cost="3.50" qty="0" total="0"/>
         <product name="Gadget" code="G1" cost="4.25" qty="0" total="0"/>
         <product name="Trinket" code="T1" cost="2.75" qty="0" total="0"/>
      </store>
      <store ID="B">
         <product name="Gadget" code="G1" cost="4.25" qty="0" total="0"/>
         <product name="Gromet" code="G2" cost="3.50" qty="0" total="0"/>
      </store>
      <store ID="C">
         <product name="Widget" code="W1" cost="3.50" qty="0" total="0"/>
         <product name="Trinket" code="T1" cost="2.75" qty="0" total="0"/>
         <product name="Gromet" code="G2" cost="3.50" qty="0" total="0"/>
         <product name="Sprocket" code="S1" cost="1.99" qty="0" total="0"/>
      </store>
      <store ID="D">
         <product name="Locket" code="L1" cost="9.50" qty="0" total="0"/>
         <product name="Pocket" code="P1" cost="1.50" qty="0" total="0"/>
         <product name="Rocket" code="R1" cost="7.50" qty="0" total="0"/>
         <product name="Sprocket" code="S1" cost="1.99" qty="0" total="0"/>
         <product name="Socket" code="S2" cost="2.49" qty="0" total="0"/>
      </store>
   </stores>
</xforms:instance>
The user is provided the ability to select a store using the "select1" control, and the list of stores can be easily picked up from the data using an "itemset". Once the user makes a choice, an "xforms-value-changed" event on the select1 could be used to run a web service to get the product list, but here we'll just mock that up with an "insert" because the data is already available:
<xforms:select1 ref="storeID" appearance="minimal">
<xforms:label>Choose a store</xforms:label>
<xforms:delete nodeset="instance('data')/products/store" at="1"/>
<xforms:insert context="instance('data')/products" origin="instance('storeLists')/store[@ID = instance('data')/storeID]"/>
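Assembled from the fragments above, the store picker might look like this. The itemset bindings and the ev: namespace prefix are assumptions; since the sample data has no store name attribute, @ID doubles as the label here:

```xml
<xforms:select1 ref="storeID" appearance="minimal">
   <xforms:label>Choose a store</xforms:label>
   <xforms:itemset nodeset="instance('storeLists')/store">
      <xforms:label ref="@ID"/>
      <xforms:value ref="@ID"/>
   </xforms:itemset>
   <!-- replace the old product list whenever the selection changes -->
   <xforms:action ev:event="xforms-value-changed">
      <xforms:delete nodeset="instance('data')/products/store" at="1"/>
      <xforms:insert context="instance('data')/products"
                     origin="instance('storeLists')/store[@ID = instance('data')/storeID]"/>
   </xforms:action>
</xforms:select1>
```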
The "ref" on the select1 tells where to store the resulting store selection. The "nodeset" on the itemset tells where to get the list of stores from. The "ref" attribute on the xforms:label in the itemset tells what to show for each item in the list of choices, and the "ref" xforms:value tells what to store in the data ("storeID" due to the ref on select1) when a particular list choice is selected.
The "xforms-value-changed" event handler recognizes when a selection has been made, since that results in a value change on the "storeID" data node. The delete action gets rid of any preceding list, and then the insert action copies the list of products for the selected store into the main data. In particular notice that the XPath predicate in the origin attribute selects a store element to copy based on the store element's ID attribute matching the selected store identity placed in the storeID element by the value change behavior of the select1.
Once this insert occurs, the xforms:repeat is automatically responsive to the change of the data. It generates a four column row of user interface controls for each of the inserted product elements. For example, if the user picks store A, then they get three rows for Widgets, Gadgets and Trinkets. If they then pick store D, the form automatically adjusts to five rows for Lockets, Pockets, Rockets, Sprockets and Sockets.
Once the table content has been set with the product list for a particular store, the user may choose to add or delete rows from the table. Here is an additional instance that would be used to store the data prototype for a product:
<xforms:instance id='proto' xmlns="">
   <prototypes> <!-- root element name assumed -->
      <product name="" code="" cost="0" qty="0" total="0"/>
   </prototypes>
</xforms:instance>
A button to add a row to the repeat table would trigger the addition using the following XForms markup:
<xforms:insert context="products/store" nodeset="product"
   at="index('orderTable')" position="after" origin="instance('proto')/product"/>
This simply inserts a product prototype, obtained via the origin attribute, into the location defined by the context and nodeset attributes at a position corresponding to the row of the table that currently has the input focus. When the data is inserted, the repeat table automatically generates another four-column row of user interface elements to allow the user to interact with the new data.
Similarly, a button to delete a row from the repeat table would trigger the deletion using the following XForms markup:
<xforms:delete context="products/store" nodeset="product" at="index('orderTable')"/>
<xforms:insert if="count(product) = 0" context="products/store" at="1" position="before" origin="instance('proto')/product"/>
This simply deletes the product data element corresponding to the row of the table that currently contains the input focus. The row of user interface elements that presented this data is automatically deleted. As the next step of the action script, if the product list data becomes empty due to the prior delete, a new empty product prototype is inserted. The repeat table then presents one row of interface elements, so this extra insert ensures the user is never left with an unsightly empty table.
Last but not least, the action scripts of both of the triggers above end with an xforms:setfocus action. This is because pressing a button, be it to add or delete an item from a table, transfers focus to the button. That's just how the web works. But the user's focus is not really on the buttons; those are just tools. The user's focus is on changing the table, so it is a better user experience to push the focus back to the repeat table.
When I read the material on Yahoo! Blueprint, I was pretty pleased and shared the links with you in the last entry. However, I'd like to draw your attention to the first paragraph of their roadmap (emphasis is mine):
Declarative vs. Imperative
Much of Blueprint's philosophy and syntax comes from XForms. We opted for a full declarative language because it was the only way we could effectively run on the wide range of devices out there, some of which have no scripting at all. By using declarative syntax, we can encapsulate and hide the scripting specifics. In some cases, the code could run on the phone, in other case, such as XHTML, we can put the logic on our servers. It's the perfect way to deal with the various environments and their capabilities.
In a nutshell, this is the multimodality story of XForms.
I should clarify, though, that this is not saying that XForms has no imperative scripting of its own. The difference is that the XForms script commands are data-centric, which means that they essentially declare what has to happen to the data in response to a particular event (like pressing a button to add a row to a purchase order). The meaning of each script command in XForms is tailored by the declarative constructs that bind to the data. For example, when you add a node of data containing the subtree of nodes needed to represent a purchase order item, the declarative bindings of XForms automatically create the UI controls and the line calculation formula needed to drive that new data. XForms processors, including Blueprint, are able to decide where these behavioral updates go depending on what devices are at play.
This issue of declarative constructs tailoring the meaning of imperative script commands is extremely powerful. Using the same one line of code to insert or delete a node, the rest of the application may respond by creating or destroying an arbitrary amount of user interface, including nested tables, as well as creating, destroying or updating any formulae that may be needed for the data being created or destroyed. The same one command that removes a row of a purchase order could also remove the entire list of delinquent payments for a customer who has just paid up.
In conclusion, it's not so much "declarative versus imperative". Rather, it's a matter of being much more effective with a hybrid model in which the imperative is data-centric and augmented by the declarative.
How much more effective, you ask? In The Mythical Man-Month, Brooks cites empirical results showing a relationship between complexity and code length of N^1.5, where N is the number of lines of code. To put that in concrete terms, 10 times less code is not just 10 times easier to maintain, it's about 30 times easier. This is the kind of difference that turns years into months or months into days. For the services company this means lower RFP bids, which translates into winning more deals. And it means significantly lower TCO for the enterprise IT shop building that web-app in-house.
Engaging with customers (or employees) involves collecting information from them. To sell an insurance product, for example, you need to know what kind of insurance, how much, who it's for, their age, and a whole raft of other information. This is how we end up being able to provide a customer with an insurance quote or product, a financial preapproval, a medical appointment, and this is how we process IT requests, support requests, purchase orders, expense reports, etc.
If you're architecting a web-available XML-based information system, you will typically define the XML data required by the system. Specifically, what information does the organization need in order to process a transaction with a customer or an employee? A result of the design work will be an XML schema that describes the overall data structure and the data types for the back-end services that process the transactions. Then, a form is created to provide a web-based data collection interface for the data. Finally, there is the possibility of Java servlet code in the middle to convert the data values collected by the form into the XML data structure required by your back-end services. Unless...
With IBM Forms 4.0 Designer, the XML schema describing the data structure can be consumed directly into the form design experience without the form author needing to understand anything about XML schemas. The "form author" skillset includes high precision GUI work to create either a UI that conforms to some regulations or a UI that captures and keeps the user's attention. The form author isn't and shouldn't be worried about the data structures. They want to drag and drop stuff onto a design canvas, and so much the better if the design environment makes sure that the GUI items are automatically mapped to the data structures needed on the back end to process the data collected by the form.
Here are two videos that give you an idea of what this design experience looks like: Schema-driven Forms Design
and Schema-driven Forms Design and Dynamic Tables
The form author has been provided with the data schema, and they just pull it into the design experience and ask for the data structure to be created based on the schema definition. Then, they drag and drop the GUI representations of the data elements to automatically create groups of user interface controls that are mapped onto those XML data elements. This leaves the form author free to focus on issues of colors, borders, GUI layout, and so on because the form automatically collects the data in the XML format defined by the XML schema. The form author needs no knowledge of XML schema, nor XML markup for that matter, to build up the form user interface.
The drag-and-drop operation is responsive to data types, so it creates different UI controls for various data types, such as calendar pickers for dates, check boxes for booleans, dropdown menus for closed enumerations, and comboboxes for open enumerations. The operation is also responsive to XForms label annotations on element and attribute definitions as well as enumeration items, so rather than using XML element and attribute names as the default values placed into the user interface, the data architect can assign friendly prompts or labels for element and attribute data to be collected as well as enumeration items appearing in lists.
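The type-to-control decision just described can be caricatured in a few lines. This is purely illustrative: the `pick_control` function and the control names are my own shorthand, not the actual IBM Forms Designer internals.

```python
# Hypothetical sketch of the data-type-to-UI-control mapping described
# above. Control names are illustrative, not IBM Forms internals.
def pick_control(xsd_type, enumeration=None, enum_open=False):
    """Choose a UI control for a schema-defined data element."""
    if enumeration is not None:
        # Open enumerations allow values outside the list, so the
        # user needs a control that accepts free text as well.
        return "combobox" if enum_open else "dropdown"
    return {
        "xs:date": "calendar picker",
        "xs:boolean": "check box",
    }.get(xsd_type, "text field")

print(pick_control("xs:date"))                      # calendar picker
print(pick_control("xs:boolean"))                   # check box
print(pick_control("xs:string", ["A", "B"]))        # dropdown
print(pick_control("xs:string", ["A", "B"], True))  # combobox
```

The point is that the form author never makes this decision consciously; the drag-and-drop operation consults the schema and picks the control.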
The drag-and-drop operation is also responsive to structural definitions. If the form author drags and drops a piece of data for "Personal Details", then UI controls will be created for single child elements like "Name" and "Date of Birth" but whole subgroups of controls will be created for subtree elements like "Address", which may be composed of child elements like "Street" and "City". Moreover, if an element being dropped includes a repeating subtree element, i.e. one bearing a schema maxOccurs value greater than 1, then a dynamic table element is created, including the insert and delete buttons that allow a user to insert or delete rows from the table.
So, have a look at the special demo videos above to see the work in action, then plug "IBM Forms" into your search engine, go to our site and download the Designer and Viewer trial software so you can give it a test drive, and stay tuned to this blog because in my next post I'll take you through a sample schema and show you exactly what bits of schema trigger the IBM Forms Designer to do something delightfully automatic for you.
My last blog was about the best paper winner at the ACM Symposium on Document Engineering, and I mentioned there that I'd make another post to give you an idea of the breadth and depth of the papers presented. The bad news is that you'll have to wait one more post for me to tell you about my own papers, because I want to keep the focus on other people's ideas; the good news is that this is that follow-on post.
I can't tell you about every paper, nor even get close, and omissions here should in no way be construed as meaning anything. I just had to pick some. Did I mention it was a really interesting conference?
I was quite impressed with a pair of papers in which natural language processing techniques were used either to extract a meaningful summary of a text (a kind of lossy compression, where we expect high compaction and low information theoretic loss) or to rewrite the text into a format that is easier to read (the intent of which is to increase understanding by persons with literacy challenges). It isn't just the computer science of natural language analysis that excites me here. These endeavors are ripe for formal statistical experimental analysis on human subjects to prove out whether the computing techniques are actually benefiting humanity. That's exactly the kind of high-impact innovation for which IBM itself is world-renowned.
There was a really interesting paper on how to reuse and repurpose unstructured content entitled "Automated repurposing of implicitly structured documents." What interested me first was the work they do to analyze the style applied to character sequences to glean document structure like sections and subsections. But then I realized that what they were really up to was the repurposing angle, where one seeks to apply the content of a document to a new style format based on mapping the gleaned implicit structure from the first document onto the gleaned implicit structure of the second document.
Another interesting work was called PrintMonkey, which is featured here not because its authorship includes people from my alma mater (University of Victoria), nor because it has a really cool name. The simple fact is that the problem it addresses is one that I personally have struggled with, and it's nice to see there's progress being made on it. The issue is that sometimes you need a printed copy of a website, but the default web browser print algorithm is typically not suited at all to paper rendition. It spills over the edges of the paper, or takes chunks of the page and prints them on separate pages (ever print a Google map?). But what's neat about this work is not just that they solve the problem but that they do so without requiring participation of website authors AND that they involve a social computing aspect in the work. Specifically, the solution allows people to share prior layouts of the same website. Who knows, maybe someone else has already laid out the site in a way suitable to your purposes. Very Web 2.0.
A short paper I enjoyed presented a concept called content-based identifiers (CBIs) for documents. When you store a document in a content repository, how about storing it at a URI that includes the hash of the document? This way, the document retrieval can immediately be followed by a check to see whether the document you got has been altered. Moreover, a log of the document history can simply consist of the list of CBIs for the document, possibly followed by a hash of the CBI and its predecessor in the log. This latter step establishes a chain of evidence because the predecessor document must have existed in order for its hash to have been computed and combined with the hash of the current document.
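A minimal sketch of the idea in Python follows; the `urn:cbi:` URI shape and the exact chaining scheme are my own invention for illustration, not the paper's:

```python
import hashlib

def cbi(document: bytes) -> str:
    """Content-based identifier: a URI that embeds the document's
    hash, so retrieval can immediately be verified."""
    return "urn:cbi:sha256:" + hashlib.sha256(document).hexdigest()

def chain_entry(prev_entry: str, document: bytes) -> str:
    """History log entry: hash the new document's CBI together with
    its predecessor entry, establishing a chain of evidence."""
    combined = (prev_entry + cbi(document)).encode("utf-8")
    return hashlib.sha256(combined).hexdigest()

doc_v1 = b"<doc>first version</doc>"
doc_v2 = b"<doc>second version</doc>"

log = [cbi(doc_v1)]                       # history starts at v1
log.append(chain_entry(log[-1], doc_v2))  # v2 chained to v1

# Verification on retrieval: recompute the hash and compare.
retrieved = doc_v1
print(cbi(retrieved) == log[0])  # True -- unaltered
```

Any tampering with an earlier version would change its CBI and break every later entry in the chain, which is the evidentiary property the paper is after.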
Speaking of change tracking and document versioning, there was a good paper on Merging Changes in XML Documents using Reliable Context Fingerprints. The basic idea is that lots of XML documents contain element sequences, which can make it hard to identify where a patch change should go in the face of merged changes from multiple contributors. The naive method of saying "Change element /doc/e as follows" from one person would break if another person adds a new e. This paper attempts to identify the location where a change should go by doing a best match to node structure and content within a given "radius" of nodes.
A real brain opener for me was a paper about Improving Query Performance by generating XML schemas from system design artifacts that include not only the conceptual information to be manipulated by the system but also expected query workload, the idea being to reduce the number of nodes that have to be touched by a query to produce a result.
The last paper that I can reasonably present in this too-long blog entry deals with adaptability of multimedia document content. This is really a fascinating area that addresses the issue of presenting content in the face of spatial, temporal or interactivity constraints (e.g. limited visual space, limited bandwidth, limited available CPU).
Despite the number of things I did talk about above, I am even more mindful now than before about how many good things I've had to leave out. Hopefully you will at least have a better idea of the interesting concepts presented at this conference and take the time to peruse the DocEng conference series for other items of interest to you.
Next up in this blog, I will take the time to tell you about my own papers at this conference on Interactive Office Documents and Web 2.0 Applications and An Office Document Mashup for Document-Centric Business Processes.
Recently, I was experimenting with one of the features planned for the next version of XForms. The feature is the iterate attribute for XForms actions, which will perform a for-each loop operation based on a nodeset obtained from the XPath expression in the iterate attribute value. XForms 1.1 already has a while loop, but iterate makes many data processing loops easier to write and also more performant (the subject of a future blog). There were lots of iteration use cases to choose from, but I decided to experiment with sorting because it is a well-known benchmark algorithm.
Before we go any further, let me say up front that XForms action scripting is intended for very lightweight data manipulation, like adding or deleting a data node corresponding to a table row or copying data results to or from the SOAP envelopes of a web service. By the time you get to nested iterations like those needed for sorting, you should be considering alternatives expressed in full-blown imperative languages available in the information system within which the form is being used. For example, in the case of sorting, it is a better idea to request sorting in the database query whose results are returned from a web service into your form so that your form logic does not even have to do the sorting.
So, with the disclaimer out of the way, let's abuse the technology a bit to get a better sense of what is feasible in those customer-needs-it-yesterday circumstances. It turns out that XForms 1.1 does allow full nodeset processing in the insert action's origin attribute and the delete action's nodeset attribute. Without even needing the new iterate attribute, this is just enough iteration capability to perform efficient sorting -- so there are some kinds of iterations that can be done now without the iterate attribute.
We're going to do a divide-and-conquer "partition" sort that I personally created as a university freshman after my 1st semester instructor told our class that linked lists could only be sorted slowly. At the time, the usual computer languages only allowed static allocation for arrays, and even though I didn't know what a "quick sort" was, I had seen the light of dynamic allocation, and I was never going back! I later learned how great a merge sort is on a linked list, but the effort of turning an array quicksort into a linked list partition sort comes in handy now because a merge sort cannot be efficiently expressed in XForms until the iterate attribute is added.
The way a quicksort works on an array (or subarray) is that you pick a random element to be the 'pivot' value. Then you run two index variables at the same time, one from the start of the array upward and the other going from the end of the array downward. The 'up' index is advanced until it finds a value greater than the pivot, the 'down' index is decremented until a value less than the pivot is found, and then the values at the 'up' and 'down' locations are swapped. This keeps happening until 'up' and 'down' meet somewhere near the middle of the array. At this point you've partitioned the array into a subarray of values less than the pivot value and a subarray of values greater than the pivot value. The quicksort is then invoked recursively to sort both subarrays.
The main challenge with this approach is the 'down' index, which is a reverse iteration. In a singly linked list, you can only go forward. XForms insert and delete actions have a similar limitation: they can only identify a nodeset of nodes to insert or delete, but not really a direction of iteration. But the important bit is what the quicksort is doing, not how it is done. Think of the list content as being completely messy, and each partitioning stage must make it somewhat less messy by dividing the content into a partition of lesser elements and a partition of greater elements. Then, the next partitioning stage is invoked recursively to do a better job of cleaning up the mess within each partition.
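Before translating this into XForms actions, here is the same forward-only partition idea sketched in Python. There is no reverse iteration; each pass just says "select everything less than or equal to the pivot and move it into a fresh partition", which is what the insert/delete pair of actions expresses in XForms. The explicit stack of partitions mirrors the sortdata instance, and I've added one branch (retrying when the pivot happens to be the maximum) that the prose glosses over but that is needed for termination:

```python
import random

def partition_sort(items, key=lambda x: x):
    """Forward-only partition sort: repeatedly split the last
    partition on a random pivot, emitting partitions once they
    become trivially small. Output is in ascending order because
    the greatest partitions are emitted first and each lesser run
    is prepended in front of them."""
    stack = [list(items)]  # explicit recursion stack of partitions
    out = []               # sorted output, built by prepending
    while stack:
        part = stack[-1]   # always work on the last (greatest) partition
        if len(part) > 2:
            pivot = key(random.choice(part))
            lesser = [x for x in part if key(x) <= pivot]  # the "insert"
            part[:] = [x for x in part if key(x) > pivot]  # the "delete"
            if part:
                stack.insert(-1, lesser)  # push at second-to-last
            elif all(key(x) == pivot for x in lesser):
                out = lesser + out        # all equal: emit and pop
                stack.pop()
            else:
                part[:] = lesser          # pivot was the max: retry
        else:
            out = sorted(part, key=key) + out  # trivial partition
            stack.pop()
    return out

print(partition_sort([3, 1, 4, 1, 5, 9, 2, 6]))
# [1, 1, 2, 3, 4, 5, 6, 9]
```

The key observation is that each partitioning pass needs only a constant number of bulk "move matching nodes" operations, which is exactly the shape of work that XForms insert and delete can do in one action each.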
Let's explore this concept by sorting a list of elements, such as sorting a list of <person> elements by a <lastname> child element. We begin by copying the list into an initial partition element of a temporary instance called 'sortdata', like this:
1) <xforms:insert context="instance('sortdata')" origin="instance('partition')"/>
2) <xforms:insert context="instance('sortdata')/partition" origin="instance('data')/list/person"/>
3) <xforms:delete nodeset="instance('data')/list/person"/>
Next, we initialize the random number generator so we can randomly select pivot values for all the partitioning stages:
4) <xforms:setvalue ref="instance('sortpivot')" value="random(true)"/>
Next, we start up a simple while loop that continues to process partitions until none are left.
5) <xforms:action while="instance('sortdata')/partition">
Within the loop, we grab the last partition from the sort data and determine whether it is non-trivial or trivial (only 1 or 2 elements). A non-trivial partition is subjected to further divide-and-conquer processing.
5.1) <xforms:action if="count(instance('sortdata')/partition[last()]/*) > 2">
The first part of dividing and conquering is to create a new empty <partition> element, which we obtain from an instance that expresses a template empty partition element.
5.1.1) <xforms:insert nodeset="instance('sortdata')/partition[last()]" position="before" origin="instance('partition')"/>
Step 5 and step 5.1.1 are more interesting than they seem at first. The list of partition elements in the sortdata actually implements the recursion stack, and we just pushed a new element into that stack at the second-to-last position. Because we have an explicit stack, we only need a loop in step 5 to implement recursion.
The next thing we do here is grab a random last name to serve as a pivot value for the partitioning. The first setvalue just picks a random location, and then the second step uses the location to get the value. Notice also that I use * rather than partition before the [last()] predicate because the sort data only contains partition elements, so there is no point in doing a name test for partition.
5.1.2) <xforms:setvalue ref="instance('sortpivot')"
           value="1 + floor(random(false) * count(instance('sortdata')/*[last()]/*))"/>
       <xforms:setvalue ref="instance('sortpivot')"
           value="instance('sortdata')/*[last()]/*[position() = instance('sortpivot')]/lastname"/>
Now the magical part happens. All elements in the last partition whose key element (lastname) is less than or equal to the pivot value are moved to the newly created second-to-last partition. By combining the nodeset processing capability of XForms insert and delete actions with the predicate-based node selection capability of XPath, the matching nodes can be selected and moved using just two XForms actions, i.e. without using an XForms while loop.
5.1.3) <xforms:insert context="instance('sortdata')/*[last()-1]"
origin="instance('sortdata')/*[last()]/*[compare(lastname, instance('sortpivot')) <= 0]"/>
<xforms:delete nodeset="instance('sortdata')/*[last()]/*[compare(lastname, instance('sortpivot')) <= 0]"/>
If the last partition is now empty due to the move operation in step 5.1.3, then the new second-to-last partition received all its elements. If all the moved elements are equal to the pivot value, then we can output them back into the original data list and then remove the last two partitions. Note that the insert is configured to prepend the elements into the data list, and we're copying them from the last non-empty partition, which has the elements with the greatest key value.
5.1.4) <xforms:action if="count(instance('sortdata')/*[last()]/*) = 0">
<xforms:action if="not(instance('sortdata')/*[last()-1]/*[compare(lastname, instance('sortpivot')) != 0])">
<xforms:insert context="instance('data')/list" origin="instance('sortdata')/*[last()-1]/*"/>
<xforms:delete nodeset="instance('sortdata')/*[position() >= last()-1]"/>
</xforms:action> <!-- End of equal-elements output -->
</xforms:action> <!-- End of step 5.1.4 empty partition check -->
</xforms:action> <!-- End of 5.1 non-trivial partition handler -->
Now, we've finished with the non-trivial partition handler, and we turn our attention to processing a trivial partition containing at most 2 elements. The content of the partition is moved to the original data list and the partition is removed. Again, note that we're processing the last partition, which has the greatest key values, and the insert prepends to the data list, so the sorted data list starts with the greatest values and grows as lesser and lesser values are prepended over time as all the partitions are processed.
5.2) <xforms:action if="count(instance('sortdata')/*[last()]/*) <= 2">
If the partition contains two elements and the greater one is first, then it is removed from the partition and put in the original list:
5.2.1) <xforms:action if="count(instance('sortdata')/*[last()]/*) = 2 and
           compare(instance('sortdata')/*[last()]/*[1]/lastname,
                   instance('sortdata')/*[last()]/*[2]/lastname) > 0">
       <xforms:insert context="instance('data')/list" origin="instance('sortdata')/*[last()]/*[1]"/>
       <xforms:delete nodeset="instance('sortdata')/*[last()]/*[1]"/>
       </xforms:action>
Now the partition contains either no elements, one element, or two elements in sorted order, so we move the content to the data list and then delete the partition.
5.2.2) <xforms:insert if="count(instance('sortdata')/*[last()]/*) > 0"
           context="instance('data')/list" origin="instance('sortdata')/*[last()]/*"/>
       <xforms:delete nodeset="instance('sortdata')/*[last()]"/>
</xforms:action> <!-- End of step 5.2 trivial partition handler -->
</xforms:action> <!-- End of step 5 recursion while loop -->
As a final note on all this algorithmic fun, the question arises whether this sort achieves optimal O(N log N) performance. The answer is no, not quite, due to hidden costs of data instance management and data node selection. However, the sort will be much faster than a "simple" sort because it performs only O(N) XForms actions.
I'd like to elaborate further, and in concrete terms, on a blog post from late last year on Talking to C-Level Execs about XForms. This will also make more sense of the "XForms is a killer app of Web 2.0" messaging you got from me in that post.
For a long time now, web applications have had to use middle tier coding to backfill for the underpowered client tier and its inability to talk directly to server tier applications like DB2 that provide the persistent storage for the web. The Web 2.0 movement generally is about defining web applications that factor out the middle tier, allowing users of client applications to create and collaborate on content that Web 2.0 server applications natively understand how to store and serve.
Applications like XForms and DB2's PureXML form an instance of this movement. The server tier (DB2) moves itself closer to consumability by the Web 2.0 client tier by providing the ability to expose web services that give direct access to database table operations. Meanwhile, the conceptual client tier (XForms) now contains a comprehensive interactivity layer to drive a rich front-end user experience as well as a communications layer that allows the client to speak directly to web services. The net result is that a large class of important applications are enabled without the need for middle tier programmers.
To learn more about this union of technologies, including reference customer information, have a look at the new Tax Solution Education Kit.
Today at the IMPACT 2008 conference, IBM made an announcement covered by the press about its new Web 2.0 Portal Software capabilities. That announcement included the following content from the Lotus Forms team:
"With diverse data sources and inefficient, time consuming processes needed to turn data into useful information, more companies are turning to web-based forms. In response, IBM announced today a new lightweight entry-level product code-named TotalForms which the company plans to ship in beta in June."
"TotalForms is an easy-to-use version of IBM's Lotus Forms software that will enable nontechnical people to quickly create, publish and route electronic forms submitted via the web. It can be used for a variety of tasks including customer satisfaction surveys, job applications and product orders. Based on an open and scalable nonproprietary software platform and Web 2.0 technology, TotalForms will integrate with IBM WebSphere Portal to provide one familiar interface for customers."
So you know, TotalForms will offer not just web forms based on XForms, but also a web-based design experience for forms. More importantly, though, the value of TotalForms is that it provides a server-side repository for completed form data as well as listing and reporting functions. This means that users will have a point-and-click gestural design experience and run-time capability for the full, end-to-end Web 2.0 application. This puts the computing power into the hands of the people who have the problems that computing power can solve.