Well, I have always looked at Workplace Forms as an eco-friendly product line since, after all, it does dramatically cut down on the use of actual paper, which cuts down on the cutting down of actual trees. But this post is actually quite a bit more technical than that. In this case, we're going to talk a little about the *Workplace Forms Designer 2.7* (part of the 2.7 GA release this week) and the three wizards it includes to help form authors create dynamic tables and the components and formulae that make them dynamic. The focus, of course, will be the new wizard in 2.7.
The first, the *Table Wizard*, helps you create a repeating set of user interface controls over some data. You get a chance to set up nice presentational features like column headers, column dividers and row highlighting, and you also get Add and Delete buttons for the table. The second, the *Row Operation Wizard*, allows the form author to express a formula over two columns of a table, with the result placed into a third column. The formula is defined on the underlying data, of course, via an XForms calculate. The wizard is ideal for setting up that "line total = quantity * unit price" formula that shows up in every purchase order. It could also be used as the starting point for setting up the XForms constructs for a more complex calculation.
Although we've made improvements to these wizards in 2.7 (and will do so again in the future), the new kid on the block is the *Column Summation Wizard*, which is ideal for setting up a summation over a column of data, e.g. for taking the subtotal of a purchase order. The XPath expression written by this wizard is very interesting and relates to the title of this post. All you do as the form author is gesture at the user interface object in the column to be summed and at the user interface object where the result will go. Here is a sample of the underlying XPath calculate that results:
This formula requests the summation of all the line "total" values on each row of a purchase order form, and if the result is not a number (NaN) for some reason such as bogus user input, the formula produces an empty string result rather than the summation value.
This is helpful in and of itself because XPath, XML Schema and other W3C technologies tend to regard an empty string as not a number. So if you try to add or multiply a data node that has no value yet (like the quantity of items desired), then NaN starts to percolate through your form unless you take special measures.
But the really special sauce is the XPath predicate shown here: [.!='']. An XPath predicate is used to filter out nodes that have some undesirable property. In this case, we want to get rid of the empties and only sum up the data nodes that have a non-empty value. The dot means "this node", so the expression in square brackets says "this node must not be equal to the empty string".
Suppose for a moment that we didn't add this extra tidbit to ignore the empties. Suppose a user has a partially filled out purchase order form with three or four rows of data leading to a non-empty column total. If the user then adds one or more empty rows to add more data, then at that moment the column total formula would update itself to calculate the total over rows that included empties, which would result in a NaN, which the 'if' part of the expression converts to an empty string result. So the user would get treated to this odd experience of having the column total disappear each time he tries to add another row to the purchase order. The extra predicate created by the *Column Summation Wizard* says to ignore the empties so that the user will have a chance to fill in a row of data before it contributes to the column summation.
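To make the behavior concrete, here is a small Python sketch (the data values are made up, and the code mimics rather than invokes an XPath engine) of how XPath 1.0 number conversion makes NaN percolate through a sum, and how the [.!=''] predicate prevents it:

```python
import math

# Hypothetical column of line totals; one row is still empty.
totals = ["10.50", "", "4.25"]

def xpath_number(s):
    # XPath 1.0 number() turns a non-numeric string (including '') into NaN.
    try:
        return float(s)
    except ValueError:
        return float("nan")

# Like sum(total): the empty node makes NaN percolate through the result.
naive = sum(xpath_number(t) for t in totals)

# Like sum(total[. != '']): the predicate filters out empties before summing.
filtered = sum(xpath_number(t) for t in totals if t != "")

print(math.isnan(naive), filtered)  # True 14.75
```

The filtered sum keeps producing a useful column total even while the user is still filling in rows.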
Well, no blogs from me next week, so seems a good idea to knock off another one this week...
The XML namespaces rec (pardon the pun) states that an attribute which is namespace unqualified is 'local' to an element and is uniquely identified by a combination of the attribute's local name and the type and namespace URI of the containing element.
The word identify really should be used more sparingly, as here is a case where its misuse has caused years of confusion and acrimony in the XML community. An identifier is something that establishes the identity of something else. You cannot have two things associated with the same identifier unless they are identical things.
I am often frustrated by seasoned W3C folks who say that "this depends on your definition of identify" and honestly believe this is a defense of the confusion in the namespaces rec. This is like saying to me, "Well, you're right unless you have a definition for identify that doesn't identify things, which is what we did at the W3C."
To see the problem, you have to look earlier in the spec where a namespace n1 is associated with a URI and then the following code appears:
<good a="1" n1:a="2" />
The problem is that the element 'good' is in the same namespace as n1. So now you have a local attribute a that is essentially given meaning by an element that is in the same namespace as the 'global' attribute n1:a. Yes, the two attributes have different values.
From this we have to infer that, although a local attribute is given meaning based on the containing element and its namespace, a global attribute with the same local name and namespace qualified into the same namespace can actually mean something totally different.
In other words, we have two attributes with the same local name, contained by the same element, and given meaning by the same namespace URI, but they are not identical. This is the local attribute not being 'identified' (in my sense) by local name and containing element type and namespace.
The technically subtle W3Cer will tell you that there was no reason to spell out using words in the normative part of the spec the fact that the two attributes are different things because the spec says that local attributes are in a different partition than global ones. Problem is, this partitioning info is in a non-normative part of the spec, and particularly in the same part that has the language about how local attributes are 'identified' by local name and containing element type and namespace.
Anyway, the upshot is that an XML vocabulary does not need to but is allowed to say that local and global attributes with the same local name can mean different things. If they're supposed to mean the same thing, then the XML language has to define a precedence rule for what happens if the two attributes differ in value. Here's an example:
<data xmlns="http://example.org" xmlns:ex="http://example.org">
  <price currency="USD" ex:currency="EUR">10</price>
</data>
Question: Is the price in USD or Euros?
Answer: Depends on who designed the language.
Second answer: Don't do that.
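You can watch this distinction with any namespace-aware parser. Here is a sketch with Python's standard library (the default namespace declaration on the root is dropped here purely to keep the element lookup simple; the attribute behavior is the point):

```python
import xml.etree.ElementTree as ET

doc = """<data xmlns:ex="http://example.org">
  <price currency="USD" ex:currency="EUR">10</price>
</data>"""

price = ET.fromstring(doc).find("price")

# The unqualified attribute and the namespace-qualified attribute are
# distinct keys, even though both have the local name 'currency'.
print(price.attrib)
# {'currency': 'USD', '{http://example.org}currency': 'EUR'}
```

So a conforming parser happily hands you both attributes, and it is entirely up to the vocabulary designer to say what that means.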
Interestingly, the XHTML working group came up with a fascinating example where it is legitimate and sensible to have a global and local attribute with the same local name but completely different meanings. It has to do with the next version of XML events. After hours of discussion, the decision was that they weren't going to do that (second answer above). Not because it's illegal, but because it's too subtle for most XML people.
Well, the XHTML group may change their minds, but even if they don't, the example is really worth understanding because it actually makes sense why you'd want the two attributes to mean something different. Stay tuned, I'll tell you all about it when I get back...
The annual World Wide Web Conference is being held next week in Banff, Canada. It's about an hour's flight from Victoria, where I live, so of course I'm going!
I served on the program committee of the XML Web Data track this year. Competition to get into the conference is quite stiff, so recommending a paper for acceptance is a low probability event. Of the nine papers I reviewed, most were really good, and a couple that I had hoped would make it to the program will indeed appear. I am pleased to be chairing a session on Thursday May 10 that includes one such paper (I did not receive the other two for review).
I will also be presenting in the W3C Track on Friday at 1:30 as part of an Architectural Integration session organized and chaired by Steven Pemberton. I will be presenting on The W3C Rich Web Application Backplane, a forward-looking view of the possibilities for integration and composition of web applications leveraging W3C formats and APIs.
Finally, I'll be presenting in the Dev Track on Saturday at 1:30. My talk will demonstrate a schema-initiated drag-and-drop design experience for XForms using the Lotus Forms Designer.
Hope to see you there!
The XML Schema language is a good language for capturing the syntactic constraints of an XML vocabulary. But let's face it, it is really designed more for describing data. It's just not powerful enough to be used for making normative contributions to W3C Recommendations like, say, XForms.
There is a reasonably popular view that the schema is also normative, and even that a disagreement between the recommendation and the schema might sometimes be resolved in favor of what the schema says. Actually, the recommendation is normative. In fact, the recommendation is typically provided in two or three formats (sliced up version, single HTML file, diff marked version), and only one of those is considered normative. Even more to the point, English is, for better or worse, the only language in which the normative recommendation is expressed. Any other versions are considered to be informative.
Quite apart from what anyone tells you about the normativeness of a schema, you can determine that the schema associated with a recommendation is informative, not normative, if it does not appear in TR space. This is a subdirectory of the W3 website named 'tr' and used to publish all technical recommendations of the W3C. In the case of XForms, the schema lives in the working group space, not TR space, so it is informative.
However, the location of the schema isn't really the deciding factor for me. Even if a schema did appear in TR space, it still would only be informative in my opinion because there are lots of language constraints that you just cannot express in schema but which are expressed in the recommendation. A number of great examples of this can be found in XForms, two of which are explained below.
Perhaps the easiest is the use of XPath expressions in XForms. In the XForms schema, a number of attributes like ref have values that must be XPath expressions. The schema declaration for those attributes declares a type of xforms:XPathExpression. But here is what the XForms schema defines for the XPathExpression type:
<xsd:simpleType name="XPathExpression">
  <xsd:restriction base="xsd:string"/>
</xsd:simpleType>
Shock of shocks, it's a no-restriction restriction from the base type of xsd:string. You can just see two spec writers' hands getting tossed way up in the air on this one and being caught, if only tentatively, by wrists not anxious to end up with carpal tunnel syndrome from the attempt to express the XPath BNF rules as an XML Schema restriction. (Those with formal language training can groan a little louder than the rest.)
The point is that schema just can't touch this. But, the rules of XPath are clearly and normatively defined in the XPath recommendation, and the XForms recommendation dutifully cites XPath in its list of normative references. That's why the recommendation text is normative and the schema isn't.
Not to put too fine a point on this, but you might claim that the XPath example is a perverse one. Well, it's not the only one. For example, a number of XForms elements like select1 have a required single node binding. For such elements, the single node binding attaches the user interface control to a node of instance data so that user input can be placed into the data model of the form. A single node binding can be expressed using either a ref attribute or a bind attribute. Unfortunately, XML Schema doesn't have a way to express the requirement that one of two attributes must appear. So, in the schema for XForms, we say that both ref and bind are optional even though that's not very accurate. Separately, each is optional, but that's only half of the story that XML Schema can tell. The full story appears in the normative text of the XForms Recommendation.
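Because XML Schema cannot say "one of ref or bind must appear," an XForms processor ends up enforcing that rule in code. Here is a minimal sketch of such a check (a hypothetical helper, not taken from any product, and with namespace handling omitted for brevity):

```python
import xml.etree.ElementTree as ET

def has_single_node_binding(elem):
    # The co-occurrence constraint XML Schema can't express:
    # at least one of 'ref' or 'bind' must be present.
    return "ref" in elem.attrib or "bind" in elem.attrib

ok = ET.fromstring('<select1 ref="size"/>')
bad = ET.fromstring('<select1/>')

print(has_single_node_binding(ok), has_single_node_binding(bad))  # True False
```

The schema validator passes both elements above; only the extra check written from the normative prose catches the second one.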
The purpose of XForms is to express the core XML data processing asset used in sophisticated data collection scenarios.
In fact, it would be better if XForms were called the XML data processing language (XDP or XDPL) because XML is about standardizing data and about 80% of business transactions are based on filling out some kind of form to collect the transactional data.
An XForm contains one or more XML data instances. An instance is an arbitrarily structured XML data document that is typically an instance of some XML schema that expresses the static validation rules for a target namespace.
One can write an XForm without an XML schema by just expressing the XML data in an instance. This is because XForms provides other channels of data validity checking that can be easier to work with when only simple data type validation is needed. For example, you can use an XForms type declaration to associate an xsd:date or similar data type with an XML data node without writing an XML schema for your XForm.
But XForms validity checking is also dynamic, in recognition of the fact that validity of some values can be based on other values or the aggregation of other values. For example, in an interlibrary article request, the upper bound page number in the journal must not be less than the lower bound page number. Or, the user is only authorized to make a purchase order with less than $10,000 total value.
The latter example is important because it leads to the conclusion that we not only need a way of testing data values relative to other data values, but also that we need a way of calculating data values that are then used in validity tests.
From there, it is not a big leap to conclude that generalized XML data processing requires some way to indicate dynamically whether further changes should be allowed to certain pieces of data or whether certain parts of the data are still applicable to the transaction based on other data values. A good example is a mortgage preapproval form that can handle both single and joint applications. The co-applicant data is only relevant if the user selects the joint application mode.
XForms allows the form author to express formulae for these aspects of data, which are called model item properties, or just MIPs. Not too surprisingly, the names of these MIPs include calculate, constraint, readonly, and relevant.
Of course, there is no point in representing data, calculating values over data, and validating data if you have no way to change the data. XForms allows simple content data values to be changed, but it also allows insertion and deletion of larger blocks of data that contain internal structure because this is essentially what's needed to add or delete a row from a table.
Most importantly, XForms offers form controls that expose data to the surrounding application context. If the data changes, the form controls change. This includes not only exposing a changed simple content data value, but if a set of form controls are associated with a repeated sequence of structured data, and the number of data nodes in the sequence changes, then form controls are created or destroyed as needed to respond to the change of data.
XForms is all about thinking of the data first and driving outward to how that data gets exposed to applications. Perhaps the most prevalent of such applications are for presenting the data to a human user, though even human users have highly varied capabilities. For example, the desktop user and the PDA user have very different visual capabilities. Of course, this argument extends easily to meeting the far greater accessibility needs of the sight-impaired.
For this reason, the XForms form controls represent what I've often called an intent-based user interface. It's kind of neat to see the term popping up more frequently now. It gets to the heart of the matter: XForms does not provide a presentation layer. XForms relies for presentation on a host language like XFDL (in Workplace Forms) or XHTML (in web browsers). I am certainly hoping that VoiceXML will come to the conclusion that they should soak up the benefits of XForms rather than reinventing all of this stuff over again (partly because almost everybody underestimates how much work goes into it until it's too late; but maybe they will prove to be wiser than the rest).
I sometimes get asked whether XForms will next extend itself to standardizing the actual presentation layer. Clearly, from above the answer is no. XForms standardizes the core XML data processing asset, and more work will go into doing a better job of that. The key issue we want is to address interoperability and reusability across applications and user contexts of the data processing behaviors that are fundamental to completing a transaction.
A lot of talks about XForms are a bit technical in nature because the people who manufacture XForms processors tend to be technical people who understand the business value of XForms in terms like "model-view-controller" architectures, a superset of AJAX, and software engineering benefits like abstraction. But the C-level executive cares not about these things. It is important to connect these benefits to what the C-level executive does care about.
The C-level exec is about efficiency, flexibility and accountability.
The CEO wants
- efficiency via reduced operating costs and decreased time to close deals
- flexibility to react to new business processes and changes of business partners
- accountability for control of expenditures and compliance with regulations
The CIO wants to achieve
- efficiency through end-to-end business process integration and ability to leverage business objects as IT assets
- flexibility through a malleable system architecture that can be rapidly reconfigured with replaceable, reusable components
- accountability through transaction record auditability
We need to speak about how XForms helps the organization to achieve better results along these metrics. This is where the global picture of what XForms does against the backdrop of the classic 3-tier architecture comes in handy.
The C-level exec is not up at night worrying about the server tier, where databases, content managers and workflow engines live. These may be expensive systems, but they are designed to be robust and highly scalable systems whose metrics relative to the size of the anticipated user base of a system are easier to quantify. In other words, they are low risk numbers that are easy to budget for.
The C-level exec is not as concerned about the client tier, where the browser and OS live, again because the metrics are easy and stable.
The C-level exec has the most uncertainty and risk at the middle tier. There are two parts to it. There is the part that uses APIs to talk to DBs, CMs and workflow engines to implement custom application logic as needed. This part is not as scary because we have reliable, robust, scalable, stable APIs, and because the components we're talking to are those reliable, robust, scalable, stable, expensive systems sold by companies with important people's ties to yank when there is a support problem. Then there's the scary, assembly-language-of-the-web part of the middle tier. This is the part that has to juggle a dynamic, multistep end-user experience on the anything-goes client side, where the web browsers are free and you get what you pay for. The upfront cost of the thin client is low, but the time to market with new offerings is the most highly affected because a lot of unpredictable, time-consuming work lives here.
XForms takes the risk out of this part of the middle tier in three ways:
- It allows you to standardize and consolidate the end-user experience into a single business object that represents the overall transaction.
- It allows that business object to access the web services of an SOA as a natural part of the process of going from empty transaction data to completed transaction data.
- It fully represents the transaction and therefore can function as an integral part of the records needed for auditability.
As a final thought, it should be clear from the exponential scales of complexity and cost in the diagram above that XForms-based systems have business value through complexity containment, and that said value is rightly reflected in software product cost.
A rich client platform (RCP) is a framework of widgets, widget containers and other processing capabilities that facilitate sophisticated application development and deployment. One important aspect of RCPs is that they tend to be highly cross-platform because platform support is provided by the framework. A rich internet application (RIA), by contrast, delivers a comparably sophisticated experience through a stock web browser, with the supporting framework running on the server side.
In both cases, you get sophisticated capabilities and cross-platform support. In the case of RCPs, you get these benefits by learning a framework, deploying the platform to the client-side and developing applications targeted for the platform. The one negative here is that you are invested heavily in the platform, so if some aspect of the platform is a showstopper (such as large download size), then you won't be able to deliver the application. In the case of RIAs, you get the benefits by learning a framework (browser HTML/JS/AJAX + J2EE), deploying the platform to the server side and developing applications targeted for the platform. The one negative here is that you are heavily invested in the platform, so if some aspect of the platform is a showstopper (such as no offline operation of the application), then you won't be able to deliver the application.
XForms solves the RIA/RCP conundrum by providing a framework for expressing the core data processing asset of an XML-based application independently of the deployment paradigm. There are XForms implementations of both types, RCP and RIA. In particular, the IBM Lotus Forms Webform Server is a product that converts an XForms-based Lotus Form into a rich internet application (RIA) automatically. The IBM Lotus Forms Viewer is a rich client platform (RCP) that provides the run-time for XForms-based Lotus Forms. The application developer is free to think about the application without thinking so much about the peculiarities of how it will be deployed.
We're just finishing up the XForms face-to-face meeting in Amsterdam. Given my prior post and the approach of XForms 1.1 toward last call status, it seemed a good idea to talk about the improvements to XForms 1.1 that make the repeat construct easier to work with.
It's about being able to say more exactly what you mean, really. For example, on my first night in Amsterdam, my hotel room became quite cool, but it wasn't clear how to turn up the heat. Apparently I'm just not old school enough to relate to a radiator, so I called down to the front desk. I was informed that in order to turn the heat up in my room I had to "squeeze the knob". On the radiator was implied. Despite being pretty good with a pun, it seems I still needed a friend to help me fully appreciate how important my complaint about the advice truly was, especially in Amsterdam. In fine propeller head form, I complained that the proper advice was to "turn" the knob. On the radiator was again implied. Oblivious to any possible alternate interpretations, I proceeded through the explanation that while squeezing the knob was a necessary component of turning it, it was also an implicit part of the process which created the friction necessary to ultimately turn up the heat. From the radiator was implied.
Anyway, the point is that a lot of trouble can be avoided when one is able to say exactly what one means to say. In the case of XForms 1.1 authors, there are two new attributes on the insert action that make it a lot easier to manage repeat constructs. In XForms 1.0, we add to the container element of a sequence an extra subelement to act as the prototypical data. We must add the prototype as a child of the container element because the insert action in XForms 1.0 can only get the prototype from the last node of the sequence of items over which it operates. This limitation forces you to adjust the repeat to omit the last node, and it forces you to add application logic to remove the last node when the data is submitted or alternatively before it is processed on the server side.
In XForms 1.1, there is a new origin attribute on the insert action. This attribute allows you to give an XPath expression that says where the prototype for insertion is located. This allows the prototype to be placed in another instance rather than being stored in the container element of the sequence, which eliminates the necessity of adjusting the repeat and of removing the prototype later (it's already in a separate instance).
The second issue that XForms 1.0 solved by putting the prototype data at the end of the container element is the problem of the container element becoming empty. It is easy to understand the need for an empty shopping cart, but if the data container becomes empty, then the insert nodeset attribute produces an empty nodeset, so you don't know where to insert the copy of the prototype node. To be clear, if nodeset="/cart/item" returns an empty nodeset, it's like walking off a cliff because you don't know that the XPath expression visited the cart element right before it found zero elements named item within it. In order to insert into an empty repeat, the parent of the elements being repeated must be non-empty in XForms 1.0.
In XForms 1.1, we solved this by adding a context attribute to insert. The context attribute sets the context for evaluating the nodeset attribute. The expected use of this attribute is to choose the parent or container element of the nodeset. So, when the nodeset resolves to an empty nodeset, the context node is used as the container into which the prototype node is inserted. So here's the shopping cart example in XForms 1.1.
<!-- Live instance data starts as an empty cart; the prototype lives in a separate instance -->
<xf:instance id="liveData" xmlns="">
  <cart>
  </cart>
</xf:instance>
<xf:instance id="protoCart" xmlns="">
  <cart>
    <item>
      <name/>
      ...
    </item>
  </cart>
</xf:instance>
...
<!-- repeat operates over the live data -->
<xf:repeat nodeset="item" id="repeat-cart">
  ...
</xf:repeat>
<!-- Add new row after any current row, but do it in a way that can also handle zero rows. -->
<xf:trigger>
  <xf:label>Insert</xf:label>
  <xf:insert ev:event="DOMActivate" context="/cart" nodeset="item"
             at="index('repeat-cart')" position="after"
             origin="instance('protoCart')/item"/>
</xf:trigger>
<!-- Delete a row from the repeat. -->
<xf:trigger>
  <xf:label>Delete</xf:label>
  <xf:delete ev:event="DOMActivate" context="/cart" nodeset="item"
             at="index('repeat-cart')"/>
</xf:trigger>
To complete the translation of the XForms 1.0 example into XForms 1.1, we should now look at an example in which you want a repeat that always retains at least one row. If the user hits delete for the last remaining row, then the row stays, but the data is cleared from it.
<xf:trigger>
  <xf:label>Delete</xf:label>
  <xf:action ev:event="DOMActivate">
    <xf:delete context="/cart" nodeset="item" at="index('repeat-cart')"/>
    <xf:insert context="/cart" origin="instance('protoCart')/item" if="not(item)"/>
  </xf:action>
</xf:trigger>
In the final insert, we see the appearance of the new if attribute to more precisely communicate that the action is conditionally run. In XForms 1.0, you have to use an XPath predicate to produce an empty nodeset, which is a bit like saying "squeeze the knob". It gets the job done, but it ain't pretty.
The final insert also shows off one of the new things decided at this face-to-face meeting of the XForms team. In the latest working draft, the nodeset binding is listed as required, but we are changing that to optional in order to make writing inserts like the final one above look more natural. It just identifies the container element as the destination of the insert, the origin as the source of the insert, and a condition that says when to do the insert. And now my room is too hot!
Modified by John M. Boyer
Machine learning today is every bit as calculated, as simulated, as machine intelligence is. It is easier to use machine intelligence to highlight how much greater human cognition is, which is why I've been using a machine intelligence algorithm over the last several entries. However, the conclusion drawn so far is that, while machine intelligence is only simulated, it is still quite effective and valuable as an aid to human insight and decision making. Machine learning offers another leap forward in the effectiveness, and hence value, of machine intelligence, so let's see what that is.
Machine learning occurs when the machine intelligence is developed or adapted in response to data from the domain in which the machine intelligence operates. The James Blog entry only does this degenerately, at a very coarse-grained level, so it doesn't really count except as a way to begin giving you the idea. The James Blog entry plays a game with you, and if he loses, he adapts by increasing his lookahead level so that his minimax method will play more effectively against you next time. In some sense, he learned that you were a better player. However, this is only a single integer of configurability, with only a few settings of adjustment, that controls only one aspect of the machine intelligence algorithm's operation. To be considered machine learning, a method must typically have a more profound impact on the operation of the algorithm, with much more adaptation and configurability based on many instances of input data. An example will clarify the finer-grained nature of machine learning.
The easiest example of which I can think is a predictive analytic algorithm called linear regression. Let's say you'd like to be able to predict or approximate the purchase price of a person's new car based on their age. Perhaps you want to do this so that you can figure out what automobile advertisements are most appropriate to show the person. Now, as soon as you hear this example, your human cognition kicks in and you rattle off several other likely variables that would impact the most likely amount of money a person is willing to spend on a car, such as their income level, debt level, nuclear familial factors, etc. This analytic technique is typically called multiple linear regression (MLR) exactly because we humans most often dream up many more than two variables that we want to simultaneously consider. Like most machine learning techniques, MLR does not learn of new factors to consider by itself. It only considers those factors that a human has programmed it to consider. When they are well chosen, additional variables typically do make an MLR model more effective, but for the purpose of discussing the concept of machine learning, the simple two-variable example suffices since your mind will have no problem generalizing the concept.
Suppose you have records of many prior car purchases, including a wide and nicely distributed selection of prices of the cars and ages of their buyers. This is referred to as "training data". If you plotted the training data, it might look something like the blue points in the image below. Let purchase price be on the vertical Y-axis since it is the "dependent" variable that we want to predict, and let age be on the X-axis since it is a predictor, or "independent", variable. MLR uses a standard formula to compute a "line of best fit" through the given data points, like the line shown in red in the image.
A line has a formula that looks like this: Y=C1X1+C0, where C1 is a constant that governs the slant (slope) of the line, and C0 is a constant that governs how high or low the line is (C0 happens to be the point where the line meets the Y-axis, and the line slopes up or down from there). If we had more variables, then MLR would just compute more constants to go with each of them. For example, if we wanted to use two variable predictors of a dependent variable, then we'd be using MLR to create a line of the form Y=C2X2+C1X1+C0.
Technically, MLR computes the constants like C1 and C0 of the line Y=C1X1+C0 in such a way that the line minimizes the sum of the squares of the vertical (Y) distances between each data point and the line. For each point, we take its distance from the line as an amount of "error" in the prediction. We square it because that gets rid of the negative sign (and, less importantly, magnifies the error resulting from being further from the line). We sum the squares of the errors to get a total measure of the error produced by the line, and the line is computed so as to minimize that total error.
Once the constants have been computed, it is a trivial matter to use the MLR model as a predictor. You simply plug the known values of the predictor variables into the formula to compute the predicted Y-value. In the car buying example, X1 is the age of a potential buyer, and so you multiply that by the C1 constant, then add C0 to obtain the Y-value, which is the predicted value of the car.
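To make the mechanics concrete, here is a minimal sketch in Python of the single-predictor case. The closed-form least-squares formulas below compute C1 and C0 as described above; the training data is invented for illustration.

```python
# Minimal sketch of one-predictor linear regression by least squares.
# Variable names follow the car-buying example; the data points are invented.

def fit_line(xs, ys):
    """Compute C1 (slope) and C0 (intercept) minimizing the sum of squared errors."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form least-squares solution for a single predictor variable
    c1 = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
         / sum((x - mean_x) ** 2 for x in xs)
    c0 = mean_y - c1 * mean_x
    return c1, c0

# Invented training data: (buyer age, purchase price)
ages = [25, 30, 35, 40, 45, 50]
prices = [18000, 21000, 24000, 27000, 30000, 33000]

c1, c0 = fit_line(ages, prices)

def predict(age):
    # Plug the known predictor value into Y = C1*X1 + C0
    return c1 * age + c0
```

Once `fit_line` has "learned" the constants from the data, `predict` is nothing but the simple linear formula, which is exactly the point made above: the intelligence lives in the learned constants, not the prediction code.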
In this way, hopefully you can see that the MLR "learns" the values of the constants like C1 and C0 from the given data points. Furthermore, the actual algorithm that produces the machine intelligence only computes the result of a simple linear equation, so hopefully you can also see that the predictive power comes mainly from the constants, which were "learned" from the data. In the case of the minimax method, most of the machine intelligence came from the algorithm, but with MLR-- as with most machine learning-- the machine intelligence is for the most part an emergent property of the training data.
Lastly, it's worth noting that there are a lot of "best practices" around using MLR. However, these are orthogonal to the topic of this post. Suffice it to say that just like the minimax method has a very limited domain in which it is effective as a machine intelligence, MLR also has a limited domain. For example, the predictor variables (the X's) do need to be linearly related to the dependent variable in reality. However, within the limited domain of its linearly related data, MLR is quite effective and an excellent example of a simple machine learning technique that produces machine intelligence within that domain.
A Lotus Form is a document currently expressed in an XML vocabulary called XFDL (html version, pdf version).
This is significant enough, in and of itself, to be set off in a paragraph. XML is a de facto industry standard from the W3C, and this means that widely deployed, interoperable, industry standard software toolkits can be used to introspect and manipulate the content of XFDL, i.e. Lotus Forms documents, thus mitigating issues of vendor lock-in.
But the story gets better...
The XFDL format internally employs W3C standard XForms, so without further reference to any vendor-specific documents, the standard indicates where in the XFDL document to look for the data content created by an end-user who fills in the Lotus Forms document. Anyone with access to a Java reference manual could write code to prepopulate data into an XFDL (Lotus Forms) document before it is delivered to an end-user, and they can write code to extract data from the document when it is returned to the server for processing.
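To make that concrete, here is a sketch of data extraction using only standard XML tooling (Python's xml.etree here, though the same applies to any DOM/XPath library in Java or elsewhere). The XFDL wrapper below is heavily simplified and its structure outside the XForms elements is illustrative; the XForms namespace URI is the real W3C one.

```python
# Extracting user-entered data from an XFDL document with standard XML tooling.
# The XFDL wrapper shown here is simplified/illustrative; the key point is that
# the XForms standard tells us where the data lives: under xforms:model/xforms:instance.
import xml.etree.ElementTree as ET

XFORMS_NS = {"xforms": "http://www.w3.org/2002/xforms"}

xfdl_doc = """<XFDL xmlns:xforms="http://www.w3.org/2002/xforms">
  <xforms:model>
    <xforms:instance>
      <purchaseOrder xmlns="">
        <quantity>3</quantity>
        <unitprice>19.99</unitprice>
      </purchaseOrder>
    </xforms:instance>
  </xforms:model>
</XFDL>"""

root = ET.fromstring(xfdl_doc)
instance = root.find(".//xforms:instance", XFORMS_NS)

# Server-side prepopulation would be the reverse: set .text on these nodes
# and re-serialize the document before delivering it to the end-user.
quantity = instance.find(".//quantity").text
unitprice = instance.find(".//unitprice").text
```

No vendor API appears anywhere in that sketch, which is the point: the standards tell you where to look.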
But the story gets even better...
XFDL incorporates the W3C XML Signatures standard, so widely available industry standard tools are available from multiple vendors for validating the security of digital signatures in XFDL documents. In addition to the Apache XML Security implementation, it is notable that Java itself now natively contains support for XML Signatures (JSR 105).
These standards mean that the entire server-side lifecycle of the Lotus Form can be achieved without being locked into using any particular vendor's API. Any application server, any portal server, any server-side environment that can receive HTTP POSTs and process XML in the POST data can be used in combination with Lotus Forms.
And if you ever hear a vendor try to play up high availability of a client-side browser plugin for processing their favorite file format, ask them "Plugin?? What plugin??" With Lotus Forms, you don't need a browser plugin at all because the Lotus Forms Web Form Server converts the XFDL into dynamic HTML that can be processed directly by the browser with no plugins at all. Best of all, when the user finishes interacting with the Lotus Form, the resulting XFDL document is delivered to the application server endpoint, where it can be processed as XML using the standard APIs.
As promised, here are the links to my W3C Track talk at WWW2007 on the Rich Web Application Backplane (html, click here for the ppt file). Also, note that I heavily annotated my slides so you can read more about what I said than just the points on the slides. Let me know your thoughts on the content...
At the specification level, the W3C Forms working group is working on modularization of XForms as one means of promoting incremental adoption. This is following the simple business axiom that "embrace and extend" is economically preferable to "rip and replace". If you have existing web applications, and you want to start incorporating selected features of XForms to get at certain benefits, it helps to be able to get at just those features without having to change your entire application over to XForms in one development cycle. Modularization supports agile development.
The Forms working group is now also working on XForms 1.2, even as XForms 1.1 continues toward full W3C Recommendation status. In XForms 1.2, the second approach the Forms working group is using to promote incremental adoption is to stream-line web application authoring through emphasis on attribute decoration and exploitation of UI element nesting as a means of expressing the simpler classes of MVC applications. The attributes are attached directly to UI elements, but they imply a proper XForms MVC architecture. In other words, the XForms processor can read the attributes and auto-generate a proper model for you. To get an idea of what the stream-lined vocabulary might look like, and what canonical XForms it might correspond to, look here.
The approach of providing stream-lined syntax is simpatico with modularization for two reasons. First, it means we can provide the stream-lined vocabulary and processing model as a module separately from modules that provide the components of the core XForms engine (e.g. the instance data module, the submission module, and so forth). Second, the stream-lined syntax approach means that authors can start with attribute decoration and UI nesting, and then scale up in more complex functional areas as the need arises by adopting the core XForms modules appropriate to the problem at hand.
The stream-lined syntax approach is also interesting because techniques such as attribute decoration, UI nesting, and progressive elaboration of a document are techniques familiar to AJAX developers and made popular by AJAX libraries such as Dojo. In fact, I am really coming to the conclusion that AJAX libraries are a viable alternative to browser-maker adoption as a means of delivering the most web-centric components of the W3C technology stack. It should not be necessary for Microsoft, Mozilla, Apple, Opera and other browser makers to build a W3C technology into their browsers before it becomes widely adopted and deployed. More importantly, this means that the W3C, not the browser makers, defines the web and its standards.
We've been quite busy today in the XForms working group. Our work has been focused on modifications of the functionality of XForms submission for XForms 1.1. Tomorrow we will address any details that come up as the pieces of the spec are put together for the next working draft, but here is an overview:
- We're adding context information to the xforms-submit-done event so that an XForm can access the http return code.
- We're adding support for the DELETE method, which can more reasonably be done now that the return code will be available. One practical result of this is that you will be able to write an XForm that speaks Atom.
- We're allowing run-time modification of the submission URL. We'll be adding a child element called resource to the submission element, and it will be able to use a single-node binding or value attribute to construct a URL that includes data from an XForms instance.
- We're adding the ability to set content headers for the submission so that, among other things, an XForm can speak WebDAV. We'll be adding a child element called header that will take a name attribute as well as a single-node binding, value attribute, or content for the value of the header. You will be able to add as many headers as you want. The main detail to be fleshed out is how to say that user agents/XForms processors may choose to ignore some of the header settings.
- We're updating the context information available to xforms-submit-error so that XForms authors will be able to determine whether the submission failed due to a validation error versus an error resolving the URL.
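Taken together, the proposed additions might look something like the following sketch. The element and attribute names follow the descriptions above, but this was work in progress, so the working draft could settle on different spellings; the URL, header name, and instance structure are all illustrative.

```xml
<xforms:submission id="save" method="put" replace="none">
  <!-- Run-time computed URL via the proposed resource child (value attribute form) -->
  <xforms:resource value="concat('http://example.com/entries/', instance('doc')/id)"/>
  <!-- A content header, e.g. for WebDAV; the header name and value are illustrative -->
  <xforms:header name="If-Match" value="*"/>
</xforms:submission>
```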
Tomorrow we will be discussing aspects of the XForms type model item property. I'm anxiously awaiting the outcome because, frankly, I've been waiting a few weeks now to tell you about these aspects, but I need the group resolutions to occur before posting my comments. I think it will turn out well, though!
Someone recently asked me to explain why the XForms way of binding form controls to XML data with XPath was better than, say, using XSLT templates to match data with XPaths and output an XHTML form. Seemed a fair question.
An XSLT describes a transformation step that happens entirely before the user gets involved, so all the limitations of current XHTML forms still apply to the result of the transformation.
In comparison, XForms defines an intelligent, interactive layer that governs the fill experience for the data. Applying an XSLT to data does not, in and of itself, describe what happens as the user interacts with the data to create new content, so the author of the XSLT is left to encode the old script and HTML devices into the XSLT.
XForms standardizes the following:
- schema-based validation as you fill out the form
- dynamic validation constraints applied as you fill out the form
- calculated dependent values (like tax and total values on a purchase order)
- accessibility text
- access to web services to let an SOA enrich the fill experience
- preservation of data hierarchy in the UI
- access to template-based UI repetition responsive to data mutations
- user interface switching (presenting panels of controls conditionally or in sequence)
- dynamic user interface properties (readonly, visibility/display)
- XML data submissions to the Web 2.0 server tier
Not only does XForms standardize these things, but it standardizes how they work together.
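As a small illustration, here is a hedged XForms model fragment showing two of these standardized behaviors working together: a calculated dependent value and validation constraints applied during fill. The data shape and values are invented for the example.

```xml
<xforms:model xmlns:xforms="http://www.w3.org/2002/xforms">
  <xforms:instance>
    <order xmlns="">
      <quantity>2</quantity>
      <unitprice>10.00</unitprice>
      <total/>
    </order>
  </xforms:instance>
  <!-- Calculated dependent value: total tracks quantity and unitprice as the user types -->
  <xforms:bind nodeset="total" calculate="../quantity * ../unitprice"/>
  <!-- Dynamic validation constraint applied as the user fills out the form -->
  <xforms:bind nodeset="quantity" required="true()" constraint=". &gt; 0"/>
</xforms:model>
```

No script, and no page round-trip: the processor keeps total up to date and the constraint enforced because the declarations say so.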
To further drive home the point, note that XSLT transformation could actually be used as a sympathetic technology with XForms. You could, for example, write an XSLT that transforms an XML Schema into data+XForms rather than data+DHTML. Then, you'd still get all of the above value adds because XForms is a standardized interactivity layer that describes what happens after the XSLT generates the initial rendition.
XForms is an important standard for encoding the core XML data processing asset of a forms application for multiple reasons.
For one, it fills the gap of schema languages, which are predominantly focused on describing what constitutes correct data that should drive server-side transactions. XForms codifies what it takes to get from an empty initial instance of a schema to a completed correct instance of a schema.
Another reason is that XForms provides an efficient language for doing the above, one that is based on such well-known and time-tested techniques as the Model-View-Controller design pattern, declarative formulas based on the spreadsheet algorithm (the second killer app of computing), and asynchronous event-based scripting. Frankly, the scripting capability is very effective for two reasons. First, each command is fine-tuned to the data by all the declarative constructs, so each line of script can do the work of 10 to 100 lines of purely imperative code. Second, the scripting in XForms actions are quite focused on mutating the XML data, so you don't get the baggage of a general purpose language that can create arbitrarily complex data structures that exist only in the memory heap of the run-time processor.
On the one hand, one could conclude that all of these language efficiency and simplicity aspects of XForms are beneficial to application authors who write XForms. But on the other hand, it turns out that the crown jewel is that the language efficiency and simplicity of XForms reverberate into design tools that allow authors to write the application for a schema without much exposure to pointy brackets, much less coding.
To illustrate the point, I've created a 12 minute video of an XForms design experience on developerWorks to help show you more about what can be done with XForms based on a wizard-driven, point-and-click visual design experience. This video is similar to what I presented in the developer track at WWW 2007 in Banff. Please check it out and let me know what you think.
There's a new movie due out soon with a really cool name (Snakes on a Plane) and starring an actor I like, Samuel Jackson. Josh Friedman was supposed to help edit the screenplay, but he walked when studio execs threatened to change the name to something banal. However, Sam apparently signed up for the movie because of the name, so it had to stay. Anyway, Josh blogged about it months ago, and since then the movie itself has taken on this kind of Web 2.0 existence where the people's conjectures about the movie's content have actually caused the studios to adapt the movie!
Aside from changing the actual film content, the fan base (surreal as that sounds) has actually been responsible for creating marketing hype, movie paraphernalia and even an internet meme for the title. Apparently, Snakes on a Plane is now synonymous with "That's Life!" or my personal favorite way of saying the same ("Shh! It happens.") An example from everyday life might be:
Programmer on Friday afternoon: "I was going to be done with my code module today, but such-and-such didn't properly implement the standard, so now I have to work around their bug."
Hip Manager: "Snakes on a plane. What time did you say you'd be in tomorrow?"
"Neat, but what's this got to do with Workplace Forms," you say. Well, it's really good to know background cultural information no matter what. Background information is always good to have around. Take, for example, an exciting new extension that I've recently heard someone has built for the Workplace Forms Designer. It allows a form author to select a group of form controls; it then figures out the associated markup and allows the form author to indicate how many times to iterate that markup. Makes creating tables a snap!
Snakes on a plane! Here we are talking about all the cool new features that will be coming in the future of XForms, and the background cultural information that clearly needs further socialization is that
- Workplace Forms includes XForms
- XForms contains a repeat element
- It repeats a given set of controls as many times as the data requires. You can have any number of rows of controls needed to fit your data, though for speed and ease of use we usually recommend keeping a table within a page.
- The design experience is that you just write the table template. No design time duplication of markup.
That last point is really important. Suppose you have some popup selection control that you need duplicated 15 times. Then you decide it needs to have another selection. If you use an XForms repeat, you only have to change the one selection control in the repeat template. If you duplicated controls at design time, then you would have 15 changes to make.
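A sketch of such a repeat template follows; the data shape and control choices are invented, and the xforms prefix is assumed bound to the XForms namespace.

```xml
<xforms:repeat nodeset="items/item">
  <!-- One template row; the processor instantiates it once per item node,
       so a change made here applies to every rendered row at once -->
  <xforms:select1 ref="product">
    <xforms:label>Product</xforms:label>
    <xforms:item>
      <xforms:label>Widget</xforms:label>
      <xforms:value>widget</xforms:value>
    </xforms:item>
  </xforms:select1>
  <xforms:input ref="quantity">
    <xforms:label>Qty</xforms:label>
  </xforms:input>
</xforms:repeat>
```

Adding a new selection to the popup means editing the one select1 in the template, whether the form renders 2 rows or 200.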
But, hey, at least when the customer asks how to write more maintainable tables, we already have an answer.
Today is an exciting day for the IBM Victoria Software Lab: we have been pushing the new version of the IBM Workplace Forms product line into the IBM release queue. General availability is on June 13th, but it's on its way, folks!
The IBM Workplace Forms product line is designed to simplify the process of creating and maintaining forms-based web applications. The flagship knowledge product of the product line is an XML vocabulary called XFDL, which is an XForms-enabled language that provides a rich and precise presentation layer for XML data. The IBM Workplace Forms software supports the full life cycle of XFDL documents, from creation to run-time to processing of completed form documents.
The IBM Workplace Forms Designer is an eclipse-based visual development environment for creating XFDL documents. It has a main canvas for drag-and-drop design of the user interface. The drag-and-drop palette has all the usual suspects for atomic form controls as well as containment controls like repeat tables and group/switch panes. The user can also add custom groups of controls to the palette as single new controls. This makes it easy to standardize constructs like toolbars or address blocks across all forms in a workspace.
It is no surprise that a form filled with the Viewer and one filled with WebForm Server are indistinguishable to the web application developer. Our intent is to provide a single document paradigm for the form design and the web application development. So, both of our form run-time products are based on a common API that understands and implements much of XFDL. Although the nature of XFDL as an XML vocabulary means that standard XML tools could be used to process XFDL documents, the XFDL API is also made available to the web application developer. The idea is that your servlet/portlet will require less coding if you use an API that understands not just XML, but the XML vocabulary of the document. For this reason, the IBM Workplace Forms Server contains not just the WebForm server, but also the API.
Rounding out the Server product is the IBM Workplace Forms Server - Deployment Server. This server facilitates mass deployment of the Viewer and custom viewer extension files by providing an on-demand install experience that can be customized by the server administrator.
An interesting comment showed up on my prior post that seemed worth discussing as part of getting around to talking about what XFDL does for (and with) XForms.
In the last post, I talked about what XForms does and what it doesn't do from the big picture perspective. XForms does seek to standardize the language for the core information processing asset expressed within a form. XForms does not seek to standardize the exact presentation of a user interface. XForms delegates this task to different XML host languages like XHTML or XFDL (or others in the future like, hopefully, VoiceXML). These host languages exist to satisfy different requirements that exist beyond the core information processing requirements, but XForms allows us to design the underlying application once and handle presentment and other orthogonal requirements as the separate issues that they are.
The comment came in saying that one thing XForms doesn't do is allow the URL of a submission to be dynamically calculated. This is a technical limitation that does not affect the big picture of what we're trying to achieve with XForms. The XForms language does contain some technical limitations like this. Many of them are being addressed in XForms 1.1. This issue in particular should be addressed in the very next working draft of XForms 1.1.
However, because we have to deliver products that can be used to build applications now, it sometimes happens that implementations have to lead the standard with custom extensions until the standard comes along and specifies the common way that everyone on the working group agrees is the way the language will express a feature.
In XFDL, we make relatively few changes to XForms because we want our documents to be as conformant as possible. However, this particular issue of a dynamic URL comes up in almost all of the forms applications we have ever deployed, so we did not feel we could make our next release of Workplace Forms without some ability to do a dynamic URL on an XForms submission. At the same time, we do like it to be clear to the form author when an extension is being used. As a result, the addition was made using an attribute in the XFDL namespace. This makes it easier to find those bits of our XForms-based documents that will need special attention when trying to get them to interoperate with other XForms implementations.
In the upcoming release of Workplace Forms, you can create XForms-based applications that include a dynamic URL component by using an xfdl:actionref attribute instead of an action attribute on the xforms:submission element. The content of xfdl:actionref is an XPath expression whose result is the node containing the desired URL. Of course, the full power of XForms calculations can be brought to bear on that node to allow dynamic calculation of all or any portion of the final URL.
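Here is a hedged sketch of what that might look like in markup. Only the xfdl:actionref attribute itself comes from the feature described above; the instance name, structure, and URL are mine, and the xforms and xfdl prefixes are assumed bound to their usual namespaces.

```xml
<!-- Instance node holding the URL; its value can itself be the result of
     an XForms calculate (instance name and structure are illustrative) -->
<xforms:instance id="config">
  <config xmlns="">
    <serverURL>http://example.com/submit</serverURL>
  </config>
</xforms:instance>

<xforms:submission id="send" method="post"
    xfdl:actionref="instance('config')/serverURL"/>
```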
In XForms 1.1, I am expecting a more general mechanism that will allow the instance data to be used to set not just the URL, but eventually many of the other parameters too.
Modified by John M. Boyer
In a recent video interview, IBM CEO Ginni Rometty comments that Watson 2.0 will understand images that it sees, and that Watson 3.0 will be able to debate, i.e. to understand what it is talking about with another party. An impressive roadmap: each of these is an incredible leap forward from its predecessor.
It is, however, worth qualifying the term 'understand'. It is being used figuratively, not literally, to communicate the rough order of magnitude improvement in capability. When such a leap is made, it seems analogous to sentient understanding, even though it isn't. Imagine for a moment what Archimedes would have thought at first of a hand-held calculator, given that he had the power of Roman numerals with which to calculate pi to several digits. And yet, we would not now interpret such a device as artificial intelligence. As soon as the mechanical nature of a level of capability becomes clear, so too does the fact that it does not constitute sentient intelligence (Hofstadter's exposition of Tesler's "theorem").
You can see this assertion play out in multiple levels of Bob Sutor's scale of cognitive computing. There are levels that are clearly not cognitive intelligence, as Sutor points out, but if you lay out the scale on a timeline of decades or centuries, it is clear that each level might once have been interpreted as being indistinguishable from magic.
So where on Sutor's scale is Watson? And what implications does that have for development best practices?
Watson is clearly not on the "Sentient (we can do without humans) systems" level. As sentient beings, we don't just know things with a certain calculated accuracy or confidence level, or determine that we don't know if our confidence is low. We experience desire to know more, and we experience fear of the unknown. We are teetering bulbs of dread and dream (Hofstadter's delightful invocation of a Russell Edson poem). I urge you to let that characterization of us sink into your mind. In Watson technology, IBM has modeled a certain class of knowledge and mechanical reasoning, and in other research, IBM is doing so by simulating some of the known structure of biological brains. However, we don't yet know how to model fear and desire, dread and dream. In my opinion, these are inextricably bound together in sentient intelligence, separating it from simulated intelligence. In other words, intelligent behavior is a construct that works for the dread and dream engine of the sentient, and in the absence of dread and dream, seeming intelligent behavior is but a mechanical simulation of understanding. As an aside, I hope we only manage to model desire and fear around the same time we figure out how to model ethics (as Asimov cautions).
Does this characterization of Watson as a mechanical simulation of understanding detract from its value? Does it detract from the order of magnitude improvement it heralds as an usher of the era of cognitive computing? Of course not; quite the opposite. It is simply fantastic that this level of "Learning, Reasoning, Inference Systems" (Sutor's scale) is now computationally and economically feasible at the scale needed to help sentient intelligence (that's us) solve real-world problems. Quick, what is the square root of 7? Can't do it? No problem. Even if you're Arthur Benjamin, you'd be better off just hitting a few keys on a calculator. Quick, what are the most likely diagnoses for the patient's presenting symptoms? An "expert advisor" like Watson can be just what it takes to help determine the next best action, especially when time is of the essence because a life hangs in the balance.
The term "expert advisor" is appropriate. It conveys that the system is a "Learning, Reasoning, Inference System" that does not have sentient understanding and is therefore made available to advise and guide the actions of an expert. This is analogous to the way spreadsheets guide the results reported by accountants and chief financial officers. That being said, we also know not to put spreadsheets in the hands of toddlers. From a development practice standpoint, it is crucial to keep in mind that "expert advisor" means that the deployed system should be advising someone who is a qualified expert in the exact domain in which the "expert advisor" system was trained. Especially when a life hangs in the balance, access to the "expert advisor" system needs to be performed by those with expert qualifications in the domain because only they can reasonably be expected to use sentient understanding to interpret and follow up on the advice. In other words, the term 'expert' in 'expert advisor' should apply to the user more so than the advisor.
Now, given an enterprise workforce of those with qualified sentient understanding of their topic areas, Watson-style expert advisors are just the type of technological advancement that will help them work smarter, not harder, to meet the needs of customers and colleagues and to produce a competitive advantage for the business.
The new release of our forms software, version 4.0, is now only a few weeks from shipping. And as of this release, the product line will be known as IBM Forms! This is an incredibly important indicator of the strategic value IBM sees in the Forms business as a key component in building a Smarter Planet. The feature set coming in this new major release combines the best of Web 2.0 client application behaviors and design experience with the traditional strength of interactive XML data collection for which the prior releases of our product line are well known. IBM Forms documents are interactive web application instances that have many traditional capabilities that we have been building into them since 1993.
- They can always be serialized to provide a user with their own saved copy of their web application experience, to archive, to email, or to pass to the next step of a business process or workflow. They can be reinstantiated at any time to continue the interactive fill experience.
- They can be instantiated directly into a web browser without using a plugin, and they can also be instantiated in a client application for an offline fill experience, dramatically simplifying IT maintenance and platform support.
- They provide a multipage, high-precision visual layout with comprehensive accessibility support and localization.
- They operate over XML payloads that directly conform to the needs of back-end business functions, enabling straight-through XML processing systems.
- They enable rich web application interaction and behaviors based on the W3C XForms 1.1 standard.
- They allow users to attach supporting documents into the form in support of their business function, including spreadsheets, word processing files, images, videos, etc.
- They secure the business function implemented by the form with a comprehensive interactive document digital signature system that includes sectional signing, multiple signatures, and overlapping signatures, all of which protect not just the form data, but the form behaviors, the form appearance and the attached documents as well.
Now let's add to that what's coming in the new release:
- The ability to drive customer satisfaction and employee productivity through more efficient, compelling interfaces that can include
- gradients, border styling, thematic color control, combined text and images
- AJAX-driven XML web service updating without web page refreshing
- formatted HTML text for bold, italic, underline, text colors, font control, lists, hyperlinks, etc.
- Embedded custom HTML extensions and popup dialogs that can show maps or movies or even interact with the running form.
- The ability to deliver integrated solutions faster and spend less time and money maintaining them through a comprehensive new set of Design Experience features including
- Form Parts that make it easy to reuse and maintain common components consisting of one or many user interface controls and corresponding behavioral markup
- Data-driven form design with XML schemas that define back-end data formats; data type sensitive automatic user interface creation, including tables and label annotations
- Web service integration wizard that provides point-and-click, drag-and-drop web service consumption and user interface mapping.
- The "wizard of wizards" that enables rapid development or addition of a step-by-step wizard experience for a form
- SVN source code repository interaction, Integrated Design Time spell-checking, etc.
- The ability to use your Forms solutions how you want and where you want, including
- An automatic forms view portlet and support for Portal 7.0 and Mashup 2.0
- Updated browser and OS support and improved integrations
- Improved workflow system integrations with FileNet and WebSphere BPM
- Webform Server support for the iPad
The real achievement here is not just these new features, but getting them to work well with all the traditional features. IBM Forms 4.0, coming soon to a Smarter Web Application near you.
For a very long time, the XML language underlying Lotus Forms (called XFDL) has offered an option element called <format> to describe data type, validity constraints and presentational formatting for the (valid) values of form controls (called XFDL items, like text entry fields, selection popups, and so forth). Neither a whole Lotus Form document, nor the XForms instance data it contains, can be submitted to a server unless all relevant data type and validity constraints are satisfied.
However, the <format> elements are placed into the form controls, so the user interface layer must be articulated in order to create the run-time objects described by the <format> elements. This makes it harder to use Lotus Forms performance enhancing features like "on demand page loading" in a multipage XFDL document. It will also conflict with an increasing number of performance enhancing features over time. Suffice it to say, it would be better to express <format> data type and validity constraints as XForms binds in the form.
Sometimes this is easy to do. The following are a few examples. Suppose you have an XFDL <field> text entry element containing <xforms:input ref="data"> as the form control binding it to a data node named "data".
Change: <format> <datatype>integer</datatype> </format>
To: <xforms:bind nodeset="data" type="xforms:integer"/>
Change: <format> <datatype>float</datatype> </format>
To: <xforms:bind nodeset="data" type="xforms:double"/>
Change: <format> ... <constraints><length><max>10</max></length></constraints> </format>
To: <xforms:bind nodeset="data" constraint="string-length(.) < 10"/>
Change: <format> ... <constraints><range><min>18</min></range></constraints> </format>
To: <xforms:bind nodeset="data" constraint=". >= 18"/>
Change: <format> ... <constraints><mandatory>on</mandatory></constraints> </format>
To: <xforms:bind nodeset="data" required="true()"/>
These are easy mechanical changes to make that prepare a Lotus Form for use with performance enhancing features like "on demand page loading". The XForms binds are easy to create with the "Create Bind" or "Create Bind with Wizard" features in the Lotus Forms Designer's XForms Model Editor.
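To make the semantics of those binds concrete, here is a small Python sketch (purely illustrative, not an XForms processor; the function and parameter names are my own) that mirrors what each model item property checks against the string value of a text entry field:

```python
# Illustrative sketch of the validity semantics behind the XForms binds above.
# This is NOT how Lotus Forms implements them; it just mirrors the checks.

def is_valid(value, *, xsd_type=None, max_length=None, minimum=None, required=False):
    """Return True if `value` satisfies the sketched constraints."""
    if required and value == "":          # required="true()"
        return False
    if xsd_type == "integer":             # type="xforms:integer"
        try:
            int(value)
        except ValueError:
            return False
    elif xsd_type == "double":            # type="xforms:double"
        try:
            float(value)
        except ValueError:
            return False
    if max_length is not None and len(value) > max_length:
        return False                      # constraint="string-length(.) <= 10"
    if minimum is not None:
        try:
            if float(value) < minimum:    # constraint=". >= 18"
                return False
        except ValueError:
            return False
    return True

print(is_valid("42", xsd_type="integer"))      # True
print(is_valid("4.2", xsd_type="integer"))     # False
print(is_valid("abcdefghijk", max_length=10))  # False: 11 characters
print(is_valid("", required=True))             # False
```

The point of the exercise is that each `<format>` constraint maps cleanly to one model item property on a bind, which is why the conversion is so mechanical.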
One aspect of <format> option capability that is... somewhat less mechanical... to replace is constraints based on regex (regular expression) pattern matching. If you don't know what a regex is already, just do a web search for "regex tutorial" and spend an hour or two reading something that will save you days upon days of your life in the long run.
As an example, here is a simple XFDL <format> that applies a regex pattern limiting user input to zero or more alphanumeric characters:
<format> ... <constraints><patterns><pattern>[A-Za-z0-9]*</pattern></patterns></constraints> </format>
The * means zero or more occurrences of the preceding expression, the brackets express a character class, and subexpressions like A-Z and 0-9 provide subranges of the acceptable characters. Regex is a lot more powerful than this, so you really should read that tutorial if you needed this explanation. For example, you can express multiple alternative patterns directly in one regex (with the | operator) without exploiting the obvious ability to use multiple <pattern> elements; that option exists simply because several short patterns are easier to read than one really long regex.
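To see the pattern in action, here is a short Python sketch using the `re` module (not XFDL itself). One assumption worth stating: as in XML Schema, a constraint pattern must cover the entire value, which `re.fullmatch` mimics:

```python
import re

# The same pattern used in the XFDL <pattern> element above.
alnum = re.compile(r"[A-Za-z0-9]*")

# A constraint pattern must match the whole value, so use fullmatch,
# not search (search would accept "ab c" because "ab" matches somewhere).
print(bool(alnum.fullmatch("Invoice42")))  # True: letters and digits only
print(bool(alnum.fullmatch("")))           # True: * allows zero characters
print(bool(alnum.fullmatch("ab c")))       # False: space is not in [A-Za-z0-9]

# Alternatives can be written directly in one regex with "|",
# e.g. either all letters or all digits, but not a mix:
either = re.compile(r"[A-Za-z]+|[0-9]+")
print(bool(either.fullmatch("Invoice")))   # True
print(bool(either.fullmatch("Invoice42"))) # False: mixes the two alternatives
```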
Anyway, to replace a pattern-based format constraint with an XForms bind, you would use a type model item property that references an XML schema type library entry. In essence, you can build XML schema declarations in your form to declare types even if you don't use XML schema to define the element structure and the data types of the instance data. Here's what it looks like:
<xforms:model xmlns:test="http://test.org">
  <schema targetNamespace="http://test.org" xmlns="http://www.w3.org/2001/XMLSchema">
    <simpleType name="alnum">
      <restriction base="string">
        <pattern value="[A-Za-z0-9]*"/>
      </restriction>
    </simpleType>
  </schema>
  <xforms:instance xmlns="http://test.org">
    <data>
      <node></node>
    </data>
  </xforms:instance>
  <xforms:bind nodeset="test:node" type="test:alnum"/>
</xforms:model>
The inline schema declares a datatype named alnum (for alphanumeric), defined as any string that matches the same pattern you saw in the XFDL format constraint. The XForms bind then associates that type with the data node. If you bind a field item's <xforms:input> to that data node, you'll see that user input is indeed restricted to strings matching the pattern.
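As a rough mental model of what the processor does with this markup, the following Python sketch (again illustrative only; the dictionaries and function are my own invention, not a real API) registers the named type with its pattern facet, records the bind, and checks node values against the bound type. XML Schema pattern facets are implicitly anchored to the whole value, hence `fullmatch`:

```python
import re

# Named simple types with their pattern facets, as declared in <schema>:
simple_types = {"alnum": r"[A-Za-z0-9]*"}

# The <xforms:bind nodeset="test:node" type="test:alnum"/> association:
binds = {"node": "alnum"}

def node_is_valid(node_name, value):
    """Check a node's value against its bound type's pattern facet.
    XML Schema patterns are anchored to the whole value, hence fullmatch."""
    type_name = binds.get(node_name)
    if type_name is None:
        return True  # no bind on this node, so no restriction
    return re.fullmatch(simple_types[type_name], value) is not None

print(node_is_valid("node", "abc123"))   # True
print(node_is_valid("node", "abc 123"))  # False: space violates the pattern
```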
Just ran across a good developerWorks article on Integrating Lotus Forms with Lotus Domino.
Domino is a web application and web services platform that is often used in combination with the Lotus Notes rich client platform. However, as this article shows, it is possible for Domino to have broader reach out to all web browsers without a large client-side installation. The intelligence and interactivity of XForms are combined with the high precision presentation layer of XFDL to describe a rich client experience, but the Lotus WebForm Server is used to convert that to HTML and AJAX that is natively understood by the web browser. The net result is that XML data processing and web services from Domino servers are extended right out to the webtop.
Of course, if you do have the Lotus Notes client platform installed, then the story gets better because the Notes replication capabilities can be brought to bear when a user needs to work in offline/disconnected mode. You can also use the Notes Composite Application Framework to create mashups involving Lotus Forms and other application components deployed to the Notes client. Whether you use the Lotus Forms Viewer or the Lotus Forms WebForm Server to render a Lotus Form in a Notes application component, you have access to the running Lotus Form through an API. This gives you getters and setters, of course, so you can push data from other components into the Lotus Form, but you can also set up event listeners that are notified of changes made within the Lotus Form, so you can push changes from the Lotus Form to any other component in your mashup.
But net-net, if you want to extend your Domino applications and services beyond the usual enterprise IT boundaries and get access to new B2C and G2C market opportunities, using Lotus Forms is a way to do it. See the new article for technical details.
Despite all the excitement about the possibilities that HTML5 offers for a better world wide web, I believe there is a critical flaw in the go-forward plan of those who are feeling the momentum. The problem is that they still haven't got ~80% of the web browser market on board! By which I mean they haven't got you-know-who.
There's really only one way to break the logjam and move forward with advancing the state of the web. We need to get you-know-who out of the way of progress by showing that we can innovate on the web with or without their participation. We have now shown that this is feasible by way of the ubiquity strategy, based on our ability to deliver an interactive web application language in multiple web browsers with no client-side plugins and no server-side processing of the language. Click here to visit the project.
For all web advancement technologies, including HTML5, the longer term effect of the ubiquity strategy should be that the late bloomers are shamed into implementing the advancements, thereby obviating the need for the ubiquity code. But meanwhile the ubiquity strategy is the way to enable web advancement technologies including HTML5 to emerge (in the proactive verb sense of the word).
Check out this cool video that will help you understand how and why your business processes need to lose the paper and get on board with dynamic, interactive electronic forms... powered by the XForms standard of course!
It was a great year for XForms at XML 2007. There were quite a number of presentations that featured XForms as a component, though in this blog I want to focus on the big event for XForms, which of course was the "XForms Everywhere" special session held on Monday evening.
It is no exaggeration to say it was an unqualified success. This was a 2-hour event consisting of six 15 minute presentations by some of the XForms community leaders, including Mark Birbeck, John Boyer (yours truly), Erik Bruchez, Dan McCreary, Keith Wells, and Charles Wiecha. This was followed by a 30 minute keynote from Elliotte Rusty Harold on "How XForms can Win". You can find program details here.
Naturally, we were a bit concerned that the evening slot (7:30-9:30) would test the stamina of even the most eager conference-goers, but the room was filled to capacity (about 60 chairs), and with about 20 more people standing or sitting at the back, there wasn't even standing room left.
And then the content was exquisite. The format of the session turned out to be a great idea. With only 15 minutes, each of us was required to refine our content down to the crème de la crème. The order of speakers was defined by Leigh Klotz (Xerox) based on the most natural flow of thought about forms design, run-time case studies, general application architectures, and futures. Since I was talking about design, I went first. This created a happy coincidence because it made the most sense to start by announcing the most recent accomplishment of the Forms Working Group, which is the transition of XForms 1.1 to Candidate Recommendation. Not only did my own design-time presentation depend upon XForms 1.1, but most of the other presenters used XForms 1.1 features as well.
With such a huge turnout, it occurred to me that quite a large fraction of people might not know much of what XForms is capable of. I started thinking back to the Compound Documents workshop of 2004, where Bert Bos, the chair of the CSS working group, said "Forms, I know what forms are. Name, address, pepperoni, extra cheese. I want to talk about more sophisticated web applications. The kind that can play games. Not necessarily Doom, but you know what I mean." When my turn came to talk at that conference, I started by launching my BlackJack form, and I played a few hands. Lady Luck was with me that day, and she was with me again on the XForms evening at XML 2007. I showed that form for effect, but then I showed a much more sophisticated "Mortgage Pre-approval Form" in both English and Chinese. Next I showed how much of the complexity of that form could be seen as simple aggregation of the concepts one can find in a reasonably small purchase order form. And at last, I launched into a 7 minute demonstration of the XForms design experience for that purchase order form. You can see a video clip of that design experience here.
I enjoyed the presentations by Dan McCreary and Keith Wells because they demonstrated sophisticated applications of XForms that would be an order of magnitude harder to build with existing web application technology. During the breaks between speakers, I took the time to reiterate to the audience the business value here in terms of being able to win deals based on lower RFP bids.
I also enjoyed the two more forward-looking presentations by Mark Birbeck and Charlie Wiecha. Mark demonstrated and spoke about his experiences building a desktop application development environment called Sidewinder, which combines XForms and XHTML to allow very efficient creation of desktop applications, not just web applications. Charlie demonstrated and spoke about the applicability of the XForms architecture to creating robust mash-up applications out of components that actually are reusable.
Perhaps my overall favorite talk of the evening was given by Erik Bruchez. You can find out more about it here. Erik presented yet another application with high business value and low development effort that he created by combining XForms with the eXist database. The main point of his overall presentation was to illustrate that the XForms-based web application architecture consists of a rich client tier that can speak directly to server tier applications like databases with no coding in the middle tier. Clearly given my blog entry of Oct. 18, I couldn't agree more.
Whenever I explain that the XForms architecture is "rich client", I always stop to remind the reader that this is a mindset, not a deployment strategy. You can deploy an XForms rich client, or you can deploy XForms functionality to a thin client (a web browser only) using a server product such as the Lotus Forms Webform Server. The point is that the XForms mindset allows you to factor out the deployment issue and focus on what the application is supposed to do.
And as an IBMer, I am duty bound to my colleagues over in Information Management to point out that the database involved in the above mentioned example could instead be the scalable and robust DB2 9.x with pureXML support. Using a package like Lotus Forms that supports the XForms 1.1 submission enhancements for web services and REST-like services, forms applications can directly consume web services set up by the database administrator without the need to bring in a Java developer to write the code in the middle. This is an important point that I'll return to very shortly in the conclusion.
The final presentation was the keynote by Elliotte Rusty Harold, who is a gifted speaker and thinker. He began by explaining how hard it has been over the past 15 years to make do with today's web architecture for building web apps. He was more eloquent at saying this, but it's a bit like trying to incrementally build a 15 storey building on the foundation for a single family dwelling. Of course, the point was that XForms can do the job that today's web just can't, and it can do it really well, but... As Elliotte explains it, XForms is currently a "Cambridge" technology built by really smart people, and that it needs to build the bridge to New Jersey where all the fast-and-cheap, quick-and-dirty... and successful... technologies live.
This is a fancy way of saying that while XForms is now hardened for the most demanding of enterprise applications, more focus is needed on making XForms more accessible to the much wider audience of people who perhaps start out with simpler problems that work their way up into being the complicated beasts that become those unmaintainable piles of jumble-fuss (my word; Elliotte would've come up with a better one) that cause so much trouble today.
XForms can win, Elliotte says, if three things happen. First, native support in the browser. Second, good design tools. Third, XForms needs a killer app.
Regarding browser support, he points out that it is well-known that IE will be hard, but that we should start by getting the Firefox/Mozilla XForms plugin moved into the core browser, so it is no longer an 'additional' plugin. To some extent, I agree, but in the spirit of New Jersey thinking, XForms can already run in all browsers because we have server-side products that can boil it down to the HTML and AJAX that all browsers, including IE, natively understand. Elliotte felt that was a bit of a hack, but we have to remember that he's also a professor, so he sometimes falls into the Cambridge trap. Still, the point is well-taken, as long as we view things in terms of transition. The server products show that XForms is a viable technology for all browsers even if you don't want to deploy rich clients. A great next step, of course, is to garner direct browser support, and we're working on it, but its absence is not impeding our ability to provide the business value of XForms to customers today.
Finally, there's Elliotte's point about needing a killer app. Let me start with a Cambridge thought: XForms doesn't need a killer app because XForms is the killer app. The first killer app of computing was the word processor; the second was the spreadsheet. In a conversation I had with Elliotte the next day, he commented that the spreadsheet was even more of a killer app because it had to move the coder out of the way in order to put the computing power in the hands of the people. Sound familiar? XForms is the killer app of Web 2.0. The New Jersey reality check is that it has to walk like a killer app and quack like a killer app in order to be a killer app. The spreadsheet only became the killer app once the people saw what they could do with it and how to do it. The 'what' and the 'how', that's what made the XForms + database presentation and the design tool presentation so important in our XForms special session at XML 2007. The call-to-arms is simple: make more what, make more how, ... make XForms Everywhere.