Ubiquity XForms version 0.7 is now available. In addition to contributing to the recent advancement of XForms 1.1 to Proposed Recommendation, this new version has many new features, fixes and highlights. Personally I'm happiest with the progress on implementing submissions, but that may be a bit of bias because I contributed some code to deal with aspects of submissions such as serialization, validation and relevance pruning. On the other hand, I contributed code in other areas, so maybe it's just that I'm biased toward any improvements to our ability to support the XRX architecture and connect the XForms client to server-side services.
And this blog entry would not be complete without my mentioning a special word of appreciation to the folks at webBackplane for all their contributions in general but especially for the Ubiquity XForms rollup system, which consolidates the many files of the processor into a single file to be deployed on your server. I have an internal project right now that uses the rollup, and it provides us with the high level of performance we expected/required in our applications.
Full details of the version 0.7 release can be found here: <http://code.google.com/p/ubiquity-xforms/wiki/Release0Point7>
Something happened today that seemed like perfect fodder for this blog because it combines a humanitarian story, social business, and a good use case for IBM Forms Experience Builder in a social business context.
An esteemed colleague at my Canada software lab site is the organizer for our site's donations and efforts for a local charitable cause. Turns out, IBM has a special charitable grant program in which IBM makes a substantial additional donation to the charity if a set number of IBMers sign up as volunteers. I really like this model because it essentially gives IBMers a way to vote, not only on how much IBM corporately donates but also where the donation goes. It's the same reason I like charitable tax incentives, as it is essentially the same as the federal government allowing the people to vote on the charitable causes they support and how much tax money goes to them.
Anyway, IBM internally runs an enterprise social business platform called Connections, which includes profile pages for all IBMers. These are like Facebook or LinkedIn pages, only inside the corporate firewall. Still, there is information that would not normally show up on one's profile page, but which my colleague needs in order to complete our internal IBM charitable grant application. So, my colleague has found it necessary to reach out to a large number of people at our site to get this information; she's under a tight timeline, and she needs responses from everyone.
This is a great example use case for IBM Forms Experience Builder. One reason is that my colleague needs to quickly stand up a lightweight custom data collection solution, where we'd miss the grant deadline for our charity if we had to wait for a normal IT project to be completed. So, this is a great example of workforce agility to be gained. Yet, on a technical front, this is interesting because it is so easy to see that the solution needs two forms, and their corresponding backend DB storage, as well as a service to connect one to the other. The fact that you can make all this stuff and hook it up in your web browser without coding is truly a step forward in IT democratization.
In this solution, my colleague can create a first LIST form that simply allows her to enter the names of all the people that she needs to get information from. Each person's email address can then be automatically looked up by hooking up into the form the LDAP lookup service that is added to our installation of Forms Experience Builder.
Then, my colleague can create a second COLLECT form that allows each person to provide the information she needs.
Most importantly, my colleague really needs to be able to re-enter her LIST form at any time, not to add more names, but rather to see which people have submitted the COLLECT form-- and which haven't. To do this, she can automatically hook up the database for the COLLECT form as a service to her LIST form. As each person fills the COLLECT form, their entry on the LIST form can show their required information.
At any point, my colleague can look at her LIST form for blank entries to see those people with whom she has to take "secondary measures" to ensure 100% response before the deadline. Most importantly, her LIST form would already then have all the information she requires collected into one place.
Can you imagine how much more work she'd have to do if she did all this by email? She'd have a flood of emails mixed with her normal email, and she'd have to fish out the information from each email and put it into a list manually. Tedious, error prone, and lots more ways to miss the deadline. My colleague is able to collaborate much more efficiently and effectively using a 2-form IBM Forms Experience Builder solution, and this is why lightweight data management solutions should be added as an essential ingredient of an enterprise grade social business platform. Specifically, if you're going to purchase IBM Connections, then add IBM Forms Experience Builder.
For a very long time, the XML language underlying Lotus Forms (called XFDL) has offered an option element called <format> to describe data type, validity constraints and presentational formatting for the (valid) values of form controls (called XFDL items, like text entry fields, selection popups, and so forth). Neither a whole Lotus Form document, nor the XForms instance data it contains, can be submitted to a server unless all relevant data type and validity constraints are satisfied.
However, the <format> elements are placed into the form controls, so the user interface layer must be articulated in order to create the run-time objects described by the <format> elements. This makes it harder to use Lotus Forms performance enhancing features like "on demand page loading" in a multipage XFDL document. It will also conflict with an increasing number of performance enhancing features over time. Suffice it to say, it would be better to express <format> data type and validity constraints as XForms binds in the form.
Sometimes this is easy to do. The following are a few examples. Suppose you have an XFDL <field> text entry element containing <xforms:input ref="data"> as the form control binding it to a data node named "data".
Change: <format> <datatype>integer</datatype> </format>
To: <xforms:bind nodeset="data" type="xforms:integer"/>
Change: <format> <datatype>float</datatype> </format>
To: <xforms:bind nodeset="data" type="xforms:double"/>
Change: <format> ... <constraints><length><max>10</max></length></constraints> </format>
To: <xforms:bind nodeset="data" constraint="string-length(.) < 10"/>
Change: <format> ... <constraints><range><min>18</min></range></constraints> </format>
To: <xforms:bind nodeset="data" constraint=". >= 18"/>
Change: <format> ... <constraints><mandatory>on</mandatory></constraints> </format>
To: <xforms:bind nodeset="data" required="true()"/>
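And if the same node carries several of these properties at once, they can be combined on a single bind; for example, a sketch using the same "data" node:
<xforms:bind nodeset="data" type="xforms:integer" required="true()" constraint=". >= 18"/>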
These are easy mechanical changes to make that prepare a Lotus Form for use with performance enhancing features like "on demand page loading". The XForms binds are easy to create with the "Create Bind" or "Create Bind with Wizard" features in the Lotus Forms Designer's XForms Model Editor.
One aspect of <format> option capability that is... somewhat less mechanical... to replace is constraints based on regex (regular expression) pattern matching. If you don't know what a regex is already, just do a web search for "regex tutorial" and spend an hour or two reading something that will save you days upon days of your life in the long run.
As an example, here is a simple XFDL <format> that applies a regex pattern limiting user input to zero or more alphanumeric characters:
<format> ... <constraints><patterns><pattern>[A-Za-z0-9]*</pattern></patterns></constraints> </format>
The * means zero or more occurrences of the preceding character class, the square brackets express the set of acceptable characters, and subexpressions like A-Z and 0-9 provide subranges within that set. Regex is a lot more powerful than this, so you really should read that tutorial if you needed this explanation. For example, you can express multiple alternative patterns directly in regex without exploiting the obvious ability to use multiple <pattern> elements (that option exists simply because several short patterns are easier to read than one really long regex).
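To make the alternatives idea concrete, here is a sketch (the character classes are purely illustrative): a constraint accepting either a purely alphabetic or a purely numeric entry can be written with regex alternation,
<format> ... <constraints><patterns><pattern>[A-Za-z]+|[0-9]+</pattern></patterns></constraints> </format>
or equivalently with two <pattern> elements:
<format> ... <constraints><patterns><pattern>[A-Za-z]+</pattern><pattern>[0-9]+</pattern></patterns></constraints> </format>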
Anyway, to replace a pattern-based format constraint with an XForms bind, you would use a type model item property that references an XML schema type library entry. In essence, you can build XML schema declarations in your form to declare types even if you don't use XML schema to define the element structure and the data types of the instance data. Here's what it looks like:
<xforms:model xmlns:test="http://test.org">
  <schema targetNamespace="http://test.org"
          xmlns="http://www.w3.org/2001/XMLSchema"
          xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <simpleType name="alnum">
      <restriction base="xsd:string">
        <pattern value="[A-Za-z0-9]*"/>
      </restriction>
    </simpleType>
  </schema>
  <xforms:instance xmlns="http://test.org">
    <data>
      <node></node>
    </data>
  </xforms:instance>
  <xforms:bind nodeset="test:node" type="test:alnum"/>
</xforms:model>
The inline schema declares a datatype named alnum (for alphanumeric) and indicates it is any string that can be restricted to the same pattern you saw in the XFDL format constraint. The XForms bind then associates that type with the data node. If you bind a field item's <xforms:input> to that data node, you'll see that the user input is indeed restricted to strings that match the pattern.
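For completeness, the UI side of that example is just the usual control binding; a minimal sketch (the label text is made up):
<xforms:input ref="test:node"> <xforms:label>Alphanumeric value</xforms:label> </xforms:input>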
As we start off the business of a new decade, I already find myself looking forward to the improvements in XForms 1.2.
I know, I know, we just released the XForms 1.1 Recommendation, which contained a huge bundle of new features and architectural improvements. Still, the working group has been energetically advancing on still more features and improvements even as we closed the loop on the W3C process in the latter half of last year. I am very pleased with the completion of XForms 1.1, but the only constant is change, and we have an excellent array of new technical results to achieve in XForms.
The working group has put a fair bit of effort into the triage of features to separate nearer term (and simpler) objectives from those features of a more architecturally significant nature, which tend to get put in the "XForms 2.0" bucket.
My personal favorite "simple" feature is model-based UI switching with the switch element. This may seem like a bit of a nerdy thing to perseverate on, but the switch element is the only UI control we have that is not directly controllable by a UI binding to data. So, if a form author wants to switch UIs based on a user input, such as changing the payment details UI based on the selection of a payment method, then the form author typically resorts to using group elements and the relevant model item property (a sketch of this workaround appears below). But this is more than just the sense of completion we'll get from being able to say *all* UI controls can be declaratively linked to data. And it's more than just the satisfaction we'll get from providing the right tool for the job. The ability to declaratively link the state of a switch to data is critical to the save/reload web application pattern and the transaction digital signature web application pattern.
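Here is a minimal sketch of that group-plus-relevant workaround (the instance structure, node names and labels are all made up for illustration):
<xforms:bind nodeset="payment/creditcard" relevant="../method = 'credit'"/>
<xforms:bind nodeset="payment/banktransfer" relevant="../method = 'transfer'"/>
<xforms:select1 ref="payment/method">
  <xforms:label>Payment method</xforms:label>
  <xforms:item><xforms:label>Credit card</xforms:label><xforms:value>credit</xforms:value></xforms:item>
  <xforms:item><xforms:label>Bank transfer</xforms:label><xforms:value>transfer</xforms:value></xforms:item>
</xforms:select1>
<xforms:group ref="payment/creditcard"> <xforms:input ref="number"><xforms:label>Card number</xforms:label></xforms:input> </xforms:group>
<xforms:group ref="payment/banktransfer"> <xforms:input ref="iban"><xforms:label>Account IBAN</xforms:label></xforms:input> </xforms:group>
Each group is typically hidden or disabled when the node it is bound to becomes non-relevant, which is exactly the behavior a data-bound switch would express more directly.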
Other web application standardization efforts are focused mainly on what happens during execution of a single web page. XForms-based technologies are a step ahead because they can more easily address the user patterns related to collaboration on the web. Whether it is one user saving an application context and reloading it again in the future, or whether one user provides data to be loaded by a second user, the simple fact remains that data can be in any state along a continuum between empty template and fully completed, so web applications need to be declaratively responsive to that continuum. And nowhere is this clearer than when a web-based transaction is of high enough value to warrant a digital signature. A digital signature needs to be able to capture the full current state of the web application, and having a declarative binding to the data is the easiest way to achieve that goal since it means one can simultaneously sign the web application page template(s) and the data as separate resources.
For the record, Lotus Forms has for a few years now used a legitimate XForms extension mechanism, the xfdl:state attribute, to support model-based switching. We deemed this extension critical to our ability to address our customers' requirements for the above two web application patterns. Standardized integration of XForms with XML Signatures and other advanced security features like XAdES is, so far, in the XForms 2.0 bucket.
In conclusion, I'm not necessarily saying this is the "biggest" feature of XForms 1.2, only that it is my fave. I'll be giving some coverage to the other XForms 1.2 features in future blog posts, so please click the 'follower' button above (top right) and stay tuned.
Ever since my first blog entry in this recent series on artificial intelligence, I've been highlighting the lesser, calculational nature of machine intelligence and learning-- as well as the valuable role it nonetheless can play in driving more effective human understanding and decisions. I've been doing this by articulating mainly what machines do, as that is the primary interest of mine and most who would read a developerWorks blog. Still, our interests will be served by taking an entry to discuss human learning as a counterpoint or contrast.
The multiple linear regression example in my last post is a good example to start with because it highlights the difference between accuracy and understanding. If there is a linear relationship among the data, then an MLR can have a very high predictive accuracy, but it has no explanatory power whatsoever. The MLR model does not have, nor does it convey, any understanding as to why the relationship exists.
Let's see how this predictive accuracy rates in terms of human intelligence and learning. In this case, we can benefit from an instance of that delightful human propensity to apply ideas to themselves. Specifically, we humans have applied our learning abilities to the phenomenon of our learning abilities, with many useful results including Bloom's Taxonomy.
According to Bloom's taxonomy, the very lowest level of cognitive learning is the knowledge level, or the ability to remember and recall what is learned. When you think about it, you realize that an MLR model, like many predictive analytics, is really a storage mechanism for something that has been machine learned from data. In MLR, we store the constants of a linear formula as the representation of what has been learned from linearly related data.
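To make that concrete, a fitted MLR model with k input variables is nothing more than its stored coefficients:

\hat{y} = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_k x_k

Everything the model "knows" is held in the constants \beta_0 through \beta_k.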
The next higher level of Bloom's taxonomy is comprehension, which is where understanding and true explanatory power begin to surface. But human learning is so much more sophisticated than the knowledge level of machine learning that there are a number of levels above comprehension. There's the application level, in which we can use our knowledge to solve new problems, including being able to explain why the new solution works. The analysis level drills deeper into our ability to make inferences and generalizations. The synthesis level begins to get at our ability to be creative with what we've learned and come up with new ideas and solutions. Finally, the evaluation level gets at our ability to be subjective and judge quality and creativeness of ideas and solutions. We are beginning to see some faint glimmers of some elements of some of these levels in cognitive computing efforts like IBM Watson, but it is early days indeed.
While we're on the subject of human learning and Bloom's Taxonomy, it makes sense to digress for a bit and mention the IBM Social Learning product. This is a SaaS educational platform intended to help enterprises achieve a Smarter Workforce. A few reasons for the digression are
this is a product I currently work on,
both it and the whole Smarter Workforce initiative are being featured at the IBM Connect 2014 conference being held right now, and
learning is a key ingredient of how a human workforce becomes smarter.
The IBM Social Learning product has a very nice feature that enables educational administrators to implement Bloom's Taxonomy in their learning materials. A component of the product is the Kenexa LCMS, or learning content management system, which includes various subcomponents like a course designer and a metadata dictionary. The educational administrator can add any metadata tag, such as "Learning Goal", and any tag values, such as "Basic Knowledge", "Comprehension", "Application", etc. Once this is done, the educational administrator can use the metadata tag values to classify any learning item in the LCMS according to Learning Goal. Once these classified learning materials are published, learners can use the "Learning Goal" as a new faceted search criterion in the platform's learning library. A learner would be able to isolate and focus on "knowledge" level learning in a subject area before proceeding to comprehension and then application, for example. This will enable learners to effectively use the natural way in which their learning blooms, i.e. Bloom's Taxonomy.
Finally, there is an aspect of human learning that goes beyond Bloom's taxonomy, and it's an area that is highlighted by the IBM Social Learning product. There is a very important word in the product title: Social. This is crucial because it underscores the central role of communication and collaboration in the human learning process. We are an order of magnitude more effective at learning based on our interconnectedness to others who think and learn, rather than having access to just data. This is pertinent to the advancement of artificial intelligence because "social" goes quite beyond the computing architecture underlying a lot of today's machine learning efforts.
One of the New IBM Forms 4.0 Demonstrations has just won the top-rated video award at Lotusphere. This is a fantastic victory for the WebSphere Portal segment of Lotus, which focuses on products like IBM Forms that create and provide exceptional web experiences. It is also a victory for the W3C XForms standard, since the Wizard Creator featured in the video is, as far as I know, the first point-and-click design experience for an XForms switch. This feature exemplifies the principle that declarative markup languages result in more powerful application design environments because they express what the author wants. With an imperative language, a design environment must either operate at a much lower level (less powerful) or do some wicked reverse engineering/pattern matching to discern what the author wanted from how he did it.
So, XForms has definitely been my friend in helping to create an award-winning feature in IBM Forms. Wanna meet my newest friend? Here we are, just me and Watson, celebrating our victories at the closing session of Lotusphere 2011:
To put it in Watson's terms:
Category: Lotusphere Best Demo Video Winners
Answer: An IBM Form that uses XForms to express Wizard interfaces for forms.
Question: What is a Smarter Web Application?
MC: Congratulations Watson, you're ready for prime-time! Wanna join my social network?
Recently, I was experimenting with one of the features planned for the next version of XForms. The feature is the iterate attribute for XForms actions, which will perform a for-each loop operation based on a nodeset obtained from the XPath expression in the iterate attribute value. XForms 1.1 already has a while loop, but iterate makes many data processing loops easier to write and also more performant (subject of a future blog). There were lots of iteration use cases to choose from, but I decided to experiment with sorting because it is a well-known benchmark algorithm.
Before we go any further, let me say up front that XForms action scripting is intended for very lightweight data manipulation, like adding or deleting a data node corresponding to a table row or copying data results to or from the SOAP envelopes of a web service. By the time you get to nested iterations like those needed for sorting, you should be considering alternatives expressed in full-blown imperative languages available in the information system within which the form is being used. For example, in the case of sorting, it is a better idea to request sorting in the database query whose results are returned from a web service into your form so that your form logic does not even have to do the sorting.
So, with the disclaimer out of the way, let's abuse the technology a bit to get a better sense of what is feasible in those customer-needs-it-yesterday circumstances. It turns out that XForms 1.1 does allow full nodeset processing in the insert action's origin attribute and the delete action's nodeset attribute. Without even needing the new iterate attribute, this is just enough iteration capability to perform efficient sorting -- so there are some kinds of iterations that can be done now without the iterate attribute.
We're going to do a divide-and-conquer "partition" sort that I personally created as a university freshman after my 1st semester instructor told our class that linked lists could only be sorted slowly. At the time, the usual computer languages only allowed static allocation for arrays, and even though I didn't know what a "quick sort" was, I had seen the light of dynamic allocation, and I was never going back! I later learned how great a merge sort is on a linked list, but the effort of turning an array quicksort into a linked list partition sort comes in handy now because a merge sort cannot be efficiently expressed in XForms until the iterate attribute is added.
The way a quicksort works on an array (or subarray) is that you pick a random element to be the 'pivot' value. Then you run two index variables at the same time, one from the start of the array upward and the other going from the end of the array downward. The 'up' index is advanced until it finds a value greater than the pivot, the 'down' index is decremented until a value less than the pivot is found, and then the values at the 'up' and 'down' locations are swapped. This keeps happening until 'up' and 'down' meet somewhere near the middle of the array. At this point you've partitioned the array into a subarray of values less than the pivot value and a subarray of values greater than the pivot value. The quicksort is then invoked recursively to sort both subarrays.
The main challenge with this approach is the 'down' index, which is a reverse iteration. In a singly linked list, you can only go forward. XForms insert and delete actions have a similar limitation: they can only identify a nodeset of nodes to insert or delete, but not really a direction of iteration. But the important bit is what the quicksort is doing, not how it is done. Think of the list content as being completely messy, and each partitioning stage must make it somewhat less messy by dividing the content into a partition of lesser elements and a partition of greater elements. Then, the next partitioning stage is invoked recursively to do a better job of cleaning up the mess within each partition.
Let's explore this concept by sorting a list of elements, such as sorting a list of <person> elements by a <lastname> child element. We begin by copying the list into an initial partition element of a temporary instance called 'sortdata', like this:
1) <xforms:insert context="instance('sortdata')" origin="instance('partition')"/>
2) <xforms:insert context="instance('sortdata')/partition" origin="instance('data')/list/person"/>
3) <xforms:delete nodeset="instance('data')/list/person"/>
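For reference, here is a rough sketch of the instances these actions assume. The <partition> element name matters because it is cloned into the sort data, while the root element names of the other working instances are illustrative since only their ids appear in the expressions:
<xforms:instance id="data"> <list> <person><lastname>...</lastname> ... </person> ... </list> </xforms:instance>
<xforms:instance id="partition"> <partition/> </xforms:instance>
<xforms:instance id="sortdata"> <sortdata/> </xforms:instance>
<xforms:instance id="sortpivot"> <pivot/> </xforms:instance>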
Next, we initialize the random number generator so we can randomly select pivot values for all the partitioning stages:
4) <xforms:setvalue ref="instance('sortpivot')" value="random(true)"/>
Next, we start up a simple while loop that continues to process partitions until none are left.
5) <xforms:action while="instance('sortdata')/partition">
Within the loop, we grab the last partition from the sort data and determine whether it is non-trivial or trivial (only 1 or 2 elements). A non-trivial partition is subjected to further divide-and-conquer processing.
5.1) <xforms:action if="count(instance('sortdata')/partition[last()]/*) > 2">
The first part of dividing and conquering is to create a new empty <partition> element, which we obtain from an instance that expresses a template empty partition element.
5.1.1) <xforms:insert nodeset="instance('sortdata')/partition[last()]" position="before" origin="instance('partition')"/>
Step 5 and step 5.1.1 are more interesting than they seem at first. The list of partition elements in the sortdata actually implements the recursion stack, and we just pushed a new element into that stack at the second-to-last position. Because we have an explicit stack, we only need a loop in step 5 to implement recursion.
The next thing we do here is grab a random last name to serve as a pivot value for the partitioning. The first setvalue just picks a random location, and then the second step uses the location to get the value. Notice also that I use * rather than partition before the [last()] predicate because the sort data only contains partition elements, so there is no point in doing a name test for partition.
5.1.2) <xforms:setvalue ref="instance('sortpivot')"
       value="1+floor(random(false) * count(instance('sortdata')/*[last()]/*))"/>
       <xforms:setvalue ref="instance('sortpivot')"
       value="instance('sortdata')/*[last()]/*[position() = number(instance('sortpivot'))]/lastname"/>
Now the magical part happens. All elements in the last partition whose key element (lastname) is less than or equal to the pivot value are moved to the newly created second-to-last partition. By combining the nodeset processing capability of XForms insert and delete actions with the predicate-based node selection capability of XPath, the matching nodes can be selected and moved using two single XForms actions, i.e. without using an XForms while loop.
5.1.3) <xforms:insert context="instance('sortdata')/*[last()-1]"
origin="instance('sortdata')/*[last()]/*[compare(lastname, instance('sortpivot')) <= 0]"/>
<xforms:delete nodeset="instance('sortdata')/*[last()]/*[compare(lastname, instance('sortpivot')) <= 0]"/>
If the last partition is now empty due to the move operation in step 5.1.3, then the new second-to-last partition received all its elements. If all the moved elements are equal to the pivot value, then we can output them back into the original data list and then remove the last two partitions. Note that the insert is configured to prepend the elements into the data list, and we're copying them from the last non-empty partition, which has the elements with the greatest key value.
5.1.4) <xforms:action if="count(instance('sortdata')/*[last()]/*) = 0">
<xforms:action if="not(instance('sortdata')/*[last()-1]/*[compare(lastname, instance('sortpivot')) != 0])">
<xforms:insert context="instance('data')/list" origin="instance('sortdata')/*[last()-1]/*"/>
<xforms:delete nodeset="instance('sortdata')/*[position() >= last()-1]"/>
</xforms:action> <!-- End of equal-to-pivot output handler -->
</xforms:action> <!-- End of 5.1.4 empty last partition check -->
</xforms:action> <!-- End of 5.1 non-trivial partition handler -->
Now, we've finished with the non-trivial partition handler, and we turn our attention to processing a trivial partition containing at most 2 elements. The content of the partition is moved to the original data list and the partition is removed. Again, note that we're processing the last partition, which has the greatest key values, and the insert prepends to the data list, so the sorted data list starts with the greatest values and grows as lesser and lesser values are prepended over time as all the partitions are processed.
5.2) <xforms:action if="count(instance('sortdata')/*[last()]/*) <= 2">
If the partition contains two elements and the greater one is first, then it is removed from the partition and put in the original list:
5.2.1) <xforms:action if="count(instance('sortdata')/*[last()]/*) = 2 and
       compare(instance('sortdata')/*[last()]/*[1]/lastname,
               instance('sortdata')/*[last()]/*[2]/lastname) > 0">
       <xforms:insert context="instance('data')/list" origin="instance('sortdata')/*[last()]/*[1]"/>
       <xforms:delete nodeset="instance('sortdata')/*[last()]/*[1]"/>
       </xforms:action> <!-- End of 5.2.1 out-of-order pair handler -->
Now the partition contains either no elements, one element, or two elements in sorted order, so we move the content to the data list and then delete the partition.
5.2.2) <xforms:insert if="count(instance('sortdata')/*[last()]/*) > 0"
       context="instance('data')/list" origin="instance('sortdata')/*[last()]/*"/>
       <xforms:delete nodeset="instance('sortdata')/*[last()]"/>
</xforms:action> <!-- End of step 5.2 trivial partition handler -->
</xforms:action> <!-- End of step 5 recursion while loop -->
As a final note on all this algorithmic fun, the question arises whether this sort achieves optimal O(N log N) performance. The answer is no, not quite, due to hidden costs of data instance management and data node selection. However, the sort will be much faster than a "simple" sort because it does perform only O(N) XForms actions.
The XForms team finished our face-to-face meeting in Amsterdam last week. A major focus of the work on XForms 1.2 is called modularization.
The rationale for this work is the observation that the set of XML data processing problems which may have first arisen in the electronic form space is really more generally applicable to the XML data processing needs of RIAs and web applications. On the other hand, those who hear the word "form" may think they do not have a form problem because they still think of a "form" as a simple or static application like those for ordering pizza or flowers. However, XForms has solved numerous problems that keep coming up again and again throughout the W3C standards stack as well as the web application stack. By modularizing the components of XForms, we believe we can increase adoption of the components in other technologies which may not have need of all aspects of XForms.
The current view of the XForms 1.2 modularization can be found here. As an example of the rationale above, consider an application that may want to use the submission capability from XForms in a regular web application. The application would import the instance data module, but it might have an application-specific way of populating a data instance with data. The submission module would reference a data instance for the upload data and another instance for the submission results. An application-specific method would then be invoked to consume the submission result into the application, but the means of invoking that method could be an event handler for the xforms-submit-done event. The key issue here is that submission could be consumed by a non-XForms application without needing to incorporate the XForms recalculation engine, user interface controls and so forth.
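For illustration, a minimal sketch of what that might look like (the service URL and instance contents are made up):
<xforms:model xmlns:xforms="http://www.w3.org/2002/xforms" xmlns:ev="http://www.w3.org/2001/xml-events">
  <xforms:instance id="upload"> <payload xmlns=""/> </xforms:instance>
  <xforms:instance id="results"> <results xmlns=""/> </xforms:instance>
  <xforms:submission id="send" resource="http://example.com/service" method="post"
                     ref="instance('upload')" replace="instance" instance="results">
    <xforms:action ev:event="xforms-submit-done">
      <!-- application-specific handler that consumes instance('results') -->
    </xforms:action>
  </xforms:submission>
</xforms:model>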
The full elaboration of this modularization will allow applications to consume pieces of XForms incrementally, including the notion of data validation, data relevance, declarative data calculations, event-driven action scripting, repeats, switches, groups, basic user interface controls, and of course submissions.
Last blog I told you how you could get a "Greenhouse" account that gives you access to a cloud-hosted build-without-coding environment for constructing and deploying data-centric IT solutions. The product is called IBM Forms Experience Builder (FEB), and I want to spend some time on this blog giving you a better sense of the solutions you can create.
The solution I'd like to cover is a bit more advanced than a "Hello, World" solution; maybe I'll do one of those in the next blog. But the first thing I wanted to be able to do was use my account to enable myself to share solution files with you over time. Every FEB solution has a body of code that represents it. You can export the serialization of any solution as a file so that it can be imported into another account or onto another FEB server. So, I wanted to create a FEB solution that allowed me to share FEB solution files with anyone. I created that "Simple File Sharer" solution and then used it to share the "Simple File Sharer" solution file. You can get the file here (requires Greenhouse account login).
Once you have the solution file, you can import it into your own FEB account within Greenhouse and then use it to share files, especially any sample FEB solutions.
The Simple File Sharer solution uses ordinary features in the form (user interface). I just dragged and dropped a table item, and then put a name, description and upload component into the table. I also dragged and dropped a textual label so I could provide some basic instructions for users of the solution.
The Simple File Sharer solution uses some more interesting FEB capabilities to implement behavioral features around the form user interface.
- I used the Access panel to indicate that only one user, yours truly, could create a new database record or update a database record.
- Often, once a user has used a Form to create a database record, subsequent visits to that Form should allow the user to update the record rather than creating a new record in the database. An example would be a vote or a survey response. You don't want the same person voting or answering the survey more than once. In my case, I wanted to restrict myself to one database record because I always want to update a single list of shared files, not create a new list of shared files each time I fill out the form. Regardless of your use case, you can do this by going to the Forms tab, clicking the properties of the form, going to the Advanced tab and then checking "Limit to single submission per Authenticated user".
- I used the Stages panel to add an Update stage to the solution so that database records can be updated after their initial creation.
- I used the Access panel to add a "Reader" role, and I assigned "all authenticated users" to that role. Then, in the Update stage, I gave only Read access to the Reader role. So, as soon as you do a Greenhouse account login, you can use the link I provided to read from the database record I created that includes the table of files that I've shared.
In conclusion, the Simple File Sharer solution may only allow me to create a record, but it still ends up demonstrating a number of FEB features that you would commonly use in solutions that allow multiple users to create and edit records. In the future, I'll share more solutions with you that highlight various FEB features that contribute to creating interesting solutions. Remember, it's not about just forms; it's really about the whole solution you can wrap around forms, including automatic database storage, access control, lightweight workflow stages, and even configuring web services-- all without coding. And that's what makes IBM Forms Experience Builder a platform-as-a-service capability that everyone from line-of-business users up to IT professionals can appreciate!
In a recent video interview, the IBM CEO Ginni Rometty comments that Watson 2.0 will understand images that it sees, and that Watson 3.0 will be able to debate, i.e. to understand what it is talking about with another party. It's an impressive roadmap: each of these is an incredible leap forward from its predecessor.
It is, however, worth qualifying the term 'understand'. It is being used figuratively, not literally, to communicate the rough order of magnitude improvement in capability. When such a leap is made, it seems analogous to sentient understanding, even though it isn't. Imagine for a moment what Archimedes would have thought at first of a hand-held calculator, given that he had the power of Roman numerals with which to calculate pi to several digits. And yet, we would not now interpret such a device as artificial intelligence. As soon as the mechanical nature of a level of capability becomes clear, so too does the fact that it does not constitute sentient intelligence (Hofstadter's exposition of Tesler's "theorem").
You can see this assertion play out in multiple levels of Bob Sutor's scale of cognitive computing. There are levels that are clearly not cognitive intelligence, as Sutor points out, but if you lay out the scale on a timeline of decades or centuries, it is clear that each level might once have been interpreted as being indistinguishable from magic.
So where on Sutor's scale is Watson? And what implications does that have for development best practices?
Watson is clearly not on the "Sentient (we can do without humans) systems" level. As sentient beings, we don't just know things with a certain calculated accuracy or confidence level, or determine that we don't know if our confidence is low. We experience desire to know more, and we experience fear of the unknown. We are teetering bulbs of dread and dream (Hofstadter's delightful invocation of a Russell Edson poem). I urge you to let that characterization of us sink into your mind. In Watson technology, IBM has modeled a certain class of knowledge and mechanical reasoning, and in other research, IBM is doing so by simulating some of the known structure of biological brains. However, we don't yet know how to model fear and desire, dread and dream. In my opinion, these are inextricably bound together in sentient intelligence, separating it from simulated intelligence. In other words, intelligent behavior is a construct that works for the dread and dream engine of the sentient, and in the absence of dread and dream, seeming intelligent behavior is but a mechanical simulation of understanding. As an aside, I hope we only manage to model desire and fear around the same time we figure out how to model ethics (as Asimov cautions).
Does this characterization of Watson as a mechanical simulation of understanding detract from its value? Does it detract from the order of magnitude improvement it heralds as an usher of the era of cognitive computing? Of course not, quite the opposite. It is simply fantastic that this level of "Learning, Reasoning, Inference Systems" (Sutor's scale) is now computationally and economically feasible at the scale needed to help sentient intelligence (that's us) to solve real world problems. Quick, what is the square root of 7? Can't do it? No problem. Even if you're Arthur Benjamin, you'd be better off just hitting a few keys on a calculator. Quick, what are the most likely diagnoses for the patient's presenting symptoms? An "expert advisor" like Watson can be just what it takes to help determine the next best action, especially when time is of the essence because a life hangs in the balance.
The term "expert advisor" is appropriate. It conveys that the system is a "Learning, Reasoning, Inference System" that does not have sentient understanding and is therefore made available to advise and guide the actions of an expert. This is analogous to the way spreadsheets guide the results reported by accountants and chief financial officers. That being said, we also know not to put spreadsheets in the hands of toddlers. From a development practice standpoint, it is crucial to keep in mind that "expert advisor" means that the deployed system should be advising someone who is a qualified expert in the exact domain in which the "expert advisor" system was trained. Especially when a life hangs in the balance, access to the "expert advisor" system needs to be performed by those with expert qualifications in the domain because only they can reasonably be expected to use sentient understanding to interpret and follow up on the advice. In other words, the term 'expert' in 'expert advisor' should apply to the user more so than the advisor.
Now, given an enterprise workforce of those with qualified sentient understanding of their topic areas, Watson-style expert advisors are just the type of technological advancement that will help them work smarter, not harder, to meet the needs of customers and colleagues and to produce a competitive advantage for the business.
Explanatory power is a bit of a loaded term. I believe that we can come to a good understanding of what it is and how it relates to machine learning by comparing and contrasting linear regression with neural nets.
The IBM Analytics Education Series has a good introductory analytics presentation that includes brief descriptions of neural nets and linear regression. It's a good video worth your viewing, though it does make one point with which I don't agree.
The speaker says that a challenge with neural nets in business applications is that they are a black box, meaning that you can understand the inputs and the outputs but not really how the outputs are derived. Later, the speaker says that linear regression is a preferred technique because it has very strong predictive and explanatory power.
It's not really true that linear regression has more explanatory power than neural nets. Rather, it is easier to understand the problems and the answers that can be solved by linear regression. By comparison, neural nets tend to be used to provide cognitive computing power to harder problems than linear regression can solve.
To put this another way, when you use linear regression, you actually begin by assuming linearity of the relation you want to predict. As the speaker points out, you can also make a non-linear assumption, and you can accommodate this using a data transformation, for example. But the high order bit is that you are asked to assume the data relationship, and that assumption is what is giving you the illusion of explanatory power. You can explain that the data follows a line, but this is due to your own assumption. Note that an important aspect of completing a linear regression model is determining the R2 or goodness of fit of the model. This is the part where you make sure that your assumption of linearity is valid. And if the assumption is invalid, then the model has no predictive value, so it does not matter that you can explain how it operates.
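For reference, R2 has its usual definition,

R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}

where \hat{y}_i are the model's predictions and \bar{y} is the mean of the observed values; a value near 1 says the linear assumption fits the data, and a value near 0 says it does not.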
Under the interpretation that explanatory power is akin to predictive power, it turns out that neural nets have greater predictive power because they can produce results for a wider array of applications than linear regression can. There's a neat table that relates the cognitive power of a neural net to the number of hidden layers. From the table, you can see that when a relationship actually is linear, a neural net can solve it without even using any hidden layers of neurons. When one or two hidden layers of neurons are present, neural nets transcend the capabilities of linear regression, in part because they do not require you to make any assumption about what the data relationship actually is.
And that's where the confusion comes in. The linear regression model requires you to assume linearity, so you know at least what geometric shape the relationship looks like. The neural net requires no such assumption, but neither does the trained neural network give you any hint of what the relationship is. The lack of knowing the relationship is confused for having less explanatory power.
But if you look at this a bit more abstractly, the trained linear regression model has the exact same problem of not providing any additional insight. A neural net is really just a pile of numbers giving constant weights to the neural connections that convert inputs to outputs. Similarly, a linear regression model is just a pile of numbers that give constant weights to inputs to be linearly combined into an output. Sure, you know the data relationship, but that's because you assumed it. The actual linear regression model gives you no insight into why one dimension has a large slope constant while another has a small one.
An analogy I like to use: the value of a neural net is not diminished by our inability to explain how the little gray cells that implement our personal neural nets produce the cognitive results they do. And who among us would prefer to have cognitive powers defined by linear regression instead?
In terms of explanatory power, our biological neural nets perform an additional key function that we have not hitherto been able to achieve with artificial neural nets. We are able to construct additional information in the output that reveals causal relationships, or insights into the reasons for the phenomena it predicts. Put simply: we say why something is true. We provide a rationale. This is an aspect of explanatory power that, when achieved, dramatically increases the value and utility of any cognitive analytic. Theorem provers and Prolog programs have been able to do this for the applications to which they apply. In the area of unstructured information processing and data mining, you can see a demo of this concept in Watson Paths.
There are a number of new XPath extension functions available to XForms developers in the latest release of IBM Forms, and I'd like to draw your attention to two of them: eval() and eval-in-context(), and they are wicked cool!
The function eval(expr) evaluates expr in the context of the function call and returns the result. The eval-in-context(expr, contextExpr) function does a similar thing, except it first evaluates the contextExpr and uses the result as the context for the main expression. This is desperately needed for XPath 1.0 expressions to eliminate the infestation of pesky ".." operations that typically occurs. I've used it, rather than eval(), in the samples below.
One use of these functions is to enable the powerful capability to let XML data carry sophisticated dynamic metadata, which can then be implemented and enforced with singular XForms bind elements that attach the semantics of the metadata wherever it is found in the data.
It turns out that the xsi:type attribute from XML schema is already a rudimentary version of the metadata idea we're pursuing here, so the question becomes what if you could do it for all of the juicy metadata that XForms contains, like calculated data values, data validity constraints, and so forth? Let's look at what this "decorated" data might look like for a simple expense report:
<total value="quantity * price"
       constraint="total < 10000"/>
It is really easy to use XPath capabilities to find all elements having a value attribute and then to bind an XForms calculate formula to those elements, and then eval-in-context() is used to determine the result according to whatever expression is given in the data, like this:
<xforms:bind nodeset="descendant::*[@value]" calculate="eval-in-context(@value, ..)"/>
The nodeset expression starts with descendant::* to explore all elements of the data, and then the predicate [@value] selects all elements that have a value attribute. For each such element node, a calculate formula is bound to it by the XForms bind. The eval-in-context() call uses ".." to go up a level so that the formulas in the value attributes can omit "../", e.g. so the expression can simply be "quantity * price" rather than "../quantity * ../price".
Similarly, a data validation constraint can be attached like this:
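It follows the same shape as the calculate bind; here is a sketch, assuming eval-in-context() returns the string 'true' or 'false' when the evaluated expression is a boolean comparison:
<xforms:bind nodeset="descendant::*[@constraint]" constraint="boolean-from-string(eval-in-context(@constraint, ..))"/>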
On the expense report data above, this binding evaluates the constraint expression attached to the total element, and then converts the result to a boolean. Due to the constraint, the expense form data cannot be submitted to a server for processing unless the total expense is less than 10000.
In other scenarios, you may want to control the other metadata properties like relevant, required and readonly. Here's an example of data where relevance and requiredness control is needed:
<parent relevant="age < 18" required="true">...</parent>
The xsi:type assignment for age already works in XForms without needing a bind. The required setting for name is statically true, not dynamically changeable, so it is very handy to be able to allow either a static boolean value or an expression, like this:
required="@required='true' or boolean-from-string(eval-in-context(@required, ..))"/>
Technically, the required property on the parent element is conditional on the age value, but that is an automatic feature of XForms, i.e. nodes marked required are only required if they are relevant. It should not be too surprising to see that the bind for relevance looks like this:
relevant="boolean-from-string( eval-in-context(@relevant, ..) )"/>
At this point, it's probably safe to leave the readonly bind as an exercise for the reader :-)
XForms is an important standard for encoding the core XML data processing asset of a forms application for multiple reasons.
For one, it fills a gap left by schema languages, which are predominantly focused on describing what constitutes correct data that should drive server-side transactions. XForms codifies what it takes to get from an empty initial instance of a schema to a completed correct instance of a schema.
Another reason is that XForms provides an efficient language for doing the above, one that is based on such well-known and time-tested techniques as the Model-View-Controller design pattern, declarative formulas based on the spreadsheet algorithm (the second killer app of computing), and asynchronous event-based scripting. Frankly, the scripting capability is very effective for two reasons. First, each command is fine-tuned to the data by all the declarative constructs, so each line of script can do the work of 10 to 100 lines of purely imperative code. Second, the scripting in XForms actions are quite focused on mutating the XML data, so you don't get the baggage of a general purpose language that can create arbitrarily complex data structures that exist only in the memory heap of the run-time processor.
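As a tiny illustration of the declarative style (the node names and the trigger wiring are made up for this sketch), here is a spreadsheet-like total plus a one-line action that appends a row:
<xforms:bind nodeset="order/total" calculate="sum(../item/amount)"/>
<xforms:trigger> <xforms:label>Add item</xforms:label> <xforms:insert ev:event="DOMActivate" nodeset="order/item" at="last()" position="after"/> </xforms:trigger>
The calculate keeps the total current no matter how the items change, so the insert action never has to mention it.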
On the one hand, one could conclude that all of these language efficiency and simplicity aspects of XForms are beneficial to application authors who write XForms. But on the other hand, it turns out that the crown jewel is that the language efficiency and simplicity of XForms reverberate into the design tools, which can enable authors to write the application for a schema without much exposure to pointy brackets, much less coding.
To illustrate the point, I've created a 12 minute video of an XForms design experience on developerWorks to help show you more about what can be done with XForms based on a wizard-driven, point-and-click visual design experience. This video is similar to what I presented in the developer track at WWW 2007 in Banff. Please check it out and let me know what you think.
For over 15 years now, the solutions built with the software products from the IBM Lotus Forms team have been based on a simplifying system architecture depicted below. This architecture uses a sophisticated, intelligent XML document as the unit of information that flows among collaborators in a business process.
To support such high-value intelligent interaction, the document has a number of layers as depicted below. These layers have different responsibilities. Foundationally, the document format is an XML format, which means it is based on the standard that allows introspection of the document to inject or extract information. This also means the full power of XML signatures can be brought to bear to meet archival and security requirements, particularly non-repudiation of the transaction effected by the business process based on the document. The XForms layer allows representation of a rich interaction layer for collection of structured data content.
Our work on Lotus Forms has been focused on the collection of structured content needed to drive a transaction through a business process, and this has included defining the XML vocabulary called XFDL to allow application developers to build the high precision user interface needed to drive interaction with end-users.
But consider for a moment the great number of situational applications that are constantly being created and deployed around the globe that are based on the requirement to have a mix of structured data and unstructured content. In other words, consider the case of office documents with high business value. We're talking purchase contracts, supplier agreements, insurance policies, RFP responses, financial arrangements, and the like. These are documents that can make effective use of multipage free-flowing unstructured content and graphics interspersed with fill-in-the-blanks content for names, dates, addresses, monetary values, milestones and so forth.
These documents support an important class of live, interactive applications in which multiple collaborators work together to solve a semi-structured content problem. The OASIS Technical Committee for the Open Document Format, in their wisdom, incorporated the XForms model into ODF, and this delightful feat of standards reuse allows us to reap the benefits of ODF's unstructured content creation features together with the interactive structured data collection and both the data and document submission facilities of XForms 1.1 at the same time. The layered document picture looks pretty much the same, except as you can see below the on-the-glass presentation layer is ODF, and otherwise the overall business process system architecture is the same as depicted above.
The result of these considerations is a focus on Interactive Office Documents as a strategic means of going Beyond Office, i.e. going beyond the traditional usage patterns for office documents that have become entrenched due to the office document limitations of the past and instead solving the real business problems that cause people to select the use of office documents in the first place. Using XForms 1.1, office documents based on ODF can be first class participants in collaborative business processes.
Check out this cool video that will help you understand how and why your business processes need to lose the paper and get on board with dynamic, interactive electronic forms... powered by the XForms standard of course!