I have been using Rational Application Developer v6 since the early WP5.1 beta days (November 2004).
I updated my install to v220.127.116.11 and found the Domino SDO functionality, and I was pleasantly surprised.
I have been drinking the SDO Kool-Aid for the past year or so, and it was refreshing to see it implemented for my favorite document-centric store (NSF). I will have some updates on issues and successes with using the SDO in articles and future blog posts as I play with the feature more. One of our consultants, Chris Riner, has been playing with it at my customer site and finds the WYSIWYG and rapid application development functionality, combined with JSF, useful.
Raj Balasubramanian, ISSL
InsideLotus - Lotus, Portal and Social Collaborative Software
The folks over at Teamstudio are at it again. This time with a free utility to help you with your LotusScript coding efforts, called Teamstudio Script Browser. It lets you browse ALL of your script in the database all at once, find out where code is referenced by other code, and browse to it in Domino Designer.
I'm sure the Domino developer community will be quite grateful for this useful tool. (And be sure to check out their other Domino development tools, which have always been first class!)
Sr. Product Manager, IBM Workplace Application Development
The 'Coltrane' release of Lotus Freelance Graphics, the 1998 version bundled with Lotus SmartSuite, was famously held up for a month when it was discovered that one of the clip art images contained a tiny 20-pixel image of Taiwanese currency (rather than mainland Chinese currency). Or was it the other way round? An eagle-eyed QA engineer had discerned the offending image. Needless to say, this was something that would cause offense and/or affect purchasing decisions in the involved markets.
I was a little canary in that Freelance mineshaft (or was it a minefield?) when, a couple of months earlier in the project, I had discovered a seriously outdated map of Africa while integrating new clip art into the product. I wrote one of the sprs of which I am most proud, titled something like: "Upper Volta should be Burkina Faso". The spr also made brief mention of the fact that, as far as I could tell, there were similar problems in our maps with the names of the states of the former Soviet Union, which had changed earlier in the decade. I believe that my spr was deferred to a later release; it was too late in the product cycle and we'd have had to go back to the company we licensed the art from, and so forth. As it turned out, however, with the advent of the China/Taiwan currency issue, the entire suite was delayed as the test team had to painstakingly go over all the clip art and other areas of the product, vetting for similar outrages that could endanger sales.
Incidentally the acronym, spr, stands for "Software Problem Report" in Lotus's parlance. After a decade of indoctrination, I still tend to use spr rather than the more clinical, "defect", which is the favoured term at IBM, or bug which is the more widely-used term generally. You can still tell whether someone has the 'old Lotus' DNA rather than the 'new Lotus/IBM' variety by the terminology they reflexively use. For a dissection of recent linguistic trends in technology see this earlier post.
As a further aside, the Lotus spr database is actually one of the most successful and useful applications of Notes technology ever and in a similar manner, the Bugzilla project is actually one of the best things to have sprung out of the Mozilla effort. Certainly it was more useful than the Gecko suite until the M7 milestone was reached years ago. There must be some kind of software law at work here, something analogous to Zawinski's Law on software expanding until it can support email (and now feed technology) and so I'll coin it as Koranteng's First Law of Software Systems:
A software platform reaches its tipping point once it can serve as a bug reporting system.

Intuitively this makes sense: once developers can view, file and retrieve bug reports using their own products, they will be more likely to use them, and their confidence in their mission will improve dramatically.
Back to my topic though... I'm sure that the artists who created the clip art in that design firm had no subversive intent; they probably used outdated stock art as their source material. Still, sometimes you wonder at these things and, in this vein, I'm reminded of the Nobel Prize-winning Portuguese author Jose Saramago's wonderful novel, The History of the Siege of Lisbon, in which a copy editor proofreading a historical novel defiantly changes a crucial word in the text, flipping the outcome of a famous historical battle from a miserable defeat to a victory. A masterful alternate history of Portugal then develops, and its culture is fully reimagined. I'm sure that bored developers sometimes slip similar things into their products, and not just serendipitous Easter Eggs.
More seriously though, the Freelance incident (like similar incidents surrounding any new release of dictionaries or thesauruses from Microsoft) goes beyond the cultural sensitivity with which this post is concerned into the rarefied realm of political sensitivity. The same goes for choosing product names: you don't want mere mention of your product to cause snickering or offense. A misconceived name or culturally insensitive feature can be a black eye that will chasten even the most high-flying company. More worrying is that this can translate into a serious financial impact (well beyond a month's lost revenue for the suite) if proper attention isn't paid.
Internationalization, however, is a mostly thankless and oft-neglected task, even though I've just outlined its importance. From a developer's standpoint, you first think about it mostly in terms of dealing with resource bundles and having to put all your resource strings into separate files for different languages. This is a pain because you probably started with a little prototype of a user interface and now you're being asked to find all those hard-coded strings and to "do the right thing". Plus it typically becomes an issue only late in the game, when the really interesting work is over and you're ready to think about the next big thing. Instead, you're grudgingly forced to worry about handling different writing systems, dealing with bi-directional text, or that old favorite: the various "special characters" in the wide variety of programming languages, file formats, protocols and operating systems in your software. Those best at this task are akin to forensic investigators and must possess considerable patience and attention to detail.
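The resource-bundle drill can be sketched in a few lines. The locales, keys and fallback policy below are hypothetical, but the discipline is the point: no hard-coded user-visible strings anywhere, only keys resolved per locale.

```python
# Minimal sketch of resource-bundle string lookup (hypothetical keys and
# locales). In Java this would be ResourceBundle; the pattern is the same
# everywhere: user-visible strings live in per-locale tables, not in code.

BUNDLES = {
    "en": {"greeting": "Welcome, {name}!", "save": "Save"},
    "fr": {"greeting": "Bienvenue, {name} !", "save": "Enregistrer"},
}

def get_string(locale, key, **args):
    """Resolve a string key for a locale, falling back to English when
    the locale (or an individual key) has not been translated yet."""
    bundle = BUNDLES.get(locale, BUNDLES["en"])
    template = bundle.get(key, BUNDLES["en"][key])
    return template.format(**args)

print(get_string("fr", "save"))                 # Enregistrer
print(get_string("de", "save"))                 # Save (fallback)
print(get_string("en", "greeting", name="Ada")) # Welcome, Ada!
```

The painful part, of course, is not writing this lookup but hunting down every hard-coded string in an existing prototype and routing it through one.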
If, for example, your software mangles names with accents, you'd have real trouble selling in France. Similarly with ampersands and apostrophes - which have significance in SGML and its derivatives (HTML, XML etc). And don't get me started about wider character sets and encoding issues... As an ironic case in point, while developing Lotus K-station, one of our best business partners was somewhat stymied in his development and extraordinary evangelism of our product because, in its earliest release, it couldn't handle the ampersand in his company's name. Luckily for us, he temporarily renamed his organization in his LDAP directory; Sun & Son became Sun and Son until we worked out the quirks. To this day, I always make sure to test whatever product I work on with that organization name. It's surprising though, how often this kind of problem recurs.
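The Sun & Son bug is exactly what a disciplined escaping step prevents. A minimal sketch using Python's standard library (the fix in K-station itself would have looked different, of course):

```python
from xml.sax.saxutils import escape, unescape

# A company name with a raw ampersand is not well-formed XML content.
org = "Sun & Son"

# Escape before embedding in SGML-derived markup; the document stays
# well-formed and the original name survives the round trip.
xml_fragment = "<org>%s</org>" % escape(org)
print(xml_fragment)                 # <org>Sun &amp; Son</org>
print(unescape(escape(org)) == org) # True
```

The moral is the old one: escape at the boundary where text meets markup, every time, rather than hoping no customer's name contains an ampersand.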
In this internet era, most technology, and certainly all software development, has to have global concerns in mind. It is said that the sun doesn't set on an IBM project and I work with a very diverse set of colleagues the world over. Presumably the benefit of such a widely dispersed and diverse workforce should be to mitigate the likelihood of issues in this area. As my current project has been going through translation and localization testing of late, I've been thinking a lot about how we handle internationalization.
The old Lotus process was a more decentralized one: each product group had an internationalization team seconded to it. Members of such teams were domain experts who knew the product inside-out, in addition to being localization gurus. This had many benefits: the team was involved from the very beginning of the product development cycle and could give crucial design feedback early and iteratively. Your first prototype was immediately critiqued from their standpoint. The obvious downside of decentralization was the lack of uniform standards - chaos that our translation teams couldn't bear. Some products were very easy to localize; others were, to put it politely, far less so.
As we transitioned to the more centralized IBM globalization process, we got the benefits of uniform standards and greater resources. More languages could be supported in the initial release, and you could at least point developers to documentation about the processes to follow instead of ad-hoc stumbling about. Still, this came at the expense of domain expertise and sensitivity, and of a much slower feedback loop. Some of the test teams would be barely getting up to speed with the product by the end of the testing phase. Localization concerns would sometimes surface too late in the game and cause much unnecessary reworking and product delays. These things are tweaked, as all processes are, and in recent years I've seen a push towards re-enabling more domain expertise in the internationalization teams, which has been very useful for our product development.
I'll lay out a brief example, then, that I think is illustrative of the kinds of things we're dealing with here. I hope it can tease out the technical, design and business issues that can arise...
I embarked over the past couple of weekends on a mass digitization project and scanned, retouched and uploaded 2000 or so old photos from shoeboxes under my bed. The technology involved in this exercise was scanner hardware and image acquisition software, the bundled Adobe Photoshop Elements for color adjustment and red-eye correction, and a couple of online photo-sharing services: Yahoo Photos and Flickr (I started the day Yahoo's acquisition of Flickr was announced).
The first thing I very quickly noticed: somehow all the photos that I uploaded to Yahoo Photos turned out darker than on Flickr (both services resize uploaded photos). The photo-resizing algorithm used by Yahoo Photos was giving worse results. This was noticeable to me because a large number of the photos featured darker-skinned people such as myself. The originals were fine, and where there were lighter skin tones everything looked good, but with darker skin tones the resized photos were not so good. This meant that if I didn't believe in the virtues of Save Lots of Copies Everywhere (SLOCE), I would have leant towards Flickr and stopped using Yahoo Photos.
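I can only guess at the cause, but one classic way a resizing pipeline darkens photos, and disproportionately hurts darker tones, is averaging gamma-encoded sRGB values directly instead of converting to linear light first. A toy sketch of the difference, assuming the common 2.2 gamma approximation:

```python
# Toy illustration of why naive downscaling darkens images. (This is an
# assumption about the symptom I saw; Yahoo's actual pipeline is unknown.)
# sRGB pixel values are gamma-encoded, so averaging them directly
# underestimates the true light average.

GAMMA = 2.2  # common approximation of the sRGB transfer curve

def average_naive(pixels):
    """Average gamma-encoded values directly (the common mistake)."""
    return sum(pixels) / len(pixels)

def average_linear(pixels):
    """Decode to linear light, average, then re-encode (the correct way)."""
    linear = [(p / 255.0) ** GAMMA for p in pixels]
    mean = sum(linear) / len(linear)
    return 255.0 * mean ** (1.0 / GAMMA)

# Two neighbouring pixels being merged into one during a resize:
pixels = [40, 200]
print(average_naive(pixels))   # 120.0 -- noticeably darker
print(average_linear(pixels))  # ~148  -- the perceptually faithful result
```

The error grows with contrast, which is why photos mixing deep shadows and bright highlights, or darker skin against bright backgrounds, suffer the most.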
Secondly, I don't have Flash installed on my Mozilla browsers, and so my experience of the Flickr website was very different from that of my family and friends who started looking at the photos with Flash on Internet Explorer. There were immediate complaints about the first bunch of photos. It turns out that images rendered in the Flash plugin have a slightly darker tinge than images rendered directly by the browser itself. This is not normally noticeable, but again it exacerbates contrast issues when darker skin tones are in evidence, as in this case. This was made worse when users tried the slideshow feature, because the background of a Flickr slideshow (also implemented in Flash) is black, worsening the contrast further.
Thirdly, when retouching photos, the Quick Fix or Auto Correct options in Photoshop seemed tailored for lighter skin tones, so I was constantly having to do manual tweaking of my photos. Now, this is not a big deal for a few photos, and indeed it's fun to fiddle with photos, but after a couple of hundred images it gets tiresome. I found myself longing for "smarter" recognition by the software or, at least, a nice 'dark skin' option that I could set in a preferences dialog. In short, I started considering using different software instead of Adobe's.
Now, I mention these nitpicks on otherwise excellent and useful products because of the design issues they raise. Technology is simply a tool in aid of people, and as human beings we live in different societies and cultures. We find that different cultures adapt technologies in different ways to suit local preoccupations and concerns. And I certainly had my own localized concerns these past weeks.
Even when the technical 'fixes' are easy, there are design issues and economic tradeoffs that also arise. True, Macromedia could implement better JPEG rendering in Flash, but that comes at a certain expense - a good renderer is a hard thing to write (even if they could license the image rendering from the Mozilla folks). Also, what they have appears to be good enough for most people (except me, obviously). When do you decide that your product is good enough and stop pandering to the Long Tail? Can you afford to do that? Or are you missing out on a vast market opportunity?
Yahoo Photos could certainly implement a better photo-resizing algorithm, although presumably there's a performance penalty to be paid if you use a more color-accurate algorithm (or perhaps a larger resultant image size). At the kind of internet scale that the Yahoo service operates on, with tens of millions of users and photos, this could seriously affect the scalability of their platform. Conversely, if all Yahoo users switched en masse to Flickr, which uses the more expensive algorithms, would that platform be able to handle it? Or would it suffer teething problems and turn users off because of poor response times?
Similarly, Photoshop could implement a slight variant of its various Quick Fix and Auto Correct features more attuned to my kind of skin colour (indeed, I assume that photographers in Africa have written macros or filters that do such a thing). How best, though, to phrase a global preference in an options dialog in Photoshop? "Adjust for darker skin tones"? Documentation writers would have a field day finding the right verbiage for such an option. And what about usability? If you add all these preferences to your product, what will your user interface look like? Try typing about:config in a Mozilla browser to get a sense of the kind of complexity that modern software developers have to cope with.
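As a sketch of what such a preference might do under the covers, a gamma-style midtone lift is one plausible mechanism: it brightens the middle of the tonal range without clipping shadows or highlights. The exponent here is an illustrative guess of my own, not anything Adobe actually ships:

```python
# Sketch of a hypothetical 'darker skin tones' adjustment: a midtone lift
# via gamma correction. The 0.85 exponent is illustrative, not Adobe's
# algorithm; a real feature would likely adapt it to image statistics.

def lift_midtones(value, gamma=0.85):
    """Brighten the midtones of an 8-bit channel value without clipping.
    gamma < 1 lifts midtones; gamma > 1 darkens them; 0 and 255 are
    fixed points either way."""
    return round(255 * (value / 255.0) ** gamma)

print(lift_midtones(0))    # 0   -- shadows untouched
print(lift_midtones(255))  # 255 -- highlights untouched
print(lift_midtones(100))  # lifted above 100
```

The interesting design question isn't the math, which is trivial, but whether to surface it as yet another checkbox or to infer it automatically.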
If there were a huge market for these products and services in Africa, the issues I faced could be a real problem for the companies in question (although this is unlikely, given the low internet penetration rates and the likely widespread software piracy). There would be demand not just for local language versions (say, a Swahili version in Kenya) but also for tweaks that would make these services more closely attuned to the prevailing culture and, in this case, ethnic backgrounds. Photographers in Africa over the past 150 years have had to deal with brighter sunshine, higher contrast and darker skin tones when processing their photos, as photography has gone through its various evolutions and has now moved into the digital realm. The people who install photo laboratory hardware in Ghana, where I come from, always have to recalibrate their equipment to deal with the kinds of skin tones present in the local market. The factory defaults simply won't do. I've had better results developing film in Ghana than in the US because I often forget to tell the labs here that they should "watch for skintones". I'd expect, then, that software that was truly local (and all web software should be local, or adjustable to fit local concerns) might sometimes need not just run-of-the-mill language changes, or even writing system changes, but also, as seems needed here, algorithmic adaptations.
As software designers, we try to engineer simplicity and refrain from overwhelming users in their interaction with our services and products. Software is not the user's main focus; it's rather its use that is most important for the user, for the business and for society as a whole. Yet there are very real concerns in the application of technology in different cultures. So the next time you see a vaguely-worded so-called "Turkish option" somewhere in your application's configuration dialogs, know that someone somewhere was likely adapting their product for a local market; I'd hazard that the tweak was made to fix a deal-breaker in that market. Join me, though, in saluting the developers, testers, product managers and designers who collectively worked together to come to that solution.
Finally, for what it's worth, I find endlessly fascinating this notion that cultural sensitivity in technology sometimes necessitates algorithmic adaptation. Maybe, though, iterative adaptation in response to local environments - evolution, in short - is the name of the game. Perhaps that's simply the way things should be.
Workplace Forms Development, Lotus Software
Since blogging for InsideLotus, I've posted a couple of entries regarding spam and different levels of spam. It wasn't until a recent seminar with some anti-spam vendors and IBMers that I really grasped the impact spam is having on today's e-commerce. Before I continue, I should first give my definition of spam.
"Any unwanted commercial email usually sent in bulk where the sender can gain a profit."
One aspect of spam about which I can't seem to draw a conclusion is how one should classify internal email. Obviously, I use Lotus Notes and IBM Workplace as my mail clients and receive both internal and external SMTP mail. Now, I don't consider newsletters or broadcasts from internal employees spam, but some users do. On the other hand, I also have some email accounts with ISPs. The question is, where do you draw the line with internal email in that case? I would like to get the feedback of all the readers of this blog concerning your thoughts on whether internal email can be considered spam.
Fighting spam is now a million dollar business. To put things into perspective, more than half of all email sent is spam. Most anti-spam activists agree that the goal is to protect the end user from fraud, as well as from the time it takes to weed through spam email. Surveys show that some users spend up to 30 minutes a day sorting through email. One method of fighting spam is to allow the end user to decide what they consider spam. Personally, I like the idea of being able to vote on email that I consider spam, and then leaving it up to the server to sort my mail for me and deposit the mail I've voted as spam into a junk folder. Some ISPs have incorporated this method, and I am glad to see that IBM Workplace has also developed a similar method, known as SpamGuru.
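I don't know SpamGuru's internals, but the voting scheme I like can be sketched with a toy frequency-based filter: users vote, the server tallies which words show up in voted-spam mail, and new mail whose words skew spammy goes to the junk folder. Everything here (the function names, the threshold) is my own illustration, not any product's algorithm:

```python
# Toy sketch of vote-driven junk filtering (not SpamGuru's actual
# algorithm). Users vote messages spam/not-spam; the server learns
# per-token frequencies and classifies new mail accordingly.

from collections import Counter

spam_tokens, ham_tokens = Counter(), Counter()

def vote(message, is_spam):
    """Record a user's spam/not-spam vote for a message."""
    (spam_tokens if is_spam else ham_tokens).update(message.lower().split())

def is_junk(message, threshold=0.5):
    """Junk if most tokens have been seen more often in voted spam."""
    words = message.lower().split()
    spammy = sum(1 for w in words if spam_tokens[w] > ham_tokens[w])
    return spammy / len(words) > threshold

vote("cheap pills buy now", True)
vote("meeting notes attached", False)
print(is_junk("Buy CHEAP pills"))  # True
print(is_junk("meeting notes"))    # False
```

Real systems of the era (Bayesian filters and the like) weigh token probabilities far more carefully, but the appeal of the voting loop is the same: the user's own judgment trains the server.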
So if you're serious about fighting spam, I suggest that you try IBM Workplace Messaging's junk filtering.
Ted Stanton
Fast Company has a great article on the top 25 jobs for 2005. I'm glad to see Computer Software Engineer has its place for 2005, along with Personal Financial Advisor and Producer/Director. They considered factors such as high demand, salary range, investment in education, and the ability to be innovative and creative in your job. I had thought that many "science" type jobs didn't allow for innovation, so I was surprised to see a lot of engineer and scientist jobs.
Why is Computer Software Engineer hot? They tell us it looks like computers are here to stay. Good for us!
- Barbara
The always interesting Mark Birbeck of formsPlayer has a nice article up on his blog.
Ostensibly it's about "CSS, the XForms Dependency Engine, and 'Dynamic Infosets'", and he starts out by asking why there are two main languages (CSS and XPath) for selecting and addressing nodes within a DOM. It's a good question, and one that many have asked before. The piece is probably the definitive consideration of the question, and he's a great guide, walking us through all the issues involved.
What is more interesting to me is the wider point that he goes on to make as he lurches into a very useful discussion of how we design languages and layer and model our systems. Almost in passing, he addresses an issue I find most fascinating, which boils down to the importance of ease of authoring and syntax in technology.
The mental makeup of human beings means that brevity matters and "intuitiveness" becomes a concern. So long as phone numbers were short, they were easily memorable; these days, however, with 10+ digit dialing, we rely on Caller ID and on programming numbers into our phones. Thus our cognitive faculties and our short-term powers of recall come into question. Being able to control the nicknames and identifiers we use in our buddy lists is a very significant factor in the spread of instant messaging and now of applications like Skype. Identifiers matter significantly in this respect. The simplicity of the URI, a key and memorable tenet of the web architecture, is a similar case in point.
From another angle on the issue, consider that not everyone can tilt their head enough to handle the parentheses of a typical Lisp program. Most programmers can, on the whole, and some, like the Paul Grahams of the world, even wear it as a badge of honour. Of course a good computer science program should expose budding engineers to this way of thinking, and many do. But these, like the Smalltalk gurus and others, are sadly outliers in the software landscape. I would hazard here that the largest impediment to the widespread adoption of the elegant programming model of Lisp is not that something like recursion is difficult to understand but rather the dissonance that the proliferation of parentheses can cause when Jane Programmer scans a listing in an editor. Vacant stares and cognitive overload ensue.
Marc Andreessen will be remembered for many things, amongst others: Mosaic, Netscape, the AOL merger, a little dotcom hubris (some might say; simply youthful exuberance, I would say), evidence in the flesh of what a monopoly like Microsoft can do when provoked, and, along with Jim Clark, a pointer to the role of gravity in deflating bubbles ala the Great Crash.
Historians will point to all that and more. For me though, his choice of the syntax for the hypertext link is his most lasting contribution to technology and to mankind in general. Others argued otherwise at the time and would have foisted semantic doodles on us. The "View Source" impulse that has led directly to the success of the web, that great conversational engine, would have been stymied by much head-scratching among the everyday people who created many a homepage circa 1995-1999. Those much-mocked homepages were wonderful assertions of identity, and the lowered barriers to entry enabled many people to plant their flag on this here internet, where they, their friends, parents and children now live, shop and commune. If he ever receives an honorary knighthood from King Charles, his coat of arms should read
In this vein, I was perplexed that in XPath 1.0, it is better, or rather less ambiguous, to write true() rather than true. In other words, it is recommended or even required that we treat booleans as functions and not as literals. Indeed everything is a function and as we know functions need parentheses to indicate their arguments. This always trips me up and maybe this is no longer the case in XPath 2.0. Who knows? I certainly haven't cared to look. What is true is that the cognitive impedance this caused me on my first date with the language will forever taint it in my eyes even though I have daily dealings with it.
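The reason the parentheses are required is easy to see once you think like a parser: a bare true is a perfectly legal name test selecting child elements called true, so only the parentheses mark it as a function call. A toy classifier that mimics that grammar rule (a sketch of the distinction, not a real XPath parser):

```python
import re

def classify_step(expr):
    """Classify a single XPath step the way the grammar does: a name
    followed by '()' is a function call; a bare name is a node test.
    (Toy sketch -- real XPath also has axes, predicates, literals, etc.)"""
    if re.fullmatch(r"[\w-]+\(\)", expr):
        return "function call"
    if re.fullmatch(r"[\w-]+", expr):
        return "node test (selects child elements with this name)"
    return "something else"

print(classify_step("true()"))  # function call
print(classify_step("true"))    # node test (selects child elements...)
```

So true without parentheses would silently select any <true> children of the context node and then convert that (usually empty) node-set to a boolean - which is exactly the kind of quiet wrongness that trips up authors.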
If you had to say huh? when you did your first view source of a web page, would you have gone with that newfangled web thing or would you have written it off as one of those overly complicated buzzwords that you would look at later "when you had more time"? First impressions and snap judgments (ala Blink) count surprisingly much in these things.
One of the things that I keep thinking we need, and that I hope someone with an itch will build, is a nice XPath expression editor: a component that parses XPath and walks you through the process of adding conditions and formulating expressions. Maybe a wizard or something, with selectors for picking the various kinds of things that are typical when building forms applications, e.g. this field should be less than the value from this other field. A component that would let you add a library of custom XPath functions that could implement additional rules. Each of these libraries would be able to specify its editors, but most could just be simple drop-down lists. It's not a big thing to do; you could sketch out a nice design for such a component and knock it out over a weekend. Make it open source and be done with it.
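A first cut at such a component might simply be a chainable expression builder, so the author picks conditions from a UI and never types a bracket; the class and method names here are my own invention, not any real product's API:

```python
# Sketch of a programmatic XPath builder (hypothetical API): the author
# picks fields and conditions, the tool emits the expression, and missed
# parentheses and brackets become impossible by construction.

class XPathBuilder:
    def __init__(self, path):
        self.path = path
        self.predicates = []

    def where(self, left, op, right):
        """Add a comparison predicate, e.g. 'this field less than that one'."""
        self.predicates.append("%s %s %s" % (left, op, right))
        return self  # chainable, like a query builder

    def build(self):
        preds = "".join("[%s]" % p for p in self.predicates)
        return self.path + preds

# "this field should be less than the value from this other field":
expr = XPathBuilder("/order/item").where("price", "<", "../budget").build()
print(expr)  # /order/item[price < ../budget]
```

The drop-down-list libraries I mention would just feed the where() arguments; the point is that the author manipulates conditions, and only the builder ever touches the syntax.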
Still, my focus on the critical necessity of such a component is simply a recognition that hand-authoring XPath can quickly turn into a nightmare of missed parentheses, predicates and selectors. It is true that authoring XPath doesn't require as much head-scratching as, say, XSLT, in which context I first encountered the language, and which mere mortals like me will never understand even though I used to write Lisp. But it is something that raises the bar quite high for the average author. In contrast, there is something strangely satisfying about editing a style sheet (or maybe it's just that I've grown accustomed to that over the years). Something like this, I expect, is what lies behind the impulse for Web Forms 2.0 and the WHATWG: a pragmatism borne of weighing programmers' familiarity with scripting languages and a tenacious devotion to backward compatibility.
More generally though, the issue is that getting general users to author structured content is a big problem; indeed it is a nigh insoluble one. And all the software that we produce cares very much about structure. The wonder of the spread of HTML and XML is that, ever since Berners-Lee, Bray and others unleashed their projects on us, human beings have adapted to angle brackets, < >, and now don't see them as much ado about anything. The same goes for CSS: the tradeoff that was made for syntax is now bearing fruit.
Thus, one of the main questions that will determine the adoption (or lack thereof) of XForms, Web Forms and their ilk is the perplexing matter of whether human beings in the next decade will become as inured to writing true() in an expression as they have become to the angle brackets of HTML and XML. Put a different way, it could well be something completely orthogonal to the merits of the underlying technology that determines the outcome: the appearance of the kind of code you see when you do View Source on the first cool forms application you encounter. I'm suggesting, then, that the language acquisition cost and what I'm terming the cognitive impedance, in the average human being, of parentheses for functions and forward slashes for selectors will determine the adoption rates of XForms technology.
I see a bright future in which that much-maligned Forms "programming model" at the core of the Lotus Notes platform could be brought to the web platform, leveraging the native primitives of the Web style (hypermedia, URIs, linking etc). XForms is singularly well suited to do this. For those unfamiliar with Notes/Domino, my handwaving elevator pitch is that it is a platform essentially founded on the fundamental insight that a huge class of applications can be built from just a few compositional building blocks: Forms, Views and a standard file format (the note, in Notes terms). The brouhahas made about messaging, security, directory services and all the paraphernalia that marketing people throw about when they pitch the platform to you are all syntactic sugar around the core competency of Forms and Views and the client and server processes that can manage them. A whole cottage industry of business partners is doing very well, thank you, building custom and evolvable applications for businesses small and large, everywhere. The fact that email can be construed as a forms application is just a side benefit and detracts from the real focus of the platform. This is much misunderstood by people whose only encounter with Notes is as a mail client. It's really just a forms and views app for people and processes. Incidentally, this same platform is most likely what is funding my current work and much of the IBM Software Group, even as resources are spent on other "sanctioned" and more "strategic" approaches. C'est la vie.
One thing I've noticed is that many people seem to want to ignore the lessons learned from the Notes world over the past 15 years and behave as if the forms space is terra incognita - a brave new world indeed. On the contrary, the Forms problem, and the wider Process problem, is nothing new. These are things that have been with us almost from the time that societies became organized and larger communities formed, as Barry Briggs has pointed out. Whenever I plumb those depths, however, I am reminded of the notion that Joel Spolsky so eloquently articulated: that in software it is easier to write code than to read code. In software terms, 15 years is an eternity, hence we are fated to reinvent and rewrite old software anew. Just look at WS-* as opposed to Corba. Sometimes I almost despair at this notion, since it bespeaks a total lack of curiosity and historical memory, even among those sitting in the same building as people who have learned comprehensive lessons about the many problems of forms: evolvable schemas, metadata, annotations and the like.
Of course I'll continue to build the tools, the processors, the renderers and the infrastructure plumbing to make the forms dream an easier reality. I'd still argue that adoption will ultimately come down to whether the View Source impulse can be leveraged and whether the average Joe will get turned off by things like true() instead of true. If I were inclined to be a research type, I'd imagine a case study or paper titled something like
"The Importance Of Syntax In Technology Adoption - Historical Insights From The Trenches 1940-2005"

A more prescient historian of science would note that the issue of notation in mathematics is similarly a longstanding area of concern. A linguist would add insights about how different societies adapted different writing systems and the impact of the writing system on cognition and development. Anthropologists, sociologists and psychologists would have much to say in this vein.
Technologies like XSLT and XForms, which are the prime users of XPath, are still in their infancy (as are some of the other takes on this problem space from Adobe and Microsoft). Despite having many implementations at its launch, XForms is still ambling towards its inflection point, and I'd hazard that the majority of XForms templates and transactions are machine-generated. Fair enough, perhaps. Wearing my prediction hat, however, it will be very interesting to see what happens 6 months after the default installation of Firefox includes its XForms extension. We're going to see the same thing in microcosm now that Mozilla have announced that Firefox 1.1 will include SVG, which has a more limited utility for mass audiences. With very little tongue in cheek, I wonder what contact with a massively vaster audience of form authors will do to XForms implementors. I'd lay bets on the first XForms engine that implements a "quirks mode" for its XPath evaluation engine to deal with common patterns of mistakes in hand-authored forms. It will be a case of omitted parentheses, rather than browser tag soup, that causes much fretting in mailing lists the world over. I wonder whether Peter-Paul Koch will then have to add an XForms or XPath section to his invaluable site that documents browser quirks. The litmus test will be the teenager doing a summer job in a lawyer's office who is asked to write a little forms application to help some workflow. If parentheses make their eyes glaze over, I doubt I'll be proved wrong (although I hope to be) about whether XForms will be used for that custom application. If the typical simplified wiki syntax (whatever Jotspot or SocialText are using) is more intuitive, that is what will get used.
The other point Birbeck raises, and here the argument is much stronger, is about the tradeoffs that designers consider when it comes to separating the processing, addressing, eventing and styling models. He speaks to an architectural truism, whatever the domain in question. This is where his clarity of thought comes to light: a clarity that stems from being one of the exalted "Invited Experts" on the XForms and HTML W3C working groups, and from having an innovative product that explores this landscape daily.
If, for example, you started off in the mad, slapdash world that was early browser development, you might opt instead for a very pragmatic viewpoint on these issues. That's the kind of weighing that has characterized the Mozilla folks. Håkon Lie and Hixie of Opera fall into this category, even if they appear to take it to almost militant extremes at times. Still, I see where they are coming from: your take on these things is coloured by contact with millions of end-user authors and the daily reality of tag soup.
I've only spoken with the Opera folks a couple of times and have always forgotten to ask them my burning question: how difficult is it, really, to add an XPath engine to a browser? I've always assumed that the real reason (as opposed to the stated reason, "we have everything we need in scripting and CSS") for Opera's almost visceral objection to XForms has been a concern over footprint, since the same codebase is used on desktop and pervasive clients. But with XML engines now a necessity in browsers (witness the indispensable XMLHttpRequest) and Moore's law at work on the more limited clients, at what stage do protestations about XPath (and hence XForms, which is a dependency engine over and above XPath) become simply bywords for inertia? Being a rather conservative software engineer myself, I am similarly not inclined to jump on bandwagons just because they are in vogue. Still, I'm interested in the architectural thinking that lies behind their position.
If, on the other hand, like Birbeck, you've drunk the XForms Kool-Aid, you'd be a generalist, inclined to see almost everything through rose-tinted glasses in terms of separation of model from UI, eventing and actions. You might recite MVC chapter and verse as you lull yourself to sleep at night, self-satisfied with your specification. The irony is that the visual effect of mere parentheses on an average teenager could cut short your sweet dreams of empire building.
Of course it's not always so cut and dried; sometimes you're just in the middle, trying to figure out which of the 45 latest buzzwords you're expected to spout fluently tomorrow to get a raise that beats inflation, or even a promotion. Or maybe you're just trying to code for food and get some real work done by helping a doctor's assistant keep track of HMO paperwork more efficiently, or something of that sort. Oh well. Food for thought in any case.
Workplace Forms Development
TedStanton 0600014754 297 Visits
I was just asked how to pass usernames between web servers (Domino/WP/WAS to another non-IBM web server that needs awareness) when the environment has Siteminder deployed. The easiest way to pass valid usernames in an environment with Siteminder is to use Agent Responses configured on the target agent(s). The response adds a custom field to the HTTP header, which can then be parsed or looked up from the JSP/PHP/.NET/CGI/Python web page. The response can get its value from an LDAP attribute or can compute a value, which could be the user DN used by Sametime. Refer to my Siteminder articles on developerWorks for more information on Agent Responses.
For example, a PHP page could read the field via $_SERVER['HTTP_CUSTOMVAR'], and a Java page via request.getHeader("customvar").
Here customvar is the name of the Agent Response variable configured for the agent; PHP's CGI convention upper-cases incoming header names and prefixes them with HTTP_, which is why the same field appears as HTTP_CUSTOMVAR on the PHP side.
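To make the lookup concrete, here is a minimal Java sketch of pulling the Siteminder-injected field out of the request headers. "customvar" is the hypothetical Agent Response name from the example above, and a plain Map stands in for HttpServletRequest so the snippet runs on its own:

```java
import java.util.Map;

public class SiteminderHeader {
    // Siteminder Agent Responses arrive as extra HTTP request headers.
    // In a real servlet you would call request.getHeader("customvar");
    // here a Map simulates the request headers so the sketch is
    // self-contained. "customvar" is a hypothetical response name.
    public static String userDn(Map<String, String> headers) {
        return headers.get("customvar");
    }

    public static void main(String[] args) {
        Map<String, String> headers =
                Map.of("customvar", "cn=Ted Stanton,o=example");
        // Prints the DN the agent injected into the header.
        System.out.println(userDn(headers));
    }
}
```

Whatever name you configure in the Agent Response is the header name to read on the target server; only PHP's CGI-style superglobals add the HTTP_ prefix.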
Raj Balasubramanian, ISSL
I get asked a lot about how to implement awareness on a non-Domino/non-WebSphere platform. The key is to ensure that the target platform supports a scripting metaphor for reading cookie data. To illustrate this point, I have a PHP page hosted on a PHP server that gets the LTPA cookie and builds awareness into the page using STLinks (just like in Domino or in a JSP). The only wrinkle was that I had to set a custom cookie to pass the user DN (distinguished name) unencrypted, so that the PHP page can get the user DN from that cookie and the token from the LTPA cookie to log in to Sametime. Of course, the PHP server was accessed using its fully qualified hostname and was in the same Internet domain as the Sametime/Portal server.
I set the user DN variable by reading my custom cookie (in PHP, $_COOKIE['UserDN']).
Note that UserDN is a custom cookie that was set by my WAS server. I then get the LTPA cookie; since PHP URL-decodes cookie values, the '+' characters in the Base64 token come through as spaces and have to be restored:
$r = ereg_replace("[ ]","+",trim($_COOKIE['LtpaToken']));
Everything else is set up the standard STLinks way (follow the STLinks documentation if you need further clarification). You can peruse the actual PHP file here. In some browsers you might need to use the "view source" option to actually see the code, if the file doesn't download by default.
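For comparison, a JSP or servlet doing the same job would read the two cookies and perform the same cleanup the ereg_replace line does: restoring the '+' characters that URL-decoding turned into spaces. The sketch below is my reading of that flow, not STLinks API code; the cookie names UserDN and LtpaToken come from this post, the values are made up, and a Map stands in for iterating request.getCookies():

```java
import java.util.HashMap;
import java.util.Map;

public class AwarenessCookies {
    // Restore the '+' characters in a Base64 LTPA token that URL-decoding
    // turned into spaces (the same job as the PHP ereg_replace line).
    public static String fixLtpaToken(String raw) {
        if (raw == null) return null;
        return raw.trim().replace(' ', '+');
    }

    // Look up a named cookie value; a Map stands in for iterating
    // request.getCookies() so the sketch is self-contained.
    public static String cookieValue(Map<String, String> cookies, String name) {
        return cookies.get(name);
    }

    public static void main(String[] args) {
        Map<String, String> cookies = new HashMap<>();
        // Cookie names from the post; the values are illustrative only.
        cookies.put("UserDN", "uid=someuser,ou=people,o=example");
        cookies.put("LtpaToken", " AbC dEf9 ");
        System.out.println(cookieValue(cookies, "UserDN"));
        System.out.println(fixLtpaToken(cookies.get("LtpaToken")));
    }
}
```

The DN and the repaired token are then what you would hand to the STLinks login call, per its documentation.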
I tested this with PHP running on a Suse Linux box accessing it from my IE browser on Windows XP.
The same rules of applet restrictions apply if you are using Mozilla/Firefox. Please follow the STLinks documentation to set up the necessary files to make it work with Mozilla.
Full code of the PHP page
I have another sample using IIS/ASP.NET that I will post in a few.
In the meantime, bring "light" to a name on a PHP page :)
Raj Balasubramanian, Software Services for Lotus
TedStanton 0600014754 338 Visits