Lately, I have been getting some questions on scalability of EGL Rich UI, and especially how well the architecture supports many asynchronous services being called at the same time.
So, I decided to write a sample with a service looking like this:
Then I call that service however many times the end user likes:
The end result for 30 tries looks like this:
It finishes the whole calling sequence almost instantly, within about 0.5 seconds. It is extremely hard to see the browser draw the results; it actually feels like the drawing finishes before I have lifted my index finger off the mouse button.
To increase the stakes, I tried a more interesting number of service calls. So I entered 1,000 calls, fully expecting either my browser or Tomcat to give up and die:
However, 1,000 calls are still comfortably handled in around 2 seconds. Note that both the browser and Tomcat run on the same Thinkpad T60. Actual network delays will produce different results when repeating this on a service deployed on a production server.
As you can see, the answers come back in a "random" order. This is to be expected: the "A" in "Ajax" stands for "asynchronous", and that is how the messages are sent out, handled, and received again by the browser.
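The original EGL snippets aren't reproduced here, but the out-of-order arrival is easy to sketch in plain JavaScript (the names below are illustrative, not the EGL API): every call is issued immediately, and responses are recorded in whatever order they happen to complete.

```javascript
// Simulate an asynchronous echo service; delayMs stands in for network latency.
function callEchoService(i, delayMs) {
  return new Promise(resolve => setTimeout(() => resolve(i), delayMs));
}

// Fire n calls at once and record the order in which responses arrive.
// Later calls get shorter delays here, so they deliberately complete first.
function fireCalls(n) {
  const arrivalOrder = [];
  const pending = [];
  for (let i = 1; i <= n; i++) {
    pending.push(callEchoService(i, (n - i) * 10).then(v => arrivalOrder.push(v)));
  }
  return Promise.all(pending).then(() => arrivalOrder);
}
```

With real network latency the order is unpredictable; the point is only that issue order and arrival order are independent.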
Finally, I tested the above little sample with 10,000 concurrent service calls, to see when the ship would sink.
This final experiment took a while longer, and Chrome kept asking me if I wanted to kill the runaway script while it was sending out those 10,000 Ajax calls.
Eventually the whole thing finished. Making 10,000 calls this way took about 1 minute to send out all the requests, and then about 2 minutes to receive all the incoming results. Clearly, my algorithm exposed some non-linear behavior.
Therefore, I rewrote my sample just a little bit, to send the same messages in 10 separate batches and not to print out every single message that came back. This removed an obvious bottleneck in the browser, and likely gave the browser's garbage collector the chance to catch up.
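The batching idea can be sketched as follows (a JavaScript sketch, not the actual EGL sample; runInBatches and makeCall are hypothetical names): issue one batch, wait for it to drain, then issue the next, so the browser gets a chance to render and collect garbage between batches.

```javascript
// Process `total` calls in sequential batches; within a batch, calls run in parallel.
async function runInBatches(total, batchSize, makeCall) {
  const results = [];
  for (let start = 0; start < total; start += batchSize) {
    const batch = [];
    for (let i = start; i < Math.min(start + batchSize, total); i++) {
      batch.push(makeCall(i));
    }
    // Wait for the whole batch before issuing the next one, giving the
    // browser a breather between bursts of requests.
    results.push(...await Promise.all(batch));
  }
  return results;
}
```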
With the 10 batches, it took only 24 seconds on average to make 10,000 calls. Looking at the performance chart, I could see that a lot more was happening in parallel, as both CPUs in my dual-core machine were now running at 100%.
For the tests with 100,000 and 1,000,000 calls I used 100 and 1,000 batches respectively.
Here are the end results in a chart:
As you can see above, you can expect to be able to make a million service calls in 2,400 seconds. That comes down to over 400 calls per second between a dedicated server and a browser, or an average of 2.4ms per call, which I hope you agree is a really good response time for a web service.
Transaction rates will go down when the distance between the browser and the server is larger. They will go up when edge caching and real production hardware are used instead of a laptop. In other words, your mileage will vary.
In "real" world scenarios, my experience is that web service calls take anywhere between 40ms and 2 seconds to make the round trip.
Hopefully this experiment and its outcomes give you some confidence in the scalability of EGL Rich UI. Being able to make 400 service calls per second is more than enough for any application I personally can imagine.
From archive: December 2009
One of the challenges of creating software for a global audience is language localization. The act of translating text from one language to another requires great skill. Literal translations of the type provided by a service such as Yahoo!'s Babel Fish are insufficient. There is only one way to get linguistically and culturally accurate translations that also take the context of a software application into account: human translators. Large software development firms either employ their own translators or outsource translation to a firm that specializes in software localization. Either way, the cost of localization is typically high.

I have a long history of working with firms that market software to an international audience, and I enjoy designing and developing software for the global marketplace. When I migrated my Java-based Solitaire card game to EGL Rich UI, I already knew that the capabilities of the programming language and tooling were great, but the obvious sticking point for me, working on this as a solo developer, was the prohibitively high cost of obtaining translations.
My initial goal was to make the game available in English, Spanish, and Russian. I am a native English speaker with some knowledge of Russian, and I had ready access to people who could validate a Spanish translation. As I contemplated my goal, it was clear that I had very little text requiring translation: just button text, the options panel, and a few messages. In a flash of inspiration, it occurred to me that one potential approach to translation would be to use Amazon's Mechanical Turk service.

"Mechanical Turk" refers to a device created in the 18th century to play chess against human opponents. Unbeknownst to the human players, the Mechanical Turk wasn't a machine at all: inside the device was a human chess master manipulating the machine. Amazon's Mechanical Turk service employs people to perform tasks that computers don't do well or can't do at all. Anyone can sign up and submit tasks to be performed. The submitter of a task provides a definition of the task, sets a price, and indicates the number of unique submissions that he or she will accept. Workers from all over the world can browse outstanding tasks and do the work required. The submitter has final say over whether the work performed is accepted and payment made. The submitter pre-pays Amazon for work to be performed and, when the submitter approves a worker's task, Amazon pays the worker. (Amazon assesses a small fee for providing the service infrastructure.) The submitter and workers are never directly known to one another.
I set aside $30 (USD) for my Amazon Mechanical Turk translation experiment. I submitted requests for translation tasks offering $2.00 per translation, with a maximum of five Russian and five Spanish translations. I made it clear in my task description that I only wanted professional, context-relevant translations and that payment would not be made to anyone who had obviously used something like Babel Fish. To my surprise, within just a couple of hours I had ten translations waiting for me. I assessed these using, in the case of Russian, my own knowledge of the language and, in the case of Spanish, the help of a Spanish-speaking colleague. Translation is never a straightforward or obvious task: ask five people to translate something (even five professional translators) and you will likely get five largely different results, all of them technically accurate. The ten translations I received looked like good work on the part of the translators, and I paid all of them. I had enough language knowledge and help to choose the translations that seemed to be the best fit for my game, and I was happy with the initial release of the game in English, Spanish, and Russian.
For my next experiment (and given that I still had $10 to spend in my Amazon account), I expanded my language coverage to include French, German, and Portuguese. I realized from my first experience with the Amazon service that the translations I received were pretty raw; the translators did not provide much beyond the translated text. In order to better understand a translator's linguistic choices, I revised my task request and asked the translators to provide notes on their process along with the translated text. I was also curious to see whether I could get good results at an even lower price point, so for this next round of work I offered only $0.50 per translation. I submitted requests for five French, five German, and five Portuguese translations and, again, within a couple of hours all available requests had been completed. The additional information I requested made it possible to determine which of the translators had put real effort into their work (as well as applied a skilled translator's techniques).
The price was right, but the downside of using the Amazon Mechanical Turk was the anonymity. At times I wanted to have a two-way conversation about a particular translation, and in some cases I wanted to reach out and make direct contact with a translator so that I might hire or recommend that person for future translation work. If this trade-off is acceptable, though, then my experience with low-cost translations via the Amazon Mechanical Turk leads me to believe that even this specialized and historically expensive aspect of software globalization is accessible to the smallest software companies and independent developers.
Please join me on a trip through time.
Thirty years ago
Back in the late eighties I saw a presentation by David Ungar on animation algorithms implemented in the Self programming language interface. I was blown away by the innovative ideas implemented by him and his research team. Here are some of the thought-provoking animations implemented in Self:
Twenty years ago
Michael Jordan stars with the Looney Tunes cast in the movie "Space Jam". Computer animation in movies has come a long way since then. Especially in the last few years, the distinction between reality and animation has become increasingly blurred. A similar trend is happening in the gaming industry. Think back to how the characters moved in classics such as Space Invaders, Pac-Man, and Tetris. Each moved in a linear path, and the animations can only be categorized, using my 12-year-old son's favorite phrase, as "so obviously fake!".
Ten years ago
In one of the most iconic scenes in "The Matrix", Keanu Reeves learns how to dodge bullets. He moves at superhuman speed and twists and turns to avoid the bullets coming at him at roughly the speed of sound.
Luckily, the directors of the movie, the Wachowski brothers, had enough empathy for the poor audience to show the scene in slow motion. Here is a still frame:
Notice the blurry motion suggesting movement. What you cannot really see in the still, of course, is how the computer animation wizards used real-life laws of physics to "correctly" model the actual movements. Newton already taught us the lesson of inertia: things don't move in a linear fashion; they take time to get going, then they move, and then they slow down again, following some mathematical function.
Three years ago
The iPhone is launched. It immediately becomes known for bringing many user interface innovations to the general market. One of them is the "swipe". To switch between images, you move your finger over the touch screen, and the iPhone takes over, finishing your gesture with a nicely controlled animation of the old and new images scrolling across the screen. Few people notice the math that goes on behind that simple scrolling, yet it is crucial to making the animation believable. With a good animation, the user actually feels like they are dragging a physical object. And for those who still feel like they need to practice, there's an app for that too:
iPhone app: Finger Sprint, only $0.99.
Animation comes to EGL Rich UI
By abstracting the JSTween library, it is now quite easy to animate browser objects believably, moving or resizing them with an almost-realistic animation.
The animation in the ad at the top right is using the following mathematical function to reveal and hide the underlying links:
Don't ask me to go into the details here. EGL is designed to hide complexities like those mathematical functions, and we wrap them very nicely into an API that can be used like this:
This defines an animator object with a target widget and a duration. The widget can be moved to a certain location simply by calling the moveTo operation. The animator will then apply the animation function that was specified for it, with strongEaseIn being the default, and move the widget there in 2 seconds.
Under the covers, the jstween library splits up each animation into steps and runs the individual animation fragments as separate jobs. Basically, it repeats move-render-wait until it is done. These animations can be interleaved, and as a result animations can be combined.
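In many tweening libraries, strongEaseIn is a quintic curve; assuming that shape (an assumption on my part, not taken from the jstween source), the per-frame position computation looks roughly like this JavaScript sketch:

```javascript
// Quintic "strongEaseIn": starts slowly, then accelerates. t runs from 0 to 1.
// The quintic shape is an assumption; check the jstween source for the exact curve.
function strongEaseIn(t) {
  return t * t * t * t * t;
}

// Position of a widget at a given elapsed time, moving from start to end
// over `duration` milliseconds along the easing curve.
function positionAt(start, end, duration, elapsed) {
  const t = Math.min(elapsed / duration, 1);
  return start + (end - start) * strongEaseIn(t);
}
```

A frame loop would call positionAt every few milliseconds and re-render the widget at the returned coordinate, which is exactly the move-render-wait cycle.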
The ad shown above is included in the project attached to this blog entry. In addition, a second example is included that shows all the different animation functions:
When the user clicks anywhere in the grey box, the yellow box is moved to that position, while being resized at the same time. You should try out the real sample here.
Research has shown that during the development of every single successful software project, pizza was eaten by one or more of the developers. Therefore, we conclude that pizza is essential to the success of your projects.
Now.... where to find those pizzas? Of course, the boring way would be to use Google and find those pizzas on a map in a few clicks:
Real computer scientists, however, don't surrender so easily to ready-to-use tools. No. They write a script. Especially in EGL Rich UI that is easy to do. First, you make a call to find the pizzas:
The search string and the zipcode are hard-coded, but everyone knows how to parameterize them into a function and do some string manipulation on the URL string. Right?
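That parameterization comes down to string concatenation plus escaping. Here is a hedged JavaScript sketch; the base URL and parameter names are made up for illustration and are not the actual endpoint used by the sample:

```javascript
// Build a search URL from a base plus user-supplied query and zipcode.
// encodeURIComponent escapes spaces and other special characters.
function searchUrl(base, query, zipcode) {
  return base + "?q=" + encodeURIComponent(query) + "&near=" + encodeURIComponent(zipcode);
}
```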
When the service call comes back with an answer, something special happens. Normally, service calls in EGL return either SOAP or JSON, depending on what type of service is being used. However, when making a REST GET service call, the data that is sent back is left entirely up to the implementer of the service. In this particular case, the result is pure HTML.
To handle the HTML we will inspect it inside this function:
First we create an instance of a Div widget, one of the basic EGL widgets. Next, we find all the spans inside the entire HTML file. For each span, we check whether it is a title or an address and print it out. The result:
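The span-walking step can be approximated in JavaScript. The sketch below uses a naive regex over the HTML string instead of walking a real DOM, and the class names "title" and "address" are assumptions about the markup; a browser widget would inspect actual DOM nodes instead:

```javascript
// Naively pull out <span class="title"> and <span class="address"> text
// from an HTML string. Regexes are fragile for real HTML; this is only a sketch.
function extractPizzerias(html) {
  const results = [];
  const spanPattern = /<span class="(title|address)">([^<]*)<\/span>/g;
  let match;
  while ((match = spanPattern.exec(html)) !== null) {
    results.push({ kind: match[1], text: match[2] });
  }
  return results;
}
```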
This is just a very simple example of scraping a page. Always make sure you have the appropriate rights to scrape a given site. The above example of scraping Google Maps is for educational purposes only; this type of scraping probably violates Google's terms of service.
Have fun scraping!
Browser applications are getting richer and richer, downloading more and more data, and allowing users to interact with that data from within the browser. When a given application is accessed repeatedly, it ends up downloading the same data over and over again. One could say such applications suffer from short-term memory loss.
A prototypical example of such an application is Gmail. For years, Gmail was a pure web application: your inbox, sent messages, contacts, and all other data were downloaded each time you accessed it. Recently, Gmail started using browser-side storage. Today, if you search your inbox, for instance, all searching is done in the browser on data that is stored in a local database.
A major argument for using browser-side storage is performance and responsiveness. By doing more in the browser, your application will start up much faster. Furthermore, you will make fewer service calls, putting less stress on the server, and reducing the amount of session state it needs to keep.
Browser-side storage can be done using various techniques (Flash, HTML5, Google Gears, etc.). In this blog entry I will show you how to easily store, update, and retrieve EGL records from browser-side storage without ever needing to make a service call to get the data.
To ease the use of the solution, I abstract out the two storage providers that I support: HTML5 local storage and Google Gears. To you, the functionality manifests itself as a library that lets you store a record with a single function call.
Assume we have a record type Employee:
A local storage database table for employees is created as follows:
I give the database table a name, the type of record I want to store in it, and which field denotes the key for the record.
With that in place, I can make simple operations such as:
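The EGL library itself isn't shown here, but the shape of such a table can be sketched in JavaScript over any Web Storage style backend (localStorage in the browser). RecordTable and its method names are illustrative, not the EGL API:

```javascript
// Minimal record table over a Web Storage-style backend (localStorage in a browser).
// Records are serialized to JSON under a "tableName:key" storage key.
class RecordTable {
  constructor(storage, tableName, keyField) {
    this.storage = storage;       // any object with getItem/setItem/removeItem
    this.tableName = tableName;   // namespace for this table's keys
    this.keyField = keyField;     // which record field is the primary key
  }
  storageKey(key) {
    return this.tableName + ":" + key;
  }
  store(record) {
    this.storage.setItem(this.storageKey(record[this.keyField]), JSON.stringify(record));
  }
  get(key) {
    const raw = this.storage.getItem(this.storageKey(key));
    return raw === null ? null : JSON.parse(raw);
  }
  remove(key) {
    this.storage.removeItem(this.storageKey(key));
  }
}
```

In the real library the operations are asynchronous with callbacks; this synchronous sketch only shows how records map onto key/value storage.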
Each of the above operations has callbacks to handle the results asynchronously. This is a requirement of the underlying browser-side storage solutions.
The attached sample puts it all together as follows:
When the application starts up, we load all employees, and add them to the grid we declared inside our UI:
That was pretty simple, wasn't it? To delete the currently selected employee, we ask the grid selector which row was selected and then delete that employee from the database table. Finally, we refresh the UI.
It's equally straightforward to delete a known employee, such as the employee with id == 2:
To create a new employee, we use a special utility function on the database table to compute a unique key for us. We take that unique key, add some more fields, and store the record:
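One common way such a unique-key utility works is to take the highest existing key and add one. Whether the EGL utility does exactly this is an assumption, but the idea sketches like so in JavaScript:

```javascript
// Compute a fresh key as (largest existing key) + 1, or 1 when the table is empty.
// This mirrors one common unique-key strategy; the real EGL utility may differ.
function nextKey(records, keyField) {
  let max = 0;
  for (const record of records) {
    if (record[keyField] > max) {
      max = record[keyField];
    }
  }
  return max + 1;
}
```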
Local browser storage can be used for many things. One example is storing user IDs and passwords so they don't expire along with a cookie store. Another example is caching certain data to avoid downloading thousands of records over and over.
A more advanced example is supporting full offline browsing by using HTML5 caching in combination with local browser storage. One such example is GVButler, an EGL Rich UI web app that starts up on an iPhone 3GS within one second.
Attachment: com.ibm.browser.storage-2009-12-15.zip (80K)