In a previous post on JCR I mentioned that JRS had consciously avoided the development of a client-side Java API. In fact there is no requirement for application clients to be developed in Java at all. One of the concerns we saw with previous Rational products was the complexity of the API and its proprietary nature, which made interoperability, integration and extension an expensive and complex proposition.
For Java client applications we really don't expect the use of the raw JDK alone; we have tested with both Apache HttpClient and Abdera (for feed/entry creation and parsing). These seem to be the libraries the application teams want to, and probably should, use.
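Since the services speak plain HTTP and Atom, clients can be written in any language with an XML library. Here is a minimal sketch in Python using only the standard library; the entry shape shown is just generic Atom, not the actual JRS schema:

```python
# Build a minimal Atom entry with the standard library; any HTTP client
# (httplib, Apache HttpClient, curl...) could then POST the result.
import xml.etree.ElementTree as ET

ATOM_NS = 'http://www.w3.org/2005/Atom'

def make_entry(title, content):
    # Create <entry> with a <title> and a plain-text <content> child.
    entry = ET.Element('{%s}entry' % ATOM_NS)
    t = ET.SubElement(entry, '{%s}title' % ATOM_NS)
    t.text = title
    c = ET.SubElement(entry, '{%s}content' % ATOM_NS)
    c.set('type', 'text')
    c.text = content
    return ET.tostring(entry)

xml_bytes = make_entry('A work item', 'Created without any Java at all.')
```

The point is simply that nothing here requires a vendor SDK; feed parsing on the way back is equally library-agnostic.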
So at least for us in JRS, if not for the rest of IBM, "J" stands for the Jazz Project and not Java.
I want to just note briefly that my humble thoughts here seem to be, if not causing a stir, causing a level of reference from friends at Microsoft. It seems that there is a link from Harry Pierson's column on the MSDN architecture site as well as a more detailed commentary from Stuart Kent on his weblog. Now, I do hope to come back and make some comments on Stuart's posting, specifically addressing some of his points. However, I have just received my copy of Jack Greenfield and Keith Short's book on Software Factories and I intend to give it a good read first.
I have worked with both Jack and Keith, as well as Steve Cook (another contributor to the work) and a little with Stuart. They are all excellent leaders in this area and I want to read and understand their position a little better to make sure I am not mischaracterizing their use of the term software factory; Stuart suggested that:
The characterization of software factories suggested by Simon is at best an over-simplification of the vision presented in this book.
I hope that I didn't do that, but I will take the time to read the book and then post my further comments and, if necessary, clarifications on my earlier thoughts.
Via /. I read this great article on ars technica titled MIT startup raises multicore bar with new 64-core CPU. More interesting is this quote from the article:
"Tell me if this sounds familiar: a grid of processor "tiles" arranged in a mesh network, where each tile houses a general purpose processor, cache, and a non-blocking router that the tile uses to communicate with the other tiles on the chip."
Makes that Intel Core Duo in my ThinkPad seem pretty tame now, doesn't it? But seriously, the question has already been raised on Slashdot - how do we program this, and efficiently? The company is Tilera, a small player, but maybe the first of many?
I recently attended a number of meetings at IBM Research in Hawthorne (two days of back-to-back meetings with these guys leaves your head in a spin!). There we talked quite a bit about the notion of Service Oriented Architecture: is it new, why is it important now, and how do the current web services specifications help/hinder its adoption and application?
I'd like to refer back to an earlier post, "Modeling Services - A Chicken and Egg Situation", in which I discussed a more formal presentation I gave on SOA. I introduced another term in that presentation that I think has merit, the notion of Service Oriented Thinking (don't use the acronym, it really doesn't work well). The notion of Service Oriented Thinking was really to highlight the fact that much of the earlier literature on Object Oriented Programming stressed the mental model that objects mapped well to the real world and were therefore inherently easier to grasp, model and develop. Well, whatever you think about this, there has certainly been a broad adoption of Object Oriented technology, and along with it has come a set of thinking that has codified both best practice and language-specific capabilities into a sort of tacit Object Oriented Thinking. This is of course partly because colleges now include such technology in courses, but also because apprentice programmers are immersed, for the most part, in the craft of object construction and they pick up this thinking as they mature.
The particular concern stemmed from the fact that these research folks are meeting more and more customers who believe they are implementing SOA by putting a few web services into a system - a monolithic system. It is this current focus on implementing services at all cost that is confusing the "S" with the "A" in SOA. We are even going one dangerous step further in confusing Web Services with SOA - hence the title of this post. We really do have to look to the Architecture aspects and not to the use of a transient set of standards from the W3C. To get to Object Oriented Thinking we had to go beyond the concepts of any particular programming language; we had to abstract the common concepts of the implementation technology and even introduce concepts that may not map well to a particular language/technology but that enable this architectural-level thinking. So, how do we get to focus on the "A"? Well, one way would be to look around and see if anyone actually does have experience we can use.
It is clear that in the mainstream there is no Service Oriented Thinking yet in even our senior developers and architects. Now, this is not true across the board, various industries can point to Service Oriented Architectures that have been in use for decades - in particular in the telecoms industry. In looking at the infrastructure underlying the current wired and wireless telecoms network we see a complex set of switches which provide not just the routing of calls but security, billing, roaming and other capabilities. Switches are expensive, replacing them takes time and any provider network usually has switches not only of different versions but often from different suppliers. Standards in this game are extremely important and specifically the way services provided by switches are described and information flows through the network has to be such that these complex systems can be constructed in a reasonable fashion.
What we can do is learn from these examples, look at the architectural decisions and thinking that these folks have ingrained, and understand how this can be generalized into a practical Service Oriented Thinking. In terms of the details we must abstract away the fact that today's focus on protocol transport is around SOAP, realizing that the use of Java/RMI, C#/.NET remoting or ASN.1 (in telecom networks) does not invalidate a system's classification as an SOA. We can discuss the use of WSDL or some other IDL in the role of describing service interfaces, but what matters is that interfaces have to be strictly defined and support immutable definition.
Finally, we must realize that getting to Service Oriented Thinking will naturally take time. It is human nature to explore and take different routes to a solution, and routes that provide little value will naturally stop being used; such thinking is evolutionary, and no amount of pontificating from industry experts, writers (or bloggers) can force a definition of SOA on practitioners.
So, I shall stop (for now...).
I have a copy of the new book Beautiful Code: Leading Programmers Explain How They Think (you can also check out the O'Reilly Beautiful Code Home). My concern is that Beauty, depending on how you define it in this context, does not seem to me to be the way to measure or judge code. Now, some people seem to define beauty in terms of the readability of code, and that is important for those that follow in your footsteps. Some define it in terms of the simplicity and compactness of an algorithm and implementation, and again that seems valuable in that a smaller implementation tends to be more understandable (fills fewer pages in the brain). But those of us who become enamoured with the elegance, symmetry or "beauty" of code should remember the words of Donald Norman "".
My personal preference is to see well-laid-out code, readability, simplicity and openness as great tools in the service of safe code. I would be much happier to judge the value of my code on what the test team think of it rather than the adulation of other programmers (even though that is nice). Code that doesn't come back to haunt you - that's beautiful. So what else can we include in the list of tools for developing safe code? Well, Bryan Cantrill discusses the book here, but more interestingly here, where he argues that programming language choice plays a part in beautiful (and, by my extension, safe) code. This is an area which tends to bring about some heated, even passionate, discussion, but I believe that language choice really does make a difference both in the ease with which concise and clear code can be written and in the ability to develop safe code.
To this end, one area where I think many programmers struggle is the development of parallel code; and with the widespread availability of multi-core machines (it's hard to buy a PC these days which isn't a Duo) it's a skill more of us will need as our jobs come to include the performance of applications. This is certainly part of the discussion in a new book on the language Erlang - a language which includes simple, compact and elegant parallel primitives. I spent some time working in Ada, which has a good set of parallel abstractions, and while Ada has many problems it is interesting that few of the popular languages today provide much in the way of parallel primitives beyond Thread classes and synchronized keywords. I'm not sure that Erlang is going to be any more successful than Ada outside of its current niche, but it is now a fully open-sourced project and does seem to be generating quite a bit of buzz. The nice thing about Erlang is that it combines a good functional language, single assignment and a high-level set of parallel primitives in an elegant (dare I say beautiful?) manner. Whether Erlang takes off or not, I do think that we'll have to work out a way to keep our code beautiful when it is split into numerous components running in parallel across different cores, processors, blades or machines.
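To make the contrast with Thread-and-synchronized programming concrete, here is a small Python sketch of the Erlang-style share-nothing approach: workers that own their data and communicate only by passing messages over queues. The message protocol here is of course invented for illustration:

```python
# Erlang-style message passing sketched with Python threads and queues:
# no shared mutable state, each "process" owns its data and communicates
# only via (tag, payload) messages.
import threading
from queue import Queue

def worker(inbox, outbox):
    # Receive messages until told to stop.
    while True:
        tag, payload = inbox.get()
        if tag == 'stop':
            break
        elif tag == 'square':
            outbox.put(('result', payload * payload))

inbox, outbox = Queue(), Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()

inbox.put(('square', 7))
tag, value = outbox.get()   # ('result', 49)
inbox.put(('stop', None))
t.join()
```

In Erlang the spawn/send/receive primitives make this pattern part of the language itself, with far lighter-weight processes; the queue-based version above is just a way to see the shape of it without leaving Python.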
Just for kicks, here's a nice piece from Jonathan Edwards in his post Beautiful Code.
Another lesson I have learned is to distrust beauty. It seems that infatuation with a design inevitably leads to heartbreak, as overlooked ugly realities intrude. Love is blind, but computers aren’t. A long term relationship – maintaining a system for years – teaches one to appreciate more domestic virtues, such as straightforwardness and conventionality. Beauty is an idealistic fantasy: what really matters is the quality of the never ending conversation between programmer and code, as each learns from and adapts to the other. Beauty is not a sufficient basis for a happy marriage.
I wouldn't normally use this blog for a commercial, and I remember being told by an IBM Distinguished Engineer that he rarely passed on book recommendations because they can be very personal and some people may not like his choice (didn't stop him giving me his recommendation on that occasion). But having just finished reading Dreaming in Code by Scott Rosenberg (subtitled Two Dozen Programmers, Three Years, 4,732 Bugs, And One Quest For Transcendent Software) I have to say it is an excellent read in part because of the care and detail that obviously went into the research.
The book follows the development of Chandler which I had been following as an interesting and ambitious Python application. The book not only looks at the particular issues faced by the Chandler team but also how this relates to perennial problems faced in software development. It was particularly interesting for me as the reason this blog has been a little light in recent days is due to the start up of a project here in IBM Rational which I will hopefully be able to talk about in the next few weeks - at least in general. Watching real time slowly stretch out into software time has been frustrating but inevitable I suppose.
P.S. the recommendation given to me was for Lean Software Development: An Agile Toolkit by Mary and Tom Poppendieck.
Django is cool - and to be really clear if you think I mean Django Reinhardt then yes I agree he is very cool, or perhaps you think I mean Pearl Django and yes they are pretty darn cool too, but if you thought instantly of the Python "The Web framework for perfectionists with deadlines" then we're on the same page (though that means we probably both need a life).
As part of the team here we tend to develop prototypes to prove out certain technical risks, and right now my favorite platform for these throw-away projects has become Django (although for more control over low-level details Twisted is great, but a bit more work). For web applications Django has so much in the box that it's very easy and remarkably quick to get going. However, what we were trying to do was a little different, so one of the things we had to do was add a few pieces to the Django framework itself - which turned out to be a lot less work than we thought. Specifically we needed two new capabilities not included in the current Django (0.96):
- A Database field to store UUID/GUID values and also support the 'auto' property so that such a field can be used as an auto-generated primary key value.
- A Database field to store regular expressions and while these are just strings we would like to have a form validator that ensures that the text you enter is a valid regular expression.
The first was easy: we simply subclassed the standard Django CharField model field class, fixed its length at 36 characters and used the uuid module to generate a value if the 'auto' property is set. Note that the uuid module is included in Python 2.5 but not 2.4 or earlier, so you'll need to download it from Ka-Ping Yee. We also ensured that if 'auto' is set then any such property is not editable in the Django admin UI - this logic was taken from the current implementation of auto properties in Django itself. The code below shows the content of a module used in a number of places in the project; specifically, the class UuidField is used by our model classes.
The second was also relatively easy, though it took a little longer to find some code to crib from; the result is also shown below in the isValidRegularExpression function. The approach is pretty simple (simplistic?) and involves passing the field value through the regular expression compile function; if that throws an exception we assume the value is not a legal expression. This seems to work pretty well, certainly well enough for our purposes anyway.
import uuid

from django.db.models.fields import CharField

class UuidField(CharField):
    """ A field which stores a UUID value, this may also have the Boolean
    attribute 'auto' which will set the value on initial save to a new
    UUID value (calculated using the UUID1 method). Note that while all
    UUIDs are expected to be unique we enforce this with a DB constraint.
    """

    def __init__(self, verbose_name=None, name=None, auto=False, **kwargs):
        self.auto = auto
        # Set this as a fixed value, we store UUIDs in text.
        kwargs['maxlength'] = 36
        if auto:
            # Do not let the user edit UUIDs if they are auto-assigned.
            kwargs['editable'] = False
            kwargs['blank'] = True
        CharField.__init__(self, verbose_name, name, **kwargs)

    def get_internal_type(self):
        """ see CharField.get_internal_type
        Need to override this, or the type mapping for table creation fails.
        """
        return 'CharField'

    def pre_save(self, model_instance, add):
        """ see CharField.pre_save
        This is used to ensure that we auto-set values if required.
        """
        value = super(UuidField, self).pre_save(model_instance, add)
        if (not value) and self.auto:
            # Assign a new value for this attribute if required.
            value = str(uuid.uuid1())
            setattr(model_instance, self.attname, value)
        return value
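As an aside, the fixed maxlength of 36 is just the canonical string form of a UUID: 32 hex digits plus 4 hyphens. A quick standard-library check:

```python
# The canonical string form of a UUID is 32 hex digits plus 4 hyphens,
# hence the fixed field length of 36.
import uuid

value = str(uuid.uuid1())
length = len(value)
hyphens = value.count('-')
```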
import re

from django.core import validators

def isValidRegularExpression(field_data, all_data):
    """ A standard validator function that ensures that the user enters a
    valid regular expression in a form field.
    """
    try:
        re.compile(field_data)
    except re.error:
        raise validators.ValidationError, 'Error compiling regular expression %s' % field_data

isValidRegularExpression.always_test = True
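The compile-and-catch trick stands on its own outside Django too; a minimal, framework-free version of the same check looks like this:

```python
# Validate a regular expression by simply trying to compile it;
# re.error signals an invalid pattern.
import re

def is_valid_regex(pattern):
    try:
        re.compile(pattern)
        return True
    except re.error:
        return False

ok = is_valid_regex(r'\d{4}-\d{2}')   # a well-formed pattern
bad = is_valid_regex('(unclosed')     # unbalanced parenthesis
```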
There are a few more Django tweaks as well as some tips/tricks we found that hopefully I can post over the next week or so.
I see that Grady's blog has been mentioned in the Mac press after postings in which he discussed his use of Apple hardware at home. So, there I sat at my G4 Tower, developing with Xcode to deploy over an Airport network to my eMac... But seriously (IBM ThinkPad T40 - the only reason to buy an Intel processor!) I am, like many that make their living developing software, a little jaded when I look at software developed by others - "well I'm sure I wouldn't have done it like that". But every now and then something comes along with the wow factor, and Delicious Library from Delicious Monster in Seattle is one of those. Here is a piece of software with two individual wow factors (apart from being developed natively for the Mac and looking so nice): firstly the innovative bar code scanner and secondly the use of the Amazon API.
First - ever looked at the price of a bar code scanner? Then looked at the drivers you need, the integration...? Well, what these guys have done is to use the Apple iSight camera - the little web-cam-cum-video-conference camera - and they wrote the cool code that turns its still image capability into a bar code recognizer. Very, very neat.
Secondly - when you scan a book (or, for those in the stone age, type in the ISBN), wait a second and you get all the book's details, including the cover image, from Amazon. Basically you now have the ability to catalog your book collection (yeah, that's either geeky or anal depending on your point of view) using web services from your Mac desktop. How cool is that!
Thirdly - (yes, I know I said there were two cool things) it actually demonstrates the use of a great service in an innovative way. Let's face it, Amazon opened up its API some time ago and there are lots of people using it ... to build a better shopping tool. Here is a small team of guys that have really turned that source of information into a real service; they provided additional value through integration of services. Is this the first of a new breed of desktop applications really leveraging services already provided, or being provided, by key vendors such as Amazon?
I hope so - and look forward to seeing them on the Mac :-)
I recently sat through an interesting presentation, given by one of our consultants, on the application of SOA at customers. In general it was a good solid presentation, and after all as a practitioner solving real problems the speaker had instant credibility in my eyes. One moment did make me chuckle at the time, but more importantly stirred me to some deeper thought over the last few days. The speaker was discussing the issue of defining operations for services and the oft-stated desire to ensure that developers do not build CRUD services. I'm sure most SOA-aware architects are familiar with the principle that while we are trying to create more granular services we need to analyze the data concerns and needs for operations, certainly update transactions need to be carefully considered.
Specifically, let us consider the usual "Customer" example: we do not want to see a single generic update operation, for a number of reasons. Firstly, the update to the database has to work out what has changed, and if the underlying representation is across a set of tables (which it has to be) then the update is a complex transaction. Secondly, there are business rules that may be applied to certain updates; for example, when we update a customer address within an insurance company we may find that the new address will invalidate a customer's policy, or at least change their premiums or coverage. So, we provide a set of more specific update transactions.
Back to the original speaker: his comment was that these are still CRUD operations and that they should be changed to be Business Operations. The first alternative simply changes the word update, which doesn't seem to change the semantics in any way. However, interestingly, the second alternative is substantially different: its language is not expressing an action to take against the customer but denoting a change in the state of the real-world entity represented by the software Customer. As such we are dealing with something much more like a business event, and while this was not the intent of the original speaker I think it is a very important observation. It allows for a much more flexible approach to business service definition: a service that manages an entity in this way can express a set of direct operations that change the underlying data but also a set of business events that it responds to. These business events are a great way to also look at integration scenarios; existing Enterprise Application Integration often focuses on Event-Driven Architectures and the distribution of events across and between services using mechanisms such as pub/sub messaging or the Enterprise Service Bus.
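A rough sketch of the distinction (all class and method names here are invented for illustration): compare a service that just pushes new data at the record with one that responds to a business event and applies the associated rules:

```python
# Contrast a raw data update with handling a business event that
# carries domain meaning and triggers business rules.
class CustomerService:
    def __init__(self):
        self.address = None
        self.premium_reviewed = False

    # CRUD-style: just overwrites the attribute, no domain meaning.
    def update_address(self, new_address):
        self.address = new_address

    # Event-style: denotes a change of state in the real-world entity,
    # to which business rules are attached.
    def on_customer_moved(self, new_address):
        self.address = new_address
        # Moving may change risk, so trigger a premium review.
        self.premium_reviewed = True

svc = CustomerService()
svc.on_customer_moved('12 New Street, Springfield')
```

The event-handling form is also what makes the pub/sub integration story natural: the same event that updates this service's data can be distributed to any other interested service.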
I want to come back to the broader discussion of the relationship between SOA as an architectural style and message-oriented architectures and programming models in a later posting; for now I'd love to get people's feelings on this idea of recasting some of these update transactions as events.
Before I begin, I would like to quickly note that over the next few days I intend to put together a more detailed response to Alan Cameron Wills' comment on my last posting. I know Alan has a lot of experience in the area of modeling languages, so his comments are taken seriously and have given me much to think about - unfortunately, much to think about and not so much time to commit thoughts to paper.
I don't know how well the notion of Desert Island Discs travels across geographies and cultures, but for those unfamiliar with the radio programme here is a brief review: the host interviews a guest through their choice of 8 pieces of music that they would like to take with them if they were left alone on a desert island (they also get to choose one book and a luxury item).
So what has this to do with our usual topics? Well, some years back an editor at Addison Wesley asked me: if I could have just one book on a desert island - one computer-related book - what would it be? More recently I was approached by someone here at IBM who asked what I would suggest as good background reading in software engineering. So, I thought it would be interesting to put down the texts that I would take with me, either because they are perpetually good to read or because I think they represent a particularly valuable point of view. You might be interested to know that the book at the top of this list is the one I selected all those years ago.
- Project Oberon - The Design of an Operating System and Compiler (Wirth, Gutknecht). As a programming language geek I count Wirth as a hero, and this book shows the elegance and simplicity of his solutions to complex problems.
- The Design of Everyday Things (Norman). Has anyone read this book and not been moved by what seems like basic common sense, and then looked at all of the things we've produced in our careers and been profoundly embarrassed?
- Knowledge Representation - Logical, Philosophical, and Computational Foundations (Sowa). A great, though hard read; this is an ambitious text as the title suggests but I personally would appreciate the quiet of a desert island to read this again.
- Object Oriented Software Construction (1st Edition, Meyer). I say specifically the first edition because I found the second to be more cluttered; however I learned object-oriented concepts from this text and I have always felt Eiffel to be an undervalued language.
- Taligent's Guide to Designing Programs. Well, Taligent didn't last long, but boy is this a nice, practical and simple set of coding styles and guidelines. Oh, and if you can't get this text any more almost any of the Taligent technical documents would do!
- Understanding and Deploying LDAP Directory Services (Howes, Smith & Good). I have always found LDAP to be a fascinating service, maybe more interesting than the relational model. Oh and at 846 pages if the island turns out to be cold at night there are a few fires that can be started with it.
- Managing Technical People: Innovation, Teamwork, and the Software Process (Humphrey). Watts Humphrey is more well known for his work on software process, but this book is insightful in articulating the motivation of technical people.
- Objects, Components and Frameworks with UML - the Catalysis Approach (D'Souza, Wills). Well, imagine my interest as I reviewed the shelf on which I have the complete Addison Wesley Object Technology Series - I should have a UML or modeling book in the list, right? But not picking a Rational book, and picking Alan Cameron Wills? A coincidence, I assure you, but a really great book of practical advice.
I did seriously consider Generative Programming (Czarnecki, Eisenecker) instead of Meyer's text for number 4 - a more modern book, representing possibly many of the future directions in software engineering. I was also torn over my choice of number 7; I like much of Humphrey's work and was tempted by the Personal Software Process.
So there it is, like it or not, and either way I'd love to hear what you would put - and specifically what would be number one.
Well, the last few months have been very busy and really fun - I am writing code for real! I have been seconded to work on the new Jazz REST Services (JRS) project**. JRS is a technology incubator project within the Jazz Project and provides a RESTful, resource-neutral store, which I'll talk about in subsequent posts.
This post then is about using Jazz, rather than developing for it, which has been a really positive experience. I've used a whole bunch of source control and configuration management systems over the years: RCS, PVCS, PCMS, CVS, SVN, ClearCase and ClearCase/ClearQuest UCM. They seem to fall into one of two broad categories, file based or work-item based; that is, they either deal in checking files and folders in and out, or they track work against work items and you commit the item to check in all the associated change sets. PCMS (way back when) was work-item based, UCM is, and now Jazz is as well; however, the level of integration and ease of use in Jazz is really a huge leap forward from any of those.
The workflow - creating a defect/task, making changes and associating them with the item - is as easy as you think it should be, and the collaboration features to share changes in-flight with team members, request validation of work and so on have been simple enough that even a small team like ours has used them daily. If anyone has seen any of the demos of Jazz so far you'll have seen Eclipse and Java, lots of Java :-) Well, I can say that this is pretty much the out-of-the-box configuration; however, it works just as well with PyDev and our Python test client projects.
So, to the last part of the title: yep, all my Jazz dev is done on my nice shiny new MacBook Pro. The Jazz client is also provided in a Mac OS X package and has worked perfectly all the way through the project. And, of course, the screen envy from my ThinkPad-using colleagues is always nice.
** the link will, at least for now, require sign-on, but that should be removed in the next week or so.
The hardest part of writing this entry has been how to start... well there, done that. But what I really mean is that, as I stated in comments to the earlier entry I do not want this discussion to degenerate into a "UML versus DSL" rock-throwing session, but to spur some open discussion on the merits of both. In particular I do want to revisit the notion of refinement and how it is supported by both general purpose and domain specific languages.
Let us take two examples from the world outside software (there's a world outside software?).
- My son and I recently read a book on Leonardo da Vinci (great link), in particular looking at the way Leonardo's output ranged from very rough sketches to beautiful and complete works of art. Explaining how artists start with rough pencil sketches, refining the lines and the perspective, and then move on to oils to complete the work made for a particularly interesting discussion.
- I know there are many analogies that we in computer science draw between our world and that of construction - here's another. Look at how buildings are really constructed, the architect does not build blue prints, they draw or make a model of the envisioned building (some of these drawings have become as well known as the actual buildings themselves). For example, Frank Lloyd Wright's Fallingwater started with a truly beautiful drawing that sold the client. Then followed floor plans and blueprints. Only then did wiring diagrams, plumbing details and specific engineering drawings for features such as the cantilevered balconies complete the story.
So, what can we see from this? Well, there are distinct phases in many human endeavors that allow us to see the outcome of our efforts in progressive levels of detail and from distinct perspectives. Nearly everything we do is based around our ability to abstract information we gather from the world around us, to learn by generalizing activities and matching them to past experience. So, is it any surprise that these are the natural ways in which we approach the world when it comes to developing works of art, buildings or complex software?
We start with known abstract and high-level patterns; if we're building a bridge we have a different set of starting patterns to a high-rise office or back yard dog house. We then refine this set of patterns according to the requirements we know (girder, arch, truss or suspension bridge) and complete a number of stages of design which we use to communicate the overall direction and strategy.
So, does this mean that the pencil and paper are domain specific tools, and that they are the design tools used ahead of detailed "coding" tools such as oil and canvas? Well, the important question is: does it really matter - does the artist make any such distinction? I would argue no. There are logical stages that the artist can use depending on the work at hand (not all paintings require a pencil sketch), and although different tools are used, it is perfectly acceptable to use pencil, ink or charcoal for the final result; they are fantastic tools depending on the desired outcome. Is it the case that the blueprints of our bridge require a different language from the engineering plans for attaching the suspension cables to the deck?
My position is that the creation of domain specific languages that do not seamlessly support the ability to transform information along the refinement scale are not helpful to us. So, for example, a component designer that is a stand alone tool unconnected to the class designer that provides the next logical level of refinement (classes being used to construct components) is a pot hole in the road from concept to actual implementation. Now, this is not as I have said to indicate that domain specific languages are bad, just that many of us in this industry love to create new languages be they graphical, textual or conceptual. We have to beware of the tendency to build these disjoint languages that force the user to keep stopping and jumping across another gap.
Now, does the UML help us in this? Well, actually, as it stands, no - it has many pot holes all of its own! But one way to look at the UML is as a pre-existing set of domain specific languages, with at least small and well understood gaps between them. For example, most everyone is familiar with the class model, the component model, the state machine model - all of these can be treated as sub-languages, and have been successfully applied in projects.
Now, the danger is then to say that the UML is surely enough for any problem (got a hammer here - anyone got a nail?). Well, therein lies a big, big hole waiting for the unwary. The UML is a general purpose language, like English, and as such can be vague in a particular domain; so, just as we have created specific usages of the English language for engineering or science, and even standardized the meanings of terms for defining specifications (RFC 2119), it is clear that the UML needs to have particular usage patterns documented as sub-languages, or domain specific usages.
But then, there are simply concepts in use today that do not map well to any UML sub-language or usage pattern. Well, the OMG has already thought of that and provided the Meta-Object Facility (MOF
) which is the underlying language used to construct the UML - making MOF a domain specific language for constructing domain specific languages? Here at IBM the open source Eclipse Modeling Framework
implements the MOF specification and provides both run time capabilities and tools for defining new modeling languages (and you can see examples in the XML Schema Tooling on eclipse.org).
Finally, let's cycle back to the notion that refinement has to be supported by tools that provide domain specific views of artifacts in the development lifecycle. How do the UML or EMF provide this support? Well, having a common infrastructure is a big help and allows tools to standardize concepts where they are common. But the bottom line is that we, the tool architects, have to consider the refinement of artifacts as a key feature of our products and realize that however cool another new language would be, delivering the tool experience that actually smooths the road in front of the developer is way, way cooler.
So, having spent most of my blog time on modeling, process and SOA I'd like to depart, at least for now, into something different. I have been coding up some samples, examples and experiments over the past few months and decided that, given the productivity requirements and costs, I would leverage a dynamic language rather than the almost ubiquitous Java here at IBM. My language of choice is Python, and I have a long history using it on a number of platforms; most interestingly, I contributed the changes to get Python 1.0.2 working on OS/2 2.x (I even found the email in the Python archives, Re: Python and OS/2 2.x, dated 2nd June 1994). As usual I got started quickly and just kept going, using Python then wxPython to develop the application UI and a number of other packages. Now, this is not to say that I could not have found equivalent packages for Java, but even though Java has been my primary language for some time now, I am just more productive in Python.
But why? Is there something inherent in the language? Unlikely - it's not radically different from any other; the dynamic nature is certainly an advantage over Java, with meta-programming and the exec() function, but is that all there is to it? After all, in 1994 Python, like Perl, was considered a "scripting" language, a term the language theorists say with a sneer. But whatever it is, there seems to be a resurgence of dynamic languages, and not just Python - Ruby and others are really taking on the "serious" languages. But why does it feel simpler? After all, there are no fewer language features; in fact Python supports both OO and functional styles and has some really unique features, but still it feels simple. In my opinion a lot of this is not the language itself but the way the language is used. For example, in Python if I want to open a database I open the database; in Java I have to use a JNDI factory to get a connection to get a name for a class which is a factory to obtain an instance of a JDBC connection that is abstracting the database as much as possible... It seems that we have layered on top of Java a whole set of patterns, practices and idioms that have become embedded in the libraries we use and therefore become required in using almost any Java API.
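To make the contrast concrete, here is the Python side of that comparison - a minimal sketch using the standard-library sqlite3 module; the in-memory database and the notes table are just for illustration:

```python
import sqlite3

# Opening a database is one call: no factories, no naming service,
# no container configuration.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
cur.execute("INSERT INTO notes (body) VALUES (?)", ("hello",))
conn.commit()
rows = cur.execute("SELECT body FROM notes").fetchall()
print(rows)  # [('hello',)]
conn.close()
```

The Java equivalent is not impossible, of course - it simply front-loads a layer of configuration and indirection before the first row ever comes back.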
But isn't abstraction good? Shouldn't the language do its best to build APIs that allow us to plug in different implementations of things? Well yes, but only to a point. It is clear that the use of PHP is growing across the web, and the use of Python is growing in many places: on the web, with the Washington Post powering sections of their site with the Django web framework (much like Ruby on Rails, see here), and in desktop applications such as Chandler. Larry Wall is quoted as saying that the principal virtues of a programmer are Laziness, Impatience and Hubris, and I find that dynamic languages play to my laziness (well stocked library), impatience (no compile, so edit and run) and hubris (I look smarter, faster). Interestingly, most Java developers use an IDE, with Eclipse being very popular, and in many cases, given the complexity of the code we develop, they are indispensable; but in the world of dynamic languages there are far fewer tools used, let alone required. Now, I do use Eclipse with a plug-in for Python development, but only because it saves me having two tools open when I switch between projects; in many cases my tool of choice is vi - and it works just fine (and it's small, simple and fast).
This has some bearing on an on-going activity here at IBM, an activity that has become known outside through blog postings by Sam Ruby. While I won't go into the background or details, here's a quote that has gotten around already.
Application development using IBM programming models and tools is untenably complex. The Research Division's new Services and Software strategy includes a strong focus on radical simplification. Radical simplification was one of the featured topics at Paul Horn's recent Vision Conference. Over 70 people in IBM worldwide are currently participating in an effort to define the problem, and the scope of the solution, more precisely. Our effort will lead to recommendations to emphasize, grow or refocus selected existing Research projects, to start new projects, and to undertake other initiatives to promote a culture of simplicity. This talk will discuss some of the insights we have gained so far into different perceptions of complexity, the nature of complexity in IBM software, why complexity is a high-priority problem for IBM, and some of the directions being pursued inside and outside IBM to deal with complexity
So, is there a message here? Obviously I have taken a personal message and will continue with my Python projects, but in a broader context I think we all have to learn that just because some software is large and complex and requires layers of abstraction to hide complex underlying infrastructure, it doesn't mean that all projects require the additional overhead. By overhead - because I know that word will upset some people - I mean not just larger libraries but simply cognitive overhead: I have to know much more to get things done (and personally I have more interesting things to use my neurons for). In really looking at simplifying our middleware platforms we (IBM) have to consider developing offerings that allow people to use simple languages and the right level of abstraction to develop applications. For the simple prototype just give me a simple system; as the application gets more complex maybe I need more complexity in the libraries and code, but add it only as needed.
I know this is a topic much discussed in a number of places, but hope we can have some interesting debate here on developerWorks.
I decided to write this short entry after the announcement that the Web Services Navigator
has been released to AlphaWorks. In a previous post I mentioned the work that IBM Research has been doing around Web Services and the work of the team led by John Morar
(and especially Wim De Pauw
) in the IBM T.J. Watson Research Center; this is one of the fruits of that labor. The interesting aspect to this is that although it is predominantly intended as a problem determination tool - identifying errant behavior in a network of services - there is another interesting perspective.
I recently finished a number of books leaving Stephen Wolfram's
book A New Kind of Science
at the top of my to-read pile. So, having already stated that I haven't started reading it, I want to be clear that this book has been on my list a while and I am hoping the holidays will give me a chance to get through at least some of the 1197 pages. My interest is really that as we look at SOA as the next generation of distributed systems development, built from carefully crafted and autonomous services, we have to consider the issue of Emergence
. Now, for those who have dealt with large and complex distributed systems this is no major surprise, but first let's take a look at a definition of Emergent Behavior.
Emergent Behavior - Behavior of a system that is not explicitly described by the behavior of the components of the system, and is therefore unexpected to a designer or observer.
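A classic toy illustration of this definition is Conway's Game of Life: every cell follows the same trivial local rule, yet patterns such as the "glider" emerge and travel across the grid - behavior described nowhere in the rule itself. A self-contained Python sketch (nothing service-oriented here, just the smallest demonstration of emergence I know of):

```python
from collections import Counter

def step(live):
    """One generation of Life; live is a set of (x, y) cells."""
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
gen = glider
for _ in range(4):
    gen = step(gen)

# After 4 generations the glider reappears, shifted diagonally by (1, 1):
print(gen == {(x + 1, y + 1) for (x, y) in glider})  # True
```

No individual cell "knows" how to travel; the movement belongs to the system, not its components - which is precisely the property that makes networks of autonomous services interesting, and occasionally alarming.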
For a good, and entertaining, introduction I would recommend Emergence: The Connected Lives of Ants, Brains, Cities, and Software
by Steven Johnson
. But basically, as we (and I include myself) write articles on the design of services, and as the industry looks forward to applications as cooperating networks of services developed in-house and integrated from external providers, we will have to come to terms with the potential for systems to behave in ways we did not, and maybe could not, predict. Now, we have many ways to define and model the messages, operations, protocols and policies for a service, and maybe even a process built from services, but there are still many issues left unaddressed.
So, why was this thought spurred by the Web Services Navigator? Well, it may not be possible to predict emergent behavior in such systems, but it will be absolutely necessary to monitor such systems to identify, and hopefully rectify, such behavior when it does emerge. The Web Services Navigator provides exactly this ability: to visualize the behavior of a system over time and identify the traces of interaction that do not match the anticipated scenarios. From an IBM perspective there are obviously interesting connections that can be made between our design-time tooling and our operational monitoring and management products. Imagine the ability to model the expected behavior, to define in sequence diagrams the traces we expect to see - the Navigator could then compare the actual traces with those specified, ignore those that match the expected behavior and highlight those that do not. The possibility then is for our monitoring system to raise events on identification of unexpected behavior, and possibly even to correct the behavior under certain circumstances. So, maybe as they say this is not a challenge but an opportunity. Time will tell.
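The core of that comparison is simple to sketch. The following is a hypothetical illustration only - the service names and trace format are invented, and this is not the Web Services Navigator's API - but it shows the essential check: an observed trace either matches one of the scenarios our sequence diagrams predict, or it is flagged for a human to look at.

```python
# Each interaction is (caller, callee, operation); a trace is a sequence
# of interactions; a scenario is the sequence a design-time sequence
# diagram predicts. All names below are invented for illustration.

expected_scenarios = [
    [("portal", "orders", "placeOrder"), ("orders", "billing", "charge")],
    [("portal", "orders", "cancelOrder")],
]

def matches_expected(trace):
    """True if the observed trace matches some modeled scenario."""
    return any(trace == scenario for scenario in expected_scenarios)

observed_traces = [
    [("portal", "orders", "placeOrder"), ("orders", "billing", "charge")],
    # An interaction loop nobody designed - candidate emergent behavior.
    [("orders", "billing", "charge"), ("billing", "orders", "placeOrder")],
]

anomalies = [t for t in observed_traces if not matches_expected(t)]
print(len(anomalies))  # 1
```

A real monitoring product would of course match traces structurally rather than by exact equality, but the expected-versus-actual split is the same.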
Well, it has been a while since I managed to sit and write here, with a long family vacation and some very interesting trips to IBM labs and customers in Europe pretty much filling up the month of August. One recurring theme, however, is one I've touched on here before: as we think more and more about "service-ifying" the business and developing an IT infrastructure based on SOA principles and patterns, one issue has to be addressed, and that is how we develop a truly enterprise-wide perspective on this new IT world. Already we are seeing terms such as service fabric
describing this enterprise wide network of services, service repository
to describe the additional metadata seemingly required for integration and service library
which seems to be a kind of service repository for developers... So, you can be forgiven for thinking the IT world has lost its head, until you realize that there is an underlying need that is being poorly represented by this plethora of new dictionary terms. Though before we leave this I will admit that we used the term service portfolio
within the RUP update for SOA
. What we are really trying to convey is that the Enterprise we envision is one whose technical capabilities are expressed entirely as services
and that we believe there will be a greater level of reuse across this IT landscape due to the nature of services (see here
for a discussion) and so understanding how these services collaborate in support of business processes will be key.
The problem with SOA in general is that we are trying to tackle two really hard problems in one go; specifically, one hard technical problem and one hard organizational problem (I'll leave it to the reader to decide which is the harder). The first is the need for integration technologies and approaches for the leverage of IT assets that actually works
(seemingly a minor requirement or even an afterthought in many organizations). The second is the need not to bridge the Business/IT gap (see here
for a discussion) but to eliminate the concept - to say there is an enterprise which achieves its aims through business knowledge and IT capabilities which are inseparable. Some customers are moving toward such a model, where projects are not conceived in the business world and then thrown over the wall to IT for implementation, but where teams consist of business and IT folks from inception through to delivery and in some cases into the monitoring and optimization as well. Fundamentally the key concept, or rather the concept that should be key, in the business domain is the business process, as it is really the delivery vehicle for the value that a business provides. And yet right now in IT we have so many key concepts you'd need to sit for a week with a dictionary to get to grips with them. The concept of a service has the value of providing a single abstraction that, due to its granularity, can be used to describe the actions performed as steps within a business process; thus there is now a common vocabulary in terms of business actions and services. It is in this approach that we see the real need for an enterprise-wide view of the services that IT provides - the capabilities of IT that support the business processes. We need to be able to identify existing services that may be reused in a given solution, we need to be able to see the current and planned interactions between services as we expand the portfolio, and finally we need the ability to use this information during the design, implementation and integration phases.
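The kind of enterprise-wide query such a view should answer can be sketched in a few lines. The portfolio below is hypothetical - the service and operation names are invented, and a real repository would hold far richer metadata - but it shows the basic question: given the steps of a proposed business process, which existing services already cover them?

```python
# A toy service portfolio: service name -> the operations it provides.
# All names are invented for illustration.
portfolio = {
    "CreditCheck": {"scoreApplicant"},
    "CustomerRecords": {"lookupCustomer", "updateCustomer"},
}

def reusable_services(process_steps):
    """Map each process step to an existing service that provides it."""
    hits = {}
    for step in process_steps:
        for service, operations in portfolio.items():
            if step in operations:
                hits[step] = service
    return hits

steps = ["lookupCustomer", "scoreApplicant", "notifyApplicant"]
print(reusable_services(steps))
# {'lookupCustomer': 'CustomerRecords', 'scoreApplicant': 'CreditCheck'}
```

The unmatched step, notifyApplicant, is equally interesting: it is a candidate for a new service, and spotting those gaps is as much a portfolio activity as spotting reuse.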
I read a good article entitled Service Orientation in Enterprise Computing
from Mike Burner
at Microsoft while I was traveling and Mike touches on some of these topics (and uses the service portfolio term). Specifically Mike introduces the separation of service orientation as the technologists view of the patterns and practices for service development, and the Service Oriented Enterprise
or SOE, which is the set of processes used to manage the IT service portfolio.
Each well-designed service and service-oriented solution becomes part of the organization's technology portfolio, components in the service-oriented enterprise. The hallmarks of a successful SOE are rigorous service factoring, a thorough, forward-looking integration strategy, and a common, coherent approach to the management and governance of services and solutions.
This separation is very useful as it allows us to describe the set of processes that govern development of individual services, that govern the development of a solution, and that govern the management of the service portfolio as separate lifecycles that share common goals and many common activities. In this regard IBM is set to extend and re-release the current RUP for SOA plug-in with specific guidance on the development of a service portfolio as a broader governance activity. This additional guidance, developed in conjunction with our customers, we hope to make available on developerWorks before the end of the year.
This does lead us nicely back to a point made in the first paragraph above, when you start to discuss the term SOA Governance
you very soon end up discussing SOA Registries
. As we have seen, this is inevitable, as the purpose of the SOA Governance processes is to manage the service portfolio, one expression of which may be a service repository. However, there is a second aspect here, which is a vendor and analyst push to convince us that we all need a global run-time repository of service metadata. In the RUP we certainly recommend that there be a design-time model of the services in your enterprise; if you are using modeling tools to design your solution from services, it seems only appropriate to have your portfolio accessible in the same medium. We also discuss the need for, or at least the desirability of, a central repository for service specifications and additional information. So should we not be able to provide concrete guidance on the form this repository should take? Well, as it turns out, we have a number of standards applicable in this area as well as a number of vendors competing for the limelight. It seems we have a choice of standards, both interestingly enough now under OASIS: UDDI and the ebXML Registry. There is also the possibility of using the Reusable Asset Specification (RAS) as a means to describe the metadata associated with services. As for vendors, there are companies such as LogicLibrary looking to recast existing repositories for an SOA purpose, as well as new players such as Systinet providing a specific SOA play.
- Note, when using the term services I do not imply a) web services, there may not be any HTTP or SOAP in sight, and b) that these are new-fangled things, COBOL on CICS is still a fine way to build IT capabilities.
- While most of the above vendors have chosen to build repositories around UDDI, one, notably Sun (Sun registry), has chosen to also implement the ebXML registry.