
Comments (6)

1 localhost commented Trackback

In the simplest possible way you can. For smart people, this probably means rolling your own schema language. Or, better yet, basing it on your actual implementation: use annotations and reflection to extract your schema directly from your code. For example: http://www-03.ibm.com/developerworks/blogs/page/pmuellr?entry=modelled_serialization
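The "extract your schema from your code" idea can be sketched roughly like this. This is a toy illustration in Python (the linked post is about a different stack, so the language and the shape of the output are assumptions here): reflect over a declared data class and emit a minimal name-to-type mapping as the schema.

```python
from dataclasses import dataclass, fields

@dataclass
class Person:
    name: str
    age: int

def extract_schema(cls):
    # Walk the dataclass's declared fields via reflection and emit a
    # minimal {field name: type name} mapping as the "schema".
    schema = {}
    for f in fields(cls):
        t = f.type
        schema[f.name] = t.__name__ if isinstance(t, type) else str(t)
    return schema

print(extract_schema(Person))  # {'name': 'str', 'age': 'int'}
```

The point is that the class definition is the single source of truth; the schema document is derived output, not something maintained by hand in parallel.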

2 localhost commented Permalink

"So declared interfaces are a good thing, both for code and data. Can we agree on that?"

Apparently, no, not everyone agrees with that.
However, I agree with you in principle, but not in practice: XML Schema is too complicated, XML itself does not provide a good mapping to data, and WSDL ... it's not "of the web".
I'd like to see more lightweight 'schema' stories for data, and then building on that, lightweight schema stories for RESTy web services.

3 localhost commented Trackback

Patrick, I'm already working on a WADL posting; stay tuned tomorrow.

The theme of today's posting is (supposed to be) that declared interfaces are a good thing. You and I seem to agree on that. Then the issue becomes how to declare interfaces. Today, we have Java (and C#) interfaces for code, XML schemas (and DDLs in general) for data, and WSDL for Web services. That doesn't necessarily make them the best, nor mean that they can't be improved upon or replaced with something better, but they're what we've got today, and they're better than nothing (the Smalltalk approach that REST seems to embrace or at least embody).
So, declared interfaces are good. How do we do them?

4 localhost commented Trackback

Another way to look at statically-typed interfaces is as a rigid, brittle stopgap to work around a failure to have done sufficient unit testing.

Whether having a declaration facility for Web services is inherently a good idea is currently under active debate. However, such an argument cannot possibly be based on WSDL as an exemplar, because it is a fatally flawed technology, neither readable nor writable by humans; the fact that it is typically generated by introspecting existing code means that, once again, you end up trying to extend object models and procedure calls across the net, something that, all these decades later, we should have learned really is a bad idea.

5 localhost commented Trackback

I posted a response here:
I can summarize it thusly: I agree with what Ramon said above, a lot :)

6 localhost commented Trackback

Ramon's right this far: in any language, you don't know if your code works until you run it. But there are a lot of possible failures that can be diagnosed before the code even runs, if the language processor (parser, compiler, IDE, whatever) has enough information to do basic consistency checks. There are a lot of things I like about Ruby, but it's damn annoying that the only way you can even chase down misspelled variable names is to run the code --- because the lack of an explicit variable declaration syntax makes it extraordinarily difficult to support even the level of checking that you get out of Perl's (gah!) "use strict".

Type inconsistencies, to me, are the same sort of thing. If the type signature is correct, the code might still be bad --- but if it's wrong, you know you have a problem. I like testing too --- but if the language processor can be trusted to catch stuff like this automatically, those are tests that I don't have to write, and that's a time-saver. (And it doesn't necessarily require a whole lot of code decoration; look at, say, Haskell, which has strong static types even though type declarations are almost entirely optional --- the compiler infers the types, and lets you know when you blew it).
