
Comments (9)

1 localhost commented Trackback

I think you're still missing the point, Patrick.

 
In RESTful systems, all services expose the same contract. That's *by definition*. What is the contract? HTTP.
 
So when anybody asks "show me a contract for your service", just point them to RFC 2616.
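
To make that concrete, here's a minimal sketch of the uniform contract at work (the host, resource path, and media type are made up, and I'm just using Python's standard http.client). The point is that the verbs are the same no matter whose service you're calling:

    import http.client

    conn = http.client.HTTPConnection("example.org")

    # Read a resource with GET, the same way you'd read any other resource.
    conn.request("GET", "/customer/12345")
    resp = conn.getresponse()
    body = resp.read()
    print(resp.status, resp.getheader("Content-Type"))

    # Update it with PUT, again using nothing beyond the HTTP contract itself.
    conn.request("PUT", "/customer/12345", body, {"Content-Type": "application/xml"})
    print(conn.getresponse().status)
    conn.close()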

2 localhost commented Permalink

Mark: in the post you were responding to, Aristotle Pagaltzis wrote (and I think this goes a long way to answering Patrick's question): "There is a sort of equivalent to a description of an RPC API for REST, but it looks very different from what those who are looking for one would expect: It’s the media type specification. The key to REST lies in the fact that the client understands the representations the server returns, and can interpret them to find out which other resources the server has on offer and what they can be used for."

 
Doesn't the client's ability to comprehend the representations produced by the server imply a contract beyond HTTP itself?

3 localhost commented Trackback

I do want to believe. I swear. I just don't understand how this scales. If I end up building a complex web-based system the size of Flickr, how do you describe how it works? Just pointing people at RFC 2616 simply isn't good enough. How would I ever find out what I can actually DO?

4 localhost commented Trackback

Adam ... exactly. And then you always hear "Hypermedia as the engine of application state", which implies that you know how to understand hypermedia, which is a structured format. Seems like ... you don't need any structure on the web, except for the structure implied by HTML. Why is HTML treated specially here? Why are web browsers, as HTML interpreters, treated specially here? I see a fine line between Mark's and others' arguments that "we don't need no stinkin' formats" and recent (implied) arguments I've seen from Pete Lacey and others that "the web" == "the web browser". The "web" is clearly NOT the "web browser". Breaking a web browser does not mean breaking the web.

5 localhost commented Trackback

Adam - many of the same issues exist with data in a RESTful context as they do with data in an SOA/WS context. I was trying to compare apples-to-apples, i.e. everything below the data.

 
Patrick - "understanding hypermedia" just means being able to recognize links and dereference them: not a very difficult task, and extremely generic. HTML isn't special here. I've built very large RESTful systems using just XML and HTTP, plus making sure there were lots of URIs in the XML. That's hypermedia, with no HTML in sight. Said another way, all "hypermedia" means (and REST's "identify resources" constraint) is that instead of 12345, you say http://example.org/customer/12345.
 
And as for describing how Flickr works, I dunno. There's probably lots of ways to do it, just as there are many ways to view any architecture (e.g. 4+1). The important thing, I'd say, is that it *works*.

6 localhost commented Trackback

Mark, how did you recognize these URIs in your sea of XML? I reckon you parsed the XML? Traversed a DOM somehow? How did you know where to look for the URIs in the DOM? Or did you just find a magic function somewhere, findURIsInDom(dom)?

 
That's really all I'm talking about here. Somewhere, there was a structural definition of where to find URIs in your XML.
 
I think it's possible to abstract, lightly, over HTTP to handle turning the "sea of XML" into something a programmer can understand (structures/objects), and keep all the great HTTP optimizations. The Gartner press release on "REST has won" was scary in saying "you will all be using low-level HTTP client libraries from now on". That's a recipe for disaster, frankly, especially since the number of decent low-level HTTP client libraries is quite low.

7 localhost commented Trackback

You could use XLink, or define your own "foo:href" if you wanted to. I mostly use RDF/XML, so I used rdf:about.

 
I guess part of "getting" REST includes appreciating that HTTP libraries are not "low level"; they are high level. Higher level than SOAP libraries, in fact, because they all have knowledge of a particular application interface (GET, PUT, etc.). Programming directly to an HTTP library is a *GOOD* thing. And there are really quite a few good libraries out there; Apache has a good Java one, Python has httplib2...
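
Purely as an illustration (the catalog URL is made up), here's how little it takes with httplib2 plus a standard XML parser to pick out rdf:about / xlink:href attributes and dereference them:

    import httplib2
    import xml.etree.ElementTree as ET

    RDF_ABOUT = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}about"
    XLINK_HREF = "{http://www.w3.org/1999/xlink}href"

    h = httplib2.Http()
    resp, content = h.request("http://example.org/catalog.rdf", "GET")
    doc = ET.fromstring(content)

    # Collect every URI the document points at, then GET each one.
    uris = [el.get(a) for el in doc.iter() for a in (RDF_ABOUT, XLINK_HREF) if el.get(a)]
    for uri in uris:
        h.request(uri, "GET")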

8 localhost commented Trackback

Mark/Bill, thanks for the comments.

 
Combining bits of both comments, "foo:href" and "rdf:about" are where we "want some more information". As in, telling someone about those elements/attributes and what they mean.
 
Again, I'm not suggesting some be-all, end-all schema to rule the universe, just some kind of description of what users can do with your service. Such a description can probably be made machine readable, and even include human readable descriptions of the meaning and semantics of the data / services. More complex systems will need more complex meta-data. Simpler ones can get by with simpler meta-data, or just use existing conventions. A lot of us will be able to use simpler systems to build bigger systems, but not everyone. I'd hate to tell the folks building complex systems: "Sorry, REST is only for simple stuff; for complex systems, you might want to look at WS-*".
 
I see this as a spectrum between strongly, strictly typed systems like WS-*, or say WADL, and the loosely typed browseable web. My sweet spot lies much closer to the browseable web.
 
And w/r/t HTTP libraries: dunno. curl in PHP is simply terrible in terms of its API. HttpURLConnection ('built-in' to Java, so it's what most people use) is not great. HttpClient from Apache has lots of functionality, but isn't the simplest thing in the world to use (it appears to have been refactored multiple times, with the old API still littering the public-facing API).
 
I have another post I've been writing on "building your own frameworks" which I can probably dovetail into this discussion. Having every user of your service make direct HTTP calls is kinda horrible. What I like to do is provide a little framework for accessing your service, offering a slightly higher-level API; the generic high-level frameworks are almost always too generic to be useful. Having your little framework work off of your own little meta-data usually ends up being fairly agile. And then you can use that meta-data to describe the HTTP-level API to the world.
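
Here's a sketch of what I mean; the metadata format, the URLs, and the class name are all made up, and a real service would have its own:

    import httplib2

    # A hand-written, machine-readable description of the HTTP-level API.
    SERVICE_META = {
        "get_customer":    ("GET",    "http://example.org/customer/{id}"),
        "update_customer": ("PUT",    "http://example.org/customer/{id}"),
        "delete_customer": ("DELETE", "http://example.org/customer/{id}"),
    }

    class CustomerClient:
        """Thin wrapper so callers get a small API instead of raw HTTP plumbing."""

        def __init__(self):
            self.http = httplib2.Http()

        def call(self, operation, body=None, **params):
            method, template = SERVICE_META[operation]
            resp, content = self.http.request(template.format(**params), method, body=body)
            if resp.status >= 400:
                raise RuntimeError("%s failed with HTTP %d" % (operation, resp.status))
            return content

    # xml_doc = CustomerClient().call("get_customer", id=12345)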

9 localhost commented Trackback

On my blog, I've tried to describe a scenario where declarative data annotates links and form actions for machine processing.

 
The example I describe is a machine agent "buying" things on a shopping site.
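
In rough terms (the shop URL and the rel="purchase" annotation below are invented stand-ins for the declarative data described in the post), the agent does something like:

    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    # Fetch the item's representation (assumed here to be well-formed XML).
    with urllib.request.urlopen("http://shop.example.org/items/42") as resp:
        doc = ET.fromstring(resp.read())

    # Look for the annotation that marks a form action as a purchase, then invoke it.
    for el in doc.iter():
        if el.get("rel") == "purchase" and el.get("action"):
            data = urllib.parse.urlencode({"quantity": "1"}).encode()
            req = urllib.request.Request(el.get("action"), data=data, method="POST")
            with urllib.request.urlopen(req) as reply:
                print(reply.status)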
