What happens when big companies go bad? They reinvent HTTP in XML (some markup language plus a bunch of schemas) and transport it over HTTP. Yup, duplication in two ways. You've got to be kidding me. Anyway, this stuff is now a W3C Recommendation. A special track (“category”) at the W3C for stuff that Members want to work on because they don’t understand the Web seems like a good suggestion, and not just for this particular specification.
Wasn't this to get HTTP through some networks, or something like that?
Yes, sort of. The point of this spec is to incorporate the HTTP headers into the data stream itself, because many routers, firewalls, proxies, etc. strip out, modify, or block HTTP headers that the consumer may require. You can also use it over direct TCP/IP connections on internal networks, bypassing HTTP altogether.
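If you haven't seen it, the addressing information rides inside the SOAP envelope rather than in the HTTP headers, so intermediaries that mangle those headers can't touch it. A minimal sketch (the endpoint URIs and action below are made up for illustration):

  <soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
                 xmlns:wsa="http://www.w3.org/2005/08/addressing">
    <soap:Header>
      <!-- what HTTP would carry as the request-URI and reply routing
           now lives in the message itself -->
      <wsa:To>http://example.com/orders</wsa:To>
      <wsa:Action>http://example.com/orders/Submit</wsa:Action>
      <wsa:MessageID>urn:uuid:6b29fc40-ca47-1067-b31d-00dd010662da</wsa:MessageID>
      <wsa:ReplyTo>
        <wsa:Address>http://example.org/client/callback</wsa:Address>
      </wsa:ReplyTo>
    </soap:Header>
    <soap:Body>
      <!-- application payload goes here -->
    </soap:Body>
  </soap:Envelope>

Since none of that depends on the HTTP layer, the same envelope can be dropped onto a raw TCP connection or a message queue unchanged.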
Anne... although it looks like there is some overlap with HTTP, it was not designed by people with a disregard for, or lack of understanding of, World Wide Web technologies. There really is a need for this. I think you'll be surprised (and troubled, as a developer) as you get more real-world exposure to how large companies set up their networks. It's an eye-opening and head-shaking experience.
I would think this would be useful for Web Services that aren't just transported over HTTP. At work we predominantly use JMS queues as web service endpoints, and this recommendation could be useful for abstracting over transport differences.
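For instance, the reply endpoint can be an endpoint reference with a non-HTTP address, so a request can arrive over HTTP while the response goes back onto a queue. A sketch (the jms: URI and queue name are hypothetical; exact JMS URI conventions vary by vendor):

  <wsa:ReplyTo xmlns:wsa="http://www.w3.org/2005/08/addressing">
    <wsa:Address>jms:queue:ExampleReplyQueue</wsa:Address>
  </wsa:ReplyTo>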
Sorry Keith, but WS-Addressing is not needed, and was developed largely by people who do NOT understand the Web (not to name names, of course). Did you realize that the former chair of the group left BEA for Yahoo and now actively criticizes Web services? Rumours are that another critical contributor to the spec is also going to Yahoo to work on actual services-on-the-Web.
I agree with Keith. Large companies often have ugly networks: a request within a company between two departments at different locations often passes through multiple proxy servers which modify HTTP headers and often have incorrect dates/times (which affects the HTTP Date header). HTTP caching and gzip compression headers are often stripped. Fixing these issues often takes days or weeks (it can take days just to get the network managers to do anything).
I know of some successful implementations of web services; in practice, data exchange between applications (in large companies) is often implemented with FTP and CSV files because external legacy systems do not support newer technologies. Web services are really much easier to implement and maintain than FTP/CSV.
...in other words, WS-Addressing is a nasty big hack around a bunch of buggy, poorly-written software. And the rest of us, who don't use buggy, poorly-written software, have to put up with their hack.
Lovely. But I guess it's easier for such companies to spend a fortune on developing new 'standards' than it is to spend a fraction of the amount implementing current standards properly. But that'd be insane!