Anne van Kesteren

Peer-to-peer connections

After the <device> element (see What’s Next in HTML, episode 1), Ian sketched out an interface in the WHATWG HTML draft that builds on it, and it looks quite interesting. Peer-to-peer connections (URL bound to change):

[NoInterfaceObject]
interface AbstractPeer {
  void sendText(in DOMString text);
  attribute Function ontext; // receiving

  void sendBitmap(in HTMLImageElement image);
  attribute Function onbitmap; // receiving

  void sendFile(in File file);
  attribute Function onfile; // receiving

  attribute Stream localStream; // video/audio to send
  readonly attribute Stream remoteStream; // video/audio from remote peer
  attribute Function onstreamchange; // when the remote peer changes whether the video is being sent or not

  attribute Function onconnect;
  attribute Function onerror;
  attribute Function ondisconnect;
};

[Constructor(in DOMString serverConfiguration)]
interface PeerToPeerServer : AbstractPeer {
  void getClientConfiguration(in PeerToPeerConfigurationCallback callback);

  void close(); // disconnects and stops listening
};

[Constructor]
interface PeerToPeerClient : AbstractPeer {
  void addConfiguration(in DOMString configuration);
  void close(); // disconnects
};

[Callback=FunctionOnly, NoInterfaceObject]
interface PeerToPeerConfigurationCallback {
  void handleEvent(in PeerToPeerServer server, in DOMString configuration);
};

You will still need some kind of intermediary (i.e. a server, in almost all practical scenarios) to exchange the address, but after that things can get pretty interesting, I think. I was hoping people would be willing to share their thoughts on the interface sketch above and on the general idea of having access to peer-to-peer connections from Web pages and the Web platform in general.
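
To give a feel for how the two ends might be wired together, here is a rough, non-normative sketch. The sendToRemotePeer and onMessageFromRemotePeer functions are placeholders for whatever intermediary you use to exchange the configuration strings, and both the format of those strings and the shape of the receive events are guesses on my part:

// Listening side: create a server and pass its configuration to the other
// peer through the intermediary.
var server = new PeerToPeerServer("server configuration string"); // format not defined here
server.getClientConfiguration(function (srv, configuration) {
  sendToRemotePeer(configuration);
});
server.onconnect = function () {
  server.sendText("hello from the listening side");
};

// Connecting side: create a client and feed it the configuration received
// through the intermediary.
var client = new PeerToPeerClient();
onMessageFromRemotePeer(function (configuration) {
  client.addConfiguration(configuration);
});
client.ontext = function (event) {
  alert(event.data); // assuming the received text ends up in event.data
};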

Comments

  1. Seems pretty cool - looks like you could make Chat Roulette without Flash.

    Posted by Rich at

  2. Anne,

    I think that, if I understand the intention of the PeerToPeer feature correctly, this is an extremely useful feature but also a fundamental change in the WWW architecture. This feature would basically turn each browser into a server-type component on the WWW, not only in terms of computation but also in terms of connectivity, i.e. other entities on the WWW could initiate connections to processes in the browser. This means that the WWW would potentially grow with every browser connected to it. More specifically, the WWW space of URIs would grow for each executing PeerToPeerServer object. The space would also become more dynamic, since browsers are opened and closed more frequently than regular servers.

    Lots of (visionary) use-cases for such an architecture, beyond the obvious data transfer, have been discussed all over the WWW, so I'm going to mention just two. 1) A recent discussion on the WHATWG mailing list - Web-sockets + Web-workers to produce a P2P website or application. 2) A longish post on the HighScalability blog (it's worth the time to read it through) - Building super scalable systems: Blade runner meets autonomic computing in the ambient cloud. As I said, these are more long-term benefits of making everything a server -- and web browsers, that is, the client sides of Web apps, are an (obvious?) choice.

    Since the potential use-cases are so broad and the feature induces fundamental changes in the architecture, I think it should be introduced and provided in layers. As proposed, the feature implies that it would be used only for high-level browser-to-browser communication, which is potentially limiting. APIs that enable these high-level P2P interactions should be an additional layer built upon a basic one which enables webapps to programmatically start an HTTP server and add handlers to it. This reminds me of the Programmable HTTP Serving part of the "Programmable HTTP Caching and Serving" spec. That spec defines an API for creating servers that serve requests originating only from the local browser, while one of the outcomes of the PeerToPeer feature would be servers that serve requests originating from remote browsers or any other entity on the WWW.

    The importance of starting from HTTP is to make the whole Web platform (as you call it) uniform - the Web is based on URIs for identifying resources and on HTTP for accessing those resources. Of course, new protocols like WebSockets will fit into this frame since they are and will be based on URIs and HTTP (even if only for the HTTP upgrade handshake). The HTTP-and-URI Web platform is the base building block of the Web and it should stay that way. That's why I think that extending the browser with serving capabilities should start there - enable the browser to claim a part of the global URI space and then extend it with standard protocols and handlers - HTTP and WebSockets. The streaming, file transfer, bitmap transfer and other APIs seem like an upgrade over that base (and a very useful upgrade), but the base must also be present and available to webapp developers. This reminds me of the Reverse HTTP specification (note that there is also an IETF spec with that name but with a different purpose and semantics). The Reverse HTTP spec is a RESTful protocol and system that enables client-side processes of webapps to claim a part of the HTTP URI space (via a special proxy) and in that way enables browsers to be not only WWW clients but also servers. Thus, the fundamental goals of this PeerToPeer feature and the Reverse HTTP spec seem very similar - to enable access to webapp processes running in browsers.

    I've not commented on the exact interface Ian proposed since I think it's still too early for that; the interface is not the cause but the effect of the use-cases the feature is trying to cover. That is what should be discussed first, and I'm looking forward to Ian starting that discussion soon on the mailing lists.

    This is going to be a great feature!

    Best, Ivan

    Posted by Ivan Zuzak at

  3. Very interesting, Ivan,

    My thought on the notion that browsers will occupy URI space is that URIs are supposed to be persistent and permanent. If that part of Web architecture is to be honored, there will have to be some kind of online presence-aware proxy or other type of in-net registry between my browser(s) and everyone else.

    Would these be provided on a per-application basis? Or would peer-supporting URI-management servers be application-neutral?

    My browser opens, closes, travels, etc., and "my" browser isn't even one browser. I can imagine that many peering applications will be developed where "my" peer node is supposed to be associated with me as an individual and not with my software's specific instance(s). Like the above, this also seems like something that could only be managed by an online service or supernode outside the browser.

    Posted by Bean Lucas at

  4. Hi Bean,

    Those are excellent questions, to which I of course don't have clear answers yet. So I'll just write some of my thoughts below...

    I agree that cool URIs don't change. However, I think we both agree that the WWW architecture will change, and this change may require that we rethink URIs with regard to their purpose and usage (URIs will identify things that are by their nature dynamically available). URIs themselves may not need to change, as you said - we may only need better URI management systems, services, schemes and tools which would enable this dynamism (URI registration, redirection, etc.). Also, just to clear things up - when I wrote that browsers will claim parts of the URI space, what I actually meant was that browsers will implement APIs for claiming parts of the URI space. Users of these APIs will be either webapps or the browser itself.

    Simple kinds of these systems for dynamically claiming URIs for webapps can be developed even today - either as a) a part of the web application itself or b) a separate application-neutral system. For example, a feed-reading webapp served from http://feedreader.com could implement the system in-house and reserve a new URI each time the webapp is served to a client browser (e.g. at http://feedreader.com/dynamic-uris/dyn_uri_1), or the webapp could use an external system like ReverseHTTP (and then clients would get a URI like http://www.reverseHTTPprovider.com/dynamic-uris/dyn_uri_1). So, I think there will be many implementations of systems for handing out URIs and managing them afterwards, and that they will be application-neutral - you might implement one within your application domain, or use an external system. Also, since URIs are strings, creating new ones will never be a problem since we can never run out of them. For example, when I load the feed-reading webapp in my browser, the app will claim a URI and use it for receiving real-time notifications via pubsubhubbub. After the user closes the webapp, the URI may simply be discarded by the system that handed it out, i.e. it will return a reasonable HTTP error status code.
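
    As a very rough illustration of that flow (the provider endpoint, the response format and everything else here is entirely made up), the webapp could do something along these lines:

    // Hypothetical: ask an external ReverseHTTP-style provider for a fresh,
    // publicly reachable URI that this page instance will own while it is open.
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "http://www.reverseHTTPprovider.com/dynamic-uris", true);
    xhr.onreadystatechange = function () {
      if (xhr.readyState !== 4 || xhr.status !== 200) return;
      var myCallbackURI = xhr.responseText; // e.g. .../dynamic-uris/dyn_uri_1
      // The webapp would now hand myCallbackURI to a pubsubhubbub hub as its
      // subscription callback and poll (or long-poll) the provider for requests
      // delivered to that URI. Once the page is closed and the URI is discarded,
      // the provider simply starts answering with an error status code.
      console.log("Claimed dynamic URI: " + myCallbackURI);
    };
    xhr.send(null);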

    On the other hand, claiming URIs for browsers (not webapps) may be more complicated since, as you noted, you as a person are exposed via the browser, and often it's not even a single browser. So more complex URI management and synchronization may be required to expose your "presence". I haven't thought about these scenarios before so I don't have a clear picture yet, but I think that establishing this presence may require more than just URI management - more high-level standards may be involved, like identity management, security, single sign-on in browsers ...

    What do you think? I think we're missing examples through which it would be easier to point at problems and discuss :).

    Cheers, Ivan

    Posted by Ivan Zuzak at