Last week was the second reformed TAG meeting, this time with new chairs, and hosted by me at Mozilla in London. I felt that overall it went well, though there was quite a bit of repetition too. Getting a shared understanding takes more time than desired. Takeaways:
provider.com with WebRTC or will it need a dedicated API for the next twenty years?
Also, the W3C TAG is now on GitHub. It took some arguing internally, but this will make us more approachable to the community. We also plan to have a developer meetup of sorts around our meetings (a little more structured than the first one in London) to talk these things through in person. Feel free to drop me a line if something is unclear.
There are a ton of features in the web platform that take a URL. (As the platform is built around URLs, that makes a ton of sense, too.)
@font-face, … The semantics around obtaining a resource from such a URL, however, are not very well defined. Are redirects followed? What if the server uses HTTP authentication? What if the server returns 700 as status code for the resource? Does a data URL work? Does about:blank work? Is the request synchronous? What if I use a skype URL? Or mailto? Is CORS used? What value will the Referer header have? Can I read data from the resource returned (e.g. via the canvas element)? Can I display it?
What seems rather trivial is actually rather complicated.
At the moment Ian Hickson has defined some of this in the HTML Standard, in an algorithm named fetch. Then CORS came about (for sharing cross-origin resources) and the idea was that it would neatly layer on top, but it ended up rather intertwined. And now there is another layer controlling fetching, named CSP. To reduce some of this intertwining and simplify defining new features that take a URL, I wrote the Fetch Standard. It supersedes HTML fetch and CORS and should be quite a bit clearer about the actual model, as well as fix a number of edge cases.
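To give a concrete sense of what needs pinning down, here is a minimal sketch using the fetch() JavaScript API that later grew out of this work; the URL and the particular option values are made up for illustration.

```js
// Minimal sketch, not from the post: the knobs a single fetch involves,
// expressed with the fetch() API that the Fetch Standard later defined.
// The URL and option values are illustrative.
async function loadFont() {
  const response = await fetch("https://example.com/font.woff2", {
    mode: "cors",             // is CORS used?
    credentials: "omit",      // are cookies and HTTP authentication sent?
    redirect: "follow",       // are redirects followed?
    referrerPolicy: "origin", // what does the Referer header contain?
  });
  if (!response.ok) {
    // e.g. a server answering with a bogus status code such as 700
    throw new Error(`Unexpected status: ${response.status}`);
  }
  return response.arrayBuffer(); // can the returned data be read?
}
```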
It is not entirely done, but it is at the point where review would be much appreciated.
At Mozilla we’re trying to bring the web platform closer to what is taken for granted in the “walled gardens” of our time (Apple’s App Store, Google Play, and friends). A big thing we need to solve is offline. As applications, sites should just work without network connectivity. Some variant of “NavigationController” (the name is bad) will give us that, but we need to iterate on it more. And in particular we need to test it to make sure performance is adequate and the API simple enough.
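As a very rough sketch of what registering such a controller could look like (written against the Service Worker API that this proposal eventually became; the script URL and scope are made up):

```js
// Rough sketch only: "NavigationController" later shipped as the
// Service Worker API; the script URL and scope here are made up.
if ("serviceWorker" in navigator) {
  navigator.serviceWorker.register("/controller.js", { scope: "/" })
    .then(registration => {
      console.log("controller registered for", registration.scope);
    })
    .catch(error => {
      console.error("registration failed", error);
    });
}
```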
We have an API for end-user notifications, but after the site is closed, clicking the notification from the notification center will fail (what should happen?), and if there are multiple browsing contexts with the same site open there is also some ambiguity as to which should receive focus. The permission grant is per-origin, but a single origin can host multiple sites. Push notifications face similar issues: the site is not open, but a push notification for it comes in; where should it be delivered?
The idea we have been toying with is a worker that could be fired up whenever there is some external event that cannot be directly handled by the site (e.g. when the site is not open). This idea is not new; Google suggested it long ago, but it did not take off. A change from their model would be to not make these workers persistent, but rather short-lived, so they are not too wasteful. Part of the application logic would move to the server, and push notifications can be used to wake the worker (we have been using “event worker” as a name) to e.g. notify the user or synchronize state for when the user next navigates to the relevant site.
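A sketch of what such an event worker might contain, written against the push and notification events that later shipped in service workers; the notification text and URL are made up and not part of the proposal described here:

```js
// Sketch of the "event worker" idea: a short-lived worker woken by an
// external event. Written against the service worker events that later
// shipped; the notification text and URL are made up.
self.addEventListener("push", event => {
  event.waitUntil(
    self.registration.showNotification("New message", { tag: "inbox" })
  );
});

self.addEventListener("notificationclick", event => {
  event.notification.close();
  // Focus an existing browsing context for the site, or open a new one.
  event.waitUntil(
    clients.matchAll({ type: "window" }).then(windows =>
      windows.length ? windows[0].focus() : clients.openWindow("/inbox")
    )
  );
});
```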
Well it’s not possible to win this kind of thing. This is a continuous striving that people have done for a long time. Of course, there is many individual battles that we win, but it is the nature of human beings that human beings lie and cheat and deceive and organized groups of people who do not lie and cheat and deceive find each other and get together… and because they have that temperament, are more efficient. Because they are not lying and cheating and deceiving each other. And that is an old, a very old struggle between opportunists and collaborators. And so I don’t see that going away. I think we can make some significant advances and it is perhaps, it is the making of these advances and being involved in that struggle that is good for people. So the process is in part the end game. It’s not just to get somewhere in the end, rather this process of people feeling that it is worthwhile to be involved in that sort of struggle, is in fact worthwhile for people.
The W3C TAG meeting I attended in Boston went better than expected. I got to meet the rest of the group and we discussed a variety of interesting topics without ever getting too far into lalaland (although the Polyglot Markup discussion was pushing it). I did think it was a bit too formal, though fortunately Jeni and Tim supplied a healthy dose of comic relief.
I thought I would outline some of what we discussed here and quickly jot some thoughts down for further reflection later:
Content-Type (file signatures would have worked just as well with a safe fallback), it has not worked out in practice. We should at least indicate in the finding that what it requires is not what is practiced.
video element we also standardized the API and left the black box (the codec) as an open question, for which we eventually found WebM to be an acceptable solution. (Of course WebM is not universally supported, so whether this is completely okay remains to be seen, unfortunately.) As a corollary, standardizing the API for the DRM black box will bring more content out of plugins and may at some point lead to a standardized black box (presumably then a white box). A counterargument goes that while everyone would be happy with an open codec, not everyone would be with W3C-blessed DRM. And that even while you could envision an “open” black box, nobody would use it because it does not provide the protection deemed necessary.
You can find the raw minutes here: March 18, March 19, and March 20. Time to get back to London.
Standards developed by the WHATWG are licensed under CC0 (a public domain dedication that works globally). The HTML Standard is licensed under MIT for historical reasons. This is important for these reasons:
The W3 Project at CERN had the same policy as the WHATWG:
“The definition of protocols such as HTTP and data formats such as HTML are in the public domain and may be freely used by anyone.” (Tim BL)

Nowadays the W3C employs a more restrictive W3C document license that does not allow for derivative works and requires attribution.
Received an introductory email from my bank today. “Dear Customer,” it starts and it goes on to explain how online statements work. For instance, how can I be sure these emails are from my bank? “[W]e will always greet you personally using your name[.]”
I have been quite busy preparing for this and all the while I felt uninspired as to how to write about it. And now I am here and the words still won’t come, though the place is awesome and full of excitement. I live in London now and Monday I start working at Mozilla.
My MacBook Pro crashes every hour. Considering I did not need most of the things that were on it anyway, I made a quick backup of the essentials, erased the hard drive, and started anew. That it crashed again while installing Mac OS X was not that promising, and unfortunately a fresh install did nothing but remove the non-essentials from my system. So presumably it is a hardware failure of sorts.
Anyway, what does it take to make a commit again?
edit is so useful).
Time well spent!
(Not sure why Apple and Bare Bones Software spell command-line tools differently.)
Quite quickly after the summer vacation period I figured out I was not that passionate about building a new product. What I wanted to be working on was a much smaller territory: URLs. So I started writing the URL Standard. Some time in, I learned about the NLnet Foundation and wondered whether they might be able to help me out in this endeavor, as being independent for a while felt nice. It would also ensure I could finish the first 80% of the URL Standard, leaving the remaining 80% (no typo) for when implementations start to align and feedback starts coming in, which for standards work about already implemented technology is usually much later (e.g. the HTML parser was first defined in 2006 and it took until 2012 for it to be mostly implemented across the board).
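For a taste of the kind of behaviour the URL Standard has to pin down, consider how a conforming parser normalizes input (shown here with the URL API; the example URL is made up):

```js
// Illustrative only: the sort of parsing behaviour the URL Standard
// has to define precisely. The input URL is made up.
const url = new URL("HTTPS://EXAMPLE.com:443/a/../b?x=1#frag");
console.log(url.href);     // "https://example.com/b?x=1#frag"
console.log(url.protocol); // "https:"
console.log(url.pathname); // "/b"
```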
NLnet is one of the few organisations in the world that funds independent people who contribute to the internet. The projects they have funded range from software running on root servers of the internet to popular security browser plugins such as NoScript, from TOR Hidden Services to Unhosted, from real time kernel updates (KSplice) to finding the security holes in GSM telephony (and then rebuilding it with open source GSM/LTE). All open source and out in the open.
So I made a proposal to work on WHATWG standards budgeted for three months, and after handing over a more detailed proposal it was accepted. Putting the work in the public domain and in repositories on GitHub, as I had been doing, was encouraged. I can heartily recommend them, whether you’re looking for an organisation to support this kind of work or looking to sponsor such an organisation.