Anne van Kesteren


Five years at Mozilla today. I’m very humbled to be able to push the web forward with so many great people and leave a tiny imprint on web architecture and the way the web platform gets standardized. Being able to watch from the sidelines as more people are empowered to be systems programmers and as graphics for the web is reinvented is hugely exciting. It’s a very tough competitive landscape, and Firefox is very much the underdog, but despite that Mozilla manages to challenge rather fundamental assumptions about web browsers and deliver on them.

And ultimately, I think that is a huge part of what makes the web platform so great. Multiple independent implementations competing with each other and thereby avoiding ossification of bugs, vendor lock-in, platform lock-in, software monoculture, and overall reluctance to invest in fundamentally improving the web platform. Really grateful to be part of all this.

Testing standards

At a high level, standards organizations operate in similar ways. A standard is produced and implementations follow. Taking a cue from software engineering, WHATWG added active maintenance to the mix by producing Living Standards. The idea being that just like unmaintained software, unmaintained standards lead to security issues and shaky foundations.

The W3C worked on test suites, but never drove that work to the point of test-driven development or of ensuring the test suites fully covered the standards. The WHATWG community produced some tests, e.g., for the HTML parser and the canvas API, but there was never a concerted effort. The idea being that as long as you have a detailed enough standard, interoperable implementations will follow.

Those with a background in quality assurance, and those who might have read Mark Pilgrim’s Why specs matter, probably know this to be false, yet it has taken a long time for tests to be considered an essential part of the standardization process. We’re getting there in terms of acceptance, which is great as crucial parts of the web platform, such as CSS, HTML, HTTP, and smaller things like MIME types and URLs, all have the same kind of long-standing interoperability issues.

These interoperability issues are detrimental to all constituencies:

Therefore I’d like everyone to take this far more seriously than they have been. Always ask about the testing story for a standard. If it doesn’t have one, consider that a red flag. If you’re working on a standard, figure out how you can test it (hint: web-platform-tests). If you work on a standard that can be implemented by lots of different software, ensure the test suite is generic enough to accommodate that (shared JSON resources with software-specific wrappers have been a favorite of mine).
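
As a concrete sketch of that shared-JSON pattern (the JSON file name and its fields are hypothetical; test(), promise_test(), and assert_equals() are the web-platform-tests harness functions):

```js
// Browser-side wrapper around a shared JSON resource. A non-browser
// implementation would ship its own thin wrapper around the same file
// instead of using testharness.js.
promise_test(async () => {
  const tests = await (await fetch("resources/mime-types.json")).json();
  for (const { input, output } of tests) {
    test(() => {
      // Here the implementation under test is the browser's own MIME type
      // handling, as exposed through Blob.
      assert_equals(new Blob([], { type: input }).type, output);
    }, `Blob with type ${input}`);
  }
}, "Loading shared test data");
```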

Effectively, this is another cue standards need to take from modern software development practices. Serious software projects require tests to accompany changes; standards should too. Ensuring standards, tests, and implementations are developed in tandem results in a virtuous cycle of interoperability goodness.

(It would be wrong not to acknowledge Ecma’s TC39 here, who produced a standard for JavaScript that is industry-leading with everything derived from first principles, and also produced a corresponding comprehensive test suite shared among all implementations. It’s a complex standard to read, but the resulting robust implementations are hard to argue with.)

Using GitHub

I’ve been asked a few times how I stay on top of GitHub:

This works well for me; it may work for you.

What I miss is Bugzilla’s needinfo. I could see this as a persistent notification that cannot be dismissed until you go into the thread and perform the action asked of you. What I also miss on /notifications is the ability to see if someone mentioned me in a thread. I often want to unsubscribe based on the title, but I don’t always do it out of fear of neglecting someone.


Dara was born.

MIME type interoperability

In order to figure out data: URL processing requirements, I have been studying MIME types (also known as media types) lately. I thought I would share some examples that yield different results across user agents, mostly to demonstrate that even simple things are far from interoperable:

These are the relatively simple issues to deal with, though it would have been nice if they had been sorted by now. The MIME type parsing issue also looks at parsing for the Content-Type header, which is even messier, with different requirements for its request and response variants.
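
One low-tech way to observe such differences yourself is to round-trip a MIME type through a data: URL in each browser and compare what comes back. The input below is just an arbitrary example of unusual casing, not one of the cases referred to above:

```js
// Runnable from a browser console (which allows top-level await): fetch a
// data: URL and see how the browser reflects the MIME type it parsed.
const response = await fetch("data:TEXT/HTML;CHARSET=GB2312,test");
// How much normalization happens (lowercasing, parameter handling, and so
// on) is exactly the kind of detail user agents have disagreed on.
console.log(response.headers.get("Content-Type"));
console.log((await response.blob()).type);
```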

Browser differences in IDNA ToASCII processing between ASCII and non-ASCII input

At the moment the URL Standard passes the domain of certain schemes through the ToASCII operation for further processing. I believe this to be in line with how the ToASCII operation is defined. It expects a domain, whether ASCII or non-ASCII, and either returns it normalized or errors out.

Unfortunately, it seems like the web depends on ToASCII effectively being a no-op when applied to ASCII-only input (at least in some cases), judging from how browsers behave in these tests:

| Input | Description | ToASCII Expected | Chrome 58 dev | Edge 14.14393 | Firefox 54.0a1 | Safari TP 23 |
| --- | --- | --- | --- | --- | --- | --- |
| x01234567890123456789012345678901234567890123456789012345678901x | A domain that is longer than 63 code points. | Error, unless VerifyDnsLength is passed. | No error. | No error. | No error. | No error. |
| x01234567890123456789012345678901234567890123456789012345678901† | (as above) | (as above) | Error. | Error. | Error. | Error. |
| aa-- | A domain that contains hyphens at the third and fourth position. | Error. | No error. | No error. | No error. | No error. |
| a†-- | (as above) | (as above) | Error. | No error, returns input. | No error, returns xn--a---kp0a. | Error. |
| -x | A domain that begins with a hyphen. | Error. | No error. | No error. | No error. | No error. |
| -† | (as above) | (as above) | Error. | No error, returns input. | No error, returns xn----xhn. | Error. |

There is also a slight difference in error handling: rather than returning the input, Chrome returns the input percent-encoded.

(I used the Live URL Viewer and Live DOM Viewer to get these results, typically prefixing the input with https://.)
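
The shorter inputs from the table can also be reproduced from a browser console with the URL API, which is effectively what prefixing them with https:// does:

```js
// \u2020 is †; new URL() throws on inputs the browser's host parser rejects.
for (const input of ["aa--", "a\u2020--", "-x", "-\u2020"]) {
  try {
    console.log(input, new URL("https://" + input + "/").host);
  } catch (error) {
    console.log(input, "error:", error.message);
  }
}
```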

Using SSH securely

We have been moving WHATWG standards to be deployed through GitHub and Travis CI. This way we can generate snapshots for each commit, which in turn makes it easier to read older, obsolete copies of the standard. The final step in our build process moves the resources to the server using SSH.

Unfortunately we have been doing this in a bad way. The documentation from Travis suggests using ssh_known_hosts, and lots of other documentation suggests passing -o StrictHostKeyChecking=no as an argument. The risks of these approaches, and their secure alternatives, are unfortunately not always outlined. Both of these open you up to network attackers: you effectively do not know what server you end up connecting to. It could be the one you know, or it could be an attacker's. Note also that in the case of Travis's ssh_known_hosts it is not even trust-on-first-use; it is trust-on-each-use (i.e., trust-always). You can be attacked each time Travis runs. I filed issue 472, since what we need is trust-never, as the network is unsafe.

As far as I can tell this is not a big deal for WHATWG standards, since they are completely public and the worst that could happen is that an attacker stops publication of the standard, which they could do even if we had a proper setup (by terminating the network connection). However, it does set a bad example, and we would not want folks who copy our code to have to be aware of its limitations. It should just be good.

The easiest way I have found to do Travis deployments securely is to create a known_hosts resource and pass -o UserKnownHostsFile=known_hosts as an argument (thanks Tim). That ensures ssh (or scp, or rsync -rsh="ssh") will not prompt. However, rather than not prompting because you told it to bypass a security check, it is not prompting because everything is in order. Of course, this does require that the contents of known_hosts are obtained out-of-band from a secure location, but you need to be doing that anyway.
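
For illustration, a deployment step along these lines might look as follows (the host, user, and paths are placeholders, not the actual WHATWG setup):

```sh
# known_hosts is committed to (or decrypted into) the repository, with its
# contents obtained out-of-band from a secure location.
scp -o UserKnownHostsFile=known_hosts -r output/ deploy@example.org:/var/www/standard/
# or, with rsync:
rsync -e "ssh -o UserKnownHostsFile=known_hosts" -r output/ deploy@example.org:/var/www/standard/
```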

The XMLHttpRequest Standard now makes use of that secure deployment process and the remainder of WHATWG standards will soon follow.

With that, if any of the following is true, you probably need to fix your setup:

Standards on GitHub

A couple of years ago I wrote Contributing to standards, and it is worth noting how much better everything has gotten since then. Basically all due to GitHub and standards groups such as TC39, WHATWG, and W3C embracing it. You can more easily engage with only those standards you are interested in. You can even subscribe to particular issues that interest you and disregard everything else. If you contrast that with mailing lists, where you likely get email about dozens of standards and many issues across them, it's not hard to see how the move to GitHub has democratized standards development. You will get much further with a lot less lost time.

Thanks to pull requests, changing standards is easier too. Drive-by grammar fixes are a thing now, and "good first bug" issue labels help you get started with contributing. Not all groups have adopted one-repository-per-standard yet, which can make it a little trickier to contribute to CSS, for instance, but hopefully they'll get there too.

(See also: my reminder on the WHATWG blog that WHATWG standards are developed on GitHub.)


Andrew pointed out webrender yesterday. A new rendering technology for CSS from the folks who are reinventing C++ with Rust and browsers with Servo. There is a great talk about this technology by Patrick Walton. It is worth watching in its entirety, but the examples start 26 minutes in. The key insight is that using a retained mode approach to rendering CSS is much more efficient than an immediate mode approach. The latter is what browsers have been using thus far; it makes sense for the canvas element (which offers an immediate mode rendering API), but is apparently suboptimal when talking to the GPU. Patrick mentioned this was pointed out back in 2012 by Mark J. Kilgard and Jeff Bolz from NVIDIA in a paper titled GPU-accelerated Path Rendering: "We believe web browsers should behave more like video games in this respect to exploit the GPU."
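
As a rough way to see the distinction in ordinary web platform terms (this is only an analogy for the two models, not how webrender itself works, and it assumes a document containing a <canvas> element):

```js
// Immediate mode: script re-issues the drawing commands every single frame.
const context = document.querySelector("canvas").getContext("2d");
function frame(time) {
  context.clearRect(0, 0, context.canvas.width, context.canvas.height);
  context.fillRect((time / 10) % context.canvas.width, 50, 20, 20);
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);

// Retained mode: script describes what should be on screen once; the engine
// keeps that description and can repaint or re-composite it on its own.
const box = document.createElement("div");
box.style.cssText =
  "width: 20px; height: 20px; background: black; transition: transform 3s linear";
document.body.append(box);
box.getBoundingClientRect(); // flush style so the transition below runs
box.style.transform = "translateX(200px)";
```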

The reason this is extremely exciting is that, if this pans out, layout will finally get the huge boost in speed that JavaScript got quite a while ago now. Going from not-even-sixty frames per second to hundreds of frames per second is just fantastic and also somewhat hard to believe. Always bet on the web?

Fetch Standard 101

The WHATWG Fetch Standard is an essential part of the browser networking subsystem. Basically any API that involves networking (e.g., <img src>, <a href> (through navigation), XMLHttpRequest, @font-face, WebSocket) goes through Fetch. The exception is WebRTC's RTCDataChannel, and perhaps not surprisingly it has a security issue. The fetch() API is also defined in terms of Fetch, and the similar naming has led to some confusion. Fetch is basically the subsystem and fetch() is one of the many APIs that expose (part of) the capabilities of Fetch.

The basic setup is that an API prepares a request, which consists of a URL and a number of variables, feeds that to Fetch, and at some point gets a response, which consists of a body and a number of variables. Fetch takes care of content security policies, referrer policies, invoking service workers, credentials, cache modes, CORS, HSTS, port blocking, default headers (and whether they get exposed to service workers), X-Content-Type-Options: nosniff, and more. In part Fetch defines essential infrastructure such as CORS, redirect handling, port blocking, and overall terminology, and in part it serves as glue between the now numerous standards that together define the browser networking subsystem.
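
To make that concrete, here is roughly what it looks like through fetch(), the API that mirrors Fetch most directly (the URL is a placeholder, and this assumes a context that permits top-level await, such as a module or the browser console):

```js
// The request: a URL plus a number of variables Fetch cares about.
const response = await fetch("https://example.com/data.json", {
  method: "GET",
  mode: "cors",               // CORS is handled inside Fetch
  credentials: "same-origin", // whether credentials get attached
  redirect: "follow",         // redirect handling is defined by Fetch too
  headers: { "Accept": "application/json" },
});

// The response: a body plus a number of variables.
console.log(response.status, response.type, response.url);
console.log(await response.json());
```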

E.g., for redirects, Fetch defines which headers are preserved, whether a request body gets cloned and reused (it usually does), how the referrer policy gets updated, and what happens with redirects to non-HTTP schemes (they fail, except sometimes when navigating); the actual connection opening and request transmission are largely left to TLS and HTTP. And as a consequence of all APIs using Fetch, redirects behave the same throughout. There are exceptions to the rule of course, but redirects are no longer a problem we need to solve on a per-API basis. And when you extrapolate this redirects example to content security policies, referrer policies, service workers, and all the other little things Fetch takes care of, it should be clear why it is essential.
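
Again through fetch(), and assuming the placeholder URL actually redirects, the redirect-related variables look like this:

```js
// "follow" (the default): Fetch follows the redirect and records that it did.
const followed = await fetch("https://example.com/old-path", { redirect: "follow" });
console.log(followed.redirected, followed.url); // true, plus the post-redirect URL

// "error": the fetch is rejected as a network error when a redirect occurs.
await fetch("https://example.com/old-path", { redirect: "error" }).catch(() => {});

// "manual": a response comes back, but as a type page script can barely inspect.
const manual = await fetch("https://example.com/old-path", { redirect: "manual" });
console.log(manual.type); // "opaqueredirect"
```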

(See Fetching URLs for an earlier introduction.)