Anne van Kesteren

Enabling HTTPS and HSTS on DreamHost

DreamHost recently enabled Let’s Encrypt support. This is great and makes HTTPS accessible to a great many people. For new domains there is a simple HTTPS checkbox; it could not be easier. For existing domains you need to make sure the domain’s “Web Hosting” is set to “Fully Hosted” and there are no funny redirects. If you have an Internationalized Domain Name it appears you are out of luck. If you have a great many subdomains (for which you should also enable HTTPS), beware of rate limits and the lack of wildcard certificate support.

DreamHost manages the rate limits by rescheduling requests that do not succeed for a week later. Coupled with the fact that Let’s Encrypt certificates are relatively short-lived, this places an upper bound on the number of subdomains you can have (likely around sixty). If you manage certificate requests from Let’s Encrypt yourself you could of course share a certificate across several subdomains, thereby increasing the theoretical limit to six thousand subdomains, but there is no way that I know of to do this through DreamHost.
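
For the curious, the rough arithmetic behind those numbers, assuming the rate limits at the time (reportedly five certificates per registered domain per week, up to a hundred names per certificate): a certificate is valid for ninety days, roughly twelve to thirteen weeks, so at five single-name certificates a week you can keep about sixty subdomains renewed; put a hundred names on each certificate and the theoretical ceiling becomes roughly six thousand.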

To make sure visitors actually get on HTTPS, use this in your .htaccess for each domain (assuming you use shared hosting):

RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

(As long as domains are not using rewrite rules of their own you can in fact share this across many domains by placing it in a directory above the domains, but you will need to copy it for each domain that does use rewrite rules. RewriteOptions InheritDownBefore would avoid the copying, but it requires Apache 2.4.8 and DreamHost sits on Apache 2.2.22, although they claim they will update this in the near future. (It is very much unclear why the DreamHost wiki is still without HTTPS.))

The next thing you want to do is enable HSTS by adding this to your .htaccess (first make sure all your subdomains are on HTTPS too):

Header set Strict-Transport-Security "max-age=31415926; includeSubDomains; preload" env=HTTPS

The preload directive is non-standard, but important, since once this is all up and running you want to submit your domain for HSTS preloading. You can remove the preload directive after submitting your domain, if you care for the bytes (or standards). With that done, and thanks to DreamHost’s upgraded HTTPS story, you will get an A+ on the SSL [sic] Server Test.

Custom elements no longer contentious

I was in San Francisco two weeks ago. Always fun to see friends and complain about how poorly Caltrain compares to trains in most civilized countries. Custom elements day was at Apple, which you cannot reasonably get to via public transport from San Francisco. The “express” Caltrain to Mountain View and a surged Uber is your best bet. On the way back you can count on Ryosuke, who knows exactly how much gas is in his car well after the meter indicates it’s depleted.

While there are still details to be sorted out with both custom elements and shadow DOM, we have made major headway since last time, getting cross-browser agreement on the contentious issues.

For those paying attention, none of this provides a consistent world view throughout. We gave up on that and hope that the combination of the parser doing things synchronously and other contexts not doing that will be enough to get folks to write their components in a way that is resilient to different developer practices.

Three years at Mozilla

I started in London during a “work week” of the Platform group. I had also just moved to London the weekend prior so everything was rather new. I don’t remember much from that week, but it was a nice way to get to know the people I had not met yet through standards and figure out what things I could be contributing to.

Fifteen months later I moved to Switzerland to prepare for the arrival of Oscar, and Mozilla has been hugely supportive of that move. That was so awesome. Oscar is too, of course, and might I add he is a little bigger now and able to walk around the house.

Over the years I have helped out with many different features that ended up in Gecko and Servo (the web engines Mozilla develops) through a common theme: standardizing the way the web works to the best of my ability, in the form of answering questions, working out fixes to standards such as the security model of the Location and Window objects, and helping out with the development of new features such as “foreign fetch”. I hope to continue doing this at Mozilla for many years to come.

W3C forks HTML yet again

The W3C has forked the HTML Standard for the nth time. As always, it is pretty disastrous:

So far this fork has been soundly ignored by the HTML community, which is as expected and desired. We hesitated to post this since we did not want to bring undeserved attention to the fork. But we wanted to make the situation clear to the web standards community, which might otherwise be getting the wrong message. Thus, proceed as before: the standards with green stylesheets are the up-to-date ones that should be used by implementers and developers, and referred to by other standards. They are where work on crucial bugfixes such as setting the correct flags for <img> fetches and exciting new features such as <script type=module> will take place.

If there are blockers preventing your organization from working with the WHATWG, feel free to reach out to us for help in resolving the matter. Deficient forks are not the answer.

— The editors of the HTML Standard

Firefox OS is not helping the web

Mozilla has been working on Firefox OS for quite a while now and ever since I joined I have not been comfortable with it. Not with the high-level goal of turning the web into an OS (that seems great), but with the misguided approach we are taking to get there.

The problem with Firefox OS is that it started from an ecosystem parallel to the web. Packaged applications written using HTML, JavaScript, and CSS. Distributed through an app store, rather than a URL. And because Mozilla can vet what goes through the store, these applications have access to APIs we could never ship on the web due to the same-origin policy.

This approach was chosen in part because the web does offline poorly, and in part because certain native APIs could not be made to work for the web and alternatives were not duly considered. The latest thinking on Firefox OS does include URLs for applications, but the approach still necessitates a security model parallel to that of the web. Implemented through a second certificate authority system, for code. With Mozilla as sole authority, and a “plan” to decentralize that over time.

As stated, the reason is APIs that violate the same-origin policy, or more generally, go against the assumed browser sandbox. E.g., if Mozilla decides your code is trustworthy, you get access to TCP and can poke around the user’s local network. This is quite similar to app stores, where typically a single authority decides what is trustworthy and what is not. With app stores the user has to install the application, but has the expectation that the authority (e.g., Apple) only distributes trustworthy software.

I think it is wishful thinking that we could get the wider web community to adopt a parallel certificate authority system for code. The implications for the assumed browser sandbox are huge. Cross-site scripting vulnerabilities in sites with extra authority suddenly result in the user’s local network being compromised. If an authority made a mistake during code review, the user will be at far more risk than usual.

The certificate authority system the web uses today basically verifies that when you connect to example.com, it actually is example.com, and all the bits come from there. And that is already massively complicated and highly political. Scaling that system, or introducing a parallel one as Firefox OS proposes, to work for arbitrary code seems incredibly farfetched.

What we should do instead is double down on the browser. Leverage the assumed browser sandbox. Use all the engineering power this frees up to introduce new APIs that do not require the introduction and adoption of a parallel ecosystem. If we want web email clients to be able to connect to arbitrary email servers, let’s back JMAP. If we want to connect to nearby devices, FlyWeb. If we want to do telephony, let’s solidify and enhance the WebRTC, Push, and Service Worker APIs to make that happen.

There are many great things we could do if we put everyone behind the browser. And we would have the support of the wider web community. In the sense that our competitors would feel compelled to also implement these APIs, thereby furthering the growth of the web. As we have learned time and again, the way to change the web is through evolution, not revolution. Small incremental steps that make the web better.

Update on standardizing shadow DOM and custom elements

There has been revived interest in standardizing shadow DOM and custom elements across all browsers. To that end we had a bunch of discussion online, met in April to discuss shadow DOM, and met earlier this month to discuss custom elements (custom elements minutes). There is agreement around shadow DOM now. host.attachShadow() will give you a ShadowRoot instance. And <slot> elements can be used to populate the shadow tree with children from the host. The shadow DOM specification will remain largely unchanged otherwise. Hayato is working on updates.
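
A minimal sketch of what that looks like in practice (the #host selector is purely illustrative, and the exact argument shape of attachShadow() was still being finalized at the time; this uses the form that eventually shipped):

// Assumes an element with id="host" already in the document (illustrative).
let host = document.querySelector("#host");
let root = host.attachShadow({ mode: "open" }); // returns a ShadowRoot instance
root.innerHTML = "<p>From the shadow tree: <slot></slot></p>";
// The host's own children are rendered at the <slot>.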

This is great: we can start implementing these changes in Gecko and ship them. Other browsers plan on doing the same.

Custom elements, however, are somewhat further astray, with several pain points still to be resolved.

During a break in the meeting Maciej imagined various hacks that would meet the consistency and upgrade requirements, but none seemed workable on closer scrutiny, although an attempt will still be made. That and figuring out whether JavaScript needs to run during DOM operations will be our next set of steps. Hopefully with some more research a clearer answer for custom elements will emerge.

Still looking for alternatives to CORS and WebSocket

Due to the same-origin policy protecting servers behind a firewall, cross-origin HTTP came to browsers as CORS and TCP as WebSocket. Neither CORS nor WebSocket address the problem of accessing existing services over protocols such as IRC and SMTP. A proxy of sorts is needed.

We came up with an idea whereby the browser would ship with an HTTP/TCP/UDP API by default. Instead of opening direct connections, a user-configurable public internet proxy would be used, with a default provided by the browser. The end goal would be having routers announce their own public internet proxy to reduce latency and increase privacy. Unfortunately routers have no way of knowing whether they are connected to the public internet, so this plan falls short. (There were other concerns too, e.g. shipping and supporting an open proxy indefinitely has its share of issues.)

There might still be value in standardizing some kind of proxy for HTTP/TCP/UDP traffic that is selected by web developers rather than the browser. Similar to TURN servers in WebRTC. Thoughts welcome.
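
For comparison, this is roughly how web developers already pick their own TURN relays in WebRTC today; the idea would be something analogous for proxied HTTP/TCP/UDP traffic (the server URL and credentials below are of course made up):

let pc = new RTCPeerConnection({
  iceServers: [{
    urls: "turn:turn.example.net:3478", // developer-chosen relay, made-up hostname
    username: "user",
    credential: "secret"
  }]
});
// When a direct connection is not possible, media and data flow through the
// developer-selected relay rather than one picked by the browser.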

Statement regarding the URL Standard

The goal of the URL Standard is to reflect where all implementations will converge. It should not describe today’s implementations as that will not lead to convergence. It should not describe yesterday’s implementations as that will also not lead to convergence. And it should not describe an unreachable ideal, e.g., by requiring something that is known to be incompatible with web content.

This is something all documents published by the WHATWG have in common, but I was asked to clarify this for the URL Standard in particular. Happy to help!

Same-origin policy

The same-origin policy, sometimes referred to as SOP, is the foundation of the web platform’s somewhat flawed security model. Without a browser, https://untrusted.example/ (Untrusted) can access any number of servers through curl. It cannot however access any servers located behind a firewall. With a browser, Untrusted can fetch resources from servers accessible to the user visiting Untrusted. Therefore, when a browser is involved the reach of Untrusted is what Untrusted can reach through curl plus what the user can reach through curl. SOP prevents Untrusted from accessing the contents of resources on https://intranet.local/ (Intranet).

SOP also protects the contents of resources that depend on HTTP cookies and/or authentication (credentials). Most request contexts, such as img and script elements, include credentials in fetches by default. Thus if the user has stored credentials for https://credentialed.example/ (Credentialed), they will be included in outgoing fetches from Untrusted. (See ambient authority for why this might lead to problems.) Being able to access the contents of resources of Credentialed would be as problematic as accessing those of Intranet.

Because of SOP XMLHttpRequest has historically had a same-origin restriction. Reading the contents of resources of Credentialed and Intranet would be problematic. However, this also excludes access to notable non-Credentialed non-Intranet servers, such as https://example.com/ (Example). The problem with Example is that it cannot be distinguished from Intranet (private IPv4 address ranges are not reliably used). We invented CORS so that Untrusted can access the contents of resources on Example (and even on Credentialed and Intranet) as long as the resource opts in. To better understand CORS we first need to look at the historical request contexts.
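
A hedged sketch of that opt-in, using the hypothetical names from above: a script running on Untrusted fetches a resource from Example and only gets to read the response body if Example replies with the right header.

// Running on https://untrusted.example/. The response is only readable because
// https://example.com/ opts in with
//   Access-Control-Allow-Origin: https://untrusted.example
// (or Access-Control-Allow-Origin: *). Without that header the promise rejects
// with a network error and no contents are exposed to Untrusted.
fetch("https://example.com/data.json")
  .then(response => response.json())
  .then(data => console.log(data));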

At some point the same-origin policy did not exist and various requests could be made across origins, including credentials, leading to some leakage of Credentialed and Intranet data. Because of the network effects of the web these holes could not be fixed; they are now enshrined and part of the security model, with cross-origin images and form submissions as the classic examples.

Now, as should be clear from the above, what CORS enables is reading the contents of resources across origins (e.g. from Untrusted to Credentialed), to a far greater extent than the enshrined information leaks that already exist. CORS also enables the full power of XMLHttpRequest across origins. However, that is far more than what has traditionally been allowed through images and forms (e.g. custom methods and headers). Therefore requests that use the full power require a CORS preflight request. The CORS preflight request confirms that the URL understands CORS. The final response still has to include the relevant CORS headers.
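
And a sketch of a request that goes beyond what images and forms can do, again with the hypothetical names from above, which therefore triggers a preflight first:

// Running on https://untrusted.example/. The custom method and header mean the
// browser first sends an OPTIONS preflight to https://example.com/resource.
fetch("https://example.com/resource", {
  method: "PUT",
  headers: { "X-Custom": "value" }
});
// The preflight response has to allow all of it, e.g.:
//   Access-Control-Allow-Origin: https://untrusted.example
//   Access-Control-Allow-Methods: PUT
//   Access-Control-Allow-Headers: X-Custom
// and the final response to the PUT still needs Access-Control-Allow-Origin.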

Hopefully this makes it clear that the same-origin policy solves a real problem. And that we need CORS due to Credentialed, Intranet, the more powerful requests it allows, and the inability to distinguish Example from either Credentialed or Intranet. This is also the reason we do not have a TCP API. It would be great to have a solution that would remove the need for CORS and allow a TCP API, but if you think you have one and it involves asking the user, think again. And if you want to expand the information leaks (e.g. allowing document.styleSheets without CORS), reconsider. All the information leaks we have are enshrined bugs from the time JavaScript became a thing.

(Please understand that this is introductory material. It is simplified somewhat for brevity.)

Terminating a fetch

Not long after fetch() was introduced the question was raised of how to terminate a fetch. Of the various options the best seemed to be invoking a method on the returned promise (let f = fetch(…); f.terminate()). However, irrespective of which of the alternatives is chosen, what happens to the promise?

Kris Kowal has some rather interesting reading on the subject in A General Theory of Reactivity and cancelation.md. Promises/A+ has a repository with a bunch of open issues with useful discussion. After discussing it further with Domenic and Ben on IRC, rejection no longer feels like the natural solution. Explicit termination of a fetch will rarely share code with the generic handler for network errors. On Twitter Kris offers “[f]orever pending is ok”, which we might well go with.
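
To make the shape under discussion concrete (purely illustrative; no terminate() method actually exists on the promise returned by fetch()):

let f = fetch("/resource");
f.terminate(); // hypothetical: stop the network activity for this fetch
// The open question: does f now reject, or simply stay pending forever?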