The HTTPS-only experience

EFF recently announced that “We’re Halfway to Encrypting the Entire Web.” As a celebration of this achievement I’ve started an experiment: as of yesterday, no unencrypted HTTP traffic reaches this machine*.


Even though securing your web site is easier than ever, it will take some time before everybody encrypts. But many sites support both secure and insecure HTTP transfer, and a few tricks can tilt the scales and improve the current experience:

  • The HTTPS Everywhere browser extension ensures that you use HTTPS whenever possible on thousands of sites.
  • Editing the URL to add “https://” at the start works for some sites. If you get the dreaded certificate error page make sure to report it, and only add an exception if … well, that subject is too big to get into here. Just don’t.

Many sites have excellent HTTPS support and have enabled HTTP Strict Transport Security (HSTS). In short, if your browser knows that a domain supports HTTPS (either from a previous visit or because the browser shipped with it preloaded), you can simply type the domain name in the URL field and the browser will default to fetching the secure version of the site. At the other end of the spectrum, I can still visit sites with no HTTPS support at all if I really need to by using Tor, which provides privacy but not integrity or authenticity.
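Whether a site opts into HSTS is visible in its response headers. A minimal sketch, using a canned header dump in place of a real `curl --silent --head` response (the domain and max-age value below are made-up examples):

```shell
# Sample response headers standing in for `curl --silent --head https://example.com/`.
cat > headers.sample <<'EOF'
HTTP/2 200
content-type: text/html; charset=utf-8
strict-transport-security: max-age=31536000; includeSubDomains; preload
EOF

# If this header is present, the browser will refuse plain HTTP for the domain
# (and, with includeSubDomains, its subdomains) for max-age seconds.
grep -i '^strict-transport-security' headers.sample
```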

pacman stopped working after setting this up. It turns out the package database is fetched using unencrypted HTTP by default, but it was easy to generate a new list of only HTTPS mirrors.
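The filtering itself is a one-liner, assuming the stock mirrorlist format; a sketch against sample data (in practice the input would be /etc/pacman.d/mirrorlist):

```shell
# Sample mirrorlist entries; the real file lives at /etc/pacman.d/mirrorlist.
cat > mirrorlist.sample <<'EOF'
Server = http://mirror.example.org/archlinux/$repo/os/$arch
Server = https://mirror.example.net/archlinux/$repo/os/$arch
EOF

# Keep only mirrors reachable over HTTPS.
grep '^Server = https://' mirrorlist.sample > mirrorlist.https
cat mirrorlist.https
```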

Some sites have a strange HTTPS setup. The BBC only supports HTTPS for the front page, which is just weird: why go to all that trouble for 1% of the gain? Other sites require authentication to access them over HTTPS, possibly not realising that setting up HTTPS for everyone would be easier.

My home router runs DD-WRT, and the web interface for it is only accessible by HTTP by default. This is easy to configure though.

OCSP uses HTTP (at least in Firefox), since the returned file signature has to be checked separately anyway. So if I go to about:config, change security.OCSP.require to true, and visit a site I haven’t seen for a while, I get an error message like this:

An error occurred during a connection to <site>. The OCSP server experienced an internal error. Error code: SEC_ERROR_OCSP_SERVER_ERROR

The solution is to either allow OCSP queries specifically or to allow HTTP to specific hosts. Let’s see what can be done…
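One way to allow HTTP to specific hosts with the same Puppet firewall module used below is an ACCEPT rule sorted before the reject. This is only a sketch: the responder hostname is a made-up placeholder (substitute the responders your CAs actually use), and hostnames given to `destination` are resolved when the rule is applied:

```puppet
# Sketch: allow plain HTTP to one (hypothetical) OCSP responder.
# Priority 099 sorts this rule before the '100 drop …' reject rule.
firewall { '099 allow OCSP queries to ocsp.example-ca.com':
  chain       => 'OUTPUT',
  proto       => 'tcp',
  dport       => 80,
  destination => 'ocsp.example-ca.com',
  action      => 'accept',
}
```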

The Steam client uses insecure HTTP for both game updates and the store pages. There doesn’t seem to be any way to force it to use HTTPS, so I have submitted a suggestion to Valve through the official channel.

These have been the only major hassles so far. The only other sites I really can’t get to work over HTTPS are various hold-outs like Wikia and the BBC.


The change was a simple addition to my Puppet manifest:

firewall { '100 drop insecure outgoing HTTP traffic':
  chain  => 'OUTPUT',
  dport  => 80,
  proto  => 'tcp',
  action => 'reject',
}

The resulting rule:

$ sudo iptables --list-rules OUTPUT | grep ^-A
-A OUTPUT -p tcp -m multiport --dports 80 -m comment --comment "100 drop insecure outgoing HTTP traffic" -j REJECT --reject-with icmp-port-unreachable

* Technical readers will of course notice that the configuration simply blocks port 80, while HTTP can be served on any port. The configuration wasn’t meant as a safeguard against absolutely every way unencrypted HTTP content could be fetched, but rather focused on the >99.9% of the web which serves unencrypted content (if any) on port 80. I would be interested in any easy solutions for blocking unencrypted HTTP across the board.

Why I still contribute to Stack Overflow

This is in response to Michael T. Richter’s excellent critique of Stack Overflow. While I share some of the concerns for the problems mentioned there, I don’t believe they are quite as detrimental to the quality of the site as he appears to believe.

From the article:

I’m not a Java programmer. I’ve only ever briefly programmed in Java professionally. I hated the experience and I hate the language. I certainly don’t consider myself a Java expert. Yet I managed to get the bulk of my points from Java. How is this possible?

It’s possible because I did what many of the people whose questions I answered (and got points for) should have done for themselves: I saw a simple Java question, hit Google, read briefly, then synthesized an original answer.

This is what I read:

I know Java well enough to have used it professionally. Considering the recent explosion of access to computers and the Internet, the vast numbers of students and hobbyists who only ever touch on Java very briefly to solve a highly specific problem, the low barrier to entry relative to most other languages, the tiny number of people who end up specialising in Java rather than any of the other languages in existence, and the fact that the language hasn’t been around long enough for there to be large numbers of people with many decades of experience, that single point probably makes me more knowledgeable than 90% of Stack Overflow users in Java.

I freely admit that I do pay attention to the points. They are the best way I have to figure out if I am actually getting better at submitting useful questions and answers. People’s motivation for answering doesn’t matter one jot to me as long as they write interesting or useful on-topic content. How could it? Motivations are completely off topic, and would be edited out like the plague.

There’s room for improvement (and I wouldn’t even say “obviously”, like for so many other sites), but SO/SE is still miles ahead of anything we’ve ever had. It’s the Google of Q&A sites: Terrible for super specific issues (which are likely to be closed since they are not interesting to anyone but the author), but awesome for the rest.

Much of the rest of the article seems to be simply a complete lack of faith in other human beings. For example:

  • “If you’re going for points (and that’s the entire raison d’être for gamification!), are you going to waste time like that for 60 points when you could fit in a dozen 460-point answers? Of course not! You’re going to go wherever the points are. And the points are the low-hanging fruit of trivial questions from popular languages.” I personally think the Stack Overflow answer rate is plenty of proof that lots of people are willing to help even when the answer is likely to get very few votes.
  • “There’s an old cliché in English: give a man a fish, he eats for a day; teach a man to fish, he eats for a lifetime. StackOverflow is filled to the brim with people giving fishes.” After using other web sites which were much more geared to fish donations (almost every linear forum, IRC channel and mailing list I ever used) I find it refreshing that adding links to more resources (even on other people’s posts), where people can learn how to fish, is extremely easy on Stack Overflow. I use this feature extensively, having far too many times googled a problem only to find a tip in a random forum reply with absolutely no rationale or link to more information. This is something which could possibly be pushed even more by the SO system, but it’s already very good.

And finally, the end paragraph: “How about learning?” Great! “You know, that thing that puts information in your head that you can apply later at need.” “Use Google.” About 30 times a day, and usually the best result is on Stack Overflow or Stack Exchange. “Use Wikipedia (if you must).” Except it’s pretty terrible to learn about practical software development or actual development issues. “Use RosettaCode for code examples.” Impressive collection, but I don’t think it’s very relevant for real-world development or easily searchable. And the MediaWiki editor is a pain compared to SO. “Engage with other users of the tools you use in the form of user groups, mailing lists, web forums, etc.” Vastly inferior technology to solve the exact same problem. No thanks. “Learn foundational principles instead of answers to immediate questions.” Stack Overflow is a Q&A site, not a substitute for work experience or college degrees.

SMS authentication fail

A service I use recently added two step authentication. Hooray for taking security seriously! Unfortunately the setup page is full of fail:

  • No indication whether the trunk prefix should be included in the number. I tried both with and without one, twice, but never received a single message. It’s not obvious that anyone would think to try both, especially people who have always used one or the other.
  • Why is Google Authenticator so massively emphasized over SMS? Granted, many rich* people have a smartphone, but there is no indication why using a third party app is preferable to the solution which works on every mobile phone capable of connecting to an existing network. YAGNI, and if GitHub gets by with SMS then it’s good enough for me.
  • Why is there a separate “Send SMS” button? Surely by the time the “Verify Code” page shows up you should have sent the message.
  • The first page contains an obvious button to go to the next step. The second page contains three differently styled button-ish elements to show download links for one app and two plain links to go to the next page. The third page (after following the “use Two Step Authentication via SMS” link) contains one left-aligned and one right-aligned button. I haven’t got to the last page yet; I just hope it isn’t too crazy.
  • No relevant help page in sight.
  • No context-sensitive support link. For a new feature of such importance and with the possibility of locking people out pending manual intervention I’d expect more direct support integration.
  • Most search results for “sms authentication” in their forums seem to revolve around problems deactivating this feature. Sounds like it’s simply not ready yet.

PS: I’m using SMS codes for several other international services, and they all work fine.

* If you’re reading this, then you are very likely within the 10% richest people on the Earth.

Quit social job sites

I just quit two social job sites, and I don’t expect I’ll start using another one. After several years’ membership I find that my occasional hillbilly-style urge to peek through the work-life curtains of acquaintances does not outweigh the cons:

  • Spam, Spam, Spam, Spam, lovely Spam! Lovely Spam! I hesitate to think that I’ve received even half a dozen useful messages through these sites, and those would have been just as useful via email. That means even the occasional junk mail pushes the spam ratio into the >90% range.
  • Giant productivity sink. Did you ever have “100% completion” on your profile? How long did it take until it was back at 80%? And how many giant lists of people, skills and options have you gone through to “optimize” your participation?
  • Sites selling my information to third parties really grind my gears. With “we can do anything we like, anytime we want” legalese, “You are not being watched” propaganda and the impossibility of keeping up with opt-out data sharing schemes, it’s hopeless to keep any kind of privacy.

This list is starting to look a lot like the reasons for leaving Facebook. So good riddance! Those who know where to find me (without having bought my contact information) know that they’re welcome to.

Last.fm to Spotify migration

When Last.fm announced on 2012-12-13 that they were pulling out of Switzerland in mid-January, I requested a refund (unsuccessfully so far) and started looking around for an alternative immediately. Spotify seems to fit the bill nicely, and barring any glaring privacy or stability issues it’s probably worth a threefold price increase.

But how to migrate? Last.fm has XSPF and tab-separated values export functionality at /user/<username>/library/loved; great! Except that didn’t work the first half a dozen times I tried, and Ivy, the most promising import service, only supports some iTunes format and comma-separated values, so I ended up making a tool to convert the XML to CSV. Of course, after finishing that I tried again, and now the XSPF export worked, so I created another project to convert XSPF to CSV. It was a nice hacking exercise to ensure that the output is parseable CSV even with quotes and high-Unicode characters in the input.
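The quoting rule in question is small enough to sketch: per RFC 4180, an embedded quote is doubled and the whole field is wrapped in quotes. The function name here is just for illustration, not from either converter:

```shell
# RFC 4180-style field quoting: double embedded quotes, then wrap in quotes.
csv_escape() {
  printf '"%s"' "$(printf '%s' "$1" | sed 's/"/""/g')"
}

csv_escape 'Weird "Al" Yankovic'; echo   # → "Weird ""Al"" Yankovic"
csv_escape 'Café Tacvba'; echo           # UTF-8 passes through untouched
```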

Ivy kept telling me I didn’t select a track column in the second step, but would gladly accept the pasted result from xclip -in favorites.csv without even asking about columns. Out of 1222 tracks it found 734 (60%) on Spotify, which is not too terrible considering the impressive collection of obscure tracks on the former.

Facebook won’t work with NoScript extension

Looks like they’re doing something broken when they detect NoScript: some links (for example, to other people’s profiles) get an extra ?_fb_noscript=1 appended, and the page includes a <meta http-equiv=refresh content="0; URL=/people/JohnDoe/12345678?_fb_noscript=1" />, which results in an endless loop of reloading the same page.