BT VDSL modem linkjacking HTTP URLs

Yet another reason to install HTTPS Everywhere and NoScript: It looks like our BT Hub 4 is linkjacking all HTTP URLs:
NoScript blocking warning

Turns out they just want to show a useless landing page:

BT landing page

This wouldn’t be so annoying if it didn’t happen for every new IP that connects to the hub. Isn’t the proof of the pudding in the eating? If the rest of the web works, why is the landing page necessary?

Confessions of an ex(?) newbie

Months ago it hit me that I should properly ask forgiveness for my crimes committed against the IT community. I have, in no particular order:

  • Asked for help before searching.
  • Filed bugs with too little information.
  • Been dead sure of the source of the bug and completely wrong.
  • Used noob text “techniques” in chats. At least I never used FUCKING COLORED CAPS.
  • Participated in newsgroup flame wars.
  • Used frames on my website. *Shiver*
  • Vented frustration in bug reports.
  • Sent emails without reviewing content and formatting.

More practical HTTP Accept headers

Isn’t it time for user agents to start reporting in a more fine-grained manner which standards they support? The HTTP Accept header doesn’t provide enough information to know whether a document will be understood at all, and this can lead to quite a few hacks, especially on sites using cutting-edge technology such as SVG or AJAX.

As an example, take a look at the correspondence between the Firefox 1.5.0.1 Accept header (text/xml, application/xml, application/xhtml+xml, text/html;q=0.9, text/plain;q=0.8, image/png, */*;q=0.5) and the support levels reported at Web Devout’s Web browser standards support page or Wikipedia’s Comparison of web browsers.
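For concreteness, here is a minimal sketch of how such an Accept header breaks down into media types and q-values (it ignores type parameters other than q, which a full parser would also have to handle):

```javascript
// Parse an HTTP Accept header into media types sorted by preference (q-value).
function parseAccept(header) {
  return header
    .split(",")
    .map(function (part) {
      var pieces = part.trim().split(";");
      var q = 1.0; // q defaults to 1 when absent
      pieces.slice(1).forEach(function (param) {
        var match = param.trim().match(/^q=([0-9.]+)$/);
        if (match) q = parseFloat(match[1]);
      });
      return { type: pieces[0].trim(), q: q };
    })
    .sort(function (a, b) { return b.q - a.q; });
}

var firefox = "text/xml, application/xml, application/xhtml+xml, " +
              "text/html;q=0.9, text/plain;q=0.8, image/png, */*;q=0.5";
parseAccept(firefox);
// text/xml, application/xml, application/xhtml+xml and image/png all carry
// the default q=1.0; text/html (0.9), text/plain (0.8) and */* (0.5) follow.
```

Note how little this actually says: four types tie at q=1.0, and nothing distinguishes “parses the syntax” from “renders it correctly”.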

Say that I want to serve a page with some SVG, MathML, CSS 2/3, and AJAX functionality. Each of these requires different hacks to ensure that non-compliant browsers don’t barf on the contents. For SVG and MathML, I can use CSS to put the advanced contents above the replacement images, or use e.g. SVG to provide replacement text. Both methods increase the amount of contents sent to the user agent, and are not really accessible: non-visual browsers get the same information twice.

For CSS, countless hacks have been devised to make sure sites display the same way in different browsers. So the user agent always receives more information than it needs.

AJAX needs to check for JavaScript support, then XMLHttpRequest support, and then must use typeof to switch between JS methods. This can easily triple the length of a script.
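The detection ladder of that era looked roughly like this (a sketch; the ActiveXObject ProgIDs are the ones commonly used for IE 5/6, and the scope parameter exists only to make the ladder testable outside a browser — in a page you would pass window):

```javascript
// Classic mid-2000s AJAX feature detection via typeof switching.
function createRequest(scope) {
  if (typeof scope.XMLHttpRequest !== "undefined") {
    return new scope.XMLHttpRequest(); // Mozilla, Opera, Safari
  }
  if (typeof scope.ActiveXObject !== "undefined") {
    try {
      return new scope.ActiveXObject("Msxml2.XMLHTTP"); // IE 6
    } catch (e) {
      try {
        return new scope.ActiveXObject("Microsoft.XMLHTTP"); // IE 5
      } catch (e2) {}
    }
  }
  return null; // no AJAX support: fall back to full page loads
}
```

Every script that wants to make a request has to drag this boilerplate along, which is exactly the bloat the post complains about.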

What if browsers could negotiate support with the server using e.g. namespace URIs, where these would reference either a standard, part of one, or some pre-defined support level? Poof: SVG 1.1 Tiny 95% supported, CSS 3 10% supported, DOM Level 2 80% supported, etc.

Obviously, the Accept header would be much longer, but the contents received could be reduced significantly. Also, I believe it would be easier for developers to use only Accept header switching than to learn all the hacks necessary for modern web development.
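Assuming the client’s preferences have already been extracted in descending q order, the server-side switching itself is simple. A sketch (simplified: only the */* wildcard is handled, not partial ranges like image/*):

```javascript
// Pick the best representation the server can offer, given the client's
// media types in descending preference order.
function negotiate(preferences, available) {
  for (var i = 0; i < preferences.length; i++) {
    if (preferences[i] === "*/*") {
      return available[0]; // client takes anything: serve our default
    }
    if (available.indexOf(preferences[i]) !== -1) {
      return preferences[i];
    }
  }
  return null; // nothing acceptable: respond 406 Not Acceptable
}

// A browser preferring XHTML gets XHTML; one that only takes HTML gets HTML.
negotiate(["application/xhtml+xml", "text/html"],
          ["text/html", "application/xhtml+xml"]);
// → "application/xhtml+xml"
```

The proposal in this post would extend the same mechanism from whole media types down to standards and support levels.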

I don’t really know if this is possible, but maybe this kind of Accept header could be separated into a special HTTP reply. This would contain the URIs of the potential contents, and the user agent would send a new HTTP GET request with the modified Accept header, reporting the support levels.

Note: This post is the same as sent in reply to an email by Allan Beaufour on the www-forms mailing list of W3C. The text has been slightly modified for legibility.

Follow-up: Added a bug report for Firefox and a short Wikipedia article.

The Internet generation

Sometime when I was younger, I heard that generations normally shift at 30-year intervals. So either I was just on the “old” side of the last shift, or generations are changing faster now than my parents’ lore would have it.

Yes, I’m talking about the Internet generation. I’m lucky enough to be part of the Nintendo generation, but the Internet was outside my scope until high school. Even then, it seemed a strange and geeky place, and the idea of its widespread adoption by “mom’n’dad” didn’t even manifest itself. Now they are braving the fields of the unknown, if not with enthusiasm, then at least with a slight interest. Myself, I’m online as long as my home or work computer is not having a well-earned break.

But still, I’ll never be part of the Internet generation. It is composed of those who do not yet know how to spell, but who know where to click in order to play “snakes and ladders” with their friends online. They will be the first to grow up in a world where the Internet is ubiquitous.

How does this bode for the future of the Internet? Certainly, usability will be an issue when everyone is online. Government and private services will be expected to be available online, with security, accountability, speed, and reliability at higher levels than could ever be obtained by manual work. People will meet each other, exchange digital signatures as easily and naturally as business cards, and use them to enable secure message transfer free from spam and phishing attempts. Idle CPU cycles and free storage, both already astronomical, will be put to use in distributed computing and storage systems, solving research problems and backing up your family photos. Passwords will be replaced by biometric or other “natural” methods of authentication. Users with little or no expertise in security, or even in computers in general, will be able to set up totally secret conversations with others.

As always, the medal has a reverse side. Higher levels of security will mean that people put more trust in the systems, and forget that there are ultimately people behind them, creating and maintaining them. If perfect security is assumed, the results can be disastrous when it is proven otherwise. Technology, like humans, does not perform perfectly. Also, secrecy is a useful tool for criminals. However, I believe this will lead to the use of low-tech solutions for catching them, and to a long-wanted, real privacy for ordinary citizens. With regard to biometric measurements, there have been concerns that criminals might cut off an organ or limb to gain access to a system. However, this can be solved by extending the measurements to the whole body. There is also the issue of psychological damage from material on the Internet, which is discussed in another blog entry.

One thing is for sure: The Internet is here to stay, and it will influence the lives of our children, for better or worse.

Sex & violence on the ‘net

Context: Slashdot recently featured an article entitled “Internet Porn More Addictive Than Crack, Senate Told”. As is usual on Slashdot, the most interesting parts of the articles are the comments from the readers, often extending to tens of pages of political, religious, and technical debate. Sure, most people seem to be left-leaning nerds, but everyone gets to say their piece. Back to the article: it touches the highly controversial theme of how “computerized” sex and violence influence people of all ages, and whether and how this should be controlled. I started on a new comment at Slashdot, but realized that it had slid into being quite off-topic, so instead I’ll post it here. Comments are welcome, but please note:

  • Be nice, even if you disagree.
  • Please don’t quote out of context.
  • If you can, back up any straight-out medical claims with links to articles published in acknowledged journals.

Now for the contents…

Sex is good. Violence is bad. Anything combined with violence is bad, even sex. It’s that easy. For the picky, I probably should mention that I am of course thinking about the kind of violence that happens without the consent of the person in question. Piercing, tattooing, S&M, legal boxing, and the like are therefore not included.

Now we get to the really difficult question: What are the effects of exposing people to sexual and/or violent material?

Speaking for my (obviously statistically freaky) self, see the two following paragraphs.

I’ve played tons of blood’n’gore FPSes, watched loads of heavily violent movies, and frequently listen to music promoting violent actions (relevant favorites include Grand Theft Auto, Silent Assassin, Fight Club, Army of Darkness, Marilyn Manson, and Clawfinger). Even so, I’ve never been in a fight with another physical human being. In fact, I abhor violence of any kind, and won’t even do military service if I can avoid it.

Before switching to a non-sucky browser, I saw my share of pop-ups advertising acts I wouldn’t like to perform, but what the hey: whatever gets you going (as long as it doesn’t hurt anyone, physically or mentally, directly or indirectly).

I believe that Internet porn has two very different sides: the bad, in which people are forced or coaxed into performing actions against their will, or pictures believed to be private are posted to public pages; and the good, which I believe most of us know, and which people use for inspiration, entertainment, outlet, discussion, getting serious information, etc.

A perhaps more controversial point: children are going to be exposed to both sex and violence while growing up. This is something you can’t avoid without completely bereaving them of their freedom, which would probably be much more harmful. So tell them what they are seeing and hearing. Explain that performing an act of violence is bad, that sex is good but age-limited, and what they should do if they ever come into contact with a child molester.

I don’t believe it is necessary to put a lot of fear into the discussion; children are usually defenseless against an adult in any case. More important is to stress which situations they should avoid, and that they must tell their parents about any such episodes: Internet chats with strangers in which meetings are requested, grown-ups undressing or touching the children when alone with them, and so on. It’s a complicated subject, so try to make it easy to understand.

Another important point: IIRC, most child molesters are family members or close friends of the family, and even teenagers below the legal age have been found to be rapists. Still, be very careful to get the facts right before discussing the matter outside the four walls of the home, as even a rumor is enough to throw a person’s life into hell and worse.

Then again, this is probably something any parent has already thought about.

And last, a request for anyone providing news to the public: Don’t use the phrase “sex offender”! If someone has had sex with another person without his/her consent, it’s the act of constraining that person’s freedom and harming them that is the point, not the sexual part of it. It’s called rape, not sex. Euphemisms won’t make the act any less hideous, and only serve to introduce ambiguity. You wouldn’t call a thief a “house offender”, or a murderer a “knife offender”.

Patent suicide

Yeah, I know. Everyone’s writing about it, so this isn’t going to get too original. But I’d like to write some of this down before the Black Monday of patent hogs happens, and the system finally gets the review and revolution it needs.

According to The Register, patent 4,734,690, filed by Tektronix in 1987, covers the display in 2D of a 3D image. Which should cover just about … every FPS since Wolfenstein! McKool Smith, a US legal firm, is now suing numerous companies for violation of this patent. Some of the giants mentioned include Electronic Arts (Sim series), Activision (Doom series), Take Two (Grand Theft Auto series), Ubisoft (Myst, Rainbow Six series), Atari (Civilization series), Vivendi Universal (Half-Life series), Sega (Sonic the Hedgehog series), and LucasArts (Star Wars series).

The case mentioned is in no way unique. There are numerous examples of what seems to be a trend of making money not in the good old-fashioned way of actually producing something, but rather by suing others for making something similar to what you made, maybe decades before (all links go to news articles about lawsuits for patent infringement). Big companies are sued because the legal systems of many countries calculate fines according to the size of the company, and smaller companies are sued because many of them would rather go for a settlement than risk bankruptcy in case of a loss in court.

So what is the basis of the problem? Something which popped up while writing this is that each and every patent is like a law, the main difference being that fines are paid to the patent holder, not the state. I am not familiar with the legal texts of patents, but I would believe that most of them amount to at least an A4 page of text. With the current number of patents, that adds up to millions of pages! How are companies supposed to keep up to date with that? The result is that most companies produce and innovate without checking for pre-existing patents on their products, and just hope that they are not interfering with existing patents. In other words, the current patent system is a time bomb in the face of any company.

So what can be done? Eliminating patents altogether would be extremely unfair towards the innovators, as good ideas would be copied as soon as they are made public. The mandate of patent offices could be extended to check project descriptions and the like for any possible infringements, but errors in this process could create legal chaos. Who would be to blame? Also, it would probably be too expensive to be done effectively by public offices. Stricter patent reviews could be used, but would probably take enormous amounts of time because of the complexity of the legal aspects. How about passing laws ensuring that some fixed part of the settlement or fine goes to the state? Sure, there would be less suing for patent infringement, but it could also encourage companies to take patent infringement even more lightly.

There are two measures which, in combination, I believe could solve at least part of the problem. They are based on the assumption that there are two ways to know whether project X will infringe on a patent: intricate knowledge of all patents in the same and bordering business areas, or actually seeing a product which is very similar to the expected result of the project. The first is problematic because of the enormous amount of work involved in getting legal approval before the project is finished; resources are always limited, and projects normally evolve from their initial plan. The second I believe to be much easier in the general case: take a look at finished products, and see if they already share key features with the expected result of the project. Based on this, I propose that either of the following must hold for company A to win a patent infringement case against company B:

  1. Company A must prove that, at the time company B’s product was in the stores, it was planning to produce, already producing, or in the process of selling a product utilizing the patent.
  2. It must be proven that company B somehow knew, or for obvious reasons should have known, about the patent.

In other words, unless company A was in the process of making any product based on a patent, it shouldn’t be able to stop other companies from creating such a product. This would make sure that companies cannot buy patents to stifle the production of something revolutionary, thus holding back innovation. Also, if there is no reason to believe that company B knew of the patent, they shouldn’t be punished afterwards.

It should be noted that if company A discovers that company B is infringing on their patent while in the planning, production, or sale period, they should send a “cease and desist” letter to company B, stating the relevant patent number and some kind of indication that planning, production, or sale is in process. If company B chooses to ignore this, it is clear that they have broken point 2 above, and so could be successfully sued.

As a nerd, I also have to suggest a technical solution: RDF, or Resource Description Framework. In short, it can be used to enable computer reasoning about complex, human-related themes. Basically, it defines relationships between generally atomic pieces of data, and also statements about the relationships themselves. The point is that this could possibly be used to formalize and query patents. You could specify key concepts about something you are planning to produce, and the system would (by means of logical inference) return any infringing patents, explaining which parts of the patents are relevant. This is much more than a plain-text search (à la Google) can achieve, because it doesn’t just operate on the words themselves, but on their meaning.
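As a toy illustration of the idea (this is not real RDF, and every identifier below is invented for the example), patents and projects could be described as subject–predicate–object triples and matched by a simple query:

```javascript
// A toy triple store sketching how patents and projects might be matched by
// meaning rather than by keywords. All identifiers are hypothetical.
var triples = [
  ["patent:4734690", "covers", "concept:2d-display-of-3d-image"],
  ["project:newFPS", "uses", "concept:2d-display-of-3d-image"],
  ["project:newFPS", "uses", "concept:keyboard-input"]
];

// Return all triples matching a pattern; null acts as a wildcard.
function query(s, p, o) {
  return triples.filter(function (t) {
    return (s === null || t[0] === s) &&
           (p === null || t[1] === p) &&
           (o === null || t[2] === o);
  });
}

// One inference step: which patents cover a concept this project uses?
function infringingPatents(project) {
  var patents = [];
  query(project, "uses", null).forEach(function (use) {
    query(null, "covers", use[2]).forEach(function (cov) {
      if (patents.indexOf(cov[0]) === -1) patents.push(cov[0]);
    });
  });
  return patents;
}

infringingPatents("project:newFPS"); // → ["patent:4734690"]
```

A real system would need shared vocabularies for the concepts and far more sophisticated inference, but the principle is the same: the match is made on concepts, not on words.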