Job trends in web development

The job search service Indeed has an interesting “trends” search engine: it visualizes the number of job postings matching your keywords over the last year. Let’s see if it holds any interesting information about modern web technologies…

XHTML vs. HTML

The relation between XHTML and HTML in job offers (see the graph of their relative popularity) could be attributed to a number of factors:

  • XHTML is just not popular yet (1 Google result for every 19 on HTML).
  • The transition from HTML to XHTML is so simple as to be ignored.
  • The terms are confused, and HTML is the most familiar one.
  • XHTML is thought to be the same as HTML, or a subset of it.

The graph of XHTML popularity in job offers alone could give us a hint as to where we stand: at about 1/100 of the “popularity” of HTML, it is increasing linearly. At the same time, HTML has seen an insignificant increase, with a spike in the summer months (it is interesting to note that this spike did not occur for XHTML). XHTML could be poised for exponential growth, taking over from HTML, but only time will tell.

AJAX

The graph of AJAX popularity in job offers is an interesting one: it grows exponentially, which is likely a result of all the buzz created by Google getting on the Web 2.0 bandwagon. Curiously, the growth rate doesn’t match that of the term “Web 2.0”. Attempting to match it with other Web 2.0 terms such as “RSS”, “JavaScript”, and “DOM” also failed. The fact that AJAX popularity seems to be unrelated to the popularity of Web 2.0 and even JavaScript is interesting, but I’ll leave the creation of predictions from this as an exercise for the readers. :)

CSS

While insignificant when compared to HTML, the popularity of CSS in job offers closely follows that of XHTML. Based on that and the oodles of best practices out there cheering CSS and XHTML on, I predict the following: when CSS is recognized for its power to reduce bandwidth use and web design costs, it’ll drag XHTML up with it as a means to create semantic markup which can be used with other XML technologies, such as XSLT and RSS / Atom.

Discussion of conclusions

The job search seems to cover only the U.S., so the international numbers may be very different. I doubt that, however, given how irrelevant borders are on the Web.

The occurrence of these terms will be slowed by factors such as how long it takes for the people in charge to notice them, understand their value / potential, and finally find the areas of the business which need those skills.

Naturally, results will be skewed by buzz, large scale market swings, implicit knowledge (if you know XHTML, you also know HTML), and probably another 101 factors I haven’t thought of. So please take the conclusions with a grain of salt.

My conclusions are often based on a bell-shaped curve of lifetime popularity, according to an article / book I read years ago. I can’t find the source, but it goes something like this:

  1. Approximately linear growth as early adopters are checking it out.
  2. Exponential growth as less tech savvy people catch on; buzz from tech news sources.
  3. Stabilization because of market saturation and / or buzz wearing off.
  4. Exponential decline when made obsolete by other technology.
  5. Approximately linear decline as the technology falls into obscurity.

PS: For some proof that any web service such as Indeed should be taken with a grain of salt, try checking out the results for George Carlin’s seven dirty words in job offers ;)

Re: The Future of Tagging

Vik Singh has an interesting blog post on the future of tagging. IMO not so much because of the idea itself, since it looks quite similar to the directory structure we’re all familiar with, adding the possibility for objects to be inside several directories at the same time. But it got me thinking about what tagging lacks: an easy-to-use relation to more structured data about what a tag means.

Anyone who’s used the del.icio.us tagging interface provided by their bookmarklets or Firefox extension knows how easy they are to use. Just click any tag, and it’s added or removed, based on whether it’s already used. Click the “save” button when done. Dead easy.

Creating RDF, when compared with del.icio.us, is quantum theory. But it’s already being used, and will probably be one of the biggest players in the semantic web, where things have meanings which can be interpreted by computers. Using RDF, you can distinguish between the tag “read” as an imperative (read it!) and as an assertion (has been read). You can also make the computer understand that “examples” is the plural of “example”, and that curling is a sport (though some may disagree :)).
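
To make that a bit more concrete, here is a minimal sketch of how the two meanings of “read” could be kept apart as subject–predicate–object triples. It is written in TypeScript for brevity, and the vocabulary URIs are made up for illustration – real RDF would use an established ontology.

  // Hypothetical triple structure; the predicate URIs below are invented.
  interface Triple {
    subject: string;   // the bookmarked resource
    predicate: string; // what the tag asserts about it
    object: string;    // the value of the assertion
  }

  const bookmark = "http://example.com/some-article";

  // "read" as an imperative: this is on my reading list.
  const toRead: Triple = {
    subject: bookmark,
    predicate: "http://example.org/tags#plannedAction",
    object: "http://example.org/tags#Read",
  };

  // "read" as an assertion: I have already read it.
  const alreadyRead: Triple = {
    subject: bookmark,
    predicate: "http://example.org/tags#completedAction",
    object: "http://example.org/tags#Read",
  };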

How could we combine the two? Here’s an idea: when clicking any of the tags in the del.icio.us tagging interface, you’d be asked what you mean, by getting the possibility to select any number of meanings from a list of one-sentence descriptions. E.g., when selecting “work”, you could get these choices:

  • Item on my todo list
  • Something you’ve worked on
  • Something somebody else has worked on
  • A job/position
  • None of the above
  • Show all meanings
  • Define new meaning…

The list would normally only contain the most popular definitions, to harness the power of the “best” meanings, as defined by the number of users. The “Show all meanings” link could be used to show the whole range of meanings people have defined.

“Define new meaning…” could give you a nice interface for defining the meaning of the word in the context of the link you’re tagging at the moment. This is where the designers really have to get their minds cooking to create something usable by at least moderately computer-literate users.
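
As a rough illustration of what a service would have to store for this to work, here is a hypothetical data model, sketched in TypeScript. The names and fields are mine, purely for illustration – nothing here reflects how del.icio.us actually works.

  // A community-defined meaning of a tag, ranked by how many users picked it.
  interface TagMeaning {
    id: string;          // e.g. "work/todo-item"
    description: string; // the one-sentence meaning shown in the picker
    users: number;       // how many users chose this meaning for the tag
  }

  // A bookmark stored with its tag and the meanings the user selected.
  interface TaggedBookmark {
    url: string;
    tag: string;             // the plain tag, e.g. "work"
    meanings: TagMeaning[];  // possibly empty, as today
  }

  // The picker would only show the most popular meanings by default;
  // "Show all meanings" would drop the limit.
  function popularMeanings(all: TagMeaning[], limit = 5): TagMeaning[] {
    return [...all].sort((a, b) => b.users - a.users).slice(0, limit);
  }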

More practical HTTP Accept headers

Isn’t it time for user agents to start reporting in a more fine-grained way which standards they support? The HTTP Accept header doesn’t provide enough information to know whether a document will be understood at all, which can lead to quite a few hacks, especially on sites using cutting edge technology such as SVG or AJAX.

As an example, take a look at the correspondence between the Firefox 1.5.0.1 Accept header (text/xml, application/xml, application/xhtml+xml, text/html;q=0.9, text/plain;q=0.8, image/png, */*;q=0.5) and the support level reported at Web Devout’s Web browser standards support page or Wikipedia’s Comparison of web browsers.
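
For context, here is a rough sketch of how a server interprets such a header today – the q values are about the only lever the browser has. The sketch is in TypeScript and deliberately simplified (no wildcard parameters or the full precedence rules of the HTTP spec):

  // Pick the available media type the client prefers most, based on q values.
  function preferredType(accept: string, available: string[]): string | undefined {
    const prefs = accept.split(",").map((part) => {
      const [type, ...params] = part.trim().split(";");
      const qParam = params.find((p) => p.trim().startsWith("q="));
      const q = qParam ? parseFloat(qParam.trim().slice(2)) : 1.0;
      return { type: type.trim(), q };
    });

    let best: { type: string; q: number } | undefined;
    for (const offered of available) {
      for (const { type, q } of prefs) {
        const matches =
          type === offered ||
          type === "*/*" ||
          (type.endsWith("/*") && offered.startsWith(type.slice(0, -1)));
        if (matches && (!best || q > best.q)) {
          best = { type: offered, q };
        }
      }
    }
    return best ? best.type : undefined;
  }

  // With the Firefox 1.5 header above, an XHTML version beats plain HTML:
  const ff = "text/xml,application/xml,application/xhtml+xml,text/html;q=0.9," +
    "text/plain;q=0.8,image/png,*/*;q=0.5";
  console.log(preferredType(ff, ["application/xhtml+xml", "text/html"]));
  // -> "application/xhtml+xml"

Note that this says nothing about how well the chosen format is actually supported, which is exactly the problem.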

Say that I want to serve a page with some SVG, MathML, CSS 2/3, and AJAX functionality. Each of these requires different hacks to ensure that non-compliant browsers don’t barf on the contents. For SVG and MathML, I can use CSS to place the advanced content above the replacement images, or use e.g. SVG itself to provide replacement text. Both methods increase the amount of content sent to the user agent, and are not really accessible – non-visual browsers get the same information twice.

For CSS, countless hacks have been devised to make sure sites display the same in different browsers. So the user agent always receives more information than it needs.

AJAX needs to check for JavaScript support, then XMLHttpRequest support, and then must use typeof to switch between JS methods. This can easily triple the length of a script.
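
For illustration, this is roughly the feature detection dance referred to above, sketched here in TypeScript; the ActiveX ProgIDs are the ones old Internet Explorer versions exposed XMLHttpRequest through:

  // Create a request object, falling back through the known implementations.
  function createRequest(): XMLHttpRequest | null {
    if (typeof XMLHttpRequest !== "undefined") {
      return new XMLHttpRequest();
    }
    // Older Internet Explorer exposed the same functionality through ActiveX.
    const legacyIds = ["Msxml2.XMLHTTP", "Microsoft.XMLHTTP"];
    for (const id of legacyIds) {
      try {
        return new (window as any).ActiveXObject(id);
      } catch (e) {
        // Try the next ProgID.
      }
    }
    return null; // No usable HTTP support from script at all.
  }

  const request = createRequest();
  if (request) {
    request.open("GET", "/data.xml", true);
    request.onreadystatechange = () => {
      if (request.readyState === 4 && request.status === 200) {
        console.log(request.responseText);
      }
    };
    request.send();
  }

And this is only the object creation – every later DOM manipulation needs its own round of checks.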

What if browsers could negotiate support with the server using e.g. namespace URIs, where these would reference either a standard, part of it, or some pre-defined support level? Poof: SVG 1.1 Tiny 95% supported, CSS 3 10% supported, DOM Level 2 80% supported, etc.
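
To be a bit more concrete about what I’m imagining, here is a sketch in TypeScript of how a server could act on such a header. Everything about it is hypothetical – the header value syntax, the “support=” parameter, and the idea of a per-specification support level exist in no current standard; the W3C URIs are just examples of identifiers that could be used.

  // A purely hypothetical header value: per-specification support levels.
  const hypotheticalHeader =
    "http://www.w3.org/TR/SVG11/;profile=tiny;support=0.95, " +
    "http://www.w3.org/TR/css3-roadmap/;support=0.10, " +
    "http://www.w3.org/TR/DOM-Level-2-Core/;support=0.80";

  // Look up the reported support level for one specification URI.
  function supportFor(header: string, specUri: string): number {
    for (const entry of header.split(",")) {
      const [uri, ...params] = entry.trim().split(";");
      if (uri.trim() !== specUri) continue;
      const s = params.find((p) => p.trim().startsWith("support="));
      return s ? parseFloat(s.trim().slice("support=".length)) : 1.0;
    }
    return 0; // Spec not mentioned: assume no support.
  }

  // Only send inline SVG to clients claiming reasonably complete SVG support.
  const useSvg =
    supportFor(hypotheticalHeader, "http://www.w3.org/TR/SVG11/") >= 0.9;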

Obviously, the Accept header would be much longer, but the contents received could be reduced significantly. Also, I believe it would be easier for developers to rely on Accept header switching alone than to learn all the hacks necessary for modern web development.

I don’t really know if this is possible, but maybe this kind of Accept header could be separated into a special HTTP reply. This would contain the URIs of the potential contents, and the user agent would send a new HTTP GET request with the modified Accept header, reporting the support levels.

Note: This post is the same as the one sent in reply to an email from Allan Beaufour on the W3C www-forms mailing list. The text has been slightly modified for legibility.

Follow-up: Added a bug report for Firefox and a short Wikipedia article.