How page speed is the most responsive part of RWD

Over the last couple of days several people smarter than me have posted their ideas on making sites load with maximum speed and elaborated on the subtle differences in user experience when facing slow-loading pages on a crappy mobile network.

Basically it all boils down to the simple fact that when optimizing a website for mobile usage, the site must load as fast as possible in order to be usable on a shaky EDGE network. Web designers and developers need to respond to the rising trend of mobile internet usage by building sites that focus on speed from the earliest stages of concept and design.

A primer and a failure

These days I mostly get my inspiration from Smashing Magazine and its editor-in-chief-curated Twitter channel @smashingmag. Recently the discussions have been revolving around the critical rendering path, techniques for “breaking news at 1000ms” at the Guardian, and how large and well-known sites organize their frontend architecture.

The last few days have therefore been glorious, filled with exciting reads that made me soak up the information and learn my share about how development work is done by the big guys in the business.

But what a disappointment to pick up the latest issue of Website Boosting Magazine, skim through an article on on-page search engine optimization, and find a fair amount of advice that is just plain wrong. Coming from optimizing for 60fps everywhere, I could not believe that the article’s author was actually telling me to do the exact opposite of what I had learned earlier that day.

But what is this fuss all about?

Well, to be fair, it has to be said that the Website Boosting article was written by a marketing and communications professional, and that the best web development professionals of the trade are probably years ahead in knowledge.

But reading that one should put all CSS and JavaScript code into linked external files to speed up a site for search engine optimization, while others promote inlining the critical CSS/JS to speed things up, is quite a stretch.

The browser needs to download and parse the HTML, CSS and any render-blocking JavaScript before it can paint pixels to the screen. And to render smoothly at 60fps it only has about 16ms per frame - not exactly what you would call a whole lot of time.
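
This is what the inlining camp is after: ship just enough CSS in the head to paint the first screen and pull in the rest without blocking. A minimal sketch of the idea - the file name and the selectors are made up, extract your own critical rules:

<head>
  <style>
    /* critical, above-the-fold rules only */
    body { margin: 0; font: 100%/1.5 sans-serif; }
    .masthead { background: #222; color: #fff; }
  </style>
  <script>
    // load the full stylesheet without blocking the first paint
    var css = document.createElement('link');
    css.rel = 'stylesheet';
    css.href = 'application.css';
    document.head.appendChild(css);
  </script>
</head>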

And since we all promote using the latest and greatest HTML5/CSS3 - whatever you want to call all the fancy stuff - and no longer have to support broken browsers (I am looking at you, IE), we tend to develop a sane and straightforward coding style.

Per HTML5 spec, typically there is no need to specify a type when including CSS and JavaScript files as text/css and text/javascript are their respective defaults:

<!-- JavaScript -->
<script src="application.js"></script>
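
The same holds true for stylesheets - a plain link element will do (the file name is only for illustration):

<!-- CSS -->
<link rel="stylesheet" href="application.css">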

I find it rather odd to see code examples like the following in a magazine fresh off the press:

<script language="JavaScript" type="text/javascript" src="main.js"></script>

This hurts my eyes and makes me sad. The good and the bad all in one day - and these are just the most obvious examples of why I believe that a huge part of the web business has already lost its connection to its most valuable resource: the actual visitors of its sites, services and apps.

Varnish makes your site fly - episode 4.0

Varnish is a web application accelerator, and ever since we suffered our first performance issues on a fairly large site years ago, we have been relying on Varnish to speed up delivery.

Varnish 4.0 comes with extra batteries included

The latest release comes with several enhancements over the earlier releases, so we decided to jump on the bandwagon sooner than expected and push for a quick adoption.

This is our adapted varnish.vcl, assuming a plone.app.caching installation with split-view caching enabled. Feel free to use parts of this for your own speed adventures.

vcl 4.0;
import std;

# Configure balancer server as back end
backend balancer {
    .host = "${hosts:varnish-backend}";
    .port = "${ports:varnish-backend}";
    .connect_timeout = 0.4s;
    .first_byte_timeout = 300s;
    .between_bytes_timeout = 60s;
}

# Only allow PURGE from localhost
acl purge {
    "${hosts:allow-purge}";
}

sub vcl_hit {
    if (obj.ttl >= 0s) {
        # Normal hit
        return (deliver);
    }
    # We have no fresh fish. Let's look at the stale ones.
    if (std.healthy(req.backend_hint)) {
        # Backend is healthy. Limit age to 10s.
        if (obj.ttl + 10s > 0s) {
            set req.http.grace = "normal(limited)";
            return (deliver);
        } else {
            # No candidate for grace. Fetch a fresh object.
            return (fetch);
        }
    } else {
        # Backend is sick - use full grace.
        if (obj.ttl + obj.grace > 0s) {
            set req.http.grace = "full";
            return (deliver);
        } else {
            # No graced object. Fetch a fresh one.
            return (fetch);
        }
    }
}

sub vcl_recv {
    set req.backend_hint = balancer;
    set req.http.grace = "none";

    if (req.method == "PURGE") {
        if (!client.ip ~ purge) {
            return(synth(405, "Not allowed."));
        }
        ban("req.http.host == " + req.http.host +
                      "&& req.url == " + req.url);
        return(synth(200, "Ban added"));
    }
    if (req.method != "GET" && req.method != "HEAD") {
        # We only deal with GET and HEAD by default
        return(pass);
    }
    if (req.http.host ~ "^(.*\.)?${hosts:unstyled-hostname}$") {
        # We do not cache sites in development
        return(pass);
    }
    call normalize_accept_encoding;
    call annotate_request;
    return(hash);
}

sub vcl_backend_response {
    set beresp.ttl = 10s;
    set beresp.grace = 1h;
    if (!beresp.ttl > 0s) {
        set beresp.http.X-Varnish-Action = "FETCH (pass - not cacheable)";
        set beresp.uncacheable = true;
        set beresp.ttl = 120s;
        return (deliver);
    }
    if (beresp.http.Set-Cookie) {
        set beresp.http.X-Varnish-Action = "FETCH (pass - response sets     cookie)";
        set beresp.uncacheable = true;
        set beresp.ttl = 120s;
        return (deliver);
    }
    if (!beresp.http.Cache-Control ~ "s-maxage=[1-9]" && beresp.http.Cache-Control ~ "(private|no-cache|no-store)") {
        set beresp.http.X-Varnish-Action = "FETCH (pass - response sets private/no-cache/no-store token)";
        set beresp.uncacheable = true;
        set beresp.ttl = 120s;
        return (deliver);
    }
    if (!bereq.http.X-Anonymous && !beresp.http.Cache-Control ~ "public") {
        set beresp.http.X-Varnish-Action = "FETCH (pass - authorized and  no   public cache control)";
        set beresp.uncacheable = true;
        set beresp.ttl = 120s;
        return (deliver);
    }
    if (bereq.http.X-Anonymous && !beresp.http.Cache-Control) {
        set beresp.ttl = 10s;
        set beresp.http.X-Varnish-Action = "FETCH (override - backend not     setting cache control)";
    } else {
        set beresp.http.X-Varnish-Action = "FETCH (deliver)";
    }
    call rewrite_s_maxage;
    return(deliver);
}

sub vcl_deliver {
    call rewrite_age;
    set resp.http.grace = req.http.grace;
}

##########################
#  Helper Subroutines
##########################

# Optimize the Accept-Encoding variant caching
sub normalize_accept_encoding {
    if (req.http.Accept-Encoding) {
        if (req.url ~ "\.(jpe?g|png|gif|swf|pdf|gz|tgz|bz2|tbz|zip)$" || req.url ~ "/image_[^/]*$") {
            unset req.http.Accept-Encoding;
        } elsif (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } else {
            unset req.http.Accept-Encoding;
        }
    }
}

# Keep auth/anon variants apart if "Vary: X-Anonymous" is in the response
sub annotate_request {
    if (!(req.http.Authorization || req.http.cookie ~ "(^|.*; )__ac=")) {
        set req.http.X-Anonymous = "True";
    }
}

# The varnish response should always declare itself to be fresh
sub rewrite_age {
    if (resp.http.Age) {
        set resp.http.X-Varnish-Age = resp.http.Age;
        set resp.http.Age = "0";
    }
}

# Rewrite s-maxage to exclude from intermediary proxies
# (to cache *everywhere*, just use the 'max-age' token in the response to avoid this override)
sub rewrite_s_maxage {
    if (beresp.http.Cache-Control ~ "s-maxage") {
        set beresp.http.Cache-Control = regsub(beresp.http.Cache-Control, "s-maxage=[0-9]+", "s-maxage=0");
    }
}

Adapt it to your own needs and enjoy your speedy setup.
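
To check that the whole thing behaves as intended, a couple of hand-rolled requests go a long way. The hostname and port below are assumptions - point them at wherever your Varnish actually listens:

# Inspect the debug headers set by the VCL above
curl -sI -H "Host: www.example.com" http://127.0.0.1:6081/ | grep -iE 'age|grace|x-varnish'

# Purge a single page (only answered from hosts on the allow-purge list)
curl -X PURGE -H "Host: www.example.com" http://127.0.0.1:6081/front-page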

Note: parts of this configuration file have been directly taken from the plone.app.caching example and need to be attributed to the original authors.

The price you pay for coding on the edge

Every developer will sooner or later in her career make a conscious choice about what will comprise her weapons of the trade. Starting with your favorite hardware and OS, and continuing with your favorite programming language and tool chain, a lot of decisions need to be made. And sometimes your decisions backfire on you.

A decade’s worth of failures

I started my ongoing affair with my weapon of choice about 10 years ago, when I was introduced to the miracles and wonders of Zope. Back then it was in many respects way ahead of its competition, for example by introducing the idea of through-the-web development, so-called TTW development. The Zope Management Interface - ZMI for short - made it possible to build complex dynamic webpages and portals in a straightforward manner. This was a joy to use for a newcomer like me and also served as a gateway drug into programming.

As my code started to move to the filesystem and gradually away from the ZMI, my focus ever so slowly shifted towards *nix system administration, Python packaging, deployment tools, monitoring, et cetera.

Nowadays I even tend to get obsessed with optimizing my local development setup to squeeze out every bit of productivity and effectiveness I can. But I will eventually elaborate on this part of my daily chores in a separate post.

Systems evolve and so do developers?

We use - amongst other things - the enterprise content management system Plone and try our best to keep all our running installs and projects as close to the latest stable version as possible. This is a constant struggle that can only be undertaken with discipline and careful planning of the system architecture when it comes to deploying customer sites.

But after all these years I am getting bored with the status quo, so we usually run a slightly newer setup and strive to stay ahead of the curve: for the last couple of years the current stable development version has always been used in production.

But sometimes your weapon of choice fails you. Newer development versions get installed automatically and interfere with your carefully crafted project setup.
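
In a zc.buildout based deployment the usual safeguard is to pin every egg explicitly and let the build fail loudly instead of silently picking something newer. A minimal sketch - the package names and version numbers are made up, pin whatever your project has actually been tested against:

[buildout]
# refuse to pick versions on the fly; everything must be pinned below
allow-picked-versions = false
show-picked-versions = true
versions = versions

[versions]
# made-up pins, for illustration only
plone.app.layout = 2.3.11
plone.app.caching = 1.2.1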

Today we had to debug a suddenly empty navigation box, only to find out that the underlying template had changed. The update fixes a years-long annoyance and finally uses a ‘ul’ for the navigation items listing. But nevertheless an unexpected update took place that demanded a hotfix (this will probably be explained in a separate post as well).

So what?

Accidents will happen, and no matter how carefully you plan your setup and version-pin all project resources, there is a price you pay for living on the very edge. Upstream changes might affect your current project, so always be prepared. As long as only our local development setup breaks, all is not lost.

I am convinced that the price we pay is worth the benefits to be had. We have things mostly up to date and under our control. The time spent on hotfixes like today’s chores is an annoyance, but nothing compared to the hours of development and upgrade work we save by doing these small tests and incremental updates in our everyday work.

Optimize a little bit every day and the benefits will outweigh the small problems that sometimes slow down development.

Introducing the mysterious turnover moment in every project

There is a serious problem arising from the fact that both customer and contractor seem to change once the project deadline has been reached.

The agile dilemma

Trying to sell an agile approach to project management to clients is sometimes more than difficult. After reading “Why clients should be more agile” I have been thinking about whether we as contractors simply fail to convince our customers of the benefits of going agile, or whether our clients are just not ready for such a radical change in culture yet.

Three simple truths - so hard to believe

I am convinced that these hold true for every software development project:

  • It’s impossible to gather all of the requirements at the start of the project.
  • Whatever requirements you do gather are guaranteed to change.
  • There will always be more to do than time or budget will allow.

While it is perfectly natural for every developer and web consulting company to embrace the bittersweet truth behind these three facts of life, clients seem to disagree every once in a while and insist on trying to bend the universe, as if it were possible to actually make an exception just this once.

I often experience the best intentions being tossed overboard after deadlines have been met. From the first day a new application, web app or site goes into production, all principles and assumptions that held true during development somehow do not make it into the post-release project phase.

Even though we need to be doubly careful when running updates after a project has moved into production - we are now dealing with a live system - the client suddenly treats every small requirement as urgent and assumes ad hoc reactions on the development team’s part. It is true that small bugs can be fixed quickly and efficiently without proper release planning, especially if you follow continuous integration procedures. Still, the fact that new bugs get introduced just as fast as old chores get resolved goes mostly unnoticed.