{ "post": [ { "title": "Building a mobile-first university website: Use and report on Google Core Web Vitals", "date": "2022-08-13", "categories": [ "web" ], "excerpt": "
I started work as ARU’s web manager on the last day of May. Since then I’ve gone through the usual round of introductions and getting to know the structure of a large organisation. It’s been fun – and hard work.
", "content": "I started work as ARU’s web manager on the last day of May. Since then I’ve gone through the usual round of introductions and getting to know the structure of a large organisation. It’s been fun – and hard work.
I’ve identified performance as the most important area to improve on the website, especially on mobile devices. There are several reasons for this:
In this post I’m going to start outlining what I’m doing in my role as web manager, rather than what I feel we can do technically or with the design of the website.
Are CWVs the only measure of performance? Nope. Are they a replacement for watching people use your site? No. Do they, above everything else, measure how quickly sites load after you click/tap a link on a Google search results page (SERP)? Probably.
But I think they reflect how likely a site is to perform well. Reducing javascript bundle sizes, optimising images and reducing layout shift are uncontroversial optimisations.
More cynically, if most of our website traffic comes from Google, it makes sense to focus on how Google evaluates performance. And how quickly it loads after a visitor clicks/taps a link is important.
CWVs have traction. Developers are familiar with CLS, TTFB etc., while our SEO agency are saying we need to improve our scores. After all, their job is to get our courses appearing higher up SERPs. In turn, the marketing team is aware of the importance of performance. This is good news, as third-party scripts applied via Google Tag Manager (GTM) and the drive to use more imagery and video in the cause of “engagement” will lower our scores.
Finally, everyone’s aware of Google and probably believes it knows a thing or two about performance – and why it’s so important. (Incidentally, just last week an agency sent a PDF ranking websites by their Lighthouse scores – it’s a smartish piece of marketing.)
Lighthouse provides colour-coded performance scores, and even a pass or fail. That’s useful for the big, high-level reports I’m sending out every month, along with Siteimprove accessibility and QA ratings.
Is a score out of 100 anything more than a broad indication of how well a site performs across thousands of different devices? In thousands of different contexts? Probably not. But that’s not the main benefit of reporting Lighthouse scores. I’m hoping that raising awareness means that performance becomes more of a consideration when we develop a new feature, design a component, discuss the website roadmap or add a new script via GTM. If it does, that will only benefit our visitors.
I’ve started by putting the home page through a Page Speed Test whenever I need a figure. It’ll fluctuate, but it’s better than nothing. I’m looking at services like Treo to automate tests, score over a period of time, and monitor different pages, regions, devices and network conditions.
", "url": "http://localhost:4000/posts/mobile-sites-1/" }, { "title": "The Guardian asks permission to show embedded Instagram content. Is this the future?", "date": "2022-04-09", "categories": [ "web" ], "excerpt": "I’m not sure how long The Guardian has been asking permission before displaying embedded Instagram posts, but I only noticed this today:
", "content": "I’m not sure how long The Guardian has been asking permission before displaying embedded Instagram posts, but I only noticed this today:
Facebook is particularly egregious when it comes to placing unasked-for cookies on website visitors’ devices. But those of us running websites have also been neglectful of the responsibility we have to protect our visitors who, after all, are just looking for some information or amusement. Ignorance is no defence – you still see social sharing widgets on international newspaper websites even though no-one ever clicks on them. And we’ve known for years they track all visitors.
Trust is a central element of any brand, far more important than a logo or tone of voice. 99% of your visitors may not be aware of how Facebook tracks them, or even be bothered, but that doesn’t absolve you from considering the privacy implications of what you do. We value your privacy is an empty statement if you’re not at least using no-cookie YouTube embeds, or if you’re adding a Facebook widget or pixel to your website.
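For YouTube specifically, the privacy-enhanced domain is a one-attribute change. A minimal sketch (VIDEO_ID is a placeholder):

```html
<!-- Serve the player from youtube-nocookie.com so no cookies are set
     until the visitor actually plays the video -->
<iframe
  src="https://www.youtube-nocookie.com/embed/VIDEO_ID"
  title="A descriptive video title"
  width="560" height="315"
  loading="lazy"
  allowfullscreen></iframe>
```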
One wonders at the cynicism the Facebooks and Googles of this world have engendered among website owners. Maybe that’s changing, or maybe in a few years organisations won’t have any choice in the matter – The Guardian is simply adapting to a world in which browsers don’t allow third party cookies and governments enforce opting-in.
", "url": "http://localhost:4000/posts/guardian-requests-permission-for-instagram/" }, { "title": "Focusing your writing with a simple scaffolding technique", "date": "2021-07-11", "categories": [ "thinking" ], "excerpt": "I’ve been writing an article about advertising online over the last couple of weekends. It’s a subject I’ve been interested in for years, and I can offer some perspective as we place lots of Facebook ads at work (spoiler: they work really well).
", "content": "I’ve been writing an article about advertising online over the last couple of weekends. It’s a subject I’ve been interested in for years, and I can offer some perspective as we place lots of Facebook ads at work (spoiler: they work really well).
I was fairly happy with it. I liked the tone I’d struck along with a few well-turned passages. But it had got a little flabby, hitting 2000 words. More importantly, the point of it had become unclear, and I began overreaching, making less sense as I tried to draw the thing into some coherent whole.
Time to start again.
I’m not one for convoluted writing processes. I’ve looked at apps like Obsidian and struggled to find a use for them, preferring to simply draft something and refine it as I go along.
However, I sometimes use a structuring technique which dates back to my university and teacher training days. In this case it helped sort out my meandering article.
The technique is simple. Instead of thinking in terms of complete sentences and paragraphs, you focus on your argument and its development through using basic chunking elements. These form a scaffold for your actual article.
You can use any text editor or word processor. Writing your scaffold in Markdown works particularly well, as its elements map directly to HTML.
The elements are: the title, the standfirst, second-level section headings, and nested bullet points.
Don’t sweat the title too much in the beginning, as you’ll probably edit it several times as you build your scaffold.
It should be the final element you edit before publishing, as it will exist in its own little world on social media or in your readers’ RSS feeds or inboxes. You’ll need to think about how it works away from your website.
At this stage the title is the briefest expression of your argument, which you’ll expand in the next element: the standfirst.
Summarise your article in a sentence or two. How would you explain your argument to someone in a few seconds? Assume they’re not interested in your subject.
The standfirst isn’t about justifying your argument in any detail – it should be a statement of what you feel is the truth. The evidence comes later.
It’s worth doing this as early as possible as it will largely determine your article’s structure. But be prepared to change it.
An argument will usually progress through several top level points. List them here as second level headings. Each should follow on from the last one, so you may want to begin the heading with a connective (but, however, additionally, despite etc.)
An article will normally consist of 3-5 sections. If you have more I’d suggest going back to your standfirst and reconsidering your argument.
Again, be prepared to delete, edit and reorder as you go along.
Within each section you’ll make several points. This is the deepest level of your article, where you’ll present the evidence for your argument.
List every point here, as well as ideas for quotes and images, as bullets beneath the second level heading. Make use of nesting to create sub-sub points.
Your final article won’t need to reflect this nested structure. You’re making sure that each point is explicit, discrete and considered, and relates properly to the whole. In our prose we can easily smooth over inconsistencies and incomplete thinking with a flourish or writerly trick. Bullets don’t allow us this luxury.
The order of sections and sub-sections may shift as the argument develops.
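Putting the elements together, a scaffold in Markdown might look something like this – the headings and wording here are my own illustration, not a prescribed template:

```markdown
# Working title (edit this last)

Standfirst: the argument in a sentence or two, stated as truth.

## First point

- Evidence for the point
  - A sub-point
- Idea for a quote or image

## However, a complication

- Evidence
- A counter-example to address

## Therefore, what follows

- The conclusion the evidence supports
```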
You can use this scaffolding technique at any point in the writing process. Sometimes it’s useful to dump your thoughts onto the page before trying to structure them better, sometimes you might want to start with a scaffold. You can switch between the scaffold and your draft at any time.
Not every article needs a scaffold. I tend to use one when I hit 1000+ words. Often, I’m not looking to write a well-reasoned argument when I post to my website.
Each element of the scaffold affects the others. For example, you might find a nested bullet point is unexpectedly important and warrants expansion, changing your section structure and the standfirst.
You’ll eventually reach a stage where you’re happy with the underlying argument and your draft is coming along nicely. At this point, you’ll move from the scaffold to the draft, with a more tightly argued, easier to read article in the offing. But be flexible – you can still change your argument at any point before publication.
", "url": "http://localhost:4000/posts/focusing-articles-through-scaffolding/" }, { "title": "Navigation submenus – what’s the best approach? Dropdown, hovers, clicks or keeping it flat?", "date": "2021-04-11", "categories": [ "web" ], "excerpt": "I agree with this post from Mark Root-Wiley which argues that dropdown navigation menus are best activiated when a user clicks the link (or, correctly speaking, button) rather than hovers over it. Adrian Roselli also writes really well on how to implement this UI pattern accessibly.
", "content": "I agree with this post from Mark Root-Wiley which argues that dropdown navigation menus are best activiated when a user clicks the link (or, correctly speaking, button) rather than hovers over it. Adrian Roselli also writes really well on how to implement this UI pattern accessibly.
This came up at work recently, where the hover pattern has been implemented on an extranet. I predict it will cause problems for the usual reasons – affordance, dexterity and annoyance – especially considering the age of the audience using the website. Any testing, or even your own day-to-day experience, will demonstrate how frustrating it is to hover over a link to reveal a submenu, then try to use it without accidentally triggering other dropdowns.
But whether you’re using a click or hover pattern, you’re still creating a new set of problems whenever you implement any dropdown menu. That is, what’s the top level link/button text for?
Take Adrian’s menu. It asks two questions:
Adrian opts to make it a link to the website home page, which I feel is confusing as it’s labelled “Blog”, even if Adrian’s site is a blog. It’s also a fairly complicated set of controls to negotiate – a link/label next to a disclosure widget.
This led me to follow Heydon Pickering’s advice on accessible menu systems when we developed the work site last year:
Where a site has a lot of content, a carefully constructed information architecture, expressed through the liberal use of tables of content “menus” is infinitely preferable to a precarious and unwieldy dropdown system. Not only is it easier to make responsive, and requires less code to do so, but it makes things clearer: where dropdown systems hide structure away, tables of content lay it bare.
The top level menu is entirely flat. In the spirit of not hiding content behind disclosure widgets, you’ll also see as many navigation items as your screen’s width will allow:
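In markup terms, the pattern looks something like this – a sketch, not the site’s actual code:

```html
<!-- Every top-level item is a plain link: no buttons, no dropdowns -->
<nav aria-label="Main">
  <ul>
    <li><a href="/">Home</a></li>
    <li><a href="/visit/">Visit</a></li>
    <li><a href="/events/">Events</a></li>
    <li><a href="/about/">About</a></li>
  </ul>
</nav>

<!-- Deeper structure is laid bare on section pages as a table of
     contents rather than hidden behind a disclosure widget -->
<nav aria-label="In this section">
  <ul>
    <li><a href="/visit/opening-hours/">Opening hours</a></li>
    <li><a href="/visit/locations/">Locations</a></li>
  </ul>
</nav>
```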
Feedback and testing have revealed no problems with this approach – it appears users are happy to delve into tables of content several levels deep, as long as the trail is clear enough.
However, I can see possible benefits in implementing dropdown menus well:
I am, on the whole, very happy with using a flat, signposting system for website navigation, but what do you think? Do you see a use for dropdowns in some circumstances?
", "url": "http://localhost:4000/posts/hover-versus-click-versus-flat-navigation-menus/" }, { "title": "CMSs should allow component-based page creation, but does WordPress Gutenberg get it right?", "date": "2021-03-27", "categories": [ "web" ], "excerpt": "My former colleague Alice was bemoaning the lack of a way of dropping various components into a page using her current CMS. She’s right, I think; you’d expect any modern CMS to allow editors to build pages from smaller, self-contained elements, such as alerts, promo boxes, galleries and accordions.
", "content": "My former colleague Alice was bemoaning the lack of a way of dropping various components into a page using her current CMS. She’s right, I think; you’d expect any modern CMS to allow editors to build pages from smaller, self-contained elements, such as alerts, promo boxes, galleries and accordions.
Unless your pages follow a predictable pattern – on a blog, for example – you’re relying on the theme author to code a finite set of templates that will cover a set of requirements you need to predict when the site’s built.
This was one of the main requirements when we rebuilt the work website using Statamic last year. We ended up with a list of around 20 components we could drop into any page, ranging from simple alerts to more complex collection listings:
Some simple maths tells us we can create around 400 different page layouts based on shifting these components around. WordPress.com includes around 100 different “blocks” out of the box, giving editors even more possibilities.
Statamic and WordPress offer different ways for editors to create components. Statamic takes a more form-based approach, where the editor enters data into clearly styled fields.
The editor doesn’t have any visual relationship to what’s rendered on the actual website:
WordPress’s Gutenberg, on the other hand, attempts to make the editor look more like what will appear on the website. For example, this is the “cover” block (an image with some text over it):
This would appear to be a good idea – tying the editing process to its output saves editors from flipping between the rendered page and the CMS. But having set up several users on WordPress, and trained a couple of editors in Statamic, I think it poses a few problems.
Firstly, filling in and amending traditional forms is a familiar, predictable process. You enter some text, select an element from a dropdown or complete a file upload dialogue. If you want to edit the text, you head back to the relevant field and just fill it in again.
In WordPress, this process varies according to the block you’re editing, and can take some figuring out. For the cover editor it’s fairly clear, but involves an element of discovery – you click on the image, which reveals a toolbar with a ‘Replace’ option:
The picture slider block works differently. When you click on the block, a different type of toolbar appears, with no ‘Replace’ option:
In my experience, less technically skilled editors find this frustrating, even when they have experience of using different CMSs, including pre-Gutenberg WordPress. In every case, I’ve ended up installing the Classic Editor.
Secondly, editors can find it difficult to navigate around a page and add new content without a framework of clearly demarcated boxes and forms. Take this example of the verse block followed by a cover. Coming out of the verse block and starting a new paragraph involves a frustrating number of clicks:
I’m a really big fan of WordPress. It offers a free, fast way to get online and publishing to the indieweb. Half of the features I implement on this site, such as comments, come out of the box. It has a huge range of plugins, and is mature, stable software that handles updates and upgrades fantastically well.
But I think Gutenberg causes problems for editors who won’t be willing to learn how it works. There are some poor UI decisions to overcome. Perhaps more worryingly, I think it may be misconceived – keeping content separate from its appearance allows users to focus on creating and editing; let the designer worry about how it ends up looking.
Of course, I could be wrong. My experience is limited to four or five people. Maybe tens of millions are happy with Gutenberg. But it’d be interesting to see how often the Classic Editor is installed.
", "url": "http://localhost:4000/posts/cms-component-ui/" }, { "title": "Libraries as alternative", "date": "2021-03-13", "categories": [ "thinking" ], "excerpt": "How we think about the function of libraries in relation to other organisations that provide the same set of things can be complex. Especially when we consider the list of competitors, some of whom are frankly terrifying in their reach and resources:
", "content": "How we think about the function of libraries in relation to other organisations that provide the same set of things can be complex. Especially when we consider the list of competitors, some of whom are frankly terrifying in their reach and resources:
Libraries offer some unique services – free PC and internet access, for example. But these tend to be the services other organisations don’t want to offer because there’s no profit in them. While it’s really important we do this, we need a role beyond being a part of the social care system. After all, simply providing things for people who can’t afford them is a very narrow interpretation of what a universal service is. More practically: what role do libraries play when everyone has broadband and a device? How do you get the whole population engaged with and supportive of what you do?
This seems a difficult position for libraries. How do we compete with the internet? Perhaps Jason Fried provides a clue when writing about how Basecamp’s Hey! email service isn’t in competition with, say, gmail:
When you think of yourself as an alternative, rather than a competitor, you sidestep the grief, the comparison, the need to constantly measure up. Your costs are yours. Your business operates within its own set of requirements. Your reality is yours alone. An alternative to competition.
This makes perfect sense for an online product, and there are lots of services taking this approach. They often focus on the ethical aspect of what they’re offering – micro.blog is a nicer Twitter that doesn’t own your content, Hey! isn’t gmail and won’t track you etc.
This seems a powerful aspect of the library offer, and a route out of competing with the Amazons of this world. After all, our only motive is to provide the public with a good service, which is self-evidently more ethical than, say, Google. We won’t track you or sell your personal information to anyone else, our staff don’t work 10 hour shifts without comfort breaks, we’re built on a model of recycling things etc. etc. This is the basis of a good alternative offer.
But note the low numbers in Jason Fried’s post. He estimates Hey! will get 200,000 “alternative to gmail” users at most – at $99/year/user that’s an income of around $20m/year, which sounds great for establishing a solid business in a distinct, well-defined market, but that’s not what libraries are. We can’t settle on appealing to a small subset of the local population by offering a niche product.
I’m not sure what the answer to the competition problem is. Libraries as alternative offers a great way to market ourselves to new audiences, but what’s the bigger reason for using the library?
", "url": "http://localhost:4000/posts/libraries-as-alternative/" }, { "title": "I’m not posting to leonpaternoster.com often anymore; subscribe to This Day’s Portion instead!", "date": "2021-02-28", "categories": [ "thinking" ], "excerpt": "As you may or may not have noticed, I haven’t posted anything to this site in a while.
", "content": "As you may or may not have noticed, I haven’t posted anything to this site in a while.
That’s because leonpaternoster.com is really just a professional portfolio now.
I was lucky enough to be able to publish anything I liked here, but that doesn’t really fit in with the work vibe, and hasn’t for a while.
But! I do still blog at This Day’s Portion. Here you’ll find notes, links and posts on:
The frequency varies, with most posts coming at the weekend. In February I blogged 12 times, mostly 1-300 word efforts.
So if you are interested in subscribing, head over to the This Day’s Portion RSS feed now.
Cheers! 👋
", "url": "http://localhost:4000/posts/au-revoir-auf-weidersehn/" }, { "title": "Marketing without tracking", "date": "2021-02-27", "categories": [ "web" ], "excerpt": "DF has been going in hard on Apple Mail not blocking tracking pixels over the last week or so. The relationship between the providers of what we consume content with (browser and email client/services) and the publishers of our content (Facebook, Twitter, Mailchimp et al) is a murky one but, as Apple has discovered, pushing privacy strengthens the brand.
", "content": "DF has been going in hard on Apple Mail not blocking tracking pixels over the last week or so. The relationship between the providers of what we consume content with (browser and email client/services) and the publishers of our content (Facebook, Twitter, Mailchimp et al) is a murky one but, as Apple has discovered, pushing privacy strengthens the brand.
Perhaps those of us working in marketing not only need to make sure we make ethical decisions when we choose and use our tools, but also need to consider the possibility that we won’t even be able to use services that track every user move in the future. What will marketing look like then? How do we plan? How do we wean ourselves from the notion that data is somehow all-important? That marketing is all about outsmarting and persuading people?
", "url": "http://localhost:4000/posts/marketing-without-tracking/" }, { "title": "Why brand web pages?", "date": "2021-02-13", "categories": [ "web" ], "excerpt": "What price traditional visual branding? — Or rather, what value does carefully crafted branding have to users in a fluid, interactive medium where they want to do something? Where they feel in control of the flow of the experience? Where they can view the content in a million and one different contexts the author has no control over? Is it worth putting the same amount of effort into the visual brand of a web page as, say, a print advert? Could we put more resources into the actual content and functionality?
", "content": "What price traditional visual branding? — Or rather, what value does carefully crafted branding have to users in a fluid, interactive medium where they want to do something? Where they feel in control of the flow of the experience? Where they can view the content in a million and one different contexts the author has no control over? Is it worth putting the same amount of effort into the visual brand of a web page as, say, a print advert? Could we put more resources into the actual content and functionality?
Adam Morse touches on this point when proposing a form of automated branding for digital.
In the past, you might spend 10-15 minutes picking a typeface and font size in Microsoft word in preparation for printing it out and sharing with others. But when you publish on Twitter, Facebook, Medium, you’re removed from this part of the design process. Even [on] your own website, you don’t have absolute control over how the typography will render for the end user. Chaos Design: Before the robots take our jobs, can we please get them to help us do some good work?
Note, he’s not arguing for no branding at all, more a version where AI makes decisions over things like colour combinations. However, Morse is the creator of an atomic CSS API, which basically says to front-end developers: “Don’t bother trying to create ‘semantic’ class names” – which I’d argue is the first step in removing the “artisanal” element from web design.
Would it be better to think of web pages in terms of function and interfaces, and to build them in the same way we might build a car dashboard? Dashboards more or less look the same and perform similar basic functions, but they work differently or just feel different depending on the car and how well they fulfil their purpose. The brand is derived from the experience of using the dashboard.
On the other hand, some still like the quirky visual nature of our sites, and how it aids the meaning of our words:
Among the many small violences of the social media platforms is the way they squash every contribution into the same rectangle, framed by the same buttons. They do this so they can assemble those contributions into a larger structure; a timeline. They prefer neat bricks; stackable, interchangeable. Heterogeneous, weird-shaped content won’t do… Foundation (part two)
I like the way good interaction looks, and I get easily annoyed at artisanal-looking sites. But that’s just me. I’d be interested in any research on how important visual branding is to users. Even Nielsen says creating the right impression quickly is important, but what exactly creates that impression?
", "url": "http://localhost:4000/posts/why-brand-web-pages/" }, { "title": "Static works", "date": "2020-05-03", "categories": [ "web" ], "excerpt": "I’ve been managing the Suffolk Libraries website for seven years, during which it’s been through three incarnations. The first two were built on standard PHP/MySQL database CMSs – WordPress in version two. The third version, built in 2016 and still used at the time of writing, is probably the most interesting. It’s a “static” site, built using Jekyll, a static site generator.
", "content": "I’ve been managing the Suffolk Libraries website for seven years, during which it’s been through three incarnations. The first two were built on standard PHP/MySQL database CMSs – WordPress in version two. The third version, built in 2016 and still used at the time of writing, is probably the most interesting. It’s a “static” site, built using Jekyll, a static site generator.
In 2016 this was revolutionary. As far as I’m aware, it was the first public sector/not-for-profit website to be built in this way. Static was the big new thing. From an organisational point of view the move was successful, I think. The web team has always consisted of just one (me) or two people, with no budget to speak of, and it’s responsible for everything web: hosting, designing and building the site, writing and managing the content and social media. All this with no large, cross-departmental council web team to provide support; yet, before the service was divested to an independent not-for-profit, the library was traditionally the most visited part of the Suffolk County Council website. In very practical terms, moving to static meant we didn’t have to worry about two things out of hours: the site falling over with a 500 error, or getting hacked.
Users also benefited. Most importantly, the Suffolk Libraries website is fast. It scores As across the board on WebPageTest, recording load times of under a second. I’ve accessed the site in plenty of Suffolk villages using a Moto G4 phone and a patchy 3G connection; it’s inherently accessible in a rural county with vast differences in income. Indeed, I would have liked to have written a progressive web app so certain key pages could have been served without a connection at all, but, alas, no time or money for this. Because our host offers some basic dynamic functionality (such as forms and build hooks for automated daily builds), users had just enough interactivity to find up-to-date information quickly.
Apart from the fact users could depend on the site being available at all hours – it’s not been uncommon to have 100% uptime over a month – it’s handled traffic spikes effortlessly. On Thursday 19 March our page views increased threefold over the previous Thursday. Our coronavirus page was served more than 2,500 times in a two hour period – unsurprising, as it contained important information on what we were doing about customer fines and we’d just announced we’d be shutting all our buildings. It was a good example of Eric Meyer’s call to get static:
If you are in charge of a web site that provides even slightly important information, or important services, it’s time to get static. Get Static
In short, serving a traditionally static site, enhanced with minimal features and javascript (mainly in the form of jQuery) has worked for us and users for over four years.
Back when static was the new thing advocates often noted that it represented a return to the halcyon days of the early-2000s web – pre-CMS, pre-scripting and pre-databases. This was a selling point. After all, it’s simpler, faster and more secure to serve plain HTML than have a router assemble pages on the fly from data stored in a database, right?
If you buy this argument you need to accept its corollary: static pages are enough to meet most user needs most of the time. The Suffolk Libraries website proves this is true. So, if you’re publishing a blog or a site that just provides users with information, 90% of that can be done with flat HTML, and you either sacrifice some or all of the other 10% (probably forms or automatic updates based on variables such as the date) or you find another means. That might be some basic server side functionality (a la Netlify forms), or through javascript.
But static changed pretty quickly from around 2017, and I think we’ve lost an important strand of web development where we make sure our websites deliver important information as quickly as possible to as many people as possible all of the time. Coronavirus may have refocused our thinking.
Under the traditional static model, the heavy lifting of building pages from includes and local or external data is done when the website is compiled into flat HTML files, whether that’s on a PC or a server. This happens out of view (hence Jekyll, incidentally), completely separately from any user involvement. Javascript is used to enhance UI, perhaps through offering sorting or filtering functions. All the user does is download the HTML file and its assets.
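As a minimal sketch of that build-time assembly (file names are illustrative), a Jekyll layout stitches includes and content together once, at compile time; the visitor only ever downloads the finished HTML:

```html
<!-- _layouts/default.html: assembled when the site is built, not in the browser -->
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>{{ page.title }} – {{ site.title }}</title>
</head>
<body>
  {% include header.html %}
  <main>
    {{ content }}
  </main>
  {% include footer.html %}
</body>
</html>
```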
Under a newer model (which even has its own Netlify-created brand name of JAMstack) much of this heavy lifting is moved to the user’s browser. Websites are created as SPAs, where HTML, CSS, data and javascript are downloaded in one bundle and the javascript creates pages based on user interaction.
Now, this is still static in that there’s no server-side scripting or database involved when users see pages, and it makes sense for websites where state needs to change often – for websites that behave more like apps. And it may make it easier to develop sites with predictable CSS “at scale”.
But the downside is that we lose two of the things static promised in the first place: speed and resilience. To go back to the Suffolk Libraries example, do we want to be in a situation where users download the whole of the React library on a Moto G4 in an area with a patchy internet connection in order to find out whether a library is closed?
There is no more reliable and fast way to provide users with content than by serving static HTML and CSS. This is an extremely powerful feature of the web, and something static once clearly promised. Static can mean static in the purest sense, and it’s something that works for developers and users.
", "url": "http://localhost:4000/posts/static/" }, { "title": "Building a web that lasts", "date": "2019-12-28", "categories": [ "web" ], "excerpt": "How do we make web content that can last and be maintained for at least 10 years?
asks Jeff Huang.
How do we make web content that can last and be maintained for at least 10 years?
asks Jeff Huang.
Another back-to-basics post bemoaning the death of the web. Maybe I sound skeptical? I don’t mean to – I agree with the sentiment, if not all the suggestions; maintaining just one HTML page seems absurd. I sometimes think these blogs are really just an expression of nostalgia, for a time when our limited skills were enough to stay on top of the web.
Yes, we want to generate new pages using a simple, robust templating system, even if that system disappears one day.
Perhaps the answer lies in making sure your CMS creates HTML pages from text files, rather than a database/API. Shifting text files is a lot easier than extracting them from a database. Jekyll, for example, uses Markdown stored in a folder. Copy the folder and you’ve copied your content.
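For example – a hypothetical post file at _posts/2019-12-28-example.md – each post is just Markdown with a small YAML header:

```markdown
---
title: Building a web that lasts
date: 2019-12-28
---

The post body, in plain Markdown. Copying the _posts folder copies
every post, front matter and all.
```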
I am all for resisting SPAs and complex build chains. I fit squarely in Jeff’s group of professionals who are able to get a website up and running, but aren’t paid just to do this. The nostalgia, again.
Whatever your approach, this is mainly a matter of will. I’ve used WordPress, Hugo and Jekyll over the years, and you can still find the first ever post I published over eleven and a half years ago.
I’m very glad I never shifted everything over to a service like Medium, but I am reliant on Github/Netlify at the moment. Netlify’s move to charging for more individual components (such as build minutes) concerns me. I’m therefore thinking of making the next logical step in an indieweb set up: old-fashioned, paid-for hosting. And if I want to be able to post stuff when and wherever I want, I’ll need an old-fashioned CMS, which won’t have such a complex set of dependencies as Jekyll. Kirby, probably; I expect PHP is as robust as anything out there, apart from HTML itself, of course.
As we approach the end of the decade, it looks like we’re returning to the set up we had at its beginning. Comforting, if nothing else.
", "url": "http://localhost:4000/posts/building-a-web-that-lasts/" }, { "title": "Hiding accessibility on web pages", "date": "2018-09-01", "categories": [ "web" ], "excerpt": "I’ve unhidden my skip link (as you may be able to see in the top right hand corner of the page). Why? Well, if you hide a link and neglect to reveal it on focus, keyboard navigators experience a weird jump when they tab to it. It seems as if something’s broken.
", "content": "I’ve unhidden my skip link (as you may be able to see in the top right hand corner of the page). Why? Well, if you hide a link and neglect to reveal it on focus, keyboard navigators experience a weird jump when they tab to it. It seems as if something’s broken.
Whether you still need skip links is perhaps a moot point.
You can get round this problem by hiding the link with CSS and revealing it when the user tabs to it. Lots of sites do this, including the New York Times (try pressing the tab key a couple of times and you’ll see what I mean).
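The usual implementation looks something like this – a minimal sketch, not any particular site’s code:

```html
<style>
  /* Park the link off-screen but keep it focusable... */
  .skip-link {
    position: absolute;
    left: -999em;
  }
  /* ...and pull it back into view when the keyboard reaches it */
  .skip-link:focus {
    left: 1em;
    top: 1em;
  }
</style>
<a class="skip-link" href="#main">Skip to main content</a>
```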
This strikes me as an odd approach. The link won’t appear until you discover it accidentally, and then disappears when you tab on through the page. It’s confusing. As a developer, you’re also hacking CSS, writing extra code.
But there’s another question we should perhaps ask: Why hide it in the first place?
What harm does it do? If the majority of our visitors are using a screen and mouse to navigate around a web page, clicking the link will simply move them to the page content. This is expected behaviour, assuming the link’s been labelled logically (probably hard to argue with Skip to main content). Granted, users may pause for a second to wonder why it’s there – it’s one more thing for them to interpret.
If some visitors use a keyboard to navigate, having a visible skip link is helpful.
The only other argument I can think of is that it doesn’t look good, or it’s inelegant, not minimal etc.
There’s a pattern when it comes to hiding accessibility features. On this page, most websites wouldn’t show you:
It seems as if accessibility features are something to be hidden. Accessibility detracts from the ideal, default experience of a web page.
There’s something dishonest about hiding things. It’s like the developer is ashamed of what they’ve placed on the page. Surely we don’t really think our beautiful, primarily visual page is better than a usable, accessible design (that can still be beautiful)?
Rather than concentrating on how we hide and reveal page elements, it perhaps makes more sense to put our efforts into making pages accessible in the first place. In the same way that genuinely building mobile first should make our pages quicker and easier to use for everyone, building visible accessible elements will help make them more inclusive.
", "url": "http://localhost:4000/posts/hiding-things-on-web-pages/" }, { "title": "Kiosk testing: Different users, different results", "date": "2018-08-19", "categories": [ "web" ], "excerpt": "We don’t read enough about the results of kiosk testing, maybe because people don’t kiosk test enough. But Thomas wrote an excellent post on some last minute tests on the MoMu website, sharing his methodology and five findings.
", "content": "We don’t read enough about the results of kiosk testing, maybe because people don’t kiosk test enough. But Thomas wrote an excellent post on some last minute tests on the MoMu website, sharing his methodology and five findings.
I can’t recommend this testing enough. It’ll snag major usability problems, challenge your assumptions and help get into users’ minds. It’s also relatively easy to set up, so you can use it to test any website change, not just a whole new release.
You’ll want to test your website’s “typical” users rather than just anyone (although testing anyone is still useful in identifying problems that will trip everyone up), or, worse still, whoever’s paying for the site. We all want to build websites usable by anyone, and your marketing department will no doubt have a new audience to target (which is probably younger than your current audience). But if you run a website, you at least need to be aware of who is using it at the moment, and whether you’re going to confuse them with a change.
Which is why I found Thomas’s first finding interesting: Hiding navigation is totally OK
as it contradicts lots of testing I’ve done. Now, kiosk testing can take some thought. On the MoMu website the navigation menu toggle button is very clearly styled with a nice big drop shadow, and uses a label rather than an icon. Perhaps the results would have been different if they’d used a standard hamburger icon.
Nonetheless, I suspect the Suffolk Libraries’ audience may have made a difference. It’s older, and perhaps less comfortable with toggles and switches, especially if they haven’t been styled clearly. We therefore use toggles very sparingly on our site. Off the top of my head, there’s just a search icon at narrow widths and accordions in event listings.
The point is you’re testing something in context. How well is it designed? Who’s using it? Does everyone experience it in the same way? Change any of these factors, and you’ll likely get a different result – even if some findings are more relevant to all users than others.
", "url": "http://localhost:4000/posts/kiosk-testing-different-users-different-results/" }, { "title": "If users aren’t bothered about griddy layouts, why are we?", "date": "2018-08-17", "categories": [ "web" ], "excerpt": "Hidde de Vries posted some excellent thoughts on not bothering with complex CSS fallbacks for older browsers (in this case when using CSS grid):
", "content": "Hidde de Vries posted some excellent thoughts on not bothering with complex CSS fallbacks for older browsers (in this case when using CSS grid):
I don’t think we owe it to any users to make it all exactly the same. Therefore we can get away with keeping fallbacks very simple. My hypothesis: users don’t mind, they’ve come for the content.
Granted, what we mean by “content” is vague. A blog post is different from a list of trainers, which may require a more complex layout than a single column. But what always strikes me about these types of posts (which have been written for years) is that they don’t take the obvious next step and recommend ditching complex layouts altogether.
If users don’t care about a complex layout then why should the people making web pages? Why do we bother creating griddy layouts at all when it means more work and more code?
(Actually, a few designers have argued just this.)
Hidde acknowledges the main reason for implementing grids is to keep on-brand: Some brand design guidelines come with specific grids that content needs to be layed [sic] out in.
In other words, it’s not users who demand grids, but marketing departments.
But that’s not a good reason, and something designers should resist. You could argue for making things look fancy for their own sake by referring to the world of advertising. Companies still pay for display adverts and TV spots because they communicate something about the “brand” beyond its content and price. They don’t want to look cheap, and nor does your website:
… an ad can emit a powerful signal about a brand, regardless of information content. Online ads are cheap and easy to make, but the problem is, they look it.
But this is by the by. Even if we accept the Don Draper logic, a simple layout doesn’t have to look cheap. If your users are happier with a plainer design, then that’s a good business reason for keeping it simple. Online, the interface is the brand, and your testing and feedback should dictate the layout you implement.
Note: That’s not a reason to discard CSS grid 😇 – I don’t mean to conflate a CSS technique with a type of layout. Any layout is quicker and easier to implement in CSS grid than by using floats, an age-old CSS hack.
", "url": "http://localhost:4000/posts/griddy-layouts-why-bother/" }, { "title": "Getting round GDPR with dark patterns. A case study: Techradar", "date": "2018-08-12", "categories": [ "web" ], "excerpt": "Many news and big blog sites have introduced onerous and confusing popovers since the introduction of GDPR in May. Unfortunately, this will no doubt result in GDPR banner blindness, where users will simply click ‘Accept’, thereby allowing websites to install tracking javascript, just as they did before 25 May 2018.
", "content": "Many news and big blog sites have introduced onerous and confusing popovers since the introduction of GDPR in May. Unfortunately, this will no doubt result in GDPR banner blindness, where users will simply click ‘Accept’, thereby allowing websites to install tracking javascript, just as they did before 25 May 2018.
This is not GDPR’s fault. The guidelines are clear. Websites have to:
It would be very easy to design an unobtrusive banner that did this. Something like:
We can share your anonymised browsing history with advertisers so you get tailored adverts. Share your browsing history →
Of course, no-one would ever click this link because no-one wants to be served adverts, or share their data with someone they don’t know. As a consequence, sites that rely on tracking and identifying visitors are getting round GDPR by the way they know best: obfuscation. They could rephrase this request to collect data by being honest about why they need it:
We can share your anonymised browsing history with advertisers so you get tailored adverts. We rely on the money we get from tailored adverts to pay our journalists. Please share your browsing history →
There are very few examples of sites doing this well. Smashing Magazine is one, although it recently moved to a part-subscription model for its income, so isn’t reliant on installing tracking cookies. The pop up is a minor annoyance which presents a simple binary choice (although you are nudged to accept cookies through the placement, colour and attached image of the Okay option, and the button labels could describe the actions more explicitly):
But old habits die hard. Techradar writes reviews of electronic things – it’s a useful resource if you’re comparing products. Here’s the popover they display when you visit their site. I’m sure they’re not the only website doing this sort of thing:
There are a few dark pattern techniques at play here that make proceeding without opting-in difficult:
Unfortunately, clicking ‘Show purposes’ to not opt-in doesn’t end the process. Instead, it reveals the following:
Another popover to negotiate, using the same dark pattern techniques. The primary action is not to not opt-in (we’re in the land of the double negative), but to accept the site’s cookies. The secondary, ‘technical’ option is to Reject all. Presumably that’ll do the job:
I get the feeling Techradar really don’t want us not to opt-in. Again, the primary action leads us to opt-in, and the alternative is very confusingly labelled. Presumably, ‘Leave’ means leave the website? But I do want to read the article. Techradar have put me in a situation where it seems I have to accept cookies in order to use the site. Let’s see what happens if you do click ‘Leave’, though. Ah, success! Sort of:
At least the text here is clearer, and Techradar are honest about why they don’t want you to use an adblocker. Incidentally, we’d need to change our more straightforward banner:
We can share your anonymised browsing history with advertisers so you get tailored adverts. We rely on the money we get from tailored adverts to pay our journalists. Please share your data → and turn off your adblocker.
This is an easy one. We’ll continue with our Adblocker, thank you. And we’re there. Of course, this being Techradar, we get a popover for our troubles:
Technically speaking Techradar are getting explicit consent to collect visitor data. They’re obviously not operating in the spirit of the regulations, but I also think they’re in breach in at least two areas:
Avoid making consent to processing a precondition of a service.
Be clear and concise. Although this is a subjective requirement, I can’t see how anybody could interpret this process as clear and concise.
Bearing in mind the murky history of online advertising, and some sites’ reliance on it as a source of income, it’s depressingly inevitable that organisations will find ways to get round the new regulations. The only way GDPR will achieve what it set out to do is through prosecutions. If regulators don’t prosecute, people will just click or tap ‘Accept’ and the online advertising industry will carry on as before, while claiming it’s doing the right thing. I’m skeptical. If you’ve any interest in privacy and you’d like to keep websites fast, make sure you’re using a browser that puts you in control of trackers and cookies, and use an adblocker. Firefox (or, better still, Firefox Focus on a mobile) is the obvious choice.
", "url": "http://localhost:4000/posts/techradar-gdpr/" }, { "title": "Jekyll Tachyons starter theme for Jekyll", "date": "2018-07-28", "categories": [ "work" ], "excerpt": "Jekyll Tachyons is a starter theme for Jekyll that makes it easy to:
", "content": "Jekyll Tachyons is a starter theme for Jekyll that makes it easy to:
- place your CSS in the document head or refer to a stylesheet, perhaps saving an additional request for a file and getting your ‘critical’ CSS served as soon as possible
- style elements responsively using Tachyons’ -ns, -m and -l class suffixes
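For example (these are standard Tachyons type-scale classes; the heading text is mine), a suffix scopes a utility to a breakpoint:

```html
<!-- f5 applies everywhere, f4-m from the medium breakpoint,
     f3-l from the large breakpoint -->
<h1 class="f5 f4-m f3-l">A heading that scales up with screen width</h1>
```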
If you use Tachyons and Jekyll a lot it’ll save you time and effort setting up new projects. I’ve used it on a couple of projects (and to tidy up the work website):
It’ll also make it easy to remove unnecessary CSS. Tachyons is small (weighing in under 14k gzipped), but it’s still worth removing any CSS you don’t need, especially if you’re not gzipping.
If you’re obsessing over performance you might appreciate putting your CSS in the document head rather than referring to an external stylesheet.
As for Why use Jekyll and Tachyons?.. I’ve written about why I use Tachyons full stop, but this also provides a really quick and easy way to prototype HTML pages that share components such as a header and footer. Just edit your header and footer files in the Jekyll _includes folder and you’re away. All depends on how you work, of course, but I like getting into the browser as soon as possible.
If you’ve got any questions/comments/bugs, raise them on Github or contact me via Twitter or at leon.paternoster@zoho.com.
", "url": "http://localhost:4000/posts/jekyll-tachyons/" }, { "title": "The Adventures of Sherlock Holmes in HTML", "date": "2018-07-04", "categories": [ "work" ], "excerpt": "", "content": "I made a website from the 1892 Sherlock Holmes collection The Adventures of Sherlock Holmes. 12 stories, some of which you may well recognise, originally published in serial form in The Strand magazine.
I’m not a huge Holmes fan, but these stories provided a few minutes’ Victorian pleasure. It can be fun reading Doyle’s character between the lines: protestant, conservative, anti-royalist, a streak perhaps of non-conformism, some romanticism. They have generated some excellent pastiches and adaptations, the best probably being Anthony Burgess’ Murder to Music, along with some of the Cumberbatch and Freeman BBC episodes.
So, bearing in mind my slightly meh attitude to the books, and the fact you can easily download them for free, why do this?
What I mean is: I like the idea of navigating to a website and starting to read. You don’t need a new device or software; just your browser and a phone, tablet or PC (assuming the text’s been responsively styled). Pick up your reading at any time from another device.
The Sherlock Holmes stories seemed just right for this. They’re short (most can be read in 15-30 minutes), not too high brow and are freely available in pretty decent HTML on Gutenberg. Victorian readers would have bought the magazine every month; in a better world, you could possibly see modern readers subscribing via RSS, and reading each new episode in their RSS software (another advantage: HTML is infinitely portable). An attentive reader could also start (cross)referencing via the magic of hyperlinks to create all sorts of interesting rabbit holes.
Behind the scenes I’m using my normal technical stack: Jekyll, Tachyons, Github and Netlify. The stories are broken down into a chapters collection, which can be used to automatically generate a table of contents, and to split the stories into separate pages. Disappointingly, I discovered only the first story, A Scandal in Bohemia, is divided into chapters, so this wasn’t really necessary, and reading is actually easier if you don’t split the text up into separate pages. Who knew – scrolling is better on a screen than flipping, although this may not apply if you were HTMLifying War and Peace.
(The table of contents did mean I got to use my favourite HTML elements, details and summary: semantic and accessible togglable content in pure HTML. I’ve no idea why these aren’t all over the internet.)
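If you’ve not met the pair, they work like this (the ids are illustrative) – the browser does all the showing and hiding itself:

```html
<details>
  <summary>Table of contents</summary>
  <ol>
    <li><a href="#chapter-1">Chapter 1</a></li>
    <li><a href="#chapter-2">Chapter 2</a></li>
  </ol>
</details>
```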
You can bookmark your place in the text by clicking on the # sign that appears when you hover over a paragraph. Unfortunately, the (excellent) anchor.js doesn’t work particularly well on mobile; so much so I’ve decided to hide (as in display:none) anchors on narrow screens. I may take Dmitry Fadeyev’s approach and add a bookmark toggle to the page so that anchors are either visible or not, regardless of the device. If you are reading across devices you’ll also need to use a cloud bookmarking service (Firefox has one baked in). This isn’t perfect, but it’s keeping it no-login-required pure ☺️.
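The hiding itself is a few lines of CSS – a sketch using anchor.js’s default generated class and a breakpoint of my own choosing:

```html
<style>
  /* Hide the generated anchor links on narrow screens */
  @media (max-width: 30em) {
    .anchorjs-link {
      display: none;
    }
  }
</style>
```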
(Note to self: this could be a good time to look into some sort of AWS integration for generating bookmarks on the fly.)
I’ve also been using a very simple “framework” I made a few months back. Jekyll Tachyons creates a basic Jekyll site complete with the Tachyons framework. Most interestingly, you can choose to place styles in the document head or in a separate stylesheet, and pick and choose the Tachyons modules it loads. I’m also using it on this site.
There are lots of possibilities here. I could index the text with Algolia to make it searchable and somehow categorised. If only I had the time to look into the potential of plain HTML and text.
", "url": "http://localhost:4000/posts/adventures-of-sherlock-holmes/" }, { "title": "Inclusive web design is web design for everyone, including you", "date": "2018-01-07", "categories": [ "web" ], "excerpt": "Heydon makes a good point about how we all, at some point during our day (or lives), need designers to make web pages accessible because we’re finding something difficult. We are all physically, cognitively, physiologically, socially or technologically hampered, whether that’s through a permanent condition which means we need to use a screen reader, or in a more difficult to define or temporary way. We may turn 46 and find our our eyesight isn’t what it was, or we get tired more. Our commute into work may get stuck in some godforsaken wood outside Ingatestone which has little or no mobile connection. We might lose our job and find we can only use a knackered, ancient desktop to get online. How do we design for these scenarios?
", "content": "Heydon makes a good point about how we all, at some point during our day (or lives), need designers to make web pages accessible because we’re finding something difficult. We are all physically, cognitively, physiologically, socially or technologically hampered, whether that’s through a permanent condition which means we need to use a screen reader, or in a more difficult to define or temporary way. We may turn 46 and find our our eyesight isn’t what it was, or we get tired more. Our commute into work may get stuck in some godforsaken wood outside Ingatestone which has little or no mobile connection. We might lose our job and find we can only use a knackered, ancient desktop to get online. How do we design for these scenarios?
Ironically enough, the term inclusivity has been used to ghettoise easier to label conditions and states. Unfortunately, there are plenty of overprivileged fools out there willing to exploit any group they deem weaker than themselves. But web designers can and should – and they mostly have honourable intentions, I think – work to make their output truly accessible. If they don’t, they could be excluding anyone, even if you think you have a healthy, wealthy audience.
This isn’t just about putting a skip link at the top of your page, or using ARIA attributes correctly. It’s about a myriad of other little decisions designers and developers have to make in every line of code, such as: adding bullet points to long lists, underlining links, colour contrast, text size, using meaningful imagery, using thoughtful imagery, marking up content properly, thinking twice about using a library or webfont, using appropriate language, making content findable by search engines, etc. etc.
What would an accessible web look like? I’m sure it would look a lot different from how it does now (and I’m not guiltless, obviously).
", "url": "http://localhost:4000/posts/inclusive-design/" }, { "title": "Skinny Guardian", "date": "2017-10-05", "categories": [ "work" ], "excerpt": "Skinny Guardian displays the last 50 Guardian articles in a plain, easy to scan and read format. No javascript, no database and a smattering of CSS make it ideal for when you just want something to read on your phone.", "content": "Skinny Guardian was inspired by sites like CNN Lite and Thin NPR – news served with next to no styling. While this may sound (and look) unexciting I find a simple list of headlines an excellent way to get something to read quickly, and because they’re just HTML and CSS, articles load instantly. Perfect on a train or bus journey into work with a poor mobile connection, or when you want something quick to read during your lunch.
The Guardian has an excellent, open API, so the project gave me a chance to work with external, queryable data. I’d only used the Google Maps API in the past, copying some pre-defined templates and queries. With Skinny Guardian, I built my own API queries and wrangled the results into layout files.
Skinny Guardian uses Jekyll to generate static HTML files, thereby removing any database requirement. I use Netlify hosting for free SSL and a build hook URL, which means I can automate site builds every 30 minutes by using something like a free Postman account to send the URL a POST request.
The site simply queries the API whenever it’s built, grabbing the 50 most recent articles and converting the json response into Jekyll data files with the Jekyll Get plugin.
I then use the Jekyll Datapage Generator plugin to convert the json into Jekyll pages that I can list and feed through layout files. Throw in the Tachyons CSS framework, and you have a fast, regularly updated list of Guardian articles to peruse and read.
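A sketch of the listing side, assuming the response is saved to _data/articles.json (webTitle and webUrl are the Guardian API’s field names):

```html
<!-- Loop over the stored API response and list each article -->
<ul>
  {% for article in site.data.articles.response.results %}
    <li><a href="{{ article.webUrl }}">{{ article.webTitle }}</a></li>
  {% endfor %}
</ul>
```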
There’s a bit going on under the hood at build time to make sure the site serves as quickly as possible.
I only use the Tachyons CSS modules the site needs – there are no hover effects, for example. This means the gzipped CSS weighs in at around 7k, half the magic 14k figure. So instead of making a separate request for a CSS file, styles are placed directly in the HTML document’s head, cutting down on load and display times. I also minimise the HTML and CSS.
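The inlining itself is a Jekyll one-liner – something like this in the document head, assuming the trimmed stylesheet sits in the _includes folder (the file name here is invented):
<style>{% include tachyons-trimmed.css %}</style>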
All this means Skinny Guardian should load quickly, regardless of the quality of your connection. Ideal if you’re stuck between, say, Ingatestone and Brentwood on your commute into work 😄
", "url": "http://localhost:4000/posts/skinny-guardian/" }, { "title": "Lessons learned from developing library self-service software: The launch date shouldn’t be the real launch date", "date": "2017-09-02", "categories": [ "web" ], "excerpt": "Although we’d like to develop products and services iteratively, the truth is organisations think in terms of strict deadlines, mainly because projects have finite budgets that have to be spent within set financial periods. We product owners need to think about how this affects our ability to make changes based on user feedback.
", "content": "Although we’d like to develop products and services iteratively, the truth is organisations think in terms of strict deadlines, mainly because projects have finite budgets that have to be spent within set financial periods. We product owners need to think about how this affects our ability to make changes based on user feedback.
Browsers update regularly, making frequent, (mostly) small changes. This causes little disruption for users. It’s rare you’ll download a big update and find everything is completely different. This is how we’d like to develop software over several years, such as our self-service system.
But let’s say you, unlike Google, have a finite budget to be spent by a specific date; consequently, you’ll have a set-in-stone launch date – like a ship’s launch. The expectation is that the product will be ready, and won’t sink if it bumps into any icebergs.
If your best feedback will come from real usage, you might plan a beta phase, as we did when developing self-service software. This is useful, but there’s a big difference between a controlled roll out to four small libraries that are showing an interest in the new system, and installing kiosks in 44 libraries, including Ipswich, Lowestoft and Bury St Edmunds. Generally, staff aren’t particularly interested in the product itself. They just want it to not cause them hassle.
We unearthed user experience problems after I’d installed kiosks in the four beta libraries, problems I would have liked to have fixed more quickly. But we ran up against the big launch date curse of no more project money.
The lesson learned is therefore quite simple:
If you have an official launch date, reserve some money for further changes after this date. Say, at least 10% of the overall budget.
Note: It’s important this money is kept aside for the project. There’s little value in coming in under budget (besides some temporary good PR that’ll come back to bite you later in your career) if your product isn’t perfect – and it’ll never be perfect.
This also means gathering as much live feedback as quickly as you can, and preparing your developers to make changes fast. Wait too long, and the money’s gone.
", "url": "http://localhost:4000/posts/against-launch-dates/" }, { "title": "Lessons learned from developing library self-service software: User testing isn’t as good as user using", "date": "2017-08-16", "categories": [ "web" ], "excerpt": "Between December 2016 and May this year we developed a library self-service progressive web app. At the time of writing, it’s been deployed in 35 of 44 of our libraries, so it’s a good time to start thinking about what went right and wrong, and what I could have done better.
", "content": "Between December 2016 and May this year we developed a library self-service progressive web app. At the time of writing, it’s been deployed in 35 of 44 of our libraries, so it’s a good time to start thinking about what went right and wrong, and what I could have done better.
My first observation is a simple one, probably self-evident, and contains a lot of ‘use’s:
User testing isn’t as useful as watching users use your product in real life. 😲
Over the course of the project I conducted all testing, arranging sessions in libraries with around 15 ‘real’, representative users, from ages 8 to around 70. Libraries are democratic spaces, and if you observe visitors in a busy branch you’ll see a very broad demographic. On the one hand it’s really exciting to see your product being used by so many different people, on the other it does pose several design challenges (more of which in another post).
I have plenty of experience testing websites using a simple kiosk testing methodology, so I understand the need to get out of the way, not provide hints and phrase tasks in as neutral a way as possible. However, testing self-service software is more challenging than testing a website as there are additional, environmental factors to consider.
For example, self-service users have to manoeuvre physical objects while interpreting information on a screen. They may well be in a queue or using the system with a child. In our case, some have been using the same system for eight years, so when they do get to the machine they simply perform the same sequence of movements and screen presses as they have the last several hundred times.
Kiosk tests, on the other hand, take place in isolation. The users have more time to consider what they’re doing and they don’t have the immediate pressure of children or other users standing behind them. In fact, because they know they’re testing something they’ll approach it as something to learn, and to take their time over.
No-one wants to learn a new UI in the real world unless they know they’ll get something back – when buying a new car or phone, for example. There’s little financial or emotional return in self-service kiosks. They’re something to use as briefly as possible.
So I quickly found we got by far the most useful feedback once an app change was deployed in the wild. Niggles that had been smoothed over by a patient approach from testers became full blown problems in real life.
It therefore would have made more sense to go live with new features as soon as possible, and observe them “in the wild” – and to make sure changes were made quickly. We worked to fortnightly sprints, so we should have been pushing these new features at the end of each sprint based on feedback from the last sprint. Instead, we too often went on to the next batch of features to implement on our staging site and only made the changes live once we had completed our first version. (Our MVP was too big, but that’s for another post as well.)
One word of caution on this though: If you’re replacing a mature, well-used product this can be tricky. It’s not easy to change a system and leave existing features out until they’re developed at a later date.
", "url": "http://localhost:4000/posts/user-testing-not-as-good-as-user-using/" }, { "title": "Managing a project to design and install library self-service software", "date": "2017-08-01", "categories": [ "work" ], "excerpt": "I conceived a new library self-service system, commissioned and helped run a feasibility and design sprint, and managed the project from start to finish.", "content": "I conceived, managed and implemented a project to replace self-service machines throughout Suffolk’s 44 libraries. We built an innovative, web based system using the latest offline techniques. The project had a small, fluid budget and tight development and implementation deadline.
In 2015 Suffolk Libraries began to think about replacing its aging self-service kiosks. The kiosks consisted of a client running on Windows XP PCs, and had given us several problems:
We looked at using existing replacements, but found them unsatisfactory:
I therefore thought about what we’d want from an ideal system:
The obvious solution to our problem was a website. This offers several advantages over a client:
I felt strongly this was a good idea, but would it be viable? If so, what form would it take? What should we be looking for? I decided to run a sprint to find out.
I chose one of the most respected digital agencies in the UK to carry out some research into what a web app might do, and look like.
I worked with Clearleft over a week, interviewing staff and customers, investigating the feasibility and technical aspects of our proposed approach. We did lots of different activities, some structured, some more free form.
At the end of the week we had:
We’d use these later in the project:
Read more: A 5 day sprint with Clear Left exploring library self-service machine software
Having decided on the web app approach we needed to find a developer to build it.
Our Clearleft report provided the basis of a request for proposal. Instead of providing a long checklist of technical targets, I identified ten or so important features the app would need to provide, such as:
We invited three developers to submit proposals and scored each against how well we thought they’d be able to meet my ten requirements. We weighted what we thought were the most important, and also scored on value for money, clarity and how dependable we felt they’d be.
Read more: Build user requirements into your requests for proposal and avoid long functional check lists
Having carried out a research phase and armed with an app mock up, I felt we could jump straight into developing our app. This was important as we were working to a five month deadline.
We followed a sprint methodology, which consisted of agreeing work, doing it over two weeks, reviewing it and then agreeing a new sprint. It meant we could change ideas that didn’t work relatively easily, and re-prioritise features – we only had a limited amount of money and time.
I tested our prototypes regularly on customers. I also installed kiosks running our web app in four ‘beta’ libraries to get some real feedback over a two month period.
Commissioning our own web app has several advantages:
Running a project like this involves a lot of work; it’s a lot easier to buy an off the shelf product. However, libraries should be looking to improve library user interfaces, and sometimes that means getting things done ourselves.
", "url": "http://localhost:4000/posts/managing-self-service-project/" }, { "title": "Small vs big user studies (and making lots of changes)", "date": "2017-04-18", "categories": [ "web" ], "excerpt": "Jakob Nielsen points out that more testers result in diminishing returns. The key is acting on feedback from a small number of users regularly.", "content": "So, the good news: You can do a few kiosk tests and get really useful feedback. This is cheap, quick and relatively easy to do properly (although asking well-phrased, appropriate questions, setting the right mood and knowing when to keep quiet are skills that take some practice.)
The difficult bit: You need to test a few people often and act on your test results. Doing so can be difficult, especially when you’re not the person developing the software. In fact, I’d say finding out how much testing and doing any contractor really does should be a central part of any request for proposal and tender process.
", "url": "http://localhost:4000/posts/small-v-big-user-studies-acting-on-feedback/" }, { "title": "Measuring success on library websites", "date": "2017-04-16", "categories": [ "web" ], "excerpt": "Website visits, impressions and user numbers aren't outcomes. Measure customer actions instead.", "content": "As ever, Gerry McGovern makes an important point on a Sunday afternoon (which is when his weekly newsletter arrives):
Digital must be measured based on customer outcomes. Traffic, visits, time spent, page views; these are not outcomes. High traffic does not equal good customer experience
On our library site, lowering the number of page visits and impressions is often the goal. For example, about 3 years ago Google started putting library opening times in the sidebar of results pages. As one of the most common customer tasks is to find out when the library is open, this was a good thing. It saved another click through to our website. The way to measure success would be to observe users finding opening times not starting from our website, but from a browser home page.
Except Google often got the opening times wrong. So a part of our work is to manage library branches as Google businesses. That way we control the opening times you see when you search for something like Haverhill Library, and we save you a visit to our website.
On the other hand, sometimes we do want to increase visits to our website, and the number of pages visitors browse. We publish lots of reviews and lists on the site, content customers can use for reading or even gift ideas. It’s content that’s just asking to be browsed.
However, increasing the number of visits only provides an indication that we’re doing a good job. What we should be looking at is the number of loans the reviews and recommendations section generates. Sometimes this is measurable: we can look at our catalogue referrals. Other times the link isn’t so clear. A customer may open our newsletter, click a link to the latest titles page and then make a note before visiting a library and borrowing the book several months later. They may buy it in Waterstones.
The point is broad measures like visits, impressions and bounce rates are largely meaningless without some context and explanation. But more importantly: Websites are a means to an end. Although we may spend most of our working lives changing and updating them, visiting them is not our customers’ aim. More often than not they’d rather not be using them at all; they’d rather be reading.
", "url": "http://localhost:4000/posts/measuring-the-right-thing-on-a-library-website/" }, { "title": "Build user requirements into your requests for proposal and avoid long functional check lists", "date": "2017-02-11", "categories": [ "web" ], "excerpt": "Anything people use needs to be tested by users. This is in addition to functional does it work? testing. Product commissioners can procure better products by avoiding long, functional check list requests for proposal and doing some user research first. Make providers demonstrate their ability to meet user needs.", "content": "Products work successfully on two levels. Firstly, they function correctly, so when you perform action A expected effect B takes place every time. Anybody can test whether something is working in this way; in fact, we often use machines to do this kind of testing. The results are objective: the product either works under certain conditions, or it doesn’t. You don’t need product users to do this testing: the developers can (and should) do it. Let’s call it functional testing.
The other measure of success is whether the product helps its users achieve something (or things). This is less objective. Figuring out the user’s requirements, and whether the product helps meet them, involves judgement. The user’s experience of the product is even more subjective and difficult to measure.
Only users can perform user requirement tests as developers experience the product in a different way. Let’s call this user testing.
All obvious stuff, but in my work I’ve yet to come across a product provided by a third party that is actually tested by users. Functionally tested to various degrees, yes; but actually tested on users – no. This results in bad products:
They’re engineered so that they technically ‘work’ even though ‘working’ may involve you having to do 6 months of training, reading a 7,000 page manual and going through 673 excruciating steps that take 92 hours, when it should only be 3 steps, taking 2 minutes. Gerry McGovern, How does price affect the “user” experience?
When I asked one provider whether they’d actually tested their product on users, they replied by agreeing that yes, this sounded like a good idea. When they upgraded the product they gave us a huge list of objective tests to sign off, thereby shifting even the functional testing to us.
Worst of all, product development is driven by product buyers voting on what they want developed. Actual users are never asked, or observed. We can’t raise bugs based on poor user experience.
This also results in bad products.
We buyers can mitigate this problem by writing user testing into our tender and request for proposal documents. Don’t provide a long check list of technical requirements – any vaguely modern design process will include a functional requirements research phase, and providers with no UX expertise can easily work through a check list.
Instead, do some user research before writing your request for proposal. Find out what your users want from the product and build a loose spec around that. Let the developer establish what the product needs to do in order to help users succeed, and let them do the functional testing. If it doesn’t work functionally, file a bug. Most importantly: judge agency proposals on how well you feel they can meet your set of users’ needs.
", "url": "http://localhost:4000/posts/functional-vs-user-testing/" }, { "title": "Getting to grips with Contentful and Jekyll", "date": "2016-12-11", "categories": [ "web" ], "excerpt": "Contentful is a CMS based on an API, providing a non-technical editing environment for web writers. Here's how it works with Jekyll, a static site generator.", "content": "Contentful, if you’re unaware, is a platform agnostic Content Management System (CMS). You use its API to pull content into a website or app. There are several reasons for using Contentful instead of/in addition to a CMS:
We use Jekyll at work and publish with a text editor, command line and Git. That’s OK for our web team of two as we don’t need (or want) many other people publishing to our website. However, if you’re running a site with lots of distributed writers, editors and publishers, this set up is far from ideal. You’ll want something a little less technical.
Even we’d like the option of adding a news story without having to fire up a terminal – we could publish on a phone or tablet, for example, or the comms manager could post a news story in an emergency. While we can use Prose or even edit files directly in Github, a proper editor would be a lot more useful.
So I decided to test Jekyll and Contentful.
Setting up content types and using the Contentful editor was great, especially for someone who’s been using Markdown for years. Getting Contentful to talk to Jekyll was also relatively easy.
I created a simple collection of books for my experiment. The content type book consisted of four fields:
Setting up a Contentful account and a book content model took a couple of minutes. I found the UI simple and intuitive:
Happily enough, Contentful provides Jekyll and Middleman plugins for grabbing content and putting it into your project. I installed the Jekyll plugin in a project on my laptop with no problems – you’ll just need a ‘space’ code and an API access token, which you’ll add to your _config.yml file. Pull content into your project by running bundle exec jekyll contentful. So far so good.
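For reference, the plugin’s _config.yml section looks something like this – the exact keys may vary between plugin versions, and the credentials here are placeholders:
contentful:
  spaces:
    - books:
        space: YOUR_SPACE_ID
        access_token: YOUR_ACCESS_TOKEN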
Contentful places YAML files in your Jekyll project’s data folder. As you’d expect, the files consist of json name and value pairs:
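Something like this, say – the field names are illustrative, matching whatever you defined in your content model:
books:
  - title: Pale Fire
    author: Vladimir Nabokov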
Now, things got a little trickier here. You’d normally access data in Jekyll with site.data.[name of folder containing data], so in the case of a books data folder you’d use site.data.books. You’d then loop through whatever that returned.
Contentful imports complicate your data’s namespace in two ways: the file is buried at _data → contentful → spaces → books.yaml, and it places books: at the top of the file, which implies you could have different data types within books.yaml. All this means site.data.books doesn’t work. Jekyll changes the namespace to match the folder and data structure, which results in the somewhat verbose site.data.contentful.spaces.books.book.
So, to loop through the collection you could use:
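Something like the following – my field names (title, description) are illustrative:
{% for book in site.data.contentful.spaces.books.book %}
<h2>{{ book.title }}</h2>
{{ book.description | markdownify }}
{% endfor %}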
Note that Contentful returns Markdown, so using the markdownify filter will make sure it also works in HTML files. This is what the code produces with the Jekyll Poole theme.
Contentful returns complex data with aplomb – you can put images in long text fields, for example, so you could easily add content models like blog posts.
Creating, adding and editing complex data is really easy in Contentful. The thoughtfully implemented WYSIWYG editor gets the balance between editing and producing sane code just right.
However, I think there are two problems:
Firstly, as with all static sites, getting content onto a live site is, by default, a manual process. When you edit in WordPress you press the publish button and it’ll appear. In a Contentful/Jekyll workflow you have to pull the latest content with bundle exec jekyll contentful, rebuild with bundle exec jekyll s, and then get the built site onto your server.
I’m sure there are ways to automate this process, and perhaps you could run Jekyll remotely, but this raises the question of why run a static site if you need to build all these dynamic features?
Secondly, by adding data to Jekyll, the Contentful plugin makes it difficult to create web pages for each data item. For example, I can’t create a separate page for Pale Fire with its own URL – I can only refer to it in a loop. Normally in Jekyll you’d make a collection for this purpose. I’m sure this is possible, but it’s not available out of the box.
So the search for a friendly, available-everywhere content editor for Jekyll continues. Prose is OK, but flakey on a mobile, while editing files directly in Github is too technical and makes it too easy to break things. In theory Contentful provides the best method – I just need to find ways of automating the process.
", "url": "http://localhost:4000/posts/getting-started-with-contentful-and-jekyll/" }, { "title": "Developing a static library website using Jekyll, Netlify and Zapier", "date": "2016-12-01", "categories": [ "work" ], "excerpt": "I migrated the Suffolk Libraries website from a WordPress backend using a theme built on the Foundation framework. We moved to a static website built on Jekyll and hosted by Netlify. The site is faster, more stable and more secure, yet it still handles dynamic features such as events and forms.", "content": "I migrated the Suffolk Libraries website from a WordPress backend using a theme built on the Foundation framework. We moved to a static website built on Jekyll and hosted by Netlify. The site is faster, more stable and more secure, yet it still handles dynamic features such as events and forms.
Running a WordPress site had become difficult for a small team. We had to buy, maintain and manage plugins to make the site secure from hackers and fast enough for our users.
At the same time I’d started to use Jekyll, a static site generator that builds your complete website locally for uploading to a server, or, better still, deployment via Github pushes. Jekyll sites are extremely fast because there are no database calls and page builds involved: your pages are served as is. They’re also inherently stable and secure because they’re just HTML, CSS and javascript. There are no SQL or PHP vulnerabilities, no 500 errors.
I exported content from WordPress into Markdown files using a Jekyll plugin. With our content in an open, convertible format, the work could begin.
Jekyll has a SASS workflow built in, which I use to write scss partials. I use Bundler to keep all our Ruby dependencies in shape.
I also moved from the overly-opinionated Foundation framework to a more modular library called Tachyons. Tachyons just does CSS, and avoids building complex modules such as cards and call-outs. Instead, it takes a low level, ‘atomic’ approach: classes mostly map to single CSS properties, so the db class is display: block.
This approach results in faster, easier to manage CSS. You can read HTML and tell exactly what the classes are doing. It’s also small and fast, weighing in at around 14k minimised: there’s no redundant javascript slowing things down.
Most library content doesn’t change over time, and edits are relatively simple. A library might change its opening hours, which we could reflect by editing a Markdown file and pushing it to Github.
However, some content is more dynamic. For example, our libraries run dozens of events every week, but we don’t want them appearing on the website once they’ve passed. Implementing dynamic features requires some lateral thinking, and some automation on your server.
Read Coding one off and recurrent events in Jekyll to see how I got the site to display events based on time.
The automation is handled by our host, Netlify, which specialises in static sites. Netlify has a smart API which works with Zapier to do all sorts of clever things, including firing site build requests early in the morning to update all our dynamic content.
The Netlify Zapier connection also offers other features you’d normally only expect from a dynamically hosted site. For example, we can send form submissions to Google Sheets, and automate email replies, without hosting SMTP scripts or databases.
The other great benefit of running a static site on Netlify’s servers is its tight integration with Github. This means we can roll back commits and manage the site locally from the command line, via Github. This makes it easy (and reliable) to share, stage and publish changes, something that can be difficult in a WordPress workflow. There’s something very satisfying about typing git push and seeing your live site update.
Because the site is on Github it’s also automatically backed up and easy to roll back if there are problems. We can also share it with the public, and deploy it locally on any PC within minutes.
Running a static, Netlify hosted website brings many benefits:
I’d recommend it for sites that don’t require a huge amount of dynamic content; local and central government organisations, for example. If you’re interested in building a fast, secure static website with dynamic features, contact me:
Suffolk Libraries puts on thousands of events and activities over a year, ranging from regular song and rhyme sessions for babies to one off Raspberry Jams, gigs and author talks. (Didn’t know that? Libraries don’t just lend books, you know…)
We recently moved the website from WordPress to Jekyll, a static site generator. As events and event lists are inherently dynamic they pose several problems for any static site. How do you:
A library website is the perfect place to figure out these problems. Here’s how we went about it.
Without knowing when now is you can’t determine whether an event is in the future or the past. Without that, you can’t keep an event list up to date.
Jekyll uses the Liquid templating language, which understands a special now value. To Liquid, now is the current moment; feed it through the date filter with the %s (seconds) format and you have the current time expressed as a number – seconds passed since a fixed point in time, Jan 1 1970:
{% capture now-unix-seconds %}{{ 'now' | date: '%s' }}{% endcapture %}
Good. If we can convert an event start date in the same way we can determine whether it’s taking place in the past or future.
To do that in Jekyll we can add properly formatted event start and end times to event YAML. For example, if we set event-start-date: 2016-08-18 it’ll start on 18 Aug 2016. Feed the date through the %s date filter again to get a time in seconds since Jan 1 1970:
{% capture event-time-seconds %}{{event.event-start-date | date: '%s' }}{% endcapture %}
Finally, we need to convert our seconds to days so we can make an easy comparison between now and the event’s start date. Handily, the Liquid divided_by filter rounds down by default, so dividing our times by the number of seconds in a day gives us a number of days:
{% capture event-time %}{{ event-time-seconds | divided_by: 86400}}{% endcapture %}
and, doing the same for now so the two numbers are comparable (the original snippet here repeated the line above, but the now-unix variable used in the comparison below must be captured somewhere):
{% capture now-unix %}{{ now-unix-seconds | divided_by: 86400 }}{% endcapture %}
With these days, we can use a simple conditional statement to build current and future event lists:
{% if now-unix <= event-time %} ... {% endif %}
(i.e. if today is on or before the event’s start date, the event is still current)
Unlike WordPress based sites, static sites don’t change unless you rebuild them. Therefore, now remains the same until a new build.
You could rely on someone rebuilding the site every day, but this is obviously far from ideal (what happens at weekends, for example).
You’ll need to turn to your hosting for a solution to this problem. We use Netlify to set up a post hook URL and Zapier to send a POST request to that URL at 1am every morning. When the URL receives the request Netlify runs jekyll build, thereby recalculating now and running the event list code again.
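Zapier isn’t essential here: anything that can send a scheduled POST request will do. A nightly cron job could run curl against the build hook, for example (the hook ID below is a placeholder):
curl -X POST -d '{}' https://api.netlify.com/build_hooks/YOUR_HOOK_ID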
We handle the bread and butter, week in, week out events differently. They’re actually a lot simpler: we just use a recurrents collection where each event has plain text YAML that’s outputted on library pages. The YAML includes recurrent-day, recurrent-times and recurrent-category.
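By way of illustration, a recurring event’s YAML might look something like this – the values are invented:
recurrent-day: Tuesday
recurrent-times: 10.00am – 10.30am
recurrent-category: Song and rhyme time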
The final piece of the jigsaw. To link one off and recurring events to locations we need to add a key to events that matches a key in library YAML.
Each library has a branch-unique-id such as beccles-library. If we add some YAML that uses this text to an event we can start matching events to libraries:
location: beccles-library
Here’s how we use the code on a library page (in our library.html layout file):
{% assign current = page.branch-unique-id %}
{% assign events = site.events | sort: 'event-start-date' %}
{% for event in events %}
  {% if event.location == current %}
    {% comment %} event markup here – could probably use the where filter instead {% endcomment %}
  {% endif %}
{% endfor %}
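As that comment suggests, the where filter would collapse the loop and conditional – a sketch:
{% assign events = site.events | where: 'location', current | sort: 'event-start-date' %}
{% for event in events %}
  {% comment %} event markup here {% endcomment %}
{% endfor %}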
And that’s it: we have one off and recurrent events linked to specific locations on the Suffolk Libraries website, and they’ll disappear from events listings once their start date passes. This may seem quite a lot of work, but Jekyll’s flexibility makes it really easy to attach any data to events and create all sorts of lists (based on category, for example).
See our repo for all the code.
", "url": "http://localhost:4000/posts/jekyll-events-static-site-libraries/" }, { "title": "How we built a static Suffolk Libraries website (an overview)", "date": "2016-07-21", "categories": [ "web" ], "excerpt": "We've built a static website using Jekyll and Netlify hosting. Apart from the speed, security and stability, why did we do this? What are the difficulties and disadvantages?", "content": "After lots and lots of work and a painful propagation period we’ve got a new Suffolk Libraries site up and running (by we I mean me and the excellent Emma Raindle, who will be sorely missed at Suffolk Libraries when she leaves next week).
Here’s a summary of what we’ve built (which is, if nothing else, different from any other library website).
- When we git push to our Github branches, our websites get updated.
- https (from Let’s Encrypt).
We’re happy with it. Let’s see if it makes much difference to our feedback and analytics.
", "url": "http://localhost:4000/posts/suffolk-libraries-website-static-overview/" }, { "title": "Doing a Liverpool", "date": "2016-04-23", "categories": [ "web" ], "excerpt": "Ha! So all along (via over 2,000 words) all I was really proposing was doing a Liverpool. Nothing to do with the Fab 4, You’ll Never Walk Alone or Stan Boardman for that matter.
", "content": "Ha! So all along (via over 2,000 words) all I was really proposing was doing a Liverpool. Nothing to do with the Fab 4, You’ll Never Walk Alone or Stan Boardman for that matter.
An unexciting, yet efficient UI pattern. Describes a set of titles, arranged in a grid, that each link to an overview page. Doing a Liverpool
There’s a lot of playfulness in iA’s description of the old internet. Are they being serious? Aren’t they fairly old internet themselves?
In part this often reflects an ongoing tension between doing quite boring things well in order to get your users where they want to be, and doing some whizzy things that please the paymasters when they’re looking at (rather than using) your website.
Of course, it’s not necessarily as clear cut as that. Your boss may be pragmatic and want to make something as easy as possible. She may really dislike parallax scrolling. You may have educated your colleagues about the fold. The most whizzy solution might be the most efficient. The most efficient, old internet solution might look and feel whizzy. Customers might think this site’s a bit boring as they zoom through to the information they want.
But when our job is to help people do something or find information in a complex structure, often it’s the older, plainer approach that works best, simply because it doesn’t require any figuring out. A hamburger icon hides information. A shifting navigation bar that fixes itself to the top of the screen grabs your eye from the text you’re trying to read. Even something as unexciting as tabbed navigation is less efficient than your browser’s built in scroll bar.
", "url": "http://localhost:4000/posts/doing-a-liverpool/" }, { "title": "The future of libraries (again)", "date": "2016-03-29", "categories": [ "libraries" ], "excerpt": "A BBC report into library branch and staff numbers brings out the usual arguments against the service. Why can't some right wing critics even bother to find out what libraries do?", "content": "Yesterday, the BBC reported on UK library service cuts since 2010. The figures (gathered from Freedom of Information Act requests made to library services) are pretty depressing, although if you work in libraries they won’t come as a surprise. Turns out government estimates have underplayed the extent of the damage done to the service as a whole. Since 2010:
(Source: Libraries lose a quarter of staff as hundreds close)
If you take Scotland out of the equation the figures are even worse.
In my experience the library service is one of those things we feel should exist, even if we don’t use it all the time. There’s something barbaric about closing a library. Their popularity explains why the report got so much coverage.
Some of the right’s response was predictable. Firstly, the IEA argued that the library service is outmoded and poor value for money, and that replacing paid staff with volunteers is a good thing. The IEA has some hefty baggage, of course – it’s hardly a surprise it’s not a fan of a publicly funded library service, or of replacing it with free labour.
Secondly, John McTernan argued the stats about libraries tell a story of increasing irrelevance rather than underinvestment: book loans and visits have been dropping since 2005, and that’s ultimately why staff and branch numbers have decreased.
Of course, there’s a lot of opinion in McTernan and the IEA’s presentation of their stats. I’m genuinely baffled by the idea that libraries represent poor value for money. In Suffolk, for example, a population of 730,000 gets 44 branches, 3 mobile libraries, home visits, free ebooks, free eaudio, free streaming, free wifi, free PC and Chromebook usage, online reservations and renewals, a schools and children’s literacy service, thousands of events, live gigs, a makerspace, two book festivals, a mental health service and around a million print titles for less than £6m a year. This strikes me as incredibly good value for money.
Similarly, McTernan ignores a large swathe of things libraries do apart from lend print titles (although he does acknowledge Suffolk’s ebook service, bizarrely implying we’re unique in this – we’re not; virtually all library services offer something like Overdrive). Yes, adult print loans have been decreasing steadily for years, and his reasons for this sound right. But PC usage, children’s loans and all the other community stuff like running post offices are becoming more important precisely because of the austerity he claims isn’t causing the demise of the library service.
Lazy critics attack an outmoded idea of a library service for ideological reasons. There are lots of things libraries should be taking a hard look at – Google, Amazon and Spotify do pose difficult, existential questions, as does providing a truly universal service. But before knocking libraries it’d be nice if think tanks and journalists bothered to find out what they actually do in 2016.
", "url": "http://localhost:4000/posts/future-libraries/" }, { "title": "Council Toolkit and non-universal navigation", "date": "2016-03-12", "categories": [ "web" ], "excerpt": "Council Toolkit does complex navigation really well by using an old fashioned, easy to interpret method.", "content": "Stumbled on Aberdeenshire Libraries website earlier this week. It’s built on Council Toolkit, an HTML, CSS and javascript framework that’ll help you build a council (or similar) website quickly.
It’s not a million miles from gov uk. You get a free set of solidly designed templates (home page, guide, article, signpost etc.) which you can slot pretty easily into your CMS of choice.
One of the things I like about Council Toolkit is the way it handles navigation. On council (and library) websites we often publish lots of sections. Instead of implementing an unwieldy universal navigation, the home page and category templates put navigation in the main content area, while leaving a few universal actions in the header area – a good idea:
There’s something old fashioned about this approach – the home page is essentially an index, but I think it has 4 advantages over a traditional navbar:
We’re currently reviewing the Suffolk Libraries website as it’s a grown a lot since I rebuilt it in 2013. I’ll definitely be taking the Council Toolkit approach to navigation, although I won’t be using the toolkit itself – as for why, that’s for a different post.
", "url": "http://localhost:4000/posts/council-toolkit-no-universal-navigation-aberdeenshire/" }, { "title": "A 5 day sprint with Clear Left exploring library self-service machine software", "date": "2016-02-28", "categories": [ "work" ], "excerpt": "An account of running a design sprint in order to plan a web product, in this case library self-service software for use across Suffolk’s libraries – and beyond. Includes a timetable, techniques and overview of the outcomes.", "content": "Last week we spent 5 days with digital agency Clear Left exploring how we might develop new self-service machine software for libraries.
Here are a few observations about how Clear Left and a sprint work. It was certainly very useful for me, and you might find it interesting if you do any design work with clients, or you work with external agencies to explore any digital area. If you work in library IT read on to find out about what the future of self-service might look like.
We’re replacing the self-service machines customers use to check out and return books in libraries. The machines were installed about 8 years ago (in this respect libraries are years ahead of supermarkets).
A lot has changed in the last 8 years – in 2008 there was no such thing as an iPad and the world wasn’t quite as connected to the internet as it is now.
Over the last few weeks, the IT team has been discussing what a new self-service system might look like. We want something a lot more portable, cheaper, device agnostic and easy to manage than the current system. All this led us to conclude that some sort of web app might be the best approach, rather than, say, a Windows client or an Android app.
As there aren’t any software-only providers out there, we’d need to build it ourselves (or rather, get someone to build it for us).
At this stage we felt we needed an expert opinion. Was this really the right approach? If so, what might a web app look like? What problems might we run into? How do we go about this project?
We chose Clear Left because they’ve done similar digital strategy work with organisations like the Wellcome Trust. Most of all, I felt they’d keep accessibility to the fore; not just in areas such as designing for older people, but in keeping the system as open and device agnostic as possible.
There’s a longterm business advantage in keeping our hardware, software and library management system (LMS) decoupled (a word we used a lot during the week), and in extricating ourselves from the cosy LMS/self-service provider world.
Since one of Clear Left’s founders Jeremy Keith has been writing about the core technologies of the web for over a decade, they seemed the right choice.
As you can imagine, I was pretty excited to get Jeremy and James Bates into the library for a weeklong sprint. I got a stack of A3 sheets ready for Monday morning.
Before the sprint we sent Clear Left our own research and thoughts on self-service, including a report from a staff workshop we’d run a couple of weeks beforehand. We’d agreed some basic outputs we wanted from the week; namely, a feasibility report and perhaps some sort of sketch of what a web app could look like.
I’d also arranged for several staff members and volunteers to turn up on the Monday morning. What we were going to do I wasn’t sure.
Before the staff workshop, James helped us visualise the sprint by putting a diagram up on the wall. At the start, days 1-5 were pretty empty. That would change over the week.
The workshop helped define our self-service ‘problem’ and gave us a rough plan:
You’ve probably heard terms like agile, lean and sprint before: this was, I guess, agile in action. It’s a good process as you react to whatever you discover, rather than try to squeeze your findings into a predefined structure. You don’t want to simply reinforce your initial ideas, however attached you are to them; instead, you’re looking to test them and throw them out if necessary.
And a sprint has a definite end product. It avoids projects getting bogged down.
I was surprised at just how much James and Jeremy involved the IT team – and staff as a whole – in the work. At one stage we had library information advisors and the finance manager sketching interfaces alongside Jeremy Keith and James Bates.
James and Jeremy based their war room in our Lab (a medium-sized room on the top floor of Ipswich County Library). The door was always open and staff and stakeholders were encouraged to pop in whenever they liked to ask questions and get involved. In fact, a lot of the research and testing took place in the library itself; James and Jeremy were very visible throughout the week.
I think there are some very solid reasons for taking this approach. Firstly, it recognises that anybody can have a good idea when it comes to a service they work with and/or use. Secondly, it gets people behind the project; it becomes something they feel a part of.
Teachers will recognise the skills you need to run a successful sprint. There’s the basic organisation, the ability to express and elicit ideas using different learning styles and the knowledge of how to pitch and time activities. There was even a starter/main/plenary structure to some of the work.
James did most of the facilitation, using a mix of visual and verbal activities, often strictly timed. For example, we were asked to note down 8 ideas for self-service, which could be as practical or science fiction as we liked (I rather liked my book wayfinding idea – one for the future, perhaps). We then chose two of these to sketch and got the rest of the group to critique them.
The final stage involved assigning all the sketched ideas points based on how important we felt they were. This gave us the basis for our proposed product, which Clear Left suggest we divide into minimal viable product (MVP), phase 2 and backlog sections:
We did a lot of work over the week (it is a sprint, after all). Jeremy looked into a javascript library that uses a device’s camera to recognise barcodes, but found it wasn’t quite ready (unfortunately – getting rid of peripheral devices makes the whole web app approach a lot easier, and reduces our hardware costs). However, we did establish we could use an API rather than the SIP protocol (really good news – it means we can use simple https to connect to the LMS, while accessing all the data we hold on borrowers and titles). James produced some clickable prototypes that helped everyone picture what our own self-service app might actually look like:
I can’t recommend this kind of research sprint enough. We got a report, detailed technical validation of an idea, mock ups and a plan for how to proceed, while getting staff and stakeholders involved in the project – all in the space of 5 days.
(See Clear Left’s write up of the week.)
", "url": "http://localhost:4000/posts/5-day-sprint-clear-left-self-service/" }, { "title": "Kill the LMS – what a modern library digital presence should look like", "date": "2015-12-10", "categories": [ "libraries" ], "excerpt": "Libraries do digital badly. Replace the LMS with an open, queryable database that can talk to other apps that do one job well. Kill the LMS or be killed.
The term digital presence encompasses any point at which a customer or member of staff performs an activity that affects the library’s stock (which will include things like physical books, DVDs, eBooks and collections) and/or the customer’s account remotely. This will include:
What all of these points have in common is a screen that someone interacts with and a connection to a database. Most customer interactions with the library therefore involve some digital element.
For many years, and for many reasons, library services have bought a single piece of software that controls nearly all the library’s digital presence. This software is called a Library Management System (LMS).
LMSs make for poor customer experiences:
There are many reasons for this set of problems, some of them cultural:
Library staff are skilled in cataloguing, circulation and categorisation, rather than fields such as user experience, programming or design. They therefore often fail to see customer problems and don’t know what to look for in a good digital presence.
Furthermore, my experience of library culture is that it’s hierarchical, and the people responsible for procuring digital systems are simply the most senior staff, whose skills were more relevant to the pre-Google age.
LMSs offer a single solution to a complex array of problems, which makes them attractive to unskilled procurers.
Similarly, a good digital presence requires lots of integration between good quality systems: library services and local authorities have neither the will nor the expertise to carry out this integration.
These factors have combined to create an unhealthy ecosystem of providers and procurers. Because there’s been no incentive to improve libraries’ digital presence, LMSs and the established library equipment providers have been able to peddle essentially the same product for decades for huge amounts of money – easy government money.
On the procurement side, a cosy, internal LMS language and set of protocols have evolved that stifle innovation and discourage new players from entering the market. Take acronyms and terms like LMS, SIP, LCF, circulation, enquiry and OPAC. These are more often than not symptoms of problems such as proprietary protocols (SIP) and fractured user experiences (OPAC), but that’s the lingua franca of the library IT world.
In order to solve our customer experience problems, we need to get rid of the LMS altogether. Here’s what I think an alternative could look like:
The heart of any digital library is its inventory of assets and customers. Its digital presence is the sum of its interactions with this database.
In order to enable as many good quality providers as possible to build apps, websites and self service points, we need to make this database easy to query, and the data it returns easy to process.
All modern digital services publish an open API. Libraries should be no different.
In order to open our data to providers outside of the existing library ecosystem we need a secure, library-agnostic path. Again, as with all modern APIs, this should be done via http(s). When we use an API it should also return data in an open, popular format. The most obvious candidate is json.
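To make that concrete, a stock lookup might return something like this – every field name and value here is invented for illustration:
{
  "isbn": "9780141185263",
  "title": "Pale Fire",
  "author": "Vladimir Nabokov",
  "copies": [
    { "branch": "beccles-library", "status": "on-loan", "due": "2016-01-04" }
  ]
}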
By using agnostic, published APIs we open up the market to any number of expert providers, whether they do mobile apps, website design, search or CRM.
A handful of LMS and library equipment vendors currently provide all points of the library digital presence.
Even Google would find it difficult to provide a good quality CMS, CRM and email marketing all at once, let alone an LMS provider that hasn’t meaningfully updated its product for decades.
Our open API would allow us to choose the right tool for the right job in the library digital experience. For example, we could connect our database with:
Dealing with a single LMS provider and its hangers-on has a major advantage: it doesn’t require much work or thought. However obtuse and unagile your LMS vendor, they are at least familiar. You’ll probably be speaking to the same people even if you move to a different library service.
Procuring more, smaller services requires expertise and effort. You’ll also need to manage more commercial relationships.
However, the pay-offs should be huge. Let’s explore an example.
At the moment, library self-service machines are built by a handful of vendors, most of whom are closely linked to or even owned by the LMS providers.
All the hallmarks of our dysfunctional ecosystem are in place. The machines tend to be old, they use a proprietary SIP protocol to connect to the library database and they’re expensive.
Our current 70 or so self-service machines have come to the end of their natural life (they’re actually running Windows XP). The device cost (excluding support and installation) was around £5,000 each several years ago.
We can go to an existing provider for any number of options, but what I’d like to do is copy Argos:
The set up here is:
In a library we’d need to attach a barcode scanner to the tablet.
If we had a database with an open API and used https to connect to it, we could get any app developer to build a UI that would guide customers through checking library stock in and out. Our tablet would give us several advantages over the current system:
If you could get 70 tablets at a cost of around £9000 you’re going to be saving hundreds of thousands of pounds, even after you’ve taken installation, development, project management and testing costs into account. And you’ll have a better product.
You can see why the LMS and self-service machine vendors have no interest in opening up their data.
Or rather: the nuts and bolts of library systems are essentially the same as any retail operation. You have stock you track, and customers who interact with that stock.
LMS providers and procurers have created a proprietary world that excludes technological developments. The only winners have been the providers and their hangers-on and the unskilled procurers, while customers and frontline staff struggle with creaking, unfriendly systems.
We need to get over ourselves to move forward.
I’m an optimist at heart. The purpose and universal appeal of libraries should be stronger than ever in an increasingly commercial and atomised world. Digital experience could help customers from all walks of life take advantage of their library. We do have advantages over the Googles and Amazons of this world.
Changes are already in the air. The Libraries Task Force recently commissioned Canadian company Bibliocommons to write a report on what a national digital library service would look like. Bibliocommons interviewed me (along with several other library folk) while researching their report, and invited a few of us to a workshop at the Department for Culture Media and Sport to present their proposals back in September 2015.
However, at this moment we’re failing miserably to provide a good digital experience. In the age of Google and Amazon the competition is fierce: I can get a book on my device for a few quid and a handful of screen taps without leaving my house. If ever an area was ripe for disruption it’s library digital.
If we don’t offer a good experience to counter Google and Amazon, customers won’t bother with us. And who could blame them?
That’s deadly in an age of austerity. We need to make our case to as many people as possible, not just those who have to use the library service or have used it for years. If most customer interactions with us are digital, we have to make sure their experience is pleasurable and engaging from the off. At the moment digital adds very little to our case beyond some headlines about lending ebooks.
", "url": "http://localhost:4000/posts/kill-the-lms-future-digital-experience/" }, { "title": "The case against universal navigation", "date": "2015-08-30", "categories": [ "web" ], "excerpt": "", "content": "There is no reason to mention all features of the site on all pages. Instead, select a very small number of highly useful features and limit pervasive linking to maybe five or six things like search. — Jakob Nielsen Is Navigation Useful?
Website navigation is difficult. Labelling and organising content can be a nightmarish exercise in interpreting users’ idiosyncratic ways of conceptualising and labelling your services.
Long navigation lists are noisy, but omitting items suggests they don’t exist at all. Throw in a host of design problems caused by limited screen space, and you have a thorny set of questions to untangle.
We should be wary of answers that consist of simply getting rid of things, even if they come with a pleasingly minimalist pay off. Providing too little information for users is just as bad as overwhelming them – both result in frustrating experiences and, ultimately, users giving up and going elsewhere.
Still, it does make sense to at least ask whether we need a universal navigation menu on all sites on all pages all of the time. Here are some things you need to consider if you’re thinking of taking this approach:
Checking your analytics and watching people use your website can be a sobering experience. All those news items and sweating over the look and feel, all for someone to come along, find a plain old table containing a phone number they need and then leave, all in 10 seconds or less. Your bounce rate is in the 70-80% range.
Worse (or better, actually), they don’t even make it to your website. Instead, they google the number and get it from the top search result.
The chances are any given website visitor wants to do one thing only. They really don’t need to see Kessingland Library events if they just want to find out how much the library charges for overdue books. In other words, universal navigation is of little use as long as they can get to their information quickly. If they can’t get to their information quickly then they may reach for a navigation menu.
So if you’re ditching universal navigation make sure you:
This isn’t necessarily a bad thing. Designing home pages can be difficult, especially when you’re figuring out whether a link should appear in the navigation menu or the main content area – without a universal menu, there’s only a main content area to worry about. Duplicating links can be confusing as you present users with a choice they don’t need to make.
On the Suffolk Libraries website we’ve opted to limit the main navigation menu to top level sections. We found that users were overwhelmed by the sheer number of links when we also displayed subpages, but this can make deeper content hard to find.
We have a lot of content to structure on the site, and some is hard to find via the main navigation menu. When we did our card sort users often grouped services like the home library service, mobile libraries and the schools library service together. However, the label they used for this group varied widely, from out and about to external services and the somewhat unwieldy services provided outside of library buildings. In the end, we plumped for Community services.
If I run a Find information about the schools library service test the results are mixed. This isn’t necessarily a problem as users are often sent directly to the relevant part of the website via print materials they receive when they sign up for the service, and a Google search for Suffolk schools library service will take you straight there.
If you run a website for a varied, complex service you’ll inevitably make some parts of that service difficult to find by limiting the number of top level sections and by cramming content into places it doesn’t readily belong. A larger home page with a content area set aside for navigation makes it easier to expose more sections to visitors. Government websites are using this ‘mega home page menu’ so much it’s fast becoming a convention – see the Council Toolkit framework for an example of how it works.
Let’s say a user is looking to find and reserve a book. They don’t know which book, and spot the ‘New in and recommended’ section on the home page.
This is split into several subcategories (including fiction, local interest and staff picks). The user likes the suggestions and wants to explore more.
Without universal navigation you’ll have to use alternative techniques to allow the user to complete their task. Again, this is an opportunity to improve the user’s route through the website to task completion as you can concentrate on providing links that are relevant to their current task, rather than links to every section.
There are 3 main ways to guide users through the website:
As Jakob Nielsen says at the top of this article, some navigation elements may be relevant to all users, or at least a lot of them, all the time. Links to help, search, contacts and social media accounts are obvious examples; in the case of the library service using the catalogue and logging into your account often represent the final action in a chain of tasks.
Often these links are actions rather than sections of the website; in fact, you might describe them collectively as a toolbar rather than a navigation menu.
In my experience of watching users try and complete tasks on websites, navigation menus are rich in information scent. They’re concise, containing words users recognise – and because they’re such a well established convention their meaning takes little interpretation.
Although few users need access to all website sections from every page of the website, it’s reassuring to know they could reach a far away section, especially if they’ve ended up in the wrong neighbourhood.
Perhaps more importantly, universal navigation gives users an overview of what an organisation does. While it’s true most of them are only interested in doing one thing as quickly and effortlessly as possible, it would be nice to think we can show these users other aspects of our service.
As with all design, there’s a balance to be struck between competing user requirements. The positives of ditching complex, universal navigation perhaps outweigh the negatives. There are some websites where you really do want to be able to reach everything from anywhere – portfolios, for example – but for more complex, task-driven sites it will probably pay to make navigation focused and contextual rather than universal.
We’ve started moving away from displaying a complete navigation menu on every page of our site. For example, none of our microsites display it. The Summer Reading Challenge section displays a contextual sidebar:
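The sidebar itself is nothing clever – markup along these lines (a rough sketch with made-up link targets, not our production code):
<aside>
  <nav>
    <h2>Summer Reading Challenge</h2>
    <ul>
      <li><a href="/summer-reading-challenge/join">How to join</a></li>
      <li><a href="/summer-reading-challenge/events">Events</a></li>
      <li><a href="/summer-reading-challenge/faqs">FAQs</a></li>
    </ul>
  </nav>
</aside>
Because it only lists pages in the current section, it stays short enough to display in full.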
We’ve already removed most subsections from the universal navigation menu. It is worth bearing in mind that users get used to finding information in a certain way; when they come to your site they may head straight for a particular link without even thinking. We found this was the case when one user had got used to locating the Mobile libraries link in the universal navigation menu. She found the new link in the home page content area, but not before giving us some feedback. When you change your website you will always upset some users, as you’re forcing them to relearn your UI.
However, we feel the benefits will outweigh any short term annoyance – we’ll remove our universal navigation menu sometime soon.
", "url": "http://localhost:4000/posts/case-against-universal-navigation/" }, { "title": "Do not always fear the nav bar on narrow screens", "date": "2015-05-23", "categories": [ "web" ], "excerpt": "Users scan pages for text that might help them complete their task. Navigation bars and lists are rich in information scent, so it makes sense to avoid hiding them whenever possible.", "content": "You should display your whole navigation menu on narrow screens if possible. While hamburger toggle buttons and their ilk solve a problem neatly, the best navigation menu is visible by default.
When we design websites for small screens, navigation can cause a problem. If we use a long, vertical list of navigation links, we risk it taking up most or even all of the screen. If we go horizontal, the menu can extend beyond the right edge of the screen. For example, this is what The Telegraph’s menu looks like if you narrow your browser window:
There are lots of ways to tackle this problem. The most common is to toggle the navigation list by pressing a button – you’ve probably seen some variation of this hundreds of times:
I’m not interested here in the rights and wrongs of hamburger menus. But I think it is worth remembering two things about website navigation on narrow screens.
Firstly, we’d rather not use hamburger icons at all. The simplest, most accessible navigation menu is… a list of links. I’ve spoken to several Suffolk Libraries website users who simply have no idea what pressing the icon does, or that it’s even some sort of toggle button. Although it’s inventive and reasonably whizzy, the hamburger and its ilk create a tension between competing user problems: the awkwardness of scrolling horizontally and the findability of the navigation menu.
Secondly, screen real estate is valuable on a mobile, but visible navigation menus make a site more usable. Again, there’s a tension between making content difficult to reach by displaying a long navigation menu on a narrow screen and hiding it behind a button click.
I’d suggest designers currently err a little too much toward hiding the navigation menu. If your navigation menu consists of 6 or more links then you probably do need a hamburger or something like it, but there are ways to display shorter menus on a narrow screen.
At the time of writing, I’m not hiding my navigation menu behind a toggle button on narrow screens. Instead, I’m displaying all four links in rows of two:
On a widescreen you get a traditional horizontal menu:
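The CSS behind this is short. Here’s a simplified sketch of the approach – assuming the links sit in a plain list – rather than this site’s actual stylesheet:
/* narrow screens: four links in rows of two */
nav ul { display: flex; flex-wrap: wrap; list-style: none; margin: 0; padding: 0; }
nav li { flex: 0 0 50%; }
/* wider screens: one traditional horizontal row */
@media (min-width: 40em) {
  nav li { flex: 1; }
}
Each link takes up half the row on a phone, so four links wrap neatly into two rows; once there’s room, they share a single row.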
Most blog navigation menus consist of a handful of links which could comfortably fit on a narrow screen. Yet you see a surprising number of hamburger menus in the blog world – perhaps bloggers could experiment with visible menus a little more.
", "url": "http://localhost:4000/posts/do-not-fear-the-nav-bar/" }, { "title": "A guide to using Markdown to write blog posts", "date": "2015-05-11", "categories": [ "web" ], "excerpt": "Markdown is quick, easy and portable. If you're writing for websites, you really should learn it.", "content": "Markdown makes it possible for anybody to structure their text and publish it in an electronic format. The most common electronic format is HTML, but you can also use Markdown to produce other formats, such as PDF.
For the purposes of this guide we’ll stick to HTML.
Markdown is simple, quick, portable and efficient. To really appreciate why you should use Markdown, it’s best to take a look at the old-fashioned way of publishing websites.
If you publish a web page you publish HTML. This may come as a surprise – when you wrote that blog post you didn’t write this:
<h1>Why writers should use Markdown</h1>
<p>Markdown is quick, simple and portable. It's also extremely elegant and very cool. You need to know Markdown.</p>
<p>WYSIWYGs are the past. They're slow and bloated.</p>
But that’s what you ended up publishing.
What happened?
Let’s say you use WordPress. When you write a post in WordPress you use a What you see is what you get (WYSIWYG) editor, which is like a simple word processor. You write text, select bits of it and then make those bits a heading, list or quote.
WordPress then takes your text and converts it to HTML. Why? Because browsers need HTML to understand the structure of your article. HTML is the language of web pages.
HTML isn’t code. Anyone can look at HTML and understand what it’s doing.
Take our example. The text between the <h1> </h1> tags is a first level heading, which the browser will display in a large, bold style. The text between the <p> </p> tags is a paragraph (p for paragraph, see), which the browser will display in a normal style.
There are 5 or so commonly used tags. Once you know these you can pretty much write HTML.
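If you’re wondering, my own shortlist would be <h1>, <h2>, <p>, <a> and <ul> with its <li> items – learn those and you can structure most pages.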
However, writing HTML is laborious. As most people have some experience of using a word processor, it makes sense to replicate the Word experience online.
Writing HTML is simple, but it is important to follow the rules.
When you use a WYSIWYG editor, it’s easy to get hung up on the text’s appearance rather than its structure.
In order to make their text look right, writers can misuse headings, quotes and blockquotes. A first level heading not only looks bigger than a paragraph, it has a different meaning to a browser.
This might not seem a huge problem, but think about who – and what – might read your web page.
If it’s a visitor using a screen reader a misplaced <h1> will make your page confusing. If it’s a search engine trying to figure out what your page is actually about, it may get the wrong idea, making your page hard to find.
Let’s say you have to move to a new blogging system, or you want to publish some of your blog posts in another format, such as PDF or ePub.
If you’ve used the WordPress WYSIWYG editor all your posts are in WordPress’s own format.
Now, WordPress is a good web citizen. It makes it easy to export your posts in an open XML format. But you’ll still need to find a tool that can convert this complex XML into your new system’s format, or into another file format.
It’s far easier to move plain text, Markdown or HTML between systems and formats.
You can also store all your content on a USB or in a Dropbox folder.
If you write a lot chances are you’ve got used to using your keyboard to select bits of text and copy and paste.
Although WYSIWYG editors are quicker than writing HTML, you still have to navigate your way round a web page and select formatting options from a dropdown menu. You’ll be switching between keyboard and mouse a lot.
The more you keep to the keyboard, the more efficient your writing.
You write Markdown in a text editor. Think of it as an easy-to-use shorthand for HTML.
Take our HTML example. In Markdown, it would look like this:
# Why writers should use Markdown

Markdown is quick, simple and portable. It's also extremely elegant and very cool. You need to know Markdown.

WYSIWYGs are the past. They're slow and bloated.
Note the # we used. That tells the Markdown editor (or the program you’re using to convert Markdown to HTML) to wrap the following line in <h1> </h1> tags.
Markdown knows you’re writing a paragraph when you leave a line between blocks of text.
To make a second level heading, we use two hashes (##):
## This is a subheading
There are lots of little characters we use to structure our text. For example:
> This makes a blockquote. A blockquote is a long piece of quoted text. Blockquotes are often indented and occasionally italicised.
Which a browser will make look something like this:
This makes a blockquote. A blockquote is a long piece of quoted text. Blockquotes are often indented and occasionally italicised.
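Links and lists follow the same pattern. For example (my own quick illustrations, not part of the original syntax tour):

[Suffolk Libraries](http://www.suffolklibraries.co.uk)

- the first bullet point
- the second bullet point

The first line becomes a link wrapped in <a> </a> tags; the dashed lines become a bulleted list built from <ul> and <li> tags.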
Once you understand Markdown’s syntax it’s easy – and quick – to write web documents. Using a WYSIWYG editor will seem cumbersome.
Markdown mirrors HTML. When you write in Markdown you’re thinking about structure and meaning, not appearance. Quality control is easier in Markdown than WYSIWYG.
OK, so that all sounds grand. You’ve written your first blog post in Markdown in record time. How do you actually convert it to HTML and get it on the internet?
You’ve got a few options. You can either do the conversion before you open your blogging software, or you can do it in your blogging software.
Let’s take a look at WordPress again.
You can write Markdown in any text editor, but you’ll need some software to convert it to HTML.
The best way to do this is to use a Markdown editor, which’ll let you edit and convert in one program. Lots of editors will show you what your HTML will look like as you write Markdown.
There are lots of good Markdown editors out there. I use:
The process is simple. Write in Markdown then choose Export from your file menu. At the very least you’ll get the option to export HTML. Some editors also let you export to a PDF file.
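(If you’re happy at a command line, a free converter like pandoc will do the same job in one step – for example, pandoc post.md -o post.html – but an editor’s export button is friendlier for most writers.)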
Once you have some HTML to work with, log in to your website, select New post and paste your HTML into the Text tab in the WYSIWYG editor pane. Press Publish and you’re done.
You can bypass Markdown editors altogether. This will make for a more streamlined workflow, but you won’t have any local copies of your blog posts. Remember, one of the weaknesses of WYSIWYG editors was the lack of portability.
You’ll still write better quality posts more quickly.
However, if exporting and copying and pasting HTML is too time consuming, then it makes sense to use Markdown in WordPress. To do that, you’ll need a plugin. You could try wp-markdown.
This time, you’ll log in to your website and select New post. Instead of using the WYSIWYG editor you’ll choose the Text editor and write in Markdown.
Blogeasy offers the best of both worlds. Write local Markdown files and hit the Publish button to send them straight to your WordPress-powered website.
As you’ve probably figured out, the Markdown/WordPress workflow isn’t perfect at the moment. Although there are some advantages to using a Markdown editor you may well find the whole export/copy/login/paste process too time consuming.
There are some desktop apps out there that let you write WordPress posts in a WYSIWYG editor on your computer before uploading to your site. A few (such as Blogeasy) let you write in Markdown. This is ideal.
We need more Markdown editors that connect directly to websites, whether they’re built on WordPress, Blogger or Drupal.
Alternatively, you could try a different blogging engine. There are lots of static site generators out there which take Markdown files and convert them to complete websites. The likes of Jekyll (which runs this site) are super fast, secure and portable, but they do take quite a bit of technical know-how.
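To give you a flavour, a Jekyll post is just a Markdown file – named with a date and kept in a _posts folder – with a small metadata block at the top. A minimal sketch:

---
layout: post
title: "Why writers should use Markdown"
---

# Why writers should use Markdown

Markdown is quick, simple and portable.

Run jekyll build and every file like this becomes a page of a complete, static website.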
", "url": "http://localhost:4000/posts/writing-markdown-blogging-guide/" }, { "title": "Tabbed navigation didn’t work on our site", "date": "2014-07-10", "categories": [ "web" ], "excerpt": "Tabbed navigation looks a good idea, but it often causes confusion. Generally, it makes more sense to publish a single, visible, well-organised document.", "content": "I was using tabbed navigation on a couple of the Suffolk Libraries website pages. By tabbed navigation I mean a single document’s content divided by horizontal tabs, so you don’t see it all at once.
It’ll look something like this:
Tabs might seem a good idea because they break content up, thereby making it easier to interpret. But it’s the single design element that got the most complaints after my redesign.
It’s not that users didn’t get how to use the tabs (although initial testing indicated the labels had to be clearly styled as links – in the picture above, users might have struggled to identify the tabs). No, it was simply because tabs can break the back button. Users see each tab as a separate page, when in reality they’re just parts of the same, single page. Although it might seem boring, I found in most cases it makes users’ lives a lot easier if you actually make each tab a separate page, or plump for a single, long document and find other ways to present it.
", "url": "http://localhost:4000/posts/tabbed-navigation-didnt-work-on-our-site/" }, { "title": "3 things to do with a website content audit", "date": "2014-04-22", "categories": [ "web" ], "excerpt": "Keep your website up to date, lean and organised.", "content": "A year or so ago I posted a guide to carrying out a content audit. At the time I was getting to grips with the Suffolk Libraries website, and the audit was a really useful way to understand the library service and what we were trying to do online.
Now we have a (hopefully) sane IA and most of the redundant content has been stripped away, the audit has a different purpose. I use it to prune pages and make sure everything’s up to date. The original audit took days; now it takes an hour or two.
So here are 3 things you can do with a content audit once you’re happy with your website’s structure:
Staff move on, go on paternity leave or get promoted. Your organisation structure changes too. Update your content owners accordingly.
The Suffolk Libraries website consists of 144 static pages, including redirects. This is manageable for one person.
Fewer pages and files make it easier to:
So the first question you need to ask of any page or file is do we need this? If you think the answer’s no, check your analytics and search logs and ask the content owner.
You’ll probably have an idea of what content needs updating, but the audit will force you to review everything. It’ll throw light on those obscure corners of the website you’d forgotten about.
Information ages quite naturally, but sometimes something unpredictable happens. For example, a new government might make changes to the National Curriculum, so your children’s reading advice pages might need updating.
Often content owners will come to you with changes, but it’s ultimately your responsibility to make sure everything’s up to date. And it’s a good thing when you show enough interest in other people’s content to ask questions about it.
The first content audit I did for Suffolk Libraries was a major piece of work, but it was really important to get a handle on the website. Now it’s a relatively painless way to keep the site lean and healthy.
", "url": "http://localhost:4000/posts/3-things-to-do-with-a-website-content-audit/" }, { "title": "Responsive versus works in IE6", "date": "2014-03-08", "categories": [ "web" ], "excerpt": "Making your site responsive will get you more accessibility points than making it work in IE6.", "content": "Stats from 4 days’ worth of Suffolk Libraries website traffic:
The Suffolk Libraries website is responsive. Cost to build: £0 – I did it, so I guess there’s my web manager salary, but I do all the other web stuff too.
Let’s look at another library service’s website (or, more specifically, another library’s website): The Library of Birmingham. Cost to build £1.2m. Annual running costs £190,000. I don’t earn £190,000 a year.
Outsourcing isn’t always cheaper.
The Library of Birmingham’s website isn’t responsive.
It does, however, work in Internet Explorer 6. They did ask for it to work in IE6.
Which is the most accessible?
", "url": "http://localhost:4000/posts/responsive-versus-works-in-ie6/" }, { "title": "Library website visits increase 16% in 8 years—is that all?", "date": "2013-09-28", "categories": [ "libraries" ], "excerpt": "According to a pretty slapdash Guardian piece, visits to libraries have declined 12% since 2005–2006.
", "content": "According to a pretty slapdash Guardian piece, visits to libraries have declined 12% since 2005–2006.
(Actually, you’re better off just reading the Taking Part report (PDF) yourself.)
Reword the oddly negative standfirst – try “36.2% of Britons visited a library in the last year” instead – and the picture isn’t quite as bleak as the article makes out. You could also point to ambitious projects in Birmingham and Manchester.
But there is a clear downward trend.
Better minds than mine will explain the reasons for this, but I suspect it’s a combination of three things:
These factors affect each other. It’s difficult to come up with a ‘vision’ for libraries while the service is being cut.
On the plus side (maybe) visits to library websites are up. In 2005–06 8.9% of the public visited a library website. In 2012–13 the figure stood at 16.9%.
This is a somewhat murky figure – it doesn’t seem that high to me – although the report describes it as “a significant increase”. I suspect Amazon’s visitor figures are slightly more impressive.
Nonetheless, there’s a trend here. Fewer people are visiting libraries; more people are visiting library websites.
Apart from me, obviously :-)
If more people are visiting library websites this is good news for the library service and the public. Whether you borrow a physical or virtual book, you’ll still read the thing.
Yet how much effort is put into the website ‘vision’? While Manchester and Birmingham build awesome new libraries, their websites remain dire: splintered, unfriendly, difficult to use. A joke compared to Amazon and Google.
The most important functions of a library website are to search, reserve and download books. Yet catalogue searches are literal, frustrating experiences, light years behind Google, Amazon et al.
The ebook service has been outsourced with nary a thought – the biggest ebook provider isn’t even based in the UK, while catalogues sit in several separate places, away from the library’s website. You can’t download a UK library book to a Kindle.
Maybe that’s why the increase has only been 8 percentage points. It makes perfect sense that more people use library websites – but the increase should be in three figures.
", "url": "http://localhost:4000/posts/library-visits-up-only-16-percent/" }, { "title": "Libraries should invest millions in search engines", "date": "2013-06-24", "categories": [ "libraries" ], "excerpt": "Libraries are under threat, and not just from ideological governments.
", "content": "Libraries are under threat, and not just from ideological governments.
Borrower numbers for physical books have been declining for years. That’s because going to a library to find and pick up a book seems a more and more anachronistic way to spend your spare time.
I make no comment on the rights and wrongs of this, but it is unarguably a lot easier to buy a book from Amazon and have it sent to your Kindle in seconds than it is to walk into town and collect a physical book.
The reason so many library users are in their 60s isn’t just because they don’t get modern technology. It’s also because they’re one of the few groups that has the time or the inclination to pop into the town centre just to pick up a book.
Regardless of the politics, this social shift is a threat to the current library model.
One obvious consequence is that libraries need to move online. And they have done this – in a somewhat cackhanded way.
Compare the experience of buying a book from Amazon with borrowing an ebook. It may seem an unfair comparison, but that’s the competition.
Let’s say I want The Great Gatsby. I go to Amazon and start typing Great Gatsby:
Amazon has done a huge amount of work here for me. Just by typing great it’s:
I don’t even need a search results page.
This is a stupendous bit of search engineering that handles topical relevancy, text matching, alternative suggestions and media format instantly from one search box.
I can have the book on my Kindle within seconds and three clicks for 79p.
Try the same search on your average UK library site. Let’s try a big one: Manchester (note that you could choose any library for this experiment).
Now, Manchester’s site looks great, but we have to click through to reach the search box. If you’ve done any research on library website user tasks you’ll know that finding and reserving books is by far the most popular task. That’s unnecessary friction (remember, I didn’t even need a search results page on Amazon to make decisions).
Anyway. Take a look at a search for The Great Gatsby via the separate OPAC (a different website – yet more friction):
In this case the search engine is doing zero work for me. In fact, I have to read something in order to use it properly, tell it where I’m looking and what format I’m interested in. And under no circumstances must I use more than four words.
It does find me the books, but I’m still a long way from getting them on to my ereader (it won’t be a Kindle, but that is Amazon’s fault).
I’ll have to go to a third website (friction, friction, friction), download a file, run that through Adobe Digital Editions and finally transfer the book from my PC to the ereader.
In three weeks I won’t be able to read it any more.
Compare this with the frictionless, easy Amazon experience. Which are people going to choose?
Although this might make you feel gloomy libraries have three big advantages over Amazon:
Libraries do have an advantage over companies like Amazon. But they’re facing stiff competition from an industry that understands the importance of making the search experience as frictionless and convenient as possible.
Libraries need to start investing serious money in search and catalogue user experience. They could start by asking more from their catalogue providers.
", "url": "http://localhost:4000/posts/libraries-should-invest-millions-in-search-engines/" }, { "title": "Extra form fields – Make sure they benefit the user", "date": "2013-06-06", "categories": [ "web" ], "excerpt": "Something to look out for when your work CMS has a forms module.
", "content": "Something to look out for when your work CMS has a forms module.
It’s too easy to expose your organisation’s processes to the user and ask them to do your work.
An example:
You run a website for an organisation with 50 branches. You’ve set up a form which is sent to someone in the organisation to forward to the appropriate branch. A lot of customers use it, which saves the organisation time answering phone calls and emails.
Then you think you could make things more efficient by cutting out the middle man.
Luckily, your CMS’s form module includes a nifty little feature where it can send emails to different addresses based on the user’s choice from a dropdown list. In this case, you present a choice of 50 branches. The user selects a branch and the email gets routed appropriately.
Sounds good, apart from the fact you’ve shifted work to the user without providing anything in return.
Now, some might argue that adding a field to a form isn’t a big deal. But consider the additional work a dropdown list of 50 branches entails:
Filling in forms on and offline is a joyless experience that involves interpretation, repetition and a degree of dexterity. Think of all the times you’ve had to complete an endless job application form which asks you about every job you’ve ever had.
Unless your user can see they have to fill in a field in order to make something work, adding friction simply means they’re more likely to bail out. At the very least, you’re making their experience of your website more miserable.
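To picture what that nifty routing feature asks of the user, the markup looks something like this (a sketch – the branch names are illustrative, not any particular CMS’s output):
<label for="branch">Which branch is your enquiry about?</label>
<select id="branch" name="branch">
  <option>Beccles</option>
  <option>Bungay</option>
  <option>Bury St Edmunds</option>
  <!-- ...47 more options for the user to scan and scroll through -->
</select>
The organisation saves one forwarding step; the user pays for it by hunting through 50 options.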
", "url": "http://localhost:4000/posts/extra-form-fields-shoul-benefit-the-user/" }, { "title": "Your website is not as important as your catalogue", "date": "2013-05-22", "categories": [ "libraries" ], "excerpt": "And your users don't distinguish between the two. Poor web skills and procurement processes result in poor user experience.", "content": "Your website is not as important as your catalog. This is a fact. We’ve asked many nonlibrarians about what they do on library websites, and the usual response is “Place reserves on books.” This is subtly different from how we think of our websites and catalogs (i.e., as distinct things). So, either our users see the two as the same thing, or they ignore our websites and just use our catalogs. Looking at website analytics suggests the latter. User Experience (UX) Design for Libraries
Your common sense, analytics and visitor research will all point to this fundamental truth. And it’s an awkward truth, because chances are you have virtually no control over the public face of your catalogue. You probably didn’t even have a say in which catalogue your library chose, because that’s the concern of the stock unit.
It’s also difficult to change. The people you work with are probably responsible for all sorts of things in your library that can be done online: marketing, writing news, arranging events, running friends groups, setting up a reading group, setting up reference services. Anything but sorting out the search for the public—unless they deal with a member of that public who hasn’t been able to find a not particularly esoteric text on your site.
", "url": "http://localhost:4000/posts/your-website-is-not-as-important-as-your-catalogue/" }, { "title": "Library websites, catalogues and their poor UX", "date": "2013-04-13", "categories": [ "libraries" ], "excerpt": "I started working for Suffolk Libraries last week. Over the next few months I’ll be rebuilding our website. If you take a look now, you’ll see why.
", "content": "I started working for Suffolk Libraries last week. Over the next few months I’ll be rebuilding our website. If you take a look now, you’ll see why.
Libraries and the internet seem to share an uncomfortable relationship. I can see a few reasons for this. The main one is that Online Public Access Catalogues (OPACs) are not designed to slot into existing websites. They live as separate entities.
That’s why you’re shunted off to a subdomain or even a different website whenever you search for a title, whichever library website you try:
This strikes me as exceptionally poor UX. If I go to find a book on a library website I expect to be able to find it on that site. Navigating across websites is confusing. And isn’t a library essentially a collection of texts? Why aren’t they actually housed in the library?
Libraries have tackled this problem in two ways:
In an ideal world users would tap in their query and the website would display a list of search results. You’d expect this to be easy enough to implement even if the website itself didn’t store the database records. That’s what APIs are for.
But here we run into a second problem. Suffolk Libraries is unique in that it is a completely separate entity from its council. But its OPAC still feels like something from the world of local government – it looks dated and is difficult to talk to. If you were able to come up with a library management system with an easy, open API you’d make a killing (there’s a business idea for you).
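To make ‘an easy, open API’ concrete, here’s roughly how a library site could search its own catalogue without shunting users elsewhere – a sketch against an entirely made-up endpoint:
<form id="search"><input name="q" placeholder="Search the catalogue"></form>
<ul id="results"></ul>
<script>
document.getElementById('search').onsubmit = function (e) {
  e.preventDefault();
  var xhr = new XMLHttpRequest();
  // api.example-opac.org is imaginary – no real OPAC offers this, which is the point
  xhr.open('GET', 'https://api.example-opac.org/search?q=' + encodeURIComponent(this.q.value));
  xhr.onload = function () {
    var list = document.getElementById('results');
    list.innerHTML = '';
    JSON.parse(xhr.responseText).results.forEach(function (book) {
      var li = document.createElement('li');
      li.textContent = book.title + ' – ' + book.author;
      list.appendChild(li);
    });
  };
  xhr.send();
};
</script>
One page, one search box, results in place – no subdomains, no second website.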
It seems that providing services across disparate domains and services has become a habit. That’s why you’ll see yet more subsites popping up, such as ebook provider Overdrive.
I suspect this will pose the main challenge in my website redesign. Is there anyone out there with experience of dealing with OPACs? I guess screen scraping is an option. Or is there actually a way to do subdomains and separate websites well?
", "url": "http://localhost:4000/posts/library-websites-catalogues-and-their-poor-ux/" }, { "title": "Online newspaper layout – 10 years and 10 steps back", "date": "2009-06-28", "categories": [ "web" ], "excerpt": "This image has been doing the rounds on Twitter recently:
", "content": "This image has been doing the rounds on Twitter recently:
[Image: Online newspapers, now and then (from http://imgur.com/gQouk.jpg) – more ads and more stuff in 2009.]
I think it’s hard to deny the point the picture’s making: there’s too much worthless stuff in modern online newspaper design. Of course, it’s not all bad: the text in the old article would probably be set in 8 pixel Verdana with 1.2em leading and a measure of 35 words. The ads would flash ‘YOUR COMPUTER HAS BEEN INFECTED WITH A VIRUS!!!!! CLICK HERE TO WIN!!!!’.
Now we have sensible typography, slightly more tasteful ads (on the whole) and really very clever grids.
I think we can ascribe this visual confusion to four causes:
newspaper publishers don’t trust readers to visit their site and find information, so they put (and I’m not exaggerating in the case of The Guardian) 75+ stories on the front page
publishers need to generate revenue through advertising and, while there’s nothing wrong with ads per se, online advertising is still relatively immature and driven by click rates. Ads therefore tend to flash, intrude and use OVER-EXCITED language to try and attract attention among all the newspaper stuff.
publishers are in awe of user-generated comment and social media. Unfortunately, the current thinking seems to be that concepts like community are commodities: the more you have, the better
this mass of stuff has coincided with the development of complex CSS techniques that mirror print conventions, such as grid-based design. As publishers want to publish lots of stuff they’ll happily use 16 columns
[Image: New York Times – the basic typography is great, but the text has to snake around the ‘social’ options and the adverts.]
So how could we improve online newspaper design?
The print versions of newspapers provide a clear, simple pointer as to how online content could be organised. The Guardian’s website has a header navbar with 25 links. The print version is divided into six clear sections: news, sport, business, commentary, listings and G2. By dividing the website into a manageable number of sections and exercising a bit more editorial judgment on the front page, readers could be guided through content, and the layout simplified.
Some comments are insightful. Lots are not. And it’s nearly impossible for readers to sift through several hundred of them. Newspapers need to rethink how comments are accepted, limit comments to email correspondence or scrap them altogether. Rejigging the comments system is perhaps the most palatable option. Publishers could direct comments to a separate message board area where commentators could post new threads and respond to existing ones.
Alternatively, online publishers should once again turn to the print world for guidance and moderate comments so that only the representative and/or particularly useful remain. In time, commentators will only comment if they’re particularly inspired or knowledgeable enough to do so, and the general quality of the commentary will perhaps increase.
[Image: The Times – again, the basic typography is good, but too many columns and too much ‘stuff’ means the ads have to fight for attention.]
If we can reduce the amount of content on our pages it’s just a short hop to simplifying the layout. Just because we can implement a 12 column grid doesn’t mean we have to use all of it. Although it might be boring, a two-column, content/asides layout is really easy to read, and has the added bonus of being familiar to most readers.
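In practice that’s just two floated columns – a sketch of the idea, not any particular paper’s stylesheet:
/* the boring, readable content/asides layout */
#content { float: left; width: 62%; }
#asides { float: right; width: 33%; }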
And if ads aren’t fighting content for attention then it should be possible to employ more subtle advertising techniques. Animated Flash movies make us think we’re looking at annoying adverts. Once again, online publishers can take their lead from the print world, where the click-through isn’t king, and the idea of communicating a brand is sometimes just as important as generating direct sales.
Not all online newspapers are as cluttered as others. The NY Times, for example, employs an understated typography and uses (mainly) just two columns when displaying individual stories. It uses too many contextual options and social media links (in my opinion), but it is far more readable than The Times and The Guardian.
", "url": "http://localhost:4000/posts/online-newspaper-layout-10-years-and-10-steps-back/" }, { "title": "FT redesign: modern, readable and accessible", "date": "2008-11-20", "categories": [ "web" ], "excerpt": "Reading The Guardian’s tech supplement this morning, I came across a rather intemperate criticism of the FT’s home page redesign. Andrew Brown (when he’s not arguing that dyslexia is a condition suffered solely by the lower orders) makes the following comments about the FT’s home page:
", "content": "Reading The Guardian’s tech supplement this morning, I came across a rather intemperate criticism of the FT’s home page redesign. Andrew Brown (when he’s not arguing that dyslexia is a condition suffered solely by the lower orders) makes the following comments about the FT’s home page:
The overriding theme is that it has been designed for people who can’t and ‘don’t want to read’. There is a surprising (bearing in mind it’s in The Guardian) snobbishness here; it reads more like The Telegraph.
All nonsense, of course, and the design should be saluted for breaking from the current convention for super-complex grids, content-overload and headers stuffed with 75 links.
So here’s what the FT does well:
In short, all great things that The Guardian’s site isn’t.
It’s perhaps a surprise that such a traditional, conservative publication has led the way in designing a home page that is fully aware of the constraints and possibilities of the medium, while Britain’s greatest liberal paper adopts such a narrow, backward looking view. It’s the FT that’s leading newspaper design into the modern era.
", "url": "http://localhost:4000/posts/ft-redesign-modern-readable-and-accessible/" }, { "title": "Simplifying The Guardian’s header", "date": "2008-09-20", "categories": [ "web" ], "excerpt": "I’ve had a stab at redesiging The Guardian’s home page before. I thought I’d take a different tack this time by concentrating on one part of the page: the header.
", "content": "I’ve had a stab at redesiging The Guardian’s home page before. I thought I’d take a different tack this time by concentrating on one part of the page: the header.
If we accept that simplifying design equates to reducing the choices available to the user to a few core, meaningful options, then the header fails because:
Summary: too many options, probably because the designers felt everything must be accessible within one click from the home page.
No frills, just a simplification of the header’s purpose. I’ve used titles to further guide readers through the navigation — a simple technique, but one that is really helpful and easy to implement.
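In markup terms, that’s nothing more exotic than a title attribute on each navigation link – my wording here, not The Guardian’s:
<a href="/sport" title="Football, cricket, rugby and more">Sport</a>
<a href="/culture" title="Film, music, books and art">Culture</a>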
I’ve also dropped the .co.uk from the masthead as it only serves to devalue the online brand (i.e. it suggests that readers haven’t encountered the ‘real’, proper Guardian).
A lot more scannable and purposeful.
", "url": "http://localhost:4000/posts/simplifying-the-guardians-header/" }, { "title": "6 newspaper writing techniques for the web", "date": "2008-07-17", "categories": [ "web" ], "excerpt": "Concision, chunking and emphasis will make your web writing more effective.", "content": "Tabloid newspaper and internet readers share similar goals when they approach their respective texts. Research shows that website users scan pages and search for cues in order to locate the information they are looking for. Tabloid readers are offered a wide range of typographical cues to help them comprehend ** and **organise the text. Here are a few newspaper techniques, with some notes on how they might be applied to web texts.
One of the great arts of (British) tabloid journalism is the well-turned headline. From the brutal, pub speak of ‘Gotcha!’ to the more witty ‘How do you solve a problem like Korea?’ and ‘Super Cally go ballistic, Celtic are atrocious’, the headline’s purpose is manifold: on one hand it attempts to make the reader take a closer look at the story, while on the other it makes him or her try and figure out what the story is about. Unfortunately, web headlines may serve a far more mundane purpose: web readers don’t have the time or inclination to figure out what they are about to read: a headline should simply summarise the text in as few words as possible.
Straplines run across the top of the page. They serve as a short (3-6 word) summary of the text and often employ an informal tone so as to entice the reader to continue with the text. Straplines are difficult to implement in web pages (although not impossible using absolute positioning), but may lose some of their meaning if the styling is stripped away from a page, which would cause an accessibility problem for readers using a screen reader.
A crosshead is a word or short phrase taken from a text and then used as a heading. Crossheads serve two purposes: they give a clue as to the succeeding content and entice the reader to explore the text in more detail. These examples give an idea as to their purpose: ‘plagued’, ‘begged’, ‘stabbed’. Used carefully, crossheads serve as an excellent method of summarising information, and they break up hard-to-read swathes of text; however, care should be taken that their meaning is closely related to that of the succeeding text. Readers won’t appreciate being hoodwinked if what they go on to read isn’t quite as dramatic as they were led to believe.
Standfirsts are short (1 or 2 sentence) summaries of the complete text. Newspaper convention dictates that they are set in bold. Standfirsts are an excellent idea for web texts: they enable readers to decide whether the article is something they are interested in. Bolding them is a good idea, but they can be displayed in other creative ways. The WordPress equivalent is an excerpt, which can be displayed pretty much anywhere within a web text (for example, under a headline on a blog’s front page).
Pull quotes are literally quotes that are pulled from the main flow of a text. They are often set in a box and given a different font size and face from the body text; the CSS float property is particularly useful when adding them to web texts. Again, they serve to give a clue as to the text’s content and entice the reader to examine the text in more detail. They also help break a text up by providing variation in layout, size and font face. Unfortunately, they cause an accessibility problem: without CSS they are not distinguished from the rest of a text, which means they simply break up the logical sequence of the writing.
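A pull quote needs only a couple of CSS rules – a sketch:
.pullquote { float: right; width: 12em; margin: 0 0 1em 1em; font-size: 1.3em; }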
Tabloid newspapers generally limit paragraph length to a maximum of 3 sentences, often 1 or 2. This breaks up texts into easily comprehensible chunks, both semantically and visually. They provide an excellent convention for web texts.
As we can see, newspapers offer web writers a wide range of techniques to improve their writing. Not only do they help break texts up into comprehensible chunks, they also provide visual interest and variation. Concentrating on writing techniques as opposed to layouts will prove more productive to web authors.
", "url": "http://localhost:4000/posts/6-newspaper-writing-techniques-for-the-web/" }, { "title": "5 web design sins from the experts", "date": "2008-06-15", "categories": [ "web" ], "excerpt": "As you can probably gather from this blog, I’m interested in the areas of usability and accessibility and how they relate to web design. There are some fabulous web designers out there, but the age old apparent conflict between attractiveness and accessibility sometimes flares up on their own web sites. Here I present 5 areas in which even the experts could improve:
", "content": "As you can probably gather from this blog, I’m interested in the areas of usability and accessibility and how they relate to web design. There are some fabulous web designers out there, but the age old apparent conflict between attractiveness and accessibility sometimes flares up on their own web sites. Here I present 5 areas in which even the experts could improve:
A perennial favourite. I think designers often find large, sans-serif text ugly. So a surprising number of sites still use really small text:
(From Mark Boulton’s web site).
Small text is difficult to read, even when given a generous leading. And 11 pixel Lucida is too small.
Designers can be very creative when they style their links, but there are some very sensible guidelines that should be followed in order to ensure they are clear to all users (according to Wikipedia, 6% of males are deuteranomalous). If designers use red or green as their link colour they should ensure that the link is underlined, otherwise it becomes indistinct from the rest of the page:
This is an example from iA: superb designers who write extremely entertaining, oftimes controversial articles about design and the web. But can you spot the link? This is how some colour-blind readers see the page (screenshot made possible by the essential ColorOracle).
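The remedy is cheap: keep the underline whenever colour is doing the identifying. A sketch:
a { color: #c00; text-decoration: underline; }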
The web and HTML are inherently flexible and copyable phenomena. If I read something I like on the web I should be able to copy it and manipulate it as I please (in a Word document, for example). Unfortunately for designers, there are only a few fonts that are ‘safe’ to use across the web. To get round this designers will go to extraordinary lengths, perhaps by using images instead of text:
(This is from Daniel Mall’s site - normally bold in its adherence to good old simple text and typography.)
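A font stack is the copy-friendly alternative: ask for the font you want and let the browser fall back to a safe one, so the text stays selectable. A sketch, not Daniel Mall’s actual CSS:
h1 { font-family: "Helvetica Neue", Helvetica, Arial, sans-serif; }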
I’ve written about this before. Unless it’s being used as a large heading, Times is inappropriate for the screen. Georgia is a wonderful thing.
(From the New Yorker.)
I see this often when designers use dark backgrounds. Contrast is important for all users, not just the visually impaired:
(From Bright Creative.)
So we can see that designers often make decisions that affect the usability and accessibility of their sites. Is this a disaster, or quibbling?
", "url": "http://localhost:4000/posts/5-web-design-sins-from-the-experts/" }] }