South by South West

I kept saying to myself that I’d eventually get around to writing something about this, but being out of the office for 8 days, and offline for the best part of 10, means I’ve just had too much getting in my way.

In short: South by South West was great!

First things first; our panel.

Well, it went well. I think. People have been nice and positive about it. The two main concerns were a) that it wasn’t exactly what we were billed to do, and b) that some people didn’t know what microformats were. Regarding point b – it wasn’t billed as an introduction. We only had so much time, and we felt that those likely to want to attend this particular session would already have a foundation in microformats, or at least enough of an interest to have read about them a little and to know what was going on. I’m sorry if you didn’t get it – I’m more than happy to give you an intro if you drop me an email (fberriman AT gmail).

[Photo: the microformats panel, by PTG]

As for the first point: you’re right. We did do more of a history/what’s cool now, as opposed to looking deeply into the future. To be honest, it’s hard to know exactly what will happen. I think we did a good job of showing that take-up has been brilliant, that a lot of big names are getting involved with supporting and implementing microformats, and that there are real ways you can start using microformats in your day-to-day internet experience.

How did I cope, I hear you ask? I was nervous as anything (particular thanks to Patrick who put up with me being the best part of mute that morning).

About 10 minutes before going on, we learnt that rather than being in a small room – which I had somewhat managed to psych myself up for – we’d be in the biggest room (18ABCD). This meant I was not at all prepared for the green lighting, the spotlights and the 700-odd people!

Tantek, Glenn and Mike were brilliant though. I’ve heard back from a few people who were genuinely really impressed with the backnetwork and Operator. Jeremy Keith joined us about halfway in, after a completely unplanned, but perfect, question which allowed him to show off his cool little Bluetooth trick.

I did learn that I don’t like prepared speeches though. About 2 lines in, I realised I’d for some reason abandoned what I had planned to say and was making it up on the spot. I did not enjoy that. I did however really like the Q&A section. Having to think on the spot about what people wanted to know about seemed to have the effect of removing a bit of my stage fright. I suppose I prefer conversational tones.

Am I glad I did it overall? Yep. Would I do it again? No. Actually, yeah, I might… if I could do some smaller things in the meantime and get some practice in, I’d consider it.

Other than that, SxSW was fun. It was a good chance to put some faces to names I knew online and meet a few new ones. Panel-wise, I made a bit of an effort to avoid subjects in my own field, since I didn’t expect to learn that much, and headed for more unusual topics. I went to quite a few of the game track (ScreenBurn) sessions and really enjoyed them. It was especially interesting to learn that they are having a lot of issues similar to ours around maintaining identities and networks in different parts of the web. It was also quite fun to know that OpenID is a topic for them.

I have more to write I think, but this is already a long post and late! “Hi” if I met you though, it was a pleasure.

SXSWi Microformats Panel Confirmed

The Growth and Evolution of Microformats panel at SXSWi has been added to the rather handy panel planner on the SXSWi site (shame it’s not marked up as hCalendar though – [edit] the panel picker has now been microformatted!). It gives a little more of an overview of what it’ll be about:

In its first year, microformats.org ushered in the rapid adoption of key formats for publishing and sharing tags, licenses, contacts, relationships, events and reviews on the Web. See what new microformats are being developed for resumes, classified listings, music, and media, as well as how tens of millions of established microformats on web sites of individuals, companies, and organizations are driving innovations in desktop applications and advancing personal data portability

You’ll also note that the rest of the panel has been announced too. We’ve got Tantek, of course – *the* microformat advocate – moderating the session. Michael Kaply from IBM is the man behind the Operator toolbar for Firefox, which to my mind is the most complete and fully functional Firefox add-on for detecting and using microformats. Glenn Jones is the only one of the bunch I’ve had the pleasure of meeting before – he was one third of our microformats triple bill at the first BarCampLondon (along with Drew McLellan and myself). He’s an implementor and created the backnetwork, which is stuffed to the gills with microformats. He also presents on the topic, and did so recently with Destroying Walled Gardens at BarCampLondon2.

Then there’s me, of course. Makes me consider my place in the group though.

I mentioned this to a few people and they simply said that I was the human side of things. Possibly more down to earth, using microformats in day-to-day development. Not too many ideas of grandeur – just using them practically, and advocating and explaining them in simple terms to those who want to learn about them.

I’m also the most community-involved panelist. That might not seem important, but when you realise that microformats wouldn’t exist without the community, it means a lot more. Every spec and decision about microformats is made by an organic community of people, like myself, who are enthusiasts. It’s this organic growth that’s let microformats spring up out of nowhere and gather speed and support so quickly.

I think it’s a good mix though, and I’m looking forward to the panel even if I am a touch nervous.

3 months of Twitter

I have many microformat related posts (rants?) to come, but they’re best saved until after this weekend since BarCamp will make a better foundation for said discussions. Looking forward to what is turning out to be a little bit of a mini microformat camp though!

So on another topic – Twitter. I know, plenty of people have spoken about it, but I just haven’t been inclined to yet. I am now since the novelty is starting to wear off (I’ve been using it since November, I think) and this seems like as good a time as any to give my thoughts on it.

Twitter is basically sort of like web-based status messages (like you’d have on MSN Messenger). Twitter asks “What are you doing?” and the correct response would be a third-person answer such as “looking at twitter.” Anyway, what’s actually happened is that Twitter has become a status updater and a really slow IM client in one. I think it’s because Twits can be grouped into two main types:

  • Status updater – this kind of Twit uses Twitter as intended and updates in the third person about what they’re doing at that given time, or what interesting location they may be in. This kind of Twit generally dislikes the type below for muddying the stream. Often these updates could exist alone and don’t tend to be a response to anyone else, or require feedback. It’s a rather solipsist world. (I kinda dig it.)
  • Chatter – these treat Twitter like an IM client, generally holding conversations on the site. Lots of “@Bob – See you there!” type messages. To know who, what and where these Twits are talking about, you have to friend and follow all of their friends. These Twits think the type above is boring.

Of course, there’s a bit of overlap. I try to be a Status Updater, but occasionally I fall into Chatter mode. I don’t really dislike either type, although my friends list has had to be culled a few times to remove those I feel twitter too often, and the Status Updaters who only ever update about eating their lunch soon got lost as well.

I actually quite like Twitter on the whole.

What I actually like about it is the ability to keep in touch with people I don’t see very often, but wouldn’t necessarily chat to or email. A good example is the Brighton geek crowd – I see them from time to time, but now when I do see them I already know what they’ve been up to, without having to go through the mundane “How are you, what’re you up to lately?” conversation, because they all update Twitter. I know exactly where they’ve been and what projects have been driving them potty, and can cut straight to the chase. That really works for me.

Another thing that interests me about Twitter, on a personal level, is how much I like using it when I’m away from the computer. There’s something odd about me (and others) that makes me want to check in on my mobile and actually prove to people that I do go outside occasionally and maybe even go to interesting places. Why do I need to do that? I don’t often turn the updates on, so I don’t see any responses if there are any. I think I must like solidifying the things I do in digital form. If it’s on the web it must be true!

As an aside, there is actually a third kind of Twit: the News Twit. Generally, these are automated (the BBC news headlines are available), but there are one or two human-controlled streams popping up, like the microformats one we’ve set up and have been using to announce events. I’m not sure how much I like this. It doesn’t really fit – why not just subscribe to the actual RSS feed if there is one? Jury’s out.

Ultimately though, I’ve always thought that Twitter can’t last as it is. It needs better filtering and friends control – the noise is starting to get too loud. Perhaps interest groupings? Channels? It works for IRC (which is still my preference for digital communications). It needs smarter phone commands that might let Twitter become a worldwide answer to Dodgeball so that it can be used more easily for getting together with friends and finding out what’s going on and where. It’ll be interesting to see how useful Twitter will be in Texas next month, if at all.

I’m still checking in on it, but I’m not sure for how much longer.

JAWS Screen Reader Session, Microformats and Us.

Last Monday I had the opportunity to visit Test Partners, after an invite from Steve Green, to attend an afternoon of screen reader demonstration. I’m exceptionally glad I went, and it’s a shame I’ve been too busy to mention it sooner.

[Image: the JAWS shark logo]

Firstly, the session concentrated on a specific sub-group of internet users – the blind. Steve made a point of saying that with any project you need to decide who your target audience is, as measures to help one group of users may be mutually exclusive with helping another. As such, he stated clearly that within this session (and in my feedback below) the techniques and problems raised relate specifically to those who are visually impaired and using screen readers.

However, in my opinion, a site that is easy and understandable to navigate for screen reader users probably does go a long way to making this whole web experience easier for a range of people with varying difficulties and user needs.

We ran through how JAWS operates – JAWS in particular because it has the largest market share among screen reader users.

From testing on live sites, it became clear pretty quickly that headings and lists were really important – if not the most important things on a page. Properly used, semantically correct headings give an easy way to navigate to prominent areas of the page and also give instant context.

The logic behind putting navigation links into lists also became ever so clear when you hear JAWS inform you that there is a list of 6 links coming up. Not only does that sound like navigation, you also instantly know how far into the list you’re going to have to look to get to the section you want (is it a dozen links, or a hundred?).
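To make that concrete, here’s a minimal sketch (not taken from any real site) of the sort of markup that gives JAWS users those hooks – a proper heading structure and navigation marked up as a list:

<h1>Site name</h1>

<!-- JAWS announces this as a list of 4 items, which immediately sounds like navigation -->
<ul id="nav">
  <li><a href="/">Home</a></li>
  <li><a href="/articles/">Articles</a></li>
  <li><a href="/about/">About</a></li>
  <li><a href="/contact/">Contact</a></li>
</ul>

<h2>Latest article</h2>
<p>The article copy starts here…</p>

<h2>Archive</h2>
<p>Older articles live here…</p>

A user can jump straight between the h2s or skip past the whole list, rather than wading through every line of the page.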

The reason these two things are particularly important is the way a blind user builds his or her mental model of how the site works. They can’t just glance at what’s coming or where something is. They have to build a top-down mental image of what’s available on the page by running through the entire document (unless they already know the site and can rely on it being the same as on a previous visit).

This method of mental navigation shows why consistency and predictability are really vital. I’m not saying that every site should be laid out in an identical way, but subsections of a site should follow a template defined by the initial landing page and navigation should be pretty standardised.

Thinking about this reminds me of one reason why I like microformats, why I think they may have some potential as accessibility aids, and therefore why I’d like to see them utilised within something like JAWS.

Microformats are a way of standardising more specific elements within a page, and I’m sure it would be useful to some if they could have JAWS announce to them that the page they have just landed on has 4 headings, 12 links, a contact and 3 calendar events.
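For anyone who hasn’t seen them in the wild, the sort of thing a tool like Operator (or, in my imagined future, JAWS itself) picks up on is just ordinary HTML with agreed class names. A rough sketch, with the name, URL and date all invented:

<div class="vcard">
  <span class="fn">Jane Example</span> –
  <a class="url" href="http://example.com/">example.com</a>
</div>

<div class="vevent">
  <span class="summary">Microformats picnic</span> on
  <abbr class="dtstart" title="2007-03-10">March 10th</abbr> at
  <span class="location">the park</span>
</div>

Nothing visual changes, but anything parsing the page now knows it contains a contact and an event.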

The reason I mention microformats in particular is because it’s something I’ve been giving some thought to for a while in direct association with accessibility and think I’d like to elaborate on in the future. Feedback in this area is especially welcome.

For me though, the most surprising thing was being told that, by default, most screen reader users don’t have title, acronym and abbreviation attributes read out at all! A majority of users find it annoying to be given the extra (and often superfluous) information. The lesson learnt here is that everything important to the understanding of a document should be in the actual page copy, rather than hidden away in tag attributes.
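As a made-up example of what that lesson means in practice:

<!-- most JAWS users will only ever hear WSG here, because the title isn't read by default -->
<abbr title="Web Standards Group">WSG</abbr>

<!-- safer: give the expansion in the copy the first time it appears -->
<p>The Web Standards Group (WSG) meets monthly.</p>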

There were many other bits and pieces of interest – from a reminder that display:none is respected by JAWS, to the fact that writing good copy can go a long, long way, and that access keys (uh oh) aren’t particularly used or useful, because most sites don’t offer them and so they can’t be relied upon.

The thing is – JAWS is a bit smarter than I’ve given it credit for. Saying that, though, from the feedback from John, the vast majority of users aren’t advanced users and probably aren’t toggling features on and off to get the most out of a site.

So, there are two things there – users probably need more training, or at least opportunities to learn how to use websites on their own in some way, but equally, as publishers to the web, we should be making our best effort to present information – and that’s ultimately what we’re all doing, after all – as clearly and as simply as possible.

Web standardistas will already be doing this I’m sure and rolling their eyes at being told yet again, but it’s important not to forget why you’re doing it. It’s not just about being correct for correct’s sake; it’s about giving everyone a fair chance.

Developing for others

As if by magic, Jeffrey Zeldman yesterday published an article about print style sheets, which just happened to be what I was tangentially ranting about in the car on the way to work the other morning. Most of us support printers about as badly as we support screen readers, and I don’t think the two devices are that far removed – both are mostly “out of sight, out of mind”.
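For anyone who’s never bothered, the mechanics are hardly any work at all – a one-line link in the head pointing at a separate stylesheet (the file name here is made up):

<link rel="stylesheet" type="text/css" href="/css/print.css" media="print" />

with print.css doing little more than hiding navigation (#nav { display: none; }) and setting readable type – and even that is more than most sites ship.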

What I was more specifically complaining about was screen readers and their general lack of support from the web community.

I’m under the firm belief that screen reader support is rubbish for two reasons:

  • None of us use them (“us” being the general web dev community).
  • Screen readers are expensive, clunky and support the specs even less than some of our most hated normal browsers (and then have to work with browsers that don’t support the specs well either).

Because of the first reason – we’re not pushing screen readers to be developed well. Why isn’t there a good, free, open source screen reader? “We” don’t need one.

I’d really like to encourage people to start trying out some of the screen readers (most will run in a trial/demo mode for a period of time). If a few more of us incorporated screen readers into our testing (if you’re not already, why the hell not?) and then perhaps started badgering those developing these products to improve them, we could have some decent products for everyone.

Remember the Spread Firefox campaigns and “web standards” pushes that worked so well? We desperately need one of these for screen readers, and to be frank, it’s going to take people giving a damn about others for it to happen, because this is an area outside most of our personal use. We also need developers who are willing to put some time into working on new open source products that can be used by, and improved by, everyone.

How can we do this? Do you think it isn’t worth it? Feedback people.

dConstruct 2006

Thursday

On Thursday night I headed down to Brighton after work for the d.Construct web conference. I’d been looking forward to it for ages, so I can’t say I wasn’t buzzing a bit. I met up with Dave when I arrived, and also Adam Bardsley, who I’d met at the WSG event back in July. We headed down to Heist for the pre-event social get-together and I got to say hello to a few faces I already knew, and also meet some people who I’ve known around and about online but had yet to meet face-to-face. The nice thing about having been to BarCampLondon the weekend before was that there were plenty of faces I recognised.

Friday

Stayed out a bit too late on Thursday, so I was a little sleepy first thing on Friday, but the amazing weather soon picked me up. Summer hasn’t left Brighton yet, apparently. We (myself, Dave and someone he works with) located a little cafe for breakfast, then made our way over to the Corn Exchange. After receiving lanyards (which also doubled as programmes for the day) and goodie bags, we went in for tea and coffee, to say hi to the people we knew there and to meet even more people. And then the day began for real.

[Photo: the conference theatre]

All of the sessions and speakers were interesting and insightful. Each spoke from experience, which is more important than anything. The downside to the sessions was possibly the length of time the speakers spoke for – they didn’t leave much room for Q&A, and that is often the most useful part of a presentation.

Even though Flex isn’t my area, I really enjoyed Aral Balkan’s session (and meeting him again), since he is just a little ball of energy and so enthusiastic. I’d not seen much about Flex, but was suitably impressed. Jeff Barr’s presentation about Amazon’s APIs was neat too – the Mechanical Turk, especially.

A highlight of the day for me was definitely the Microformats Picnic. It was a rather short-notice idea Jeremy came up with the previous week, and only a handful of people had marked themselves down as attending – but the good weather must have prompted more to join, since there were a lot of people, including random Brighton passers-by, listening to Jeremy explaining what Microformats were and how to use them, and answering queries! It was slightly surreal with the Indian twinkly music in the background though, coming from further down in the park.

Since I wore my Microformats shirt to the event (along with a few others), I got to discuss Microformats with plenty of people who were interested in using them, so it was fun for me. A few people have since emailed me to find out more, or get some advice.

After the conference, I grabbed some dinner with Steve, Faruk, David, Trev, Ben and Neil and then we headed down to The Terraces for the after-party.

The after-party was fun, although we missed the tab, but not to worry. Everyone was in a good mood, and there was plenty of chat related to the topics we’d seen during the day and generally throwing ideas about – mostly, in my friends’ case and mine, how to incorporate Microformats into various mash-ups!

As the party wound down, Drew and Andy rounded a few of us up for cocktails back at the delegate hotel. I think we finished up around 3am!

Saturday

Decided to stick around on Saturday since the weather was continuing to be lovely. Got in contact with Natalie and met up with the crowd from the night before. Dave and I watched Natalie and Simon have a go at the bungee-trampoline things, then had lunch at “Oh So Social”, followed by a wander along the sea-front to watch the “eXtreme” skateboarding, a dig through some second-hand books, and a trip to the Lego store to drool over the new Mindstorms robots (£180!!).

I ended the day having a BBQ down on Brighton beach with the group, followed by a game of werewolf (I wasn’t a wolf for once, but they still lynched me!). Made it home by 11pm, shattered but very pleased.

After

The backnetwork really comes into its own now that the event has happened. I’m a bit rubbish at remembering names, so the fact that a majority of people have included their photograph on their profile (and most have managed to include a useful, proper, photo) has made it easy for me to mark those people I’ve met and grab any contact information I need.

I collected the odd business card for mobile numbers, but generally there was no need for them. Good, because business cards get lost, and it’s more environmentally friendly (yes, computers aren’t, but we’re running them whether we hand out business cards or not).

Also, I can grab my new friends’ links as an XFN-ified blog listing and subscribe to them all in one go.

The other nifty thing is everyone’s profile page collects photos and blog posts that include them. You can see mine here: My backnetwork profile!

I hope they continue to use it for future conferences because it’s a really great resource.

If you have a flick around on the backnetwork, you’ll find all the links to other people blogging about this and photographs, so I don’t need to make you a list! Go forth and explore.

Adding XFN

I recently added XFN tags back into my links (read: blogroll); XFN is another open microformat standard. As with most microformats it’s very simple, and some blogs will do it for you by default. What it basically means is that you add rel="relationship" to a link to a person, to give the link some additional meaning.

For example, if I wanted to link to my friend Lana, I can write:

<a href="http://lanadenise.wordpress.com" rel="met friend">Lana's blog</a>

This indicates that Lana is a friend who I have met. If you leave out the “met”, it can be a friend you haven’t yet met (i.e. online). There’s only a handful of predefined relationships that you should stick to, but there are just enough of them. You can indicate family members, co-workers and vaguer connections.

Why would you bother, I hear you ask? Well, for one, it gives some extra meaning to my markup. You know how I love semantics. But after badgering my Dad onto WordPress so I’d have a legitimate reason to use a family XFN tag, we discussed some of the awesome things about it (which had also been mentioned on #microformats). For example, my Dad has a website because he’s interested in finding, and being found by, distant relatives. Imagine a few years down the line when everyone has a blog (don’t they already?) and uses XFN tags on the links to their other family members with blogs. You could easily pull up a diagram based on these interconnected links and see who is related to whom. An instant family tree!
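So, with my Dad now on WordPress, the link to him in my blogroll can carry the relationship itself (URL invented for the example):

<a href="http://example.wordpress.com" rel="parent met">Dad's blog</a>

If he marks up his links the same way – rel="child" back to me, rel="sibling", rel="kin" and so on – that family tree data is sitting right there in the markup.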

Alternatively, you could look up people who work together, or instantly pull up a group’s social network based on reciprocated links. It also means I can tie the other websites I use to this page, providing they all carry rel="me" links which ultimately end here. (See Identity consolidation with the XFN rel="me" value.)
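The identity consolidation bit is nothing more than the same attribute pointing at your own profiles elsewhere – something along these lines (profile URLs invented):

<a href="http://twitter.com/example" rel="me">Me on Twitter</a>
<a href="http://flickr.com/photos/example/" rel="me">My photos</a>

Follow the rel="me" links in either direction and you can work out that all of those pages belong to the same person.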

So, I added that, and Tantek – after spotting that I’d accidentally misspelt his surname and telling me that I should blog this – suggested I also hCard the links. Not a bad idea! So now you can grab my friends’ names, websites and what they mean to me all in one go.
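In practice that just means wrapping each blogroll entry in a little hCard, so that one link carries the name, URL and relationship together. Roughly (details invented):

<li class="vcard">
  <a class="fn url" href="http://example.com/" rel="friend met colleague">Jane Example</a>
</li>

The fn and url classes make it a contact; the rel values make it a relationship.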

Apart from my inability to spell some names correctly, XFN is very simple to add but packs a lot of extra meaning into a link, so I had no issues implementing it.

I think it’s something that will be relied upon more and more in the future for a range of uses and services, so it’s really worth adding now and getting a grip on. Mixing XFN with VoteLinks (which I have yet to use anywhere) and nofollow seems like an interesting prospect, and perhaps could be useful for better determining page ranking or just aiding web searches. I’m no innovator, but I’m sure someone will come up with a good way to utilise these features together.
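I haven’t actually tried this, so treat it as a purely hypothetical sketch, but mixing them would just mean a link carrying both attributes – XFN in rel and a VoteLinks value in rev:

<a href="http://example.com/" rel="friend met" rev="vote-for">A friend's site I endorse</a>
<a href="http://example.org/dubious-page/" rel="nofollow" rev="vote-against">Something I'm pointing at but not endorsing</a>

A search engine that understood all three could, in theory, weigh links very differently depending on who you are to the author and whether the link is an endorsement at all.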

The question is, what should I format next? hResume?

Implementing hAtom: The entries code

This article is rather old now, but has been linked to a few times recently. I just wanted to say that I don’t actually use the example below any more, as I have since upgraded WordPress and used the Sandbox theme as the basis for my own, which comes with its own microformat goodness. The example below is still valid though, and should be useful if you want to understand how it works or still want to do it yourself.

As promised, here is my PHP hAtom WordPress loop. Feel free to do as you like with it.

To start: the first thing about implementing something like this is that it’s a really good excuse to do a code review. I had a look at the way I was using my headings and abbrs etc. and moved them about a bit. It becomes clear pretty quickly that if you’re not using your HTML tags in a semantic way, it’s harder to think about adding additional levels of meaning (the microformat classes).

When I first added hAtom I didn’t also add it to my comments. This meant I could use hFeed around the entries. You can’t nest hFeed though, and since this loop sits around the comments loop on a permalink page, the outer hFeed had to be sacrificed: instead, the page itself is taken as the hFeed (which is the fallback) and the explicit hFeed wraps around the comments loop (which I can also post if you’re interested). If you don’t hAtom your comments, put hFeed back in around the main entries, as it’s something that *should* be there if possible.
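To make the structure a little clearer, this is roughly how a permalink page ends up nesting – a simplified skeleton, not my actual template:

<body> <!-- no explicit hfeed: the page itself acts as the fallback feed for the entry -->

  <div class="post hentry">
    … the entry itself (the loop below) …
  </div>

  <div class="hfeed"> <!-- the explicit hfeed wraps the comments instead -->
    <div class="hentry"> … a comment … </div>
    <div class="hentry"> … another comment … </div>
  </div>

</body>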

Entries loop with hAtom:


<?php if (have_posts()) : ?>

<?php while (have_posts()) : the_post(); ?>

<div class="post hentry" id="post-<?php the_ID(); ?>">

<h2 class="entry-title"><a href="<?php the_permalink() ?>" rel="bookmark" title="<?php the_title(); ?>"><?php the_title(); ?></a></h2>

<h3>by <span class="author vcard"><span class="fn"><?php the_author() ?></span></span>
on <abbr class="published" title="<?php the_time('Y-m-d') ?>"><?php the_time('F jS Y') ?></abbr></h3>

<?php the_bunny_tags(); ?>

<div class="entry-content">

<?php the_content('Read the rest of this entry »'); ?>

</div>

</div> <!-- end hentry -->

<p class="righted">Posted in <?php the_category(', ') ?> <strong>|</strong>
<?php teb_word_count(); ?> | <?php edit_post_link('Edit', '', '<strong>|</strong>'); ?>
<?php comments_popup_link('No Comments »', '1 Comment »', '% Comments »'); ?></p>

<?php comments_template(); ?>

<?php endwhile; ?>

<p><?php next_posts_link('« Previous Entries') ?> <?php previous_posts_link('Next Entries »') ?></p>

<?php else : ?>

<h2>Not Found</h2>

<p>Sorry, but you are looking for something that isn’t here.</p>

<?php endif; ?>

I’ve added some linebreaks and such to make it a bit more practical for a fixed width blog, so don’t take too much heed of the actual layout.

The hAtom parts are the hentry, entry-title, author vcard, fn, published and entry-content classes (although the bunny tags plugin also produces rel="tag" links), while the_bunny_tags() and teb_word_count() are plugin calls. Note also how hCard is incorporated for the author. The publish date uses the datetime design pattern on the abbreviation. I chose not to include a timestamp as I don’t publish more than once a day (as a rule, anyway).

Do not fret, those of you who aren’t into working up your own code, or who are perhaps using wordpress.com – the Sandbox theme (also available to .com users) now has hAtom!

As you can see, it’s pretty simple. It’s just a case of going through and labelling the correct parts with the correct classes (making sure you have instances of all the *must have* classes). If you follow the link above to the hAtom wiki page on the microformats site, you’ll find some tools for testing your implementation.

hAtom implementation

It’s late, and I have had a long journey today down to Cornwall to visit my family for the week (so if you email me this week please use my gmail, not my work address), but I thought I’d stop by and mention that a couple of nights ago I finally got around to implementing hAtom on this blog.

Implementation was a doddle. Checking it was accurate was not so much. The available Firefox extensions that check for hAtom are not all that… working. If you show certain elements in a slightly different order, it seems microformat-find, for example, can totally miss it. This is fair enough though, for a couple of reasons: a) hAtom is very, very new – still version 0.1 – and b) writing parsers isn’t exactly straightforward, and I have every sympathy for that. Consequently, I spent a long time wondering why it “wasn’t working”, when in fact it was fine… I just didn’t have access to tools to show me that (but the boys helped me out – cheers Chris, Drew and Luke for checking my stuff over).

Also, I found that some of the documentation for hAtom seemed a little odd. I will probably bring this up with the group later, but I’ll throw it on here first. The main thing I found odd is that “updated” is the required field, and if it’s not found you should use “published” instead. This seems odd to me, since surely you can’t update something unless it was published in the first place? Perhaps I am misunderstanding the usage of the term updated (Drew suggests it’s a mapping to the Atom spec)? Anyway, it looks like that part of the spec could use a little ironing out, and I think that’ll be happening. I’ll keep you posted on that one.
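To illustrate the mapping: if an entry only carries a published date, a parser is meant to fall back to it for the updated time too, so the two snippets below (dates made up) should come out meaning the same thing:

<!-- published only: parsers fall back to this for "updated" -->
<abbr class="published" title="2007-01-21">January 21st 2007</abbr>

<!-- or both classes on one element, for an entry that hasn't changed since it went up -->
<abbr class="published updated" title="2007-01-21">January 21st 2007</abbr>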

Anyway, I was going to post my final WordPress loop code snippet, but I’m not sure how useful that is for everyone. If you want to see it, and where I’ve added the extra information, say so and I’ll run through it on here. Having lots of implementations means that people can get on with writing extractors, and having some varied test beds to try them out on will help, so I encourage people to take a look at it.