28 December 2009

Sleep modes, big green buttons and how my mother-in-law taught me something about computers

Until now, my mother-in-law and I have had some clearly understood boundaries. She is an excellent cook and fattens me up (further) with all manner of culinary delights whenever I visit. With computers, on the other hand, she manages to hold the mouse the right way around most of the time, but I am the designated computer expert and therefore have to do the usual laptop fixing, virus removal, etc. on her machine whenever I go round.

But she has now crossed the line by buying me a Christmas present which actually taught me something I didn't already know about computers.

The present in question is an EcoButton(tm) which is basically a big green glowing button that you plug into a USB port and which puts your computer to sleep when you press it, thus saving the planet.

Obviously I initially dismissed this as a complete gimmick - after all, how is this any different from me putting my laptop to sleep by just shutting the lid? But before totally writing it off, I did go as far as reading their FAQs, and discovered that there was more to sleep than I was previously aware of. After a bit of browsing around I am now slightly less ignorant, so I thought I should share.

Like most people, I thought that there were a few options for the power state of a PC:
  • Fully on
  • Sleep mode (minimal power to keep the memory going)
  • Hibernate (state dumped to disk but powered down)
  • Off
But it turns out that this is not the full story. Back in 1996, an open standard was produced to describe different power states, called the Advanced Configuration and Power Interface (ACPI). This has undergone a few revisions since then (version 4.0 is viewable here), but essentially defines different states for software and hardware power levels including a range of sleep states:
  • S1 - low wake latency sleep; processor clock off, bus clocks stopped; wakes in 2 seconds or less
  • S2 - similar to S1, but processor and some buses are powered down. Slower than S1 to wake.
  • S3 - similar to S2, but all context is lost apart from system memory. Similar wake time to S2.
  • S4 - all devices (including memory) powered off with memory state stored to disc, i.e. this is what we know of as hibernate.
  • S5 - everything off, all context lost, i.e. this is what we know of as "shut down".
Also, Windows XP usually defaults to S1 rather than S3, even though S1's power consumption is nowhere near as good as S3's (Dennis Forbes ran some tests on his machine and measured 129W in S0, 112W in S1 and just 5W in S3).

So it turns out that what the EcoButton does when you press it is to run a command to force the machine into the S3 state rather than S1, and then show you a nice page when you reawaken that tells you how much money and CO2 you've saved.
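Programmatically, a sleep request like this comes down to a call into the Windows power APIs. As a rough illustration (my sketch, not EcoButton's actual code), here's how you could ask Windows to suspend from Python using the SetSuspendState function in PowrProf.dll:

# Minimal sketch (not EcoButton's implementation) of asking Windows
# to sleep from Python via the PowrProf API.
import ctypes

def sleep_now(hibernate=False):
    # SetSuspendState(Hibernate, ForceCritical, DisableWakeEvent).
    # Hibernate=False requests a sleep state (S1-S3, whichever the
    # machine allows); Hibernate=True requests hibernate (S4).
    ok = ctypes.windll.powrprof.SetSuspendState(int(hibernate), 0, 0)
    if not ok:
        raise ctypes.WinError()

sleep_now()

Whether that call drops you into S1 or S3 depends on how the machine's sleep policy is configured, which brings us to the next bit.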

Various people have found an undocumented Microsoft tool called dumppo.exe (download here) that shows you your current configuration and allows you to force the minimum sleep level to be S3 rather than S1:
C:\>dumppo admin
Admin policy overrides
Min sleep state......: S1
Max sleep state......: S4 - hibernate
Min video timeout....: 0
Max video timeout....: -1
Min spindown timeout.: 0
Max spindown timeout.: -1
To force the minimum sleep state to be S3, run:
C:\>dumppo admin minsleep=s3 maxsleep=s4
and then reboot. Note that some USB devices may have problems with S3, so you may need to revert if you run into any.

In Vista, Microsoft introduced a new "hybrid" sleep mode which saves a hibernate image in case power is lost, but then drops to S3. By default, though, this appears to be disabled on laptops - the thinking being that you don't want a laptop sitting there draining its battery, but instead going straight to hibernate if you're leaving it for a long time. Checking my Windows 7 configuration, this seems to be the case there too (Control Panel - Power Options - Edit plan settings - Change advanced power settings).
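If you'd rather script that change than click through the control panel, powercfg can toggle hybrid sleep from an elevated prompt. A quick sketch (the aliases used are standard powercfg ones, but check the output of powercfg /aliases on your machine):

# Sketch: enable hybrid sleep on the current power plan via powercfg.
# Run from an elevated prompt; alias names per "powercfg /aliases".
import subprocess

for args in (
    ["powercfg", "/setacvalueindex", "SCHEME_CURRENT", "SUB_SLEEP", "HYBRIDSLEEP", "1"],
    ["powercfg", "/setdcvalueindex", "SCHEME_CURRENT", "SUB_SLEEP", "HYBRIDSLEEP", "1"],
    ["powercfg", "/setactive", "SCHEME_CURRENT"],  # re-apply the plan
):
    subprocess.run(args, check=True)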

So what am I going to do? I spend most of my home life using a laptop, and tend to shut the lid when I'm not using it to send it to sleep. The big green glowing EcoButton is not going to make me do this any more often so I'll probably just plug that in to my less-used desktop. But I will switch on hybrid sleep mode, and as soon as my new power meter arrives I'll check out what the consumption really is.

And in case you're wondering, EcoButton claim that a typical user can save 135 kg of CO2 a year by using their button. Compare that with the roughly 1.3 tonnes of CO2 from a single flight from the UK to New York and you can see that, although it's not earth-shattering, it does make a difference.



26 July 2009

Google Wave - collaboration or chaos?

How Google Wave would have helped me in a recent project, and how it wouldn't.

At BigCorp*, I was recently involved in a team of eight people spanning three countries. We hadn't really worked together before, but had been brought together to do some strategic thinking over a few weeks and then present our findings to senior management.

As we still had our day jobs to do, and as we were spanning several time zones, it quickly became apparent that we needed some collaboration tools to help us bounce ideas around, capture thoughts and eventually structure them into the presentation. Being a large company, we couldn't just download the latest toys from the web, so we settled on a combination of e-mail and SharePoint as the best we could get hold of at short notice.

But it was soooo painful.

E-mails quickly drifted into monstrous chains of replies and CCs where you had to read the entire mail each time to work out which point someone was responding to. The SharePoint discussion forum became unwieldy as the threaded tree structure made it impossible to see what was new, and change notifications didn't really help unless each team member had subscribed to ten different feeds. And to produce the final presentation we ended up with a couple of people just getting round a screen and bashing it out, sending it out for review, then trying to incorporate the feedback.

To give it its due, SharePoint was quite good as a document management space where we could upload relevant docs and version docs we were creating ourselves, and the workspace concept helped keep information around meetings in some kind of order, but it really wasn't quite up to what we needed.

Wave is the answer

What we needed was of course Google Wave. If you don't already know about this then there's plenty of information out there, but a good place to start is O'Reilly's overview.

In a single tool, we'd have been able to hold structured discussions, receive notifications, and collaborate (in real time!) on the final presentation. It would have been perfect, and several of us lamented that it wasn't yet available for general use.

But maybe it's not in fact a silver bullet.

Looking back a few weeks later on how we worked as a team, Wave in itself wouldn't have solved some of the key problems:
  • Intense debate around certain points generated a huge amount of comments and replies, which would clutter up a Wave just as much as a discussion forum.
  • As a Wave gets larger, it's not clear how to effectively notify people of what has changed.
  • The Wave has no intrinsic structure, so without strong moderation is likely to suffer from the same drift into chaos as Wiki pages.
  • Because Waves are so open, it would probably have been harder rather than easier to reach consensus, as anyone could re-edit any part of the Wave (although of course you can see who did what when).
  • We had enough problems getting some of the team used to using a discussion forum, so a Wave would probably be too much too quickly. Nothing Google can do about this, but sometimes you have to work with Luddites.
So how do we make Wave work?

I think there are two main points that I take out of this.

Firstly, Waves are going to require the same kind of discipline and organisation as Wikis currently do. Teams will have to work out rules to guide how they use Waves and also have some idea of ownership or moderation to keep the Waves clean and structured. I assume that Wave best practice guides will appear fairly swiftly, and that training companies won't waste much time before seeing an opportunity to make a buck there as well.

Secondly, I think that over time we'll see the emergence of Wave UIs that make it easier to use Waves in particular ways focused around certain tasks. We can't assume that everyone will be able to just dive into a free-form Wave and make effective use of it. The underlying technology and protocols will still stand, but the tools will make it easier to do specific types of work using Waves, and of course over time new ways of working will evolve that will also need supporting.

I fully intend to get Wave into my organisation just as soon as I can, but it's going to be interesting to see how quickly it is adopted, what troubles we hit when we are using it for real and how we make it into a really productive tool.




* Not the real name, in case you hadn't guessed

14 March 2009

ISPs snoop every page we visit: how worried should we be?

British ISPs BT, Virgin Media and TalkTalk intend to launch a service called Webwise that spies on the address and content of every web page you visit, using third-party software from a US company called Phorm, and then makes information on your browsing habits available to other web sites (presumably for a kick-back). This sounds pretty worrying, and it has sparked a lot of attention over the last few weeks, including Tim Berners-Lee going to Parliament to ask for it to be banned and ending up in a clash with the Phorm CEO (who, it seemed, hadn't been invited to the party but turned up anyway).

If Sir Tim's worried then I'm worried, so I decided to find out some more about what it actually is.

In brief, Phorm provide a system, run by the ISPs, that performs deep packet inspection. This means they will be analysing not only which pages you visit, but what's contained on those pages as well. The content is matched against patterns to identify browsing habits (e.g. you visited a page containing the words "holiday" and "Bulgaria", so you're now tagged for Bulgarian holidays), and any match is stored in a system called the Open Internet Exchange (OIX). When you later visit a page on a participating site, that site can query the OIX to find out what you're interested in and deliver appropriate advertising. Your privacy is supposedly protected because the system doesn't store your name, just a random number that stays with you as a cookie and so can be used to target content.
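To make the mechanics concrete, the profiling step is conceptually something like this toy sketch (my illustration - not Phorm's actual code or categories): scan the page text against keyword patterns and record any matches against the random ID in your cookie.

# Toy illustration (not Phorm's code) of keyword-based interest tagging.
CATEGORIES = {
    "bulgarian-holidays": {"holiday", "bulgaria"},
    "home-insurance": {"insurance", "home", "quote"},
}

def tag_interests(page_text, profile):
    words = set(page_text.lower().split())
    for category, keywords in CATEGORIES.items():
        if keywords <= words:       # all keywords appear on the page
            profile.add(category)   # stored against your cookie's random ID

profile = set()
tag_interests("Book a cheap holiday in sunny Bulgaria today", profile)
print(profile)  # {'bulgarian-holidays'}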

BT et al are claiming that this is a great thing for consumers because:
  1. you get adverts targeted at what you're interested in, and
  2. they can also throw in an anti-phishing thing that warns you if you're about to go to a dodgy page.
Note that there's been no mention of you getting a share of the revenue that no doubt BT will get for putting this service in, but that's by the by.

To borrow the analogy from the ZDNet page about deep packet inspection: this is like the Royal Mail, rather than just looking at the address on a letter and delivering it, instead opening the letter up, reading the contents, then telling someone else to send you spam based on what your letter said. That would never be allowed, so why is it OK for electronic communication? It is also worth remembering that most people's webmail accounts are http rather than https, so all their e-mails are fully accessible for scanning.

Two other things deeply worry me about this whole shebang.

Firstly, the anonymity mechanism is totally flawed. Although Phorm don't know who you are (because they claim they won't look at user names, credit card numbers, etc.), any site you're logged into can match up your unique number with your user account. That site has now got access to your full browsing habits as well, which is a massive invasion of privacy.

Secondly, the ISPs are running a system from a third-party company whose CEO has allegedly been responsible for spyware on PCs in the past, with no clear regulation, whose legality has also been questioned in the US, and where BT has already performed secret trials of Webwise without end-users knowing - trials which resulted in the European Commission getting involved. None of this makes me feel warm inside about these people having and distributing my browsing habits.

So - what to do?

I'm not with one of the three ISPs currently planning to launch the service, so I can sleep slightly easier. For those that are:
  • Check out BadPhorm which has some more info on all of this.
  • If you're a Firefox user then get the extension from Dephormation which blocks Webwise from working.
  • Seriously think about switching ISP to one that is not going to sell your secrets to the world.

9 March 2009

Sony loses the plot on custom build Vaios

The time came for me to purchase a new laptop, and after a bit of research I settled on a Vaio. Before you say it, I know it's not as cool as a MacBook Pro, but in the current climate I couldn't really justify paying twice as much, so I went for a custom-build FW series to give me a good resolution and a reasonable balance of other features without breaking the bank. So far so good, and I placed my customised order.

Unfortunately, Sony decided that I was untrustworthy: even though my credit card went through fine, they still rejected my order based on some trumped-up excuse about failing other security checks (the Sony guy I spoke to claimed they're the third most defrauded company in the UK). Of course they couldn't tell me which security checks, because that might enable me to defraud them in some other way, but it turned out that was not the biggest problem.

The biggest problem was with their order management software. Because my original order was rejected they had to place a new one, but there was some problem with their software so I was asked to call back in a couple of days. This I did, only to be told that there was still a problem with their software. A couple more attempts, and I found out what was actually happening: their software would reset any custom build back to the default configuration as soon as the order was saved. This is obviously not good, and even more not good was that Sony only discovered it after they'd shipped some units to customers (who presumably weren't too happy with what they received).

This has got to be pretty damn embarrassing for Sony, and I really hope that they're throwing some toys out of cots at their software supplier as I suspect that they would be one of the really big, really should know better suppliers, who probably charge lots of money because they have really good QA departments. Or not.

And in the meantime, it's been ten days since I decided to buy a Vaio rather than a MacBook, I'm still waiting for a call to say I can resubmit my order, and now I'm starting to reconsider my decision...

15 February 2009

Has Google just killed NuevaSync?

I've been using NuevaSync with my iPhone for a while now. It was just what I needed, allowing me to pull my Google calendar down into the iPhone's calendar and push edits on my iPhone back to Google. Lovely. And a recent update even made it so that multiple calendars in Google show up as different colours in the iPhone calendar.

Last week, Google launched a beta of their new Google Sync which allows you to connect to Google with ActiveSync (as if it were an Exchange server). Splat - and there goes the need for NuevaSync.

It's obviously not quite as black and white as that - NuevaSync do other things as well, and Google don't yet do the multi-calendar colour thing - but you have to be wondering how this is going to impact NuevaSync's model.

And it's not just NuevaSync. There are a whole host of small companies or individuals who are doing a fantastic job of setting up systems that provide value-add to the applications from the big boys - just look at all the apps that have sprung up around Twitter or FaceBook. So do all these people have to live in fear that one day they'll get squashed from above?

In the meantime, I'm going to use Google Sync for my contacts (sorry NuevaSync, but it removes a layer of complexity), and SaiSuke for my calendar (as it renders much better than the iPhone's built-in calendar). How long I continue to use IzyMail for my Hotmail before they get stomped remains to be seen...

1 February 2009

Making the real-time web real-time

RSS, Atom, SUP, Web Slices - in the context of receiving site update notifications they are all based on regularly querying the site to find out what has changed. To quote Kirk in one of his rants about SUP:
But it's still polling! It still ultimately doesn't scale. You can make polling suck less, but it will always suck.
Great though SUP in its current form is, it's really an optimisation hack rather than a solution, designed to reduce the traffic to a given site. If I have 10,000 consumers hitting my site's SUP address(es) every second, that's still going to give me a headache.
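For contrast, here's roughly what consuming SUP looks like today: every consumer polls the provider's SUP feed (a small JSON document listing recently-changed feed IDs) on a timer. A sketch in Python - the field names follow the SUP spec, but the URL and SUP-IDs are invented for illustration:

# Sketch of a SUP consumer: poll the provider's SUP document and see
# which of the feeds we care about have changed.
import json, time, urllib.request

SUP_URL = "http://example.com/sup.json"      # hypothetical provider
WATCHED_SUP_IDS = {"4496672d", "a9f871c2"}   # SUP-IDs of feeds we follow

while True:
    with urllib.request.urlopen(SUP_URL) as resp:
        sup = json.load(resp)
    for sup_id, update_id in sup["updates"]:
        if sup_id in WATCHED_SUP_IDS:
            print("feed changed:", sup_id)   # go and fetch the real feed
    time.sleep(sup.get("period", 60))        # ...and this is the problem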

The solution part 1: SUP-Push

The first part of my solution to this problem is to turn the problem around. Whilst there's nothing in particular about the Internet Protocol that is asymmetrical, the whole World Wide Web has grown up around a client requesting information from a server which then sends it a response. The reasons for this are partly the evolution of WWW from FTP and then Gopher, and partly because it puts everything under the control of the publishing site rather than relying on any intermediaries.

As we move toward the real-time web, consumers don't want to have to wait to see a Twitter message or a news event appear in their client app - they want it instantly. The only way to achieve this is to push update notifications rather than polling for them.

This requires two changes at the publisher site:
  • Provide (and advertise) a way for clients to make a connection that they keep open and down which updates are sent.
  • Send updates down this connection as soon as they happen (or in batches at a frequency determined by the publisher).
The good news is that providing a connection is easy - let's just use a URL to a TCP/IP socket (or say HTTP) for that - and that with not very much work at all we can use the SUP update document format to deliver the changes.
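To make that concrete, here's a rough sketch of the client end: one long-lived HTTP connection, with each line the server sends treated as a SUP-style update document. The URL and the newline-delimited framing are my own assumptions, not part of any spec:

# Sketch of a push consumer: one long-lived HTTP connection, with the
# server sending a SUP-style JSON document per line as changes happen.
import json, urllib.request

PUSH_URL = "http://example.com/sup-push"     # hypothetical endpoint

with urllib.request.urlopen(PUSH_URL) as stream:
    for line in stream:                      # blocks until the server sends
        doc = json.loads(line)
        for sup_id, update_id in doc["updates"]:
            print("update pushed:", sup_id, update_id)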

The bad news is that keeping lots of sockets open to clients is not scalable, and so to:

The solution part 2: Distribution network

Obviously every client keeping a socket open to every server is wrong, so what we need is some kind of distribution network, so that clients can connect to just a few (or maybe just one) distribution nodes and receive updates for any of the feeds to which they are subscribed. Similarly, the web sites will not want to have to publish to every distribution node just in case someone is subscribed, so they will publish to a few "seed" nodes (deliberately borrowing some BitTorrent terminology here) which then propagate updates into the wider network.

What we end up with, therefore, is some kind of self-organising mesh that works out the most efficient topology based on some function of load and internet distance (by which I mean latency) to the consumer.


It would be nice if there were already a ubiquitous asynchronous message-oriented middleware out there that we could use to implement the distribution network, but although AMQP may get there eventually it's not there yet. Similarly, it would have been nice if IPv6 had helped us out here, but although they've overhauled multicast and introduced "anycast", these still don't really help outside an organisation.

So instead, if we assume that the end-client (you or I) has a few well-known nodes that they can connect to (hosted by their ISP say), the sequence probably would look something like this:
  • Client establishes connection to local distribution node.
  • Client sends request for subscription URL to its local distribution node.
  • If the node already has that subscription then it just starts sending updates down to the client. End.
  • Otherwise, the node asks the publishing site for a connection.
  • The site can either give the connection and start sending updates, or alternatively send a redirection to one of its seed nodes.
  • If redirected to a seed node, the local node and seed node then use some cunning algorithm based on existing routes to give the local node the most efficient source for its updates.
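In code, the node's end of that handshake might look something like the sketch below. Everything here is hypothetical - the message formats are invented, and the cunning algorithm is reduced to just taking the first seed:

# Hypothetical sketch of a distribution node handling subscriptions.
# All names are invented; the real negotiation is stubbed out.
class Node:
    def __init__(self):
        self.subscriptions = {}                  # feed URL -> set of clients

    def subscribe(self, url, client):
        if url in self.subscriptions:            # already relaying this feed:
            self.subscriptions[url].add(client)  # just add the client
            return
        self.subscriptions[url] = {client}
        source = self.connect_upstream(url)
        print("relaying", url, "from", source)

    def connect_upstream(self, url):
        reply = ask_publisher(url)               # stubbed negotiation
        if reply["kind"] == "direct":            # site will serve us itself
            return url
        return reply["seeds"][0]                 # else pick a seed node

def ask_publisher(url):
    # Stub standing in for the real request to the publishing site.
    return {"kind": "redirect", "seeds": ["seed1.example.net"]}

node = Node()
node.subscribe("http://example.com/feed", "alice")
node.subscribe("http://example.com/feed", "bob")   # no new upstream needed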
Over time, as loads on nodes change and as nodes appear and disappear, the network will update its topology to optimise delivery, similar to spanning trees in network routing. There may well also be some interesting overlaps with the way torrents work, and in particular Broadcatching, but I've not thought that through yet.

But who should pay for the distribution nodes?

This is probably the key thing that would drive the success, and we have three main options:
  • The end-user pays, as they receive the benefit of improved performance. This would translate into ISPs hosting nodes but charging extra to clients to be able to use them.
  • The ISP pays as it reduces network consumption through their network. As more people move to the protocol, network traffic due to polling will decrease so the ISPs obviously get benefit from hosting nodes at key points just on the internet backbone. Latencies between ISPs would be minimal, and the distribution network topology would align to major routes across the internet.
  • The source web sites pay, as they have lower traffic on their servers to deliver the same amount of content to users. Again this would mean the company pays some provider to use their nodes as seeds. As well as ISPs, companies like Akamai would be well placed to deliver this kind of service.
In reality I'd expect it to turn out to be a combination of the three: to start with it would be free, hosted by the people developing the idea; then, as ISPs catch on, people at both ends would have to pay something to get better performance than the freebies; and finally it would be taken for granted and incorporated by ISPs into the general cost of using the internet.

So what next?

Unfortunately I have a day job (or maybe fortunately in this market), so for this to go any further it would need to be picked up by some cool developers that are also in a position to push the agenda. As the authors of SUP, maybe the FriendFeed developers should take it to the next stage...

28 January 2009

FriendFeed developers, give us some love!

I like FriendFeed a lot. It has displaced FaceBook and my PS3 as the thing I do when I'm not doing anything. And there are new things happening on it all the time, so the developers are obviously hard at work. And they have a FriendFeed feedback room where people talk about the things they want, and they have bug submission forms, etc., etc.

But I've no idea whether the developers really pay attention to what people say in the room or the forms. Given the fix-it Fridays I guess they're looking at the bugs, but who drives the new features?

Well, it should be us, the users.

And this may be happening, but there's no way of telling.

I wrote previously about UserVoice, and I'm bringing it up again now because I think this is exactly what FriendFeed needs. I'll even give you the piccy again:


Let us tell you as a community what's important to us, and let us see what you're doing about it.

And of course if you managed to do some cool integration between the feedback room and the UserVoice page, that'd be even sweeter - comments map to comments, likes map to votes, no need to register separately, and so on.

Go on, show us some love!

26 January 2009

Reddit: The bookmarking site that won't let you bookmark

OK. This I just don't get:

All I was trying to do was post two bookmarks within about a minute of each other.

How does this make sense? Tell me, please.

25 January 2009

Give me more social interop!

I'm sure most people by now are like me and have a proliferation of accounts across social media sites: Twitter, Digg, Blogger, FaceBook, WordPress, FriendFeed, MySpace - the list is endless.

Fairly recently I signed up on FriendFeed, which does an excellent job of bringing all these together into one place and allowing you to do interesting things with the combined social stream. The developers are really active in improving integration with other sites (especially Twitter), respond to feedback from the community, and sometimes come out with things like SUP which have much broader implications.

But it's not perfect. And it's mainly not their fault.

My main beef at the moment is around what happens when someone posts something that people then comment on. Say, for example, that Robert Scoble posts an interesting article on his blog, which then automatically appears on FriendFeed. FriendFeed does an OK job of spotting when someone else shares it on something like Digg or Google Reader, so it appears in their feed as well, marked as related to the original post. But where do you comment on the entry - in FriendFeed or on the blog? If you do it in FriendFeed then anyone not also on FriendFeed and subscribed either to you or to the author will not see it. If you do it on the blog then the comment doesn't make it to FriendFeed. The author has to keep track of multiple separate discussion threads across the original blog site and FriendFeed.

Open it up everybody!

So everyone needs to stop playing the "my site is the one true site" game and accept that there will always be n sites catering to different audiences, but with significant overlap. Now that we're all thinking straight, open up the APIs for commenting/retweeting. If I comment in FriendFeed it should be added to the blog entry (maybe this should be an option for the author when they add their feed to FriendFeed). If someone comments on the blog entry then it should appear in FriendFeed (again an option for the author), and if FriendFeed knows my ID on the blog/Twitter/whatever then it should match this up so the comment appears in my activity stream just as if I'd commented directly in FriendFeed.
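As a straw man for what such an API could look like (every name and field here is invented for illustration), the core of it is just a common comment-event format plus an identity mapping at the receiving end:

# Straw-man sketch of cross-site comment propagation; all names and
# fields are invented for illustration.
import json

def to_interop_event(entry_url, author_id, text):
    # The originating site packages a new comment as a common event...
    return json.dumps({
        "entry": entry_url,    # canonical URL of the original post
        "author": author_id,   # the commenter's ID on the originating site
        "body": text,
    })

def on_interop_event(event, identity_map):
    # ...and the receiving site maps the author onto a local account
    # (if known) and attaches the comment to its copy of the entry.
    data = json.loads(event)
    local_author = identity_map.get(data["author"], "guest")
    print("attach to", data["entry"], "-", local_author, "says:", data["body"])

event = to_interop_event("http://blog.example.com/post/42", "ff:alice", "Great post!")
on_interop_event(event, {"ff:alice": "blog-user-17"})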

How hard can that be? I don't even think there are commercial arguments against it, as surely this would just drive extra traffic to the sites because there would be more interesting discussions.

So FriendFeed developers please come up with a cool API for doing this, then everyone else please implement it so we can bring it all together.


Update

Just spotted that when you comment in FriendFeed on someone else's Tweet you have the option of also posting it as an @reply on Twitter. Good stuff - now what about the rest?

Update 2

It looks like using Disqus to manage your comments gives you some integration with FF, but I'm not sure yet if posting a reply in FF will come back as a comment...

18 January 2009

Companies should take responsibility for their software

One of the problems with working in the IT industry is that you become tech support to anyone you know who's not in the IT industry (and some of those who are but shouldn't be). My mother-in-law is not in the IT industry, and computers to her are a way to get things done rather than holding any particular interest, so inevitably I spend some time helping her out with the odd issue or two.

Toward the end of last year she finally decided that the ancient desktop she had running Windows 98 ME really, really was past it, so went out and bought a new Acer laptop which came preinstalled with Vista. So of course the odd issue or two came up. Living 200 miles away from your MiL is something many people envy, but it does have the downside that tech-support troubleshooting usually involves the phone, with her describing what's going on (or her interpretation of it) and me trying to figure out what's really happening and how to fix it.

As I don't have any Vista machines of my own, this was even more trouble than usual, so the next time we were down visiting I thought I'd just install MSN Messenger and show her how to do the Remote Assistance thing. I could then see what was really happening and also take control to fix things, rather than having to explain what click-and-drag was about.

But no. Messenger installation failed right at the end. Which I thought was a bit odd, even for Microsoft, as this was their latest OS, and their latest Messenger, and even they would have made sure this worked.

After a couple of retries to make sure I wasn't being totally dense, I hit Google, and sure enough other people were seeing the same problem, which turned out to be related to some Acer software that they had bundled on the laptop (which surely the European Commission would have something to say about). I was running out of time so didn't want to uninstall the Acer stuff, and so had to give up.

When we were back down over Christmas I had another look (as I'm a bit stubborn that way sometimes, and I know there are other options out there, but this should have worked, damnit), and this time the MSN installer had some diagnosis pointing to a patch from Acer (presumably because MS had had many people complaining about the same thing). It turns out that Acer have this thing called eDataSecurity which, amongst other things, hacks MSN Messenger to encrypt file transfers across it. And the bad bit is that they hadn't kept it up to date with new versions of Messenger, so this was preventing the installation from working.

Fine, at least we now had a fix. So off I went to the Acer site to get the patch and the instructions at the time took me to an FTP directory listing:

Now forgive me if I'm wrong, but I would have thought that anyone non-technical would at this point have precisely zero chance of working out what to do next. Obviously I figured out to get the readme zip, extract the file, and then get the real installer that I wanted. But I consider it completely unacceptable that Acer found out they'd screwed up and then thought this was an adequate solution for their average user.

To give them their due, they have now improved the page that this linked from, so the first step it now describes is:
Extract eDSMSN81patch.exe from the patch .rar file to an empty folder.
Like my mother-in-law is going to understand that. So they screwed up, realised that they screwed up their solution, then screwed up again. Nice going.

The good news is that the patch worked and I'm now fully remote-support-enabled, but two key messages for Acer and anyone else out there doing this kind of thing:
  • If you're going to hack someone else's software then you'd better keep on top of it.
  • Know your customer. Know what level of support they need. Know that if you screw up they are going to slag you off to everyone they meet.
I wasn't that keen on Acer before, but I can safely say that now I will never buy an Acer product. Ever.

12 January 2009

How many software developers does it take to write good software?

Kirk’s had a couple of recent rants against the use of Agile if you’re an excellent developer, and I thought I’d comment on this, but realised that it’s part of a different question: how big should the ideal development team be?

My answer is that it depends on the situation.


Types of developer

To explain my thinking, I’m going to start by loosely categorising developers into three archetypes: Production-Line, Regular and Alpha.
  • Production-Line developers: These are the ones where it wasn’t obvious that programming was going to be their life-choice – they weren’t born with the “programming gene” but for one reason or another ended up doing it anyway. Typically, they will need a lot of guidance, and are used most effectively in environments where they have detailed requirements given to them, strong processes around the way they work, and a large QA/test facility to ensure that what they produce works properly.
  • Regular developers: These get what programming is and enjoy doing it. They are quite happy to take loose requirements and work out what the client really means by them, use a bit of initiative and produce good quality software at a reasonable pace. The level of process and testing needed around them is lower than Production-line, but they like working in an environment where there are clear goals and a road-map to take them there.
  • Alpha developers: These are the people who know that they are the best developer around, and are not afraid to remind you of this. They typically score quite highly on the AQ test and are likely to rub people up the wrong way - partly because they always know best, and partly because they’re usually right. Alpha developers tend to really get into a particular technology (Python, Erlang, lambdas, WPF, MapReduce, whatever) and try to use it for everything until the next shiny toy comes along. Processes annoy them (“just slowing me down, yo”). Detailed requirements annoy them (“it’s quicker to write the code than a requirement doc”). Unnecessary testing annoys them (“my code doesn’t have bugs, bugs are for losers”). Not being able to use the latest toys annoys them (“you chose the wrong technology”). But if you want innovation in your organisation then this is where to look for it.

Effect on development team size

Using the three different types effectively results in quite different team sizes:
  • Production-line: Your team size is typically quite large (e.g. 20+), and a large proportion of the team is focused around activities other than straight development (e.g. project management, QA/testing, business analysis).
  • Regular developers: Team size for an effective team usually doesn’t get bigger than about 8 developers plus a PM and maybe a couple of people for QA and business analysis.
  • Alpha developers: Ideal team size is two, or maybe three if they have a (good) history of working together. One Alpha on his own will run the risk of not staying focused and also not having anyone to argue with about technology choices. More than three and you’re in for a whole world of pain and minimal productivity as they spend all their time arguing rather than developing.
So which kind of team do you want? It sounds like Alphas are best, but you also have to take into account the kind of company that you’re working in. Larger more traditional organisations typically like to have a lot of controls and processes around things as this allows them to tick boxes on audit reports that are issued from so far up the hierarchy that there is no visibility of what actually happens on the ground. They also have little appetite for the introduction of many new technologies or techniques as these are perceived to introduce unnecessary risk. And these are precisely the things that would drive an Alpha nuts. Plus if you really do have to churn out a big system then you probably couldn’t get enough Alphas to work together to be able to achieve it.

The result is a reasonable correlation between type of organisation, risk appetite, and kind of developers. To give some examples from the Financial sector:


But what about Agile?

I’m differentiating here between little-a agile (unit tests, continuous integration, refactoring and so on) from big-A Agile (total buy-in, stand-up meetings, Product Owner at the planning meetings, a near-religious fervour in defending Agile and creating new believers).

A lot of the agile techniques are used across all three kinds of teams very effectively. But Agile itself in my experience only really works for the middle group.

Team sizes for Production Line are too large to use Agile effectively, as it’s all about communication. In Scrum, for example, it's recommended that the ideal team size is about seven, and if you get bigger than this you start breaking into sub-teams to bring you back to the ideal.

Try to get Alphas to do Scrum and they’ll see it as a total waste of time and probably quit (or just use work time to do some pet project whilst complaining to anyone within earshot).

But for Regular developers it does increase overall productivity and also give other benefits such as client visibility/ownership that are usually desirable. I’m not sure exactly why this is, but my guess would be that this kind of size is about right for the communication-boosting techniques that are introduced, and that this kind of developer is receptive to a lightweight process rather than the kind used for Production-Line.

This is just based on what I’ve seen, and I’m sure that there are many people out there that can give counter-examples of applying Agile to small or big teams, but was it really the right thing to do or did you lose more than you gained?


Conclusion

If you’re building a team, think hard about what kind of people you want in it and how many, based on what you have to deliver and also what kind of environment you’re in. And make sure your methodology fits your team.

10 January 2009

Consumer software vendors: this is how you should do customer feedback

Recently I shelled out a massive £1.19 on the Nambu iPhone app. The app itself isn't actually the subject of this post, though, because the thing I like best about it is not the app but how Nambu manage feedback from customers on bugs and feature requests: UserVoice. Here it is in action:
Anyone who's used Digg before will instantly recognise this, but the genius part from UserVoice is to produce something that's a cross between Digg and JIRA. Here are the key features of how it works:
  • It appears as part of your normal web site (e.g. http://iphone.feedback.nambu.com/).
  • It's very easy to search existing items and create new ones.
  • Customers vote for the things they want done, using up to three votes per item out of an allocation of ten, and can also comment on items.
  • Customers can see what's new, what's hot, and if they want get an RSS feed of updates.
  • The developers keep watch over the list, review items and schedule them for releases (so what we've really got here is an Agile Product Backlog driven by consensus across all clients).
  • Customers can immediately see what's going on with an item - being reviewed, planned for a specific release - and the developers comment on which release a feature will go into.
I think this is a great approach. It brings the customers much closer to what's going on with the app and allows them to drive what happens next (while limiting total votes to ten keeps them focused on the things that really matter to them). It allows the developers to see what's worth putting their time into, rather than having to rely on guesswork or "market research". It gives the customers somewhere they can see they're being listened to, rather than having to use an e-mail support black hole or ring a call centre with its hold-music tedium and disinterested operators. It's very 2.0. What more could you want?

Whilst this works very well for small applications, it would be interesting to see how this really does scale up for something like iTunes (UserVoice do have some big customers on their client list). It's also worth noting that JIRA has had a votes feature for years, but it just doesn't achieve what UserVoice have done.

So if you're a small software application vendor then you definitely should be looking at UserVoice and asking yourself whether you really want to listen to your customers or not.