15 February 2009

Has Google just killed NuevaSync?

I've been using NuevaSync with my iPhone for a while now. It was just what I needed, allowing me to pull my Google calendar down into the iPhone's calendar and push edits on my iPhone back to Google. Lovely. And a recent update even made it so that multiple calendars in Google show up as different colours in the iPhone calendar.

Last week, Google launched a beta of their new Google Sync which allows you to connect to Google with ActiveSync (as if it were an Exchange server). Splat - and there goes the need for NuevaSync.

It's obviously not quite as black and white as that - NuevaSync do other things as well, and Google don't yet do the multi-calendar colour thing - but you have to be wondering how this is going to impact NuevaSync's model.

And it's not just NuevaSync. There are a whole host of small companies and individuals doing a fantastic job of setting up systems that provide value-add to the applications from the big boys - just look at all the apps that have sprung up around Twitter or Facebook. So do all these people have to live in fear that one day they'll get squashed from above?

In the meantime, I'm going to use Google Sync for my contacts (sorry NuevaSync, but it removes a layer of complexity), and SaiSuke for my calendar (as it renders much better than the iPhone's built-in app). How long I continue to use IzyMail for my Hotmail before they get stomped remains to be seen...

1 February 2009

Making the real-time web real-time

RSS, Atom, SUP, Web Slices - in the context of receiving site update notifications they are all based on regularly querying the site to find out what has changed. To quote Kirk in one of his rants about SUP:
But it's still polling! It still ultimately doesn't scale. You can make polling suck less, but it will always suck.
Great though SUP in its current form is, it's really an optimisation hack rather than a solution, designed to reduce the traffic to a given site. If I have 10,000 consumers hitting my site's SUP address(es) every second, that's still going to give me a headache.
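
To make the pain concrete, here's a rough Python sketch of what every consumer ends up doing under the polling model. The URL is made up and the field names are the ones I believe the SUP draft uses, so treat the details as illustrative rather than definitive.

    import json
    import time
    import urllib.request

    SUP_URL = "http://example.com/sup.json"  # illustrative SUP document address

    def poll_forever(interval=60):
        seen = set()
        while True:
            # Fetch the whole SUP document just to find out whether anything changed.
            with urllib.request.urlopen(SUP_URL) as response:
                doc = json.loads(response.read())
            for feed_id, update_token in doc.get("updates", []):
                if (feed_id, update_token) not in seen:
                    seen.add((feed_id, update_token))
                    print("feed %s changed (%s)" % (feed_id, update_token))
            time.sleep(interval)  # ...and every other consumer is doing the same

Multiply that loop by every subscriber and the publisher is still serving a constant stream of "nothing new" responses.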

The solution part 1: SUP-Push

The first part of my solution to this problem is to turn the problem around. Whilst there's nothing in particular about the Internet Protocol that is asymmetrical, the whole World Wide Web has grown up around a client requesting information from a server which then sends it a response. The reasons for this are partly the evolution of the WWW from FTP and then Gopher, and partly because it puts everything under the control of the publishing site rather than relying on any intermediaries.

As we move toward the real-time web, consumers don't want to have to wait to see a Twitter message or even a new news event appear in their client app - they want them instantly. The only way to achieve this is to push update notifications rather than polling for them.

This requires two changes at the publisher site:
  • Provide (and advertise) a way for clients to make a connection that they keep open and down which updates are sent.
  • Send updates down this connection as soon as they happen (or in batches at a frequency determined by the publisher).
The good news is that providing a connection is easy - let's just use a URL to a TCP/IP socket (or, say, HTTP) for that - and that with not very much work at all we can use the SUP update document format to deliver the changes.
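
For illustration, here's a minimal Python sketch of what the publisher's side of that might look like - everything about it (the port, the newline-delimited framing, the exact shape of the update document) is an assumption rather than a spec:

    import asyncio
    import json

    subscribers = set()  # open, long-lived client connections

    async def handle_client(reader, writer):
        # The client connects once and we simply hold the socket open.
        subscribers.add(writer)
        try:
            await reader.read()  # returns when the client disconnects
        finally:
            subscribers.discard(writer)

    async def publish(feed_id, update_token):
        # Reuse the SUP update-document shape to say what changed, pushed the
        # moment it happens rather than waiting to be asked.
        doc = json.dumps({"updates": [[feed_id, update_token]]}) + "\n"
        for writer in list(subscribers):
            try:
                writer.write(doc.encode())
                await writer.drain()
            except ConnectionError:
                subscribers.discard(writer)

    async def main():
        server = await asyncio.start_server(handle_client, "0.0.0.0", 8080)
        async with server:
            await server.serve_forever()

The site's application code would call publish() whenever content changes; clients just sit on the open connection and react to whatever arrives.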

The bad news is that keeping lots of sockets open to clients is not scalable, which brings us to:

The solution part 2: Distribution network

Obviously every client keeping a socket open to every server is wrong, so what we need is some kind of distribution network, so that clients can connect to just a few distribution nodes (or maybe just one) and receive updates for any of the feeds to which they are subscribed. Similarly, the web sites will not want to publish to every distribution node just in case someone is subscribed, so they will publish to a few "seed" nodes (deliberately using some BitTorrent terminology here) which then propagate the updates into the wider network.

What we end up with, therefore, is some kind of self-organising mesh that works out the most efficient topology based on some function of load and internet distance (by which I mean latency) to the consumer.
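
As a toy illustration of that "function of load and internet distance", a node might pick its upstream source for a feed something like this - the weighting here is entirely made up:

    def choose_upstream(candidates):
        # candidates: [{"node": ..., "latency_ms": ..., "load": 0.0 to 1.0}, ...]
        # A heavily loaded node is treated as if it were further away.
        def cost(candidate):
            return candidate["latency_ms"] * (1.0 + candidate["load"])
        return min(candidates, key=cost)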


It would be nice if there were already a ubiquitous asynchronous message-oriented middleware out there that we could use to implement the distribution network, but although AMQP may get there eventually, it's not there yet. Similarly, it would have been nice if IPv6 had helped us out here, but although it overhauls multicast and introduces "anycast", these still don't really help outside an organisation.

So instead, if we assume that the end-client (you or I) has a few well-known nodes that they can connect to (hosted by their ISP, say), the sequence would probably look something like this:
  • Client establishes connection to local distribution node.
  • Client sends request for subscription URL to its local distribution node.
  • If the node already has that subscription then it just starts sending updates down to the client. End.
  • Otherwise, the node asks the publishing site for a connection.
  • The site can either accept the connection and start sending updates, or redirect the node to one of its seed nodes.
  • If redirected to a seed node, the local node and seed node then use some cunning algorithm based on existing routes to give the local node the most efficient source for its updates.
Over time, as loads on nodes change and as nodes appear and disappear, the network will update its topology to optimise delivery, similar to spanning trees in IP routing. There may well also be some interesting overlaps with the way torrents work, and in particular Broadcatching, but I've not thought that through yet.
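
Purely as a sketch of that sequence from the local node's point of view, in Python - the message shapes and helper names are hypothetical, and the "cunning algorithm" is left as a stub:

    class DistributionNode:
        def __init__(self):
            self.feeds = {}  # subscription URL -> set of downstream clients

        def subscribe(self, client, url):
            if url in self.feeds:
                # Already receiving this feed: just fan it out to one more client.
                self.feeds[url].add(client)
                return
            reply = self.request_connection(url)  # ask the publishing site
            if reply["type"] == "ok":
                upstream = reply["source"]
            else:  # "redirect": talk to the seed node the site pointed us at
                upstream = self.negotiate_route(reply["seed"], url)
            self.feeds[url] = {client}
            self.attach_upstream(upstream, url)  # updates now flow down to clients

        def request_connection(self, url):
            raise NotImplementedError("wire protocol to the publishing site")

        def negotiate_route(self, seed, url):
            raise NotImplementedError("the 'cunning algorithm' based on existing routes")

        def attach_upstream(self, upstream, url):
            raise NotImplementedError("subscribe upstream and relay into self.feeds[url]")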

But who should pay for the distribution nodes?

This is probably the key thing that would drive the success, and we have three main options:
  • The end-user pays, as they receive the benefit of improved performance. This would translate into ISPs hosting nodes but charging extra to clients to be able to use them.
  • The ISP pays, as the protocol reduces traffic across their network. As more people move to the protocol, traffic due to polling will decrease, so ISPs obviously benefit from hosting nodes at key points on the internet backbone. Latencies between ISPs would be minimal, and the distribution network topology would align with major routes across the internet.
  • The source web sites pay, as they have lower traffic on their servers to deliver the same amount of content to users. Again this would mean the company pays some provider to use their nodes as seeds. As well as ISPs, companies like Akamai would be well placed to deliver this kind of service.
In reality I'd expect it would turn out to be a combination of the three - to start with it would be free, hosted by the people developing the idea; then, as ISPs catch on, people at both ends will have to pay something to get better performance than the freebies; and finally it becomes taken for granted and is incorporated by ISPs into the general cost of using the internet.

So what next?

Unfortunately I have a day job (or maybe fortunately, in this market), so for this to go any further it would need to be picked up by some cool developers who are also in a position to push the agenda. As the authors of SUP, maybe the FriendFeed developers should take it to the next stage...