At dinner last night, Scott Rosenberg, who is researching his book on the history of blogging, said he couldn’t find any trace of the original version of Tim Berners-Lee’s site, info.cern.ch. I found this amazing.
When I was maintaining the What Are Weblogs page on weblogs.com, in 2000, I said up-front that TBL’s site was also the first weblog. The crazy thing is I remember looking at the site, with my own eyes, and realizing that I was looking at history, like listening to the first telephone conversation or watching Thomas Edison turn on his first electric light bulb.
Today, in 2008, the network we’re building with Twitter is, imho, as historic as any of those things. We’re all creating artifacts and connections that are even more fragile than the early web, because, unlike the web, Twitter is 100 percent centralized. We all trust the owners of Twitter, but they’re human; even with the best intent, we’re all taking a risk that the network could disappear at any time. And unlike the Internet, which has huge amounts of redundancy built in, if there’s any redundancy in Twitter, none of us outside the company knows about it.
This is just plain unacceptable.
I’m on the case because I care so much about this medium. If it were to disappear, I would feel partially responsible if I hadn’t raised a huge red flag about the very unreliable architecture we’re building on.
And, if you know where there’s a backup of the original info.cern.ch, please post a link here, in a comment.
Update #1: A new web service for Twitter clients.
Update #2: Marc Canter checks in.
You have to fit the phrase into conversation at least once during the day. Example: “It’s bad design to put all your eggs in one basket. One day your chickens will come home to roost.”🙂
Taken last night on Indian Rock.
A view of the back of Indian Rock on Google Maps.
Yesterday I wrote about a way to prepare to decentralize Twitter, in the event of a lengthy outage. The goal is to create no extra work or complexity for users. I think this is the responsible way for developers to help, because it’s 1. Not a good idea to build a centralized system around a for-profit company, and 2. Users generally won’t do anything extra to decentralize in preparation for an outage, but when one happens, they blame us (technologists) for not protecting them. Right or wrong, this is the way it is. So I’m working on a step-by-step bootstrap that, if enough developers go along, will have us reasonably protected against a prolonged Twitter outage. It’s not to say that it’s the only way to do it, but it seems to me that it’s one way.
I said I might put up a web service to store users’ RSS feeds on Amazon S3, and I’d pick up the hosting bill, to help the bootstrap. One developer took me up on the proposal, so I went ahead and implemented it. Here’s how it works.
1. There’s a new XML-RPC service at this address: xmlrpc://rpc.twittergram.com/RPC2
2. The name of the procedure is twittergram.saveFeed.
3. It takes three params: the user’s Twitter username and password, and the text of the feed. The password is used only to authenticate; it is not stored on the server.
4. It returns the URL of the feed as it’s stored on feeds.twittergram.com.
5. Code (in UserTalk) that works.
local (server = "xmlrpc://rpc.twittergram.com/RPC2")
local (username = "davewiner", password = user.twitter.prefs.password)
local (feedtext = tcp.httpreadurl ("http://twitter.scripting.com/daveRss.xml"))
local (url = [server].twittergram.saveFeed (username, password, feedtext))
6. You can call the routine at most once a minute. This limit may be raised if it becomes a popular service. My server is limited to 70 calls per hour; again, something will have to be done if it becomes popular.
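For developers not working in UserTalk, the same call maps directly onto any XML-RPC library. Here’s a rough Python sketch using the standard library. To keep the example self-contained and runnable, a local stub server stands in for rpc.twittergram.com — the stub’s behavior (and the returned URL) is my assumption for illustration; against the real service you’d point ServerProxy at http://rpc.twittergram.com/RPC2 and pass your actual Twitter credentials and feed text.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def save_feed(username, password, feedtext):
    # Hypothetical stand-in for the real service: the real one authenticates
    # with the password (never storing it) and saves the feed to S3, then
    # returns the URL where the feed is stored on feeds.twittergram.com.
    return "http://feeds.twittergram.com/%s.xml" % username

# Local stub server on a random free port; the default request handler
# already accepts the /RPC2 path used by the real endpoint.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
server.register_function(save_feed, "twittergram.saveFeed")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: dotted attribute access builds the "twittergram.saveFeed"
# procedure name, matching the UserTalk snippet above.
client = ServerProxy("http://localhost:%d/RPC2" % port)
feedtext = '<rss version="2.0"><channel><title>dave</title></channel></rss>'
url = client.twittergram.saveFeed("davewiner", "secret", feedtext)
print(url)  # http://feeds.twittergram.com/davewiner.xml

server.shutdown()
```

The three positional params (username, password, feed text) and the returned URL follow steps 3 and 4 above; everything inside the stub is illustrative only.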