It’s going to be a busy few weeks starting at the end of the month, beginning with a discussion I’m leading at Mix 07 at the Venetian in Las Vegas on April 30 at 3PM. The topic is how to design a perfect podcast player, but I have a hunch we’ll branch out into other topics as well. Unlike the other sessions at the conference, there will be no panel and no audience. I will speak for a few minutes to get some discussion topics out there, and then we’ll see what’s on everyone’s minds. We’ll make sure the discussion has an online presence, maybe someone will even live-webcast it.
Yesterday I posted what seemed then to be a rational comment policy, and on re-reading it, it seems equally rational today. I hope people consider posting one of their own, and since I link to and quote another blogger, we could start a process of refinement in which each of us helps the others draft theirs. To me that would be the true blogger way to solve the problem, something like a bucket brigade. Blogging is inherently DIY and decentralized. I think that’s why we like cats so much. :-)
CNN: “Millions of White House e-mails may be missing, White House spokeswoman Dana Perino acknowledged Friday.”
TechDirt quotes Lorne Michaels, the creator of Saturday Night Live, on YouTube. “If the work is good, I want the most number of people to see it.”
CNN reports Google buys Doubleclick for $3.1 billion.
If you’re pointing to the RSS 2.0 spec, you may want to point to its new location.
I found this project interesting, because I want to learn how to create a website that lives for decades, if not longer.
Here are some of the techniques I employed:
1. Everything is static. It can all be served by a standard install of Apache, with no plug-ins or special software required.
2. It’s self-contained. Every resource it uses is stored within the site’s folder. That includes images, screen shots, example files, downloads.
3. Almost all the links are relative. As far as I know, only one type of link is not: the links to the blue arrow image that marks an internal document link. If for some reason at some time in the future, cyber.law.harvard.edu should go offline, and the site has been moved to a new location, the blue arrows will appear as broken images. I may yet fix this one. I don’t think there are any other hard-coded links in the site.
The goal was to make it so that a future webmaster, wanting to relocate the site, would just have to move the folder, add some redirects, and everything would work, more or less.
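A future webmaster could sanity-check that portability before moving the folder. Here’s a minimal sketch in Python (the script is my illustration, not something shipped with the site) that walks a site folder and reports any absolute links; if the claims above hold, the only hits should be the blue-arrow image links:

```python
import os
import re

# Match href/src attributes pointing at absolute http(s) URLs.
# A simplified pattern for a sanity check; a real audit would
# use an HTML parser instead of a regex.
ABSOLUTE = re.compile(r'(?:href|src)\s*=\s*["\'](https?://[^"\']+)["\']', re.I)

def find_absolute_links(folder):
    """Return (filename, url) pairs for every absolute link in the site."""
    hits = []
    for root, _dirs, files in os.walk(folder):
        for name in files:
            if not name.endswith((".html", ".htm")):
                continue
            path = os.path.join(root, name)
            with open(path, encoding="utf-8", errors="replace") as f:
                for url in ABSOLUTE.findall(f.read()):
                    hits.append((path, url))
    return hits
```

Run it over the site’s folder; an empty (or nearly empty) result means the folder really can be dropped onto a new server and just work, with redirects handling the old URLs.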
You can also download the whole site, from a link on the site’s About page. You’re free to mirror it if you like. And as always it’s licensed under the Creative Commons, giving everyone the ability to create new things from it. (I also included the Frontier CMS tables the site was generated from, and the Manila site, in the Downloads folder.)
There was one example where I thought for a second about changing the spec, but I didn’t: the <docs> element, which we say should point to the spec. It’s an optional channel-level element. The example value we provide is the spec’s previous location. I thought this was a good place for me to express the commitment to the spec being totally frozen, so I left it as it was. To change that value would have broken nothing but a promise, but promises are everything when it comes to specs that industries are built on, and the RSS 2.0 spec surely has become a foundation that many build on.
Since I’ve been playing with sitemaps, of course I created one for the RSS 2.0 site.
And I’ve checked to see that the maps I deployed for scripting.com are properly updating, and they are.
But when I checked, I realized that I would have done it differently, so that the sitemaps, in addition to helping search engine crawlers, might be interesting things for human beings to read as well.
The idea was that the content server was responsible for providing a daily reverse-chronological list of pages that had changed. Then a crawler would keep track of when it had last visited my site, and only suck down the files that had changed since then. This would enable search engines to be more efficient, and provide more current content. It was nice because you could read it yourself and see what had changed. Contrast this with sitemaps, where you have to go hunting for the changes; as a user interface for finding the new and newly updated stuff, it’s no better than the file system is. I was kind of disappointed.
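The changes-list idea is simple enough to sketch. A hypothetical generator in Python (the function name and data shape are my illustration, not the format I actually used): the server renders the list newest-first, a crawler reads from the top and stops at the first entry older than its last visit, and a person can simply read it.

```python
from datetime import date

def render_changes(pages):
    """Render a reverse-chronological list of changed pages.

    pages: list of (url, date_changed) tuples.
    Newest entries come first, so both crawlers and people
    find what's new at the top.
    """
    lines = []
    for url, changed in sorted(pages, key=lambda p: p[1], reverse=True):
        lines.append("%s  %s" % (changed.isoformat(), url))
    return "\n".join(lines)
```

The design choice is that one file serves both audiences: the same reverse-chronological order that makes crawling efficient is also what a human reader wants.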
Another thing I would have done differently is to allow sitemaps to include other sitemaps. There really is no need for two file types; just let me link to an index from an index, much like inclusion in OPML 2.0. This added an extra layer of complexity for everyone implementing sitemaps on moderately large sites, or old ones where some content changes frequently and other content not so frequently (like scripting.com).
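For reference, here’s what the second file type looks like in the Sitemaps protocol: a sitemap index (<sitemapindex>) points at ordinary sitemap files (<urlset>), but an index can’t point at another index the way an OPML outline can include another outline. The URLs are made up:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>http://www.example.com/sitemap-current.xml</loc>
    <lastmod>2007-04-14</lastmod>
  </sitemap>
  <sitemap>
    <loc>http://www.example.com/sitemap-archive.xml</loc>
  </sitemap>
</sitemapindex>
```

With recursive inclusion, the frequently-changing part of a site could live in one small sitemap and the archive in another, linked from it, with one file format instead of two.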
However, on balance, it’s a great thing that all these companies got together and did something to make the web work better. We need more of that!
If anyone is working on more stuff like this, I am available to review it before it’s cast in stone.
I don’t give a shit if the new OS is delayed. :-)