Scripting News for 4/13/2007

Mix 07 

It’s going to be a busy few weeks starting at the end of the month, beginning with a discussion I’m leading at Mix 07 at the Venetian in Las Vegas on April 30 at 3PM. The topic is how to design a perfect podcast player, but I have a hunch we’ll branch out into other topics as well. Unlike the other sessions at the conference, there will be no panel and no audience. I will speak for a few minutes to get some discussion topics out there, and then we’ll see what’s on everyone’s minds. We’ll make sure the discussion has an online presence; maybe someone will even live-webcast it.

Rational comment policy, day 2 

Yesterday I posted what seemed then to be a rational comment policy, and on re-reading it, it seems equally rational today. I hope people consider posting one of their own, and since I link to and quote another blogger, we could start a process of refinement where we each help the others draft their policies. To me that would be the true blogger way to solve the problem, something like a bucket brigade. Blogging is inherently DIY and decentralized. I think that’s why we like cats so much. :-)

Today’s Links 

CNN: “Millions of White House e-mails may be missing, White House spokeswoman Dana Perino acknowledged Friday.”

TechDirt quotes Lorne Michaels, the creator of Saturday Night Live, on YouTube. “If the work is good, I want the most number of people to see it.”

TechCrunch is hit with a defamation suit. Rex Hammock, who yesterday reported on yet another defamation suit, has more data on the latest one. It’s in the air. Coast to coast.

CNN reports Google buys DoubleClick for $3.1 billion.

From John Feld comes this handy tip. Use Google Maps to plot a course from Berkeley to London. :-)

New RSS 2.0 spec site deployed 

As I reported here and here, I’ve been slowly working on a project to “future-safe” the Harvard site that houses the RSS 2.0 spec. Yesterday, we started redirecting from the old site to the new one.

If you’re pointing to the RSS 2.0 spec, you may want to point to its new location.

I found this project interesting, because I want to learn how to create a website that lives for decades, if not longer.

Here are some of the techniques I employed:

1. Everything is static. It can all be served by a standard install of Apache, with no plug-ins or special software required.

2. It’s self-contained. Every resource it uses is stored within the site’s folder. That includes images, screen shots, example files, and downloads.

3. Almost all the links are relative. As far as I know, only one type of link is not: the links to the blue arrow that marks an internal document link. If for some reason, at some time in the future, cyber.law.harvard.edu should go offline and the site has been moved to a new location, the blue arrows will appear as broken images. I may yet fix this one. I don’t think there are any other hard-coded links in the site.

The goal was to make it so that a future webmaster, wanting to relocate the site, would just have to move the folder, add some redirects, and everything would work, more or less.
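To make the "move the folder, add some redirects" step concrete: on a standard Apache install it can be a one-line rule at the old location. This is a sketch, not the site's actual configuration; the paths and hostname are hypothetical.

```apache
# Hypothetical .htaccess at the old site's root.
# Sends every old URL to the same relative path at the new home,
# which works because the site's internal links are all relative.
RedirectMatch permanent ^/rss/(.*)$ http://new-host.example.org/rss/$1
```

Because everything inside the folder is self-contained, this one redirect per old path prefix is, more or less, all a future webmaster would need.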

You can also download the whole site, from a link on the site’s About page. You’re free to mirror it if you like. And as always it’s licensed under a Creative Commons license, giving everyone the ability to create new things from it. (I also included the Frontier CMS tables the site was generated from, and the Manila site, in the Downloads folder.)

There was one place where I thought for a second about changing the spec, but I didn’t: the <docs> element, which we say should point to the spec. It’s an optional channel-level element, and the example we provide is the previous location. I thought this was a good place to express my commitment to the spec being totally frozen, so I left it as it was. Changing that value would have broken nothing but a promise, but promises are everything when it comes to specs that industries are built on, and the RSS 2.0 spec has surely become a foundation that many build on.
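For readers who haven’t looked at a feed lately, here is where <docs> sits in a channel. The feed contents below are invented for illustration; the <docs> value shown is, to my knowledge, the long-standing example URL from the spec itself, which is exactly the value being left frozen.

```xml
<rss version="2.0">
  <channel>
    <title>Example Feed</title>
    <link>http://example.org/</link>
    <description>An invented channel for illustration.</description>
    <!-- Optional channel-level element; points a reader at the spec.
         The spec's own example keeps the previous location, on purpose. -->
    <docs>http://blogs.law.harvard.edu/tech/rss</docs>
  </channel>
</rss>
```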

Of course, if you spot any breakage, please let me know ASAP. Post a comment here, or send me a private email.

Sitemaps, day 3 

Since I’ve been playing with sitemaps, of course I created one for the RSS 2.0 site.

And I’ve checked to see that the maps I deployed for scripting.com are properly updating, and they are.

But when I checked, I realized that I would have done it differently, so that the sitemaps, in addition to helping search engine crawlers, might be interesting things for human beings to read as well.

I refer back to sitechanges.txt, a simple project I was doing in 1997 that was like sitemaps. It was also before I did XML. :-)

The idea was that the content server was responsible for providing a daily reverse-chronologic list of pages that had changed. A crawler would keep track of when it had last visited my site, and only suck down the files that had changed since then. This would let search engines be more efficient, and provide more current content. It was also nice because you could read the list yourself and see what had changed. Contrast this with sitemaps, where you have to go hunting for the changes; as a user interface for finding the new and newly updated stuff, it’s no better than the file system. I was kind of disappointed.
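The sitechanges.txt idea above reduces to a small program: walk the site folder, keep the files modified since the crawler’s last visit, and emit them newest-first. A minimal sketch in Python, with a hypothetical function name; Winer’s original 1997 format isn’t specified here, so the output shape is an assumption.

```python
import os
import time

def site_changes(root, since, now=None):
    """Return site-relative paths of files modified after `since`
    (a Unix timestamp), newest change first -- the reverse-chronologic
    list a crawler would read instead of recrawling everything."""
    if now is None:
        now = time.time()
    changed = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            mtime = os.path.getmtime(path)
            if since < mtime <= now:
                # store (mtime, relative path) so sorting gives newest first
                changed.append((mtime, os.path.relpath(path, root)))
    changed.sort(reverse=True)
    return [rel for _mtime, rel in changed]
```

A crawler would fetch this list once a day, compare it to its last-visit timestamp, and download only the listed files; a human can read the same list top to bottom.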

Another thing I would have done differently is allow sitemaps to include other sitemaps. There’s really no need for two file types; just let me link to an index from an index, much like inclusion in OPML 2.0. The two-type design adds an extra layer of complexity for everyone implementing sitemaps on moderately large sites, or on older ones where some content changes frequently and other content not so frequently (like scripting.com).
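For reference, this is the two-file-type design being criticized: per the sitemaps.org protocol, a sitemap index may only point at ordinary urlset sitemaps, never at another index, so you can’t nest them. A minimal index, with invented URLs:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Each <sitemap> must point at a urlset file, not another index. -->
  <sitemap>
    <loc>http://example.org/sitemap-current.xml</loc>
    <lastmod>2007-04-13</lastmod>
  </sitemap>
  <sitemap>
    <loc>http://example.org/sitemap-archive.xml</loc>
  </sitemap>
</sitemapindex>
```

Under the OPML-style inclusion proposed above, sitemap-archive.xml could itself be an index, letting a site split rarely-changing content into its own tree.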

However, on balance, it’s a great thing that all these companies got together and did something to make the web work better. We need more of that!

If anyone is working on more stuff like this, I am available to review it before it’s cast in stone.

A nice thing about *not* being a Mac fanatic 

I don’t give a shit if the new OS is delayed. :-)

8 responses to this post.

  1. I have many concerns. In fact, I’m gonna say that bad. I thought you were going to create a static version of the existing site. In fact, you’ve created a new site, which compounds the existing confusion. Could you please at least forward the previous site’s URLs to the new ones?

  2. typo

    I’m gonna say this is bad.

  3. The Google Maps gag appears to work for most places in Europe, but Africa, South America, and Australia appear to be out of the question. I guess swimming across the Atlantic is easier than crossing the Panama Canal. Or driving through Colombia. Yes, that might be part of the problem.

  4. Posted by Kosso on April 13, 2007 at 2:13 pm

    Probably worth pointing out, about “if you’re pointing to the spec”:

    That means that all the millions of RSS feeds out there that do point to the spec should change their rss/channel/docs element value.

    It might be worth highlighting that a bit ‘louder’ ;)

  5. Or, as Randy says: redirect the old site to the new one.

  6. Dave longs for a world that does not exist.

    As such, I present:

    http://thegoreyears.wordpress.com/iraq-during-the-gore-years/

    But he will ignore it, because he didn’t write it. Such is the beauty of narcissism, and the ability of those to survive.

  7. This will be, btw, my largesse, in terms of hits.

    Very Satisfying. Ooompa. I love the feel of narcissism. Even if I can’t spell it.

  8. Thanks Dave! I see that’s done.
