I've slowly been appreciating the wisdom in using REpresentational State Transfer (REST) as the architectural style for building services on the Web. My most recent influences have been Nelson Minar's ETech presentation entitled Building a New Web Service at Google and a discussion on the XML-DEV mailing list last week.

The REST architectural style was first formally defined in Chapter 5 of Roy Fielding's Ph.D. dissertation. Its principles are succinctly distilled in the Wikipedia entry for REST, which currently states

REST's proponents (sometimes referred to as RESTafarians) argue that the web has enjoyed the scalability and growth that it has as a result of a few key design principles:

  • A fundamentally stateless client/server protocol: each HTTP request/response pair is complete in itself, and participants are not required to take additional steps to track states (though in practice, many HTTP servers use cookies and other devices to maintain a session state).
  • A limited number of well-defined protocol operations: HTTP allows very few verbs, the most important of which (for REST) are GET, POST, PUT, and DELETE. These correspond fairly closely with the basic CRUD functions required for data persistence.
  • A universal means of resource-identification and -resolution: in a RESTful system, every piece of information is uniquely addressable in a single namespace through the use of a URI.
  • The use of hypermedia both for application information and application state-transitions: the resources in a REST system are typically HTML or XML files that contain both information and links to other resources; as a result, it is often possible to navigate from one REST resource to many others, simply by following links, without requiring the use of registries or other additional infrastructure.

These principles can be further distilled to the simple phrase, "Just use HTTP". The problem, however, is that lots of people don't actually use HTTP correctly. This ranges from many HTTP-based systems not supporting the DELETE and PUT methods to many services built on HTTP ignoring the specification's requirement that HTTP GET requests be idempotent and safe. On the latter issue, the HTTP specification states

9.1 Safe and Idempotent Methods
9.1.1 Safe Methods

Implementors should be aware that the software represents the user in their interactions over the Internet, and should be careful to allow the user to be aware of any actions they might take which may have an unexpected significance to themselves or others.

In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.

...
9.1.2 Idempotent Methods

Methods can also have the property of "idempotence" in that (aside from error or expiration issues) the side-effects of N > 0 identical requests is the same as for a single request. The methods GET, HEAD, PUT and DELETE share this property. Also, the methods OPTIONS and TRACE SHOULD NOT have side effects, and so are inherently idempotent.

There are a number of reasons for HTTP GET requests to be both idempotent and safe. Being idempotent means that the various caching systems between the user and the web server, from the browser cache to caching proxies, can cache the request without worrying whether the web server actually needed to service it. And as Sam Ruby pointed out in his post AJAX Considered Harmful, being safe means unsuspecting grandmothers and bots everywhere cannot be tricked into modifying online databases simply by following a link.

However a number of web services that have been held up as examples of RESTful APIs actually violate these principles. These include the Bloglines sync API, the Flickr API and the del.icio.us API.

  1. An example of where the Bloglines sync API uses HTTP GET requests in a way that is not idempotent or safe is the getitems method which has a parameter to "mark unread items as read".

  2. An example of where the Flickr API uses HTTP GET requests in a way that is not idempotent or safe is the flickr.photosets.delete method which deletes a photo set.

  3. An example of where the del.icio.us API uses HTTP GET requests in a way that is not idempotent or safe is the http://del.icio.us/api/posts/delete function which deletes a post from del.icio.us.
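In each of these cases a destructive operation is tunneled through an HTTP GET. As a rough illustration (this is not the actual del.icio.us API; the URI and identifiers below are hypothetical), a RESTful design would address the post as a resource and let the HTTP verb carry the side effect:

using System;
using System.Net;

class RestfulDeleteSketch
{
    static void Main()
    {
        // Hypothetical resource URI for a bookmark; a RESTful design addresses
        // the post itself rather than hiding a "delete" action behind a GET.
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(
            "http://example.com/api/posts/12345");

        // The verb carries the side effect. GET stays safe and idempotent,
        // so caches, crawlers and link-prefetching browsers can't delete anything.
        request.Method = "DELETE";

        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine("Server responded with {0}", response.StatusCode);
        }
    }
}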

This isn't to say that every web service held up as an example of a RESTful API falls short. The Atom Publishing Protocol is an example of a RESTful API that actually honors the principles behind HTTP. However, there are a number of plain old XML (POX) web services which people have begun to conflate with RESTful services, which led to some confusion on my part when I started trying to figure things out in this space.

A good place to start when deciding to design a RESTful system is Joe Gregorio's article How To Create a REST Protocol.


 

Categories: XML Web Services

In his post Waiting for Attention… or something like it, Steve Gillmor describes our conversation at ETech and responds to some of the thoughts in my post Nightcrawler Thoughts: Thumbs Up, Thumbs Down and Attention.xml. My post ignored some of the collaborative aspects of the solution to the attention problem that Steve would like to see. Specifically

First I go to my reputational thought leaders, the subs and recurring items that bubble to the top of my attention list. It’s a second-degree-of-separation effect, where the feeds and items that a Jon Udell and a Doc Searls and a Dave Winer prioritize are gleaned for hits and duplicates, and returned as a weighted stream. In turn, each of those hits can be measured for that author’s patterns and added in to provide a descending algorithim of influence. All the while, what is not bubbling up is driven further down the stack, making more room for more valuable content.

It’s important to remember that this is an open pool of data, massaged, sliced, and diced by not just a Technorati but a PubSub or Feedster or Google or Yahoo or any other service, and my inforouter will gladly consume any return feed and weight it according to the success (or lack of it) that the service provides in improving my results. Proprietary services can play here as well, by providing their unique (and contractually personalized) streams as both a value proposition for advertisers and marketers and as an attractor for more users of their service.

The part of the attention problem I focused on in my previous post was "Based on my reading habits, tell me what new stuff I should read", but Steve Gillmor points out that the next level beyond that is "Based on the reading habits of the people whose opinion I trust, tell me what new stuff I should read". People already do this to a lesser extent by hand today. People who subscribe to Robert Scoble's link blog or to various individuals' RSS feeds on del.icio.us are basically trusting a member of their social network to filter the blogosphere for them.

Once one knows how to calculate the relative importance of various information sources to a reader, it does make sense that the next step would be to leverage this information collaboratively.

The only cloud I see on the horizon is that if anyone figures out how to do this right, it is unlikely that it will be made available as an open pool of data. The 'attention.xml' for each user would be demographic data worth its weight in gold to advertisers. If Bloglines could figure out my likes and dislikes right down to which blog posts I'd want to read, I find it hard to imagine why the Bloglines team would make that information available to anyone, including the user. For comparison, it's not like Amazon makes my 'attention.xml' for books and CDs available to me or to its competitors.

By the way, why does every interesting wide spanning web service idea eventually end up sounding like Hailstorm?


 

One of the things I have found most interesting about watching MSN Spaces over the past few months is seeing various communities begin to form and watching regular people use their space to share their thoughts and experiences with others. As with all communities there are negative elements: various trolls who go around criticizing people's posts or impersonating others in comments.

Another interesting trend I've seen in a couple of spaces is some resentment from adults that there are so many teenagers using MSN Spaces. The most significant manifestation of this is the Space titled Are you looking for adults and their spaces?, where one enterprising MSN Spaces user has begun cataloguing various spaces whose authors are 18 and over.

Among the spaces listed on that page are a couple of my favorites. A few of the hundreds of spaces I've found interesting since the beta launch are below

What I like most about these Spaces is that their content is [mostly] not what you find in the Technorati Top 100 list which is dominated by men talking about technology and politics or women talking about sex. The above spaces just have regular people sharing the interesting and the mundane in their lives which sometimes do involve technology, politics and sex.

Perhaps it's the rise of reality TV that's made me find such spaces so very interesting. Of course, if you want technical content you can always check out the spaces of John Kountz or Scott Isaacs.


 

Categories: Mindless Link Propagation | MSN

March 31, 2005
@ 03:45 PM

Bloglines just published a new press release entitled Bloglines is First to Go Beyond the Blog with Unique-to-Me Info Updates which is excerpted below

Oakland, CA -- March 30, 2005 -- Ask Jeeves®, Inc. (Nasdaq: ASKJ), today announced that Bloglines™ (www.bloglines.com), the world’s most popular free online service for searching, subscribing, publishing and sharing news feeds, blogs and rich web content has released the first of a wave of new capabilities that help consumers monitor customized kinds of dynamic web information. With these new capabilities, Bloglines is the first web service to move beyond aggregating general-audience blogs and RSS news feeds to enable individuals to receive updates that are personal to their daily lives.

Starting today, people can track the shipping progress of package deliveries from some of the world’s largest parcel shipping companies—FedEx, UPS, and the United States Postal Service—within their Bloglines MyFeeds page. Package tracking in Bloglines encompasses international shipments, in English. Bloglines readers can look forward to collecting more kinds of unique-to-me information on Bloglines in the near future, such as neighborhood weather updates and stock portfolio tracking.

“Bloglines is a Universal Inbox that captures all kinds of dynamic information that helps busy individuals be more productive throughout the day—at the office, at school, or on the go,” said Mark Fletcher, vice president and general manager of Bloglines at Ask Jeeves. “With an index of more than 370 million blog and news feed articles in seven languages, we’re already one of the largest wells of dynamic web information. With unique-to-me news updates we’re aiming to be the most comprehensive and useful personalized information resource on the web.”

So it looks like Bloglines is evolving into MyYahoo! or MyMSN which already provide a way to get customized personal information from local news and weather reports to RSS feeds and email inboxes. 

I've been pitching the concept of the 'digital information hub' to folks at work, but I think 'universal inbox' is a more attractive term. The more time a user spends in front of an information consumption tool, be it an email reader, RSS reader or online portal, the more data sources the user wants the tool to support. Online portals are now supporting RSS. Web-based RSS readers are now supporting content that would traditionally show up in a personalized view at an online portal.

At MSN, specifically with http://www.start.com/2/, we are exploring what would happen if you completely blurred the lines between a web-based RSS reader and the traditional personalized dashboard provided by an online portal. It is inevitable that both mechanisms of consuming information online will eventually be merged in some way. I suspect the result will look more like what Steve Rider's team is building than MyYahoo! or Bloglines do today.

As I mentioned before, we'd love feedback about all the stuff we are doing at start.com. Don't be shy, send us your feedback.


 

Categories: MSN | Syndication Technology

I recently read Joe Beda's post on Google 20% Time and it actually left me with a lot of questions as well as a better understanding of why there is a steady flow of ex-B0rg to Google's Kirkland office. I found some of his points interesting, specifically

Switching teams at Google is a very fluid process.  An engineer can be 40% on one project, 40% on something completely different and 20% on his or her own thing.  That mix can be adjusted as project requirements change.  Switching groups should also not have an affect on your annual review score because of arbitrary team politics.   Joining a new group is more about find a good mutual fit then going through HR and a formal interview loop.  When there is an important project that needs to be staffed the groups and execs will evangelize that need and someone who is interested is bound to step up.  If it is critical to the business and no one is stepping up (I don't know if this has occurred or not) then I imagine it would go come down to a personal appeal by the management team
...
There is a big difference between pet projects being permitted and being encouraged. At Google it is actively encouraged for engineers to do a 20% project. This isn't a matter of doing something in your spare time, but more of actively making time for it. Heck, I don't have a good 20% project yet and I need one. If I don't come up with something I'm sure it could negatively impact my review.

He also mentions the openness of the Google intranet, which I find less interesting because I expect that as the company grows it will have to start dealing with the kind of leaks that Apple and Microsoft have. Given the company's history of tightly controlling its message (the extremely boring press-release-style posts on the Google blog, the firing of Mark Jen, etc.), I expect that once leak sites in the style of Neowin and Microsoft-Watch spring up around Google's actions, there will be tighter control over access to internal content.

The first set of questions I have are around his comments on switching teams. At Microsoft it is common knowledge that switching teams in the middle of the annual review cycle is guaranteed to screw up your score. Due to stack ranking, managers are obligated to give some people low scores, and it is always easy to tell the new guy, "Everyone else has been here for a year and you were only here for a few months, so you didn't contribute as much". My first manager at Microsoft was actually pretty good about this, which I'm thankful for. This is the main reason I left the XML team without waiting for the release [or even the second beta] of version 2.0 of the .NET Framework, which I spent about two years working on. Once I had decided I was going to leave, it made the most sense to do so at the beginning of the next review cycle, independent of where we were in the ship cycle of the product. The first question swirling through my mind about Joe's post is: how does Google allow people to move fluidly between teams without them getting screwed during annual reviews?

The other thing I find interesting is his comments on 20% projects. I was recently part of a leadership training class where we were supposed to come up with a presentation as a team, and the topic we chose was launching a 20% project at Microsoft. One of the things we got hung up on was deciding what exactly constitutes a valid 20% project. If the company strongly encourages (read: mandates) me to have a 'pet' project, then what guidelines is the project expected to follow? If my pet project isn't software related, such as a woodworking project, can it be considered a valid use of my 20% time? What if my pet project benefits a competitor, such as Mark Pilgrim's Butler which adds links to competing services to Google pages?

I personally haven't needed a formal 20% program while at Microsoft. I've written lots of articles and worked on RSS Bandit while still being able to deliver on my day job. Having flex time and reasonable autonomy from management is all I've needed.

My suspicion is that 20% pet projects are a cheap way for Google to do new product development and prototyping in a lightweight manner, as opposed to being a way to encourage people to spend time working on their hobbies. This is a 'code first' approach to launching new products: developers write cool apps and then someone decides whether or not there is a market or business model for them. This is in contrast to the 'VP first' or 'talk first' approach to new product development at places like Microsoft, where new projects need to be pitched to a wide array of middle and upper management folks to get the green light, unless the person creating the project is in upper management. At the end of the day, with either approach one has to pitch the product to some management person to actually ship it, but at least with Google's 'code first' approach there already is a working app, not just a dollar and a dream.


 

Categories: Life in the B0rg Cube

MSN services such as MSN Spaces and MSN Messenger require one to create a Passport account to use them. Passport requires the use of an email address, which is used as the user's login name. However, as time passes people often have to change email addresses for one reason or another. On such occasions I've seen a couple of internal requests asking for the ability to change the Passport account an MSN Spaces or MSN Messenger account is associated with.

Doing this is actually quite straightforward in the general case. All one has to do is go to the Passport.net website and click on the 'View or edit your profile' link which should have an option for changing the email address associated with the Passport. And that's it.

Thanks to recent changes made by some of the devs on our team, the renaming process now works seamlessly in MSN Messenger. You don't have to import your existing contact list to the new account, nor do your contacts have to add the new email address to their contact lists. Instead the change to your email address propagates across the MSN network in about an hour or so.

There are some caveats. The first is that renaming a Passport to a Hotmail account isn't supported at the current time. Another is that you may be prevented from changing your display name in MSN Messenger from the new email address for some time. This means that your friends will see you in their contact list as yourname@example.com (if that is your email address).

The above information also applies if you've been asked to change your MSN Messenger email address because your employer is deploying Microsoft Live Communications Server 2005 with Public IM Connectivity (PIC).  


 

Categories: MSN

Recently I was chatting with Steve Rider on the Start.com team about the various gotchas awaiting them as they continue to improve the RSS aggregator at http://www.start.com/1/. I mentioned issues like feeds that don't use titles, such as Dave Winer's, and HTML showing up in titles.

I thought I knew all the major RSS gotchas and that RSS Bandit handled them pretty well. However, I recently got two separate bug reports from users of WordPress about RSS Bandit's inability to handle extension elements in their feeds. The first complaint was about Kevin Devin's RSS feed, which couldn't be read at all. A similar complaint was made by Jason Bock, who was helpful enough to debug the problem himself and provide an answer in his post RSS Bandit problem fixed, where he wrote

I can't believe what it took to fix my feed such that Rss Bandit could process it.

I'm just dumbfounded.

Basically, this is the way it looked:

<rss xmlns:blog="urn:blog">
   <blog:info directory="JB\Blog" />
   <channel>
      <!-- Channel stuff goes here... -->
   </channel>
</rss>

This is what I did to fix it:

<rss xmlns:blog="urn:blog">
   <channel>
      <!-- Channel stuff goes here... -->
   </channel>
   <blog:info directory="JB\Blog" />
</rss>

After debugging the Rss Bandit code base, I found out what the problem was. Rss Bandit reads the file using an XmlReader. Basically, it goes through the elements sequentially, and since the next node after <rss> wasn't <channel>, it couldn't find any information in the feed, and that's what was causing the choke. Moving <blog:info> to the end of the document solved it.

The assumption I made when developing the RSS parser in RSS Bandit was that the top-level rss element would have a channel element as its first child element. I handle extension elements if they appear as children of the channel or item elements, since those seemed like the logical places for them, but I never thought anyone would apply an extension to the rss element itself. I took a look at what the RSS 2.0 specification says about where extension elements can appear, and it seems my assumption was wrong since it states

RSS originated in 1999, and has strived to be a simple, easy to understand format, with relatively modest goals. After it became a popular format, developers wanted to extend it using modules defined in namespaces, as specified by the W3C.

RSS 2.0 adds that capability, following a simple rule. A RSS feed may contain elements not described on this page, only if those elements are defined in a namespace.

Since there is no explicit restriction on where extension elements can appear, it looks like I'll have to change the parser to expect extension elements anywhere in the feed.
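As a rough sketch of the kind of change involved (this is illustrative, not the actual RSS Bandit code), the parser needs to skip over any element it doesn't recognize until it reaches the channel element, instead of assuming channel is the first child of rss:

using System;
using System.Xml;

class FeedParserSketch
{
    // Positions the reader on the <channel> element of an RSS 2.0 feed,
    // skipping any extension elements that precede it.
    static void ReadToChannel(XmlReader reader)
    {
        reader.MoveToContent();              // land on the root <rss> element
        if (reader.LocalName != "rss")
            throw new XmlException("Not an RSS 2.0 feed");

        reader.ReadStartElement();           // step inside <rss>
        reader.MoveToContent();

        // Per RSS 2.0, anything here that isn't <channel> should be a namespaced
        // extension element, so skip its entire subtree and keep going.
        while (reader.NodeType == XmlNodeType.Element && reader.LocalName != "channel")
        {
            reader.Skip();
            reader.MoveToContent();
        }

        if (reader.LocalName != "channel")
            throw new XmlException("No <channel> element found");

        // ... read the channel contents from here ...
    }

    static void Main()
    {
        using (XmlReader reader = XmlReader.Create("feed.xml"))
        {
            ReadToChannel(reader);
            Console.WriteLine("Found <channel>; extensions before it were skipped.");
        }
    }
}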

My apologies to the folks who've had problems reading feeds because of this oversight on my part. I'll fix the issue today and refresh the installer later this week.

 


 

I've seen a number of responses to my recent post, SOA, AJAX and REST: The Software Industry Devolves into the Fashion Industry, about the rebranding as AJAX of existing techniques for building websites that used to be called DHTML or Remote Scripting. I've found the most interesting responses to be the ones that point out why this technique isn't seeing mainstream usage in website design.

Scott Isaacs has a post entitled AJAX or as I like to call it, DHTML, where he writes

As Adam Bosworth explained on Dare's blog, we built Ajax-like web applications back in 1997 during and after shipping IE4 (I drove the design of DHTML and worked closely with the W3C during this period).  At that time xmlhttp (or xml for that matter) did not exist. Instead, structured data was passed back and forth encapsulated in Javascript via hidden IFrames.  This experimental work helped prove the need for a structured, standardized approach for managing and transporting data.

Over the years, quite a few web applications have been built using similar approaches - many are Intranet applications and one of the best was Outlook Web Access. However, very few main-stream web-sites appeared. I believe this was due to a number of factors - the largest being that the typical web user-experience falls apart when you start building web-based applications.  The user-interface issues revolve mostly around history and refresh. (For example - In GMail, navigate around your inbox and either refresh the page or use the back button. In Spaces (IE Users), expand and view comments and hit the back button.  I am willing to wager that what happens is not what you really want).  The problem lies in we are building rich applications in an immature application environment. We are essentially attempting to morph the current state of the browser from a dynamic form package to a rich application platform.

I have noticed these problems in various JavaScript-based websites, and it definitely is true that complex JavaScript navigation is incompatible with the traditional navigation paradigm of the Web browser. Shelley Powers looks at the problems with JavaScript-based websites from another angle in her post The Importance of Degrading Gracefully, where she writes

Compatibility issues aside, other problems started to pop up in regards to DHTML. Screen readers for the blind disabled JavaScript, and still do as far as I know (I haven’t tried a screen reader lately). In addition, security problems, as well as pop-up ads, have forced many people to turn off JavaScript–and keep it off.

(Search engines also have problems with DHTML based linking systems.)

The end result of all these issues–compatibility, accessibility, and security–is a fundamental rule of web page design: whatever technology you use to build a web page has to degrade, gracefully. What does degrade, gracefully, mean? It means that a technology such as Javascript or DHTML cannot be used for critical functionality; not unless there’s an easy to access alternative.
...
Whatever holds for DHTML also holds for Ajax. Some of the applications that have been identified as Ajax-enabled are flickr and Google’s suite of project apps. To test how well both degrade, I turned off my JavaScript to see how they do.
...
Google’s gmail, on the other hand, did degrade, but did not do so gracefully. If you turn off JavaScript and access the gmail page, you’ll get a plain, rather ugly page that makes a statement that the primary gmail page requires JavaScript, either turn this on, get a JS enabled browser, or go to the plain HTML version.

Even when you’re in the plain HTML version, a prominent line at the top keeps stating how much better gmail is with a Javascript enabled browser. In short, Google’s gmail degrades, by providing a plain HTML alternative, but it didn’t do so gracefully; not unless you call rubbing the customer’s nose in their lack of JS capability “graceful”.

You don’t even get this message with Google Suggest; it just doesn’t work (but you can still use it like a regular search page). As for Google Maps? Not a chance–it is a pure DHTML page, completely dependent on JavaScript. However, Mapquest still works, and works just as well with JS as without.

(Bloglines also doesn’t degrade gracefully — the subscription is based on a JavaScript enabled tree. Wordpress, and hence Wordform, does degrade gracefully.)

If we’re going to get excited about new uses of existing technology, such as those that make up the concept of Ajax, then we should keep in mind the rule of degrading gracefully: Flickr is an example of a company that understands the principle of degrading gracefully; Google is an example of a company, that doesn’t.

Update: As Doug mentions in comments, flickr is dependent on Flash. If Flash is not installed, it does not degrade gracefully.

I do remember being surprised that I had to add "http://*.google.com" to my trusted sites list to get Google Maps to work. Of course, none of the above observations is new, but given that we are seeing a renaissance of interest in using JavaScript to build websites, it seems like a good time for the arguments discussing the cons of this approach to make a similar return.


 

Categories: Technology

While at ETech I got to spend about half an hour chatting with Steve Gillmor about what he's called "the attention problem", which isn't the same thing as the attention.xml specification. The attention problem is the problem that faces every power user of XML syndication clients such as RSS Bandit or Bloglines. It is so easy to subscribe to various feeds that eventually readers get overwhelmed by the flood of information hitting their aggregator's inbox. Some have used the analogy "drinking from a firehose" to describe this phenomenon.

This problem affects me as well, which is the impetus for a number of features in the most recent release of RSS Bandit, such as newspaper views which allow one to view all the unread posts in a feed in a single pane, more sortable columns such as author and comment count in the list view, and skim mode ('mark all items as read on exiting a feed or category'). However, the core assumption behind all these features is that the user is reading every entry.

Ideally a user should be able to tell a client, "Here are the sites I'm interested in, here are the topics I'm interested in, and now only show me stuff I'd find interesting or important". This is the next frontier of features for RSS/ATOM aggregators and an area I plan to invest a significant amount of time in for the next version of RSS Bandit.

In my post Some Opinions on the Attention.xml Specification I faulted the attention.xml specification because it doesn't seem to solve the problems it sets out to tackle, and some of the data in the format is unrealistic for applications to collect. After talking to Steve Gillmor I realized another reason I didn't like the attention.xml spec: it ignores all the hard problems and assumes they've been solved. Figuring out what data or what algorithms are useful for determining which items are relevant to a user is hard. Using that data to suggest new items to the user is hard. Coming up with an XML format for describing an arbitrary set of data that could be collected by an RSS aggregator is easy.

There are a number of different approaches I plan to explore over the next few months in various alphas of the Nightcrawler release of RSS Bandit. My ideas have run the gamut from using Bayesian filtering to using the Technorati link cosmos feature for weighting posts [in which case I'd need batch methods, which is something I briefly discussed with Kevin Marks at ETech last week]. There is also weighting by author to consider; for example, I read everything written by Sam Ruby and Don Box. On the other hand, a mundane topic (e.g. what I had for lunch) is something I'd never read if published by a stranger but would be of interest to me if posted by a close friend or family member.
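To make that concrete, here is a toy sketch of the kind of scoring I have in mind, combining a hand-tuned author weight with a crude keyword score. None of this is real RSS Bandit code and the weights are made up; a Bayesian filter would learn such numbers from reading history rather than hard-coding them:

using System;
using System.Collections.Generic;

class RelevanceSketch
{
    // Hand-tuned weights for authors whose posts I always want to see.
    static readonly Dictionary<string, double> AuthorWeight =
        new Dictionary<string, double>(StringComparer.OrdinalIgnoreCase)
        {
            { "Sam Ruby", 1.0 },
            { "Don Box", 1.0 }
        };

    // Keyword weights that would be learned from items the user actually read.
    static readonly Dictionary<string, double> KeywordWeight =
        new Dictionary<string, double>(StringComparer.OrdinalIgnoreCase)
        {
            { "xml", 0.8 },
            { "rss", 0.7 },
            { "lunch", 0.1 }
        };

    // Scores an item between 0 and 1; items from trusted authors always score high,
    // everything else falls back to a crude average over known keywords.
    static double Score(string author, string title)
    {
        double authorScore;
        if (AuthorWeight.TryGetValue(author, out authorScore))
            return authorScore;

        double total = 0.0;
        int hits = 0;
        foreach (string word in title.Split(' '))
        {
            double weight;
            if (KeywordWeight.TryGetValue(word, out weight))
            {
                total += weight;
                hits++;
            }
        }
        return hits == 0 ? 0.5 : total / hits;   // unknown content gets a neutral score
    }

    static void Main()
    {
        Console.WriteLine(Score("Sam Ruby", "What I had for lunch"));    // 1.0: trusted author
        Console.WriteLine(Score("A Stranger", "What I had for lunch"));  // 0.1: mundane topic
        Console.WriteLine(Score("A Stranger", "New xml and rss tools")); // 0.75: relevant keywords
    }
}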

We will definitely need a richer extensibility model so I can try out different approaches [and perhaps others can as well] before the final release. Looks like I have yet another spring and summer spent indoors hacking on RSS Bandit to look forward to. :)


 

I woke up this morning to find an interesting bit of Web history posted by Adam Bosworth in a comment in response to my post SOA, AJAX and REST: The Software Industry Devolves into the Fashion Industry. He wrote

I actually agree with the post and I could care less what it is called (Ajax or DHTML or ..) but I thought I'd correct a couple of historical points. We (I led the IE MSFT 4.0 team which shipped in the fall of 97) called it DHTML because we introduced the read and writable DOM so that Ecmascript could dynamically modify the page to react to fine grained user actions and asynchronous events. That really was new and inventing this and the term occured simultaneously. Scott Isaac drove this work and worked tirelessly to get it into the W3C. We (MSFT) had a sneak preview for many developers in the fall of 96 actually showing things like pages expanding and collapsing in the gmail style and Tetris running using Javascript and DHTML in the page. Before then, javascript could hide/unhide items and react to some coarse events, but that was about it. We added the XMLHTTPRequest object (Chris Lovett actually did this) in IE 5.0 when we wrote the auction demo to show how XML could be used with IE to build a new interactive client. If there was a pioneer for Ajax, I'd argue that the auction demo was it.

I am surprised by how familiar I am with some of the people he mentions. Chris Lovett is currently an architect on the XML team at Microsoft and was the person who gave me the Extreme XML column on MSDN when I was still fresh out of college, in my first few months at Microsoft. Scott Isaacs is an architect on the MSN Spaces team whom I've been in a few design meetings with so far. Cool.

I also see that Adam is back posting to his blog with his post Tensions on the Web. He mentions our conversation at ETech, specifically

I haven't posted for quite a while because my last posts caused unfair attacks on Google by twisting the words I'd used in my posts and attributing my posts to Google. I want to be really clear about something. The opinions I express in this Blog are my own. They have nothing to do with Google's opinions. Google only asks that I not leak information about future products. Period. But despite that, recent blog posts of mine were used to attack Google and this upset me deeply. Much to my surprise, Dare Obasanjo came up to me and told me, after some fairly vitriolic complaining from me to him about this earlier state of affairs, that he wished I'd continue to post. I thought about this over the weekend and decided that to some degree, you have to take your chances in this environment rather than just hide when you don't like the behavior and that perhaps I was being over sensitive anyway. There are too many interesting things going on right now anyway.

Adam's blog postings have been somewhat inspirational to me and part of the reason I decided to move to MSN (in fact, I'd considered leaving Microsoft). They also led to the most popular entry in my blog, Social Software is the Platform of the Future. It's good to see that he's back to sharing his ideas with us all.

Welcome back to the blogosphere, Adam.


 

Categories: Ramblings