If you work in the technology industry it pays to be familiar with the ideas from Geoffrey Moore's insightful book Crossing the Chasm. In the book he takes a look at the classic marketing bell curve that segments customers into Early Adopters, Pragmatists, Conservatives and Laggards, then points out that there is a large chasm to cross when it comes to becoming popular beyond an initial set of early adopters. There is a good review of his ideas in Eric Sink's blog post entitled Act Your Age, which is excerpted below:

The people in your market segment are divided into four groups:

Early Adopters are risk takers who actually like to try new things.

Pragmatists might be willing to use new technology, if it's the only way to get their problem solved.

Conservatives dislike new technology and try to avoid it.

Laggards pride themselves on the fact that they are the last to try anything new.

This drawing reflects the fact that there is no smooth or logical transition between the Early Adopters and the Pragmatists.  In between the Early Adopters and the Pragmatists there is a chasm.  To successfully sell your product to the Pragmatists, you must "cross the chasm". 

The knowledge that the needs of early adopters and those of the majority of your potential user base differ significantly is extremely important when building and marketing any technology product. A lot of companies have ended up either building the wrong product or focusing their product too narrowly because they listened too intently to their initial customer base without realizing that they were talking to early adopters.

The fact is that early adopters have different problems and needs from regular users. This is especially true when you compare the demographics of the Silicon Valley early adopter crowd that "Web 2.0" startups often try to court with those of the typical users of social software on the Web. In the few years I've been working on building Web applications, I've seen a number of technology trends and products heralded as the next big thing by technology pundits that never broke into the mainstream because they didn't solve the problems of regular Internet users. Here are some examples:

  • Blog Search: A few years ago, blog search engines were all the rage. You had people like Mark Cuban talking up IceRocket and Robert Scoble haranguing Web search companies to build dedicated blog search engines. Since then the products in that space have either given up the ghost (e.g. PubSub, Feedster), turned out to be irrelevant (e.g. Technorati, IceRocket) or been sidelined (e.g. Google Blog Search, Yahoo! Blog Search). The problem with this product category is that except for journalists, marketers and ego surfing A-list bloggers there aren't many people who need a specialized feature set around searching blogs.

  • Social bookmarking: Although del.icio.us popularized a number of "Web 2.0" trends such as tagging, REST APIs and adding social features to a previously individual task, it has never really taken off as a mainstream product. According to the former VC behind the service, it seems to have peaked at 2 million unique visitors last year and is now seeing about half that number of unique users. Compare that to Yahoo! Bookmarks, which was seeing 20 million active users a year and a half ago.

  • RSS Readers: I've lost track of all of the "this is the year RSS goes mainstream" articles I've read over the past few years. Although RSS has turned out to be a key technology that powers a number of interesting features behind the scenes (e.g. podcasting), actually subscribing to and reading news feeds in an RSS reader has not become a mainstream activity of Web users. When you think about it, it is kind of obvious. The problem an RSS reader solves is "I read so many blogs and news sites on a daily basis, I need a tool to help me keep them all straight". How many people who aren't enthusiastic early adopters (i) have this problem and (ii) think they need a tool to deal with it?

These are just the first three that came to mind. I'm sure readers can come up with more examples of their own. This isn't to say that every hyped "Web 2.0" site has failed to live up to its promise. Flickr is an example of an early-adopter-hyped site, sprinkled with "Web 2.0" goodness, that has become a major part of the daily lives of tens of millions of people across the Web.

When you look at the list of the top 50 sites in the U.S. by unique visitors it is interesting to note what common theme unites the recent "Web 2.0" entrants into that list. There are the social networking sites like MySpace and Facebook which harness the natural need of young people to express their individuality yet be part of social cliques. Then there are the sites which provide lots of flexible options that enable people to share their media with their friends, family or the general public, such as Flickr and YouTube. Both sites have also figured out how to harness the work of the few to entertain and benefit the many, as have Wikipedia and Digg. Then there are sites like Fling and AdultFriendFinder which, for obvious reasons, seem to now get more traffic than the personals sites you see advertised on TV.

However, the one overriding theme among all of these recent entrants is that they solve problems that everyone [or at least a large section of the populace] has. Everyone likes to communicate with their social circle. Everyone likes watching funny videos and looking at couple pics. Everyone wants to find information about topics they're interested in or find out what's going on around them. Everybody wants to get laid.

If you are a Web 2.0 company in today's Web you really need to ask yourselves, "Are we solving a problem that everybody has or are we building a product for Robert Scoble?"

Now Playing: Three 6 Mafia - I'd Rather (feat. DJ Unk)


 

Categories: Social Software | Technology

Recently we got a new contributor on RSS Bandit who uses the ReSharper refactoring tool. This has led to a number of changes to our code base due to suggestions from ReSharper. For the most part, I've considered these changes to be benign until recently. A few weeks ago, I grabbed a recent snapshot of the code from SVN and was surprised to see all the type declarations in method bodies replaced by implicitly typed variable declarations (aka the var keyword). Below is an excerpt of what some of the "refactored" code now looks like.

var selectedIsChild = NodeIsChildOf(TreeSelectedFeedsNode, selectedNode);
var isSmartOrAggregated = (selectedNode.Type == FeedNodeType.Finder ||
                                      selectedNode.Type == FeedNodeType.SmartFolder);
//mark all viewed stories as read 
if (listFeedItems.Items.Count > 0){
   listFeedItems.BeginUpdate();

   for (var i = 0; i < listFeedItems.Items.Count; i++){
       var lvi = listFeedItems.Items[i];
       var newsItem = (INewsItem) lvi.Key;

I found this change to be somewhat puzzling; while it may have shortened the code by a couple of characters on each line, it did so at the cost of making the code less readable. For example, I can't tell what the type of the variable named lvi is just by looking at the code.

I did some searching online to see how the ReSharper team justified this refactoring "suggestion" and came across the blog post entitled ReSharper vs C# 3.0 - Implicitly Typed Locals by a member of the ReSharper team, which contains the following excerpt:

Some cases where it seems just fine to suggest var are:

  1. New object creation expression: var dictionary = new Dictionary<int, string>();
  2. Cast expression: var element = (IElement)obj;
  3. Safe Cast expression: var element = obj as IElement;
  4. Generic method call with explicit type arguments, when return type is generic: var manager = serviceProvider.GetService<IManager>()
  5. Generic static method call or property with explicit type arguments, when return type is generic: var manager = Singleton<Manager>.Instance;

However, various code styles may need to suggest in other cases. For example, programming in functional style with small methods can benefit from suggesting every suitable local variable to be converted to var style. Or may be your project has IElement root interface and you just know that every variable with "element" name is IElement and you don't want explicit types for this case. Probably, any method with the name GetTreeNode() always return ITreeNode and you want vars for all such local variable.

Currently, we have two suggestions: one that suggests every suitable explicitly typed local to be converted to implicitly typed var, and another that suggests according to rules above.

So it seems there are two suggestion modes. The first suggests using the var keyword when the name of the type is obvious from the right hand side expression being evaluated such as casts or new object creation. The second mode suggests replacing type declarations with the var keyword anywhere the compiler can infer the type [which is pretty much everywhere except for a few edge cases such as when you want to initialize a variable with null at declaration]. 
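To make the difference between the two modes concrete, here is a small illustrative snippet; the type and method names are invented for this example, not taken from the RSS Bandit code base:

using System.Collections.Generic;

class VarModesExample
{
    static void Demo()
    {
        // Mode 1: the type is obvious from the right hand side,
        // so using var loses no information.
        var headlines = new List<string>();
        var node = (FeedNode)GetSelectedNode();

        // Mode 2: the compiler can infer the type, but a human reader
        // cannot tell what GetSelectedNode() returns without an IDE.
        var selected = GetSelectedNode();
    }

    static object GetSelectedNode() { return null; }
}

class FeedNode { }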

The first suggestion mode makes sense to me since the code doesn't lose any clarity and it makes for shorter code. The second mode is the one I find problematic since it takes information out of the equation to save a couple of characters per line of code. Each time someone reads the code, they need to resort to the Go To Definition or Auto-Complete features of their IDE to answer a question as simple as "what is the type of this object?".

Unfortunately, the ReSharper developers seem to have dug in their heels religiously on this topic, as can be seen in the post entitled Varification -- Using Implicitly Typed Locals where a number of arguments are made justifying always using implicitly typed variables, including:

  • It induces better naming for local variables.
  • It induces better API.
  • It induces variable initialization.
  • It removes code noise.
  • It doesn't require using directive.

It's interesting how not only are almost all of these "benefits" mainly stylistic, but some of them contradict each other. For example, the claim that it leads to "better naming for local variables" really means it compels developers to use LONGER HUNGARIAN STYLE VARIABLE NAMES. Funny enough, these long variable names add more noise to the code overall since they show up everywhere the variable is used, compared to a single type name showing up when the variable is declared. The argument that it leads to "better API" is another variation on this theme since it argues that if you are compelled to use LONGER MORE DESCRIPTIVE PROPERTY NAMES (e.g. XmlNode.XmlNodeName instead of XmlNode.Name) then this is an improvement. Someone should inform the ReSharper folks that encoding type information in variable names sucks; that's why we're using a strongly typed programming language like C# in the first place.

One more thing: the claim that it encourages variable initialization is weird given that the C# compiler already enforces that. More importantly, the common scenario of initializing a variable to null at declaration before it is used isn't supported by the var keyword.
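A quick example of that limitation; the commented-out declaration below is rejected by the C# compiler, so the explicit type is the only way to express the intent:

class NullInitExample
{
    static void Main()
    {
        // Does not compile: the compiler cannot infer a type from null.
        // error CS0815: Cannot assign <null> to an implicitly-typed local variable
        // var feed = null;

        // An explicit type declaration is required instead.
        string feed = null;
        System.Console.WriteLine(feed == null);
    }
}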

In conclusion, it seems to me that someone on the ReSharper team went overboard in wanting to support the new features of C# 3.0 in their product, even though in some cases using these features causes more problems than it solves. Amusingly enough, the C# 3.0 language designers foresaw this problem and put the following note in the C# language reference:

Remarks: Overuse of var can make source code less readable for others. It is recommended to use var only when it is necessary, that is, when the variable will be used to store an anonymous type or a collection of anonymous types.
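For completeness, here is the scenario the language designers had in mind, where var is genuinely necessary because an anonymous type has no name the developer could write out (the feed names are just sample data):

using System;
using System.Linq;

class AnonymousTypeExample
{
    static void Main()
    {
        string[] feeds = { "Technorati", "IceRocket", "PubSub" };

        // The select clause produces an anonymous type; var is the only
        // way to declare a local variable holding such values.
        var results = from f in feeds
                      select new { Name = f, Length = f.Length };

        foreach (var r in results)
            Console.WriteLine("{0}: {1}", r.Name, r.Length);
    }
}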

Case closed.

Now Playing: Mariah Carey - Side Effects (featuring Young Jeezy)


 

Categories: Programming

I've spent all of my professional career working at a large multinational company. In this time I've been involved in lots of different cross-team and cross-divisional collaboration efforts. Sometimes these groups were in the same organization and other times you would have to go five to ten levels up the org chart before you found a shared manager. Surprisingly, the presence or lack of shared management has never been the key factor that has helped or hindered such collaborative efforts.

Of all the problems I've seen when I've had to depend on other teams for help in getting a task accomplished or vice versa, there have been two insidious ones that tend to crop up in situations where things go awry. The first is misaligned goals. Just because two groups are working together doesn't mean they have the same motivations or expected end results. Things quickly go awry when one group's primary goals run counter to the goals of the group they are supposed to be collaborating with. For example, consider a company that requires its technical support staff to keep average call times very low to meet their metrics. Imagine that same company also puts together a task force to improve customer satisfaction with the technical support experience after lots of complaints from their customers. What are the chances that the task force will be able to effect positive change if the metrics used to reward the tech support staff remain the same? The funny thing is that large companies often end up creating groups that are working at cross purposes yet are supposed to be working together.

What makes misaligned goals so insidious is that the members of the collaborating groups who are working through the project often don't realize that the problem is that their goals are misaligned. A lot of the time people tend to think the problem is that the other group is evil, a bunch of jerks or just plain selfish. The truth is often that the so-called jerks are really just thinking "You're not my manager, so I'm not going to ask how high when you tell me to jump." Once you find out you've hit this problem then the path to solving it is clear. You either have to (i) make sure all collaborating parties want to reach the same outcome and place similar priority on it or (ii) jettison the collaboration effort.

Another problem that has scuttled many a collaboration effort is when one or more of the parties involved has undisclosed concerns about the risks of collaborating, which prevents them from entering into the collaboration wholeheartedly or, even worse, has them actively working against it. Software development teams experience this when they have to manage dependencies that other projects have on them or that they have on other projects. There's a good paper on the topic entitled Managing Cognitive and Affective Trust in the Conceptual R&D Organization by Diane H. Sonnenwald which breaks down the problem of distrust in conceptual organizations (aka virtual teams) in the following way:

Two Types of Trust and Distrust: Cognitive and Affective
Two types of trust, cognitive and affective, have been identified as important in organizations (McAllister, 1995; Rocco, et al, 2001). Cognitive trust focuses on judgments of competence and reliability. Can a co-worker complete a task? Will the results be of sufficient quality? Will the task be completed on time? These are issues that comprise cognitive trust and distrust. The more strongly one believes the answers to these types of questions are affirmative, the stronger one’s cognitive trust. The more strongly one believes the answers to these types of questions are negative, the stronger one’s cognitive distrust.

Affective trust focuses on interpersonal bonds among individuals and institutions, including perceptions of colleagues’ motivation, intentions, ethics and citizenship. Affective trust typically emerges from repeated interactions among individuals, and experiences of reciprocated interpersonal care and concern (Rosseau, et al, 1998). It is also referred to as emotional trust (Rocco, et al, 2001) and relational trust (Rosseau, et al, 1998). It can be “the grease that turns the wheel” (Sonnenwald, 1996).

The issue of affective distrust is strongly related to the problem of misaligned goals I've already discussed. Cognitive distrust typically results in one or more parties in the collaboration acting on the assumption that the collaboration is going to fail. Since these distrusting groups assume failure will be the end result of the collaboration, they will take steps to insulate themselves from that failure. However, what makes this problem insidious is that the "untrusted" groups are often not formally confronted about the lack of trust in their efforts, and thus risk mitigation is not formally built into the collaboration effort. Eventually this leads to behavior that is counterproductive to the collaboration as teams try to mitigate risks in isolation, until eventually there is distrust between all parties in the collaboration. Project failure often soon follows.

The best way to prevent this from happening once you find yourself in this situation is to put everyone's concerns on the table. Once the concerns are on the table, be they concerns about product quality, timelines or any of the other myriad issues that impact collaboration, mitigations can be put in place. As the saying goes, sunlight is the best disinfectant; I've also seen that when the "distrusted" team becomes fully transparent in its workings and information disclosure, matters quickly become clear. One of two things will happen: it will either (i) reassure their dependents that their fears are unfounded or (ii) confirm those concerns in a timely fashion. Either outcome is preferable to the status quo.

Now Playing: Mariah Carey - Cruise Control (featuring Damian Marley)


 

Categories: Life in the B0rg Cube

Lots of "Web 2.0" pundits like to argue that it is just a matter of time before Web applications make desktop applications obsolete and irrelevant. To many of these pundits the final frontier is the ability to take Web applications offline.  Once this happens you get the best of both worlds, the zero install hassle, collaborative nature of Web-based applications married to the ability to take your "apps on a plane".  Much attention has been given to this problem which has led to the rise of a number of frameworks designed bring offline capabilities to Web applications the most popular of which is Google Gears. I think the anti-Microsoft sentiment that courses through the "Web 2.0" crowd has created an unwanted solution to a problem that most users don't really have.

Unlike David Heinemeier Hansson in his rant You're not on a fucking plane (and if you are, it doesn't matter)!, I actually think the "offline problem" is a valid problem that we have to solve. However I think that trying to tackle it from the perspective of taking an AJAX application offline is backwards. There are a few reasons I believe this:

  1. The user experience of a "rich" Web application pales in comparison to that of a desktop application. If given a choice between a desktop app and a Web application with the same features, I'd use the desktop application in a heartbeat.
  2. The amount of work it takes to "offline enable" a Web application is roughly similar to the amount of work it takes to "online enable" a desktop application (see the sketch after this list). The amount of work it took me to make RSS Bandit a desktop client for Google Reader is roughly equivalent to what it most likely took to add offline reading to Google Reader.
  3. Once you decide to "go offline", your Web application is no longer "zero install" so it isn't much different from a desktop application.
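Here is a minimal sketch of what "online enabling" a desktop application involves; the endpoint URL and the merge logic are placeholders, not the actual RSS Bandit / Google Reader integration:

using System;
using System.Net;
using System.Threading;

class FeedSynchronizer
{
    // Hypothetical sync endpoint, not the real Google Reader API.
    const string SyncUrl = "http://example.com/reader/api/subscriptions";
    Timer timer;

    public void Start()
    {
        // Periodically reconcile local state with the web service.
        timer = new Timer(state => SyncOnce(), null,
                          TimeSpan.Zero, TimeSpan.FromMinutes(15));
    }

    void SyncOnce()
    {
        try
        {
            using (var client = new WebClient())
            {
                string remoteState = client.DownloadString(SyncUrl);
                // Merge remote read/unread state into the local store,
                // then push local changes back up (omitted).
            }
        }
        catch (WebException)
        {
            // Offline: keep working against the local store, retry later.
        }
    }
}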

I suspect this is the bitter truth that answers the questions asked in articles like  The Frustratingly Unfulfilled Promise of Google Gears where the author laments the lack of proliferation of offline Web applications built on Google Gears.

When it first shipped I was looking forward to a platform like Google Gears, but after I thought about the problem for a while, I realized that such a platform would be just as useful for "online enabling" desktop applications as it would be for "offline enabling" Web applications. Additionally, I came to the conclusion that the former is a lot more enabling to users than the latter. This is when I started becoming interested in Live Mesh as a Platform; this is one area where I think Microsoft's heart and mind are in the right place. I want to see more applications like Outlook + RPC over HTTP, not "offline enabled" versions of Outlook Web Access.

Now Playing: Jordin Sparks - No Air (feat. Chris Brown)


 

Categories: Web Development

Disclaimer: This post does not reflect the opinions, thoughts, strategies or future intentions of my employer. These are solely my personal opinions. If you are seeking official position statements from Microsoft, please go here.

Recently there were three vaporware announcements by Facebook, Google and MySpace, each describing a way for other web sites to integrate the user profiles and friends lists from these popular social networking sites. Given that I'm a big fan of social networking sites and interoperability between them, this seemed like an interesting set of announcements. So I decided to take a look at them, especially given their timing.

What Do They Have in Common?

Marc Canter does a good job of describing the underlying theme behind all three announcements in his post I do not compromise, where he writes:

three announcements that happened within a week of each other: MySpace’s Data Availability, Facebook’s Connect and Google’s Friend Connect - ALL THREE had fundamentally the same strategy!

They’re all keeping their member’s data on their servers, while sending out tentacles to mesh in with as many outside sites as they can. These tentacles may be widgets, apps or iFrames - but its all the same strategy.

Basically all three announcements argue that instead of trying to build social networking into their services from scratch, Web sites should outsource their social graphs and "social features" such as user profiles, friends lists and media sharing to the large social networking sites like Facebook, MySpace and Orkut.

This isn't a new pitch; Facebook has been singing the same song since they announced the beta of the Facebook Platform in August 2006, and Google has been sending Kevin Marks to every conference they can find to give his Social Cloud presentation which makes the same pitch. The new wrinkle to this time-worn tale is that Google and Facebook [along with MySpace] are no longer just pitching REST APIs for integration but are now preaching "no coding required" integration via widgets.

Now that we know the meat of all three announcements, we can go over the specifics that have leaked out about each forthcoming product thus far.

Facebook Connect

Dave Morin gave the first official statement about Facebook Connect in his blog post Announcing Facebook Connect, where he wrote:

Trusted Authentication
Users will be able to connect their Facebook account with any partner website using a trusted authentication method. Whether at login, or anywhere else a developer would like to add social context, the user will be able to authenticate and connect their account in a trusted environment. The user will have total control of the permissions granted.

Real Identity
Facebook users represent themselves with their real names and real identities. With Facebook Connect, users can bring their real identity information with them wherever they go on the Web, including: basic profile information, profile picture, name, friends, photos, events, groups, and more.

Friends Access
Users count on Facebook to stay connected to their friends and family. With Facebook Connect, users can take their friends with them wherever they go on the Web. Developers will be able to add rich social context to their websites. Developers will even be able to dynamically show which of their Facebook friends already have accounts on their sites.

Dynamic Privacy
As a user moves around the open Web, their privacy settings will follow, ensuring that users' information and privacy rules are always up-to-date. For example, if a user changes their profile picture, or removes a friend connection, this will be automatically updated in the external website.

The key features to note are (i) a user can associate their Facebook account with their account on a 3rd party site, which means (ii) the user's profile and media shared on Facebook can now be exposed on the 3rd party site and (iii) the user's friends on Facebook who have also associated their Facebook accounts with their accounts on the 3rd party site will show up as the user's friends on the site.

The "dynamic privacy" claim seems pretty vague if not downright empty. All that is stated above is that the user's changes on Facebook are instantly reflected on 3rd party sites. Duh. Does that need to be called out as a feature?

Google Friend Connect

On the Google Friend Connect page there is a video describing the service.

The key features mirror Facebook Connect's, with an OpenID substituted for a Facebook account: (i) a user can associate an OpenID with their account on a 3rd party site, which means (ii) the user's profile and media shared on a small set of social networking sites can now be exposed on the 3rd party site, (iii) the user's friends on that small set of social networking sites who have also used Google Friend Connect to connect their accounts on the 3rd party site will show up as the user's friends on the site and (iv) the user's activities on the 3rd party site are broadcast in her friends' news feeds.

One interesting thing about Google Friend Connect's use of OpenID is that it allows me to associate multiple social network profiles with a single account, which may not even be from a social networking site (e.g. using my AOL or Y! email to sign in but associating it with my Facebook profile & friend list).

Google Friend Connect seems to be powered by Google OpenSocial, which is Google's attempt to commoditize the functionality of the Facebook platform by making it easy for any social networking site to roll its own Facebook-style platform using Google's standard set of REST APIs, JavaScript libraries and/or hosting services. In the above video, it is mentioned that Web sites which adopt Google Friend Connect will not only be able to obtain user profile and friend list widgets from Google but also OpenSocial widgets written by 3rd party developers. However, since Facebook announced the JavaScript Client Library for Facebook API way back in January, they already have the technology in place to offer something similar to Web site owners if this capability comes into demand. More important will be the set of functionality that comes "out of the box", so to speak, since a developer community won't form until Google Friend Connect gains traction.

By the way, it turns out that Facebook has banned Google from interacting with their user data using Google Friend Connect since it violates their terms of service. My assumption is that the problem is Google Friend Connect works by building an OpenSocial wrapper on top of the Facebook API and then exposing it to other web sites as widgets and to OpenSocial gadget developers via APIs. Thus Google is pretty much proxying the Facebook social graph to other sites and developers which takes control of safeguarding/policing access to this user data out of Facebook's hands. Not good for Facebook. 

MySpace Data Availability

The only details on the Web about MySpace's Data Availability seem to be secondhand accounts from tech bloggers who were either strategically leaked some details/screenshots or took part in a press conference call. The best source I found was Mike Arrington's TechCrunch post entitled MySpace Embraces DataPortability, Partners With Yahoo, Ebay And Twitter which contains the following excerpt:

[Mockup image: MySpace Data Availability integration with Twitter]

MySpace is announcing a broad ranging embrace of data portability standards today, along with data sharing partnerships with Yahoo, Ebay, Twitter and their own Photobucket subsidiary. The new project is being called MySpace “Data Availability” and is an example, MySpace says, of their dedication to playing nice with the rest of the Internet.

A mockup of how the data sharing will look in action with Twitter is shown above. MySpace is essentially making key user data, including (1) Publicly available basic profile information, (2) MySpace photos, (3) MySpaceTV videos, and (4) friend networks, available to partners via their (previously internal) RESTful API, along with user authentication via OAuth.

The key goal is to allow users to maintain key personal data at sites like MySpace and not have it be locked up in an island. Previously users could turn much of this data into widgets and add them to third party sites. But that doesn’t bridge the gap between independent, autonomous websites, MySpace says. Every site remains an island.

But with Data Availability, partners will be able to access MySpace user data, combine it with their own, and present it on their sites outside of the normal widget framework. Friends lists can be synchronized, for example. Or Twitter may use the data to recommend other Twitter users who are your MySpace friends.

The key difference between MySpace's announcement and those of Facebook & Google is that MySpace has more ground to cover. Since Facebook & Google already have REST APIs that support a delegated authentication model, MySpace is pretty much playing catch up here.

In fact, on careful rereading it seems MySpace's announcement isn't like the others since the only concrete technology announced above is a REST API that uses a user-centric delegated authentication model which is something both Google and Facebook have had for years (see GData/OpenSocial and Facebook REST API).

Given my assumption that MySpace is not announcing anything new to the industry, the rest of this post will focus on Google Friend Connect and Facebook Connect.  

The Chicken & Egg Problem

When it comes to social networking, it is all about network effects. A social networking feature or site is only interesting to me if my friends are using it as well.

The argument that a site is better off using a user's social graph from a big social networking site like Facebook instead of building its own social network features only makes sense if (i) there is enough overlap between the user's friends list on Facebook and that on the site AND (ii) the user's friends on the site who are also his friends on Facebook can be discovered by the user. The latter is the tough part and one I haven't seen a good way of bridging without resorting to anti-patterns (i.e. pulling the email addresses of all of the user's friends from Facebook and then cross-referencing them with the email addresses of the site's users). This anti-pattern works when you are getting email addresses the user entered by hand into some Webmail address book (e.g. Hotmail, Gmail, Y! mail, etc).

However, since Google and Facebook are going with a no-code solution, the only way to tell which of my Facebook friends also use the 3rd party site is if they too have opted in to linking their accounts on the site with their Facebook profiles. This significantly weakens the network effects of the feature compared to the "find your friends on your favorite Web 2.0 site" feature which a lot of sites have used to grow their user bases by screen scraping Webmail address books and then cross-referencing them with their user databases.
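For illustration, here is roughly what that cross-referencing looks like in code; the data and names are hypothetical, and as noted above this is an anti-pattern when the addresses are obtained by screen scraping:

using System;
using System.Collections.Generic;
using System.Linq;

class FriendFinder
{
    // A stand-in for the site's user database, keyed by email address.
    static readonly Dictionary<string, string> siteUsersByEmail =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
        {
            { "alice@example.com", "alice" },
            { "bob@example.com", "bob" }
        };

    // Given email addresses imported from a Webmail address book,
    // return the usernames of existing site members among them.
    public static IEnumerable<string> FindFriendsOnSite(IEnumerable<string> addressBookEmails)
    {
        return addressBookEmails
            .Where(email => siteUsersByEmail.ContainsKey(email))
            .Select(email => siteUsersByEmail[email]);
    }
}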

How Does this Relate to Data Portability and Social Network Interoperability?

Short answer; it doesn't.

Long answer; the first thing to do is to make sure you understand what is meant by Data Portability and Social Network Interoperability. The difference between Data Portability and Social Network Interoperability is the difference between being able to export your email inbox and address book from Gmail into Outlook or vice versa (portable) and being able to send an email from a Gmail address to someone using Outlook or Hotmail (interoperable).

So do these new widget initiatives help portability? Nope. Widgets give developers fewer options for obtaining and interacting with user data than APIs do. With Facebook's REST API, I know how to get my friends list with profile data into Outlook and onto my Windows Mobile phone via OutSync. I would actually lose that functionality if it were only exposed via a widget. The one thing widgets do is lower the bar for integration by people who don't know how to code.

Well, how about interoperability? The idea of social network interoperability is that instead of being a bunch of walled gardens and data silos, social networking sites can talk to each other the same way email services and [some] IM services can talk to each other today. The "Use our data silo instead of building your own" pitch may reduce the number of data silos but it doesn't change the fact that the Facebooks and MySpaces of the world are still fundamentally data silos when it comes to the social graph. That is what we have to change. Instead we keep getting distracted along the way by shiny widgets.

PS: The blog hiatus is over. It was fun while it lasted. ;)

Now Playing: Fugees (Refugee Camp) - Killing Me Softly


 

March 5, 2008
@ 04:00 AM

I’ve been writing a personal weblog for almost seven years. It’s weird to go back and read some of the posts in my old kuro5hin diary such as my early postings about interning at Microsoft and see how much my perspectives have changed in some ways and stayed the same in others. Anyway…

Although I’ve found this weblog to be personally fulfilling, the time has come for me to put it aside for the time being. This will be the last post on http://www.25hoursaday.com/weblog.

In addition, I’ll be cleaning up my Twitter and Facebook profiles by removing anyone who I haven’t personally met from my list of followers and friends respectively.

I will continue to work on and blog about RSS Bandit. I haven’t yet picked a location for a new blog for the project. However this shouldn’t impact subscribers to my RSS Bandit feed since it is already hosted on Feedburner and a redirect shouldn’t be noticeable.

Thanks for everything.

PS: See also The Year the Blog Died.

Now playing: Boyz II Men - End of the Road


 

March 4, 2008
@ 04:00 AM

Dean Hachamovitch, who runs the Internet Explorer team, has a blog post entitled Microsoft's Interoperability Principles and IE8 where he addresses some of the recent controversy about how rendering pages according to Web standards will work in IE8. He wrote:

The Technical Challenge

One issue we heard repeatedly during the IE7 beta concerned sites that looked fine in IE6 but looked bad in IE7. The reason was that the sites had worked around IE6 issues with content that – when viewed with IE7’s improved Standards mode – looked bad.

As we started work on IE8, we thought that the same thing would happen in the short term: when a site hands IE8 content and asks for Standards mode, that content would expect IE7’s Standards mode and not appear or function correctly. 

In other words, the technical challenge here is how can IE determine whether a site’s content expects IE8’s Standards mode or IE7’s Standards mode? Given how many sites offer IE very different content today, which should IE8 default to?

Our initial thinking for IE8 involved showing pages requesting “Standards” mode in an IE7’s “Standards” mode, and requiring developers to ask for IE8’s actual “Standards” mode separately. We made this decision, informed by discussions with some leading web experts, with compatibility at the top of mind.

In light of the Interoperability Principles, as well as feedback from the community, we’re choosing differently. Now, IE8 will show pages requesting “Standards” mode in IE8’s Standards mode. Developers who want their pages shown using IE8’s “IE7 Standards mode” will need to request that explicitly (using the http header/meta tag approach described here).

Going Forward

Long term, we believe this is the right thing for the web.

I’m glad someone came to this realization. The original solution was simply unworkable in the long term regardless of how much short term pain it eased. Kudos to the Internet Explorer team for taking the long view and doing what is best for the Web. Is it me or is that the most positive the comments have ever been on the IE blog?

PS: It is interesting to note that this is the second time in the past week Microsoft has announced a technology direction related to Web standards and changed it based on feedback from the community.

Now playing: Usher - In This Club (feat. Young Jeezy)


 

Categories: Web Development

David Treadwell has a blog post on the Windows Live Developer blog entitled David Treadwell on New and Updated Windows Live Platform Services where he previews some of the announcements that folks will get to dig into at MIX 08. There are a lot of items of note in his post, but a few things stood out that I felt were worth calling out.

Windows Live Messenger Library (new to beta) – “Develop your own IM experience”

We are also opening up the Windows Live Messenger network for third-party web sites to reach the 300 million+ Windows Live Messenger users. The library is a JavaScript client API, so the user experience is primarily defined by the third party. When a third party integrates the Windows Live Messenger Library into their site they can define the look & feel to create their own IM experience. Unlike the existing third party wrappers for the MSN Protocol (the underlying protocol for Windows Live Messenger) the Windows Live Messenger Library securely authenticates users, therefore their Windows Live ID credentials are safe.

A couple of months ago we announced the Windows Live Messenger IM Control which enables you to embed an AJAX instant messaging window on any webpage so people can start IM conversations with you. I have one placed at http://carnage4life.spaces.live.com and it’s cool to have random readers of my blog start up conversations with me in the middle of my work day or at home via the IM control.

The team who delivered this has been hard at work and now they’ve built a library that enables any developer to build similar experiences on top of the Windows Live Messenger network. Completely customized IM integration is now available for anyone that wants it.  Sweet. Kudos to Keiji, Steve Gordon, Siebe and everyone else who had something to do with this for getting it out the door.

An interesting tidbit is that the library was developed in Script#. Three cheers for code generation.

Contacts API (progressed to Beta) – “Bring your friends”

Our goal is to help developers keep users at the center of their experience by letting them control their data and contact portability, while keeping their personal information private. A big step forward in that effort is today’s release to beta of Windows Live Contacts API. Web developers can use this API in production to enable their customers to transfer and share their contacts lists in a secure, trustworthy way (i.e., no more screen scraping)—a great step on the road toward data portability. (For more on Microsoft’s view on data portability, check out Inder Sethi’s video.) By creating an optimized mode for invitations, it allows users to share only the minimum amount of information required to invite friends to a site, this includes firstname / lastname / preferred email address. The Contacts API uses the new Windows Live ID Delegated Authentication framework; you can find out more here.

A lot of the hubbub around “data portability” has really been about exporting contact lists. Those of us working on the Contacts platform at Windows Live realize that there is a great demand for users to be able to access their social graph data securely from non-Microsoft services.  

The Windows Live Contacts API provides a way for Windows Live users to give an application permission to access their contact list in Windows Live (i.e. Hotmail address book/Live Messenger buddy list) without giving the application their username and password. It is our plan to kill the password anti-pattern when it comes to Windows Live services. If you are a developer of an application or Web site that screen scrapes Hotmail contacts, I’d suggest taking a look at this API instead of continuing in this unsavory practice.
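To give a feel for the shape of the delegated authentication flow from a consumer's perspective, here is a rough sketch; the endpoint URL and header format are placeholders, not the documented Windows Live Contacts API surface:

using System;
using System.IO;
using System.Net;

class ContactsClient
{
    public static string FetchContacts(string delegationToken)
    {
        // Hypothetical endpoint; the application presents a token the
        // user consented to, never the user's password.
        var request = (HttpWebRequest)WebRequest.Create(
            "https://contacts.example.com/users/@me/contacts");
        request.Headers.Add("Authorization",
            "DelegatedToken dt=\"" + delegationToken + "\"");

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd(); // the contact list payload
        }
    }
}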

Atom Publishing Protocol (AtomPub) as the future direction

Microsoft is making a large investment in unifying our developer platform protocols for services on the open, standards-based Atom format (RFC 4287) and the Atom Publishing Protocol (RFC 5023). At MIX we are enabling several new Live services with AtomPub endpoints which enable any HTTP-aware application to easily consume Atom feeds of photos and for unstructured application storage (see below for more details). Or you can use any Atom-aware public tools or libraries, such as .NET WCF Syndication to read or write these cloud service-based feeds.

In addition, these same protocols and the same services are now ADO.NET Data Services (formerly known as “ Project Astoria”) compatible. This means we now support LINQ queries from .NET code directly against our service endpoints, leveraging a large amount of existing knowledge and tooling shared with on-premise SQL deployments.

The first question that probably pops into the mind of regular readers of my blog is, "What happened to Web3S and all that talk about AtomPub not being a general purpose editing format for the Web?". The fact is that when we listened to the community of Web developers, the feedback was overwhelmingly clear: people would prefer that we work together with the community to make AtomPub fit the scenarios we felt it wasn't suited for, rather than have Microsoft create a competing proprietary protocol.

We listened and now here we are. If you are interested in the technical details of how Microsoft plans to use AtomPub and how we've dealt with the various issues we originally had with the protocol, I suggest subscribing to the Astoria team's blog and checking out the various posts on this topic by Pablo Castro. There's a good post by Pablo discussing how Astoria describes relations between elements in AtomPub and suggesting a mechanism for doing inline expansion of links. I'll be providing my thoughts on each of Pablo's posts and the responses as I find time during the coming weeks.
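As a taste of what consuming these AtomPub endpoints looks like from .NET, here is a minimal sketch using the WCF syndication classes mentioned in the announcement; the feed URL is a placeholder:

using System;
using System.ServiceModel.Syndication; // .NET 3.5, System.ServiceModel.Web.dll
using System.Xml;

class AtomFeedReader
{
    static void Main()
    {
        // Works against any Atom (RFC 4287) feed, including the new
        // AtomPub-enabled Live services endpoints.
        using (XmlReader reader = XmlReader.Create("http://example.com/photos/feed"))
        {
            SyndicationFeed feed = SyndicationFeed.Load(reader);
            foreach (SyndicationItem item in feed.Items)
            {
                Console.WriteLine("{0} ({1})", item.Title.Text, item.LastUpdatedTime);
            }
        }
    }
}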

Windows Live Photo API (CTP Refresh with AtomPub end point)

The Windows Live Photo API allows users to securely grant permission (via Delegated Authentication) for a third party web site to create/read/update/delete on their photos store in Windows Live. The Photo API refresh has several things which make it easier and faster for third parties to implement.

  • Third party web sites can now link/refer to images directly from the web browser so they no longer need to proxy images, and effectively save on image bandwidth bills.
  • A new AtomPub end point which makes it even easier to integrate.

At the current time, I can’t find the AtomPub endpoint but that’s probably because the documentation hasn’t been refreshed. Moving the API to AtomPub is one of the consequences of the decision to standardize on AtomPub for Web services provided by Windows Live. Although I was part of the original decision to expose the API using WebDAV, I like the fact that all of our APIs will utilize a standard protocol and can take advantage of the breadth of Atom and AtomPub libraries that exist on various platforms.

I need to track down the AtomPub end point so I can compare and contrast it to the WebDAV version to see what we’ve gained and/or lost in the translation. Stay tuned.

Now playing: Jay-Z - Can't Knock the Hustle


 

Categories: Windows Live | XML Web Services

Over the past week, two Windows Live teams have shipped some good news to their users. The Windows Live SkyDrive team addressed the two most frequently raised issues with their service in their post Welcome to the bigger, better, faster SkyDrive!, which reads:

You've made two things clear since our first release: You want more space; and you want SkyDrive where you are. Today we're giving you both. You now have five times the space you had before — that’s 5GB of free online storage for your favorite documents, pictures, and other files.
 
 
SkyDrive is also available now in 38 countries/regions. In addition to Great Britain, India, and the U.S., we’re live in Argentina, Australia, Austria, Belgium, Bolivia, Brazil, Canada, Chile, Colombia, Denmark, the Dominican Republic, Ecuador, El Salvador, Finland, France, Guatemala, Honduras, Italy, Japan, Mexico, the Netherlands, New Zealand, Nicaragua, Norway, Panama, Paraguay, Peru, Puerto Rico, Portugal, South Korea, Spain, Sweden, Switzerland, Taiwan, and Turkey.
 

Wow, Windows Live is just drowning our customers in free storage. That's 5GB in SkyDrive and 5GB for Hotmail.

The Windows Live Spaces team also shipped some sweetness to their customers. This feature is a little nearer to my heart since it relies on Contacts platform APIs I worked on a little while ago. The feature is described by Michelle on their team blog in a post entitled More information on Friends in common which states:

In the friends module on another person’s space, there is a new area that highlights friends you have in common.  Right away you can see the number of people you both know and the profile pictures of some of those friends. 

Want to see the rest of your mutual friends?  Click on In common and you’re taken to a full page view that shows all of your friends as well as separate lists of friends in common and friends that you don't have in common.  This way you can also discover new people that you might know in real life, but are not connected with on Windows Live.

[Screenshots: the friends in common module and the full page view]

 

Finding friends in common is also especially important when planning an event on Windows Live Events.  Who wants to go to a party when none of your friends are going? 

On the Guest list area of every event, you can now quickly see how many of your friends have also been invited to the event.  Just click on See who’s going and see whether or not your friends are planning to go. 

[Screenshot: event guest list showing friends in common]

Showing mutual friends as shown above is one of those small features that makes a big impact on the user experience. Nice work Michelle and Shu on getting this out the door.

Now playing: Iconz - I Represent


 

Categories: Windows Live

I found Charles Hudson’s post FriendFeed and the Facebook News Feed - FriendFeed is For Sharing and Facebook Used to be About my Friends somewhat interesting since one of the things I’ve worked on recently is the What’s New page on Windows Live Spaces. He writes

I was reading this article on TechCrunch “Facebook Targets FriendFeed; Opening Up The News Feed” and I found it kind of interesting. As someone who uses FriendFeed a lot and uses Facebook less and less, I don’t think the FriendFeed team should spend much time worrying about this announcement. The reason is really simple.

In the beginning, the Facebook News Feed was really interesting. It was all information about my friends and what they were doing. Over time, it's become a lot less interesting.

I would like to see Facebook separate “news” from “activity” - “news” is stuff that happened  to people (person x became friend with person y, person x is no longer in a relationship, status updates, etc) and “activities” are stuff related to applications, content sharing, etc. Trying to stuff news and activity into the same channel results in a lot of chaos and noise.

FriendFeed is really different. To me, FriendFeed is a community of people who like to share stuff. That’s a very different product proposition than what the News Feed originally set out to do.

This is an example of a situation where I agree with the sentiment in Jeff Atwood's post I Repeat: Do Not Listen to Your Users. This isn't to say that Charles Hudson's increasingly negative user experience with Facebook should be discounted or that the things he finds interesting about FriendFeed are invalid. The point is that in typical end user fashion, Charles's complaints contradict themselves and his suggestions wouldn't address the actual problems he seems to be having.

The main problem Charles has with the news feed on Facebook is its increased irrelevance due to massive amounts of application spam. This has nothing to do with FriendFeed being more of a community site than Facebook. This also has nothing to do with separating "news" from "activity" (whatever that means). Instead it has everything to do with the fact that the Facebook platform is an attractive target for applications attempting to "grow virally" by sending all sorts of useless crap to people's friends. FriendFeed doesn't have that problem because everything that shows up in your feed is pulled from a carefully selected list of services shown below

[Image: the 28 services supported by FriendFeed]

The thing about the way FriendFeed works is that there is little chance that stuff in the feed would be considered spammy because the content in the feed will always correspond to a somewhat relevant user action (Digging a story, adding a movie to a Netflix queue, uploading photos to Flickr, etc).

So this means one way Facebook can add relevance to the content in their feed is to pull data in from more valid sources instead of relying on spammy applications pushing useless crap like “Dare’s level 17 zombie just bit Rob’s level 12 vampire”. 

That’s interesting but there is more. There doesn’t seem to be any tangible barrier to entry in the “market” that Friendfeed is targetting since all they seem to be doing is pulling the public RSS feeds from a handful of Web sites. This is the kind of project I could knock out in two months. The hard part is having a scalable RSS processing platform. However we know Facebook already has one for their feature which allows one to import blog posts as Notes. So that makes it the kind of feature an enterprising dev at Facebook could knock out in a week or two.

The only thing FriendFeed may have going for it is the community that ends up adopting it. The tricky thing about social software is that your users are as pivotal to your success as your features. Become popular with the right kind of users and your site blows up (e.g. MySpace), while with a different set of users your site eventually stagnates due to its niche nature (e.g. LiveJournal).

FriendFeed reminds me of Odeo: a project by some previously successful entrepreneurs that jumps on a hyped bandwagon without actually scratching an itch the founders have or fully understanding the space.

Now playing: Jae Millz - No, No, No (remix) (feat. Camron & T.I.)


 

Categories: Social Software