News of the layoffs at Yahoo! has now hit the presses. A couple of the folks who've been indicated as laid off are people I know to be great employees, either via professional interaction or by reputation. The list of people who fit this bill so far is Susan Mernit, Bradley Horowitz, Salim Ismail and Randy Farmer. Salim used to run Yahoo's "incubation" wing so this is a pretty big loss. It is particularly interesting that he volunteered to leave the company, which may be a coincidence or could imply that some of the other news about Yahoo! has motivated some employees to seek employment elsewhere. It will be interesting to see how this plays out in the coming months.

Randy Farmer is also a surprise given that he pretty much confirmed that he was working on Jerry Yang's secret plan for a Yahoo comeback which included

  • Rethinking the Yahoo homepage
  • Consolidating Yahoo's plethora of social networks
  • Opening up Yahoo to third parties with a consistent platform similar to Facebook's
  • Revamping Yahoo's network infrastructure

If Yahoo! is reducing headcount by letting go of folks working on various next generation projects, this can't bode well for the future of the company given that success on the Web depends on constant innovation.

PS: To any ex-Yahoos out there, if the kinds of problems described in this post sound interesting to you, we're always hiring. Give me a holler. :) 


 

Mini-Microsoft has a blog post entitled Microsoft's Yahoo! Acquisition is Bold. And Dumb. which contains the following excerpt

To tell you the truth, if you had pulled me aside when I was in school, holding court in the computer science lab, and whispered in my ear ala The Graduate: "online ads..." I would have laughed my geek butt off.

So Google gets to have the joke on me, but for us to bet the company and build Microsoft's future foundation on ads revenue? WTF? As someone who considers themselves a citizen, not a consumer, I want to create software experiences that make people's lives delightful and better, not that sells them crap they don't need while putting them deeper into debt. I'm going to be in purgatory long enough as is.

I find this sentiment somewhat ironic coming from Mini-Microsoft. Microsoft’s bread and butter comes from selling software that people have to use, not software that they want to use. In fact, you can argue that the fundamental problem the company has had in gaining traction in certain consumer-centric markets is that our culture is still shaped by selling to IT departments and developers (i.e. where features and checklists are important) as opposed to selling to consumers (i.e. where user experience is the most important thing).

Specifically, it is hard for me to imagine that there are more people in the world who think that whatever Microsoft product Mini works on has given them more delight or improved their lives more than Facebook, Flickr, Google, MySpace or Windows Live Messenger, all of which happen to be ad-supported software. Thus it is amusing to see him imply that ad-supported software is the antithesis of software that delights and improves people’s quality of life. 

The way I see it, Jerry Yang is right that from the perspective of a user “You Always Have Other Options” when it comes to free (ad-supported), Web-based software, which encourages applications to innovate in the user experience to differentiate themselves. It is no small wonder that we’ve seen more innovations in social applications and user interfaces in the world of free, Web-based applications than we’ve seen in the world of proprietary, commercial software. Something to think about the next time you decide to crap on ad-supported Web apps because you think building commercial software is some sort of noble cause that results in perfect, customer-delighting software, Mini.

Now playing: Snoop Doggy Dogg - Downtown Assassins


 

Categories: Life in the B0rg Cube

A couple of weeks ago I got a bug filed against me in RSS Bandit with the title Weak duplicate feed detection that had the following description

I already subscribed to "http://feeds.haacked.com/haacked" feed. Then while browsing the feed's homepage the feed autodetection gets refreshed with a "new feed found", url: "http://feeds.haacked.com/haacked/".  It is not detected as a duplicate (ends with backslash) there and also not detected in the subscription wizard.

Even though I argued that there are lots of URLs that seem equivalent to end users but aren't according to the specs, I decided to go ahead and fix the two common types of equivalence that trip up end users: URLs that differ only by a trailing slash and URLs whose host names differ only by a "www." prefix.

Within a few hours of shipping this in version 1.6.0.2 of RSS Bandit, the bug reports started coming in hard. There's a thread in our forums with the title URL Corruption After Adding a Feed containing the following complaints

After I add the feed at: http://www.simple-talk.com/feed/
I get an error in the Error Log and when I restart RSS Bandit, the URL has been truncated to http://www.simple-talk.com

Possibly related problem:
When I add "
http://www.amazonsellercommunity.com/forums/rss/rssmessages.jspa?forumID=22" then RSS Bandit slices and dices it into "http://amazonsellercommunity.com/forums/rss/rssmessages.jspa?forumID=22".
Sorry about the period outside the quotation mark. I wanted to make sure the URL was clear.

Same problem--have a feed, I've used it in the past with RSSBandit and was trying to enter it (tried the usual way first, then turned off autodiscover, then tried to change it via properties) but no matter what I do the www. in front disappears, and the feed doesn't work. http://www.antipope.org/charlie/blog-static/

Each of these is an example where the URL works when the domain starts with "www" but doesn't if you take it out. This is definitely a case of going from bad to worse. We went from the minor irritation of duplicate feeds not being detected when you subscribe to the same feed twice to users being unable to access certain feeds at all.

My apologies to everyone affected by this problem. I will be dialing back the canonicalization process to only treat trailing slashes as equivalent. Expect the installer to be refreshed within the next hour or so.
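
For the curious, the dialed-back canonicalization amounts to something like the C# sketch below. The class and method names are made up for illustration and this isn't the actual RSS Bandit code, but it shows the idea: a trailing slash is now the only difference we paper over and host names are left strictly alone.

using System;

public static class FeedUrlHelper
{
    // Sketch of the dialed-back canonicalization: two feed URLs are treated
    // as duplicates only if they differ by a trailing slash. Stripping "www."
    // is deliberately NOT done since some hosts only resolve with the prefix.
    public static string CanonicalizeFeedUrl(string feedUrl)
    {
        Uri uri = new Uri(feedUrl);
        string canonical = uri.AbsoluteUri;

        // Only trim the slash when there is no query string to mangle
        if (String.IsNullOrEmpty(uri.Query) && canonical.EndsWith("/"))
        {
            canonical = canonical.TrimEnd('/');
        }
        return canonical;
    }

    public static bool AreDuplicateFeeds(string url1, string url2)
    {
        return String.Equals(CanonicalizeFeedUrl(url1),
                             CanonicalizeFeedUrl(url2),
                             StringComparison.Ordinal);
    }
}

With this approach, http://feeds.haacked.com/haacked and http://feeds.haacked.com/haacked/ compare as duplicates while http://www.simple-talk.com/feed/ keeps its "www." intact.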

Now Playing: Young Jeezy feat. Swizz Beatz - Money In Da Bank (Remix)


 

Categories: RSS Bandit

I just realized that the current released version of RSS Bandit doesn’t have a working code name based on a character from the X-Men comic book. The previous 1.5.0.17 release was codenamed ShadowCat while the next release is codenamed Phoenix. Since the v1.6.0.x releases have been interim releases on the road to Phoenix, I’ve decided to retroactively give them the codename Jean Grey. Now, on to the updates.

Jean Grey (v1.6.0.x) Update

The last bug fix release of RSS Bandit fixed a few bugs but introduced a couple of even worse bugs [depending on your perspective]. We’ve shipped version 1.6.0.2, which addresses the following issues

  • Application crashes with AccessViolationException on startup on Windows XP.  
  • Application crashes and red 'X' shows in feed subscriptions window on Windows XP.  
  • User's credentials are not used when accessing feeds via a proxy server leading to proxy errors when fetching feeds.  
  • Duplicate feed URLs not detected if they differ by trailing slash or "www." in the host name. 
  • Application crashes when displaying an error dialog when a certificate issue is detected with a secure feed.

The first three issues are regressions that were introduced as part of refactoring the code and making it work better on Windows Vista. Yet another data point that shows that you can never have too many unit tests and that beta testing isn’t a bad idea either.
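
In that spirit, here’s the sort of NUnit test that would have caught the "www." regression before it shipped. This is a sketch against the hypothetical FeedUrlHelper sketched earlier, not our actual test suite:

using NUnit.Framework;

[TestFixture]
public class FeedUrlCanonicalizationTests
{
    [Test]
    public void TrailingSlashVariantsAreDuplicates()
    {
        Assert.IsTrue(FeedUrlHelper.AreDuplicateFeeds(
            "http://feeds.haacked.com/haacked",
            "http://feeds.haacked.com/haacked/"));
    }

    [Test]
    public void WwwPrefixIsPreserved()
    {
        // Regression check: some hosts only resolve with the "www." prefix
        // so canonicalization must not strip it from the host name.
        StringAssert.StartsWith("http://www.",
            FeedUrlHelper.CanonicalizeFeedUrl("http://www.simple-talk.com/feed/"));
    }
}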

You can download the new release from http://downloads.sourceforge.net/rssbandit/RssBandit1.6.0.2_Installer.zip

Phoenix (v2.0)  Update

I’m continuing with my plan to make RSS Bandit a desktop client for Web based feed readers like NewsGator Online and Google Reader. I’ve been slightly sidetracked by the realization that it would be pretty poor form for a Microsoft employee to write an application that synchronized with Google’s RSS reader but not any of Microsoft’s, even if it is a side project. My current coding project is to integrate with the Windows RSS platform which would allow one to manipulate the same set of feeds in RSS Bandit, Internet Explorer 7 and Outlook 2007. The good news is that with Outlook 2007 integration, you also get Exchange synchronization for free.
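
For those wondering what the integration looks like, the Windows RSS platform is a COM API which you get at from C# via an interop assembly (Microsoft.Feeds.Interop) generated from its type library. Below is a rough sketch from my early experiments of walking the Common Feed List that RSS Bandit, Internet Explorer 7 and Outlook 2007 would all share; treat the member names as my best reading of the documentation rather than gospel.

using System;
using Microsoft.Feeds.Interop; // interop assembly for the Windows RSS platform

class CommonFeedListDumper
{
    static void Main()
    {
        IFeedsManager fm = new FeedsManager();
        DumpFolder((IFeedFolder)fm.RootFolder, "");
    }

    // Recursively print the feeds in the Common Feed List. Any application
    // using this store (IE7, Outlook 2007, etc.) sees the same subscriptions.
    static void DumpFolder(IFeedFolder folder, string indent)
    {
        IFeedsEnum feeds = (IFeedsEnum)folder.Feeds;
        for (int i = 0; i < feeds.Count; i++)
        {
            IFeed feed = (IFeed)feeds.Item(i);
            Console.WriteLine("{0}{1} <{2}>", indent, feed.Name, feed.Url);
        }

        IFeedsEnum subfolders = (IFeedsEnum)folder.Subfolders;
        for (int i = 0; i < subfolders.Count; i++)
        {
            IFeedFolder subfolder = (IFeedFolder)subfolders.Item(i);
            Console.WriteLine("{0}[{1}]", indent, subfolder.Name);
            DumpFolder(subfolder, indent + "  ");
        }
    }
}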

The bad news has been having to use the RSS reading features of Internet Explorer 7 and Outlook 2007 on a regular basis as a way of eating my own dog food with regards to the integration features. It’s pretty stunning to see not one but two RSS reading applications that assume “mark all items as read” or “delete all feeds” are actions that users never have to take. When you have people writing shell scripts to perform basic tasks in your application then it is a clear sign that somewhere along the line, the user experience for that particular set of features got the shaft. 

I’m about halfway through the integration, after which I’ll continue with Google Reader and finally NewsGator Online using an Outlook + Exchange style model. While I’m working on this, both Oren and Torsten will be mapping out the rewrite of the graphical user interface using WPF. I’ll probably need to buy a book on XAML or something in the next few months so I can contribute to this effort. The only thing I’ve heard about any of the various books on the subject is that they all seem to have had their forewords written by Don Box. Does anyone have recommendations on which book or website I should use to start learning XAML + WPF?

Now playing: Eminem - Sing For The Moment


 

This past weekend I attended the O’Reilly Social Graph FOO Camp and got to meet a bunch of folks who I’ve only “known” via their blogs or news stories about them. My favorite moment was talking to Mark Zuckerberg about stuff I think is wrong with Facebook: he stops for a second while I’m telling him the story of naked pictures in my Facebook news feed, then says “Dare? I read your blog”. Besides that my favorite part of the experience was learning new things from folks with different perspectives and technical backgrounds from mine. Whether it was hearing different perspectives on the social graph problem from folks like Joseph Smarr and Blaine Cook, getting schooled on the various real-world issues around using OpenID/OAuth in practice by John Panzer and Eran Hammer-Lahav or getting to Q&A Brad Fitzpatrick about the Google Social Graph API, it was a great learning experience all around.

There have been some ideas tumbling around in my head all week and I wanted to wait a few days before blogging to make sure I’d let the ideas fully marinate. Below are a few of the more important ideas I took away from the conference.

Social Network Discovery vs. Social Graph Portability

One of the most startling realizations I came to during the conference is that a lot of my assumptions about why developers of social applications are interested in what has been mistakenly called “social graph portability” were incorrect. I had assumed that the many social networking sites that utilize the password anti-pattern to screen scrape a user’s Hotmail/Y! Mail/Gmail/Facebook address book were doing so to get a list of the user’s friends to spam with invitations to join the service. However a lot of the folks I met at the SG FOO Camp made me realize how much of a bad idea it would be if they actually did that. Sending out a lot of spam would lead to negativity being associated with their service and brand (Plaxo is still dealing with a lot of the bad karma they generated from their spammy days).

Instead the way social applications often use the contacts from a person’s email address book is to satisfy the scenario in Brad Fitzpatrick’s blog post URLs are People, Too where he wrote

So you've just built a totally sweet new social app and you can't wait for people to start using it, but there's a problem: when people join they don't have any friends on your site. They're lonely, and the experience isn't good because they can't use the app with people they know.  

I then thought of my first time using Twitter and Facebook, and how I didn’t consider them of much use until I started interacting with people I already knew that used those services. More than once someone has told me, “I didn’t really get why people like Facebook until I got over a dozen friends on the site”.

So the issue isn’t really about “portability”. After all, my “social graph” of Hotmail or Gmail contacts isn’t very useful on Twitter if none of my friends use the service. Instead it is about “discovery”.

Why is this distinction important? Let’s go back to the complaint that Facebook doesn’t expose email addresses in its API. The site actually hides all contact information from its API, which is understandable. However, since email addresses are also the only global identifiers we can rely on for uniquely identifying users on the Web, they are useful as a way of figuring out that Carnage4Life on Twitter is actually Dare Obasanjo on Facebook, since you can just check whether both accounts are backed by the same email address.

I talked to both John Panzer and Brad Fitzpatrick about how we could bridge this gap and Brad pointed out something really obvious which he takes advantage of in the Google Social Graph API. We can just share email addresses using foaf:mbox_sha1sum (i.e. cryptographic one-way hashes of email addresses). That way we all have a shared globally unique identifier for a user but services don’t have to reveal their users’ email addresses.
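
For concreteness, a foaf:mbox_sha1sum is just the SHA-1 hash of the mailto: URI form of an email address, hex-encoded. Here’s a quick C# sketch; note that the lowercasing is my own normalization assumption since the FOAF spec hashes the mbox URI as given.

using System;
using System.Security.Cryptography;
using System.Text;

public static class FoafUtil
{
    // Computes foaf:mbox_sha1sum: the hex-encoded SHA-1 hash of the mailto:
    // URI for an email address. Two services can compare these hashes to tell
    // that accounts belong to the same person without revealing the address.
    public static string MboxSha1Sum(string email)
    {
        string mbox = "mailto:" + email.Trim().ToLowerInvariant(); // normalization assumption
        byte[] hash = SHA1.Create().ComputeHash(Encoding.UTF8.GetBytes(mbox));

        StringBuilder hex = new StringBuilder(hash.Length * 2);
        foreach (byte b in hash)
        {
            hex.Append(b.ToString("x2"));
        }
        return hex.ToString();
    }
}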

I wonder how we can convince the folks working on the Facebook platform to consider adding this as one of the properties returned by Users.getInfo?

You Aren’t Really My Friend Even if Facebook Says We Are

In a post entitled A proposal: email to URL mapping Brad Fitzpatrick wrote

People have different identifiers, of different security, that they give out depending on how much they trust you. Examples might include:

  • Homepage URL (very public)
  • Email address (little bit more secret)
  • Mobile phone number (perhaps pretty secretive)

When I think back to Robert Scoble getting kicked off of Facebook for screen scraping his friends’ email addresses and dates of birth into Plaxo, I wonder how many of his Facebook friends are comfortable with their personal contact information, including email addresses, cell phone numbers and home addresses, being utilized by Robert in this manner. A lot of people at SG FOO Camp argued, “If you’ve already agreed to share your contact info with me, why should you care whether I write it down on paper or download it into some social networking site?”

That’s an interesting question.

I realized that one of my answers is that I actually don’t even want to share this info with the majority of the people in my Facebook friends list in the first place [as Brad points out]. The problem is that Facebook makes this a rather binary decision. Either I’m your “friend” and you get all my private personal details or I’ve faceslammed you by ignoring your friend request or only giving you access to my Limited Profile. I once tried to friend Andrew ‘Boz’ Bosworth (a former Microsoft employee who works at Facebook) and he told me he doesn’t accept friend requests from people he doesn’t know personally, so he ignored the friend request. I thought it was fucking rude even though objectively I realize it makes sense, since accepting would have meant I could view all his personal wall posts as well as his contact info. Funny enough, I always thought it was a flaw in the site’s design that we had to have such an awkward social interaction at all.

I think the underlying problem again points to Facebook’s poor handling of multiple social contexts. In the real world, I separate my interactions with co-workers from that with my close friends or my family. For an application that wants to be the operating system underlying my social interactions, Facebook doesn’t do a good job of handling this fundamental reality of adult life.

Now playing: D12 - Revelation


 

Categories: Social Software | Trip Report

Niall Kennedy has a blog post entitled Sniff browser history for improved user experience where he describes a common-sense technique to test URLs against a Web browser’s visited page history. He writes

I first blogged about this technique almost two years ago but I will now provide even more details and example implementations.
...
A web browser such as Firefox or Internet Explorer will load the current user's browser history into memory and compare each link (anchor) on the page against the user's previous history. Previously visited links receive a special CSS pseudo-class distinction of :visited and may receive special styling.
...
Any website can test a known set of links against the current visitor's browser history using standard JavaScript.

  1. Place your set of links on the page at load or dynamically using the DOM access methods.
  2. Attach a special color to each visited link in your test set using finely scoped CSS.
  3. Walk the evaluated DOM for each link in your test set, comparing the link's color style against your previously defined value.
  4. Record each link that matches the expected value.
  5. Customize content based on this new information (optional).

Each link needs to be explicitly specified and evaluated. The standard rules of URL structure still apply, which means we are evaluating a distinct combination of scheme, host, and path. We do not have access to wildcard or regex definitions of a linked resource.

Niall goes on to describe the common ways one can improve the user experience on a site using this technique. I’ve been considering using this approach to reduce the excess blog flair on my weblog. It doesn’t make much sense to show people a “submit to reddit” button if they don’t use reddit. The approach suggested in Niall’s article makes it possible for me to detect what sites a user visits and then only display relevant flair on my blog posts. Unfortunately neither of Niall’s posts on the topic provides example code, which is why I’m posting this follow up to Niall’s post. Below is an HTML page that uses a JavaScript function to determine which social bookmarking sites a viewer of a Web page actually uses, based on their browser history.


<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
  <head>
    <title>What Social Bookmarking Sites Do You Use?</title>
    <script type="text/javascript">
      var bookmarking_sites = new Array("http://programming.reddit.com", "http://www.dzone.com",
          "http://www.digg.com", "http://del.icio.us", "http://www.stumbleupon.com",
          "http://ma.gnolia.com", "http://www.dotnetkicks.com/", "http://slashdot.org");

      function DetectSites() {
        var testelem = document.getElementById("linktest");
        var visited_sites = document.getElementById("visited_sites");
        var linkColor;
        var isVisited = false;

        for (var i = 0; i < bookmarking_sites.length; i++) {
          // Add a hidden link to the site so the browser applies the
          // :visited pseudo-class if the URL is in the user's history.
          var url2test = document.createElement("a");
          url2test.href = bookmarking_sites[i];
          url2test.innerHTML = "I'm invisible, you can't click me";
          testelem.appendChild(url2test);

          if (document.defaultView) { // Mozilla
            linkColor = document.defaultView.getComputedStyle(url2test, null).getPropertyValue("color");
            if (linkColor == "rgb(100, 149, 237)") { // computed value of CornflowerBlue
              isVisited = true;
            }
          } else if (url2test.currentStyle) { // IE
            if (url2test.currentStyle.color.toLowerCase() == "cornflowerblue") {
              isVisited = true;
            }
          }

          if (isVisited) {
            visited_sites.innerHTML = visited_sites.innerHTML +
              "<a href='" + url2test.href + "'>" + url2test.href + "</a><br>";
          }
          testelem.removeChild(url2test);
          isVisited = false;
        }
      }
    </script>
    <style type="text/css">
      p#linktest a:visited { color: CornflowerBlue }
    </style>
  </head>
  <body onload="DetectSites()">
    <b>Social Bookmarking Sites You've Visited</b>
    <p id="linktest" style="visibility:hidden"></p>
    <p id="visited_sites"></p>
  </body>
</html>

Of course, after writing the aforementioned code it occurred to me to run a Web search, and I found that there are bits of code for doing this all over the Web in places like Jeremiah Grossman’s blog (Firefox only) and GNUCITIZEN.

At least now I have it in a handy format; cut, paste and go.

Now all I need is some free time in which to tweak my weblog to start using the above function instead of showing people links to services they don’t use.

Now playing: Disturbed - Numb


 

Categories: Programming | Web Development

I was recently reading a blog post written in response to one of my posts and noticed that the author used my Wikipedia entry as the primary resource to figure out who I am. The only problem with that is that my Wikipedia entry is pretty outdated and quite scanty. Since it is poor form to edit your own Wikipedia entry, I am in a quandary.

My resume is slightly more up to date; it is primarily missing descriptions of the stuff I worked on at Microsoft last year. Thus I saw two choices: I could either change the “About Me” link on my blog to point to my resume or I could implore some kind soul in my readership to update my entry in Wikipedia. I decided to start with the latter and if that doesn’t work out, I’ll update the “About Me” link to point to my resume.

Thanks in advance to anyone who takes the time to update my Wikipedia entry (not vandalize it :) ).

Now playing: Ice Cube - Ghetto Vet


 

Categories: Personal


First Round Capital ad asks 'Leaving Microsoft?'

 

Amusing. I wonder what kinds of ads YHOO employees are seeing on Facebook these days?

Now playing: Abba - Money, Money, Money


 

Categories: Social Software

On Friday of last week, Brad Fitzpatrick posted an entry on the Google Code blog entitled URLs are People, Too where he wrote

So you've just built a totally sweet new social app and you can't wait for people to start using it, but there's a problem: when people join they don't have any friends on your site. They're lonely, and the experience isn't good because they can't use the app with people they know. You could ask them to search for and add all their friends, but you know that every other app is asking them to do the same thing and they're getting sick of it. Or they tried address book import, but that didn't totally work, because they don't even have all their friends' email addresses (especially if they only know them from another social networking site!). What's a developer to do?

One option is the new Social Graph API, which makes information about the public connections between people on the Web easily available and useful
...
Here's how it works: we crawl the Web to find publicly declared relationships between people's accounts, just like Google crawls the Web for links between pages. But instead of returning links to HTML documents, the API returns JSON data structures representing the social relationships we discovered from all the XFN and FOAF. When a user signs up for your app, you can use the API to remind them who they've said they're friends with on other sites and ask them if they want to be friends on your new site.

I talked to DeWitt Clinton, Kevin Marks and Brad Fitzpatrick about this API at the O'Reilly Social Graph FOO Camp and I think it is very interesting. Before talking about the API, I did want to comment on the fact that this is the second time I've seen a Google employee ship something which implies that any developer there can just write custom code to do data analysis on top of their search index (i.e. Google's copy of the World Wide Web) and then share that information with the world. The first time was Ian Hickson's work on Web authoring statistics. That is cool.

Now back to the Google Social Graph API. An illuminating aspect of my conversations at the Social Graph FOO Camp is that for established social apps, the scenario described by Brad, where social applications bootstrap the user's experience by showing them their friends who already use the service, is more important than the "invite my friends to join this new social networking site" scenario. This is interesting primarily because both goals are currently achieved via the anti-pattern of requesting a user's username and password for their email service provider and screen scraping their address book. The Social Graph API attempts to eliminate the need for this ugly practice by crawling users' publicly articulated relationships and then providing an API that social apps can use to find a user's identities on other services as well as their relationships with other users on those services.

The API uses URIs as the primary identifier for users instead of email addresses. Of course, since there is often an intuitive way to convert a username to a URI (e.g. 'carnage4life on Twitter' => http://www.twitter.com/carnage4life), users simply need to provide a username instead of a URI.

So how would this work in the real world? Let's say I signed up for Facebook for the first time today. At this point my experience on the site would be pretty lame because I've made no friends, so my news feed would be empty and I'm not connected to anyone I know on the site yet. Now instead of Facebook collecting the username and password for my email address provider to screen scrape my address book (boo hiss), it shows a list of social networking sites and asks for just my username on those sites. On obtaining my username on Twitter, it maps that to a URI and passes it to the Social Graph API. This returns a list of the people I'm following on Twitter with various identifiers for them, which Facebook in turn looks up in its user database, prompting me to add any of them who are Facebook users as my friends on the site.
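
Here is roughly what that lookup would look like in code. This is a C# sketch against the API's lookup endpoint; the parameter names ("edo" for outbound edges, "fme" for following "me" links) are from the launch documentation as I understand it, so verify them before relying on this.

using System;
using System.IO;
using System.Net;

class SocialGraphLookup
{
    static void Main()
    {
        // Map a Twitter username to a URI, then ask the Social Graph API
        // for the outbound edges (the people this URI declares as friends).
        string uri = "http://www.twitter.com/carnage4life";
        string requestUrl = "http://socialgraph.apis.google.com/lookup?q="
            + Uri.EscapeDataString(uri) + "&edo=1&fme=1&pretty=1";

        WebRequest request = WebRequest.Create(requestUrl);
        using (WebResponse response = request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            // The response is a JSON document whose "nodes" dictionary holds
            // the identities discovered and their declared connections.
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}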

This is a good idea that gets around the proliferation of applications that collect usernames and passwords from users to try to access their social graphs on other sites. However there are lots of practical problems with relying on this as an alternative to screen scraping and other approaches intended to discover a user's social graph, including

  • many social networking sites don't expose their friend lists as FOAF or XFN
  • many friend lists on social networking sites are actually hidden from the public Web (e.g. most friend lists on Facebook) which is by design
  • many friend lists in social apps aren't even on the Web (e.g. buddy lists from IM clients, address books in desktop clients)

That said, this is a good contribution to the space. Ideally, the major social networking sites and address book providers would also expose APIs that social applications can use to obtain a user's social graph without resorting to screen scraping. We are definitely working on that at Windows Live with the Windows Live Contacts API. I'd love to see other social software vendors step up and provide similar APIs in the coming months. That way everybody wins: our users, our applications and the entire ecosystem.

Now Playing: Playaz Circle - Duffle Bag Boy (feat. Lil Wayne)


 

Categories: Platforms | Social Software

A few days ago I got a Facebook message from David Recordon about Six Apart's release of the ActionStreams plugin. The meat of the announcement is excerpted below

Today, we're shipping the next step in our vision of openness -- the Action Streams plugin -- an amazing new plugin for Movable Type 4.1 that lets you aggregate, control, and share your actions around the web. Now of course, there are some social networking services that have similar features, but if you're using one of today's hosted services to share your actions it's quite possible that you're giving up either control over your privacy, management of your identity or profile, or support for open standards. With the Action Streams plugin you keep control over the record of your actions on the web. And of course, you also have full control over showing and hiding each of your actions, which is the kind of privacy control that we demonstrated when we were the only partners to launch a strictly opt-in version of Facebook Beacon.

Right now, no one has shipped a robust and decentralized complement to services like Facebook's News Feed, FriendFeed, or Plaxo Pulse. The Action Streams plugin, by default, also publishes your stream using Atom and the Microformat hAtom so that your actions aren't trapped in any one service. Open and decentralized implementations of these technologies are important to their evolution and adoption, based on our experiences being involved in creating TrackBack, Atom, OpenID, and OAuth. And we hope others join us as partners in making this a reality.

This is a clever idea, although I wouldn't compare it to the Facebook News Feed (what my social network is doing); it is instead a self-hosted version of the Facebook Mini-Feed (what I've been doing). Although people have been doing this for a while by aggregating their various feeds and republishing them to their blogs (life streams?), I think this is the first time that a full-fledged framework for doing it has shipped as an out-of-the-box solution. 

Mark Paschal has a blog post entitled Building Action Streams which gives an overview of how the framework works. You define templates which contain patterns to be matched in a feed (RSS/Atom) or in an HTML document, along with rules for converting the matched elements into a blog post. Below is the template for extracting and republishing del.icio.us links from the site's RSS feeds.

delicious:
    links:
        name: Links
        description: Your public links
        html_form: '[_1] saved the link <a href="[_2]">[_3]</a>'
        html_params:
            - url
            - title
        url: 'http://del.icio.us/rss/{{ident}}'
        identifier: url
        xpath:
            foreach: //item
            get:
                created_on: dc:date/child::text()
                title: title/child::text()
                url: link/child::text()

It reminds me a little of XSLT. I almost wondered why they didn't just use that, until I saw that the plugin also supports pattern matching HTML docs using Web::Scraper [and that XSLT is overly verbose and difficult to grok at first glance].

Although this is a pretty cool tool, I don't find it interesting as a publishing tool. On the other hand, its potential as a new kind of aggregator is very interesting. I'd love to see someone slap more UI on it and make it a decentralized version of the Facebook News Feed. Specifically, I'd feed it a blogroll, have it use the Google Social Graph API to figure out the additional services used by the people in my subscriptions, and then build a feed reader + news feed experience on top of that. That would be cool. 

Come to think of it, this would be something interesting to experiment with in future versions of RSS Bandit.

Now Playing: Birdman - Pop Bottles (remix) (feat. Jim Jones & Fabolous)


 

Categories: Platforms | Social Software