I was recently on a panel at the South by Southwest (SXSW) interactive conference where we discussed applications of the real-time Web and the things that might prevent us from seeing its true potential. I found it interesting that the key takeaway from the panel was that privacy issues will be one of the biggest problems we face as we move forward. You can see this perspective in CNN’s coverage of the panel, Privacy concerns hinder 'real-time Web' creation, developers say, and in GigaOm’s write-up, SXSW: Is Privacy on the Social Web a Technical Problem?

This overlap of privacy and real-time web features is brought into sharp relief when you look at services such as Foursquare and Gowalla, which provide a mechanism for people to broadcast their physical location to a group of friends in real time. I started using Foursquare last week and I’ve noticed I’m even more careful about whose friend requests I accept than I am on Facebook or Windows Live Messenger. The fact is that I may share status updates and photos with people, but that doesn’t mean I want them to know where I am on an up-to-the-minute basis, especially if I’m out spending time with my family and friends. This difference between how we view location data and how we view the other sorts of real-time data we share is captured by the co-founder of Foursquare in the article Facebook Isn't For Real Life Friends Anymore, Says Foursquare's Dennis Crowley, which states

Facebook plans to clone Foursquare's central service -- the ability for site members to use their phones to "check-in" from restaurants and bars -- and make it a mere Facebook feature.

But Foursquare cofounder Dennis Crowley says there's something Facebook can't clone: the real-life friendships between Foursquare users.

"Facebook used to be who your friends are, now it's everyone," Dennis told us in an interview.

"[Foursquare] is more tightly curated to who you want to have as your check-in friends. Facebook is good place for status updates and sharing photos, not to keep tabs on where people are going."

I think Crowley is on to something when he says Facebook can’t clone the Foursquare relationship model. I suspect that, like Twitter, Foursquare has created a social network whose value proposition is differentiated enough from Facebook’s that it can grow into a relatively popular albeit smaller service that will not be “killed” by Facebook*. There is also a lot of synergy between Foursquare and Facebook, as evidenced by the fact that Facebook is the largest referrer of traffic to Foursquare thanks to its implementation of Facebook Connect. So I think the claims that one will kill the other are just the usual tech press manufacturing conflict to generate page views.

One thing I have noticed is that location can’t just be a field you bolt on to a status update. It has to be a key part of the information you are sharing with others; otherwise it adds little value to the user experience and may in fact detract from it by adding clutter. For example, compare what a location-based update from Foursquare looks like on Facebook versus what the exact same update looks like on Twitter:

[Screenshot: the Foursquare check-in as rendered on Facebook] vs. [Screenshot: the same check-in as rendered on Twitter]

The difference between the two updates is almost night and day even though the actual status text I shared is the same. Twitter’s approach is to treat location as a bunch of “poorly translated” GPS coordinates bolted on to the end of my status update. The Facebook update not only gives you that but also a human-readable location for where I am, down to the room number, and includes some social context such as the fact that I was attending the talk with two coworkers from Windows Live.
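To make the contrast concrete, here’s a rough sketch of the two shapes of data. Every field name below is invented for illustration; neither is either service’s actual API schema.

```python
# Location "bolted on": raw coordinates appended to the update, which a
# reader only sees as a reverse-geocoded blob like "downtown austin".
twitter_style_update = {
    "text": "Attending danah boyd's talk on privacy",
    "geo": {"lat": 30.2650, "long": -97.7400},  # illustrative coordinates
}

# Location as a first-class part of the story: a named venue down to the
# room, plus social context about who you are there with.
facebook_style_update = {
    "text": "Attending danah boyd's talk on privacy",
    "place": {"venue": "Austin Convention Center", "room": "Ballroom A"},
    "with": ["coworker 1", "coworker 2"],  # tagged friends
}
```

The second shape is strictly richer: a renderer can always fall back to raw coordinates, but it can also tell a human-readable story.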

As real-time location data starts to permeate social experiences, there’s a lot to learn from the above screenshots. People interested in the topic of my status could tell which room to find danah’s talk in from the Facebook update, whereas the Twitter update only told them “downtown austin”. As designers of social software applications, we should be mindful that location data has to enhance the experience and the information being shared. Adding location simply for buzzword compliance, or to attach metadata to the status update without enhancing the experience, just ends up crufting it up.

* Twitter’s value proposition is that it is the place to interact with the celebrities and microcelebrities you care about. It is useful to note that the much-maligned Suggested Users List was key in establishing this value proposition in the minds of users. This is different from Facebook’s position as the social network for your real-world friends, family, coworkers and acquaintances.

Now Playing: B.O.B. - Nothin' On You (featuring Bruno Mars)


 

“Every marketer's dream is to find an unidentified or unknown market and develop it” – Barry Brand

Last week I read the various notes on the presentations by famous startup founders at Startup School 2009 and found a lot of the anecdotes interesting, but I wondered whether they were truly useful to startup founders. I tried to think of some of the products I started using in the past five years that I now use a lot and couldn’t do without. While looking for the common thread in these products, the quote above came to mind.

Finding an underserved market sounds about as likely as holding the winning ticket in a lottery. In truth it isn’t hard to find underserved markets if you recognize the patterns. The hard part of turning one into a successful business is execution.

There are two patterns I’ve noticed in underserved markets that were later invigorated by products I now use regularly. The first pattern is an activity people like doing where the technology has been stagnant for a while. For the past few decades, we’ve been in a world where the technology products you use get smaller, cheaper and faster every few years. There is some notion of waiting for the stars to align, such as hard drive sizes shrinking until you could fit gigabytes of data in your pocket (e.g. the iPod) or AJAX and online banking becoming ubiquitous (e.g. Mint). However, for each product category where some upstart has changed the game by leveraging modern technologies, there are still dozens of markets where decade(s)-old tech is dictating the user experience.

The second pattern is an activity or task that people hate doing but assume is a fact of life or a necessary aspect of using a product. I remember buying video games like Soul Calibur and lamenting that I could only find people to play with when I visited friends from out of town who were also fans of the game. To me, not being able to play multiplayer games without having friends physically in the same room was just a fact of life. This is no longer the case in the world of Xbox Live. I also remember almost cancelling my cable subscription about five or six years ago because I resented having to tailor my TV watching schedule around prime time hours, which to me represented prime hours to decompress from work and hack on RSS Bandit. The entire notion of “prime time TV” had been a fact of life for decades; once I discovered TiVo, it stopped being the case for me. Some of these painful facts of life have been with us for centuries. For example, take this quote from John Wanamaker:

“Half the money I spend on advertising is wasted; the trouble is I don't know which half”

With modern advertising solutions like Google’s AdWords and Facebook’s advertising platform, this is no longer the case. Advertisers can now tell, down to minute levels of detail, exactly what their return on investment on advertising is.
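As a toy illustration of why this is now measurable (all numbers below are invented):

```python
# Per-keyword performance for two ad campaigns with identical spend.
campaigns = [
    {"keyword": "photo sharing", "spend": 50.00, "conversions": 8, "revenue_each": 12.00},
    {"keyword": "image hosting", "spend": 50.00, "conversions": 1, "revenue_each": 12.00},
]

for c in campaigns:
    revenue = c["conversions"] * c["revenue_each"]
    roi = (revenue - c["spend"]) / c["spend"]
    print(f"{c['keyword']}: ROI {roi:+.0%}")

# Output pinpoints exactly which "half" of the spend is wasted:
#   photo sharing: ROI +92%
#   image hosting: ROI -76%
```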

I’d love to see more startups attempting to find and satisfy underserved markets instead of going with the crowd and doing what everyone else is doing. Fewer Facebook games and iPhone fart apps; more original ideas that solve real problems, like Mint and Flickr. If your startup needs a few hints at what some of these markets in the technology space are, there’s always Y Combinator’s list of Startup Ideas they’d fund.

Now Playing: Jay-Z Feat. Kanye West - Hate


 

Categories: Startup Shoutout

After all the hype, I finally got around to taking Wolfram Alpha for a spin last night, thanks to being unable to sleep after a weird Doctor Manhattan themed nightmare. The experience of using the site is very impressive, and there is a great walkthrough of its power in the Wolfram Alpha screencast, which I encourage you to watch if you are interested in learning about a new breed of search engine.

There have been a ton of articles calling Wolfram Alpha a "Google Killer", but after using the site for a few hours I find it fascinating yet question how much of a threat it is to Google, either as a way to satisfy the typical questions people ask Web search engines or as a challenge to Google’s search advertising cash cow. You can get a sense of the kinds of queries that Wolfram Alpha handles amazingly well from the list below:

As you can tell from the above list, Wolfram Alpha is like having a search engine over the kind of data you’d see in the CIA’s World Factbook or the Time Almanac. There really isn’t anything like it on the Web today. However, it isn’t really a competitor to traditional Web search engines, which for the most part are still focused on finding web pages, despite various advancements in answering a subset of queries with direct answers instead of links to web pages, such as Google’s OneBox results and Live Search’s instant answers feature.

From my perspective, the threat to search engines like Google isn’t Wolfram Alpha but the trend it represents: the renaissance of the vertical search engine. Earlier this year, I was putting together a panel at the MIX ‘09 conference and needed to invite panelists from a pool of people whom I’d either heard about or knew of professionally but had never contacted directly. How did I find out how to contact these people? Even though all of them had blogs, there wasn’t a consistent way to track down contact information. So I looked them up on Facebook and sent each of them a private message. Mission accomplished. Unbeknownst to me, Facebook had become my “people” search engine.

Here’s another story. Last year I worked on the most satisfying software release of my career, Windows Live (wave 3). After the launch I wanted to find out what people were saying about the product, so I did a Twitter search for Windows Live and posted the results. While I wasn’t paying attention, Twitter had become my “what are people saying about <insert brand here>” search engine.

These search engines dedicated to specific scenarios and contexts that Web search can’t answer well are the trend that traditional search engines should watch carefully.

I can imagine Wolfram Alpha eventually growing to satisfy a lot of the sorts of queries I go to Wikipedia for today, and doing so in a more authoritative manner. In that case, it would become my “facts and trivia” search engine. However, there are currently too many gaps in its knowledge of commercial products (e.g. searching for “ipod” results in a coming soon notice) and people (e.g. the Jim Carrey entry is amazingly brief yet still manages to contain a factual inaccuracy) for it to be a true replacement for Wikipedia. That said, the service shows great promise and it will be interesting to watch it evolve.


 

Categories: Startup Shoutout

Over the past two weeks I participated in panels at both SXSW and MIX 09 on the growing trend of providing streams of user activities on social sites and aggregating these activities from multiple services into a single experience. Aggregating activities from multiple sites into a single service for the purpose of creating an activity stream is fairly commonplace today and was popularized by FriendFeed. This functionality now exists on many social networking sites and related services, including Facebook, the Yahoo! Profile and the Windows Live Profile.

In general, the model is to receive or retrieve user updates from a social media site like Flickr, make those updates available on the user's profile on the target social network, and share them with the user's friends via an activity stream (or news feed) on the site. The diagram below attempts to capture this many-to-many relationship as it occurs today, using some well-known services as examples.

The bidirectional arrows are meant to indicate that the relationship can be push-based, where the content-based social media site notifies the target social network of new updates from the user, or pull-based, where the social network polls the site on a regular basis seeking new updates from the target user.
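Here's a minimal sketch of the two styles; the feed URL, payload shape and helper names are hypothetical, not any site's real API.

```python
import time
import requests  # third-party HTTP library

# Hypothetical per-user feed URL, standing in for any content site's API.
PHOTO_FEED = "https://photos.example.com/users/{user}/updates.atom"

def pull_updates(user, interval_secs=300):
    """Pull model: the aggregator polls the content site on a fixed
    schedule, paying the request cost whether or not anything is new."""
    while True:
        feed = requests.get(PHOTO_FEED.format(user=user))
        process_new_entries(feed.text)  # diff against previously seen entries
        time.sleep(interval_secs)

def on_push_notification(payload):
    """Push model: the content site calls a webhook the aggregator
    registered, so work happens only when the user actually posts."""
    process_new_entries(payload)

def process_new_entries(data):
    ...  # publish any new entries to the user's friends' news feeds
```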

There are two problems that sites have to deal with in this model:

  1. Content sites like Flickr have to deal with being polled unnecessarily millions of times a day by social networks seeking photo updates from their users. There's the money quote from last year that FriendFeed polled Flickr 2.7 million times a day to retrieve a total of less than 7,000 updates. Even if they moved to a publish-subscribe model, it would mean not only having to track which users are of interest to which social network but also targeting APIs on different social networks that are radically different (aka the beautiful f-ing snowflake API problem).

  2. Social aggregation services like FriendFeed and Windows Live have to target dozens of sites, each with different APIs or schemas. Even in cases where the content sites support RSS or Atom, they often use radically different schemas to represent the same data, as the sketch below illustrates.
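To make problem #2 concrete, here's what the adapter treadmill looks like; the feed entry shapes are invented stand-ins for each site's real schema.

```python
# One hand-written adapter per content site, each mapping that site's
# feed entries into the aggregator's internal activity record.
def normalize_flickr(entry):
    return {"actor": entry["owner"], "verb": "post",
            "object": {"type": "photo", "url": entry["photo_url"]}}

def normalize_digg(entry):
    return {"actor": entry["user"], "verb": "digg",
            "object": {"type": "story", "url": entry["story_link"]}}

# Every newly integrated site adds another row to this table.
ADAPTERS = {"flickr": normalize_flickr, "digg": normalize_digg}

def normalize(site, entry):
    return ADAPTERS[site](entry)
```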

The approach I've been advocating, along with others in the industry, is that we need to adopt standards for activity streams in a way that reduces the complexity of this many-to-many conversation currently going on between social sites.
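For a flavor of what such a standard looks like, here is roughly the actor/verb/object shape the Activity Streams effort was converging on at the time; this is abbreviated and paraphrased from memory, so consult the spec at activitystrea.ms for the real schema.

```python
# A single standardized activity: who did what to which object, when.
activity = {
    "actor": {"objectType": "person", "displayName": "Carnage4Life"},
    "verb": "post",
    "object": {"objectType": "photo", "url": "http://www.flickr.com/photos/..."},
    "published": "2009-03-20T18:05:00Z",
}
```

If every content site emitted entries in one such shape, the per-site adapters above would collapse into a single parser.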

While I was at SXSW, I met one of the folks from Gnip, who advocated an alternate approach. He argued that even with activity stream standards we've only addressed part of the problem. Such standards may mean that FriendFeed gets to reuse its Flickr code to poll SmugMug with little to no changes, but it doesn't change the fact that these services poll sites millions of times a day to get a few thousand updates.

Gnip has built a model where content sites publish updates to Gnip, and social networking sites can then choose to either poll Gnip or receive updates from Gnip when an update matches one of the rules they have created (e.g. notify us if you get a digg vote from Carnage4Life). The following diagram captures how Gnip works.

The benefit of this model to content sites like Flickr is that they no longer have to worry about being polled millions of times a day by social aggregation services. The benefit to social networking sites is that they now get a consistent format for data from the social media sites they care about and can choose to either pull the data or have it pushed to them.
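Here's a sketch of that hub-and-rules flow as I understand it; the rule format and callback mechanics below are my own invention, not Gnip's actual API.

```python
# Each subscriber (a social network) registers rules with the hub.
subscribers = [
    {"callback_url": "https://social.example.com/ingest",
     "rules": [{"actor": "Carnage4Life", "verb": "digg"}]},
]

def matches(rule, activity):
    return all(activity.get(k) == v for k, v in rule.items())

def on_publish(activity):
    # A content site publishes each update to the hub exactly once; the
    # hub fans it out only to subscribers with a matching rule, so the
    # content site is never polled at all.
    for sub in subscribers:
        if any(matches(rule, activity) for rule in sub["rules"]):
            push(sub["callback_url"], activity)

def push(url, activity):
    ...  # HTTP POST the activity to the subscriber's callback URL
```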

The main problem I see with this model is that it sets Gnip up to be a central point of failure, and I'd personally rather interact directly with the content services than inject a middleman into the process. However, I can see how their approach would be attractive to sites that might be buckling under the load of being constantly polled, and to social aggregation sites that are tired of hand-coding adapters for each new social media site they want to integrate with.

What do you think of Gnip's service and the problem space in general?

Now Playing: Eamon - F**k It (I Don't Want You Back)