I spent a bunch of time last night and this morning noodling on Evan Williams’ post Five Easy Pieces of Online Identity where he discusses what people often mean when they talk about “online identity”. His list has the following five entries:

1) Authentication

Question Answered: Do you have permission?
Offline Equivalent: Picture ID or keys, depending on method.

2) Representation

Question Answered: Who are you?
Offline Equivalent: Business card. (Also: Clothes, bumper stickers, and everything else one chooses to show people who they are.)

3) Communication

Question Answered: How do I reach you?
Offline Equivalent: Phone number.

4) Personalization

Question Answered: What do you prefer?
Offline Equivalent: Your coffee shop starting your drink when you walk in the door.

5) Reputation

Question Answered: How do others regard you?
Offline Equivalent: Word of mouth/references, credit agencies.

I think Ev’s post is a really good start to answering the question of why one would want to identify a user in an application or website. Specifically, what does a user get from being asked to log in or register with your application or website? Secondarily, it also provides a framework for deciding if or when you should use your own identity system or leverage someone else’s, such as Facebook Connect.

Ev misuses the term authentication in his post, which is a little confusing since he seems to do it knowingly. All five entries on his list are facets of what you get when you identify or in some cases authenticate a user. For identification, you may simply need an identifier such as an email address or URL. For example, if I give you the URL to my Facebook profile, you get to see how I’ve chosen to represent myself to the world (e.g. my profile picture is a family shot which tells you something about me), you can contact me if you’re in the right network on Facebook, and you can even make some personalization decisions by looking at the music and TV I’ve liked. Authentication is a more nuanced version of identification because it means you’ve proved that I’m actually the person who “owns” http://www.facebook.com/Carnage4Life, not just someone who knows that URL (or email address or other identifier).
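The distinction can be made concrete with a few lines of code. Below is a minimal sketch; the profile data, credential store, and secret are all invented for illustration:

```python
import hmac

# Identification: anyone who *knows* an identifier can look up the
# public representation attached to it. (All data here is made up.)
profiles = {
    "http://www.facebook.com/Carnage4Life": {
        "name": "Dare Obasanjo",
        "profile_picture": "family_shot.jpg",
    },
}

def identify(identifier):
    # No proof required: knowing the URL is enough.
    return profiles.get(identifier)

# Authentication: you must additionally prove that you *own* the
# identifier, e.g. by presenting a credential only the owner holds.
credentials = {"http://www.facebook.com/Carnage4Life": b"owner-only-secret"}

def authenticate(identifier, credential):
    expected = credentials.get(identifier)
    return expected is not None and hmac.compare_digest(expected, credential)

print(identify("http://www.facebook.com/Carnage4Life")["name"])
print(authenticate("http://www.facebook.com/Carnage4Life", b"wrong-guess"))
```

Anyone who knows the URL can call identify, but only the holder of the owner’s secret can pass authenticate; that extra proof step is what separates the two.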

The first thing to do is update Ev’s list:

  1. Authentication – who are you?
  2. Authorization – do you have permission?
  3. Representation – how do I want others to view me? 
  4. Communication – how do I reach you?
  5. Personalization – what do you prefer?
  6. Reputation – how do others regard you?
  7. Commerce – how are you going to pay for this? (e.g. credit cards, putting a meal on your hotel room bill when eating at the hotel restaurant, etc)
  8. Social – who are your friends?

The first change on the list is already explained. Asking who I am is an intrinsic aspect of all of the other items on the list.

The second change is obvious in retrospect. There is a broad class of websites and applications that need to identify a user so that the user can pay for a virtual or physical good or service. The biggest player in the identity and payment space on the open web is obviously PayPal, with minor competition from Google Checkout and Amazon Payments. There are also specific ecosystems where payment is a fundamental aspect of identity, such as Facebook Credits which is part of the Facebook platform ecosystem, the 200 million iTunes accounts with credit cards that are part of the iOS ecosystem, or Microsoft Points which are the coin of the realm in the Xbox Live ecosystem.

The third addition is also a surprising omission from Ev’s list given that this has been the primary way distributed identity has actually become popular on the Web. Unsurprisingly, the key player in this space is Facebook, which provides widgets such as the recommendations plugin that allows sites like Engadget to show me what articles on their site my friends liked.

This list is pertinent to web developers from multiple perspectives. First of all, it’s a checklist that determines whether your application or website needs a user identity system. When you determine that you meet some of the requirements in the checklist, it also sharpens your focus on when you should let identity get out of the way of your users. Sites like Yelp and Reddit are good examples of sites that need user identity for personalization, reputation and communication, but users can get value without using features that rely on those capabilities. However, neither site does a good job of explaining to users that they can get this functionality if they create an identity on the site. On the other hand, I think Quora does a particularly awful job of running this checklist when you hit the front page of the site, since you don’t even get to see any content without creating an account.

The list is also useful as a way to decide which aspects of your site or application’s identity requirements you want to maintain in-house versus outsource. Do you want to rely on Facebook’s social graph or have a friend list that is native to your site? Will you accept credit cards or just utilize PayPal or Amazon Payments? And so on. Finally, the list is useful for entrepreneurs as a way to segment the various use cases in the industry and see opportunities where things can be improved. Some people like to declare game over for innovation in identity on the web given the Facebook juggernaut, but it is clear when you look at that list that there are parts of the identity space where Facebook hasn’t gained much traction, such as reputation and payments.

Now Playing: Rihanna - Love the Way You Lie, Part II (featuring Eminem)


Categories: Social Software

The EFF has a persuasive anti-Facebook rant titled An Introduction to the Federated Social Network which bemoans the perils of centralizing social networking under the aegis of one company (you can replace "Facebook" with "Orkut," "LinkedIn," or "Twitter," and tell essentially the same story). The core arguments from the article are excerpted below:

But federated social network developers are doing two things differently in order to build a new ecosystem. First, the leading federated social networking software is open-source: that means that anybody can download the source code, and use it to create and maintain social networking profiles for themselves and others. Second, the developers are simultaneously collaborating on a new common language, presumably seeking an environment where most or even all federated social networking profiles can talk to one another. 

What will that likely mean in practice? To join a federated social network, you'll be able to choose from an array of "profile providers," just like you can choose an email provider. You will even be able to set up your own server and provide your social networking profile yourself. And in a federated social network, any profile can talk to another profile — even if it's on a different server.

Distributed social networks represent a model that can plausibly return control and choice to the hands of the Internet user. If this seems mundane, consider that informed citizens worldwide are using online social networking tools to share vital information about how to improve their communities and their governments — and that in some places, the consequences if you're discovered to be doing so are arrest, torture, or imprisonment. With more user control, diversity, and innovation, individuals speaking out under oppressive governments could conduct activism on social networking sites while also having a choice of services and providers that may be better equipped to protect their security and anonymity.

As someone who’s been noodling on social network interoperability for the past four years, this is a topic that’s near and dear to my heart. However, there needs to be some reality injected into the unbridled bashing of the existing social networks we have today. There’s a reason why you read about Iran's Twitter Revolution and Japanese school children using Facebook to communicate with their parents during the earthquake yet hear virtually nothing about Diaspora or Status.Net being used in similar ways to impact the lives of millions of people.

The first key argument made by the EFF article is that popular social networks aren’t Open Source, so you can’t download the source code and run them on your own server. The interesting thing here is that this is actually attempting to buck a trend. Most people don’t want to host software because it is a pain in the ass to deal with. Even businesses don’t want to do this, which is why “cloud” is such a hot buzzword and enterprise microblogging tools such as Yammer and Salesforce Chatter are hosted services, not on-premises software. More importantly, the entire point of broadcast-oriented social networks is being able to communicate with a lot of people, which encourages network effects and the sort of winner-take-all dynamics that we’ve seen people lament about Facebook. Why would a user create an account on identi.ca or any other hosted instance of Status.Net when they can create one on Twitter and reach a lot more people? Social networking isn’t like blogging. A blog is a solitary artifact that needs only its publisher to exist, so one person downloading the software, throwing it on a server and getting value out of it makes sense. A social network, on the other hand, by definition needs lots of people using the service to be useful, which makes the ability to download it and throw it on a server for your own use not terribly useful.

The second argument is that there should be protocols that enable interoperability between social networks, and this is one I firmly believe in. In fact, this is the real problem. If I can’t use my self-hosted social networking tool to talk to my friends on Facebook and Twitter then it isn’t a useful social networking tool. This is similar to the early days of email when you could only send messages to people on your network or who used the same software as you. Being unable to subscribe to @shitmydadsays from my Facebook account or DM my wife from Twitter may sound trivial, but it is a fundamental impediment to social networking reaching its full potential as a way for people to share and communicate with the people they care about no matter where they are. Without interoperability we will continue to see the power law caused by network effects play out, and the sorts of innovation talked about by the EFF article won’t come to fruition since, given the choice between being able to communicate with others and some innovative functionality on a particular service, most people will choose friends over features.

The interesting question is whether we’ll see this logjam broken by smaller social networking services banding together in an interoperable way thus creating a whole that is greater than the sum of its parts.

Now Playing: Ol' Dirty Bastard - Proteck Ya Neck II The Zoo (feat. Brooklyn Zu, Prodigal Sunn, Killah Priest, & 60 Second Assassin)


Categories: Social Software

I’ve now been working with and blogging about web technology long enough to see technologies that we once thought were the best thing since sliced bread turn out to be rather poor solutions to the problem or, even worse, create more problems than they solve. Since I’ve written favorably about all of the technologies mentioned below, this is also a mea culpa where I try to see what I can learn about judging the suitability of technologies for solving problems on the web without being blinded by the hype from the “cool kids” on the web.

The Failure of OpenID

According to Wikipedia, “OpenID is an open standard that describes how users can be authenticated in a decentralized manner, obviating the need for services to provide their own ad hoc systems and allowing users to consolidate their digital identities”. So instead of having to create multiple accounts on different websites, OpenID lets you re-use an account from the identity provider (i.e. website) of your choice. OpenID was originally invented in 2005 by Brad Fitzpatrick to solve the problem of bloggers having to create an account on a person’s weblog or blogging service before being able to leave a comment.

OpenID soon grew beyond its blog-centric origins and has had a number of big-name web companies either implement it in some way or be active in its community. Large companies and small companies alike have been lobbied to implement OpenID and accused of not being “open” when they haven’t immediately jumped on the bandwagon. However, now that we’ve had five years of OpenID, there are a number of valid problems that have begun to indicate the emperor may either have no clothes or at least is just in his underwear.

The most recent round of hand-wringing about the state of OpenID has been inspired by 37signals announcing they'll be retiring OpenID support, but the arguments against OpenID have been gathering steam for months if not years.

First of all, there have been arguments from people who’ve been supporting the technology for years, like David Recordon, that OpenID is too complex and yet doesn't have enough features. Here is an excerpt from David Recordon’s writings on the need for an OpenID Connect:

In 2005 I don't think that Brad Fitzpatrick or I could have imagined how successful OpenID would become. Today there are over 50,000 websites supporting it and that number grows into the millions if you include Google FriendConnect. There are over a billion OpenID enabled URLs and production implementations from the largest companies on the Internet.

We've heard loud and clear that sites looking to adopt OpenID want more than just a unique URL; social sites need basic things like your name, photo, and email address. When Joseph Smarr and I built the OpenID/OAuth hybrid we were looking for a way to provide that functionality, but it proved complex to implement. So now there's a simple JSON User Info API similar to those already offered by major social providers.

We have also heard that people want OpenID to be simple. I've heard story after story from developers implementing OpenID 2.0 who don't understand why it is so complex and inevitably forgot to do something. With OpenID Connect, discovery no longer takes over 3,000 lines of PHP to implement correctly. Because it's built on top of OAuth 2.0, the whole spec is fairly short and technology easy to understand. Building on OAuth provides amazing side benefits such as potentially being the first version of OpenID to work natively with desktop applications and even on mobile phones.
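The “simple JSON User Info API” Recordon describes boils down to a single authorized HTTP request returning plain JSON. Here is an illustrative sketch; the endpoint URL, access token, and response document are all invented, and in practice they come from the provider's OAuth 2.0 flow:

```python
import json
from urllib.request import Request

# Hypothetical values: a real endpoint and token come from the
# identity provider and an OAuth 2.0 token exchange.
USERINFO_ENDPOINT = "https://provider.example.com/userinfo"
ACCESS_TOKEN = "ya29.example-access-token"

# The entire protocol step is one GET with a bearer token...
request = Request(
    USERINFO_ENDPOINT,
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
)

# ...and the response is a flat JSON document, e.g.:
sample_response = '{"sub": "1234", "name": "Dare Obasanjo", "email": "dare@example.com"}'
profile = json.loads(sample_response)
print(profile["name"], profile["email"])
```

Compare that with OpenID 2.0, where the equivalent discovery and attribute exchange reportedly took thousands of lines of library code to get right.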

50,000 websites sounds like a lot until you consider that Facebook Connect, which solves a similar problem, had been adopted by 250,000 websites in the same time frame despite being around less than half as long as OpenID. It’s also telling to ask yourself how often you as an end user have actually used OpenID or even seen that it is available on a site.

The reason why you can count the instances you’ve had this occur on one or two hands is eloquently articulated in Yishan Wong’s answer to the question What's wrong with OpenID on Quora, which is excerpted below:

The short answer is that OpenID is the worst possible "solution" I have ever seen in my entire life to a problem that most people don't really have.  That's what's "wrong" with it.

To answer the most immediate question of "isn't having to register and log into many sites a big problem that everyone has?," I will say this: No, it's not.  Regular normal people have a number of solutions to this problem.  Here are some of them:

  • use the same username/password for multiple sites
  • use their browser's ability to remember their password (enabled by default)
  • don't register for the new site
  • don't ever log in to the site
  • log in once, click "remember me"
  • click the back button on their browser and never come back to the site
  • maintain a list of user IDs and passwords in an offline document

These are all perfectly valid solutions that a regular user finds acceptable.  A nerd will wrinkle up his nose at these solutions and grumble about the "security vulnerabilities" (and they'll be right, technically) but the truth is that these solutions get people into the site and doing what they want and no one really cares about security anyways.  On the security angle, no one is going to adopt a product to solve a problem they don't care about (or in many cases, even understand). 

The fact that anyone even expects that OpenID could possibly see any amount of adoption is mind-boggling to me.  Proponents are literally expecting people to sign up for yet another third-party service, in some cases log in by typing in a URL, and at best flip away to another branded service's page to log in and, in many cases, answer an obscurely-worded prompt about allowing third-party credentials, all in order to log in to a site.  This is the height of irony - in order to ease my too-many-registrations woes, you are asking me to register yet again somewhere else??  Or in order to ease my inconvenience of having to type in my username and password, you are having me log in to another site instead?? 

Not only that, but in the cases where OpenID has been implemented without the third-party proxy login, the technical complexity behind what is going on in terms of credential exchange and delegation is so opaque that even extremely sophisticated users cannot easily understand it (I have literally had some of Silicon Valley's best engineers tell me this).  At best, a re-directed third-party proxy login is used, which is the worst possible branding experience known on the web - discombobulating even for savvy internet users and utterly confusing for regular users.  Even Facebook Connect suffers from this problem - people think "Wait, I want to log into X, not Facebook..." and needs to overcome it by making the brand and purpose of what that "Connect with Facebook" button ubiquitous in order to overcome the confusion. 

I completely agree with Yishan’s analysis here. Not only does OpenID complicate the sign-in/sign-up experience for sites that adopt it, but it is also hard to confidently argue that end users consider the problem OpenID is trying to solve to be worth the extra complication.

The Failure of XML on the Web

At the turn of the last decade, XML could do no wrong. There was no problem that couldn’t be solved by applying XML to it and every technology was going to be replaced by it. XML was going to kill HTML. XML was going to kill CORBA, EJB and DCOM as we moved to web services. XML was a floor wax and a dessert topping. Unfortunately, after over a decade it is clear that XML has not and is unlikely to ever be the dominant way we create markup for consumption by browsers or how applications on the Web communicate.

James Clark has a post titled XML vs the Web where he talks about this grim realization:

Twitter and Foursquare recently removed XML support from their Web APIs, and now support only JSON.  This prompted Norman Walsh to write an interesting post, in which he summarised his reaction as "Meh". I won't try to summarise his post; it's short and well-worth reading.

From one perspective, it's hard to disagree.  If you're an XML wizard with a decade or two of experience with XML and SGML before that, if you're an expert user of the entire XML stack (eg XQuery, XSLT2, schemas), if most of your data involves mixed content, then JSON isn't going to be supplanting XML any time soon in your toolbox.

There's a bigger point that I want to make here, and it's about the relationship between XML and the Web.  When we started out doing XML, a big part of the vision was about bridging the gap from the SGML world (complex, sophisticated, partly academic, partly big enterprise) to the Web, about making the value that we saw in SGML accessible to a broader audience by cutting out all the cruft. In the beginning XML did succeed in this respect. But this vision seems to have been lost sight of over time to the point where there's a gulf between the XML community and the broader Web developer community; all the stuff that's been piled on top of XML, together with the huge advances in the Web world in HTML5, JSON and JavaScript, have combined to make XML be perceived as an overly complex, enterprisey technology, which doesn't bring any value to the average Web developer.

This is not a good thing for either community (and it's why part of my reaction to JSON is "Sigh"). XML misses out by not having the innovation, enthusiasm and traction that the Web developer community brings with it, and the Web developer community misses out by not being able to take advantage of the powerful and convenient technologies that have been built on top of XML over the last decade.

So what's the way forward? I think the Web community has spoken, and it's clear that what it wants is HTML5, JavaScript and JSON. XML isn't going away but I see it being less and less a Web technology; it won't be something that you send over the wire on the public Web, but just one of many technologies that are used on the server to manage and generate what you do send over the wire.

The fact that XML-based technologies are no longer required tools in the repertoire of the Web developer isn’t news to anyone who follows web development trends. However, it is interesting to look back and consider that there was once a time when the W3C and the broader web development community assumed they would be exactly that. The reasons for XML's failure on the Web are self-evident in retrospect.

There have been many articles published about the failure of XML as a markup language over the past few years. My favorites are Sending XHTML as text/html Considered Harmful and HTML5, XHTML2, and the Future of the Web, which do a good job of capturing the problems with using XML, with its draconian error handling rules, on a web where ill-formed, hand-authored markup and non-XML-savvy tools rule the roost.
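The difference in error handling philosophy is easy to demonstrate with Python's standard library parsers; the sloppy markup below is the kind of hand-authored input that dominates the web:

```python
from html.parser import HTMLParser
from xml.etree import ElementTree

# Hand-authored markup with a typical mistake: unclosed <p> and <b> tags.
sloppy = "<html><body><p>Hello <b>world</body></html>"

# XML mandates draconian error handling: any well-formedness
# error aborts parsing entirely.
try:
    ElementTree.fromstring(sloppy)
    xml_ok = True
except ElementTree.ParseError:
    xml_ok = False

# An HTML parser is forgiving: it recovers and keeps going.
class TagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

collector = TagCollector()
collector.feed(sloppy)

print(xml_ok)
print(collector.tags)
```

The XML parser rejects the document outright while the HTML parser happily reports every start tag it saw, which goes a long way toward explaining why XHTML served as XML never displaced tag-soup HTML.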

As for XML as the protocol for intercommunication between Web apps, the simplicity of JSON over the triumvirate of SOAP, WSDL and XML Schema is so obvious it is almost ridiculous to have to point it out. 
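To make the contrast concrete, here is the same trivial payload expressed both ways; the operation name, namespace, and data are invented for illustration:

```python
import json

# The data a hypothetical "get status" call needs to move.
payload = {"user": "Carnage4Life", "status": "Hello, web!"}

# JSON: the wire format *is* the data structure.
json_body = json.dumps(payload)

# SOAP: the same data wrapped in an envelope and namespaces, typically
# accompanied by a separate WSDL file and XML Schema kept in sync.
soap_body = (
    '<?xml version="1.0"?>'
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
    "<soap:Body>"
    '<GetStatusResponse xmlns="urn:example:statusservice">'
    "<user>Carnage4Life</user>"
    "<status>Hello, web!</status>"
    "</GetStatusResponse>"
    "</soap:Body>"
    "</soap:Envelope>"
)

print(json_body)
print(len(json_body), "vs", len(soap_body), "characters")
```

The JSON version is just the serialized data structure; the SOAP version adds an envelope, namespaces and, in practice, a service contract that has to be kept in sync with the code.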

The Specific Failure of the Atom Publishing Protocol

Besides the general case of the failure of XML as a data interchange format for web applications, I think it is still worthwhile to call out the failure of the Atom Publishing Protocol (AtomPub), which was eventually declared a failure by the editor of the spec, Joe Gregorio. AtomPub arose from the efforts of a number of geeks to build a better API for creating blog posts. The eventual purpose of AtomPub was to create a generic application programming interface for manipulating content on the Web. In his post titled AtomPub is a Failure, Joe Gregorio discussed why the technology failed to take off as follows:

So AtomPub isn't a failure, but it hasn't seen the level of adoption I had hoped to see at this point in its life. There are still plenty of new protocols being developed on a seemingly daily basis, many of which could have used AtomPub, but don't. Also, there is a large amount of AtomPub being adopted in other areas, but that doesn't seem to be getting that much press, ala, I don't see any Atom-Powered Logo on my phones like Tim Bray suggested.

So why hasn't AtomPub stormed the world to become the one true protocol? Well, there are three answers:

  • Browsers
  • Browsers
  • Browsers

Thick clients, RIAs, were supposed to be a much larger component of your online life. The cliche at the time was, "you can't run Word in a browser". Well, we know how well that's held up. I expect a similar lifetime for today's equivalent cliche, "you can't do photo editing in a browser". The reality is that more and more functionality is moving into the browser and that takes away one of the driving forces for an editing protocol.

Another motivation was the "Editing on the airplane" scenario. The idea was that you wouldn't always be online and when you were offline you couldn't use your browser. The part of this cliche that wasn't put down by Virgin Atlantic and Edge cards was finished off by Gears and DVCS's.

The last motivation was for a common interchange format. The idea was that with a common format you could build up libraries and make it easy to move information around. The 'problem' in this case is that a better format came along in the interim: JSON. JSON, born of Javascript, born of the browser, is the perfect 'data' interchange format, and here I am distinguishing between 'data' interchange and 'document' interchange. If all you want to do is get data from point A to B then JSON is a much easier format to generate and consume as it maps directly into data structures, as opposed to a document oriented format like Atom, which has to be mapped manually into data structures and that mapping will be different from library to library.

As someone who has tried to both use and design APIs based on the Atom format, I have to agree that it is painful to map your data model to what is effectively a data format for blog entries instead of keeping your existing object model intact and using a better-suited format like JSON.
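Joe's point about “data” versus “document” interchange is easy to see side by side. The sketch below parses the same invented record from JSON and from an Atom-entry rendition; only the latter needs a manual mapping layer:

```python
import json
from xml.etree import ElementTree

# JSON: one call and you have native data structures.
json_doc = '{"id": 42, "title": "Hello", "tags": ["api", "web"]}'
record = json.loads(json_doc)
print(record["title"], record["tags"])

# Atom: the same record shoehorned into a blog-entry shape must be
# unpacked element by element, namespaces and all.
ATOM = "{http://www.w3.org/2005/Atom}"
atom_doc = (
    '<entry xmlns="http://www.w3.org/2005/Atom">'
    "<id>tag:example.org,2011:42</id>"
    "<title>Hello</title>"
    '<category term="api"/>'
    '<category term="web"/>'
    "</entry>"
)
entry = ElementTree.fromstring(atom_doc)
title = entry.find(ATOM + "title").text
tags = [c.get("term") for c in entry.findall(ATOM + "category")]
print(title, tags)
```

And that unpacking code is exactly the part that, as Joe notes, "will be different from library to library."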

The Common Pattern in these Failures

When I look at all three of these failures I see a common pattern which I’ll now be on the lookout for when analyzing the suitability of technologies for my purposes. In each case, the technology was designed for a specific niche with the assumption that the conditions that applied within that niche were general enough that the same technology could be used to solve a number of similar-looking but very different problems.

  1. The argument for OpenID is a lot stronger when limiting the audience to bloggers who all have a personal URL for their blog AND for whom it would actually be a burden to sign up for an account on the millions of self-hosted blogs out there. However, the same set of conditions doesn’t apply when I’m logging in to or signing up for the handful of websites I use regularly enough to decide I want to create an account.

  2. XML arose from the world of SGML where experts created custom vocabularies for domain-specific purposes such as DocBook and EDGAR. The world of novices creating markup documents in a massively decoupled environment such as the Web needed a different set of underlying principles.

  3. AtomPub assumed that the practice of people creating blog posts via custom blog editing tools (like the one I’m using to write this post) would spread to other sorts of web content and that these forms of web content wouldn’t be much different from blog posts. It turns out that most of our content editing still takes place in the browser, and in the places where we do utilize custom tools (e.g. Facebook & Twitter clients), an object-centric, domain-specific data format is better than an XML-centric, blog-based data format.

So next time you’re evaluating a technology that is being much hyped by the web development blogosphere, take a look to see whether the fundamental assumptions that led to the creation of the technology actually generalize to your use case. An example that comes to mind where developers should apply this sort of evaluation, given the blogosphere hype, is NoSQL.

Now Playing: Keri Hilson - Knock You Down (featuring Kanye West & Ne-Yo)


Categories: Web Development

I’ve slowly become a big fan of Quora. From various questions being answered on Quora, I’ve learned quite a few things which I’ve actually applied in my day job or excerpted for blog posts at work. I can see why Robert Scoble asks Is Quora the biggest blogging innovation in 10 years?, because this is the same way I felt when I first discovered knowledgeable technical people sharing insights about building software, or just historical context, on blogs several years ago.

Quora has smart people with significant pedigrees freely sharing information about how and why things work in various parts of the software industry. It is a thing of beauty to log in and get gems like Steve Case answering questions about the history of AOL, Ian McAllister sharing product management tips from the bowels of Amazon, or Andrew Bosworth [and others at Facebook] giving explanations for why and how they built key features like Messages, Chat and the News Feed at Facebook.

I’m not the only one that has been impressed by their experience on Quora, and there has been a lot of hype about Quora on TechCrunch specifically. Today TechCrunch published a contrasting opinion by Vivek Wadhwa titled Why I Don’t Buy the Quora Hype where he calls interest in Quora a fad and pooh-poohs the site’s chances of becoming mainstream.

Although I like Quora, I do agree that the site faces key challenges if it is ever to break out of its niche. The primary challenge is that since it is more of a community like Reddit or MetaFilter than a networked communication tool like Facebook or Twitter, the user experience is likely to get worse as it grows more popular, not better.

A few weeks ago I found a description of one of their attempts to solve the problem in a post by Charlie Cheever titled Commitment to Keeping Quora High Quality where he wrote:

One thing we're trying to do a better job ASAP on is educating the new users that join the site and getting them up to speed on the policies, guidelines, and conventions as quickly as possible. Yesterday, we added a quick tutorial quiz before a user posts his/her first question.

So far, we've found that the quiz has helped make more of the questions that new users post conform to the site guidelines and require less editing from experienced users. We also made changes to the way the homepage feed works and when notifications are sent yesterday.

Over the next few months, we're going to be heavily investing engineering effort in:

  • Educating new users about site policies and guidelines
  • Improving the feed and voting ranking mechanisms
  • Changing the core product to accomodate a Quora with many more users and many more questions and answers and topics
  • Building special tools to support the efforts of reviewers and admins to improve the site and maintain civility and generally make it more fun to make Quora better

What I found odd about all of the above efforts is that none of them seems to try to keep the magic of what makes Quora more interesting than Yahoo! Answers, Facebook Questions or Stack Overflow. Quora is interesting because the quality of the answers is amazing, due to the fact that questions are often answered by some of the most knowledgeable people on the topic. So the key problem to solve to preserve the Quora experience is really “how do you encourage subject matter experts to flock to the site and answer questions?”

The folks at Quora have already posted a follow-up to the aforementioned post titled Scaling Up, where some of the approaches above are already being called into question and there is a nod towards highlighting high-quality users. That post is excerpted below:

Up until a few days ago, new questions and answers from new Quora users were all being reviewed by users (reviewers and admins) who had demonstrated over a period of time an understanding of the spirit, policies, and guidelines of Quora.  There is now too much new content being posted on Quora to handle this in the same way.

Concretely, some of the projects we are working on in this area are:

(1) Getting many more people to participate in the evaluation of new content on the site.  For people who want to see the newest content on Quora that might be good or might be bad, we want to let you opt in to evaluating the new stuff in mostly the same way that you browse the site.  Most of the people who use Quora have pretty good judgement, and we believe there is some wisdom in crowds.  Preliminarily, this approach is very promising

(2) We're developing an algorithm to determine user quality.  The algorithm is somewhat similar to PageRank but since people are different from pages on the web and the signals that are available on Quora are different from those on the web, it's not exactly the same problem.  We'll use this to help decide what to show in feeds, when to send notifications, and how to rank answers.

(3) Explaining Quora better to new users before they add content to the site.  We added a very short tutorial quiz before new users add new questions and it made a big difference in reducing the number of questions that don't meet guidelines or policies.
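As an aside, the PageRank-like user quality score described in point (2) above can be sketched as scores propagating along an “upvoted by” graph, so that votes from high-quality users count for more. The sketch below is purely my own illustrative guess at the general idea (the function name, graph shape, damping factor and iteration count are all assumptions), not Quora’s actual algorithm:

```python
# Illustrative PageRank-style user-quality sketch (my guess at the idea,
# not Quora's algorithm): quality is seeded equally, then repeatedly
# redistributed along "upvoted by" edges.
def user_quality(upvotes, damping=0.85, iterations=20):
    """upvotes maps each user to the set of users who upvoted them."""
    users = set(upvotes)
    for voters in upvotes.values():
        users |= set(voters)
    score = {u: 1.0 / len(users) for u in users}
    # out-degree: how many users each voter has upvoted
    out = {u: 0 for u in users}
    for voters in upvotes.values():
        for v in voters:
            out[v] += 1
    for _ in range(iterations):
        new = {u: (1 - damping) / len(users) for u in users}
        for target, voters in upvotes.items():
            for v in voters:
                # a voter's quality is split across everyone they upvoted
                new[target] += damping * score[v] / out[v]
        score = new
    return score

# "alice" is upvoted by two users, one of whom is himself well-regarded
scores = user_quality({"alice": {"bob", "carol"}, "bob": {"carol"}})
assert scores["alice"] > scores["bob"] > scores["carol"]
```

The interesting property, and presumably the reason a PageRank-style approach appeals to them, is that the score is hard to game with sockpuppet accounts: upvotes from users nobody endorses carry little weight.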

The thing I still don’t see clearly here is a focus on catering to the high quality answerers who have made Quora more buzzworthy than innovative competitors like the StackExchange family of sites or mainstream Q&A sites like Yahoo! Answers. The question the folks at Quora should be asking themselves is: what are they doing to not only have Steve Case continue to answer questions on the site but also attract similarly high quality answerers from other industries (e.g. Jack Welch or Russell Simmons)?

Here I believe there is something Quora can learn from Q&A sites like StackOverflow and from sites that have attracted celebrity users like Twitter. Some things that I think would be useful to see implemented on the site to retain and attract quality answerers include

  1. Democratize voting on the quality of questions. Although Quora has started quizzing users before they ask a question as a way to keep quality high, it would be even better if users could vote on the quality of questions so that the more interesting ones got a wider audience. Similarly, it would be valuable to be able to mark questions as duplicates, so answerers don’t keep seeing the same questions all the time, which is a particular pain point with various answer forums.

  2. Better recognition of valuable users. The ability for people to provide topic-specific descriptions of their qualifications is a great idea. Of course, it does encourage appeals to authority when judging answers, as in the case of someone posting a super long answer that doesn't actually answer the question but being voted highly because they have "screen writer" in their qualifications. Despite that, it is still useful to be able to look at a set of answers on movies and tell which of the answerers is more qualified than the others. Democratizing this by visibly showing which users have been judged by the community as being more valuable than others would be a useful addition. Whether it is copying StackOverflow badges or reddit karma, there is value both to readers of answers in determining which answerers are more trustworthy and to answerers in getting intangible rewards for the service they are providing to the Quora community. It is amazing how digital points systems like reputation scores, badges and achievements can motivate people, and Quora can do more to harness this.

  3. A better connection between people and their followers. People like Jack Welch and Russell Simmons are on Twitter communicating with hundreds of thousands of followers who’d like to learn from them and be inspired by their words. Twitter isn’t really great for conversations or lengthy answers to insightful questions. I believe Quora can fill that gap for such celebrities in the same way Twitter filled the original need of giving celebrities a direct channel to their fans without the media acting as intermediaries. Right now I have followers and people I follow on Quora, but they are treated equivalently to “topics” in my feed and there aren’t good facilities for us to communicate with each other on the site. Can you imagine if Twitter treated hashtags you’ve expressed an interest in and people you follow the same way in your stream? That is effectively what Quora does today.

Note Now Playing: Chris Brown - Deuces (Remix) (featuring Drake, T.I., Kanye West, Fabolous, Rick Ross & André 3000)Note


Categories: Social Software

An interesting discussion broke out in the comments to my last post about whether location based services like FourSquare are about sharing one’s location with friends or an evolution of the loyalty cards where users get deals for advertising a store to their friends. You can also see members of the tech press asking themselves this question with articles like Dear Foursquare, Gowalla: Please Let’s Stop Pretending This Is Fun appearing on TechCrunch. The following excerpt from that article strikes at the heart of the matter

Pew Research reported that, despite all the hype, the use of location-based services is actually declining in America, from 5% of the online population in May to 4% last month. Forget the fabled hockey stick; that’s more like a broken pencil.

Why? Because they’re not giving us any good reason to use them. Look at their web sites. Gowalla proclaims, “Discover the extraordinary in the world around you.” Foursquare says, “Unlock your city.” To which I say: “Oh, come on” — and it seems I speak for approximately 96% (formerly 95%) of the population. I have no interest in enlisting in a virtual scavenger hunt, or unlocking merit badges — what is this, the Cub Scouts? — or becoming the narcissistic “Mayor” of my local coffee shop. Thanks for the offer, but I’m afraid I already have some semblance of a life.

I do want to keep up with my friends, and (sometimes) let them know where I am. But if you’re competing with Facebook in social networking and your name isn’t Twitter or Google, I’m sorry, but I don’t like your chances.

The challenge for FourSquare is that being the best service for getting local deals is a very different product from being the best service for keeping up to date with where my friends are. FourSquare is trying to be both, when either one faces significant challenges as a standalone product given the competitiveness of the marketplace.

The main innovation of location based services like FourSquare is that they are so easy for the consumer to get into that you could make them similar to frequent flier miles, where the model is “if I’m already going to be doing something regularly anyway, why not get perks for doing so?”. The problem FourSquare faces is that they have to compete with GroupOn, JungleCents, Restaurant.com and various other services that have a more proven model for delivering customers to local businesses. The ideal place for FourSquare to be is where they can pitch to customers like me that every 10th check-in at my local health club gives me a perk, even if it’s just a free beverage or breakfast pastry, instead of offering me a Gym Rat badge which I acquired after my first month using the service and now have no reason to check in from there anymore. FourSquare has been primarily focused on getting discounts for "mayors" of local businesses, which seems backwards given that they are basically saying only one person per store can get a discount (and in the case of Starbucks it was just a $1 discount). Can you imagine American Airlines saying only the passenger who flies the most from Seattle<->San Francisco that year can use the frequent flier program? Sounds ridiculous, doesn’t it?

If FourSquare wants to make a difference as a way for people to get local deals, they’ll need to refocus the company and not fall into the trap of paying too much attention to their game mechanics.

FourSquare faces a different set of challenges as a way to share location with friends and this isn’t just because of the social graph problem of competing with Facebook which I discussed in my previous post. I’m personally very bullish that location based services will be an integral part of our lives in the future. When I originally read Robert Scoble’s Location 2012 which aimed to predict where we’d be in 2 years given location trends today I couldn’t help but nod my head in agreement. However as I’ve started seeing some of these features show up in apps like Loopt and Facebook Places I’ve realized that some of these scenarios sound better in theory than in practice.

Constantly checking in to share your location is a chore that you get nothing out of 99% of the time. This is why FourSquare uses game mechanics such as badges and mayorships to try to get people to regularly check in. The problem is that game mechanics only get you so far and will not appeal to a significant chunk of the user population. On the other hand, automatically checking in by sharing your location in real-time, as Loopt does, brings with it a raft of privacy issues and can come off as downright creepy. Although I’ve found real-time location sharing to be useful at times, such as when I needed to meet up with my wife at a crowded 5K race, it seemed a little creepy to have each other’s locations permanently shared even though it sounded convenient in theory. We no longer use Loopt, and try as I might I couldn’t figure out how to pitch it to other people without it sounding creepy.

Striking the right balance between the tediousness of check-ins and the creepiness of constant location sharing will take a lot of trial and error as well as a careful sense of design. Again, I’d argue it would take refocusing the company to really cross the chasm here from the Silicon Valley early adopter crowd to the sort of mass market success that my non-technical friends who use Facebook and play Farmville but don’t read TechCrunch would be familiar with.

Right now FourSquare and other services that have cloned its model are between a rock and a hard place. It will be interesting to see how they transform themselves in the coming months to address these challenges.

Note Now Playing: Cali Swag District - Teach Me How To Dougie [Remix] (featuring Red Cafe, Jermaine Dupri, B.o.B. and Bow Wow)Note


Categories: Social Software

Two months ago Nelson Minar wrote a post entitled Stop making social networks, Facebook won where he argues that websites should just treat Facebook as the one true social graph instead of trying to build their own. I agree a lot with what Nelson wrote, which some people tell me conflicts with my argument that There will be many social graphs. I thought the best way to illustrate this seemingly contradictory thinking is by comparing two sites the media considers competitors to Facebook in different ways: Twitter and FourSquare.

How Twitter and FourSquare position themselves against Facebook

Recently, Twitter’s VP of business and corporate development spoke at Nokia World 2010 where he proclaimed that Twitter is NOT a social network. Below are some excerpts from ReadWriteWeb’s coverage of his talk.

says Thau: Twitter is for news. Twitter is for content. Twitter is for information.

So Twitter Is "News"?

Yes, says Thau. Twitter is changing the very nature of news today. Journalists are sending their stories to Twitter and some are even publishing directly to Twitter. It's also allowing everyday users to become journalists themselves by providing them with a simple mechanism to break news.

"The guy who saw a plane land on the Hudson River right in front of him didn't think to send an email," says Thau. "He tweeted it."

Thau also wanted to assure Twitter users it's OK if you think you're not interesting enough to have your own Twitter account. Don't apologize if you don't tweet - just come to Twitter and consume content instead. After all, plenty of people already do just that.

The key thing here is that Twitter is arguing that the primary relationships on the site are not “social”. They are producer<->consumer where the product is news and other information content.

Dennis Crowley of FourSquare made more direct comparisons between his service and Facebook in an interview with TechCrunch.

On why the world needs more than one social graph

Our social graph is more representative of the people that you meet in the real world. I am starting to believe, if you asked me a year ago, Why would you ever need more than one social graph? You need representation of a couple of them. Between the three, Facebook is literally everyone I’ve ever shaken hands with at a conference or kissed on the cheek at Easter. Twitter seems to be everyone I am entertained by or I wish to meet some day. Foursquare seems to be everyone I run into on a regular basis. All three of those social graphs are powerful in their own.

The FourSquare argument is that services that create new social graphs that are tied to a specific social context can continue to exist and grow in a world where social networking is dominated by Facebook’s website and Facebook Connect.


Facebook’s trajectory: Adding a social element to every online activity

Before analyzing the wisdom of the approaches FourSquare and Twitter have taken to differentiate their offerings from Facebook, it is a good idea to understand Facebook’s ultimate strategy. This isn’t hard since Zuckerberg and other Facebook executives regularly share it with TechCrunch. Below is an excerpt from an article titled Zuckerberg: Facebook Photos Used 5 Or 6 Times More Than Competitors — Combined which describes their long term strategy

He noted that when they launched the product, they didn’t have all of the features that their competitors did. For example, they didn’t have high-resolution photos and you couldn’t print them. But one thing they did have was the social element — and this changed everything.

“Those features by themselves were more important than anything else combined,” Zuckerberg said of the social elements of Facebook Photos. He then dropped the competitor bomb. “The photo product that we have is maybe five or six times more used than every other product on the web — combined,” Zuckerberg stated.

And it was clear from both Zuckerberg and CTO Bret Taylor’s talk at the event that photos to them was the harbinger of things that eventually came — and will still come.

Taylor noted that he had been “brainwashed by Silicon Valley” before he saw and understood the power of Facebook Photos (he was likely working at Google at the time). He had been thinking like an engineer about the best way to organize photos on the web. But he quickly realized that “the best possible organization of photos is around people,” Taylor said.

“There are ten other industries waiting to have this type of disruption,” Taylor said noting the travel industry, e-commerce, and music as a few of them. Earlier, Zuckerberg agreed. Because of the social element, “every single vertical will be transformed.“

Facebook’s social graph is a graph of people I know or have met. Facebook’s fundamental strategy is to build a product and platform where key online activities are improved by adding the social element of people you know. Where Facebook has been dominant is when the activity is one that already relates to interacting with people you know. Facebook has beaten geek favorites like Gmail, Flickr and Delicious as the way regular people share private messages, photos and links online. This is both due to the powerful network effects of a social networking product and the fact that their graph maps 100% to the people one typically wants to indulge in those activities with.

Facebook has had less success with products where their graph doesn’t correspond well with the target activity. The best examples of this are Facebook Marketplace versus eBay/Craigslist or Facebook Questions versus Yahoo! Answers. In both of these comparisons, Facebook isn’t even on the radar of the market leader. This is because activities such as buying someone’s old junk really need a wider net than just your friends, family and coworkers.

This is where the positioning and focus of Twitter as a news service as opposed to a social network puts them in a good place in comparison to Facebook. Twitter is where I go to get entertainment and news about the topics I’m interested in from subject matter experts. These subject matter experts (in many cases bloggers and minor & major celebrities) are not people I know nor have I met. This is distinct from my Facebook social graph but has some overlap depending on how much of a subject matter expert I am myself. On the other hand, FourSquare is a place where I go to share my location with people I know or have met. This set of people is almost always a subset of the people in my Facebook social graph. The only value additions you get from FourSquare are the game mechanics and deals (and the latter is no longer a differentiator). FourSquare has unfortunately reached the point where the only practical difference between using it versus Facebook Places is that I get to be mayor of my local Gymboree and collect two dozen video game style achievements. Personally I’ve already grown bored with the game mechanics and suspect that targeting the console gaming demographic guarantees it will be a niche service at best.

The bottom line is that if the primary focus of your product is that it connects people with their friends, family and others they know around a particular activity then you need to be able to answer the question as to how your product can compete in a world where your service is a feature of Facebook or of an app on its platform.

Note Now Playing: Far East Movement - Like a G6 Note


Earlier this morning, Jeff Kunins posted the announcement that Messenger Connect is out of beta and available worldwide. Key excerpts from his post include

Today, we are pleased to announce that Messenger Connect is out of beta and available worldwide. We’ve gotten a great response so far: leading sharing syndicators ShareThis, AddThis, Gigya, and AddToThis have already made the Windows Live sharing badge available on more than 1 million websites (check it out now on Bing).

Over 2500 developers gave us great feedback during the beta, helping us to refine and improve this release of Messenger Connect. Below is a quick summary, but for all the details check out this post from Angus on the Windows Live for Developers blog. Our focus with the release of Messenger Connect was to make it easier for partners to adopt, without compromising user privacy.

  • Easier to check out – We made it faster and simpler for partners to try out Messenger Connect and determine if it would be useful for their sites. For example: you can try out the real time chat control without needing to write any code.
    Learn at the Windows Live Developer Center
  • Easier to adopt and integrate – We reduced the effort needed for sites to implement Messenger Connect usefully and powerfully by providing new tools and sample code for ActivityStrea.ms template selectors and more.
    Sample code

A number of folks worked really hard behind the scenes to get us to this point and I’m glad to see what we’ve shipped today. I haven’t announced this on my blog yet, but I recently took over as the Lead Program Manager responsible for our Messenger Connect and related platform efforts in Windows Live. If you’ve been a regular reader of my blog it shouldn’t be a surprise that I’ve decided to make building open web platforms my day job and not just a hobby I was interested in.

As Angus Logan says in his follow up blog post on the Windows Live for Developers blog, this is just the beginning. We’d love to get feedback from the community of developers on what we’ve released; the feedback we’ve gotten thus far has been immensely helpful. You can find us at http://dev.live.com/forums

PS: Since someone asked on Twitter, you can find out more about Messenger Connect by reading What is Messenger Connect?

Note Now Playing: Waka Flocka Flame - No Hands (featuring Roscoe Dash and Wale) Note


Categories: Platforms | Windows Live

October 8, 2010
@ 03:44 PM

Earlier this week Facebook announced the revamp of Facebook Groups. At first I planned to avoid commenting on this release since there is significant overlap between its functionality and that of Windows Live Groups, so it is hard for me to have an objective perspective. However this morning I saw the following tweet from a designer at Facebook


I found this a little intriguing since I was sure I'd seen the presentation from the Google UX researcher he was referencing and I couldn't see how Facebook Groups addresses the problem he pointed out. If you haven't seen the presentation there is a brief description and link to it in the VentureBeat article Google researcher says friend groups may give it a window to best Facebook. Below are key excerpts from the article which capture the key point from the presentation

Through studying the nuances of social interaction both off- and online, Google researchers found that people typically have between four and six friend groups and only between two and six “close” friends, he said. College friends don’t necessarily mix with work friends, who don’t necessarily mix with a person’s family.

Adams pointed out all of the different problem scenarios Facebook users run into if the different parts of their identities end up blurring. One teacher the company interviewed, for example, realized that photos of her with her close friends at a gay bar were being exposed to her 10-year-old students.

Personally, back in 2008 I’d assumed this collision of friend groups would be the main challenge preventing Facebook from being as successful as it could be. What I didn’t expect is that people would decide that the benefit of having access to all their friends in one place was worth the cost of having to censor themselves a little bit in their online sharing, since what may be appropriate for one group of friends (e.g. your friends from the gay bar scene) may not be for another (e.g. parents of students in your middle school class). Today Facebook has grown to 500 million users based on that fact.

The reasons for self-censorship are sometimes not so controversial. Simply posting a bunch of kid pictures can get annoying for your coworkers even though your family on Facebook loves every single one of them. For people who find the need to censor how they share online for various reasons, the argument is that Facebook Groups solves this problem. There’s only one catch, which Mark Zuckerberg brought up himself a while ago and which is mentioned in the TechCrunch article Facebook Overhauls Groups, A Social Solution To Create “A Pristine Graph”

“The naive solution is to do something like Friend Lists,” Zuckerberg says. “Almost no one wants to make lists,” he continues. He’s noted this before. “The most we’ve ever gotten is 5 percent of people to make a list. It’s pretty brutal to have to do this every single time.” He then went into the algorithmic solutions. These are helpful, Zuckerberg says, but it’s also really easy to get these wrong, he notes. There needs to be a social solution, Zuckerberg says.

Facebook Groups faces all the problems with Friend Lists that Zuckerberg mentions above. In real life, I don’t manage my different social circles at the same time. I go to work and have work friends, I have school friends I still call every once in a while, and when I go to my regular poker game I interact with my poker friends. Every once in a blue moon, like at my wedding, all of these worlds collide and it is actually a little stressful to manage them in real time. In addition, when the members of these groups change I don’t have to actively manage them (i.e. when a coworker becomes friendly enough for me to hang out with outside of work, when a poker friend stops attending the regular poker game or when a coworker switches jobs and we no longer work on related technologies). Friend Lists on Facebook make people work to keep track of changes in their social relationships, which is just not how most humans work. I still have phone numbers in my cell phone for people I was supposed to meet for dinner at a conference almost four years ago who’ve since left Microsoft.

Facebook Groups cranks the awkwardness of dealing with this up to 11. Let’s say I create a group for “People who work on social at Microsoft who regularly have lunch” and after a few months to years some of these people leave the company, get promoted or switch roles. As the owner of the group what do I do? Do I kick them out? Do I keep blathering on in private discussions that I know are no longer relevant for half of the recipients and in some cases actually violates work ethics since some of these people have left the company? What happens when I stop working on social at Microsoft?

Facebook Groups may solve some problems users have with Facebook but I suspect it is not the silver bullet that addresses the problem of people having friend groups that they’d like to keep separate on Facebook especially since it introduces a new set of problems for users. Time will tell if I’m right or wrong on this suspicion.

Note Now Playing: Yo Gotti - Women Lie, Men Lie (featuring Lil Wayne) Note


Categories: Social Software

Software companies love hiring people who like solving hard technical problems. On the surface this seems like a good idea; unfortunately it can lead to situations where the people building a product focus more on the interesting technical challenges they can solve than on whether their product actually solves problems for their customers.

I was reminded of this after reading an answer to a question on Quora about the difference between working at Google versus Facebook, where David Braginsky wrote

Google is like grad-school. People value working on hard problems, and doing them right. Things are pretty polished, the code is usually solid, and the systems are designed for scale from the very beginning. There are many experts around and review processes set up for systems designs.

Facebook is more like undergrad. Something needs to be done, and people do it. Most of the time they don't read the literature on the subject, or consult experts about the "right way" to do it, they just sit down, write the code, and make things work. Sometimes the way they do it is naive, and a lot of time it may cause bugs or break as it goes into production. And when that happens, they fix their problems, replace bottlenecks with scalable components, and (in most cases) move on to the next thing.

Google tends to value technology. Things are often done because they are technically hard or impressive. On most projects, the engineers make the calls.

Facebook values products and user experience, and designers tend to have a much larger impact. Zuck spends a lot of time looking at product mocks, and is involved pretty deeply with the site's look and feel.

It should be noted that Google deserves credit for succeeding where other large software companies have mostly failed: throwing a bunch of Ph.Ds at a problem and actually having them create products that impact hundreds of millions of people as opposed to research papers that impress hundreds of their colleagues. That said, it is easy to see the impact of complexophiles (props to Addy Santo) in recent products like Google Wave.

If you go back and read the Google Wave announcement blog post, it is interesting to note the focus on combining features from disparate use cases and taking on a diverse set of technical challenges all at once, including

  • “Google Wave is just as well suited for quick messages as for persistent content — it allows for both collaboration and communication”
  • “It's an HTML 5 app, built on Google Web Toolkit. It includes a rich text editor and other functions like desktop drag-and-drop”
  • “The Google Wave protocol is the underlying format for storing and the means of sharing waves, and includes the ‘live’ concurrency control, which allows edits to be reflected instantly across users and services”
  • “The protocol is designed for open federation, such that anyone's Wave services can interoperate with each other and with the Google Wave service”
  • “Google Wave can also be considered a platform with a rich set of open APIs that allow developers to embed waves in other web services”

The product announcement read more like a technology showcase than an announcement for a product that is actually meant to help people communicate, collaborate or make their lives better in any way. This is an example of a product where smart people spent a lot of time working on hard problems, but at the end of the day they didn't see the adoption they would have liked because they spent more time focusing on technical challenges than on ensuring they were building the right product.

It is interesting to think about all the internal discussions and time spent implementing features like character-by-character typing without anyone bothering to ask whether that feature actually makes sense for a product billed as a replacement for email. I often write emails where I write a snarky comment and then edit it out when I reconsider the wisdom of sending it to a broad audience. Having people actually see that authoring process is not a feature anyone wants.

Some of you may remember that there was a time when I was literally the face of XML at Microsoft (i.e. going to http://www.microsoft.com/xml took you to a page with my face on it). In those days I spent a lot of time using phrases like the XML<->objects impedance mismatch to describe the fact that the type system of the dominant protocol for web services at the time (aka SOAP) had lots of constructs that don’t map well to a traditional object oriented programming language like C# or Java. This was caused by the fact that XML had grown to serve conflicting masters. There were people who used it as the basis for document formats such as DocBook and XHTML. Then there were the people who saw it as a replacement for the binary protocols used in interoperable remote procedure call technologies such as CORBA and Java RMI. The W3C decided to solve this problem by getting a bunch of really smart people in a room and asking them to create an amalgam type system that would satisfy both sets of completely different requirements. The output of this activity was XML Schema, which became the type system for SOAP, WSDL and the WS-* family of technologies. This meant that people who simply wanted a way to define how to serialize a C# object so it could be consumed by a Java method call ended up with a type system that was also meant to be able to describe the structural rules of the HTML in this blog post.
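A tiny Python sketch makes the mismatch concrete (the element names here are made up for illustration). Document-style XML with mixed content has no clean object mapping, since text ends up scattered across element boundaries, while a flat record maps naturally onto an object:

```python
import xml.etree.ElementTree as ET

# Document-style XML: mixed content (text interleaved with elements).
# There is no natural object field for the text before, inside and after
# <b>; it ends up scattered across .text and .tail attributes.
doc = ET.fromstring("<p>This is <b>bold</b> text.</p>")
pieces = [doc.text, doc.find("b").text, doc.find("b").tail]
# pieces == ["This is ", "bold", " text."]

# Data-style XML: a flat record maps cleanly onto an object/dict,
# though without a schema every value is still just a string.
rec = ET.fromstring("<person><name>Dare</name><age>35</age></person>")
person = {child.tag: child.text for child in rec}
# person == {"name": "Dare", "age": "35"}
```

A type system that must describe both shapes at once ends up far more complex than one designed for either alone, which is exactly the corner XML Schema painted itself into.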

Thousands of man years of effort were spent across companies like Sun Microsystems, Oracle, Microsoft, IBM and BEA to develop toolkits on top of a protocol stack that had this fundamental technical challenge baked into it. Of course, everyone had a different way of trying to address this XML<->object impedance mismatch, which led to interoperability issues in what was meant to be a protocol stack that guaranteed interoperability. Eventually customers started telling horror stories about actually using these technologies to interoperate, such as Nelson Minar’s ETech 2005 Talk - Building a New Web Service at Google, and a movement toward building web services using Representational State Transfer (REST) was born. In tandem, web developers realized that if your problem is moving programming language objects around, then perhaps a data format designed for exactly that is the preferred choice. Today, it is hard to find any recently deployed, broadly used web service that doesn’t utilize JavaScript Object Notation (JSON) as opposed to SOAP.
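The reason JSON won here is that its data model is the object model: a sketch of the round trip (the order record is invented for illustration) shows an object surviving serialization with its structure and basic types intact, no schema or toolkit required:

```python
import json

# A programming-language object (here a plain dict) round-trips through
# JSON with its types intact: strings, numbers, booleans, lists, nesting.
order = {"id": 1042, "items": ["book", "pen"], "paid": True}
wire = json.dumps(order)          # serialize for the web service call
assert json.loads(wire) == order  # the consumer gets the same structure back
```

Contrast that with SOAP, where the same record had to be expressed in a type system that could also describe mixed-content documents.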

The moral of both of these stories is that a lot of the time in software it is easy to get lost in the weeds solving hard technical problems that stem from complexity we’ve imposed on ourselves via some well meaning design decision, instead of actually solving customer problems. The trick is being able to detect when you’re in that situation and to see whether altering some of your base assumptions leads to a significant simplification of your problem space, which then frees you up to actually spend time solving real customer problems and delighting your users. More people need to ask themselves questions like: do I really need to use the same type system and data format for business documents AND serialized objects from programming languages?

Now Playing: Travie McCoy - Billionaire (featuring Bruno Mars)


August 5, 2010
@ 02:36 PM

This morning I stumbled on a great post by Dave Winer titled Why didn't Google Wave boot up? where he writes

So why didn't Google Wave happen?

Here's the problem -- when I signed on to Wave, I didn't see anything interesting. It was up to me, the user, to figure out how to sell it. But I didn't understand what it was, or what its capabilities were, and I was busy, always. Even so I would have put the time in if it looked interesting, but it didn't.

However, it had another problem. Even if there were incentives to put time into it, and even if I understood how it worked or even what it did, it still wouldn't have booted up because of the invite-only thing. It's the same problem every Twitter-would-be or Facebook-like thing has. My friends aren't here, so who do I communicate with? But with Wave it was even worse because even if I loved Wave and wanted everyone to use it, it was invite-only. So the best evangelist would still have to plead with Google to add all of his workgroup members to the invite list. The larger your workgroup the more begging you have to do. This is exactly the opposite of how you want it to work if you're in Google's shoes.

This is an important lesson on the value of network effects on social software applications. A service that exhibits network effects is more useful the more of my friends use it (e.g. having SMS on my cell phone is only useful if I have friends who can send & receive text messages). By definition, a social software application is dependent on network effects and needs to do everything in its power to promote them. Placing artificial barriers that prevent me from actually using the product as a communication tool with my social network works against the entire premise of being social in the first place.

Google definitely learned the wrong lesson from the success of Gmail as an invite-only service. Being invite-only worked for Gmail at launch because my friends didn’t have to use Gmail to receive or send messages to me. So word of mouth could spread because the people who used it would sing its praises, which caused anticipation among those who couldn’t get in. On the other hand, with Wave the people who got invites couldn’t get to the point where they could sing its praises (if there were any to be sung) because it was too difficult to get their friends on there. By the time Google made the service open to all, it was too late due to what Joel Spolsky called The Segway Phenomenon

PR grows faster than the quality of your code. Result: everybody checks out your code, and it's not good yet. These people will be permanently convinced that your code is simple and inadequate, even if you improve it drastically later. I call this the Marimba phenomenon. Or, you get PR before there's a product people can buy, then when the product really comes out the news outlets don't want to do the story again. We'll call this the Segway phenomenon.

Some may point to Facebook as an example of a network that was invite-only but still managed to have network effects, but there is a crucial difference in how Facebook regulated growth before opening up to all. Facebook opened its doors to entire networks of people at a time (i.e. everyone in a particular college, all college students, people from select employers, etc.), not to arbitrary swaths of people on a first come, first served basis.

Hopefully more startups will keep this in mind before jumping on the invite-only bandwagon.

Now Playing: Eminem - Hell Breaks Loose (featuring Dr. Dre)


Categories: Social Software