I’ve spent the last two years leading a PM team that has been part of building software experiences that make me immensely proud: experiences that millions of people will use in Windows 8 and a developer platform that will enable thousands of developers to build great software. Over the course of the past year we’ve delivered

  1. The social experiences in the Windows 8 People app. With Windows 8 you get a great way to browse and share your friends’ updates on Facebook and Twitter. The feedback we’ve received about this functionality has been extremely positive, which has been quite humbling.

  2. A straightforward way for Metro style apps to take advantage of Single Sign On in Windows with the Live SDK.

  3. A developer platform for SkyDrive which has enabled developers to integrate SkyDrive across multiple apps, websites and devices.

This has been one of the most exciting and fulfilling times of my career. After about eight years working in the same organization at Microsoft, first as part of MSN and now Windows Live, I’ve decided to move to another part of the company.

Over the course of the past few years, I’ve looked on at Microsoft’s search competition with Google and often wondered why, although there’s been a lot of focus on beating or matching Google in search relevance and experience, we haven’t heard as much about how we’d compete with AdWords, especially since that’s actually how we make money in the space.

Recently I was giving one of my friends who works in our ads space feedback after using a number of ads products including Facebook ads, Google ads and Microsoft’s. He asked if, instead of complaining, I wouldn’t rather just come join the team and actually help out. I thought “why not?” and since then I’ve become a lead program manager on the Bing Ads team.

My new team will be responsible for a number of things including the Bing Ads API. Regular readers of my blog shouldn’t expect any changes. If anything I’ll try to increase my pace of posting once I’m ramped up in my new gig and can come up with a sane blog posting schedule.

Now Playing: Big Boi - Fo Yo Sorrows (featuring George Clinton and Too Short)


 

Categories: Personal

Eran Hammer-Lahav, the former editor of the OAuth 2.0 specification, announced that he would no longer be the editor of the standard in a harshly critical blog post entitled OAuth 2.0 and the Road to Hell, where he made a number of key criticisms of the specification, the meat of which is excerpted below:

Last month I reached the painful conclusion that I can no longer be associated with the OAuth 2.0 standard. I resigned my role as lead author and editor, withdraw my name from the specification, and left the working group. Removing my name from a document I have painstakingly labored over for three years and over two dozen drafts was not easy. Deciding to move on from an effort I have led for over five years was agonizing.

There wasn’t a single problem or incident I can point to in order to explain such an extreme move. This is a case of death by a thousand cuts, and as the work was winding down, I’ve found myself reflecting more and more on what we actually accomplished. At the end, I reached the conclusion that OAuth 2.0 is a bad protocol. WS-* bad. It is bad enough that I no longer want to be associated with it. It is the biggest professional disappointment of my career.

All the hard fought compromises on the mailing list, in meetings, in special design committees, and in back channels resulted in a specification that fails to deliver its two main goals – security and interoperability. In fact, one of the compromises was to rename it from a protocol to a framework, and another to add a disclaimer that warns that the specification is unlikely to produce interoperable implementations.

When compared with OAuth 1.0, the 2.0 specification is more complex, less interoperable, less useful, more incomplete, and most importantly, less secure.

To be clear, OAuth 2.0 at the hand of a developer with deep understanding of web security will likely result in a secure implementation. However, at the hands of most developers – as has been the experience from the past two years – 2.0 is likely to produce insecure implementations.

Given that I’ve been professionally associated with OAuth 2.0 over the past few years, from using OAuth 2.0 as the auth method for the SkyDrive APIs to acting as an advisor for the native support of OAuth 2.0 style protocols in the Web Authentication Broker in Windows 8, I thought it would be useful to provide some perspective on what Eran has written as an implementer and user of the protocol.

The Good: Easier to work with than OAuth 1.0

I’ve been a big fan of web technologies for a fairly long time. The great thing about the web is that it is the ultimate distributed system: you cannot make assumptions about any of the clients accessing your service, as people tended to do in the enterprisey world of the past. This encourages technologies to be as simple as possible to reduce sources of friction, which has led to the rise of drop dead simple protocols like HTTP and data formats like JSON.

One of the big challenges with OAuth 1.0 is that it pushed a fairly complex and fragile set of logic onto the app developers working with the protocol. This blog post from the Twitter platform team on the most complicated feature in their API bears this out:

Ask a developer what the most complicated part of working with the Twitter API is, and there's a very good chance that they'll say OAuth. Anyone who has ever written code to calculate a request signature understands that there are several precise steps, each of which must be executed perfectly, in order to come up with the correct value.

One of the points of our acting on your feedback post was that we were looking for ways to improve the OAuth experience.

Given that there were over 750,000 registered Twitter developers last year, this is a lot of pain to spread out across their ecosystem. OAuth 2.0 greatly simplifies the interaction model between clients and servers by eliminating the requirement to use signed request signatures as part of the authentication and authorization process.
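The difference is easy to see in code. Below is a simplified sketch, in Python with illustrative parameter names, of the OAuth 1.0 HMAC-SHA1 signing dance versus the single header an OAuth 2.0 bearer token requires. The real OAuth 1.0 rules add more required oauth_* parameters and encoding subtleties on top of this, which only makes the point stronger.

```python
import base64
import hashlib
import hmac
import urllib.parse

def oauth1_signature(method, url, params, consumer_secret, token_secret):
    """Simplified OAuth 1.0 HMAC-SHA1 signing: every parameter must be
    percent-encoded, sorted, and concatenated exactly right, or the
    server rejects the request."""
    encoded = sorted(
        (urllib.parse.quote(k, safe=""), urllib.parse.quote(str(v), safe=""))
        for k, v in params.items()
    )
    param_string = "&".join(f"{k}={v}" for k, v in encoded)
    base_string = "&".join([
        method.upper(),
        urllib.parse.quote(url, safe=""),
        urllib.parse.quote(param_string, safe=""),
    ])
    signing_key = (
        urllib.parse.quote(consumer_secret, safe="")
        + "&"
        + urllib.parse.quote(token_secret, safe="")
    )
    digest = hmac.new(signing_key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

def oauth2_header(access_token):
    """OAuth 2.0 with bearer tokens reduces all of the above to one header."""
    return {"Authorization": f"Bearer {access_token}"}
```

Every encoding, sorting and concatenation step in the first function is a chance for a client library to disagree with the server by a single byte; the second function has no such failure modes, which is exactly why it was such a relief to developers.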

 

The Bad: It’s a framework not a protocol

The latest draft of the OAuth 2.0 specification has the following disclaimer about interoperability:

OAuth 2.0 provides a rich authorization framework with well-defined security properties.  However, as a rich and highly extensible framework with many optional components, on its own, this specification is likely to produce a wide range of non-interoperable implementations.

In addition, this specification leaves a few required components partially or fully undefined (e.g. client registration, authorization server capabilities, endpoint discovery).  Without these components, clients must be manually and specifically configured against a specific authorization server and resource server in order to interoperate.

What this means in practice for developers is that learning how one OAuth 2.0 implementation works is unlikely to help you figure out how another compliant one behaves, given the degree of latitude that implementers have. Thus the likelihood that you can take the authentication/authorization code you wrote with a standard library like DotNetOpenAuth against one OAuth 2.0 implementation, point it at a different one by changing only a few URLs, and expect things to work is extremely low.
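A concrete example of this latitude: even the token endpoint response was not uniform across early implementations. Some providers returned the access token as JSON per the spec drafts while others reportedly returned a form-encoded body, so client code ended up special-casing providers. A hedged sketch of the kind of shim developers wrote:

```python
import json
import urllib.parse

def extract_access_token(body, content_type):
    """Pull the access token out of a token endpoint response,
    handling both response styles seen in the wild."""
    if "json" in content_type:
        return json.loads(body)["access_token"]
    # Fallback for implementations that returned form-encoded bodies.
    return urllib.parse.parse_qs(body)["access_token"][0]
```

This is a small amount of code, but it is emblematic: nothing in the framework itself told you which shape to expect.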

In practice I expect this to not be as problematic as it sounds on paper, simply because at the end of the day authentication and authorization are a small part of any API story. In general, most people will still get the Facebook SDK, Live SDK, Google Drive SDK, etc. for their target platform to build their apps, and it is never going to be true that those will be portable between services. For services that don’t provide multiple SDKs, the rest of the APIs will be so different that having to change the auth code will not be a big deal to the developer.

That said, it is unfortunate that one cannot count on a degree of predictability across OAuth 2.0 implementations.

The Ugly: Making the right choices is left as an exercise for the reader

The biggest whammy in the OAuth 2.0 specification, which Eran implies is the reason he decided to quit, is hinted at in the closing sentence of the aforementioned disclaimer:

This framework was designed with the clear expectation that future work will define prescriptive profiles and extensions necessary to achieve full web-scale interoperability.

This implies that there are a bunch of best practices for utilizing a subset of the protocol (i.e. prescriptive profiles) that are yet to be defined. As Eran said in his post, here is a list of places where there are no guidelines in the spec:

  • No required token type
  • No agreement on the goals of an HMAC-enabled token type
  • No requirement to implement token expiration
  • No guidance on token string size, or any value for that matter
  • No strict requirement for registration
  • Loose client type definition
  • Lack of clear client security properties
  • No required grant types
  • No guidance on the suitability or applicability of grant types
  • No useful support for native applications (but lots of lip service)
  • No required client authentication method
  • No limits on extensions

There are a number of features, such as token expiration, where it would be a bad idea for an implementer to decide to skip them without considering the security implications. In my day job, I’ve also been bitten by the lack of guidance on token string sizes, with some of our partners making assumptions about token size that later turned out to be inaccurate, which led to scrambling on both sides.

My advice for people considering implementing OAuth 2.0 on their service would be to ensure there is a security review of whatever subset of the features you are implementing before deploying the service at large. If you can’t afford or don’t have security people on staff, then at a minimum I’d recommend picking one of the big guys (e.g. Google, Facebook or Microsoft) and implementing the same features they have, since they have people on staff whose job is to figure out a secure combination of OAuth 2.0 features to implement, as opposed to picking and choosing without a frame of reference.

Now Playing: Notorious B.I.G. - You're Nobody Till Somebody Kills You


 

Categories: Web Development

July 15, 2012 @ 11:04 PM

A few weeks ago Dalton Caldwell, founder of imeem and Picplz, wrote a well received blog post titled What Twitter could have been where he laments the fact that Twitter hasn’t fulfilled some of the early promise developers saw in it as a platform. Specifically he writes

Perhaps you think that Twitter today is a really cool and powerful company. Well, it is. But that doesn’t mean that it couldn’t have been much, much more. I believe an API-centric Twitter could have enabled an ecosystem far more powerful than what Facebook is today. Perhaps you think that the API-centric model would have never worked, and that if the ad guys wouldn’t have won, Twitter would not be alive today. Maybe. But is the service we think of as Twitter today really the Twitter from a few years ago living up to its full potential? Did all of the man-hours of brilliant engineers, product people and designers, and hundreds of millions of VC dollars really turn into, well, this?

His blog post struck a chord with developers, which made Dalton follow it up with another blog post titled announcing an audacious proposal as well as launching join.app.net. Dalton’s proposal is as follows:

I believe so deeply in the importance of having a financially sustainable realtime feed API & service that I am going to refocus App.net to become exactly that. I have the experience, vision, infrastructure and team to do it. Additionally, we already have much of this built: a polished native iOS app, a robust technical infrastructure currently capable of handing ~200MM API calls per day with no code changes, and a developer-facing API provisioning, documentation and analytics system. This isn’t vaporware.

To manifest this grand vision, we are officially launching a Kickstarter-esque campaign. We will only accept money for this financially sustainable, ad-free service if we hit what I believe is critical mass. I am defining minimum critical mass as $500,000, which is roughly equivalent to ~10,000 backers.

As can be expected as someone who’s worked on software as both an API client and platform provider of APIs, I have a few thoughts on this topic. So let’s start from the beginning.

The Promise of Twitter Annotations

About two years ago, Twitter CEO Dick Costolo wrote an expansive “state of the union” style blog post about the Twitter platform. Besides describing the state of the Twitter platform at the time it also set forth the direction the platform intended to go in and made a set of promises about how Twitter saw its role relative to its developer ecosystem. Dick wrote

To foster this real-time open information platform, we provide a short-format publish/subscribe network and access points to that network such as www.twitter.com, m.twitter.com and several Twitter-branded mobile clients for iPhone, BlackBerry, and Android devices. We also provide a complete API into the functions of the network so that others may create access points. We manage the integrity and relevance of the content in the network in the form of the timeline and we will continue to spend a great deal of time and money fostering user delight and satisfaction. Finally, we are responsible for the extensibility of the network to enable innovations that range from Annotations and Geo-Location to headers that can route support tickets for companies. There are over 100,000 applications leveraging the Twitter API, and we expect that to grow significantly with the expansion of the platform via Annotations in the coming months.

There was a lot of excitement in the industry about Twitter Annotations both from developers as well as the usual suspects in the tech press like TechCrunch and ReadWriteWeb. Blog posts such as How Twitter Annotations Could Bring the Real-Time and Semantic Web Together show how excited some developers were by the possibilities presented by the technology. For those who aren’t familiar with the concept, annotations were supposed to be the way to attach arbitrary metadata to a tweet beyond the well-known 140 characters. Below is a screenshot from a Twitter presentation showing this concept visually
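The idea can also be sketched in code. Below is a hypothetical example (the field names are illustrative, not from Twitter's actual proposal, which organized annotations as namespaced attribute/value data) of a tweet carrying a video annotation and how a client might consume it:

```python
# Hypothetical shape of an annotated tweet: arbitrary namespaced
# metadata riding alongside the familiar 140-character text.
tweet = {
    "text": "This episode was hilarious!",
    "annotations": [
        {"video": {"provider": "hulu",
                   "url": "http://hulu.com/watch/12345",
                   "duration_seconds": 1350}},
    ],
}

def get_annotation(tweet, namespace):
    """Return the metadata for a namespace if present. A client that
    understands the namespace can render a richer inline experience;
    any other client simply falls back to the plain tweet text."""
    for annotation in tweet.get("annotations", []):
        if namespace in annotation:
            return annotation[namespace]
    return None
```

The appeal and the danger are the same thing: any app could invent any namespace, and no client was obligated to understand any of them.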

Although it’s been two years since Twitter Annotations was supposed to begin testing, the feature has never been released nor has it been talked about by the Twitter Platform team in recent months. Many believe that Twitter Annotations will never be released due to a changing of the guard within Twitter, which is hinted at in Dalton Caldwell’s What Twitter could have been post

As I understand, a hugely divisive internal debate occurred among Twitter employees around this time. One camp wanted to build the entire business around their realtime API. In this scenario, Twitter would have turned into something like a realtime cloud API company. The other camp looked at Google’s advertising model for inspiration, and decided that building their own version of AdWords would be the right way to go.

As you likely already know, the advertising group won that battle, and many of the open API people left the company.

It is this seeming change in direction that Dalton has seized on to create join.app.net.

 

Why Twitter Annotations was a Difficult Promise to Make

I have no idea what went on within Twitter but as someone who has built both platforms and API clients the challenges in delivering a feature such as Twitter Annotations seem quite obvious to me. A big problem when delivering a platform as part of an end user facing service is that there are often trade offs one has to make at the expense of the other. Doing things that make end users happy may end up ticking off people who’ve built businesses on your platform (e.g. layoffs caused by Google Panda update, companies losing customers overnight after launch of Facebook's timeline, etc). On the other hand, doing things that make developers happy can create suboptimal user experiences such as the “openness” of Android as a platform making it a haven for mobile malware.

The challenge with Twitter Annotations is that it threw user experience consistency out of the window. Imagine that the tweet shown in the image above was created by Dare’s Awesome Twitter App, which allows users to “attach” a TV episode from a source like Hulu or Netflix to the app before they tweet. Now in Dare’s Awesome Twitter App, the user sees their tweet in an inline experience where they can consume the video, similar to what Twitter calls Expanded Tweets today. However the user’s friends who are using the Twitter website or other Twitter apps just see 140 characters. You can imagine that apps would then compete on how many annotations they supported and on creating interesting new annotations. This is effectively what happened with RSS and its variety of extensions, with no RSS reader supporting the full gamut of them. Heck, I supported more RSS extensions in RSS Bandit in 2009 than Google Reader does today.

This cacophony of Annotations would have meant that not only would there no longer be such a thing as a consistent Twitter experience but Twitter itself would be on an eternal treadmill of supporting various annotations as they were introduced into the ecosystem by various clients and content producers.

Secondly, the ability to put arbitrary machine readable content in tweets would have made Twitter attractive as a “free” publish-subscribe infrastructure for all manner of apps and devices. Instead of building a push notification system to communicate with client apps or devices, an enterprising developer could just create a special Twitter account and have all the end points connect to that. The notion of Twitter-controlled botnets and the rise of home appliances connected to Twitter are all indicative that there is some demand for this capability. If this content not intended for humans ever became a large chunk of the data flowing through Twitter, monetizing it would be extremely challenging. Charging people for using Twitter in this way isn’t easy, since it isn’t clear how you differentiate a 20,000 machine botnet from a moderately popular Internet celebrity with a lot of fans who mostly lurk on Twitter.

I have no idea if Twitter ever plans to ship Annotations but if they do I’d be very interested to see how they would solve the two problems mentioned above.

Challenges Facing App.net as a Consumer Service

Now that we’ve talked about Twitter Annotations, let’s look at what App.net plans to deliver to address the demand left unfulfilled by the promise of Twitter Annotations. From join.app.net we learn that the product intended to be delivered is

OK, great, but what exactly is this product you will be delivering?

As a member, you'll have a new social graph and real-time feed that you access from an App.net mobile application or website. At first, the user experience will be very similar to what Twitter was like before it turned into a media company. On a forward basis, we will focus on expanding our core experience by nurturing a powerful ecosystem based on 3rd-party developer built "apps". This is why we think the name "App.net" is appropriate for this service.

From a developer perspective, you will be able to read and write to a Twitter-like API. Developers behaving in good faith will have free reign to build alternate UIs, new business models of their own, and whatever they can dream up.

As this project progresses, we will be collaborating with you, our backers, while sharing and iterating screenshots of the app, API documentation, and more. There are quite a few technical improvements to existing APIs that we would like to see in the App.net API. For example, longer message character limits, RSS feeds, and rich annotations support.

In short, it intends to be a Twitter clone with a more expansive API policy than Twitter’s. The key problem facing App.net is that, as a social graph based service, it will suffer from the curse of network effects. Social apps need to cross a particular threshold of people on the service before they are useful, and once they cross that threshold it often leads to a “winner-take-all” dynamic where similar sites with smaller user bases bleed users to the dominant service, since everyone is on the dominant service. This is how Facebook killed MySpace & Bebo and Twitter killed Pownce & Jaiku. App.net will need to differentiate itself more than just being the “open version of a popular social network”. Services like Status.net and Diaspora have tried and failed to make a dent with that approach, while fresh approaches to the same old social graph and news feed like Pinterest and Instagram have grown by leaps and bounds.

A bigger challenge is the implication that it wants to be a paid social network. As I mentioned earlier, social graph based services live and die by network effects. The more people that use them, the more useful they get for their members. Charging money for an online service is an easy way to reduce your target audience by at least an order of magnitude (i.e. by 90% or more). As an end user, I don’t see the value of joining a Twitter clone populated only by people who could afford $50 a year, as opposed to joining a free service like Facebook, Twitter or even Google+.

Challenges Facing App.net as a Developer Platform

As a developer it isn’t clear to me what I’m expected to get out of App.net. The question on how they came up with their pricing tiers is answered thusly

Additionally, there are several comps of users being willing to pay roughly this amount for services that are deeply valuable, trustworthy and dependable. For instance, Dropbox charges $10 and up per month, Evernote charges $5 and up per month, Github charges $7 and up per month.

The developer price is inspired by the amount charged by the Apple Developer Program, $99. We think this demonstrates that developers are willing to pay for access to a high quality development platform.

I’ve already mentioned above that network effects are working against App.net as a popular consumer service. The thought of spending $50 a year to target fewer users than I can reach by building apps on Facebook’s or Twitter’s platforms doesn’t sound logical to me. That leaves using App.net as a low cost publish-subscribe mechanism for various apps or devices I deploy. This is potentially interesting, but I’d need more data before I can tell just how interesting it could be. For example, there is a bunch of talk about claiming your Twitter handle and other things which makes it sound like developers only get a single account on the service. True developer friendliness for this type of service would include disclosure on how one could programmatically create and manage nodes in the social graph (aka accounts in the system).

At the end of the day although there is a lot of persuasive writing on join.app.net, there’s just not enough to explain why it will be a platform that will provide me enough value as a developer for me to part with my hard earned $50. The comparison to Apple’s developer program is rich though. Apple has given developers five billion reasons why paying them $99 a year is worth it, we haven’t gotten one good one from App.net.

That said, I wish Dalton luck on this project. Props to anyone who can pick himself up and continue to build new things after what he went through with imeem and PicPlz.

Now Playing: French Montana - Pop That (featuring Rick Ross, Lil Wayne and Drake)


 

Categories: Platforms

I read an interesting blog post by Steven Levy titled Google Glass Team: ‘Wearable Computing Will Be the Norm’ with an interview with the Project Glass team which contains the following excerpt

Wired: Do you think this kind of technology will eventually be as common as smart phones are now?

Lee: Yes. It’s my expectation that in three to five years it will actually look unusual and awkward when we view someone holding an object in their hand and looking down at it. Wearable computing will become the norm.

The above reminds me of the Bill Gates quote, “there's a tendency to overestimate how much things will change in two years and underestimate how much change will occur over 10 years”. Coincidentally the past week has been full of retrospectives on the eve of the fifth anniversary of the iPhone. The iPhone has been a great example of how we can both overestimate and underestimate the impact of a technology. When the iPhone was announced as the convergence of an iPod, a phone and an internet mobile communicator, the most forward thinking assumption was that the majority of the Apple faithful who bought iPods would become iPhone buyers, hastening the demise of the iPod/MP3 player market category.

Five years later, the iPhone has effectively reshaped the computing industry. The majority of tech news today can be connected back to companies still dealing with the fallout of the creation of the iPhone and its progeny, the iPad. Entire categories of products across multiple industries have been made obsolete (or at least redundant), from yellow pages and paper maps to PDAs, point-and-shoot cameras and netbooks. This is in addition to the sociological changes that have been wrought (e.g. some children now think a magazine is a broken iPad). The most shocking change as a techie has been watching usage and growth of the World Wide Web being replaced by usage of mobile apps. No one really anticipated or predicted this five years ago.

Wearable computing will follow a similar path. It is unlikely that within a year or two of products like Project Glass coming to market people will stop using smartphones, especially since there are many uses for the ubiquitous smartphone that Project Glass hasn’t tried to address (e.g. playing Angry Birds or Fruit Ninja at the doctor’s office while waiting for your appointment). However it is quite clear that in our lifetime it will be possible to put together scenarios that would have seemed far fetched even for science fiction just a few years ago. It will one day be possible to look up the Facebook profile (or future equivalent) of anyone you meet at a bar, business meeting or on the street, with the person being none the wiser, simply by looking at them. Most of the technology to do this already exists; it just isn’t in a portable form factor. That is just one scenario that not only will be possible but will be commonplace with products like Project Glass in the future.

Focusing on Project Glass making smartphones obsolete is like focusing on the fact that the iPhone made iPod competitors like the Creative Zen Vision obsolete. Even if it did, that was not the interesting impact. As a software professional, it is interesting to ask yourself whether your product or business will be one of those obsoleted by this technology or empowered by it. Using analogies from the iPhone era, will you be RIM or will you be Instagram?

PS: I have to wonder what Apple thinks of all of this. When I look at the image below, I see a clunky and obtrusive piece of headgear that I can imagine makes Jonathan Ive roll his eyes and think he could do much better. Given Apple’s mantra is “If you don’t cannibalize yourself, someone else will” I expect this space to be very interesting over the next ten years.

Now Playing: Wale - Bag of Money (featuring Rick Ross, Meek Mill and T-Pain)


 

Categories: Technology

I stumbled on an interesting blog post today titled Why Files Exist which contains the following excerpt:

Whenever there is a conversation about the future of computing, the discussion inevitably turns to the notion of a “File.” After all, most tablets and phones don’t show the user anything that resembles a file, only Apps that contain their own content, tucked away inside their own opaque storage structure.

This is wrong. Files are abstraction layers around content that are necessary for interoperability. Without the notion of a File or other similar shared content abstraction, the ability to use different applications with the same information grinds to a halt, which hampers innovation and user experience.

Given that one of the hats I wear in my day job is responsibility for the SkyDrive API, questions like whether the future of computing should include an end user facing notion of files, and how interoperability across apps should work, are often at the top of my mind. I originally wasn’t going to write about this blog post until I saw the discussion on Hacker News, which was a bit disappointing since people either got very pedantic about the specifics of how a computer file is represented in the operating system or argued that inter-app sharing via intents (on Android) or contracts (in Windows 8/Windows RT) makes the notion of files obsolete.

The app-centric view of data (as espoused by iOS) is that apps own any content created within the app, and there is no mechanism outside the app’s user experience to interact with or manage this data. This also means there is no global namespace where other apps or the end user can interact with this data, also known as a file system. There are benefits to this approach, such as greatly simplifying the concepts the user has to deal with and preventing the user or other apps from mucking with the app’s experience. There are also costs to this approach.

The biggest cost, as highlighted in the Why Files Exist post, is that interoperability is compromised. The reason is the well known truism that data outlives applications. My contact list, my music library and the code for my side projects across the years are all examples of data that has outlived the original applications I used to create and manage it. The majority of this content is in the cloud today primarily because I want universal access to my data from any device and any app. A world where moving from Emacs to Visual Studio or from WinAmp to iTunes means losing the files created in those applications would be an unfortunate place to live in the long term.

App-to-app sharing as is done with Android intents or contracts in Windows 8 is a powerful way to create loosely coupled integration between apps. However there is a big difference between one off sharing of data (e.g. share this link from my browser app to my social networking app) to actual migration or reuse of data (e.g. import my favorites and passwords from one browser app to another). Without a shared global namespace that all apps can access (i.e. a file system) you cannot easily do the latter.

The Why Files Exist post ends with

Now, I agree with Steve Jobs saying in 2005 that a full blown filesystem with folders and all the rest might not be necessary, but in every OS there needs to be at least some user-facing notion of a file, some system-wide agreed upon way to package content and send it between applications. Otherwise we’ll just end up with a few monolithic applications that do everything poorly.

Here I actually slightly disagree with characterizing the problem as needing a way to package content and send it between applications. Often my data is conceptually independent of any application; it is more that I want to give apps access to my data, not that I want to package up some of my data and send it from one app to another. For example, I wouldn’t characterize playing MP3s originally ripped in Winamp or bought from Amazon MP3 in iTunes as packaging content between those apps and iTunes. Rather, there is a global concept known as my music library which multiple apps can add to or play from.
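The music library example can be sketched in a few lines. In the snippet below (Python, with illustrative names), two unrelated “apps” cooperate through nothing more than a shared directory in a global namespace; neither needs to know the other exists or implement a sharing handshake:

```python
from pathlib import Path

def ripper_save_track(library, title, data):
    """One app (say, a CD ripper) writes a track into the shared music
    library, which is just a directory in a global namespace."""
    library = Path(library)
    library.mkdir(parents=True, exist_ok=True)
    path = library / f"{title}.mp3"
    path.write_bytes(data)
    return path

def player_list_tracks(library):
    """A completely unrelated player app enumerates the same tracks;
    no app-to-app export or handshake is needed."""
    return sorted(p.stem for p in Path(library).glob("*.mp3"))
```

Contrast this with the intent/contract model, where the ripper would have to be running and explicitly hand each track to the player for the player to ever see it.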

So back to the question that is the title of this blog post; have files outlived their usefulness? Only if you think reusing data across multiple applications has.

Now Playing: Meek Mill - Pandemonium (featuring Wale and Rick Ross)


 

Categories: Cloud Computing | Technology

A story I’ve been following with some bemusement on Techmeme is the freak out about the Girls Around Me app. It started with the article in Cult of Mac titled This Creepy App Isn’t Just Stalking Women Without Their Knowledge, It’s A Wake-Up Call About Facebook Privacy [Update], which strangely blamed Facebook for the fact that an app was written using the FourSquare API that showed women who had recently checked in on FourSquare. True, some of these women had linked their Facebook accounts to FourSquare so one could click through to their Facebook profiles, but they can also link their Twitter profiles, which is strangely not mentioned in the original article.

The reactions against the app have been swift. FourSquare banned the app from calling their API and listed a number of Terms of Service violations as the reason including

Foursquare

We also reserve the right to revoke access to our API for any reason, at our sole discretion. That being said, we aim to be consistent and transparent in our policies and how we enforce them

Apple has similarly acted against them and pulled the app from the App Store. I’ve found all of this interesting since none of this is new and FourSquare itself enables and encourages the sorts of things this app has done.

The notion of using check-ins as a way for single people to find where the ladies are during a night out is not new. The aptly named Where The Ladies At app has been doing this for over a year but anonymizes the information, so you can tell that there have been 10 check-ins from women at that new nightclub in town and only 3 from the bar nearby, but not who they are. However FourSquare itself had already signaled that it planned to move beyond anonymization in an interview granted to the Wall Street Journal a mere three weeks ago titled Foursquare Moves Beyond Check-Ins, which states

Crowley said the company started noticing that many of its 15 million users weren’t using the app’s main function: a check-in feature that lets people broadcast where they are to their friends. Instead, users are increasingly turning to a feature the company launched in February last year called Explore, which gives users data about places around them that their friends have visited and shows them tips that have been left behind.

“There are a lot of people using Foursquare who aren’t checking in. People just open the app to consume data,” explained Crowley. “That’s a really important and interesting trend.”

Crowley said he had an epiphany at the end of 2011 that he needed to pivot the app to make consuming data a more central experience. For example, Crowley said Foursquare’s next version will focus more on Explore. The company also launched a feature in October called Radar, which users can turn on to alert them when they are near places Foursquare thinks they might enjoy.

If you are a FourSquare user, you can try out http://www.foursquare.com/explore and it is hard to understand why it is OK but Girls Around Me isn’t. I posted a screenshot of the Explore part of the FourSquare iOS app and here it is showing me some coffee shops in the Puget Sound area.

Within two clicks I was on the Facebook page of someone who was currently at a Starbucks listed on the map. This is the same functionality for which Girls Around Me was pulled from the Apple App Store and which FourSquare calls a violation of their terms of use. Does that mean the FourSquare iOS app is going to be banned by FourSquare?

The reality is that this is the first time the media has really stopped to think about the risks of using FourSquare and has blown some of their realizations out of proportion. The fact of the matter is if

  1. You connect your Facebook or Twitter account to FourSquare AND
  2. Enable public check-ins

Then total strangers can see where you currently are in real-time and look up more information about you than you’d expect a total stranger sitting across from you at Starbucks would have.

I think this is really a user education issue about the risks of taking the above two steps. I also think it is being blown out of proportion by the tech press, who don’t use FourSquare and can’t come to grips with the fact that people may be OK publicly sharing where they are on FourSquare, since they had to turn on the feature in the first place. Of course, FourSquare nudges you toward this by making public check-ins a requirement for becoming the mayor of a location, but it is still a step users have to explicitly take.

Personally I can’t wait to see if Apple or FourSquare ban the FourSquare iOS app for enabling the same scenarios as the Girls Around Me app. ;)

Now Playing: Rihanna - Talk That Talk (featuring Jay-Z)


 

Categories:

If you hang around technology blogs and news sites, you may have seen the recent dust up after it was discovered that many iOS apps upload user address books to their servers to perform friend finding operations. There has been some outrage about this because in the majority of cases it has happened silently. The reasons for doing this are typically not nefarious. Users sign up to the service and provide their email address as a username or contact field, which is used to uniquely identify the user. Then when one of the user’s friends joins the service, the app asks the friend for their contact list and finds all of the existing users whose email addresses are in the new user’s list of contacts. Such people you may know features are the bread and butter of growing the size of the social graph and connectedness in social applications.
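The friend-finding step described above is essentially a set intersection between an uploaded address book and the service’s table of registered users. A minimal sketch in Python (all names and data here are illustrative, not any real app’s code):

```python
# Sketch of the "people you may know" matching step: intersect the new
# user's uploaded contact list with the service's registered addresses.

def find_friends(uploaded_contacts, registered_users):
    """Return the registered user ids whose email address appears in the
    new user's uploaded contact list."""
    contacts = {email.strip().lower() for email in uploaded_contacts}
    return sorted(uid for uid, email in registered_users.items()
                  if email.lower() in contacts)

# Illustrative data: a service's user table and one uploaded address book.
registered = {"u1": "ann@example.com", "u2": "bob@example.com"}
uploaded = ["Bob@example.com ", "carol@example.com"]

print(find_friends(uploaded, registered))  # ['u2']
```

The controversy comes from the fact that the uploaded contact list arrives at the server as plaintext email addresses and is often stored permanently.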

There are a number of valid problems with this practice and the outrage has focused on one of them: apps were doing this without explicitly telling users what they were doing and then permanently storing these contact lists. However there are other problems as well that get into tricky territory around data ownership and data portability. The trickiest is whether it is OK for me to share your email address and other contact details with another entity without your permission, given that it identifies you. If you are a private individual with an unlisted phone number and a private email address given only to a handful of people, it seems tough to concede that it is OK for me to share this information with any random app that asks, especially if you keep this information private for a reason (e.g. celebrity, high ranking government official, victim of a crime, etc.).

As part of my day job leading the program management team behind the Live Connect developer program which among other things provides access to the Hotmail and Messenger contact lists, these sorts of tricky issues where one has to balance data portability and user privacy are top of mind.  I was rather pleased this morning to read a blog post titled Hashing for privacy in social apps by Matt Gemmell which advocates the approach we took with Live Connect. Matt writes

Herein lies the (false) dilemma: you’re getting a handy social feature (automatic connection with your friends), but you’re losing your privacy (by allowing your friends’ email addresses to be uploaded to Path’s servers). As a matter of fact, your friends are also losing their privacy too.

What an awful choice to have to make! If only there was a third option!

For fun, let’s have a think about what that third option would be.

Mathematics, not magic

Hypothetically, what we want is something that sounds impossible:

  1. Some way (let’s call it a Magic Spell) to change some personal info (like an email address) into something else, so it no longer looks like an email address and can’t be used as one. Let’s call this new thing Gibberish.
  2. It must be impossible (or at least very time-consuming) to change the Gibberish back into the original email address (i.e. to undo the Magic Spell).
  3. We still need a way to compare two pieces of Gibberish to see if they’re the same.

Clearly, that’s an impossible set of demands.

Except that it’s not impossible at all.

We’ve just described hashing, which is a perfectly real and readily-available thing. Unlike almost all forms of magic, it does actually exist - and like all actually-existing forms of magic, it’s based entirely on mathematics. Science to the rescue!

This is the practice we’ve advocated with Live Connect as well. Instead of returning the email addresses of a user’s contacts from our APIs, we provide email hashes instead. That way applications don’t need to store or match against actual email addresses of our users but can still get all the value of providing a user with a way to find their Hotmail or Messenger contacts who also use that service.

We also provide code samples that show how to work with these hashes and I remember being in discussions with folks on the team as to whether developers would ever want to do this since storing and comparing email addresses is less code than working with hashes. Our conclusion was that it would be just a matter of time before this would be an industry best practice and it was best if we were ahead of the curve. It will be interesting to see if our industry learns from this experience or whether it will take external pressure.
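A minimal sketch of hash-based matching in Python. SHA-256 and lowercase/whitespace normalization are my illustrative choices here, not necessarily the exact scheme Live Connect uses:

```python
import hashlib

def email_hash(email):
    """One-way hash of a normalized email address. Both sides must agree
    on the normalization (here: strip whitespace, lowercase) or equal
    addresses will produce different hashes."""
    normalized = email.strip().lower().encode("utf-8")
    return hashlib.sha256(normalized).hexdigest()

# The service exposes only hashes of its users' addresses...
service_hashes = {email_hash("ann@example.com"): "u1",
                  email_hash("bob@example.com"): "u2"}

# ...and the app hashes the uploaded contact list before matching, so
# raw addresses never need to be stored or transmitted for the lookup.
uploaded = ["Bob@example.com", "dave@example.com"]
matches = [service_hashes[h] for c in uploaded
           if (h := email_hash(c)) in service_hashes]
print(matches)  # ['u2']
```

The comparison in point 3 of the quote still works because identical inputs always hash to identical gibberish, while reversing the hash to recover an address is computationally infeasible.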

Now Playing: Notorious B.I.G. - Want That Old Thing Back (featuring Ja Rule and Ralph Tresvant)


 

Categories: Web Development

Towards the end of last year, I realized I was about to bump up against the “use it or lose it” vacation policy at work, which basically means I either had to take about two weeks of paid vacation or forfeit it. Since I hadn’t planned the time off I immediately became worried about what to do with all that idle time, especially since if left to my own devices I’d play 80 straight hours of Modern Warfare 3 without pause.

To make sure the time was productively used I decided to write a mobile app as a learning exercise about the world of mobile development since I’ve read so much about it and part of my day job is building APIs for developers of mobile apps.  I ended up enjoying the experience so much I added an extra week of vacation and wrote two apps for Windows Phone. I’d originally planned to write one app for Windows Phone then port it to iOS or Android but gave up on that due to time constraints after some investigation of both.

I learned a bunch about mobile development from this exercise and a few friends have asked me to share some of my thoughts on mobile development in general and on building for Windows Phone using Microsoft platforms in particular. If you are already a mobile developer then some of this is old hat to you, but I found a bunch of what I learned counterintuitive and fairly eye opening, so you might too.

Thoughts on Building Mobile Apps on Any Platform

This section is filled with items I believe are generally applicable if building iOS, Android or Windows Phone apps. These are mostly things I discovered as part of my original plan to write one app for all three platforms.

  1. A consistent hardware ecosystem is a force multiplier

    After realizing the only option for doing iPhone development on Windows was the Dragon Fire SDK, which only supports games, I focused on learning as much as I could about Android development options. The Xamarin guys have MonoTouch, which sounded very appealing as a way to leverage C# skills across Android and Windows Phone until I saw the $400 price tag. :)

    One of the things I noticed upon downloading the Android SDK, as compared to installing the Windows Phone SDK, is that the Android one came with a bunch of emulators and SDKs for various specific devices. As I developed my apps, there were many times I was thankful for the consistent hardware specifications for Windows Phone. Knowing that the resolution was always going to be WVGA, so that if something looked good in the emulator it would look good on my device and those of my beta testers, not only gave me peace of mind but made UX development a breeze.

    Compare this to an ecosystem like Android, where the diversity of hardware devices with varying screen resolutions has made developers effectively throw up their hands, as in this article quoted by Jeffrey Zeldman

    If … you have built your mobile site using fixed widths (believing that you’ve designed to suit the most ‘popular’ screen size), or are planning to serve specific sites to specific devices based on detection of screen size, Android’s settings should serve to reconfirm how counterproductive a practice this can be. Designing to fixed screen sizes is in fact never a good idea…there is just too much variation, even amongst ‘popular’ devices. Alternatively, attempting to track, calculate, and adjust layout dimensions dynamically to suit user-configured settings or serendipitous conditions is just asking for trouble.

    Basically, you’re just screwed if you think you can build a UI that will work on all Android devices. This is clearly not the case if you target Windows Phone or iOS development. This information combined with my experiences building for Windows Phone convinced me that it is more likely I’ll buy a Mac and start iOS development than it is that I’d ever do Android development.

  2. No-name Web Hosting vs. name brands like Amazon Web Services and Windows Azure

    One of my apps had a web service requirement and I initially spent some time investigating both Windows Azure and Amazon Web Services. Since this was a vacation side project I didn’t want expenses to get out of hand so I was fairly price sensitive. Once I discovered AWS charged less for Linux servers I spent a day or two getting my Linux chops up to speed given I hadn’t used it much since the early 2000s. This is where I found out about yum and discovered the interesting paradox that discovering and installing software on modern Linux distros is simultaneously much easier and much harder than doing so on Windows 7. Anyway, that’s a discussion for another day.

    I soon realized I had been penny wise and pound foolish when focusing on the cost of Linux hosting when it turns out what breaks the bank is database hosting. Amazon charges about $0.11 an hour ($80 a month) for RDS hosting at the low end. Windows Azure seemed to charge around the same ballpark when I looked two months ago but it seems they’ve revamped their pricing site since I did my investigation.

    Once I realized database hosting would be the big deciding factor in cost, it made it easier for me to stick with the familiar and go with the WISC (Windows, IIS, SQL Server, C#) stack instead of LAMP. If I had stuck with LAMP, I could have gone with a provider like Blue Host to get the entire web platform + database stack for less than $10 with perks like free credits for Google ads thrown in. With the WISC stack, hosters like Discount ASP and Webhost 4 Life charge in the ballpark of $15, which drops to about $10 if you swap out SQL Server for MySQL.

    These prices were more my speed. I was quite surprised that even though all the blogs talk about AWS and Azure, it made the most sense for my bootstrapped apps to start with a vanilla web host and pay up to ten times less for service than using one of the name brand cloud computing services. Paying ~$100 a month for services with elastic scaling properties may make sense if my apps stick around and become super successful, but not at the start.

    Another nice side effect of going with a web hosting provider is the reduced complexity from going with a cloud services provider. Anyone who's gone through the AWS getting started guides after coming from vanilla web hosting knows what I mean.
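The back-of-the-envelope arithmetic behind that decision, using the prices quoted above (a sketch; the exact plan rates were rough approximations even at the time):

```python
# Rough monthly cost comparison from the figures quoted above (plan
# names and exact rates are approximations from the time of writing).
HOURS_PER_MONTH = 730

aws_rds_low_end = 0.11 * HOURS_PER_MONTH  # managed database alone, ~$80/month
wisc_shared_host = 15.0                   # shared web + SQL Server bundle
lamp_shared_host = 10.0                   # shared web + MySQL bundle

print(f"AWS RDS (low end): ${aws_rds_low_end:.0f}/month")
print(f"WISC shared host:  ${wisc_shared_host:.0f}/month")
print(f"LAMP shared host:  ${lamp_shared_host:.0f}/month")
```

The managed database alone costs several times more than an entire shared-hosting stack, which is what tipped the decision.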

  3. Facebook advertising beats search ads for multiple app categories

    As mentioned above, one of the perks of some of the vanilla hosting providers is that they throw in free credits for ads on Google AdSense/Adwords and Facebook ads as part of the bundle. I got to experiment with buying ads on both platforms and I came away very impressed with what Facebook has built as an advertising platform.

    I remember reading a few years ago that MySpace had taught us social networks are bad for advertisers. Things are very different in today’s world. With search ads, I can choose to show ads alongside results when people search for a term that is relevant to my app. With Facebook ads, I get to narrowly target demographics based on detailed profile attributes such as Georgia Tech alumni living in New York who have expressed an interest in DC or Marvel comics. The latter seems absurd at first until you think about an app like Instagram.

    No one is searching for "best photo sharing app for the iphone" on Google and even if you are one of the few people who has, there aren’t a lot of you. On the other hand, at launch the creators of Instagram could go to Facebook and say they’d like to show ads to people who own an iPhone and who have also shown an affiliation for photo sharing apps or sites like Flickr, Camera+, etc., then craft specific pitches for those demographics. I don’t know about you but I know which sounds like it would be more effective and relevant.

    This also reminded me that I'd actually clicked on more ads on Facebook than I've ever clicked on search ads.

  4. Lots of unfilled niches still exist

    I remember being in college back in the day, flipping through my copy of Yahoo! Internet Life and thinking that we were oversaturated with websites and all the good ideas were already taken. This was before YouTube, Flickr, SkyDrive, Facebook or Twitter. Silly me. 

    The same can be said about mobile apps today. I hear a lot about there being 500,000 apps in the Apple app store and the same number being in Android Market. To some this may seem overwhelming but there are clearly still niches that are massively underserved on those platforms and especially on Windows Phone which just hit 50,000 apps.

    There are a lot of big and small problems in people's lives that can be addressed by bringing the power of the web to the devices in their pockets in a tailored way. The one thing I was most surprised by is how many apps haven't been written that you'd expect to exist just from extrapolating what we have on the Web and the offline world today. I don't just mean geeky things like a non-propeller head way to share bookmarks from my desktop to my phone and vice versa without emailing myself but instead applications that would enrich the lives of millions of regular people out there that they'd be gladly willing to pay $1 for (less than the price of most brands of bubble gum these days).

    If you are a developer, don't be intimidated by the size of the market nor be attracted to the stories of the folks who've won the lottery by gambling on being in the right place at the right time with the right gimmick (fart apps, sex position guides and yet another photo sharing app). There are a lot of problems that can be solved or pleasant ways to pass the time on a mobile device that haven’t yet been built. Look around at your own life and talk to your non-technical friends about their days. There is lots of inspiration out there if you just look for it.

  5. Look for Platforms that Favor User Experience over Developer Experience

    One of the topics I’ve wanted to write about on this blog is how my generation of software developers, who came of age with the writings of Richard Stallman and Eric Raymond’s The Cathedral and the Bazaar and its focus on making the developers who use your software happy, collides with the philosophy of developers who have come of age in the era of Steve Jobs and what Dave Winer has called The Un-Internet, where providing a controlled experience that is smoother for end users means developers play second fiddle.

    As a developer, having to submit my app to some app store to get it certified when I could publish on the web as soon as I was done checking in the code to my local github repository is something I chafe against. When working on my Windows Phone apps, I submitted one to be certified and found prominent typos a few hours later. However there was nothing I could do but wait five business days for my app to be approved, after which I could submit the updated version to be certified, which would take another week in calendar time. Or so I thought.

    My initial app submission was rejected for failing a test case around proper handling of lack of network connectivity. I had cut some corners in my testing of network availability support once I discovered NetworkInterface.GetIsNetworkAvailable() always returns true in the emulator, which meant I had to actually test that behavior on my phone. I never got around to it, telling myself that no one actually expects a network connected app to work if they don’t have connectivity.

    The Windows Phone marketplace rejected my app because it turns out it crashes if you lose network connectivity. I was actually pretty impressed that someone at Microsoft is tasked with making sure any app a user installs from the store doesn't crash for common edge cases. Then I thought about the fact that my wife, my 3 year old son, and my non-technical friends all use mobile apps, and it is great that this base level of quality expectations is being built into the platform. Now when I think back to Joe Hewitt famously quitting the Apple App Store and compare it to the scam-of-the-week culture that plagues the Android marketplace, I know which model I prefer as a user and a developer. It’s the respect for the end user experience I see coming out of Cupertino and Redmond.

    This respect for end users ends up working for developers too, which is why it is really no surprise that iOS devs make six times more than their Android counterparts: users are more likely to spend money on apps on iOS.

Thoughts on Microsoft-Specific Development

In addition to the general thoughts there were some things specific to either Windows Phone or WISC development I thought were worth sharing as well. Most of these were things I found on the extremely excellent Stack Overflow, a site which cannot be praised enough.

  1. Free developer tools ecosystem around Microsoft technology is mature and surprisingly awesome

    As a .NET developer I’ve been socialized into thinking that Microsoft development means paying an arm and a leg for tools while people building on Open Source platforms get great tools for free. When I was thinking about building my apps on Linux I actually got started using Python, for a web crawler that was intended to be part of my app as well as for my web services. While looking at Python I played around with web.py and wrote the first version of my crawler using Beautiful Soup.

    As I moved on to .NET I worried I’d be stuck without such excellent free tooling, but that was not the case. I found similar and in some cases better functionality for what I was looking for in Json.NET and the HTML Agility Pack. Beyond the surprising amount of high quality, free libraries for .NET development, it was the free tools for working with SQL Server that sent me over the top. Once I grabbed SQL Complete, an autocomplete/Intellisense tool for SQL Server, I felt my development life was complete. Then I found ELMAH. Fatality…I’m dead and in developer heaven.

  2. Building RESTful web services that emit JSON wasn't an expected scenario from Microsoft dev tools teams?

    As part of my day job, I'm responsible for Live Connect which among other things provides a set of RESTful JSON-based APIs for accessing data in SkyDrive, Hotmail and Windows Live Messenger. So it isn't surprising that when I wanted to build a web service for one of my side projects I'd want to do the same. This is where things broke down.

    The last time I looked at web services development on the WISC stack, the way to build web services was to use Windows Communication Foundation (WCF). So I decided to take a look at that and found out that the product doesn’t really support JSON-based web services out of the box, though I could grab something called the WCF Web API off of CodePlex. Given that project seemed less mature than the others I’d gotten off of CodePlex, I decided to look at ASP.NET and see what I could get there, since it needs to enable JSON-based REST APIs as part of its much touted jQuery support. When I got to the ASP.NET getting started page, I was greeted with the statement that ASP.NET enables building 3 patterns of websites and I should choose my install based on what I wanted. Given that I wanted to build a web service, not an actual website, I didn’t pick any of them.

    Since I was short on time (after all, this was my vacation) I went back to what I was familiar with and used ASP.NET web services with HTTP GET & POST enabled. I’m not sure what the takeaway is here since I clearly built my solution using a hacky mechanism and not a recommended approach yet it is surprising to me that what seems like such a mainline scenario isn’t supported in a clear out-of-the-box manner by Microsoft’s dev tools.
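For contrast, here is roughly the kind of bare-bones JSON-over-HTTP endpoint I had in mind, sketched with Python’s standard library rather than any of the Microsoft stacks discussed above (purely illustrative, not what I shipped):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def json_response(data):
    """Build the status, headers and body for a JSON reply."""
    body = json.dumps(data).encode("utf-8")
    headers = {"Content-Type": "application/json",
               "Content-Length": str(len(body))}
    return 200, headers, body

class ApiHandler(BaseHTTPRequestHandler):
    """GET-only handler that always returns one JSON document; a real
    service would route on self.path and talk to a database."""
    def do_GET(self):
        status, headers, body = json_response({"status": "ok"})
        self.send_response(status)
        for name, value in headers.items():
            self.send_header(name, value)
        self.end_headers()
        self.wfile.write(body)

# To actually serve requests:
# HTTPServer(("localhost", 8080), ApiHandler).serve_forever()
```

The point is how little ceremony the scenario needs, which is what made the WCF/ASP.NET experience of the time feel so surprising.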

  3. Embrace the Back Button on Windows Phone

    One of the things I struggled with the most as part of Windows Phone development was dealing with the application lifecycle. The problem is that at any point the user can jump out of your app, and the operating system will either put your app in a dormant state, where data is still stored in memory, or tombstone your app, in which case it is killed and only whatever state your app has explicitly saved is preserved.

    One of the ways I eventually learned to think about this the right way was to aggressively use the back button while testing my app. This led to finding all sorts of interesting problems and solutions, such as how to deal with a login screen when the user clicks back, and that a lot of logic I thought should be in the constructor of a page really should be in the OnNavigatedTo method (and don’t forget to de-register some of those event handlers in your OnNavigatedFrom event handler).
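The register-on-enter, de-register-on-exit pattern generalizes beyond Windows Phone. A language-neutral sketch in Python, where the method names deliberately mirror the Windows Phone ones but everything else is illustrative:

```python
class EventSource:
    """Minimal stand-in for something that fires events (e.g. a sensor)."""
    def __init__(self):
        self.handlers = []
    def subscribe(self, handler):
        self.handlers.append(handler)
    def unsubscribe(self, handler):
        self.handlers.remove(handler)

class Page:
    """Pair every subscription made on navigating to the page with a
    matching unsubscribe on navigating away, so a dormant or tombstoned
    page doesn't leak subscriptions or fire callbacks on dead UI."""
    def __init__(self, source):
        self.source = source

    def on_navigated_to(self):
        # Per-visit setup belongs here rather than in __init__: the
        # page can be re-entered repeatedly via the Back button.
        self.source.subscribe(self.handle_event)

    def on_navigated_from(self):
        self.source.unsubscribe(self.handle_event)

    def handle_event(self, data):
        print("page saw:", data)

source = EventSource()
page = Page(source)
page.on_navigated_to()
page.on_navigated_from()
print(len(source.handlers))  # 0: nothing left subscribed after leaving
```

Driving the page through repeated to/from transitions in tests is the back-button exercise described above, just automated.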

I could probably write more on this but this post has gotten longer than I planned and I need to take my son to daycare and get ready for work. I’ll try to be a more diligent blogger this year, as long as the above doesn’t make too many people unhappy.

Happy New Year.

Now Playing: Kanye West - Devil In A New Dress (featuring Rick Ross)


 

Categories: Programming | Web Development

Spent the morning reading a well argued rant by Maciej Ceglowski titled The Social Graph is Neither where he argues that the current way human relationships are modeled on sites like Facebook is fundamentally flawed. He makes two basic arguments. The first

It’s Not a Graph

There's another fundamental problem in that a graph is a static thing, with no concept of time. Real life relationships are a shared history, but in the social graph they're just a single connection. My friend from ten years ago has the same relationship to me as the friend I dined with yesterday. You're left with forcing people (or their software) to maintain lists like 'Recent Contacts' because there is no place in the model to fit this information.

"No problem," says Poindexter. "We'll add a time series of state transitions and exponentially decaying edge weights, model group dynamics as directional flows, and pass a context object in with each query..." and around we go.

This obsession with modeling has led us into a social version of the Uncanny Valley, that weird phenomenon from computer graphics where the more faithfully you try to represent something human, the creepier it becomes. As the model becomes more expressive, we really start to notice the places where it fails.

Personally, I think finding an adequate data model for the totality of interpersonal connections is an AI-hard problem. But even if you disagree, it's clear that a plain old graph is not going to cut it.

Here I think Maciej is looking at the problem from the wrong end. The question isn’t whether we can perfectly model the real world in software but instead whether we can use software to improve the quality of our lives by solving real problems. I think of this as the Xanadu vs. World Wide Web problem. You can point to a dozen problems that exist on the web as designed today, from broken links and the frailty of DNS to the problems caused by anonymity such as spam and phishing. This hasn’t stopped the web from becoming the center of an $8 trillion economy because it solves a lot of human problems even though it is imperfect.

Thus the question isn’t whether a product solves the problem of how to differentiate between my high school friend I haven’t seen in 10 years, whom I still think of as a brother, and the coworkers I communicate with regularly but have no real interest in outside of work. The questions are actually (i) can a product add value to people’s lives without needing to add the complexity of modeling that abstraction and (ii) does the benefit of the “improvement” of solving that problem outweigh the costs it introduces to the system? I think the answer to the first question when it comes to the social graph is clearly “Yes”. As for the second, we really don’t know the answer yet, but from seeing the various imperfect attempts like Circles in Google+ I would bet the answer will be “No” for a long time. Which brings me to Maciej’s second point.

It’s Not Social

The problem FOAF ran headlong into was that declaring relationships explicitly is a social act. Documenting my huge crush on Matt in an XML snippet might faithfully reflect the state of the world, but it also broadcasts a strong signal about me to others, and above all to Matt. The essence of a crush is that it's furtive, so by declaring it in this open (but weirdly passive) way I've turned it into something different and now, dammit, I have to go back and edit my FOAF file again.

This is a ridiculous example (though it comes up with strange regularity in the docs), but we run into its milder manifestations all the time. Your best friend from high school surfaces and sends a friend request. Do you just click accept, or do you send a little message? Or do you ignore him because you don't want to deal with the awkward situation? Declaring connections is about as much fun as trying to whittle people from a guest list, with the added stress that social networking is too new for us to have shared social conventions around it.

Social graph proponents seem uninterested in the signaling problem. Leaving aside the technical issues of how to implemented, how does cutting ties actually work socially? Is there any way to be discreet, for example, or have connections naturally degrade over time? In real life, all relationships fade naturally if you don't maintain them, but right now social networks preserve ties in amber until we explicitly break them. Is my sister going to resent me if I finally defriend her annoying husband? Can I unfollow my ex now, or is that going to make her think I'm still hung up on her?

There's no way to take a time-out from our social life and describe it to a computer without social consequences. At the very least, the fact that I have an exquisitely maintained and categorized contact list telegraphs the fact that I'm the kind of schlub who would spend hours gardening a contact list, instead of going out and being an awesome guy. The social graph wants to turn us back into third graders, laboriously spelling out just who is our fifth-best-friend. But there's a reason we stopped doing that kind of thing in third grade!

I agree with the core premise that relying on people to explicitly declare and maintain their relationships in minute detail is a flawed enterprise. Maciej is 100% right that there are explicit social signals with real consequences when we decide to declare, even in the most mundane of public ways, that we are connected to others. My favorite example is the amount of scrutiny applied to Rep. Anthony Weiner's Twitter following list and how many point to his explicit declaration of interest in a set of female followers as a catalyst which helped further his fall from grace. The thing I like about that example is that the nuance of simply declaring what kind of relationship you have with people on Twitter or categorizing them would not have helped in that situation.

The reality is that the publicness and interconnectedness of the World Wide Web is causing us to create new social norms, and we are still figuring them out. Just as people who leave rural areas for cities realize that some social norms they’d grown up with all their lives now have to change, so is humanity figuring out as we go along how to adjust our behavior, and in some cases broaden the list of behaviors we accept, as we become more interconnected online. In the past few years, we’ve seen Zuckerberg's law of information sharing play out in the real world, and it’s amazing to think of the sorts of things people regularly share and expect to have shared with them compared to just a few years ago. It’s hard to believe that it was just five years ago that Time magazine was writing about how Facebook’s 8 million users would abandon the site because of the “intrusive” news feed; now Facebook has 800 million users thanks to that intrusive feature.

Note Now Playing: Lil Wayne - It's Good (featuring Drake & Jadakiss) Note


 

Categories: Social Software

September 18, 2011
@ 07:39 PM

Yesterday I was reading an article titled Why Facebook is the New Yahoo by Mike Elgan which argues that Facebook adding features driven by its competition with Google+ “smacks of desperation”. The article’s core argument is excerpted below

The only way for Facebook (or any online service for that matter) to succeed is to re-invent itself. Facebook is scrambling to do so, trying this, trying that, desperately looking to thrill users with expanded engagement with existing social graphs. And Facebook has failed again and again.

Facebook tried to become the default e-mail client for members when it rolled out Facebook Messages, which enabled people to use a facebook.com e-mail address. Remember that?

Neither do I. Nobody uses it.

Then Facebook saw that FourSquare and Groupon were gaining some traction with social location check-in and coupons, and so it launched Places and Deals.

Nobody cared, and Facebook killed both of them.

Facebook would get a huge boost from usage on tablets -- tablets and social networks were made for each other, because they’re both used in the same way at the same time (most heavily while at home during leisure time). Yet Facebook has failed to come out with a tablet app, even though the iPad shipped a year and a half ago!

Now Facebook’s desperate new strategy appears to be: Just copy Google+.

I always find it interesting that different people can look at the same data and come to very different conclusions. The reason Facebook has 750 million active monthly users today and is widely presumed to one day get to a billion active users is because it constantly reinvents itself. The problem for authors like Mike Elgan is that Facebook has bucked the traditional narrative for big technology companies.

The tech press loves the innovator's dilemma, or disruptive technology, narrative: the story of a scrappy young company that comes from the blind spot of some big entrenched company to become dominant itself. They also love tearing down that same company a few years later when another scrappy upstart shows up. This narrative is with us constantly; from Google’s social blind spot leaving an opening for Facebook to RIM being disrupted by touch-based smartphones with thriving app platforms. Even better for the story is when the upstart is a “web” company versus a brick-and-mortar player, such as Netflix versus Blockbuster.

The challenge for the tech press when it comes to Facebook is that the company deeply understands this narrative. After all, Facebook was the usurper to MySpace in the classic tale of an entrenched major player being disrupted by a scrappy upstart. Facebook has its ear to the ground when it comes to potential usurpers and quickly moves to blunt their momentum, often by what many have described as copying features. There are numerous examples of this, including social Q&A (Quora), location check-ins (FourSquare) and a more interactive news feed (FriendFeed).

The problem for tech watchers is that Facebook doesn’t let the innovator's dilemma narrative get off the ground. Before too many people could get hooked on social Q&A, check-ins or more interaction in the news feed, Facebook made sure its users associated those features with its site. One could argue that the lack of mainstream penetration of Quora, FourSquare and FriendFeed is partially because the bulk of what they offer is already available on Facebook; it’s hard to convince a mainstream user to join those sites when they already get that functionality on Facebook.

Therein lies the problem with Facebook. By definition, Facebook can’t go as deep on any of these scenarios as dedicated sites can, which means users are introduced to slightly watered-down versions of these new ideas, since they still have to fit into Facebook’s site structure and core goals. However, there’s just enough functionality for people either to be satisfied with the experience (e.g. no reason to join FriendFeed when all of that functionality is on Facebook) or to decide they dislike the idea, even though Facebook's version doesn’t go as deep as it could (e.g. Facebook Places versus FourSquare). The latter is particularly pernicious because it means interesting new startup ideas don’t really get a chance to blossom before the mainstream is introduced to them.

I’m reminded a little of the world of RSS readers. A few years ago there was a lot of innovation in client RSS readers, from commercial offerings like FeedDemon and NewsGator Inbox to home-grown projects like RSS Bandit. Eventually, however, RSS support was added to the big gorilla in client communication tools: Outlook. When this happened, a lot of the innovation in the space dried up, and it didn’t take long for Outlook to become the dominant RSS reader, despite the fact that its RSS support didn’t go nearly as deep as what dedicated readers provided.

I see the same thing happening with Facebook when it comes to a number of social software ideas and it makes me a little sad to think about what we’re losing even though we are gaining the convenience of a one-stop shop for social.

Note Now Playing: Jay-Z and Kanye West - Illest Motherf**ker Alive (Includes intentional 3-minute silence) Note


 

Categories: Social Software