The big news in tech circles is that this is the week Google Reader died. There has been a lot of collective shock, since Google Reader was a staple of the technology news omnivores and early adopters who were once considered Google’s core fan base. Despite Google’s protestations, it’s quite clear there was ample usage of the service, with sites like Buzzfeed providing evidence that it was driving over a million referrals a day up to its final day. As someone who wrote a feed reader that synced with Google Reader and has been downloaded over 1.5 million times, and who is credited in the Atom format specification, I find it sad to see such a core part of the RSS ecosystem come tumbling down.

The biggest surprise for many is how far Google seems to have strayed from its core mission, as epitomized by this comment on Hacker News

ivank: Organizing the World's Information... and setting it on fire 8 years later.

Like many companies before it, Google has done a great job of separating its public persona from the gritty reality of its business practices. Google’s business mission is to organize all the world’s information and use that to sell advertising. The great thing about this business model is that for a long time it has been a win-win for everyone involved in the ecosystem. Consumers get for free great web services and products that are expensive to build and run, such as Google Search, Gmail, Chrome and Android. Advertisers get to target consumers in new ways with a better return on investment than they have traditionally gotten from print and other media. Meanwhile, Google has minted hundreds of millionaires with the profits from creating this ecosystem.

Unfortunately, Google has also hit the same problems that successful companies before it have had to face. The law of large numbers means that for Google to continue to be a great business for its shareholders (which include its employees and executives), it needs to find ways to eke even more money out of its customer base (advertisers) by giving them even more product (consumers with rich demographic and behavioral information) or by growing into new markets. Companies like Apple and Microsoft have faced this problem by growing new multi-billion dollar businesses. Apple went from a computer company to the iPod company, then to the iPhone company, and is now transitioning to being the iPad company. Microsoft started off as the BASIC company, transitioned to being the Windows & Office company, and today has several billion-dollar businesses ranging from game consoles to database software. Google has decided to face this challenge by doubling down on its advertising business, which means sucking in even more data via acquisitions like Waze and ITA as well as extracting more revenue from advertisers by removing consumer targeting options.

For the most part, Google has used misdirection to divert tech press scrutiny from this ongoing attempt to monopolize data sources and hoover up personal information about users for later resale. A few years ago the Google playbook was to release some technology branded as “open”: OpenSocial, which aimed to paint Facebook as “evil”; open-sourced Android, which painted Apple as “evil”; or Google Chrome, which was released with an excellent comic book explaining why it was so open. How could one complain about Google with a straight face when it was giving all of these free services to consumers and “open” technologies to the industry? No one would buy it. This was the strategy of the old Google under Eric Schmidt, and it worked beautifully.

It has been an interesting hallmark of Google under Larry Page that it doesn’t play these games anymore. Pursuing Google’s business strategies at the expense of its public image as the good guy of the technology industry is now par for the course. I was taken aback when Google announced that it was going to fork WebKit, since this was a blatantly political move that many technology-savvy geeks would see through. For days afterwards, the article A Short Translation from Bullshit to English of Selected Portions of the Google Chrome Blink Developer FAQ was a viral hit among developers on Twitter, with excerpts such as

1.2 Is this new browser engine going to fragment the web platform's compatibility more?

Yes.

We intend to distract people from this obvious problem by continually implying that our as-yet unwritten replacement is somehow much better and more sophisticated than the rendering engine that until yesterday was more than good enough to permit us to achieve total dominance of the Windows desktop browsing market in less than two years.

This strategy has worked extremely well for Netscape, Microsoft, Apple and us in previous iterations of the browser wars, and we firmly believe that everyone in this industry was born yesterday and they will not recognise this for the total bullshit it so clearly is.

1.10 Is this going to be open source?

Not really.

While you can certainly read the source code, we're fully aware that actually tracking and understanding a modern HTML renderer is extremely difficult. In addition, the first changes we will make are intended specifically to break compatibility with WebKit, so the only organisation with sufficient resources to track our changes will no longer be able to do so.

In practice, this allows us to call the project "open" while simultaneously ensuring Google will be the only effective contributor to the Chrome and Blink source now and in the future. We've had enormous success co-opting the language of open source in the past to imply our products are better, and we aim to continue with that strategy.

So what does all of this have to do with Google Reader? As Marco Arment points out in his excellent post titled Lockdown, Google is in a battle with Facebook and Twitter to suck up user demographic and behavioral data around social signals. Google Reader is the antithesis of that strategy, since it is far too decentralized compared to the model pioneered by Facebook Pages and Twitter.

Google Reader has been living on borrowed time since Facebook and Twitter became prominent. The only thing that has changed in 2013 is that Google’s management no longer thinks it’s worth throwing a bone to millions of geeks and early adopters by keeping the lights on at the service with its existing skeleton crew. This new Google doesn’t care if geeks and early adopters just see it as another money-hungry corporation that only focuses on the bottom line. Larry Page doesn’t give a shit.

Welcome to the new Google.

Now Playing: J.Cole - Can't Get Enough (featuring Trey Songz)


 

The problem with Google Reader was that by the time you solved all of the problems with the RSS experience, you’d effectively invented Twitter.

And Jack Dorsey already did that.

Now Playing: Lil Wayne - Love Me (featuring Drake & Future)


 

In a past life, I worked on the social news feed for a number of Microsoft products including the Messenger Social feed in Hotmail & Messenger and most recently the People app in Windows 8. When I worked on these products, we strongly believed in the integrity of the user experience and so never considered the social feed as a canvas for showing users ads.

Thus I read a pair of recent posts by Dalton Caldwell, founder of App.net, with some interest. Dalton wrote about the recent moves that both Twitter and Facebook are making to ensure that the social feeds on their sites become a great canvas for showing ads.

In his post Understanding Like-gate Dalton writes

The best ad is indistinguishable from content

We can expect to see Facebook deemphasizing traditional advertising units in favor of promoted news stories in your stream. The reason is that the very best advertising is content. Blurring the lines between advertising and content is one of the most ambitious goals a marketer could have.

Bringing earnings expectations into this, the key to Facebook “fixing” their mobile advertising problem is not to create a new ad-unit that performs better on mobile. Rather, it is for them to sell the placement of stories in the omnipresent single column newsfeed. If they are able to nail an end-to-end promoted stories system, then their current monetization issues on mobile disappear.

In his post Twitter is pivoting Dalton writes

Predicting the future

In this paradigm, Twitter’s business model is to help brands “amplify their reach”. A brand participating in Twitter can certainly distribute their content for free and get free organic traffic, but if they want to increase their reach, they need to pay.

It’s no accident that this sounds exactly like the emerging Facebook business model. As discussed in that link, algorithmically filtered primary feeds are vastly easier to advertise against vs unfiltered feeds. The issue for Twitter is that Facebook already has a far larger userbase which is already trained to read an algorithmically filtered feed.

In a twist I wouldn’t have predicted a few years ago, it is now a regular occurrence for users of both Facebook and Twitter to see ads in their feeds. Twitter has gone as far as effectively crippling new Twitter apps to ensure that every Twitter user gets an ads-heavy, unfiltered Twitter experience. The reason for this is straightforward. Both companies face sky-high expectations from investors, as evidenced by the $100 billion valuation Facebook has failed to live up to and Twitter's $8 - $10 billion valuation on $100 million in revenues. The challenge for both services is that investors are expecting Google-like returns on investment, but neither of these companies has a Google-like business model.

The problem with ad-supported online businesses is that for the most part their business models suck. In a traditional business, if you focus on building a great product or service that provides an excellent customer experience, then you will be making money hand over fist. In most ad-supported online businesses, your business is selling your product’s audience as opposed to the product itself. That means if you want to make more money you have to pimp out your audience, often in disrespectful and intrusive ways, to eke out that extra dollar.

The one place where this is different is online search (i.e. Google’s primary business). In web search, the ads aren’t just indistinguishable from content; in the most lucrative cases, the ads are better than the content. As an example, take a look at these searches

Since we may get different results, I’m including a screenshot below

There are 8 ads in this screenshot and 2 search results. However, instead of being irritated, as I would be if the ratio of ads to content were 4:1 in a YouTube video or Facebook feed, I find the ads actually more relevant than the organic search results. This is the holy grail that Twitter and Facebook are trying to achieve.

As Dalton points out, Facebook has already socialized its users to the notion that brands will post commercial messages to the feed. In addition, brands have grown so entitled to this free distribution that when asked to pay for these placements, since they are effectively ads, they get outraged. However, Facebook has been boiling this particular frog for a while. Facebook encourages advertisers to create Pages and users to like Pages so that they can stay connected to the brands they care about. Content in your feed from people and brands you don’t follow snuck in under the aegis of showing you content your friends interacted with. Finally, not only has Facebook had promoted posts for brands for a while, it now also allows users to promote their personal posts to friends for $7 a pop.

Without really thinking about it much, we’re halfway to a future where a significant percentage of the content in your Facebook feed is paid. Since these posts go through the same ranking algorithm as your regular feed, they are more likely to be relevant to you than the traditional ad products that Facebook and other online properties are known for today. When the goal is to be entertained, do you really care whether the viral video of the day shared by a friend is a paid impression or not?
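
To make that concrete, here is a toy sketch of the claim: a feed ranker that scores organic and promoted items with the same relevance model, so a paid impression buys a bounded boost rather than a guaranteed slot. Every field name and weight below is invented for illustration.

    def relevance(item, user):
        # The same signals score every item, paid or not.
        score = 2.0 * len(set(item["topics"]) & set(user["interests"]))
        if item["author"] in user["friends"]:
            score += 1.0
        return score

    def rank_feed(items, user):
        # A promoted post gets a bounded boost, not a reserved position.
        return sorted(items,
                      key=lambda i: relevance(i, user) + (0.5 if i["promoted"] else 0.0),
                      reverse=True)

    user = {"interests": {"fitness", "music"}, "friends": {"alice"}}
    feed = [{"author": "alice", "topics": ["music"], "promoted": False},
            {"author": "gymco", "topics": ["fitness"], "promoted": True}]
    print([i["author"] for i in rank_feed(feed, user)])  # ['alice', 'gymco']

Under a scheme like this, an irrelevant ad loses to a friend’s post, which is exactly why feed ads can feel less intrusive than traditional banner ads.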

Twitter is playing catch-up here, and if it fails to do so, the flop that was Facebook’s IPO will look tame in comparison.

Now Playing: Macklemore & Ryan Lewis - Can't Hold Us (featuring Ray Dalton)


 

November 13, 2012
@ 01:38 PM

I’ve had about four hours of sleep but can’t seem to go back to sleep. There’s a pain of loss that feels like a death in the family, and I hope writing this down helps in some way with dealing with it.

Yesterday it was announced that Steven Sinofsky is leaving Microsoft. As someone who considered Steven a role model of executive leadership and a source of my faith in the future of Microsoft, this is a big shock. Part of me acknowledges that change is a natural part of life and nothing lasts forever, but this is still a difficult event to digest. Steven was a leader who understood how to leverage the strengths of an organization to build world-class products while protecting the organization from its inherent self-defeating nature. As the saying goes, a group is its own worst enemy.

When I think about Steven Sinofsky’s leadership style, I’m reminded of Joel Spolsky’s guide to interviewing, which has the following succinct description of a great hire

In principle, it’s simple. You’re looking for people who are

  1. Smart, and
  2. Get things done.

That’s it. That’s all you’re looking for. Memorize that. Recite it to yourself before you go to bed every night. You don’t have enough time to figure out much more in a short interview, so don’t waste time trying to figure out whether the candidate might be pleasant to be stuck in an airport with, or whether they really know ATL and COM programming or if they’re just faking it.

People who are Smart but don’t Get Things Done often have PhDs and work in big companies where nobody listens to them because they are completely impractical. They would rather mull over something academic about a problem rather than ship on time. These kind of people can be identified because they love to point out the theoretical similarity between two widely divergent concepts. For example, they will say, “Spreadsheets are really just a special case of programming language,” and then go off for a week and write a thrilling, brilliant whitepaper about the theoretical computational linguistic attributes of a spreadsheet as a programming language. Smart, but not useful. The other way to identify these people is that they have a tendency to show up at your office, coffee mug in hand, and try to start a long conversation about the relative merits of Java introspection vs. COM type libraries, on the day you are trying to ship a beta.

People who Get Things Done but are not Smart will do stupid things, seemingly without thinking about them, and somebody else will have to come clean up their mess later. This makes them net liabilities to the company because not only do they fail to contribute, but they soak up good people’s time. They are the kind of people who decide to refactor your core algorithms to use the Visitor Pattern, which they just read about the night before, and completely misunderstood, and instead of simple loops adding up items in an array you’ve got an AdderVistior class (yes, it’s spelled wrong) and a VisitationArrangingOfficer singleton and none of your code works any more.

One of the interesting problems facing a large software company is that it is very easy to become full of smart people who don’t get things done, and then to institutionalize this behavior by crowning them software architects or some equivalent. Steven’s leadership style encouraged a process and organizational structure, which you can read about in his book One Strategy: Organization, Planning, and Decision Making, that encourages getting things done by limiting the ability of the organization, and the people within it, to stray far from the goal of shipping a valuable product on time and within budget.

There are lots of people who disagreed with his philosophy and approach, but it is hard to argue with the results of his efforts. Under him, the team that shipped Windows Vista turned around and shipped Windows 7, the big-ass table became one of Oprah's favorite things, and, closest to home for me, a mishmash of confusing consumer synchronization products became SkyDrive.

The way things get done in Steven’s organizations is so straightforward it hurts. You spend some time thinking about what you want to build, you write it down so the entire team has a shared vision of what they’re going to build, and then you build it. The part where things become contentious is that getting things done (aka shipping) requires discipline. This means not changing your mind without good reason once you’ve decided what to build, and knowing when to cut losses if things are coming in late or over budget. A great post about what it is like for an engineer working in a Steven Sinofsky organization that embraces these principles was written by Larry Osterman about Windows 7.

Each of the feature crews I’ve worked on so far has had dramatically different focuses – some of the features I worked on were focused on core audio infrastructure, some were focused almost entirely on UX (user experience) changes, and some features involved much higher level components. Because each of the milestones was separate, I was able to work on a series of dramatically different pieces of the system, something I’ve really never had a chance to do before.

In Windows 7, senior management has been extremely supportive of the various development teams that have had to make the hard decisions to scale back features that were not going to be able to make the quality bar associated with a Windows release – and there absolutely are major features that have gone all the way through planning only to discover that there was too much work associated with the feature to complete it in the time available. In Vista it would have been much harder to convince senior management to abandon features. In Win7 senior management has stood behind the feature teams when they’ve had to make the tough decisions. One of the messages that management has consistently driven home to the teams is “cutting is shipping”, and they’re right. If a feature isn’t coming together, it’s usually far better to decide NOT to deliver a particular feature than to have that feature jeopardize the ability to ship the whole system. In a typical Windows release there are thousands of features and it would be a real shame if one or two of those features ended up delaying the entire system because they really weren’t ready.

The process of building 7 has also been dramatically more transparent – even sitting at the bottom of the stack, I feel that I’ve got a good idea about how decisions are being made. And that increased transparency in turn means that as an individual contributor I’m able to make better decisions about scheduling. This transparency is actually a direct fallout of management’s decision to let the various feature teams make their own decisions – by letting the feature teams deeper inside the planning process, the teams naturally make better decisions.

Of course that transparency works both ways. Not only were teams allowed to see more about what was happening in the planning process, but because management introduced standardized reporting mechanisms across the product, the leads at every level of the hierarchy were able to track progress against plan at a level that we’ve never had before. From an individual developer’s standpoint, the overhead wasn’t too onerous – basically once a week, you were asked to update your progress against plan on each of your work items. That status was then rolled up into a series of spreadsheets and web pages that allowed each manager to track all the teams’ progress against plan. This allowed management to easily and quickly identify which teams were having issues and take appropriate action to ensure that the schedules were met (either by simplifying designs, assigning more developers, or whatever).

Transparency was also a cornerstone of Steven’s leadership style. The level of insight into the organization’s decision-making process, via the formalized mechanisms described above as well as his personal decision making, was unprecedented in my experience at Microsoft. It may not be as transparent as Google’s TGIF but, on the other hand, I don’t think there’s anywhere else at Microsoft where visibility into how and why decisions are made is as clear as in the Windows organization.

At the end of the day, I’ll miss Steven and his influence on Microsoft. I’d like to think I became a better manager and leader from my time spent working in his organization as well as from the multiple exchanges we had over the years. Thanks for the memories.

Now Playing: Fall Out Boy - Thnks fr th Mmrs


 

Categories: Life in the B0rg Cube

September 22, 2012
@ 03:06 PM

Last month I read Mike Arrington’s Why I Changed My Mind On Klout (And Invested) and realized that I’d similarly changed my perspective on the much-maligned social influence measuring service, Klout. My change of heart was due to two unrelated sets of occurrences.

The first step in changing my mind was the high-profile acquisitions of Vitrue by Oracle for $300 million and Buddy Media by Salesforce for $689 million. Both companies were sold for hundreds of millions of dollars for building enterprise versions of ping.fm: tools for managing a company’s social media presence across social networks like Facebook, Twitter and Google+. The lesson from these acquisitions is that just because something sounds dumb as a consumer play doesn’t mean it isn’t a great enterprise play. More importantly, they made clear that helping companies figure out social media is a serious business.

The second incident that contributed to my rethinking of Klout was Facebook's acquisition of Karma. Karma was co-founded by Lee Linden after his initial success with Tapjoy, which grew into a company with $100 million in revenues. Lee was a friend of mine during his Microsoft days, and I can still remember him as this hyperactive guy who couldn’t stop talking about starting his own company and taking advantage of the opportunities in the industry. I remember him telling me about his idea for a mobile startup that would be an ad exchange to help mobile devs maximize the revenue they got from ads in their free apps. I thought the idea had a low barrier to entry but don’t remember actively pointing that out. A few pivots later, the idea evolved into a pay-per-install ad network pulling in $100 million a year. The lesson from this was that once a good team actually learns the challenges particular businesses face in an area, they can pivot their product to better serve those customers.

So how do these things apply to Klout? Klout tries to figure out who the experts are on particular topics across various social networks. This is valuable to at least two interesting players

  1. Social CRM products: The companies acquired by Oracle and Salesforce can now sell products that help businesses better manage their Facebook and Twitter profiles, but there is still a missing piece. The next logical step is helping companies figure out who their most valuable followers are on those sites and helping them target those customers. Giving a local business like the Pro Sports Club in Bellevue (for example) a one-stop shop for creating and posting to Facebook, Twitter and Google+ profiles is cool. But even better would be telling them which of their customers to give a few perks to, confident that those customers have a lot of “clout” with their audience on things like fitness recommendations. Helping the Pro Sports Club find the budding Jillian Michaels in their customer base would be worth a ton of money to them.

  2. Twitter: The gripes about how bad the targeted ads are on Twitter are the stuff of legend. Personally, I have grown tired of the number of times I’ve seen ads for women’s hair products or home installations of air conditioning systems in my Twitter stream. Even though it is a crude approximation, the inferred topics of influence on my Klout profile would be a much better basis for Twitter to use when showing me ads than whatever algorithm it is using today (see the sketch after this list). From Twitter’s perspective, Klout is sitting on a goldmine of information. Twitter attempting to acquire Klout sounds as inevitable as its acquisition of Summize to beef up its search product a few years ago.
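
To make the Twitter scenario concrete, here is a hypothetical sketch of topic-matched ad selection. The get_topics callable stands in for a Klout topics lookup; it is not Klout’s actual API, and the ad inventory is invented.

    def pick_ads(user_id, inventory, get_topics):
        # Rank ads by how much they overlap with the user's inferred topics
        # of influence, and drop the ones with no overlap at all.
        topics = set(get_topics(user_id))
        matched = [(len(topics & set(ad["topics"])), ad) for ad in inventory]
        return [ad for overlap, ad in sorted(matched, key=lambda m: -m[0]) if overlap]

    inventory = [{"name": "women's hair products", "topics": ["beauty"]},
                 {"name": "developer conference", "topics": ["technology"]}]
    print(pick_ads("me", inventory, lambda _: ["technology", "social media"]))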

In short, after thinking about it I’m convinced Klout provides a valuable service that is worth a lot of money to certain players in the industry. That said, as a social media user I do think it’s unfortunate that a service exists to score one’s participation in social media, since it creates a set of incentives that may lead to unsavory behavior that harms the ecosystem as a whole.

Now Playing: Trey Songz - 2 Reasons (featuring T.I.)


 

Categories: Social Software

I’ve spent the last two years leading a PM team that has been part of building software experiences that make me immensely proud. The team has built software experiences that millions of people will use in Windows 8 and a developer platform that will enable thousands of developers to build great software. Over the course of the past year we’ve delivered

  1. The social experiences in the Windows 8 People app. With Windows 8 you get a great way to browse and share your friends’ updates on Facebook and Twitter. The feedback we’ve received about this functionality has been extremely positive, which has been quite humbling.

  2. A straightforward way for Metro style apps to take advantage of Single Sign On in Windows with the Live SDK.

  3. A developer platform for SkyDrive which has enabled developers to integrate SkyDrive across multiple apps, websites and devices.

This has been one of the most exciting and fulfilling times of my career. After about eight years working in the same organization at Microsoft, first as part of MSN and now Windows Live, I’ve decided to move to another part of the company.

Over the course of the past few years, I’ve looked on at Microsoft’s search competition with Google and often wondered why, although there’s been a lot of focus on beating or matching Google in search relevance and experience, there hasn’t been as much said about how we’d compete with AdWords, especially since that’s actually how money is made in the space.

Recently I was giving one of my friends who works in our ads space feedback after using a number of ads products, including Facebook’s, Google’s and Microsoft’s. He asked whether, instead of complaining, I wouldn’t rather just come join the team and actually help out. I thought “why not?” and since then I’ve become a lead program manager on the Bing Ads team.

My new team will be responsible for a number of things including the Bing Ads API. Regular readers of my blog shouldn’t expect any changes. If anything I’ll try to increase my pace of posting once I’m ramped up in my new gig and can come up with a sane blog posting schedule.

Now Playing: Big Boi - Fo Yo Sorrows (featuring George Clinton and Too Short)


 

Categories: Personal

Eran Hammer-Lahav, the former editor of the OAuth 2.0 specification, announced that he would no longer be the editor of the standard in a harshly critical blog post entitled OAuth 2.0 and the Road to Hell, where he made a number of key criticisms of the specification, the meat of which is excerpted below

Last month I reached the painful conclusion that I can no longer be associated with the OAuth 2.0 standard. I resigned my role as lead author and editor, withdrew my name from the specification, and left the working group. Removing my name from a document I have painstakingly labored over for three years and over two dozen drafts was not easy. Deciding to move on from an effort I have led for over five years was agonizing.

There wasn’t a single problem or incident I can point to in order to explain such an extreme move. This is a case of death by a thousand cuts, and as the work was winding down, I’ve found myself reflecting more and more on what we actually accomplished. At the end, I reached the conclusion that OAuth 2.0 is a bad protocol. WS-* bad. It is bad enough that I no longer want to be associated with it. It is the biggest professional disappointment of my career.

All the hard fought compromises on the mailing list, in meetings, in special design committees, and in back channels resulted in a specification that fails to deliver its two main goals – security and interoperability. In fact, one of the compromises was to rename it from a protocol to a framework, and another to add a disclaimer that warns that the specification is unlikely to produce interoperable implementations.

When compared with OAuth 1.0, the 2.0 specification is more complex, less interoperable, less useful, more incomplete, and most importantly, less secure.

To be clear, OAuth 2.0 at the hand of a developer with deep understanding of web security will likely result in a secure implementation. However, at the hands of most developers – as has been the experience from the past two years – 2.0 is likely to produce insecure implementations.

Given that I’ve been professionally associated with OAuth 2.0 over the past few years, from using OAuth 2.0 as the auth method for SkyDrive APIs to acting as an advisor for the native support of OAuth 2.0 style protocols in the Web Authentication Broker in Windows 8, I thought it would be useful to provide some perspective on what Eran has written as an implementer and user of the protocol.

The Good: Easier to work with than OAuth 1.0

I’ve been a big fan of web technologies for a fairly long time. The great thing about the web is that it is the ultimate distributed system: you cannot make assumptions about any of the clients accessing your service, as people have tended to do in the enterprisey world of the past. This encourages technologies to be as simple as possible to reduce friction, which has led to the rise of drop-dead simple protocols like HTTP and data formats like JSON.

One of the big challenges with OAuth 1.0 is that it pushed a fairly complex and fragile set of logic onto the app developers working with the protocol. This blog post from the Twitter platform team on the most complicated feature in their API bears this out

Ask a developer what the most complicated part of working with the Twitter API is, and there's a very good chance that they'll say OAuth. Anyone who has ever written code to calculate a request signature understands that there are several precise steps, each of which must be executed perfectly, in order to come up with the correct value.

One of the points of our acting on your feedback post was that we were looking for ways to improve the OAuth experience.

Given that there were over 750,000 registered Twitter developers last year, this is a lot of pain spread out across their ecosystem. OAuth 2.0 greatly simplifies the interaction model between clients and servers by eliminating the requirement to compute request signatures as part of the authentication and authorization process.
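
To see how much ceremony OAuth 2.0 eliminates, here is a minimal sketch of OAuth 1.0 request signing as the Twitter post describes it, using only the Python standard library. It is simplified (a real implementation must also fold in query string and body parameters and handle further encoding edge cases), and all keys are placeholders.

    import base64, hashlib, hmac, time, urllib.parse, uuid

    def oauth1_signature(method, url, params, consumer_secret, token_secret):
        # 1. Percent-encode every key and value, then sort and concatenate.
        encoded = sorted((urllib.parse.quote(k, safe=""), urllib.parse.quote(v, safe=""))
                         for k, v in params.items())
        param_string = "&".join(f"{k}={v}" for k, v in encoded)
        # 2. Build the signature base string: METHOD&encoded-url&encoded-params.
        base_string = "&".join([method.upper(),
                                urllib.parse.quote(url, safe=""),
                                urllib.parse.quote(param_string, safe="")])
        # 3. HMAC-SHA1 keyed with both secrets, then base64. Get any of the
        # steps above even slightly wrong and the server rejects the request.
        key = "&".join([urllib.parse.quote(consumer_secret, safe=""),
                        urllib.parse.quote(token_secret, safe="")])
        digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
        return base64.b64encode(digest).decode()

    params = {"oauth_consumer_key": "CONSUMER_KEY",
              "oauth_token": "ACCESS_TOKEN",
              "oauth_signature_method": "HMAC-SHA1",
              "oauth_timestamp": str(int(time.time())),
              "oauth_nonce": uuid.uuid4().hex,
              "oauth_version": "1.0"}
    params["oauth_signature"] = oauth1_signature(
        "GET", "https://api.twitter.com/1/statuses/home_timeline.json",
        params, "CONSUMER_SECRET", "TOKEN_SECRET")

    # The OAuth 2.0 equivalent of all of the above is a single header:
    headers = {"Authorization": "Bearer ACCESS_TOKEN"}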

 

The Bad: It’s a framework not a protocol

The latest draft of the OAuth 2.0 specification has the following disclaimer about interoperability

OAuth 2.0 provides a rich authorization framework with well-defined security properties. However, as a rich and highly extensible framework with many optional components, on its own, this specification is likely to produce a wide range of non-interoperable implementations.

In addition, this specification leaves a few required components partially or fully undefined (e.g. client registration, authorization server capabilities, endpoint discovery).  Without these components, clients must be manually and specifically configured against a specific authorization server and resource server in order to interoperate.

What this means in practice for developers is that learning how one OAuth 2.0 implementation works is unlikely to help you figure out how another compliant one behaves, given the degree of latitude implementers have. The likelihood that you could take authentication/authorization code written with a standard library like DotNetOpenAuth against one OAuth 2.0 implementation, point it at a different implementation by changing only a few URLs, and expect things to work is extremely low.
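
As an illustration, consider how differently two compliant providers handled the same authorization code exchange circa 2012. The details below are simplified and from memory, so treat them as illustrative rather than authoritative.

    import json, urllib.parse, urllib.request

    # Same spec, different behavior: Facebook's token endpoint returned a
    # form-encoded body while Google's returned JSON, and required
    # parameters differed. The client code cannot simply be pointed at a
    # different URL and reused.
    ENDPOINTS = {"facebook": "https://graph.facebook.com/oauth/access_token",
                 "google": "https://accounts.google.com/o/oauth2/token"}

    def exchange_code(provider, code):
        data = urllib.parse.urlencode({
            "code": code,
            "client_id": "CLIENT_ID",
            "client_secret": "CLIENT_SECRET",
            "redirect_uri": "https://example.com/callback",
            "grant_type": "authorization_code",  # required by Google, ignored by Facebook
        }).encode()
        body = urllib.request.urlopen(ENDPOINTS[provider], data).read().decode()
        if provider == "facebook":
            # form-encoded: access_token=...&expires=...
            return urllib.parse.parse_qs(body)["access_token"][0]
        # JSON: {"access_token": "...", "expires_in": 3600, ...}
        return json.loads(body)["access_token"]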

In practice I expect this to be less problematic than it sounds on paper, simply because at the end of the day authentication and authorization are a small part of any API story. In general, most people will still get the Facebook SDK, Live SDK, Google Drive SDK, etc. for their target platform to build their apps, and it is never going to be true that those are portable between services. For services that don’t provide multiple SDKs, the rest of the APIs will be so different that the fact that the developer’s auth code has to change will not be that big of a deal.

That said, it is unfortunate that one cannot count on a degree of predictability across OAuth 2.0 implementations.

The Ugly: Making the right choices is left as an exercise for the reader

The biggest whammy in the OAuth 2.0 specification, which Eran implies is the reason he decided to quit, is hinted at in the closing sentence of the aforementioned disclaimer

This framework was designed with the clear expectation that future work will define prescriptive profiles and extensions necessary to achieve full web-scale interoperability.

This implies that there are a bunch of best practices for utilizing a subset of the protocol (i.e. prescriptive profiles) that are yet to be defined. As Eran said in his post, here is a list of places where the spec provides no guidelines

  • No required token type
  • No agreement on the goals of an HMAC-enabled token type
  • No requirement to implement token expiration
  • No guidance on token string size, or any value for that matter
  • No strict requirement for registration
  • Loose client type definition
  • Lack of clear client security properties
  • No required grant types
  • No guidance on the suitability or applicability of grant types
  • No useful support for native applications (but lots of lip service)
  • No required client authentication method
  • No limits on extensions

There are a number of places where it would be a bad idea for an implementer to skip a feature without considering the security implications, token expiration being the obvious example. In my day job, I’ve also been bitten by the lack of guidance on token string sizes, with some of our partners making assumptions about token size that later turned out to be inaccurate, which led to scrambling on both sides.
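
The practical defense is to assume nothing the spec doesn’t guarantee. Here is a hypothetical sketch of what that looks like for token storage and expiration.

    import time

    class TokenStore:
        # The spec guarantees neither a maximum token size nor that tokens
        # expire, so this hypothetical store assumes nothing about either.
        MAX_TOKEN_BYTES = 2048  # generous bound instead of assuming "tokens are short"

        def __init__(self):
            self._tokens = {}  # user_id -> (access_token, refresh_token, expires_at)

        def save(self, user_id, access_token, refresh_token, expires_in=None):
            if len(access_token.encode("utf-8")) > self.MAX_TOKEN_BYTES:
                raise ValueError("token exceeds storage bound; widen the schema")
            # If the server omitted expires_in, treat the token as already
            # stale so every use revalidates instead of assuming it is eternal.
            expires_at = time.time() + expires_in if expires_in else 0
            self._tokens[user_id] = (access_token, refresh_token, expires_at)

        def get_fresh(self, user_id, refresh_fn):
            access, refresh, expires_at = self._tokens[user_id]
            if time.time() >= expires_at - 60:  # refresh with a safety margin
                access, expires_in = refresh_fn(refresh)
                self.save(user_id, access, refresh, expires_in)
            return access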

My advice for people considering implementing OAuth 2.0 on their service is to ensure there is a security review of whatever subset of the features you are implementing before deploying the service at large. If you can’t afford or don’t have security people on staff, then at the minimum I’d recommend picking one of the big guys (e.g. Google, Facebook or Microsoft) and implementing the same features they have, since they have people on staff whose job is to figure out a secure combination of OAuth 2.0 features to implement, as opposed to picking and choosing without a frame of reference.

Now Playing: Notorious B.I.G. - You're Nobody Till Somebody Kills You


 

Categories: Web Development

July 15, 2012
@ 11:04 PM

A few weeks ago Dalton Caldwell, founder of imeem and PicPlz, wrote a well-received blog post titled What Twitter could have been, in which he laments the fact that Twitter hasn’t fulfilled some of the early promise developers saw in it as a platform. Specifically he writes

Perhaps you think that Twitter today is a really cool and powerful company. Well, it is. But that doesn’t mean that it couldn’t have been much, much more. I believe an API-centric Twitter could have enabled an ecosystem far more powerful than what Facebook is today. Perhaps you think that the API-centric model would have never worked, and that if the ad guys wouldn’t have won, Twitter would not be alive today. Maybe. But is the service we think of as Twitter today really the Twitter from a few years ago living up to its full potential? Did all of the man-hours of brilliant engineers, product people and designers, and hundreds of millions of VC dollars really turn into, well, this?

His blog post struck a chord with developers, which led Dalton to follow it up with another blog post titled announcing an audacious proposal as well as launching join.app.net. Dalton’s proposal is as follows

I believe so deeply in the importance of having a financially sustainable realtime feed API & service that I am going to refocus App.net to become exactly that. I have the experience, vision, infrastructure and team to do it. Additionally, we already have much of this built: a polished native iOS app, a robust technical infrastructure currently capable of handling ~200MM API calls per day with no code changes, and a developer-facing API provisioning, documentation and analytics system. This isn’t vaporware.

To manifest this grand vision, we are officially launching a Kickstarter-esque campaign. We will only accept money for this financially sustainable, ad-free service if we hit what I believe is critical mass. I am defining minimum critical mass as $500,000, which is roughly equivalent to ~10,000 backers.

As you might expect from someone who has worked on software as both an API client and a platform provider of APIs, I have a few thoughts on this topic. So let’s start from the beginning.

The Promise of Twitter Annotations

About two years ago, Twitter CEO Dick Costolo wrote an expansive “state of the union” style blog post about the Twitter platform. Besides describing the state of the Twitter platform at the time, it also set forth the direction the platform intended to go in and made a set of promises about how Twitter saw its role relative to its developer ecosystem. Dick wrote

To foster this real-time open information platform, we provide a short-format publish/subscribe network and access points to that network such as www.twitter.com, m.twitter.com and several Twitter-branded mobile clients for iPhone, BlackBerry, and Android devices. We also provide a complete API into the functions of the network so that others may create access points. We manage the integrity and relevance of the content in the network in the form of the timeline and we will continue to spend a great deal of time and money fostering user delight and satisfaction. Finally, we are responsible for the extensibility of the network to enable innovations that range from Annotations and Geo-Location to headers that can route support tickets for companies. There are over 100,000 applications leveraging the Twitter API, and we expect that to grow significantly with the expansion of the platform via Annotations in the coming months.

There was a lot of excitement in the industry about Twitter Annotations, both from developers and from the usual suspects in the tech press like TechCrunch and ReadWriteWeb. Blog posts such as How Twitter Annotations Could Bring the Real-Time and Semantic Web Together show how excited some developers were by the possibilities presented by the technology. For those who aren’t familiar with the concept, annotations were supposed to be the way to attach arbitrary metadata to a tweet beyond the well-known 140 characters. Below is a screenshot from a Twitter presentation showing this concept visually

Although it’s been two years since Twitter Annotations was supposed to begin testing, the feature has never been released, nor has it been talked about by the Twitter platform team in recent months. Many believe that Twitter Annotations will never be released due to a changing of the guard within Twitter, which is hinted at in Dalton Caldwell’s What Twitter could have been post

As I understand, a hugely divisive internal debate occurred among Twitter employees around this time. One camp wanted to build the entire business around their realtime API. In this scenario, Twitter would have turned into something like a realtime cloud API company. The other camp looked at Google’s advertising model for inspiration, and decided that building their own version of AdWords would be the right way to go.

As you likely already know, the advertising group won that battle, and many of the open API people left the company.

It is this seeming change in direction that Dalton has seized on to create join.app.net.

 

Why Twitter Annotations was a Difficult Promise to Make

I have no idea what went on within Twitter, but as someone who has built both platforms and API clients, the challenges in delivering a feature such as Twitter Annotations seem quite obvious to me. A big problem when delivering a platform as part of an end-user-facing service is that there are often trade-offs one has to make at the expense of one side or the other. Doing things that make end users happy may end up ticking off people who’ve built businesses on your platform (e.g. layoffs caused by the Google Panda update, companies losing customers overnight after the launch of Facebook’s timeline, etc.). On the other hand, doing things that make developers happy can create suboptimal user experiences, such as the “openness” of Android as a platform making it a haven for mobile malware.

The challenge with Twitter Annotations is that it threw user experience consistency out the window. Imagine that the tweet shown in the image above was created by Dare’s Awesome Twitter App, which allows users to “attach” a TV episode from a source like Hulu or Netflix to a tweet before posting it. In Dare’s Awesome Twitter App, the user sees their tweet as an inline experience where they can consume the video, similar to what Twitter calls Expanded Tweets today. However, the user’s friends who are using the Twitter website or other Twitter apps just see 140 characters. You can imagine that apps would then compete on how many annotations they supported and on creating interesting new annotations. This is effectively what happened with RSS and the variety of RSS extensions, with no RSS reader supporting the full gamut of extensions. Heck, I supported more RSS extensions in RSS Bandit in 2009 than Google Reader does today.
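
Since Annotations never shipped, any example is speculative, but based on Twitter’s 2010 proposal an annotated tweet would have looked roughly like the payload below, where each annotation is a namespaced set of key/value pairs that only cooperating clients understand. The "tv_episode" namespace and URL are invented.

    import json

    status = {
        "text": "This episode is incredible",
        "annotations": [{"tv_episode": {"title": "Pilot",
                                        "provider": "Hulu",
                                        "url": "https://example.com/watch/123"}}],
    }
    # Dare's Awesome Twitter App renders an inline player from the annotation;
    # every other client shows only status["text"].
    print(json.dumps(status, indent=2))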

This cacophony of annotations would have meant not only that there would no longer be such a thing as a consistent Twitter experience, but also that Twitter itself would be on an eternal treadmill of supporting various annotations as they were introduced into the ecosystem by various clients and content producers.

Secondly, the ability to put arbitrary machine-readable content in tweets would have made Twitter attractive as a “free” publish-subscribe infrastructure for all manner of apps and devices. Instead of building a push notification system to communicate with client apps or devices, an enterprising developer could just create a special Twitter account and have all the end points connect to that. The notion of Twitter-controlled botnets and the rise of home appliances connected to Twitter are both indicative that there is some demand for this capability. If this content not intended for humans ever became a large chunk of the data flowing through Twitter, monetizing it would be extremely challenging. Charging people for using Twitter in this way isn’t easy, since it isn’t clear how you differentiate a 20,000-machine botnet from a moderately popular Internet celebrity with a lot of fans who mostly lurk on Twitter.
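
The pub-sub pattern is trivially easy to build, which is exactly the problem. A sketch follows, with post_status and get_timeline as hypothetical stand-ins for whatever Twitter client library is in use.

    import json

    # A fleet of devices publishes machine-readable payloads through one
    # Twitter account; subscribers poll that account's timeline.
    def publish(post_status, sensor_id, reading):
        payload = json.dumps({"s": sensor_id, "r": reading})
        assert len(payload) <= 140  # the whole message must fit in a tweet
        post_status(payload)

    def consume(get_timeline, since_id=None):
        for tweet in get_timeline(since_id=since_id):
            yield json.loads(tweet["text"])  # content meant for machines, not people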

I have no idea if Twitter ever plans to ship Annotations but if they do I’d be very interested to see how they would solve the two problems mentioned above.

Challenges Facing App.net as a Consumer Service

Now that we’ve talked about Twitter Annotations, let’s look at what App.net plans to deliver to address the demand that has yet to be fulfilled by the promise of Twitter Annotations. From join.app.net we learn the product that is intended to be delivered is

OK, great, but what exactly is this product you will be delivering?

As a member, you'll have a new social graph and real-time feed that you access from an App.net mobile application or website. At first, the user experience will be very similar to what Twitter was like before it turned into a media company. On a forward basis, we will focus on expanding our core experience by nurturing a powerful ecosystem based on 3rd-party developer built "apps". This is why we think the name "App.net" is appropriate for this service.

From a developer perspective, you will be able to read and write to a Twitter-like API. Developers behaving in good faith will have free reign to build alternate UIs, new business models of their own, and whatever they can dream up.

As this project progresses, we will be collaborating with you, our backers, while sharing and iterating screenshots of the app, API documentation, and more. There are quite a few technical improvements to existing APIs that we would like to see in the App.net API. For example, longer message character limits, RSS feeds, and rich annotations support.

In short, it intends to be a Twitter clone with a more expansive API policy than Twitter’s. The key problem facing App.net is that, as a social-graph-based service, it will suffer from the curse of network effects. Social apps need to cross a particular threshold of people on the service before they are useful, and once they cross that threshold it often leads to a “winner-take-all” dynamic where similar sites with smaller user bases bleed users to the dominant service, since everyone is on the dominant service. This is how Facebook killed MySpace & Bebo and Twitter killed Pownce & Jaiku. App.net will need to differentiate itself beyond being an “open version of a popular social network”. Services like Status.net and Diaspora have tried and failed to make a dent with that approach, while fresh approaches to the same old social graph and news feed like Pinterest and Instagram have grown by leaps and bounds.

A bigger challenge is the implication that it wants to be a paid social network. As I mentioned earlier, social-graph-based services live and die by network effects. The more people that use them, the more useful they get for their members. Charging money for an online service is an easy way to reduce your target audience by at least an order of magnitude (i.e. by at least 90%). As an end user, I don’t see the value of joining a Twitter clone populated only by people who could afford $50 a year, as opposed to joining a free service like Facebook, Twitter or even Google+.
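
A back-of-envelope calculation shows how much this hurts if you believe network value scales faster than linearly, for example Metcalfe-style with the square of the user count. The numbers are illustrative.

    def network_value(users):
        # Metcalfe-style assumption: value grows with the square of user count.
        return users ** 2

    free_audience = 1_000_000
    paid_audience = free_audience // 10  # a paywall cuts the audience ~10x
    print(network_value(free_audience) / network_value(paid_audience))  # 100.0

Under that assumption, a 10x smaller audience is not a 10x less valuable network but a 100x less valuable one.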

Challenges Facing App.net as a Developer Platform

As a developer, it isn’t clear to me what I’m expected to get out of App.net. The question of how they came up with their pricing tiers is answered thusly

Additionally, there are several comps of users being willing to pay roughly this amount for services that are deeply valuable, trustworthy and dependable. For instance, Dropbox charges $10 and up per month, Evernote charges $5 and up per month, Github charges $7 and up per month.

The developer price is inspired by the amount charged by the Apple Developer Program, $99. We think this demonstrates that developers are willing to pay for access to a high quality development platform.

I’ve already mentioned above that network effects are working against App.net as a popular consumer service. The thought of spending $50 a year to target fewer users than I can reach by building apps on Facebook’s or Twitter’s platforms doesn’t sound logical to me. That leaves using App.net as a low-cost publish-subscribe mechanism for various apps or devices I deploy. This is potentially interesting, but I’d need more data before I can tell just how interesting it could be. For example, there is a bunch of talk about claiming your Twitter handle and other things which makes it sound like developers only get a single account on the service. True developer friendliness for this type of service would include disclosure of how one could programmatically create and manage nodes in the social graph (aka accounts in the system).

At the end of the day, although there is a lot of persuasive writing on join.app.net, there’s just not enough to explain why it will be a platform that provides enough value for me as a developer to part with my hard-earned $50. The comparison to Apple’s developer program is rich, though. Apple has given developers five billion reasons why paying it $99 a year is worth it; we haven’t gotten one good one from App.net.

That said, I wish Dalton luck on this project. Props to anyone who can pick himself up and continue to build new things after what he went through with imeem and PicPlz.

Now Playing: French Montana - Pop That (featuring Rick Ross, Lil Wayne and Drake)


 

Categories: Platforms

I read an interesting blog post by Steven Levy titled Google Glass Team: ‘Wearable Computing Will Be the Norm’, an interview with the Project Glass team which contains the following excerpt

Wired: Do you think this kind of technology will eventually be as common as smart phones are now?

Lee: Yes. It’s my expectation that in three to five years it will actually look unusual and awkward when we view someone holding an object in their hand and looking down at it. Wearable computing will become the norm.

The above reminds me of the Bill Gates quote, “there's a tendency to overestimate how much things will change in two years and underestimate how much change will occur over 10 years”. Coincidentally, the past week has been full of retrospectives on the eve of the fifth anniversary of the iPhone. The iPhone has been a great example of how we can both overestimate and underestimate the impact of a technology. When the iPhone was announced as the convergence of an iPod, a phone and an Internet mobile communicator, the most forward-thinking assumption was that the majority of the Apple faithful who bought iPods would buy iPhones, heading off the demise of the iPod/MP3 player market category.

Five years later, the iPhone has effectively reshaped the computing industry. The majority of tech news today can be connected back to companies still dealing with the fallout of the creation of the iPhone and its progeny, the iPad. Entire categories of products across multiple industries have been made obsolete (or at least redundant), from yellow pages and paper maps to PDAs, point-and-shoot cameras and netbooks. This is in addition to the sociological changes that have been wrought (e.g. some children now think a magazine is a broken iPad). The most shocking change as a techie has been watching usage and growth of the World Wide Web be supplanted by usage of mobile apps. No one really anticipated or predicted this five years ago.

Wearable computing will follow a similar path. It is unlikely that within a year or two of products like Project Glass coming to market people will stop using smartphones, especially since there are many uses for the ubiquitous smartphone that Project Glass hasn’t tried to address (e.g. playing Angry Birds or Fruit Ninja at the doctor’s office while waiting for your appointment). However, it is quite clear that in our lifetime it will be possible to put together scenarios that would have seemed far-fetched as science fiction just a few years ago. It will one day be possible to look up the Facebook profile (or future equivalent) of anyone you meet at a bar, business meeting or on the street, with the person being none the wiser, simply by looking at them. Most of the technology to do this already exists; it just isn’t in a portable form factor. That is just one scenario that will not only be possible but commonplace with products like Project Glass in the future.

Focusing on whether Project Glass will make smartphones obsolete is like focusing on the fact that the iPhone made iPod competitors like the Creative Zen Vision obsolete. Even if it did, that was not the interesting impact. As a software professional, it is more interesting to ask yourself whether your product or business will be one of those obsoleted by this technology or empowered by it. Using analogies from the iPhone era, will you be RIM or will you be Instagram?

PS: I have to wonder what Apple thinks of all of this. When I look at the image below, I see a clunky and obtrusive piece of headgear that I imagine makes Jonathan Ive roll his eyes and think he could do much better. Given that Apple’s mantra is “If you don’t cannibalize yourself, someone else will”, I expect this space to be very interesting over the next ten years.

Now Playing: Wale - Bag of Money (featuring Rick Ross, Meek Mill and T-Pain)


 

Categories: Technology

I stumbled on an interesting blog post today titled Why Files Exist which contains the following excerpt

Whenever there is a conversation about the future of computing, the discussion inevitably turns to the notion of a “File.” After all, most tablets and phones don’t show the user anything that resembles a file, only Apps that contain their own content, tucked away inside their own opaque storage structure.

This is wrong. Files are abstraction layers around content that are necessary for interoperability. Without the notion of a File or other similar shared content abstraction, the ability to use different applications with the same information grinds to a halt, which hampers innovation and user experience.

Given that one of the hats I wear in my day job is responsibility for the SkyDrive API, questions like whether the future of computing should include an end-user-facing notion of files, and how interoperability across apps should work, are often at the top of my mind. I originally wasn’t going to write about this blog post until I saw the discussion on Hacker News, which was a bit disappointing since people either got very pedantic about the specifics of how a computer file is represented in the operating system or argued that inter-app sharing via intents (on Android) or contracts (in Windows 8/Windows RT) makes the notion of files obsolete.

The app-centric view of data (as espoused by iOS) is that apps own any content created within the app and there is no mechanism outside the app’s user experience to interact with or manage this data. This also means there is no global namespace where other apps or the end user can interact with this data, otherwise known as a file system. There are benefits to this approach, such as greatly simplifying the concepts the user has to deal with and preventing either the user or other apps from mucking with the app’s experience. But there are costs to this approach as well.

The biggest cost, as highlighted in the Why Files Exist post, is that interoperability is compromised. The reason is the well-known truism that data outlives applications. My contact list, my music library and the code for my side projects over the years are all examples of data that has outlived the original applications I used to create and manage it. The majority of this content is in the cloud today primarily because I want universal access to my data from any device and any app. A world where moving from Emacs to Visual Studio or WinAmp to iTunes means losing the files created in those applications would be an unfortunate place to live in the long term.

App-to-app sharing, as done with Android intents or contracts in Windows 8, is a powerful way to create loosely coupled integration between apps. However, there is a big difference between one-off sharing of data (e.g. sharing a link from my browser app to my social networking app) and actual migration or reuse of data (e.g. importing my favorites and passwords from one browser app to another). Without a shared global namespace that all apps can access (i.e. a file system) you cannot easily do the latter.
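
The difference is easy to see in code. In the sketch below (paths invented for illustration), any app, including one written ten years from now, can enumerate the user’s music without the original app’s cooperation, while app-private storage is reachable only through the owning app’s explicit share/export flow.

    from pathlib import Path

    # Shared global namespace: every player app, past and future, can read it.
    MUSIC_LIBRARY = Path.home() / "Music"

    def playable_tracks():
        return sorted(MUSIC_LIBRARY.rglob("*.mp3"))

    # Contrast: app-private storage in the intents/contracts model, reachable
    # only through the owning app's explicit share/export flow.
    APP_PRIVATE_STORE = Path.home() / ".someplayer" / "opaque-store"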

The Why Files Exist post ends with

Now, I agree with Steve Jobs saying in 2005 that a full blown filesystem with folders and all the rest might not be necessary, but in every OS there needs to be at least some user-facing notion of a file, some system-wide agreed upon way to package content and send it between applications. Otherwise we’ll just end up with a few monolithic applications that do everything poorly.

Here I actually slightly disagree with characterizing the problem as needing a way to package content and send it between applications. Often my data is conceptually independent of any application, and it is more that I want to give apps access to my data, not that I want to package up some of my data in one app and send it to another. For example, I wouldn’t characterize playing MP3s in iTunes that were originally ripped in Winamp or bought from Amazon MP3 as packaging content between those apps and iTunes. Rather, there is a global concept known as my music library which multiple apps can add to or play from.

So back to the question that is the title of this blog post: have files outlived their usefulness? Only if you think reusing data across multiple applications has.

Now Playing: Meek Mill - Pandemonium (featuring Wale and Rick Ross)


 

Categories: Cloud Computing | Technology