Yesterday Twitter announced Fabric, a new mobile SDK for Android and iOS composed of four distinct pieces:

  • Crashlytics – an application analytics package that gives developers tools to measure how their apps are being used and to gauge app quality in the wild (i.e. crashes).
  • Twitter Kit – an SDK that makes it easy to add Twitter integration such as signing in with Twitter, embedding tweets or posting tweets from your app.
  • MoPub Kit – makes it easy to embed ads from Twitter’s ad network in your app so you can make money.
  • Digits – makes it easy for any app to build phone-number-based sign-in similar to what Skype Qik and WhatsApp have. This is quite frankly a game changer.

The response to this release I’ve seen online has swung between two extremes: fawning adoration from the tech press proclaiming that Twitter has moved beyond tweets into mobile services, and skepticism from developers who don’t trust Twitter, as represented in the tweet below

The root of this angst is Twitter’s tumultuous relationship with developers of Twitter clients, which eventually led to their infamous quadrant of death post that effectively limited the growth of any app whose primary function was to be a replacement Twitter experience. This hurt many developers who had been working on Twitter reading experiences and in fact led to the CEO of Flipboard quitting Twitter’s board in disgust.

Thus it is a valid question for developers to ask whether they can trust Twitter this time. The answer is yes, for a very simple reason: Twitter’s API moves in 2012 and yesterday’s announcements were born of the same motive, to grow its primary business of selling ads tied to its mobile experiences. In 2012, Twitter had to address the fact that liberal exposure of its service via its API had created a situation where a huge slice of its user base was using the service through experiences Twitter could not effectively monetize.

At the height of the third-party Twitter app boom, only about half of Twitter’s users (42%) were using official apps, although the share using third-party apps dwindled as Twitter stepped up its mobile app efforts and sent the message to app developers that it no longer wanted them competing on mobile experiences.

 

Taking control of the primary user experience for Twitter was the smart business decision and is why they now generate over a billion dollars a year as a business.

This brings us to Fabric. All four components aid Twitter’s core business of selling ads for mobile experiences.

  • Twitter Kit increases engagement with Twitter by making it easy for users to consume and generate tweets from other apps without those apps being a threat to Twitter by becoming competing experiences.
  • Digits allows Twitter to build a profile of users based on their phone number the same way Facebook builds a profile of users based on the apps and websites they visit that use Facebook Connect.
  • Crashlytics + MoPub is the Trojan horse, taking a similar approach to Flurry (which Yahoo acquired for a reported $200 million). Crashlytics is an incredibly useful component that is valuable to all mobile apps since they all care about user behavior and crashes. Once you’re hooked on Crashlytics, it’s easier to upsell you on also using Twitter’s ad network and hence $$$.

All of these efforts help Twitter’s core business and it would be insanity for them to screw developers by abandoning them just as it would have been insanity for them to pursue an ad-based business model in a world where a huge chunk of their most active users were using 3rd party apps as their primary Twitter experience.

So go ahead, try out Fabric and judge it on its merits. I’m curious to hear what you think.

Now Playing: Chris Brown - Loyal (featuring Lil Wayne and Tyga)


 

Categories: Platforms

I’ve read a number of articles about account security, passwords and secret questions this week for obvious reasons. Although I’ve seen a number of posts directed at end users as to how to better safeguard their accounts, I haven’t seen anything similar providing guidance to developers of online services on how to better safeguard their users in what is a very hostile environment.

Below are the top five (plus a bonus one) account security features that every competent online service should have implemented. None of these are groundbreaking, but it is quite clear that many services we all use every day don’t implement even these basic security features, thus putting our data at risk.

  1. Strong passwords including banning common passwords: The most basic practice is requiring that users create a strong password, often by requiring some combination of a minimum length and a mix of upper and lower case characters, and by encouraging the use of punctuation. Although this is a good first step, there are other steps services need to take to ensure their users are using hard-to-guess passwords. One such approach is to take a look at the most common choices of user passwords that have been observed as a result of website hacks

    Analysis of these lists shows that people are quite predictable, and you often find "password", "abc123", "letmein" or the name of the website being used by a sizable percentage of the users on your site. It thus makes sense to ban users from choosing any of these fairly common passwords, since they otherwise open the door to successful drive-by hacking incidents. For example, a hacker can take the brain-dead approach of trying to log in to a bunch of user accounts using "password" or "123456" as the password and, if past history is any judge, end up compromising thousands of accounts with just this tactic (a minimal sketch of this check, and of the throttling described in item 2, appears after this list).

  2. Throttling failed password attempts: Regardless of how strong a user’s password is, it is like trying to stop a bullet with a wet paper towel against a dedicated brute force attack if no protections are in place. Password cracking tools like John the Ripper can crack a strong eight character password in about 15 minutes. This means that to fully protect users, online services should limit how often a user can fail a password challenge before putting some roadblocks in their way. These roadblocks can include exponentially increasing delays after each failed attempt (wait 1 minute, if failed again then 2 minutes, etc.) or requiring the person to solve a CAPTCHA to prove they are human.

    Another thing services should do is look at patterns of failed password attempts to see if broader prevention strategies are necessary. For example, if you are seeing hundreds of users failing multiple password attempts from a particular IP range, you may want to block that IP range since, given our previous discussion about weak passwords, the attackers have probably already compromised some of your accounts.

  3. 2-factor authentication: Every online service should give customers the option to trade convenience (i.e. password-only sign-in) for more security. Two-factor authentication is typically the practice of combining something the user knows (e.g. a password) with something the user has (e.g. their smartphone) or something they are (biometric data). Although more inconvenient than just providing a password, it greatly increases security for users who may be desirable targets for account hijacking or when providing a service that holds sensitive data. This is why it is supported by a number of popular online service providers including Google, Microsoft and Twitter.

    A common practice to improve the usability of 2-factor authentication is to give users the option to only require it the first time they sign in from a particular device. This means that once the user goes through the two-step authentication process from a new computer, you can assume that device is safe and then only require a password the next time they sign in from it (the second sketch after this list shows how the one-time codes used by authenticator apps are typically generated and verified).

  4. Choose better secret questions or better yet replace them with proofs: Inevitably, users will forget the password they use with your service, especially if you require strong passwords and have a policy that is incompatible with their default password choice (which hopefully isn’t “password1”). A common practice, which has now become an Achilles heel of account security, is to have a set of backup questions to ask the user if they have forgotten their password. The problem for account security is that it is often easier to guess the answers to these questions than it is to hack the user’s password. There is a great checklist for what makes a good secret question at goodsecurityquestions.com with examples of good, fair and poor security questions.

    In general you should avoid security questions: most can be easily guessed (what is your favorite color or sports team?), while the answers to others can be found on Facebook (where did the user go to high school?) or obtained by social engineering the user’s friends. A much better alternative is to borrow from 2-factor authentication and have the user prove possession of something they have, such as their smartphone (send an SMS) or an alternate email account (send an email), to verify that they are who they say they are.

  5. Show customers their sign-in activity: When all else fails, it is important to give your customers the tools to figure out for themselves if they have been hacked. A good way to do this is to let them see the sign-in attempts on their account, both failed and successful. Google does this today via its last account activity feature. You can find this by going to security.google.com and clicking Recent activity under “Security” on the left. Microsoft provides this with its recent activity feature, which you can find by going to https://account.live.com/activity.
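To make items 1 and 2 concrete, here is a minimal sketch in Python of both checks: banning common passwords and exponentially increasing delays after failed attempts. The banned list, the 60-second base delay and the in-memory store are illustrative assumptions; a real service would load a much larger list culled from published password leaks and keep failure counts in a shared datastore.

```python
import time

# Illustrative sample; a real service should load the thousands of most
# common passwords observed in website hacks.
BANNED_PASSWORDS = {"password", "password1", "123456", "abc123", "letmein"}

def is_acceptable_password(password, site_name="example"):
    """Item 1: require a strong password and reject the common ones."""
    if len(password) < 8:
        return False
    if password.lower() in BANNED_PASSWORDS or site_name in password.lower():
        return False
    # Require a mix of upper and lower case characters.
    return any(c.isupper() for c in password) and any(c.islower() for c in password)

# Item 2: per-user failure tracking. In production this state belongs in
# a shared datastore, not the memory of a single web server.
failed_attempts = {}  # username -> (failure count, time of last failure)

def seconds_until_next_attempt(username):
    """Exponential backoff: 1 minute after the first failure, 2 after the
    second, 4 after the third, and so on."""
    count, last_failure = failed_attempts.get(username, (0, 0.0))
    if count == 0:
        return 0
    delay = 60 * (2 ** (count - 1))  # 60s, 120s, 240s, ...
    return max(0, (last_failure + delay) - time.time())

def record_failed_attempt(username):
    count, _ = failed_attempts.get(username, (0, 0.0))
    failed_attempts[username] = (count + 1, time.time())
```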
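For item 3, the “something the user has” is commonly an authenticator app generating one-time codes via the TOTP scheme standardized in RFC 6238. Here is a second sketch showing how a service could verify those codes; the function and parameter names are mine rather than any particular library’s, and a production implementation would also accept the previous and next 30-second time steps to tolerate clock skew.

```python
import base64, hashlib, hmac, struct, time

def totp_code(shared_secret_b32, period=30, digits=6):
    """Derive the current one-time code from the base32 secret that was
    shared with the user's authenticator app (per RFC 6238)."""
    key = base64.b32decode(shared_secret_b32)
    counter = int(time.time() // period)       # current 30-second time step
    msg = struct.pack(">Q", counter)           # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # RFC 4226 dynamic truncation
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % (10 ** digits)).zfill(digits)

def verify_second_factor(shared_secret_b32, submitted_code):
    # compare_digest avoids leaking information via timing differences.
    return hmac.compare_digest(totp_code(shared_secret_b32), submitted_code)
```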

Implementing these features isn’t a cure-all for account security woes and should instead be treated as the minimum bar for providing a reasonable level of security for your users.

 

Now Playing: Beyonce - Flawless Remix (featuring Nicki Minaj)


 

Categories: Cloud Computing | Programming

For the past couple of days, the tech press has been in an uproar over the news, initially published in the AV Club, that Facebook tinkered with users’ feeds for a massive psychology experiment in 2012. The money quote from the article is below

It shows how Facebook data scientists tweaked the algorithm that determines which posts appear on users’ news feeds—specifically, researchers skewed the number of positive or negative terms seen by randomly selected users. Facebook then analyzed the future postings of those users over the course of a week to see if people responded with increased positivity or negativity of their own, thus answering the question of whether emotional states can be transmitted across a social network. Result: They can! Which is great news for Facebook data scientists hoping to prove a point about modern psychology. It’s less great for the people having their emotions secretly manipulated.

The strange thing about the recent uproar is that the focus of the anger seems to be that Facebook ran the experiment. This is strange if you stop and think about what we already know as humans.

1. People are influenced by what they see, including what they see on social networks like Facebook. Remember all those "Facebook makes you sadder" headlines from a year or two ago? How about the fact that just yesterday, the MayDay PAC raised $5 million from almost 50,000 people thanks to viral sharing on social media sites by people like George Takei? Those are thousands of people being influenced to spend money to change how their government works based on what they saw in their news feed.

2. Facebook controls what you see in your news feed.

The second point can’t be emphasized enough. Remember when Facebook explicitly spelled out how EdgeRank works?

Over the past few years, Facebook has made hundreds of tweaks to the news feed. Some we notice and others we don’t. The above image was from an article explaining one such tweak, which caused posts by brands to start showing up much less in the news feed. Over the past few years Facebook’s news feed tweaks have caused our feeds to be filled with too much of, and then over time very little of: quizzes and polls, Zynga games like Mafia Wars & Farmville, articles my friends are reading from social readers, videos from social video sites like Viddy, Bitstrips comics and, of course, Upworthy articles, to name a few.

For each of these waves of content dominating our news feeds, some product manager decided to turn up or turn down the dial on said content based on our “engagement” with Facebook. There is no outside party vetting these changes, nor is there even a way for an interested party to tell what these changes are. It is quite unprecedented in the history of the world for any entity (company or government) to control so much of the media that millions of people see daily without any visibility into its agenda or the content it is feeding to its subjects.

Most people who are still bloviating on this topic on Techmeme are upset that Facebook “manipulated people’s emotions without any oversight for an experiment” when the reality is that Facebook manipulates people’s emotions via tinkering with the news feed to increase their engagement (i.e. time spent on the site looking at ads) every minute of every hour of every day. 

That’s why Sheryl Sandberg gave this shrug as she responded that the major problem with the experiment is that it was poorly communicated. She’s right. Facebook does this every day. Manipulating your behavior by manipulating your news feed is their primary business. If anything, this experiment should be commended because it implies Facebook had, at least at one point, considered the impact of this manipulation on the psychological health of its users and wanted to understand it better.

Speaking of lack of oversight and transparency, one can’t help but wonder what subtle dampeners or viral boosts Facebook puts on the sharing of content depending on the politics of the situation. For example, it’s interesting that George Takei’s posts still garner hundreds of thousands of likes each time they show up while other Facebook pages are seeing double-digit percentage declines. With other media like Fox News or the Wall Street Journal, their agenda is understood by all and quite clear. On the other hand, Facebook’s editing of which content from your friends or brands you see is driven by an unknown agenda while masquerading as serendipitous and organic.

Maybe Facebook doesn’t manipulate your feed depending on politics. Maybe it did at one time then stopped. Maybe they will in the future. We don’t know and if it ever does happen we won’t even realize it.

So go ahead and freak out about one A/B test in 2012. That totally seems like the most worrisome thing about Facebook’s power over its users.

Now Playing: Rick Ross - Nobody (featuring French Montana)


 

Categories: Social Software

As I write this, the latest version of Skype for the iPhone has a 2-star rating, as does Swarm by Foursquare. What these apps have in common is that both are bold redesigns of well-known and popular apps that are being rejected by their core constituencies. A consequence of my time working on Windows 8 is that I now obsess quite a bit about redesigning apps and about the warning signs that indicate whether you are going to greatly please or strongly offend your best users.

When I worked on Windows 8, there were a number of slogans the team used to ensure the principles behind the work we were doing were understood by all. Some of them, such as “content over chrome”, were counterproductive in that slavish devotion to them led to ignoring decades of usability research by eschewing affordances and hiding navigation and controls within apps. However, there were other principles from the Windows 8 era which I wish app developers took more to heart, such as “win as one”, which encouraged consistency with the overall platform’s UI model and working with other apps, and “change is bad unless it's great”, which encouraged respecting the past and only making changes that provided a noticeably better user experience.

In addition to these two principles, I’ll add one more for app developers to keep in mind whenever the time calls for a redesign: “minimize the impacts of loss aversion”. For those who aren’t familiar with the term, loss aversion (closely related to the endowment effect) is the tendency for humans to strongly prefer avoiding losses over making gains. What this means for developers is that end users will react more strongly to losing a feature than they would to gaining that same feature. There are numerous studies that show how absurd humans can be in the face of loss aversion, no matter how minor the loss. My favorite example is how much people overreact to loss aversion when it comes to grocery shopping, as taken from this blog post by Jon Geeting:

There was a law set up last month in D.C. (passed unanimously by city council) to place a five-cent tax on paper and plastic bags at grocery stores, pharmacies and other food-service providers. So, basically, if I went shopping, my total came to $35.20, and I needed one bag to put it in, my total would then become $35.25. Similarly, if I needed two bags, my total would become $35.30, and so on — while if I simply bought reusable bags, I would be subject to no tax.

From what I hear from people in D.C., they absolutely hate it. Even though it’s just an extra five cents, they want absolutely nothing to do with it. They really want that nickel. So many people use less bags, bring their own, or just try to balance everything without one on their trip home. Think about how much less waste and pollution there is in D.C. now, because of a measly five-cent fee.

On the flip side, if you told people you’d give them 5 cents for each bag they brought from home they’d laugh in your face. Nobody is going to do an extra bit of work to be paid five cents even though they would do that work to avoid paying 5 cents. That’s loss aversion at work.

To recap, if you are redesigning an app you need to keep these three rules in mind:

  1. Win as one: Whatever changes you make must feel like a consistent whole, both within the app and with the platform your app resides on. Swarm and Foursquare have completely different aesthetics and integrate in a fairly disjointed manner, often with no way to easily jump back and forth between the two apps. Skype for iPhone is pretty much a Windows Phone app in look and feel, complete with pivot controls and cut-off title text. This is a very jarring experience compared to everything else on iOS.

  2. Change is bad unless it’s great: App developers need to be honest with themselves about whether a redesign is about solving a customer problem in a better way or is part of a corporate strategy. The Facebook news feed is an example of a redesign that was actually driven by a need to solve customer problems, which is why, although it met with a massive user revolt at first, once people used it they loved it and the anger died down. Swarm exists because Foursquare now wants to compete with Yelp and needs to shed its history as a social check-in app, which it sees as baggage as it evolves into a social discovery engine for things to do in your city. From an end user perspective, Skype for iPhone’s redesign is about making the app look and feel like a Microsoft Metro-style app. Given these primary goals, it is no surprise that end users can tell that solving their problems came in second place as they review these apps.

  3. Minimize the impacts of loss aversion: Coupling a redesign with taking away features means people will focus on the missing features instead of whatever benefits you have provided with the redesign. Foursquare took away badges, mayorships, the social feed of your friends’ check-ins and points as part of the split that created Swarm. There are a large number of one-star reviews of the Swarm app complaining about these missing features. Skype for iPhone’s initial release took away deleting & editing messages while making other features harder to find. Even features that are used once in a blue moon seem mission critical once people find out they are gone. Taking away features will always sting more than the actual value of those features. Taking multiple features away as part of a redesign means any benefits of the redesign will be lost in the ensuing outrage about the missing features.

Now Playing: Rick Ross - The Devil is a Lie (featuring Jay-Z)


 

Categories: Social Software | Technology

The most interesting news from Facebook’s F8 last week was the announcement of App Links. If you are unfamiliar with the announcement, watch the 1-minute video embedded below, which does a great job of setting up the sales pitch. Using App Links, mobile app developers can put markup in their web pages that indicates how to launch that page in their application on Android, iOS and Windows Phone. For example, clicking on a link to a Foursquare check-in from the news feed will launch the Foursquare app on your phone and open that specific location or event.
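To give a flavor of what this markup looks like, below is a hypothetical example for a check-in page. The al:* property names come from the published App Links specification at applinks.org; the URLs, package name and IDs are invented for illustration.

```html
<html>
  <head>
    <!-- iOS: custom URL scheme plus App Store details (values are made up) -->
    <meta property="al:ios:url" content="example-app://checkins/12345" />
    <meta property="al:ios:app_store_id" content="123456789" />
    <meta property="al:ios:app_name" content="Example App" />
    <!-- Android: custom URL scheme plus package name -->
    <meta property="al:android:url" content="example-app://checkins/12345" />
    <meta property="al:android:package" content="com.example.app" />
    <meta property="al:android:app_name" content="Example App" />
    <!-- Windows Phone -->
    <meta property="al:windows_phone:url" content="example-app://checkins/12345" />
    <meta property="al:windows_phone:app_id" content="00000000-0000-0000-0000-000000000000" />
    <meta property="al:windows_phone:app_name" content="Example App" />
    <!-- Fall back to the web page if no native app is installed -->
    <meta property="al:web:url" content="http://example.com/checkins/12345" />
  </head>
</html>
```

An app like Facebook that encounters this page in the news feed can read these tags and deep-link straight into the native app on whichever platform the user is on, falling back to the web URL otherwise.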

The interesting question is why Facebook is doing this. It boils down to the fact that Facebook is an advertising company which makes the majority of its revenue from those ads in your news feed asking you to install Candy Crush and Clash of Clans.

Facebook’s pattern at this point is well known. They give you something of value for free (traffic) and once you get hooked they dial it down until you have to pay. The world is littered with the ashes of companies who were once media darlings because Facebook gave them a bunch of free traffic from liberal news feed algorithms and then turned off the spigot. Just ask Viddy, all those social readers, or Zynga, or read that hilarious break-up letter from the folks at Eat24.

Publishers who use App Links will likely get a boost in the news feed algorithm, probably under the pretext that they provide a better user experience to consumers. Early success stories will cause lots of developers to create app links and then get hooked on the traffic they get from Facebook. Eventually your traffic will start dropping and any complaints will be met with an elaborate mathematical formula which explains why your content isn’t that hot on Facebook anymore. But don’t worry, you can fix all that by buying ads.

It’s obvious, devious and I love it. Especially since it does actually move the user experience of the mobile web forward even if the end goal is to make Facebook tons of money.

The other thing I give Facebook props for is holding a mirror up to the major search engines to show how silly we were being. Bing supports standards for app linking, but only for Windows & Windows Phone apps. Google supports the same, and again it only works for Android apps. Facebook is trying to say it doesn’t matter if you are on the web, Windows Phone, Android or iOS: links in the news feed should open in the native app on that platform. Google’s and Bing’s search engines, on the other hand, only support app linking when you are searching on the OSes from their parent companies. #strategytax

Hopefully Facebook’s move will bring more inclusiveness across the board from many online platform providers, not just search engines. For example, I would love it if email providers supported app links as well.

Now Playing: DJ Snake & Lil Jon - Turn Down For What


 

Chris Dixon has a fairly eloquent blog post where he talks about the decline of the mobile web. He cites the following chart

and talks about what it means for the future of innovation if apps, which tend to be distributed from app stores managed by corporate gatekeepers, continue to displace the web as the primary way people connect on the Internet.

Using HTTP Doesn’t Make Something Part of the Web

In response to Chris Dixon’s post I’ve seen a fallacy repeated a number of times. The most visible instance of this fallacy is John Gruber’s Rethinking What We Mean by ‘Mobile Web’ where he writes

I think Dixon has it all wrong. We shouldn’t think of the “web” as only what renders inside a web browser. The web is HTTP, and the open Internet. What exactly are people doing with these mobile apps? Largely, using the same services, which, on the desktop, they use in a web browser.
...
Yes, Apple and Google (and Amazon, and Microsoft) control their respective app stores. But the difference from Dixon’s AOL analogy is that they don’t control the internet — and they don’t control each other. Apple doesn’t want cool new apps launching Android-only, and it surely bothers Google that so many cool new apps launch iOS-first. Apple’s stance on Bitcoin hasn’t exactly kept Bitcoin from growing explosively. App Stores are walled gardens, but the apps themselves are just clients to the open web/internet.
...
The rise of mobile apps hasn’t taken anything away from the wide open world of web browsers and cross-platform HTML/CSS/JavaScript — other than supremacy. I think that bothers some, who saw the HTML/CSS/JavaScript browser-centric web’s decade-ago supremacy as the end point, the ultimate triumph of a truly open platform, rather than what it really was: just another milestone along the way of an industry that is always in flux, ever ebbing and flowing.

What we’ve gained, though, is a wide range of interaction capabilities that never could have existed in a web browser-centric world. That to me is cause for celebration.

The key point here is that the World Wide Web and the Internet are different things. The definition of the web I use comes from Tim Berners-Lee’s original proposal of a browsable information network of hyperlinked documents & media on a global network. The necessary building blocks for this are a way to identify these documents (URIs), the actual content of these documents (HTML/JS/CSS/media), a way for clients to obtain these documents (HTTP) and the global network they sit on (the Internet).

This difference is important to spell out because although HTTP and the Internet are key parts of the world wide web, they aren’t the web. One of the key things we lose with apps is public addressability (i.e. URIs for the technically inclined). What does this mean in practice?

  • Visiting a website is as simple as being told “go to http://bing.com” from any browser on any platform using any device. Getting an app requires the app developer to have created an app for your platform which may not have occurred due to technical limitations, policy limitations of the platform owner or simply the cost of supporting multiple platforms being higher than they want to bear.

  • Content from apps is often invisible to search engines like Google and Bing since their information is not part of the web.

  • Publishing a website simply requires getting a web host or even just hosting your own server. Publishing an app means submitting your product to some corporation then restricting your content and functionality to their rules & regulations before being made available to end users.

The key loss is that we are regressing from a globally accessible information network, one which reaches everyone on earth and where no publisher needs permission to reach billions of people, to lots of corporate-controlled fiefdoms and walled gardens.

I don’t disagree with Gruber’s notion that mobile apps have introduced new models of interaction that would not have existed in a web-browser-centric world. However, that doesn’t mean we aren’t losing something along the way.

Now Playing: The Heavy - Short Change Hero


 

Categories: Technology | Web Development

This morning I saw the following tweet from Steven Levy, a Wired reporter who's written a number of interesting books about software people and the great companies they've built

As part of my day job at Microsoft, I've begun to learn more about how advertising across the internet works on a technical level, and it is quite interesting to learn how an image of some headphones I looked at on an e-commerce site ended up staring back at me from an ad on Facebook later that day.

The fundamental technology that makes this possible is Facebook Exchange (FBX). The infographic below provides an overview of how it enables ads from ecommerce sites to show up on Facebook and I’ll follow that up with a slightly more technical explanation.

Source: Business Insider

Facebook Exchange is a Real-Time Bidding platform which enables Facebook to sell ad slots on their page to the highest-bidding advertisers in fractions of a second. Typically, advertisers and the publishers who own the pages where ads show up end up working together through an intermediary called a Demand Side Platform (DSP). A DSP such as AdRoll provides one of its retail partners, such as American Apparel or Skull Candy, with code to put tracking pixels on their site, which allows the user to be identified and context such as what pages they’ve visited to be recorded. The retail partner then goes into AdRoll’s interface and decides how much they are willing to pay to show ads, on various networks such as Facebook (via FBX), to a user who has visited one of their pages.

AdRoll then provides data to Facebook that allows the user to be uniquely identified within Facebook’s network. Later, when that same user goes to Facebook, Facebook puts out a request on its ad exchange saying “Here’s a user you might be interested in, how much are you willing to pay to show them an ad?” AdRoll then cross-references that user’s opaque identifier with the behavioral data it has (i.e. what pages they were looking at on an advertiser’s site) and, if there is a match, makes a bid which also includes the ad for the page that piqued the user’s interest. If the retailer wins the auction, their ad is chosen and rendered either in the news feed or on the right-hand side of Facebook’s desktop website. Each of these steps happens in fractions of a second but is still slow enough that rendering ads tends to be noticeably the slowest part of rendering the webpage.
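To make the flow concrete, here is a deliberately simplified sketch of the DSP’s side of the exchange in Python. The wire format, names and numbers are all invented for illustration; real FBX bid requests and responses are more elaborate and must be answered within a strict latency budget.

```python
# Hypothetical behavioral data recorded by a retailer's tracking pixel,
# keyed by the exchange's opaque user identifier.
retargeting_profiles = {
    "fbx-user-8f3a": {
        "advertiser": "example-headphone-store",
        "last_viewed_product": "noise-cancelling-headphones",
        "max_bid_usd": 4.50,  # the most this retailer will pay for the slot
    },
}

def handle_bid_request(request):
    """Answer the exchange's callout: 'Here's a user you might be
    interested in, how much are you willing to pay to show them an ad?'"""
    profile = retargeting_profiles.get(request["user_id"])
    if profile is None:
        return {"bid": 0}  # we know nothing about this user, so no bid
    return {
        "bid": profile["max_bid_usd"],
        # The creative shows the product that piqued the user's interest.
        "creative_url": f"https://ads.example.com/{profile['advertiser']}/"
                        f"{profile['last_viewed_product']}.html",
    }

# The exchange calls out while the user's Facebook page is loading:
print(handle_bid_request({"user_id": "fbx-user-8f3a", "slot": "news_feed"}))
```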

In fact, there was a grand example of retargeting in action while I was researching this post. When I started writing, I checked out AdRoll’s web page on how to use their service to retarget ads on Facebook. A few minutes later, this showed up in my news feed.

You can tell if an ad is retargeted on Facebook by hovering your mouse cursor over the top right of the ad (on the desktop website) and then opening the options menu. If the “About This Ad” link takes you somewhere outside Facebook then it is a retargeted ad.

Some ad providers like Quantcast provide an option to opt out of retargeting for their service, while others like AdRoll link to the Network Advertising Initiative (NAI) opt-out tool, which provides an option to opt out of retargeting for a variety of ad providers. Note that this doesn’t prevent you from getting ads; it is just a signal to advertisers that you’d rather not have your ads personalized.

If you found this blog post informative, I've begun a regular series of blog posts intended to answer questions about online advertising on Microsoft properties such as Bing & MSN and about industry trends. Hit me up on Twitter with your questions.

Now Playing: Ice Cube - Hood Mentality


 

Categories: Technology

Up until a few months ago, the term DevOps was simply another buzzword filling my Twitter feed; it evoked a particular idea but wasn’t really concrete to me. As with other software development buzzwords such as NoSQL and Agile, it is hard to pin down a definitive definition of the term, only what it isn’t. If you aren’t familiar with DevOps, a simple definition is that the goal of DevOps is to address a common problem when building online services: the wall between the developers who write the code and the operations staff who run it.

The Big Switch

A couple of months ago, my work group took what many would consider a rather extreme step to eliminate this wall between developers and operations. Specifically, Bing Ads transitioned away from the traditional Microsoft engineering model of having software development engineers (aka developers), software development engineers in test (testers) and service operations (ops), and merged all of these roles into a single engineering role. As the Wikipedia entry for DevOps states, the adoption of DevOps has been driven by the following trends:

  1. Use of agile and other development processes and methodologies
  2. Demand for an increased rate of production releases from application and business unit stakeholders
  3. Wide availability of virtualized and cloud infrastructure from internal and external providers
  4. Increased usage of data center automation and configuration management tools

All of these trends already applied to our organization before we made the big switch to merge the three engineering disciplines into a DevOps role. We’d already embraced the Agile development model, complete with two-to-four-week sprints, daily scrums, burn-down charts, and senior program managers playing the role of the product owner (although we use the term scenario owner). Given our market position as the underdog to Google in search and advertising, our business leaders always want to ship more features, more quickly, while maintaining high product quality. In addition, there’s a ton of peer pressure for all of us at Microsoft to leverage internal tools such as Windows Azure and Autopilot for as much of our cloud services needs as possible instead of rolling our own data centers and hardware configurations.

Technically, our organization was already committed to DevOps practices before we made the transition that eliminated roles. However, what the organization realized is that a bigger change to the culture was needed for us to get the most value out of these practices. The challenge we faced is that the organizational structure of separate roles for developers, testers and operations tends to create walls where each role feels responsible for a certain part of the development cycle and then tosses the results of its efforts downstream to the next set of folks in the delivery pipeline. Developers tended to think their job was to write code and that quality was the role of testers. Testers felt their role was to create test frameworks and find bugs, and that deployment was the role of the operations team. The operations team tended to think their role was keeping the live site running without the ability to significantly change how the product was built. No matter how open and collaborative the people on your team are, these strictly defined roles create these walls. My favorite analogy for this situation is two families on a diet trying to lose weight: one has fruit, veggies and healthy snacks in the pantry while the other has Pop-Tarts, potato chips, chocolate and ice cream in theirs. No matter how much willpower the latter family has, they are more likely to “cheat” on their diet than the first family because they have created an environment that makes it harder to do the right thing.

Benefits

The benefits of fully embracing DevOps are fairly self-evident, so I won’t spend time discussing the obvious ones that have been beaten to death elsewhere. I will talk about the benefits I’ve seen in our specific case of merging the three previous engineering roles into a single one. The most significant change is the cultural shift in how we view automation of every step related to deployment and monitoring. It turns out that there is a big difference between approaching a problem from the perspective of taking away people’s jobs (i.e. automating what the operations team does) and making your team more effective (i.e. reducing the amount of time the engineering team spends on operational tasks that can be automated, thus giving us more time to work on features that move the business forward). This has probably been the biggest surprise, although obvious in hindsight, as well as the biggest benefit.

We’ve also begun to see faster resolution of issues, from build breaks to features failing in production, due to the fact that the on-call person (we call them Directly Responsible Individuals or DRIs) is now a full member of the engineering team who is expected to be capable of debugging and fixing issues encountered while on-call. This is an improvement over prior models where the operations team were the primary folks on-call and would tend to pull in the development team only as a last resort outside of business hours.

As a program manager (or product manager if you’re at a Silicon Valley company), I find it has made my job easier since I have fewer people to talk to now that we’ve consolidated engineering managers. No longer having to talk to a development manager separately from the manager of systems engineers separately from a test manager has made communication far more efficient for me.

Challenges

There are a number of risks for any organization taking the steps that we have at Bing Ads. The biggest risk is definitely attrition, especially at a company like Microsoft where these well-defined roles have been a part of the culture for decades and are still part & parcel of how the majority of the company does business. A number of people may feel that this is a bait and switch on their career plans, with the new job definitions not aligning with how they saw their roles evolving over time. Others may not mind that as much but may simply feel that their skills will not be as valuable in the new world, especially as they now need to learn a set of new skills. I’ve had one simple argument when I’ve met people with this mindset: DevOps is here to stay. The industry trends that have had more and more companies, from Facebook and Amazon to Etsy and Netflix, blurring the lines between developers, test engineers and operations staff will not go away. Companies aren’t going to want to start shipping less frequently, nor will they want to bring back manual deployment processes instead of automating as much as possible. The skills you learn in a DevOps culture will make you more broadly valuable wherever you find your next role, whether it is in a traditional specialized engineering structure or in a DevOps-based organization.

Other places where we’re still figuring things out include best practices around ownership of testing. We currently try to follow a “you build it, you test it, you deploy it” culture as much as possible, although allowing any dev to deploy code has turned out to be a bit more challenging than we expected since we had to ensure we do not run afoul of the structures we had in place to stay compliant with various regulations. Testing your own code is one of those topics that many in the industry have come out against as being generally a bad idea. I remember arguments from software engineering professors in my college classes about the blind spots developers have about their own software, which require dedicated teams to do testing. We do have mitigations in place, such as test plan reviews and code reviews, to ensure there are alternate pairs of eyes looking at the problem space, not just the developer who created the functionality. There is also the school of thought that since the person who wrote the code will likely be the person woken up in the middle of the night if it goes haywire at an inopportune moment, a sense of self-preservation will cause more diligence to be applied to the problem than was the case in the previous eras of boxed software, which is when most of the anti-developer-testing arguments were made.

Further Reading

 

Now Playing: Eminem - Rap God


 

Earlier this week my Twitter feed was flooded with reactions to the announcement of Amazon Prime Air, a vision which is compelling and sounds like something from a science fiction novel.

 

Flying robots delivering your impulse buys ordered from your smartphone within 30 minutes? Sign me up.

Amazon's Robotic Vision

A few have been skeptical of Amazon Prime Air, and some, such as Konstantin Kakaes at Wired, have described the announcement as an hour-long infomercial hosted by Charlie Rose that is full of hot air. There are definitely a lot of things that need to get better before drone-based package delivery is a reality. On the technology end there are challenges such as improving navigation software and getting more efficiency out of battery power. On the regulatory end, the rules and regulations needed to ensure the safety of the populace in the midst of these flying drones still need to be figured out.

Unlike a number of the skeptics, I'm quite confident that the technological and regulatory hurdles will be surmounted in the next 5 years. On the other hand, I also believe Amazon oversold the technology. The logistics of actually delivering a package to a customer have been glossed over quite a bit. If you look at the video, this is the optimal situation for delivering a package.

Now think about all the places in urban environments that don't meet these criteria: apartment complexes, condos, office buildings, etc. Then think about all the places where packages are best left on a doorstep close to the house, somewhere a drone could not feasibly land. In fact, it's likely the 30-minute delivery claim is meant to address some of these logistics challenges by assuming you'll be there to pick up whatever the drone drops off on your lawn or driveway before Johnny Sticky Fingers does.

My suspicion is that the truly impactful usage of Amazon Prime Air will be 30-minute drone delivery to Amazon Locker locations.

Google's Robotic Vision

This morning my Twitter feed is abuzz with the news that Google's Andy Rubin, creator of Android and longtime robotics enthusiast, has acquired seven robotics companies that are creating technologies to build a mobile, dexterous robot. This effort will be part of Google X, which is also the home of Google Glass & self-driving cars. Andy Rubin describes the effort as follows

“I have a history of making my hobbies into a career,” Mr. Rubin said in a telephone interview. “This is the world’s greatest job. Being an engineer and a tinkerer, you start thinking about what you would want to build for yourself.”

He used the example of a windshield wiper that has enough “intelligence” to operate when it rains, without human intervention, as a model for the kind of systems he is trying to create. That is consistent with a vision put forward by the Google co-founder Larry Page, who has argued that technology should be deployed wherever possible to free humans from drudgery and repetitive tasks.

To spell it out, Google’s efforts in automation under project X have been about augmenting the human experience in ways that eliminate repetitive tasks which humans are poor at, such as driving, where human error is a significant cause of the over 30,000 deaths a year in the US from automobile accidents. Even this somewhat frivolous example of ringing the doorbell at Andy Rubin’s home, taken from a 2007 profile of the Android founder, is consistent with that vision

If the scanner recognizes you, the door unlocks automatically. (The system makes it easier to deal with former girlfriends, Mr. Rubin likes to joke. No messy scenes retrieving keys — it’s just a simple database update.)

Those forced to use the doorbell are greeted with another technological marvel: a robotic arm inside the glass foyer grips a mallet and then strikes a large gong. Although Mr. Rubin won’t reveal its cost, it may be one of the world’s most expensive doorbells.

At the end of the day, I’m more inspired by a company looking at automating away all of the tediousness of everyday life with Star Trek-style technology, from automated doorways and self-driving cars to fully autonomous robots, than by a vision of making impulse buying at Walmart 2.0 (aka Amazon) more convenient. I for one welcome our new robotic overlords.

Now Playing: Adele - Rumor Has It


 

Categories: Technology

Danny Sullivan wrote an interesting blog post this morning titled Google’s Broken Promises & Who’s Running The Search Engine? whose central thesis is that Google now does a number of things it once described as “evil” when it comes to how search results and ads work in Google Search. Given that I now work in Bing Ads, this is a fairly interesting topic to me and one I now have some degree of industry knowledge about.

Promises Are Like Pie Crust, Easy to Make and Easy to Break

Danny Sullivan identifies two big broken promises in his article, one from 2012 and one from 2013. The 2012 broken promise is excerpted below

The first broken promise came last year, when Google took the unprecedented step of turning one of its search products, Google Product Search, into a pure ad product called Google Shopping.

Google Shopping is a different creature. No one gets listed unless they pay. It’s as if the Wall Street Journal decided one day that it would only cover news stories if news makers paid for inclusion. No pay; no coverage. It’s not a perfect metaphor. Paid inclusion doesn’t guarantee you’ll rank better or get favorable stories. But you don’t even get a chance to appear unless you shell out cold hard cash.

What Was Evil In 2004, Embraced In 2012

Shopping search engines have long had paid inclusion programs, but not Google. Google once felt so strongly that this was a bad practice that when it went public in 2004, it called paid inclusion evil, producing listings that would be of poor relevancy and biased. The company wrote, in part:

Because we do not charge merchants for inclusion in [Google Shopping], our users can browse product categories or conduct product searches with confidence that the results we provide are relevant and unbiased.

There is a similar Google then versus Google now perspective when looking at the second broken promise related to banner ads in the search results page.

“There will be no banner ads on the Google homepage or web search results pages,” Google promised in December 2005, on its main blog, to reassure consumers concerned that its new partnership with AOL would somehow change the service. Eight years later, Google’s testing big banner ads like these:

These excerpts could almost be a cautionary tale to idealistic young companies about making absolute statements and staking one’s brand on those statements without thinking about the future. However, that isn’t the point of this post.

I decided to write this post because Danny Sullivan’s article starts out quite strongly by pointing to this misalignment between Google’s past statements and their current behavior, but then peters out. The rest of the article is spent studying Google’s org chart trying to figure out which individual to blame for these changes, as well as trying to come up with a rationalization for these moves in the context of making search better for consumers. As an industry watcher, I find the rationale for these moves quite straightforward; it has been a natural progression for years.

The Love of Money is the Root of all Evil

Any analysis of the business decisions Google makes in the arena of search is remiss if it fails to mention that Google makes the majority of its revenue and profits from ads running on Google sites. As an example, Google made $9.39 billion last quarter from ads running on its own sites, whereas of the $3.15 billion it made from ads running on other people’s websites, it paid those people $2.97 billion. Combine that with the fact that its $12.5 billion acquisition of Motorola has so far produced nothing but financial losses, and there is a lot of pressure for Google to make as much money as possible from ads running on its sites, specifically on Google Search results pages.

When it comes to search engine advertising, the money is primarily in queries with “commercial intent”. This is a fancy way of saying that the person performing the search is planning to spend money. Advertisers are willing to pay several dollars to a search engine each time a customer clicks on their ad when the search term has commercial intent. In fact, companies are willing to spend up to $40–$50 each time a user clicks on an ad if the user is searching for a life insurance policy or a home loan.

Over time both search engines and advertisers have figured out exactly where the money is and how to extract the most value from each other. Google has slowly been making changes to its search engine that imply that for queries with commercial intent it always wants a cut of the action. This is why, if you perform a search today that has commercial intent, there are an order of magnitude (i.e. ten times) as many links to ads as there are unpaid search engine results. For example, take a look at this screenshot of a query for “northface jackets” on Google.

There are two links on this page that are unpaid search results and eighteen links where Google gets paid if you click on them. Given that context, it is no surprise that Google eventually realized it was competing with itself by having a “free” shopping search engine. This explains the broken promise in 2012 related to paid inclusion.

Now if you take a close look at the above screenshot, you’ll notice that The North Face is actually the top advertiser on this page. This means that despite the fact that the user was specifically looking for North Face brand products, the company still has to compete with other advertisers by paying for clicks to their website from Google search results. Brand advertisers hate this. A lot. Not only did they spend a lot of money and effort to become a well-known brand, but now they still end up paying when this brand recognition pays off and people are explicitly looking for them on Google.

This leads us to the second broken promise, banner ads in search results. What Google is trying to do is appease brand advertisers by letting them “take over” the search engine results page in cases where the user is quite clearly searching for their brand. Treating the results page as a giant billboard that reinforces their brand is a more palatable pitch than scrabbling with other advertisers over a user they already consider theirs. This explains the broken promise of 2013.

I expect to see more aggressive commercialization of the search results page given Google’s seeming lack of interest in, and inability to, diversify its sources of income. Doing this while preserving the customer experience will be the #1 challenge for its search engine and other similarly advertising-focused web products in 2014 and beyond.

Now Playing: Jay-Z - F*ckwithmeyouknowigotit (featuring Rick Ross)