Raymond Chen has a blog post entitled You don't know what you do until you know what you don't do, in which he writes

I've seen a lot of software projects, and one thing I've learned is that you don't have a product until you start saying "No".

In the early phases of product design, you're all giddy with excitement. This new product will be so awesome. It will slice bread. It will solve world hunger. It's designed for everybody, from the technology-averse grandmother who wants to see pictures of her grandkids to the IT manager who is in charge of 10,000 computers. It'll run equally well on a handheld device as in a data center.

When I see a product with an all-encompassing description like this, I say to myself, "They have no idea what their product is." You don't know what you do until you know what you don't do. And the sooner you figure out what you don't do the better, because a product that promises to do everything will never ship.

In my five years at Microsoft, I've seen a bunch of projects fail. Some were public flameouts that are still embarrassing to mention today while others were private mistakes that you'll never hear anyone outside the b0rg cube mention. A few months ago I wrote a blog post entitled Top 5 Signs Your Project is Doomed and since then I've come up with a few more entries that belong on the list, bringing the total to 10. The list below contains common signs that a software project is doomed. Meeting one or two of these criteria isn't necessarily the kiss of death, but if you meet three or more you might as well start circulating your resume.

  1. Trying to do too much in the first version. See Raymond's point above.

  2. Taking a major dependency on unproven technology.

  3. Competing with an existing internal project that was either a cash cow or had backers that are highly placed in the corporate hierarchy.

  4. The team is understaffed. If you have fewer people than the work requires, the right thing to do is to scale back the project. Practically every other choice leads to failure.

  5. Complexity is one of the goals of the project because "complex problems require complex solutions".

  6. Schedule Chicken

  7. Scope Creep

  8. Second System Syndrome

  9. No Entrance Strategy. When a project can't articulate how it goes from a demo or prototype to being in the hands of end users, there's a problem. This is particularly relevant in the "Web 2.0" world, where many startups' only strategy for success is getting a mention on TechCrunch and the fact that their service has "viral" features.

  10. Tackling a problem you don't know how to solve. It's pretty amazing how often I've seen this occur.


 

Categories: Programming

According to the Infoworld article entitled Microsoft names leaders for search-and-ad unit

Microsoft Wednesday named Satya Nadella to lead the newly formed Search and Ad Platform group, the software giant's effort to optimize the advertising revenue-raising potential of its search business.

Nadella, previously corporate vice president for Microsoft's Business Solutions group, will report to Kevin Johnson, president of the Platform and Services Division, the company said in a statement.

I'm not sure this information is accurate since I haven't seen any sign of it on Microsoft Presspass, nor has Satya Nadella's corporate profile been updated. However, if it is, it would create three VPs under Kevin Johnson who are in charge of Microsoft's three Web brands: Windows Live, MSN, and Live Search. The org chart representing all the folks who are in charge of Microsoft's online businesses would then look like the chart below,

[org chart image]

if the Infoworld article is accurate.

The only relevance this has to people who read my blog is that it gives a nice visual of where I fit in the org chart. I'm in Blake Irving's group, working on aspects of the Windows Live Platform that powers services used by the Windows Live Experience group. 


 

Categories: Life in the B0rg Cube

Via Joe Gregorio I found a post entitled Transactionless by Martin Fowler, in which he writes

A couple of years ago I was talking to a couple of friends of mine who were doing some work at eBay. It's always interesting to hear about the techniques people use on high volume sites, but perhaps one of the most interesting tidbits was that eBay does not use database transactions.
...
The rationale for not using transactions was that they harm performance at the sort of scale that eBay deals with. This effect is exacerbated by the fact that eBay heavily partitions its data into many, many physical databases. As a result using transactions would mean using distributed transactions, which is a common thing to be wary of.

This heavy partitioning, and the database's central role in performance issues, means that eBay doesn't use many other database facilities. Referential integrity and sorting are done in application code. There's hardly any triggers or stored procedures.

My immediate follow-up to the news of transactionless was to ask what the consequences were for the application programmer, in particular the overall feeling about transactionlessness. The reply was that it was odd at first, but ended up not being a big deal - much less of a problem than you might think. You have to pay attention to the order of your commits, getting the more important ones in first. At each commit you have to check that it succeeded and decide what to do if it fails.

I suspect that this is one of those topics, like replacing the operations team with the application developers, which the CxOs and architects think is a great idea but which is thoroughly hated by the actual developers. We follow similar practices in some aspects of the Windows Live platform and I've heard developers complain about the fact that the error recovery you get for free with transactions is left in the hands of application developers. The biggest gripes are always around rolling back complex batch operations. I'm definitely interested in learning more about how eBay makes transactionless development as easy as they claim. I wonder if Dan Pritchett's talk is somewhere online?
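To make that concrete, here's a minimal sketch of the ordered-commit pattern as I understand it from Fowler's description, using two SQLite connections to stand in for two separately partitioned databases. The schema, table names and compensation logic are all invented for illustration:

    import logging
    import sqlite3

    log = logging.getLogger("commits")

    def place_bid(conn_bids, conn_notify, item_id, user_id, amount):
        # Two writes against two separately partitioned databases,
        # with no distributed transaction tying them together.
        # 1. The most important commit goes first: the bid itself.
        #    If it fails, let the error propagate; nothing to undo yet.
        conn_bids.execute(
            "INSERT INTO bids (item_id, user_id, amount) VALUES (?, ?, ?)",
            (item_id, user_id, amount))
        conn_bids.commit()

        # 2. The secondary commit. A failure here must NOT undo the
        #    bid, since there is no cross-database rollback; instead,
        #    record the failure so it can be retried or compensated,
        #    which is exactly the hand-written error recovery the
        #    developers grumble about.
        try:
            conn_notify.execute(
                "INSERT INTO outbox (user_id, message) VALUES (?, ?)",
                (user_id, "Bid of %s recorded on item %s" % (amount, item_id)))
            conn_notify.commit()
        except sqlite3.Error:
            log.exception("notification write failed; queuing for retry")

The rollback gripe falls out of exactly this structure: with a real transaction the failure branch comes for free, while here every partial-failure path has to be designed, written and tested by the application developer.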

The QCon conference wasn't even on my radar but if Dan Pritchett's talk is indicative of the kind of content that was presented, it looks like I missed out. Judging from the list of speakers, it's a conference I wouldn't have minded attending and submitting a paper to. I wonder if there'll be a U.S. version of the conference in the future?


 

Categories: Web Development

danah boyd wrote two really interesting posts this weekend that gave a fresh perspective on a couple of headlines I've seen in blogs and mainstream news. Her post on narcissism sheds some light on stories such as CNN's Study: Vanity on the rise among college students, which had me curious for more details when I originally read it. The post on Twitter gives a practical perspective I hadn't considered or seen mentioned in all the blogosphere ravings about the service since the hype storm started after the SXSW conference.

Interesting excerpts from danah boyd's post entitled fame, narcissism and MySpace

For those who are into pop science coverage of academic work, i'd encourage you to start with Jake Halpern's "Fame Junkies" (tx Anastasia). For simplicity sake, let's list a few of the key findings that have emerged over the years concerning narcissism.

  • While many personality traits stay stable across time, it appears as though levels of narcissism (as tested by the NPI) decrease as people grow older. In other words, while adolescents are more narcissistic than adults, you were also more narcissistic when you were younger than you are now.
  • The scores of adolescents on the NPI continue to rise. In other words, it appears as though young people today are more narcissistic than older people were when they were younger.
...
My view is that we have trained our children to be narcissistic and that this is having all sorts of terrifying repercussions; to deal with this, we're blaming the manifestations instead of addressing the root causes and the mythmaking that we do to maintain social hierarchies. Let's unpack that for a moment.

American individualism (and self-esteem education) have allowed us to uphold a myth of meritocracy. We sell young people the idea that anyone can succeed, anyone can be president. We ignore the fact that working class kids get working class jobs. This, of course, has been exacerbated in recent years. There used to be meaningful working class labor that young people were excited to be a part of. It was primarily masculine labor and it was rewarded through set hierarchies and unions helped maintain that structure. The unions crumpled in the 1980s and by the time the 1987 recession hit, there was a teenage wasteland. No longer were young people being socialized into meaningful working class labor; the only path out was the "lottery" (aka becoming a famous rock star, athlete, etc.).

Interesting excerpts from danah boyd's post entitled Tweet Tweet (some thoughts on Twitter)

Of course, the population whose social world is most like the tech geeks is the teens. This is why they have no problems with MySpace bulletins (which are quite similar to Twitter in many ways). The biggest challenge with teens is that they do not have all-you-can-eat phone plans. Over and over, the topic of number of text messages in one's plan comes up. And my favorite pissed off bullying act that teens do involves ganging up to collectively spam someone so that they'll go over their limit and get into trouble with their parents (phone companies don't seem to let you block texts from particular numbers and of course you have to pay 10c per text you receive). This is particularly common when a nasty breakup occurs and i was surprised when i found out that switching phone numbers is the only real solution to this. Because most teens are not permanently attached to a computer and because they typically share their computers with other members of the family, Twitterific-like apps wouldn't really work so well. And Twitter is not a strong enough app to replace IM time.

Read both posts; they are really good. And if you aren't subscribed to her blog, you should be.


 

The Australian iTWire has a rather biased and misinformed article entitled Official: Microsoft ‘bribes’ companies to use Live Search which contains the following excerpt

Microsoft’s new program is called “Microsoft Service Credits for Web Search” and has been unveiled by John Battelle’s ‘SearchBlog’. The money on offer is significant, especially when multiplied across thousands of PCs. The deal means that companies can earn between US $2 and US $10 per computer on an annual basis, plus a US $25,000 “enrollment credit” which is a nice big wad of cash that will likely need a large-ish, strong and sturdy brown paper bag to hold securely while being passed under the table.

For companies that have thousands of computers, this could translate into anywhere from US $100,000 to $200,000 per year, which is money that could be put to good use in the IT department or elsewhere in the company.
...
Former Microsoft employee and blogger Robert Scoble, who served as the online face of Microsoft during his three years at the company, is not impressed with Microsoft’s moves in deciding to offer companies money to use search. His arguments are quite valid and boil down to Microsoft really needing to create better products, rather than needing to pay companies to get more traction for Windows Live. After all, Scoble isn’t the first to observe that Google doesn’t need to pay anyone to use its search services – people use them voluntarily because of the quality of the results.

The amount of bias in this article is pretty amazing considering that Microsoft is primarily reacting to industry practices created by Google [which have also been adopted by Yahoo!]. Let me count the ways Google bribes companies and individuals to use its search engine:

  1. Google pays AdSense publishers for each user they convince to install Firefox with the Google Toolbar installed. Details are in the documentation for the AdSense Referrals Feature. Speculation on Slashdot was that they pay $1 per user who switches to Firefox + Google Toolbar.

  2. Google paid Dell $1 billion to ensure that Google products are preinstalled on all the computers they sell and the default search engine/home page is set to Google. Details of this deal were even published in iTWire.

  3. Google paid Adobe an undisclosed amount to bundle Google Toolbar [which converts your default search engine in your browser to theirs] with all Adobe products.

  4. Google entered a distribution deal with Sun Microsystems to bundle Google Toolbar [which converts your default search engine in your browser to theirs] with all new installations of the Java runtime.

  5. Google products which convert your default search engine in your browser to theirs are bundled with the WinZip archiving utility. Financial details of the deal were undisclosed.

  6. Google is the default search engine for both the Opera and Firefox browsers. Both vendors get a cut of the search revenue generated from user searches, which runs into the millions of dollars.

I could go on but my girlfriend just told me it's time for breakfast and I'm already in trouble for blogging on a Sunday morning. However the above links should be enough to debunk the inaccurate statements in the iTWire article. I guess iTWire's "journalism" is further validation of the saying that you should never let the facts get in the way of a good flame.


 

Whenever I talk to folks at work about branding and some of our products, I usually get two kinds of responses. On the one hand, there are those who think branding is important and that we could be doing a better job. On the other, there are those who believe we should focus on shipping quality products and the rest will fall into place. The second position is somewhat hard to argue with because I end up sounding like I'm advocating that marketing is more important than shipping quality products. Luckily, I now have two real-world examples of the importance of getting branding right for your software even if you do have a quality product.

EXHIBIT A: Topix.net

In a blog post entitled Kafka-esque! Rich Skrenta writes

I'm in the Wall Street Journal today, with a story about our purchase of Topix.com for $1M and the SEO issues related to moving the domain.
...
Back in 2003 when we were looking for a name, we came across Topix.net. The name 'topix' really fit what we were trying to do, it was a heck of a lot better than the other names we'd come up with. It turned out we could buy the name from a South Korean squatter for $800. So we took it.  Of course I knew we were breaking one of the rules of domain names, which is never get anything besides the .com. But I thought that advice might be outmoded.
...
Surely, the advice that you had to have a .com wasn't as relevant anymore?

Well, we got our answer when our very first press story came out. This was in March 2004 when we got a front page business section launch story in the Mercury News. They gave us sweet coverage since we were the only startup to come out of palo alto in months (this was just as the dot-com crash was beginning to thaw). Unfortunately, while the article clearly spelled "Topix.net", the caption under our photo -- the most visible part of the story after the headline -- called us Topix.com. Someone had transcribed the name and mistakenly changed the .net to .com, out of habit, I suppose.

Since that time we've built up quite a bit of usage, and much of it return visitors who have bookmarked one of our pages, or become active in our local forums. But still, we continued to have issues where someone will assume a .com ending for the name. Mail gets sent to the wrong address, links to us are wrong, stories incorrectly mention our URL.

Beyond that, as part of some frank self-evaluations we were doing around our site and how we could make it better, and the brand stronger, we ran some user surveys and focus groups. "What do you think of the name?" was one of the questions we asked. The news was good & bad; people actually really liked the name 'topix', but the '.net' was a serious turn-off. It confused users, it made the name seem technical rather than friendly, and it communicated to the world that "we didn't own our own name."

EXHIBIT B: Democracy Player

In a blog post entitled A name change Nicholas Reville writes

This is some big news for us. We are planning to change the name of Democracy Player.

We chose the name ‘Democracy’ almost two years ago when we were first setting up PCF. We knew it was an ambitious name, but we thought that it made a clear statement about how important it is that an open internet TV platform is for our culture.
...
And, even though I’m about to explain why we need to change it, I’m glad we’ve had this name for the past year. It’s funny that a name like ‘Democracy’ can become a name for software – I think it turned out to be less odd than we expected. When people hear a name, they tend to accept it. And it helped us assert our mission clearly: free, open, and dedicated to democratizing video online. I think conveying that mission so strongly was crucial for us.

But the name also confused a huge number of potential users. In all our debates about whether you could call something ‘Democracy’ and how people would react to the name, we hadn’t realized that so many people would simply assume that the software was for politicians and videos about politics. We hear this response over and over, and it’s a real limitation to our user base.

So we’re changing the name to Miro.


 

Categories: Windows Live

A number of blogs I read have been talking about Amazon's S3 service a lot recently. I've seen posts from Jeff Atwood, Shelley Powers and most recently Dave Winer. I find it interesting that S3 is turning into a classic long tail service that works for everyone from startups spending hundreds of thousands to millions of dollars a year to serve millions of users (like Smugmug) to bloggers who need some additional hosting for their cat pictures. One reason I find this interesting is that it is unclear to me that S3 is a business that will be profitable in the long term by itself.

My initial assumption was that S3 was a way for Amazon to turn lemons into lemonade with regard to bandwidth costs. Big companies like Amazon are usually billed for bandwidth using 95th percentile billing, which is explained below

With 95th percentile billing, buyers are billed each month at a fixed price multiplied by the peak traffic level, regardless of how much is used the rest of the time. Thus with the same nominal price, the effective price is higher for buyers with burstier traffic patterns.

So my assumption was that S3 allows Amazon to make money from bandwidth they were already being charged for but not using. As for storage, my guess is that they are either making a minuscule profit or selling at cost. Where this gets tricky is that if S3 gets popular enough, it stops being a way to make money from bandwidth they are billed for but aren't using and starts adding to their actual bandwidth costs, which changes the profit equation for the service. Without any data on Amazon's cost structure it is unclear whether this would make the service unprofitable or whether this is already factored into their pricing.
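To make the billing model concrete, here's a rough sketch of how 95th percentile billing is commonly computed: sample traffic every five minutes over the month, throw out the top 5% of samples, and bill at the highest remaining one. The sampling details vary by provider, and the numbers below are invented:

    def ninety_fifth_percentile(samples_mbps):
        # Sort the five-minute traffic samples, drop the top 5%,
        # and bill at the highest sample that is left.
        ordered = sorted(samples_mbps)
        cutoff = int(len(ordered) * 0.95) - 1
        return ordered[max(cutoff, 0)]

    # A 30-day month is ~8,640 five-minute samples. A site that idles
    # at 10 Mbps but bursts to 500 Mbps for ~17 hours still bills at
    # 10 Mbps, because the bursts fall inside the discarded top 5%.
    month = [10] * 8440 + [500] * 200
    print(ninety_fifth_percentile(month))   # -> 10

This is why the headroom between Amazon's typical traffic and its billed percentile is effectively paid for whether they use it or not, and why reselling it as S3 bandwidth looks like found money, at least until S3 traffic grows large enough to move the percentile itself.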

On the other hand, Amazon's Elastic Compute Cloud (EC2) isn't something I've seen a lot of bloggers rave about. However it seems to be the service that shows that Amazon is making a big play to be the world's operating system in the sky as opposed to dabbling in providing some of its internal services to external folks as a cost-savings measure. With EC2 you can create a bunch of virtual servers in their system and load each one up with an Amazon Machine Image (AMI). An AMI is basically a server operating system plus the platform components you need on it. Typical AMIs are instances of a LAMP stack (Linux/Apache/MySQL/PHP/Perl/Python), although I did see one AMI that was an instance of Windows 2003 Server. You can create as many or as few server instances as you need and are billed just for what you use.

I suspect that the combination of EC2 and S3 is intended to be very attractive to startups. Instead of spending hundreds of thousands of dollars building out clusters of servers, you just pay as you go when you get your monthly bill. There are only two problems with this strategy that I can see. The first is that if I were building the next Digg, Flickr or del.icio.us, I'm not sure I'd want to place myself completely at the mercy of Amazon, especially since there doesn't seem to be any SLA published on the site. According to the CEO of Smugmug in his post Amazon S3: Outages, slowdowns, and problems, they've had four major problems with S3 in the past year, which has made them rely less on the service for critical needs. The second issue is that VC money is really, really, really easy to come by these days, judging from the kind of companies that get profiled on TechCrunch and Mashable. If that should change, it isn't hard to imagine dozens of enterprising folks with a couple of thousand dollars in their pockets deciding to go with S3 + EC2 instead of seeking VC funding. But for now, I doubt that this will be the case.

What I suspect is that without some catalyst (e.g. the next YouTube being built on S3 + EC2) these services will not reach their full potential. This would be unfortunate because, in much the same way we moved from everyone rolling their own software to shrinkwrapped software, I think we will need to move to shrinkwrapped Web platforms instead of everyone running their own ad-hoc cluster of Windows or LAMP servers and solving the same problems that others have solved thousands of times already.

I wonder if Amazon has considered tapping the long tail by going up against GoDaddy's hosting services with S3 + EC2. They have the major pieces already, though it seems their prices would need to come down to match what GoDaddy charges for bandwidth; on the other hand, I suspect Amazon's quality of service would be better.


 

March 15, 2007
@ 03:38 PM

My blog has been slow all day due to an unending flood of trackback spam. I've set up my IIS rules to reject requests from the IP address ranges the attacks are coming from, but it seems this hasn't been enough to keep the trackback spam from making my blog unusable.

It looks like I should invest in a router with a built-in firewall as my next step. Any ideas to prevent this from happening again are welcome.
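In the meantime, the same kind of range blocking can also be done at the application level, in front of the trackback handler itself. A minimal sketch of the idea, in Python for brevity even though my blog runs on IIS; the blocked ranges below are documentation placeholders (RFC 5737), not the actual spammer addresses:

    from ipaddress import ip_address, ip_network

    # Placeholder ranges; substitute the ranges the attacks come from.
    BLOCKED_RANGES = [
        ip_network("192.0.2.0/24"),
        ip_network("198.51.100.0/24"),
    ]

    def allow_trackback(remote_ip):
        # Reject the ping before doing any database work.
        addr = ip_address(remote_ip)
        return not any(addr in net for net in BLOCKED_RANGES)

    print(allow_trackback("203.0.113.5"))   # True
    print(allow_trackback("192.0.2.77"))    # False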


 

Categories: Personal

From the press release entitled Microsoft Unites Xbox and PC Gamers With Debut of Games for Windows — LIVE we learn

REDMOND, Wash. — March 14, 2007 — Microsoft Corp. today announced the extension of the Xbox LIVE® games and entertainment network to the Windows® platform, bringing together the most popular online console game service with the most popular games platform in the world. Debuting on May 8, 2007, with the launch of the Windows Vista™ version of the Xbox® blockbuster “Halo® 2,” Games for Windows — LIVE will connect Windows gamers to over six million gamers already in the Xbox LIVE community. Then, launching in June, “Shadowrun™” will for the first time connect Windows gamers with Xbox 360™ players in cross-platform matches using a single service. “UNO®,” releasing later in 2007, will also support cross-platform play between Windows and Xbox 360.

This is pretty cool and I saw some of the demos when I was at CES in January. The funny thing is that one of my coworkers told me we were announcing this soon, but I heard "Games for Windows Live" and assumed he meant we were rebranding MSN Games. I didn't realize it was actually "Games for Windows — LIVE". This might get a tad confusing.


 

Categories: Video Games

Brendan Eich has a post on the Mozilla roadmap blog entitled The Open Web and Its Adversaries which references one of my posts on whether AJAX will remain the technology of choice for building Rich Internet Applications. He writes

open standards and open source both empower user-driven innovation. This is old news to the Mozilla user community, who have been building and feeding back innovations for the life of the project, increasing over time to include Firefox add-ons and GreaseMonkey user scripts. (BTW, I am pushing to make add-on installation not require a restart in Firefox 3, and I intend to help improve and promote GreaseMonkey security in the Firefox 3 timeframe too.) Without forking, even to make private-label Firefoxes or FlashPlayers, users can innovate ahead of the vendor's ability to understand, codify, and ship the needed innovations.

Consider just the open standards that make up the major web content languages: HTML, CSS, DOM, JS. These mix in powerful ways that do not have correspondences in something like a Flash SWF. There is no DOM built inside the FlashPlayer for a SWF; there's just a display list. There's no eval in ActionScript, and ActionScript features a strict mode that implements a static type checker (with a few big loopholes for explicit dynamic typing). You can't override default methods or mutate state as freely as you can in the browser content model. Making a SWF is more like making an ASIC -- it's "hardware", as Steve Yegge argues.

This is not necessarily a bad thing; it's certainly different from the Open Web.
...
Dare Obasanjo argues that developers crave single-vendor control because it yields interoperation and compatibility, even forced single-version support. Yet this is obviously not the case for anyone who has wasted time getting a moderately complex .ppt or .doc file working on both Mac and Windows. It's true for some Adobe and Microsoft products, but not all, so something else is going on. And HTML, CSS, DOM and JS interoperation is better over time, not worse. TCP/IP, NFS, and SMB interoperation is great by now. The assertion fails, and the question becomes: why are some single-vendor solutions more attractive to some developers? The answers are particular, not general and implied simply by the single-vendor condition.

I'm surprised to see Brendan Eich conflating "openness" with the features of a particular technology. I'll start with Brendan's assertion that open standards and open source enable user-driven innovation. Open source allows people to modify the software they've been distributed however they like. Open standards like HTTP, FTP and NNTP allow people to build applications that utilize these technologies without being beholden to any corporate or government entity. It's hard for me to see how open standards enable user-driven innovation in the same way that open source does. I guess the argument could be made that open source applications built on proprietary technologies aren't as "free" as open source applications that implement open standards. I can buy that. I guess.

The examples of Firefox add-ons and GreaseMonkey user scripts don't seem to be examples of open source and open standards enabling user-driven innovation. They seem to be examples of why building an application as a platform with a well-designed plugin model works. After all, plugins for Internet Explorer, Gadgets for the Google Personalized Homepage and Add-ins for Visual Studio are all examples of user-driven innovation as plugins for an application, and all of them are built on proprietary platforms, often using proprietary technologies. My point is

open_source + open_standards != user_driven_innovations;

Being open helps, but it doesn't necessarily lead to user-driven innovation or vice versa. The rest of Brendan's post is even weirder because he presents the features of Flash's ActionScript versus AJAX (i.e. [X]HTML/CSS/Javascript/DOM/XML/XmlHttpRequest) as a conflict between proprietary and open technologies. Separating content from presentation, dynamic programming languages and rich object models are not exclusively the purview of "open" technologies, and it is disingenuous for Brendan to suggest that.
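To be concrete about which features are actually in play: overriding default methods at runtime and evaluating code on the fly are properties of dynamic languages generally, not of "open" platforms. Here's the same style of mutation Brendan attributes to the browser content model, sketched in Python rather than JavaScript, with the class and method names invented for illustration:

    class Node:
        # Stand-in for a built-in object with a default method.
        def render(self):
            return "<div/>"

    def patched_render(self):
        # Override the default at runtime, the way a user script
        # might monkey-patch a DOM prototype in the browser.
        return "<div class='patched'/>"

    Node.render = patched_render
    print(Node().render())   # <div class='patched'/>

    # eval: construct and run code at runtime, which ActionScript's
    # strict mode is designed to rule out.
    print(eval("1 + 2"))     # 3

Nothing about either capability requires an open specification; they are language-design choices that any vendor, open or proprietary, is free to make.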

After all, what happens when Adobe and Microsoft make their RIA platforms more "Web-like"? Will the debate devolve into the kind of semantic hairsplitting we've seen with the OpenXML vs. ODF debate, where Microsoft detractors are now attacking Microsoft for opening up and standardizing its XML file formats when their original arguments against the file formats were that they weren't open?

Personally, I'd like to see technical discussions on the best way to move the Web forward instead of the red herring of "openness" being thrown into the discussion. For instance, what considerations should Web developers weigh when they come to the crossroads where Adobe is offering Flash/Flex, Microsoft is offering WPF/E, and Mozilla & co. are offering their extensions to the AJAX model (i.e. HTML 5) as the one true way? I've already stated what I think in my post What Comes After AJAX? So far Adobe looks like they have the most compelling offering for developers, but it is still early in the game and neither Microsoft nor Mozilla have fully shown their hands.


 

Categories: Web Development