The Australian site iTWire has a rather biased and misinformed article entitled Official: Microsoft ‘bribes’ companies to use Live Search, which contains the following excerpt:

Microsoft’s new program is called “Microsoft Service Credits for Web Search” and has been unveiled by John Batelle’s ‘SearchBlog’. The money on offer is significant, especially when multiplied across thousands of PCs. The deal means that companies can earn between US $2 and US $10 per computer on an annual basis, plus a US $25,000 “enrollment credit” which is a nice big wad of cash that will likely need a large-ish, strong and sturdy brown paper bag to hold securely while being passed under the table.  

For companies that have thousands of computers, this could translate into anywhere from US $100,000 to $200,000 per year, which is money that could be put to good use in the IT department or elsewhere in the company.
...
Former Microsoft employee and blogger Robert Scoble who served as the online face of Microsoft during his three years at the company is not impressed with Microsoft’s moves in deciding to offer companies money to use search. His arguments are quite valid and boil down to Microsoft really needing to create better products, rather than needing to pay companies to get more traction for Windows Live. After all, Scoble isn’t the first to observe that Google doesn’t need to pay anyone to use its search services – people use them voluntarily because of the quality of the results.

The amount of bias in this article is pretty amazing considering that Microsoft is primarily reacting to industry practices created by Google [which have also been adopted by Yahoo!]. Let me count the ways Google bribes companies and individuals to use its search engine:

  1. Google pays AdSense publishers for each user they convince to install Firefox with the Google Toolbar installed. Details are in the documentation for the AdSense Referrals Feature. Speculation on Slashdot was that they pay $1 per user who switches to Firefox + Google Toolbar.

  2. Google paid Dell $1 billion to ensure that Google products are preinstalled on all the computers they sell and that the default search engine/home page is set to Google. Details of this deal were even published in iTWire.

  3. Google paid Adobe an undisclosed amount to bundle Google Toolbar [which converts your default search engine in your browser to theirs] with all Adobe products.

  4. Google entered a distribution deal with Sun Microsystems to bundle Google Toolbar [which converts your default search engine in your browser to theirs] with all new installations of the Java runtime.

  5. Google products that convert your default search engine in your browser to theirs are bundled with the WinZip archiving utility. Financial details of the deal were undisclosed.

  6. Google is the default search engine for both the Opera and Firefox browsers. Both vendors get a cut of the search revenue generated from user searches, which runs into the millions of dollars.

I could go on but my girlfriend just told me it's time for breakfast and I'm already in trouble for blogging on a Sunday morning. However the above links should be enough to debunk the inaccurate statements in the iTWire article. I guess iTWire's "journalism" is further validation of the saying that you should never let the facts get in the way of a good flame.


 

Whenever I talk to folks at work about branding and some of our products I usually get two kinds of responses. On the one hand, there are those who think branding is important and we could be doing a better job. Then there are others who believe we should focus on shipping quality products and the rest will fall into place. The second position is somewhat hard to argue against because I end up sounding like I'm advocating that marketing is more important than shipping quality products. Luckily, I now have two real-world examples of the importance of getting branding right for your software even if you do have a quality product.

EXHIBIT A: Topix.net

In a blog post entitled Kafka-esque!, Rich Skrenta writes

I'm in the Wall Street Journal today, with a story about our purchase of Topix.com for $1M and the SEO issues related to moving the domain.
...
Back in 2003 when we were looking for a name, we came across Topix.net. The name 'topix' really fit what we were trying to do, it was a heck of a lot better than the other names we'd come up with. It turned out we could buy the name from a South Korean squatter for $800. So we took it.  Of course I knew we were breaking one of the rules of domain names, which is never get anything besides the .com. But I thought that advice might be outmoded.
...
Surely, the advice that you had to have a .com wasn't as relevant anymore?

Well, we got our answer when our very first press story came out. This was in March 2004 when we got a front page business section launch story in the Mercury News. They gave us sweet coverage since we were the only startup to come out of palo alto in months (this was just as the dot-com crash was beginning to thaw). Unfortunately, while the article clearly spelled "Topix.net", the caption under our photo -- the most visible part of the story after the headline -- called us Topix.com. Someone had transcribed the name and mistakenly changed the .net to .com, out of habit, I suppose.

Since that time we've built up quite a bit of usage, and much of it return visitors who have bookmarked one of our pages, or become active in our local forums. But still, we continued to have issues where someone will assume a .com ending for the name. Mail gets sent to the wrong address, links to us are wrong, stories incorrectly mention our URL.

Beyond that, as part of some frank self-evaluations we were doing around our site and how we could make it better, and the brand stronger, we ran some user surveys and focus groups. "What do you think of the name?" was one of the questions we asked. The news was good & bad; people actually really liked the name 'topix', but the '.net' was a serious turn-off. It confused users, it made the name seem technical rather than friendly, and it communicated to the world that "we didn't own our own name."

EXHIBIT B: Democracy Player

In a blog post entitled A name change, Nicholas Reville writes

This is some big news for us. We are planning to change the name of Democracy Player.

We chose the name ‘Democracy’ almost two years ago when we were first setting up PCF. We knew it was an ambitious name, but we thought that it made a clear statement about how important it is that an open internet TV platform is for our culture.
...
And, even though I’m about to explain why we need to change it, I’m glad we’ve had this name for the past year. It’s funny that a name like ‘Democracy’ can become a name for software– I think it turned out to be less odd than we expected. When people hear a name, they tend to accept it. And it helped us assert our mission clearly: free, open, and dedicated to democratizing video online. I think conveying that mission so strongly was crucial for us.

But the name also confused a huge number of potential users. In all our debates about whether you could call something ‘Democracy’ and how people would react to the name, we hadn’t realized that so many people would simply assume that the software was for politicians and videos about politics. We hear this response over and over, and it’s a real limitation to our user base.

So we’re changing the name to Miro.


 

Categories: Windows Live

A number of blogs I read have been talking about Amazon's S3 service a lot recently. I've seen posts from Jeff Atwood, Shelley Powers and most recently Dave Winer. I find it interesting that S3 is turning into a classic long tail service that works for everyone from startups spending hundreds of thousands to millions of dollars a year to service millions of users (like Smugmug) to bloggers who need some additional hosting for their cat pictures. One reason I find this interesting is that it is unclear to me that S3 is a business that will be profitable in the long term by itself.
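To give a sense of how low the barrier to entry is, here's roughly what storing and serving a file from S3 looks like using the open source boto library. This is just a sketch; the bucket name, file name and credential placeholders are all made up.

    # Minimal sketch of storing a file on S3 with the boto library.
    # The bucket name, file names and credentials are placeholders.
    import boto

    conn = boto.connect_s3('<access key id>', '<secret access key>')
    bucket = conn.create_bucket('my-cat-pictures')    # bucket names are globally unique

    key = bucket.new_key('photos/whiskers.jpg')
    key.set_contents_from_filename('whiskers.jpg')    # the upload itself
    key.make_public()                                 # serve it straight off S3

    print key.generate_url(expires_in=3600)           # or hand out a time-limited URL

That's the entire hosting story: no servers, no Apache configuration, just pay per gigabyte stored and transferred.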

My initial assumption was that S3 was a way for Amazon to turn lemons into lemonade with regards to bandwidth costs. Big companies like Amazon are usually billed for bandwidth using 95th percentile billing, which is explained below:

With 95th percentile billing, buyers are billed each month at a fixed price multiplied by the peak traffic level, regardless of how much is used the rest of the time. Thus with the same nominal price, the effective price is higher for buyers with burstier traffic patterns.

So my assumption was that S3 allows Amazon to make money from bandwidth they were already being charged for but not using. As for storage, my guess is that they are either making a minuscule profit or selling it at cost. Where this gets tricky is that if S3 gets popular enough, it is all of a sudden no longer a way to make money from bandwidth they are being billed for but aren't using; instead it starts driving up their actual bandwidth costs, which changes the profit equation for the service. Without any data on Amazon's cost structure it is unclear whether this would make the service unprofitable or whether this is already factored into their pricing.
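To make the 95th percentile idea concrete, here's a back-of-the-envelope sketch. The traffic samples and the dollar rate are invented numbers, purely for illustration.

    # Back-of-the-envelope sketch of 95th percentile billing. In practice
    # the samples would be bandwidth readings taken every five minutes for
    # a month; the readings and the $/Mbps rate are invented for illustration.

    def ninety_fifth_percentile(samples):
        ordered = sorted(samples)
        index = int(len(ordered) * 0.95) - 1   # ignore the top 5% of readings
        return ordered[index]

    samples = [10, 12, 11, 95, 14, 13, 12, 11, 90, 12,
               13, 11, 12, 14, 13, 12, 11, 13, 12, 14]   # Mbps readings

    billable = ninety_fifth_percentile(samples)   # the single 95 Mbps spike is dropped,
    price_per_mbps = 30.0                         # but the 90 Mbps burst still sets the bill
    print "bill: $%.2f at %s Mbps" % (billable * price_per_mbps, billable)

The burst sets the bill, which is why any bandwidth Amazon sells below the peak set by its retail traffic is close to free money for them.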

On the other hand, Amazon's Elastic Compute Cloud (EC2) isn't something I've seen a lot of bloggers rave about. However, it seems to be the service that shows that Amazon is making a big play to be the world's operating system in the sky, as opposed to dabbling in providing some of its internal services to external folks as a cost savings measure. With EC2 you can create a bunch of virtual servers in their system and load each one up with an Amazon Machine Image (AMI). An AMI is basically a server operating system plus the platform components you need on it. Typical AMIs are instances of a LAMP stack (Linux/Apache/MySQL/PHP/Perl/Python), although I did see one AMI that was an instance of Windows Server 2003. You can create as many or as few server instances as you need and are billed just for what you use.
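The programming model is about as simple as it sounds. Here's a sketch of launching an instance with the boto library; the AMI ID and credentials are placeholders, not real values.

    # Rough sketch of launching an EC2 instance with the boto library.
    # The AMI ID and credentials are placeholders.
    import boto

    conn = boto.connect_ec2('<access key id>', '<secret access key>')

    # launch one small instance from a (hypothetical) LAMP machine image
    reservation = conn.run_instances('ami-12345678', min_count=1, max_count=1,
                                     instance_type='m1.small')
    instance = reservation.instances[0]
    print instance.state    # 'pending' until the virtual server boots

    # ...and when the traffic spike is over, stop paying for it:
    # conn.terminate_instances([instance.id])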

I suspect that the combination of EC2 and S3 is intended to be very attractive to startups. Instead of spending hundreds of thousands of dollars building out clusters of servers, you just pay as you go when you get your monthly bill. There are only two problems with this strategy that I can see. The first is that if I were building the next Digg, Flickr or del.icio.us, I'm not sure I'd want to place myself completely at the mercy of Amazon, especially since there doesn't seem to be any SLA published on the site. According to the CEO of Smugmug in his post Amazon S3: Outages, slowdowns, and problems, they've had four major problems with S3 in the past year, which has made them rely less on the service for critical needs. The second issue is that VC money is really, really, really easy to come by these days, judging from the kind of companies that get profiled on TechCrunch and Mashable. If that should change, it isn't hard to imagine dozens of enterprising folks with a couple of thousand dollars in their pockets deciding to go with S3 + EC2 instead of seeking VC funding. But for now, I doubt that this will be the case.

What I suspect is that without some catalyst (e.g. the next YouTube being built on S3 + EC2) these services will not reach their full potential. This would be unfortunate because I think that in much the same way we moved from everyone rolling their own software to shrinkwrapped software, we will need to move to shrinkwrapped Web platforms in the future instead of everyone running their own ad-hoc cluster of Windows or LAMP servers and solving the same problems that others have solved thousands of times already.

I wonder if Amazon has considered tapping the long tail by going up against GoDaddy's hosting services with S3 + EC2. They have the major pieces already, although it seems their prices would need to come down to compete with what GoDaddy charges for bandwidth. I suspect Amazon's quality of service would be better, though.


 

March 15, 2007
@ 03:38 PM

My blog has been slow all day due to an unending flood of trackback spam. I've set up my IIS rules to reject requests from the IP address ranges the attacks are coming from but it seems that this hasn't been enough to prevent the trackback spam from making my blog unusable.

It looks like I should invest in a router with a built in firewall as my next step. Any ideas to prevent this from happening again are welcome.


 

Categories: Personal

From the press release entitled Microsoft Unites Xbox and PC Gamers With Debut of Games for Windows — LIVE we learn

REDMOND, Wash. — March 14, 2007 — Microsoft Corp. today announced the extension of the Xbox LIVE® games and entertainment network to the Windows® platform, bringing together the most popular online console game service with the most popular games platform in the world. Debuting on May 8, 2007, with the launch of the Windows Vista™ version of the Xbox® blockbuster “Halo® 2,” Games for Windows — LIVE will connect Windows gamers to over six million gamers already in the Xbox LIVE community. Then, launching in June, “Shadowrun™” will for the first time connect Windows gamers with Xbox 360™ players in cross-platform matches using a single service. “UNO®,” releasing later in 2007, will also support cross-platform play between Windows and Xbox 360.

This is pretty cool and I saw some of the demos when I was at CES in January. The funny thing is that one of my coworkers told me that we were announcing this soon but I thought he said "Games for Windows Live" so I thought he meant we were rebranding MSN Games. I didn't realize it was actually "Games for Windows — LIVE". This might get a tad confusing.


 

Categories: Video Games

Brendan Eich has a post on the Mozilla roadmap blog entitled The Open Web and Its Adversaries which references one of my posts on whether AJAX will remain the technology of choice for building Rich Internet Applications. He writes

open standards and open source both empower user-driven innovation. This is old news to the Mozilla user community, who have been building and feeding back innovations for the life of the project, increasing over time to include Firefox add-ons and GreaseMonkey user scripts. (BTW, I am pushing to make add-on installation not require a restart in Firefox 3, and I intend to help improve and promote GreaseMonkey security in the Firefox 3 timeframe too.) Without forking, even to make private-label Firefoxes or FlashPlayers, users can innovate ahead of the vendor's ability to understand, codify, and ship the needed innovations.

Consider just the open standards that make up the major web content languages: HTML, CSS, DOM, JS. These mix in powerful ways that do not have correspondences in something like a Flash SWF. There is no DOM built inside the FlashPlayer for a SWF; there's just a display list. There's no eval in ActionScript, and ActionScript features a strict mode that implements a static type checker (with a few big loopholes for explicit dynamic typing). You can't override default methods or mutate state as freely as you can in the browser content model. Making a SWF is more like making an ASIC -- it's "hardware", as Steve Yegge argues.

This is not necessarily a bad thing; it's certainly different from the Open Web.
...
Dare Obasanjo argues that developers crave single-vendor control because it yields interoperation and compatibility, even forced single-version support. Yet this is obviously not the case for anyone who has wasted time getting a moderately complex .ppt or .doc file working on both Mac and Windows. It's true for some Adobe and Microsoft products, but not all, so something else is going on. And HTML, CSS, DOM and JS interoperation is better over time, not worse. TCP/IP, NFS, and SMB interoperation is great by now. The assertion fails, and the question becomes: why are some single-vendor solutions more attractive to some developers? The answers are particular, not general and implied simply by the single-vendor condition.

I'm surprised to see Brendan Eich conflating "openness" with the features of a particular technology. I'll start with Brendan's assertion that open standards and open source enable user-driven innovation. Open source allows people to modify the software they've been distributed however they like. Open standards like HTTP, FTP and NNTP allow people to build applications that utilize these technologies without being beholden to any corporate or government entity. It's hard for me to see how open standards enable user-driven innovation in the same way that open source does. I guess the argument could be made that open source applications built on proprietary technologies aren't as "free" as open source applications that implement open standards. I can buy that. I guess.

The examples of Firefox add-ons and GreaseMonkey user scripts don't seem to be examples of open source and open standards enabling user-driven innovation. They seem to be examples of why building an application as a platform with a well-designed plugin model works. After all, we have plugins for Internet Explorer, Gadgets for Google Personalized Homepage and Add-ins for Visual Studio, all of which are examples of user-driven innovation via plugins for applications built on proprietary platforms, often using proprietary technologies. My point is

open_source + open_standards != user_driven_innovations;

Being open helps, but it doesn't necessarily lead to user-driven innovation, or vice versa. The rest of Brendan's post is even weirder because he presents the features of Flash's ActionScript versus AJAX (i.e. [X]HTML/CSS/Javascript/DOM/XML/XmlHttpRequest) as a conflict between proprietary and open technologies. Separating content from presentation, dynamic programming languages and rich object models are not exclusively the purview of "open" technologies, and it is disingenuous for Brendan to suggest that they are.

After all, what happens when Adobe and Microsoft make their RIA platforms more "Web-like"? Will the debate devolve into the kind of semantic hairsplitting we've seen with the OpenXML vs. ODF debate, where Microsoft detractors are now attacking Microsoft for opening up and standardizing its XML file formats when their original arguments against the file formats were that they weren't open?

Personally, I'd like to see technical discussions on the best way to move the Web forward instead of the red herring of "openness" being thrown into the discussion. For instance, what considerations should Web developers weigh when they come to the crossroads where Adobe is offering Flash/Flex, Microsoft is offering WPF/E and Mozilla & co. are offering their extensions to the AJAX model (i.e. HTML 5) as the one true way? I've already stated what I think in my post What Comes After AJAX? So far Adobe looks like they have the most compelling offering for developers, but it is still early in the game and neither Microsoft nor Mozilla have fully shown their hands.


 

Categories: Web Development

March 13, 2007
@ 06:18 PM

Tim Bray has an excellent post entitled OpenID which attempts to separate hype from fact when it comes to the technorati's newest darling, OpenID. He writes

The buzz around OpenID is becoming impossible to ignore. If you don't know why, check out How To Use OpenID, a screencast by Simon Willison. As it's used now (unless I'm missing something) OpenID seems pretty useless, but with only a little work (unless I'm missing something) it could be very useful indeed.

Problem: TLS · The first problem is that OpenID doesn't require the use of TLS (what's behind URIs that begin with https:).
...
Problem: What's It Mean? · Another problem with OpenID is that, well, having one doesn't mean very much; just that you can verify that some server somewhere says it believes that the person operating the browser owns that ID.
...
Problem: Phishing · This is going to be a problem, but I don't think it's fair to hang it on OpenID, because it's going to be equally a problem with any browser-based authentication. Since browser-based authentication is What The People Want, we're just going to have to fight through this with a combination of browser engineering and (more important) educating the general public.
...
The Real Problem · Of course, out there in the enterprise space where most of Sun's customers live, they think about identity problems at an entirely different level. Single-sign-on seems like a little and not terribly interesting piece of the problem. They lose sleep at night over "Attribute Exchange"; once you have an identity, who is allowed to hold what pieces of information about you, and what are the right protocols by which they may be requested, authorized, and delivered? The technology is tough, but the policy issues are mind-boggling.
So at the moment I suspect that OpenID isn't that interesting to those people.

I've been thinking about OpenID in the context of authorization and sharing across multiple social networks. Until recently I worked on the authorization platform for a lot of MSN and Windows Live properties (i.e. the platform that enables setting permissions on who can view your Windows Live Space, MSN Calendar, or Friends list from Windows Live Messenger). One of the problems I see us facing in the future is lack of interoperability across multiple social networks. This is a problem when your users have created their friend lists (i.e. virtual address books) on sites like Facebook, Flickr or MySpace. One of the things you notice about these services is that they all allow you to set permissions on who can view your profile or content. More importantly, if your profile/content is non-public then they all require that the people who can view it have an account with their service. We do the same thing across Windows Live, so it isn't a knock on them.

What I find interesting is this: what if on Flickr I could add http://mike.spaces.live.com as a contact and then give Mike Torres permission to view my photos without him having to get a Yahoo! account? Sounds interesting, doesn't it?
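In code, the idea is just an access control list keyed off identity URLs instead of local accounts. This is a sketch with made-up names; the photo set, the URLs and the verification step are all hypothetical.

    # Sketch: an ACL keyed off OpenID URLs rather than local accounts, so a
    # contact doesn't need (say) a Yahoo! account to be granted access.
    # The photo set name and URLs are made up for illustration.
    PHOTO_ACL = {
        'my-vacation-set': set(['http://mike.spaces.live.com/']),
    }

    def can_view(photo_set, verified_openid):
        # verified_openid is the identity URL an OpenID login just confirmed
        return verified_openid in PHOTO_ACL.get(photo_set, set())

Now let's go back to the issues with OpenID raised by Tim Bray.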

The first thing to do is to make sure we all have the same general understanding of how OpenID works. It's basically the same model as Windows Live ID (formerly Microsoft Passport), Google Account Authentication for Web-Based Applications and Yahoo! Browser Based Authentication. A website redirects you to your identity provider, you authenticate yourself (i.e. log in) on your identity provider's site and are then redirected back to the referring site along with your authentication ticket. The ticket contains some information that can be used to uniquely identify you as well as some user data that may be of interest to the referring site (e.g. username). Now that we have a high-level understanding of how it all works, we can talk about Tim Bray's criticisms.
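For the curious, here's roughly what the referring site's half of that dance looks like with the python-openid library. This is a sketch; the URLs are placeholders and error handling is omitted.

    # Sketch of the referring site's half of the OpenID dance, using the
    # python-openid library. URLs are placeholders; error handling omitted.
    from openid.consumer import consumer
    from openid.store.memstore import MemoryStore

    store = MemoryStore()    # a real site would use a persistent store
    session = {}             # the current user's session dict

    # step 1: the user types in an identity URL; discover their provider
    oidconsumer = consumer.Consumer(session, store)
    auth_request = oidconsumer.begin('http://mike.spaces.live.com')

    # step 2: redirect the user to their identity provider to log in
    redirect_url = auth_request.redirectURL(
        realm='http://photos.example.com/',
        return_to='http://photos.example.com/openid/return')

    # step 3: when the provider redirects back, verify the "ticket"
    # response = oidconsumer.complete(query_args,
    #                                 'http://photos.example.com/openid/return')
    # if response.status == consumer.SUCCESS: user is response.identity_url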

TLS/SSL
On the surface it makes sense that identity providers should use SSL when you log in to your account after being redirected there by a service that supports OpenID. However, as papers like TrustBar: Protecting (even Naïve) Web Users from Spoofing and Phishing Attacks point out, SSL/TLS does little to prevent the real security problems on the Web today, namely Web page spoofing (i.e. phishing) and the large amount of malware on user PCs which could be running key loggers. This isn't to say that using SSL/TLS isn't important, just that relying on it alone is like putting bars on your windows and leaving the front door open. Thus I can understand why it isn't currently required that identity providers support SSL/TLS. However, a little security is better than no security at all.

What Does It Mean?
I agree with Tim Bray that since OpenID is completely decentralized, websites that support it will likely end up creating whitelists of sites they want to talk to; otherwise they risk their systems being polluted by malicious or inconsiderate OpenID providers. See Tim Bray's example of creating http://www.tbray.org/silly-id/, which when queried about any OpenID beginning with that URI instantly provides a positive response without authenticating the user. This allows multiple people to claim http://www.tbray.org/silly-id/BillGates, for example. Although this may be valid if one were creating the OpenID version of BugMeNot, it is mostly a nuisance to service providers that want to accept OpenID.
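Such a whitelist doesn't have to be fancy. Here's a sketch; the trusted domains are made up, and a real implementation would check the provider endpoint discovered during the OpenID dance rather than just the claimed URL.

    # Sketch of an OpenID provider whitelist at a relying site. The trusted
    # domains are made up; a real implementation would whitelist the provider
    # endpoint discovered during the OpenID dance, not just the claimed URL.
    from urlparse import urlparse

    TRUSTED = ['spaces.live.com', 'myopenid.com']

    def is_trusted(claimed_id):
        host = urlparse(claimed_id)[1].lower()   # network location of the URL
        # accept the domain itself or any subdomain of it
        return any(host == d or host.endswith('.' + d) for d in TRUSTED)

    print is_trusted('http://mike.spaces.live.com/')              # True
    print is_trusted('http://www.tbray.org/silly-id/BillGates')   # False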

Phishing
Using susceptibility to phishing as an argument not to use OpenID seems like closing the barn door after the horse has bolted. The problem is that security-conscious folks don't want users getting used to the idea of providing their username and password for one service whenever prompted by another service. After all, the main lesson we've been trying to teach users about preventing phishing is to only enter their username and password on their primary sites when they've typed the addresses in themselves, not when they follow links. OpenID runs counter to this teaching. However, the problem with that teaching is that users are already used to doing this several times a day. Here are three situations from just this morning where I've been asked to enter my username and password from one site on another:

  1. Connected Desktop Apps: Google Toolbar prompts me for my Gmail username and password when I try to view my bookmarks. The goal of Google Account Authentication is to create a world where random apps asking me for my Gmail username and password by redirecting me to the Google login page is commonplace. The same goes for the various Flickr uploader tools and Yahoo! Browser Based Authentication.
  2. Importing Contacts: On Facebook, there is an option to import contacts from Yahoo! Mail, Hotmail, AOL and Gmail which requires me to enter my username and password from these services into their site. Every time I login to Yahoo! Mail there is a notice that asks me to import my contacts from other email services which requires me to give them my credentials from these services as well.
  3. Single Sign-On: Whenever I go to the Expedia sign-in page I'm given the option of signing in with my .NET Passport, which happens to be the same username and password I use for all Windows Live and MSN services as well as the company health site that has information about any medical conditions I may have.

Given the proliferation of this technique in various contexts on the Web today, it seems partisan to single out OpenID as having problems with phishing. If anything, THE WEB has a problem with phishing, which needs to be solved by the browser vendors and the W3C who got us into this mess in the first place.

Attribute Exchange
This usually goes hand in hand with any sort of decentralized/federated identity play. So let's say I can now use my Windows Live ID to log in to Flickr. What information should Flickr be able to find out about me from talking to Windows Live besides my username? Should I be able to control that, or should it be something that Flickr and Windows Live agree on as part of their policies? How is the user educated that the information they entered in one context (i.e. in Windows Live) may be used in a totally different context on another site? As Tim Bray mentioned in his post, this is less of a technology issue and more a policy thing that will likely differ for enterprises versus "Web 2.0" sites. That said, I'm glad to see that Dick Hardt of Sxip Identity has submitted a proposal for OpenID Attribute Exchange 1.0, which should handle the technology aspect of the problem.
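Mechanically, the proposal is straightforward; it just piggybacks extra key-value pairs on the OpenID messages. The sketch below shows roughly what a fetch request adds. The parameter names follow the AX 1.0 draft and the attribute type URIs come from the axschema.org registry, but which attributes a real site would ask for is my invention.

    # Rough sketch of what an Attribute Exchange fetch request adds to an
    # OpenID message: extra key-value pairs. Parameter names follow the AX
    # 1.0 draft; the choice of attributes is made up for illustration.
    fetch_request = {
        'openid.ns.ax':            'http://openid.net/srv/ax/1.0',
        'openid.ax.mode':          'fetch_request',
        'openid.ax.type.email':    'http://axschema.org/contact/email',
        'openid.ax.type.fullname': 'http://axschema.org/namePerson',
        'openid.ax.required':      'email',      # the site insists on this...
        'openid.ax.if_available':  'fullname',   # ...and would merely like this
    }
    # Whether the identity provider (and the user) actually releases 'email'
    # and 'fullname' in the fetch_response is the policy question above.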

Disclaimer: This is not an endorsement of OpenID by Microsoft or an indication of the future direction of authentication and authorization in Windows Live. This is me brainstorming some ideas in my blog and seeing whether the smart folks in my reader base think these ideas make sense or not. 


 

Categories: Web Development

March 13, 2007
@ 04:33 PM

One of the links referenced in my recent posting about Wikipedia led me to reread the Wikipedia entry for "Dare Obasanjo". It seems there is still an outstanding issue with my entry according to folks on the Talk page because there isn't a non-blog source (i.e. mainstream media) that verifies that my dad is Olusegun Obasanjo.

For some reason it irritates me that I have a Wikipedia entry with a giant banner that claims I'm lying about my parentage. Given that I'll be back home in a few weeks to belatedly celebrate my dad's seventieth birthday, I wonder if any Wikipedia-savvy folks can point out what kind of "evidence" usually satisfies the bureaucrats on that site. Will a photograph of us together do the trick (if so, I already have a few at home I can scan and upload to Flickr)? Will it have to be a photograph printed in a newspaper? Or does the banner only come off if there is a Nigerian newspaper webpage on the Internet that says he's my dad?

I need to see what strings I have to pull to get my name cleared.


 

Categories: Personal

March 11, 2007
@ 02:14 PM

Yesterday I went shopping and every store had reminders that daylight saving time begins today. Every year before "springing forward" or "falling back" I always double-check the current time at time.gov and the US Naval Observatory Master Clock Time. However, neither clock has sprung forward. Now I'm not sure who I can trust to tell me the right time. :(

Update: Looks like I spoke too soon. It seems most of the clocks in the house actually figured out that today was the day to "spring forward" and I had the wrong time. :)


 

Categories: Technology

Every once in a while someone asks me about software companies to work for in the Seattle area that aren't Microsoft, Amazon or Google. This is the third in a series of weekly posts about startups in the Seattle area that I often mention to people when they ask me this question.

AgileDelta builds XML platforms for mobile devices that are optimized for low-power, low-bandwidth environments. They have two main products: Efficient XML and Mobile Information Client. I'm more familiar with Efficient XML since it has been selected as the basis for the W3C's binary XML format and has been a linchpin of a lot of the debate around binary XML. The Efficient XML product is basically a codec which allows you to create and consume XML in their [soon to be formerly] proprietary binary format, making it more efficient to use in mobile device scenarios. A quick look at their current customer list indicates that their customer base is mostly military and/or defense contractors. I hadn't realized how popular XML was in military circles.
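I'm only guessing at the mechanics, but the intuition behind any binary XML encoding is easy to illustrate: replace verbose, repeated markup with small tokens. The toy sketch below tokenizes tag names only and is emphatically not AgileDelta's actual format, which is far more sophisticated.

    # Toy illustration of why binary XML encodings shrink documents: replace
    # repeated tag names with one-byte tokens. This is NOT AgileDelta's
    # actual format; real encodings handle attributes, text, types and more.
    import re, struct

    def toy_encode(xml):
        table, out = {}, []
        for tag in re.findall(r'</?(\w+)', xml):
            token = table.setdefault(tag, len(table))
            out.append(struct.pack('B', token))   # one byte instead of the tag name
        return ''.join(out), table

    doc = '<msg><to>hq</to><to>field</to></msg>'
    encoded, table = toy_encode(doc)
    print '%d bytes of markup tokens vs %d bytes of text XML; table: %r' % (
        len(encoded), len(doc), table)

On a radio link where every byte counts, that kind of shrinkage is presumably the whole sales pitch.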

AgileDelta was founded by John Schneider and Rich Rollman, who are formerly of Crossgain, a company founded by Adam Bosworth that was acquired by BEA. Before that, Rich Rollman was at Microsoft, where he was one of the key folks behind MSXML and SQLXML. Another familiar XML geek who works there is Derek Denny-Brown, who spent over half a decade as a key developer on Microsoft's XML parsers.

Press: AgileDelta in PR Newswire

Location: Bellevue, WA

Jobs: careers@agiledelta.com, current open positions are for a Software Engineer, Sales Professional, Technical Writer and Quality Assurance Engineer.