October 12, 2008
@ 08:24 PM

Bloglines stopped polling my feed over a week ago, probably due to a temporary error in my feed. I've been trying to find a way to get them to re-enable it, given that for the 1,670 people subscribed to my feed on their service my blog hasn't been updated since October 3rd. Unfortunately there doesn't seem to be a way to contact the product team.

I sent a mail via the contact form but didn't get a response, and their support forum is overrun with spam, which leads me to believe it has been abandoned. Any ideas on how I can get Bloglines to start polling my feed again?

Note Now Playing: Babyface - When Can I See You Note


 

Categories: Personal

Some of my readers who missed the dotcom boom and bust from the early 2000s may not be familiar with FuckedCompany, a Web site that was dedicated to gloating about layoffs and other misfortunes at Web companies as the tech bubble popped. Although fairly popular at the turn of the century, the Web site was nothing more than postings about which companies had recently had layoffs, rumors of companies about to have layoffs and snarky comments about stock prices. You can read some of the old postings yourself in the WayBack Machine for FuckedCompany. I guess schadenfreude is a national pastime.

The current financial crisis has led to the worst week in the history of the Dow Jones and S&P 500 indexes as well as worldwide turmoil in financial markets, to the point where countries like Austria, Russia, Iceland, Romania and Ukraine have had to suspend trading on their stock markets. This has clearly pointed to the need for another schadenfreude-filled website that gloats about the misfortunes of others. Thankfully TechCrunch has stepped up to the plate. Here are some of their opening morsels as they begin their transformation from tech bubble hypesters into its gloating eulogizers:

For some reason, I was expecting more leadership from Arrington and his posse. Anyway, instead of reading and encouraging this sort of garbage from TechCrunch it would be great if more people kept posts like Dave McClure's Fear is the Mind Killer of the Silicon Valley Entrepreneur (we must be Muad'Dib, not Clark Kent) in mind instead. The last thing we need is popular blogs AND the mass media spreading despair and schadenfreude at a time like this.

Note Now Playing: T.I. - Dead And Gone (Featuring Justin Timberlake) Note


 

Categories: Current Affairs

John Battelle has a blog post entitled When Doesn't It Pay To Pay Attention To Search Quality? which contains the following statement and screenshot

the top result is the best result - and it's a paid link.

Bad Results

In the past I've talked about Google's strategy tax, which is the conflict between increasing the relevance of their search results and increasing the relevance of their search ads. The more relevant Google's "organic" results are, the less likely users are to click on their ads, which means the less money the company makes. This effectively puts a cap on how good Google's search quality can get, especially given the company's obsessive measurement of every small change they make to get the most bang for the buck.

When I first wrote about this, the conclusion from some quarters was that this inherent conflict of interest would eventually be Google's undoing since there were search innovations that they would either be slow to adopt or put on the back burner so as not to harm their cash cow. However John Battelle's post puts another spin on the issue. As long as people find what they want it doesn't matter if the result is "organic" or an ad.

As Jeremy Zawodny noted in his post The Truth about Web Navigation 

Oh, here's a bonus tip: normal people can't tell the difference between AdSense style ads and all the other links on most web sites. And almost the same number don't know what "sponsored results" on the Search Results Page are either. It's just a page of links to them. They click the ones that look like they'll get them what they want. It's that simple.

Even more interesting is the comment by Marshall Kirkpatrick in response to Jeremy's post

The part of your post about AdWords reminds me of a survey I read awhile ago. Some tiny percentage of users were able to tell the difference between paid and natural search results, then once away from the computer almost all of them when asked said that the best ways to make it clear would be: putting paid links in a colored box, putting them in a different section of the page and putting the words "sponsored links" near them!! lol

What this means in practice is that the relevance of Google's ads in relation to the search term will increase in comparison to the relevance of the organic search results for that term. John Battelle has shown one example of this in his blog post. Over time this trend will get more pronounced. The problem for Google's competitors is that this doesn't necessarily mean Google's search experience will get worse over time, since ad relevance will likely make up for any deficiencies in the organic results (at least for commercial queries – where the money is). What competitors will have to learn to exploit is Google's tendency to push users to AdWords results by making their organic results satisfactory instead of great.

For example, consider the following search results page which my wife just got while looking for an acupuncturist in Bellevue, Washington

The interesting thing about the organic results is that they are relevant but very cluttered, thus leading to the paradox of choice. On the other hand, the sponsored links give you a name and a description of the person's qualifications in their first result. Which result do you think my wife clicked?

Now why do you think Google ended up going with this format for commercial/business-based search results?  The average Web user is a satisficer and will look for the clearest and simplest result which is in the ads. However the geeks and power users (i.e. people who don't click on ads) are often maximizers when it comes to search results and are thus served by the organic results.

The question for you is whether you'd consider this a weakness or a strength of Google Search?

Note Now Playing: Missy Elliott - Hit Em Wit Da Hee (remix) (feat. Lil Kim & Mocha) Note


 

Given the current spirit of frugality that fills the air due to the credit crisis, I'm reconsidering whether to replace my AT&T Tilt (aka HTC Kaiser) with an iPhone 3G. After test driving a couple of iPhones I've realized that the really compelling reason for me to switch is to get a fully-featured Web browser instead of my current situation of having to choose between text-based "mobile" versions of popular sites and mangled Web pages.

I was discussing this with a coworker and he suggested that I try out alternative browsers for Windows Mobile before getting an iPhone. I'd previously tried DeepFish from the Microsoft Live Labs team but found it too buggy to be usable. I looked for it again recently but it seems it has been cancelled. This led me to try out SkyFire, which claims to give a complete PC-style Web browsing experience [including Flash Video, AJAX, Silverlight and Quicktime video] on a mobile phone.

After using SkyFire for a couple of days, I have to admit that it is a much improved Web browsing experience compared to what shipped by default on my phone. At first I marveled at how a small startup could build such a sophisticated browser in what seems like a relatively short time, until I learned about the clever hack at the center of the application. None of the actual rendering and processing of content is done on your phone. Instead, there is an instance of a Web browser (supposedly Firefox) running on the SkyFire servers which acts as a proxy for your phone and then sends you a compressed image of the fully rendered results. There is still some clever hackery involved, especially with regards to converting a streaming Flash video into a series of animated images and accompanying sound and then sending them down to your phone in real time. However it is nowhere near as complex as shipping complete Javascript, Flash, Quicktime and Silverlight implementations in a mobile phone browser.
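To make the architecture concrete, below is a minimal sketch of the proxy-rendering idea in Python. This is purely illustrative and not SkyFire's actual code: the render_to_image() helper is a hypothetical stand-in for the server-side headless browser (here it just fetches the raw HTML so the sketch runs), but the shape is the same in that the phone asks the proxy for a URL and only ever receives the finished, server-rendered result.

```python
# Minimal sketch of a proxy-rendering architecture (illustrative, not SkyFire's code).
# All fetching/rendering happens server-side; the phone only displays the result.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
from urllib.request import urlopen

def render_to_image(url: str) -> tuple[bytes, str]:
    # Hypothetical stand-in: a real implementation would drive a headless browser
    # and rasterize the fully rendered page (JavaScript, Flash and all) into a
    # compressed bitmap. Here we just fetch the raw HTML so the sketch runs.
    with urlopen(url) as resp:
        return resp.read(), "text/html"

class RenderProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        target = parse_qs(urlparse(self.path).query).get("url", [""])[0]
        if not target:
            self.send_error(400, "missing ?url= parameter")
            return
        body, content_type = render_to_image(target)   # server does the heavy lifting
        self.send_response(200)
        self.send_header("Content-Type", content_type)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                          # phone just shows the result

if __name__ == "__main__":
    # e.g. point the phone's browser at http://proxy-host:8080/?url=http://example.com
    HTTPServer(("", 8080), RenderProxy).serve_forever()
```

The appeal of the design is obvious once you see it: all the hard work (JavaScript, plugins, layout) stays on the server, and the client only needs to display bitmaps and relay taps and keystrokes.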

The one problem with SkyFire's approach is that all of your requests go through their servers. This means your passwords, emails, bank account records or whatever other Web sites you visit with your mobile browser will flow through SkyFire's servers. This may be a deal breaker for some while for others it will mean being careful about what sites they visit using the browser. 

If this sounds interesting, check out the video demo below

Note Now Playing: Michael Jackson - PYT (Pretty Young Thing) Note


 

Categories: Technology

Yesterday there was a news article on MSNBC that claimed 1 in 6 homeowners now owe more on their mortgage than their property is worth. The article states

The relentless slide in home prices has left nearly one in six U.S. homeowners owing more on a mortgage than the home is worth, raising the possibility of a rise in defaults — the very misfortune that touched off the credit crisis last year. The result of homeowners being "underwater" is more pressure on an economy that is already in a downturn. No longer having equity in their homes makes people feel less rich and thus less inclined to shop at the mall.

And having more homeowners underwater is likely to mean more eventual foreclosures, because it is hard for borrowers in financial trouble to refinance or sell their homes and pay off their mortgage if their debt exceeds the home's value. A foreclosed home, in turn, tends to lower the value of other homes in its neighborhood.

Among people who bought within the past five years, it's worse: 29 percent are underwater on their mortgages, according to an estimate by real-estate Web site Zillow.com.

According to Zillow, our home is one of those that is currently "underwater" because its estimated value has dropped $25,000 since we bought it, according to their algorithms. Given that we bought our home last year I don't doubt that we are underwater, and in fact I expect our home value to only go down further. This is because the disparity between median house values and median incomes is still fairly stark even with the current depreciation in the neighborhood.

Here's what I mean: according to Zillow the median household income in the area is about $46,000 while the median home price is around $345,000. This disparity is shocking when you apply some of the basic rules from the "old days" before we had the flood of easy credit which led up to the current crisis. For argument's sake, let's assume that everyone who moves to the area actually pays the traditional 20% down payment, even though the personal savings rate of the average American is negative. This means they need a mortgage of $276,000. Plugging that number into a simple mortgage calculator assuming a 30-year loan at 5.75% interest gives a monthly mortgage payment of over $1,600.

Using the traditional debt-to-income ratio of 0.28, a person with $46,000 in gross income shouldn't have a mortgage payment over roughly $1,100 a month because they would be hard pressed to afford it. Using another metric, the authors of the Complete Idiot's Guide to Buying and Selling a Home argue that you shouldn't get a mortgage over 2 1/2 times your household income, which puts the appropriate size of a mortgage for someone who lives in my neighborhood at around $115,000.
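For anyone who wants to check the arithmetic, here's the back-of-the-envelope version in Python, using the figures and rules of thumb above (median income, median price, 20% down, 30 years at 5.75%):

```python
# Back-of-the-envelope affordability check using the figures in the post.
median_income = 46_000        # median household income in the area (per Zillow)
median_price = 345_000        # median home price in the area
monthly_rate = 0.0575 / 12    # 5.75% annual interest
n_payments = 30 * 12          # 30-year loan

principal = median_price * 0.80                                                # after a 20% down payment
payment = principal * monthly_rate / (1 - (1 + monthly_rate) ** -n_payments)   # standard amortization formula

max_payment = median_income * 0.28 / 12    # 28% debt-to-income rule of thumb
max_mortgage = median_income * 2.5         # "2.5x your income" rule of thumb

print(f"mortgage needed:              ${principal:,.0f}")     # ~$276,000
print(f"monthly payment:              ${payment:,.0f}")       # ~$1,611
print(f"payment the income supports:  ${max_payment:,.0f}")   # ~$1,073
print(f"mortgage the income supports: ${max_mortgage:,.0f}")  # ~$115,000
```

The gap between the last two numbers and the first two is the whole story.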

However you slice it, even assuming a 20% down payment, the people in my neighborhood live in homes they couldn't afford to get a legitimate mortgage on at today's prices. That is fundamentally broken.

Things get particularly clear when you look at the chart below and realize that house prices rose over $100,000 in the past five years.

A lot of people have started talking about "stabilizing home prices" and "bailing out home owners" because of underwater mortgages. In truth, easy credit caused houses to become overpriced especially when you consider that house prices were rising at a much faster rate than wages. Despite the current drop, house prices are still unrealistic and will need to come down further. Trying to prevent that from happening is like trying to have our cake and eat it too. You just can't.

I expect that more banks will end up having to create programs like Bank of America's Nationwide Homeownership Retention for CountryWide Customers which will modify mortgage principals and interest rates downwards in a move that will end up costing them over $8.6 billion but will make it more likely that their customers can afford to pay their mortgages. I'm surprised that it took a class action lawsuit to get this to happen instead of common sense. Then again it is 8.6 BILLION dollars. 

Note Now Playing: 50 Cent - When It Rains It Pours Note


 

Categories: Current Affairs | Personal

October 7, 2008
@ 03:37 PM

I logged in to my 401K account today and was greeted by the following message

Personal Rate of Return from 01/01/2008 to 10/06/2008 is -23.5%

Of course, it could have been worse; I could have had it all in the stock market.

I've been chatting with co-workers who've only posted single digit percentage losses (i.e. their 401K is down less than 10% this year) and been surprised that every single person in that position had hedged their bets by having a large chunk of their 401K in cash. I remember Joshua advising me to do this a couple of months ago when things started looking bad, but I took it as paranoia. Now I wish I had listened.

Of course, I'd still have the problem of having to trust the institution that was holding the cash like the guy from the MSNBC article excerpted below

Mani Behimehr, a home designer living in Tustin, Calif., isn't feeling reassured after what happened to WaMu and Wachovia. After he heard the news that WaMu had been seized and sold to JP Morgan, he rushed out to withdraw about $150,000 in savings and opened a new account at Wachovia only to learn about its sale to Citigroup two days later.

"I thought this is the strongest economy in the world; nothing like that happens in this country," said Behimehr, 46, who is originally from Iran.

At least I don't have to worry about living off of my 401(k) anytime soon.

Update: A commenter brought up that I should explain what a 401(k) account is for non-US readers. From Wikipedia: in the United States of America, a 401(k) plan allows a worker to save for retirement while deferring income taxes on the saved money and earnings until withdrawal. The employee elects to have a portion of his or her wage paid directly, or "deferred," into his or her 401(k) account. In participant-directed plans (the most common option), the employee can select from a number of investment options, usually an assortment of mutual funds that emphasize stocks, bonds, money market investments, or some mix of the above.

Note Now Playing: Abba - Money, Money, Money Note


 

Categories: Current Affairs | Personal

A common practice among social networking sites is to ask users to import their contacts from one of the big email service providers (e.g. Yahoo! Mail, Hotmail or Gmail) as part of the sign up process. This is often seen as a way to bootstrap the user's social network on the site by telling the user who in their email address book is also a user of the site they have just joined. However, there is one problem with this practice which Jeremy Keith described as the password anti-pattern and I quote

The problems start when web apps start replicating bad design patterns as if they were viruses. I want to call attention to the most egregious of these anti-patterns.

Allowing users to import contact lists from other services is a useful feature. But the means have to justify the ends. Empowering the user to import data through an authentication layer like OAuth is the correct way to export data. On the other hand, asking users to input their email address and password from a third-party site like GMail or Yahoo Mail is completely unacceptable. Here’s why:

It teaches people how to be phished.

The reason these social networks request login credentials from new users is that they log in to the user's email account and then screen scrape the contents of their address book. For a long time, the argument for doing this has been that the big email services have not given them an alternative and are thus holding user data hostage.

This may have been true once but it isn't anymore. Google has the Contacts Data API, Yahoo! has their Address Book API and Microsoft has the Windows Live Contacts API. Each of these provides a user-centric authorization model where, instead of giving their email address and password to random sites, the user logs in at their email provider's site and then delegates authority to access their address book to the social networking site.

The only problem that remains is that each site that provides an address book or social graph API is reinventing the wheel both with regards to the delegated auth model they implement and the actual API for retrieving a user's contacts. This means that social networking sites that want to implement a contact import feature have to support a different API and delegated authorization model for each service they want to talk to even though each API and delegated auth model effectively does the same thing.
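To see how much duplication this creates, here's a rough sketch of what the contact-import code on a social networking site ends up looking like today. The endpoints, auth schemes and payload shapes below are hypothetical placeholders, not the providers' real APIs; the point is simply that every provider needs its own entry even though each call means the same thing.

```python
# Illustrative only: placeholder endpoints, auth schemes and payload shapes.
# Every provider needs its own URL, auth header and response parser even though
# each call means the same thing -- "give me this user's contacts".
import json
from urllib.request import Request, urlopen

PROVIDERS = {
    "provider-a": {
        "endpoint": "https://contacts.provider-a.example/v1/contacts",  # placeholder
        "auth": lambda token: {"Authorization": f"Token {token}"},
        "parse": lambda data: data["contacts"],
    },
    "provider-b": {
        "endpoint": "https://api.provider-b.example/addressbook",       # placeholder
        "auth": lambda token: {"Authorization": f"Bearer {token}"},
        "parse": lambda data: data["entries"]["person"],
    },
}

def fetch_contacts(provider: str, token: str) -> list:
    cfg = PROVIDERS[provider]
    req = Request(cfg["endpoint"], headers=cfg["auth"](token))
    with urlopen(req) as resp:
        return cfg["parse"](json.load(resp))
```

A common spec collapses all of that per-provider plumbing into one endpoint shape and one delegated auth model, which is exactly what Portable Contacts is after.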

Just as OAuth has slowly been increasing in buzz as the standard that will sweep away the various proprietary delegated auth models that we have on the Web today, there has been a parallel effort underway by a similarly dedicated set of individuals intent on standardizing contacts and social graph APIs on the Web. The primary output from this effort is the Portable Contacts API.

I've been reading the latest draft specification of the Portable Contacts API and below are some of the highlights as well as some thoughts on them

  • A service's Portable Contacts API endpoint needs to be auto-discoverable using XRDS-Simple (formerly YADIS).

  • The API supports both direct authorization, where a caller provides a username and password, and delegated authorization, where the caller passes a delegation token obtained by the application out-of-band. The former MUST support HTTP Basic Authentication while the latter MUST support OAuth. My initial thinking is that there must be some kind of mistake and the spec meant to say HTTP Digest Authentication not HTTP Basic Authentication (which is described as insecure in the very RFC that defines it).

  • The API defines a specific URL structure that sites must expose. Specifically /@me/@all which returns all of a user's contacts, /@me/@all/{id} which returns the contact with the given ID, and /@me/@self which returns contact info for the user must all be supported when appended to the base URI that is the Portable Contacts API endpoint (see the request sketch after this list). In general being prescriptive about how services should structure their URI space is to be discouraged, as explained in Joe Gregorio's post No Fishing - or - Why 'robots.txt' and 'favicon.ico' are bad ideas and shouldn't be emulated, but since the service gets to control the base URI via XRDS-Simple this isn't as bad as the situation we have today with robots.txt and favicon.ico.

  • The API defines a set of query parameters for filtering (filterBy, filterOp & filterValue) and sorting (sortBy & sortOrder) of results. The API wisely acknowledges that it may be infeasible or expensive for services to support these operations and so supporting them is optional. However services MUST indicate which parameters were ignored/not supported when returning responses to requests containing the aforementioned parameters. I definitely like the approach of having services indicate which parameter was ignored in a request because it wasn't supported. However it would be nice to have an explicit mechanism for determining which features are supported by the provider without having to resort to seeing which parameters to various API calls were ignored.

  • The API also defines query parameters for pagination (startIndex & count). These are pretty straightforward and are also optional to support.

  • There are query parameters to indicate which fields to return (the fields parameter, whose value is a comma delimited list) and what data format to return (the format parameter can be either JSON or XML). The definition of the XML format is full of errors but seems to be a simplistic mapping of JSON to XML. It also isn't terribly clear how type information is conveyed in the XML format. It clearly seems like having the API support XML is currently a half-baked afterthought within the spec.

  • Each result set begins with a series of numeric values (startIndex, itemsPerPage and totalResults, which are taken from OpenSearch) followed by a series of entries corresponding to each contact in the user's address book.

  • The API defines the schema of a contact as the union of fields from the vCard data format and those from OpenSocial. This means a contact can have basic fields like name, gender and birthday as well as more exotic fields like happiestWhen, profileSong and scaredOf. Interesting data models they are building over there in OpenSocial land. :)
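Putting several of the pieces above together, here's a rough sketch of what a Portable Contacts request and response might look like. The base URL is a placeholder (in practice you'd discover it via XRDS-Simple), the access token is assumed to have been obtained via OAuth out-of-band, and the Authorization header is simplified rather than a properly signed OAuth header.

```python
# Rough sketch of a Portable Contacts call (placeholder endpoint, simplified auth).
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

BASE_URL = "https://example.com/portablecontacts"  # discovered via XRDS-Simple in practice
ACCESS_TOKEN = "..."                               # delegation token obtained via OAuth

params = {
    "startIndex": 0,                 # pagination
    "count": 25,
    "sortBy": "displayName",         # optional; the service may ignore it
    "fields": "id,displayName,emails",
}
url = f"{BASE_URL}/@me/@all?{urlencode(params)}"

req = Request(url, headers={"Authorization": f"OAuth {ACCESS_TOKEN}"})
with urlopen(req) as resp:
    result = json.load(resp)

# The response envelope carries the OpenSearch-style counters plus the entries.
print(result["startIndex"], result["itemsPerPage"], result["totalResults"])
for contact in result.get("entry", []):
    print(contact["id"], contact.get("displayName"))
```

Compare that with the per-provider sketch earlier in this post: one URL structure, one auth model and one response envelope, regardless of who hosts the address book.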

Except for the subpar work with regards to defining an XML serialization for the contacts schema this seems like a decent spec.

If anything, I'm concerned by the growing number of interdependent specs that seem poised to have a significant impact on the Web and yet are being defined outside of formal standards bodies in closed processes funded by big companies. For example, about half of the references in the Portable Contacts API specs are to IETF RFCs while the other half are to specs primarily authored by Google and Yahoo! employees outside of any standards body (OpenSocial, OAuth, OpenSearch, XRDS-Simple, etc). I've previously questioned the rise of semi-closed, vendor driven consortiums in the area of Web specifications given that we have perfectly good and open standards bodies like IETF for defining the Open Web but this led to personal attacks on TechCrunch with no real reasons given for why Web standards need to go in this direction. I find that worrying. 

Note Now Playing: Madonna - Don't Cry For Me Argentina Note


 

Nick O'Neil of AllFacebook.com recently posted a blog entry entitled The Future of Widgets on Facebook: Dead where he wrote

As a joke I created the Bush Countdown Clock when the platform launched and amazingly I attracted close to 50,000 users. While the application was nothing more than a simple flash badge, it helped a lot of people express themselves. Expression is not Facebook’s purpose though, sharing is. Widgets or badges that help users express their personal beliefs, ideals, and personality are now harder to find with the new design.

Thanks to the redesign all the badges which were “cluttering” the profile have been moved to a “Boxes” tab which most people don’t visit apparently. When the new profile was first rolled out, the traffic to my application actually jumped a little but oddly enough on September 11th, things took a turn for the worse. I’m not sure what happened but my guess is that a lot of the profiles started to get shifted over.
...
It’s clear though that widgets have not survived the shift over and my guess is that within a matter of weeks we will see most top-performing widget applications practically disappear.

-Bush Countdown Clock Daily Traffic Graph-

This is one aspect of the Facebook redesign that I didn't consider in my original post on What You Can Learn from the Facebook Redesign. Although moving the various applications that are basically badges for self expression, like Bumper Sticker, does reduce page load times, relegating them to an infrequently visited tab guarantees they are less useful (people don't see them on my profile) and less likely to spread virally (people don't see them on my profile and say "I gotta have that"). On the other hand, applications that are primarily about users interacting with each other, such as Scrabble and We're Related, should still do fine.

Application developers have already started inventing workarounds to Facebook's changes which penalize their apps. For example, the Bumper Sticker application now focuses on adding items to your Mini-Feed instead of adding a badge/box to your profile. This gives it valuable placement on your profile (if only for a short time) and a small chance that it will show up in the News Feeds of your friends.

This aspect of the redesign has definitely attacked what many had started calling the MySpace-ization of Facebook, which resulted in the need for a Facebook Profile Clean Up Tool. It will be interesting to see whether this leads to new classes of applications becoming popular on the site or whether it is just another chapter in the cat & mouse game that is spreading virally on the Facebook platform.

Note Now Playing: Game - We Don't Play No Games (feat. G-Unit) Note


 

Categories: Social Software

For a hot, pre-IPO startup I'm surprised that Facebook seems to be going through an exodus of co-founders (how many do they have?) and well-respected key employees. Just this year alone we've seen the following exits

If you combine the list above with the New York Times article, The Facebooker Who Friended Obama which states

Mr. [Chris] Hughes, 24, was one of four founders of Facebook. In early 2007, he left the company to work in Chicago on Senator Obama’s new-media campaign. Leaving behind his company at such a critical time would appear to require some cognitive dissonance: political campaigns, after all, are built on handshakes and persuasion, not computer servers, and Mr. Hughes has watched, sometimes ruefully, as Facebook has marketed new products that he helped develop.

That's three departures from people named as co-founders of Facebook. Of course, it isn't unusual for co-founders to leave a business they helped start even if the business is on the path to being super successful (see Paul Allen), but it is an interesting trend nonetheless.

Note Now Playing: T.I. - My Life, Your Entertainment (Ft. Usher) Note


 

October 1, 2008
@ 04:22 PM

Werner Vogels, CTO of Amazon, writes in his blog post Expanding the Cloud: Microsoft Windows Server on Amazon EC2 that

With today's announcement that Microsoft Windows Server is available on Amazon EC2 we can now run the majority of popular software systems in the cloud. Windows Server ranked very high on the list of requests by customers so we are happy that we will be able to provide this.

One particular area that customers have been asking for Amazon EC2 with Windows Server was for Windows Media transcoding and streaming. There is a range of excellent codecs available for Windows Media and there is a large amount of legacy content in those formats. In past weeks I met with a number of folks from the entertainment industry and often their first question was: when can we run on windows?

There are many different reasons why customers have requested Windows Server; for example many customers want to run ASP.NET websites using Internet Information Server and use Microsoft SQL Server as their database. Amazon EC2 running Windows Server enables this scenario for building scalable websites. In addition, several customers would like to maintain a global single Windows-based desktop environment using Microsoft Remote Desktop, and Amazon EC2 is a scalable and dependable platform on which to do so.

This is great news. I'm starting a month-long vacation as a precursor to my paternity leave since the baby is due next week, and I was looking to do some long overdue hacking in between burping the baby and changing diapers. My choices were:

  • Facebook platform app
  • Google App Engine app
  • EC2/S3/EBS app

The problem with Amazon was the need to use Linux, which I haven't seriously messed with since my college days running SuSE. If I could use Windows Server and ASP.NET while still learning the nuances of EC2/S3/EBS, that would be quite sweet.

I wonder who I need to holler at to get into the Windows Server on EC2 beta? Maybe Derek can hook me up. Hmmmmm.

Note Now Playing: Lil Wayne - Best Rapper Alive [Explicit] Note


 

Categories: Platforms | Web Development