Steve Jobs quote, taken from a tweet by @Gartenberg

Every product or business leader understands the importance of having the right strategy. This includes being deeply aware of your competitive landscape, understanding your customers and how they use your product, and ensuring you have the right plan to continue to make your customers happy while moving your business forward. However, few take into account how important it is that your current strategy and your organization’s culture are aligned. This matters because strategies may evolve over time as the marketplace changes, while organizational cultures tend to be fairly static.

In today’s technology landscape the only constant is change. Companies that seem invincible one day are dwarfed by upstarts seemingly the next. Whether it is Uber leading to the bankruptcy of Yellow Cab in San Francisco, Netflix eclipsing Blockbuster or Apple’s iPhone leading to the decline of Blackberry, there are multiple examples in recent memory of established products that seemed to be permanent fixtures of the world being bested by others that gave customers a better experience. In all of these cases, the incumbent struggled and failed to adapt to the new world. Yellow Cab companies created mobile apps like YoTaxi, Blockbuster tried offering DVDs by mail and Blackberry eventually offered touch screen phones like the Storm. A key problem for all of these disrupted companies was that they tried to switch strategies but could not surmount the handicaps of their organizational cultures and their fundamental approach to doing business.

Culture Shock: How Blackberry Failed to Respond to the iPhone

There is a great overview of the downfall of Blackberry in the Globe & Mail article titled Inside the fall of BlackBerry: How the smartphone inventor failed to adapt, which is excerpted below.

RIM executives figured they had time to reinvent the company. For years they had successfully fended off a host of challengers. Apple’s aggressive negotiating tactics had alienated many carriers, and the iPhone didn’t seem like a threat to RIM’s most loyal base of customers – businesses and governments. They would sustain RIM while it fixed its technology issues.

But smartphone users were rapidly shifting their focus to software applications, rather than choosing devices based solely on hardware. RIM found it difficult to make the transition, said Neeraj Monga, director of research with Veritas Investment Research Corp. The company’s engineering culture had served it well when it delivered efficient, low-power devices to enterprise customers. But features that suited corporate chief information officers weren’t what appealed to the general public.

“The problem wasn’t that we stopped listening to customers,” said one former RIM insider. “We believed we knew better what customers needed long term than they did. Consumers would say, ‘I want a faster browser.’ We might say, ‘You might think you want a faster browser, but you don’t want to pay overage on your bill.’ ‘Well, I want a super big very responsive touchscreen.’ ‘Well, you might think you want that, but you don’t want your phone to die at 2 p.m.’ “We would say, ‘We know better, and they’ll eventually figure it out.’ ”

In reading the article it is clear that the Blackberry leadership decided it was strategically important for them to address the rise of the iPhone. However, as the unnamed insider makes clear above, their organizational culture was simply not set up to make the mental shift to view their product and their customers’ needs differently after the iPhone launched.

There is a great quote from Upton Sinclair: “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”

Blackberry considered its primary customers to be carriers and businesses, since they were the ones paying for devices and services. Carriers did not want phones with capable browsers because they didn’t want high cellular data usage to overwhelm their networks, while businesses that bought phones for their employees did not want those employees goofing off in apps. This created a strong disincentive to listen to what the users of their phones said they actually wanted. Apple, on the other hand, approached the problem by thinking about how to provide the very best experience to the users of its phones, confident that people would happily pay a premium for it.

Blockbuster faced a similar challenge with Netflix. A key part of Blockbuster’s business was charging people late fees, which meant Blockbuster was literally making money off of the inconvenience of its product experience. Competing with Netflix on customer experience would have meant attacking one of its own cash cows, so there was a strong disincentive to do so.

These are just two of many examples that highlight the point that when your strategy changes, your entire organizational culture will have to change as well. Your organizational culture is defined by what positive behaviors you encourage and what negative behaviors you tolerate. Blackberry couldn’t compete with Apple when its teams were still motivated & rewarded for keeping corporate CIOs happy, and there was no way Blockbuster could compete with Netflix when it fundamentally saw itself as a classic retail video rental store and ignored the power of online experiences.

Cultural Appropriation: How Google’s Android Project Responded to the iPhone

There are examples of other companies adjusting their strategies and engineering culture when faced with dramatic change in their industry. My favorite comes from the article The Day Google Had to 'Start Over' on Android, which is excerpted below.

In 2005, on Google’s sprawling, college-like campus, the most secret and ambitious of many, many teams was Google’s own smartphone effort—the Android project. Tucked in a first-floor corner of Google’s Building 44, surrounded by Google ad reps, its four dozen engineers thought that they were on track to deliver a revolutionary device that would change the mobile phone industry forever.

By January 2007, they’d all worked sixty-to-eighty-hour weeks for fifteen months—some for more than two years—writing and testing code, negotiating soft­ware licenses, and flying all over the world to find the right parts, suppliers, and manufacturers. They had been working with proto­types for six months and had planned a launch by the end of the year . . . until Jobs took the stage to unveil the iPhone.

Chris DeSalvo’s reaction to the iPhone was immediate and visceral. “As a consumer I was blown away. I wanted one immediately. But as a Google engineer, I thought ‘We’re going to have to start over.’”

The way Google reacted to the iPhone is quite telling, especially when contrasted with the Blackberry story. The Android team had spent two years working 60-80 hours a week to deliver a new phone operating system, but once they saw that Apple had raised the bar they decided they needed to go back to the drawing board. It is quite stark to compare what Android prototypes looked like before the launch of the iPhone with what consumers associate with Android after it launched.

Android before the iPhone launched vs. Android after the iPhone launched

It would have been really easy for the team working on Android to continue down their path at the time. They’d already spent two years working hard on a Blackberry-style phone and knew it was a proven, lucrative market. However, with a culture focused on customer experience, they realized that Apple’s path, not Blackberry’s, was the future, and they reimagined their product.

Facebook: An Upstart Becomes an Incumbent and Avoids a Culture of Complacency

Facebook dethroning MySpace as the top social network in the US and then going on to displace most of the regionally popular social networks around the world, like Bebo, Orkut and Mixi, is a good example of an upstart unseating incumbents. What has been even more impressive is how Facebook evolved as smartphones became more popular.

A few years ago it was commonplace to read articles like Here's Why Google and Facebook Might Completely Disappear in the Next 5 Years, which argued the following:

Facebook is also probably facing a tough road ahead as this shift to mobile happens.  As Hamish McKenzie said last week, “I suspect that Facebook will try to address that issue [of the shift to mobile] by breaking up its various features into separate apps or HTML5 sites: one for messaging, one for the news feed, one for photos, and, perhaps, one for an address book. But that fragments the core product, probably to its detriment.”

Considering how long Facebook dragged its feet to get into mobile in the first place, the data suggests they will be exactly as slow to change as Google was to social.  Does the Instagram acquisition change that? Not really, in my view.  It shows they’re really fearful of being displaced by a mobile upstart.  However, why would bolting on a mobile app to a Web 2.0 platform (and a very good one at that) change any of the underlying dynamics we’re discussing here? I doubt it.

At the time there was a widespread belief that since Facebook was dominant on the desktop and dependent on desktop-based Facebook games like Farmville for a huge chunk of its revenue, it would act like a typical incumbent and double down on where it was making money while an upstart beat it on mobile. So were the pundits right?

Here’s a report from three years after the one above, which proclaims:

The world's largest social network said nearly 84 percent of the 890 million people who used its service daily did so on a mobile phone. Nearly 86 percent of the 1.39 billion people who accessed Facebook each month did so on a mobile device as well, a new record for the company.

Advertisers followed those numbers. Mobile ads accounted for about 69 percent of the company's $3.85 billion in revenue. Overall, the company's sales jumped nearly 49 percent from the same time a year ago.

Those numbers underscore the Menlo Park, Calif., company's increasing reliance on mobile devices for its business. Much of the technology industry has become fixated on smartphones and tablets, as people throughout the world switch from desktop computers. Investors are now paying more attention to the mobile aspects of Facebook and its competitors. Few companies have successfully navigated the switch to mobile devices as effectively as social networks, like Facebook and Twitter.

In three years, Facebook went from a dead man walking due to the rise of the smartphone to the biggest success story on mobile, making the most money and having the most users of any mobile app. Facebook has done this by having a culture that comes across as almost paranoid about losing users to competitors, which is why it does things like forcing employees to use Android (the most popular mobile OS), copying features like mad from startups as diverse as Medium, Timehop and Twitter, and seemingly overspending to acquire mobile startups like Instagram & WhatsApp even though usage of its core apps is still growing like crazy.

Other companies, most notably Google, have tried to unseat Facebook from its perch on top of the social networking world, but none has been able to match its paranoid, frenzied approach of building or buying every interesting thing happening in social media. This aspect of Facebook’s culture and approach to its business has proven hard to copy or defeat.

How This Applies to Your Organization

As mentioned earlier, your organization’s culture is defined by the positive behaviors you reward and the negative behaviors you tolerate. When you define your strategy, you also have to take a hard look at the behaviors you reward within your organization to ensure that the culture you have created aligns with that strategy. My current employer, Microsoft, is in the midst of a strategic redefinition and culture change as I write this, and it’s been interesting to see the big and small changes that have occurred. From how sales teams are compensated & how performance reviews across the company have been tweaked to which engineering projects are now approved & celebrated, numerous changes have been made with the goal of ensuring the company’s culture doesn’t conflict with its new strategy.

So far the stock market has been very receptive to these changes over the past two years, which is a testament to how well things can go when your new strategy & corporate culture align.

Note Now Playing: Young Thug - Power Note


 

Categories: Technology

As I write this, the latest version of Skype for the iPhone has a 2-star rating, as does Swarm by Foursquare. What these apps have in common is that they are both bold attempts to redesign a well-known and popular app that are being rejected by that app’s core constituency. A consequence of my time working on Windows 8 is that I now obsess quite a bit about redesigning apps and about which warning signs indicate that you are either going to greatly please or strongly offend your best users.

When I worked on Windows 8, there were a number of slogans the team used to ensure the principles behind our work were understood by all. Some of them, such as “content over chrome”, were counterproductive in that slavish devotion to them led to ignoring decades of usability research by eschewing affordances and hiding navigation and controls within apps. However, there were other principles from the Windows 8 era I wish app developers took more to heart, such as “win as one”, which encouraged consistency with the overall platform’s UI model & working with other apps, and “change is bad unless it's great”, which encouraged respecting the past and only making changes that provided a noticeably better user experience.

In addition to these two principles, I’ll add one more for app developers to keep in mind whenever the time calls for a redesign: “minimize the impacts of loss aversion”. For those who aren’t familiar with the term, loss aversion (closely related to the endowment effect) is the tendency for humans to strongly prefer avoiding losses over making equivalent gains. What this means for developers is that end users will react more strongly to losing a feature than they would to gaining that same feature. There are numerous studies that show how absurdly humans can behave in the face of even minor losses. My favorite example is how strongly people react to loss aversion when it comes to grocery shopping, as described in this blog post by Jon Geeting:

There was a law set up last month in D.C. (passed unanimously by city council) to place a five-cent tax on paper and plastic bags at grocery stores, pharmacies and other food-service providers. So, basically, if I went shopping, my total came to $35.20, and I needed one bag to put it in, my total would then become $35.25. Similarly, if I needed two bags, my total would become $35.30, and so on — while if I simply bought reusable bags, I would be subject to no tax.

From what I hear from people in D.C., they absolutely hate it. Even though it’s just an extra five cents, they want absolutely nothing to do with it. They really want that nickel. So many people use less bags, bring their own, or just try to balance everything without one on their trip home. Think about how much less waste and pollution there is in D.C. now, because of a measly five-cent fee.

On the flip side, if you told people you’d give them 5 cents for each bag they brought from home they’d laugh in your face. Nobody is going to do an extra bit of work to be paid five cents even though they would do that work to avoid paying 5 cents. That’s loss aversion at work.

To recap, if you are redesigning an app you need to keep these three rules in mind:

  1. Win as one: Whatever changes you make must feel like a consistent whole, both within the app and with the platform your app resides on. Swarm and Foursquare have completely different aesthetics and integrate in a fairly disjointed manner, often with no way to easily jump back and forth between the two apps. Skype for iPhone is pretty much a Windows Phone app in look and feel, complete with pivot controls and cut-off title text. This is a very jarring experience compared to everything else on iOS.

  2. Change is bad unless it’s great: App developers need to be honest with themselves about whether a redesign is about solving a customer problem in a better way or is part of a corporate strategy. The Facebook news feed is an example of a redesign that was actually driven by a need to solve customer problems, which is why, although it met with a massive user revolt at first, once people used it they loved it and the anger died down. Swarm exists because Foursquare now wants to compete with Yelp and needs to shed its history as a social check-in app, which it sees as baggage as it evolves into a social discovery engine for things to do in your city. From an end user perspective, Skype for iPhone’s redesign is about making the app look and feel like a Microsoft Metro-style app. Given these primary goals, it is no surprise that end users can tell that solving their problems came in second place when they review these apps.

  3. Minimize the impacts of loss aversion: Coupling a redesign with taking away features means people will focus on the missing features instead of whatever benefits the redesign provides. Foursquare took away badges, mayorships, points and the social feed of your friends’ check-ins as part of the split that created Swarm, and there are a large number of one-star reviews of the Swarm app complaining about these missing features. Skype for iPhone’s initial release took away deleting & editing messages while making other features harder to find. Even features that are used once in a blue moon seem mission critical once people find out they are gone. Taking away features will always sting more than the actual value of those features, and taking multiple features away as part of a redesign means any benefits of the redesign will be lost in the ensuing outrage about what’s missing.

Note Now Playing: Rick Ross - The Devil is a Lie (featuring Jay-Z) Note


 

Categories: Social Software | Technology

Chris Dixon has a fairly eloquent blog post where he talks about the decline of the mobile web. He cites the following chart

and talks about what it means for the future of innovation if apps, which tend to be distributed from app stores managed by corporate gatekeepers, continue to displace the web as the primary way people connect on the Internet.

Using HTTP Doesn’t Make Something Part of the Web

In response to Chris Dixon’s post I’ve seen a fallacy repeated a number of times. The most visible instance of this fallacy is John Gruber’s Rethinking What We Mean by ‘Mobile Web’ where he writes

I think Dixon has it all wrong. We shouldn’t think of the “web” as only what renders inside a web browser. The web is HTTP, and the open Internet. What exactly are people doing with these mobile apps? Largely, using the same services, which, on the desktop, they use in a web browser.
...
Yes, Apple and Google (and Amazon, and Microsoft) control their respective app stores. But the difference from Dixon’s AOL analogy is that they don’t control the internet — and they don’t control each other. Apple doesn’t want cool new apps launching Android-only, and it surely bothers Google that so many cool new apps launch iOS-first. Apple’s stance on Bitcoin hasn’t exactly kept Bitcoin from growing explosively. App Stores are walled gardens, but the apps themselves are just clients to the open web/internet.
...
The rise of mobile apps hasn’t taken anything away from the wide open world of web browsers and cross-platform HTML/CSS/JavaScript — other than supremacy. I think that bothers some, who saw the HTML/CSS/JavaScript browser-centric web’s decade-ago supremacy as the end point, the ultimate triumph of a truly open platform, rather than what it really was: just another milestone along the way of an industry that is always in flux, ever ebbing and flowing.

What we’ve gained, though, is a wide range of interaction capabilities that never could have existed in a web browser-centric world. That to me is cause for celebration.

The key point here is that the World Wide Web and the Internet are different things. The definition of the web I use comes from Tim Berners-Lee’s original proposal of a browsable information network of hyperlinked documents & media on a global network. The necessary building blocks for this are a way to identify these documents (URIs), the actual content of these documents (HTML/JS/CSS/media), a way for clients to obtain these documents (HTTP) and the global network they sit on (the Internet).

This difference is important to spell out because although HTTP and the Internet are key parts of the World Wide Web, they aren’t the web. One of the key things we lose with apps is public addressability (i.e. URIs, for the technically inclined). What does this mean in practice?

  • Visiting a website is as simple as being told “go to http://bing.com” from any browser on any platform using any device. Getting an app requires the app developer to have created an app for your platform which may not have occurred due to technical limitations, policy limitations of the platform owner or simply the cost of supporting multiple platforms being higher than they want to bear.

  • Content from apps is often invisible to search engines like Google and Bing since their information is not part of the web.

  • Publishing a website simply requires getting a web host or even just hosting your own server. Publishing an app means submitting your product to some corporation and restricting your content and functionality to its rules & regulations before it is made available to end users.

The key loss is that we are regressing from a globally accessible information network, one that reaches everyone on earth and where no publisher needs permission to reach billions of people, to a collection of corporate-controlled fiefdoms and walled gardens.
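To make the public addressability point concrete, here is a minimal sketch in Python, purely illustrative, of what a URI plus HTTP buys you: any generic client on any platform can retrieve a public web document given nothing but its address, with no app store or platform owner mediating the request.

```python
# A minimal sketch of public addressability: given nothing but a URI,
# any HTTP client on any platform can retrieve the document it names.
# (Illustrative only; real browsers and crawlers obviously do much more.)
from urllib.request import urlopen

uri = "http://bing.com"            # a global, publicly shareable identifier
with urlopen(uri) as response:     # HTTP: the agreed-upon way to fetch it
    html = response.read()         # HTML/CSS/JS/media: the content itself
    print(response.status, len(html), "bytes retrieved from", uri)
```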

I don’t disagree with Gruber’s notion that mobile apps have introduced new models of interaction that would not have existed in a web-browser centric world. However that doesn’t mean we aren’t losing something along the way.

Note Now Playing: The Heavy - Short Change Hero Note


 

Categories: Technology | Web Development

This morning I saw the following tweet from Steven Levy, a Wired reporter who's written a number of interesting books about software people and the great companies they've built

As part of my day job at Microsoft, I've begun to learn more about how advertising across the internet works on a technical level, and it is quite interesting to learn how an image of some headphones I looked at on an e-commerce site ended up staring back at me from an ad on Facebook later that day.

The fundamental technology that makes this possible is Facebook Exchange (FBX). The infographic below provides an overview of how it enables ads from e-commerce sites to show up on Facebook, and I’ll follow that up with a slightly more technical explanation.

Source: Business Insider

Facebook Exchange is a Real-Time Bidding platform which enables Facebook to sell ad slots on its pages to the highest-bidding advertisers in fractions of a second. Typically, advertisers and the publishers who own the pages where ads show up work together through an intermediary called a Demand Side Platform (DSP). A DSP such as AdRoll provides one of its retail partners, such as American Apparel or Skullcandy, with code to put tracking pixels on the retailer’s site, which allows the user to be identified and context such as what pages they’ve visited to be recorded. The retail partner then goes into AdRoll’s interface and decides how much it is willing to pay to show an ad on various networks such as Facebook (via FBX) to a user who has visited one of its pages.

AdRoll then provides data to Facebook that allows the user to be uniquely identified within Facebook’s network. Later, when that same user goes to Facebook, Facebook puts out a request on its ad exchange saying “Here’s a user you might be interested in; how much are you willing to pay to show them an ad?” AdRoll then cross-references that user’s opaque identifier with the behavioral data it has (i.e. what pages they were looking at on an advertiser’s site) and, if there is a match, makes a bid which also includes the ad for the page that piqued the user’s interest. If the retailer wins the auction, its ad is chosen and rendered either in the news feed or on the right-hand side of Facebook’s desktop website. Each of these pieces happens in fractions of a second, but that is still slow enough that rendering ads tends to be the noticeably slowest part of rendering the webpage.
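To make the sequence above concrete, here is a heavily simplified sketch of that retargeting flow in Python. All of the class and function names (DSP, track_visit, run_auction) are invented for illustration; the real FBX integration goes through Facebook’s and the DSP’s own APIs, involves many DSPs bidding in parallel, and is measured in milliseconds.

```python
from dataclasses import dataclass, field

@dataclass
class DSP:
    """A toy Demand Side Platform (roughly what AdRoll does for its retailers)."""
    name: str
    # behavioral data collected via tracking pixels: user_id -> pages visited
    visits: dict = field(default_factory=dict)
    # how much each retailer will pay to re-show a product to a past visitor
    retailer_bids: dict = field(default_factory=dict)

    def track_visit(self, user_id: str, retailer: str, page: str) -> None:
        """Called when the tracking pixel on a retailer's page fires."""
        self.visits.setdefault(user_id, []).append((retailer, page))

    def respond_to_bid_request(self, user_id: str):
        """Exchange asks: 'how much will you pay to show this user an ad?'"""
        history = self.visits.get(user_id)
        if not history:
            return None                      # unknown user: no bid
        retailer, page = history[-1]         # retarget the most recent product page
        price = self.retailer_bids.get(retailer, 0.0)
        return {"price": price, "creative": f"Ad for {page} from {retailer}"}

def run_auction(user_id: str, dsps: list) -> str:
    """Toy real-time auction: the highest bid wins the ad slot."""
    bids = [b for dsp in dsps if (b := dsp.respond_to_bid_request(user_id))]
    if not bids:
        return "house ad"
    return max(bids, key=lambda b: b["price"])["creative"]

# --- usage ---
adroll = DSP("AdRoll", retailer_bids={"headphone-shop": 4.50})
adroll.track_visit(user_id="fbx-opaque-123", retailer="headphone-shop",
                   page="/noise-cancelling-headphones")

# Later, the same (opaque) user loads Facebook and an ad slot goes up for auction.
print(run_auction("fbx-opaque-123", [adroll]))
# -> Ad for /noise-cancelling-headphones from headphone-shop
```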

In fact, there was a grand example of retargeting in action while I was researching this post. When I started writing it, I checked out AdRoll’s web page on how to use their service to retarget ads on Facebook. A few minutes later, this showed up in my news feed.

You can tell if an ad is retargeted on Facebook by hovering with your mouse cursor on the top right of the ad (on the desktop website) and then selecting the options. If the “About This Ad” link takes you somewhere outside Facebook then it is a retargeted ad.

Some ad providers like Quantcast provide an option to opt out of retargeting for their service, while others like AdRoll link to the Network Advertising Initiative (NAI) opt-out tool, which provides an option to opt out of retargeting for a variety of ad providers. Note that this doesn’t prevent you from getting ads; it is just a signal to advertisers that you’d rather not have your ads personalized.

If you found this blog post informative, note that I've begun a regular series of posts intended to answer questions about online advertising on Microsoft properties such as Bing & MSN and about industry trends. Hit me up on Twitter with your questions.

Note Now Playing: Ice Cube - Hood Mentality Note


 

Categories: Technology

Earlier this week my Twitter feed was flooded with reactions to the announcement of Amazon Prime Air, a vision which is compelling and sounds like something from a science fiction novel.

 

Flying robots delivering your impulse buys ordered from your smartphone within 30 minutes? Sign me up.

Amazon's Robotic Vision

A few have been skeptical of Amazon Prime Air, and some, such as Konstantin Kakaes at Wired, have described the announcement as an hour-long infomercial hosted by Charlie Rose that is full of hot air. There are definitely a lot of things that need to get better before drone-based package delivery is a reality. On the technology end there are challenges such as improving navigation software and getting more efficiency out of battery power. On the regulatory end, the rules and regulations needed to ensure the safety of the populace in the midst of these flying drones still need to be figured out.

Unlike a number of the skeptics, I'm quite confident that many of the technological and regulatory hurdles will be surmounted in the next 5 years. On the other hand, I also believe Amazon oversold the technology. The logistics of actually delivering a package to a customer have been glossed over quite a bit. If you look at the video, this is the optimal situation for delivering a package.

Now think about all the places in urban environments that don't meet these criteria: apartment complexes, condos, office buildings, etc. Then think about all the places where packages are best left on a doorstep close to the house, where it would be infeasible for a drone to land. In fact, it's likely the 30-minute delivery claim is meant to address some of these logistics challenges by assuming you'll be there to pick up whatever the drone drops off on your lawn or driveway before Johnny Sticky Fingers does.

My suspicion is that the truly impactful usage of Amazon Prime Air will be 30-minute drone delivery to Amazon Locker locations.

Google's Robotic Vision

This morning my Twitter feed is abuzz with the news that Google's Andy Rubin, creator of Android and longtime robotics enthusiast, has acquired seven robotics companies that are creating technologies to build a mobile, dexterous robot. This effort will be part of Google X, which is also the home of Google Glass & self-driving cars. Andy Rubin describes the effort as follows:

“I have a history of making my hobbies into a career,” Mr. Rubin said in a telephone interview. “This is the world’s greatest job. Being an engineer and a tinkerer, you start thinking about what you would want to build for yourself.”

He used the example of a windshield wiper that has enough “intelligence” to operate when it rains, without human intervention, as a model for the kind of systems he is trying to create. That is consistent with a vision put forward by the Google co-founder Larry Page, who has argued that technology should be deployed wherever possible to free humans from drudgery and repetitive tasks.

To spell it out, Google’s efforts in automation under Google X have been about augmenting the human experience by eliminating repetitive tasks that humans are poor at, such as driving, where human error is a significant cause of the over 30,000 deaths a year in the US from automobile accidents. Even this somewhat frivolous example of ringing the doorbell at Andy Rubin’s home, taken from a 2007 profile of the Android founder, is consistent with that vision:

If the scanner recognizes you, the door unlocks automatically. (The system makes it easier to deal with former girlfriends, Mr. Rubin likes to joke. No messy scenes retrieving keys — it’s just a simple database update.)

Those forced to use the doorbell are greeted with another technological marvel: a robotic arm inside the glass foyer grips a mallet and then strikes a large gong. Although Mr. Rubin won’t reveal its cost, it may be one of the world’s most expensive doorbells.

At the end of the day, I’m more inspired by a company looking to automate away all of the tediousness of everyday life with Star Trek-style technology, from automated doorways and self-driving cars to fully autonomous robots, than by a vision of making impulse buying at Walmart 2.0 (i.e. Amazon) more convenient. I for one welcome our new robotic overlords.

Note Now Playing: Adele - Rumor Has It Note


 

Categories: Technology

I read an interesting blog post by Steven Levy titled Google Glass Team: ‘Wearable Computing Will Be the Norm’, an interview with the Project Glass team, which contains the following excerpt:

Wired: Do you think this kind of technology will eventually be as common as smart phones are now?

Lee: Yes. It’s my expectation that in three to five years it will actually look unusual and awkward when we view someone holding an object in their hand and looking down at it. Wearable computing will become the norm.

The above reminds me of the Bill Gates quote, “there's a tendency to overestimate how much things will change in two years and underestimate how much change will occur over 10 years”. Coincidentally, the past week has been full of retrospectives on the eve of the fifth anniversary of the iPhone. The iPhone has been a great example of how we can both overestimate and underestimate the impact of a technology. When the iPhone was announced as the convergence of an iPod, a phone and an internet mobile communicator, the most forward-thinking assumption was that the majority of the Apple faithful who bought iPods would buy iPhones, heading off the demise of the iPod/MP3 player market category.

Five years later, the iPhone has effectively reshaped the computing industry. The majority of tech news today can be connected back to companies still dealing with the fallout of the creation of the iPhone and its progeny, the iPad. Entire categories of products across multiple industries have been made obsolete (or at least redundant), from yellow pages and paper maps to PDAs, point-and-shoot cameras and netbooks. This is in addition to the sociological changes that have been wrought (e.g. some children now think a magazine is a broken iPad). The most shocking change for a techie has been watching usage and growth of the World Wide Web be supplanted by usage of mobile apps. No one really anticipated or predicted this five years ago.

Wearable computing will follow a similar path. It is unlikely that within a year or two of products like Project Glass coming to market people will stop using smartphones, especially since there are many uses for the ubiquitous smartphone that Project Glass hasn’t tried to address (e.g. playing Angry Birds or Fruit Ninja at the doctor’s office while waiting for your appointment). However, it is quite clear that in our lifetime it will be possible to put together scenarios that would have seemed far-fetched for science fiction just a few years ago. It will one day be possible to look up the Facebook profile, or future equivalent, of anyone you meet at a bar, business meeting or on the street, with the person being none the wiser, simply by looking at them. Most of the technology to do this already exists; it just isn’t in a portable form factor. That is just one scenario that will not only be possible but commonplace with products like Project Glass in the future.

Focusing on Project Glass making smartphones obsolete is like focusing on the fact that the iPhone made iPod competitors like the Creative Zen Vision obsolete. Even if it did, that was not the interesting impact. As a software professional, it is interesting to ask yourself whether your product or business will be one of those obsoleted by this technology or empowered by it. Using analogies from the iPhone era, will you be RIM or will you be Instagram?

PS: I have to wonder what Apple thinks of all of this. When I look at the image below, I see a clunky and obtrusive piece of headgear that I can imagine makes Jonathan Ive roll his eyes and think he could do much better. Given Apple’s mantra is “If you don’t cannibalize yourself, someone else will” I expect this space to be very interesting over the next ten years.

Note Now Playing: Wale - Bag of Money (featuring Rick Ross, Meek Mill and T-Pain) Note


 

Categories: Technology

I stumbled on an interesting blog post today titled Why Files Exist, which contains the following excerpt:

Whenever there is a conversation about the future of computing, the discussion inevitably turns to the notion of a “File.” After all, most tablets and phones don’t show the user anything that resembles a file, only Apps that contain their own content, tucked away inside their own opaque storage structure.

This is wrong. Files are abstraction layers around content that are necessary for interoperability. Without the notion of a File or other similar shared content abstraction, the ability to use different applications with the same information grinds to a halt, which hampers innovation and user experience.

Given that one of the hats I wear in my day job is responsibility for the SkyDrive API, questions like whether the future of computing should include an end-user-facing notion of files and how interoperability across apps should work are often at the top of my mind. I originally wasn’t going to write about this blog post until I saw the discussion on Hacker News, which was a bit disappointing since people either decided to get very pedantic about the specifics of how a computer file is represented in the operating system or argued that inter-app sharing via intents (on Android) or contracts (in Windows 8/Windows RT) makes the notion of files obsolete.

The app-centric view of data (as espoused by iOS) is that apps own any content created within the app and there is no mechanism outside the app’s user experience to interact with or manage this data. This also means there is no global namespace, also known as a file system, where other apps or the end user can interact with this data. There are benefits to this approach, such as greatly simplifying the concepts the user has to deal with and preventing both the user and other apps from mucking with the app’s experience. There are costs to this approach as well.

The biggest cost, as highlighted in the Why Files Exist post, is that interoperability is compromised. The reason is the well-known truism that data outlives applications. My contact list, my music library and the code for my side projects across the years are all examples of data that has outlived the original applications I used to create and manage it. The majority of this content is in the cloud today primarily because I want universal access to my data from any device and any app. A world where moving from Emacs to Visual Studio or from WinAmp to iTunes means losing the files created in those applications would be an unfortunate place to live in the long term.

App-to-app sharing, as done with Android intents or contracts in Windows 8, is a powerful way to create loosely coupled integration between apps. However, there is a big difference between one-off sharing of data (e.g. share this link from my browser app to my social networking app) and actual migration or reuse of data (e.g. import my favorites and passwords from one browser app to another). Without a shared global namespace that all apps can access (i.e. a file system) you cannot easily do the latter, as the sketch below tries to illustrate.
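Here is a toy illustration of that difference in Python. The paths and function names are hypothetical and this is not any platform’s actual sharing API; the point is simply that a shared namespace lets data outlive and span the apps that touch it, while one-off sharing does not.

```python
# Toy illustration only: hypothetical paths and function names, not a real API.
from pathlib import Path

MUSIC_LIBRARY = Path.home() / "Music"        # a shared, globally visible namespace

def ripper_app_save(track_name: str, data: bytes) -> Path:
    """Any app can write into the shared library..."""
    MUSIC_LIBRARY.mkdir(exist_ok=True)
    path = MUSIC_LIBRARY / f"{track_name}.mp3"
    path.write_bytes(data)
    return path

def player_app_list_tracks() -> list[str]:
    """...and any other app can later find and reuse that data,
    even if the app that created it no longer exists."""
    return sorted(p.stem for p in MUSIC_LIBRARY.glob("*.mp3"))

# Contrast with the app-silo model: content lives in a private, opaque store
# that only the owning app can read.
APP_SILO = Path.home() / ".ripper_app" / "private_store"

def share_one_track(track_name: str) -> bytes:
    """One-off sharing: the owning app hands a single item to another app on
    request. Useful, but it doesn't give you migration or bulk reuse of data."""
    return (APP_SILO / f"{track_name}.mp3").read_bytes()
```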

The Why Files Exist post ends with:

Now, I agree with Steve Jobs saying in 2005 that a full blown filesystem with folders and all the rest might not be necessary, but in every OS there needs to be at least some user-facing notion of a file, some system-wide agreed upon way to package content and send it between applications. Otherwise we’ll just end up with a few monolithic applications that do everything poorly.

Here I slightly disagree with characterizing the problem as needing a way to package content and send it between applications. Often my data is conceptually independent of any application, and it is more that I want to give apps access to my data than that I want to package up some of my data and send it from one app to another. For example, I wouldn’t characterize playing MP3s in iTunes that were originally ripped in Winamp or bought from Amazon MP3 as packaging content between those apps and iTunes. Rather, there is a global concept known as my music library which multiple apps can add to or play from.

So back to the question that is the title of this blog post: have files outlived their usefulness? Only if you think reusing data across multiple applications has.

Note Now Playing: Meek Mill - Pandemonium (featuring Wale and Rick Ross) Note


 

Categories: Cloud Computing | Technology

Software companies love hiring people who like solving hard technical problems. On the surface this seems like a good idea; unfortunately, it can lead to situations where the people building a product focus more on the interesting technical challenges they can solve than on whether their product actually solves problems for their customers.

I started being reminded of this after reading an answer to a question on Quora about the difference between working at Google versus Facebook, where David Braginsky wrote:

Culture:
Google is like grad-school. People value working on hard problems, and doing them right. Things are pretty polished, the code is usually solid, and the systems are designed for scale from the very beginning. There are many experts around and review processes set up for systems designs.

Facebook is more like undergrad. Something needs to be done, and people do it. Most of the time they don't read the literature on the subject, or consult experts about the "right way" to do it, they just sit down, write the code, and make things work. Sometimes the way they do it is naive, and a lot of time it may cause bugs or break as it goes into production. And when that happens, they fix their problems, replace bottlenecks with scalable components, and (in most cases) move on to the next thing.

Google tends to value technology. Things are often done because they are technically hard or impressive. On most projects, the engineers make the calls.

Facebook values products and user experience, and designers tend to have a much larger impact. Zuck spends a lot of time looking at product mocks, and is involved pretty deeply with the site's look and feel.

It should be noted that Google deserves credit for succeeding where other large software companies have mostly failed: throwing a bunch of Ph.D.s at a problem and actually having them create products that impact hundreds of millions of people, as opposed to research papers that impress hundreds of their colleagues. That said, it is easy to see the impact of complexophiles (props to Addy Santo) in recent products like Google Wave.

If you go back and read the Google Wave announcement blog post, it is interesting to note the focus on combining features from disparate use cases and the diversity of all of the technical challenges taken on at once, including

  • “Google Wave is just as well suited for quick messages as for persistent content — it allows for both collaboration and communication”
  • “It's an HTML 5 app, built on Google Web Toolkit. It includes a rich text editor and other functions like desktop drag-and-drop”
  • “The Google Wave protocol is the underlying format for storing and the means of sharing waves, and includes the ‘live’ concurrency control, which allows edits to be reflected instantly across users and services”
  • “The protocol is designed for open federation, such that anyone's Wave services can interoperate with each other and with the Google Wave service”
  • “Google Wave can also be considered a platform with a rich set of open APIs that allow developers to embed waves in other web services”

The product announcement read more like a technology showcase than an announcement for a product that is actually meant to help people communicate, collaborate or make their lives better in any way. This is an example of a product where smart people spent a lot of time working on hard problems, but at the end of the day they didn't see the adoption they would have liked because they spent more time focusing on technical challenges than on ensuring they were building the right product.

It is interesting to think about all the internal discussions and time spent implementing features like character-by-character typing without anyone bothering to ask whether that feature actually makes sense for a product that is billed as a replacement for email. I often write emails where I type a snarky comment and then edit it out when I reconsider the wisdom of sending it to a broad audience. Nobody wants a feature that lets other people actually see that authoring process.


Some of you may remember that there was a time when I was literally the face of XML at Microsoft (i.e. going to http://www.microsoft.com/xml took you to a page with my face on it). In those days I spent a lot of time using phrases like “the XML<->objects impedance mismatch” to describe the fact that the dominant type system for the dominant web services protocol of the time (aka SOAP) had lots of constructs that don’t map well to a traditional object-oriented programming language like C# or Java. This was caused by the fact that XML had grown to serve conflicting masters. There were people who used it as a basis for document formats such as DocBook and XHTML. Then there were the people who saw it as a replacement for the binary protocols used in interoperable remote procedure call technologies such as CORBA and Java RMI. The W3C decided to solve this problem by getting a bunch of really smart people in a room and asking them to create an amalgam type system that would satisfy both sets of completely different requirements. The output of this activity was XML Schema, which became the type system for SOAP, WSDL and the WS-* family of technologies. This meant that people who simply wanted a way to define how to serialize a C# object so that it could be consumed by a Java method call ended up with a type system that was also meant to be able to describe the structural rules of the HTML in this blog post.

Thousands of man-years of effort were spent across companies like Sun Microsystems, Oracle, Microsoft, IBM and BEA to develop toolkits on top of a protocol stack that had this fundamental technical challenge baked into it. Of course, everyone had a different way of trying to address this “XML<->object impedance mismatch”, which led to interoperability issues in what was meant to be a protocol stack that guaranteed interoperability. Eventually customers started telling their horror stories about actually trying to interoperate with these technologies, such as Nelson Minar’s ETech 2005 talk Building a New Web Service at Google, and a movement around building web services using Representational State Transfer (REST) was born. In tandem, web developers realized that if your problem is moving programming language objects around, then perhaps a data format that was designed for that is the preferred choice. Today, it is hard to find any recent, broadly deployed web service that doesn’t use JavaScript Object Notation (JSON) instead of SOAP.
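A small illustration of why JSON fits the “serialize an object for another program” use case so well; the class and fields below are made up for the example.

```python
# JSON's data model is just objects/arrays/strings/numbers/booleans, so an
# object round-trips through a language's native types with no schema machinery.
# (Illustrative only; the class and its fields are invented for this sketch.)
import json
from dataclasses import dataclass, asdict

@dataclass
class PurchaseOrder:
    order_id: int
    customer: str
    items: list

order = PurchaseOrder(order_id=42, customer="Contoso", items=["widget", "gizmo"])

wire_format = json.dumps(asdict(order))              # object -> JSON text
print(wire_format)                                   # {"order_id": 42, "customer": ...}

received = PurchaseOrder(**json.loads(wire_format))  # JSON text -> object again
assert received == order

# The XSD/SOAP equivalent also has to account for constructs that exist for
# document formats (mixed content, attributes vs. elements, substitution groups,
# ordered particles), none of which map cleanly onto an object like this one.
```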


The moral of both of these stories is that a lot of the time in software it is easy to get lost in the weeds solving hard technical problems that stem from complexity we’ve imposed on ourselves through some well-meaning design decision, instead of actually solving customer problems. The trick is being able to detect when you’re in that situation and to see whether altering some of your base assumptions leads to a simplification of your problem space that frees you up to spend time solving real customer problems and delighting your users. More people need to ask themselves questions like: do I really need to use the same type system and data format for business documents AND for serialized objects from programming languages?

Note Now Playing: Travie McCoy - Billionaire (featuring Bruno Mars) Note


 

This past weekend, I bought an Apple iPhone as a replacement for my AT&T Tilt which was slowly succumbing to hardware glitches. As a big fan of Windows Mobile devices and their integration with Microsoft Exchange I was wary of adopting an iPhone. I was concerned about it not having all of the features I'd gotten used to but on the other hand I was looking forward to replacing my iPod+AT&T Tilt combination with a single device.

Here are my positive and negative impressions of the device after five days

The Good

There are lots of things I like about the iPhone experience. Below are my favorite aspects of the experience thus far

  • Visual Polish: The visual polish of the 1st and 3rd party applications on the iPhone is amazing. There are so many nice touches in the phone, from the Cover Flow browsing experience when playing music to the transparent pop-over windows when you receive an SMS text. The few 3rd party applications I've used have seemed similarly polished, although I've downloaded fewer than ten apps. It's like using a futuristic device that you see in movies, not an actual phone from real life.

  • Browsing and Purchasing Applications: As someone who's used Handango and ActiveSync to install applications on my Windows Mobile device, I have to say that experience pales in comparison to being able to browse, search, purchase and install apps directly from my phone. Having the app store integrated into iTunes actually seems superfluous.

  • It's also an iPod: I used to think my iPod classic was a fantastic music playing device until I got my iPhone. Now it seems rather primitive and ugly. I'm shocked at how much better the music playing experience is on my iPhone and have tossed my iPod classic into our junk drawer. Now my pockets are lighter and I got an iPod upgrade. Nice.

  • The Web browser: The browser supports multi-touch for zoom and it does AJAX. Hell, yeah.

  • Autocomplete when sending emails: When sending a mail, it uses autocomplete to fill out the TO/CC/BCC line by looking in your contact list and the email addresses of people in your inbox. This is a very nice touch.

The Bad

There are also a number of features I miss from owning a Windows Mobile device which I hope are addressed in the future or I might eventually find myself switching back

  • Email Search: Windows Mobile devices can search emails in your Exchange inbox by sending a query to the server. Using this functionality you aren't limited to searching the emails on your device but can search at least a month of emails and get the results sent down to your phone. The iPhone has no email search functionality.

  • No integration with the Global Address List: On my AT&T Tilt I often needed to send emails to co-workers whose email addresses weren't in my contact list. All I needed to do was type out their names and I could pull their information (email address, phone number and office location) down to my phone, to either add them as a contact or insert their email address into an email. I've felt rather handicapped without this functionality. The autocomplete feature, which uses all the email addresses from your local inbox, has been a slight mitigation.

  • No Flash in the browser: After getting used to having a Flash-capable browser on my AT&T Tilt via Skyfire, it is rather irritating that I've now lost that functionality by switching to an iPhone. You don't really notice how much Flash video content there is on the Web until you start missing it. My last post was of a Flash video which is a broken link when I browse my blog from my iPhone. Lame.

  • Managing meetings is awful: As a program manager at Microsoft I schedule a lot of meetings. Every once in a while I may be running late for a meeting and have to either send a mail out to the attendees telling them I'll be late or cancel the meeting. Neither of these options is available on the iPhone.

  • Reply flags not set in Exchange: With a Windows Mobile phone, when you respond to a mail via the phone it is properly marked as a mail you've replied to when you view it in Outlook on the desktop. The iPhone developers remembered to track which emails you've responded to on the device but failed to propagate that information back to Exchange. For a while, I thought I was going senile because I remembered responding to mails but they weren't marked as being replied to in my inbox. After I found my replies in my Sent mail folder, I realized what was wrong. 

  • Tasks: Although I've never tried Getting Things Done, I am a big fan of Outlook Tasks and often add new tasks or mark them as complete from my phone. The iPhone does not synchronize your Outlook tasks from Exchange which is a glaring oversight. For now, I've gotten around this by spending $10 on KeyTasks for the iPhone which is somewhat hacky but works. 

The Ugly

So far there have been two aspects of the user experience that have been facepalm worthy

  • Ringtones: On my AT&T Tilt it was pretty straightforward to make any MP3 snippet eligible to be used as a ring tone simply by dropping it in the right folder. The iPhone requires that I pay $0.99 for a song I already own to use it as a ring tone. Seriously?

  • Using your iPhone on multiple computers: I typically purchase music and burn CDs on my home computer while using my iPod as a music library at work. This functionality is disabled out of the box on the iPhone. You can only sync your iPhone to one computer which includes only being able to play music off of your iPhone on a single computer. This is pretty ridiculous given multicomputer households and people who use their iPhones at home and at work. Thankfully, the Internet is full of workarounds to this foolishness on Apple's part. 

Despite what looks like a long list of complaints this is probably the best mobile phone I've ever owned. It just isn't perfect.

Note Now Playing: Jay-Z - Can't Knock the Hustle Note


 

Categories: Technology

From the Palm Pre and Palm WebOS in-depth look we learn:

The star of the show was the new Palm WebOS. It's not just a snazzy new touch interface. It's a useful system with some thoughtful ideas that we've been looking for. First of all, the Palm WebOS takes live, while-you-type searching to a new level. On a Windows Mobile phone, typing from the home screen initiates a search of the address book. On the Palm WebOS, typing starts a search of the entire phone, from contacts through applications and more. If the phone can't find what you need, it offers to search Google, Maps and Wikipedia. It's an example of Palm's goal to create a unified, seamless interface.

Other examples of this unified philosophy can be found in the calendar, contacts and e-mail features. The Palm Pre will gather all of your information from your Exchange account, your Gmail account and your Facebook account and display them in a single, unduplicated format. The contact listing for our friend Dave might draw his phone number from our Exchange account, his e-mail address from Gmail and Facebook, and instant messenger from Gtalk. All of these are combined in a single entry, with a status indicator to show if Dave is available for IM chats.

This is the holy grail of contact management experiences on a mobile phone. Today I use Exchange as the master for my contact records and then use tools like OutSync to merge in contact data for my Outlook contacts who are also on Facebook before pushing it all down to my Windows Mobile phone (the AT&T Tilt). Unfortunately this is a manual process and I have to be careful of creating duplicates when importing contacts from different places.

If the Palm Pre can do this automatically in a "live", always-connected way without creating duplicate or useless contacts (e.g. Facebook friends with no phone or IM info shouldn't take up space in my phone contact list) then I might have to take this phone for a test drive.
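For illustration, here is a toy sketch in Python of the kind of merge-and-deduplicate step described above. The field names are hypothetical and this is not how Palm's actual implementation works; it just shows the shape of the problem: combine records from several sources into one entry per person and drop entries with nothing actionable.

```python
# Toy sketch of contact merging (hypothetical field names, illustrative only).
def merge_contacts(*sources: list) -> list:
    merged = {}
    for source in sources:
        for record in source:
            # Key on email when available, otherwise fall back to the name.
            key = record.get("email", "").lower() or record["name"].lower()
            entry = merged.setdefault(key, {"name": record["name"]})
            for field in ("email", "phone", "im"):
                if record.get(field) and not entry.get(field):
                    entry[field] = record[field]   # first source to supply a field wins
    # Contacts with no email, phone or IM shouldn't clutter the phone's list.
    return [c for c in merged.values() if any(c.get(f) for f in ("email", "phone", "im"))]

exchange = [{"name": "Dave", "email": "dave@contoso.com", "phone": "555-0100"}]
gmail    = [{"name": "Dave", "email": "dave@contoso.com", "im": "dave.gtalk"}]
facebook = [{"name": "Dave", "email": "dave@contoso.com"},
            {"name": "Lurker With No Info"}]

print(merge_contacts(exchange, gmail, facebook))
# -> a single entry for Dave with phone, email and IM; the empty
#    Facebook-only contact is filtered out.
```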

Anyone at CES get a chance to play with the device up close?

Note Now Playing: Hootie & The Blowfish - Only Wanna Be With You Note


 

Categories: Social Software | Technology

This morning, Jeff Atwood wrote a blog post about software piracy entitled My Software Is Being Pirated where he talks about how companies can deal with the fact that the piracy rate among their users could be as high as 90%. He writes

Short of ..

  1. selling custom hardware that is required to run your software, like the Playstation 3 or Wii
  2. writing a completely server-side application like World of Warcraft or Mint

.. you have no recourse. Software piracy is a fact of life, and there's very little you can do about it. The more DRM and anti-piracy devices you pile on, the more likely you are to harm and alienate your paying customers. Use a common third party protection system and it'll probably be cracked along with all the other customers of that system. Nobody wants to leave the front door to their house open, of course, but you should err on the side of simple protection whenever possible. Bear in mind that a certain percentage of the audience simply can't be reached; they'll never pay for your software at any price. Don't penalize the honest people to punish the incorrigible. As my friend Nathan Bowers so aptly noted:

Every time DRM prevents legitimate playback, a pirate gets his wings.

In fact, the most effective anti-piracy software development strategy is the simplest one of all:

  1. Have a great freaking product.
  2. Charge a fair price for it.

(Or, more radically, choose an open source business model where piracy is no longer a problem but a benefit -- the world's most efficient and viral software distribution network. But that's a topic for a very different blog post.)

It is interesting to note that Jeff's recommendation for an effective anti-piracy solution is actually contradicted by the example game from his post: World of Goo. The game is an excellent product and is available for ~$15, yet it is still seeing a 90% piracy rate. In fact, the most effective anti-piracy strategy is simply to route around the problem as Jeff originally stated. Specifically:

  • target custom hardware platforms such as the iPhone or XBox 360 which don't have a piracy problem
  • build Web-based software

However, if you do decide to go down the shrinkwrapped software route, I'd suggest casting a critical eye on any claims that highlight the benefits of the "Open Source business model" to shrinkwrapped software developers. Open Source software companies have been around for over a decade (e.g. RedHat was founded in 1995) and we now have industry experience with what does and doesn't work as a business model for Open Source software.

There are basically three business models for companies that make money from Open Source software:

  1. Selling support, consulting and related services for the "free" software (aka the professional open source business model) – RedHat
  2. Dual license the code and then sell traditional software licenses to enterprise customers who are scared of the GPL – MySQL AB
  3. Build a proprietary Web application powered by Open Source software – Google

As you scan this list, it should be clear that none of these business models actually involves making money directly from selling only the software. This is problematic for developers of shrinkwrapped, consumer software such as games because none of the aforementioned business models actually works well for them.

For developers of shrinkwrapped software, Open Source only turns piracy from a problem into a benefit if you're willing to forgo building consumer software and your software is either too complicated to use without handholding OR you can scare a large percentage of your customers into buying traditional software licenses by using the GPL instead of the BSDL.

Either way, the developers of World of Goo are still screwed.

Now Playing: The Notorious B.I.G. - Mo Money Mo Problems (feat. Mase & Puff Daddy)


 

Categories: Technology

A couple of months ago, Russell Beattie wrote a post about the end of his startup entitled The end of Mowser which is excerpted below

The argument up to now has been simply that there are roughly 3 billion phones out there, and that when these phones get on the Internet, their vast numbers will outweigh PCs and tilt the market towards mobile as the primary web device. The problem is that these billions of users *haven't* gotten on the Internet, and they won't until the experience is better and access to the web is barrier-free - and that means better devices and "full browsers". Let's face it, you really aren't going to spend any real time or effort browsing the web on your mobile phone unless you're using Opera Mini, or have a smart phone with a decent browser - as any other option is a waste of time, effort and money. Users recognize this, and have made it very clear they won't be using the "Mobile Web" as a substitute for better browsers, rather they'll just stay away completely.

In fact, if you look at the number of page views of even the most popular mobile-only websites out there, they don't compare to the traffic of popular blogs, let alone major portals or social networks.

Let me say that again clearly, the mobile traffic just isn't there. It's not there now, and it won't be.

What's going to drive that traffic eventually? Better devices and full-browsers. M-Metrics recently spelled it out very clearly - in the US 85% of iPhone owners browsed the web vs. 58% of smartphone users, and only 13% of the overall mobile market. Those numbers *may* be higher in other parts of the world, but it's pretty clear where the trend line is now. (What a difference a year makes.) It would be easy to say that the iPhone "disrupted" the mobile web market, but in fact I think all it did is point out that there never was one to begin with.

I filed away Russell's post as interesting at the time but hadn't really experienced it first hand until recently, when I switched to using Skyfire as my primary browser on my mobile phone. It has made a world of difference in how I use my phone. No longer am I restricted to crippled versions of popular sites nor do I have to lose features when I visit the regular versions of the page. I can view the real version of my news feed on Facebook. Vote up links in reddit or Digg. And reading blogs is no longer an exercise in frustration due to CSS issues or problems rendering widgets. Unsurprisingly my usage of the Web on my phone has pretty much doubled.

This definitely brings to the forefront how ridiculous an idea it was to think that we need a "mobile Web" complete with its own top level domain (.mobi). Which makes more sense: that every Web site in the world should create duplicate versions of their pages for mobile phones and regular browsers, or that software + hardware would eventually evolve to the point where I can run a full-fledged browser on the device in my pocket? Thanks to the iPhone, it is now clear to everyone that this idea of a second class Web for mobile phones was a stopgap solution at best whose time is now past.

One other thing I find interesting is the treatment of the iPhone as a separate category from "smartphones" in the highlighted quote. This is similar to a statement made by Walt Mossberg when he reviewed Google's Android. That article began as follows

In the exciting new category of modern hand-held computers — devices that fit in your pocket but are used more like a laptop than a traditional phone — there has so far been only one serious option. But that will all change on Oct. 22, when T-Mobile and Google bring out the G1, the first hand-held computer that’s in the same class as Apple’s iPhone.

The key feature that the iPhone and Android have in common that separates them from regular "smartphones" is that they both include a full featured browser based on Webkit. The other features like downloadable 3rd party applications, wi-fi support, rich video support, GPS, and so on have been available on phones running Windows Mobile for years. This shows how important having a full Web experience was for mobile phones and just how irrelevant the notion of a "mobile Web" has truly become.

Now Playing: Kilo Ali - Lost Y'all Mind


 

Categories: Technology

Given the current spirit of frugality that fills the air due to the credit crisis, I'm reconsidering whether to replace my AT&T Tilt (aka HTC Kaiser) with an iPhone 3G. After test driving a couple of iPhones I've realized that the really compelling reason for me to switch is to get a fully-featured Web browser instead of my current situation of having to choose between text-based "mobile" versions of popular sites or mangled Web pages.

I was discussing this with a coworker and he suggested that I try out alternative browsers for Windows Mobile before getting an iPhone. I'd previously tried DeepFish from the Microsoft Live Labs team but found it too buggy to be usable. I looked for it again recently but it seems it has been cancelled. This led me to try out SkyFire which claims to give a complete PC-style Web browsing experience [including Flash Video, AJAX, Silverlight and Quicktime video] on a mobile phone.

After using SkyFire for a couple of days, I have to admit that it is a much improved Web browsing experience compared to what shipped by default on my phone. At first I marveled at how a small startup could build such a sophisticated browser in what seems like a relatively short time, until I learned about the clever hack at the center of the application. None of the actual rendering and processing of content is done on your phone. Instead, an instance of a Web browser (supposedly Firefox) running on the SkyFire servers acts as a proxy for your phone and sends you a compressed image of the fully rendered results. There is still some clever hackery involved, especially with regards to converting streaming Flash video into a series of animated images and accompanying sound and sending it down to your phone in real-time. However it is nowhere near as complex as shipping complete Javascript, Flash, Quicktime and Silverlight implementations in a mobile phone browser.
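To make that architecture concrete, here's a minimal sketch of a proxy-rendering server. This is not SkyFire's actual design or code; renderPageToImage is a placeholder standing in for a real server-side browser engine, and the port and URL scheme are made up:

```typescript
import * as http from "http";

// Stand-in for a real rendering engine (e.g. a server-side Firefox instance).
// Here it just fetches the page and returns a placeholder buffer so the sketch
// stays self-contained and runnable.
async function renderPageToImage(target: string, widthPx: number): Promise<Buffer> {
  const html = await (await fetch(target)).text();
  return Buffer.from(`[${widthPx}px bitmap of ${html.length} bytes from ${target}]`);
}

// The thin phone client asks the proxy for a URL; the proxy does all the
// rendering and ships back a compressed image instead of HTML/JS/Flash.
http.createServer(async (req, res) => {
  const target = new URL(req.url ?? "/", "http://proxy.local").searchParams.get("url");
  if (!target) {
    res.writeHead(400);
    res.end("expected ?url=<page to render>");
    return;
  }
  const image = await renderPageToImage(target, 320); // render at phone width
  res.writeHead(200, { "Content-Type": "application/octet-stream" });
  res.end(image);
}).listen(8080); // e.g. GET http://localhost:8080/?url=https://example.com
```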

The one problem with SkyFire's approach is that all of your requests go through their servers. This means your passwords, emails, bank account records or whatever other Web sites you visit with your mobile browser will flow through SkyFire's servers. This may be a deal breaker for some while for others it will mean being careful about what sites they visit using the browser. 

If this sounds interesting, check out the video demo below

Now Playing: Michael Jackson - PYT (Pretty Young Thing)


 

Categories: Technology

With the releases of betas of Google Chrome and Internet Explorer 8 as well as the recent release of Firefox 3, the pundits are all in a tizzy about the new browser wars. I don't know if it is a war or not but I do like the fact that in the past few months we've seen clear proof that the end user experience when browsing the Web is going to get an upgrade for the majority of Web users.

Whenever there is such active competition between vendors, customers are typically the ones that benefit and the "new browser wars" are no different. Below are some of the features and trends in the new generation of browsers that have me excited about the future of the Web browsing user experience

One Process Per Tab

As seen in: IE 8 beta, Chrome

With this feature browsers are more resilient to crashes since each tab has its own process, so a bug which would cause the entire browser to crash in an old school browser only causes the user to lose that tab in a next generation browser. This feature is called Loosely Coupled IE (LCIE) in Internet Explorer 8 and is described in the documentation of the Chrome Process Manager in the Google Chrome Comic Book.

This feature will be especially welcome for users of add-ons and browser toolbars since the IE team has found that up to 70% of browser crashes are caused by extensions and now these crashes will no longer take down the entire browser.
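A rough sketch of the isolation model, with the per-tab renderer reduced to a hypothetical "renderer.js" script (real browsers also sandbox the processes and broker their access to the network and disk):

```typescript
import { fork, ChildProcess } from "child_process";

// One OS process per tab: if a renderer (or a buggy extension loaded into it)
// crashes, only that tab dies and the browser shell keeps running.
class TabManager {
  private tabs = new Map<number, ChildProcess>();
  private nextId = 1;

  openTab(url: string): number {
    const id = this.nextId++;
    // "renderer.js" is a hypothetical script that renders a single page.
    const renderer = fork("renderer.js", [url]);
    renderer.on("exit", (code) => {
      console.log(`tab ${id} (${url}) exited with code ${code}; other tabs unaffected`);
      this.tabs.delete(id);
    });
    this.tabs.set(id, renderer);
    return id;
  }

  closeTab(id: number): void {
    this.tabs.get(id)?.kill();
    this.tabs.delete(id);
  }
}
```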

Smarter Address Bars

As seen in: IE 8 beta, Chrome, Firefox 3

Autocomplete in browser address bars has been improved. Instead of trying to match a user-entered string as the start of a URL (e.g. "cn" autocompletes to http://cnn.com), newer browsers match any occurrence of the string in previously seen URLs and page titles (e.g. "cn" matches http://cnn.com, http://google.cn and a blog post on Wordpress with the title "I've stopped watching CNN"). Like Mark Pilgrim, I was originally suspicious of this feature but now cannot live without it.

This feature is called AwesomeBar in Firefox 3, OmniBox in Google Chrome and Smart Address Bar in IE 8.
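The matching itself is easy to sketch: instead of prefix-matching URLs, return every history entry whose URL or title contains the typed text anywhere. The ranking below is a naive visit-count sort; the real implementations weight recency, frequency and other signals:

```typescript
interface HistoryEntry {
  url: string;
  title: string;
  visits: number;
}

// Return history entries whose URL *or* title contains the typed text,
// not just those whose URL starts with it.
function suggest(history: HistoryEntry[], typed: string, limit = 5): HistoryEntry[] {
  const needle = typed.toLowerCase();
  return history
    .filter((e) => e.url.toLowerCase().includes(needle) || e.title.toLowerCase().includes(needle))
    .sort((a, b) => b.visits - a.visits) // crude ranking by visit count
    .slice(0, limit);
}

const history: HistoryEntry[] = [
  { url: "http://cnn.com", title: "CNN - Breaking News", visits: 42 },
  { url: "http://google.cn", title: "Google China", visits: 3 },
  { url: "http://example.wordpress.com/cnn", title: "I've stopped watching CNN", visits: 1 },
];
console.log(suggest(history, "cn").map((e) => e.url));
// -> all three entries match, ranked by visit count
```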

Restoring Previous Browsing Sessions

As seen in: Firefox 3, Chrome, IE 8 beta

I love being able to close my browser and restart my operating system safe in the knowledge that whenever I launch the browser it is restored to exactly where I left off. Both Firefox and Chrome provide an option to make this behavior the default, while the closest I've seen to a similar experience in the betas of IE 8 requires a click from the "about:Tabs" page. However, given that "about:Tabs" is my start page, this actually gives me maximum flexibility since I'm not slowed down by the browser opening up the four or five previously open tabs every time I launch it.
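Conceptually, session restore is just persisting the open-tab list on shutdown and replaying it on launch. A bare-bones sketch, with a made-up file name and tab shape rather than any browser's actual session format:

```typescript
import * as fs from "fs";

interface TabState {
  url: string;
  scrollY: number;
}

const SESSION_FILE = "session.json"; // made-up path for this sketch

// Called when the browser shuts down: snapshot every open tab.
function saveSession(openTabs: TabState[]): void {
  fs.writeFileSync(SESSION_FILE, JSON.stringify(openTabs, null, 2));
}

// Called on launch: reopen exactly what was open last time, if anything.
function restoreSession(): TabState[] {
  if (!fs.existsSync(SESSION_FILE)) return [];
  return JSON.parse(fs.readFileSync(SESSION_FILE, "utf8")) as TabState[];
}

saveSession([{ url: "http://cnn.com", scrollY: 0 }, { url: "about:Tabs", scrollY: 120 }]);
console.log(restoreSession()); // the same two tabs, ready to reopen
```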

Search Suggestions

As seen in: IE 8 beta, Chrome, Firefox 3

In the old days, the only way to get search suggestions when typing a search query in your browser's search box was if you had a vendor-specific search toolbar installed (e.g. Google Suggest for Firefox). It is becoming more commonplace for this to be native functionality of the Web browser. Google Chrome supports this if the default search provider is Google. IE 8 beta goes one better by making this feature a platform that any search engine can plug into and currently provides search suggestions for the following search providers as of this writing: Wikipedia, Amazon, Google, Live Search and Yahoo!

Updated: Firefox has also supported search suggestions using a provider model since Firefox 2 via OpenSearch and ships with suggestions enabled for Google and Yahoo! by default.
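The provider model is worth a quick illustration. OpenSearch-style suggestion endpoints take the partial query and return a JSON array whose second element is the list of completions. The endpoint template below is hypothetical, not any vendor's documented service:

```typescript
// An OpenSearch-style suggestion provider: {searchTerms} is replaced with the
// user's partial query and the endpoint returns ["query", ["completion", ...], ...].
interface SuggestionProvider {
  name: string;
  template: string; // e.g. "https://suggest.example.com/complete?q={searchTerms}"
}

async function getSuggestions(provider: SuggestionProvider, typed: string): Promise<string[]> {
  const url = provider.template.replace("{searchTerms}", encodeURIComponent(typed));
  const body = (await (await fetch(url)).json()) as [string, string[]];
  return body[1]; // the second element is the completion list
}

// Usage: the browser keeps a list of registered providers and queries whichever
// one the user has selected as they type into the search box.
const example: SuggestionProvider = {
  name: "Example Search",
  template: "https://suggest.example.com/complete?q={searchTerms}", // hypothetical endpoint
};
getSuggestions(example, "incredible hulk").then((s) => console.log(s)).catch(console.error);
```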

Offline Support

As seen in: Chrome, IE 8 beta, Firefox 3

The WHATWG has created specifications which describe secure mechanisms for Web applications to store large amounts of user data on a local system using APIs provided by modern Web browsers. Applications can store megabytes of data on the user's local machine and have it accessible via the DOM. This feature was originally described in the Web Applications 1.0 specification and is typically called DOM Storage. You can read more about it in the Mozilla documentation for DOM Storage and the IE 8 beta documentation for DOM Storage. The related APIs are currently being defined as part of HTML 5.

Chrome supports this functionality by bundling Google Gears, which is a Google-defined set of APIs for providing offline storage.
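For a flavor of what DOM Storage looks like to a page, here's a small browser-side sketch using the localStorage API that HTML 5 eventually standardized; the key name and data are made up:

```typescript
// Runs in a browser page, not on a server: persist user data locally so the
// application can get back to it even when the network is unavailable.
const DRAFT_KEY = "draft-email"; // arbitrary key for this sketch

function saveDraft(text: string): void {
  // Stored values are strings, so structured data goes through JSON.
  localStorage.setItem(DRAFT_KEY, JSON.stringify({ text, savedAt: Date.now() }));
}

function loadDraft(): { text: string; savedAt: number } | null {
  const raw = localStorage.getItem(DRAFT_KEY);
  return raw === null ? null : JSON.parse(raw);
}

saveDraft("Hello from an offline-capable web app");
const draft = loadDraft();
if (draft) {
  console.log(`restored draft saved at ${new Date(draft.savedAt).toISOString()}`);
}
```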


The most interesting thing about this list is that if you follow the pronouncements from various pundits on sites like Techmeme, you'd think all of these features were originated by Google and appeared for the first time in Chrome.

Update: An amusing take on the pundit hype about Google Chrome from Ted Dziuba in The Register article Chrome-fed Googasm bares tech pundit futility

Now Playing: Metallica - Cyanide


 

Categories: Technology | Web Development

August 24, 2008
@ 11:32 AM

Last week my blog was offline for a day or so because I was the victim of a flood of SQL injection attacks that are still hitting my Web site at the rate of multiple requests a second. I eventually managed to counter the attacks by installing URLScan 3.0 and configuring it to reject HTTP requests that resemble SQL injection attacks. I found out about URLScan in two ways: from a blog post Phil Haack wrote about Dealing with Denial of Service Attacks where it seems he's been caught up in the same wave of attacks that brought down my blog, and via an IM from Scott Hanselman who saw my tweet on Twitter about being hacked and pointed me to his blog post on the topic entitled Hacked! And I didn't like it - URLScan is Step Zero.
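For the curious, the general idea is simply to reject requests whose query strings look like SQL injection before they ever reach the application. The toy filter below is my own illustration, not URLScan's actual rule syntax or a complete rule set:

```typescript
// Toy request filter in the spirit of what URLScan does: reject anything whose
// query string smells like SQL injection. The patterns are illustrative only,
// not a production-grade rule set.
const SUSPICIOUS = [
  /(^|\W)(declare|cast|exec|xp_cmdshell)(\W|$)/i,
  /union\s+select/i,
  /;\s*drop\s+table/i,
  /'\s*or\s+'1'\s*=\s*'1/i,
];

function looksLikeSqlInjection(queryString: string): boolean {
  const decoded = decodeURIComponent(queryString);
  return SUSPICIOUS.some((pattern) => pattern.test(decoded));
}

// Example: the kind of request that was hammering the site vs. a normal one.
console.log(looksLikeSqlInjection("id=1;DECLARE @S varchar(4000)")); // true -> reject
console.log(looksLikeSqlInjection("CategoryID=24&month=8"));          // false -> let it through
```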

This reminded me that I found another useful utility, WinDirStat, via a blog post as well. In fact, when I think about it, a lot of the software I end up trying out is found via direct or indirect recommendations from people I know, typically through blog posts, tweets or some other communication via a social networking or social media service. This phenomenon can be clearly observed in closed application ecosystems like the Facebook platform, where statistics have shown that the majority of users install new applications after viewing them on the profiles of their friends.

One of the things I find most interesting about the Facebook platform and now the Apple App Store is that they are revolutionizing how we think about software distribution. Today, finding interesting new desktop/server/Web apps either happens serendipitously via word of mouth or [rarely] is the result of advertising or PR. However finding interesting new applications if you are a user of Facebook or the Apple iPhone isn't a matter of serendipity. There are well understood ways of finding interesting applications that harness social and network effects, from user ratings to simply finding out what applications your friends are using.

As a user, I sometimes wish I had an equivalent experience with desktop applications and their extensions. I've often thought it would be cool to be able to browse the software likes and dislikes of people such as Omar Shahine, Scott Hanselman and Mike Torres to see what their favorite Windows utilities and mobile applications were. As a developer of a feed reader, although Windows clearly has a lot of reach since practically everyone runs it, I'm sometimes envious of the built in viral distribution features that come with the Facebook platform or the unified software distribution experience that is the iPhone App Store. Sure beats hosting your app on SourceForge and hoping that your users are blogging about the app to spread it via word of mouth, or paying for prominence on sites like Download.com.

A lot of the pieces are already there. Microsoft has a Windows Marketplace, but for the life of me I'd have never found out about it if I didn't work at Microsoft and someone I know hadn't switched teams to start working there. There are also services provided by 3rd parties like Download.com, the Firefox Add-Ons page and Tucows. It would be interesting to see what could be stitched together if you throw in a social graph via something like Facebook Connect, an always-on well integrated desktop experience similar to the Apple App Store and one of the aforementioned sites. I suspect the results would be quite beneficial to app developers and users of Windows applications.

What do you think?

Now Playing: Metallica - The Day That Never Comes


 

Categories: Technology

Disclaimer: This post does not reflect the opinions, thoughts, strategies or future intentions of my employer. These are solely my personal opinions. If you are seeking official position statements from Microsoft, please go here.

Earlier this week, David Recordon announced the creation of the Open Web Foundation at OSCON 2008. His presentation is embedded below

From the organization's Web site you get the following outline of its mission

The Open Web Foundation is an attempt to create a home for community-driven specifications. Following the open source model similar to the Apache Software Foundation, the foundation is aimed at building a lightweight framework to help communities deal with the legal requirements necessary to create successful and widely adopted specification.

The foundation is trying to break the trend of creating separate foundations for each specification, coming out of the realization that we could come together and generalize our efforts. The details regarding membership, governance, sponsorship, and intellectual property rights will be posted for public review and feedback in the following weeks.

Before you point out that this seems to create yet another "standards" organization for Web technology, there are already canned answers to this question. Google evangelist Dion Almaer provides justification for why existing Web standards organizations do not meet their needs in his post entitled The Open Web Foundation; Apache for the other stuff where he writes 

Let’s take an example. Imagine that you came up with a great idea, something like OAuth. That great idea gains some traction and more people want to get involved. What do you do? People ask about IP policy, and governance, and suddenly you see yourself on the path of creating a new MyApiFoundation.

Wait a minute! There are plenty of standards groups and other organizations out there, surely you don’t have to create MyApiFoundation?

Well, there is the W3C and OASIS, which are pay to play orgs. They have their place, but MyApi may not fit in there. The WHATWG has come up with fantastic work, but the punting on IP is an issue too.

At face value, it's hard to argue with this logic. The W3C charges fees using a weird progressive taxation model where a company pays anything from a few hundred to several thousand dollars depending on how the W3C assesses their net worth. OASIS similarly charges from $1,000 to $50,000 depending on how much influence the member company wants to have in the organization. After that it seems there are a bunch of one-off organizations like the OpenID Foundation and the WHATWG that are dedicated to a specific technology.

Or so the spin from the Open Web Foundation would have you believe.

In truth there is already an organization dedicated to producing "Open" Web technologies that has a well thought out policy on membership, governance, sponsorship and intellectual property rights and that isn't pay to play. This is not a new organization; it actually happens to be older than David Recordon, who unveiled the Open Web Foundation.

The name of this organization is the Internet Engineering Task Force (IETF). If you are reading this blog post then you are using technologies for the "Open Web" created by the IETF. You may be reading my post in a Web browser in which case the content was transferred to you over HTTP (RFC 2616) and if you're reading it in an RSS reader then I should add that you're also directly consuming my Atom feed (RFC 4287). Some of you are reading this post because someone sent you an email which is another example of an IETF protocol at work, SMTP (RFC 2821).

The IETF policy on membership doesn't get more straightforward: join a mailing list. I am listed as a member of the Atom working group in RFC 4287 because I was a participant in the atom-syntax mailing list. The organization has a well thought out and detailed policy on intellectual property rights as it relates to IETF specifications, which is detailed in RFC 3979: Intellectual Property Rights in IETF Technology and slightly updated in RFC 4879: Clarification of the Third Party Disclosure Procedure in RFC 3979.

I can understand that a bunch of kids fresh out of college are ignorant of the IETF and believe they have to reinvent the wheel to Save the Open Web, but I am surprised that Google, which has had several of its employees participate in the IETF processes which created RFC 4287, RFC 4959, RFC 5023 and RFC 5034, would join in this behavior. Why would Google decide to sponsor a separate standards organization that competes with the IETF yet has less inclusive processes, no clear idea of how corporate sponsorship will work and a yet to be determined IPR policy?

That's just fucking weird.

Now Playing: Boyz N Da Hood - Dem Boys (remix) (feat T.I. & The Game)


 

Categories: Technology

For the past few years, the technology press has been eulogizing desktop and server-based software while proclaiming that the era of Software as a Service (SaaS) is now upon us. According to the lessons of the Innovator's Dilemma, the cheaper and more flexible SaaS solutions will eventually replace traditional installed software and the current crop of software vendors will turn out to be dinosaurs in a world that belongs to the warm-blooded mammals who have conquered cloud-based services.

So it seems the answer is obvious, software vendors should rush to provide Web-based services and extricate themselves from their "legacy" shrinkwrapped software business before it is too late. What could possibly go wrong with this plan? 

Sarah Lacy wrote an informative article for Business Week about the problems facing software vendors who have rushed into the world of SaaS. The Business Week article is entitled On-Demand Computing: A Brutal Slog and contains the following excerpt

On-demand represented a welcome break from the traditional way of doing things in the 1990s, when swaggering, elephant hunter-style salesmen would drive up in their gleaming BMWs to close massive orders in the waning days of the quarter. It was a time when representatives of Oracle (ORCL), Siebel, Sybase (SY), PeopleSoft, BEA Systems, or SAP (SAP) would extol the latest enterprise software revolution, be it for management of inventory, supply chain, customer relationships, or some other area of business. Then there were the billions of dollars spent on consultants to make it all work together—you couldn't just rip everything out and start over if it didn't. There was too much invested already, and chances are the alternatives weren't much better.

Funny thing about the Web, though. It's just as good at displacing revenue as it is in generating sources of it. Just ask the music industry or, ahem, print media. Think Robin Hood, taking riches from the elite and distributing them to everyone else, including the customers who get to keep more of their money and the upstarts that can more easily build competing alternatives.

But are these upstarts viable? On-demand software has turned out to be a brutal slog. Software sold "as a service" over the Web doesn't sell itself, even when it's cheaper and actually works. Each sale closed by these new Web-based software companies has a much smaller price tag. And vendors are continually tweaking their software, fixing bugs, and pushing out incremental improvements. Great news for the user, but the software makers miss out on the once-lucrative massive upgrade every few years and seemingly endless maintenance fees for supporting old versions of the software.

Nowhere was this more clear than on Oracle's most recent earnings call (BusinessWeek.com, 6/26/08). Why isn't Oracle a bigger player in on-demand software? It doesn't want to be, Ellison told the analysts and investors. "We've been in this business 10 years, and we've only now turned a profit," he said. "The last thing we want to do is have a very large business that's not profitable and drags our margins down." No, Ellison would rather enjoy the bounty of an acquisition spree that handed Oracle a bevy of software companies, hordes of customers, and associated maintenance fees that trickle straight to the bottom line.

SAP isn't having much more luck with Business by Design, its foray into the on-demand world, I'm told. SAP said for years it would never get into the on-demand game. Then when it sensed a potential threat from NetSuite, SAP decided to embrace on-demand. Results have been less than stellar so far. "SAP thought customers would go to a Web site, configure it themselves, and found the first hundred or so implementations required a lot of time and a lot of tremendous costs," Richardson says. "Small businesses are calling for support, calling SAP because they don't have IT departments. SAP is spending a lot of resources to configure and troubleshoot the problem."

In some ways, SaaS vendors have been misled by the consumer Web and have failed to realize that they still need to spend money on sales and support when servicing business customers. Just because Google doesn't advertise its search features and Yahoo! Mail doesn't seem to have a huge support staff that hand holds customers as they use the product doesn't mean that SaaS vendors can expect to cut their sales and support calls. The dynamics of running a free, advertising-based service aimed at end users is completely different from running a service where you expect to charge businesses tens of thousands to hundreds of thousands of dollars to use your product.

In traditional business software development you have three major cycles with their own attendant costs; you have to write the software, you have to market the software and then you have to support the software. Becoming a SaaS vendor does not eliminate any of these costs. Instead it adds new costs and complexities such as managing data centers and worrying about hackers. In addition, thanks to free advertising-based consumer services and the fact that companies like Google have subsidized their SaaS offerings using their monopoly profits in other areas, business customers expect Web-based software to be cheaper than its desktop or server-based alternatives. Talk about being stuck between a rock and a hard place as a vendor.

Finally, software vendors that have existing ecosystems of partners that benefit from supporting and enhancing their shrinkwrapped products also have to worry about where these partners fit in a SaaS world. For an example of the kinds of problems these vendors now face, below is an excerpt from a rant by Vladimer Mazek, a system administrator at ExchangeDefender, entitled Houston… we have a problem which he wrote after attending one of Microsoft's partner conferences

Lack of Partner Direction: By far the biggest disappointment of the show. All of Microsoft’s executives failed to clearly communicate the partnership benefits. That is why partners pack the keynotes, to find a way to partner up with Microsoft. If you want to gloat about how fabulous you are and talk about exciting commission schedules as a brand recommender and a sales agent you might want to go work for Mary Kay. This is the biggest quagmire for Microsoft – it’s competitors are more agile because they do not have to work with partners to go to market. Infrastructure solutions are easy enough to offer and both Google and Apple and Amazon are beating Microsoft to the market, with far simpler and less convoluted solutions. How can Microsoft compete with its partners in a solution ecosystem that doesn’t require partners to begin with?

This is another example of the kind of problems established software vendors will have to solve as they try to ride the Software as a Service wave instead of being flattened by it. Truly successful SaaS vendors will eventually have to deliver platforms that can sustain a healthy partner ecosystem to succeed in the long term. We have seen this in the consumer space with the Facebook platform and in the enterprise space with SalesForce.com's AppExchange. Here is one area where the upstarts that don't have a preexisting shrinkwrap software business can turn a disadvantage (lack of an established partner ecosystem) into an advantage, since it is easier to start from scratch than to retool.

The bottom line is that creating a Web-based version of a popular desktop or server-based product is just part of the battle if you plan to play in the enterprise space. You will have to deal with the sales and support that go with selling to businesses as well as all the other headaches of shipping "cloud based services" which don't exist in the shrinkwrap software world. After you get that figured out, you will want to consider how you can leverage various ISVs and startups to enhance the stickiness of your service and turn it into a platform before one of your competitors does.

I suspect we still have a few years before any of the above happens. In the meantime we will see lots more software companies complaining about the paradox of embracing the Web when it clearly cannibalizes their other revenue streams and is less lucrative than what they've been accustomed to seeing. Interesting times indeed.

Now Playing: Flobots - Handlebars


 

Last week TechCrunch UK wrote about a search startup that utilizes AI/Semantic Web techniques named True Knowledge. The post entitled VCs price True Knowledge at £20m pre-money. Is this the UK’s Powerset?  stated

The chatter I’m hearing is that True Knowledge is being talked about in hushed tones, as if it might be the Powerset of the UK. To put that in context, Google has tried to buy the Silicon Valley search startup several times, and they have only launched a showcase product, not even a real one. However, although True Knowledge and Powerset are similar, they are different in significant ways, more of which later.
...
Currently in private beta, True Knowledge says their product is capable of intelligently answering - in plain English - questions posed on any topic. Ask it if Ben Affleck is married and it will come back with "Yes" rather than lots of web pages which may or may not have the answer (don’t ask me!).
...
Here’s why the difference matters. True Knowledge can infer answers that the system hasn’t seen. Inferences are created by combining different bits of data together. So for instance, without knowing the answer it can work out how tall the Eiffel Tower is by inferring that it is shorter that the Empire State Building but higher than St Pauls Cathedral.
...
AI software developer and entrepreneur William Tunstall-Pedoe is the founder of True Knowledge. He previously developed a technology that can solve a commercially published crossword clues but also explain how the clues work in plain English. See the connection?

The scenarios described in the TechCrunch write up should sound familiar to anyone who has spent any time around fans of the Semantic Web. Creating intelligent agents that can interrogate structured data on the Web and infer new knowledge has turned out to be easier said than done, because for the most part content on the Web isn't organized according to the structure of the data. This is primarily due to the fact that HTML is a presentational language. Of course, even if information on the Web were structured data (i.e. idiomatic XML formats) we would still need to build machinery to translate between all of these XML formats.

Finally, in the few areas on the Web where structured data in XML formats is commonplace such as Atom/RSS feeds for blog content, not a lot has been done with this data to fulfill the promise of the Semantic Web.

So if the Semantic Web is such an infeasible utopia, why are more and more search startups using that as the angle from which they will attack Google's dominance of Web search? The answer can be found in Bill Slawski's post from a year ago entitled Finding Customers Through Anti-Commercial Queries where he wrote

Most Queries are Noncommercial

The first step might be to recognize that most queries conducted by people at search engines aren't aimed at buying something. A paper from the WWW 2007 held this spring in Banff, Alberta, Canada, Determining the User Intent of Web Search Engine Queries, provided a breakdown of the types of queries that they were able to classify.

Their research uncovered the following numbers: "80% of Web queries are informational in nature, with about 10% each being navigational and transactional." The research points to the vast majority of searches being conducted for information gathering purposes. One of the indications of "information" queries that they looked for were searches which include terms such as: “ways to,” “how to,” “what is.”

Although the bulk of the revenue search engines make is from people performing commercial queries such as searching for "incredible hulk merchandise", "car insurance quotes" or "ipod prices", this is actually a tiny proportion of the kinds of queries people want answered by search engines. The majority of searches are about the five Ws (and one H), namely "who", "what", "where", "when", "why" and "how". Such queries don't really need a list of Web pages as results; they simply require an answer. The search engine that can figure out how to always answer user queries directly on the page without making the user click on half a dozen pages to figure out the answer will definitely have moved the needle when it comes to the Web search user experience.
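For a sense of how crude such intent classification can be and still be useful, here's a toy sketch based on keyword cues. The cue lists and fallback rules are my own illustrative guesses, not the methodology of the cited paper:

```typescript
type QueryIntent = "informational" | "navigational" | "transactional";

// Cue phrases are illustrative; the cited study used a richer feature set.
const INFORMATIONAL_CUES = ["how to", "ways to", "what is", "who", "why", "when", "where"];
const TRANSACTIONAL_CUES = ["buy", "price", "prices", "quote", "quotes", "merchandise", "download"];

function classifyQuery(query: string): QueryIntent {
  const q = query.toLowerCase();
  if (TRANSACTIONAL_CUES.some((cue) => q.includes(cue))) return "transactional";
  if (INFORMATIONAL_CUES.some((cue) => q.includes(cue))) return "informational";
  // A single token that looks like a site or brand name is treated as navigational.
  if (/^[a-z0-9.\-]+$/.test(q.trim())) return "navigational";
  return "informational"; // the catch-all, matching the ~80% finding above
}

console.log(classifyQuery("ipod prices"));          // transactional
console.log(classifyQuery("how to tie a bow tie")); // informational
console.log(classifyQuery("cnn.com"));              // navigational
```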

This explains why scenarios that one usually associates with AI and Semantic Web evangelists are now being touted by the new generation of "Google-killers". The question is whether knowledge inference techniques will prove to be more effective than traditional search engine techniques when it comes to providing the best search results especially since a lot of the traditional search engines are learning new tricks.

Now Playing: Bob Marley - Waiting In Vain


 

Categories: Technology

Disclaimer: This post does not reflect the opinions, thoughts, strategies or future intentions of my employer. These are solely my personal opinions. If you are seeking official position statements from Microsoft, please go here.

About a month ago Joel Spolsky wrote a rant about Microsoft's Live Mesh project which contained some interesting criticism of the project and showed that Joel has a personal beef with the Googles & Microsofts of the world for making it hard for him to hire talented people to work for his company. Unsurprisingly, lots of responses focused on the latter since it was an interesting lapse in judgement for Joel to inject his personal frustrations into what was meant to be a technical critique of a software project. However there were some ideas worthy of discussion in Joel's rant that I've been pondering over the past month. The relevant parts of Joel's article are excerpted below

It was seven years ago today when everybody was getting excited about Microsoft's bombastic announcement of Hailstorm, promising that "Hailstorm makes the technology in your life work together on your behalf and under your control."

What was it, really? The idea that the future operating system was on the net, on Microsoft's cloud, and you would log onto everything with Windows Passport and all your stuff would be up there. It turns out: nobody needed this place for all their stuff. And nobody trusted Microsoft with all their stuff. And Hailstorm went away.
...
What's Microsoft Live Mesh?

Hmm, let's see.

"Imagine all your devices—PCs, and soon Macs and mobile phones—working together to give you anywhere access to the information you care about."

Wait a minute. Something smells fishy here. Isn't that exactly what Hailstorm was supposed to be? I smell an architecture astronaut.

And what is this Windows Live Mesh?

It's a way to synchronize files.

Jeez, we've had that forever. When did the first sync web sites start coming out? 1999? There were a million versions. xdrive, mydrive, idrive, youdrive, wealldrive for ice cream. Nobody cared then and nobody cares now, because synchronizing files is just not a killer application. I'm sorry. It seems like it should be. But it's not.

But Windows Live Mesh is not just a way to synchronize files. That's just the sample app. It's a whole goddamned architecture, with an API and developer tools and in insane diagram showing all the nifty layers of acronyms, and it seems like the chief astronauts at Microsoft literally expect this to be their gigantic platform in the sky which will take over when Windows becomes irrelevant on the desktop. And synchronizing files is supposed to be, like, the equivalent of Microsoft Write on Windows 1.0.

As I read the above rant I wondered what world Joel has been living in for the past half decade. Hailstorm has actually proven to have been a very visionary and accurate picture of how the world ended up. A lot of the information that used to sit on my desktop in 2001 is now in the cloud. My address book is on Facebook, my photos are on Windows Live Spaces and Flickr, my email is in Hotmail and Yahoo! Mail, while a lot of my documents are now on SkyDrive and Google Docs. Almost all of these services provide XML-based APIs for accessing my data and quite frankly I find it hard to distinguish the ideas behind the unified set of user-centric cloud APIs that was .NET My Services from Google GData. A consistent set of APIs for accessing a user's contact lists, calendar, documents, inbox and profile all stored on the servers of a single company? Sounds like we're in a time warp, doesn't it? Even more interesting is that scenarios that sounded outlandish at the time, such as customers using a delegated authentication model to grant applications and Web sites temporary or permanent access to their data stored in the cloud, are now commonplace. Today we have OAuth, Yahoo! BBAuth, Google AuthSub, Windows Live DelAuth and even just the plain old "please give us your email password".

In hindsight, the major problem with Hailstorm seems to have been that it was a few years ahead of its time and people didn't trust Microsoft. Funnily enough, a lot of the key people who were at Microsoft during that era, like Vic Gundotra and Mark Lucovsky, are now at Google, a company and a brand which the Internet community trusts a lot more than Microsoft, working on Web API strategy.

All of this is a long winded way of saying I think Joel's comparison of Live Mesh with Hailstorm is quite apt but just not the way Joel meant it. I believe that like Hailstorm, Live Mesh is a visionary project that in many ways tries to tackle problems that people will have or don't actually realize they have. And just like with Hailstorm where things get muddy is separating the vision from the first implementation or "platform experience" of that vision.

I completely agree with Joel that synchronizing files is not a killer application. It just isn't sexy and never will be. The notion of having a ring or mesh of devices where all my files synchronize across each device in my home or office is cool to contemplate from a technical perspective. However it's not something I find exciting or feel that I need even though I'm a Microsoft geek with a Windows Mobile phone, an XBox 360, two Vista laptops and a Windows server in my home. It seems I'm not the only one that feels that way according to a post by a member of the Live Mesh team entitled Behind Live Mesh: What is MOE? which states

Software + Services

When you were first introduced to Live Mesh, you probably played with the Live Desktop.  It’s pretty snazzy.  Maybe you even uploaded a few files too.  Hey, it’s a cool service!  You can store stuff in a cloud somewhere and access it anywhere using a webpage.  Great!

As I look at the statistics on the service though, I notice that a significant portion of our users have stopped here.  This pains me, as there’s a whole lot more you can do with Live Mesh.  Didn’t you hear all the hoopla about Software + Services?  Ever wonder, “Where’s the software?”

You might have noticed that on the device ring there’s a big orange button with a white ‘+’ sign.  The magic happens when you click that big orange button and opt to “add a device” to your mesh. 

So what excites me as a user and a developer about Live Mesh? I believe seamless synchronization of data as a platform feature is really interesting. Today I use OutSync to add people's Facebook photos to their contact information on my Windows Mobile phone. I've written my own RSS reader which synchronizes the state of my RSS subscriptions with Google Reader. Doug Purdy wrote FFSync so he can share his photos, music taste and other data on his Mac with his friends on FriendFeed. It may soon be possible to synchronize my social graph across multiple sites via Google Friend Connect. Google is working on using Google Gears to give me offline access to my documents in Google Docs by synchronizing the state between my desktop and their cloud. Earlier this week Apple announced mobile.me which enables users to synchronize their contacts, emails, calendar and photos across the Web and all their devices.

Everywhere I look data synchronization is becoming more and more important and also more commonplace. I expect this trend to continue over time given the inevitable march of the Web. Being able to synchronize my data and my actions from my desktop to the Web or across Web sites I frequent is extremely enabling. Thus having a consistent set of standards-based protocols for enabling these scenarios, as well as libraries for the key platforms that make this approachable to developers, will be very beneficial to users and to the growth of the Web.
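The simplest core of that kind of synchronization is a last-writer-wins merge keyed on item id and timestamp, sketched below. Real sync platforms do considerably more (conflict detection, tombstones for deletes, partial sync and so on); this only shows the basic shape:

```typescript
interface SyncItem {
  id: string;         // stable identifier shared across devices
  updatedAt: number;  // last-modified timestamp (ms since epoch)
  data: string;
}

// Merge two replicas: for each id, keep whichever copy was written last.
function lastWriterWins(a: SyncItem[], b: SyncItem[]): SyncItem[] {
  const merged = new Map<string, SyncItem>();
  for (const item of [...a, ...b]) {
    const existing = merged.get(item.id);
    if (!existing || item.updatedAt > existing.updatedAt) {
      merged.set(item.id, item);
    }
  }
  return [...merged.values()];
}

// The phone edited the feed list more recently than the desktop did.
const desktop = [{ id: "feed:42", updatedAt: 1000, data: "unread: 7" }];
const phone   = [{ id: "feed:42", updatedAt: 2000, data: "unread: 0" }];
console.log(lastWriterWins(desktop, phone)); // keeps the phone's copy
```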

At the rate things are going, I personally believe that this vision of the Web will come to pass with or without Microsoft in the same way that Hailstorm's vision of the Web actually came to pass even though Microsoft canned the project. Whether Microsoft is an observer or a participant in this new world order depends on whether Live Mesh as a product and as a vision fully embraces the Web and collaboration with Web companies (as Google has ably done with GData/OpenSocial/FriendConnect/Gears/etc) or not. Only time will tell what happens in the end.

Now Playing: Usher - In This Club (remix) (feat. Beyonce & Lil Wayne)


 

Categories: Technology

I've been having problems with hard drive space for years. For some reason, I couldn't get over the feeling that I had less available space on my hard drive than I could account for. I'd run programs like FolderSizes and, after doing some back-of-the-envelope calculations, it would seem like I should have gigabytes more free space than what was actually listed as available according to my operating system.

Recently I stumbled on a blog post by Darien Nagle which claimed to answer the question Where's my hard disk space gone? with the recommendation that his readers should try WinDirStat. Seeing as I had nothing to lose I gave it a shot and I definitely came away satisfied. After a quick install, it didn't take long for the application to track down where all those gigabytes of storage I couldn't account for had gone. It seems there was a hidden folder named C:\RECYCLER that was taking up 4 gigabytes of space.

I thought that was kind of weird so I looked up the folder name and found Microsoft KB 229041 - Files Are Not Deleted From Recycler Folder which listed the following symptoms

SYMPTOMS
When you empty the Recycle Bin in Windows, the files may not be deleted from your hard disk.

NOTE: You cannot view these files using Windows Explorer, My Computer, or the Recycle Bin.

I didn't even have to go through the complicated procedure in the KB article to delete the files, I just deleted them directly from the WinDirStat interface.

My only theory as to how this happened is that some data got orphaned when I upgraded my desktop from Windows XP to Windows 2003, since the user accounts that created the files were lost. I guess simply deleting the files from Windows Explorer as I did a few years ago wasn't enough.

Good thing I finally found a solution. I definitely recommend WinDirStat, the visualizations aren't half bad either.
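For the curious, the core of what tools like WinDirStat and FolderSizes do is a recursive walk that sums file sizes per directory. Here's a bare-bones sketch, minus the treemap visualization and with unreadable folders simply skipped:

```typescript
import * as fs from "fs";
import * as path from "path";

// Recursively total the bytes under a directory, recording a subtotal per folder.
function directorySizes(root: string, totals = new Map<string, number>()): number {
  let size = 0;
  for (const entry of fs.readdirSync(root, { withFileTypes: true })) {
    const full = path.join(root, entry.name);
    try {
      size += entry.isDirectory() ? directorySizes(full, totals) : fs.statSync(full).size;
    } catch {
      // Skip files/folders we aren't allowed to read instead of aborting the scan.
    }
  }
  totals.set(root, size);
  return size;
}

// Print the biggest offenders first, e.g. a forgotten C:\RECYCLER eating gigabytes.
const totals = new Map<string, number>();
directorySizes(process.argv[2] ?? ".", totals);
[...totals.entries()]
  .sort((a, b) => b[1] - a[1])
  .slice(0, 10)
  .forEach(([dir, bytes]) => console.log(`${(bytes / 2 ** 30).toFixed(2)} GB  ${dir}`));
```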

Now Playing: Eminem - Never Enough (feat. 50 Cent & Nate Dogg)


 

Categories: Technology

If you work in the technology industry it pays to be familiar with the ideas from Geoffrey Moore's insightful book Crossing the Chasm. In the book he takes a look at the classic marketing bell curve that segments customers into Early Adopters, Pragmatists, Conservatives and Laggards, then points out that there is a large chasm to cross when it comes to becoming popular beyond an initial set of early adopters. There is a good review of his ideas in Eric Sink's blog post entitled Act Your Age which is excerpted below

The people in your market segment are divided into four groups:

Early Adopters are risk takers who actually like to try new things.

Pragmatists might be willing to use new technology, if it's the only way to get their problem solved.

Conservatives dislike new technology and try to avoid it.

Laggards pride themselves on the fact that they are the last to try anything new.

This drawing reflects the fact that there is no smooth or logical transition between the Early Adopters and the Pragmatists.  In between the Early Adopters and the Pragmatists there is a chasm.  To successfully sell your product to the Pragmatists, you must "cross the chasm". 

The knowledge that the needs of early adopters and those of the majority of your potential user base differ significantly is extremely important when building and marketing any technology product. A lot of companies have ended up either building the wrong product or focusing their product too narrowly because they listened too intently to their initial customer base without realizing that they were talking to early adopters.

The fact is that early adopters have different problems and needs from regular users. This is especially true when you compare the demographics of the Silicon Valley early adopter crowd, which "Web 2.0" startups often try to court, with the typical users of social software on the Web. In the few years I've been working on building Web applications, I've seen a number of technology trends and products that have been heralded as the next big thing by technology pundits which never broke into the mainstream because they don't solve the problems of regular Internet users. Here are some examples

  • Blog Search: A few years ago, blog search engines were all the rage. You had people like Mark Cuban talking up IceRocket and Robert Scoble haranguing Web search companies to build dedicated blog search engines. Since then the products in that space have either given up the ghost (e.g. PubSub, Feedster), turned out to be irrelevant (e.g. Technorati, IceRocket) or were sidelined (e.g. Google Blog Search, Yahoo! Blog Search). The problem with this product category is that except for journalists, marketers and ego surfing A-list bloggers there aren't many people who need a specialized feature set around searching blogs.

  • Social bookmarking: Although del.icio.us popularized a number of "Web 2.0" trends such as tagging, REST APIs and adding social features to a previously individual task, it has never really taken off as a mainstream product. According to the former VC behind the service it seems to have peaked at 2 million unique visitors last year and is now seeing about half that number of unique users. Compare that to Yahoo! bookmarks which was seeing 20 million active users a year and a half ago.

  • RSS Readers: I've lost track of all of the "this is the year RSS goes mainstream" articles I've read over the past few years. Although RSS has turned out to be a key technology which powers a lot of interesting functionality behind the scenes (e.g. podcasting), actually subscribing to and reading news feeds in an RSS reader has not become a mainstream activity of Web users. When you think about it, it is kind of obvious. The problem an RSS reader solves is "I read so many blogs and news sites on a daily basis, I need a tool to help me keep them all straight". How many people who aren't enthusiastic early adopters (i) have this problem and (ii) think they need a tool to deal with it?

These are just the first three that came to mind. I'm sure readers can come up with more examples of their own. This isn't to say that all hyped "Web 2.0" sites haven't lived up to their promise. Flickr is an example of an early-adopter-hyped site sprinkled with "Web 2.0" goodness that has become a major part of the daily lives of tens of millions of people across the Web.

When you look at the list of top 50 sites in the U.S. by unique visitors it is interesting to note what common theme unites the recent "Web 2.0" entrants into that list. There are the social networking sites like MySpace and Facebook which harness the natural need of young people to express their individuality yet be part of social cliques. Then there are the sites which provide lots of flexible options that enable people to share their media with their friends, family or the general public such as Flickr and YouTube. Both sites have also figured out how to harness the work of the few to entertain and benefit the many, as have Wikipedia and Digg. Then there are sites like Fling and AdultFriendFinder which seem to now get more traffic than the personals sites you see advertised on TV for obvious reasons.

However the one overriding theme is that all of these recent entrants solve problems that everyone [or at least a large section of the populace] has. Everyone likes to communicate with their social circle. Everyone likes watching funny videos and looking at couple pics. Everyone wants to find information about topics they're interested in or find out what's going on around them. Everybody wants to get laid.

If you are a Web 2.0 company in today's Web you really need to ask yourselves, "Are we solving a problem that everybody has or are we building a product for Robert Scoble?"

Now Playing: Three 6 Mafia - I'd Rather (feat. DJ Unk)


 

Categories: Social Software | Technology

December 17, 2007
@ 05:52 PM

Recently my Cingular 3125 crapped out and I picked up an AT&T Tilt (aka HTC Kaiser) which I've already developed a love<->hate relationship with. I'd originally considered getting an iPhone but quickly gave up that dream when I realized it didn't integrate well with Microsoft Exchange. When you attend 3 - 5 meetings a day, having a phone that can tell you the room number, topic and attendees of your next meeting as you hop, skip and jump from meeting room to meeting room is extremely useful.

There's a lot I like about the phone. The QWERTY keyboard and wide screen make writing emails and browsing the Web a much better experience than on my previous phone. In addition, being able to flip out the keyboard & tilt the screen is a spiffy enough effect that it gets ooohs and aaahs from onlookers the first time you do it. Another nice touch is that there are shortcut keys for Internet Explorer, your message center and the Start menu. When I was trying out the phone, the AT&T sales person said I'd become hooked on using those buttons and he was right; without them using the phone would be a lot more cumbersome.

There are some features specific to Windows Mobile 6 that I love. The first is that I can use any MP3, WAV or AAC file as a ringtone. After spending $2.50 for a 20 second snippet of a song I already owned and not being able to re-download the song after switching phones, I decided I wanted no part of this hustle from the cell phone companies. All I needed to do was download MP3 Cutter and I have as many ringtones as I have songs on my iPod. They've also fixed the bug from Windows Mobile 5 where if your next appointment shown on the home screen is for a day other than the current day, clicking on it takes you to today's calendar instead of the calendar for that day. My phone also came with Office Mobile which means I can actually read all those Word, Excel and Powerpoint docs I get in email all the time.

So what do I dislike about this phone? The battery life is ridiculously short. My phone is consistently out of battery life at the end of the work day. I've never had this problem with the half dozen phones I've had over the past decade. What's even more annoying is that unlike every other phone I've ever seen, there is no battery life indicator on the main screen. Instead I have to navigate to Start menu->Settings->System->Power if I want to check my battery life. On the other hand, there are redundant indicators showing whether I am on the EDGE or 3G networks where the battery indicator used to be in Windows Mobile 5. Another problem is that voice dialing is cumbersome and often downright frustrating. There is a great rant about this in the post What's Wrong With Windows Mobile and How WM7 and WM8 Are Going to Fix It on Gizmodo which is excerpted below

the day-to-day usage of Windows Mobile isn't what you'd call "friendly," either. In fact, it'd probably punch you in the face if you even made eye contact. Take dialing, for instance. How can the main purpose of a phone—calling someone—be so hard to do?

...

If you're using a Windows Mobile Professional device, you have a few options, none of which are good:

• You can pull out the stylus to tap in the digits. This requires two hands.

• You can try and use your fingertip to call, which doesn't normally work, so you'll use your fingernail, which does work but, as it results in many misdialed numbers, takes forever.

• You can slide out the keyboard and find the dialpad buried among the QWERTY keys and dial, which requires two hands and intense concentration.

• You can try and bring up the contact list, which takes a long-ass time to scroll through, or you can slide out the keyboard again and search by name. Again, two hands.

• Voice Command has been an option for years, but then again, it kinda works, but it doesn't work well.

• Probably the best way to go is to program your most important numbers into speed dial, as you'll be able to actually talk to the correct person within, say, three button presses.

Compare that to the iPhone, which has just a touchscreen, but gets you to the keypad, your favorites, recent calls or your contact list, all within two key presses of the home screen.

It's amazing to me that there are five or six different options if you want to dial and call a number, yet they are all a usability nightmare. One pet peeve that is missing from the Gizmodo rant is that when a call is connected, the keypad is hidden. This means that if you are calling any sort of customer service system (e.g. AT&T wireless, Microsoft's front desk, your cable company, etc) you need to first tap "Keypad" with your fingernail and then deal with the cumbersome dialpad.

So far, I like the phone more than I dislike it.  **** out of *****.

I'd love to see the next version of the iPhone ship with the ability to talk to Exchange and support for 3G, and see whether the next generation of Windows Mobile devices stack up.

Now Playing: Rihanna - Hate That I Love You (feat. Ne-Yo)


 

Categories: Technology

DISCLAIMER: This post does not reflect the opinions, thoughts, strategies or future intentions of my employer. These are solely my personal opinions. If you are seeking official position statements from Microsoft, please go here.

Last week, Microsoft announced Office Live Workspace which is billed as an online extension to Microsoft Office. Unsurprisingly, the word from the pundits has been uniformly negative especially when comparing it to Google Docs.

An example of the typical pundit reaction to this announcement is Ken Fisher's post on Ars Technica entitled Office Live Workspace revealed: a free 250MB "SharePoint Lite" for everyone where he writes

Office Live Workspace is not an online office suite. The aim of OLW is simple: give web-connected users a no-cost place to store, share, and collaborate on Office documents. To that end, the company will give registered users 250 MB of storage space, which can be used to store documents "in the cloud" or even "host" them for comments by other users equipped with just a web browser (you will be able to manage the access rights of other users). However, and this is important: you cannot create new Office documents with this feature nor can you edit documents beyond adding comments without having a copy of Microsoft Office installed locally.

As you can see, this is not a "Google Docs killer" or even an "answer" to Google Docs. This is not an online office suite, it's "software plus service." Microsoft's move here protects the company's traditional Office business, in that it's really positioned as a value-add to Office, rather than an Office alternative. Microsoft has seen success with its business-oriented SharePoint offering, and Microsoft is taking a kind of "SharePoint Lite" approach with OLW.

The focus of pundits on "an online office suite" and a "Google Docs Killer" completely misses the point when it comes to satisfying the needs of the end user. As a person who is a fan of the Google Docs approach, there are three things I like that it brings to the table:

  • it is free for consumers and people with basic needs
  • it enables "anywhere access" to your documents
  • it requires zero install to utilize

The fact that it is Web-based and uses AJAX instead of Win32 code running on my desktop is actually a negative when it comes to responsiveness and feature set. However the functionality of Google Docs hits a sweet spot for a certain class of users authoring certain classes of documents. By the way, this is a textbook example of low-end disruption from Clay Christensen's book "The Innovator's Dilemma". Taking a lesson from another much-hyped business book, Moneyball, disruption often happens when the metrics used to judge successful products don't actually reflect the current realities of the market.

The reality of today's market is that a lot of computer users access their data from multiple computers and perhaps their mobile device during the course of a normal day.  The paradigm of disconnected desktop software is an outdated relic that is dying away. Another reality of today's market is that end users have gotten used to being able to access and utilize world class software for free and without having to install anything thanks to the Googles, Facebooks and Flickrs of the world. When you put both realities together, you get the list of three bullet points above which are the key benefits that Google Docs brings to the table.

The question is whether there is anything Microsoft can do to stem what seems like inevitable disruption by Google Docs and if so, does Office Live Workspace improve the company's chances in any way? I believe the answer to both questions is Yes. If you are already a user of Microsoft's office productivity software then Office Live Workspace gives you a free way to get "anywhere access" to your documents without having to install anything even if the computer does not have Microsoft Office installed.

As I mentioned earlier, a number of pundits have been fairly dismissive of this and declared a no-contest victory for the Google Docs approach. Steven Burke has an article entitled Five Reasons Google Docs Beats Office Live Workspace where he lists a number of ways Google Docs compares favorably to Office Live Workspace. Of his list of five reasons, only one seems like a genuine road block that will hurt adoption by end users. Below are his reasons in bold with my comments underneath.

Steven Burke: Office Live Workspace Does Not Allow You To Create And Edit Documents Within A Web Page. Google Docs Does

This is a lame restriction. I assume this is to ensure that the primary beneficiaries of this offering have purchased Microsoft Office (thus it is a software + services play instead of a software as a service play). I can understand the business reasons why this exists, but it is often a good business strategy to cannibalize yourself before competitors do it, especially when it is clear that such cannibalization is inevitable. The fact that I am tethered to Office in creating new documents is lame. I hope competitive pressure makes this "feature" go away.

Steven Burke: Microsoft Office Live Workspace Has A 250 Mbyte 1,000 Average Office Documents Limitation. Google Docs Does Not.

I don't worry too much about space limitations, especially since this is in beta. If Microsoft can figure out how to give people 5GB of space for email in Hotmail and 1GB of file storage space in SkyDrive all for FREE, I'm sure we can figure out how to give more than 250MB of storage to people who've likely spent hundreds of dollars buying our desktop software.

Steven Burke: Microsoft's Office Live WorkSpace Is VaporWare. Google Docs is Real.

The vaporware allegation only makes sense if you think (a) it is never going to ship or (b) you need a solution today. If not, it is a product announcement like any other in the software industry meant to give people a heads up on what's coming down the line. If industry darlings like Apple and Google can get away with it, why single out Microsoft?

Steven Burke: You're Better Off Trusting Google Than Microsoft When It Comes To Web 2.0 Security Issues.

I don't know about you, but over the past year I've heard about several  security flaws in Google's AJAX applications including Cross Site Request Forgery issues in Gmail, leaking people's email addresses via the chat feature of Google presentations, cross site scripting issues that allowed people to modify your documents in Google Docs & Spreadsheets, and lots more. On the flip side, I haven't heard about even half as many security issues in Microsoft's family of Web applications whether they are Office Live, MSN or Windows Live branded.

In fact, one could argue that trusting Google to keep your data secure in their AJAX applications is like trusting a degenerate gambler with your life savings. So far the company has proven to be inept at securing their online services which is problematic if they are pitching to store people's vital business documents.

Steven Burke: Office Live Workspace Is Optimized For Microsoft Office Word, Excel and PowerPoint Data. Google Is Optimized For Web 2.0.

I guess this means Google's service is more buzzword compliant than Microsoft's. So what? At the end of the day, the most important thing is providing value to your customers, not repping every buzzword that spews forth from the likes of Mike Arrington and Tim O'Reilly.

Tomoyasu Hotei - Battle without Honor or Humanity


 

Categories: Technology

Yesterday morning, I tossed out a hastily written post entitled It Must Be a Fun Time to Work on Microsoft Office which seems to have been misread by some folks based on some of the comments I’ve seen on my blog and in other places. So further exposition of some of the points in that post seems necessary.

First of all, there’s the question of who I was calling stupid when talking about the following announcements

  • Google announcing the launch of Presently, their Web-based Powerpoint clone. Interestingly enough, one would have expected presentation software to be the most obvious application to move to the Web first instead of the last.
  • Yahoo! announcing the purchase of Zimbra, a developer of a Web-based office productivity and collaboration suite.
  • Microsoft announcing that it would integrate Web-based storage and collaboration into its desktop office productivity suite.
  • IBM announcing that it would ship its own branded version of an Open Source clone of Microsoft’s desktop productivity suite.

Given that three of these announcements are about embracing the Web and the last one is about building disconnected desktop software, I assumed it was obvious who was jumping on a dying paradigm while the rest of the industry has already moved towards the next generation. To put this another way, James Robertson’s readers were right that I was talking about IBM.

There is something I did want to call out about James Robertson’s post. He wrote

People have moved on to the 80% solution that is the web UI, because the other advantages outweigh that loss of "richness".

I don’t believe that statement when it comes to office productivity software. I believe that the advantages of leveraging the Web are clear. From my perspective

  1. universal access to my data from any device or platform 
  2. enabling collaboration with “zero install” requirements on collaborators

are clear advantages that Web-based office productivity software has over disconnected desktop software.

It should be noted that neither of these advantages requires that the user interface is Web-based or that it is rich (i.e. AJAX or Flash if it is Web-based). Both of these things help but they aren’t a hard requirement.

What is important is universal access to my data via the Web. The reason I don’t have an iPhone is that I’m hooked on my Windows Mobile device because of the rich integration it has with my work email, calendar and task list. The applications on my phone aren’t Web-based, they are the equivalent of “desktop applications” for my phone. Secondly, I didn’t have to install them because they were already on my phone [actually I did have to install Oxios ToDo List but that’s only because the out-of-the-box task list synchronization in Windows Mobile 5 was less than optimal for my needs].

I used to think that having a Web-based interface was also inevitable, but that position softened once I realized that to truly hit the 80/20 point for most people, given how popular laptops are these days, you’ll need offline support. That means building support for local storage + synchronization into the application (e.g. Google Reader's offline mode). However once you’ve built that platform, the same storage and synchronization engine could be used by a desktop application as well.

In that case, either way I get what I want. So desktop vs. Web-based UI doesn’t matter since they both have to stretch themselves to meet my needs. But it is probably a shorter jump to Web-enable the desktop applications than it is to offline-enable the Web applications.  
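For what it's worth, the "local storage + synchronization" idea boils down to something like the sketch below. It is purely illustrative Python: the OfflineStore name and the push callback are made up, and conflict handling, authentication and the server side are all left out. Writes land in a local cache immediately and get queued in an outbox, and the outbox is replayed against the server whenever connectivity returns.

```python
# Minimal sketch of a local store with an outbox that syncs when online.
# Hypothetical names; no conflict resolution, auth or error handling.
import json
import sqlite3

class OfflineStore:
    def __init__(self, path="cache.db", push=None):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS docs (id TEXT PRIMARY KEY, body TEXT)")
        self.db.execute("CREATE TABLE IF NOT EXISTS outbox (seq INTEGER PRIMARY KEY AUTOINCREMENT, id TEXT, body TEXT)")
        self.push = push  # callable(doc_id, body) that sends one change to the server

    def save(self, doc_id, body):
        # Works offline: the write is durable locally and queued for later upload.
        self.db.execute("REPLACE INTO docs (id, body) VALUES (?, ?)", (doc_id, json.dumps(body)))
        self.db.execute("INSERT INTO outbox (id, body) VALUES (?, ?)", (doc_id, json.dumps(body)))
        self.db.commit()

    def sync(self):
        # Call when the network comes back: replay queued changes in order.
        for _, doc_id, body in self.db.execute("SELECT seq, id, body FROM outbox ORDER BY seq"):
            self.push(doc_id, json.loads(body))
        self.db.execute("DELETE FROM outbox")
        self.db.commit()
```

Whether the front end is a desktop app or a Web app, it can sit on top of the same storage and synchronization engine, which is really the point of the paragraph above.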

Now playing: Playa Fly - Feel Me


 

This is one of those posts I started before I went on my honeymoon and never got around to finishing. There are lots of interesting things happening in the world of office productivity software these days. Here are four announcements from the past three weeks that show just how things are heating up in this space, especially if you agree with Steve Gillmor that Office is Dead *(see footnote).

From the article Google Expands Online Software Suite 

MOUNTAIN VIEW, Calif. (AP) — Google Inc. has expanded its online suite of office software to include a business presentation tool similar to Microsoft Corp.'s popular PowerPoint, adding the latest twist in a high-stakes rivalry.

Google's software suite already included word processing, spreadsheet and calendar management programs. Microsoft has been reaping huge profits from similar applications for years.

Unlike Google's applications, Microsoft's programs are usually installed directly on the hard drives of computers.

From the article I.B.M. to Offer Office Software Free in Challenge to Microsoft’s Line

I.B.M. plans to mount its most ambitious challenge in years to Microsoft’s dominance of personal computer software, by offering free programs for word processing, spreadsheets and presentations.

Steven A. Mills, senior vice president of I.B.M.’s software group, said the programs promote an open-source document format.

The company is announcing the desktop software, called I.B.M. Lotus Symphony, at an event today in New York. The programs will be available as free downloads from the I.B.M. Web site.

From the blog post Yahoo scoops up Zimbra for $350 million

Yahoo has been on an acquisition binge late, but mostly to expand its advertising business. Now Yahoo is buying its way deeper into the applications business with the acquisition of Zimbra for a reported $350 million, mostly in cash. Zimbra developed a leading edge, Web 2.0 open source messaging and collaboration software suite, with email, calendar, document processing and a spreadsheet.

and finally, from the press release Microsoft Charts Its Software Services Strategy and Road Map for Businesses

 Today Microsoft also unveiled the following:

  • Microsoft® Office Live Workspace, a new Web-based feature of Microsoft Office that lets people access their documents online and share their work with others

Office Live Workspace: New Web Functionality for Microsoft Office

Office Live Workspace is among the first entries in the new wave of online services. Available at no charge, Office Live Workspace lets people do the following:

  • Access documents anywhere. Users can organize documents and projects for work, school and home online, and work on them from almost any computer even one not connected to the company or school network. They can save more than 1,000 Microsoft Office documents to one place online and access them via the Web.
  • Share with others. Users can work collaboratively on a project with others in a password-protected, invitation-only online workspace, helping to eliminate version-control challenges when e-mailing drafts to multiple people. Collaborators who don’t have a desktop version of Microsoft Office software can still view and comment on the document in a browser.

As you can see one of these four announcements is not like the others. Since it isn’t fair to pick on the stupid, I’ll let you figure out which company is jumping on a dying paradigm while the rest of the industry has already moved towards the next generation.  The Web is no longer the future of computing, computing is now about the Web.

* I do. Disconnected desktop software needs to go the way of the dodo.

Now playing: Prince - Sign 'O' the Times


 

Since tomorrow is the Data Sharing Summit, I was familiarizing myself with the OpenID specification because I was wondering how it dealt with people making false claims about their identity.

Scenario: I go to http://socialnetwork.example.com which supports OpenID and claim that my URL is http://brad.livejournal.com (i.e. I am Brad Fitzpatrick). The site redirects me to https://www.livejournal.com/login.bml along with a query string that has certain parameters specified (e.g. “?openid.mode=checkid_immediate&openid.identity=brad&openid.return_to=http://socialnetwork.example.com/home.html”) which is a long winded way of saying “LiveJournal can you please confirm that this user is brad then redirect them back to our site when you’re done?” At this point, I could make an HTTP request to http://socialnetwork.example.com/home.html, specify the Referrer header value as https://www.livejournal.com/login.bml and claim that I’ve been validated by LiveJournal as brad.  

This is a pretty rookie example but it gets the idea across. OpenID handles this spoofing problem by requiring an OpenID consumer (e.g. http://socialnetwork.example.com) to first make an association request to the target OpenID provider (e.g. LiveJournal) before performing any identity validation. The purpose of the request is to get back an association handle, which identifies a shared secret between both services and should be specified by the Consumer as part of each checkid_immediate request made to the Identity Provider.

There is also a notion of a dumb mode where, instead of making the aforementioned association request, the consumer sends the Identity Provider a check_authentication request asking whether the assoc_handle and signature returned via the redirected user are actually valid. This is a somewhat chattier way to handle the problem but it leads to the same results.
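To make the signature checking concrete, here's a rough sketch in Python of the consumer-side verification once an association exists. It follows the key-value signing scheme from the OpenID 1.x spec as I understand it, but it is illustrative only; the association handshake that produces the shared secret (and dumb mode's check_authentication round trip) is omitted.

```python
# Illustrative sketch of consumer-side verification of a signed OpenID
# response, assuming an association (shared secret) is already in place.
# Field names follow OpenID 1.x conventions; this is not a full implementation.
import base64
import hashlib
import hmac

def verify_signed_response(params, association_secret):
    """params: dict of query parameters from the return_to redirect,
    e.g. openid.mode, openid.identity, openid.signed, openid.sig.
    association_secret: bytes, the shared secret from the association step."""
    signed_fields = params["openid.signed"].split(",")
    # Rebuild the key-value token the provider signed: "field:value\n" pairs,
    # in the order listed in openid.signed, with the "openid." prefix dropped.
    token = "".join("%s:%s\n" % (f, params["openid." + f]) for f in signed_fields)
    expected = base64.b64encode(
        hmac.new(association_secret, token.encode("utf-8"), hashlib.sha1).digest()
    )
    # A spoofed redirect (like the Referrer trick above) fails here because the
    # attacker never had the shared secret needed to produce a valid openid.sig.
    return hmac.compare_digest(expected, params["openid.sig"].encode("ascii"))
```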

So far, I think I like OpenID. Good stuff.

Now playing: Gangsta Boo - Who We Be


 

Categories: Technology

I try to avoid posting about TechMeme pile ups but this one was just too irritating to let pass. Mark Cuban has a blog post entitled The Internet is Dead and Boring which contains the following excerpts 

Some of you may not want to admit it, but that's exactly what the net has become. A utility. It has stopped evolving. Your Internet experience today is not much different than it was 5 years ago.
...
Some people have tried to make the point that Web 2.0 is proof that the Internet is evolving. Actually it is the exact opposite. Web 2.0 is proof that the Internet has stopped evolving and stabilized as a platform. Its very very difficult to develop applications on a platform that is ever changing. Things stop working in that environment. Internet 1.0 wasn't the most stable development environment. To days Internet is stable specifically because its now boring.(easy to avoid browser and script differences excluded)

Applications like Myspace, Facebook, Youtube, etc were able to explode in popularity because they worked. No one had to worry about their ISP making a change and things not working. The days of walled gardens like AOL, Prodigy and others were gone.
...
The days of the Internet creating explosively exciting ideas are dead. They are dead until bandwidth throughput to the home reaches far higher numbers than the vast majority of broadband users get today.
...
So, let me repeat, The days of the Internet creating explosively exciting ideas are dead for the foreseeable future..

I agree with Mark Cuban that the fundamental technologies that underlie the Internet (DNS and TCP/IP) and the Web in particular (HTTP and HTML) are quite stable and are unlikely to undergo any radical changes anytime soon. If you are a fan of Internet infrastructure then the current world is quite boring because we aren't likely to ever see an Internet based on IPv8 or a Web based on HTTP 3.0. In addition, it is clear that the relative stability of the Web development environment and the increase in the number of people with high bandwidth connections are what have led to a number of the trends that are collectively grouped as "Web 2.0".

However Mark Cuban goes off the rails when he treats his vision of the future of media as the only kind of explosively exciting idea that a global network like the Internet can enable. Mark Cuban is an investor in HDNet, a company that creates and distributes professionally produced content in high definition video formats. Mark would love nothing more than to distribute his content over the Internet, especially given the lack of interest in HDNet in the cable TV universe (I couldn't find any cable company on the Where to Watch HDNet page that actually carried the channel).

Unfortunately, Mark Cuban's vision of distributing high definition video over the Internet has two problems. The first is that distributing high quality video over the Web is too expensive and the bandwidth of the average Web user is insufficient to make the user experience pleasant. The second is that people on the Web have already spoken, and content trumps media quality any day of the week. Remember when pundits used to claim that consumers wouldn't choose lossy, compressed audio on the Web over lossless music formats? I guess no one brings that up anymore given the success of the MP3 format and the iPod. Mark Cuban is repeating the same mistake with his HDNet misadventure. User generated, poor quality video on sites like YouTube and the larger libraries of content on sites like Netflix: Instant Viewing are going to trump the limited lineup on services like HDNet regardless of how much higher definition the video quality gets.

Mark Cuban has bet on a losing horse and he doesn't realize it yet. The world has changed on him and he's still trying to work within an expired paradigm. It's like a newspaper magnate blaming printer manufacturers for not making it easy to print a newspaper off of the Web instead of coming to grips with the fact that the Internet, with its blogging/social media/user generated content/craigslist and all that other malarkey, has turned his industry on its head.

This is what it looks like when a billionaire has made a bad investment and doesn't know how to recognize the smell of failure blasting his nostrils with its pungent aroma.


 

Categories: Current Affairs | Technology

I just read the post on the Skype weblog entitled What happened on August 16 about the cause of their outage which states

On Thursday, 16th August 2007, the Skype peer-to-peer network became unstable and suffered a critical disruption. The disruption was triggered by a massive restart of our users’ computers across the globe within a very short time frame as they re-booted after receiving a routine set of patches through Windows Update.

The high number of restarts affected Skype’s network resources. This caused a flood of log-in requests, which, combined with the lack of peer-to-peer network resources, prompted a chain reaction that had a critical impact.

Normally Skype’s peer-to-peer network has an inbuilt ability to self-heal, however, this event revealed a previously unseen software bug within the network resource allocation algorithm which prevented the self-healing function from working quickly.

This problem affects all networks that handle massive numbers of concurrent user connections, whether they are peer-to-peer or centralized. When you deal with tens of millions of users logged in concurrently and something causes a huge chunk of them to log in at once (e.g. after an outage or a synchronized computer reboot due to operating system patches), your system will be flooded with log-in requests. All the major IM networks (including Windows Live) have all sorts of safeguards in place to prevent this from taking down their networks, although how many short outages are due to this specific issue is anybody’s guess.
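One of the simpler safeguards is to make clients retry failed log-ins with exponential backoff plus random jitter, so millions of machines rebooting at once don't keep hammering the service in lockstep. The sketch below is a generic illustration of that idea, not how Skype or Windows Live actually implement theirs.

```python
# Sketch of client-side exponential backoff with jitter for log-in retries.
# Purely illustrative; real services also throttle on the server side.
import random
import time

def login_with_backoff(attempt_login, max_retries=8, base_delay=1.0, max_delay=300.0):
    """attempt_login() should return True on success, False on failure."""
    for attempt in range(max_retries):
        if attempt_login():
            return True
        # Sleep a random amount up to an exponentially growing cap ("full jitter"),
        # which spreads the retry storm out over time instead of synchronizing it.
        time.sleep(random.uniform(0, min(max_delay, base_delay * (2 ** attempt))))
    return False
```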

However Skype has an additional problem when such events happen due to its peer-to-peer model, which is described in the blog post All Peer-to-Peer Models Are NOT Created Equal -- Skype's Outage Does Not Impugn All Peer-to-Peer Models 

According to Aron, like its predecessor Kazaa, Skype uses a different type of Peer-To-Peer network than most companies. Skype uses a system called SuperNodes. A SuperNode Peer-to-Peer system is one in which you rely on your customers rather than your own servers to handle the majority of your traffic. SuperNodes are just normal computers which get promoted by the Skype software to serve as the traffic cops for their entire network. In theory this is a good idea, but the problem happens if your network starts to destabilize. Skype, as a company, has no physical or programmatic control over the most vital piece of its product. Skype instead is at the mercy of and vulnerable to the people who unknowingly run the SuperNodes.

This of course exposes vulnerabilities to any business based on such a system -- systems that, in effect, are not within the company's control.

According to Aron, another flaw with SuperNode models concerns system recovery after a crash. Because Skype lost its SuperNodes in the initial crash, its network can only recover as fast as new SuperNodes can be identified.

This design leads to a vicious cycle when it comes to recovering from an outage. With most of the computers on the network being rebooted, Skype lost a bunch of SuperNodes, and when the computers came back online they flooded the remaining SuperNodes, which in turn went down, and so on…

All of this is pretty understandable. What I don’t understand is why this problem is just surfacing. After all, this isn’t the first patch Tuesday. Was the bug in their network resource allocation process introduced in a recent version of Skype? Has the service been straining for months and last week was just the tipping point? Is this only half the story and there is more they aren’t telling us?

Hmmm… 

Now playing: Shop Boyz - Party Like A Rockstar (remix) (feat. Lil' Wayne, Jim Jones & Chamillionaire)


 

Categories: Technology

Matt Cutts has a blog post entitled Closing the loop on malware where he writes

Suppose you worked at a search engine and someone dropped a high-accuracy way to detect malware on the web in your lap (see this USENIX paper [PDF] for some of the details)? Is it better to start protecting users immediately, or to wait until your solution is perfectly polished for both users and site owners? Remember that the longer you delay, the more users potentially visit malware-laden web pages and get infected themselves.

Google chose to protect users first and then quickly iterate to improve things for site owners. I think that’s the right choice, but it’s still a tough question. Google started flagging sites where we detected malware in August of last year.

When I got home yesterday, my fiancée informed me that her laptop was infected with spyware. I asked how it happened and she mentioned that she’d been searching for sites to pimp her MySpace profile. Since we’d talked in the past about visiting suspicious websites, I wondered why she had chosen to ignore my advice. Her response? “Google didn’t put the This Site May Harm Your Computer warning on the link so I thought the site was safe. Google failed me.”

I find this interesting on several levels. There’s the fact that this feature is really useful and engenders a sense of trust in Google’s users. Then there’s the palpable sense of betrayal on the user’s part when Google’s “not yet perfectly polished” algorithms for detecting malicious software fail to flag a bad site. Finally, there’s the observation that instead of blaming Microsoft, who produces the operating system and the Web browser which were both infected by the spyware, she chose to blame Google, who produced the search engine that led to the malicious site. Why do you think this is? I have my theories…

Now playing: Hurricane Chris - Ay Bay Bay


 

Categories: Technology

There was an article on Ars Technica this weekend entitled Google selleth then taketh away, proving the need for DRM circumvention which is yet another example of how users can be screwed when they bet on a platform that utilizes DRM. The article states

It's not often that Google kills off one of its services, especially one which was announced with much fanfare at a big mainstream event like CES 2006. Yet Google Video's commercial aspirations have indeed been terminated: the company has announced that it will no longer be selling video content on the site. The news isn't all that surprising, given that Google's commercial video efforts were launched in rather poor shape and never managed to take off. The service seemed to only make the news when embarrassing things happened.

Yet now Google Video has given us a gift—a "proof of concept" in the form of yet another argument against DRM—and an argument for more reasonable laws governing copyright controls.

Google contacted customers late last week to tell them that the video store was closing. The e-mail declared, "In an effort to improve all Google services, we will no longer offer the ability to buy or rent videos for download from Google Video, ending the DTO/DTR (download-to-own/rent) program. This change will be effective August 15, 2007."

The message also announced that Google Checkout would issue credits in an amount equal to what those customers had spent at the Google Video store. Why the quasi-refunds? The kicker: "After August 15, 2007, you will no longer be able to view your purchased or rented videos."

See, after Google takes its video store down, its Internet-based DRM system will no longer function. This means that customers who have built video collections with Google Video offerings will find that their purchases no longer work. This is one of the major flaws in any DRM system based on secrets and centralized authorities: when these DRM data warehouses shut down, the DRM stops working, and consumers are left with useless junk.

Furthermore, Google is not refunding the total cost of the videos. To take advantage of the credit Google is offering, you have to spend more money, and furthermore, you have to spend it with a merchant that supports Google Checkout. Meanwhile, the purchases you made are now worthless.

This isn't the first time nor will it be the last time that some big company gives up on a product strategy tied to DRM, thus destroying thousands of dollars in end user investments. I wonder how many more fiascos it will take before consumers wholeheartedly reject DRM* or government regulators are forced to step in.

 Now playing: Panjabi MC - Beware (feat. Jay-Z)


 

Categories: Technology

July 23, 2007
@ 05:44 PM

I just found out that Vint Cerf, one of the founding fathers of the Internet and current Google employee, will be giving a talk in the Seattle area later today. I’ll be attending the talk and have a list of questions I’d like to ask during the Q&A session. However I suspect that a number of my readers likely have better questions than those I can come up with. Here are my questions, let me know what you think of them and suggest better ones if you don’t like mine

As usual, I’ll blog the proceedings as a Trip Report

Now playing: Twista - Do Wrong (feat. Lil Kim)


 

Categories: Technology

Disclaimer: This may sound like a rant but it isn't meant to be. In the wise words of Raymond Chen this is meant to highlight problems that are harming the productivity of developers and knowledge workers in today's world. No companies or programs will be named because the intent is not to mock or ridicule. 

This morning I had to rush into work early instead of going to the gym because of two limitations in the software around us.

Problem #1: Collaborative Document Editing

So a bunch of us are working on a document that is due today. Yesterday I wanted to edit the document but found out I could not because the software claimed someone else was currently editing it. So I opened it in read-only mode, copied out some data, edited it and then sent my changes in an email to the person who was in charge of the document. As if that wasn’t bad enough…

This morning, as I'm driving to the gym for my morning work out, I glance at my phone to see that I've received mail from several co-workers because I've "locked" the document and no one can make their changes. When I get to work, I find out that I didn’t close the document within the application and this was the reason none of my co-workers could edit it. Wow.

The notion that only one person at a time can edit a document or that if one is viewing a document, it cannot be edited seems archaic in today’s globally networked world. Why is software taking so long to catch up?
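As an aside, even plain old locking doesn't have to behave this badly. Here is a hypothetical sketch (not how any particular office suite actually works) of an advisory lock with an expiring lease, so a client that forgets to close a document simply loses its claim after a timeout instead of blocking co-workers until someone tracks down the culprit.

```python
# Sketch of an advisory lock with an expiring lease. Hypothetical names;
# real collaborative editors go further and merge concurrent edits.
import time

class LeaseLock:
    def __init__(self, lease_seconds=300):
        self.lease_seconds = lease_seconds
        self.holder = None
        self.expires_at = 0.0

    def acquire(self, user):
        now = time.time()
        # Grant the lock if it is free, expired, or already held by this user.
        if self.holder is None or now >= self.expires_at or self.holder == user:
            self.holder, self.expires_at = user, now + self.lease_seconds
            return True
        return False  # someone else holds a live lease

    def renew(self, user):
        # Editors renew periodically while the document is actually open.
        return self.acquire(user)

    def release(self, user):
        if self.holder == user:
            self.holder, self.expires_at = None, 0.0
```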

Problem #2: Loosely Coupled XML Web Services

While I was driving to the office I noticed another email from one of the services that integrates with ours via a SOAP-based XML Web Service. As part of the design to handle a new scenario, we added a new type that was going to be returned by one of our methods (e.g. imagine that there was a GetFruit() method which used to return apples and oranges but now returns apples, oranges and bananas). This change was crashing the applications that were invoking our service because they weren’t expecting us to return bananas.

However, the insidious thing is that the failure wasn’t because their application was improperly coded to fail if it saw a fruit it didn’t know; it was because the platform they built on was statically typed. Specifically, the Web Services platform automatically converted the XML to objects by looking at our WSDL file (i.e. the interface definition which stated up front which types are returned by our service). So this meant that any time new types were added to our service, our WSDL file would be updated, and any application invoking our service that was built on a Web services platform that performed such XML<->object mapping and was statically typed would need to be recompiled. Yes, recompiled.

Now, consider how many different applications could be accessing our service. What are our choices? Come up with GetFruitEx() or GetFruit2() methods so we don’t break old clients? Go over our web server logs and try to track down every application that has accessed our service? Never introduce new types? 
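There's a fourth option that statically typed, WSDL-generated stubs take off the table: have consumers parse the XML loosely and ignore elements or values they don't recognize. Here's a rough sketch of what that looks like, using the made-up GetFruit example from above rather than any real API.

```python
# A loosely coupled consumer sketch: instead of letting generated, statically
# typed stubs reject anything they don't recognize, parse the response and
# skip unknown entries. The GetFruit service and its element names are the
# hypothetical example from the post, not a real API.
import xml.etree.ElementTree as ET

KNOWN_FRUIT = {"apple", "orange"}

def parse_get_fruit_response(xml_text):
    """Return the fruit names we understand, ignoring any new types."""
    root = ET.fromstring(xml_text)
    fruit = []
    for elem in root.iter("fruit"):
        name = (elem.text or "").strip().lower()
        if name in KNOWN_FRUIT:
            fruit.append(name)
        # An unknown value (e.g. "banana") is skipped rather than treated
        # as a fatal deserialization error.
    return fruit

response = "<fruitList><fruit>apple</fruit><fruit>banana</fruit></fruitList>"
print(parse_get_fruit_response(response))  # ['apple']
```

A client written this way keeps working when bananas show up; it simply ignores them until it is updated to care about them.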

It’s sad that as an industry we built a technology on an eXtensible Markup Language (XML) and our first instinct was to make it as inflexible as technology that is two decades old which was never meant to scale to a global network like the World Wide Web. 

Software should solve problems, not create new ones which require more technology to fix.

Now playing: Young Jeezy - Bang (feat. T.I. & Lil Scrappy)


 

Categories: Technology | XML | XML Web Services

I was recently in a conversation where we were talking about things we'd learned in college that helped us in our jobs today. I tried to distill it down to one thing but couldn't so here are the three I came up with.

  1. Operating systems aren't elegant. They are a glorious heap of performance hacks piled one upon the other.
  2. Software engineering is the art of amassing collected anecdotes and calling them Best Practices when in truth they have more in common with fads than anything else.
  3. Pizza is better than Chinese food for late night coding sessions.

What are yours?


 

Categories: Technology

March 28, 2007
@ 05:50 PM

Dave Winer has a post entitled How basic is Twitter? where he writes

So inevitably, a query about the value of namespaces leads you to wonder if there will be TwitterClones, web-based services that emulate the Twitter API, that keep internal data structures similar to Twitter, and most important, peer with Twitter, the same way Twitter peers with IM and SMS systems.

This is as far as I got in my thinking when last night I decided to ask Les Orchard, a developer I know for quite a few years, and who I've worked with on a couple of projects, both of which use the kind of technology that would be required for such a project --

What if there were an open source implementation of Twitter?

Nik Cubrilovic happened to be online at the moment and he jumped in with an idea. Les confessed that he was thinking of doing such a project. I thought to myself that there must be a lot of developers thinking about this right about now. We agreed it was an interesting question, and I said I'd write it up on Scripting News, which is what I'm doing right now.

What do you think? Is Twitter important, like web servers, or blogging software, so important that we should have an open source implementation of something that works like Twitter and can connect up to Twitter? Where are the tough sub-projects, and how much does it depend on the willingness of the developers of Twitter #1 to support systems that connect to theirs?

The problem I see here is that Twitter isn't like web servers or a blogging engine because Twitter is social software. Specifically, the value of Twitter to its users is less about its functionality and more about the fact that their friends use it. This is the same as it is for other kinds of social/communications software like Facebook or Windows Live Messenger. Features are what gets the initial users in the door but it's the social network that keeps them there. This is a classic example of how social software is the new vendor lock-in.

So what does this have to do with Open Source? Lots. One of the primary benefits to customers of using Open Source software is that it prevents vendor lock-in because the source code is available and freely redistributable. This is a strong benefit when the source code is physically distributed to the user, either as desktop software or as server software that the user installs. In both these cases, any shabby behavior on the part of the vendor can lead to a code fork or, at the very least, users can take matters into their own hands and improve the software to their liking.

Things are different in the "Web 2.0" world of social software for two reasons. The obvious one is that the software isn't physically distributed to the users, but the less obvious reason is that social software depends on network effects. The more users you have, the more valuable the site is to each user. Having access to Slashcode didn't end the social lock-in that Slashdot had on geek news sites. That site was only overtaken when a new service that harnessed network effects better than they did showed up on the scene (i.e. Digg). Similarly, how much value do you think there is to be had from a snapshot of the source code for eBay or Facebook being made available? This is one area where Open Source offers no solution to the problem of vendor lock-in. In addition, the fact that we are increasingly moving to a Web-based world means that Open Source will be less and less effective as a mechanism for preventing vendor lock-in in the software industry. This is why Open Source is dead, as it will cease to be relevant in a world where most consumers of software actually use services as opposed to installing and maintaining software that is "distributed" to them.

Granted, I have no idea why Dave Winer would like to build an Open Source competitor to Twitter. The main thing that will come out of it is that it will make it easier for people to build Twitter knock offs. However given how easy it is to roll your own knock off of popular "Web 2.0" social sites (e.g. 23, SuperGu, Uncut, etc) this doesn't seem like a lofty goal in itself. I'm curious as to where Dave is going with this since he often has good insights that aren't obvious at first blush.


 

Categories: Technology

March 11, 2007
@ 02:14 PM

Yesterday I went shopping and every store had reminders that daylight saving time begins today. Every year before "springing forward" or "falling back" I always double check the current time at time.gov and the US Naval Observatory Master Clock Time. However neither clock has sprung forward. Now I'm not sure who I can trust to tell me the right time. :(

Update: Looks like I spoke too soon. It seems most of the clocks in the house actually figured out that today was the day to "spring forward" and I had the wrong time. :)


 

Categories: Technology

While I was house hunting a couple of weeks ago, I saw a house for sale that has a sign announcing that there was an "Open House" that weekend. I had no idea what an "Open House" was so I asked a real estate agent about it. I learned that during an "Open House", a real estate agent sits in an empty house that is for sale and literally has the door open so that people interested in the house can look around and ask questions about the house. The agent pointed out that with the existence of the Internet, this practice has now become outdated because people can get answers to most of their questions including pictures of the interior of houses for sale on real estate listing sites.

This got me to thinking about the Old Way vs. Net Way column that used to run in the Yahoo! Internet Life magazine back in the day. The column used to compare the "old" way of performing a task such as buying a birthday gift from a store with the "net" way of performing the same task on the Web.

We're now at the point in the Web's existence where some of the "old" ways to do things are now clearly obsolete in the same way it is now clear that the horse & buggy is obsolete thanks to the automobile. After looking at my own habits, I thought it would be interesting to put together a list of the top five industries that have been hurt the most by the Web. From my perspective they are

  1. Map Makers: Do you remember having to buy a map of your city so you could find your way to the address of a friend or coworker when you'd never visited the neighborhood? That sucked, didn't it? When was the last time you did that versus using MapQuest or one of the other major mapping sites?

  2. Travel Agents: There used to be a time when if you wanted to get a good deal on a plane ticket, hotel stay or vacation package you had to call or visit some middle man who would then talk to the hotels and airlines for you. Thanks to sites like Expedia the end result may be the same but the process is a lot less cumbersome.

  3. Yellow Pages: When I can find businesses near me via sites like http://maps.live.com and then go to sites like Judy's Book or City Search to get reviews, the giant yellow page books that keep getting left at my apartment every year are nothing but giant doorstops.

  4. CD Stores: It's no accident that Tower Records is going out of business. Between Amazon and the iTunes Music Store you can get a wider selection of music, customer reviews and instant gratification. Retail stores can't touch that.

  5. Libraries: When I was a freshman in college I went to the library a lot. By the time I got to my senior year most of my research was exclusively done on the Web. Libraries may not be dead but their usefulness has significantly declined with the advent of the Web.

I feel like I missed something obvious with this list but it escapes me at the moment. I wonder how many more industries will be killed by the Internet when all is said and done. I suspect real estate agents and movie theaters will also go the way of the dodo within the next decade.

PS: I suspect I'm not the only one who finds the following excerpt from The old way vs. the net way article hilarious

In its July issue, it compared two ways of keeping the dog well-fed. The Old Way involved checking with the local feed store and a Petco superstore to price out a 40-lb. bag of Nutra Adult Maintenance dog food. The effort involved four minutes of calling and a half-hour of shopping.

The Net Way involved electronically searching for pet supplies. The reporter found lots of sites for toys and dog beds, but no dog food. An electronic search specifically for dog food found a "cool Dog Food Comparison Chart" but no online purveyor of dog chow. Not even Petco's Web site offered a way to order and purchase online. The reporter surfed for 30 minutes, without any luck. Thus, the magazine declared the "old way" the winner and suggested that selling dog food online is a business waiting to be exploited.

Yeah, somebody needs to jump on that opportunity. :)


 

Categories: Technology

February 1, 2007
@ 01:19 AM

Miguel de Icaza of Gnumeric, GNOME and Ximian fame has weighed in with his thoughts on the FUD war that is ODF vs. OOXML. In his blog post entitled The EU Prosecutors are Wrong Miguel writes

Open standards and the need for public access to information was a strong message. This became a key component of promoting open office, and open source software. This posed two problems:

First, those promoting open standards did not stress the importance of having a fully open source implementation of an office suite. Second, it assumed that Microsoft would stand still and would not react to this new change in the market.

And that is where the strategy to promote the open source office suite is running into problems. Microsoft did not stand still. It reacted to this new requirement by creating a file format of its own, the OOXML.
...

The Size of OOXML

A common objection to OOXML is that the specification is "too big", that 6,000 pages is a bit too much for a specification and that this would prevent third parties from implementing support for the standard. Considering that for years we, the open source community, have been trying to extract as much information about protocols and file formats from Microsoft, this is actually a good thing.

For example, many years ago, when I was working on Gnumeric, one of the issues that we ran into was that the actual descriptions for functions and formulas in Excel was not entirely accurate from the public books you could buy.

OOXML devotes 324 pages of the standard to document the formulas and functions. The original submission to the ECMA TC45 working group did not have any of this information. Jody Goldberg and Michael Meeks that represented Novell at the TC45 requested the information and it eventually made it into the standards. I consider this a win, and I consider those 324 extra pages a win for everyone (almost half the size of the ODF standard).

Depending on how you count, ODF has 4 to 10 pages devoted to it. There is no way you could build a spreadsheet software based on this specification.
...
I have obviously not read the entire specification, and am biased towards what I have seen in the spreadsheet angle. But considering that it is impossible to implement a spreadsheet program based on ODF, am convinced that the analysis done by those opposing OOXML is incredibly shallow, the burden is on them to prove that ODF is "enough" to implement from scratch alternative applications.
...
The real challenge today that open source faces in the office space is that some administrations might choose to move from the binary office formats to the OOXML formats and that "open standards" will not play a role in promoting OpenOffice.org nor open source.

What is worse is that even if people manage to stop OOXML from becoming an ISO standard it will be an ephemeral victory.

We need to recognize that this is the problem. Instead of trying to bury OOXML, which amounts to covering the sun with your finger.

I think there is an interesting bit of insight in Miguel's post which I highlighted in red font. IBM and the rest of the ODF proponents lobbied governments against Microsoft's products by arguing that its file formats were not open. However they did not expect that Microsoft would turn around and make those very file formats open and instead compete on innovation in the user experience.

Now ODF proponents like Rob Weir, who've been trumpeting the value of open standards, find themselves in the absurd position of arguing that it is a bad thing for Microsoft to open up its file formats and provide exhaustive documentation for them. Instead they demand that Microsoft should either abandon backwards compatibility with the billions of documents produced by Microsoft Office in the past decade or embrace and extend ODF to meet its needs. Neither of which sounds like a good thing for customers. 

I guess it's like Tim Bray said, life gets complicated when there are billion$ of dollars on the line. I'm curious to see how Rob Weir responds to Miguel's post. Ideally, we'll eventually move away from these absurd discussions about whether it is a bad thing for Microsoft to open up its file formats and hand them over to an international standards body, and toward talking about how office productivity software can improve the lives of workers by innovating on features, especially with regards to collaboration in the workplace. After all, everyone knows that single user office productivity software is dead. Right?


 

Categories: Technology | XML

Some of you may have seen the recent hubbub related to Microsoft and BlueJ. If you haven't you can get up to speed from articles such as Microsoft copies BlueJ, admits it, then patents it. An update to this story was posted by Dan Fernandez in his blog post entitled Update: Response to BlueJ Patent Issues where he wrote

On Friday, an alert reader emailed me about a new article by Michael Kölling, the creator of BlueJ, about a patent issued by Microsoft for features in Object Test Bench that are comparable to BlueJ's Object Bench. I'll post the full "anatomy of a firedrill" some time later, but for now we can officially say that the patent application was a mistake and one that should not have happened. To fix this, Microsoft will be removing the patent application in question. Our sincere apologies to Michael Kölling and the BlueJ community.

I'm glad this has been handled so quickly. I hope the news of Microsoft withdrawing the patent application spreads as fast and as far as the initial outrage at the news of the patent application being filed. 


 

Categories: Technology

There's an article in the NY Times entitled Want an iPhone? Beware the iHandcuffs which contains the following excerpt

Even if you are ready to pledge a lifetime commitment to the iPod as your only brand of portable music player or to the iPhone as your only cellphone once it is released, you may find that FairPlay copy protection will, sooner or later, cause you grief. You are always going to have to buy Apple stuff. Forever and ever. Because your iTunes will not play on anyone else’s hardware.

Unlike Apple, Microsoft has been willing to license its copy-protection software to third-party hardware vendors. But copy protection is copy protection: a headache only for the law-abiding.

Microsoft used to promote its PlaysForSure copy-protection standard, but there must have been some difficulty with the “for sure” because the company has dropped it in favor of an entirely new copy-protection standard for its new Zune player, which, incidentally, is incompatible with the old one.

Pity the overly trusting customers who invested earlier in music collections before the Zune arrived. Their music cannot be played on the new Zune because it is locked up by software enforcing the earlier copy-protection standard: PlaysFor(Pretty)Sure — ButNotTheNewStuff.

The name for the umbrella category for copy-protection software is itself an indefensible euphemism: Digital Rights Management. As consumers, the “rights” enjoyed are few. As some wags have said, the initials D.R.M. should really stand for “Digital Restrictions Management.”

It's weird to see the kind of anti-DRM screed that one typically associates with people like Cory Doctorow getting face time in the New York Times. DRM is bad for society and bad for consumers. It's unfortunate that Microsoft is the company that has made one of the bogey men of anti-DRM activists a reality. As Mini-Microsoft wrote in his blog post The Good Manager, etc, etc, ...

In the meantime, I think a positive-because-it's-so-negative result of Zune is that it added fire to the DRM debate

No longer is it a theoretical problem that buying a lot of DRMed music from a vendor leaves you vulnerable if the DRM becomes unsupported or falls out of favor. Thanks to Zune and its lack of support for PlaysForSure, now even the New York Times has joined in the rally against DRM.

I have to agree with Mini-Microsoft: this is one of those things that is so bad that it actually does a 180 and will be good for all of us in the long run.


 

Categories: Technology

I checked out the official Apple iPhone site, especially the screencasts of the phone user interface and iPod capabilities. As an iPod owner, $500 is worth it just to get my hands on this next generation iPod which makes my Video iPod look old and busted. On the other hand, although the text messaging UI is pretty sweet, a cellphone without tactile feedback when pushing its buttons is a pain in the ass, especially when the layout of the buttons continually changes. I wouldn't wish that on my worst enemy. Maybe I'm just unusual in the fact that I don't want to be required to look at the cellphone's screen when using it. I pull my phone out of my pocket, unlock it and call the last number dialed, often without looking at the screen before putting it to my ear. It's hard to imagine that my muscle memory would ever get used to doing that without tactile feedback from the phone when navigating its interface. It also hasn't been announced whether the phone will be able to sync with Microsoft Exchange or not. As someone who used his phone to keep on top of the goings on at work while at CES, this is another non-starter.

That said, I have to agree with a lot of the stuff said in the article Macworld: Ten Myths of the Apple iPhone. A lot of the complaints about the iPhone just seem like sour grapes. Me, I'm going to wait until I can get an unlocked iPhone so I don't have to replace my Cingular 3125 or until Apple ships a 6th generation iPod (aka iPhone sans phone features).


 

Categories: Technology

A perennial topic for debate on certain mailing lists at work is rich client (i.e. desktop) software versus Web-based software. For every person who sings the praises of a Web-based program such as Windows Live Mail, there's someone wagging their finger who points out that "it doesn't work offline" and "not everyone has a broadband connection". A lot of these discussions have become permathreads on some of the mailing lists I'm on, and I can recite detailed arguments for both sides in my sleep.

However I think both sides miss the point and agree more than they disagree. The fact is that in highly connected societies such as North America and Western Europe, computer usage overlaps almost completely with internet usage (see Nielsen statistics for U.S. homes and Top 25 most connected countries). This trend will only increase as internet penetration spreads across developing countries and emerging markets. 

What is important to understand is that for a lot of computer users, their computer is an overpriced paperweight if it doesn't have an Internet connection. They can't read the news, can't talk to their friends via IM, can't download music to their iPods or Zunes, can't people watch on Facebook or MySpace, can't share the pictures they just took with their digital cameras, can't catch up on the goings on at work via email, can't look up driving directions, can't check the weather report, can't do research for any reports they have to write, and the list goes on. Keeping in mind that connectivity is key is far more important than whether the user experience is provided via a desktop app written using Win32 or a "Web 2.0" website powered by AJAX. Additionally, the value of approachability and ease of use over "features" and "richness" cannot be emphasized enough.

Taken from that perspective, a lot of things people currently consider "features" of desktop applications are actually bugs in today's Internet-connected world. For example, I have different files in the "My Documents" folders on the 3 or 4 PCs I use regularly. Copying files between PCs and keeping track of what version of what file is where is an annoyance. FolderShare to the rescue.

When I'm listening to my music on my computer I sometimes want to be able to find out what music my friends are listening to, recommend my music to friends or just find music similar to what I'm currently playing. Last.fm and iLike to the rescue.

The last time I was on vacation in Nigeria, I wanted to check up on what was going on at work but never had access to a computer with Outlook installed nor could I have actually set it up to talk to my corporate account even if I could. Outlook Web Access to the rescue.

Are these arguments for Web-based or desktop software? No. Instead they are meant to point out that improving the lives of computer users should mean finding better ways of harnessing their internet connections and their social connections to others. Sometimes this means desktop software, sometimes it means Web-based software and sometimes it means both.


 

Categories: Technology

I don't really have much commentary on this but I thought it was still worth sharing. Earlier this week Joel Spolsky wrote a blog post entitled Choices = Headaches where he writes

I'm sure there's a whole team of UI designers, programmers, and testers who worked very hard on the OFF button in Windows Vista, but seriously, is this the best you could come up with?

Image of the menu in Windows Vista for turning off the computer

Every time you want to leave your computer, you have to choose between nine, count them, nine options: two icons and seven menu items. The two icons, I think, are shortcuts to menu items. I'm guessing the lock icon does the same thing as the lock menu item, but I'm not sure which menu item the on/off icon corresponds to.

This was followed up yesterday by Moishe Lettvin, who used to work on the feature at Microsoft and has since gone to Google to work on Orkut. In his post entitled The Windows Shutdown crapfest, Moishe gives his perspective on some of the problems he faced while working on the feature for Longhorn/Vista.

My main problem with Joel's post is that his complaint seems to already be addressed by Vista. Isn't that the icon for a power button right there on the screen? So the nine options he complains about are really for advanced users? Regular users should only ever need to click the power button icon or the padlock icon. 

Then again, we shouldn't let the facts get in the way of a good anti-Microsoft rant. :)


 

November 9, 2006
@ 11:18 PM
A few days ago there was an article on the BBC News site entitled Zune problems for MSN customers which stated

But in a move that could alienate some customers, MSN-bought tracks will not be compatible with the new gadget. The move could also spell problems for the makers of MP3 players which are built to work with the MSN store.
...
The problem has arisen because tracks from the MSN Music site are compatible with the specifications of the Plays For Sure initiative. This was intended to re-assure consumers as it guaranteed that music bought from services backing it would work with players that supported it. MSN Music, Napster, AOL Music Now and Urge all backed Plays For Sure as did many players from hardware makers such as Archos, Creative, Dell and Iriver.

In a statement a Microsoft spokesperson said: "Since Zune is a separate offering that is not part of the Plays For Sure ecosystem, Zune content is not supported on Plays For Sure devices." The spokesperson continued: "We will not be performing compatibility testing for non-Zune devices, and we will not make changes to our software to ensure compatibility with non-Zune devices."
...
Microsoft said that its Windows Media Player will recognise Zune content which might make it possible to put the content on a Plays For Sure device. However, it said it would not provide customer support to anyone attempting this.

On a similar note there was an article entitled Trying Out the Zune: IPod It’s Not in the New York Times today which states

Microsoft went with its trusted Windows strategy: If you code it, the hardware makers will come (and pay licensing fees). And sure enough, companies like Dell, Samsung and Creative made the players; companies like Yahoo, Rhapsody, Napster and MTV built the music stores.

But PlaysForSure bombed. All of them put together stole only market-share crumbs from Apple. The interaction among player, software and store was balky and complex — something of a drawback when the system is called PlaysForSure. “Yahoo might change the address of its D.R.M. server, and we can’t control that,” said Scott Erickson, a Zune product manager. (Never mind what a D.R.M. server is; the point is that Microsoft blames its partners for the technical glitches.) Is Microsoft admitting, then, that PlaysForSure was a dud? All Mr. Erickson will say is, “PlaysForSure works for some people, but it’s not as easy as the Zune.”

So now Microsoft is starting over. Never mind all the poor slobs who bought big PlaysForSure music collections. Never mind the PlaysForSure companies who now find themselves competing with their former leader. Their reward for buying into Microsoft’s original vision? A great big “So long, suckas!” It was bad enough when there were two incompatible copy-protection standards: iTunes and PlaysForSure. Now there will be three.

(Although Microsoft is shutting its own PlaysForSure music store next week, it insists that the PlaysForSure program itself will live on.)

Microsoft’s proprietary closed system abandons one potential audience: those who would have chosen an iPod competitor just to show their resentment for Apple’s proprietary closed system. To make matters worse, you can’t use Windows Media Player to load the Zune with music; you have to install a similar but less powerful Windows program just for the Zune. It’s a ridiculous duplication of effort by Microsoft, and a double learning curve for you.

So how is the Zune? It had better be pretty incredible to justify all of this hassle.

As it turns out, the player is excellent.

On days like this, I miss having Robert Scoble roaming the halls in the B0rg cube. It sucks when you let the press tell your story for you.


 

October 13, 2006
@ 04:32 PM

Stephen O'Grady has a blog post entitled What is Office 2.0? where he writes

As some of you know having spoken with me on the subject, I have little patience for philosophical discussions of what Web 2.0 really means. When pressed on the subject, I usually just point to properties like del.icio.us and say, "That is Web 2.0." Likewise, I'm not terribly concerned with creating strict textual definitions of what Office 2.0 is, as long as I can credibly cite examples that exhibit the tendencies of a "next generation" office platform. As this show amply demonstrates, that part's easy. Google Docs & Spreadsheets, Joyent, Zoho, and so on? Very Office 2.0. Microsoft Office and OpenOffice.org? Office 1.0. Q.E.D.

While the question of what Office 2.0 is doesn't really keep me up at night, however, what it means absolutely does. We have a unique view on the technologies, because we're not merely covering analysts but avid users. And what's obvious to me, both as an analyst and a user, is that Office 2.0 has strengths for every weakness, and weaknesses for every strength.

The trend of talking about things without defining them and then revelling in the fact that they are ill-defined really makes me fear for the future of discourse in the software industry. I thought all the discussions about SOA were bad but "Web 2.0" and "Office 2.0" put all that to shame. I'm especially disappointed to see people who call themselves "analysts" like Stephen O'Grady join in this nonsense.

The problem with his del.icio.us example is that when I look at del.icio.us I see a bunch of things. I see a site that has

  • tagging/folksonomies
  • open APIs
  • user generated "content"
  • supports syndication via RSS feeds
  • a relatively small number of users and is likely to stay a niche service
  • nothing of interest that will ever draw me in as a regular user

The problem with lumping all these things together is that the impact of each of the main bullet points is different. The impact of the trend of more websites filled with user generated content from blogs to podcasts is different from the impact of the trend towards open APIs and "the Web as a platform".

Similarly when it comes to "Office 2.0", the impact of anywhere access to my business data from any thin client (aka browser) is completely different from the promise of Web-scale collaboration in business environments that some of the products Stephen O'Grady mentions portend. Lumping all these things together then failing to articulate them makes it difficult to discuss, analyze and consider the importance [or lack thereof] of what's going on on the Web today.

Please, stop it. Think of the children.


 

Categories: Technology

September 18, 2006
@ 05:20 PM

Via Shelley Powers, I stumbled on a post entitled The Future of White Boy clubs which has the following graphic

Lack of intellectual diversity is one of the reasons I decided to stop attending technology conferences about 'Web 2.0'. I attended the Web 2.0 conference last year and the last two ETechs. After the last ETech, I realized I was seeing the same faces and hearing the same things over and over again. More importantly, I noticed that the demographics of the speaker lists for these conferences don't match the software industry as a whole let alone the users who we are supposed to be building the software for.

There were lots of little bits of ignorance by the speakers and audience which added up in a way that rubbed me the wrong way. For example, at last year's Web 2.0 conference a lot of people were ignorant of Skype except as 'that startup that got a bunch of money from eBay'. Given that there are a significant number of foreigners in the U.S. software industry who use Skype to keep in touch with folks back home, it was surprising to see so much ignorance about it at a supposedly leading edge technology conference. The same thing goes for how surprised people were by how teenagers used the Web and computers; then again, the demographics of these conferences are skewed towards a younger crowd. There are just as many women using social software such as photo sharing, instant messaging, social networking, etc. as men, yet you rarely see their perspectives presented at any of these conferences. 

When I think of diversity, I expect diversity of perspectives. People's perspectives are often shaped by their background and experiences. When you have a conference about an industry which is filled with people of diverse backgrounds building software for people of diverse backgrounds, it is a disservice to have the conversation and perspectives be homogenous. The software industry isn't just young white males in their mid-20s to mid-30s, nor is that the primary demographic of Web users. However, if you look at the speaker lists of the various Web 2.0 conferences that seem to show up on a monthly basis (e.g. Office 2.0, The Future of Web Apps), you wouldn't know it.

As a service to future conference organizers, I'm going to provide a handy dandy table to help in diversifying your conference. Next time you want to organize a conference and you realize that you've filled it with a bunch of people who look and think just like you, try replacing the names in the left column of this table with those on the right. At the very least it'll make your speaker list look like that of O'Reilly's Web 2.0 conference, which not only has people of diverse ethnic backgrounds and genders but also people with different professional experiences.

Replace | With
Mike Arrington | Om Malik
Ben Trott | Mena Trott
Stewart Butterfield | Caterina Fake
Clay Shirky | Danah Boyd
Fred Wilson | Vinod Khosla

You get the idea. Diversifying your conference speaker list doesn't mean reducing the quality of speakers as many racist and sexist motherfuckers tend to state whenever this comes up. 

PS: I'm interested in talking to folks at startups that are building micro applications like widgets for MySpace. Any idea what a good conference to meet such folks would be?


 

Categories: Technology

September 5, 2006
@ 10:05 PM

From the press release entitled Industry Testing of Windows Vista Release Candidate 1 Begins we learn

Microsoft Announces Estimated Retail Pricing for All Windows Vista Editions

With Windows XP, customers often had to make tradeoffs in features and functionality as the Windows XP editions were aligned with specific hardware types. With Windows Vista, customers now have the ability to make choices between editions based on the valuable features they desire, which are now available as standard features of mainstream editions. For example, 64-bit support and Tablet PC and touch technology are standard features of the Home Premium and Business editions.

Pricing information for all Windows Vista editions is available online, along with additional information on the various editions of Windows Vista.

It looks like my next choice of operating system will have a suggested retail price for full package product of $399.00 and a suggested upgrade retail price of $259.00. Given that I'm running Windows Server 2003 at home it looks like I'll be paying the higher price. 


 

Categories: Technology

August 16, 2006
@ 11:01 AM

Robert Scoble has a blog post entitled Blogs and Digg, not geeky enough? where he writes

I notice a general trend looking through blogs, TechMeme, and Digg. There aren’t many coders anymore.

Five years ago the discussions were far more technical and geeky. Even insiderish. When compared to the hype and news of today.

It makes me pine for ye old RSS vs. Atom geek flamefests.

Anyone else notice this trend?

Sites like TechMeme and Digg home in on what is popular with the general audience, even if it is the general audience interested in software. There are more people interested in the impact of software-powered companies like Google, Yahoo!, Microsoft, MySpace, Youtube, and so on than there are people interested in the technology that powers these companies. There are going to be more people speculating about Google's next new service than those interested in a dissection of how the AJAX on one of Google's sites works. There are more people talking about Google Maps mashups than there are people talking about how to build them. There are more people interested in the next "Web 2.0" startup that Yahoo! is going to buy than are interested in technical language wars about whether Flash or AJAX is the way to go in building such sites. That's why you won't see Raymond Chen, Simon Willison or Jon Udell on TechMeme and Digg as often as you'll see the Michael Arringtons, Robert Scobles and Om Maliks of the world.

This doesn't mean "there aren't many coders anymore" as Robert Scoble suggests. It just means that there are more people interested in the 'industry' part of the "software industry" than in the 'software' part. What else is new?


 

Categories: Technology

User interfaces for computers in general and web sites in particular seem to be getting on my nerves these days. It's really hard to browse for what you are looking for on a number of buzzword-compliant websites today. Most of them seem to throw a tag cloud and/or search box at you and call it a day.

Search boxes suck as a navigation interface because they assume I already know what I'm looking for. I went to the Google Code - Project Hosting website and wanted to see the kinds of projects hosted there. Below is a screenshot of the website from a few minutes ago.

Notice that the list of project labels (aka tags) shown below the search box are just 'sample labels' as opposed to a complete classification scheme or hierarchy. They don't even list fairly common programming topics like VisualBasic, Javascript or FreeBSD. If I want to browse any of these project labels, I have to resort to poring over search results pages with minimal information about the projects.

Using tag clouds as a navigation mechanism is even more annoying. I recently visited Yahoo! Gallery to see what the experience was like when downloading new plugins for Yahoo! Messenger. On the main page, there is a link that says Browse Applications which takes me to a page that has the following panel on the right side. So far so good.

I clicked on the Messenger link and was taken to the following page.

What I dislike about this page is how much space is taken up by useless crap (i.e. the tag cloud full of uninformative tags) while the actually useful choices for browsing, such as 'most popular' and 'highest rated', are given so little screen real estate that they don't even show up on some screens without scrolling down. The tag cloud provides little to no value on this page except to point out that whoever designed it is hip to all the 'Web 2.0' buzzwords.

PS: Before anyone bothers to point this out, I realize a number of Microsoft sites also have similar issues.


 

Recently I was reading an email and realized that I'd dismissed the content of the email before I'd finished reading it. I wondered why I had done that and after performing some self-analysis I realized that the email contained several instances of certain phrases which caused me to flip the bozo bit on the content of the email. Below are my top 5 'bozo bit' phrases which automatically make my eyes glaze over and my mind shut off once I see them in an email I'm reading.

  1. synergy: This is usually a synonym for "we've run out of ideas but think integrating our products will give us a shot in the arm". A classic example of synergy at work is the AOL/TimeWarner merger, which turned out to be such a bad idea that Steve Case apologized for it last week.

  2. make(s) us more agile: I usually have no problem with this word if it is used by people who write code or at best are one level removed from those who write code. On the other hand, whenever I see a VP or middle management wonk use "make(s) us more agile" they not only show an ignorance of the principles of the agile manifesto but often propose things that make developers less agile not more.

  3. innovative: This one bothers me on multiple levels. The first is that many people fail to realize that new features aren't innovation; every idea you've had has already been had by someone else. You are at best just executing the idea a little differently. Just this weekend, I looked at Digg for the first time and realized that all the hubbub was about a knock-off of Kuro5hin with a more charismatic project leader and accompanying podcast. Another thing that bothers me about 'innovative' is that it is often about using technology for technology's sake instead of providing actual value to one's customers. A classic example of this comes from my first internship at Radiant Systems, when the company announced a partnership with AOL to provide email access at gas pumps. The stock actually jumped for a day or two until people realized what a dumb idea it was. Who's going to spend time logging into a terminal at a gas pump to check their email? People hate spending time at the gas pump. Can you imagine waiting behind a car at a gas station while the person in front of you was deleting the spam from their inbox at the gas pump? I think not.

  4. web 2.0: I realize this is flogging a dead horse but since this is the phrase that inspired this post I decided to include it. What I hate about this phrase is that it is so imprecise. I have no idea what the fuck people are talking about when they say Web 2.0. Even Tim O'Reilly, who coined the term, had to use a five page essay just to explain What is Web 2.0, which boiled down to Web 2.0 being a grab-bag of the key features of websites popular among the geek set regardless of whether they'd existed since 'Web 1.0' or were just new fads. It gets even better: earlier this month Tim O'Reilly published Levels of the Game: The Hierarchy of Web 2.0 Applications which establishes levels of Web 2.0 compliance. MapQuest is at Compliance Level 0 of Web 2.0 while Flickr is at Compliance Level 2 of Web 2.0 and Wikipedia is at Compliance Level 3. If this all makes sense to you, then I guess I'll see you at the invitation-only-yet-still-costs-thousands-of-dollars-to-attend Web 2.0 conference this year.

  5. super excited: This one may just be a Microsoft thing. The reason I can't stand this phrase is that it is an obvious exaggeration of how the person feels about what they are talking about, since it often is associated with information that is barely interesting let alone super exciting. Do you know what would be super exciting? Getting a phone call from Carmen Electra telling you that she was using StumbleUpon, found your blog and thought you sounded cute and would like to meet you. That's super exciting. Your product just shipped? Your division just had another reorg? You just added a new feature to your product? Those aren't even interesting let alone super exciting.

What are yours?


 

Categories: Ramblings | Technology

Sean Alexander, who works on the Windows digital media team at Microsoft, has a blog post entitled Thoughts on PlaysforSure and Zune Announcement which provides his perspective on some of the speculation about Microsoft's Zune announcement and its impact on Microsoft's PlaysForSure program. He writes

From what I've learned, Zune is a new brand for Microsoft - Zune is about community, music and entertainment discovery.  You'll experience Zune with a family of devices and software that bring it all together. Yes, we all want more details, but we’ll have to be a little patient for more details. Check out www.comingzune.com and sign up if you want more details.

 

One question that gets asked here is the relationship to our existing PlaysforSure program. The Windows digital media team (of which I've been a member) has been focused on raising the tide for all boats, raising the experience for many partners through programs like PlaysforSure, giving sessions on 360 degree product design at partner events, offering frank feedback on product designs when requested and more.  We want Windows to be the best place to experience digital music and entertainment.  The Windows team will continue to work closely with service and device partners to make Windows a great platform for any digital media.

 

And one need only look as far as the MP3 player/portable media player market to find other examples of taking multiple approaches.  At least two of the largest consumer electronics manufacturers compete on not one, not two, but three levels:

  • They supply memory for their own, and competitive MP3 players
  • They design and sell MP3 "engines" (systems on a chip) for their own, and competitive MP3 device manufacturers
  • They design, build and compete for retail space for their own, branded MP3 players
There are many other examples that can be drawn within Microsoft as well – for example, Microsoft Game Studios competes with independent game publishers for consumer dollars on the same platform (Xbox) also built by Microsoft. In all these cases, relationships of trust must be established independently between product groups or divisions.  The same holds true here as well.   It’s hard to understand unless you’re inside Microsoft but these groups have separate P&Ls (Profit/Loss metrics) and that sometimes means trying different strategies.

I've seen a bunch of negative speculation about Zune and PlaysForSure, both from technology news articles such as C|Net's Swan song for Microsoft's music allies? and blog posts such as Magic 8-Ball Answers Your Questions Regarding Microsoft’s ‘Zune’. I'm glad to see Sean, as someone who works on the Windows digital media team, offering his perspective on PlaysForSure.

The cool thing about blogging is that if people are talking about you and your product, you can just join in the conversation.


 

Categories: Technology

Om Malik has a blog post entitled Microsoft Partners, You Been Zunked which talks about what the recent Zune announcement means for Microsoft's partners in the digital media business. He writes

So Microsoft is going to get into the music device business - imitating the same “integrated experience” philosophy as Apple has successfully deployed to carve itself a big share of the portable music player and online music business.
...
More on that some other day, but the real and perhaps the only story in the news is that Microsoft’s partners - from device makers to music services - just got double crossed by the company they choose to believe in. I like to call it Zun-ked (a tiny take off on Punked.)

Let me break this down: Zune - the devices, the platform, and the store/service - will compete with everyone from Apple (of course) to Creative Technologies, iRiver, Samsung, Archos, Rhapsody, Napster, Yahoo Music and anyone dumb enough to buy into Microsoft’s visions of Urge, Media Player, PlayForSure etc.

Microsoft could argue that Zune would be unique and those others can still do business. But it is also a classic example of why Microsoft is lumbering bureaucratic morass wrapped in a can of conflicts. A modern day version of medieval fiefdoms, perhaps? Take for instance, Urge which is built into Windows Vista, and is what I guess you could call an almost integrated experience. What happens to consumers when faced with the choice of Zune or Urge!!! Answer - iPod.

This thought popped into my head as well and I'm sure there are folks at Microsoft who have answers to the questions Om asked. We already have Microsoft employees like Richard Winn and Cesar Menendez blogging about Zune which means that Microsoft is definitely participating in the conversation. It'll be interesting to hear what they have to say about how Zune relates to Urge, PlaysForSure and a number of other questions that have been asked in various stories about the announcement. 


 

Categories: Technology

Michael Gartenberg of Jupiter Research has a blog post entitled Zune is Real and Here's What it Means - First Take Analysis where he writes

If you have the current issue of Billboard, there's an article in there as well.

First, this is an acknowledgement that Microsoft is clearly not happy with Apple's dominance in digital music. I don't think it is concern about new growth scenarios. It's more a concern that Apple controls a key endpoint in the digital home and that Apple bits flow only to other Apple controlled bits or devices. That scenario doesn't bode well for Microsoft's larger ambitions. Second, even though Microsoft still talks about the diversity of the Windows platform as an overall advantage, let's face it, the platform argument is dead and licensees will have to deal with it. On one hand, no one has ever successfully created a business where you license technology to licensees and simultaneously compete with them on the device side. On the other hand, it's not like there's a lot of other places for licensees to go to get technology.

So what's the challenge? Essentially there are three things.

  • Creating a technically competent challenger...
  • Creating a lifestyle device...
  • Creating a platform...
...
Early market share, however, isn't likely to come from disgruntled iPod users looking to switch. The real losers in the short term are likely to be the likes of Creative, iRiver and other former partners that have failed to deliver to market share from Apple and will now find themselves not only competing with Apple but with their former partners from Redmond.

Interesting. As someone who's bought 5 iPods over the past few years (2 for me, 1 for my mom, 1 for my girlfriend and 1 for her daughter) I'm quite the fan of Apple's devices and often walk the hallways at work looking like one of those silhouettes from the iPod ads. I'll definitely take a Zune out for a test drive when I'm shopping for my next music player. So far nothing has compared to the iPod experience but Microsoft's work with Xbox/Xbox Live shows the company can compete when it comes to hardware/online service combos.

PS: Isn't it weird how different the results are for http://images.google.com/images?q=ipod+ad vs. http://www.live.com/#q=ipod%20ad&scope=images&lod=2&page=results?


 

Categories: Technology

I recently stumbled on YouOS and was struck by how bad an idea I thought it was. I don't even have to write down why, because Jon Udell has already beaten me to the punch with his article Application UI goes back to basics where he writes

Consider the effects of the graphical user interface. At hospital admitting desks, in accountants’ offices, and at video retail stores, I watch people perform tasks for which the desktop metaphor — with its cluttered surface and overlapping resizable windows — is at best a distraction and at worst an impediment.

Although YouOS is an interesting bit of technical wizardry, it seems like a step back when it comes to providing value to end users. The fact that there are multiple, tailored interfaces to my data on the Web (e.g. del.icio.us for my links, My Yahoo! for my digital dashboard, MSN Spaces for my photos and social network, etc) all accessible from a different tab in my browser is a lot more powerful than the classic WIMP interface that drives desktop computing. Trying to port the desktop metaphor to the Web is like working on how to fuel your car with hay because that is what horses eat.

Last year at the Web 2.0 conference, both Ray Ozzie and Sergey Brin said similar things when asked about Web-based office suites. Of course, since then Google purchased Writely and shipped Google Spreadsheets which is somewhat contradictory. :)


 

Categories: Technology

June 27, 2006
@ 04:55 AM

Quentin Clark has posted a new entry entitled Update to the Update on the WinFS team blog that answers some of the questions that have been raised since his post on Friday about the status of WinFS. He writes

Is WinFS dead?
Yes and No. Yes, we are not going to ship WinFS as a separate, monolithic software component. But the answer is also No - the vision remains alive and we are moving the technology forward. A lot of the technology really was database stuff – and we’re putting that into SQL and ADO. But some of the technology, especially the end user value points, are not ready, and we’re going to continue to work on that in incubation. Some or all of these technologies may be used by other Microsoft products going forward.

Does your plan for WinFS have any impact on Windows Vista?
There is no impact on Windows Vista. We announced back in August 2004 that WinFS would not be in Windows Vista.

Will the "Relational Filesystem" ever be in Windows?
Hey – we are very busy finishing Vista, and just aren’t ready to talk about what comes next. The vision for a richer storage in Windows is very much alive.  With the new tools for searching and organizing information in Windows Vista, we are taking a good step towards that vision.  

Why are parts of WinFS going into SQL Server?
We have a vision around data that guides us we call the "Data Platform Vision". We’ve been talking with customers about this for some time, and we have heard consistent positive feedback. It was clear that the integrated storage and automation features of WinFS will help SQL Server deliver on the "Beyond Relational" and "Continuous Availability and Automation" promises of that vision. We decided to focus resources on delivering these technologies to our customers as part of the Data Platform Vision in the near term.

Why did Microsoft announce this now after talking about WinFS at TechEd so recently?
When we were at TechEd, we had not made the decision. Sure, it was under discussion, but we did not have all the information we needed and we had not made the call yet. We did share the news as soon as we had the final word. We could have waited longer to disclose the information and made the change in plans less of a contrast, but we chose to notify people as soon as we could. This is why we used the blog and didn’t fire-up the big MS PR machinery – that takes time.

I commented internally that the response to Quentin's original blog post shows that there has been a discrepancy between what the WinFS team has been working on and what the developer community believes they were delivering. I got to read a draft of this blog post before it went up and it does a better job of stating what has happened with WinFS and even seems to have incorporated some of my feedback. I hope Charles Miller doesn't think this post is also un-blog-like. :)


 

June 26, 2006
@ 03:33 PM

I was reading Charles Miller's post entitled We Come to Bury WinFS... where he wrote

The first thing to strike me about the blog post announcing the end of WinFS as a Vista feature is how totally un-blog-like it is.

Every comment (bar one) got the point. WinFS is dead. Its carcass is being split between SQL Server and ADO.NET, and the relational filesystem that was going to change the way we use computers is no longer just postponed to be shipped after Vista, it’s gone.

The blog post itself, however, is written entirely in marketing-speak. The engineer talks about how super-excited the team is about this "new direction", how encouraging this news is, and leaves the fate of Vista for a final, particularly obfuscatory paragraph. Nary a word is allowed to suggest that the last nail in the coffin for Vista’s most eagerly anticipated feature might be a huge let-down to those people who have been watching it slip further and further down the schedule since its fanfare announcement as a part of Longhorn three years ago.

Did Microsoft forget everything Scoble was supposed to be teaching them, so quickly?

Every now and then, you’ve got to put out a mea culpa. You’ve promised something that turned into something else, or that you changed your mind about, or that you just can’t deliver. In the mass-media world, you do this by spinning the story as positively as you can. The message will be filtered by intermediaries before it reaches the public, and it’s expected the journalists in the middle will get the point, pulling quotes from the positive spin to offset the otherwise negative message.

I agree 100% with Charles Miller's sentiments about the blog posting on WinFS. This seems to be another case where Microsoft overpromised and failed to deliver, but even worse, instead of owning up to this, the blog post spins it as being what customers want. From reading the hundred or so comments and trackbacks to Quentin Clark's post it doesn't seem that there are many people who are excited or encouraged that what once was touted as a pillar of Longhorn is now just another checkbox feature in SQL Server. This makes Microsoft look bad to developers because it means that we are either insulting our developer customers by thinking we can pull the wool over their eyes in such a blatant way or, even worse, that we are completely out of touch with them. Either way, it sucks and I feel like we should apologise to developers and perhaps even the software industry as a whole. Microsoft did offer a mea culpa to developers for the delay between Internet Explorer versions and I think this is another one of those cases where we should do the same.

I feel like I should probably throw in some last thoughts about WinFS, the technology, especially since in my previous post I claim that this decision should have been made a few years ago. Below is a random collection of my thoughts on WinFS, which can also be gleaned from my various blog posts on the technology over the last few years.

  1. There was a divergence between what the team was building and what people thought the team was building. A common misconception was that WinFS would somehow make search "better" on the desktop in the same way that desktop search tools like Lookout do. I addressed this misconception further in my post Killing the "WinFS is About Making Search Better" Myth from almost two years ago.

  2. The project had the biggest example of scope creep I'd ever seen at Microsoft. When WinFS swallowed ObjectSpaces, the team decided that instead of just tackling the hard problem of building an object/relational file system for Windows, it would also tackle the hard problem of building an enterprise-class object-to-relational mapping technology in the same product in a single release. It also didn't help that key ObjectSpaces folks like Matt Warren, Dinesh Kulkarni and Luca Bolognese ended up joining the C# team to work on LINQ, which meant WinFS inherited all of the problems of ObjectSpaces but not necessarily all the folks who had been working on those problems.

  3. The chicken and the egg problem. One of the key ideas in the WinFS type system for the Windows desktop was that we'd have common file system level types for high level concepts like people/contacts, email messages or photos instead of just opaque bits on disk with some file format specific metadata. To take advantage of this, existing applications would have to be rewritten or new [backwards incompatible] applications would have to be written targeting WinFS. The primary benefits of making this change [besides the improved programming model] were the network effects if lots of applications used these types (e.g. Outlook stored mail identified by a WinFS contact, RSS Bandit stored RSS feeds from that WinFS contact, AOL Instant Messenger stored IM conversation logs using that contact, etc). Even if you got these network effects you then had to deal with the Windows registry problem (i.e. apps stomping over each other's data, which is one of the main problems with the Windows registry today).

  4. I never saw good answers to the questions Jon Udell asked in his blog posts Questions about Longhorn, part 1: WinFS and Questions about Longhorn, part 2: WinFS and semantics. Specifically, the world is betting big on open file formats such as XML, including parts of Microsoft (e.g. Microsoft Office), so why would anyone want to build applications targeting a proprietary Windows-only file system that didn't have a good XML story for getting data out of the platform?

I should probably stop now. Even though all the information above is freely available to anyone who reads blogs and can put two and two together, some may object to the above collection of thoughts.
 

June 24, 2006
@ 04:57 PM

Quentin Clark has a blog post entitled WinFS Update where he writes

There are many great technical innovations the WinFS project has created – innovations that go beyond just the WinFS vision but are part of a broader Data Platform Vision the company is pursuing.  The most visible example of this today is the work we are now doing in the next version of ADO.NET for Orcas.  The Entities features we are now building in ADO.NET started as things we were building for the WinFS API.  We got far enough along and were pushed on the general applicability of the work that we made the choice to not have it be just about WinFS but make it more general purpose (as an aside – this stuff is really coming together – super cool). 

Other technical work in the WinFS project is at a similar point – specifically the integration of unstructured data into the relational database, and automation innovations that make the database "just work" with no DBAs – "richer store" work.  It's these storage innovations that have matured to the point where we are ready to start working on including them in our broader database product.  We are choosing now to take the unstructured data support and auto-admin work and deliver it in the next release of MS SQL Server, codenamed Katmai.  This really is a big deal – productizing these innovations into the mainline data products makes a big contribution toward the Data Platform Vision we have been talking about.  Doing this also gives us the right data platform for further innovations. 

These changes do mean that we are not pursuing a separate delivery of WinFS, including the previously planned Beta 2 release.  With most of our effort now working towards productizing mature aspects of the WinFS project into SQL and ADO.NET, we do not need to deliver a separate WinFS offering. 

So that's it, no more WinFS. This is the right decision, albeit two years too late, but better late than never. It's sad to think about the projects that got killed or disrupted because of WinFS only for this to happen. In a recent column entitled Taking One for the Team, Robert X. Cringely has a quote from Management By Baseball by Jeff Angus which reads "When I worked for a few years at Microsoft Corporation in the early '80s,...no one cared to track and codify past failures as a way to help managers create guidelines of paths to follow and avoid". I hope this doesn't end up happening with the lessons from the WinFS project.


 

Categories: Technology

I was pretty surprised to find the press release entitled Microsoft Robotics Studio Provides Common Ground for Robotics Innovation via Todd Bishop this morning. It states

PITTSBURGH— June 20, 2006 — Today at RoboBusiness Conference and Exposition 2006, Microsoft Corp. showcased the community technology preview (CTP) of a new Windows®-based environment for academic, hobbyist and commercial developers to easily create robotic applications for a wide variety of computing platforms. In addition, early adopter companies, universities and research institutes offered demos and provided support for the new Microsoft® Robotics Studio development platform. The community technology preview of the Microsoft Robotics Studio is available for download at http://msdn.microsoft.com/robotics.

"Microsoft, together with the upcoming LEGO® MINDSTORMS® NXT, will help further amplify the impact of robotics,” said Søren Lund, director of LEGO MINDSTORMS at the LEGO Group. “The MINDSTORMS robotics toolset has enjoyed a strong community of users since 1998, and the launch of our next-generation platform includes many built-in features that further the community’s ability to take MINDSTORMS programming out of the box. In combination with Microsoft Robotics Studio, PC users will have a sophisticated tool that will further extend the powerful NXT hardware and software to an even wider range of developers who wish to create advanced applications for their LEGO robots."

At first glance, I thought this was an announcement that Microsoft would be getting into building robots like Asimo, but it seems that instead Microsoft is getting into the business of building development platforms for programming robots. There is a good overview of Microsoft Robotics Studio on MSDN which describes the core pieces of the platform. Interestingly, I can now program LEGO Mindstorms using C# and the .NET Framework instead of lower-level languages like NQC (Not Quite C), which is quite cool.

A neat bit of trivia is that this is the project that George Moore, who now runs the Windows Live developer platform team, came to Windows Live from. It's interesting to see how different one's job role can be from year to year at Microsoft. 


 

Categories: Technology

This week is TechEd 2006, Microsoft's primary conference for IT professionals and developers. There'll be a bunch of announcements about Microsoft products over the next few days but with Robert Scoble on his way out I'm not sure where we are supposed to get our info.

Anyway, on Sunday there was a keynote given by Ray Ozzie which I thought was interesting and is excerpted below

For those of us close to IT, and who have been close to IT for many years, this is a jarring reversal from the days when we saw the latest innovations in computing and communications at places like NCC and COMDEX. In those days, the enterprise requirements for large-scale transaction systems, and the public sector requirements for large-scale scientific computing drove creation of the world's most advanced data centers. Enterprises were showcases for vendors' most sophisticated and scalable technologies.

Today some of the world's most advanced data centers are those designed to directly serve consumers out on the Internet. For example, last month there were about 130 million people who used Windows Live Spaces, another 230 million used our Messenger IM service. More than 250 million people used Windows Hotmail service, hundreds of millions of active, unique users each month. Clearly, building systems at this scale is different than building software for enterprise servers, which are designed to serve thousands or tens of thousands of concurrent users.

It's estimated that just among Microsoft and Yahoo and Google, there are well over 1 million servers racked up in data centers, located around the globe, serving trillions of e-mails, and IMs, and searches, housing many, many petabytes of storage, serving 1 billion Internet users. And the investment continues, you don't have to stray far from our Redmond headquarters to see.
...
At times of disruption like this there are always extremists. Twenty-five years ago, at the beginning of the PC revolution, some predicted the death of the mainframe, because of the PC. Now there are extremists who believe that every application will be accessed through a browser, and that everything will move to this computing cloud, that your enterprise data center will go away, that you'll trust third parties with your business information, and systems.

Microsoft is taking a very pragmatic approach; a seamless, blended, client-server-service approach. We want to make sure that you can easily transition client and server-based applications to services, or vice-versa. Our services won't be disconnected from existing applications, but instead are going to be designed to complement and extend our Windows and Office platforms to the Internet.

Under the name Live, we'll provide a blend of desktop software, server-based software, and our own enterprise service offering, and our partners' offerings, enabling you to make the right tradeoffs that make the most sense for your business. One notable example of this client-server-service synergy can be found in our approach to information management and search. Our goal is to provide the people within your organization a simplified, unified way of getting at the information that they need, no matter where it resides.

There are two themes I like here. The first is that it seems Ray Ozzie agrees with the My Website is Bigger Than Your Enterprise meme. The funny thing is that even though our CTO gets it, sometimes it is hard to explain this to some of the folks working on server products at Microsoft. There is a big difference in the complexity and scale requirements to build a system like Hotmail or MSN Spaces versus building an Exchange or a SharePoint. There are lessons to learn here, both on the part of vendors like Microsoft as well as customers of enterprise software who want to utilize the lessons that mega-scale online services have learned the hard way. The second theme is that there is a continuum of software experiences that spans both desktop applications and Web applications. I don't believe that Web applications will or should replace desktop applications. On the flip side, I think that desktop applications that don't harness the power of the network (the Web or the intranet) will begin to look archaic in a few years. Ray Ozzie seems to totally get this, which is good for Microsoft. Maybe in a few years we can get Steve Gillmor to stop calling it Office Dead. :)

PS: From Trevin's TechEd update it looks like our MSN vs. Windows Live "branding strategy" is just as confusing to end users as I expected. He wrote

Booth fraffic[sic] was pretty light today, and the most frequent question was "What is Windows Live?".  People's guesses at what Windows Live was were all over the map -- some thought it was related to Live Communicator, others thought it was just the www.live.com homepage, while others took wild stabs in the dark.  Gerard commented that one person thought it might be related to our Education/Learning divison.
 
After talking to about 25 customers, it was abundantly clear that customers have no idea at all what Windows Live is, or how it relates to Windows or MSN.  This explained why there was so little traffic to our booth -- of the people that stopped by, they almost did it by accident.

Trevin is confident that our customers will soon have some of their confusion cleared. I on the other hand am not so sure. I'm sure once the major MSN properties like Hotmail, MSN Messenger and MSN Spaces are rebranded there will be more awareness of 'Windows Live' by customers. However I suspect the confusion around the difference between 'MSN' and 'Windows Live' will continue for quite a while. Maybe some of the marketing folks who fixed the weirdness of having both WinFX and the .NET Framework as dueling brands will be reorged into our division and can fix this foolishness.


 

Categories: Technology

Tim O'Reilly ran a series of blog posts a few months ago on the O'Reilly Radar blog entitled "Database War Stories" where he had various folks from Web companies talk about their issues scaling databases. I thought the series of posts had an interesting collection of anecdotes and thus I'm posting this here so I have a handy link to the posts as well as the highlights from each entry.

  1. Web 2.0 and Databases Part 1: Second Life: Like everybody else, we started with One Database All Hail The Central Database, and have subsequently been forced into clustering. However, we've eschewed any of the general purpose cluster technologies (mysql cluster, various replication schemes) in favor of explicit data partitioning. So, we still have a central db that keeps track of where to find what data (per-user, for instance), and N additional dbs that do the heavy lifting. Our feeling is that this is ultimately far more scalable than black-box clustering.

  2. Database War Stories #2: bloglines and memeorandum: Bloglines has several data stores, only a couple of which are managed by "traditional" database tools (which in our case is Sleepycat). User information, including email address, password, and subscription data, is stored in one database. Feed information, including the name of the feed, description of the feed, and the various URLs associated with feed, are stored in another database. The vast majority of data within Bloglines however, the 1.4 billion blog posts we've archived since we went on-line, are stored in a data storage system that we wrote ourselves. This system is based on flat files that are replicated across multiple machines, somewhat like the system outlined in the Google File System paper, but much more specific to just our application. To round things out, we make extensive use of memcached to try to keep as much data in memory as possible to keep performance as snappy as possible.

  3. Database War Stories #3: Flickr: tags are an interesting one. lots of the 'web 2.0' feature set doesn't fit well with traditional normalised db schema design. denormalization (or heavy caching) is the only way to generate a tag cloud in milliseconds for hundereds of millions of tags. you can cache stuff that's slow to generate, but if it's so expensive to generate that you can't ever regenerate that view without pegging a whole database server then it's not going to work (or you need dedicated servers to generate those views - some of our data views are calculated offline by dedicated processing clusters which save the results into mysql).

  4. Database War Stories #4: NASA World Wind: Flat files are used for quick response on the client side, while on the server side, SQL databases store both imagery (and soon to come, vector files.) However, he admits that "using file stores, especially when a large number of files are present (millions) has proven to be fairly inconsistent across multiple OS and hardware platforms."

  5. Database War Stories #5: craigslist: databases are good at doing some of the heavy lifting, go sort this, give me some of that, but if your database gets hot you are in a world of trouble so make sure can cache stuff up front. Protect your db!

    you can only go so deep with master -> slave configuration at some point you're gonna need to break your data over several clusters. Craigslist will do this with our classified data sometime this year.

    Do Not expect FullText indexing to work on a very large table.

  6. Database War Stories #6: O'Reilly Research: The lessons:

    • the need to pay attention to how data is organized to address performance issues, to make the data understandable, to make queries reliable (i.e., getting consistent results), and to identify data quality issues.
    • when you have a lot of data, partitioning, usually by time, can make the data usable. Be thoughtful about your partitions; you may find its best to make asymmetrical partitions that reflect how users most access the data. Also, if you don't write automated scripts to maintain your partitions, performance can deteriorate over time.
  7. Database War Stories #7: Google File System and BigTable: Jeff wrote back briefly about BigTable: "Interesting discussion. I don't have much to add. I've been working with a number of other people here at Google on building a large-scale storage system for structured and semi-structured data called BigTable. It's designed to scale to hundreds or thousands of machines, and to make it easy to add more machines the system and automatically start taking advantage of those resources without any reconfiguration. We don't have anything published about it yet, but there's a public talk about BigTable that I gave at University of Washington last November available on the web (try some searches for bigtable or view the talk)."

  8. Database War Stories #8: Findory and Amazon: On Findory, our traffic and crawl is much smaller than sites like Bloglines, but, even at our size, the system needs to be carefully architected to be able to rapidly serve up fully personalized pages for each user that change immediately after each new article is read. Our read-only databases are flat files -- Berkeley DB to be specific -- and are replicated out using our own replication management tools to our webservers. This strategy gives us extremely fast access from the local filesystem. We make thousands of random accesses to this read-only data on each page serve; Berkeley DB offers the performance necessary to be able to still serve our personalized pages rapidly under this load. Our much smaller read-write data set, which includes information like each user's reading history, is stored in MySQL. MySQL MyISAM works very well for this type of non-critical data since speed is the primary concern and more sophisticated transactional support is not important.

  9. Database War Stories #9 (finis): Brian Aker of MySQL Responds: Brian Aker of MySQL sent me a few email comments about this whole "war stories" thread, which I reproduce here. Highlight -- he says: "Reading through the comments you got on your blog entry, these users are hitting on the same design patterns. There are very common design patterns for how to scale a database, and few sites really turn out to be all that original. Everyone arrives at certain truths, flat files with multiple dimensions don't scale, you will need to partition your data in some manner, and in the end caching is a requirement."

    I agree about the common design patterns, but I didn't hear that flat files don't scale. What I heard is that some very big sites are saying that traditional databases don't scale, and that the evolution isn't from flat files to SQL databases, but from flat files to sophisticated custom file systems. Brian acknowledges that SQL vendors haven't solved the problem, but doesn't seem to think that anyone else has either.

I found most of the stories to be interesting, especially the one from the Flickr folks. Based on some early thinking I did around tagging-related scenarios for MSN Spaces, I'd long since assumed that you'd have to throw out everything you learned in database class at school to build anything truly performant. It's good to see that confirmed by more experienced folks.
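To make the denormalization point concrete, here is a minimal C# sketch of the trade-off. It is purely illustrative, not code from MSN Spaces, Flickr or any real tagging system: every tagging write also updates a precomputed per-tag count, so rendering a tag cloud becomes a cheap lookup instead of an aggregation over every tagged item.

    // Illustrative sketch: maintain a denormalized tag-count table on write so that
    // reads (e.g. rendering a tag cloud) never have to aggregate the raw tagging data.
    using System;
    using System.Collections.Generic;

    class TagStore
    {
        // "Normalized" data: the set of tags on each item.
        private readonly Dictionary<string, HashSet<string>> tagsByItem = new Dictionary<string, HashSet<string>>();
        // Denormalized data: a precomputed count per tag, updated on every write.
        private readonly Dictionary<string, int> countByTag = new Dictionary<string, int>();

        public void Tag(string itemId, string tag)
        {
            HashSet<string> tags;
            if (!tagsByItem.TryGetValue(itemId, out tags))
            {
                tags = new HashSet<string>();
                tagsByItem[itemId] = tags;
            }
            if (tags.Add(tag)) // only bump the count the first time an item gets this tag
            {
                int count;
                countByTag.TryGetValue(tag, out count);
                countByTag[tag] = count + 1;
            }
        }

        // The tag cloud "query" is now a trivial read of the denormalized counts.
        public IEnumerable<KeyValuePair<string, int>> TagCloud()
        {
            return countByTag;
        }
    }

    class Program
    {
        static void Main()
        {
            var store = new TagStore();
            store.Tag("photo1", "seattle");
            store.Tag("photo2", "seattle");
            store.Tag("photo2", "sunset");
            foreach (KeyValuePair<string, int> pair in store.TagCloud())
                Console.WriteLine(pair.Key + ": " + pair.Value);
        }
    }

The cost is the one the war stories keep describing: writes do more work and the duplicated data can drift out of sync, which is why this only pays off when reads vastly outnumber writes.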

I'd have loved to share some of the data we have around the storage infrastructure that handles over 2.5 billion photos for MSN Spaces and over 400 million contact lists with over 8 billion contacts for Hotmail and MSN Messenger. Too bad the series is over. Of course, I probably wouldn't have gotten the OK from PR to share the info anyway. :)


 

While I was in the cafeteria with Mike Vernal this afternoon I bumped into some members of the Windows Desktop Search team. They mentioned that they'd heard that I'd decided to go with Lucene.NET for the search feature of RSS Bandit instead of utilizing WDS. Much to my surprise they were quite supportive of my decision and agreed that Lucene.NET is a better solution for my particular problem than relying on WDS. In addition, they brought an experienced perspective to a question that Torsten and I had begun to ask ourselves. The question was how to deal with languages other than English.

When building a search index, the indexer has to know which stop words it shouldn't index (e.g. a, an, the) as well as have some knowledge about word boundaries. Where things get tricky is that a user can receive content in multiple languages: you may receive email in Japanese from some friends and English from others. Similarly you could subscribe to some feeds in French and others in Chinese. Our original thinking was that we would have to figure out the language of each feed and build a separate search index for each language. This approach seemed error-prone for a number of reasons

  1. Many feeds don't provide information about what language they are in
  2. People tend to mix different languages in their speech and writing. Spanglish anyone?

The Windows Desktop Search folks advised that instead of building a complicated solution that wasn't likely to work correctly in the general case, we should consider simply choosing the indexer based on the locale/language of the operating system. This is already what we do today to determine what language to display in the UI, and we have considered allowing users to change the UI language in the future, which would also affect the search indexer [if we chose this approach]. This assumes that people read feeds primarily in the same language that they chose for their operating system. This seems like a valid assumption but I'd like to hear from RSS Bandit users whether this is indeed the case. 
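For what it's worth, a rough sketch of what choosing the indexer from the OS language could look like is below. This is not the actual RSS Bandit or Windows Desktop Search code; the stop word lists are placeholder fragments, and it assumes the Lucene.NET StandardAnalyzer overload that takes a stop word array.

    // Sketch: pick the Lucene.NET analyzer (and stop word list) from the OS UI culture
    // rather than trying to detect the language of each individual feed or email.
    using System.Globalization;
    using Lucene.Net.Analysis;
    using Lucene.Net.Analysis.Standard;

    static class SearchIndexFactory
    {
        // Placeholder stop word fragments; a real implementation would ship full lists per language.
        static readonly string[] EnglishStopWords = { "a", "an", "the", "and", "of" };
        static readonly string[] FrenchStopWords  = { "le", "la", "les", "un", "une", "et" };
        static readonly string[] GermanStopWords  = { "der", "die", "das", "und", "ein" };

        public static Analyzer CreateAnalyzerForCurrentLocale()
        {
            // The same culture that decides which language the UI is displayed in.
            string language = CultureInfo.CurrentUICulture.TwoLetterISOLanguageName;

            switch (language)
            {
                case "fr":
                    return new StandardAnalyzer(FrenchStopWords);
                case "de":
                    return new StandardAnalyzer(GermanStopWords);
                // Languages without whitespace word boundaries (e.g. Japanese or Chinese)
                // would need a dedicated analyzer, not just a different stop word list.
                default:
                    return new StandardAnalyzer(EnglishStopWords);
            }
        }
    }

The nice property of this approach is that the choice is made once, in one place, and the rest of the indexing code doesn't need to care what language it is dealing with.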

If you use the search features of RSS Bandit, I'd appreciate getting your feedback on this issue.


 

Categories: RSS Bandit | Technology

In his post Exploring Live Clipboard, Jon Udell posts a screencast he made about Live Clipboard. He writes

I've been experimenting with microformats since before they were called that, and I'm completely jazzed about Live Clipboard. In this screencast I'll walk you through examples of Live Clipboard in use, show how the hCalendar payload is wrapped, grab hCalendar data from Upcoming and Eventful, convert it to iCalendar format for insertion into a calendar program, inject it natively into Live Clipboard, and look at Upcoming and Eventful APIs side-by-side.

All this leads up to a question: How can I copy an event from one of these services and paste it into another? My conclusion is that adopting Live Clipboard and microformats will be necessary but not sufficient. We'll also need a way to agree that, for example, this venue is the same as that venue. At the end, I float an idea about how we might work toward such agreements.

The problem that Jon Udell describes is a classic problem when dealing with mapping data from different domains. I posted about this a few months ago in my post Metadata Quality and Mapping Between Domain Languages where I wrote

The problem Stefano has pointed out is that just being able to say that two items are semantically identical (i.e. an artist field in dataset A is the same as the 'band name' field in dataset B) doesn't mean you won't have to do some syntactic mapping as well (i.e. alter artist names of the form "ArtistName, The" to "The ArtistName") if you want an accurate mapping.

This is the big problem with data mapping. In Jon's example, the location is called Colonial Theater in Upcoming and Colonial Theater (New Hampshire) in Eventful. In Eventful it has a street address while in Upcoming only the street name is provided. Little differences like these are what make data mapping a hard problem. Jon's solution is for the community to come up with global identifiers for venues as tags (e.g. Colonial_Theater_NH_03431) instead of waiting for technologists to come up with a solution. That's good advice because there really isn't a good technological solution for this problem. Even RDF/Semantic Web junkies like Danny Ayers in posts like Live clipboard and identifying things start with assumptions like every venue having a unique identifier which is its URI. Of course, this ignores the fact that coming up with a global, unique identification scheme for the Web is the problem in the first place. The problem with Jon's approach is the same one that is pointed out in almost every critique of folksonomies: people won't use the same tags for the same concept. Jon might use Colonial_Theater_NH_03431 while I use Colonial_Theater_95_Maine_Street_NH_03431, which leaves us with the same problem of inconsistent identifiers being used for the same venue. 

I assume that for the near future we'll continue to see custom code being written to make data integration across domains work. Unfortunately, no developments on the horizon look promising in making this problem go away.
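As a rough illustration of what that custom glue code tends to look like, here is a small sketch of the kind of string-level normalization that "syntactic mapping" implies. The rules below are hypothetical examples I've made up for illustration, not part of any real mapping library.

// A toy sketch of syntactic mapping: even after two fields are known to mean
// the same thing, the values still need string-level fix-ups before they
// compare equal. The rules are illustrative, not a real API.
using System;

static class SyntacticMapper
{
    // "Beatles, The" -> "The Beatles"
    public static string NormalizeArtist(string name)
    {
        const string suffix = ", The";
        return name.EndsWith(suffix, StringComparison.OrdinalIgnoreCase)
            ? "The " + name.Substring(0, name.Length - suffix.Length)
            : name;
    }

    // "Colonial Theater (New Hampshire)" -> "colonial theater", so one site's
    // qualified name and another's bare name at least have a chance of matching.
    // Real integrations pile up many ad hoc rules like these.
    public static string NormalizeVenue(string name)
    {
        int paren = name.IndexOf('(');
        if (paren >= 0) name = name.Substring(0, paren);
        return name.Trim().ToLowerInvariant();
    }
}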

PS: Ray Ozzie has a post on some of the recent developments in the world of Live Clipboard in his post Wiring Progress, check it out.


 

Categories: Technology | Web Development

February 13, 2006
@ 04:54 PM

Last month, Dave Winer wrote the following in a post on his blog

Today's been a day for epiphanies, small and large. A small one is that tech.memeorandum.com is not really about technology, it's about the business of technology. Actually it's narrower than that, it's the West Coast-centered technology business. I'd love to see a Memeorandum-like service that focused on technology, the ones and zeroes, and left out the fluff and the bubbles.

I agree 100% with Dave Winer here. I think the concept and implementation of tech.memeorandum.com is nothing short of fantastic. On the other hand, the content typically leaves much to be desired. For example, this morning's top story is that some analysts are now pessimistic on Google's stock price because they just realized it has competitors like Microsoft and Yahoo. As a technology geek, I couldn't care less about such mindless crap. Like Dave Winer, I'm more interested in what programmer types are currently geeking about as opposed to what pundits pontificating on Google vs. Microsoft vs. Yahoo are gabbing about.

I gather that the tech.memeorandum.com algorithm is based on figuring out what the current hot topics are among certain A-list bloggers. The problem with this is that most A-list technology bloggers are pundits, not technologists. This means they spend most of their time talking about technology companies, not technology. There's a big difference between what you'll find on Robert Scoble or John Battelle's blogs versus what you'll see on the blog of Simon Willison or Don Box. I personally would rather see a tech.memeorandum.com that was based on showing me what was hot amongst technology implementers like Simon and Don versus among technology watchers like Scoble and Battelle.  

Ideally, I should be able to customize tech.memeorandum.com so I can treat it as a personal aggregator. I'd love to be able to provide it the OPML for blogs.msdn.com and have it show me what the hot topics were among Microsoft bloggers. From my perspective,  tech.memeorandum.com is a good implementation of a great idea. However it is just the beginning. I wonder who will be first to take it to the next level and enable us to build personalized meme trackers? 
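As a sketch of how such a personalized meme tracker could work under the hood, the following toy program takes an OPML file of feed subscriptions and tallies which URLs are being linked to most often across the latest posts. Real services would add clustering, deduplication and time decay; the file name and the assumptions baked in here (RSS 2.0 feeds, links embedded in item descriptions) are mine, not anything memeorandum actually documents.

// A rough sketch of a personalized meme tracker: given an OPML file of feeds,
// count which URLs are linked to most across the latest posts.
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;
using System.Xml;

class MemeTracker
{
    static void Main()
    {
        var linkCounts = new Dictionary<string, int>();
        var opml = new XmlDocument();
        opml.Load("subscriptions.opml"); // e.g. the OPML for a set of MSDN blogs

        foreach (XmlNode outline in opml.SelectNodes("//outline[@xmlUrl]"))
        {
            var feed = new XmlDocument();
            try { feed.Load(outline.Attributes["xmlUrl"].Value); }
            catch (Exception) { continue; } // skip unreachable feeds

            // Pull hyperlinks out of each item's description (RSS 2.0 assumed).
            foreach (XmlNode desc in feed.SelectNodes("//item/description"))
            {
                foreach (Match m in Regex.Matches(desc.InnerText,
                         "href=\"(http[^\"]+)\"", RegexOptions.IgnoreCase))
                {
                    string url = m.Groups[1].Value;
                    linkCounts[url] = linkCounts.ContainsKey(url)
                        ? linkCounts[url] + 1 : 1;
                }
            }
        }

        // The most-linked URLs are this particular community's "hot topics".
        var ranked = new List<KeyValuePair<string, int>>(linkCounts);
        ranked.Sort((a, b) => b.Value.CompareTo(a.Value));
        for (int i = 0; i < Math.Min(10, ranked.Count); i++)
            Console.WriteLine("{0} ({1} links)", ranked[i].Key, ranked[i].Value);
    }
}

The point is that the hard part isn't the counting; it's choosing whose links count, which is exactly the editorial decision that makes today's tech.memeorandum.com feel like punditry.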


 

Categories: Technology

From the official Google Blog we find the post All buttoned up which informs us that

As the Google Toolbar has gotten more popular, the greatest source of ideas about new features has come from our users. The breadth and variety of these requests is so large that it's hard to satisfy everyone. But then we started noticing engineers on the team had cool hacks on their Toolbars for doing customized searches on our internal bugs database, corporate employee directory, etc... We were barely done asking ourselves whether it was possible to offer this capability in the new Google Toolbar beta when one of the engineers started designing a feature called Custom Buttons. Here are some of the coolest aspects of Custom Buttons and why I think they're a big deal:

1) Simple API: The term API is almost a misnomer -- it literally takes seconds to make one of these. I just can't resist the urge to make a new one every time I run into new website. A couple of simple steps and voila - a new button's sitting on your Toolbar (check out the Getting Started Guide).

2) Flexibility: The simple inclusion of RSS & Atom feeds (and particularly allowing the update of toolbar button icons through feeds) has allowed for buttons like a weather button and a mood ring button.

3) Accessibility: Most users don't even need to make buttons. It takes one click on our buttons gallery or on a website that offers them to install a button for your favorite sites. And the custom buttons we built to search our intranet showed us how valuable a customizable toolbar can be to organizations, so now there's an enterprise version of Google Toolbar that can be securely deployed across a company.

I use the Google toolbar quite frequently when performing searches and one of my biggest gripes is that it doesn't give me the option of using Google Music Search for my searches. So when I found out about the new version of the toolbar, I downloaded it and clicked on "Add Search Type" which took me to the Google Toolbar Button Gallery. Guess what? There's no option for adding Google Music Search to my list of default search types.

So I tried reading the documentation on Getting Started with the Google Toolbar API so I could figure out how to add it myself and came up short.  The entire API seems to assume that some stuff gets installed in my right-click menu in Internet Explorer which doesn't seem to be the case. I wonder if I need to reboot to get the changes to show up? Bah. Now I feel irritated that I just wasted 15 minutes of my time on this. 


 

Categories: Technology

There are a couple of contentious topics I tend not to bother debating online because people on both sides of the argument tend to have entrenched positions. The debate on abortion in the U.S. is an example of such a topic. Another one for me is DRM and its sister topics: piracy, copyright infringement and file sharing networks.

Shelley Powers doesn't seem to share my aversion to these topics and has written an insightful post entitled Debate on DRM which contains the following excerpt

Doc Searls points to a weblog post by the Guardian Unlimited’s Lloyd Shepherd on DRM and says it’s one of the most depressing things he’s read. Shepherd wrote:

I’m not going to pick a fight with the Cory Doctorows of the world because they’re far more informed and cleverer than me, but let’s face it: we’re going to have to have some DRM. At some level, there has to be an appropriate level of control over content to make it economically feasible for people to produce it at anything like an industrial level. And on the other side of things, it’s clear that the people who make the consumer technology that ordinary people actually use - the Microsofts and Apples of the world - have already accepted and embraced this. The argument has already moved on.

Doc points to others making arguments in refutation of Shepherd’s thesis (Tom Coates and Julian Bond), and ends his post with:

We need to do with video what we’ve started doing with music: building a new and independent industry...


I don’t see how DRM necessarily disables independents from continuing their efforts. Apple has invested in iTunes and iPods, but one can still listen to other formats and subscribe to other services from a Mac. In fact, what Shepard is proposing is that we accept the fact that companies like Apple and Google and Microsoft and Yahoo are going to have these mechanisms in place, and what can we do to ensure we continue to have options on our desktops?

There’s another issue though that’s of importance to me in that the concept of debate being debated (how’s this for a circular discussion). The Cluetrain debate method consists of throwing pithy phrases at each other over (pick one): spicey noodles in Silicon Valley; a glass of ale in London; something with bread in Paris; a Boston conference; donuts in New York. He or she who ends up with the most attention (however attention is measured) wins.

In Doc’s weblog comments, I wrote:

What debate, though? Those of us who have pointed out serious concerns with Creative Commons (even demonstrating problems) are ignored by the creative commons people. Doc, you don’t debate. You repeat the same mantra over and over again: DRM is bad, openness is good. Long live the open internet (all the while you cover your ears with your hands and hum “We are the Champions” by Queen under your breath).

Seems to me that Lloyd Shepherd is having the debate you want. He’s saying, DRM is here, it’s real, so now how are we going to come up with something that benefits all of us?

Turning around going, “Bad DRM! Bad!” followed by pointing to other people going “Bad DRM! Bad!” is not an effective response. Neither is saying how unprofitable it is, when we only have to turn our little eyeballs over to iTunes to generate an “Oh, yeah?”

Look at the arguments in the comments to Shepherd’s post. He is saying that as a business model, we’re seeing DRM work. The argument back is that the technology fails. He’s talking ‘business’ and the response is ‘technology’. And when he tries to return to business, the people keep going back to technology (with cries of ‘…doomed to failure! Darknet!’).

The CES you went to showed that DRM is happening. So now, what can we do to have input into this to ensure that we’re not left with orphaned content if a particular DRM goes belly up? That we have fair use of the material? If it is going to exist, what can we do to ensure we’re not all stuck with betamax when the world goes VHS?

Rumbles of ‘darknet’, pointers to music stores that feature few popular artists, and clumsy geeky software as well as loud hyperbole from what is a small majority does not make a ‘debate’. Debate is acknowledging what the other ’side’ is saying, and responding accordingly. Debate requires some openness.

There is reason to be concerned about DRM (Digital Rights Management–using technology to restrict access to specific types of media). If operating systems begin to limit what we can and cannot use to view or create certain types of media; if search engine companies restrict access to specific types of files; if commercial competition means that me having an iPod, as compared to some other device, limits the music or services at other companies I have access to, we are at risk in seeing certain components of the internet torn into pieces and portioned off to the highest bidders.

But by saying that all DRM is evil and that only recourse we have is to keep the Internet completely free, and only with independents will we win and we will win, oh yes we will–this not only disregards the actuality of what’s happening now, it also disregards that at times, DRM can be helpful for those not as well versed in internet technologies.

I tend to agree with Shelley 100% [as usual]. As much as the geeks hate to admit it, DRM is here to stay. The iTunes/iPod combination has shown that consumers will accept DRM in situations where they are provided value and that the business model is profitable. Secondly,  as Lloyd Shepherd points out,  the major technology companies from Microsoft and Intel to Apple and Google are all building support for DRM in their products for purchasing and/or consuming digital media.

Absolutists who argue that DRM is evil and should be shunned are ignoring reality. I especially despise arguments that are little more than throwing around dogmatic, pithy phrases such as "information wants to be free" and other such mindless drivel. If you really think DRM is the wrong direction, then create the right direction by proposing or building a workable alternative that allows content creators to get paid without losing their rights. I'd like to see more discussions in the blogosphere like Tim Bray's On Selling Art instead of the kind of crud perpetuated by people like Cory Doctorow which made me stop reading Boing Boing.

PS: There's also a good discussion going on in the comments to Shelley's blog post. Check it out.


 

Categories: Technology

January 3, 2006
@ 07:31 PM

It's another year, which means it's soon going to be time to figure out which conferences I'll be attending over the next few months. So far, three conferences have come up on my radar and I suspect I'll attend at least two of them. The conferences in order of my likelihood of attending them are

  1. VSLive: A conference for Visual Studio developers. I'll likely be there with other folks from MSN Windows Live to talk about the various APIs we provide and perhaps give hints or details on some of our upcoming API plans.

  2. ETech: I attended this conference last year and found it extremely valuable. There were developers from small and large Web companies talking about technical issues they had faced while delivering services on the Web as well as announcing cool new offerings. The list of speakers is great: Danah Boyd, Joel Spolsky, Kathy Sierra, Sam Ruby, Jon Udell, Simon Willison and Ray Ozzie. I don't plan to miss this one. 

  3. MIX: This is a Microsoft conference that will focus on our hip, Web-based offerings like IE7, Windows Media, Windows Live!, as well as "Atlas", Microsoft’s new AJAX framework. Given that I'll already have lost a week of work by attending ETech and I won't really be learning anything I can't find on internal websites by attending the conference, I'll probably miss this one. Of course, if my workload is light and/or I'm told I'll be speaking I might end up attending.

If you'll be at any of these conferences and would like to meet up to shoot the breeze about mindless geekery, holla at me. Also what other interesting Web geek conferences are out there?


 

Categories: Technology

Kurt Cagle has a post entitled Open Standards and Organic Foods which begins

A question was posed to me recently concerning exactly what I meant when I talked about open standards, and how they differed from open source. In reviewing some of my previous postings, one of the things that I realized was that while I had offered up a number of definitions in passing, there really wasn't any single, stock answer that I or others had seen for what exactly open standards mean. Moreover, a lot of people tend to look at open standards with a somewhat jaundiced eye, as if it was simply one more marketing label in a field that is already way oversaturated with marketing buzzwords - they didn't understand why open standards were important, or they didn't understand the distinction between open source and open standards.

The software industry is now full of buzzwords and buzz phrases that are so ambiguous that if you ask five people what they mean you are likely to get ten different definitions. The problem this causes is that people often talk past each other even when they use the same words or, even worse, miscommunicate because of incorrect underlying assumptions about the conversation. Examples of such ambiguous buzz phrases include: web 2.0, service oriented architecture and standards.

Some people I've talked to about this are surprised that I add 'standards' to this list. However the definition of what constitutes a 'standard' is in the eye of the beholder. About a year and a half ago, I wrote a blog post entitled Are Standards in the Software Industry a Chimera? which stated 

The word "standard' when it comes to software and computer technology is usually meaningless. Is something standard if it produced by a standards body but has no conformance tests (e.g. SQL)? What if it has conformance testing requirements but is owned by a single entity (e.g. Java)? What if it is just widely supported with no formal body behind it (e.g. RSS)?

For every one of the technologies mentioned above (RSS, Java, and SQL) you'll find people who will argue that they are standards and people who will argue that they aren't. SQL is produced by a standards body and has a number of formal specifications, but since there are no conformance requirements most database vendors have embraced and extended it. It is difficult to write non-trivial SQL queries that will work across Microsoft's SQL Server, MySQL, Oracle's databases and IBM's DB2. The Java programming language and platform is supported by a number of vendors and has rigid conformance tests which make the statement "write once, run anywhere" true for the most part; however, it is a proprietary technology primarily controlled by Sun Microsystems. Content syndication using RSS 0.9x/RSS 2.0 feeds is the most popular web service on the planet, but the specifications were primarily authored and controlled by a single individual and have no formal standards body or corporation backing them to this day. In each case, the technology is 'standard' enough for there to be thriving markets around them with multiple vendors providing customers with valuable services.

From a customer perspective, standards are a means to an end and in this case the goal of standards is to prevent vendor lock-in. As long as users can choose between multiple RSS readers or developers can choose between multiple Java implementations, there is enough standardization for them. Where things become contentious is that there are multiple ways to get to the same solution (lack of lock-in).

"Open standards" are even more ambiguous since [as an industry] we don't even have a clear idea of what constitutes a standard. I read through Kurt Cagle's post and he never actually ends up defining "Open Standard" beyond providing analogies and rationales for why he believes in them. An interesting statement that Kurt makes in his post is the following

I suspect that in many ways the open standards movement is, at its core, a reaction to the rather virulent degenerate capitalism that exists today, in which a person can profit far out of proportion to the amount of work that they do, usually at the expense of many others who lose disproportionately to their work load.

The notion of 'profiting in proportion to your work' is pretty bogus and foreign to capitalism. Capitalism is all about the value of your work to others, not how much work you put in. A minor league baseball player doesn't work an order of magnitude less than a major league baseball player yet he makes that much less. A multiplatinum recording artist doesn't work an order of magnitude harder than local bands trying to get big but makes that much more. It may not sound fair but that's capitalism. In recent centuries humans have experimented with other socio-economic movements that are more 'fair' but so far capitalism is what has stuck. </digression>

Anyway, my point is that buzz phrases like "standards", "service oriented architecture" and "web 2.0" have definitions so diluted and ambiguous as to be effectively meaningless in technical discourse. People who've been in the industry for a while eventually learn to filter out these phrases [and often the people speaking them as well] when engaged in technical discourse. If you are a technical person you probably should be more explicit about what you mean by using phrases such as "freely implementable and patent unencumbered", "SOAP-based web services" and "AJAX powered website" in place of the aforementioned buzz phrases. Oh, and if those don't match up to what you mean when you use the buzz phrases, that just proves my point about their ambiguity.


 

Categories: Technology

December 19, 2005
@ 05:56 PM

Robert Scoble has a post entitled Riya not recognized by Google where he recommends that Microsoft look into purchasing Riya.com. He writes

I’ve heard many rumors about Riya over the past few weeks. One strong rumor, reported by Om Malik, among others, was that Riya was getting purchased by Google.

I know our M&A guys had met with Riya too and had passed on the deal after negotiations got too expensive (translation someone else had bid more than we were willing to pay). So, I was suprised that during the past few days I had heard that Riya’s deal with Google wasn’t going to happen.

Today Munjal, Riya’s CEO, said on his blog that they were going to continue on as an independent firm and that the rumors are incorrect.

This is actually very good for Microsoft and Yahoo. Why? Cause this team is high quality and the technology is great (I’ve been using the alpha recently and like it a lot).

Now, why doesn’t Microsoft purchase them? Well, I’ve been in contact with our M&A folks. We have a lot of NIH syndrome here cause we have similar technology that our research teams have developed. I’ve seen our photo/face recognition capabilities and they are pretty cool too and, indeed, are better in some areas and not as good in others.

I have a couple of opinions here but mostly it is advice to Robert given that I've been recently involved in acquisition related discussions as part of my day job. My main thought about Google passing on Riya is that I expected this given that they demoed their in-house image recognition software at the Web 2.0 conference. Thus purchasing Riya would primarily be about augmenting their internal development team which reduces the value of the deal to Google.

From a Microsoft perspective, I'd expect to see a bunch of NIH as well. We have folks who are doing quite a lot in the area of digital photography at Microsoft Research. You can read about some of the work in articles like MSR's Life of a Digital Photo or view demos about interesting work in Object class recognition from the Cambridge arm of Microsoft Research. I don't think I've seen anything as cool as the stuff demoed by Riya and Google but we do have some interesting stuff cooking at Microsoft Research nonetheless. 

The problem for Scoble is finding a product team that actually thinks what Riya is doing is valuable enough to make it worth however many tens of millions of dollars the VCs think the company is worth. Simply saying "What they are doing is cool" isn't enough. The folks at MSR face the same problems and the fact that there aren't lots of transitions from cool research demo to Microsoft product shows just how difficult this process can be. 


 

Categories: Technology

December 17, 2005
@ 05:15 PM

A friend of mine called me yesterday to ask for my opinions on the fact that Meebo just got a few million dollars in funding. For those not in the know, Meebo is an AJAX version of Trillian. And what is Trillian? It's an instant messaging client that supports AIM, ICQ, MSN, Yahoo Messenger, and IRC.

In his post How Much Did Meebo Get? Om Malik asks

Here is the rub: Since the company basically aggregates all four major IM networks in a browser, all the four major IM owners - AMYG are out of the acquisition game. One of them buys the company, the others shut down access to their respective networks. The very quality that makes Meebo attractive to end-users will make it difficult for them to be acquired. But there is one option: eBay. When all fails, you know who to call. Skype did. Interactive Corp is another long shot, but they are bargain hunters not premium payers.

Regarding acquisitions, there are three reasons why one of the major players would buy a company: users, technology and people. Unless the startup is in a brand new market that the major player isn't playing in, buying a company for its users is usually not the case. This is because big players like Google, Yahoo! and Microsoft usually either have orders of magnitude more users than the average 'popular' startup or could get just as many or more users when they ship a rival service. The more common reason for a big player like Microsoft or Yahoo! buying a company is for exclusive technology/IP and for the development team. Yahoo! buying del.icio.us or Flickr isn't about getting access to the 250,000 - 300,000 users of these services given that they have fewer users than the least popular services on Yahoo!'s network. Instead it's about getting people like Joshua Schachter, Caterina Fake and Stewart Butterfield building the next generation of Yahoo!'s products. Don Dodge covers this in slightly more detail in his post Microsoft will acquire my company.

Let's talk about Meebo specifically. The user base is too small to be a factor so the interesting things are the technology and the people. First we should look at the technology. An AJAX instant messaging client isn't novel and companies like Microsoft have been providing one for years. A framework built on reverse engineering IM protocols is cool but not that valuable. As Om Malik points out, the major players tolerate companies like Meebo and Trillian because it would be counterproductive for [for example] AOL to sue a tiny company like Trillian for misusing its network. On the other hand, they wouldn't tolerate it from a major player like Microsoft, primarily because that becomes a significant amount of traffic on their network and licensing access becomes a valid revenue generating scenario. Thus, the technology is probably not worth a great deal to one of the big players. That leaves the people. According to the Meebo team page there are three people: a server dev, a DHTML/AJAX dev and a business guy (likely to be useless overhead in an acquisition). The question then is how many millions of dollars Google, Yahoo! or Microsoft would think the skills of both [most likely excellent] developers are worth. Then you have to factor in the markup because the company got VC funding...

You can probably tell that I agree with Om Malik that it is unlikely that this company would be of interest to any of the four major IM players.

If you are building a Web startup with the intention of flipping it to one of the majors, only three things matter: technology/IP, users and the quality of your technical team. Repeatedly ask yourself: would Microsoft want our users? Would Google want our technology? Would Yahoo! want our people?

It's as simple as that.


 

Categories: Technology

December 2, 2005
@ 06:03 PM

James Robertson has a blog post entitled The $100 notebook where he writes

Here's another breathless story on the $100 (actually, looks like it will be $200) notebook. There's some cool things about this, including the fact that it can be powered by a hand crank. However, there are a number of simple problems too -

  • For the truly poor, access to laptops isn't a solution. Access to clean water is way, way higher on the scale
  • Tech support. Ok - you hand out a few hundred in some remote village. What the heck do the new users do when there are problems?

This is a pie in the sky solution, IMHO. It's like deciding to hand out cheap cars, and only later noticing that there are no gas stations for the recipients to use. I understand that the people behind this are well intentioned - but laptops are only useful when there's a hell of a lot of other infrastructure supporting them. The well intentioned folks behind this plan need to aim a lot lower.

Attitudes like this really, really irritate me. The same way that there are rich people and poor people in the United States, there are also parts of Africa that are less well off than others. It isn't all one freaking desert with bony kids surrounded by flies from South Africa to Algeria. For example, in Nigeria there are probably more cell phones per capita in the major cities than in most parts of the United States. The fact that some people get to use the latest iPods and iBooks in the U.S. doesn't mean there aren't homeless bums eating less than three square meals and sleeping on the streets in the same zip codes. Yet I don't see folks like James Robertson posting about how every homeless person has to be housed and every orphan found foster parents before we can enjoy iPods and laptop PCs.

If the plight of people in Africa bothers you so much, then instead of criticizing those who are making an honest attempt to help with your "armchair quarterbacking", why don't you contribute some of your time & dollars? Put your money where your mouth is.


 

Categories: Technology

Due to the Thanksgiving holiday, I've spent the past day and a half with a large variety of young people whose ages range from 11 to 22 years old. The various conversations I've participated in and overheard have cemented some thoughts I've had about competition in the consumer software game.

This series of thoughts started with a conversation I had with someone who works on MSN Windows Live Search. We talked about the current focus we have on 'relevance' when it comes to our search engine. I agree that it's great to have goals around providing users with more relevant results but I think this is just one [small] part of the problem. Google rose to prominence by providing a much better search experience than anyone else around. I think it's possible to build a search engine that is as good as Google's. I also think it's possible to build one that is a little better than they are at providing relevant search results. However I strongly doubt that we'll see a search engine much better than Google's in the near future. I think that in the near future, what we'll see is the equivalent of Coke vs. Pepsi. Eventually, will we see the equivalent of the Pepsi Challenge with regard to Web search engines? Supposedly, the Pepsi Challenge shows that people prefer Pepsi to Coke in a blind taste test. However the fact is Coca-Cola is the world's #1 soft drink, not Pepsi. A lot of this is due to Coke's branding and pervasive high quality advertising, not the taste of their soft drink. 

Google's search engine brand has gotten to the point where it is synonymous with Web search in many markets. With Google, I've seen a 7-year old girl who was told she was being taken to the zoo by her parents rush to the PC to 'Google' the zoo to find out what animals she'd see that day. That's how pervasive the brand is. It's like the iPod and portable MP3 players. People ask for iPods for Xmas, not MP3 players. When I get my next portable MP3 player, I'll likely just get a video iPod without even bothering to research the competition. Portable audio used to be synonymous with the Sony Walkman until the game changed and they got left behind. Now that portable audio is synonymous with MP3 players, it's the Apple iPod. I don't see them being knocked off their perch anytime soon unless another game changing transition occurs.

So what does this mean for search engine competition and Google? Well, I think increasing a search engine's relevance to become competitive with Google's is a good goal but it is a route that seems guaranteed to make you the Pepsi to their Coke or the Burger King to their McDonalds. What you really need is to change the rules of the game, the way the Apple iPod did.

The same thing applies to stuff I work on in my day job. Watching an 11-year old spend hours on  MySpace and listening to college sorority girls talk about how much they use The Facebook, I realize we aren't just competing with other software tools and trying to build more features. We are competing with cultural phenomena. The MSN Windows Live Messenger folks have been telling me this about their competition with AOL Instant Messenger in the U.S. market and I'm beginning to see where they are coming from. 


 

Jeremy Epling responds to my recent post entitled Office Live: Evolve or Die with some disagreement. In his post Web Versions of Office Apps, Jeremy writes

In his post Office Live: Evolve or Die Dare Obasanjo a writes

I can understand the Office guys aren’t keen on building Web-based versions of their flagship apps but they are going to have get over it. They will eventually have to do it. The only question is whether they will lead, follow or get the hell out of the way.

I agree and disagree with Dare on this one. I agree because I think OWA should have been built and has a 2 compelling reasons to be Web based.

  1. Our increasing mobile population needs quick and easy anywhere access to communication. This is satisfied by a Web based app because a PC only needs a Web browser to “open” you mail app.
  2. Email is already stored on a server.

I disagree with Dare because of the two OWA advantages I listed above don’t equally apply to the other Office apps.

I don’t think anywhere access to document creation/editing is one of Office’s customer’s biggest pain points. Since it is not a major pain point it does not warrant investment because the cost of replicating all the Office flag ships apps as AJAX Web apps is too high.

There's a lot to disagree with in Jeremy's short post. In the comments to my original post, Jeremy argued that VPN software makes AJAX web apps unnecessary. However, it seems he has conceded that this isn't true given the existence of Outlook Web Access. Considering that our increasingly mobile customers can use the main Outlook client either through their VPN or even over straight HTTP/HTTPS using the RPC over HTTP feature of Exchange, it is telling that many instead choose to use OWA.

Let's ignore that contradiction and just stick strictly to the rules Jeremy provides for deciding that OWA is worth doing but Web-based versions of Excel or Word are not. Jeremy's first point is that increasingly mobile users need access to their communications tools. I agree. However I also believe that people need 'anywhere' access to their business documents as well. As a program manager, most of my output is product specifications and email (sad but true). I don't see why I need 'anywhere' access to the latter but not the former. Jeremy's second point is that corporate email is already stored on a server. First of all, in my case our team's documents are stored on a SharePoint server. Secondly, even if all my documents were on my local machine, that doesn't change the fact that ideally I should be able to access them from another machine without needing VPNs and the same version of Office on the machine I'm on. In fact, Orb.com provides exactly this 'anywhere' access to digital media on my PC and it works great. Why couldn't this notion be extended to my presentations, spreadsheets and documents as well? 

Somebody is eventually going to solve this problem. As a b0rg employee, I hope it's Microsoft. However if we keep resisting the rising tide that is the Web, maybe it'll be SalesForce.com or even Google that eats our lunch in this space.


 

Categories: Technology

November 15, 2005
@ 04:03 PM

I rarely repost comments I made in other blogs but this is one of those times. In his post I gave Douglas Englebart a mouse and a book Robert Scoble writes

It all started earlier this afternoon when Buzz Bruggeman asked me in an email “want to have dinner with Douglas Englebart?”

First of all, if you don’t know who Douglas Englebart is you better do some reading. He invented the mouse and many of the concepts that you are now using to read my words. And he did that 40 years ago. Yes, he was that far ahead.
...
Some key things stuck with me.

1) Doug is a frustrated inventor. He was frustrated over and over again during his career by people who just didn’t get his ideas. 2) He says he has many ideas that he hasn’t shared yet. We talked about the way the system could change from how it sees what you’re paying attention to, for instance. 3) He repeated for us the creation of the mouse. Said they still don’t know who came up with the name “mouse.” That was the part of the dinner I filmed. 4) He challenged the business people at the table (specifically looking at Andy and me) to come up with a way to increase the speed that innovations get used. He didn’t say it, but his eyes told me that taking 25 years for the world to get the mouse was too long and his career would have been a lot more interesting if people could have gotten his ideas quicker. I told him that ideas move around the world a lot faster now due to blogs and video (imagine trying to explain what Halo 2 was going to look like if all you had to describe it was ASCII text).

The most interesting writing I have seen on the adoption of ideas is Malcolm Gladwell's The Tipping Point. I read it this summer as part of my exercise in figuring out why MySpace was so successful, and the book was full of insight and interesting examples of idea propagation. If Doug hasn't read it, I'd suggest that he read it.

I agree that it takes too long for innovations to make it into the mainstream. Ethernet, SQL databases, and object-oriented languages running on garbage-collected VMs are all two or three decades old but only started really affecting the mainstream in the last decade. AJAX, which is all the rage this year, was invented last century. Dave Winer first started talking about payloads for RSS in 2001 but podcasting only took off over the last year.

We are closing the gap from innovation to adoption but it definitely could be better. I agree with Robert that blogs and other forms of mass communication being available to the general public will only accelerate this trend.


 

Categories: Technology

It is interesting to see people rediscover old ideas. Robert Scoble has a post entitled Silicon Valley got my attention: the future of Web businesses where he writes

What is Zvents capturing? Well, they know you like football. They know you probably are in San Francisco this weekend. And, if you click on one or two of the events, they know you’re interested in them. Now, what if you see an ad for a pair of Nikon binoculars. If you click on that, then Zvents would be able to capture that as well.

Now, what other kinds of things might football fans, who are interested in binoculars, who are in San Francisco, want to do this weekend? Hmmm, Amazon sure knows how to figure that kind of problem out, right? (Ever buy a Harry Potter book on Amazon? They suggest other books for you to buy based on past customer behavior!!!)

It goes further. Let’s say this is 2007. Let’s say that Google (or Yahoo or MSN) has a calendar "branding" gadget out. Let’s say they have a video "monetization" gadget out. Zvents could build the calendar "branding" gadget into their page. What would they get out of that? Lots of great PR, and a Google (or MSN or Yahoo) logo in everyone’s face. But, they would also know where you’d be this weekend. Why? Cause you would have added the 49ers football game to your calendar. So, they would know where you are gonna be on Sunday. And, that you just bought binoculars. Over time Google/MSN/Yahoo would be able to learn even more about you and bring you even more ads. How?
...
It’s all attention. So, now, what if Zvents and Google shared their attention with everyone through an API. Now, let’s say I start a new Web business. Let’s call it "Scoble’s tickets and travel." You come to my site to book a trip to London, let’s say. Well, now, what do I know about you? I know you were in San Francisco, that you like coffee, that you just bought some binoculars, that you like football. So, now I can suggest hotels near Starbucks and I can suggest places where you’ll be able to use your binoculars (like, say, that big wheel that’s in the middle of London). Even the football angle might come in handy. Imagine I made a deal with the local soccer team. Wouldn’t it be useful to put on my page "49ers fans get $10 off European football tickets."

Four years ago, while interning at Microsoft, I saw a demo about Hailstorm which made me suspect the project's days were numbered. The demo involved a scenario very similar to what Robert describes in his post. Just substitute Zvents with "online CD retailer" and the calendar gadget with an "upcoming concerts gadget" and Robert's scenario is the Hailstorm demo I saw.

The obvious problem with this "Attention API" and Hailstorm is that it requires a massive database of customer behavior. At the time, Microsoft's pitch with Hailstorm was that online retailers and other potential Hailstorm partners should give all this juicy customer data to Microsoft and then pay to access it. It doesn't take a rocket scientist to tell that most of them told Microsoft to take a long walk off a short pier.

Now let's use a more concrete example, like Amazon. The folks at Amazon know exactly what kind of movies, music and books I like. It's possible to imagine them making a deal with TicketMaster to show me upcoming concerts I may be interested in when I visit their site. The reverse is also possible, Amazon may be able to do a better job of recommending music to me based on concerts I have attended whose tickets I purchased via TicketMaster.

What's the first problem that you have to solve when trying to implement this? Identity. How do you connect my account on Amazon with my account on TicketMaster in a transparent manner? This is one of the reasons why Passport was such a big part of the Hailstorm vision. It was how Microsoft planned to solve the identity problem, which was key to making a number of the Hailstorm scenarios work. Almost half a decade later, the identity problem is still not solved.

Identity is just problem #1.

If you scratch at this problem a little, you will likely find an ontology problem as well. How do I map the concepts in Amazon's database (hip hop CDs) with related concepts in TicketMaster's database (rap concerts)? The Hailstorm solution was to skip solving this because it was all coming out of the same database. However, even simple things like mapping Rap to HipHop or Puff Daddy to P. Diddy can be fraught with problems if both databases weren't created by the exact same organization. Trying to scale this across different business partners is a big problem and is pretty much a cottage industry in the enterprise world.

Thus ontologies are problem #2.
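To make the ontology problem concrete, here's a toy sketch of the kind of canonical concept table somebody has to build and maintain before two services' vocabularies can be compared at all. The terms and identifiers below are illustrative stand-ins, not real Amazon or TicketMaster categories.

// A toy sketch of cross-domain concept mapping: each service's local terms are
// resolved to a shared canonical id, and only then can recommendations cross
// the boundary. Every entry in this table is hand-maintained, which is exactly
// why this is a cottage industry in the enterprise world.
using System;
using System.Collections.Generic;

static class ConceptResolver
{
    static readonly Dictionary<string, string> canonical =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
        {
            { "Hip Hop",    "genre:hiphop" },      // one site's term
            { "Rap",        "genre:hiphop" },      // the other site's term
            { "Puff Daddy", "artist:sean-combs" },
            { "P. Diddy",   "artist:sean-combs" }
        };

    public static bool SameConcept(string termA, string termB)
    {
        string a, b;
        // Two terms only match if both resolve to the same canonical id.
        return canonical.TryGetValue(termA, out a)
            && canonical.TryGetValue(termB, out b)
            && a == b;
    }
}

So ConceptResolver.SameConcept("Hip Hop", "Rap") returns true, but only because someone sat down and encoded that equivalence by hand; nothing about the two source databases makes it discoverable automatically.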

There are more problems to discover as one attempts to build the Attention API and an Attention economy. At least it was a fun trip down memory lane remembering my intern days. :)


 

Categories: Technology

Robert Scoble has a post entitled Search for Toshiba music player demonstrates search engine weakness where he complains about relevance of search results returned by popular web search engines. He writes

Think search is done? OK, try this one. Search for:

Toshiba Gigabeat

Did you find a Toshiba site? All I see is a lot of intermediaries.

I interviewed a few members of the MSN Search team last week and I gave them hell about this. When I'm writing I want to link directly to the manufacturer's main Web site about their product. Why? Because that's the most authoritative place to go.

But I keep having trouble finding manufacturer's sites on Google, MSN, and Yahoo.

Relevancy ratings on search engines still suck. Let's just be honest about it as an industry.

Can the search researchers find a better algorithm? I sure hope so.

Here, compare for yourself. If you're looking for the Toshiba site, did you find what you're looking for when you do searches Google ? On Yahoo ? On MSN ?

Here's the right answer: http://www.gigabeat.toshiba.com . Did you find it with any of the above searches? I didn't.

The [incorrect] assumption in Robert Scoble's post is that the most relevant website for a person searching for information on a piece of electronic equipment is the manufacturer's website. Personally, if I'm considering buying an MP3 player or other electronic equipment I'm interested in (i) reviews of the product and (ii) places where I can buy it. In both cases, I'd be surprised if the manufacturer's website were the best place to get either.

Relevancy of search results often depends on context. This is one of the reasons why the talk on Vertical Search and A9.com at ETech 2005 resonated so strongly with me. The relevancy of search results sometimes depends on what I want to do with the results. A9.com tries to solve this by allowing users to customize the search engines they use when they come to the site. Google has attempted to solve this by mixing traditional web search results with vertical results inline. For example, searching for MSFT on Google returns traditional search results and a stock quote. Also, searching for "Seattle, WA" on Google returns traditional web search results and a map. And finally, searching for "Toshiba Gigabeat" on Google returns traditional web search results and a list of places where you can buy one. 

Even with these efforts, it is unlikely any of them would solve the problem Scoble had as well as if he just used less ambiguous searches. For example, a better test of relevance is which search engine gives the manufacturer's website for the search for "toshiba gigabeat website".

I found the results interesting and somewhat surprising. There definitely is a ways to go in web search.


 

Categories: Technology

Below is an excerpt from a transcript of an interview with Bill Gates by Jon Udell during last week's Microsoft Professional Developer Conference (PDC).

JU: So a few people in the audience spontaneously commented when they saw the light version of the presentation framework, I heard the words "Flash competitor" in the audience. Do you think that's a fair observation? And do you think that that's potentially a vehicle for getting Avalon interfaces onto not just devices but non-Windows desktops? To extend the reach of Avalon that way?

BG: From a technology point of view, what the Windows Presentation Foundation/Everywhere thing does -- I think it was called Jolt internally. It overlaps what Flash does a lot. Now, I don't think anybody makes money selling lightweight presentation capability onto phones and devices and stuff like that. We're making this thing free, we're making it pervasive. I don't think anybody's business model is you get a bunch of royalties for a little presentation runtime. So there'll certainly be lots of devices in the world that are going to have Flash and they're going to have this WPF/E -- which they say they're going to rename, but that's the best they could do for now -- there'll be lots of devices that have both of those, but they don't conflict with each other. It's not like a device maker say -- oh my god, do I pick Flash, do I pick WPF/E? You can have both of those things and they co-exist easily. They're not even that big.

JU: And it's a portable runtime at this point, so is it something that conceivably takes XAML apps to a Mac desktop or a Linux desktop? Is that a scenario?

BG: The Mac is one of the targets that we explicitly talked about, so yes. Now it's not 100 percent of XAML, we need to be clear on that. But the portion of XAML we've picked here will be everywhere. Absolutely everywhere. And it has to be. You've got to have, for at least reading, and even some level of animation, you've got to have pervasiveness. And will there be multiple things that fit into that niche? Probably. Because it's not that hard to carry multiple...you as a user don't even know when you're seeing something that's WPF/E versus Flash versus whatever. It just works.

One of my concerns about the adoption of the Windows Presentation Foundation (formerly Avalon) has been the lack of cross-platform/browser support. A couple of months ago, I wrote about this concern in my post The Lessons of AJAX: Will History Repeat Itself When it Comes to Adoption of XAML/Avalon?. Thus it is great to see that the Avalon folks have had similar thoughts and are working on a cross-platform story for the Windows Presentation Foundation.

I spoke to one of the guys behind WPF/E yesterday and it definitely looks like they have the right goals. This will definitely be a project to watch.


 

Categories: Technology | Web Development

In every release of RSS Bandit, I spend some time working on performance. Making the application use less memory and less CPU time while adding more features is a challenge. Recently I read a post by Mitch Denny entitled RSS Bandit Performance Problems where after some investigation he found a key culprit for some of our memory consumption issues in the last release. Mitch wrote

Last weekend I was subscribed to over about 1000 RSS feeds and conicidentally last weekend RSSBandit also became unusable. Obviously I had reached some kind of threshold that the architecture of RSSBandit wasn’t designed to cope with.

My first instinct was to ditch and go and find something a bit faster – after all it is a .NET application and we know how much of a memory hog those things are! Errr – hang on. Don’t I encourage our customers to go out and use .NET to build mission critical enterprise applications every day? I really needed to take a closer look at what was going on.

In idle RSSBandit takes up around 120–170MB of RAM on my laptop. Thats more than Outlook and SQL Server, and often more than Visual Studio (except when its in full flight) but to be honest I’m not that surprised because in order for it to give me the unread items count it has to process quite a few files containing lots of unique strings – that means relatively large chunks of being allocated just for data.

I decided to look a bit deeper and run the CLR Allocation Profiler over the code to see where all the memory (and by extension good performance was going). I remembered this article by Rico Mariani which included the sage words that “space is king” and while I waited for the profiler to download tried to guess what the problem would be based on my previous experience.

What I imagined was buckets of string allocations to store posts in their entirety and a significant number of XML related object allocations but when I looked at the allocation graph I saw something interesting.

... [see http://notgartner.com/Downloads/AllocationGraph3.GIF]

As you can see there is a huge amount of traffic between this native function and the NativeWindow class. It was at this point that I started to suspect what the actual problem was and had to giggle at how many times this same problem pops up in smart client applications.

From what I can tell the problem is an excessive amount of marshalling to the UI thread is going on. This is causing threads to synchronise (tell tale DynamicInvoke calls are in there) and quite a bit of short term memory to be allocated over the lifetime of the application. Notice that there is 610MB of traffic between the native function and NativeWindow so obviously that memory isn’t hanging around.

The fix? I don’t know - but I suspect if I went in to the RSSBandit source and unplugged the UI udpates from the UpdatedFeed event the UI responsiveness would increase significantly (the background thread isn’t continually breaking into the main loop to update an unread count on a tree node).

It seems most of the memory usage while running RSS Bandit on Mitch's computer came from callbacks in the RSS parsing code that update the unread item count in the tree view within the GUI whenever a request to fetch an RSS feed is completed. Wow!!!

This is the last place I would have considered looking to optimize our code, and it is yet another example of why "measurement is king" when it comes to figuring out how to improve the performance of an application. Given that a lot of the time a feed is requested there is no need to update the UI since no new items are fetched, there is a lot of improvement that we can gain here.
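As a sketch of the kind of change this suggests, the snippet below only marshals to the UI thread when a completed fetch actually produced new items, and posts the update asynchronously rather than blocking the background thread. The event handler signature and the UpdateUnreadCount helper are hypothetical stand-ins for illustration, not RSS Bandit's real code.

// A sketch of suppressing needless cross-thread UI updates in a WinForms app.
// OnFeedUpdated and UpdateUnreadCount are hypothetical stand-ins.
using System.Windows.Forms;

partial class MainForm : Form
{
    private TreeView feedTreeView;

    // Called on a background thread when a feed request completes.
    void OnFeedUpdated(string feedUrl, int newItemCount)
    {
        // Most fetches return nothing new, so skip the expensive cross-thread
        // call (and the allocations it drags along) entirely in that case.
        if (newItemCount == 0) return;

        // BeginInvoke posts asynchronously instead of blocking the background
        // thread until the UI thread gets around to processing the update.
        feedTreeView.BeginInvoke((MethodInvoker)delegate
        {
            UpdateUnreadCount(feedUrl, newItemCount); // hypothetical helper
        });
    }

    void UpdateUnreadCount(string feedUrl, int delta) { /* update the tree node text */ }
}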

Yet again I am reminded that writing a rich client application like RSS Bandit using the .NET Framework means that you have to be familiar with a bunch of Win32/COM interop crap even if all you want to do is code in C#. I guess programming wouldn't be so challenging if not for gotchas like this lying around all over the place. :)


 

Categories: RSS Bandit | Technology

Nick Bradbury has a post entitled AttentionTrust.org in which he talks about a new non-profit entity that has been formed by Steve Gillmor, Seth Goldstein and a few others. Nick writes

In a nutshell, the idea is that your attention data - that is, data that describes what you're paying attention to - has value, and because it has value, when you give someone your attention you should expect to be given something in return. And just because you give someone your attention, it doesn't mean that they own it. You should expect to get it back.

I know that sounds a little weird - it took me a while to grok it, too. So I'll use an example that's familiar to many of us: Netflix ratings and recommendations. By telling Netflix how you rate a specific movie you're telling them what you're paying attention to, and in return they can recommend additional DVDs to you based on how other people rated the same movie. In return for giving them your attention data - which is of great value to them - they provide you features such as recommendations that they hope will be valuable to you. In my mind, this is a fair trade.

But what if Netflix collected this information without your knowledge, and rather than using it to give you added value they sold it to another service instead? I imagine that many people wouldn't like that idea - chances are, you'd want to be given the opportunity to decide who this information can be shared with. This is one of the goals of AttentionTrust.org: to leave you in charge of what's done with your attention data.

But what about this whole idea of mobility, as mentioned on the AttentionTrust.org site? What's the benefit of making this stuff mobile? Dave Winer provides a nice example: suppose you could share your Netflix attention data with a dating site such as Match.com, so you could find possible partners who like the same movies as you? For that sort of thing to be possible, you'd need to be able to get your attention data back from any service which collects it. (As an aside, this also means you could share your Netflix queue with any new DVD rental service that comes down the pike - so my guess is that smaller, up-and-coming sites will be more willing to share attention data than the more entrenched sites will.).

The attention data is what separates the giants in the Web world like Amazon & Netflix from their competitors. It is in their best interests to collect as much data as possible about what users are interested in so they can target their users better. The fact that [for example] fans of G-Unit also like 50 Cent is data that makes Amazon a bunch of money since they can offer bundle deals and recommendations which lead to more sales. Additionally, record labels and concert organizers are interested customers for the aggregate data on where people's musical interests lie. It is arguable that this is also beneficial to customers since it makes it more likely that their favorite artists will appear in concert together (for example). Similar concepts exist in the physical world, such as supermarket loyalty cards.

How much data websites can store about users varies widely depending on what jurisdiction they are in. Working at MSN, I know first hand some of the legal and privacy hurdles we have to clear in various markets before we can collect data and how we must make users aware of the data we collect. All this is documented in the MSN Privacy policy. To better target users we'd love to collect as much data as possible, but instead we adhere to strict policies informed by laws from various countries and guidelines from various privacy bureaus.

It currently isn't clear to me whether AttentionTrust.org plans to become another privacy body like TRUSTe or whether they plan to be a grassroots evangelization body like the WaSP. Either approach can be effective although they require different skill sets. I'll be interested in seeing how much impact they'll have on online retailers.

As to why I called this the "Return of Hailstorm" in the title of this blog post? It's all in the 2001 Microsoft press release entitled "Hailstorm" on the Horizon which among other things stated

"HailStorm" is designed to place individuals at the center of their computing experience and take control over the technology in their lives and better protect the privacy of their personal information. "HailStorm" services will allow unprecedented collaboration and integration between the users' devices, their software and their personal data. With "HailStorm", users will have even greater and more specific control over what people, businesses and technologies have access to their personal information.

Of course we all know how that turned out. The notion of mobile attention data basically requires Web companies like Netflix & Amazon to give up what for them is a key competitive advantage. It makes no business sense for them to want to do that. I wish Steve Gillmor and company luck with their new endeavors but unless they plan to lobby lawmakers I don't see them getting much traction with some of their ideas.


 

Categories: Technology

My buddy Erik Meijer and Peter Drayton have written a paper on programming languages entitled Static Typing Where Possible, Dynamic Typing When Needed: The End of the Cold War Between Programming Languages. The paper is meant to seek a middle ground in the constant flame wars over dynamically typed vs. statically typed programming languages. The paper is pretty rough and definitely needs a bunch of work. Take the following excerpt from the first part of the paper

Static typing fanatics try to make us believe that “well-typed programs cannot go wrong”. While this certainly sounds impressive, it is a rather vacuous statement. Static type checking is a compile-time abstraction of the runtime behavior of your program, and hence it is necessarily only partially sound and incomplete. This means that programs can still go wrong because of properties that are not tracked by the type-checker, and that there are programs that while they cannot go wrong cannot be type-checked. The impulse for making static typing less partial and more complete causes type systems to become overly complicated and exotic as witnessed by concepts such as "phantom types" and "wobbly types"
...
In the mother of all papers on scripting, John Ousterhout argues that statically typed systems programming languages make code less reusable, more verbose, not more safe, and less expressive than dynamically typed scripting languages. This argument is parroted literally by many proponents of dynamically typed scripting languages. We argue that this is a fallacy and falls into the same category as arguing that the essence of declarative programming is eliminating assignment. Or as John Hughes says, it is a logical impossibility to make a language more powerful by omitting features. Defending the fact that delaying all type-checking to runtime is a good thing, is playing ostrich tactics with the fact that errors should be caught as early in the development process as possible.

We are interesting in building data-intensive three-tiered enterprise applications. Perhaps surprisingly, dynamism is probably more important for data intensive programming than for any other area where people traditionally position dynamic languages and scripting. Currently, the vast majority of digital data is not fully structured, a common rule of thumb is less then 5 percent. In many cases, the structure of data is only statically known up to some point, for example, a comma separated file, a spreadsheet, an XML document, but lacks a schema that completely describes the instances that a program is working on. Even when the structure of data is statically known, people often generate queries dynamically based on runtime information, and thus the structure of the query results is statically unknown.

The comment that making a programming language more powerful by removing features is a logical impossibility seems rather bogus and out of place in an academic paper, especially when one can consider the 'removed features' to be restrictions which limit the capabilities of the programming language.

I do like the fact that the paper tries to dissect the features of statically and dynamically typed languages that developers like, instead of simply arguing dynamic vs. static as most discussions of this sort do. I assume the purpose of this dissection is to see if one could build a programming language with the best of both worlds. From personal experience, I know Erik has been interested in this topic from his days.

Their list of features runs the gamut from type inference and coercive subtyping to lazy evaluation and prototype inheritance. Although the list is interesting, I can't help thinking that Erik and Peter already came to a conclusion and tried to fit the list of features included in the paper to that conclusion. This impression comes mainly from the fact that a lot of the examples and features are taken from Cω instead of popular scripting languages.
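
To make the trade-off being dissected concrete, here is a small C# 2.0 sketch of my own (not from the paper) contrasting a statically typed collection, where the compiler catches type mistakes, with the loosely structured, runtime-checked data that the paper's data-intensive scenario describes:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

class TypingTradeoff
{
    static void Main()
    {
        // Statically typed: the compiler knows every element is an int,
        // so a call like prices.Add("ten") would be rejected at compile time.
        List<int> prices = new List<int>();
        prices.Add(10);
        int total = prices[0] + 5;

        // Loosely structured data in the spirit of the paper's example: an
        // untyped bag of values read from, say, a CSV file. Mistakes here
        // only surface at runtime (InvalidCastException, FormatException).
        ArrayList row = new ArrayList();
        row.Add("10");                              // a field read from a CSV row
        int parsed = int.Parse((string) row[0]);    // type check happens at runtime

        Console.WriteLine(total + parsed);
    }
}
```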

This is definitely an interesting paper but I'd like to see more inclusion of dynamic languages like Ruby, Python and Smalltalk instead of a focus on C# variants like Cω. The paper currently looks like it is making an argument for Cω 2.0 as opposed to real research on what the bridge between dynamic and static programming languages should be.


 

Categories: Technology

It seems both Google and Yahoo! provided interesting news on the personalized search front recently.

Yahoo! MyWeb 2.0 seems to merge the functionality of del.icio.us with the power of Yahoo! search. I can now add tags to my Yahoo! bookmarks, view cached versions of my bookmarked pages and perform searches restricted to the sites in my bookmark list. Even cooler is that I can share my bookmarks with members of my contact list or just make them public. The search feature also allows one to search sites restricted to those shared by others. All they need to do is provide an API and add RSS feeds for this to be a del.icio.us killer.

Google Personalized Search takes a different tack in personalizing search results. Google's approach involves tracking your search history and the search results you clicked on. Then, the next time you perform a search, Google Personalized Search brings certain results closer to the top based on your search history or previous click behavior.
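
As a rough sketch of the general idea (this is my guess at the mechanics, not Google's actual algorithm), re-ranking based on click behavior can be as simple as boosting results from domains the user has clicked on before:

```csharp
using System;
using System.Collections.Generic;

class PersonalizedRanking
{
    static void Main()
    {
        // How often this user has clicked results from each domain in the past.
        Dictionary<string, int> clickHistory = new Dictionary<string, int>();
        clickHistory["msdn.microsoft.com"] = 12;
        clickHistory["en.wikipedia.org"] = 3;

        // Raw results in the order the engine returned them, with a base score.
        List<KeyValuePair<string, double>> results = new List<KeyValuePair<string, double>>();
        results.Add(new KeyValuePair<string, double>("www.example.com/vector", 1.0));
        results.Add(new KeyValuePair<string, double>("msdn.microsoft.com/vector", 0.9));

        // Nudge results from familiar domains toward the top (sort descending by score).
        results.Sort(delegate(KeyValuePair<string, double> a, KeyValuePair<string, double> b)
        {
            return Score(b, clickHistory).CompareTo(Score(a, clickHistory));
        });

        foreach (KeyValuePair<string, double> r in results)
            Console.WriteLine(r.Key);
    }

    static double Score(KeyValuePair<string, double> result, Dictionary<string, int> history)
    {
        string domain = result.Key.Split('/')[0];
        int clicks;
        history.TryGetValue(domain, out clicks);
        return result.Value + 0.05 * clicks;  // small per-click boost
    }
}
```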

As for this week's news about MSN Search? Well you can catch what our CEO had to say about us in the ZDNet article Ballmer confident, but admits failings. We definitely have a lot of catching up to do but I don't think the race is over yet.


 

Categories: Social Software | Technology

Stephen O'Grady has a blog post entitled Gmail Fighting Off Greasemonkey? where he writes

I'm not sure of the reasoning behind it, but it appears that Google may have made some behind the scenes changes to Gmail that disrupted the scipts and extensions I rely on to get the most out of my account. One of the Apache devs noted here that his Greasemonkey enabled persistent searches were no longer functioning, and in the same timeframe I've lost my delete key. It's not just Greasemonkey scripts, however, as my Gmail Notifier extension for Firefox has apparently been disabled as well. While it's Google's perogative to make changes to the service as necessary for maintenance or other reasons, I'm hoping this is not a deliberate play at preventing would-be participants from enhancing the value of Google's service. It's remarkable how much less useful Gmail is to me when I have to log in to see if I have mail, or can't easily delete the many frivolous emails I receive each day (yes, I'm aware that I can use a POP client for this, but I'd rather not).
...
Update: As one reader and a few posters to the GM listserv have noted, one thing that's disrupted a variety of user scripts has been the fact that the domain to Gmail has changed from gmail.google.com to mail.google.com. While simply adding the domains into the GM interface had no effect on my Gmail, a reinstallation of a version of the script with updated domain returned my beloved Delete button. What do developers think of this change with Google's service? Here's one take from the GM list: "I noticed [the domain change] too. Why they can't just leave it alone, I can't understand." To be a bit less harsh, while Google probably had good reasons for making the change, it would have been great to see them be proactive and notify people of the change via their blog or some other mechanism.

I find this hilarious. Greasemonkey scripts work by effectively screen scraping the website and inserting changes into the HTML. Stephen and others who are upset by Google's change are basically saying that Google should never change the HTML or URL structure of the website ever again because it breaks their scripts. Yeah, right.

Repeat after me: a web page is not an API or a platform.

Speaking of apps breaking because of the GMail domain change, it seems RSS Bandit users were also affected. It looks like we have a problem with forwarding the username and password after being redirected from https://gmail.google.com/gmail/feed/atom to https://mail.google.com/gmail/feed/atom. I'll fix this bug in the next version but in the meantime RSS Bandit users should be able to work around this by changing the URL manually to the new one. My bad.
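
For the curious, one way to make the request survive the redirect is to register the same credentials for both URI prefixes in a CredentialCache so that Basic authentication is retried at the new location. This is just a minimal sketch of the idea (with placeholder credentials), not the actual RSS Bandit fix:

```csharp
using System;
using System.IO;
using System.Net;

class GmailFeedCheck
{
    static void Main()
    {
        // The old and new locations of the GMail Atom feed.
        Uri oldFeed = new Uri("https://gmail.google.com/gmail/feed/atom");
        Uri newFeed = new Uri("https://mail.google.com/gmail/feed/atom");

        // Register the same credentials for both URI prefixes so that Basic
        // authentication succeeds no matter which host issues the challenge.
        NetworkCredential cred = new NetworkCredential("username", "password");
        CredentialCache cache = new CredentialCache();
        cache.Add(oldFeed, "Basic", cred);
        cache.Add(newFeed, "Basic", cred);

        HttpWebRequest request = (HttpWebRequest) WebRequest.Create(oldFeed);
        request.Credentials = cache;

        using (WebResponse response = request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            // Dump the raw Atom feed; a real aggregator would parse it instead.
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}
```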


 

Categories: RSS Bandit | Technology

Shelley Powers has a few posts that are critical of Microsoft's InfoCard project entitled You Want We Should What? and What do you want from Digital Identity. I'm not really up to speed on all the digital identity and InfoCard discussion you can find in places like Kim Cameron's blog mainly because it bores me to tears. However one thing that struck me when reading Shelley's posts and then reading a few entries from Kim's blog is that it seemed they both were expecting different people to use the InfoCard technology.

I've found it hard to tell whether the identity folks at Microsoft expect InfoCard to mainly be used by Enterprises who need to identify people who communicate across identity domains (e.g. the way Office Communicator is used to communicate with people within an enterprise or folks using Yahoo!, MSN or AOL instant messaging tools) or whether they expect it to be used as some sort of federated single sign on system for various websites. Reading the Microsoft's Vision for an Identity Metasystem whitepaper it seems InfoCard and the "Identity Metasystem" are meant to do this and much more. My spider sense tends to tingle when I see v1 technologies that aim to solve such diverse problems in a broad sweep.

The end of the whitepaper states Passport and MSN plan to implement support for the identity metasystem as an online identity provider for MSN and its partners. Interesting, I may have to start digging into this space since it will eventually impact my day job. 


 

Categories: MSN | Technology

If you are a geek, you may have heard of the Firefox extension called GreaseMonkey which lets you add bits of DHTML ("user scripts") to any web page to change its design or behavior. This basically enables you to remix the Web and either add features to your favorite web sites or fix broken UI design.

Earlier this week, there was a post to the Greasemonkey mailing list pointing out the existence of Turnabout. Below are some excerpts from the Turnabout website.  

What is Turnabout?

Turnabout is an Internet Explorer plugin that runs user scripts of your choice on any website. User scripts are like plugins for websites. They can enhance your web experience in a lot of ways:

  • Block pop-ups
  • Change text URLs into clickable links
  • Add features, like adding custom search links in your Gmail account
  • Enlarge text fields that are too small
  • …And more!

Essentially, Turnabout does for IE what Greasemonkey does for Firefox .

So where does this leave the other recently announced Greasemonkey for Internet Explorer project, Trixie? Turnabout seems like a better bet for a couple of reasons. First of all, Turnabout doesn't require the .NET Framework like Trixie does. Secondly, Turnabout comes with source code, albeit without any licensing information, which means it is not Open Source. Although Trixie's source code can be easily deciphered with Reflector, that technically is reverse engineering. Finally and most importantly, the developer of Trixie has stopped work on it now that Turnabout exists.

For now I'll be uninstalling Trixie and trying out Turnabout. I'm glad to see that Trixie inspired an even better project to get launched. REMIX the Web.


 

Categories: Technology

Adam Bosworth has a blog post entitled AJAX reconsidered that hits right at the heart of some questions I've been asking myself about the renewed interest in using DHTML and server callbacks via XMLHttpRequest to build website applications. Adam writes

I've been thinking about why Ajax is taking off these days and creating great excitement when, at the time we originally built it in 1997 (DHTML) and 1997 (the XML over HTTP control) it had almost no take up.
...
First, the applications that are taking off today in Ajax aren't customer support applications per se. They are more personal applications like mail or maps or schedules which often are used daily. Also people are a lot more familiar with the web and so slowly extending the idiom for things like expand/collapse is a lot less threatening than it was then. Google Maps for example uses panning to move around the map and people seem to love it.

Secondly, the physics didn't work in 1997. A lot of Ajax applications have a lot of script (often 10 or 20,000 lines) and without broadband, the download of this can be extremely painful. With broadband and standard tricks for compressing the script, it is a breeze. Even if you could download this much script in 1997, it ran too slowly. Javascript wasn't fast enough to respond in real time to user actions let alone to fetch some related data over HTTP. But Moore's law has come to its rescue and what was sluggish in 1997 is often lightning quick today.

Finally, in 1997 or even in 1999 there wasn't a practical way to write these applications to run on all browsers. Today, with work, this is doable. It would be nice if the same code ran identically on Firefox, IE, Opera, and Safari, and in fact, even when it does, it doesn't run optimally on all of them requiring some custom coding for each one. This isn't ideal, but it is manageable.

I find the last point particularly interesting. If Web browsers such as Firefox had not cloned Microsoft's proprietary APIs in a way that made it easy to write what were formerly IE-specific applications in a cross-browser manner, then AJAX wouldn't be the hip buzzword du jour. This brings me to Microsoft's next generation of technologies for building rich internet applications: Avalon and XAML.

A few months ago, C|Net ran an article entitled Will AJAX help Google Clean Up? In the article the following statement was attributed to a Microsoft representative

"It's a little depressing that developers are just now wrapping their heads around these things we shipped in the late 20th century," said Charles Fitzgerald, Microsoft's general manager for platform technologies. "But XAML is in a whole other class. This other stuff is very kludgy, very hard to debug. We've seen some pretty impressive hacks, but if you look at what XAML starts to solve, it's a major, major step up."

Based on how adoption of DHTML/AJAX occurred over the past few years, I suspect that Avalon/XAML will follow a similar path since the initial conditions are similar. If I am correct then even if Avalon/XAML is a superior technology to DHTML/AJAX (which I believe to be the case) it will likely be shunned on the Web due to lack of cross-browser interoperability but may flourish within homogeneous intranets. This shunning will continue until suitable clones of Avalon/XAML's functionality appear for other browsers, at which point, as soon as some highly visible pragmatist adopts the technology, it will become acceptable. However it is unclear to me that cloning Avalon/XAML is really feasible, especially if the technology is evolved at a regular pace as opposed to being left to languish as Microsoft's DHTML/AJAX technologies have been. This would mean that Avalon/XAML would primarily be an intranet technology used by internal business applications and some early adopter websites, as was the case with DHTML/AJAX. The $64,000 question for me is whether this is a desirable state of affairs for Microsoft and, if not, what should be done differently to prevent it?

Of course, this is all idle speculation on my part as I procrastinate instead of working on a whitepaper for work. 

Disclaimer: The aforementioned statements are my opinions and do not represent the intentions, thoughts, plans or strategies of my employer.  


 

Categories: Technology

If you are a geek, you may have heard of the Firefox extension called GreaseMonkey which lets you add bits of DHTML ("user scripts") to any web page to change its design or behavior. This basically enables you to remix the Web and either add features to your favorite web sites or fix broken UI design.

Over the Memorial Day weekend, I got the hookup on where to obtain a similar application for Internet Explorer named Trixie. Below are some excerpts from the Trixie website

What is a Trixie script?

Any Greasemonkey script is a Trixie script.  Though, due to differences between Firefox and Internet Explorer, not all Greasemonkey scripts can be executed within IE.  Trixie makes no attempt to allow Greasemonkey scripts to run unaltered, since it is best to have the script author account for the differences and have the script run on both browsers if he/she so chooses.

Refer to the excellent Greasemonkey documentation to learn how to write Greasemonkey/Trixie scripts.  Note that some of the information there won't be applicable to Internet Explorer and Trixie.

Installing Trixie

Trixie requires the Microsoft .NET framework to be installed.

To install Trixie, download and run TrixieSetup  (latest version: 0.2.0).  You should ideally close all instances of Internet Explorer before doing this.  By default, TrixieSetup installs Trixie to your Program Files\Bhelpuri\Trixie directory (you can of course change this location).  It also installs a few scripts to the Program Files\Bhelpuri\Trixie\Scripts directory.

Restart IE after installing Trixie.  Once you have restarted, go to the Tools menu.  You should see a menu item called "Trixie Options".  Selecting that will show you the list of scripts installed and which one's are enabled or disabled.

Once you have installed Trixie, you browse the Web just like you always do.  Trixie works in the background executing the scripts on the designated pages and customizing them to your needs.

I've been using Trixie for the past few days and so far it rocks. I also looked at the code via Reflector and it taught me a thing or two about writing browser plugins with C#. So far my favorite Trixie Script is the one that adds site statistics to an MSN Spaces blog when you view your own space.
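
For anyone curious what such a plugin looks like structurally, here is a minimal sketch of the Browser Helper Object pattern that an IE plugin like Trixie presumably builds on. This is my own skeleton, not Trixie's code; the CLSID is a placeholder and the registry step that registers the BHO with Internet Explorer is omitted:

```csharp
using System;
using System.Runtime.InteropServices;

// Standard COM interface IE uses to hand a Browser Helper Object its host.
[ComImport]
[Guid("FC4801A3-2BA9-11CF-A229-00AA003D7352")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
public interface IObjectWithSite
{
    void SetSite([MarshalAs(UnmanagedType.IUnknown)] object pUnkSite);
    void GetSite(ref Guid riid, out IntPtr ppvSite);
}

[ComVisible(true)]
[Guid("9AB12C34-0000-0000-0000-123456789ABC")]  // placeholder CLSID; generate your own
[ClassInterface(ClassInterfaceType.None)]
public class UserScriptBHO : IObjectWithSite
{
    private object site;

    public void SetSite(object pUnkSite)
    {
        // IE hands us its WebBrowser object here when the BHO is loaded, and
        // null when the browser window closes. A Trixie-style plugin would
        // cast this to the web browser interface, hook the DocumentComplete
        // event and inject user scripts into each page's DOM.
        site = pUnkSite;
    }

    public void GetSite(ref Guid riid, out IntPtr ppvSite)
    {
        if (site == null) { ppvSite = IntPtr.Zero; return; }
        IntPtr pUnk = Marshal.GetIUnknownForObject(site);
        try { Marshal.QueryInterface(pUnk, ref riid, out ppvSite); }
        finally { Marshal.Release(pUnk); }
    }
}
```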

It looks like I need to spend some time reading Dive Into Greasemonkey so I can write my own user scripts to fix annoyances in web sites I use frequently. Remix the web, indeed.

Update: It took me a few minutes to figure out why the project was called Trixie. If you're a Speed Racer fan, you probably figured it out after reading the second paragraph of this post. :)


 

Chris Anderson has been learning Python via Jim Hugunin's excellent IronPython and came to the conclusion that Microsoft has been missing the boat in programming language trends. In his post The Hobbyist and the Script  he writes

Scripting is great for glue code - the same thing that VB1 and 2 used to be great at. VB3 started to get into the serious app development with rich data integration, and VB4 brought us into the 32-bit era, and in a way back to glue code. VB4's embracing ActiveX controls gave VB developers an entirely new place to play. I remember working on an application using a beta of VB5 and writing my "hard core code" in MFC ActiveX controls. After a while I started writing more and more of the code in VB, because it worked and wasn't the bottle neck in anyway for the application.

I think that scripting and many dynamic languages are in the same camp. They are great for small applications and writing glue code. Look at Google Maps, the real processing is on the server, and the client side AJAX is just the glue between the user, the browser, and the backend server. I would argue that something more beefy like Outlook Web Access (a Microsoft AJAX application, writen before AJAX was the name) demonstrates more of the limitations of writing the majority of your client interface in script.

Regardless of the limitations, our singular focus on strongly typed compiled languages has blinded us to the amazing productivity and approachability of dynamic scripting langauges like Python and Ruby. I'm super excited that we are changing this. Hiring Jim Hugunin is a great start. I hope we continue this, and really look to add a strong dynamic language and scripting story to our developer portfolio.

I've actually been quite upset by how many programming language camps Microsoft has neglected in its blind pursuit of competition with Java and the JVM. Besides the scripting language camps, there are significant customer camps we have neglected as well. We have neglected Visual Basic developers; we gave them a poor replacement in Visual Basic.NET which the market has rejected, as can be gleaned from posts such as Chris Sells's pointing out that publishers don't want VB.NET books and from the Classic VB petition. We have neglected DHTML developers who have been building web sites and web applications against Internet Explorer; we don't even have a Javascript IDE, let alone a story for people trying to build complex AJAX/DHTML applications like GMail or Google Maps.

The main problem is that Microsoft is good at competing but not good at caring for customers. The focus of the developer division at Microsoft is the .NET Framework and related technologies, which is primarily a competitor to Java/JVM and related technologies. However when it comes to areas where there isn't a strong, single competitor that can be focused on (e.g. RAD development, scripting languages, web application development) we tend to flounder and stagnate. Eventually I'm sure customer pressure will get us off our butts; it's just unfortunate that we have to be forced to do these things instead of doing them right the first time around.

On a similar note, thank God for Firefox.


 

 From the Business Week article, The World According to Ballmer

Clearly alluding to Microsoft's key Internet search rival, Ballmer said: "The hottest company right now -- the one nobody thinks can do any wrong -- may just be a one-hit wonder."

Actually Google is already at least a two-hit wonder: AdSense/AdWords and the Google search engine. Given that revenues from AdSense are growing to match those from the Google website, it seems inevitable that in a few years it will matter more to Google that they are the #1 ad provider on the Web than whether they are the #1 search engine.

 

 


 

It was announced at E3 this week that XBox 360 will be backwards compatible with the original XBox. My good friend, Michael Brundage, was the dev for this feature. Read about XBox Backwards Compatibility from the horse's mouth. My favorite quote from his article is

The first impression you should get is that these numbers are fantastic for high-definition Xbox 360 games. Wow! But on further reflection, they're not so good for emulating Xbox games at 30 fps. On my 1.25GHz G4 PowerBook, VPC 7 emulates a 295MHz x86 processor -- so the emulator is more than 4 times faster than the machine it's emulating. So most people look at these numbers and conclude that Xbox backwards compatibility can't be done.

Then there are a few people who understand emulators at a very technical level, or understand both Xbox systems inside and out to an expert level of detail that I'm not about to go into here. They perform more sophisticated calculations using the Art of Software Engineering, but ultimately reach the same conclusions as those not skilled in the Art: Backwards compatibility can't be done. [One such skeptic interviewed me for my current job, and pointedly asked during the interview how I planned to handle the project's certain future cancellation.]

And yet, here it is. It's magic!

Last year I got to meet J Allard and one of the questions I asked was about backwards compatibility in the next version of the XBox. He gave the impression that they wanted to do it but it would be a very difficult task. I would never have guessed that Mr. XQuery would be the one to get the job done.

Great job, Michael.


 

Categories: Technology

Today I was reading a blog post by Dave Winer entitled Platforms where he wrote

It was both encouraging and discouraging. It was encouraging because now O'Reilly is including this vital topic in its conferences. I was pitching them on it for years, in the mid-late 90s. It should have been on the agenda of their open source convention, at least. It was discouraging, because with all due respect, they had the wrong people on stage. This is a technical topic, and I seriously doubt if any of the panelists were actually working on this stuff at their companies. We should be hearing from people who are actually coding, because only they know what the real problems are.

I was recently thinking the same thing after seeing the attendance list for the recent O'Reilly AJAX Summit. I was surprised not only by the people I expected to see on the list but didn't, but also by who they did decide to invite. There was only one person from Google even though their use of DHTML and IXMLHttpRequest is what started the AJAX fad. Nobody from Microsoft was invited even though Microsoft invented DHTML & IXMLHttpRequest and has the most popular web browser on the planet. Instead they had Anil Dash talking about the popularity of LiveJournal and someone from Technorati talking about how they plan to jump on the AJAX bandwagon.

This isn't to say that some good folks weren't invited. One of the guys behind the Dojo toolkit was there and I suspect that toolkit will be the one to watch within the next year or so. I also saw from the comments in Shelley Powers's post, Ajax the Manly Technology, that Chris Jones from Microsoft was invited. Although it's good to see that at least one person from Microsoft was invited, Chris Jones wouldn't be on my top 10 list of people to invite. As Dave Winer stated in the post quoted above, you want to invite implementers to technical conferences not upper management.

If I was going to have a serious AJAX summit, I'd definitely send invites to at least the following people at Microsoft.

  1. Derek Denny-Brown: Up until a few weeks ago, Derek was the development lead for MSXML which is where IXMLHttpRequest comes from. Derek has worked on MSXML for several years and recently posted on his blog asking for input from people about how they'd like to see the XML support in Internet Explorer improve in the future.

  2. Scott Isaacs: The most important piece of AJAX is that one can modify HTML on the fly via the document object model (DOM) which is known by many as DHTML. Along with folks like Adam Bosworth, Scott was one of those who invented DHTML while working on Internet Explorer 4.0. Folks like Adam Bosworth and Eric Sink have written about how significant Scott was in the early days of DHTML.  Even though he no longer works on the browser team, he is still involved with DHTML/AJAX as an architect at MSN which is evidenced by sites such as http://www.start.com/2 and http://spaces.msn.com

  3. Dean Hachamovitch: He runs the IE team. 'nuff said.

I'd also tell them that the invitations were transferable so that if they think there are folks who would be more appropriate to invite, they can send them along instead.

It's kind of sad to realize that the various invite-only O'Reilly conferences are just a way to get the names of the same familiar set of folks attached to the hot new technologies, as opposed to being an avenue for getting relevant people from the software industry to figure out how they can work together to advance the state of the art.


 

Categories: Technology

Russell Beattie has a post entitled Spotlight Comments are the Perfect Spot for Tags! where he writes

I read just about every sentence of the Ars Technica overview of OSX Tiger and learned a lot, especially the parts where the author drones on about OSX's support for meta-data in the filesystem. I originally thought the ability to add arbitrary meta data to any file or folder was an interesting capability, albeit not particularly useful in day-to-day activities. But then I was just playing around and saw the Spotlight Comments field that's now included at the very top of a file or folder's Info box and I grokked it! Now that there's actually an easy way to both add and to search for meta-data on files and folders, then there's actually a reason to put it in! But not just any meta-data... What's the newest and coolest type of meta-data out there? Yep, tags! And the comments fields is perfect for this!

Obviously nothing has changed in terms of the UI or search functionality, just the way I think about meta data. Before I may have ignored an arbitray field like "comments" even if I could search on it (haven't I been able to do something similar in Windows?). But now that I "get" tagging, I know that this isn't the place for long-winded description of the file or folder, just keywords that I can use to refer to it later. Or if those files are shared on the network, others can use these tags to find the files as well. Fantastic!

This sounds like a classic example of "When you have a hammer, everything looks like a nail". One of the interesting things about the rush to embrace tagging by many folks is the refusal to look at the success of tagging in context. Specifically, how did successful systems like del.icio.us get around the Metacrap problem which plagues all attempts to create metadata systems? I see two aspects of the way del.icio.us applied tagging which I believe were key to it becoming a successful system.

  1. Tagging is the only organizational mechanism: In del.icio.us, the only way to organize your data is to apply tags to it. This basically forces users to tag their data if they want to use the service otherwise it quickly becomes difficult to find anything.

  2. It's about the folksonomy: What really distinguishes services like del.icio.us from various bookmarks sites that have existed since I was in college is that users can browse each other's data based on their metadata. The fact that del.icio.us is about sharing data encourages users to bookmark sites more than they typically do and to apply tags to the data so that others may find the links.

Neither of the above applies when tagging files on your hard drive. My personal opinion is that bringing tagging to the file system takes an idea that works in one context and applies it in another without understanding why it worked in the first place.


 

Categories: Technology

Almost four years ago I wrote an article entitled C# from a Java Developer's Perspective which is still one of the most popular comparisons of C# and Java on the Web. The article gets a couple thousand hits a month and I still get email about it from developers thanking me for writing it. Given the number of changes in Java 5.0 and C# 2.0, I think the time has come to update this article. Below is my proposed table of contents for the new version, followed by a small C# 2.0 snippet illustrating a couple of the new features. I'd appreciate comments on anything people think is either missing or placed incorrectly.

  1. The More Things Change The More They Stay The Same
    This section describes concepts and language features that are almost exactly the same in C# and Java.
    1. We Are All Objects
    2. Keyword Jumble
    3. Of Virtual Machines and Language Runtimes
    4. Heap Based Classes and Garbage Collection
    5. Arrays Can Be Jagged
    6. No Global Methods
    7. Interfaces, Yes. Multiple Inheritance, No.
    8. Strings Are Immutable
    9. Unextendable Classes
    10. Throwing and Catching Exceptions
    11. Member Initialization at Definition and Static Constructors
    12. Generics - new!!!
    13. Boxing - new!!!
    14. Variable Length Parameter Lists - new!!!

  2. The Same But Different
    This section describes concepts and language features that differ either only in syntax or in some similarly minor manner between C# and Java.
    1. Main Method
    2. Inheritance Syntax
    3. Run Time Type Identification (is operator)
    4. Namespaces
    5. Constructors, Destructors and Finalizers
    6. Synchronizing Methods and Code Blocks
    7. Access Modifiers
    8. Reflection
    9. Declaring Constants
    10. Primitive Types
    11. Array Declarations
    12. Calling Base Class Constructors and Constructor Chaining
    13. For or is that foreach loops? - new!!!
    14. Metadata Annotations - new!!!
    15. Enumerations - new!!!

  3. An Ever So Slight Feeling of Déjà Vu
    This section describes concepts and language features that exist in C# that are similar to those that exist in Java but with a significant difference.
    1. Nested classes
    2. Threads and Volatile Members
    3. Operator Overloading
    4. switch Statement
    5. Assemblies
    6. Collections
    7. goto (no longer considered harmful)
    8. Virtual Methods (and final ones too)
    9. File I/O
    10. Object Serialization
    11. Documentation Generation from Source Code Comments
    12. Multiple Classes in a Single File
    13. Importing Libraries
    14. Events
    15. Cross Language Interoperability

  4. Now For Something Completely Different
    This section describes language features and concepts that exist in C# and have no Java counterpart.
    1. Deterministic Object Cleanup
    2. Delegates - changed!!!
    3. Value Types (Structs)
    4. Run Time Type Identification (as operator)
    5. Properties - changed!!!
    6. Multidimensional Arrays
    7. Indexers
    8. Preprocessor Directives - changed!!!
    9. Aliases
    10. Runtime Code Generation
    11. Pointers and Unsafe Code
    12. Pass by Reference
    13. Verbatim Strings
    14. Overflow Detection
    15. Explicit Interface Implementation
    16. Friend Assemblies - new!!!
    17. The Global Namespace Qualifier - new!!!
    18. Continuations (Iterators) - new!!!
    19. Partial Classes - new!!!
    20. Anonymous Methods - new!!!

  5. Wish You Were Here
    This section describes language features and concepts that exist in Java and have no C# counterpart.
    1. Checked Exceptions
    2. Cross Platform Portability (Write Once, Run Anywhere)
    3. Extensions
    4. strictfp
    5. Dynamic Class Loading
    6. Interfaces That Contain Fields
    7. Anonymous Inner Classes
    8. Static Imports - new!!!

 

Categories: Technology

Jonathan Pincus contacted me a few days ago about being part of a birds of a feather session on "20% Time" at the 15th Annual Conference on Computers, Freedom & Privacy. It will be held at the Westin in Seattle at 9PM on Thursday, April 14th.

It seems there'll be someone from Google there as well, which should be interesting. I'd like to hear how Google handled some of the issues raised in my post Some Thoughts and Questions About Google 20% Time.


 

Categories: Technology

I saw a link to an interesting site in Robert Scoble's post Paul remixes Google Maps and Craig's List in interesting new way where he writes

What happens when you mix Google Maps with Craig's List? Paul Rademacher shows us.

This is a cautionary tale for Microsoft: them who has the best API's will get used in the most interesting new ways.

Like Ballmer says: developers, developers, developers, developers, developers...

Actually this has little to do with APIs given that there is neither an official Craig's List API nor is there a Google Maps API. This looks more like a combination of HTML screen scraping for getting the Craig's List data and good old fashioned reverse engineering. I suspect Paul didn't have to do much reverse engineering in the Google Maps case because Engadget already published an article called HOW-TO: Make your own annotated multimedia Google map which shows exactly how to build your own applications on top of Google Maps.

Despite that, this is definitely a cool hack.

This shows one of the interesting side effects of building an AJAX site. You basically have to create an API for all the Javascript callbacks from the web browser back to the server. Once you do that, anyone else can call this API as well. I doubt that the Google folks anticipated that there would be this much interest in the API the browser uses to talk to the Google Maps server.

PS: Is any other reader of Scoble's blog irritated by the fact that he can't point to anything on the Web without throwing some Microsoft spin on it?


 

Categories: Technology

I've seen a number of responses to my recent post, SOA, AJAX and REST: The Software Industry Devolves into the Fashion Industry, about the rebranding of existing techniques for building websites which used to be called DHTML or Remote Scripting to AJAX. I've found the most interesting responses to be the ones that point out why this technique isn't seeing mainstream usage in designing websites.

Scott Isaacs has a post entitled AJAX or as I like to call it, DHTML where he writes

As Adam Bosworth explained on Dare's blog, we built Ajax-like web applications back in 1997 during and after shipping IE4 (I drove the design of DHTML and worked closely with the W3C during this period).  At that time xmlhttp (or xml for that matter) did not exist. Instead, structured data was passed back and forth encapsulated in Javascript via hidden IFrames.  This experimental work helped prove the need for a structured, standardized approach for managing and transporting data.

Over the years, quite a few web applications have been built using similar approaches - many are Intranet applications and one of the best was Outlook Web Access. However, very few main-stream web-sites appeared. I believe this was due to a number of factors - the largest being that the typical web user-experience falls apart when you start building web-based applications.  The user-interface issues revolve mostly around history and refresh. (For example - In GMail, navigate around your inbox and either refresh the page or use the back button. In Spaces (IE Users), expand and view comments and hit the back button.  I am willing to wager that what happens is not what you really want).  The problem lies in we are building rich applications in an immature application environment. We are essentially attempting to morph the current state of the browser from a dynamic form package to a rich application platform.

I have noticed these problems in various Javascript based websites and it definitely is true that complex Javascript navigation is incompatible with the traditional navigation paradigm of the Web browser. Shelley Powers looks at the problems with Javascript based websites from another angle in her post The Importance of Degrading Gracefully where she writes

Compatibility issues aside, other problems started to pop up in regards to DHTML. Screen readers for the blind disabled JavaScript, and still do as far as I know (I haven’t tried a screen reader lately). In addition, security problems, as well as pop-up ads, have forced many people to turn off JavaScript–and keep it off.

(Search engines also have problems with DHTML based linking systems.)

The end result of all these issues–compatibility, accessibility, and security–is a fundamental rule of web page design: whatever technology you use to build a web page has to degrade, gracefully. What does degrade, gracefully, mean? It means that a technology such as Javascript or DHTML cannot be used for critical functionality; not unless there’s an easy to access alternative.
...
Whatever holds for DHTML also holds for Ajax. Some of the applications that have been identified as Ajax-enabled are flickr and Google’s suite of project apps. To test how well both degrade, I turned off my JavaScript to see how they do.
...
Google’s gmail, on the other hand, did degrade, but did not do so gracefully. If you turn off JavaScript and access the gmail page, you’ll get a plain, rather ugly page that makes a statement that the primary gmail page requires JavaScript, either turn this on, get a JS enabled browser, or go to the plain HTML version.

Even when you’re in the plain HTML version, a prominent line at the top keeps stating how much better gmail is with a Javascript enabled browser. In short, Google’s gmail degrades, by providing a plain HTML alternative, but it didn’t do so gracefully; not unless you call rubbing the customer’s nose in their lack of JS capability “graceful”.

You don’t even get this message with Google Suggest; it just doesn’t work (but you can still use it like a regular search page). As for Google Maps? Not a chance–it is a pure DHTML page, completely dependent on JavaScript. However, Mapquest still works, and works just as well with JS as without.

(Bloglines also doesn’t degrade gracefully — the subscription is based on a JavaScript enabled tree. Wordpress, and hence Wordform, does degrade gracefully.)

If we’re going to get excited about new uses of existing technology, such as those that make up the concept of Ajax, then we should keep in mind the rule of degrading gracefully: Flickr is an example of a company that understands the principle of degrading gracefully; Google is an example of a company, that doesn’t.

Update: As Doug mentions in comments, flickr is dependent on Flash. If Flash is not installed, it does not degrade gracefully.

I do remember being surprised that I had to add "http://*.google.com" to my trusted sites list to get Google Maps to work. Of course, none of the above observations is new, but given that we are seeing a renaissance of interest in using Javascript for building websites, it seems like a good idea to revisit the arguments about the cons of this approach as well.


 

Categories: Technology

This was a late breaking session that was announced shortly after The Long Tail: Conversation with Chris Anderson and Joe Kraus. Unfortunately, I didn't take my tablet PC with me to the long tail session so I don't have any notes from it. Anyway, back to Google Code.

The Google Code session was hosted by Chris DiBona. The Google Code homepage is similar to YSDN in that it tries to put all the Google APIs under a single roof. The site consists of three main parts: information on Google APIs, links to projects Open Sourced by Google that are hosted on SourceForge, and highlighted projects created by third parties that use Google's APIs.

The projects Open Sourced by Google are primarily test tools and data structures used internally. They are hosted on SourceForge although there seemed to be some dislike for the features of the site both from Chris and members of the audience. Chris did feel that among the various Open Source project hosting sites existing today, SourceForge was the one most likely to be around in 10 years. He mentioned that Google was ready to devote some resources to helping the SourceForge team improve their service.


 

Categories: Technology | Trip Report

These are my notes on the From the Labs: Google Labs session by Peter Norvig, Ph.D.

Peter started off by pointing out that since Google hires Ph.D's in their regular developer positions they often end up treating their developers as researchers while treating their researchers as developers.

Google's goal is to organize the world's data. Google researchers aid in this goal by helping Google add more data sources to their search engines. They have grown from just searching HTML pages on the Web to searching video files and even desktop search. They are also pushing the envelope when it comes to improving user interfaces for searching, such as with Google Maps.

Google Suggest, which provides autocomplete for the Google search box, was written by a Google developer (not a researcher) using his 20% time. The Google Personalized site allows users to edit a profile which is used to weight search results when displaying them to the user. The example he showed was searching for the term 'vector' and then moving the slider on the page to show more personalized results. Since his profile showed an interest in programming, results related to vector classes in C++ and Java were re-sorted to the top of the search results. I've heard Robert Scoble mention that he'd like to see search engines open up APIs that allow users to tweak search parameters in this way. I'm sure he'd like to give this a whirl. Finally he showed Google Sets which was the first project to show up on the Google Labs site. I remember trying this out when it first showed up and thinking it was magic. The coolest thing to try out is to give it three movies starring the same actor and watch it fill out the rest.


 

Categories: Technology | Trip Report

These are my notes on the From the Labs: Yahoo! Research Labs session by Gary William Flake.

Yahoo! has two research sites, Yahoo! Next and Yahoo! Research Labs. The Yahoo! Next site has links to betas that will eventually become products, such as Y!Q contextual search, a Firefox version of the Yahoo! toolbar and Yahoo! movie recommendations.

The Yahoo! research team focuses on a number of core research areas including machine learning, collective intelligence, and text mining. They frequently publish papers related to these topics.

Their current primary research project is the Tech Buzz Game, which is a fantasy prediction market for high-tech products, concepts, and trends. This is in the same vein as other fantasy prediction markets such as the Hollywood Stock Exchange and the Iowa Electronic Markets. The project is being worked on in collaboration with O'Reilly Publishing. A product's buzz is a function of the volume of search queries for that term. People who constantly predict correctly can win more virtual money which they can use to bet more. The name for this kind of market is a dynamic pari-mutuel market.

The speaker felt Tech Buzz would revolutionize the way auctions were done. This seemed like a very bold claim given that I'd never heard of it. Then again, it isn't like I'm an auction geek.


 

Categories: Technology | Trip Report

These are my notes on the From the Labs: Microsoft Research session by Richard F. Rashid, Ph.D.

Rick decided that in the spirit of ETech, he would focus on Microsoft Research projects that were unlikely to be productized in the conventional sense.

The first project he talked about was SenseCam. This could be considered by some to be the ultimate blogging tool. It records the user's experiences during the day by taking pictures, recording audio and even monitoring the temperature. There are some fancy tricks it has to do, involving internal motion detectors, to determine when it is appropriate to take a picture so the picture doesn't end up blurry because the user was moving. There are currently 20 that have been built, and clinical trials have begun to see if the SenseCam would be useful as an aid to people with severe memory loss.

The second project he discussed was the surface computing project. The core idea around surface computing is turning everyday surfaces such as tabletops or walls into interactive input and/or display devices for computers. Projectors project displays on the surface and cameras detect when objects are placed on the surface which makes the display change accordingly. One video showed a bouncing ball projected on a table which recognized physical barriers such as the human hand when they were placed on the table. Physical objects placed on the table could also become digital objects. For example, placing a magazine on the table would make the computer copy it and when the magazine was removed a projected image of it would remain. This projected image of the magazine could then be interacted with such as by rotating and magnifying the image.

Finally he discussed how Microsoft Research was working with medical researchers looking for a cure for HIV infection. The primary problem with HIV is that it constantly mutates, so the immune system and drugs cannot recognize all its forms to neutralize them in the body. This is similar to the spam problem, where the rules for determining whether a piece of mail is junk mail keep changing as spammers change their tactics. Anti-spam techniques have to use a number of pattern matching heuristics to figure out whether a piece of mail is spam or not. MSR is working with AIDS/HIV researchers to see whether such techniques couldn't be used to attack HIV in the human body.


 

Categories: Technology | Trip Report

These are my notes on the Vertical Search and A9 session by Jeff Bezos.

The core idea behind this talk was powerful yet simple.

Jeff Bezos started off by talking about vertical search. In certain cases, specialized search engines can provide better results than generic search engines. One example is searching Google for Vioxx and performing the same search on a medical search engine such as PubMed. The former returns results that are mainly about class action lawsuits while the latter returns links to various medical publications about Vioxx. For certain users, the Google results are what they are looking for and for others the PubMed results would be considered more relevant.

Currently at A9.com, they give users the ability to search both generic search engines like Google and vertical search engines. The choice of search engines is currently small but they'd like to see users have the choice of building a search homepage that could pull results from thousands of search engines. Users should be able to add any search engine they want to their A9 page and have those results display in A9 alongside Google or Amazon search results. To facilitate this, they now support displaying search results from any search engine that can provide its results as RSS. A number of search engines already do this, such as MSN Search and Feedster. There are some extensions they have made to RSS to support providing search results in RSS feeds. From where I was standing, some of the extension elements I saw included startIndex, resultsPerPage and totalResults.

Amazon is calling this initiative OpenSearch.
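
As a rough sketch of how a client might consume such a feed, the code below loads a search RSS feed and reads the paging elements. The feed URL is hypothetical, and the namespace URI and exact element names are my assumptions based on what was shown, not necessarily what the final spec uses:

```csharp
using System;
using System.Xml;

class OpenSearchClient
{
    static void Main()
    {
        // Hypothetical search feed URL; any engine that returns its results
        // as RSS with the extension elements would work the same way.
        string url = "http://search.example.com/search?q=vioxx&format=rss";

        XmlDocument feed = new XmlDocument();
        feed.Load(url);

        // The namespace URI below is a guess at the OpenSearch extension
        // namespace; substitute whatever the published spec actually uses.
        XmlNamespaceManager ns = new XmlNamespaceManager(feed.NameTable);
        ns.AddNamespace("os", "http://a9.com/-/spec/opensearchrss/1.0/");

        XmlNode channel = feed.SelectSingleNode("/rss/channel");
        Console.WriteLine("Total results: " +
            channel.SelectSingleNode("os:totalResults", ns).InnerText);
        Console.WriteLine("Start index: " +
            channel.SelectSingleNode("os:startIndex", ns).InnerText);

        // Each <item> is an individual search result.
        foreach (XmlNode item in channel.SelectNodes("item"))
        {
            Console.WriteLine(item.SelectSingleNode("title").InnerText + " - " +
                              item.SelectSingleNode("link").InnerText);
        }
    }
}
```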

I was totally blown away by this talk when I attended it yesterday. This technology has lots of potential especially since it doesn't seem tied to Amazon in any way so MSN, Yahoo or Google could implement it as well. However there are a number of practical issues to consider. Most search engines make money from ads on their site so creating a mechanism where other sites can repurpose their results would run counter to their business model especially if this was being done by a commercial interest like Amazon.


 

Categories: Technology | Trip Report

These are my notes on the Remixing Technology at Applied Minds session by W. Daniel Hillis.

This was a presentation by one of the key folks at Applied Minds. It seems they dabble in everything from software to robots. There was an initial demo showing a small crawling robot where he explained that they discovered that six legged robots were more stable than those with four legs. Since this wasn't about software I lost interest for several minutes but did hear the audience clap once or twice for the gadgets he showed.

Towards the end the speaker started talking about an open marketplace of ideas. The specific scenario he described was the ability to pull up a map and have people's opinions of various places on the map show up overlaid on the map. Given that people are already providing these opinions on the Web today for free, there isn't a need to go through some licensed database of reviews to get this information. The ability to harness the collective consciousness of the World Wide Web in this manner was the promise of the Semantic Web, which the speaker felt was going to be delivered. His talk reminded me a lot of the Committee of Gossips vision of the Semantic Web that Joshua Allen continues to evangelize.

It seems lots of smart people are getting the same ideas about what the Semantic Web should be. Unfortunately, they'll probably have to route around the W3C crowd if they ever want to realize this vision.


 

Categories: Technology | Trip Report

These are my notes on the The App is the API: Building and Surviving Remixable Applications session by Mike Shaver. I believe I heard it announced that the original speaker couldn't make it and the person who gave the talk was a stand-in.

This was one of the 15 minute keynotes (aka high order bits). The talk was about Firefox and its extensibility model. Firefox has 3 main extensibility points; components, RDF data sources and XUL overlays.

Firefox components are similar to Microsoft's COM components. A component has a contract id which is analogous to a GUID in the COM world. Components can be MIME type handlers, URL scheme handlers, XUL application extensions (e.g. mouse gestures) or inline plugins (similar to ActiveX). The Firefox team is championing a new plugin model that is similar to ActiveX which is expected to be supported by Opera and Safari as well. User defined components can override built-in components by claiming their contract id, a process which seemed fragile but which the speaker claimed has worked well so far.

Although RDF is predominantly used as a storage format by both Thunderbird and Firefox, the speaker gave the impression that this decision was a mistake. He repeatedly stated that the graph based data model was hard for developers to wrap their minds around and that it was too complex for their needs. He also pointed out that whenever RDF was criticized by them, advocates of the technology [and the Semantic Web] would claim that there were future benefits that would be reaped from using RDF.

XUL overlays can be used to add toolbar buttons, tree widget columns and context menus to the Firefox user interface. They can also be used to create style changes in viewed pages as well. A popular XUL overlay is GreaseMonkey, which the speaker showed could be used to add features to web sites, such as persistent searches in GMail, all using client side script. The speaker did warn that such overlays which apply style changes are inherently fragile since they depend on processing the HTML on the site, which could change without warning if the site is redesigned. He also mentioned that it was unclear what the versioning model would be for such scripts once new versions of Firefox showed up.


 

Categories: Technology | Trip Report

March 2, 2005
@ 12:40 PM

In the post Another totally obvious factoid Dave Winer writes

Okay, we don't know for a fact that Google is working on an operating system, but the tea leaves are pretty damned clear. Why else would they have hired Microsoft's operating system architect, Mark Lucovsky? Surely not to write a spreadsheet or word processor.

Considering that after working on operating systems Mark Lucovsky went on to become the central mind behind Hailstorm, it isn't clear cut that Google is interested in him for his operating systems knowledge. It will be interesting to see if, after pulling off what Microsoft couldn't by getting the public to accept AutoLink when they rejected SmartTags, Google will also get the public to accept their version of Hailstorm.

I can already hear the arguments about how Google's Hailstorm would be just like a beloved butler whose job it was to keep an eye on your wallet, rolodex and schedule so you don't have to worry about them. Positive perception in the market place is a wonderful thing.


 

Categories: Technology

As a user of Microsoft products and an employee of the company, I am of two minds about its entrance into the anti-spyware and anti-virus arenas. I agree with the sentiments Michael Gartenberg of Jupiter Research shared in his post Microsoft and Security - Demand if they do and demand if they don't

It's tough to be Microsoft. On one hand, folks insist that security, spyware and virus are issues that they must own. On the other hand, when they do own it and respond, they get dinged for it. Microsoft's decision to get into the business and make these tools available should be lauded. Security is a tough issue that I've written about before and users need to also take on a share of the responsibility of keeping their systems safe. The fact is, even with good solutions on the market, not enough users are protecting their systems and if Microsoft entering the game can help change that, it's a good thing.

Given that spyware is quite likely the most significant problem facing users of Windows, I believe that Microsoft has a responsibility to its customers to do something about it. Others may disagree. For example, Robert X. Cringely attacks Microsoft for its recent announcements about getting into the anti-spyware market in his post Killing Us With Kindness: At Microsoft, Even Good Deeds Are Predatory. He writes

How can giving away software tools be bad? Look at how Microsoft is doing it. Their anti-virus and anti-spyware products are aimed solely at users of Windows XP SP2. This has very different effects on both the user base and the software industry. For users, it says that anyone still running Windows 98, ME, or 2000 has two alternatives -- to upgrade to XP/SP2 or to rely on non-Microsoft anti-virus and anti-spyware products. Choosing to upgrade is a 100+ million-unit windfall for Microsoft. That's at least $10 billion in additional revenue of which $9 billion is profit -- all of it coming in the next 12 months. That $10 billion, by the way, comes from you and me, and comes solely because of Microsoft's decision to offer "free" software.

Of course, you can decide not to upgrade to XP/SP2 and rely instead on third-party products to protect your system. But Microsoft has set the de facto price point for such products at zero, zilch, nada. By doing this, they have transferred their entire support obligation for these old products to companies like Symantec and Network Associates without transferring to those companies any revenue stream to make it worth their trouble. Maybe Peter Norton will say, "Screw this, I'm not going to support all these millions of people for nothing." Well, that's Symantec (not Microsoft) apparently forcing users into the same upgrade from which Microsoft (not Symantec) gains all the revenue.
...
Look how this decision transforms Microsoft. By choosing to no longer support a long list of products (is that even legal?), hundreds and perhaps thousands of developers will be switching to new duties. If this were any other company, I would predict layoffs, but a key strategy for Microsoft is hiring the best people simply to keep them from working elsewhere, so I don't think layoffs are likely. What IS likely is an exodus of voluntary departures. What's also likely is that those hundreds or thousands of reassigned developers will be moved to some other doomsday product -- something else for us to eagerly anticipate or fear.

Cringely seems to be confusing supporting old versions of Windows with providing applications that run on current versions of Windows. Windows 98, Windows 98 SE and Windows Millennium are old versions of Windows whose support life cycle was supposed to end about a year ago but was extended by Microsoft. At the current time, Microsoft will continue to provide critical security updates for these operating systems but new applications won't be targeting them; instead they will target the current desktop version of Windows, which is Windows XP.

Microsoft's anti-spyware and anti-virus applications are not an operating system upgrade but new applications targeting the current versions of Windows. Even if they were, Windows 98, Windows 98 SE and Windows Millennium are past the stage in their support life cycle where they'd be eligible for such upgrades anyway. Given that Robert X. Cringely is quite familiar with the ways of the software industry, I'm surprised that he expects a vendor of shrinkwrapped software to provide new features for seven-year-old versions of its software when current versions exist. This practice is extremely uncommon in the software industry; I personally have never heard of such an instance by any company.


 

Michael Gartenberg has a blog posting entitled Is Google doing what Microsoft couldn't with their new search bar? where he writes

As Yogi would say, "it's deja vous, all over again". When Google introduced the newest version if their toolbar, it seems they added a feature that sounds very similar to what Microsoft wanted to do with SmartTags. Apparently the new software will create links in web text that will send you back to Google sites or sites of their choosing. If I recall correctly, there was a huge outcry over the SmartTag feature. Even petitions. How come there is no outcry here? Is it because Google does no evil?

Like I said yesterday, who needs a new browser to do stuff like this when you can co-opt IE with a toolbar?

This is one of the key differences between Google and Microsoft: perception. I am glad to see Google imitating one of Microsoft's innovations from a few years ago. After all, imitation is the sincerest form of flattery. As can be expected, Dave Winer is already on the offensive.

Personally, I can't wait to see how much cognitive dissonance this causes the Slashdot crowd.


 

Categories: Technology

I got an email from Shelly Farnham announcing a Social Computing Symposium, sponsored by Microsoft Research, which will be held at Redmond Town Center on April 25-26. Below is an excerpt from the announcement

In the spring of 2004 a small two-day Social Computing Symposium, sponsored by Microsoft Research, brought together researchers, builders of social software systems, and influential commentators on the social technology scene...A second symposium at Microsoft Research is planned for April 25th-26th, 2005...

If you are interested in attending the symposium, please send a brief, 300-500 word position paper. The symposium is limited to 75 people, and participants will be selected on the basis of submitted position papers.

Position papers should not be narrowly focused on either academic study or industry practice. Rather, submissions should do one or more of the following: address theoretical and methodological issues in the design and development of social computing technologies; reflect on concrete experiences with mobile and online settings; offer glimpses of novel systems; discuss current and evolving practices; offer views as to where research is needed.

We are particularly interested in position papers that explore any of the following areas. However, given the symposium’s focus on new innovation in social technologies, we are open to other topics as well.

a) The digitization of identity and social networks.

b) Proliferation and use of social metadata.

c) Mobile, ubiquitous social technologies changing the way we socialize.

d) Micropublishing of personal content (e.g. blogs), and the democratization of information exchange and knowledge development.

e) Social software on the global scale: the impact of cross-cultural differences in experiences of identity and community.

Please send your symposium applications to scspaper@microsoft.com by February 28th.

I would like to attend, which means I have to cough up a position paper. I have three ideas for position papers: (i) Harnessing Latent Social Networks: Building a Better Blogroll with XFN and FOAF, (ii) Blurring the Edges of Online Communication Forms by Integrating Email, Blogging and Instant Messaging or (iii) Can Folksonomies Scale to Meet the Challenges of the Global Web?

So far I've shared these ideas with one person and he thought the first idea was the best. I assume some of the readers of my blog will be at this symposium. What would you guys like to get a presentation or panel discussion on?


 

So I just read an interesting post about Technorati Tags on Shelley Powers's blog entitled Cheap Eats at the Semantic Web Café. As I read Shelley's post I kept feeling a strong sense of deja vu which I couldn't shake. If you were using the Web in the 1990s then the following descriptions of Technorati Tags taken from their homepage should be quite familiar.

What's a tag?

Think of a tag as a simple category name. People can categorize their posts, photos, and links with any tag that makes sense.

....

The rest of the Technorati Tag pages is made up of blog posts. And those come from you! Anyone with a blog can contribute to Technorati Tag pages. There are two ways to contribute:

  • If your blog software supports categories and RSS/Atom (like Movable Type, WordPress, TypePad, Blogware, Radio), just use the included category system and make sure you're publishing RSS/Atom and we'll automatically include your posts! Your categories will be read as tags.
  • If your blog software does not support those things, don't worry, you can still play. To add your post to a Technorati Tag page, all you have to do is "tag" your post by including a special link. Like so:
    <a href="http://technorati.com/tag/[tagname]" rel="tag">[tagname]</a>
    The [tagname] can be anything, but it should be descriptive. Please only use tags that are relevant to the post. No need to include the brackets. Just make sure to include rel="tag".

    Also, you don't have to link to Technorati. You can link to any URL that ends in something tag-like. These tag links would also be included on our Tag pages:
    <a href="http://apple.com/ipod" rel="tag">iPod</a>
    <a href="http://en.wikipedia.org/wiki/Gravity" rel="tag">Gravity</a>
    <a href="http://flickr.com/photos/tags/chihuahua" rel="tag">Chihuahua</a>

If you weren't using the Web in the 1990s this may seem new and wonderful to you, but the fact is we've all seen this before. The so-called Technorati Tags are glorified HTML META tags with all their attendant problems. The reason all the arguments in Shelley's blog post seemed so familiar is that a number of them are the same ones Cory Doctorow made in his article Metacrap from so many years ago. All the problems with META tags are still valid today, the most important being the fact that people lie (especially spammers) and that even well-intentioned people tend to categorize things incorrectly or according to their prejudices.

META tags simply couldn't scale to match the needs of the World Wide Web and are mostly ignored by search engines today. I wonder why people think that dressing up an old idea with new buzzwords (*cough* folksonomies *cough* social tagging *cough*) somehow turns a bad old idea into a good new one?


 

Categories: Technology

I noticed an eWeek article this morning titled Microsoft Won't Bundle Desktop Search with Windows which has had me scratching my head all morning. The article contains the following excerpts

Microsoft Corp. has no immediate plans to integrate desktop search into its operating system, a company executive said at a conference here this weekend.
...
Indeed, while including desktop search in Windows might seem like a logical step to many, "there's no immediate plan to do that as far as I know," Kroese said. "That would have to be a Bill G. [Microsoft chairman and chief software architect Bill Gates] and the lawyers' decision."

I thought Windows already had desktop search. In fact, Jon Udell of Infoworld recently provided a screencast in his blog post Where was desktop search when we needed it? which shows off the capabilities of the built-in Windows desktop search, which seems almost on par with the recent offerings from MSN, Google and the like.

Now I'm left wondering what the eWeek article means. Does it mean there aren't any plans to replace the annoying animated dog with the MSN butterfly? That Microsoft has access to a time machine and will go back and rip out desktop search from the operating system, including the annoying animated dog? Or is there some other obvious conclusion that can be drawn from the facts and the article that I have failed to grasp?

The technology press really disappoints me sometimes.


 

Categories: Technology

I've been doing a bit of reading about folksonomies recently. The definition of folksonomy in Wikipedia currently reads

Folksonomy is a neologism for a practice of collaborative categorization using simple tags in a flat namespace. This feature has begun appearing in a variety of social software. At present, the best examples of online folksonomies are social bookmarking sites like del.icio.us, a bookmark sharing site, Flickr, for photo sharing, and 43 Things, for goal sharing.

What I've found interesting about current implementations of folksonomies is that they are basically blogging with content other than straight text. del.icio.us is basically a linkblog and Flickr isn't much different from a photoblog/moblog. The innovation in these sites is in merging the category metadata from the different entries such that users can browse all the links or photos which match a specified keyword. For example, here are recent links added to del.icio.us with the tag 'chicken' and recent photos added to Flickr with the tag 'chicken'. Both sites not only allow browsing all entries that match a particular tag but also go as far as allowing one to subscribe to particular tags as an RSS feed.

I've watched with growing bemusement as certain people have started to debate whether folksonomies will replace traditional categorization mechanisms. Posts such as The Innovator's Lemma by Clay Shirky, Put the social back into social tagging by David Weinberger and it's the social network, stupid! by Liz Lawley go back and forth about this very issue. This discussion reminds me of the article Don't Let Architecture Astronauts Scare You by Joel Spolsky where he wrote

A recent example illustrates this. Your typical architecture astronaut will take a fact like "Napster is a peer-to-peer service for downloading music" and ignore everything but the architecture, thinking it's interesting because it's peer to peer, completely missing the point that it's interesting because you can type the name of a song and listen to it right away.

All they'll talk about is peer-to-peer this, that, and the other thing. Suddenly you have peer-to-peer conferences, peer-to-peer venture capital funds, and even peer-to-peer backlash with the imbecile business journalists dripping with glee as they copy each other's stories: "Peer To Peer: Dead!"

I think Clay is jumping several steps ahead to conclude that explicit classification schemes will give way to categorization by users. The one thing people are ignoring in this debate (as in all technical debates) is that the various implementations of folksonomies are popular because of the value they provide to the user. When all is said and done, del.icio.us is basically a linkblog and Flickr isn't much different from a photoblog/moblog. This provides inherent value to the user and, as a side effect [from the user's perspective], each new post becomes part of an ecosystem of posts on the same topic which can then be browsed by others. It isn't clear to me that this dynamic exists everywhere else explicit classification schemes are used today.

One thing that is clear to me is that personal publishing via RSS and the various forms of blogging have found a way to trample all the arguments against metadata in Cory Doctorow's Metacrap article from so many years ago. Once there is incentive for the metadata to be accurate and it is cheap to create there is no reason why some of the scenarios that were decried as utopian by Cory Doctorow in his article can't come to pass. So far only personal publishing has provided the value to end users to make both requirements (accurate & cheap to create) come true.

Postscript: Coincidentally I just noticed a post entitled Meet the new tag soup  by Phil Ringnalda pointing out that emphasizing end-user value is needed to woo people to create accurate metadata in the case of using semantic markup in HTML. So far most of the arguments I've seen for semantic markup [or even XHTML for that matter] have been academic. It would be interesting to see what actual value to end users is possible with semantic markup or whether it really has been pointless geekery as I've suspected all along. 


 

Categories: Technology

January 21, 2005
@ 01:44 AM

My article on Cω is finally published. It appeared on XML.com as Introducing Comega while it showed up as the next installment of my Extreme XML column on MSDN with the title An Overview of Cω: Integrating XML into Popular Programming Languages. It was also mentioned on Slashdot in the story Microsoft Research's C-Omega

I'll be following this with an overview of ECMAScript for XML (E4X) in a couple of months.


 

Categories: Technology

O'Reilly Emerging Technology Conference. It looks like I'm going to be attending the O'Reilly Emerging Technology Conference in San Diego. If you're there and want to find me I'll be with the rest of the exhibitors, hanging out with the MSR Social Computing Group talking about the social software applications that have been and are being built by MSN with input from the folks at Microsoft Research. I should also be at Tuesday's after party.

 

Categories: Technology

If you've been following the blogosphere you should know by now that the Google, Yahoo! and MSN search engines decided to start honoring the rel="nofollow" attribute on links to mean that the linked page shouldn't get any increased ranking from that link. This is intended to reduce the incentive for comment spammers who've been flooding weblogs with links to their websites in comment fields. There is another side effect of the existence of this element which is pointed out by Shelley Powers in her post The Other Shoe on Nofollow where she writes

I expected this reason to use nofollow would take a few weeks at least, but not the first day. Scoble is happy about the other reason for nofollow: being able to link to something in your writing and not give ‘google juice’ to the linked.

Now, he says, I can link to something I dislike and use the power of my link in order to punish the linked, but it won’t push them into a higher search result status.

Dave Winer started this, in a way. He would give sly hints about what people have said and done, leaving you knowing that an interesting conversation was going on elsewhere, but you’re only hearing one side of it. When you’d ask him for a link so you could see other viewpoints, he would reply that "…he didn’t want to give the other party Google juice." Now I imagine that *he’ll link with impunity–other than the fact that Technorati and Blogdex still follow the links. For now, of course. I imagine within a week, Technorati will stop counting links with nofollow implemented. Blogdex will soon follow, I’m sure.

Is this so bad? In a way, yes it is. It’s an abuse of the purpose of the tag, which was agreed on to discourage comment spammers. More than that, though, it’s an abuse of the the core nature of this environment, where our criticism of another party, such as a weblogger, came with the price of a link. Now, even that price is gone.

I don't see this as an abuse of the tag; I see it as fixing a bug in Google's PageRank algorithm. It's always seemed broken to me that Google assumes that any link to a source is meant to convey that the target is authoritative. Many times people link to websites they don't consider authoritative for the topic they are discussing. This notion of 'the price of a link' has been based on a design flaw in Google's PageRank algorithm. Social norms, not bugs in software, should direct social behavior.

A post entitled Nofollow Sucks on the Aimless Words blog has the following statement

Consider what the wholesale implementation of this new web standard means within the blogosphere. "nofollow" is English for NO FOLLOW and common sense dictates that when spider finds this tag it will not follow the subsequent link.

The author of the blog post later retracts his statements but it does bring up an interesting point. Robert Scoble highlights the fact that it didn't take a standards committee to come up with this, just backchannel conversations that took a few hours. However, as Tim Ewald recently wrote in his post "Make it easy for people to pay you"

The value of the standardization process is that it digs issues - architectural, security, reliability, scalability, etc. - and addresses them. It also makes the language more tighter and less vague

The Aimless Words weblog points out that it is unclear to anyone who wasn't party to the conversations between Google, MSN, Yahoo! and others exactly what the semantics of rel='nofollow' on a link are. Given that it is highly unlikely that all three search engines even use the same ranking algorithms, I'm not even sure what it means for them to say the link doesn't contribute to the ranking of the site. Will the penalty that Yahoo! search applies to such sites be the same in Google and MSN search? Some sort of spec or spec text would be nice to see instead of 'trust us', which seems to be what is emanating from all the parties involved at the current time.
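Purely as an illustration of the kind of behavior a spec could pin down, one plausible interpretation is that an indexer simply excludes such links from the link graph it feeds into its ranking algorithm. Here is a C# sketch of that interpretation (my own, operating on a toy XHTML fragment rather than real crawler output):

using System;
using System.Xml;

class LinkExtractor {
  public static void Main() {
    XmlDocument page = new XmlDocument();
    page.LoadXml("<html><body>" +
                 "<a href='http://example.com/good'>a normal link</a>" +
                 "<a href='http://example.com/spam' rel='nofollow'>a spammed link</a>" +
                 "</body></html>");
    foreach (XmlElement a in page.SelectNodes("//a")) {
      if (a.GetAttribute("rel") != "nofollow") {   // drop nofollow links from the link graph
        Console.WriteLine("count this link: " + a.GetAttribute("href"));
      }
    }
  }
}

Whether that is what each engine actually does, and whether 'excluded' means ignored entirely or merely discounted, is exactly the sort of detail a spec would settle.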

PS: I was wondering why I never saw the posts about this on the Google blog in RSS Bandit and it turned out to be because the Google Blog Atom feeds are malformed XML. Hopefully they'll fix this soon.


 

Categories: Technology

Derek has a post entitled Search is not Search where he alludes to conversations we had about my post Apples and Oranges: WinFS and Google Desktop Search. His blog post reminds me why I'm so disappointed that the benefits of adding structured metadata capabilities to file systems are being equated with desktop search tools that are a slightly better incarnation of the Unix grep command. Derek wrote

I was reminded of that conversation today, when catching up on a recent-ish publication from MIT's Haystack team: The Perfect Search Engine is Not Enough: A Study of Orienteering Behavior in Directed Search. One of the main points of the paper is that people tend not to use 'search' (think Google), even when they have enough information for search to likely be useful. Often they will instead go to a know location from which they believe they can find the information they are looking for.

For me the classic example is searching for music. While I tend to store my mp3s in a consistent directory structure such that the song's filename is the actual name of the song, I almost never use a generic directory search to find a song. I tend to think of songs as "song name: Conga Fury, band: Juno Reactor", or something like that, so when I'm looking for Conga Fury, I am more likely to walk the album subdirectories under my Juno Reactor directory, than I am to search for file "Conga Fury.mp3". The above paper talks a bit about why, and I think another key aspect that they don't mention is that search via navigation leverages our brain's innate fuzzy computation abilities. I may not remember how to spell "Conga Fury" or may think that it was "Conga Furvor", but by navigating to my solution, such inaccuracies are easily dealt with.

As Derek's example shows, comparing the scenarios enabled by a metadata-based file system against those enabled by desktop search is like comparing navigating one's music library using iTunes versus using Google Desktop Search or MSN Desktop Search to locate audio files.

Joe Wilcox (of Jupiter Research) seems to have reached a similar conclusion based on my reading of his post Yes, We're on the Road to Cairo where he wrote

WinFS could have anchored Microsoft's plans to unify search across the desktop, network and the Internet. Further delay creates opportunity for competitors like Google to deliver workable products. It's now obvious that rather than provide placeholder desktop search capabilities until Longhorn shipped, MSN will be Microsoft's major provider on the Windows desktop. That's assuming people really need the capability. Colleague Eric Peterson and I chatted about desktop search on Friday. Neither of us is convinced any of the current approaches hit the real consumer need. I see that as making more meaningful disparate bits of information and complex content types, like digital photos, music or videos.

WinFS promised to hit that need, particularly in Microsoft public demonstrations of Longhorn's capabilities. Now the onus and opportunity will fall on Apple, which plans to release metadata search capabilities with Mac OS 10.4 (a.k.a. "Tiger") in 2005. Right now, metadata holds the best promise of delivering more meaningful search and making sense of all the digital content piling up on consumers' and Websites' hard drives. But there are no standards around metadata. Now is the time for vendors to rally around a standard. No standard is a big problem. Take for example online music stores like iTunes, MSN Music or Napster, which all tag metadata slightly differently. Digital cameras capture some metadata about pictures, but not necessarily the same way. Then there are consumers using photo software to create their own custom metadata tags when they import photos.

I agree with his statements about where the real consumer need lies but disagree when he states that no standards around metadata exist. Music files have ID3 and digital images have EXIF. The problem isn't a lack of standards but instead a lack of support for these standards, which is a totally different beast.
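Reading or writing these tags is not rocket science either. As a minimal sketch (my own illustration, with a hypothetical file name, handling only the simple ID3v1 tag that lives in the last 128 bytes of an MP3 and ignoring the richer ID3v2 format), a C# program can pull out the title and artist like so:

using System;
using System.IO;
using System.Text;

class Id3v1Reader {
  public static void Main() {
    byte[] tag = new byte[128];
    using (FileStream fs = File.OpenRead("Conga Fury.mp3")) {  // hypothetical file name
      fs.Seek(-128, SeekOrigin.End);   // the ID3v1 tag occupies the last 128 bytes of the file
      fs.Read(tag, 0, 128);
    }
    if (Encoding.ASCII.GetString(tag, 0, 3) == "TAG") {        // "TAG" marks a valid ID3v1 block
      string title  = Encoding.ASCII.GetString(tag, 3, 30).TrimEnd('\0', ' ');
      string artist = Encoding.ASCII.GetString(tag, 33, 30).TrimEnd('\0', ' ');
      Console.WriteLine("{0} by {1}", title, artist);
    }
  }
}

The point isn't the particular API; it's that the standard is simple enough that any music application could populate and honor it.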

I was gung ho about WinFS because it looked like Microsoft was going to deliver a platform that made it easy for developers to build applications that took advantage of the rich metadata inherent in user documents and digital media. Of course, this would require the applications and devices that create content (e.g. digital cameras) to actually generate such metadata, which they don't today. I find it sad to read posts like Robert Scoble's Desktop Search Reviewers' Guide where he wrote

2) Know what it can and can't do. For instance, desktop search today isn't good at finding photos. Why? Because when you take a photo the only thing that the computer knows about that file is the name and some information that the camera puts into the file (like the date it was taken, the shutter speed, etc). And the file name is usually something like DSC0050.jpg so that really isn't going to help you search for it. Hint: put your photos into a folder with a name like "wedding photos" and then your desktop search can find your wedding photos.

What is so depressing about this post is that it costs very little for the digital camera or its associated software to tag JPEG files with comments like 'wedding photos' as part of the EXIF data, which would then make them accessible to various applications including desktop search tools.
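For the sake of argument, here is a rough sketch of what such tagging could look like with the .NET System.Drawing API (the file names are hypothetical, and it assumes the source JPEG already carries at least one EXIF property item that can be reused as a template, since PropertyItem has no public constructor):

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Text;

class TagPhoto {
  public static void Main() {
    using (Image photo = Image.FromFile("DSC0050.jpg")) {       // hypothetical input file
      PropertyItem item = photo.PropertyItems[0];               // reuse an existing item as a template
      item.Id = 0x010E;                                         // the EXIF ImageDescription tag
      item.Type = 2;                                            // ASCII string
      item.Value = Encoding.ASCII.GetBytes("wedding photos\0");
      item.Len = item.Value.Length;
      photo.SetPropertyItem(item);
      photo.Save("DSC0050-tagged.jpg", ImageFormat.Jpeg);       // write out a tagged copy
    }
  }
}

A camera vendor's import wizard could just as easily prompt for a description once and stamp it on a whole batch of photos.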

Perhaps the solution isn't expending resources to build a metadata platform that will be ignored by the applications that create content today but instead giving these applications an incentive to generate this metadata. For example, once I bought an iPod I became very careful to ensure that the ID3 information on the MP3s I'd load on it was accurate since I had a poor user experience otherwise.

I wonder what the iPod for digital photography is going to be. Maybe Microsoft should be investing in building such applications instead of boiling the oceans with efforts like WinFS which aim to ship everything including the kitchen sink in version 1.0.  


 

Categories: Technology

Doc Searls has a post entitled Resistance isn't futile where he writes

Russell Beattie says "it's game over for a lot of Microsoft competitors." I don't buy it, and explained why in a comment that's still pending moderation. (When the link's up, I'll put it here.)

Meanwhile, I agree with what Phillip Swann (who is to TVs what Russell is to mobile devices) says about efforts by Microsoft and others to turn the TV into a breed of PC:

...it's not going to happen, no matter how much money is spent in the effort. Americans believe the TV is for entertainment and the PC is for work. New TV features that enhance the viewing experience, such as Digital Video Recorders, High-Definition TV, Video on Demand, Internet TV (the kind that streams Net-based video to the television, expanding programming choices) and some Interactive TV features (and, yes, just some), will succeed. Companies that focus on those features will also succeed.

But the effort to force viewers to perform PC tasks on the TV will crash faster than a new edition of a buggy PC software. I realize that doesn't speak to all of Russell's points, or to more than a fraction of Microsoft's agenda in the consumer electronics world; but it makes a critical distinction (which I boldfaced, above) that's extremely important, and hard to see when you're coming from the PC world.

It seems Doc Searls is ignoring the truth around him. Millions of people [including myself] watch TV by interacting with a PC via TiVo and other PVRs. I haven't met anyone who, after using a PVR, wants to go back to regular TV. As is common with most Microsoft detractors, Doc Searls is confusing the problems with v1/v2 of a product with the long-term vision for the product. People used to say the same things about Windows CE & PalmOS but now Microsoft has taken the lead in the handheld market.

The current crop of Windows Media Centers have their issues, many of which have even been pointed out by Microsoft employees. However it is a big leap to translate that into the claim that people don't want more sophistication out of their television-watching experience. TiVo has already taught us that people do. The question is who will be providing the best experience possible when the market matures?


 

Categories: Technology

Recently Ted Leung posted a blog entry entitled Linguistic futures where he summarized a number of recent discussions in the blogosphere about potential new features for the current crop of popular programming languages. He wrote

1. Metaprogramming facilities

Ian Bicking and Bill Clementson were the primary sources on this particular discussion. Ian takes up the simplicity argument, which is that metaprogramming is hard and should be limited -- of course, this gets you things like Python 2.4 decorators, which some people love, and some people hate. Bill Mill hates decorators so much that he wrote the redecorator, a tool for replacing decorators with their "bodies". 

2. Concurrency

Tim Bray and Herb Sutter provided the initial spark here. The basic theme is that the processor vendors are finding it really hard to keep the clock speed increases going (that's actually been a trend for all of 2004), so they're going to start putting more cores on a chip... But the big take away for software is that uniprocessors are going to get better a lot more slowly than we are used to. So that means that uniprocessor efficiency matters again, and the finding concurrency in your program is also going to be important. This impacts the design of programming languages as well as the degree of skill required to really get performance out of the machine...

Once that basic theme went out, then people started digging up relevant information. Patrick Logan produced information on Erlang, Mozart, ACE, Doug Lea, and more. Brian McCallister wrote about futures and then discovered that they are already in Java 5.

It seems to me that Java has the best support for threaded programming. The dynamic languages seem to be behind on this, which is must change if these predictions hold up. 

3. Optional type checking in Python

Guido van Rossum did a pair of posts on this topic. The second post is the scariest because he starts talking about generic types in Python, and after seeing the horror that is Java and C# generics, it doesn't leave me with warm fuzzies.

Patrick Logan, PJE, and Oliver Steele had worthwhile commentary on the whole mess. Oliver did a good job of breaking out all the issues, and he worked for quite a while on Dylan which had optional type declarations. PJE seems to want types in order to do interfaces and interface adaptation, and Patrick's position seems to be that optional type declarations were an artifact of the technology, but now we have type inference so we should use that instead. 

Coincidentally I recently finished writing an article about Cω which has integrated both optional typing via type inference and concurrency into C#. My article indirectly discusses the existence of type inference in Cω but doesn't go into much detail. I don't mention the concurrency extensions in Cω in the article primarily due to space constraints. I'll give a couple of examples of both features in this blog post.

Type inference in Cω allows one to write code such as

public static void Main(){
  x = 5; 
  Console.WriteLine(x.GetType()); //prints "System.Int32"
}

This feature is extremely beneficial when writing queries using the SQL-based operators in Cω. Type inference allows one to turn the following Cω code

public static void Main(){

  struct{SqlString ContactName; SqlString Phone;} row;
  
  struct{SqlString ContactName; SqlString Phone;}* rows = select
            ContactName, Phone from DB.Customers;
 
  foreach( row in rows) {
      Console.WriteLine("{0}'s phone number is {1}", row.ContactName, row.PhoneNumber);
   }
}

to

public static void Main(){

  foreach( row in select ContactName, Phone from DB.Customers ) {
      Console.WriteLine("{0}'s phone number is {1}", row.ContactName, row.Phone);
   }
}

In the latter code fragment the type of the row variable is inferred so it doesn't have to be declared. The variable is now seemingly dynamically typed but really isn't, since the type checking is done at compile time. This seems to offer the best of both worlds because the programmer can write code as if it is dynamically typed but is warned of type errors at compile time when a type mismatch occurs.

As for concurrent programming, many C# developers have embraced the power of using delegates for asynchronous operations. This is one place where I think C# and the .NET framework did a much better job than the Java language and the JVM. If Ted likes what exists in the Java world I bet he'll be blown away by using concurrent programming techniques in C# and .NET. Cω takes the support for asynchronous programming further by adding mechanisms for tying methods together in the same way a delegate and its callbacks are tied together. Take the following class definition as an example

public class Buffer {
   public async Put(string s);
   public string Get() & Put(string s) { return s; }
}

In the Buffer class, a call to the synchronous Get() method blocks until a corresponding call to the asynchronous Put() method has been made. Once this happens, the arguments of the Put() call are treated as local variable declarations in the Get() method and the code block runs. A call to Put(), on the other hand, returns immediately while its arguments are queued as inputs to a matching call to Get(). This pairing assumes that each Put() call eventually has a corresponding Get() call and vice versa.
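A minimal usage sketch (my own, not taken from the Cω documentation) may make the pairing clearer:

Buffer buf = new Buffer();
buf.Put("hello world");   // asynchronous: returns immediately and queues the string
string s = buf.Get();     // synchronous: a queued Put() exists, so the chord body runs right away
Console.WriteLine(s);     // prints "hello world"
// Had Get() been called first, it would have blocked until some other thread called Put().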

There are a lot more complicated examples in the documentation available on the Cω website.


 

Categories: Technology

Mike Vernal and I are supposed to be writing a Bill Gates Think Week paper about Social Software. However, given how busy both our schedules are, this may turn out to be easier said than done. For this reason I've decided to continue blogging my thoughts on this class of software, which led me to switch job roles a few months ago.

Today's entry is inspired by a blog post by Stowe Boyd entitled Mark Ranford on Open Standards for Social Tools. Stowe writes

I would like to see -- as just one example -- a means to manage my personal social tools digital identity independently of the various services through which I apply and augment it. None of the social tools that I use today -- whether communication tools, coordinative tools, or community tools -- support anything like what should be in place. My eBay or Amazon reputation is not fungible; my slash dot karma cannot be tapped when I join the Always-On Network; and the degree of connectedness I have achieved through an explicit social networking solution like Spoke, LinkedIn, or ZeroDegrees or through a more implicit social media model as supported by blogging cannot interoperate in the other context in any productive way.

We are forced to live in a thousand separate walled gardens; a thousand, disconnected worlds, where each has to be managed and maintained as if the other don't exist at all.

As a result, I have gotten to the point where I am going to retreat from those worlds that are the least open, the least integrated to others, and the most self-centered. The costs of participating with dozens of tiny islands of socializing are just too high, and I have decided to extricate myself from them all.

This is the biggest problem with the world of Social Software today. I wrote about this in my previous post on the topic entitled Social Software is the Platform of the Future. In that post I wrote

So where do we begin? It seems prudent to provide my definition of social software so we are all on the same page. Social software is any software that enables people to interact with one another. To me there are five broad classes of social software. There is software that enables 

1. Communication (IM, Email, SMS, etc)
2. Experience Sharing (Blogs, Photo albums, shared link libraries such as del.icio.us)
3. Discovery of Old and New Contacts (Classmates.com, online personals such as Match.com, social networking sites such as Friendster, etc)
4. Relationship Management (Orkut, Friendster, etc)
5. Collaborative or Competitive Gaming (MMORPGs, online versions of traditional games such as Chess & Checkers, team-based or free-for-all First Person Shooters, etc)

Interacting with the aforementioned forms of software is the bulk of the computing experience for a large number of computer users especially the younger generation (teens and people in their early twenties). The major opportunity in this space is that no one has yet created a cohesive experience that ties together the five major classes of social software. Instead the space is currently fragmented. Google definitely realizes this opportunity and is aggressively pursuing entering these areas as is evidenced by their foray into GMail, Blogger, Orkut, Picasa, and most recently Google Groups 2. However Google has so far shown an inability to tie these together into a cohesive and thus "sticky" experience. On the other hand Yahoo! has been better at creating a more integrated experience and thus a better online one-stop-shop (aka portal) but has been cautious in venturing into the newer avenues in social software such as blogs or social networking. And then there's MSN and AOL.

Since posting that entry I've changed jobs and now work at MSN delivering social software applications such as MSN Messenger, Hotmail and MSN Spaces. My new job role has given me a more enlightened perspective on some of these problems. The issues Stowe has with the existing Social Software landscape will not be easily solved with industry standards, if at all. The reasons for this are both social and technical.

The social problems are straightforward: there is little incentive for competing social software applications to make it easy for people to migrate away from their service. There is no business incentive for Friendster to make it easy to export your social network to Orkut or for eBay to make it easy to export your sales history and reputation to Yahoo! Auctions. Besides the obvious consequence of lock-in, another more subtle consequence is that the first-mover advantage is very significant in the world of Social Software. New entrants into various social software markets need to either be extremely innovative (e.g. GMail) or bundle their offerings with other more popular services (e.g. Yahoo! Auctions) to gain any sort of popularity. Simply being cheaper or better [for some definition of better] does not cut it.

The value of a user's social network and social information is the currency of a lot of online services. This is one of the reasons efforts like Microsoft's Hailstorm were shunned by vendors. The biggest value users get out of services like eBay and Amazon is that they remember information about the user such as how many successful sales they've made or their favorite kinds of music. Users return to such services because of the value of the social network around the service (Amazon reviews, eBay sales feedback, etc) and the accumulated information about the user that they hold. Hailstorm aimed to place a middleman between the user and the vendors, with Microsoft as the broker. Even though this might have turned out to be better for users, it was definitely bad for the various online vendors and they rejected the idea. The fact that Microsoft was untrusted within the software industry did not help. A similar course of events is playing itself out with Microsoft's identity service, Passport. The current problems with sharing identities across multiple services have been decried by many; even Microsoft critics feel that Passport may have been better than the various walled gardens we have today.

The technical problems are even more interesting. The fact of the matter is that we still don't know how to value social currency in any sort of objective way. Going back to Stowe's examples, exactly what should having high karma on Slashdot translate to besides the fact that you are a regular user of the site? Even the site administrators will tell you that your Slashdot karma is a meaningless value. How do you translate the fact that the various feeds for my weblog have 500 subscribers in Bloglines into some sort of reputation value when I review articles on Amazon? The fact is that there is no objective value for reputation; it is all context and situation specific. Even for similar applications, differences in how certain data is treated can make interoperability difficult.

Given the aforementioned problems I suspect that for the immediate future walled gardens will be the status quo in the world of social software.

As for MSN, we will continue to make the experience of sharing, connecting and interacting with friends and family as cohesive as possible across various MSN properties. One of the recent improvements we made in getting there was outlined by Mike Pacheloc in his post Your contacts and buddy lists are the same! where he wrote

Over the last couple of years we took the challenge of integrating the MSN Messenger buddy lists and your MSN Address Book Contacts into one centralized place in MSN.  Although they were called Contacts in both MSN Messenger, Hotmail, and other places in MSN, only until now are they actually one list!  Some benefits of doing this:

* You can now keep detailed information about your MSN Messenger buddies.  Not just the Display Name and Passport login, but all their email addresses, phone numbers, street addresses and other other information.
* Creating a buddy in MSN Messenger means you immediately can send email to that buddy in Hotmail, because the information is already in the Hotmail Contacts list!
* If you define a Group in Messenger, that Group is available in Hotmail.  You can email the Group immediately.  If you rename the Group in Hotmail, the change is immediately made in Messenger.

These are a few of the benefits of the integration we did on the services platform.

The benefits listed above do not do justice to how fundamental the change has been. Basically, we've gotten rid of one of the major complaints about online services: maintaining too many separate lists of people you know. One of the benefits of this is that you can utilize this master contact list across a number of scenarios outside of just one local application like an email application or an IM client. For example, in MSN Spaces we allow users to use their MSN Messenger allow list (people you've granted permission to contact you via IM) as an access control list for who can view your Space (blog, photo album, etc). There are a lot more interesting things you can do once the applications you use can tell "here are the people I know, these are the ones I trust, etc". We may not get there as an industry anytime soon but MSN users will be reaping the benefits of this integration in more and more ways as time progresses.

Well, I have to go eat some breakfast, I'm starved...

Happy New Year!!!


 

Categories: MSN | Technology

December 19, 2004
@ 02:06 AM

The longer I have my AudioVox SMT 5600 the more I begin to understand what Russell Beattie has been preaching all these years. Yesterday I attended the Phoenix Suns vs. Seattle Sonics game at Key Arena. In between the 1st and 2nd quarters they were playing some songs by Lil Jon (having gone to school in Atlanta I smile whenever I hear "Skeet Skeet" in public) and then, lo and behold, I noticed Lil Jon himself sitting courtside. My date informed me that he was scheduled to be in concert later that night.

I quickly navigated to http://www.ticketmaster.com on my phone to see if I could get some tickets but it seems the site doesn't support mobile browsers. I've slowly begun to get hooked on having Internet access with me wherever I am. Email, traffic reports, movie times and more can now be checked anytime and anywhere. How have I managed without a SmartPhone all this time?


 

Categories: Technology

Almost two years ago I wrote a blog entry entitled Useful vs. Useless Abstractions which stated that the invention of URIs by the IETF/W3C crowd to replace the combination of URLs and URNs was a step backwards. I wrote

URIs are a merger of the syntax of URLs and URNs which seem to have been repurposed from their original task of identifying and locating network retrievable documents to being more readable versions of UUIDs which can be used to identify any person, place or thing regardless of whether it is a file on the Internet or a feeling in your heart.

This addition to the URN/URL abstraction seemed to address some of the bits which may have been considered to be leaky (if I enter http://www.yahoo.com in my browser and it loads it from its cache then the URL isn't acting as a location but as an identifier). Others also saw URIs as a way for people who needed user friendly UUIDs for use on the Web. I've so far come into contact with URIs in two aspects of my professional experience and they have both left a bad taste in my mouth.

URIs and the Semantic Web: Ambiguity²

One problem with URIs is that they don't uniquely identify a single thing. Consider the following hyperlinked statements

Dare is a Georgia Tech alumnus.

Dare's website is valid XHTML.

In the above statements I use the URI "http://www.25hoursaday.com" to identify both myself and my web page. This is a bad thing for the Semantic Web. If you read Aaron Swartz's excellent primer on the Semantic Web you will notice where he talks about RDF and its dependence on URIs
...
Now consider...

<http://aaronsw.com/> <http://love.example.org/terms/reallyLikes> <http://www.25hoursaday.com/> .

Can you tell whether Aaron really likes my website or me personally from the above RDF statement? Neither can I. This inherent ambiguity is yet another issue with the vision of the Semantic Web and the current crop of Semantic Web technologies that are overly dependent on URIs.

Over the past few years that I've been on the W3C Technical Architecture Group mailing list, I've seen this inherent ambiguity of URIs result in many lengthy, seemingly never-ending discussions about how to work around this problem or whether it is even a problem at all. The discussion thread entitled Information resources? which morphed into referendum on httpRange-14 is the latest incarnation of this permathread on the WWW-TAG mailing list.

I was much heartened to see that Tim Berners-Lee is beginning to see some of the problems caused by the inherent ambiguity of URIs. In his most recent response to the "referendum on httpRange-14" thread he writes

> It is a best practice that there be some degree of consistency
> in the representations provided via a given URI.

Absolutely.

> That applies *both* when a URI identifies a picture of
> a dog *and* when a URI identifies the dog itself.
>
> *All* URIs which offer consistent, predictable representations will be
> *equally* beneficial to users, no matter what they identify.

Now here seems to be the crunch.
The web architecture relies, we agree I think, on this consistency
or predictability of representations of a given URI.

The use of the URI in the web is precisely that it is associated
with that class of representations which could be returned for it.

Because the "class of representations which could be returned"
is a rather clumsy notion, we define a conceptual thing
which is related to any valid representation associated with the URI,
and as the essential property of the class is a similarity in
information content, we call the thing an Information Resource.

So a URI is a string whose sole use in the web architecture
is to denote that information resource.

Now if you say in the semantic web architecture that the same URI will
identify a dog, you have a conflict.


>
>> The current web relies on people getting the same information from
>> reuse of the same URI.
>
> I agree. And there is a best practice to reinforce and promote this.
>
> And nothing pertaining to the practice that I and others employ, by
> using http: URIs to identify non-information resources, in any way
> conflicts with that.

Well, it does if the semantic web can talk about the web, as the
semantic web can't be ambiguous about what an identifier identifies in the way that
one can in english.

I want my agent to be able to access a web page, and then use the URI
to refer to the information resource without having to go and find some
RDF somewhere to tell it whether in fact it would be mistaken.

I want to be able to model lots and lots of uses of URIs in existing
technology in RDF. This means importing them wholesale,
it needs the ability to use a URI as a URI for the web page without
asking anyone else.

The saga continues. The ambiguity of URIs has also been a problem in XML namespaces, since users of XML often assume a namespace URI should lead to a network retrievable document when accessed. Since they are URIs, this isn't necessarily true. If they were URLs it would be, and if they were URNs it would not.
 

Categories: Technology

It seems about half the feeds in my aggregator are buzzing with news of the new Google Desktop Search. Although I don't really have the need for a desktop search product I was going to download it and try it out anyway, until I found out it uses a web browser interface accessed via a local web server. Not being a fan of browser-based user interfaces I decided to pass. Since then I've seen a couple of posts from people like Joe Gregorio and Julia Lerman who've claimed that Google Desktop Search delivers the promise of WinFS today.

Full text search is really orthogonal to what WinFS is supposed to enable on the Windows platform. I've written about such misconceptions in the past, most recently in my post Killing the "WinFS is About Making Search Better" Myth where I wrote

At its core, WinFS was about storing strongly typed objects in the file system instead of opaque blobs of bits. The purpose of doing this was to make accessing and manipulating the content and metadata of these files simpler and more consistent. For example, instead of having to know how to manipulate JPEG, TIFF, GIF and BMP files there would just be a Photo item type that applications would have to deal with. Similarly one could imagine just interacting with a built in Music item instead of programming against MP3, WMA, OGG, AAC, and WAV files. In talking to Mike Deem a few months ago and recently seeing Bill Gates discuss his vision for WinFS to folks in our building a few weeks ago it is clear to me that the major benefits of WinFS to end users is the possibilities it creates in user interfaces for data organization.

Recently I switched from using WinAmp to iTunes on the strength of the music organizational capabilities of the iTunes library and "smart playlists". The strength of iTunes is that it provides a consistent interface to interacting with music files regardless of their underlying type (AAC, MP3, etc) and provides ways to add metadata about these music files (ratings, number of times played) then organize these files according to this metadata. Another application that shows the power of the data organization based on rich, structured metadata is Search Folders in Outlook 2003. When I used to think of WinFS I got excited about being able to perform SQL-like queries over items in the file system. Then I heard Bill Gates and Mike Deem speak about WinFS then saw them getting excited about the ability to take the data organizational capabilities of features like the My Pictures and My Music folders in Windows to the next level it all clicked.

Now this isn't to say that there aren't some searches made better by coming up with a consistent way to interact with certain file types and providing structured metadata about these files. For example a search like

Get me all the songs [regardless of file type] either featuring or created by G-Unit or any of its members (Young Buck, 50 Cent, Tony Yayo or Lloyd Banks) between 2002 and 2004 on my hard drive

is made possible with this system. However it is more likely that I want to navigate this in a UI like the iTunes media library than I want to type the equivalent of SQL queries over my file system.

Technologies like Google Desktop Search solve a problem a few people have while WinFS is aimed at solving a problem most computer users have. The problem Google Desktop Search mainly solves is how to locate a single file in your file system that may not be easy to navigate to via the traditional file system explorer. However most computer users put files in a few locations on their file system so they usually know where to find the file they need. Most of the time I put files in one of four folders on my hard drive

  1. My Documents
  2. My Music
  3. Visual Studio Projects (subfolder of My Documents)
  4. My Download Files 

For some of my friends you can swap the "Visual Studio Projects" folder for the "My Pictures" folder. I also know a number of people who just drop everything on their Windows desktop. However the point is still the same: lots of computer users store a large amount of their content in a single location where it eventually becomes hard to manipulate, organize and visualize the hundreds of files contained therein. The main reason I stopped using WinAmp was that the data organization features of Windows Explorer are so poor. Basically all I have when dealing with music files is 'sort by type' or some variation of 'sort by name' and a list view. iTunes changed the way I listened to music because it made it extremely easy to visualize and navigate my music library. The ability to also perform rich ad-hoc queries via Smart Playlists is also powerful but a feature I rarely use.

Tools like Lookout and Google Desktop Search are a crutch to get around the fact that the file navigation metaphor on most desktop systems is past its prime and in dire need of improvement. This isn't to say fast full text search isn't important; even with all the data organizational capabilities of Microsoft Outlook I still tend to use Lookout when looking for emails sent more than a few weeks ago. However it is not the high order bit in solving the problems most computer users have with locating and interacting with the files on their hard drives.

The promise of WinFS is that it aims to turn every application [including file navigation applications like Windows Explorer] into the equivalent of Outlook and iTunes when it comes to data visualization and navigation by baking such functionality into the file system APIs and data model. Trying to reduce that to "full text search plus indexing" is missing the forest for the trees. Sure, that may get you part of the way but in the end it's like driving a car with your feet. There is a better way and it is much closer than most people think.


 

Categories: Technology

A few days ago I saw the article Xamlon looks to beat Microsoft to the punch on C|Net which begins

On Monday, Colton's company Xamlon released its first product, a software development kit designed to speed development of user interface software for Web applications. Xamlon built the program from the published technical specifications of Microsoft's own user interface development software, which Microsoft itself doesn't plan to release until 2006.

I've been having difficulty processing this news over the past few days. Reading the Xamlon homepage gives more cause for pause. The site proclaims

XAML is a revolution in Windows application development. Xamlon is XAML today.

  • Rapidly build Windows user interfaces with HTML-like markup
  • Easily draw user interfaces and convert directly to XAML
  • Deploy to the Windows desktop and to Internet Explorer with absolutely no changes to your application.
  • Run XAML applications on versions of Windows from ’98 to Longhorn, and via the Web with Internet Explorer
  • Write applications that port easily to Avalon

What I find interesting about this are the claims that involve unreleased products that aren't expected to reach beta until next year and ship the year afterwards. I can understand that it is cool to claim to have scooped Microsoft but considering that XAML and Longhorn are still being worked on it seems strange to claim to have built a product that is compatible with unreleased Microsoft products.

While writing this blog entry I decided to take a quick glance at the various Avalon folks' blogs and I stumbled on a post entitled Attribute grammar for xaml attributes from Rob Relyea, a program manager on the Avalon team.  Rob writes

As part of this change, the flexibility that people have with compact syntax will be reduced.  Today, they can use *Bind(), *Button(), *AnyClass() in any attribute.  We'd like to restict this to a small set of classes that are explicitly in need of being set in an attribute.

I'm not going into great detail in the description of our fix because I'd prefer to be the first company to ship our design.  :-)

Considering that XAML isn't even in beta yet, one can expect a lot more changes to the language before it ships in 2006. Yet Xamlon claims to be compatible with XAML. I have no idea how the Xamlon team plans to make good on their promise to be compatible with XAML and Longhorn before they've even shipped but I'd love to see what developers out there think about this topic.

I totally empathize with the Avalon team right now. I'm in the process of drafting a blog post about the changes to System.Xml of the .NET Framework that have occurred between Whidbey beta 1 and Whidbey beta 2. Even though we don't have companies building products based on interim versions of System.Xml we do have book authors who'll have to rewrite [or maybe even eliminate] chapters about our stuff based on these changes, and then of course there's the Mono folks implementing System.Xml who seem to be tracking our Whidbey betas.  


 

October 8, 2004
@ 05:58 PM

In his post Debating WS-* Geoff Arnold writes

Tim Bray continues to discuss the relevance of the so-called WS-* stack: the collection of specifications related to XML-based web services. I'm not going to dive into the technology or business issues here; however Tim referred to a piece by Dare Obasanjo which argues that WS-* Specs are like JSRs. I tried to add a comment to this, but Dare's blog engine collapsed in a mess of XML, so I'll just post it here. Hopefully you'll be able to get back to read the original piece if you're interested. [Update: It looks as if my comment made it into Dare's blog after all.]

Just out of curiosity... if WS-* are like JSRs, what's the equivalent of the JCP? Where's the process documented, and what's the governance model? The statement "A JSR is basically a way for various Java vendors to standardize on a mechanism for solving a particular customer problem" ignores the fact that it's not just any old "way"; it's a particular "way" that has been publically codified, ratified by the community, and evolved to meet the needs of participants.

Microsoft isn't trying to compete with standards organizations. The JCP process falls out of the fact that Sun decided not to submit Java to a standards body but got pushback from customers and other Java vendors for something similar. So Sun manufactured an organization and process quite similar to a standards body with itself at the head. Microsoft isn't trying to get into this game.

The WS-* strategy that Microsoft is pursuing is informed by a lot of experience in the world of XML and standards. In the early days of XML, the approach to designing XML standards [especially at the W3C] was to throw together a bunch of preliminary ideas and competing draft specs without implementation experience and then try to merge them into a coherent whole. This has been problematic as I wrote a few months ago

In recent times the way the W3C produces a spec is to either hold a workshop where different entities can submit proposals and then form a working group based on coming up with a unification of the various proposals, or to form a working group to come up with a unification of various W3C Notes submitted by member companies. Either way the primary mechanism the W3C uses to produce technology specs is to take a bunch of contradictory and conflicting proposals then have a bunch of career bureaucrats try to find some compromise that is a union of all the submitted specs. There are two things that fall out of this process. The first is that the process takes a long time, for example the XML Query workshop was in 1998 and six years later the XQuery spec is still a working draft. Also the XInclude proposal was originally submitted to the W3C in 1999 but five years later it is just a candidate recommendation. Secondly, the specs that are produced tend to be too complex yet minimally functional since they compromise between too many wildly differing proposals. For example, W3C XML Schema was created by unifying the ideas behind DCD, DDML, SOX, and XDR. This has led to a dysfunctional specification that is too complex for the simple scenarios and nigh impossible to use in defining complex XML vocabularies.

The WS-* process Microsoft has engaged the industry in aims at preventing these problems from crippling the world of XML Web Services as they have the XML world. Initial specs are written by the vendors who'll primarily be implementing the functionality, then they are revised based on the results of various feedback and interoperability workshops. As a result of these workshops some specs are updated while others turn out to be infeasible and are deprecated. Some people such as Simon Fell, in his post WS-Gone, have complained that this leads to a situation where things are too much in flux but I think this is a lot better than publishing standards which turn out to contain features that are either infeasible to implement or are just plain wrong. Working in the world of XML technologies over the past three years I've seen both.

The intention is that eventually the specs that show they are the fittest will end up in the standards process. This is exactly what has happened with WS-Security (OASIS) and WS-Addressing (W3C). I expect more to follow in the future.


 

Categories: Technology | XML

In his post What is the platform? Adam Bosworth writes

When I was at Microsoft, the prevailing internal assumption was that:
1) Platforms were great because they were "black holes" meaning that the more functionality they had, the more they sucked in users and the more users they had the more functionality they sucked in and so, it was a virtuous cycle.
...
The real value in my opinion has moved from the software to the information and the community. Amazon connects you to books, movies, and so on. eBay connects you to goodness knows how many willing sellers of specific goods. Google connects you to information and dispensers of goods and services. In every case, the push is for better and more timely access both to information and to people. I cannot, for the life of me, see how Longhorn or Avalon or even Indigo help one little bit in this value chain.

My mother never complains that she needs a better client for Amazon. Instead, her interest is in better community tools, better book lists, easier ways to see the book lists, more trust in the reviewers, librarian discussions since she is a librarian, and so on.

The platform of this decade isn't going to be around controlling hardware resources and rich UI. Nor do I think you're going to be able to charge for the platform per se. Instead, it is going to be around access to community, collaboration, and content. And it is going to be mass market in the way that the web is mass market, in the way that the iPod is mass market, in the way that a TV is mass market. Which means I think that it is going to be around services, not around boxes.

Last week while hanging out with Mike Vernal and a couple of smart folks from around Microsoft I had an epiphany about how the core of the consumer computing experience of the future would be tied to Web-based social software not operating systems and development platforms. When I read Adam Bosworth's post this weekend, it became clear to me that folks at Google have come to the same conclusion or soon will once Adam is done with them.

So where do we begin? It seems prudent to provide my definition of social software so we are all on the same page. Social software is any software that enables people to interact with one another. To me there are five broad classes of social software. There is software that enables 

  1. Communication (IM, Email, SMS, etc)
  2. Experience Sharing (Blogs, Photo albums, shared link libraries such as del.icio.us)
  3. Discovery of Old and New Contacts (Classmates.com, online personals such as Match.com, social networking sites such as Friendster, etc)
  4. Relationship Management (Orkut, Friendster, etc)
  5. Collaborative or Competitive Gaming (MMORPGs, online versions of traditional games such as Chess & Checkers, team-based or free-for-all First Person Shooters, etc)

Interacting with the aforementioned forms of software is the bulk of the computing experience for a large number of computer users especially the younger generation (teens and people in their early twenties). The major opportunity in this space is that no one has yet created a cohesive experience that ties together the five major classes of social software. Instead the space is currently fragmented. Google definitely realizes this opportunity and is aggressively pursuing entering these areas as is evidenced by their foray into GMail, Blogger, Orkut, Picasa, and most recently Google Groups 2. However Google has so far shown an inability to tie these together into a cohesive and thus "sticky" experience. On the other hand Yahoo! has been better at creating a more integrated experience and thus a better online one-stop-shop (aka portal) but has been cautious in venturing into the newer avenues in social software such as blogs or social networking. And then there's MSN and AOL.

One thing Adam fails to mention in his post is that the stickiness of a platform is directly related to how tightly it holds on to a user's data. Some people refer to this as lock-in. Many people will admit that the reason they can not migrate from a platform is that they have data tied to that platform they do not want to give up. For the most part on Windows, this has been local documents in the various Microsoft Office formats. The same goes for database products; nine times out of ten data tends to outlive the application that was originally designed to process it. This is one of the reasons Object Oriented Databases failed, they were too tightly coupled to applications as well as programming languages and development platforms. The recent push for DRM in music formats is also another way people are beginning to get locked in. I know at least one person who's decided he won't change his iPod because he doesn't want to lose his library of AAC encoded music purchased via the iTunes Music Store.

The interesting thing about the rise of social software is that this data lock-in is migrating from local machines to various servers on the World Wide Web. At first the battle for the dominant social software platform will seem like a battle amongst online portals. However this has an interesting side effect on popular operating system platforms. If the bulk of a computer user's computing experience is tied to the World Wide Web then the kind of computer or operating system the browser is running on tends to be irrelevant.

Of course, there are other activities that one performs on a computer such as creating business documents like spreadsheets or presentations and listening to music. However most of these are not consumer activities and even then a lot of them are becoming commodified. Music already has MP3s which are supported on every platform. Lock-in based on office document formats can't last forever and I suspect that within the next five years it will cease to be relevant. This is not to say that a web browser is all people need for their computing needs, but considering how much of most people's computer interaction is tied to the Internet, it seems likely that owning the user's online experience will one day be as valuable as owning the operating system the user's Web browser is running on. Maybe more so if operating systems become commodified thanks to the efforts of people like Linus Torvalds.

This foray by Google into building the social software platform is definitely an interesting challenge to Microsoft both in the short term (MSN) and in the long term (Windows). This should be fun to watch.


 

Categories: Technology

In recent times I've been pitching the concept of a digital information hub to various folks at work. Currently people have multiple applications for viewing and authoring messages. There are instant messengers, email clients, USENET news readers and RSS/Atom aggregators. All of these applications basically do the same thing; provide a user interface for authoring and viewing messages sent by one or more people to the user.

Currently the split has been based on what wire protocol is used to send and receive the messages. This is a fairly arbitrary distinction which means little to non-technical users. The more interesting distinction is usage patterns. For all of the aforementioned application types messages really fall into two groups; messages I definitely will read and messages I might want to read. In Outlook, I have messages sent directly to me which I'll definitely read and messages on various discussion lists I am on [such as XML-DEV] which I might want to read if the titles seem interesting or relevant to me. In Outlook Express, there are newsgroups where I read every message and others where I skim content looking for titles that are of interest or are relevant to me. In RSS Bandit, there are feeds where I read every single post (such as Don's or Joshua's blogs) and those where I skim them looking for headlines (e.g. Blogs @ MSDN). The list goes on...

The plan I've had for RSS Bandit for a while has been to see if I can evolve it into the single application where I manage all messages sent to me. Adding NNTP support is a first step in this direction. Recently I realized that some other folks have realized the power of the digital information hub; Google.

However Google has decided to bring the mountain to Mohammed. Instead of building an application that manages messages sent via all the different protocols in a single application they've decided to expose the major classes of messages as Atom feeds. They already provide Atom feeds for weblogs hosted on Blogger. Recently they've experimented with Atom feeds for USENET groups as well as Atom feeds for your GMail account. This means instead of one application being your digital information hub, any Atom savvy client (such as RSS Bandit) can now hold this honor if you use Google as your online content provider.
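
To illustrate why this is appealing, here is a minimal C# sketch that treats any Atom feed, whether it comes from a blog, a USENET gateway or a mail account, as the same kind of message list using nothing but System.Xml. The feed URL is a placeholder, and the namespace URI shown is the Atom 1.0 one; the pre-1.0 drafts in circulation at the time used http://purl.org/atom/ns# instead.

using System;
using System.Xml;

class AtomPeek
{
    static void Main()
    {
        // Placeholder URL; substitute any Atom feed (blog, group or mail gateway).
        string feedUrl = "http://example.org/feeds/atom.xml";

        XmlDocument doc = new XmlDocument();
        doc.Load(feedUrl);  // XmlDocument.Load accepts a URL directly

        XmlNamespaceManager nsmgr = new XmlNamespaceManager(doc.NameTable);
        nsmgr.AddNamespace("atom", "http://www.w3.org/2005/Atom");

        // Every Atom entry is just another message, regardless of which
        // service produced the feed.
        foreach (XmlNode title in doc.SelectNodes("//atom:entry/atom:title", nsmgr))
        {
            Console.WriteLine(title.InnerText);
        }
    }
}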

This is very, very interesting. I'm beginning to really like Google.


 

Categories: RSS Bandit | Technology

September 27, 2004
@ 07:28 AM

Today my TiVo disappointed me for what I feel is the last time. Yesterday I set it to record 4 never before aired episodes of Samurai Jack. Sometime between 6:30 PM this evening and 11 PM the TiVo decided that recording suggestions was more important than keeping around one of the episodes of Samurai Jack.

For the past few months I've been disappointed with the TiVo's understanding of priority when keeping recordings. For example, it shouldn't delete a first run episode over a rerun especially when the Season Pass for the deleted episode is set to record only first runs.  This is a pretty basic rule I'm sure I could write myself if I had access to the TiVo source code. This last mistake is the straw that has broken the camel's back and I'm now seeking a replacement to TiVo.

I now realize I prefer an Open Source solution so I can hack it myself. Perhaps I should take a look at MythTV.


 

Categories: Technology

September 19, 2004
@ 10:14 PM

Tim Bray has another rant on the proliferation of WS-* specs in the XML Web Services world. In his post The Loyal WS Opposition he writes

I Still Don't Buy It. No matter how hard I try, I still think the WS-* stack is bloated, opaque, and insanely complex. I think it's going to be hard to understand, hard to implement, hard to interoperate, and hard to secure.

I look at Google and Amazon and EBay and Salesforce and see them doing tens of millions of transactions a day involving pumping XML back and forth over HTTP, and I can't help noticing that they don't seem to need much WS-apparatus.

One way to view the various WS-* specifications is that they are akin to Java Specification Requests (JSRs) in the Java world. A JSR is basically a way for various Java vendors to standardize on a mechanism for solving a particular customer problem. Usually this mechanism takes the form of an Application Programming Interface (API). Some JSRs are widely adopted and have become an integral aspect of programming on the Java platform (e.g. the JAXP JSR). Some JSRs are pushed by certain vendors while being ignored by others leading to overlap (e.g. the JDO JSR which was voted against by BEA, IBM and Oracle but supported by Macromedia and Sun). Then there's Enterprise Java Beans which is generally decried as a bloated and unnecessarily complex solution to business problems. Again that was the product of the JSR process.

The various WS-* specs are following the same pattern as JSRs which isn't much of a surprise since a number of the players are the same (e.g. Sun & IBM). Just as Tim Bray points out that one can be productive without adopting any of the WS-* family of specifications, it is similarly true that one can be productive in Java without relying on the products of JSRs and instead rolling one's own solutions. However this doesn't mean there aren't benefits to standardizing on high level mechanisms for solving various business problems beyond saying "We use XML and HTTP so we should interop".   

Omri Gazitt, the Product Unit Manager of the Advanced XML Web Services team, has a post on WS-Transfer and WS-Enumeration which should hit close to home for Tim Bray since he is the co-chair of the Atom working group

WS-Transfer is a spec that Don has wanted to publish for a year now.  It codifies the simple CRUD pattern for Web services (the operations are named after their HTTP equivalents - GET, PUT, DELETE, and there is also a CREATE pattern.  The pattern of manipulating resources using these simple verbs is quite prevalent (Roy Fielding's REST is the most common moniker for it), and of course it underlies the HTTP protocol.  Of course, you could implement this pattern before WS-Transfer, but it does help to write this down so people can do this over SOAP in a consistent way.  One interesting existing application of this pattern is Atom (a publishing/blogging protocol built on top of SOAP).  Looking at the Atom WSDL, it looks very much like WS-Transfer - a GET, PUT, DELETE, and POST (which is the CREATE verb specific to this application).  So Atom could easily be built on top of WS-Transfer.  What would be the advantage of that?  The same advantage that comes with any kind of consistent application of a technology - the more the consistent pattern is applied, the more value it accrues.  Just the value of baking that pattern into various toolsets (e.g. VS.NET) makes it attractive to use the pattern. 

I personally think WS-Transfer is very interesting because it allows SOAP based applications to model themselves as REST Web Services and get explicit support for this methodology from toolkits. I talked about WS-Transfer with Don a few months ago and I've had to bite my tongue for a while whenever I hear people complain that SOAP and SOAP based toolkits don't encourage building RESTful XML Web Services.
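
For what it's worth, here is a hedged C# sketch of the resource CRUD pattern being described, just to show its shape. The interface and its names are mine, not from the WS-Transfer spec, which defines SOAP message exchanges rather than a programming interface.

using System;
using System.Xml;

// Illustrative only: the CRUD-over-resources pattern that WS-Transfer
// writes down for SOAP. These names are invented for this sketch.
interface IResourceService
{
    XmlElement Get(Uri resource);              // retrieve the current representation
    void Put(Uri resource, XmlElement state);  // replace the representation
    void Delete(Uri resource);                 // remove the resource
    Uri Create(XmlElement initialState);       // create a new resource and return its address
}

// An Atom-style publishing endpoint is one possible implementation: Get/Put/Delete
// map onto individual entries and Create onto posting a new entry, which is the
// point Omri makes in the excerpt above.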

I'm not as impressed with WS-Enumeration but I find it interesting that it also covers another use case of the Atom API, which is a mechanism for pulling down the content archive from a weblog or similar system in a sequential manner. 


 

Categories: Technology | XML

September 19, 2004
@ 08:00 PM

Reading a post on Dave Winer's blog I caught the following snippet

NY Times survey of spyware and adware. "...a program that creeps onto a computer’s hard drive unannounced, is wrecking the Internet." 

I've noticed that every time I sit at the computer of a non-technical Windows user I end up spending at least an hour removing spyware from their computer. Yesterday, I encountered a particularly nasty piece of work that detected when the system was being scanned by Ad-Aware and forced a system reboot. I'd never realized how inadequate the functionality of the Add or Remove Programs dialog was for removing applications from your computer until spyware came around. After I was done yesterday, I began to suspect that some of the spyware that was polite enough to add an entry in "Add or Remove Programs" simply took the Uninstall instruction as a command to go into stealth mode. One application made you fill out a questionnaire before it let you uninstall it. I wondered if it would refuse to uninstall if it didn't like my answers to its questions.

Something definitely has to be done about this crap. In the meantime I suggest using at least two anti-spyware applications if attempting to clean a system. I've already mentioned Ad-Aware, my other recommendation is Spybot - Search & Destroy.


 

Categories: Technology

I recently read an InfoWorld article entitled Gartner: Ignore Longhorn and stick with XP where it states

Microsoft Corp. may choose never to release its vaunted and long-overdue project WinFS, following its removal from the next version of Windows, according to analysts Gartner Inc.
...
Microsoft has said Longhorn will still include local desktop searching as a taste of the power of WinFS' relational database capabilities, but Gartner sees this as a hint that WinFS may never arrive. "Because Microsoft has committed to improving search without WinFS, it may choose never to deliver the delayed WinFS," Gartner said.

The fundamental premise of the above statements is that the purpose of WinFS is to make local desktop search better or, to use a cruder term, to create "Google for the Desktop". It may be true that when it first started getting pitched one of the scenarios people described was making search better. However as WinFS progressed the primary scenarios its designers focused on enabling didn't have much to do with search. If you read Longhorn evangelist Jeremy Mazner's blog post entitled What happened to WinFS?, written after the Longhorn changes were announced, you'll find the following excerpt

The WinFS team spent a solid couple weeks going through this evaluation.  There are of course plenty of things you could do to increase the confidence level on a project the size of WinFS, since it has so many features, including:

  • Built-in schemas for calendar, contacts, documents, media, etc
  • Extensibility for adding custom schema or business logic
  • File system integration, like promotion/demotion and valid win32 file paths
  • A synchronization runtime for keeping content up to date
  • Rich API support for app scenarios like grouping and filtering
  • A self-tuning management service to keep the system running well
  • Tools for deploying schema, data and applications

The above feature list is missing the recent decision to incorporate the features of the object relational mapping technology codenamed ObjectSpaces into WinFS. Taking all these features together none of them is really focused on making it easier for me to find things on my desktop.

At its core, WinFS was about storing strongly typed objects in the file system instead of opaque blobs of bits. The purpose of doing this was to make accessing and manipulating the content and metadata of these files simpler and more consistent. For example, instead of having to know how to manipulate JPEG, TIFF, GIF and BMP files there would just be a Photo item type that applications would have to deal with. Similarly one could imagine just interacting with a built in Music item instead of programming against MP3, WMA, OGG, AAC, and WAV files. In talking to Mike Deem a few months ago and seeing Bill Gates discuss his vision for WinFS with folks in our building a few weeks ago, it became clear to me that the major benefit of WinFS to end users is the possibilities it creates in user interfaces for data organization.
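
Here is a hypothetical C# sketch of that difference, with Photo standing in for whatever item schema WinFS would actually ship. None of these types are the real WinFS API; they just illustrate replacing per-format code with one strongly typed item.

using System;

// Today: the application has to know each format it might encounter.
class FormatSpecificApproach
{
    static DateTime GetDateTaken(string path)
    {
        if (path.EndsWith(".jpg")) return ReadExifDate(path);          // JPEG/EXIF parsing
        if (path.EndsWith(".tif")) return ReadTiffTagDate(path);       // TIFF tag parsing
        if (path.EndsWith(".bmp")) return ReadFileCreationDate(path);  // no metadata at all
        throw new NotSupportedException(path);
    }

    // Stubs standing in for per-format metadata parsers.
    static DateTime ReadExifDate(string path)         { return DateTime.MinValue; }
    static DateTime ReadTiffTagDate(string path)      { return DateTime.MinValue; }
    static DateTime ReadFileCreationDate(string path) { return DateTime.MinValue; }
}

// The WinFS-style promise: one item type with the metadata already promoted
// into structured properties, regardless of the bits underneath.
class Photo   // hypothetical item type, not the actual WinFS schema
{
    public DateTime DateTaken;
    public string   Subject;
    public string[] Keywords;
}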

Recently I switched from using WinAmp to iTunes on the strength of the music organizational capabilities of the iTunes library and "smart playlists". The strength of iTunes is that it provides a consistent interface for interacting with music files regardless of their underlying type (AAC, MP3, etc) and provides ways to add metadata about these music files (ratings, number of times played) then organize these files according to this metadata. Another application that shows the power of data organization based on rich, structured metadata is Search Folders in Outlook 2003. When I used to think of WinFS I got excited about being able to perform SQL-like queries over items in the file system. Then I heard Bill Gates and Mike Deem speak about WinFS, saw them getting excited about the ability to take the data organizational capabilities of features like the My Pictures and My Music folders in Windows to the next level, and it all clicked.

Now this isn't to say that there aren't some searches made better by coming up with a consistent way to interact with certain file types and providing structured metadata about these files. For example a search like

Get me all the songs [regardless of file type] either featuring or created by G-Unit or any of its members (Young Buck, 50 Cent, Tony Yayo or Lloyd Banks) between 2002 and 2004 on my hard drive

is made possible with this system. However it is more likely that I want to navigate this in a UI like the iTunes media library than I want to type the equivalent of SQL queries over my file system.

More importantly, this system doesn't make it much easier to find stuff I've lost on my file system like Java code I wrote while in college or drafts of articles created several years ago that I never finished. When I think "Google on the Desktop", that's the problem I want to see solved. However MSN just bought Lookout so I have faith that we will be solving this problem in the near future as well.


 

Categories: Technology

August 30, 2004
@ 04:21 PM

Today I was going to release the RSS Bandit v1.2.0.114 service pack 1. However I will not be able to because even though everything worked fine when testing the application, the moment I decided to build the installer and run it for the first time on my machine it crashed. The only difference I could tell was that while testing it I created 'Debug' builds but for the installer I created a 'Release' build. Even more annoying is the exception that occurs. It seems having an empty static constructor is causing a TypeInitializationException but only in 'Release' builds.

I hate this shit. I'll have to look into this when I get back from work this evening.
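
For anyone who hasn't run into this before, here is a hedged sketch of the general shape of the problem rather than the actual RSS Bandit code: a TypeInitializationException is what callers see when a type's static initialization throws, and adding or removing a static constructor flips the beforefieldinit flag, which changes when the runtime is allowed to run that initialization, so a failure can show up in one build flavor and not another.

using System;

class Config
{
    // Static field initializer; if this throws, users of Config see a
    // TypeInitializationException wrapping the real error.
    static readonly string settingsPath = LoadSettingsPath();

    // An explicit static constructor (even an empty one) removes the
    // beforefieldinit flag, which changes *when* the initializer above
    // is allowed to run; timing differences like this are one way a
    // failure can appear in Release builds but not Debug builds.
    static Config() { }

    static string LoadSettingsPath()
    {
        // Stand-in for whatever initialization actually fails.
        throw new InvalidOperationException("simulated initialization failure");
    }

    public static string SettingsPath { get { return settingsPath; } }
}

class Program
{
    static void Main()
    {
        try
        {
            Console.WriteLine(Config.SettingsPath);
        }
        catch (TypeInitializationException ex)
        {
            // The original failure is preserved in InnerException.
            Console.WriteLine(ex.InnerException);
        }
    }
}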

On the positive side, it looks like the RSS Bandit road map is on schedule. SP1 for v1.2.0.114 is basically done once I can track down the TypeInitializationException issue and then an installer should show up shortly afterwards. We started the refactorings needed for NNTP support this weekend and should be done later this week. It's funny, I used to think the code was well factored because the infrastructure for supporting multiple syndication formats was straightforward and flexible. Then supporting NNTP comes along and a lot of the assumptions we made in the code need to be thrown out of the window.


 

Categories: Ramblings | Technology

August 22, 2004
@ 03:30 AM

Every once in a while I like to post a list of articles I'm either in the process of writing or considering writing to get feedback from people on what they'd like to see or whether the topics are even worthwhile. Below is a list of the next couple of articles I'm either in the process of writing or plan to write over the next few months.

  1. An Introduction to Validating XML Documents with Schematron (MSDN) : An introduction to Schematron including examples showing how one can augment a W3C XML Schema document using Schematron thus creating an extremely powerful XML schema language.  Code samples will use Schematron.NET

  2. Designing XML Formats: Versioning vs. Extensibility (XML 2004 Conference) : This is the presentation and paper for my XML 2004 talk. It will basically be the ideas in my article On Designing Extensible, Versionable XML Formats with more examples and less fluff.

  3. The XML Litmus Test - Deciding When and Why To Use XML (MSDN) : After seeing more and more people at work who seem to not understand what XML is good for or what the decision making process should be for adopting XML I decided to put this article together.  This will basically be an amalgamation of my XML Litmus Test blog post and my Understanding XML article on MSDN.  

  4. XML in Cw (XML.com)  : An overview of the XML based features of Cw. The Cw type system contains several constructs that reduce the impedance mismatch between OO and XSD by introducing concepts such as anonymous types, choices [aka union types], nullable types and constructing classes from XML literals into the .NET world. The ability to process such strongly typed XML objects using rich query constructs based on SQL's select operator will also be covered.

  5. A Comparison of Microsoft's C# Programming Language to Sun Microsystems' Java Programming Language 2nd edition : About 3 years ago I wrote a C# vs. Java comparison while I was still in school which has become the most popular comparison of both languages on the Web. I still get mail on a semi-regular basis from people who've been able to transition between both languages due to the information in my comparison document. I plan to update this article to reflect the proposed changes announced in Java 1.5 and C# 2.0

On top of this I've been approached twice in the past few months about writing a technology book. Based on watching the experiences of others my gut feel is that it isn't worth the effort. I'd be interested in any feedback on the above article ideas or even suggestions for new articles that you'd be interested in seeing on MSDN or XML.com from me.


 

Categories: Technology | XML

My issue of Playboy came in the mail so I got to read the infamous Google interview. If you don't have a Playboy subscription or balk at buying the magazine from the newsstands you can get the interview from Google's amended SEC filings. I didn't read the entire interview but there were no surprises in what I read.

I was recently talking to a coworker who's on the fence about whether to go to Google or stay at Microsoft and it was interesting talking about the pros and cons of both companies. As we talked Google began to remind me of Netscape in its heyday. A company full of bright, young guys who've built a killer application for the World Wide Web and is headed for a monster IPO. The question is whether Google will squander their lead like Netscape did (Yes, I realize my current employer may have had something to do with that) or whether they'll be the next Yahoo!

There are a couple of things Google has done over the past few years that have made me wonder whether the company has enough adult supervision and business acumen to rise above being a one trick pony in the constantly changing Internet landscape. Some of them are touched on by Larry and Sergey in their interview

  1. http://www.google.com is non-sticky: Nothing on the main Google site encourages the user to hang around the site or even return to the website besides the quality of the search results. According to the company's founders this is by design. The problem with this reasoning is that if and when its competitors such as MSN Search and Yahoo! Search get good enough there isn't anything keeping people tied to the site. It seems unfathomable now but there was a time when it seemed unfathomable that anyone would use anything besides AltaVista or Excite to search the Web. It's happened before and it can happen again. Google seems ill-prepared for this occurrence.

  2. Inability to tie together disparate offerings: The one thing that has separated Yahoo! from all the Web portals that were all the rage a couple of years ago is that it managed to tie its many offerings into a single cohesive package with multiple revenue streams. The Yahoo! experience seamlessly ties in My Yahoo!, Yahoo! Groups (formerly eGroups), Yahoo! Calendar, Yahoo! Maps, Yahoo! Shopping, Yahoo! Finance, Yahoo! News, Yahoo! Movies, Yahoo! Messenger and the Yahoo! Companion. I use most of these Yahoo sites and tools on a daily basis and use all of them at least once a month. Besides advertising related to search there are several entry points for Yahoo! to get revenue from me.

    Compare this to Google which, although it has a number of other offerings available from the Google website, hasn't figured out how to make a number of them synergistic, such as its purchase of Blogger or sites like Orkut. Yahoo! would have gotten a lot more mileage out of either site than Google currently has. Another aspect of this issue is gleaned from this excerpt from a post by Dave Winer entitled Contact with Google

    Another note, I now have four different logins at Google: Orkut, AdSense, Blogger and Gmail. Each with a different username and password. Now here's an area where Google could be a leader, provide an alternative to Passport, something we really need, a Google-size problem.

    Yahoo! has a significantly larger number of distinct offerings yet I access all of them through a single login. This lack of cohesiveness indicates that either there isn't a unified vision as to how to unite these properties under a single banner or Google has been unable to figure out how to do so.

  3. GMail announced too quickly: Google announced GMail with its strongest selling point being that it gave you 100 times more space than competing free email services. However GMail is still in beta and not available to the general public while its competitors such as Hotmail and Yahoo! Mail have announced upping their limits to 250MB and 100MB respectively, with gigabytes of storage and other features available to users for additional fees. This has basically stolen Google's thunder and halted a potential exodus of users from competing services while GMail isn't even out of beta yet.

  4. Heavy handed tactics in the Web syndication standards world: Recently Google decided to use an interim draft of a technology specification instead of a de facto industry standard for syndicating content from their Blogger website, thus forcing users to upgrade or change their news aggregators as well as ensuring that there would be at least two versions of the Atom syndication format in the wild (the final version and the interim version supported by Google). This behavior upset a lot of users and aggregator developers. In fact, the author of the draft specification of the Atom syndication format that Google supported over RSS has also expressed dismay at the choice Google made and is encouraging others not to repeat their actions.

All of these are examples of less than stellar decision making at Google. Even though in previous entries such as What Is Google Building? and What is Google Building II: Thin Client vs. Rich Client vs. Smart Client I've implied that Google may be on the verge of a software move so bold it could upstage Microsoft the same way Netscape planned to with the browser upstaging the operating system as a development and user platform, it isn't a slam dunk that they have what it takes to get there.

It will be interesting watching the Google saga unfold.


 

Categories: Technology

One important lesson I've learned about designing software is that sometimes it pays to smother one's perfectionist engineer instincts and be less ambitious about the problems one is trying to solve. Put more succinctly, a technology doesn't have to solve every problem, just enough problems to be useful. Two examples come to mind which hammered this home to me; Tim Berners-Lee's World Wide Web and the collaborative filtering which sites like Amazon use.

  1. The World Wide Web: Almost every history of the World Wide Web you find online mentions how Tim Berners-Lee was inspired by Ted Nelson's Xanadu. The current Web is a pale imitation of what Ted Nelson described over forty years ago as what a rich hypertext system should be capable of doing. However you're reading these words of mine over Tim Berners-Lee's Web, not Ted Nelson's. Why is this?

    If you read the descriptions of the Xanadu model you'll notice it has certain lofty goals. Some of these include the ability to create bi-directional links, links that do not break, and built-in version management. To me it doesn't seem feasible to implement all these features without ending up building a closed system. It seems Tim Berners-Lee came to a similar conclusion and greatly simplified Ted Nelson's dream thus making it feasible to implement and adopt on a global scale. Tim Berners-Lee's Web punts on all the hard problems. How does the system ensure that documents once placed on the Web are always retrievable? It doesn't. Instead you get 404 pages and broken links. How does the Web ensure that I can find all the pages that link to another page? It doesn't. Does the Web enable me to view old versions of a Web page and compare revisions of it side by side? Nope.

    Despite these limitations Tim Berners-Lee's Web sparked a global information revolution. Even more interestingly over time various services have shown up online that have attempted to add the missing functionality of the Web such as The Internet Archive, Technorati and the Google Cache.

  2. Collaborative Filtering on Amazon: The first place I ever bought CDs online was CDNow.com (now owned by Amazon). One feature of the site that blew my mind was the ability to get a list of recommended CDs to buy based on your purchase history and the ratings you gave various albums. The suggestions were always quite accurate and many times it suggested CDs I already owned and liked a lot.

    This feature always seemed like magic to me. I imagined how difficult it must have been to come up with a categorization and ranking system for music CDs that could accurately match people up with music based on their tastes. It wasn't until Amazon debuted this feature that I realized the magic algorithms were simply 'people who purchased X also purchased Y'. My magic algorithms were just a bunch of not very interesting SQL queries (a rough sketch of the idea follows this list).  

    There are limitations to this approach, you need a large enough user base and enough purchases of certain albums to make them statistically significant, but the system works for the most part.
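
Here is a minimal C# sketch of the 'people who purchased X also purchased Y' idea, assuming an in-memory list of (customer, album) purchases; a real store would run the equivalent as a SQL query over its orders table.

using System;
using System.Collections.Generic;

class CoPurchase
{
    // Each entry records one (customer, album) purchase.
    struct Purchase
    {
        public string Customer;
        public string Album;
        public Purchase(string customer, string album)
        {
            Customer = customer;
            Album = album;
        }
    }

    // "People who purchased 'seed' also purchased...", ranked by co-purchase count.
    static List<KeyValuePair<string, int>> AlsoPurchased(List<Purchase> purchases, string seed)
    {
        // Find everyone who bought the seed album.
        Dictionary<string, bool> buyers = new Dictionary<string, bool>();
        foreach (Purchase p in purchases)
            if (p.Album == seed)
                buyers[p.Customer] = true;

        // Count every other album those customers bought.
        Dictionary<string, int> counts = new Dictionary<string, int>();
        foreach (Purchase p in purchases)
        {
            if (p.Album == seed || !buyers.ContainsKey(p.Customer))
                continue;
            int n;
            counts.TryGetValue(p.Album, out n);
            counts[p.Album] = n + 1;
        }

        // Sort by co-purchase count, highest first.
        List<KeyValuePair<string, int>> ranked = new List<KeyValuePair<string, int>>(counts);
        ranked.Sort(delegate(KeyValuePair<string, int> a, KeyValuePair<string, int> b)
        {
            return b.Value.CompareTo(a.Value);
        });
        return ranked;
    }
}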

Every once in a while I am part of endless discussions about how we need to complicate a technology to satisfy every use case when in truth we don't have to solve every problem. Edge cases should not dictate a software system's design but too often they do.  


 

Categories: Technology

Since I wrote my What is Google Building? post I've seen lots of interesting responses to it in my referrer logs. As usual Jon Udell's response gave me the most food for thought. In his post entitled Bloglines he wrote

Finally, I'd love it if Bloglines cached everything in a local database, not only for offline reading but also to make the UI more responsive and to accelerate queries that reach back into the archive.

Like Gmail, Bloglines is the kind of Web application that surprises you with what it can do, and makes you crave more. Some argue that to satisfy that craving, you'll need to abandon the browser and switch to RIA (rich Internet application) technology -- Flash, Java, Avalon (someday), whatever. Others are concluding that perhaps the 80/20 solution that the browser is today can become a 90/10 or 95/5 solution tomorrow with some incremental changes.
...
It seems pretty clear to me. Web applications such as Gmail and Bloglines are already hard to beat. With a touch of alchemy they just might become unstoppable.

This does seem like the missing part of the puzzle. The big problem with web applications (aka thin client applications) is that they cannot store a lot of local state. I use my primary mail readers offline (Outlook & Outlook Express) and I use my primary news aggregator (RSS Bandit) offline on my laptop when I travel or in meetings when I can't get a wireless connection. There are also lots of dial up users out there who don't have the luxury of an 'always on' broadband connection and who also rely on the offline capabilities of such applications.

I suspect this is one of the reasons Microsoft stopped trying to frame the argument as thin client vs. rich client. That framing basically argues that an application with zero deployment and a minimalistic user interface is inferior to a desktop application that needs to be installed, updated and patched but has a fancier GUI. This is an argument that holds little water with most people, which is why the popularity of Web applications has grown both on the Internet and on corporate intranets.

Microsoft has attempted to tackle this problem in two ways. The first attempt is to make rich client applications as easy to develop and deploy as web applications by creating a rich client markup language, XAML, as well as the ClickOnce application deployment technology. The second is better positioning, by emphasizing the offline capabilities of rich clients and coming up with a new moniker for them, smart clients.

Companies that depend on thin client applications such as Google with GMail do realize their failings. However Google is in a unique position of being able to attract some very smart people who've been working on this problem for a while. For example, their recent hire Adam Bosworth wrote about technologies for solving this limitation in thin clients in a number of blog posts from last year; Web Services Browser, Much delayed continuation of the Web Services Browser and When connectivity isn't certain. The latter post definitely has some interesting ideas such as

the issue that that I want a great user experience even when not connected or when conected so slowly that waiting would be irritating. So this entry discusses what you do if you can't rely on Internet connectivity.

Well, if you cannot rely on the Internet under these circumstances, what do you do? The answer is fairly simple. You pre-fetch into a cache that which you'll need to do the work. What will you need? Well, you'll need a set of pages designed to work together. For example, if I'm looking at a project, I'll want an overview, details by task, breakout by employee, late tasks, add and alter task pages, and so on. But what happens when you actually try to do work such as add a task and you're not connected? And what does the user see.

To resolve this, I propose that we separate view from data. I propose that a "mobile page" consists both of a set of related 'pages' (like cards in WML), an associated set of cached information and a script/rules based "controller" which handles all user gestures. The controller gets all requests (clicks on Buttons/URL's), does anything it has to do using a combination of rules and script to decide what it should do, and then returns the 'page' within the "mobile page" to be displayed next. The script and rules in the "controller" can read, write, update, and query the associated cache of information. The cache of information is synchronized, in the background, with the Internet (when connected) and the mobile page describes the URL of the web service to use to synchronize this data with the Internet. The pages themselves are bound to the cache of information. In essence they are templates to be filled in with this information. The mobile page itself is actually considered part of the data meaing that changes to it on the Internet can also be synchronized out to the client. Throw the page out of the cache and you also throw the associated data out of the cache.

Can you imagine using something like GMail, Google Groups or Bloglines in this kind of environment? That definitely would put the squeeze on desktop applications.
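
To make the idea more concrete, here is a heavily simplified C# sketch of the cache-and-synchronize pattern Adam describes, with made-up types: the UI reads and writes a local cache, and a background loop reconciles it with the server whenever a connection is available. None of this is from any real product.

using System;
using System.Collections.Generic;
using System.Threading;

// Hypothetical local cache the UI works against even when offline.
class LocalCache
{
    readonly Dictionary<string, string> entries = new Dictionary<string, string>();
    readonly List<string> pendingChanges = new List<string>();
    readonly object gate = new object();

    public string Read(string key)
    {
        lock (gate) { string v; entries.TryGetValue(key, out v); return v; }
    }

    public void Write(string key, string value)
    {
        lock (gate)
        {
            entries[key] = value;
            pendingChanges.Add(key);  // remember what needs to go to the server
        }
    }

    public List<string> TakePendingChanges()
    {
        lock (gate)
        {
            List<string> batch = new List<string>(pendingChanges);
            pendingChanges.Clear();
            return batch;
        }
    }
}

class SyncLoop
{
    // Background synchronization: push local edits, pull fresh data.
    public static void Run(LocalCache cache)
    {
        while (true)
        {
            if (NetworkIsAvailable())
            {
                foreach (string key in cache.TakePendingChanges())
                    PushToServer(key, cache.Read(key));   // placeholder web service call
                // Pulling server-side changes into the cache would go here.
            }
            Thread.Sleep(30000);  // try again in thirty seconds
        }
    }

    static bool NetworkIsAvailable() { return true; }        // stub
    static void PushToServer(string key, string value) { }   // stub
}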


 

Categories: Technology

July 24, 2004
@ 08:51 PM

In the past couple of months Google has hired four people who used to work on Internet Explorer in various capacities [especially its XML support] who then moved to BEA; David Bau, Rod Chavez, Gary Burd and most recently Adam Bosworth. A number of my coworkers used to work with these guys since our team, the Microsoft XML team, was once part of the Internet Explorer team. It's been interesting chatting in the hallways with folks contemplating what Google would want to build that requires folks with a background in building XML data access technologies both on the client side, Internet Explorer and on the server, BEA's WebLogic.

Another interesting recent Google hire is Joshua Bloch. He is probably the most visible guy working on the Java language at Sun behind James Gosling. Based on recent interviews with Joshua Bloch about Java his most recent endeavors involved adding new features to the language that mimic those in C#.

While chomping on some cheap sushi at Sushi Land yesterday some friends and I wondered what Google could be planning next. So far, the software industry including my employer has been playing catchup with Google and reacting to their moves. According to news reports MSN is trying to catch up to Google search and Hotmail is upping its free storage limit to compete with GMail. However this is all reactive and we still haven't seen significant competition to Google News, Google Image Search, Google Groups or even to a lesser extent Orkut and Blogger. By the time the major online networks like AOL, MSN or Yahoo! can provide decent alternatives to this media empire Google will have produced their next major addition.

So far Google doesn't seem to have stitched all its pieces into a coherent media empire as competitors like Yahoo! have done but this seems like it will only be a matter of time. What is of more interest to the geek in me is what Google could build next that could tie it all together. As Rich Skrenta wrote in his post the Secret Source of Google's Power

Google is a company that has built a single very large, custom computer. It's running their own cluster operating system. They make their big computer even bigger and faster each month, while lowering the cost of CPU cycles. It's looking more like a general purpose platform than a cluster optimized for a single application.

While competitors are targeting the individual applications Google has deployed, Google is building a massive, general purpose computing platform for web-scale programming.

A friend of mine, Justin, had an interesting idea at dinner yesterday. What if Google ends up building the network computer? They can give users the storage space and reliability to place all their data online. They can mimic the major desktop applications users interact with daily by using Web technologies. This sounds far fetched but then again, I'd have never imagined I'd see a free email service that gave users 1GB of storage.

Although I think Justin's idea is outlandish, I suspect the truth isn't far from it.

Update: It seems Google also picked up another Java language guy from Sun: Neal Gafter, who worked on various Java compiler tools including javac, javadoc and javap. Curiouser and curiouser.


 

Categories: Technology

A little while ago some members of our team experimented with various ways to reduce the Relational<->Objects<->XML (ROX) impedance mismatch by adding concepts and operators from the relational and XML (specifically W3C XML Schema) worlds into an object oriented programming language. This effort was spearheaded by a number of smart folks on our team including Erik Meijer, Matt Warren, Chris Lovett and a bunch of others, all led by William Adams. The object oriented programming language which was used as a base for extension was C#. The new language was once called X# but eventually became known as Xen.

Erik Meijer presented Xen at XML 2003 and I blogged about his presentation after the conference. There have also been two papers published about the ideas behind Xen; Programming with Rectangles, Triangles, and Circles and Unifying Tables, Objects and Documents. It's a new year and the folks working on Xen have moved on to other endeavors related to future versions of Visual Studio and the .NET Framework.

However Xen is not lost. It is now part of the Microsoft Research project Cw (pronounced C-Omega). Even better, you can download a preview of the Cw compiler from the Microsoft Research downloads page.
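
For a sense of the impedance mismatch these efforts were attacking, here is a plain C# sketch of the kind of hand-written mapping between XML and objects that language-level XML support in Xen/Cw was meant to eliminate; the Customer class and XML shape are made up for illustration.

using System;
using System.Xml;

class Customer
{
    public string Name;
    public string Email;
}

class RoxMismatch
{
    // Manually shuttling data between an XML document and an object:
    // exactly the boilerplate that language-level XML support targets.
    static Customer FromXml(XmlElement element)
    {
        Customer c = new Customer();
        c.Name  = element.SelectSingleNode("name").InnerText;
        c.Email = element.SelectSingleNode("email").InnerText;
        return c;
    }

    static XmlElement ToXml(XmlDocument doc, Customer c)
    {
        XmlElement element = doc.CreateElement("customer");
        XmlElement name = doc.CreateElement("name");
        name.InnerText = c.Name;
        XmlElement email = doc.CreateElement("email");
        email.InnerText = c.Email;
        element.AppendChild(name);
        element.AppendChild(email);
        return element;
    }
}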


 

Categories: Technology | XML

July 4, 2004
@ 01:20 AM

After being confused by a number of blog posts I read over the past few hours it just hit me that both Sun and Apple are using the code name Tiger for the next version of their flagship software product. That clears up a bunch of confusion.


 

Categories: Technology

I read Joel Spolsky's How Microsoft Lost the API War yesterday and found it pleasantly coincidental. Some of the issues Joel brings up are questions I've begun asking myself and others at work so I definitely agree with a lot of the sentiments in the article. My main problem with Joel's piece is that it doesn't have a central theme but instead meanders a little and lumps together some related but distinct issues. From where I sit, Joel's article made a few different major & minor points which bear being teased out and talked about separately. The points I found most interesting were

Major Points

  1. The primary value of a platform is how productive it makes developers, its ubiquity and how much of a stable environment it provides for them over time. Microsoft's actions in how it has released the .NET Framework and in its upcoming plans for Longhorn run counter to this conventional wisdom.
  2. Microsoft used to be religious about backwards compatibility, now it no longer is.

Minor Points 

  1. The trend in application development is moving to Web applications instead of desktop applications.
  2. A lot of developers using the .NET Framework use ASP.NET; client developers haven't yet embraced the .NET Framework.
  3. The primary goal of WinFS (making search better) can be achieved by less intrusive, simpler mechanisms.

So now to dive into his points in more detail.

.NET and Longhorn as Fire & Motion

The primary value of a platform is how productive it makes developers, its ubiquity and how much of a stable environment it provides for them over time. Microsoft's actions in how it has released the .NET Framework and in its upcoming plans for Longhorn run counter to this conventional wisdom.

Joel approaches this point from two angles. First of all he rhapsodizes about how the Windows team bends over backwards to achieve backwards compatibility in their APIs even when this means keeping bug compatibility with old versions or adding code to handle specific badly written applications. This means users can migrate applications from OS release to OS release thus widening the pool of applications that can be used per OS. This is in contrast to the actions of competitors like Apple. 

Secondly, he argues that Microsoft is trying to force too many paradigm shifts on developers in too short a time. First of all, developers have to make the leap from native code (Win32/COM/ASP/ADO) to managed code (ASP.NET/ADO.NET) but now Microsoft has already telegraphed that another paradigm shift is coming in the shape of Longhorn and WinFX. Even if you've made the leap to using the .NET Framework, Microsoft has already stated that technologies in the next release of the .NET Framework (Winforms, ASP.NET Web Services) are already outclassed by technologies in the pipeline (Avalon, Indigo). However to get these later benefits one not only needs to upgrade the development platform but the operating system as well. This second point bothers me a lot and I actually shot a mail to some MSFT VPs about 2 weeks ago raising a similar point with regards to certain upcoming technologies. I expected to get ignored but actually got a reasonable response from Soma with pointers on folks to have followup discussions with. So the folks above are aware of the concerns in this space. Duh!

The only problem I have with Joel's argument in this regard is that I think he connects the dots incorrectly. He agrees that Windows programming was getting too complex and that years of cruft eventually become difficult to manage. He also thinks the .NET Framework makes developers more productive. So it seems introducing the .NET Framework was the smart thing for Microsoft to do. However he argues that not many people are using it (actually that not many desktop developers are using it). There are two reasons for this which I know first hand as a developer of a desktop application that runs on the .NET Framework (RSS Bandit)

  • The .NET Framework isn't ubiquitous on Windows platforms
  • The .NET Framework does not expose enough Windows functionality to build a full fledged Windows application with only managed code.

Both of these issues are why Microsoft is working on WinFX. Again, the elephant in the living room is that it seems Microsoft's current plans are to fix these issues for developing on Longhorn, not on all supported Windows platforms.

Losing the Backwards Compatibility Religion

Microsoft used to be religious about backwards compatibility, now it no longer is.

Based on my experience as a program manager for the System.Xml namespace in the .NET Framework I'd say the above statement isn't entirely accurate. Granted, the .NET Framework hasn't been around long enough to acquire a lot of cruft, but we do already have to be careful about breaking changes. In fact, I'm currently in the process of backing out a change we made in Whidbey to make our W3C XML Schema validation more compliant in a particular case because it broke a number of major XML Web Services on the Web.

However I don't think I've seen anyone go above and beyond to keep bug compatibility in the way Raymond Chen describes in his blog. But then again I don't have insight into what every team working on the .NET Framework is doing.

WinFS, Just About Search?

The primary goal of WinFS (making search better) can be achieved by less intrusive, simpler mechanisms.

I actually had a couple of conversations this week with folks related to WinFS, including Mike Deem and Scoble. We talked about the fact that external perceptions of the whats and whys of WinFS don't really jibe with what's being built. A lot of people think WinFS is about making search better [even a number of Longhorn evangelists and marketing folks]. WinFS is really a data storage and data access platform that aims to enable a lot of scenarios, one of which just so happens to be better search. In addition, improving full text search and the indexing service used by the operating system is really orthogonal to WinFS.

The main problem is that what the WinFS designers think WinFS should be, what customers and competitors expect it to be, and what has actually been shown to developers in various public Longhorn builds are all different. It makes it hard to talk about what WinFS is or should be when everyone's mental image of it is slightly different.

Disclaimer: The above statements are my personal opinions and do not reflect the intentions, strategies, plans or opinions of my employer.


 

I recently read a post by Jeff Dillon (a Sun employee) entitled .NET and Mono: The libraries where he criticizes the fact that the .NET Framework has Windows specific APIs. Specifically he writes

Where this starts to fall apart is with the .NET and Mono libraries. The Java API writers have always been very careful not to introduce an API which does not make sense on all platforms. This makes Java extremely portable at the cost of not being able to do native system programming in pure Java. With .NET, Microsoft went ahead and wrote all kinds of APIs for accessing the registry, accessing COM objects, changing NTFS file permissions, and other very windows specific tasks. In my mind, this immediately eliminates .NET or Mono from ever being a purely system independent platform.

While I was still digesting his comments and considering a response I read an excellent followup by Miguel De Icaza in his post On .NET and portability where he writes

First lets state the obvious: you can write portable code with C# and .NET (duh). Our C# compiler uses plenty of .NET APIs and works just fine across Linux, Solaris, MacOS and Windows. Scott also pointed to nGallery 1.6.1 Mono-compliance post which has some nice portability rules.
...
It is also a matter of how much your application needs to integrate with the OS. Some applications needs this functionality, and some others do not.

If my choice is between a system that does not let me integrate with the OS easily or a system that does, I personally rather use the later and be responsible for any portability issues myself. That being said, I personally love to write software that takes advantage of the native platform am on, specially on the desktop.

At first I was confused by Jeff's post given that it assumes that the primary goal of the .NET Framework is to create a Write Once Run Anywhere platform. It's been fairly obvious from all the noise coming out of Redmond about WinFX that the primary goal of the .NET Framework is to be the next generation Windows programming API which replaces Win32. By the way, check out the WinFX overview API as JPG or WinFX API Overview as PDF. Of course, this isn't to say that Microsoft isn't interested in creating an interoperable managed platform, which is why there has been ECMA standardization of C#, the Common Language Infrastructure (CLI) and the Base Class Library (BCL). The parts of the .NET Framework that are explicitly intended to be interoperable across platforms are all part of the ECMA standardization process. That way developers can have their cake and eat it too: a managed API that takes full advantage of their target platform and a subset of this API which is intended to be interoperable and is standardized through the ECMA process.

Now that I think about it I realize that folks like Jeff probably have no idea what is going on in .NET developer circles and assume that the goals of Microsoft with the .NET Framework are the same as those of Sun with Java. That explains why he positions what many see as a flaw of the Java platform as a benefit that Microsoft has erred in not repeating. I guess one man's meat is another man's poison.


 

Categories: Technology

June 8, 2004
@ 09:22 AM

Jon Udell has started a series of blog posts about the pillars of Longhorn.  So far he has written Questions about Longhorn, part 1: WinFS and Questions about Longhorn, part 2: WinFS and semantics which ask the key question "If the software industry and significant parts of Microsoft such as Office and Indigo have decided on XML as the data interchange format, why is the next generation file system for Windows basically an object oriented database instead of an XML-centric database?" 

I'd be very interested in what the WinFS folks like Mike Deem would say in response to Jon if they read his blog. Personally, I worry less about how well WinFS supports XML and more about whether it will be fast, secure and failure resistant. After all, at worst WinFS will support XML as well as a regular file system does today, which is good enough for me to locate and query documents with my favorite XML query language. On the other hand, if WinFS doesn't perform well or shows the same good-idea-but-poorly-implemented nature of the Windows registry then it'll be a non-starter or, much worse, a widely used but often cursed aspect of Windows development (just like the Windows registry).

As Jon Udell points out, the core scenarios touted as encouraging the creation of WinFS (i.e. search and adding metadata to files) don't really need a solution as complex or as intrusive to the operating system as WinFS. The only justification for something as radical and complex as WinFS is if Windows application developers end up utilizing it to meet their needs. However, as an application developer on the Windows platform I primarily worry about three major aspects of WinFS. The first is performance; I definitely think having a query language over an optimized store in the file system is all good but I wouldn't use it if the performance wasn't up to snuff. The second is security; Longhorn evangelists like talking up what a wonderful world it would be if all my apps could share their data but ignore the fact that in reality this can lead to disasters. Having multiple applications share the same data store where one badly written application can corrupt the entire store is worrisome. This is the fundamental problem with the Windows registry and to a lesser extent the cause of DLL hell in Windows. The third thing I worry about is that the programming model will suck. An easy-to-use programming model often trumps almost any other concern. Developers prefer building distributed applications using XML Web Services in .NET to the alternatives even though in some cases this choice leads to lower performance. The same developers would rather store information in the registry than come up with a robust alternative on their own because the programming model for the registry is fairly straightforward.

All things said, I think WinFS is an interesting idea. I'm still not sure it is a good idea but it is definitely interesting. Then again given that WinFS assimilated and thus delayed a very good idea from shipping, I may just be a biased SOB.

PS: I just saw that Jeremy Mazner posted a followup to Jon Udell's post entitled Jon Udell questions the value and direction of WinFS where he wrote

XML formats with well-defined, licensed schemas, are certainly a great step towards a world of open data interchange.  But XML files alone don't make it easier for users to find, relate and act on their information. Jon's contention is that full text search over XML files is good enough, but is it really?  I did a series of blog entries on WinFS scenarios back in February, and I don't think's Jon full text search approach would really enable these things. 

Jeremy mostly misses Jon's point which is aptly reduced to a single question at the beginning of this post. Jon isn't comparing full text search over random XML files on your file system to WinFS. He is asking why couldn't WinFS be based on XML instead of being an object oriented database.


 

Categories: Technology | XML

One of the more annoying aspects of writing Windows applications using the .NET Framework is that eventually you brush up against the limitations of the APIs provided by the managed classes and end up having to use interop to talk to Win32 or COM-based APIs. This process typically involves exposing native APIs in a manner that makes them look like managed APIs when in fact they are not. When there is an error in this mapping it results in hard-to-track memory corruption errors. All of the fun of annoying C and C++ memory corruption errors in the all new, singing and dancing .NET Framework.

Most recently we were bitten by this in RSS Bandit and probably would never have tracked this problem down if not for a coincidence. As part of forward compatibility testing at Microsoft, a number of test teams run existing .NET applications on current builds of future versions of the .NET Framework. One of these test teams decided to use RSS Bandit as a test application. However it seemed they could never get RSS Bandit to start without the application crashing almost instantly. Interestingly, it crashed at different points in the code depending on whether one compiled and ran the application on the current build of the .NET Framework or just ran an executable compiled against an older version of the .NET Framework on the current build. Bugs were filed against folks on the CLR team and the problem was tracked down.

It turns out that our declaration of the STARTUPINFO struct obtained from PInvoke.NET was incorrect. Specifically, the following fields were declared as

 [MarshalAs(UnmanagedType.LPWStr)] public string  lpReserved;
 [MarshalAs(UnmanagedType.LPWStr)] public string  lpDesktop;
 [MarshalAs(UnmanagedType.LPWStr)] public string  lpTitle;

when we should have declared them as

public IntPtr lpReserved; 
public IntPtr lpDesktop; 
public IntPtr lpTitle;

The reason for not declaring them as strings is that the Interop marshaler, after having converted the native string to a managed string, will release the native data using CoTaskMemFree. This is clearly not the right thing to do in this case so we need to declare the fields as IntPtrs and then manually marshal them to strings via the Marshal.PtrToStringUni() API.
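
For illustration, a minimal sketch of the IntPtr-based approach looks something like this. Note that the struct is abbreviated and the names STARTUPINFO_Fragment and NativeStringHelper are made up for this example; a real declaration needs every field of STARTUPINFO in the correct order.

// A minimal sketch, not the full STARTUPINFO declaration: only the string
// fields under discussion are shown and the remaining fields are omitted.
// STARTUPINFO_Fragment and NativeStringHelper are hypothetical names.
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct STARTUPINFO_Fragment
{
    public int cb;
    public IntPtr lpReserved;  // IntPtr instead of string so the marshaler
    public IntPtr lpDesktop;   // never calls CoTaskMemFree on memory the
    public IntPtr lpTitle;     // runtime does not own
    // ... remaining STARTUPINFO fields omitted ...
}

class NativeStringHelper
{
    // Convert the native pointer to a managed string manually.
    public static string PtrToString(IntPtr ptr)
    {
        return ptr == IntPtr.Zero ? null : Marshal.PtrToStringUni(ptr);
    }
}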

The problem with errors that occur due to such memory corruption issues is that their results are unpredictable. Some users may never witness a crash, others witness the crash when their machines are under memory pressure, and in some cases it crashes right away. Of course, the crash is never in the same place twice. Not only do these problems waste lots of developer time in tracking them down, they also lead to a negative user experience with the target application.

Hopefully, when Longhorn ships and introduces WinFX this class of problem will become a thing of the past. In the meantime, I need to spend some time going over our code that does all the Win32 interop to ensure that there are no other such issues waiting to rear their head.


 

Categories: Technology

I just read Tim Bray's entry entitled SOA Talk where he mentions listening to Steve Gillmor, Doc Searls, Jon Udell, Dana Gardner, and Dan Farber talk about SOA via “The Gillmor Gang” at ITConversations. I tried to listen to the radio show a few days ago but had the same problems Tim had. A transcript would definitely be appreciated.

What I found interesting is this excerpt from Tim Bray's blog post

Apparently a recent large-scale survey of professionals revealed that “SOA” has positive buzz and high perceived relevance, while “Web Services” scores very low. Huh?

This is very unsurprising to me. Regular readers of my blog may remember I wrote about the rise of the Service Oriented Architecture fad a few months ago. Based on various conversations with different people involved with XML Web Services and SOA I tend to think my initial observations in that post were accurate. Specifically I wrote

The way I see it the phrase "XML Web Services" already had the baggage of WSDL, SOAP, UDDI, et al so there a new buzzphrase was needed that highlighted the useful aspects of "XML Web Services" but didn't tie people to one implementation of these ideas but also adopted the stance that approaches such as CORBA or REST make sense as well.

Of the three words in the phrase "XML Web Services" the first two are implementation specific and not in a good way. XML is good thing primarily because it is supported by lots of platforms and lots of vendors not because of any inherrent suitability of the technology for a number of the tasks people utilize it for. However in situations where this interop is not really necessary then XML is not really a good idea. In the past, various distributed computing afficionados have tried to get around this by talking up the The InfoSet which was just a nice way of deprecating the notion of usage of the XML text format everywhere being a good thing. The second word in the phrase is similarly inapllicable in the general case. Most of the people interested in XML Web Services are interested in distributed computing which traditionally and currently is more about the intranet than it is about the internet. The need to justify the Web-like nature of XML Web Services when in truth these technologies probably aren't going to be embraced on the Web in a big way seems to have been a sore point of many discussions in distributed computing circles.

Another reason I see for XML Web Services having negative buzz versus SOA is that when many people think of XML Web Services, they think of overhyped technologies that never delivered such as Microsoft's Hailstorm.  On the other hand, SOA is about applying the experiences of 2 decades of building distributed applications to building such applications today and in the future. Of course, there are folks at Microsoft who are wary of being burned by the hype bandwagon and there've already been some moves by some of the thought leadership to distance what Microsoft is doing from the SOA hype. One example of this is the observation that lots of the Indigo folks now talk about 'Service Orientation' instead of 'Service Oriented Architecture'.

Disclaimer: The above comments do not represent the thoughts, intentions, plans or strategies of my employer. They are solely my opinion.


 

Categories: Technology | XML

Every once in a while I see a developer of a news aggregator decide to add a 'feature' that unnecessarily chomps down the bandwidth of a web server in a manner one could classify as rude. The first I remember was Syndirella, which had a feature that allowed you to syndicate an HTML page and then specify regular expressions for which parts of the page you wanted it to treat as titles and content. There are three reasons I consider this rude,

  • If a site hasn't put up an RSS feed it may be because they don't want to deal with the bandwidth costs of clients repeatedly hitting their sites on behalf of a few users
  • An HTML page is often larger than the corresponding RSS feed. The Slashdot RSS feed is about 2K while just the raw HTML of the front page of slashdot is about 40K
  • An HTML page could change a lot more often than the RSS feed would [e.g. rotating ads, trackback links in blogs, etc.]

For these reasons I tend to think that the right thing to do if a site doesn't support RSS is to send them a request that they provide a feed, highlighting its benefits, instead of eating up their bandwidth.

The second instance I've seen of what I'd call rude bandwidth behavior is a feature of NewsMonster that Mark Pilgrim complained about last year where every time it finds a new RSS item in your feed, it will automatically download the linked HTML page (as specified in the RSS item's link element), along with all relevant stylesheets, Javascript files, and images. Considering that the user may never click through to the web site from the RSS view, this is potentially hundreds of unnecessary files being downloaded by the aggregator a day. This is not an exaggeration; I'm subscribed to a hundred feeds in my aggregator and there is an average of two posts a day to each feed, so downloading the accompanying content and images means literally hundreds of files in addition to the RSS feeds being downloaded.

The newest instance of unnecessary bandwidth hogging behavior I've seen from a news aggregator was pointed out by Phil Ringnalda's comments about excessive hits from NewzCrawler, which I'd also seen in my referrer logs and had been puzzled about. According to the answer on the NewzCrawler support forums, when NewzCrawler updates a channel supporting wfw:commentRss it first updates the main feed and then it updates the comment feeds. Repeatedly downloading the RSS feed for the comments to each entry in my blog when the user hasn't requested them is unnecessary and quite frankly wasteful.

Someone really needs to set up an aggregator hall of shame.


 

Categories: Technology

I've been reading quite a bit about various opinions on standards in the software industry. The first piece I read about this was the C|Net article  You call that a standard? where Robert Glushko said

Q: Why have so many standards emerged for electronic commerce?
A: One of the issues here is what a standard is. That is one of the most abused words in the language and people like you (in the media) do not help by calling things standard that are not standards. Very few things are really standard. Standards come out of standards organizations, and there are very few of those in the world.

There is ANSI (American National Standards Institute), there is ISO (International Organization for Standardization), the United Nations. Things like OASIS and the W3C (World Wide Web Consortium) and WS-I (Web Services Interoperability Organization) are not standards organizations. They create specifications that occasionally have some amount of consensus. But it is the marketing term to call things standard these days.

I tend to agree that a lot of things the media and software industry pundits call “standards” are really specifications not standards. However I'd even claim that simply calling something a standard because some particular organization produced it doesn't really jibe with reality. There has been lengthy discussion about this C|Net article on XML-DEV and in one of the posts I wrote

The word "standard' when it comes to software and computer technology is usually meaningless. Is something standard if it produced by a standards body but has no conformance tests (e.g. SQL)? What if it has conformance testing requirements but is owned by a single entity (e.g. Java)? What if it is just widely supported with no formal body behind it (e.g. RSS)?
 
Whenever I hear someone say standard it's as meaningless to me as when I hear the acronym 'SOA', it means whatever the speaker wants it to mean.

Every one of the technologies mentioned in my post(s) on XML-DEV (SQL, Java, RSS, Flash) can be considered a standard by developers and their customers for some definition of the word 'standard'. In particular, I want to seize on Glushko's claim that standards are things produced by standards bodies with the example of ANSI (American National Standards Institute) and the “SQL standard”. Coincidentally, I recently read an article entitled Is SQL a Real Standard Anymore? written by Michael Gorman, who has been the Secretary of the ANSI (American National Standards Institute) NCITS (National Committee on Information Technology Standards) H2 Technical Committee on Database for over 23 years. In the article he begins

What Makes a Standard a Standard?

Simple. Not implementation, but conformance. And, conformance is “known” only after conformance testing. Then and only then can users know with any degree of certainty that a vendor’s product conforms to a standard.
...
But, from the late 1980s through 1996 there was conformance testing. This was accomplished by the United States Government Department of Commerce’s National Institute of Standards and Technology (NIST). NIST conducted the tests in support of public law that was originally known as the "Brooks Act," and later under other laws that were passed in the 1990s. The force behind the testing was that no Federal agency was able to buy a DBMS unless it passed conformance tests. Conformance meant the possibility of sales.
...
The benefits derived from the NIST conformance tests were well documented. A NIST commissioned study showed that there were about $35 million in savings from a program that only cost about $600 thousand. But, in 1996, NIST started to dismantle its data management standards program. The publically stated reason was "costs." Obviously, that wasn’t true.
...
In May of 1996, I wrote an article for the Outlook section of the Washington Post. It was unpublished as it was considered too technical. The key parts of the article were:

"Because of NIST’s FY-97 and beyond plans, SQL’s conformance tests and certifications, that is, those beyond the SQL shell will be left to the ANSI/SQL vendors. They however have no motivation whatsoever to perform full and complete testing nor self policing. Only the largest buyer has that motivation, and in the case of ANSI/SQL the largest buyer is the United States Government.
...
"Simply put, without robust test development and conformance testing by NIST, DBMS will return to the days of vendor specific, conflicting features and facilities that will lock Federal agencies into one vendor, or make DBMS frightfully expensive acquire, use, and dislodge.”

This definitely hits the nail on the head. Standards are a means to an end and in this case the goal of standards is to prevent vendor lock-in. That's it, plain and simple. The rest of Michael Gorman's article goes on to elaborate how the things he predicted in his 1996 article have come to pass and why SQL isn't much of a standard anymore since vendors basically pay lip service to it and have no motivation to take it seriously. Going back to Glushko's article on C|Net, SQL is a standard since it is produced by a standards body, yet here we have the secretary of the committee saying that it isn't. Who are we to believe?

From my point of view, almost everything that is called a 'standard' by the technology press and pundits is really just a specification. The fact that the W3C/ISO/ANSI/OASIS/WS-I/IETF/etc produced a specification doesn't make it a 'standard' by any real definition of the word except for one that exists in the minds of technology pundits. Every once in a while someone at Microsoft asks me “Is RSS a standard?” and I always ask “What does that mean?” because, as shown by the articles linked above, it is an ambiguous question. People ask the question for various reasons; they want to know about the quality of the specification, the openness of the process for modifying or extending the specification, where to seek clarifications, or whether the technology is controlled by a single vendor. All of these are valid questions but few [if any] of them are answered by the question “Is <technology foo> a standard?”


 

Categories: Technology

Ted Neward, James Robertson and a few others have been arguing back and forth about the wisdom of preventing derivation of one's classes by marking them as sealed in C# or final in Java. Ted covers various reasons why he thinks preventing derivation is valid in posts such as Uh, oh; Smalltalk culture clashes with Java/.NET culture..., Unlearn Bo... and Cedric prefers unsealed classes. On the other side is James Robertson, who thinks doing so amounts to excessively trying to protect developers, in his post More thoughts on sealed/final.

As a library designer I definitely see why one would want to prevent subclassing of a particular type. My top 3 reasons are

  1. Security Considerations: If I have a class that is assumed to perform certain security checks or conform to certain integrity constraints that other classes depend on, then it may make sense to prevent inheritance of that class (see the sketch after this list).

  2. Fundamental Classes: Certain types are so fundamental to the software system that replacing them is too risky to allow. Types such as strings or numeric types fall into this category. An API that takes a string or int parameter shouldn't have to worry whether someone has implemented their own “optimized string class“ that looks like System.String/java.lang.String but doesn't support Unicode or stores information as UTF-8 instead of UTF-16.

  3. Focused Points of Extensibility: Sometimes an object model is designed with certain extensibility points, in which case other potential extensibility points should be blocked. For example, in System.Xml in Whidbey we will be pointing people to subclass XPathNavigator as their way of exposing data as XML instead of subclassing XmlDocument or XPathDocument. In such cases it would be valid to have made the latter classes final instead of proliferating bad ideas such as XmlDataDocument.
There is a fourth reason which isn't as strong but one I think is still valid

  4. Inheritance Testing Cost Prohibitive: A guiding principle for designing subclasses is the Liskov Substitution Principle which states

    If for each object o1 of type S there is an object o2 of type T such that for all programs P defined in terms of T, the behavior of P is unchanged when o1 is substituted for o2 then S is a subtype of T.
    Although straightforward on the surface, it is hard work testing that this is indeed the case when designing your base classes. To test that one's classes truly exhibit this principle, subclasses should be created and run through the gamut of tests that the base class passes to see if the results are indistinguishable. Often it is actually the case that there is a dependency on the internal workings of the class by related components. There are two examples of this in v1.0 of the APIs in System.Xml in the .NET Framework: XmlValidatingReader's constructor accepts an XmlReader as input but really only supports an XmlTextReader, and XslTransform should accept an XPathNodeIterator as output from an extension function but really only supports the internal ResetableIterator. Neither of these cases is an example of where the classes should be sealed or final (in fact we explicitly designed the classes to be extensible) but they do show that it is very possible to ship a class where related classes actually have dependencies on its internals, thus making subclassing it inappropriate. Testing this can be expensive and making classes final is one way to eliminate an entire category of tests that should be written as part of the QA process of the API. I know more than one team at work who've taken this attitude when designing a class or set of classes.
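
To make the first reason concrete, here is a minimal sketch of sealing a security-sensitive class in C#. SecureTokenValidator is a made-up name used purely for illustration.

// A minimal sketch, assuming a hypothetical SecureTokenValidator class whose
// integrity checks other code depends on. Marking it sealed means no subclass
// can override or bypass those checks.
public sealed class SecureTokenValidator
{
    public bool Validate(string token)
    {
        // integrity constraint that dependent code relies on
        return token != null && token.Length == 32;
    }
}

// public class EvilValidator : SecureTokenValidator { }
// compile-time error: cannot derive from sealed type 'SecureTokenValidator'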

That's my $0.02 on the topic. I do agree that most developers don't think about the ramifications of making their classes inheritable when designing them but to me that is an argument that final/sealed should not be the default modifier on a class. Most developers won't remember to change the default modifier regardless of what it is, the question is whether it is more beneficial for there to be lots of classes that one can't derive from or that one can even if the class wasn't designed with derivation in mind. I prefer the latter.  


 

Categories: Technology

In Christoph Schittko's post about CapeClear's latest WSDL editor he writes

CapeClear recently released a new version of their free WSDL editor. The new version allows adding XML Schemas which, in my mind, was definitely a much needed feature...The only gripe I have is that the new version is no longer called WSDL Editor. Now it's the SOA editor

SOA is now a much overhyped and meaningless buzzword. The entire industry is hyping XML Web Services all over again without pointing out anything of much concrete worth. Microsoft is right up there as well with publications like Microsoft Architects Journal, whose last issue had 3 of 5 articles with "Service Oriented" in the title, and at least one other being probably the quintessential work on Service Oriented Architecture, Metropolis.

There are basically only two people whose opinions about the mostly meaningless Service Oriented hoopla I consider worth anything: Pat Helland, who was talking about service oriented architectures before the buzzword had a name, and Don Box, because he's the first person I've seen do a decent job of trying to distill the service oriented fundamentals (see A Guide to Developing and Running Connected Systems with Indigo).

Don's fundamentals

  • Boundaries are explicit
  • Services are autonomous
  • Services share schema and contract, not class
  • Service compatibility is determined based on policy

make a lot of sense to me. The only problem I have with Don is that he is working on Indigo, which is anywhere from 1.5 to 2 years from shipping depending on who you're listening to, but Microsoft is pimping SOA today. I've talked to him about doing a bit more writing on service oriented development with existing technologies and even offered to cowrite if he can't find the time. Unfortunately I've been really, really busy (we are short one PM and a dev on my immediate team, speaking of which WE ARE HIRING!!!) so I haven't been very good at nagging him to do more writing. Hopefully once we lock down for Whidbey beta 1 I'll have some free time until the beta 2 deluge begins.


 

Categories: Technology

My buddy Josh Ledgard has some posts on the search for the best way to provide online discussion forums on Microsoft technologies for our customers. He has two posts that kick off his thoughts on the issues: MVP Summit Views and Issues with Threaded Discussions and Brainstorming Results of Online Discussion Solutions. In the former he basically states that during the MVP Summit the major feedback he got from MVPs was that they want discussions in an online forum to have all the functionality that NNTP newsgroups and newsreaders provide today (offline capability, watches, authentication & identification, etc.) but they don't like the amount of traffic in the NNTP newsgroups.

Josh's second post mainly tries to address the fact that a number of groups at Microsoft felt that newsgroups are a low tech solution and have ended up creating alternate online forums such as the ASP.NET forums and GotDotNet message boards. This basically has fragmented our developer support experience and is also problematic for product teams since we need to monitor multiple online venues using multiple tools.

My simple suggestion is that the various Microsoft forums should emit RSS that fully supports the Comment API and wfw:commentRss; then developers will step up to the plate and create online or desktop aggregators that combine both the NNTP newsgroups and the Web forums. One of the reasons I plan to add NNTP support to RSS Bandit is exactly because I now have to use three different tools to monitor our developer forums (Outlook for mailing lists and to get alerts from Web forums, Outlook Express for NNTP newsgroups and RSS Bandit for blogs). There's no reason why I can't collapse this into two tools or even one if I use something like Newsgator.

Josh and I are supposed to have lunch tomorrow. I'll see what the Visual Studio team is thinking of encouraging in this direction if anything.


 

A recent post by Brad Abrams on DateTime, Serialization and TimeZones highlights one of the problems with the System.DateTime structure. The best way to illustrate his problem is with a code fragment and the output from the code

    DateTime now = DateTime.Now;
   
    Console.WriteLine(now);
    Console.WriteLine(now.ToUniversalTime());

    Console.WriteLine(XmlConvert.ToString(now));
    Console.WriteLine(XmlConvert.ToString(now.ToUniversalTime()));

which results in the following output

4/14/2004 9:29:31 AM
4/14/2004 4:29:31 PM
2004-04-14T09:29:31.9531250-07:00
2004-04-14T16:29:31.9531250-07:00

which at first seems right until you consider that all the values should reflect the same instant in time but do not. The problem is that the DateTime structure does not store time zone information which causes all sorts of problems when trying to transmit date or time values across the network or persist them. I've had to deal with this because I'm responsible for the System.Xml.XmlConvert class and have been working with folks that are responsible for XML Serialization, the DataSet and the CLR team as to how best to work around this problem. As Brad Abrams mentions in his blog post 

You can mark a DateTime instance as Local, UTC or Unspecified, and Unspecified means that it should not be serialized with a time zone. Unfortunately it won't magically take effect because it would break compatibility. But there will be ways to opt-in to this behavior for DataSet, XML Serialization and XML Convert.

so there won't be time zone information added to the type but at the very least it should be possible to get my code fragment above to emit equivalent times in Whidbey. Time zone information still won't be supported, but you can always normalize to UTC and have this preserved correctly instead of what happens now.

This is what has been proposed and I'm in the process of working out exactly how this should work for XmlConvert. We will most likely add overloads to the ToString method that takes a DateTime, allowing people to control whether they want the backwards compatible (but incorrect) v1.0 behavior or the correct behavior when outputting a UTC date.
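
In the meantime, one possible workaround (just a sketch, not an official recommendation) is to normalize to UTC yourself and emit the lexical form with an explicit 'Z' suffix instead of relying on XmlConvert:

// A minimal sketch of a v1.x workaround: normalize to UTC and format the
// value with an explicit 'Z' suffix so the serialized form is unambiguous.
// This sidesteps XmlConvert.ToString, which always appends the local offset.
using System;
using System.Globalization;

class DateTimeWorkaround
{
    static void Main()
    {
        DateTime now = DateTime.Now;
        string utcLexical = now.ToUniversalTime().ToString(
            "yyyy-MM-dd'T'HH:mm:ss.fffffff'Z'", CultureInfo.InvariantCulture);
        Console.WriteLine(utcLexical); // e.g. 2004-04-14T16:29:31.9531250Z
    }
}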


 

Miguel pointed me to an interesting discussion between Havoc Pennington of RedHat and Paolo Molaro, a lead developer of the Mono project. Although I exchanged mail with Miguel about this thread about a week ago I've been watching the discussion as opposed to directly commenting on it because I've been trying to figure out if this is just a discussion between a couple of Open Source developers or a larger discussion between RedHat and Novell being carried out by proxy.

Anyway, the root of the discussion is Havoc's entry entitled Java, Mono, or C++? where he starts off by pointing out that a number of the large Linux desktop projects are interested in migrating from C/C++ to managed code. Specifically he writes

In the Linux desktop world, there's widespread sentiment that high-level language technologies such as garbage collection, sandboxed code, and so forth would be valuable to have and represent an improvement over C/C++.

Several desktop projects are actively interested in this kind of technology:

  • GNOME: many developers feel that this is the right direction
  • Mozilla: to take full advantage of XUL, it has to support more than just JavaScript
  • OpenOffice.org: has constantly flirted with Java, and is considering using Java throughout the codebase
  • Evolution: has considered writing new code and features in Mono, though they are waiting for a GNOME-wide decision

Just these four projects add up to probably 90% of the lines of code in a Linux desktop built around them

Havoc then makes the argument that the Open Source community will have to make a choice between Java/JVM or C#/CLI. He argues against choosing C#/CLI by saying

Microsoft has set a clever trap by standardizing the core of the CLI and C# language with ECMA, while keeping proprietary the class libraries such as ASP.NET and XAML. There's the appearance of an open managed runtime, but it's an incomplete platform, and no momentum or standards body exists to drive it to completion in an open manner...Even if we use some unencumbered ideas or designs from the .NET world, we should never define our open source managed runtime as a .NET clone.

and argues for Java/JVM by writing

Java has broad industry acceptance, historically driven by Sun and IBM; it's by far the most-used platform in embedded and on the UNIX/Linux enterprise server...One virtue of Java is that it's at least somewhat an open standard; the Java Community Process isn't ideal, but it does cover all the important APIs. The barest core of .NET is an ECMA standard, but the class libraries of note are Microsoft-specific...It's unclear that anyone but Microsoft could have significant influence over the ECMA spec in any case...

Also worth keeping in mind, OO.org is already using Java.

Combining Java and Linux is interesting from another standpoint: it merges the two major Microsoft-alternative platforms into a united front.

At this point it is clear that Havoc does agree with what Miguel and the rest of the Mono folks have been saying for years about needing a managed code environment to elevate the state of the art in desktop application development on UNIX-based Open Source platforms. I completely disagree with him that Sun's JCP process is somehow more of an open standard than ECMA. That just seems absurd. He concludes the article with

What Next?

For some time, the gcj and Classpath teams have been working on an open source Java runtime. Perhaps it's time to ramp up this effort and start using it more widely in free software projects. How long do we wait for a proprietary JDK to become GPL compatible before we take the plunge with what we have?

The first approach I'd explore for GNOME would be Java, but supporting a choice of gcj or IKVM or the Sun/IBM JDKs. The requirement would be that only the least common denominator of these three can be used: only the subset of the Java standard completed in GNU Classpath, and avoiding features specific to one of the VMs. Over time, the least common denominator becomes larger; Classpath's goal is to complete the entire Java standard.

There is also some stuff about needing to come up with an alternative to XAML so that GNOME and co. stay competitive but that just seems like the typical Open Source need to clone everything a proprietary vendor does without thinking it through. There was no real argument as to why he thought it would be a good idea, just a need to play catchup with Microsoft.

Now on to the responses. Paolo has two responses to Havoc's call to action. Both posts argue that technically Mono is as mature as the Open Source Java/JVM projects and has niceties such as P/Invoke that make communication between native and managed code straightforward. Secondly, his major point is that there is no reason to believe that, while Microsoft might eventually sue the Mono project for violating patents on .NET Framework technologies, Sun would not do the same with Java technologies. Not only has Sun sued before when it felt Java was being threatened (the lengthy lawsuit with Microsoft) but unlike Microsoft it has never given any Java technology to a standards body to administer in a royalty free manner, as Microsoft has done with C# and the CLI. Miguel also followed up with his post Java, Gtk and Mono which shows that it is possible to write Java code against Mono, pointing out that language choice is separate from the choice of runtime (JVM vs. CLI). He also echoes Paolo's sentiments on Sun's and Microsoft's behavior with regards to software patents and their technologies in his post On Software Patents.

Havoc has a number of followup posts where he points out various other options people have mailed him and where he notes that his primary worry is that the current state of affairs will lead to fragmentation in the Open Source desktop world. Miguel responds in his post On Fragmentation, reply with the following opening

Havoc, you are skipping over the fact that a viable compromise for the community is not a viable compromise for some products, and hence why you see some companies picking a particular technology as I described at length below.

which I agree with completely. Even if the Open Source community agreed to go with C#/CLI I doubt that Sun would choose anything besides Java for their “Java Desktop System”. If Havoc is saying that having companies like Sun on board with whatever decision he is trying to arrive at is a must, then he's already made the decision to go with Java and the JVM. Given that Longhorn will have managed APIs (aka WinFX), Miguel believes that the ability to migrate from Windows programming to Linux programming [based on Mono] would be huge. I agree; one of the reasons Java became so popular was the ease with which one could migrate from platform to platform and preserve one's knowledge since Java was somewhat Write Once Run Anywhere (WORA). However this never extended to building desktop applications, which Miguel is now trying to tap into by pushing Linux desktop development to be based on Mono.

I have no idea how Microsoft would react to the outcome that Miguel envisions but it should be an interesting ride.

 


 

Categories: Technology

March 16, 2004
@ 05:10 PM

While hanging around TheServerSide.com I discovered a series on Naked Objects. It's an interesting idea that eschews separating application layers in GUIs (via MVC) or server applications (presentation/business logic/data access layers) in favor of coding only domain model objects, which then have a standard GUI autogenerated for them. There are currently five articles in the series, which are listed below along with my initial impressions of each.

Part 1: The Case for Naked Objects: Getting Back to the Object-Oriented Ideal
Part 2: Challenging the Dominant Design of the 4-Layer Architecture
Part 3: Write an application in Java and deploy it on .Net
Part 4: Modeling simultaneously in UML, Java, and User Perspectives
Part 5: Fat is the new Thin: Building Rich Internet Applications with Naked Objects

Part 1 points out that in many N-tier server-side applications there are four layers: persistence, the domain model, the controller and presentation. The author points out that object-relational mapping frameworks are now popular as a mechanism for collapsing the domain model and persistence layers. Naked objects comes from the other angle and attempts to collapse the domain model, control and presentation layers. The article also argues that current practices in application development, such as web services and component based architectures which separate data access from the domain model, reduce many of the benefits of object oriented programming.

In a typical naked objects application, the framework uses reflection to determine the methods of an object and render them using a generic graphical user interface (screenshot). This encourages objects to be 'behaviourally complete': all significant actions that can be performed on the object must exist as methods on the object.
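
To make the mechanism concrete, here is a minimal sketch (in C# rather than the Java the framework actually uses, and with made-up Customer and GenericRenderer types) of how a generic UI could discover an object's actions via reflection:

// A minimal sketch, for illustration only, of how a generic UI can discover
// an object's actions at runtime via reflection and present them to the user.
// Customer and GenericRenderer are hypothetical names.
using System;
using System.Reflection;

class Customer
{
    public void BookService() { /* domain behavior */ }
    public void GenerateInvoice() { /* domain behavior */ }
}

class GenericRenderer
{
    static void Main()
    {
        object domainObject = new Customer();
        // Only the methods declared on the domain object itself become actions.
        foreach (MethodInfo m in domainObject.GetType().GetMethods(
            BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly))
        {
            Console.WriteLine("Action: " + m.Name); // e.g. rendered as a menu item
        }
    }
}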

The author states that there are six benefits of using naked objects

  • Higher development productivity through not having to write a user interface
  • More maintainable systems through the enforced use of behaviourally-complete objects.
  • Improved usability deriving from a pure object-oriented user interface
  • Easier capture of business requirements because the naked objects would constitute a common language between developers and users.
  • Improved support for test-driven development
  • It facilitates cross-platform development

What I found interesting about the first article in the series is that the author rails against separating the domain model from the data access layer, yet naked objects seem to be more about blending the GUI layer with the domain model. There seem to be some missing pieces to the article. Perhaps the implication is that one should use object-relational mapping technologies in combination with naked objects to collapse an application from 4 layers to a single 'behaviorally complete' domain model?

Part 2 focuses on implementing the functionality of a 4-layer application using naked objects. One of the authors had written a tutorial application for a book: software for running an auto servicing shop that performed tasks like booking in cars for service and billing the customer. The conclusion after the rewrite was that the naked objects implementation took fewer lines of code and had fewer classes than the previous implementation which had 4 layers. Also, adding new functionality such as obtaining the customer's sales history took less time in the naked objects implementation than in the 4-layer implementation.

There are caveats. One was that the user interface was not as rich as the one where the developer had an explicit presentation layer as opposed to relying on a generic autogenerated user interface. Also, complex operations such as 'undoing' a business action were not supported in the naked objects implementation.

Part 3 points out that if you write a naked objects implementation targeting Java 1.1 then you can compile it using J# without modification. Thus porting from Java to .NET should be a cinch as long as you use only Java 1.1. Nothing new here.

Part 4 points out that naked objects encourages “code first design” which the authors claim is a good thing. They also point out that if one really wants to get UML diagrams out of a naked objects application they can use tools like Together which can generate UML from source code.

I'm not sure I agree that banging out code first and writing use cases or design documents afterwards is a software development methodology worth encouraging.

Part 5 trots out the old saw about rich internet applications and how much better they are than limiting HTML-based browser applications. The author points out that writing a Java applet which uses the naked objects framework gives a richer user experience than an HTML-based application. However, as mentioned in previous articles, you could build an even richer client interface with an explicit presentation layer instead of relying on the generic user interface provided by the naked objects framework.

Interesting ideas. I'm not sure how well they'd scale up to building real-world applications but it is always good to challenge assumptions so developers don't get complacent. 


 

Categories: Technology

I hung out with Lili Cheng on Tuesday and we talked about various aspects of social software and weblogging technologies. She showed me Wallop while I showed her RSS Bandit and we were both suitably impressed by each other. She liked the fact that in RSS Bandit you can view weblogs as conversations while I liked the fact that unlike other social software I've seen Wallop embraced blogging and content syndication.

We both felt there was more to social software than projects like Orkut and she had an interesting insight: Orkut and tools like it turned the dreary process of creating a contact list into something fun. And that's about it. I thought to myself that if a tool like Outlook allowed me to create contact lists in an Orkut-style manner instead of the tedious process that exists today, that would really be a killer feature. She also showed me a number of other research projects created by the Microsoft Research Social Computing Group. The one I liked best was MSR connections, which can infer the relationships between people based on public information exposed via Active Directory. The visualizations produced were very interesting and quite accurate as well. She also showed me a project called Visual Summaries which implemented an idea similar to Don Park's Friendship Circle using automatically inferred information.

One of the things Lili pointed out to me about aggregators and blogging is that people like to share links and other information with friends. She asked how RSS Bandit enables that and I mentioned the ability to export feed lists to OPML and email people blog posts, as well as the ability to invoke w.bloggar from RSS Bandit. This did get me thinking that we should do more in this space. One piece of low hanging fruit is that users should be able to export a category in their feed list to OPML instead of being limited to exporting the whole feed list, since they may only want to share a subset of their feeds. I have this problem because my boss has asked to try out my feed list but I've hesitated to send it to him since I know I subscribe to a bunch of stuff he won't find relevant. I also find some of the ideas in Joshua Allen's post about FOAF + De.licio.us to be very interesting if you substitute browser integration with RSS Bandit integration. He wrote

To clarify, by better browser integration I meant mostly integration with the de.licio.us functionality.  For example, here are some of the things I would like to see:

·         Access to my shared bookmarks from directly in the favorites menu of IE; and adding to favorites from IE automatically adds to de.licio.us (I’m aware of the bookmarklet)

·         When browsing a page, any hyperlinks in the page which are also entries in your friends’ favorites lists would be emphasized with stronger or larger typeface

·         When browsing a page, some subtle UI cue would be given for pages that are in your friends’ favorites lists

·         When visiting a page that is in your favorites list, you could check a toolbar button to make the page visible to friends only, everyone, or nobody

·         When visiting a page which appears in other people’s favorites, provide a way to get at recommendations “your friends who liked this page also liked these other pages”, or “your friends recommend these pages instead of the one you are visiting”

I honestly think the idea becomes much more compelling when you share all page visits, and not just favorites.  You can cluster, find people who have similar tastes or even people who have opposite tastes.  And the “trace paths” could be much more useful

Ahhh, so many ideas yet so little free time. :)


 

Categories: RSS Bandit | Technology

I just read a post by Bruce Eckel entitled Generics Aren't where he writes

My experience with "parameterized types" comes from C++, which was based on ADA's generics... In those languages, when you use a type parameter, that parameter takes on a latent type: one that is implied by how it is used, but never explicitly specified.

In C++ you can do the equivalent:

class Dog {
public:
  void talk() { }
  void reproduce() { }
};

class Robot {
public:
  void talk() { }
  void oilChange() { }
};

template<class T> void speak(T speaker) {
  speaker.talk();
}

int main() {
  Dog d;
  Robot r;
  speak(d);
  speak(r);
}

Again, speak() doesn't care about the type of its argument. But it still makes sure – at compile time – that it can actually send those messages. But in Java (and apparently C#), you can't seem to say "any type." The following won't compile with JDK 1.5 (note you must invoke the compiler with the source -"1.5" flag to compile Java Generics):

public class Communicate  {
  public <T> void speak(T speaker) {
    speaker.talk();
  }
}

However, this will:

public class Communicate  {
  public <T> void speak(T speaker) {
    speaker.toString(); // Object methods work!
  }
}

Java Generics use "erasure," which drops everything back to Object if you try to say "any type." So when I say <T>, it doesn't really mean "anything" like C++/ADA/Python etc. does, it means "Object." Apparently the "correct Java Generic" way of doing this is to define an interface with the speak method in it, and specify that interface as a constraint. this compiles:

interface Speaks { void speak(); }

public class Communicate  {
  public <T extends Speaks> void speak(T speaker) {
    speaker.speak();
  }
}

What this says is that "T must be a subclass or implementation of Speaks." So my reaction is "If I have to specify a subclass, why not just use the normal extension mechanism and avoid the extra clutter and confusion?"

You want to call it "generics," fine, implement something that looks like C++ or Ada, that actually produces a latent typing mechanism like they do. But don't implement something whose sole purpose is to solve the casting problem in containers, and then insist that on calling it "Generics."

Although Bruce didn't confirm whether the above limitation exists in the C# implementation of generics, he is right that it has the same limitations as Java generics. The article An Introduction to C# Generics on MSDN describes the same limitations Bruce encountered with Java generics and shows how to work around them using constraints, just as Bruce discovered. If you read the problem statement described in the article on MSDN it seems the main goal of C# generics is to solve the casting problem in containers.
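
For comparison, a constrained version of Bruce's example in C# 2.0 generics would look something like the sketch below. ISpeaks and Communicate are illustrative names, not taken from either article.

// A minimal sketch of the constraint-based workaround in C# 2.0 generics,
// mirroring Bruce's Java example. ISpeaks and Communicate are hypothetical names.
interface ISpeaks
{
    void Speak();
}

class Communicate
{
    // The 'where' constraint tells the compiler that T has a Speak method.
    public void Speak<T>(T speaker) where T : ISpeaks
    {
        speaker.Speak();
    }
}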

What I find interesting about Bruce's post is the implication that to properly implement generics one must provide duck typing. I've always thought the behavior of templates in C++ was weird in that one could pass in a parameter and not enforce constraints on the behavior of the type. Yet it isn't really dynamic or latent typing because there is compile time checking to see if the type supports those methods or operations.

A few years ago I wrote an article entitled C++ in 2005: Can It Become A Java Beater?  in which I gave some opinions on an interview with Bjarne Stroustrup where he discussed various language features he'd like to see in C++. One of those features was constraints on template arguments, below is an excerpt from my article on this topic

Constraints for template arguments

Bjarne: This can be simply, generally, and elegantly expressed in C++ as is.

Templates are a C++ language facility that enable generic programming via parameteric polymorphism. The principal idea behind generic programming is that many functions and procedures can be abstracted away from the particular data structures on which they operate and thus can operate on any type.

In practice, the fact that templates can work on any type of object can lead to unforeseen and hard to detect errors in a program. It turns out that although most people like the fact that template functions can work on many types without the data having to be related via inhertance (unlike Java), there is a clamor for a way to specialize these functions so that they only accept or deny a certain range of types.

The most common practice for constraining template arguments is to have a constraints() function that tries to assign an object of the template argument class to a specified base class's pointer. If the compilation fails then the template argument did not meet the requirements. 

The point I'm trying to get at is that both C++ users and its inventor felt that being able to constrain the operations you could perform on the parameterized type, as opposed to relying on duck typing, was a desirable feature.

The next thing I want to point out is that Bruce does mention that generic programming in C++ was based on Ada's generics so I decided to spend some time reading up on them to see if they also supported duck typing. I read Chapter 12 of the book Ada 95: The Craft of Object-Oriented Programming where we learn

In the case of a linked list package, we want a linked list of any type. Linked lists of arrays, records, integers or any other type should be equally possible. The way to do this is to specify the item type in the package declaration as private, like this:

    generic
        type Item_Type is private;
    package JE.Lists is
        ...
    end JE.Lists;

The only operations that will be allowed in the package body are those appropriate to private types, namely assignment (:=) and testing for equality and inequality (= and /=). When the package is instantiated, any type that meets these requirements can be supplied as the actual parameter. This includes records, arrays, integers and so on; the only types excluded are limited types...

As you can see, the way you declare your generic type parameters puts restrictions on what operations you can perform on the type inside the package as well as what types you can supply as parameters. Specifying the parameter as ‘range <>’ allows the package to use all the standard operations on integer types but restricts you to supplying an integer type when you instantiate the package. Specifying the parameter as ‘private’ gives you greater freedom when you instantiate the package but reduces the range of operations that you can use inside the package itself.

So it looks like Ada gave you two options, neither of which looks like what you can do in C++. You could either pass in any type, in which case the only operations allowed on the type were equality and assignment, or you could pass in a constrained type. Thus it doesn't look like Ada generics had the weird mix of static and duck typing that C++ templates have.

I am as disappointed as Bruce that neither C# nor Java supports dynamic typing like languages such as Python or Smalltalk, but I don't think parametric polymorphism via generics has ever been used to solve this problem. As I have pointed out, neither Ada nor C++ actually gives him the functionality he wants, so I wouldn't be too hard on Java or C# if I were in his shoes.


 

Categories: Technology

RSS Bandit provides users with the ability to perform searches from its toolbar and view the results in the same UI used for reading blogs if the results can be viewed as an RSS feed. This integration is provided for Feedster searches and I was considering adding other blog search engines to the defaults. Torsten had given me an RSS link to some search results on PubSub and they seemed to be better than Feedster's in some cases. So this evening I decided to try out PubSub's weblog search and see if there was a way to provide similar integration with RSS Bandit. Unfortunately, it turns out that one has to provide them with an email address before performing any searches.

I guess they aren't in a rush to have any users.


 

Categories: Technology

March 7, 2004
@ 06:26 PM

There's currently a semi-interesting discussion about software patents on the XML-DEV mailing list sparked by a post by Dennis Sosnoski entitled W3C Suckered by Microsoft where he rants angrily about why Microsoft is evil for not instantly paying $521 million to Eolas and thereby starting a patent reform revolution. There are some interesting viewpoints voiced in the ensuing thread, including Tim Bray's suggestion that Microsoft pay Tim Berners-Lee $5 million for arguing against the Eolas patent.

The thread made me think about my position on filing software patents, given the vocal opposition to them in some online fora. I have recently gotten involved in patent discussions at work and I jotted down my thought processes as I was deciding whether filing for patents was a good idea or not. Below are the pros and cons of filing for patents from my perspective in the trenches (so to speak).

PRO

  1. Having a patent or two on your resume is a nice ego and career boost.
  2. As a shareholder at Microsoft it is in my best interests to file patents, which allow the company to defend itself from patent suits and reap revenue from patent licensing.
  3. The modest financial incentive we get for filing patents would cover a few rounds of drinks with friends.

CON

  1. Filing patents involves having meetings with lawyers.
  2. Patents are very political because you don't want to snub anyone who worked on the idea but also don't want to cheapen it by claiming that people who were peripherally involved were co-inventors. For example, is a tester who points out a design flaw in an idea now one of the co-inventors if it was a fundamental flaw?
  3. There's a very slight chance that Slashdot runs an article about a particular patent claiming that it is another evil plot by Microsoft. The reason the chance is slight is that the ratio of Slashdot articles about patents to patents actually filed is quite small.

That was my thought process as I sat in on some patent meetings. Basically there is a lot of incentive to file patents for software innovations if you work for a company that can afford to do so. However, the degree of innovation is in the eye of the beholder [and up to prior art searches].

I've seen a number of calls for patent reform for software but not any that have feasible or concrete proposals behind them. Most of the proponents of patent reform I've seen usually argue something akin to "Some patent that doesn't seem innovative to me got granted, so the system needs to be changed". How the system should be changed and whether the new system will not have problems of its own are left as exercises for the reader.

There have been a number of provocative writings about patent reform, the most prominent in my memory being the FSF's Patent Reform Is Not Enough and An Open Letter From Jeff Bezos On The Subject Of Patents. I suspect that the changes suggested by Jeff Bezos in his open letter do a good job of straddling the line between those who want to do away with software and business method patents and those that want to protect their investment.

Disclaimer: The above statements are my personal opinions and do not represent my employer's views in any way.


 

As pointed out in a recent Slashdot article some researchers at HP Labs have come up with what they have termed a Blog Epidemic Analyzer which aims to "track how information propagates through networks. Specifically...how web based memes get passed on from one user to another in blog networks". It sounds like an interesting idea; it would be cool to know who the first person to send out links about All Your Base Are Belong To Us or I Kiss You was. I can also think of more serious uses for being able to track down the propagation of particular links across the World Wide Web.

Unfortunately, it seems the researchers behind this are either being myopic or have to justify the cost of their research to their corporate masters by trying to compare what they've done to Google. From the  Blog Epidemic Analyzer FAQ

2. What's the point?

There has been a lot of discussion over the fairness of blogs, powerlaws, and A-list bloggers (You can look at the discussion on Many2Many for some of the highlights). The reality is that some blogs get all the attention. This means that with ranking algorithms like Technorati's and Google's Page Rank highly linked blogs end up at the top of search pages. Sometimes (maybe frequently) this is what you want. However, it is also possible that you don't want the most connected blog. Rather you would like to find the blog that discovers new information first.

The above answer makes it sound like these guys have no idea what they are talking about. Google and Technorati do vastly different things. The fact that Google's search engine lists highly linked blogs at the top of search results that they are only tangentially related to is a bug. For example, the fact that a random post by Russell Beattie about a company now makes him the fifth result that comes up for a search for that company in Google isn't a feature, it's a bug. The goal of Google (and all search engines) is to provide the most relevant results for a particular search term. In the past, tying relevance to popularity was a good idea but with the advent of weblogs and the noise they've added to the World Wide Web this is becoming less and less of a good idea. Technorati on the other hand has one express purpose: measuring weblog popularity based on incoming links.

The HP iRank algorithm would be a nice companion piece to things like Technorati and BlogPulse but comparing it to Google seems like a stretch.


 

Categories: Technology

February 26, 2004
@ 05:45 PM

In his post WinFS Scenario #2: event planning Jeremy Mazner writes

So as you can see, the information for any given event is spread all over the place, which means I can’t really keep track of it all.  If I want to see the invite list and RSVP status, I’m either in Outlook (if I’m lucky) or off to some file share to find the spreadsheet.  If I want to see some information about an invitee (their phone number, or who their account manager is), it’s off to the directory or CRM system to look it up.  If I want to know what presentations have been approved for use, I crawl through email to find the one message from my general manager where he says the presentation is ready.  If I want to see the actual presentation, it’s back to a file share or Sharepoint

What I really want is a way to corral all these related items together, stick them in a big bucket with a label that says what event they’re for.  I want a simple UI where I can see what events are coming up, then browse through all the related material for each one, and maybe be able to answer some simple questions: how many presentations for this event are in the Approved state?  How many attendees have declined the invitation?

 

I’ll assert that this is really, really hard to do today.  Outlook wizards would probably argue that you could do this with some series of catagories, public folders, and shared calendars.  SharePoint gurus would say this is exactly what a Meeting Workspace is for.  Old school event planners might claim you could track this all in a nice big spreadsheet with multiple pages and links out to file locations.

This is a valid problem that Jeremy brings up in his scenario and one I've thought about in the past when it comes to tying together information about the same person from disparate applications. Outlook comes closest to doing what I'd like to see here but it gets help from Exchange. I was curious as to how Jeremy thought WinFS could help here and his thoughts were similar to what I'd first thought when I heard about WinFS. Specifically he wrote

What does WinFS provide that will help?

  • A common storage engine, one unified namespace for storage of any application data on your machine.  Whether I use Outlook, AOL Communicator, or Notes, my emails can all be stored in WinFS.  (Yes, I understand that new versions of these apps will have to built…encouraging that is my job as evangelist.)
  • A set of common schemas, so that an email is an email is an email, no matter what app created it, and it always has a To:, From: and Subject: that I can access through the same API.
  • A data model that supports relationships, so that my event management app can specify “this email from Bill is related to the Longhorn Design Review event, as is this calendar appointment for next month”
  • A data model that supports extensions and meta-data on relationships, so that I not only say “this contact Jon is associated with this Design Review event”, but also “Jon is a speaker at this event” and “Jon is the author of this deck that he’ll present” and “Jon has not yet confirmed attendance at the party afterwards.”
  • Win32 file system access, so that even though files are stored in WinFS, applications can still get to their streams via a Win32 path

It seems Jeremy and I had the same thoughts about how WinFS could help in this regard. After thinking about the problem for a little bit, I realized that having all applications store similar information in a central repository brings a number of problems with it. The two main problems I can see are unreliable applications that cause data corruption, and security. The example I'll use is to imagine that RSS Bandit, SharpReader, and FeedDemon all stored their data in WinFS using the model that Jeremy describes above. This means all RSS/Atom feeds and configuration data used by each application are not stored in application-specific folders as is done today but in a unified store of RSS items. Here are two real problems that need to be surmounted:

  1. In the past bugs in RSS Bandit that led to crashes also caused corrupted feed files. It's one thing for bugs in an application to corrupt its data or configuration files but another for it to corrupt globally shared data. This is akin to the pain users and developers have felt when buggy apps corrupt the registry. The WinFS designers will have to account for this occurrence in some way even if it is just by coming up with application design guidelines.

  2. For feeds that require authentication, RSS Bandit stores the user name and password required to access the feed in configuration files. Having such data globally shared means that proper security precautions must be taken. Just because a user has entered a password in RSS Bandit doesn't mean they want it exposed to any other applications, especially potentially malicious ones. Of course, this is no different from what can happen today in Windows (e.g. most modern viruses/worms search the file system for email address book files to locate new victims to email themselves to) but with the model being pushed by WinFS this becomes a lot easier. A way to share data between WinFS-aware applications in a secure manner is a must-have.

Neither of these is an insurmountable problem, but they do point out that things aren't as easy as one might suspect at first glance at WinFS. I pass Mike Deem in the hallway almost daily; I should talk to him about this stuff more often. In the meantime I'll be sure to catch the various WinFS designers on the next episode of the .NET Show on MSDN.


 

Categories: Technology

In his post entitled Back in the Saddle Don Box writes

My main takeaway was that it's time to get on board with Atom - Sam is a master cat herder and I for one am ready to join the other kittens. 

This is good news. Anyone who's read my blog can probably discern that I think the ATOM syndication format is a poorly conceived waste of effort that unnecessarily fragments the website syndication world. On the other hand, the ATOM API, especially the bits about SOAP-enabled clients, is a welcome upgrade to the existing landscape of blog posting/editing APIs.

My experiences considering how to implement the ATOM API in RSS Bandit have highlighted one or two places where the API seems 'problematic' which actually point more to holes in the XML Web Services architecture than to actual problems with the API. The two scenarios that come most readily to mind are

  1. Currently if a user wants to post to their blog using client software they need to configure all sorts of technical settings such as which API to use, port numbers, end point URLs and a lot more. For example, look at what one has to do to post to a dasBlog weblog from w.bloggar. Ideally, the end user should just be able to point their client at their blog URL (e.g. http://www.25hoursaday.com/weblog) and have it figure out the rest.

    The current ATOM specs describe a technique for discovering the web service end points a blog exposes which involves downloading the HTML page and parsing out all the <link> tags (a rough sketch of this appears after this list). I've disagreed with this approach in the past but the fact is that it does get the job done.

    What this situation has pointed out to me is that there is no generic way to go up to a website and find out what XML Web Service end points it exposes. For example, if you wanted to find all the publicly available Web Services provided by Microsoft you'd have to read Aaron Skonnard's A Survey of Publicly Available Web Services at Microsoft instead of somehow discovering this programmatically. Maybe this is what UDDI was designed for?

  2. Different blogs allow different syntax for posting comments. I've lost count of the number of times I've posted a comment to a blog and wanted to provide a link but couldn't tell whether to just use a naked URL (http://www.example.com) or a hyperlink (<a href="http://www.example.com">example link</a>). Given that RSS Bandit has supported the CommentAPI for a while now, I've constantly been frustrated by the inability to tell what kind of markup or markup subset a blog allows in comments. A couple of blogs provide formatting rules when one is posting a comment but there really is no programmatic way to discover this.

    Another class of capabilities I'd like to discover dynamically is which features a blog supports. For instance, the ATOM API spec used to have a 'Search facet' which was removed because many people thought it'd be onerous to implement. What I'd have preferred would have been for it to be optional, so clients could dynamically discover whether the ATOM end point had search capabilities and if so how rich they were.

    The limitation here is that there isn't a generic way to discover and enunciate the fine grained capabilities of an XML Web Service end point. At least not one I am familiar with.
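
As a rough illustration of the <link> based discovery technique mentioned in the first point, here's a minimal C# sketch (using a naive regular expression rather than a real HTML parser, and a hypothetical blog URL) that pulls candidate end points out of a weblog's home page:

    using System;
    using System.IO;
    using System.Net;
    using System.Text.RegularExpressions;

    public class LinkDiscovery
    {
        public static void Main()
        {
            // Hypothetical blog home page; in practice this would be the URL
            // the user typed into their client.
            string blogUrl = "http://www.example.com/weblog/";

            WebClient client = new WebClient();
            string html = new StreamReader(client.OpenRead(blogUrl)).ReadToEnd();

            // Naive scan for <link> tags that have rel and href attributes
            // in that order; good enough to show the idea.
            Regex linkTag = new Regex(
                "<link[^>]*rel=[\"'](?<rel>[^\"']+)[\"'][^>]*href=[\"'](?<href>[^\"']+)[\"'][^>]*>",
                RegexOptions.IgnoreCase);

            foreach (Match m in linkTag.Matches(html))
            {
                Console.WriteLine("{0} -> {1}", m.Groups["rel"].Value, m.Groups["href"].Value);
            }
        }
    }

A real client would use an actual HTML parser (something like SgmlReader) and resolve relative URLs against the page's base URL, but the flow is the same.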

It would be nice to see what someone like Don Box can bring to the table in showing how to architect and implement such a loosely coupled XML Web Service based system on the World Wide Web.


 

Categories: Technology | XML

Recently ZDNet ran an article entitled Google spurns RSS for rising blog format where it stated

The search giant, which acquired Blogger.com last year, began allowing the service's million-plus members to syndicate their online diaries to other Web sites last month. To implement the feature, it chose the new Atom format instead of the widely used, older RSS.

I've seen some discussion about the fact that Google only provides feeds for certain blogs in the ATOM 0.3 syndication format, which is an interim draft of the spec that is part of an effort being driven by Sam Ruby to replace RSS and related technologies. When I first read this I ignored it because I didn't have any Blogger.com feeds that were of interest to me. This changed today. This afternoon I found out that Steve Saxon, the author of the excellent article XPath Querying Over Objects with ObjectXPathNavigator, had a Blogger.com blog that only provided an ATOM feed. Since I use RSS Bandit as my aggregator of choice I cannot subscribe to his feed, nor can I use a large percentage of the existing news aggregators to read Steve's feed.

What I find particularly stunning about Google's decision is that they have removed support for an existing, widely supported format in favor of an interim draft of a format which, according to Sam Ruby's slides for the O'Reilly Emerging Technology Conference, is several months away from being completed. An appropriate analogy for what Google has done would be AOL abandoning support for HTML and changing all of its websites to use the May 6th 2003 draft of the XHTML 2.0 spec. It simply makes no sense.

Some people, such as Dave Winer, believe Google is engaging in such user-unfriendly behavior for malicious reasons but given that Google doesn't currently ship a news aggregator there doesn't seem to be much of a motive there (of course, this changes once they ship one). I recently stumbled across an article entitled The Basic Laws of Human Stupidity which described the following 5 laws

  1. Always and inevitably everyone underestimates the number of stupid individuals in circulation.

  2. The probability that a certain person be stupid is independent of any other characteristic of that person.

  3. A stupid person is a person who causes losses to another person or to a group of persons while himself deriving no gain and even possibly incurring losses.

  4. Non-stupid people always underestimate the damaging power of stupid individuals. In particular non-stupid people constantly forget that at all times and places and under any circumstances to deal and/or associate with stupid people always turns out to be a costly mistake.

  5. A stupid person is the most dangerous type of person.

The only question now is "Is Google crazy, or crazy like a fox?" and only time will tell the answer to that question.


 

Categories: Technology

February 18, 2004
@ 06:28 AM

Chris Sells writes

On his quest to find "non-bad WinFS scenarios" (ironically, because he was called out by another Microsoft employee -- I love it when we fight in public : ), Jeremy Mazner, Longhorn Technical Evangelist, starts with his real life use of Windows Movie Maker and trying to find music to use as a soundtrack. Let Jeremy know what you think.

I think the scenario is compelling. In fact, the only issue I have with the WinFS scenario that Jeremy outlines is that he implies that the metadata about music files that Windows Media Player exposes is tied to the application, but in truth most of it is tied to the actual media files as regular file info [file location, date modified, etc.] or as ID3 tags [album, genre, artist, etc.]. This means that there doesn't even need to be explicit inter-application sharing of data.

If the file system had a notion of a music item which exposed the kind of information one sees in ID3 tags, and which was also exposed by the shell in standard ways, then you could do lots of interesting things with music metadata without even trying hard. I also think it's quite compelling because metadata attached to music files is such low-hanging fruit that one can get immediate value out of it, and it exists today on the average person's machine.


 

Categories: Technology

February 12, 2004
@ 11:47 PM

According to Jeremy Zawodny, "My Yahoo's RSS module also groks Atom. It was added last night. It took about a half hour." Seeing that he said it took only 30 minutes to implement, and that there are a couple of things about ATOM that require a little thought even if all you are interested in is titles and dates (as My Yahoo! is), I decided to give it a try and subscribed to Mark Pilgrim's Atom feed. This is what I ended up being shown in My Yahoo!

[My Yahoo! module: "dive into mark", listing recent entries and their dates]

The first minor issue is that the posts aren't sorted chronologically, but that isn't particularly interesting. What is interesting is that if you go to the article entitled The myth of RSS compatibility, its publication date is given as "Wednesday, February 4, 2004", which is about a week ago, and if you go to the post entitled Universal Feed Parser 3.0 beta, its publication date is given as Wednesday, February 1, 2004, which is almost two weeks ago, not a day ago as My Yahoo! claims.

The simple answer to the confusion can be gleaned from Mark's ATOM feed: that particular entry has a <modified> date of 2004-02-11T16:17:08Z, an <issued> date of 2004-02-01T18:38:15-05:00 and a <created> date of 2004-02-01T23:38:15Z. My Yahoo! is choosing to key the freshness of the article off its <modified> date even though when one gets to the actual content it seems much older.
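
Here's a minimal sketch, with a placeholder feed URL, of how an aggregator could prefer <issued> over <modified> when deciding which date to show the user:

    using System;
    using System.Xml;

    public class AtomDates
    {
        public static void Main()
        {
            XmlDocument feed = new XmlDocument();
            feed.Load("http://www.example.com/atom.xml"); // placeholder feed URL

            XmlNamespaceManager nsm = new XmlNamespaceManager(feed.NameTable);
            nsm.AddNamespace("atom", "http://purl.org/atom/ns#"); // Atom 0.3 namespace

            foreach (XmlNode entry in feed.SelectNodes("//atom:entry", nsm))
            {
                XmlNode issued = entry.SelectSingleNode("atom:issued", nsm);
                XmlNode modified = entry.SelectSingleNode("atom:modified", nsm);

                // Show <issued> (when the author published the entry) and fall
                // back to <modified> only if <issued> is missing.
                XmlNode display = (issued != null) ? issued : modified;
                if (display != null)
                    Console.WriteLine(display.InnerText);
            }
        }
    }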

It is quite interesting to see how just one concept [how old is this article?] can lead to some confusion between the end user of a news aggregator and the content publisher. I also suspect that My Yahoo! could be similarly confused by the various issues with escaping content in Atom when processing titles but since I don't have access to a web server I can't test some of my theories.

I tend to wonder whether the various content producers creating Atom feeds will ditch their feeds for Atom 0.4, then Atom 0.5, and so on until it becomes a final IETF spec, or whether they'll keep parallel versions of these feeds so Atom 0.3 continues to live in perpetuity.

It's amazing how geeks can turn the simplest things into such a mess. I'm definitely going to sit it out until the IETF Atom 1.0 syndication format spec is final before spending any time working on this for RSS Bandit.


 

Categories: Technology

February 11, 2004
@ 04:02 PM

One of the big problems with arguing about metadata is that one person's data is another person's metadata. I was reading Joshua Allen's blog post entitled Trolling EFNet, or Promiscuous Memories where he wrote

  • Some people deride "metacrap" and complain that "nobody will enter all of that metadata".  These people display a stunning lack of vision and imagination, and should be pitied.  Simply by living their lives, people produce immense amounts of metadata about themselves and their relationships to things, places, and others that can be harvested passively and in a relatively low-tech manner.
  • Being able to remember what we have experienced is very powerful.  Being able to "remember" what other people have experienced is also very powerful.  Language improved our ability to share experiences to others, and written language made it possible to communicate experiences beyond the wall of death, but that was just the beginning.  How will your life change when you can near-instantly "remember" the relevant experiences of millions of other people and in increasingly richer detail and varied modality?
    From my perspective it seems Joshua is confusing data and metadata. If I had a video camera attached to my forehead recording everything I saw, then the actual audiovisual content of the files on my hard drive would be the data while the metadata would be information such as what date it was, where I was and who I saw. Basically the metadata is the data about the data. The interesting thing about metadata is that if we have enough good quality metadata then we can do things like near-instantly "remember" the relevant experiences of ourselves and millions of other people. Having all my experiences cataloged and stored on a hard drive won't matter if the retrieval process isn't automated (e.g. being able to 'search' for experiences by who they were shared with, where they occurred or when they occurred) as opposed to me having to fast forward through gigabytes of video data. The metadata ideal would be that all this extra, descriptive information would be attached to my audiovisual experiences stored on disk so I could quickly search for "videos from conversations with my boss in October, 2003".

    This is where metacrap comes in. From Cory Doctorow's excellent article entitled Metacrap

    A world of exhaustive, reliable metadata would be a utopia. It's also a pipe-dream, founded on self-delusion, nerd hubris and hysterically inflated market opportunities.

    This applies to Joshua's vision as well. Data acquisition is easy; anyone can walk around with a camcorder or digital camera today recording everything they can. Effectively tagging the content so it can be categorized in a way that lets you do interesting things with it search-wise is unfeasible. Cory's article does a much better job than I can of explaining the many different ways this is unfeasible; cameras with datestamps and built-in GPS are just the tip of the iceberg. I can barely remember dates once the event didn't happen in the recent past and wasn't a special occasion. As for built-in GPS, until the software is smart enough to convert longitude and latitude coordinates to "that Chuck E Cheese in Redmond", it only solves problems for geeks, not regular people. I'm sure technology will get better but metacrap is and may always be an insurmountable problem on a global network like the World Wide Web without lots of standardization.


     

    Categories: Technology

    February 11, 2004
    @ 02:51 AM

    From Sam Ruby's slides for the O'Reilly Emerging Technology Conference

    Where are we going?

    • A draft charter will be prepared in time to be informally discussed at the IETF meeting in Seoul, Korea on the week of 29 February to 5 March 
    •  Hopefully, the Working Group itself will be approved in March 
    •  Most of the work will be done on mailing lists 
    •  Ideally, a face to face meeting of the Working Group will be scheduled to coincide with the August 1-6 meeting of the IETF in San Diego

    Interesting. Taking the spec to IETF implies that Sam thinks it's mostly done.  Well, I just hope the IETF's errata process is better than the W3C's.


     

    Categories: Technology

    February 10, 2004
    @ 05:30 PM

    Robert Scoble has a post entitled Metadata without filling in forms? It's coming where he writes

    Simon Fell read my interview about search trends and says "I still don't get it" about WinFS and metadata. He brings up a good point. If users are going to be forced to fill out metadata forms, like those currently in Office apps, they just won't do it. Fell is absolutely right. But, he assumed that metadata would need to be entered that way for every photo. Let's go a little deeper.... OK, I have 7400 photos. I have quite a few of my son. So, let's say there's a new kind of application. It recognizes the faces automatically and puts a square around them. Prompting you to enter just a name. When you do the square changes color from red to green, or just disappears completely.
    ...
    A roadblock to getting that done today is that no one in the industry can get along for enough time to make it possible to put metadata into files the way it needs to be done. Example: look at the social software guys. Friendster doesn't play well with Orkut which doesn't play well with MyWallop, which doesn't play well with Tribe, which doesn't play well with ICQ, which doesn't play well with Outlook. What's the solution? Fix the platform underneath so that developers can put these features in without working with other companies and/or other developers they don't usually work with.

    The way WinFS is being pitched by Microsoft folks reminds me a lot of Hailstorm [which is probably unsurprising since a number of Hailstorm folks work on it] in that there are a lot of interesting and useful technical ideas burdened by the bad scenarios being hung on them. Before going into the interesting and useful technical ideas around WinFS I'll start with why I consider the two scenarios mentioned by Scoble to be "bad scenarios".

    The thought that making the file system a metadata store automatically makes search better is a dubious proposition to swallow when you realize that a number of the searches people can't do today wouldn't be helped much by more metadata. This isn't to say some searches wouldn't work better (e.g. searching for songs by title or artist); however, there are some search scenarios, such as searching for a particular image or video among a bunch of files with generic names or searching for a song by its lyrics, for which simply having the ability to tag media types with metadata doesn't seem like enough. Once your scenarios start having to involve "face recognition software" or "cameras with GPS coordinates" for the scenario to work then it is hard for people not to scoff. It's like a variation of the popular Slashdot joke

    1. Add metadata search capabilities to file system
    2. ???
    3. You can now search for “all pictures taken on Tommy's 5th birthday party at the Chuck E Cheese in Redmond”.

     with the ??? in the middle implying a significant difficulty in going from step 1 to step 3.

    The other criticism is that Robert's post implies that the reasons applications can't talk to each other are technical. This is rarely the case. The main reason applications don't talk to each other isn't a lack of technology [especially now that we have a well-defined format for exchanging data called XML] but various social and business reasons. There are no technical reasons MSN Messenger can't talk to ICQ or which prevent Yahoo! Messenger from talking to AOL Instant Messenger. It isn't technical reasons that prevent my data in Orkut from being shared with Friendster or my book & music preferences in Amazon from being shared with other online stores I visit. All of these entities feel they have a competitive advantage in making it hard to migrate from their platforms.

    The two things Microsoft needs to do in this space are to (i) show how & why it is beneficial for different applications to share data locally and (ii) provide guidelines as well as best practices for applications to share their data in a secure manner.

    While talking to Joshua Allen, Dave Winer, Robert Scoble, Lili Cheng, and Curtis Wong yesterday it seemed clear to me that social software [or, if you are a business user, groupware that is more individual-focused and gives people more control over content and information sharing] would be a very powerful and useful tool for businesses and end users if built on a platform like Longhorn with a smart data store that knows how to create relationships between concepts as well as files (i.e. WinFS) and a flexible, cross-platform distributed computing framework (i.e. Indigo).

    The WinFS folks and Longhorn evangelists will probably keep focusing on what I have termed "bad scenarios" because they demo well, but I suspect that there'll be difficulty getting traction with them in the real world. Of course, I may be wrong and the various people who've expressed incredulity at the current pitches are a vocal minority who'll be proved wrong once others embrace the vision. Either way, I plan to experiment with these ideas once Longhorn starts to beta and see where the code takes me.


     

    Categories: Technology

    James Robertson writes

    Microsoft wants to move beyond objects:

    Box said technologies such as Java's Remote Method Invocation (RMI) and CORBA (Common Object Request Broker Architecture) all suffered similar problems. "The metaphor of objects as a primary distribution media is flawed. CORBA started out with wonderful intentions, but by the time they were done, they fell into the same object pit as COM."

    The problem with most distributed object technologies, Box said, is that programs require particular class files or .jar files (referring to Java), or .dll files (Microsoft's own dynamic linked libraries). "We didn't have (a) true arms-length relationship between programs," Box said. "We were putting on an appearance that we did, but the programs had far more intimacy with each other than anyone felt comfortable with."

    "How do we discourage unwanted intimacy?" he asked. "The metaphor we're going to use for integrating programs (on Indigo) is service orientation. I can only interact by sending and receiving messages. Message-based (communications) gives more flexibility

    I guess Don didn't get the memo - OO is all about the messages between the objects, and less about the actual objects themselves. Look at that last sentence - "Message based communications" gives more flexibility? What does he think a OO is about? You know, CORBA can be simple - in VisualWorks, it's amazingly, astoundingly simple. It takes a curly brace language like Java or C# to make it complex (at the developer level - I'm not talking implementation layer here).

    James Robertson completely misses the point of Don's comments on distributed computing with objects versus using message passing. An example of a service oriented architecture that uses message passing is HTTP on the World Wide Web. It is flexible, scalable and loosely coupled. No one can say with a straight face that using CORBA, Java RMI or DCOM is as scalable or as loosely coupled unless they're trying to sell you something. What Don and the folks on the Indigo team are trying to do is apply the lessons learned from the Web to problems traditionally tackled by distributed object systems.

    It is just an unfortunate naming collision that some object oriented languages use the term “message passing” to describe invoking methods on objects which I'm sure is what's confused James Robertson given that he is a Smalltalk fan.


     

    Categories: Technology

    January 28, 2004
    @ 05:34 AM

    In a post entitled .NET Reality Check Jon Udell writes

    3. Programming language neutrality. Here's a statement, from an early Jeff Richter article about .NET, that provoked oohs and ahhs at the time: "It is possible to create a class in C++ that derives from a class implemented in Visual Basic." Well, does anybody do this now? Is it useful? Meanwhile, the dynamic language support we were going to get, for the likes of Perl and Python, hasn't arrived. Why not?

    The primary benefit of programming language neutrality is that library designers can build a library in one language and developers using other languages can use it without having to worry about language differences. The biggest example of this is the .NET Framework's Base Class Library; it is mainly written in C# but this doesn't stop VB.NET developers from writing .NET applications, implementing interfaces or subclassing types from the various System.* namespaces.

    Examples closer to home for me are in RSS Bandit which depends on a couple of third party libraries such as Chris Lovett's SgmlReader and Tim Dawson's SandBar. I've personally never had to worry about what language they were written in nor do I care. All I need to know is that they are targeted at the .NET Framework.

    On the other hand, when the .NET Framework was first being hyped there were a lot of over-enthusiastic evangelists and marketers who tried to sell programming language neutrality as something that would also allow you to have developers working on a single project use different languages. Although theoretically possible, that always seemed like an idea that would be unwise to implement in practice. I can imagine how problematic it would be if Torsten wrote managed C++ code and I wrote VB.NET code for the different parts of RSS Bandit we worked on. Fixing each other's bugs would be a painful experience.


     

    Categories: Technology

    I just read Jon Udell's post on What RSS users want: consistent one-click subscription where he wrote

    Saturday's Scripting News asked an important question: What do users want from RSS? The context of the question is the upcoming RSS Winterfest... Over the weekend I received a draft of the RSS Winterfest agenda along with a request for feedback. Here's mine: focus on users. In an October posting from BloggerCon I present video testimony from several of them who make it painfully clear that the most basic publishing and subscribing tasks aren't yet nearly simple enough.

    Here's more testimony from the comments attached to Dave's posting:

    One message: MAKE IT SIMPLE. I've given up on trying to get RSS. My latest attempt was with Friendster: I pasted in the "coffee cup" and ended up with string of text in my sidebar. I was lost and gave up. I'm fed up with trying to get RSS. I don't want to understand RSS. I'm not interested in learning it. I just want ONE button to press that gives me RSS.... [Ingrid Jones]

    Like others, I'd say one-click subscription is a must-have. Not only does this make it easier for users, it makes it easier to sell RSS to web site owners as a replacement/enhancement for email newsletters... [Derek Scruggs]

    For average users RSS is just too cumbersome. What is needed to make is simpler to subscribe is something analog to the mailto tag. The user would just click on the XML or RSS icon, the RSS reader would pop up and would ask the user if he wants to add this feed to his subscription list. A simple click on OK would add the feed and the reader would confirm it and quit. The user would be back on the web site right where he was before. [Christoph Jaggi]

    Considering that the most popular news aggregators for both the Mac and Windows platforms support the "feed" URI scheme, including SharpReader, RSS Bandit, NewsGator, FeedDemon (in next release), NetNewsWire, Shrook, WinRSS and Vox Lite, I wonder how long it'll take the various vendors of blogging tools to wake up and smell the coffee. Hopefully by the end of the year complaints like those listed above will be a thing of the past.


     

    Categories: RSS Bandit | Technology

    January 15, 2004
    @ 07:00 AM

    My boss, Mark Fussell, just purchased a Smart Watch with MSN Direct and I got to see one of them up close. In his post entitled Not an iPod Sheep [which is a reference to the fact that more and more folks on our team are geeking out over Apple's lil' marvel] he writes

    Today I picked up my rash and purely impulsive Christmas buy, a Fossil Wrist.NET Smart watch. It was probably sub-consciously induced by the new kid who came to our school (around 1977) with a calculator on his watch. No matter that it was impossible to press any of the buttons to do even the most simple sums and that this was tremendously useless, the fact that it was on a watch with a calculator built in made it ultra cool and an instant friend maker.

    Now that I have my smart watch up and running (I had to leave the building and drive halfway home before it picked up a signal) I will say that it has some value. The #1 killer feature has to be the syncing with your Outlook calender appointments.... Of course having a wireless PC to look at the news and weather pretty much makes the other features on Smart watch useless, but “hey” I've just been told that George Bush wants to build a moon base by my watch - Wow! Now I can tell everyone all sorts of useless information. The #2 killer feature has to be the atomic clock accuracy, not that this is that necessary, but timing between meetings is everything. The #3 feature is the ability to send short (15 word?) instance messages to it.

    Having a handy device that syncs to my Outlook calendar is something I definitely like but I consider a watch a fashion accessory not something that is primarily a gadget. The geek appeal of the watch is definitely high but I suspect I'll end up getting a SmartPhone instead. The main problem is that I'd like to be able to sync with Outlook when away from my work machine which may turn out to be quite expensive based on current cellular plans compared to having something like a SmartWatch.


     

    Categories: Ramblings | Technology

    I feel somewhat hypocritical today. Although I've complained about bandwidth consumed by news aggregators polling RSS feeds and have implemented support for both HTTP conditional GET and gzip compression over HTTP in RSS Bandit (along with a number of other aggregators), it turns out that I don't put my money where my mouth is and use blogging software that utilizes either of the aforementioned bandwidth saving techniques.

    Ted Leung lists a few feeds from his list of subscriptions that don't use gzip compression and mine is an example of one that doesn't. He also mentions in a previous post that

    As far as gzipped feeds go, about 10% of the feeds in my NNW (about 900) are gzipped. That's a lot worse than I expected. I understand that this can be tough -- the easiest way to implement gzipping is todo what Brent suggested, shove it off to Apache. That means that people who are being hosted somewhere need to know enough Apache config to turn gzip on. Not likely. Or have enlighted hosting admins that automatically turn it on, but that' doesn't appear to be the case. So blogging software vendors could help a lot by turning gzip support on in the software.

    What's even more depressing is that for HTTP conditional get, the figure is only about 33% of feeds. And this is something that the blogging software folks should do. We are doing it in pyblosxom.

    This is quite unfortunate. Every few months I read some "sky is falling" post about the bandwidth costs of polling RSS feeds, yet many blogging tools don't support existing technologies that could reduce the bandwidth cost of RSS polling by an order of magnitude. I am guilty of this as well.
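
    For what it's worth, the server side of this isn't much code either. Here's a minimal sketch of an ASP.NET handler (hypothetical handler and ETag value, assuming .NET 2.0's System.IO.Compression) that honors If-None-Match and compresses the feed when the client asks for gzip:

        using System;
        using System.IO.Compression;
        using System.Web;

        public class FeedHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                // Hypothetical ETag; a real implementation would derive this
                // from the feed's last modification time or a content hash.
                string currentETag = "\"2004-02-11T16:17:08Z\"";

                string clientETag = context.Request.Headers["If-None-Match"];
                if (clientETag == currentETag)
                {
                    // Nothing changed; tell the aggregator not to re-download.
                    context.Response.StatusCode = 304;
                    return;
                }

                string acceptEncoding = context.Request.Headers["Accept-Encoding"];
                if (acceptEncoding != null && acceptEncoding.ToLower().Contains("gzip"))
                {
                    // Compress the response body on the way out.
                    context.Response.Filter = new GZipStream(context.Response.Filter, CompressionMode.Compress);
                    context.Response.AppendHeader("Content-Encoding", "gzip");
                }

                context.Response.ContentType = "text/xml";
                context.Response.AppendHeader("ETag", currentETag);
                context.Response.Write("<rss version=\"2.0\"><channel><title>Example feed</title></channel></rss>");
            }
        }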

    I'll investigate how difficult it'll be to practice what I preach in the next few weeks. It looks like an upgrade to my version of dasBlog is in the works.


     

    Categories: Technology

    I've just finished the first draft of a specification for Synchronization of Information Aggregators using Markup (SIAM) which is the result of a couple of weeks of discussion between myself and a number of other authors of news aggregators. From the introduction

    A common problem for users of desktop information aggregators is that there is currently no way to synchronize the state of information aggregators used on different machines in the same way that can be done with email clients today. The most common occurrence of this is a user who uses an information aggregator at home and at work or at school and who'd like to keep the state of each aggregator synchronized independent of whether the same aggregator is used on both machines.

    The purpose of this specification is to define an XML format that can be used to describe the state of an information aggregator which can then be used to synchronize another information aggregator instance to the same state. The "state" of an information aggregator includes information such as which feeds are currently subscribed to by the user and which news items have been read by the user.

    This specification assumes that an information aggregator is software that consumes an XML syndication feed in one of the following formats: ATOM, [RSS0.91], [RSS1.0] or [RSS2.0]. If more syndication formats gain prominence then this specification will be updated to take them into account.

    This draft owes a lot of its polish to comments from Luke Hutteman (author of SharpReader), Brent Simmons (author of NetNewsWire) and Kevin Hemenway aka Morbus Iff (author of AmphetaDesk). There are no implementations out there yet, although once enough feedback has been gathered about the current spec I'll definitely add this to RSS Bandit and deprecate the existing mechanisms for subscription harmonization.

    Brent Simmons has a post which highlights some of the various issues that came up in our discussions entitled The challenges of synching.


     

    Categories: Technology | XML

    I've written what should be the final draft of the specification for the "feed" URI scheme. From the abstract

    This document specifies the "feed" URI (Uniform Resource Identifier) scheme for identifying data feeds used for syndicating news or other content from an information source such as a weblog or news website. In practice, such data feeds will most likely be XML documents containing a series of news items representing updated information from a particular news source.

    The primary change from the previous version was to incorporate feedback from Graham Parks about compliance with RFC 2396. The current grammar for the "feed" URI scheme is

    feedURI = 'feed:' absoluteURI | 'feed://' hier_part

    where absoluteURI and hier_part are defined in section 3 of RFC 2396. One click subscription to syndication feeds via this URI scheme is supported in the following news aggregators: SharpReader, RSS Bandit, NewsGator (in the next release), NetNewsWire, Shrook, WinRSS and Vox Lite.
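
    To make the grammar concrete, here's a rough sketch (not the normative algorithm from the spec) of how a client might turn a "feed" URI into the HTTP URL it should actually fetch:

        using System;

        public class FeedUriHandler
        {
            // Converts "feed:http://example.com/rss.xml" or
            // "feed://example.com/rss.xml" into an http URL to fetch.
            public static string ToHttpUrl(string feedUri)
            {
                if (!feedUri.StartsWith("feed:"))
                    throw new ArgumentException("Not a feed URI", "feedUri");

                string rest = feedUri.Substring("feed:".Length);

                if (rest.StartsWith("//"))
                {
                    // 'feed://' hier_part form, e.g. feed://example.com/rss.xml
                    return "http:" + rest;
                }

                // 'feed:' absoluteURI form, e.g. feed:http://example.com/rss.xml
                return rest;
            }

            public static void Main()
            {
                Console.WriteLine(ToHttpUrl("feed://www.example.com/rss.xml"));
                Console.WriteLine(ToHttpUrl("feed:http://www.example.com/rss.xml"));
            }
        }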

    The next step will be to find somewhere more permanent to host the spec.


     

    Categories: Technology

    January 1, 2004
    @ 10:51 AM

    Sean Campbell or Scott Swigart writes

    I want this also.  I want a theory that unifies objects and data.  We're not there yet.

     With a relational database, you have data and relationships, but no objects.  If you want objects, that's your problem, and the problem isn't insignificant.  There’s been a parade of tools and technologies, and all of them have fallen short on the promise of bridging the gap.  There's the DataSet, which seeks to be one bucket for all data.  It's an object, but it doesn't give you an object view of the actual data.  It leaves you doing things like ds.Tables["Customer"].Rows[0]["FirstName"].ToString().  Yuck.  Then there are Typed DataSets.  These give you a pseudo-object view of the data, letting you do: ds.Customer[0].FirstName.  Better, but still not what I really want.  And it's just code-gen on top of the DataSet.  There's no real "Customer" object here.

     

    Then, there are ObjectSpaces that let you do the XSD three-step to map classes to relational data in the database.  With ObjectSpaces you get real, bona fide objects.  However, this is just a bunch of goo piled on top of ADO.NET, and I question the scalability of this approach. 

     

    Then there are UDTs.  In this case, you've got objects all the way into the database itself, with the object serialized as one big blob into a single column.  To find specific objects, you have to index the properties that you care about, otherwise you're looking at not only a table scan, but rehydrating every row into an object to see if it's the object you're looking for.

     

    There's always straight XML, but at this point you're essentially saying, "There are no objects".  You have data, and you have schema.  If you're seeing objects, it's just an optical illusion on top of the angle brackets.  In fact, with Web services, it's emphatically stated that you're not transporting objects, you're transporting data.  If that data happens to be the serialization of some object, that's nice, but don't assume for one second that that object will exists on the other end of the wire.

     

    And speaking of XML, Yukon can store XML as XML.  Which is to say you have semi-structured data, as XML, stored relationally, which you could probably map to an XML property of an object with ObjectSpaces.

     

    What happens when worlds collide?  Will ObjectSpaces work with Yukon UDTs and XML?

     

    Oh, and don't forget XML Views, which let you view your relational data as XML on the client, even though it's really relational.

     

    <snip />

     

    So for a given scenario, do all of you know which technology to pick?  I'm not too proud to admit that honestly I don't.  In fact, I honestly don't know if I'll have time to stress test every one of these against a number of real problem domains and real data.  And something tells me that if you pick the wrong tool for the job, and it doesn't pan out, you could be pretty hosed. 

    Today we have a different theory for everything.  I want the Theory of Everything.
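
    To ground the DataSet complaint in the quote above, here's a tiny sketch (illustrative data, not any real schema) of the untyped access pattern being criticized:

        using System;
        using System.Data;

        public class DataSetExample
        {
            public static void Main()
            {
                DataSet ds = new DataSet();
                DataTable customers = ds.Tables.Add("Customer");
                customers.Columns.Add("FirstName", typeof(string));
                customers.Rows.Add(new object[] { "Jane" });

                // Untyped access: everything comes back as object and needs a cast.
                string name = (string)ds.Tables["Customer"].Rows[0]["FirstName"];
                Console.WriteLine(name);

                // A typed DataSet generated from an XSD would let you write
                // ds.Customer[0].FirstName instead, but it is still code-gen
                // layered over the same DataSet underneath.
            }
        }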

    I've written about this problem in the past, although at the time I didn't have a name for the Theory of Everything; now I do. From my previous post entitled Dealing with the Data Access Impedance Mismatch I wrote

    The team I work for deals with data access technologies (relational, object, XML aka ROX) so this impedance mismatch is something that we have to rationalize all the time.

    Up until quite recently the primary impedance mismatch application developers had to deal with was the
    Object<->Relational impedance mismatch. Usually data was stored in a relational database but primarily accessed, manipulated and transmitted over the network as objects via some object oriented programming language. Many felt (and still feel) that this impedance mismatch is a significant problem. Attempts to reduce this impedance mismatch has lead to technologies such as object oriented databases and various object relational mapping tools. These solutions take the point of view that the problem of having developers deal with two domains or having two sets of developers (DB developers and application coders) are solved by making everything look like a single domain, objects. One could also argue that the flip side of this is to push as much data manipulation as you can to the database via technologies like stored procedures while mainly manipulating and transmitting the data on the wire in objects that closely model the relational database such as the .NET Framework's DataSet class.

    Recently a third player has appeared on the scene, XML. It is becoming more common for data to be stored in a relational database, mainly manipulated as objects but transmitted on the wire as XML. One would then think that given the previously stated impedance mismatch and the fact that XML is mainly just a syntactic device that XML representations of the data being transmitted is sent as serialized versions of objects, relational data or some subset of both. However, what seems to be happening is slightly more complicated. The software world seems to moving more towards using
    XML Web Services built on standard technologies such as HTTP, XML, SOAP and WSDL to transmit data between applications. And taken from the WSDL 1.1 W3C Note

    WSDL recognizes the need for rich type systems for describing message formats, and supports the XML Schemas specification (XSD) [11] as its canonical type system

    So this introduces a third type system into the mix, W3C XML Schema structures and datatypes. W3C XML Schema has a number of concepts that do not map to concepts in either the object oriented or relational models. To properly access and manipulate XML typed using W3C XML Schema you need new data access mechanisms such as XQuery. Now application developers have to deal with 3 domains or we need 3 sets of developers. The first instinct is to continue with the meme where you make everything look like objects which is what a number of XML Web Services toolkits do today including Microsoft's .NET Framework via the XML Serialization technology. This tends to be particularly lossy because traditionally object oriented systems do not have the richness to describe the constraints that are possible to create with a typical relational database let alone the even richer constraints that are possible with W3C XML Schema. Thus such object oriented systems must evolve to not only capture the semantics of the relational model but those of the W3C XML Schema model as well. Another approach could be to make everything look like XML and use that as the primary data access mechanism. Technologies already exist to make relational databases look like XML and make objects look like XML. Unsurprisingly to those who know me, this is the approach I favor. The relational model can also be viewed as a universal data access mechanism if one figured out how to map the constraints of the W3C XML Schema model. The .NET Framework's DataSet already does some translation of an XML structure defined in a W3C XML Schema to a relational structure.

    The problem with all three approaches I just described is that they are somewhat lossy or involve hacking one model into becoming the uber-model. XML trees don't handle the graph structures of objects well, objects can't handle concepts like W3C XML Schema's derivation by restriction and so on. There is also a fourth approach which is endorsed by Erik Meijer in his paper
    Unifying Tables, Objects, and Documents where one creates a new unified model which is a superset of the pertinent features of the 3 existing models. Of course, this involves introducing a fourth model.

    The fourth model mentioned  above is the unified theory of everything that Scott or Sean is asking for. Since the last time I made this post, my friend Erik Meijer has been busy and produced another paper that shows what such a unification of the ROX triangle would look like if practically implemented as a programming language in his paper Programming with Circles, Triangles and Rectangles. In this paper Erik describes the research language Xen which seems to be the nirvana Scott or Sean is looking for. However this is a research project and not something Sean or Scott will be likely to use in production in the next year.

    The main problem is that Microsoft has provided .NET developers with too much choice when it comes to building apps that retrieve data from a relational store, manipulate the data in memory and then either push the updated information back to the store or send it over the wire. The one thing I have learned working as a PM on core platform technologies is that our customers HATE choice. It means having to learn multiple technologies and make decisions on which is the best, sometimes risking making the wrong choice. This is exactly the problem Scott or Sean is having with the technologies we announced at the recent Microsoft Professional Developer Conference (PDC), which should be shipping this year. What technology should I use and when should I use it?

    This is something the folks on my team (WebData - the data access technology team) know we will have to deal with when all this stuff ships later this year, and we will deal with it to the best of our ability. Our users want architectural guidance and best practices, which we'll endeavor to make available as soon as possible.

    The first step in providing this information to our users is the set of presentations and the whitepaper we made available after PDC, Data Access Design Patterns: Navigating the Data Access Maze (Powerpoint slides) and Data Access Support in Visual Studio.NET code named "Whidbey". Hopefully this will provide Sean, Scott and the rest of our data access customers with some of the guidance needed to make the right choice. Any feedback on the slides or document would be appreciated. Follow up documents should show up on MSDN in the next few months.


     

    Categories: Technology | XML

    Oleg Tkachenko writes

    The goals of exposing comments are: enabling for arbitrary RSS reader application to see comments made to blog items and to post new comments. There are several facilities developed by RSS commutity, which allow to achieve these goals:

    1. <slash:comments> RSS 2.0 extension element, which merely contains number of comments made to the specified blog item.
    2. RSS 2.0 <comments> element, which provides URI of the page where comments can be viewed and added (it's usually something like http://yourblog/cgi-bin/mt-comments.cgi?entry_id=blog-item-id in MT blogs).
    3. <wfw:commentRss> RSS 2.0 extension element, which provides URI of comment feeds per blog item (to put it another way - returns comments made to specified blog item as RSS feed).
    4. <wfw:comment> RSS 2.0 extension element, which provides URI for posting comments via CommentAPI.

    It works like a charm. Now users of SharpReader and RSS Bandit can view the comments to posts in Oleg's MovableType blog directly in their aggregator. Posting comments from RSS Bandit works as well. Hopefully, this will catch on and folks will no longer have to choose between .TEXT and dasBlog (i.e. ASP.NET/Windows based blogging tools) when they want a blog tool that supports exposing comments in their RSS feed. The more the merrier.
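
    For anyone curious what posting through the CommentAPI looks like from client code, here's a rough sketch with a hypothetical comment end point and a hand-built RSS <item> payload:

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        public class CommentApiClient
        {
            public static void Main()
            {
                // Hypothetical wfw:comment end point advertised in the feed.
                string commentUri = "http://www.example.com/blog/comments.ashx?id=42";

                // The CommentAPI takes an RSS <item> describing the comment.
                string item =
                    "<item>" +
                    "<title>Nice post</title>" +
                    "<author>reader@example.com</author>" +
                    "<description>Agreed, exposing comments in the feed is very handy.</description>" +
                    "</item>";

                HttpWebRequest request = (HttpWebRequest)WebRequest.Create(commentUri);
                request.Method = "POST";
                request.ContentType = "text/xml";

                byte[] payload = Encoding.UTF8.GetBytes(item);
                request.ContentLength = payload.Length;
                using (Stream body = request.GetRequestStream())
                {
                    body.Write(payload, 0, payload.Length);
                }

                using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                {
                    Console.WriteLine("Server responded with {0}", response.StatusCode);
                }
            }
        }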


     

    Categories: Technology

    I've written the first draft of the specification for the "feed" URI scheme. From the abstract

    This document specifies the "feed" URI (Uniform Resource Identifier) scheme for identifying data feeds used for syndicating news or other content from an information source such as a weblog or news website. In practice, such data feeds will most likely be XML documents containing a series of news items representing updated information from a particular news source.

    The purpose of this scheme is to enable one click subscription to syndication feeds in a straightforward, easy to implement and cross platform manner. One click subscription using the "feed" URI scheme is currently supported by NetNewsWire, Shrook, SharpReader and RSS Bandit. The author of NewsGator has indicated that support for one click subscription using the "feed" URI scheme will exist in the next version.

    Any feedback on the draft specification would be appreciated.

    Update: Graham Parks has pointed out in the comments to this post that URIs of the form "feed://http://www.example.com/rss.xml" are not compliant with RFC 2396. This will be folded into the next draft of the spec.


     

    Categories: Technology