June 7, 2007
@ 06:06 PM

A couple of months ago, I was asked to be part of a recruiting video for Microsoft Online Business Group (i.e. Windows Live and MSN). The site with the video is now up. It's http://www.whywillyouworkhere.com. As I expected, I sound like a dork. And as usual I was repping G-Unit. I probably should have worn something geeky like what I have on today, a Super Mario Bros. 1up T-shirt. :)


 

Categories: MSN | Windows Live

Via Todd Bishop I found the following spoof of Back to the Future starring Christopher Lloyd (from the actual movie) and Bob Muglia, Microsoft's senior vice president of the Server and Tools Business. The spoof takes pot shots at various failed Microsoft "big visions" like WinFS and Hailstorm in a humorous way. It's good to see our execs being able to make light of our mistakes in this manner. The full video of the keynote is here. Embedded below is the first five minutes of the Back to the Future spoof.


Video: Microsoft Back to the Future Parody


 

I've been thinking a little bit about Google Gears recently and after reading the documentation I've realized that making a Web-based application that works well offline poses an interesting set of challenges. First of all, let's go over what constitutes the platform that is Google Gears. It consists of three components:

  • LocalServer: Allows you to cache and serve application resources such as HTML pages, scripts, stylesheets and images from a local web server. 

  • Database: A relational database where the application can store data locally. The database supports both full-text and SQL queries.

  • WorkerPool: Allows applications to perform I/O expensive tasks in the background and thus not lock up the browser. A necessary evil. 

At first, this seemed like a lot of functionality being offered by Google Gears until I started trying to design how I'd take some of my favorite Web applications offline. Let's start with a straightforward case such as Google Reader. The first thing you have to do is decide what data needs to be stored locally when the user decides to go offline. Well, a desktop RSS reader has all my unread items even when I go offline, so a user may expect that going offline in Google Reader means all their unread items are available offline. This could potentially be a lot of data to transfer in the split instant between when the user selects "go offline" in the Google Reader interface and when she actually loses her 'net connection by closing her laptop. There are ways to work around this, such as limiting how many feeds are available offline (e.g. Robert Scoble with a thousand feeds in his subscription list won't get to take all of them offline) or progressively downloading all the unread content while the user is viewing it in online mode. Let's ignore that problem for now because it isn't that interesting.
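
As a rough illustration of what the Database component gives you, here is a minimal sketch of caching unread items locally as they are downloaded. It assumes the Gears factory object is available in the page and uses a made-up table layout; it is not meant as production code.

    // Minimal sketch: cache unread feed items in the Gears local database.
    // Assumes the Gears extension is installed; table and column names are hypothetical.
    declare const google: any; // Gears exposes itself as a global object

    const db = google.gears.factory.create('beta.database');
    db.open('feedreader-offline');

    // A local table to hold unread items downloaded while the user is still online.
    db.execute(
      'CREATE TABLE IF NOT EXISTS unread_items ' +
      '(id TEXT PRIMARY KEY, feed_url TEXT, title TEXT, content TEXT, is_read INTEGER)'
    );

    // Called progressively while online so the cache is warm before the
    // network connection goes away.
    function cacheItem(item: { id: string; feedUrl: string; title: string; content: string }) {
      db.execute(
        'INSERT OR REPLACE INTO unread_items VALUES (?, ?, ?, ?, 0)',
        [item.id, item.feedUrl, item.title, item.content]
      );
    }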

The next problem is to decide which state changes made while the app is offline need to be reported back when the user gets back online. These seem to be quite straightforward:

  • Feed changed
    • Feed added
    • Feed deleted
    • Feed renamed
    • Feed moved
  • News item changed
    • Item marked read/unread
    • Item flagged/starred
    • Item tag updated

The application code can store these changes as a sequential list of modifications which are then executed whenever the user gets back online. Sounds easy enough. Or is it?
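
Sketched in code, the queue-and-replay idea does look innocuous. The table layout, operation names and sync endpoint below are my own invention rather than anything prescribed by Gears.

    // Record each offline modification as a row in a pending-changes table,
    // then replay the rows in order once the browser is back online.
    declare const google: any;

    const db = google.gears.factory.create('beta.database');
    db.open('feedreader-offline');
    db.execute(
      'CREATE TABLE IF NOT EXISTS pending_changes ' +
      '(seq INTEGER PRIMARY KEY AUTOINCREMENT, operation TEXT, payload TEXT)'
    );

    function recordChange(operation: string, payload: object) {
      db.execute('INSERT INTO pending_changes (operation, payload) VALUES (?, ?)',
                 [operation, JSON.stringify(payload)]);
    }

    // While offline:
    recordChange('item_marked_read', { itemId: 'abc123' });
    recordChange('feed_deleted', { feedUrl: 'http://example.com/rss.xml' });

    // When connectivity returns, send the queued changes to the server in order.
    async function replayChanges() {
      const pending: Array<{ seq: number; operation: string; payload: string }> = [];
      const rs = db.execute('SELECT seq, operation, payload FROM pending_changes ORDER BY seq');
      while (rs.isValidRow()) {
        pending.push({ seq: rs.field(0), operation: rs.field(1), payload: rs.field(2) });
        rs.next();
      }
      rs.close();

      for (const change of pending) {
        await fetch('/api/sync', {   // hypothetical endpoint
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({ operation: change.operation, payload: JSON.parse(change.payload) })
        });
        db.execute('DELETE FROM pending_changes WHERE seq = ?', [change.seq]);
      }
    }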

What happens if I'm on my laptop and I go offline in Google Reader, mark a bunch of stuff as read, and then unsubscribe from a few feeds I no longer find interesting? The next day when I get to work, I go online on my desktop, read some new items and subscribe to some new feeds. Later that day, I go online with my laptop. Now the state on my laptop is inconsistent with that on the Web server. How do we reconcile these differences?

The developers at Google have anticipated these questions and have answered them in the Google Gears documentation topic titled Choosing an Offline Application Architecture which states

No matter which connection and modality strategy you use, the data in the local database will get out of sync with the server data. For example, local data and server data get out of sync when:

  • The user makes changes while offline
  • Data is shared and can be changed by external parties
  • Data comes from an external source, such as a feed

Resolving these differences so that the two stores are the same is called "synchronization". There are many approaches to synchronization and none are perfect for all situations. The solution you ultimately choose will likely be highly customized to your particular application.

Below are some general synchronization strategies.

Manual Sync

The simplest solution to synchronization is what we call "manual sync". It's manual because the user decides when to synchronize. It can be implemented simply by uploading all the old local data to the server, and then downloading a fresh copy from the server before going offline.
...

Background Sync

In a "background sync", the application continuously synchronizes the data between the local data store and the server. This can be implemented by pinging the server every once in a while or better yet, letting the server push or stream data to the client (this is called Comet in the Ajax lingo).

I don't consider myself some sort of expert on data synchronization protocols but it seems to me that there is a lot more to figuring out a data synchronization strategy than whether it should be done based on user action or automatically in the background without user intervention. It seems that there would be all sorts of decisions around consistency models and single vs. multi-master designs that developers would have to make as well. And that's just for a fairly straightforward application like Google Reader. Can you imagine what it would be like to use Google Gears to replicate the functionality of Outlook in the offline mode of Gmail or to make Google Docs & Spreadsheets behave properly when presented with conflicting versions of a document or spreadsheet because the user updated it from the Web and in offline mode?  
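
To make the problem concrete, here is a toy last-write-wins merge for a subscription list; the types, timestamps and data are invented for illustration. Even this simplest of policies silently throws away one side's intent.

    // Toy illustration of why conflict resolution policy is application-specific.
    interface Subscription {
      feedUrl: string;
      deleted: boolean;
      lastModified: number; // timestamp of the last change on that replica
    }

    // Naive last-write-wins merge: for each feed, keep whichever replica's
    // record was touched most recently.
    function merge(local: Subscription[], server: Subscription[]): Subscription[] {
      const byUrl = new Map<string, Subscription>();
      for (const sub of [...local, ...server]) {
        const existing = byUrl.get(sub.feedUrl);
        if (!existing || sub.lastModified > existing.lastModified) {
          byUrl.set(sub.feedUrl, sub);
        }
      }
      return [...byUrl.values()].filter(sub => !sub.deleted);
    }

    // Offline on the laptop I unsubscribed at noon...
    const laptop = [{ feedUrl: 'http://example.org/feed', deleted: true, lastModified: 1200 }];
    // ...but starring an item from my desktop at 3pm bumped the server's copy.
    const server = [{ feedUrl: 'http://example.org/feed', deleted: false, lastModified: 1500 }];

    console.log(merge(laptop, server)); // last-write-wins resurrects the feed; the unsubscribe is lost

Whether that outcome is acceptable, or whether the delete should always win, or whether the user should be asked, is a decision only the application author can make, which is exactly why punting on synchronization is such a big deal.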

It seems that without providing data synchronization out of the box, Google Gears leaves the most difficult and cumbersome aspect of building a disconnected Web app up to application developers. This may be OK for Google developers using Google Gears since the average Google coder has a Ph.D., but the platform isn't terribly useful to Web application developers who want to use it for anything besides a super-sized HTTP cookie.

A number of other bloggers such as Roger Jennings and Tim Anderson have also pointed out that the lack of data synchronization in Google Gears is a significant oversight. If Google intends for Google Gears to become a platform that is generally useful to the average Web developer, then the company will have to fix this omission. Otherwise, they haven't done as much for the Web development world as the initial hype led us to believe.


 

Categories: Programming | Web Development

Ian McKellar has a blog post entitled Insecurity is Ruby on Rails Best Practice where he points out that by default the Ruby on Rails framework makes sites vulnerable to a certain class of exploits. Specifically, he discusses the vulnerabilities in two Ruby on Rails applications, 37 Signals Highrise and Magnolia, then proposes solutions. He writes

Cross Site Request Forgery
CSRF is the new bad guy in web application security. Everyone has worked out how to protect their SQL database from malicious input, and RoR saves you from ever having to worry about this. Cross site scripting attacks are dying and the web community even managed to nip most JSON data leaks in the bud.

Cross Site Request Forgery is very simple. A malicious site asks the user's browser to carry out an action on a site that the user has an active session on and the victim site carries out that action believing that the user intended that action to occur. In other words the problem arises when a web application relies purely on session cookies to authenticate requests.
...
Solutions
Easy Solutions
There aren't any good easy solutions to this. A first step is to do referrer checking on every request and block GET requests in form actions. Simply checking the domain on the referrer may not be enough security; if there's a chance that HTML could be posted somewhere in the domain by an attacker, the application would be vulnerable again.

Better Solutions
Ideally we want a shared secret between the HTML that contains the form and the rails code in the action. We don't want this to be accessible to third parties so serving as JavaScript isn't an option. The way other platforms like Drupal achieve this is by inserting a hidden form field into every form that's generated that contains a secret token, either unique to the current user's current session or (for the more paranoid) also unique to the action. The action then has to check that the hidden token is correct before allowing processing to continue.

Incidents of Cross Site Request Forgery have become more common with the rise of AJAX, and the attack is likely to become as endemic as SQL injection until the majority of Web frameworks take it into account in their out-of-the-box experience.

At Microsoft, the Web teams at MSN and Windows Live have given the folks in Developer Division the benefit of their experience building Web apps, which has helped ensure that our Web frameworks like ASP.NET Ajax (formerly codenamed Atlas) avoid this issue in their default configuration. Scott Guthrie outlines the safeguards against this class of issues in his post JSON Hijacking and How ASP.NET AJAX 1.0 Avoids these Attacks where he writes

Recently some reports have been issued by security researchers describing ways hackers can use the JSON wire format used by most popular AJAX frameworks to try and exploit cross domain scripts within browsers. Specifically, these attacks use HTTP GET requests invoked via an HTML include element to circumvent the "same origin policy" enforced by browsers (which limits JavaScript objects like XmlHttpRequest to only calling URLs on the same domain that the page was loaded from), and then look for ways to exploit the JSON payload content.

ASP.NET AJAX 1.0 includes a number of default settings and built-in features that prevent it from being susceptible to these types of JSON hijacking attacks.
...
ASP.NET AJAX 1.0 by default only allows the HTTP POST verb to be used when invoking web methods using JSON, which means you can't inadvertently allow browsers to invoke methods via HTTP GET.

ASP.NET AJAX 1.0 requires a Content-Type header to be set to "application/json" for both GET and POST invocations to AJAX web services. JSON requests that do not contain this header will be rejected by an ASP.NET server. This means you cannot invoke an ASP.NET AJAX web method via a script include because browsers do not allow you to append custom content-type headers when requesting a JavaScript file like this.

These mitigations would solve the issues that Ian McKellar pointed out in 37 Signals Highrise and Magnolia, because HTML forms hosted on a malicious site cannot set the Content-Type header, so that exploit is blocked. However, neither this approach nor referrer checking to see if the requests come from your domain is enough if the malicious party finds a way to upload HTML or script onto your site.

To completely mitigate this class of attack, the shared secret approach is the most secure and is what is used by most large websites. In this approach each page that can submit a request has a canary value (i.e. a hidden form key) which must be returned with the request. If the form key is not provided, is invalid or has expired, then the request fails. This functionality is provided out of the box in ASP.NET by setting the Page.ViewStateUserKey property. Unfortunately, this feature is not on by default. On the positive side, it is a simple one-line code change to get functionality that has to be rolled by hand on a number of other Web platforms today.
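
For Web platforms that don't have a built-in equivalent, the canary scheme is straightforward to roll by hand. Here is a minimal, framework-agnostic sketch in TypeScript on Node; the function names are mine and the session plumbing is omitted.

    import { createHmac, randomBytes, timingSafeEqual } from 'crypto';

    // Server-side secret; in practice this would come from configuration.
    const SERVER_SECRET = randomBytes(32);

    // Derive a canary token from the user's session identifier. Because it is
    // keyed with a server secret, a malicious page cannot compute it, and
    // because it is tied to the session it cannot be copied from someone
    // else's form.
    function canaryFor(sessionId: string): string {
      return createHmac('sha256', SERVER_SECRET).update(sessionId).digest('hex');
    }

    // Emit this as a hidden field in every generated form.
    function hiddenField(sessionId: string): string {
      return `<input type="hidden" name="canary" value="${canaryFor(sessionId)}">`;
    }

    // On every state-changing request, reject it unless the submitted canary
    // matches the one derived from the authenticated session.
    function isValidRequest(sessionId: string, submittedCanary: string): boolean {
      const expected = Buffer.from(canaryFor(sessionId), 'hex');
      const submitted = Buffer.from(submittedCanary, 'hex');
      return submitted.length === expected.length && timingSafeEqual(expected, submitted);
    }

A random value stored in the user's session works just as well as the keyed hash; the important part is that the token is unpredictable to the attacker and checked on every request that changes state.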


 

Categories: Web Development

The good folks on the Microsoft Experimentation Platform team have published a paper which gives a great introduction to how and why one can go about using controlled experiments (i.e. A/B testing) to improve the usability of a website. The paper is titled Practical Guide to Controlled Experiments on the Web: Listen to Your Customers not to the HiPPO and will be published as part of the Thirteenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. The paper begins

In the 1700s, a British ship’s captain observed the lack of scurvy among sailors serving on the naval ships of Mediterranean countries, where citrus fruit was part of their rations. He then gave half his crew limes (the Treatment group) while the other half (the Control group) continued with their regular diet. Despite much grumbling among the crew in the Treatment group, the experiment was a success, showing that consuming limes prevented scurvy. While the captain did not realize that scurvy is a consequence of vitamin C deficiency, and that limes are rich in vitamin C, the intervention worked. British sailors eventually were compelled to consume citrus fruit regularly, a practice that gave rise to the still-popular label limeys.

Some 300 years later, Greg Linden at Amazon created a prototype to show personalized recommendations based on items in the shopping cart (2). You add an item, recommendations show up; add another item, different recommendations show up. Linden notes that while the prototype looked promising, "a marketing senior vice-president was dead set against it," claiming it will distract people from checking out. Greg was "forbidden to work on this any further." Nonetheless, Greg ran a controlled experiment, and the "feature won by such a wide margin that not having it live was costing Amazon a noticeable chunk of change." With new urgency, shopping cart recommendations launched. Since then, multiple sites have copied cart recommendations.

The authors of this paper were involved in many experiments at Amazon, Microsoft, Dupont, and NASA. The culture of experimentation at Amazon, where data trumps intuition (3), and a system that made running experiments easy, allowed Amazon to innovate quickly and effectively. At Microsoft, there are multiple systems for running controlled experiments. We describe several architectures in this paper with their advantages and disadvantages. A unifying theme is that controlled experiments have great return-on-investment (ROI) and that building the appropriate infrastructure can accelerate innovation.

I learned quite a bit from reading the paper although I did somewhat skip over some of the parts that involved math. It's pretty interesting when you realize how huge the impact of changing the layout of a page or moving links can be on the bottom line of a Web company. We're talking millions of dollars for the most popular sites. That's pretty crazy.
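
As an aside, the mechanics of splitting users into Control and Treatment groups are simpler than they sound; a common approach is a deterministic hash of the user ID so that a given user always sees the same variant. A rough sketch (the experiment name and split below are made up):

    import { createHash } from 'crypto';

    // Deterministically assign a user to Control or Treatment by hashing the
    // user id together with the experiment name. The same user always lands
    // in the same bucket, and different experiments get independent splits.
    function assignBucket(userId: string, experiment: string, treatmentPercent = 50): 'control' | 'treatment' {
      const digest = createHash('md5').update(`${experiment}:${userId}`).digest();
      const bucket = digest.readUInt32BE(0) % 100; // a number in [0, 100)
      return bucket < treatmentPercent ? 'treatment' : 'control';
    }

    // Example: a 50/50 split for a hypothetical cart-recommendations experiment.
    console.log(assignBucket('user-42', 'cart-recommendations'));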

Anyway, Ronny Kohavi from the team mentioned that they will be giving a talk related to the paper at eBay research labs tomorrow at 11AM. The talk will be in Building 0 (Toys) in room 0158F. The address is 2145 Hamilton Avenue, San Jose, CA. If you are in the Silicon Valley area, this might be a nice bring-your-own-lunch event to attend.


 

The MSN Soapbox team has a blog post entitled Soapbox Is Now Open For Full Video Viewing which states

We have just opened Soapbox on MSN Video for full video viewing. You no longer need to be signed into the service to view Soapbox videos!
 
However just as before, you still need to sign in to upload, comment, tag or rate videos. The Embedded Player is also available for full viewing and embedding.
 
While it might not be visible, we have spent the last month working hard on improving our backend, to make encoding and performance as scalable as possible.
 
We are also now conducting proactive filtering of all uploaded content, using technology from Audible Magic. Audible Magic bases their filtering on the video's audio track and their database is being updated constantly. By using this technology, we see this as an important step to ensure the viability and success of Soapbox over the long run.

You can dive right in and explore the site's new interface by checking out my friend Ikechukwu's music video. I like the redesign of the site although there seem to be a few minor usability quibbles. The content filtering is also an interesting addition to the site and was recently mentioned in the Ars Technica article Soapbox re-opens, beating YouTube to the punch with content filtering. Come to think of it, I wonder what's been taking Google so long to implement content filtering for copyrighted materials given how long they've been talking about it. It's almost as if they are dawdling to add the feature. I wonder why?

Speaking of usability issues, the main problem I have with the redesign is that once you start browsing videos the site's use of AJAX/Flash techniques works against it. Once you click on any of the videos shown beside the one you are watching, the video changes inline instead of navigating to a new page, which means the browser's URL doesn't change. To get a permalink to a video I had to hunt around in the interface until I saw a "video link" which gave me the permalink to the video I was currently watching. Both Google Video and YouTube actually reload the page and thus change the URL when you click on a video, and YouTube makes the URL prominent on the page as well. Although I like the smoothness of inline transitions, making the permalink and other sharing options more prominent would make the site a lot more user friendly in my opinion.
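
One lightweight way for a site that swaps videos inline to still give every video a shareable address is to update the fragment portion of the URL whenever the video changes and read it back on page load. A hypothetical sketch of the idea (the function and parameter names are mine, not Soapbox's):

    // Keep the address bar in sync with the video being watched, even though
    // the page never does a full navigation. Changing the fragment doesn't
    // trigger a reload, so it coexists with inline AJAX/Flash transitions.
    declare function loadVideoInline(videoId: string): void; // the site's existing player logic

    function showVideo(videoId: string): void {
      loadVideoInline(videoId);
      window.location.hash = `video=${videoId}`; // e.g. http://example.com/watch#video=abc123
    }

    // On page load, honor a fragment that arrived via a shared permalink.
    window.addEventListener('load', () => {
      const match = window.location.hash.match(/video=([\w-]+)/);
      if (match) {
        loadVideoInline(match[1]);
      }
    });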


 

Categories: MSN

If you are a member of the Microsoft developer community, you've probably heard of the recent kerfuffle between Microsoft and the developer of TestDriven.NET that was publicized in his blog post Microsoft vs TestDriven.Net Express. I'm not going to comment directly on the situation especially since lawyers are involved. However I did find the perspective put forth by Leon Bambrick in his post TestDriven.net-Gate: Don't Blame Microsoft, Blame Jason Weber to be quite insightful.

Leon Bambrick wrote

If you have time, you really ought to read the whole thing. I've read as much as I can, and here's my analysis.

Right from the start, before tempers flared, Microsoft's representative, Jason Weber, should've done a much better job of convincing Jamie not to release the express sku. Jason did put a lot of effort in, and Microsoft spent a lot of effort on this, but Jason sabotaged his own side's efforts right throughout.

The first clear mistake is that Jason should've never referred to Jamie's work as a "hack". He did this repeatedly -- and it seems to have greatly exacerbated the situation. What did that wording gain Jason (or Microsoft)? It only worked to insult the person he was trying to come to agreement with. Name calling doesn't aid negotiation.

When Jamie finally agreed to remove the express version, he wanted a credible reason to put on his website. Note that Jamie had backed down now, and with good treatment the thing should've been resolved at that point. Here's the wording that Jason recommended:

"After speaking with Jason Weber from Microsoft I realized that by adding features to Visual Studio Express I was in breach of the Visual Studio license agreements and copyrights. I have therefore decided to remove support for the Visual Studio Express SKU's from TestDriven.Net. Jason was very supportive of TestDriven.Net's integration into the other Visual Studio 2005 products and I was invited to join the VSIP program. This would allow me to fly to Redmond each quarter and work closely with the Visual Studio development team on deeper integration."

This wording is offensive on four levels. One Two Three Four. That's a lot of offense!

Firstly -- it acts as an advertisement for Jason Weber. Why? Arrogance maybe? He's lording it over Jamie.

Second -- it supposes that Jason should publicly admit to violations. He need never admit such a thing.

Third -- it includes mention of breach of "copyright". I don't think such an allegation ever came up until that point. So this was a fresh insult.

Fourth -- it stings Jamie's pride, by suggesting that he was bribed into agreement. Ouch

So just when they got close to agreement, Jason effectively kicked Jamie in the nuts, pissed in his face, poked him in the eye, and danced on his grave.

That's not a winning technique in negotiations.

I believe there is a lesson on negotiating tactics that can be extracted from this incident. I really hope this situation reaches an amicable conclusion for the sake of all parties involved.


 

Categories: Life in the B0rg Cube

June 4, 2007
@ 04:47 PM

Last week there was a bunch of discussion in a number of blogs about whether we need an interface definition language (IDL) for RESTful Web services. There were a lot of good posts on this topic, but it was the posts from Don Box and Bobby Woolf that gave me the most food for thought.

In his post entitled WADL, WSDL, XSD, and the Web Don Box wrote

More interesting fodder on Stefan Tilkov's blog, this time on whether RESTafarians need WSDL-like functionality, potentially in the form of WADL.

Several points come to mind.

First, I'm doubtful that WADL will be substantially better than WSDL given the reliance on XSD to describe XML payloads. Yes, some of the cruft WSDL introduces goes away, but that cruft wasn't where the interop problems were lurking.

I have to concur with Don's analysis about XSD being the main cause of interoperability problems in SOAP/WS-* Web services. In a past life, I was the Program Manager responsible for Microsoft's implementations of the W3C's XML Schema Definition Language (aka XSD). The main problem with the technology is that XML developers wanted two fairly different things from a schema language:

  1. A grammar for describing and enforcing the contract between producers and consumers of XML documents so that one could, for example, confirm that an XML document received was a valid purchase order or RSS feed.
  2. A way to describe strongly typed data such as database tables or serialized objects as XML documents for use in distributed programming or distributed query operations.

In hindsight, this probably should have been two separate efforts. Instead the W3C XML Schema working group tried to satisfy both sets of constituencies with a single XML schema language. The resulting technology ended up being ill-suited to both tasks. The limitations placed on it by having to be a type system made it unable to describe common constructs in XML formats, such as elements that can show up in any order (e.g. in an RSS feed title, description, pubDate, etc. can appear in any order as children of item) or co-occurrence constraints (e.g. in an Atom feed a text construct may have XML content or textual content depending on the value of its type attribute).
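
To see the co-occurrence problem concretely: what an Atom text construct needs is something like a tagged union, which a programming language can express directly but which XSD cannot. A rough sketch of the shape in TypeScript (the property names are chosen for illustration):

    // An Atom text construct's content model depends on the value of its
    // "type" attribute: plain text, escaped HTML, or embedded XHTML markup.
    // A discriminated union captures the constraint; XSD has no way to say
    // "when type='xhtml' the child must be an xhtml:div, otherwise the
    // content is character data".
    type AtomTextConstruct =
      | { type: 'text'; value: string }        // plain character data
      | { type: 'html'; value: string }        // escaped HTML as character data
      | { type: 'xhtml'; xhtmlDiv: string };   // a single xhtml:div child element

    declare function escapeForDisplay(s: string): string; // whatever escaping the app already uses

    function renderTitle(title: AtomTextConstruct): string {
      switch (title.type) {
        case 'text':  return escapeForDisplay(title.value);
        case 'html':  return title.value;      // already markup, emit as-is
        case 'xhtml': return title.xhtmlDiv;   // serialized div element
      }
    }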

As a mechanism for describing serialized objects for use in distributed computing scenarios (aka Web services), it caused several interoperability problems due to the impedance mismatch between W3C XML Schema and object oriented programming constructs. The W3C XML Schema language had a number of type system constructs such as simple type facet restriction, anonymous types, structural subtyping, namespace based wildcards, identity constraints, and mixed content which simply do not exist in the typical programming language. This led to interoperability problems because each SOAP stack had its own idiosyncratic way of mapping the various XSD type system constructs to objects in the target platform's programming language and vice versa. Also, no two SOAP stacks supported the same set of XSD features, even within Microsoft, let alone across the industry. There are several SOAP interoperability horror stories on the Web, such as the reports from Nelson Minar on Google's problems using SOAP in posts like Why SOAP Sucks and his ETech 2005 presentation Building a New Web Service at Google. For a while, the major vendors in the SOAP/WS-* space tried to tackle this problem by forming a WS-I XML Schema Profile working group, but I don't think that went anywhere, primarily because each vendor supported different subsets of XSD so no one could agree on what features to keep and what to leave out.

To cut a long story short, any technology that takes a dependency on XSD is built on a shaky technological foundation. According to the WADL specification, there is no requirement that a particular XML schema language be used, so WADL doesn't have to depend on XSD. However, besides XSD there actually isn't any mainstream technology for describing serialized objects as XML, so one has to be invented. There is a good description of what this schema language should look like in James Clark's post Do we need a new kind of schema language? If anyone can fix this problem, it's James Clark.

So, ignoring the fact that 80% of the functionality of WADL currently doesn't exist because we either need to use a broken technology (i.e. XSD) or wait for James Clark to finish inventing Type Expressions for Data Interchange (TEDI), what else is wrong with WADL?

In a post entitled WADL: Declared Interfaces for REST? Bobby Woolf writes

Now, in typing rest, my colleague Patrick Mueller contemplates that he "wants some typing [i.e. contracts] in the REST world" and, among other things, discusses WADL (Web Application Description Language). Sadly, he's already gotten some backlash, which he's responded to in not doing REST. So I (and Richard, and others?) think that the major advantage of WSDL over REST is the declared interface. Now some of the REST guys seem to be coming around to this way of thinking and are thinking about declared interfaces for REST. I then wonder if and how REST with declared interfaces would be significantly different from WSDL (w/SOAP).

One thing I've learned about the SOAP/WS-* developer world is that people often pay lip service to certain words even though they use them all the time. For example, the technologies are often called Web services even though the key focus of all the major vendors and customers in this area is reinventing CORBA/DCOM with XML protocols as opposed to building services on the Web. Another word that is often abused in the SOAP/WS-* world is contract. When I think of a contract, I think of some lengthy document drafted by a lawyer that spells out in excruciating detail how two parties interact and what their responsibilities are. When a SOAP/WS-* developer uses the words contract and WSDL interchangeably, this seems incorrect because a WSDL is simply the XML version of OMG IDL, and an IDL is simply a list of API signatures. It doesn't describe expected message exchange patterns, required authentication methods, message traffic limits, quality of service guarantees, or even pre- and post-conditions for the various method calls. You usually find this information in the documentation and/or in the actual business contract one signed with the Web service provider. A WADL document for a REST Web service will not change this fact.

When a SOAP/WS-* developer says that he wants a contract, he really means he wants an interface definition language (IDL) so he can point some tool at a URL and get some stubs & skeletons automatically generated. Since this post is already long enough and I have to get to work, it is left as an exercise for the reader as to whether a technological approach borrowed from distributed object technologies like DCE/RPC, DCOM and CORBA meshes with the resource-oriented, document-centric and loosely coupled world of RESTful Web services.

PS: Before any of the SOAP/WS-* wonks points this out, I realize that what I've described as a contract can in theory be implemented for SOAP/WS-* based services using a combination of WSDL 2.0 and WS-Policy. Good luck actually finding an implementation in practice that (i) works and (ii) is interoperable across multiple vendor SOAP stacks. 


 

Categories: XML Web Services

June 1, 2007
@ 04:09 AM

Mike Torres has the scoop in his blog post Windows Live betas - Writer, Messenger, Mail where he writes

Three applications I use on a daily basis just got updated today:

  • Windows Live Writer
  • Windows Live Messenger
  • Windows Live Mail

All three of them are rock-solid in terms of stability (of course, they're still betas!) and come highly recommended by yours truly.

There are also a couple of blog posts on the various Windows Live team blogs about the betas. The Windows Live Hotmail team writes about the newly renamed "Windows Live Mail" in the post New beta software to access your Hotmail offline, the Windows Live Writer team has a blog post letting us know Windows Live Writer Beta 2 Now Available, and finally the Windows Live Messenger team runs down the changes in their latest release in the post Messenger 8.5 Beta1 released.

Like Mike I've been using the betas for a while and they are all rock solid. Check 'em out and let the product teams know what you think.


 

Categories: Windows Live

Matt Warren has an excellent Microsoft history lesson in his blog post entitled The Origin of LINQ to SQL which explores how LINQ to SQL (Microsoft's Object/Relational mapping technology with programming language integration) came to be despite the internal politics at Microsoft which encouraged the entire company to bet on WinFS. I'm excerpting a lot of his blog post because I wouldn't be surprised if he ends up taking it down or redacting it later. He writes

LINQ to SQL, possibly Microsoft’s first OR/M to actually ship in ten years of trying, was never even supposed to exist. It started out as a humble Visual Studio project on my desktop machine way back in the fall of 2003
...
Luckily, it didn’t take me long to get the basics up and running. You see, it wasn’t the first time I’d slapped together an OR/M or modified a language to add query capabilities; having already designed ObjectSpaces and parts of C-Omega so I was certainly up to the task. Fortunately, it gets a lot easier the ‘next’ time you design and build something, especially if it was you that did the designing before and you have the opportunity to start over fresh.
...
Why didn’t I start with WinFS? After all, it was all the rage inside the SQL Server org at the time. Unfortunately, it was the same story as with ObjectSpaces. They were shipping before us. We weren’t on their radar. Their hubris was bigger than ours. Not to mention my belief that WinFS was the biggest fiasco I’d ever bore witness to, but that’s another story.

Yet, part of that story was the impetus to turn LINQ to SQL into an actual product.

WinFS client API even started out as a complete copy of the ObjectSpaces codebase and had all the same limitations. It just had more political clout as it was being lead by a figure at a higher-point in the corporate org chart, and so it was positioned as part of juggernaut that was making a massive internal land grab. We on the outside used to refer to WinFS as the black hole, forever growing, sucking up all available resources, letting nothing escape and in the end producing nothing except possibly a gateway to an alternate reality. Many of our friends and co-workers had already been sucked in, and the weekly reports and horror stories were not for the weak-of-heart. It eventually sucked up ObjectSpaces too and in the process killing it off so that in WinFS v2 it could all be ‘aligned’.

At that point, those of us designing LINQ got a bit worried. There were not too many in the developer division that believed in the mission of WinFS. As a developer tool for the masses, something simple that targeted the lower end was paramount. ObjectSpaces had been it, and now it was gone. There was still some glimmer of possibility that WinFS v2 might eventually get it right and be useful as a general OR/M tool. But all hope of that was shot when WinFS was pulled out of Vista and its entire existence was put in doubt. Had they immediately turned around and brought back ObjectSpaces, that might have worked, but in the intervening months ObjectSpaces had slipped past the point of no return for being part of .Net 2.0, turnover within the SQL org was spiraling out of control, and most of the brain-trust that knew anything about OR/M had already fled.

That’s when we realized we had no choice. If LINQ was to succeed it needed some legs to stand on. The ‘mock’ OR/M I was building was shaping up to be a pretty good contender. We had co-designed it in tandem with LINQ as part of the C# 3.0 design group and it really was a full-fledged implementation; we just never thought it was actually going to be a shipping product. It was simply meant to act as a stand-in for products that now no longer existed. So, for the sake of LINQ and the customer in general, we took up the OR/M torch officially, announcing our intention internally and starting the political nightmare that became my life for the next three years.

This takes me back. I had friends who worked on ObjectSpaces and it was quite heartbreaking to see what internal politics can do to passionate folks who once believed that technologies stand on their merits and not on who you know at Microsoft. At least this story had a happy ending. Passionate people figured out how to navigate the internal waters at Microsoft and are on track to ship a really compelling addition to the developer landscape.

Editor's Note: I added the links to ObjectSpaces and C-Omega to the excerpts from Matt's post to provide some context.


 

Categories: Life in the B0rg Cube