Mary Jo Foley has an article on the Microsoft-Watch site entitled Worst Microsoft Product Name Ever where she writes

As Microsoft watchers inside and outside the company have noted, Microsoft is not terribly astute when it comes to naming its products. But on Wednesday, the branding department hit a new low, in terms of bad naming choices. Microsoft has decided to christen the new Windows desktop search application (that can search your desktop/Intranet and Internet), due to go beta later this year, as "Windows Live Search." But there already is a Windows Live Search – the Internet search service that is currently in beta. Are the two products the same? No, the Softies said. Are they related? Nope. We've decided we're going to try using Windows Live Search A (for the desktop app version) and Windows Live Search S for the MSN service. And we thought the SharePoint branding was confusing!

This sounds pretty amazing. Microsoft has created two unrelated products that are both called Windows Live Search. Wow. It's like we are determined to cause the Windows Live brand to turn to crap before any of the services even get out of beta. I guess the folks who were behind the .NET branding fiasco are still alive and well in the B0rg cube.

Update: I should clarify that this is likely just poor storytelling on our part rather than two actual, distinct products being given the same name. If you read the press release Microsoft Enterprise Search Solutions Help Enable Effective Information Management you'll see the excerpt

To that end, the company will deliver a solution called Windows Live Search, which offers a single user interface (UI) to help people find and use all the information they care about from across the entire enterprise and beyond. It essentially binds together previously separate search solutions including Windows Desktop Search, Intranet search provided by Microsoft Office SharePoint Server 2007 and Internet search via Windows Live Search, among others. Any information available to any of these systems can be exposed in one place, instantly showing relevant and actionable search results from all its enterprise data sources, from the desktop and from the Web
...
To illustrate, a sales representative trying to find information about a customer she plans to visit could gather the needed data by accessing Office SharePoint Server 2007, initiating a search and pulling business data from a Siebel application in addition to gathering data off her desktop using Windows Desktop Search. However, the same search could be performed from within Windows Live Search to produce all of the relevant desktop, e-mail, intranet and Internet results. Furthermore, when the sales representative clicks through the results, she will see they are actually displayed from that same window. Windows Live Search displays full results without navigating away or opening additional applications.

The press release makes it hard to tell whether this is new functionality of http://search.live.com being announced or a duplicate product which is branded with the same name. If reporters are getting confused about our messaging then there is definitely something broken that we need to fix.


 

Categories: Windows Live

I just saw the following post in the RSS Bandit forums entitled Dead? where one of our users asks

Haven't seen a news update in a long time, no releases for about 5 months... is rss bandit dead? It's a nice aggregator, but there are some neat features popping up in stuff like Rss Owl / other feed readers, so I'm debating changing readers...

...but if there's going to be active improvements in the future, there's no reason for me to learn an entirely new program just to come back to rss bandit in the end. ;)

What say you, Code Writers?

This has been bothering me as well. Torsten and I have been busy with our respective day jobs. Over the past few months, I keep telling myself I'll start work on RSS Bandit "next week" but the following week always seems busier than the last. It hasn't helped that I've had conferences, vacations or out-of-town visitors every other month this year. I think the worst of the disruptions to my schedule are behind me for this year. In addition, the project which has been consuming the lion's share of my time at work looks like it will soon be able to go on auto-pilot.

Although I can't give an exact date for our next release, what I can say for sure is that we will have an RSS Bandit release this summer. What I would like from our users is some feedback on the RSS Bandit road map. I'm not particularly attached to the features we currently have on tap for the Jubilee release. The major reworkings I'd like to do for the next release are fixing our excessive memory consumption, improving the integration with NewsGator Online, and rewriting our multithreaded code in a way that fixes the pernicious feed mix-up issue. As for new features, the ones I want primarily revolve around enclosures/podcasts and conversation tracking similar to TechMeme, Megite and TailRank.

I believe Torsten also plans to revamp our UI. What else would you like to see us do in the next release?


 

Categories: RSS Bandit

If you're a regular reader of Don Box's weblog then you probably know that Microsoft has made available another Community Technical Preview (CTP) of Language Integrated Query (LINQ) aka C# 3.0. I think the notion of integrating data access and query languages into programming languages is the next natural evolution in programming language design. A large number of developers write code that performs queries over rich data structures of some sort whether they are relational databases, XML files or just plain old objects in memory. In all three cases, the code tends to be verbose and more cumbersome than it needs to be. The goal of the LINQ project is to try to simplify and unify data access in programming languages built on the .NET Framework. 

When I used to work on the XML team, we also used to salivate over the power developers would get if they could run rich queries over their data stores in a consistent manner. I was the PM for the IXPathNavigable interface and the related XPathNavigator class, which we hoped people would implement over their custom stores so they could query them with XPath. Some developers did exactly that, such as Steve Saxon with the ObjectXPathNavigator which allows you to use XPath to query a graph of in-memory objects. The main problem with this approach is that implementing IXPathNavigable for custom data stores is non-trivial, especially given the impedance mismatch between XML and other data models. In fact, I've been wanting to do something like this in RSS Bandit for a while but the complexity of implementing my own custom XPathNavigator class over our internal data structures is something I've balked at doing.
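For those who've never used it, here's a minimal sketch of the query experience XPathNavigator enables once somebody has done the hard work of implementing IXPathNavigable. The file name and XPath expression below are placeholders I made up for illustration:

    using System;
    using System.Xml.XPath;

    class XPathSketch {
        static void Main() {
            // Works over any IXPathNavigable implementation;
            // XPathDocument is the built-in one for XML files.
            XPathDocument doc = new XPathDocument("feeds.xml"); // placeholder file
            XPathNavigator nav = doc.CreateNavigator();

            // Select the titles of feeds with unread items (hypothetical vocabulary).
            XPathNodeIterator it = nav.Select("/feeds/feed[unread > 0]/title");
            while (it.MoveNext())
                Console.WriteLine(it.Current.Value);
        }
    }

The same Select() call would work over Steve's ObjectXPathNavigator or any other custom store; the pain is all on the implementer's side, not the query author's.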

According to Matt Warren's blog post Oops, we did it again it looks like the LINQ folks have similar ideas but are making it easier than we did on the XML team. He writes 

What's the coolest new feature? IMHO, it's IQueryable<T>.

DLINQ's query mechanism has been generalized and made available for all to use as part of System.Query. It implements the Standard Query Operators for you using expression nodes to represent the query. Your queries can now be truly polymorphic, written over a common abstraction and translated into the target environment only when you need them to be.

    public int CustomersInLondon(IQueryable<Customer> customers) {
        int count = (from c in customers
                     where c.City == "London"
                     select c).Count();
        return count;
    }

Now you can define a function like this and it can operate on either an in-memory collection or a remote DLINQ collection (or your own IQueryable for that matter). The query is then either run entirely locally or remotely depending on the target.

If it's a DLINQ query, a count query is sent to the database.

    SELECT COUNT(*) AS [value]
    FROM [Customers] AS [t0]
    WHERE [t0].[City] = @p0

If it's a normal CLR collection, the query is executed locally, using the System.Query.Sequence class's definitions of the standard query operators. All you need to do is turn your IEnumerable<Customer> into IQueryable<Customer>. This is accomplished easily with a built-in ToQueryable() method.

  List<Customer> customers = ...;
  CustomersInLondon(customers.ToQueryable());

Wow! That was easy. But how is this done? How can you possibly turn my List<T> into some queryable thingamabob?

Good question.  Glad you asked.

Check out this little gem: 

  Expression<Func<Customer,bool>> predicate = c => c.City == "London";
  Func<Customer,bool> d = predicate.Compile();

Now you can compile lambda expressions directly into IL at runtime!

ToQueryable() wraps your IEnumerable<T> in IQueryable<T> clothing, uses the Queryable infrastructure to let you build up your own expression tree queries, and then when you enumerate it, the expression is rebound to refer to your IEnumerable<T> directly, the operators rebound to refer to System.Query.Sequence, and the resulting code is compiled using the built-in expression compiler.  That code is then invoked producing your results.

Amazing, but true.

I think it's pretty amazing that all I have to do as a developer is implement a simple iterator over my data structures (i.e. IEnumerable) and then I get all the power of LINQ for free. Of course, if I want the queries to be performant it would make sense to implement IQueryable directly, but it's goodness that the barrier to entry is so low when my perf needs aren't high.
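To make that concrete, here's a minimal sketch of what this could look like over an aggregator's data structures. The NewsItem class is a hypothetical stand-in for RSS Bandit's internal types, and note that the May 2006 CTP ships this support in the System.Query namespace rather than System.Linq:

    using System;
    using System.Collections.Generic;
    using System.Linq; // System.Query in the May 2006 CTP

    // Hypothetical stand-in for one of RSS Bandit's internal types.
    class NewsItem {
        public string Title;
        public string Feed;
        public bool BeenRead;
    }

    class LinqSketch {
        static void Main() {
            // Any IEnumerable<T> will do; no custom XPathNavigator required.
            List<NewsItem> items = new List<NewsItem> {
                new NewsItem { Title = "Oops, we did it again", Feed = "Matt Warren", BeenRead = true },
                new NewsItem { Title = "Dead?", Feed = "RSS Bandit forums", BeenRead = false }
            };

            // Same query shape as the IQueryable example above, run locally.
            var unread = from i in items
                         where !i.BeenRead
                         select i.Title;

            foreach (string title in unread)
                Console.WriteLine(title);
        }
    }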

For more information on LINQ, read the LINQ project overview. If you are like me and are primarily interested in XLinq, then check out XLinq: XML Programming Refactored (The Return Of The Monoids) which has the fingerprints of my former team all over it. Way to go guys!


 

Categories: XML

One of the reasons I like providing APIs to online services is that it gives users more control of their data. Alex Boyko, who's one of the testers on our team, wrote a tool for migrating his blog from one blog service to another using the APIs they provide. In his blog post Blog Content Transfer he wrote

Apparently, my old blogging site (Blogger) and the new one (MSN Spaces) expose some APIs that can be used to play with your content (MetaWeblog API for Spaces and Atom API for Blogger). I spent some time over the weekend and wrote a tool that helped me to transfer my data between the two sites.

In case somebody else is excited about gleams as much as I am ;] I've decided to share a copy of BCTransfer (Blog Content Transfer).

 DOWNLOAD HERE

Please let me know if it works and especially if it does not work for you. I'll be more than glad to help.

Please read this first. It tells you how to get a login for your space.

At this moment, it is a command-line tool written using .NET 2.0. So you need to have it installed (the easiest option for that is Windows Update). Here’s how you run it in the most basic scenario:

bctransfer -bu <old-username> -bp <old-password> -su <new-username> -sp <new-password>

Yet another reason why providing APIs for online services is good for regular users as well as developers. Nice.
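For the curious, here's a rough sketch of the kind of call a tool like Alex's makes under the hood when it creates a post via the MetaWeblog API. The endpoint URL and credentials below are placeholders, and I'm hand-rolling the XML-RPC envelope purely for illustration; a real tool would presumably use a proper XML-RPC library:

    using System;
    using System.IO;
    using System.Net;
    using System.Text;

    class MetaWeblogSketch {
        static void Main() {
            // metaWeblog.newPost takes a blog ID, credentials, a struct of
            // post fields and a publish flag. All values here are placeholders.
            string xml = @"<?xml version=""1.0""?>
    <methodCall>
      <methodName>metaWeblog.newPost</methodName>
      <params>
        <param><value><string>BLOG_ID</string></value></param>
        <param><value><string>USERNAME</string></value></param>
        <param><value><string>PASSWORD</string></value></param>
        <param><value><struct>
          <member><name>title</name><value><string>Hello</string></value></member>
          <member><name>description</name><value><string>Post body goes here</string></value></member>
        </struct></value></param>
        <param><value><boolean>1</boolean></value></param>
      </params>
    </methodCall>";

            WebRequest req = WebRequest.Create("http://example.com/xmlrpc"); // placeholder endpoint
            req.Method = "POST";
            req.ContentType = "text/xml";
            byte[] body = Encoding.UTF8.GetBytes(xml);
            using (Stream s = req.GetRequestStream())
                s.Write(body, 0, body.Length);
            using (WebResponse resp = req.GetResponse())
            using (StreamReader r = new StreamReader(resp.GetResponseStream()))
                Console.WriteLine(r.ReadToEnd()); // response contains the new post's ID
        }
    }

The wire format really is that simple, which is a big part of why a tool like this is feasible as a weekend project.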


 

I found the following comments by Om Malik and Mike Arrington to be quite telling.

In his blog post entitled The Myth, Reality & Future of Web 2.0 Om Malik writes

The Myth of Web 2.0 is the investment opportunities. The reality of Web 2.0 is too little original thinking. Web 2.0, simply put, is a set of technologies and a new kind of thinking, which companies like Yahoo, Google, Microsoft and AOL are incorporating in their products. That’s the reality and the future of Web 2.0.

In the blog post entitled AOL To Release YouTube Clone Mike Arrington writes

Prepare for the launch of AOL UnCut (currently in open beta), a near perfect clone of YouTube...This is right on the heels of the launch of AIM Pages, which is directly targeting Myspace and other social networks...I am seeing an increasing trend of the big guys simply copying what successful startups are doing. AOL with this product and AIM Spaces. Google with Google Notepad and a flurry of other projects, etc. The only large company that is even experimenting with unproven concepts at this point is Microsoft with its various Live.com ideas. I’d like to see more experimenting at the big company level.

I guess the criticism has now grown from 'building a new Windows app is just doing research for Microsoft' to 'building a new Web application is just doing research for Google/Yahoo/AOL/Microsoft'. The more things change, the more they stay the same.

On the positive side, it is good to see Microsoft being called innovative in comparison to Google by a technology pundit.


 

Tim Ewald has been blogging about ways to add versioning to web services that work around the various limitations of the W3C XML Schema Definition Language (XSD). One bit of insight I always like to share when talking about XSD is that there are two primary usage scenarios that have developed around XML document validation and XML schemas. My article XML Schema Design Patterns: Is Complex Type Derivation Unnecessary? describes them as

  1. Describing and enforcing the contract between producers and consumers of XML documents: An XML schema ordinarily serves as a means for consumers and producers of XML to understand the structure of the document being consumed or produced. Schemas are a fairly terse and machine readable way to describe what constitutes a valid XML document according to a particular XML vocabulary. Thus a schema can be thought of as a contract between the producer and consumer of an XML document. Typically the consumer ensures that the XML document being received from the producer conforms to the contract by validating the received document against the schema.

  2. Creating the basis for processing and storing typed data represented as XML documents: XSD describes the creation of a type annotated infoset as a consequence of document validation against a schema. During validation against an XSD, an input XML infoset is converted into a post schema validation infoset (PSVI), which among other things contains type annotations. However practical experience has shown that one does not need to perform full document validation to create type annotated infosets; in general many applications that use XML schemas to create strongly typed XML such as XML<->object mapping technologies do not perform full document validation, since a number of XSD features do not map to concepts in the target domain.

If you are building a SOAP-based XML Web service using the toolkits provided by the major vendors like IBM, Microsoft or BEA then it is most likely that your usage pattern aligns with scenario #2 above. This means that your Web service toolkit isn't completely enforcing that documents being consumed or generated by the service actually are 100% valid against the schema. This seems bad until you realize that XSD is so limited in the constraints it can describe that any XSD validation would still need to be backed by a further business-logic validation phase in your code. In his post Making everything optional Tim Ewald writes

DJ commented on my post addressing the problem Raimond raised with my versioning strategy. He wondered if he'd missed an earlier post where I argued that you not use XSD to validate your data because if you make content optional, you can't use it to check what has to be there. Since I haven't written about that yet, I figured I'd start to address it now.

When people build a schema for a single service, they tend to make it reflect the precise requirements of that system at that moment in time. Then, when those requirements change, they revise the schema. The result is a system that tends to be very brittle. If you take the same approach when you design a schema for use by multiple systems, describing a corporate level model for customer data for instance, things are even worse. Some systems won't have all the required data. They have to decide whether to (a) collect the data, (b) make up bogus data, or (c) not adopt the common model. None of these are good approaches.

To solve both these problems, I've started thinking about my schema not as the definition of what this system needs right now but instead as the definition of what the data should look like if it's present. I move the actual checking for what has to be present inside the system (either client or service) and implement it using either code or a narrowed schema that is a duplicate of the contract schema with more constraints in place.

There are important lessons in Tim's posts which are unfortunately often learned the hard way. A document or message can have different required/optional fields depending on what part of the process you are in or even whether it is being used as input vs. output. It's hard to come up with one single schema definition for a common type across a system without resorting to "everything is optional" and then relying on code to do the specific business logic validation for whichever phase of the process you are in.
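As a concrete illustration of what that explicit validation pass can look like, here's a minimal sketch using the System.Xml support in .NET 2.0. The file names and target namespace are placeholders:

    using System;
    using System.Xml;
    using System.Xml.Schema;

    class ValidationSketch {
        static void Main() {
            XmlReaderSettings settings = new XmlReaderSettings();
            settings.ValidationType = ValidationType.Schema;
            // Placeholder target namespace and schema file.
            settings.Schemas.Add("urn:example:customer", "customer.xsd");
            settings.ValidationEventHandler += delegate(object sender, ValidationEventArgs e) {
                // Report rather than throw so all structural problems are surfaced.
                Console.WriteLine("{0}: {1}", e.Severity, e.Message);
            };

            using (XmlReader reader = XmlReader.Create("customer.xml", settings)) {
                while (reader.Read()) { } // validation happens as the document is read
            }
            // A business-logic pass (e.g. "billing address required at checkout")
            // would still follow this structural check.
        }
    }

The "narrowed schema" Tim mentions could be swapped in here for the phases of the process where stricter checks apply.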

There's another great comment in Tim's follow up post More on making everything optional

I think it's important not to confuse your schema with your contract. A client and a service have to agree on all sorts of things, only some of which are captured in your WSDL/XSD(/Policy). My goal in proposing that almost everything in your XSD be optional is to find the sweet-spot between easy coding and flexibility for evolution.

Amen! Preach on brother.


 

Categories: XML Web Services

Steve Rubel has a blog post entitled Dissecting Windows Live PR with Data where he uses the infamous Alexa traffic chart, which has become popular among bloggers and other amateur pundits, to dissect the popularity of a website. Specifically he writes

According to Alexaholic there was a lag between these news events and when the site became a regular visit for many consumers. In other words, it took Windows Live a considerable amount of time following the launch to build any kind of critical mass. (A caveat here. Alexa data is questionable when measuring true traffic because it accounts for a small subset of the total browsing public. However, I feel the overall trends it shows are bankable.)


I've noticed that quite a few folks have misinterpreted the traffic spike within the past month for the live.com domain in Alexa. If you take a deeper look at the Traffic Details for Live.com you'll notice the following data

  • login.live.com - 46%
  • mail.live.com - 38%
  • ideas.live.com - 6%
  • live.com - 4%
  • local.live.com - 2%
  • help.live.com - 1%
  • expo.live.com - 1%
  • safety.live.com - 1%
  • Other websites - 1% 

That's right: the largest chunk of the traffic for the live.com domain goes to the Windows Live ID (formerly Passport) login page, which most MSN and Windows Live services now use for signing in. The second largest slice is for the Windows Live Mail beta. I don't think one can draw any conclusions about the 'adoption' or popularity of Windows Live from this data.


 

Categories: Windows Live

My buddy Joshua Allen has a blog post entitled Was WS-* a Failure? where he writes

Dare excerpts Yaron Goland, explaining how MSN uses POX instead of WS-* in many cases. It is very good to see MSFT employees no longer afraid to say that WS-* is sometimes not the right choice.

On the other hand, it's reasonable to say that WS-* met most of its objectives; and IMO has been a great success. Read this post from Miguel. Miguel makes the point that Java is still vendor-proprietary, in contrast to the way that .NET is ISO. IMO, one of the most important goals of WS-* was to break the stranglehold that J2EE had on the middleware/appserver market. Today, reading about Scott McNealy stepping down amid Sun financial troubles, it is hard to remember how dominant Sun used to be. But Sun is still very powerful in the enterprise, and I imagine it would be game over by now (with Sun/Oracle alliance being the clear winners) if Microsoft had not pushed WS-*. WS-* leveled the playing field, and gave both Microsoft and IBM ability to go head-to-head with Sun in app servers. Today, an Oracle/Microsoft alliance seems more realistic than Oracle/Sun.

So perhaps WS-* was the critical factor that liberated the Internet from a dark future of Sun/Java control, and enabled the new era of POX/HTTP to flourish.

That's an interesting perspective which I think is worth sharing. I also want to correct what seems like a misconception about Yaron's post. I'm not aware of many teams at MSN/Windows Live that have eschewed the WS-* family of technologies (SOAP/WSDL/XSD/etc.) for Plain Old XML over HTTP (POX). On my team, the distributed computing protocols we use are either SOAP or WebDAV. Most of the Windows Live teams whose back ends I'm familiar with also tend to use SOAP. The only upcoming change I can see for our team is that we have been flirting with the idea of using binary infosets instead of textual XML while still sticking with SOAP. I'm not sure whether we'll do it or not, but that would be a compelling reason for taking the Windows Communication Foundation (aka Indigo) out for a spin. I see more dinners with Doug and Mike in my future.

On the other hand, the front end teams that actually build the web sites [as opposed to platform teams like mine] who use AJAX techniques seem to lean towards using JSON as opposed to XML. I guess that would make those services POJ not POX. :)

When I think about POX or RESTful Web services in the context of my day job, I'm usually focused on the work we are doing for the Windows Live developer platform. I ask myself questions like "If we were to expose something equivalent to the Flickr API for MSN Spaces, what would be the best choice for developers?" For such questions, RESTful APIs seem like a no-brainer.
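Part of that appeal is how little code a developer needs on the client side. Here's a minimal sketch of consuming a hypothetical Flickr-style RESTful endpoint from .NET 2.0; the URL, query parameters and XML vocabulary are all invented for illustration:

    using System;
    using System.Net;
    using System.Xml;

    class RestSketch {
        static void Main() {
            // Hypothetical endpoint; a real API would document its URL space.
            string url = "http://api.example.com/photos?user=dare&count=10";

            using (WebClient client = new WebClient()) {
                string xml = client.DownloadString(url); // plain HTTP GET, XML back
                XmlDocument doc = new XmlDocument();
                doc.LoadXml(xml);
                foreach (XmlNode title in doc.SelectNodes("//photo/@title"))
                    Console.WriteLine(title.Value);
            }
        }
    }

No WSDL, no proxy generation, no SOAP envelope; just a URL, an HTTP GET and an XML parser, which is exactly the low barrier to entry that makes these APIs attractive to developers.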


 

Categories: XML Web Services

May 12, 2006
@ 03:55 PM

One of the cool things about my day job is that I get to work with over a dozen teams all over Microsoft who are interested in consuming our Web services. This means that sometimes I get juicy scoops which I have to sit on for months before I can talk about them. One example is the feature described in Joe Friend's blog post Blogging from Word 2007 where he writes

We've been working late into the nights and very late into our development schedule for Word 2007 and we have a special goody for all you bloggers in Beta 2 of Office 2007. That's right blog post authoring from Word. This is a very late breaking feature and is definitely beta software.
...
This is pretty standard stuff if you've ever used one of the many blog post authoring applications. In Beta 2 we support MSN Spaces, SharePoint 2007 (of course), Blogger, and Community Server (which is used for blogs.msdn.com). You can also set up a custom account with services that support the MetaWeblog API or the Atom API. All the blog providers seem to interpret these APIs a bit differently so there are kinks we're still working out. But the basics should work in Beta 2. We hope to add a few more services to the list before we ship. The Word blog authoring feature is extensible and we will publish information so that blog providers can ensure that their systems work with Word.

I met with Joe's team a few months ago and was pleased to hear they were going to add this feature to Word 2007. I've been working with them on it for a while, and now that we have an official blog posting tool coming out of Microsoft, it's time for me to start investing more time in exposing more of our blog-related features via an API.

This is definitely cool news. If you are a blogger that uses Microsoft Word, you definitely need to give it a try. They've gone out of their way to build a product that is easy to use and doesn't stomp on your expectations of a blogging tool (i.e. no nasty HTML for one). Mad props to Krista, Joe Friend and all the other folks who worked on getting this out the door.


 

May 11, 2006
@ 10:00 PM

I smiled today as I read Mark Nottingham's post where he described vendors of WS-* technologies as Vendor-pires. However, what I found even more interesting was the following comment by Yaron Goland, which states

First of all, who says that the vampires are necessarily happy with WS-*? I leave it as an exercise to the reader to figure out why my group's VP (Brian Arbogast) goes around giving speeches about just using plain HTTP.

Second of all who says even the vampires can talk with each other in any useful way?!?!?! At least three people with text on this webpage know the answer to that question from hard real world experience.

Third, what to do without WS-*? Gosh, I don't know, how about ship useful code? I hear that HTTP stuff is pretty cool. If anyone cares you can peruse a bunch of blog entries on my website (www.goland.org) where I walk through a number of key enterprise scenarios and show that nothing more than HTTP+XML is required.

And yes, I still work for Microsoft. In fact, one of my jobs is to write the best practices for the design of all external interfaces for Windows Live. What you will see is a lot of HTTP, microformats, URL encodings and XML. My instructions are clear: first priority goes to simple HTTP interfaces.

Oh and here is a thought that an unnamed Microsoftie gave me. The new schlock movie (http://www.imdb.com/title/tt0417148/) Snakes On A Plane (SOAP). Just a thought.

Yaron Goland is another one of the folks who'll be working on the Windows Live developer platform along with others like Danny Thorpe, Ken Levy and myself (virtually). I've been thinking that there is a big disconnect between the folks who sell these technologies and the folks who use them, even within Microsoft. I've been slowly trying to bridge that gap but we definitely still have a long way to go as an industry.


 

Categories: XML Web Services