For the developers out there who'd like to ask questions about or report bugs in our implementation of the MetaWeblog API for MSN Spaces, there is now a place to turn.

The MSN Spaces Development forum is the place to go to ask questions about the MetaWeblog API for MSN Spaces, file bug reports, and discuss with members of our developer community or the Spaces team what you'd like to see us open up next.

There is even an RSS feed so I can keep up to date with recent postings using my favorite RSS reader. If you are interested in our API story, you should subscribe.


 

Categories: Windows Live

December 13, 2005 @ 06:27 PM

Nicholas Carr has a post entitled Sun and the data center meltdown which contains an insightful excerpt on the kinds of problems that sites facing scalability issues have to deal with. He writes:

...a recent paper on electricity use by Google engineer Luiz André Barroso. Barroso's paper, which appeared in September in ACM Queue, is well worth reading. He shows that while Google has been able to achieve great leaps in server performance with each successive generation of technology it's rolled out, it has not been able to achieve similar gains in energy efficiency: "Performance per watt has remained roughly flat over time, even after significant efforts to design for power efficiency. In other words, every gain in performance has been accompanied by a proportional inflation in overall platform power consumption. The result of these trends is that power-related costs are an increasing fraction of the TCO [total cost of ownership]."

He then gets more specific:

A typical low-end x86-based server today can cost about $3,000 and consume an average of 200 watts (peak consumption can reach over 300 watts). Typical power delivery inefficiencies and cooling overheads will easily double that energy budget. If we assume a base energy cost of nine cents per kilowatt hour and a four-year server lifecycle, the energy costs of that system today would already be more than 40 percent of the hardware costs.

And it gets worse. If performance per watt is to remain constant over the next few years, power costs could easily overtake hardware costs, possibly by a large margin ... For the most aggressive scenario (50 percent annual growth rates), power costs by the end of the decade would dwarf server prices (note that this doesn’t account for the likely increases in energy costs over the next few years). In this extreme situation, in which keeping machines powered up costs significantly more than the machines themselves, one could envision bizarre business models in which the power company will provide you with free hardware if you sign a long-term power contract.

The possibility of computer equipment power consumption spiraling out of control could have serious consequences for the overall affordability of computing, not to mention the overall health of the planet.

If energy consumption is a problem for Google, arguably the most sophisticated builder of data centers in the world today, imagine where that leaves your run-of-the-mill company. As businesses move to more densely packed computing infrastructures, incorporating racks of energy-gobbling blade servers, cooling and electricity become ever greater problems. In fact, many companies' existing data centers simply can't deliver the kind of power and cooling necessary to run modern systems. That's led to a shortage of quality data-center space, which in turn (I hear) is pushing up per-square-foot prices for hosting facilities dramatically. It costs so much to retrofit old space to the required specifications, or to build new space to those specs, that this shortage is not going to go away any time soon.
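Barroso's arithmetic is easy to sanity-check. Here's a quick back-of-the-envelope calculation in Python using only the figures quoted above (a sketch; the 2x multiplier for power delivery and cooling overhead comes straight from the excerpt):

    # Back-of-the-envelope check of the figures quoted above.
    server_cost = 3000.0   # dollars, typical low-end x86 server
    avg_power_w = 200.0    # watts, average consumption
    overhead = 2.0         # power delivery inefficiencies + cooling double the budget
    kwh_price = 0.09       # dollars per kilowatt-hour
    years = 4              # server lifecycle

    kwh_used = avg_power_w * overhead / 1000 * 24 * 365 * years
    energy_cost = kwh_used * kwh_price
    print(f"Energy cost: ${energy_cost:,.0f} ({energy_cost / server_cost:.0%} of hardware cost)")
    # Prints: Energy cost: $1,261 (42% of hardware cost)

That 42% figure matches his "more than 40 percent of the hardware costs" claim.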

When you are providing a service that becomes popular enough to attract millions of users, your worries begin to multiply. Instead of just worrying about efficient code and optimal database schemas, you also have to worry about things like the power consumption of your servers and your data center capacity.

Building online services requires more than the ability to sling code and hack databases. Lots of stuff gets written about the more trivial aspects of building an online service (e.g., switching to sexy new platforms like Ruby on Rails), but the real hard work is often unheralded and rarely discussed.


 

From the press release Microsoft and MCI Join to Deliver Consumer PC-to-Phone Calling, we learn:

REDMOND, Wash., and ASHBURN, Va. — Dec. 12, 2005 — Microsoft Corp. and MCI Inc. (NASDAQ: MCIP) today announced a global, multiyear partnership to provide software and services that enable customers to place calls from a personal computer to virtually any phone. The solution, MCI Web Calling for Windows Live™ Call, will be available through Windows Live Messenger, the upcoming successor to MSN® Messenger, which has more than 185 million active accounts around the world. The solution combines Windows Live software, advanced voice over Internet Protocol (VoIP) capabilities and the strengths of MCI’s expansive global network to give consumers an easy-to-use, convenient and cost-effective way to stay connected.

MCI and Microsoft are testing the service as part of a Windows Live Messenger limited beta with subscriptions initially available in the United States, and expect to jointly deliver the PC-to-phone calling capabilities to France, Germany, Spain and the United Kingdom in the coming weeks. Once subscribed to the service, customers can place calls to and from more than 220 countries with rates starting at $.023 per minute to the U.S., Canada, the U.K. and Western Europe during the beta testing period. Upon sign-up, MCI Web Calling customers will receive up to one hour of free calls. Final pricing will be determined when the product officially launches in 2006.

Another sweet Windows Live offering already in beta. You can find a screenshot of the upcoming functionality in the blog post Windows Live Call & MCI (Part II). I'll definitely be interested in trying out this feature once it ships. I use the PC-to-SMS feature all the time to send text messages to my girlfriend, especially when I'm out of town. Extending this to phone calls would be great for calling family overseas.


 

Categories: Windows Live

I've been surprised to see several weblogs report that MSN Spaces has 27 million blogs with over 7.6 million active bloggers. What I found surprising wasn't the inaccurate data on the number of weblogs or active users that we have. The surprise was that these accurate-sounding numbers were 'interpreted' from an offhand comment I made in my blog. The source of this information seems to be this post in the Blog Herald entitled MSN Spaces now has 27 million blogs and over 7.6 million active users: Microsoft, which states:

Microsoft’s blogging service has grown from an estimated 18 million blogs in October to 27 million blogs and at least 7.6 million active bloggers, according to Dare Obasanjo from Microsoft in a post discussing server issues.

The service still remains in third position amongst blog providers, with Xanga and MySpace both believed to be hosting 40 million blogs each.

(note: calculations based on this line: "I never expected [Spaces] that we'd grow to be three times as big [as Live Journal] and three times as active within a year.")

I've been pretty surprised at the number of blogs I've seen quoting these numbers as facts when they are based on such fuzzy techniques. For the record, we don't have 27 million blogs; the number is higher. As for our number of active users, that depends on your definition of active. Using one definition, we are over three times as active as LiveJournal. That's what I meant.


 

Categories: Ramblings

I bumped into Irwin Dolobowsky a few weeks ago and he told me that he now worked on Windows Live Favorites. Irwin used to work on the XML team at Microsoft with me and in fact he took over http://msdn.microsoft.com/xml when I left the team last year. I'm glad to see that I'll be working closely with a couple more familiar faces.

Yesterday he let me know that they've started a team blog at http://spaces.msn.com/members/livefavorites. He's already started addressing some of the feedback from their early adopters such as his post on the Number of Favorites Limit. Check it out.


 

Categories: Windows Live

Our implementation of the MetaWeblog API for MSN Spaces is now publicly available. You can use the API to create, edit and delete blog posts on your space. The following blogging applications either currently work with our implementation of the MetaWeblog API or will in their next release:

  1. W.Bloggar
  2. Blogjet
  3. Ecto
  4. Zoundry
  5. Qumana
  6. Onfolio
  7. Elicit
  8. PostXING
  9. Pocket Blogger
  10. Diarist - PocketPC

I have also provided a pair of tutorials for managing your MSN Spaces blog using desktop blogging tools: one on using Blogjet and the other on using W.Bloggar.

The following information is for developers who would like to build applications that programmatically interact with MSN Spaces.

Supported Blogger and MetaWeblog API methods:

  • metaWeblog.newPost (blogid, username, password, struct, publish) returns string
  • metaWeblog.editPost (postid, username, password, struct, publish) returns boolean
  • metaWeblog.getPost (postid, username, password) returns struct
  • metaWeblog.getCategories (blogid, username, password) returns array of structs
  • metaWeblog.getRecentPosts (blogid, username, password, numberOfPosts) returns array of structs
  • blogger.deletePost(appkey, postid, username, password, publish) returns boolean
  • blogger.getUsersBlogs(appkey, username, password) returns array of structs
  • blogger.getUserInfo(appkey, username, password) returns struct

Unsupported MetaWeblog API methods:

  • metaWeblog.newMediaObject (blogid, username, password, struct) returns struct

NOTE: The appkey parameter used by the deletePost, getUsersBlogs and getUserInfo methods is ignored. MSN Spaces will not require an application key to utilize its APIs.
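To make the listing above concrete, here is a minimal sketch of calling these methods using Python's built-in XML-RPC client. The endpoint URL is the one given in the tutorials below; the space name, secret word and post content are placeholders, and 'title' and 'description' are the standard MetaWeblog struct members:

    import xmlrpc.client

    # Endpoint from the W.Bloggar/Blogjet tutorials below.
    ENDPOINT = "https://storage.msn.com/storageservice/MetaWeblog.rpc"
    USERNAME = "carnage4life"  # the name of your space (placeholder)
    PASSWORD = "secret-word"   # the secret word from Email Publishing (placeholder)

    server = xmlrpc.client.ServerProxy(ENDPOINT)

    # The appkey argument is ignored by MSN Spaces (see the note above).
    blogs = server.blogger.getUsersBlogs("appkey", USERNAME, PASSWORD)
    blog_id = blogs[0]["blogid"]  # standard Blogger API struct member

    # Create and immediately publish a post; newPost returns the post id as a string.
    post = {"title": "Hello from the API", "description": "<p>Posted via XML-RPC.</p>"}
    post_id = server.metaWeblog.newPost(blog_id, USERNAME, PASSWORD, post, True)
    print("Created post", post_id)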

Expect to see more information about the MetaWeblog API for MSN Spaces on http://msdn.microsoft.com/msn shortly. We also will be providing a forum to discuss the APIs for MSN Spaces at http://forums.microsoft.com/msdn in the next few days. If you have questions about using the API or suggestions about other APIs you would like to see, either respond to this blog entry or send me mail at dareo AT microsoft DOT com. 


 

Categories: Windows Live

The following is a tutorial on posting to your blog on MSN Spaces using the W.Bloggar desktop blogging application.

  1. Create a Space on http://spaces.msn.com if you don't have one

  2. Go to 'Edit Your Space->Settings->Email Publishing'

  3. Turn on Email Publishing (screenshot below)

  4. Choose a secret word (screenshot below)

  5. Download and install the latest version of W.Bloggar from http://www.wbloggar.com

  6. Go to File->Add Account

  7. On the next screen, answer "Yes, I want to add it as a new account" when asked whether you already have a blog

  8. Select 'Custom' as your blog tool and choose an alias for this account (screenshot below)

  9. Select your Custom Blog Tool Settings as shown (screenshot below)

  10. Specify your provider information as follows: Host=storage.msn.com, Page=/storageservice/MetaWeblog.rpc, Port=443, HTTPS=checked (screenshot below)

  11. Enter your username and password. Your username is the name of your space (e.g. I use 'carnage4life' because the URL of my space is http://spaces.msn.com/members/carnage4life). The password is the secret word you selected when you turned on Email-Publishing on your space. (screenshot below)

  12. Click Finish.

  13. Go ahead and create, edit or delete blog posts on your blog using W.Bloggar
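If W.Bloggar reports a connection error, you can verify the endpoint and your credentials independently. Here is a short sketch in Python using the same settings from step 10 (the space name and secret word are placeholders):

    import xmlrpc.client

    server = xmlrpc.client.ServerProxy("https://storage.msn.com/storageservice/MetaWeblog.rpc")
    # Returns your user info if the space name and secret word are correct.
    print(server.blogger.getUserInfo("appkey", "carnage4life", "secret-word"))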


 

Categories: Windows Live

The following is a tutorial on posting to your blog on MSN Spaces using the Blogjet desktop blogging application.

  1. Create a Space on http://spaces.msn.com if you don't have one

  2. Go to 'Edit Your Space->Settings->Email Publishing'

  3. Turn on Email Publishing (screenshot below)

  4. Choose a secret word (screenshot below)

  5. Download and install the latest version of Blogjet from http://www.blogjet.com

  6. Go to Tools->Manage Accounts

  7. Create a new account where the user name is the name of your space (e.g. I use 'carnage4life' because the URL of my space is http://spaces.msn.com/members/carnage4life). The password is the secret word you selected when you turned on Email Publishing on your space.

  8. On the next screen select "I already have a blog"

  9. Specify your provider information as follows: Host=storage.msn.com, Page=/storageservice/MetaWeblog.rpc, Port=443, Use SSL=checked (screenshot below)

  10. Keep clicking Next until you are done

  11. Go ahead and create, edit or delete blog posts on your space using Blogjet


 

Categories: Windows Live

The number one problem that faces developers of feed readers is how to identify posts. How does a feed reader tell a new post from an old one whose title or permalink changed? In general, you do this by picking a unique identifier from the feed item's metadata that tells it apart from other items. If you are using the Atom 0.3 & 1.0 syndication formats, the identifier is the <atom:id> element; for RSS 1.0 it is the rdf:about attribute; and for RSS 0.9x & RSS 2.0 it is the <guid> element.

The problem is that many RSS 0.9x & 2.0 feeds do not have a <guid> element, which usually means a feed reader has to come up with its own custom mechanism for identifying items. In many cases, using the <link> element is enough because most items in a feed map to a single web resource with a permalink URL. In some pathological cases, a feed may have neither a <guid> nor a <link>, or even worse may use the same value in the <link> element for every item in the feed. In such cases, feed readers usually resort to heuristics which are guaranteed to be wrong at least some of the time.
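A sketch of that fallback chain in Python (the dictionary standing in for a parsed RSS item is hypothetical):

    import hashlib

    def item_id(item: dict) -> str:
        """Pick a unique identifier for a parsed RSS 2.0 item (hypothetical dict shape)."""
        if item.get("guid"):   # best case: the feed provides <guid>
            return item["guid"]
        if item.get("link"):   # usually good enough: the item's permalink URL
            return item["link"]
        # Pathological case: neither <guid> nor <link>. Hash the content as a
        # last-resort heuristic; any edit to the post changes the computed id.
        content = item.get("title", "") + item.get("description", "")
        return hashlib.sha1(content.encode("utf-8")).hexdigest()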

So what does this have to do with the Newsgator API? Users of recent versions of RSS Bandit can synchronize the state of their RSS feeds with Newsgator Online using the Newsgator API. Where things get tricky is that both RSS Bandit and Newsgator Online either need to use the same techniques for identifying posts OR have a common way to map between their identification mechanisms. When I first used the API, I noticed that Newsgator has its own notion of a "Newsgator ID" which it expects clients to use. In fact, it's worse than that. Newsgator Online assumes that clients that synchronize with it actually fetch all their data from Newsgator Online, including feed content. This is a pretty big assumption to make, but I'm sure it made it easier to solve a bunch of tricky development problems for their various products. Instead of worrying about keeping data and algorithms on the clients in sync with the server, they just replace all the data on the client with the server's data as part of the 'synchronization' process.

Now that I've built an application that deviates from this fundamental assumption, I've been having all sorts of interesting problems. The most recent is that some users complained that read/unread state wasn't being synced via the Newsgator API. When I investigated, it turned out that this is because I use <guid> elements to identify posts in RSS Bandit while the Newsgator API uses the "Newsgator ID". Even worse, they don't even expose the original <guid> element in the returned feed items. So now it looks like fixing the read/unread syncing bug involves bigger and more fundamental changes than I expected. More than likely I'll have to switch to using <link> elements as unique identifiers, since it looks like the Newsgator API doesn't throw those away.
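In rough Python pseudocode, the fix amounts to joining local and remote state on the <link> value instead of the <guid>. The data shapes below are invented for illustration and are not the actual Newsgator API:

    def apply_remote_read_state(local_items: list, remote_items: list) -> None:
        """Copy read/unread flags from remote items onto local ones, keyed by <link>."""
        remote_read = {item["link"]: item["read"]
                       for item in remote_items if item.get("link")}
        for item in local_items:
            if item.get("link") in remote_read:
                item["read"] = remote_read[item["link"]]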

Frustrating.


 

Tim Ewald has an astute post entitled PaulD's new XSD data binding WG where he discusses a recently chartered W3C working group. He writes:

Paul responded to yesterday's post to explain the need for the new W3C XML Schema Patterns for Databinding Working Group, which he chairs. He points out that the move by the WS-I to deprecate encoding in favor of literal schema was based on a reasonable argument (that there is no spec for how to translate an XSD in a WSDL - which describes a tree of named structural types - into an input to SOAP encoding - which acts on a graph of unnamed structural types) but that the end result made interop harder because it opened up the door to using all of XSD. I disagree. The WSDL spec opened the door to using all of XSD for both encoded and literal bindings. The work that SOAPbuilders did provided a set of test cases for mapping common types and structures. It did not, however, address questions like “how do you map substitution groups to code using an encoded binding”, something that is completely legal according to WSDL. In other words, the shift from encoding to literal in no way widened the number of databinding cases we had to be concerned about. That's a red herring. The real problem has been the lack of SOAPbuilders-style test suites to cover more of XSD or the lack of a formal specification that narrows XSD to a more easily supported subset (an option that the WS-I discarded).

This is one of those issues that I used to blame on the complexity of XSD, but I've adjusted to also blaming the vendors of XML Web Services toolkits. The core problem is that every vendor of XML Web Services toolkits pretends they are selling a toolkit for programming with distributed objects and tries their best to make their tool hide the XML-ness of the wire protocol (SOAP), interface description language (WSDL) and data types (XSD). Of course, these toolkits are all leaky abstractions, made even leakier than usual by the impedance mismatch between XSD and the typical statically typed, object oriented programming language that is popular with the enterprise Web services crowd (i.e. Java or C#).

The W3C forming a working group to standardize the collection of hacks and kludges that various toolkits use when mapping XSD<->objects is an attempt to standardize the wrongheaded thinking of the majority of platform vendors selling XML Web Services toolkits.  

Reading the charter of the working group is even more disturbing, because not only do they want to legitimize bad practices but they also plan to solve problems like how to version classes across programming languages and how to come up with XML representations of common data structures for use across different programming languages. Thus the working group plans to invent as well as standardize common practice. Sounds like the kind of conflicting goals that brought us XSD in the first place. I wish them luck.


 

Categories: XML Web Services