Last week I had lunch with Joshua Allen and mentioned that I was planning to write a blog post about the game-changing effect of some entity adding generally accessible offline support to the AJAX capabilities of traditional web browsers. It seems Jason Kottke has beaten me to writing this with his post GoogleOS? YahooOS? MozillaOS? WebOS?; he even has a roll call of the usual suspects who might build this and why.

He writes

So who's going to build these WebOS applications? Hopefully anyone with XHTML/JavaScript/CSS skills, but that depends on how open the platform is. And that depends on whose platform it is. Right now, there are five organizations who are or could be moving in this direction:

  • Google. If Google is not thinking in terms of the above, I will eat danah's furriest hat. They've already shifted the focus of Google Desktop with the addition of Sidebar and changing the name of the application (it used to be called Google Desktop Search...and the tagline changed from "Search your own computer" to the more general "Info when you want it, right on your desktop"). To do it properly, I think they need their own browser (with bundled Web server, of course) and they need to start writing their applications to work on OS X and Linux (Google is still a Windows company). Many of the moves they've made in the last two years have been to outflank Microsoft, and if they don't use Google Desktop's "insert local code into remote sites" trick to make whatever OS comes with people's computers increasingly irrelevant, they're stupid, stupid, stupid. Baby step: make Gmail readable offline.
  • Yahoo. I'm pretty sure Yahoo is thinking in these terms as well. That's why they bought Konfabulator: desktop presence. And Yahoo has tons of content and apps that they would like to offer on a WebOS-like platform: mail, IM, news, Yahoo360, etc. Challenge for Yahoo: widgets aren't enough...many of these applications are going to need to run in Web browsers. Advantages: Yahoo seems to be more aggressive in opening up APIs than Google...chances are if Yahoo develops a WebOS platform, we'll all get to play.
  • Microsoft. They're going to build a WebOS right into their operating system...it's likely that with Vista, you sometimes won't be able to tell when you're using desktop applications or when you're at msn.com. They'll never develop anything for OS X or for Linux (or for browsers other than IE), so its impact will be limited. (Well, limited to most of the personal computers in the world, but still.)
  • Apple. Apple has all the makings of a WebOS system right now. They've got the browser, a Web server that's installed on every machine with OS X, Dashboard, iTMS, .Mac, Spotlight, etc. All they're missing is the applications (aside from the Dashboard widgets). But like Microsoft, it's unlikely that they'll write anything for Windows or Linux, although if OS X is going to run on cheapo Intel boxes, their market share may be heading in a positive direction soon.
  • The Mozilla Foundation. This is the most unlikely option, but also the most interesting one. If Mozilla could leverage the rapidly increasing user base of Firefox and start bundling a small Web server with it, then you've got the beginnings of a WebOS that's open source and for which anyone, including Microsoft, Google, Yahoo, and anyone with JavaScript chops, could write applications. To market it, they could refer to the whole shebang as a new kind of Web browser, something that sets it apart from IE, a true "next generation" browser capable of running applications no matter where you are or what computer (or portable device) you're using.

So yeah, that's the idea of the WebOS (as I see it developing) in a gigantic nutshell.

I disagree with some of his post; I think desktop web servers are a bad idea, and the claims of the end of Microsoft's operating system dominance are premature. He is also mistaken about MSN not building stuff for browsers other than IE. Of course, overestimating Microsoft's stupidity is a common trait among web developer types.

However, the rest of his post does jibe with a lot of thinking I did while on vacation in Nigeria. I'd suggest that anyone interested in current and future trends in web application development should check it out.


Categories: Web Development

My post on Why I Prefer SOA to REST got some interesting commentary yesterday that indicates I should probably clarify what I was talking about. The most interesting feedback actually came via email from some evangelists at Microsoft, whose criticisms ranged from the fact that I dared to use Wikipedia as a definitive reference to my pointing out that SOA is a meaningless buzzword. So I'll try this again without using links to Wikipedia or the acronym "SOA".

My day job is designing services that will be used within MSN by a number of internal properties (Hotmail, MSN Spaces, MSN Messenger, and a lot more), as well as figuring out what our external web services story will be for interacting with MSN Spaces. This means I straddle the fence between building distributed applications for a primarily homogeneous intranet environment and for the heterogeneous World Wide Web. When I talk about "distributed applications" I mean both scenarios, not just Web service development or enterprise service development.

Now let's talk about REST. In Chapter 5 of Roy Fielding's dissertation, where he introduces the Representational State Transfer (REST) architectural style, he writes

5.1.5 Uniform Interface

The central feature that distinguishes the REST architectural style from other network-based styles is its emphasis on a uniform interface between components (Figure 5-6). By applying the software engineering principle of generality to the component interface, the overall system architecture is simplified and the visibility of interactions is improved. Implementations are decoupled from the services they provide, which encourages independent evolvability. The trade-off, though, is that a uniform interface degrades efficiency, since information is transferred in a standardized form rather than one which is specific to an application's needs. The REST interface is designed to be efficient for large-grain hypermedia data transfer, optimizing for the common case of the Web, but resulting in an interface that is not optimal for other forms of architectural interaction.

In order to obtain a uniform interface, multiple architectural constraints are needed to guide the behavior of components. REST is defined by four interface constraints: identification of resources; manipulation of resources through representations; self-descriptive messages; and, hypermedia as the engine of application state. These constraints will be discussed in Section 5.2.
...
5.2.1.1 Resources and Resource Identifiers

The key abstraction of information in REST is a resource. Any information that can be named can be a resource: a document or image, a temporal service (e.g. "today's weather in Los Angeles"), a collection of other resources, a non-virtual object (e.g. a person), and so on. In other words, any concept that might be the target of an author's hypertext reference must fit within the definition of a resource. A resource is a conceptual mapping to a set of entities, not the entity that corresponds to the mapping at any particular point in time.

The REST architectural style describes how a large interlinked web of hypermedia works, which is what the World Wide Web is. It describes a way to build a certain class of distributed application, specifically one where you are primarily interested in manipulating linked representations of resources, where those representations are hypermedia.
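
To make the uniform interface concrete, here is a minimal sketch in C# of what working with a resource looks like; the URI and the XML representation are invented for illustration. The point is that the same generic verbs (GET, PUT, DELETE) work against any resource, with only the URI and the representation changing.

    using System;
    using System.IO;
    using System.Net;
    using System.Text;

    class RestClientSketch
    {
        static void Main()
        {
            // Hypothetical resource URI; in REST anything that can be
            // named gets its own identifier.
            string uri = "http://www.example.org/users/dareo";

            // GET retrieves a representation of the resource.
            WebRequest getRequest = WebRequest.Create(uri);
            using (StreamReader reader =
                new StreamReader(getRequest.GetResponse().GetResponseStream()))
            {
                Console.WriteLine(reader.ReadToEnd());
            }

            // PUT uploads a changed representation to the same URI.
            byte[] body = Encoding.UTF8.GetBytes("<user><name>Dare</name></user>");
            HttpWebRequest putRequest = (HttpWebRequest)WebRequest.Create(uri);
            putRequest.Method = "PUT";
            putRequest.ContentType = "application/xml";
            putRequest.ContentLength = body.Length;
            using (Stream requestStream = putRequest.GetRequestStream())
            {
                requestStream.Write(body, 0, body.Length);
            }
            putRequest.GetResponse().Close();
        }
    }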

On to service orientation. The canon of service orientation is the four tenets from the article A Guide to Developing and Running Connected Systems with Indigo by Don Box, where he wrote

In Indigo, a service is simply a program that one interacts with via message exchanges. A set of deployed services is a system. Individual services are built to last—the availability and stability of a given service is critical. The aggregate system of services is built to allow for change—the system must adapt to the presence of new services that appear a long time after the original services and clients have been deployed, and these must not break functionality.

Service-oriented development is based on the four fundamental tenets that follow:

Boundaries are explicit A service-oriented application often consists of services that are spread over large geographical distances, multiple trust authorities, and distinct execution environments...Object-oriented programs tend to be deployed as a unit...Service-oriented development departs from object-orientation by assuming that atomic deployment of an application is the exception, not the rule. While individual services are almost always deployed atomically, the aggregate deployment state of the overall system/application rarely stands still.

Services are autonomous Service-orientation mirrors the real world in that it does not assume the presence of an omniscient or omnipotent oracle that has awareness and control over all parts of a running system.

Services share schema and contract, not class Object-oriented programming encourages developers to create new abstractions in the form of classes...Services do not deal in types or classes per se; rather, only with machine readable and verifiable descriptions of the legal "ins and outs" the service supports. The emphasis on machine verifiability and validation is important given the inherently distributed nature of how a service-oriented application is developed and deployed.

Service compatibility is determined based on policy Object-oriented designs often confuse structural compatibility with semantic compatibility. Service-orientation deals with these two axes separately. Structural compatibility is based on contract and schema and can be validated (if not enforced) by machine-based techniques (such as packet-sniffing, validating firewalls). Semantic compatibility is based on explicit statements of capabilities and requirements in the form of policy.

One thing I want to point out at this point is that neither REST nor service orientation is a technology. They are approaches to building distributed applications. However, there are technologies typically associated with each approach: REST has Plain Old XML over HTTP (POX/HTTP) and service orientation has SOAP.
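
As a rough sketch of what that difference looks like on the wire, here is the same hypothetical weather query done both ways in C#; the endpoints, element names and namespace are all made up for the example.

    using System;
    using System.IO;
    using System.Net;
    using System.Text;

    class PoxVersusSoap
    {
        // POX/HTTP: the operation is implied by the HTTP verb and the URI,
        // and the response is plain XML with no required envelope.
        static string GetWeatherPox()
        {
            WebRequest request =
                WebRequest.Create("http://www.example.org/weather?city=Lagos");
            using (StreamReader reader =
                new StreamReader(request.GetResponse().GetResponseStream()))
            {
                return reader.ReadToEnd();
            }
        }

        // SOAP: the operation is named inside an XML envelope POSTed to a
        // single service endpoint; headers, policy and the rest of the
        // WS-* stack hang off that envelope.
        static string GetWeatherSoap()
        {
            string envelope =
                "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
                "<soap:Body><GetWeather xmlns=\"http://example.org/weather\">" +
                "<City>Lagos</City></GetWeather></soap:Body></soap:Envelope>";

            HttpWebRequest request =
                (HttpWebRequest)WebRequest.Create("http://www.example.org/WeatherService");
            request.Method = "POST";
            request.ContentType = "text/xml; charset=utf-8";

            byte[] body = Encoding.UTF8.GetBytes(envelope);
            request.ContentLength = body.Length;
            using (Stream stream = request.GetRequestStream())
            {
                stream.Write(body, 0, body.Length);
            }
            using (StreamReader reader =
                new StreamReader(request.GetResponse().GetResponseStream()))
            {
                return reader.ReadToEnd();
            }
        }
    }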

My point from yesterday was that, as far as approaches go, I prefer to think of building distributed applications from a service oriented perspective rather than from a REST perspective. This is completely different from endorsing SOAP over POX/HTTP as the technology for building distributed applications. That is a discussion for another day.


Categories: XML Web Services

Can't write lyrics worth a damn, can't rap to save his life, but can make phat beats and has an ego the size of a small planet. I guess we now know what it takes to be a successful rapper, and it has nothing to do with rapping.


Categories: Music

Omar Shahine has a post where he talks about FireAnt. FireAnt is the communications part of the AJAX framework shared by Hotmail, Start.com, MyMSN and MSN Spaces, which Steve Rider alluded to in his post Spaces, Hotmail and Start (oh my!).

Omar writes

Last summer we spent a lot of time at the white-board evaluating a number of ways to deliver a new architecture for Hotmail. We considered a number of things:

  1. Modification of the current C/C++ ISAPI architecture to support a hybrid ASP model.
  2. .NET rewrite for the DataLayer and BusinessLayer and XML/XSLT for the PresentationLayer
  3. Same as #2 but the Presentation layer would be JavaScript, XMLHTTP, and DHTML/CSS. This now has the fancy name, AJAX.

After much deliberating, we chose #3, and started running. For 4 weeks basically 1 PM, a developer and an intern built a prototype, and then the real thing (in case you are in college I’d note how cool it is that we put an intern on the most important technology we were building). As more people started to move over to the FireAnt project, we got more and more excited about what was happening. You see, writing AJAX code can be a pain, and we didn’t want to spend our days and nights writing a lot of JavaScript and debugging client side script. Instead we built an infrastructure that dynamically takes server side objects (classes and methods) and automatically generates client side JavaScript stubs. The end result is that the client side object model looked exactly like the server side object model. Information was transported across the wire using XMLHTTP and the whole thing happened asynchronously.

We extended .NET attributes to mark classes and methods as FireAnt classes/methods, and at build time the script is generated. If you think of SOAP support in the .NET Framework, it’s basically similar. As a developer you do not worry about generating SOAP messages or building a SOAP parser. All you do is mark your method as [WebMethod] and your classes as [Serializable] and the .NET framework takes care of proxying, class generation etc. That’s what we were shooting for.

This was a big deal for us as it allows us to be incredibly productive. Since last summer, we have built a ton of features using FireAnt and the JavaScript Frameworks from Scott Isaacs. Late last fall we went up to Redmond and showed FireAnt to a number of folks in MSN; one of those folks was Steve Rider. It was really exciting to see the looks on folks' faces when Walter (our FireAnt “architect”) set up his “Hello World” demo. You could just see that people realized that doing AJAX style development any other way was crazy.

We’ve since shown our stuff to a number of teams inside Microsoft. As a result of our work, Walter and Scott have spent a considerable amount of time with the Whidbey/ASP.NET folks and it’s pretty exciting to see ATLAS come together. If you want to learn more, Walter will be giving a talk at the PDC on what we’ve built. It’s great to see collaboration between our team and the Developer Division, as the end result will be a better, more scalable version of the .NET Framework for you.

Trying to build a complex AJAX website with traditional Visual Studio .NET development tools is quite painful, which is why the various teams at MSN have collaborated on a unified framework. As Omar points out, one of the good things to come out of this is that the various MSN folks went to the Microsoft developer division and pointed out that it was missing the boat on key infrastructure needed for AJAX development. This feedback was one of the factors that resulted in the recently announced Atlas project.

A key point Omar touches on is that development became much easier once they built a framework for handling serialization and deserialization of objects to be transmitted using XMLHTTP. The trick here is that the framework handles both serialization and deserialization on both the server (ASP.NET code) and the client (JavaScript code). Of course, this is AJAX development 101 and anyone who's used AJAX frameworks like AJAX.NET is familiar with these techniques. One of the interesting things that falls out of using a framework like this is that the serialization format becomes less interesting; one could just as easily use JavaScript Object Notation (JSON) as some flavor of XML.
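
For anyone who hasn't seen the technique, here is a minimal sketch of what attribute-driven stub generation can look like. The attribute names and the invokeRemote JavaScript helper are hypothetical; the actual FireAnt class and attribute names aren't public.

    using System;
    using System.Reflection;
    using System.Text;

    // Hypothetical markers in the spirit of [WebMethod]/[Serializable].
    [AttributeUsage(AttributeTargets.Class)]
    class AjaxServiceAttribute : Attribute { }

    [AttributeUsage(AttributeTargets.Method)]
    class AjaxMethodAttribute : Attribute { }

    [AjaxService]
    class MailService
    {
        [AjaxMethod]
        public string GetFolderName(int folderId) { return "Inbox"; }
    }

    class StubGenerator
    {
        // Reflect over a service class at build time and emit JavaScript
        // stubs whose shape mirrors the server-side object model. Each
        // stub marshals its arguments and invokes the server via XMLHTTP
        // through a shared (hypothetical) invokeRemote helper.
        static string EmitStubs(Type serviceType)
        {
            StringBuilder script = new StringBuilder();
            script.AppendFormat("var {0} = {{}};\n", serviceType.Name);
            foreach (MethodInfo method in serviceType.GetMethods())
            {
                if (method.GetCustomAttributes(typeof(AjaxMethodAttribute), false).Length == 0)
                    continue;
                script.AppendFormat(
                    "{0}.{1} = function() {{ return invokeRemote('{0}', '{1}', arguments); }};\n",
                    serviceType.Name, method.Name);
            }
            return script.ToString();
        }

        static void Main()
        {
            // Prints a JavaScript object model that mirrors MailService.
            Console.WriteLine(EmitStubs(typeof(MailService)));
        }
    }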

If you're going to be at the Microsoft Professional Developers Conference (PDC) and are interested in professional AJAX development, you should definitely make your way to the various presentations by the MSN folks. Also, we're always looking for developers, so if building AJAX applications that will be used by millions of people on a daily basis sounds like your cup of tea, give us your resume.


Categories: MSN | Web Development

August 21, 2005
@ 12:56 AM

The good folks at Google, Yahoo and MSN announced some sweet stuff for Web search aficionados this week.

  1. MSN: From the MSN Search blog post entitled Extending the MSN Search Toolbar we learn that the first of many add-ins for the MSN Search toolbar is now available. The weather add-in for MSN Toolbar is something I've wanted in a browser toolbar for a while. There is also information for developers interested in building their own add-ins.

  2. Google: From the Google Weblog post entitled The linguasphere at large we learn that Google has launched the Google Search engine in 11 more languages, bringing the total number of languages supported to 116. The coolest part of this announcement is that I can now search Google in Yoruba, which is my dad's native tongue. Not bad at all.

  3. Yahoo: Last but not least is the recent announcement about the Next Generation of Yahoo! Local on the Yahoo! Search blog. The launch showcases the integration of ratings and reviews by Yahoo! users with mapping and local information. It's kind of like Yahoo! Maps meets CitySearch. Freaking awesome. Yahoo! continues to impress.


August 21, 2005
@ 12:47 AM

Yesterday I was chatting with Matt after he reviewed the paper I plan to submit for the next Bill Gates Think Week and he pointed out something that had been nagging me about using Representational State Transfer (REST) as a model for building distributed applications.

In the current Wikipedia entry on REST, it states

An important concept in REST is the existence of resources (pieces of information), each of which can be referred to using a global identifier (a URL). In order to manipulate these resources, components of the network (clients and servers) communicate via a standardised interface (HTTP) and exchange representations of these resources (the actual files uploaded and downloaded) -- it is a matter of debate, however, whether the distinction between resources and their representations is too Platonic for practical use on the web, though it is popular in the RDF community.

Any number of connectors (e.g., clients, servers, caches, tunnels, etc.) can mediate the request, but each does so without "seeing past" its own request (referred to as "layering", another constraint of REST and a common principle in many other parts of information and networking architecture). Thus an application can interact with a resource by knowing two things: the identifier of the resource, and the action required -- it does not need to know whether there are caches, proxies, gateways, firewalls, tunnels, or anything else between it and the server actually holding the information. The application does, however, need to understand the format of the information (representation) returned, which is typically an HTML or XML document of some kind, although it may be an image or any other content.

Compare the above to the typical notion of service orientation, such as that espoused in the article A Guide to Developing and Running Connected Systems with Indigo by Don Box, where he wrote

In Indigo, a service is simply a program that one interacts with via message exchanges. A set of deployed services is a system. Individual services are built to last—the availability and stability of a given service is critical. The aggregate system of services is built to allow for change—the system must adapt to the presence of new services that appear a long time after the original services and clients have been deployed, and these must not break functionality.

Service-oriented development is based on the four fundamental tenets that follow:

Boundaries are explicit   A service-oriented application often consists of services that are spread over large geographical distances, multiple trust authorities, and distinct execution environments...Object-oriented programs tend to be deployed as a unit...Service-oriented development departs from object-orientation by assuming that atomic deployment of an application is the exception, not the rule. While individual services are almost always deployed atomically, the aggregate deployment state of the overall system/application rarely stands still.

Services are autonomous   Service-orientation mirrors the real world in that it does not assume the presence of an omniscient or omnipotent oracle that has awareness and control over all parts of a running system.

Services share schema and contract, not class   Object-oriented programming encourages developers to create new abstractions in the form of classes...Services do not deal in types or classes per se; rather, only with machine readable and verifiable descriptions of the legal "ins and outs" the service supports. The emphasis on machine verifiability and validation is important given the inherently distributed nature of how a service-oriented application is developed and deployed.

Service compatibility is determined based on policy   Object-oriented designs often confuse structural compatibility with semantic compatibility. Service-orientation deals with these two axes separately. Structural compatibility is based on contract and schema and can be validated (if not enforced) by machine-based techniques (such as packet-sniffing, validating firewalls). Semantic compatibility is based on explicit statements of capabilities and requirements in the form of policy.

The key thing to note here is that REST is all about performing a limited set of operations on an object (i.e., a resource), while SOA is all about making requests where objects are input and/or output.

To see what this difference means in practice, I again refer to the Wikipedia entry on REST, which has the following example

A REST web application requires a different design approach than an RPC application. In RPC, the emphasis is on the diversity of protocol operations, or verbs; for example, an RPC application might define operations such as the following:

getUser()
addUser()
removeUser()
updateUser()
getLocation()
addLocation()
removeLocation()
updateLocation()
listUsers()
listLocations()
findLocation()
findUser()

With REST, on the other hand, the emphasis is on the diversity of resources, or nouns; for example, a REST application might define the following two resource types

 User {}
 Location {}

Each resource would have its own location, such as http://www.example.org/locations/us/ny/new_york_city.xml. Clients work with those resources through the standard HTTP operations, such as GET to download a copy of the resource, PUT to upload a changed copy, or DELETE to remove all representations of that resource. Note how each object has its own URL and can easily be cached, copied, and bookmarked. POST is generally used for actions with side-effects, such as placing a purchase order, or adding some data to a collection.

The problem is that although it is easy to model resources as services, as shown in the example above, in many cases it is quite difficult to model a service as a resource. For example, a service that validates credit card numbers can be modeled as a validateCreditCardNumber(string cardNumber) operation. On the other hand, it is unintuitive how one would model that service as a resource. For this reason I prefer to think about distributed applications in terms of services as opposed to resources.
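
A sketch makes the asymmetry plain; the interface and the URIs below are hypothetical. The service version is a natural one-line operation, while the resource version forces you to invent a noun for what is really a verb:

    // Service-oriented version: the operation is the natural unit of design.
    interface ICreditCardService
    {
        // Returns true if the card number passes validation.
        bool ValidateCreditCardNumber(string cardNumber);
    }

    // Attempted resource-oriented version: to stay within the uniform
    // interface you end up POSTing a made-up "validation request" resource
    // to a collection and reading the result back, e.g.
    //
    //   POST http://www.example.org/card-validations
    //   <validationRequest><cardNumber>4111...</cardNumber></validationRequest>
    //
    // which works, but the resource framing is a roundabout fit for what
    // is conceptually a single operation.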

This doesn't mean I don't think there is value in several aspects of REST. However, I don't think it is the right model when thinking about building distributed applications.


Categories: XML Web Services

August 20, 2005
@ 11:32 PM

I've posted in the past about not understanding why people continue to use Technorati.com. It seems more people have realized that the service has been in bad shape for months and are moving on. Jason Kottke has a blog post entitled So Long, Technorati where he writes

That's it. I've had it. No more Technorati. I've used the site for, what, a couple of years now to keep track of what people were saying about posts on kottke.org and searching blogs for keywords or current events. During that time, it's been down at least a quarter of the time (although it's been better recently), results are often unavailable for queries with large result sets (i.e. this is only going to become a bigger problem as time goes on), and most of the rest of the time it's slow as molasses.

When it does return results in a timely fashion for links to kottke.org, the results often include old links that I've seen before in the results set, sometimes from months ago. And that's to say nothing of the links Technorati doesn't even display. The "kottke.org" smart list in my newsreader picks up stuff that Technorati never seems to get, and that's only pulling results from the ~200 blogs I read, most of which are not what you'd call obscure. What good is keeping track of 14 million blogs if you're missing 200 well-known ones? (And trackbacks perform even better...this post got 159 trackbacks but only 93 sites linking to it on Technorati.)

Over the past few months, I've been comparing the results from PubSub to those of Technorati and PS is kicking ass. Technorati currently says that 19 sites have linked to me in the past 6 days (and at least four of those are old and/or repeats...one is from last September, fer chrissakes) while PubSub has returned 38 fresh, unrepeated results during that same time. (Not that PubSub is all roses and sunshine either...the overlap between the result sets is surprisingly small.)

While their search of the live web (the site's primary goal) has been desperately in need of a serious overhaul, Technorati has branched out into all sorts of PR-getting endeavors, including soundbiting the DNC on CNN, tags (careful, don't burn yourself on the hot buzzword), and all sorts of XML-ish stuff for developers. Which is all great, but get the fricking search working first! As Jason Fried says, better to build half a product than a half-assed product. I know it's a terrifically hard problem, but Figure. It. Out.

Jason Kottke recommends IceRocket's blog search at the end of his post. I've been using the Bloglines Citations feature for the past couple of months and love it. That, in combination with RSS feeds of search results via PubSub, has replaced Technorati for all my ego searching needs.


Tim Bray has a recent post entitled The Real Problem that opens up the quarterly debate on the biggest usability problem facing XML syndication technologies like RSS and Atom: there is no easy way for end users to discover or subscribe to a website's feed.

Tim writes

One-Click Subscription First of all, most people don’t know about feeds, and most that do don’t subscribe to them. Check out the comments to Dwight Silverman’s What’s Wrong with RSS? (By the way, if there were any doubt that the blogging phenomenon has legs, the fact that so many people read them even without the benefits of RSS should clear that up).

Here’s the truth: an orange “XML” sticker that produces gibberish when you click on it does not win friends and influence people. The notion that the general public is going to grok that you copy the URI and paste it into your feed-reader is just ridiculous.

But, as you may have noticed, the Web has a built-in solution for this. When you click on a link to a picture, it figures out what kind of picture and displays it. When you click on a link to a movie, it pops up your favorite movie player and shows it. When you click on a link to a PDF, you get a PDF viewer.

RSS should work like this; it never has, but it can, and it won’t be very hard. First, you have to twiddle your server so RSS is served up correctly, for example as application/rss+xml or application/atom+xml. If you don’t know what this means, don’t worry, the person who runs your web server can do it in five minutes.

Second, you either need to switch to Atom 1.0 or start using <atom:link rel="self"> in RSS. If our thought leaders actually stepped up and started shouting about this, pretty well the whole world could have one-click subscriptions by next summer, using well-established, highly-interoperable, wide-open standards.
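
For what it's worth, the server-side twiddling Tim mentions really can be small. Here is a minimal sketch of an ASP.NET handler that serves a feed with the right MIME type; the feed path is hypothetical.

    using System.Web;

    // A minimal ASP.NET handler that serves a site's Atom feed with the
    // MIME type one-click subscription depends on. The feed path below is
    // hypothetical.
    public class FeedHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "application/atom+xml";
            context.Response.WriteFile(context.Server.MapPath("~/feed.xml"));
        }

        public bool IsReusable
        {
            get { return true; }
        }
    }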

As long as people expect one-click subscription to depend on websites using the right icons, the right HTML and the right MIME types for their documents, it won't become widespread. On the other hand, this debate is about to become moot anyway because every major web browser is going to have a [Subscribe to this website] button on it in a year or so. Firefox already has Live Bookmarks, there's Safari RSS for Mac OS X users, and Internet Explorer 7 will have Web Feeds.

As far as I'm concerned, the one-click subscription problem has already been solved. I guess that's why Dave Winer is now arguing about what to name the feature across different Web browsers. After all, RSS geeks must always have something to argue about. :)


For the few folks who have asked, I have uploaded 50 Photos from my Nigeria Trip to the photo album in my MSN Space. The photos are from all over Nigeria, specifically Lagos, Abuja and Abeokuta. They are somewhat crappy since I used a disposable camera I bought at the airport, but they do capture the spirit of the vacation.

I guess I should start thinking about investing in a digital camera.

Update: Even though no one asked, I've also decided to start rotating a song of the week on my space from the rap group I formed with Big Lo and a couple of other guys back in high school. Just click on the play button on the Windows Media player module to hear this week's track.


Categories: Trip Report

August 16, 2005
@ 06:40 PM

Joe Wilcox, an analyst for Jupiter Research, recently posted about his changed impressions of MSN Spaces in his blog post Making Room for My Space. He writes

I have started using MSN Spaces as the place where I keep my personal Weblog. During 2004 and part of 2005, I used TypePad's blogging service, and more recently moved one of my domains to a bloghoster. While a domain offers great search engine exposure, using the hosted blogging software requires some knowledge of HTML/CSS coding and other techniques; it's more work and trouble than I have time for. TypePad is a good alternative that's fairly easy to use, but it's by no means point and click.

To Microsoft's credit, MSN Spaces is remarkably easy to use, or so I am discovering as I give the service a hard second look. Sure, there were glitches at beta launch, but the service seems solid now. Some established bloggers balked at the lack of control, meaning Microsoft tools took most of it, when the service launched as beta. But Microsoft never meant the service for them, but for the masses of people that hadn't yet started blogging, and maybe folks like me too busy to become an amateur blogsite designer.

The simplicity and beauty of Microsoft's approach foreshadows possible future product changes competitors and partners shouldn't ignore...MSN Spaces takes that approach, of providing easy tools for doing the most common blogsite tasks. The user doesn't have as much control, but he or she can get the most common tasks quickly done. Over time, Microsoft has increased the amount of control and customization that power users would want, such as Friday's release of three MSN Spaces PowerToys, for layout control, custom (sandbox) modules and Windows Media content.
...
I would encourage Microsoft competitors and partners to closely watch MSN Spaces' progress. Apple blindsided Microsoft with iPod and the iTunes Music Store, a circumstance well understood by Microsoft product managers. Simplicity is one cornerstone of the products' success. Synching iPod to iTunes is no more complicated than connecting the device to the computer. There are settings to do more, but the baseline functionality that is suitable to most users is plug and synch. Microsoft has embarked on a similar, simpler approach with MSN Spaces.

It is interesting to see how geeks adore complexity in the software and hardware that they use. I can still remember Robert Scoble's complaints about Spaces in his post MSN Spaces isn't the blogging service for me, or CmdrTaco's comments when Apple released the iPod: "No wireless. Less space than a nomad. Lame". Despite both products being dissed by A-list geeks, they have become widely adopted by millions of people.

More proof that designing for regular people is a lot different from designing for geeks.


Categories: MSN