There's an article on InfoQ entitled "Code First" Web Services Reconsidered which begins

Are you getting started on developing SOAP web services? If you are, you have two development styles you can choose between. The first is called “start-from-WSDL”, or “contract first”, and involves building a WSDL service description and associated XML schema for data exchange directly. The second is called “start-from-code”, or “code first”, and involves plugging sample service code into your framework of choice and generating the WSDL+schema from that code.

With either development style, the end goal is the same – you want a stable WSDL+schema definition of your service. This goal is especially important when you’re working in a SOA environment. SOA demands loose coupling of services, where the interface is fixed and separate from the implementation.

There are lots of problems with this article, the main one being that choosing between “contract first” and “code first” SOAP Web Services development styles is like choosing between putting a square peg in a round hole and putting a round peg in a square hole. Either way, you have to deal with the impedance mismatch between W3C XML Schema (XSD) and the objects of your typical OO system. Practically every SOAP Web service toolkit does some level of XML<->object mapping, and that mapping ends up being lossy: W3C XML Schema contains several constructs that don't map well to objects, and depending on your platform choice, objects may contain constructs that don't map well to W3C XML Schema.
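To make the mismatch concrete, here is a schema fragment I made up for this post. A repeating choice over mixed content is perfectly ordinary in document-oriented schemas but has no natural equivalent in a typical object model:

    <!-- a paragraph: free text interleaved with bold runs and links -->
    <xs:complexType name="para" mixed="true">
      <xs:choice minOccurs="0" maxOccurs="unbounded">
        <xs:element name="b" type="xs:string"/>
        <xs:element name="link" type="xs:anyURI"/>
      </xs:choice>
    </xs:complexType>

A typical XML<->object mapper degrades this into a single untyped list property that interleaves strings with element wrappers, or drops the interleaved text altogether, which means the document doesn't survive a round trip through the object model intact.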

The only real consideration when deciding between “code first” and “contract first” approaches is whether your service is concerned primarily with objects or primarily with XML documents [preferably ones with a predefined schema]. If you are just moving data objects around (i.e. your data model isn't much more complex than JSON), then you are better off using a “code first” approach, especially since most SOAP toolkits can handle the basic constructs that result from generating WSDLs from such types. On the other hand, if you are transmitting documents that have a predefined schema (e.g. XBRL or OOXML documents), then you are better off authoring the WSDL by hand than jumping through hoops to get the XML<->object mapping technology in your off-the-shelf SOAP toolkit to do a decent job with these schemas. Unfortunately, anyone consuming your SOAP Web service with an off-the-shelf SOAP toolkit will likely hit interoperability issues when their toolkit tries to process schemas that fully utilize a significant number of the features of W3C XML Schema. If your situation falls somewhere in the middle, then you're probably screwed regardless of which approach you choose.
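As an aside, for readers unfamiliar with what “code first” looks like in practice, here is a minimal sketch using JAX-WS; the service class and names are mine, not from the InfoQ article. You annotate a class and the toolkit generates the WSDL+schema from it:

    import javax.jws.WebService;
    import javax.xml.ws.Endpoint;

    // A service that just moves simple data around; generating the
    // WSDL+schema from this is well within any toolkit's comfort zone.
    @WebService
    public class StockQuoteService {

        public double getQuote(String ticker) {
            return 42.0; // stub implementation for illustration
        }

        public static void main(String[] args) {
            // Publishing the endpoint also exposes the generated
            // contract at http://localhost:8080/quote?wsdl
            Endpoint.publish("http://localhost:8080/quote",
                             new StockQuoteService());
        }
    }

This is exactly the kind of JSON-grade data model where “code first” works fine; the trouble starts when the types stop looking like this.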

One of the interesting consequences of the adoption of RESTful Web services is that these interoperability issues have been avoided, because most people who provide these services do not publish schemas written in W3C XML Schema. This discourages the foolishness of XSD-based XML<->object mapping that causes interoperability problems in the SOAP world. Instead, most vendors who expose RESTful APIs and want to provide an object-centric programming model for developers who don't want to deal with XML either create, or encourage the third-party creation of, client libraries on target platforms which wrap their simple, RESTful Web services with a more object-oriented facade. A number of vendors have gone with this approach for their RESTful APIs.
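Such wrapper libraries tend to look something like the following sketch; the service URL, element names and class are all invented for illustration:

    import java.net.URL;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;

    // A hypothetical client library wrapping a plain RESTful XML API
    // behind an object-oriented facade.
    public class PhotoServiceClient {
        private static final String BASE = "http://api.example.com/photos?user=";

        // XML geeks can grab the raw document...
        public Document fetchRaw(String user) throws Exception {
            return DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new URL(BASE + user).openStream());
        }

        // ...while everyone else gets friendly accessors.
        public String firstPhotoTitle(String user) throws Exception {
            return fetchRaw(user)
                    .getElementsByTagName("title")
                    .item(0)
                    .getTextContent();
        }
    }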

That way XML geeks like me get to party on the raw XML which is in a simple and straightforward format while the folks who are scared of XML can party on objects using their favorite programming language and platform. The best of both worlds.

“Contract first” vs. “Code first”? More like do you want to slam your head into a brick wall or a concrete wall?

Now playing: Big Boi - Kryptonite (I'm On It) feat. Killer Mike


 

Categories: XML Web Services

The Facebook developer blog has a post entitled Change is Coming which details some of the changes they've made to the platform to handle malicious applications, including:

Requests

We will be deprecating the notifications.sendRequest API method. In its place, we will provide a standard invitation tool that allows users to select which friends they would like to send a request to. We are working hard on multiple versions of this tool to fit into different contexts. The tool will not have a "select all" button, but we hope it enables us to increase the maximum number of requests that can be sent out by a user. The standardized UI will hopefully make it easier for users to understand exactly what they are doing, and will save you the trouble of building it yourself.

Notifications

Soon we will be removing email functionality from notifications.send, though the API function itself will remain active. In the future, we may provide another way to contact users who have added your app, as we know that is important. Deceptive and misleading notifications will continue to be a focus for us, and we will continue to block applications which behave badly and we will continue to iterate on our automated spam detection tools. You will also see us working on ways to automatically block deceptive notifications.

It looks like some, but not all, of the most egregious behavior is being targeted, which is good. Specifically, I wonder what is meant by deprecating the notifications.sendRequest API method. When I think of API deprecation, I think of @deprecated in Java and Obsolete in C#, neither of which prevents the API from being used.
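To illustrate in Java terms (this is a toy class, not Facebook's actual code), deprecation is just a marker; callers get a compiler warning and everything keeps running:

    public class Notifications {
        /** @deprecated Use the standard invitation tool instead. */
        @Deprecated
        public void sendRequest(String friendId) {
            // still fully functional despite the annotation
        }

        public static void main(String[] args) {
            // compiles with a deprecation warning, executes normally
            new Notifications().sendRequest("12345");
        }
    }

So if “deprecating” notifications.sendRequest means this kind of deprecation, spammy applications can simply keep calling it.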

One of my biggest gripes with the site is the number of “friend requests” I get from applications, with no way to opt out of receiving these requests. However, it doesn't seem that this problem has been eliminated. Instead, an API is being replaced with a UI component, but the API isn't even going away. I hope there is a follow-up post where they describe the opt-out options they've added to the site so users can stop receiving so many unsolicited requests.

Now playing: Big Pun - Punish Me


 

August 28, 2007
@ 04:24 PM

People Who Need to Get the Fuck out of the White House
Donald Rumsfeld
Karl Rove
Alberto Gonzales
Dick Cheney
G.W. Bush

Now playing: Al Green - Tired Of Being Alone


 

Categories: Current Affairs

Robert Scoble has a blog post up entitled Why Mahalo, TechMeme, and Facebook are going to kick Google’s butt in four years where he argues that search based on social graphs (e.g. your Facebook relationships) or generated by humans (e.g. Mahalo) will eventually trump Google's algorithms. I'm not sure I'd predict the demise of Google, but I do agree that the social graph can be used to improve search and other aspects of the Internet experience. In fact, I agree so much that it was the topic of my second ThinkWeek paper, which I submitted earlier this year (Microsoft folks can find it here).

However, I don’t think Google’s main threat from sites like Facebook is that they may one day build social-graph-powered search that beats Google’s algorithms. Instead, it is that these sites are in direct conflict with Google’s mission to

organize the world's information and make it universally accessible and useful.

because they create lots of valuable content that Google cannot access. Google has branched out from Web search into desktop search, enterprise search, Web-based email and enterprise application hosting, all to fulfill this mission.

The problem that Google faces with Facebook is pointed out quite well in Jason Kottke’s post Facebook vs. AOL, redux where he writes

Think of it this way. Facebook is an intranet for you and your friends that just happens to be accessible without a VPN. If you're not a Facebook user, you can't do anything with the site...nearly everything published by their users is private. Google doesn't index any user-created information on Facebook.

and in Jeff Atwood's post Avoiding Walled Gardens on the Internet which contains the following excerpt

I occasionally get requests to join private social networking sites, like LinkedIn or Facebook. I always politely decline…public services on the web, such as blogs, twitter, flickr, and so forth, are what we should invest our time in. And because it's public, we can leverage the immense power of internet search to tie it all-- and each other-- together.

What Jason and Jeff are inadvertently pointing out is that once you join Facebook, you immediately start getting less value out of Google’s search engine. This is a problem that Google cannot let continue indefinitely if they plan to stay relevant as the Web’s #1 search engine.

What is also interesting is that, thanks to the efforts of Google employees like Mark Lucovsky, I can use Google search from within Facebook, but without divine intervention I can’t get Facebook content from Google’s search engine. If I were an exec at Google, I’d worry a lot more about the growing trend of users creating Web content that cannot be accessed by Google than about all the “me too” efforts coming out of competitors like Microsoft and Yahoo!.

The way you get disrupted is by focusing on competitors who are just like you instead of actually watching the marketplace. I wonder how Google will react when they eventually realize how deep this problem runs.

Now playing: Metallica - Welcome Home (Sanitarium)


 

I try to avoid posting about TechMeme pile-ups, but this one was just too irritating to let pass. Mark Cuban has a blog post entitled The Internet is Dead and Boring which contains the following excerpts:

Some of you may not want to admit it, but that's exactly what the net has become. A utility. It has stopped evolving. Your Internet experience today is not much different than it was 5 years ago.
...
Some people have tried to make the point that Web 2.0 is proof that the Internet is evolving. Actually it is the exact opposite. Web 2.0 is proof that the Internet has stopped evolving and stabilized as a platform. It's very, very difficult to develop applications on a platform that is ever changing. Things stop working in that environment. Internet 1.0 wasn't the most stable development environment. Today's Internet is stable specifically because it's now boring (easy-to-avoid browser and script differences excluded).

Applications like Myspace, Facebook, Youtube, etc were able to explode in popularity because they worked. No one had to worry about their ISP making a change and things not working. The days of walled gardens like AOL, Prodigy and others were gone.
...
The days of the Internet creating explosively exciting ideas are dead. They are dead until bandwidth throughput to the home reaches far higher numbers than the vast majority of broadband users get today.
...
So, let me repeat, the days of the Internet creating explosively exciting ideas are dead for the foreseeable future.

I agree with Mark Cuban that the fundamental technologies that underlie the Internet (DNS and TCP/IP) and the Web in particular (HTTP and HTML) are quite stable and are unlikely to undergo any radical changes anytime soon. If you are a fan of Internet infrastructure then the current world is quite boring, because we aren't likely to ever see an Internet based on IPv8 or a Web based on HTTP 3.0. In addition, it is clear that the relative stability of the Web development environment and the increase in the number of people with high-bandwidth connections are what have led to a number of the trends that are collectively grouped as "Web 2.0".

However, Mark Cuban goes off the rails when he treats his vision for the future of media as the only explosively exciting ideas that can be enabled by a global network like the Internet. Mark Cuban is an investor in HDNet, a company that creates and distributes professionally produced content in high-definition video formats. Mark would love nothing more than to distribute his content over the Internet, especially given the lack of interest in HDNet in the cable TV universe (I couldn't find any cable company on the Where to Watch HDNet page that actually carried the channel).

Unfortunately, Mark Cuban's vision of distributing high-definition video over the Internet has two problems. The first is that distributing high-quality video over the Web is too expensive, and the bandwidth of the average Web user is insufficient to make the user experience pleasant. The second is that people on the Web have already spoken: content trumps media quality any day of the week. Remember when pundits used to claim that consumers wouldn't choose lossy, compressed audio on the Web over lossless music formats? No one brings that up anymore given the success of the MP3 format and the iPod. Mark Cuban is repeating the same mistake with his HDNet misadventure. User-generated, poor-quality video on sites like YouTube and the larger libraries of content on services like Netflix: Instant Viewing are going to trump the limited lineup on services like HDNet, regardless of how much higher definition the video quality gets.

Mark Cuban has bet on a losing horse and he doesn't realize it yet. The world has changed on him and he's still trying to work within an expired paradigm. It's like a newspaper magnate blaming printer manufacturers for not making it easy to print a newspaper off of the Web instead of coming to grips with the fact that the Internet, with its blogging/social media/user-generated content/Craigslist and all that other malarkey, has turned his industry on its head.

This is what it looks like when a billionaire has made a bad investment and doesn't know how to recognize the smell of failure blasting his nostrils with its pungent aroma.


 

Categories: Current Affairs | Technology

August 26, 2007
@ 11:10 PM

Over the last few months, there have been numerous articles by various bloggers and the mainstream press speculating on whether the use of Facebook will supplant email. I have also noticed that I use the private messaging feature in Facebook to keep in touch with more people, more often, than I do non-work-related email.

I used to think this was a welcome development from a user's point of view, because spam is pretty much eliminated thanks to built-in white lists based on social networks. Or at least that's what I thought. However, over the past few weeks I've been getting more and more unsolicited private messages on the site which aren't p3n1s enlargement or 419 scams but are still unsolicited. Taking a look at the privacy options, it turns out there doesn't seem to be a way to opt out of being contacted by random people on the site.

In fact, I'm surprised that regular spammers haven't yet flooded Facebook, given how they seem to end up everywhere else you let people contact each other directly over the Web. One thing I find confusing is that I could swear there used to be an option to opt out of messages from people who aren't on your friends list or in one of your Networks. Or was that just my imagination?

The other kind of unsolicited mail that is totally wrecking my Facebook experience is unsolicited friend requests from Facebook applications. These aren't just regular friend requests; it seems that every application a user adds can make friend requests specific to that application. I'm getting friend requests from My Questions and Likeness on an almost daily basis with no way to permanently ignore friend requests from these applications.

I guess Clay Shirky's old saying is true: the definition of social software is stuff that gets spammed.


 

Categories: Social Software

August 24, 2007
@ 06:44 PM

Recently, my status message on Facebook was I'm now convinced microformats are a stupid idea. Shortly afterwards I got a private message from Scott Beaudreau asking me to clarify my statement. On reflection, I realize that what I find stupid is when people suggest using microformats and screen scraping techniques instead of utilizing an API when the situation calls for one. For example, the social network portability proposal on the microformats wiki states

The "How To" for social network profile sites that want to solve the above problems and achieve the above goals.

  1. Publish microformats in your user profiles:
    1. implement hCard on user profile pages. See hcard-supporting-profiles for sites that have already done this.
    2. implement hCard+XFN on the list of friends on your user profile pages. See hcard-xfn-supporting-friends-lists for sites that already do this (e.g. Twitter, http://twitter.com/).
  2. Subscribe to microformats for your user profiles:
    1. when signing up a new user:
      1. let a user fill out and "auto-sync" from one of their existing hcard-supporting-profiles, their name, their icon etc. Satisfaction Inc already supports this. (http://microformats.org/blog/2007/06/21/microformatsorg-turns-2/)
      2. let a user fill out and "auto-sync" their list of friends from one of their existing hCard+XFN supporting friends lists. Dopplr.com already supports this.

It boggles my mind to see the suggestion that applications should poll HTML pages to do data synchronization instead of utilizing an API. Instead of calling friends.get, why don't we just grab the entire friends page and parse out the handful of useful data we actually need?
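To be clear about what's being proposed, here is roughly what “subscribing” to someone's friends list via microformats entails. The regex-based scraper below is a deliberately crude sketch with an invented URL, but the shape of the problem is real: you fetch and parse a whole HTML page to recover what a single friends.get call would return as structured data.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Hypothetical XFN scraper: pull down an entire profile page and
    // fish out anchors marked rel="friend". Any page redesign (or an
    // href that precedes the rel attribute) silently breaks it.
    public class FriendListScraper {
        private static final Pattern XFN_FRIEND = Pattern.compile(
            "<a[^>]*rel=\"[^\"]*friend[^\"]*\"[^>]*href=\"([^\"]+)\"");

        public static List<String> friends(String profileUrl) throws Exception {
            BufferedReader in = new BufferedReader(new InputStreamReader(
                new URL(profileUrl).openStream(), "UTF-8"));
            StringBuilder html = new StringBuilder();
            for (String line; (line = in.readLine()) != null; ) {
                html.append(line).append('\n');
            }
            in.close();
            List<String> urls = new ArrayList<String>();
            Matcher m = XFN_FRIEND.matcher(html);
            while (m.find()) {
                urls.add(m.group(1)); // a profile URL is all we recover
            }
            return urls;
        }
    }

And that's before you get to polling etiquette, pagination, or the fact that you recover URLs rather than stable user ids.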

There are a number of places where microformats are a good fit, especially in situations where the client application has to parse the entire HTML document anyway. Examples include using hCard to enable features like Live Clipboard or using hReview and hCalendar to help microformat search engines. However, using them as a replacement for an API or an RSS feed is like using boxing gloves instead of oven mitts when baking a pie.

If all you have is a hammer, everything looks like a nail.

Now playing: D12 - American Psycho II (feat. B-Real)


 

David Berlind has a blog post entitled If ‘you’ build OpenID, will ‘they’ come? where he writes

In case you missed it last week, Microsoft is taking another swing at the idea of single sign-on technologies. Its first, Passport, failed miserably. Called Windows Live ID (following in the footsteps of everything else “Windows Live”), I guess you could call this “Son of Passport” or “Passport: The Sequel.” The question is (for Microsoft as much as anyone else), down the road, will we have “Passport The Thirteenth”?

When I saw the announcement, the first thought that went through my mind was whether or not Microsoft’s WLID service would also “double” as an OpenID node. OpenID is another single sign-on specification that has been gaining traction in open circles (no surprise there) and the number of OpenID nodes (providers of OpenID-based authentication) is growing.

In light of the WLID announcement from Microsoft and given the discussions that the Redmond company’s chief identity architect Kim Cameron and I have had (see After Passport, Microsoft is rethinking identity) about where Microsoft has to go to be more of an open player on the identity front, I tried to track him down to get an update on why WLID and OpenID don’t appear to be interoperable (I could be wrong on this).

Somewhere along the line, people have gotten the mistaken impression that the Windows Live ID Web Authentication SDK is about single sign-on. It isn’t. The primary reason for opening up our authentication system is to let non-Microsoft sites build and host widgets that access a user’s data stored within Windows Live or MSN services. This is spelled out in the recent blog post about the release on the Windows Live ID team blog, which is excerpted below.

The benefits of incorporating Windows Live ID into your Web site include:

 

· The ability to use Windows Live gadgets, APIs and controls to incorporate authenticated Windows Live services into your site.

For example, the recently announced collaboration between Windows Live and Bebo requires a way for Windows Live users on Bebo to authenticate themselves and utilize Windows Live services from the Bebo site. That’s what the Windows Live ID Web Authentication SDK is meant to enable.

Although the technological approaches are similar, the goal is completely different from that of OpenID, which is meant to be a single sign-on system.

Now playing: Mase - Return Of The Murda


 

Categories: Windows Live

This morning there were a number of news stories about a collaboration between Windows Live and Bebo. These news stories didn’t tell the whole story. Articles such as C|Net’s Bebo's new instant messaging is Microsoft-flavored and TechCrunch’s Windows Live Messaging Comes to Bebo give the impression that the announcement was about instant messaging. However, there was much more to it. The agreement between Windows Live and Bebo spans two areas: social network portability and interop between Web-based IM and Windows Live Messenger.

  1. Social Network Portability: As I’ve mentioned before, a common practice among social networking sites is to ask users for the log-in credentials of their email accounts so that the social networking site can screen scrape the HTML of the address book and import the user’s contact list. There are a number of problems with this approach, the main one being that the user is simply moving data from one silo to another without being able to get their contact list back out of the social network and into their email client. There’s also the problem that this approach makes users more susceptible to phishing, since it encourages them to enter their log-in credentials on random sites. Finally, the user isn’t in control of how much data is pulled from their address book by the social network or how often it is pulled.

    The agreement between Windows Live and Bebo enables users to utilize a single contact list across both sites. Their friends on Bebo will be available as their contacts in Windows Live and vice versa. This integration is facilitated by the Windows Live Contacts API, which implements a user-centric access control model where the user grants applications permission to access and otherwise manipulate their contact list (a rough sketch of the general pattern follows this list).

  2. Web-based IM and Windows Live Messenger interoperability: Users of Bebo who are also Windows Live Messenger users can opt in to getting notifications from Bebo as alerts in their desktop IM client. In addition, these users can add an “IM Me” button to their profile, which allows people browsing the profile on the Web to initiate an IM conversation via a Microsoft-provided Web IM widget on the Bebo website that communicates with the Windows Live Messenger client on the profile owner’s desktop.

    The above scenarios were demoed at this year's MIX '07 conference during the session Broaden Your Market with Windows Live. The current plan is for the APIs for interacting with the Windows Live Messenger service, as well as the IM widgets that can be embedded within a non-Microsoft website to power this scenario, to be available via http://dev.live.com in the near future.
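For what it's worth, here is a rough sketch of the user-centric grant model in the abstract. This is not the actual Windows Live Contacts API, just an illustration of the pattern: the user hands a site a scoped, expiring grant instead of a password.

    // Hypothetical illustration of user-centric delegated access.
    // The user authorizes a specific application for a specific scope,
    // and the grant expires; contrast with handing over a password,
    // which is all-powerful and lives until changed.
    public class DelegationGrant {
        private final String appId;       // e.g. "bebo"
        private final String scope;       // e.g. "contacts.read"
        private final long expiresAtMs;   // grants expire; passwords don't

        public DelegationGrant(String appId, String scope, long expiresAtMs) {
            this.appId = appId;
            this.scope = scope;
            this.expiresAtMs = expiresAtMs;
        }

        public boolean permits(String requestingApp, String requestedScope) {
            return appId.equals(requestingApp)
                && scope.equals(requestedScope)
                && System.currentTimeMillis() < expiresAtMs;
        }
    }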

At the end of the day, it is all about putting users in control. We don’t believe that a user’s social graph should be trapped in a roach motel of our creation. Instead users should be able to export their contact lists from our service on their own terms and should be able to grow their social graph within Windows Live without having to exclusively use our services.

It’s your data, not ours. If you want it, you can have it. Hopefully, the rest of the industry comes around to this sort of thinking sooner rather than later.

Stay tuned, there’s more to come.

Now playing: Gucci Mane - So Icy (feat. Young Jeezy)


 

I just spent an hour doing some research in response to Sam Ruby's post Sousveillance, where he wonders whether some of the descriptions of Facebook as a social graph roach motel (i.e. information about your relationships goes in, nothing comes out) are accurate. Sam writes

Dare seems to think that the root problem is oppression by the “man”.  In this case, a 23 year old.  Brad seems to view this as a technical problem.

I wonder what I wrote that gave that impression, especially in the linked post. In that post, I was simply giving some advice about the kind of social problems you will face when you treat unifying social graphs across different contexts and applications as a purely technical problem. If anyone is whining about oppression by Facebook, it would be Brad’s original manifesto, which mentions the site by name over a dozen times.

Data point 1: one day when logging onto Facebook, I saw an offer to scan my AIM contacts and invite the ones that had Facebooks to be friends.  I unselected a few, and then clicked on submit.  Within hours, my network expanded greatly.  IM ids serve as useful foreign keys.

Like lots of popular social networking services, but unlike Windows Live Spaces, Facebook is fond of violating the terms of use of various email providers by screen scraping user address books and contact lists after collecting users’ log-in credentials.

However, Facebook prevents this from being done to them by showing email addresses only as images, which expire after a couple of minutes due to the use of session keys. I once considered writing an application to import my Facebook contacts into Outlook but gave up once I realized I couldn’t find any free, off-the-shelf OCR APIs that I could use.

I did find an article on CodeProject about rolling your own OCR via neural networks, which seems promising, but I don't have the free time to mess with that right now. Maybe later in the year. Sam also writes

Data point 2: Facebook is a platform with an API.  If there is a need, it seems to me that one could develop an application using FQL to pull one’s friend list out of Facebook and share it externally.  The fact that I don’t know of such an application means one of four things is happening: (1) it exists, but I don’t know about it, (2) despite the alleged overwhelming demand for this feature, and obvious commercial opportunities it opens up, it hadn’t occurred to anyone, (3) I’m reading the documentation wrong, and it isn’t possible for applications to obtain access to one’s own Facebook ID for use as a foreign key, or (4) the demand simply isn’t there.

Or (5) the information returned by FQL about a user contains no contact information (no email address, no IM screen names, no telephone numbers, no street address), so it is pretty useless as a way to utilize one’s friends list with applications besides Facebook, since there is no way to cross-reference your friends using any personally identifiable association that would exist in another service.
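For the curious, here is the sort of query we're talking about; the uid is a placeholder and the field list reflects my reading of the FQL documentation at the time:

    // Sketch: the FQL needed to pull a friend list, circa 2007.
    public class FqlFriendListSketch {
        // The friend table yields pairs of opaque Facebook user ids,
        // and the user table rows for those ids carry names and picture
        // URLs, but no email address, IM screen name or phone number.
        static final String QUERY =
            "SELECT uid, name, pic FROM user " +
            "WHERE uid IN (SELECT uid2 FROM friend WHERE uid1 = 12345)";
    }

Without a single externally meaningful identifier in the result set, there is nothing to join against the contact list in your email client or IM service.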

When it comes to contact lists (i.e. the social graph), Facebook is a roach motel. Lots of information about user relationships goes in, but there’s no way for users or applications to get it out easily. Whenever an application like FacebookSync comes along which helps users do this, it is quickly shut down for violating their Terms of Use. Hypocrisy? Indeed.

Now playing: Lil Boosie & Webbie - Wipe Me Down (remix) (feat. Jim Jones, Fat Joe, Jadakiss & Foxx)


 

Categories: Social Software