If you go to http://dev.live.com/liveid you’ll see links to Windows Live ID for Web Authentication and Client Authentication, which enable developers to build Web or desktop applications that authenticate users via Windows Live ID. The desktop SDK is still in alpha but the Web APIs have hit v1. You can get the details from the Windows Live ID team blog post entitled Windows Live ID Web Authentication SDK for Developers Is Released which states  

Windows Live ID Web Authentication allows sites that want to integrate with the Windows Live services and platform to do so. We are releasing a set of tools that make this integration easier than ever.  

Web Authentication works by sending your users to the Windows Live ID sign-in page by means of a specially formatted link. The service then directs them back to your Web site along with a unique, site-specific identifier that you can use to manage personalized content, assign user rights, and perform other tasks for the authenticated user. Sign-in and account management is performed by Windows Live ID, so you don't have to worry about implementing these details.

Included with the Web Authentication software development kit (SDK) are QuickStart sample applications in the ASP.NET, Java, Perl, PHP, Python, and Ruby programming languages. You can get the sample applications for this SDK from the Web Authentication download page on Microsoft.com.
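The flow described in the excerpt boils down to two pieces on the relying site: a specially formatted link that sends the user to the sign-in page, and a handler that unpacks the token the service sends back. Here is a rough C# sketch of that general pattern. The endpoint URL, parameter names and token format below are placeholders of mine, not the actual Windows Live ID protocol details, and a real handler would also validate the token's signature using the secret shared with the service.

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch of the redirect-and-token pattern. "SignInUrl" and the
// query parameter names are hypothetical, not the real SDK or protocol.
public class WebAuthSketch
{
    const string SignInUrl = "https://login.live.com/wlogin.srf"; // placeholder endpoint

    // Build the specially formatted link that sends the user off to sign in.
    public static string BuildSignInLink(string appId, string returnUrl)
    {
        return SignInUrl + "?appid=" + Uri.EscapeDataString(appId)
                         + "&returl=" + Uri.EscapeDataString(returnUrl);
    }

    // Parse the token posted back to your site into name/value pairs.
    // A real implementation would verify the token's signature first.
    public static Dictionary<string, string> ParseToken(string token)
    {
        Dictionary<string, string> pairs = new Dictionary<string, string>();
        foreach (string pair in token.Split('&'))
        {
            string[] kv = pair.Split(new char[] { '=' }, 2);
            if (kv.Length == 2)
                pairs[Uri.UnescapeDataString(kv[0])] = Uri.UnescapeDataString(kv[1]);
        }
        return pairs;
    }

    public static void Main()
    {
        string link = BuildSignInLink("0123456789ABCDEF", "https://example.com/auth-handler");
        Console.WriteLine(link);

        Dictionary<string, string> token = ParseToken("uid=abc123&ts=1187000000");
        Console.WriteLine(token["uid"]); // the unique, site-specific identifier
    }
}
```

The key point of the design is the last line: the identifier you get back is site-specific, so you can key your own user records off it without ever seeing the user's Windows Live credentials.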

As one of the folks who's been championing opening up our authentication platform to Web developers, this is good news. I'm not particularly sold on using Windows Live ID as a single sign-on instead of sites managing their own identities, but I do think that now that we allow non-Microsoft applications (e.g. mashups, widgets, etc) to act on behalf of Windows Live users via this SDK, there'll be a burst of new APIs coming out of Windows Live that will allow developers to build applications that manipulate a user's data stored within Windows Live services.

Opening up our platform will definitely be good for users and will be good for the Web as well. Kudos to the Windows Live ID folks for getting this out.

Now playing: Nappy Roots - Po' Folks


 

Categories: Web Development | Windows Live

How social networks handle multiple social contexts (e.g. my work life versus my personal life) has been on my mind this week. Today I was in a meeting where someone mentioned that most of the people he knows have profiles on both MySpace and Facebook because their real friends are on MySpace while their work friends are on Facebook. This reminded me that my wall currently has a mix of posts from Robert Scoble about random geek crap and posts by friends from Nigeria who I haven’t talked to in years catching up with me.

For some reason I find this interleaving of my personal relationships and my public work-related persona somewhat unsettling. Then there’s this post by danah boyd, loss of context for me on Facebook, which contains the following excerpt

Anyhow, I know folks are still going wheeeeee about Facebook. And I know people generally believe that growth is nothing but candy-coated goodness. And while I hate using myself as an example (cuz I ain't representative), I do feel the need to point out that context management is still unfun, especially for early adopters, just as it has been on every other social network site. It sucks for teens trying to balance mom and friends. It sucks for college students trying to have a social life and not piss off their profs. It sucks for 20-somethings trying to date and balance their boss's presence. And it sucks for me.

I can't help but wonder if Facebook will have the same passionate college user base next school year now that it's the hip adult thing. I don't honestly know. But so far, American social network sites haven't supported multiple social contexts tremendously well. Maybe the limited profile and privacy settings help, but I'm not so sure. Especially when profs are there to hang out with their friends, not just spy on their students. I'm wondering how prepared students are to see their profs' Walls filled with notes from their friends. Hmmm...

As usual, danah hits the nail on the head. There are a number of ways I can imagine social network sites doing a better job of supporting multiple social contexts, but they all involve requiring some work from the user to set up their social contexts, especially if the sites plan to become a permanent fixture in their users’ lives. However, most social network sites seem more interested in being the equivalent of popular nightclubs (e.g. MySpace) than in becoming a social utility in the way that email and instant messaging have. Facebook is the first widely popular social networking site I suspect will buck this trend. If there is one place where there is still major room for improvement in its user experience [besides the inability to opt out of all the annoying application requests], it’s here. This is the one area where the site is weak, and if my experience and danah’s observations are anything to go by, the site will eventually become less of a social software utility and more of a place to hang out, and we know what eventually happens to sites like that.  

Now playing: Gym Class Heroes - New Friend Request


 

Categories: Social Software

Matt Cutts has a blog post entitled Closing the loop on malware where he writes

Suppose you worked at a search engine and someone dropped a high-accuracy way to detect malware on the web in your lap (see this USENIX paper [PDF] for some of the details)? Is it better to start protecting users immediately, or to wait until your solution is perfectly polished for both users and site owners? Remember that the longer you delay, the more users potentially visit malware-laden web pages and get infected themselves.

Google chose to protect users first and then quickly iterate to improve things for site owners. I think that’s the right choice, but it’s still a tough question. Google started flagging sites where we detected malware in August of last year.

When I got home yesterday, my fiancée informed me that her laptop was infected with spyware. I asked how it happened and she mentioned that she’d been searching for sites to pimp her MySpace profile. Since we’d talked in the past about visiting suspicious websites, I wondered why she’d chosen to ignore my advice. Her response? “Google didn’t put the This Site May Harm Your Computer warning on the link so I thought the site was safe. Google failed me.”

I find this interesting on several levels. There’s the fact that this feature is really useful and engenders a sense of trust in Google’s users. Then there’s the palpable sense of betrayal on the user’s part when Google’s “not yet perfectly polished” algorithms for detecting malicious software fail to flag a bad site. Finally, there’s the observation that instead of blaming Microsoft, which produces both the operating system and the Web browser that were infected by the spyware, she chose to blame Google, which produced the search engine that led her to the malicious site. Why do you think this is? I have my theories…

Now playing: Hurricane Chris - Ay Bay Bay


 

Categories: Technology

August 14, 2007
@ 03:19 AM

Recently I've seen a bunch of people I consider to be really smart sing the praises of Hadoop, such as Sam Ruby in his post Long Bets, Tim O’Reilly in his post Yahoo!’s Bet on Hadoop, and Bill de hÓra in his post Phat Data. Although I haven’t dug too deeply into Hadoop, due to the fact that the legal folks at work will chew out my butt if I did, there are a number of little niggling doubts that make me wonder if this is the savior of the world that all these geeks claim it will be. Here are some random thoughts that have made me skeptical

  1. Code Quality: Hadoop was started by Doug Cutting, who created Lucene and Nutch. I don’t know much about Nutch but I am quite familiar with Lucene because we adopted it for use in RSS Bandit. This is probably the worst decision we’ve made in the entire history of RSS Bandit. Not only are the APIs a usability nightmare because they were poorly hacked out and never refactored, the code is also notoriously flaky when it comes to dealing with concurrency, so the common advice is to never use multiple threads to do anything with Lucene.

  2. Incomplete Specifications: Hadoop’s MapReduce and HDFS are re-implementations of Google’s MapReduce and Google File System (GFS) technologies. However it seems unwise to base a project on research papers that may not reveal all the details needed to implement the service, for competitive reasons. For example, the Hadoop documentation is silent on how it plans to deal with the election of a primary/master server among peers, especially in the face of machine failure, which Google solves using the Chubby lock service. It just so happens that there is a research paper that describes Chubby, but how many other services within Google’s data centers do MapReduce and GFS depend on which are yet to have their own public research paper? Speaking of which, where are the Google research papers on their message queueing infrastructure? You know they have to have one, right? How about their caching layer? Where are the papers on Google’s version of memcached? Secondly, what is the likelihood that Google will be as forthcoming with these papers now that they know competitors like Yahoo! are knocking off their internal architecture?

  3. A Search Optimized Architecture isn’t for Everyone: One of the features of MapReduce is that one can move the computation close to the data, because “Moving Computation is Cheaper than Moving Data”. This is especially important when you are doing lots of processing-intensive operations, such as the kind of data analysis that goes into creating the Google search index. However, if you’re a site whose main tasks are reading and writing lots of data (e.g. MySpace), or sending lots of transient messages back and forth while ensuring that they always arrive in the right order (e.g. Google Talk), then these optimizations and capabilities aren’t much use to you and a different set of tools would serve you better. 
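For readers who haven't looked at the MapReduce paper, the programming model the posts above keep referring to fits in a few lines: a map function emits key/value pairs, a shuffle phase groups them by key, and a reduce function folds each group into a result. The toy word counter below is a single-process C# sketch of the model only; it has nothing to do with Hadoop's actual APIs or distributed execution.

```csharp
using System;
using System.Collections.Generic;

// A toy, in-memory illustration of the MapReduce programming model.
public class MapReduceSketch
{
    // Map: one input record -> zero or more (word, 1) pairs.
    static IEnumerable<KeyValuePair<string, int>> Map(string line)
    {
        foreach (string word in line.Split(' '))
            if (word.Length > 0)
                yield return new KeyValuePair<string, int>(word.ToLowerInvariant(), 1);
    }

    // Reduce: fold all values emitted for one key into a single result.
    static int Reduce(string word, List<int> counts)
    {
        int total = 0;
        foreach (int c in counts) total += c;
        return total;
    }

    public static Dictionary<string, int> WordCount(IEnumerable<string> lines)
    {
        // "Shuffle" phase: group the mapped pairs by key.
        Dictionary<string, List<int>> groups = new Dictionary<string, List<int>>();
        foreach (string line in lines)
            foreach (KeyValuePair<string, int> pair in Map(line))
            {
                if (!groups.ContainsKey(pair.Key)) groups[pair.Key] = new List<int>();
                groups[pair.Key].Add(pair.Value);
            }

        Dictionary<string, int> results = new Dictionary<string, int>();
        foreach (KeyValuePair<string, List<int>> group in groups)
            results[group.Key] = Reduce(group.Key, group.Value);
        return results;
    }

    public static void Main()
    {
        Dictionary<string, int> counts =
            WordCount(new string[] { "more of more", "more data" });
        Console.WriteLine(counts["more"]); // 3
    }
}
```

The reason the model scales is that Map runs independently per record and Reduce independently per key, so both phases can be spread across machines; that is also why it buys you little if your workload is transactional reads and writes rather than batch analysis.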

I believe there are a lot of lessons that can be learned from the distributed systems that power the services behind Google, Amazon and the like. However I think it is waaaay too early to be crowning some knock-off of one particular vendor's internal infrastructure as the future of distributed computing as we know it.

Seriously.

PS: Yes, I realize that Sam and Bill are primarily pointing out the increasing importance of parallel programming as it relates to the dual trends of (i) almost every major website that ends up dealing with lots of data and lots of traffic eventually eschewing relational database features like joins, normalization, triggers and transactions because they are not cost effective and (ii) the increasingly large amounts of data that we generate and now have to process due to falling storage costs. Even though their mentions of Hadoop are incidental, it still seems to me that it’s almost become a meme, one which deserves more scrutiny before we jump on that particular bandwagon. 

Now playing: N.W.A. - Appetite For Destruction


 

Categories: Platforms

It seems like I was just blogging about Windows Live Hotmail coming out of beta and it looks like there is already a substantial update to the service being rolled out. From the Windows Live Hotmail team’s blog post entitled August: Hotmail will soon bring you more of your requests, better performance we learn

We went out of beta in May, and we’re already releasing something new. Today, these new features will begin to roll out gradually to all our customers over the next few weeks, so if you don’t immediately see them, be patient, they’re coming!

More storage! Just when you were wondering how you’d ever fill up 2 or 4 GB of mail, we’ve given you more storage. Free users will get 5 GB and paid users will get 10 GB of Hotmail storage.

Contacts de-duplication: Do you have five different entries for the same person in your Contacts? Yeah, me too, but not anymore. We’re the first webmail service to roll out “contacts de-duplication”. If you get a message from “Steve Kafka” and click “add contact” but there’s already a Steve Kafka, we’ll let you know and let you add Steve’s other e-mail address to your existing “Steve Kafka” contact entry. We’re just trying to be smarter to make your life easier and faster. There’s also a wizard you can run to clean up your existing duplicate contacts.

Accepting meeting requests: If you receive a meeting request, such as one sent from Outlook, you can now click “accept” and have it added to your Calendar. This had existed for years in MSN Hotmail, and we’re adding it to Windows Live Hotmail now.

You can turn off the Today page (if you want to). If you’d rather see your inbox immediately upon login, you have the option to turn off the page of MSN news (called the Today page). The choice is yours. 

A nice combination of new features and pet peeves fixed with this release. The contacts duplication issue is particularly annoying and one I’ve wanted to see fixed for quite a while.

So far we’ve seen updates to Spaces, SkyDrive, and now Mail within the past month. The summer of Windows Live is on and so far it’s looking pretty good. I wonder what else Windows Live has up its sleeve?

Now playing: P. Diddy - That's Crazy (remix) (feat. Black Rob, Missy Elliott, Snoop Dogg & G-Dep)


 

Categories: Windows Live

There was an article on Ars Technica this weekend entitled Google selleth then taketh away, proving the need for DRM circumvention which is yet another example of how users can be screwed when they bet on a platform that utilizes DRM. The article states

It's not often that Google kills off one of its services, especially one which was announced with much fanfare at a big mainstream event like CES 2006. Yet Google Video's commercial aspirations have indeed been terminated: the company has announced that it will no longer be selling video content on the site. The news isn't all that surprising, given that Google's commercial video efforts were launched in rather poor shape and never managed to take off. The service seemed to only make the news when embarrassing things happened.

Yet now Google Video has given us a gift—a "proof of concept" in the form of yet another argument against DRM—and an argument for more reasonable laws governing copyright controls.

Google contacted customers late last week to tell them that the video store was closing. The e-mail declared, "In an effort to improve all Google services, we will no longer offer the ability to buy or rent videos for download from Google Video, ending the DTO/DTR (download-to-own/rent) program. This change will be effective August 15, 2007."

The message also announced that Google Checkout would issue credits in an amount equal to what those customers had spent at the Google Video store. Why the quasi-refunds? The kicker: "After August 15, 2007, you will no longer be able to view your purchased or rented videos."

See, after Google takes its video store down, its Internet-based DRM system will no longer function. This means that customers who have built video collections with Google Video offerings will find that their purchases no longer work. This is one of the major flaws in any DRM system based on secrets and centralized authorities: when these DRM data warehouses shut down, the DRM stops working, and consumers are left with useless junk.

Furthermore, Google is not refunding the total cost of the videos. To take advantage of the credit Google is offering, you have to spend more money, and furthermore, you have to spend it with a merchant that supports Google Checkout. Meanwhile, the purchases you made are now worthless.

This isn’t the first time nor will it be the last time that some big company gives up on a product strategy tied to DRM, thus destroying thousands of dollars in end user investments. I wonder how many more fiascos it will take before consumers wholeheartedly reject DRM or government regulators are forced to step in.

 Now playing: Panjabi MC - Beware (feat. Jay-Z)


 

Categories: Technology

Disclaimer: This blog post does not reflect future product announcements, technical strategy or advice from my employer. Disregard this disclaimer at your own risk.

In my previous post Some Thoughts on Open Social Networks, I gave my perspective on various definitions of "open social network" in response to the Wired article Slap in the Facebook: It's Time for Social Networks to Open Up. However there was one aspect of the article that I overlooked when I first read it. The first page of the article ends with the following exhortation.

We would like to place an open call to the web-programming community to solve this problem. We need a new framework based on open standards. Think of it as a structure that links individual sites and makes explicit social relationships, a way of defining micro social networks within the larger network of the web.

This is a problem that interests me personally. I have a Facebook profile while my fiancée has a MySpace profile. Since I’m now an active user of Facebook, I’d like her to be able to be part of my activities on the site such as being able to view my photos, read my wall posts and leave wall posts of her own. I could ask her to create a Facebook account, but I already asked her to create a profile on Windows Live Spaces so we could be friends on that service and quite frankly I don’t think she’ll find it reasonable if I keep asking her to jump from social network to social network because I happen to try out a lot of these services as part of my day job. So how can this problem be solved in the general case?

OpenID to the Rescue

This is exactly the kind of problem that OpenID was designed to solve. The first thing to do is to make sure we all have the same general understanding of how OpenID works. It's basically the same model as Windows Live ID (formerly Microsoft Passport), Google Account Authentication for Web-Based Applications and Yahoo! Browser Based Authentication. A website redirects you to your identity provider, you authenticate yourself (i.e. login) on your identity provider's site and then are redirected back to the referring site along with your authentication ticket. The ticket contains some information about you that can be used to uniquely identify you as well as some user data that may be of interest to the referring site (e.g. username).

So how does this help us? Let’s say MySpace was an OpenID provider, which is a fancy way of saying that I can use my MySpace account to login to any site that accepts OpenIDs. And now let’s say Facebook was a site that accepted OpenIDs as an identification scheme. This means that I could add my fiancée to the access control list of people who can view and interact with my profile on Facebook by using the URL of her MySpace profile as my identifier for her. So when she tries to access my profile for the first time, she is directed to the Facebook login page where she has the option of logging in with her MySpace credentials. When she chooses this option she is directed to the MySpace login page. After logging into MySpace with the proper credentials, she is redirected back to Facebook and gets a pseudo-account on the service which allows her to participate in the site without having to go through an account creation process.

Now that the user has a pseudo-account on Facebook, wouldn’t it be nice if when someone clicked on them they got to see a Facebook profile? This is where OpenID Attribute Exchange can be put to use. You could define a set of required and optional attributes that are exchanged as part of social network interop using OpenID. So we can insert an extra step [which may be hidden from the user] after the user is redirected to Facebook after logging into MySpace, where the user’s profile information is requested. Here is an example of the kind of request that could be made by Facebook after a successful log-in attempt by a MySpace user.

openid.ns.ax=http://openid.net/srv/ax/1.0
openid.ax.type.fullname=http://example.com/openid/sn_schema/fullname
openid.ax.type.gender=http://example.com/openid/sn_schema/gender
openid.ax.type.relationship_status=http://example.com/openid/sn_schema/relationship_status
openid.ax.type.location=http://example.com/openid/sn_schema/location
openid.ax.type.looking_for=http://example.com/openid/sn_schema/looking_for
openid.ax.type.fav_music=http://example.com/openid/sn_schema/fav_music
openid.ax.count.fav_music=3
openid.ax.required=fullname,gender,location
openid.ax.if_available=relationship_status,looking_for,fav_music

which could return the following results

openid.ns.ax=http://openid.net/srv/ax/1.0
openid.ax.type.fullname=http://example.com/openid/sn_schema/fullname
openid.ax.type.gender=http://example.com/openid/sn_schema/gender
openid.ax.type.relationship_status=http://example.com/openid/sn_schema/relationship_status
openid.ax.type.location=http://example.com/openid/sn_schema/location
openid.ax.type.looking_for=http://example.com/openid/sn_schema/looking_for
openid.ax.type.fav_music=http://example.com/openid/sn_schema/fav_music
openid.ax.value.fullname=Jenna
openid.ax.value.gender=F
openid.ax.value.relationship_status=Single
openid.ax.value.location=Seattle, WA, United States
openid.ax.value.looking_for=Friends
openid.ax.value.fav_music=hiphop,country,pop
openid.ax.update_url=http://www.myspace.com/url_to_send_changes_made_to_profile

With the information returned by MySpace, one can now populate a placeholder Facebook profile for the user.
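On the Facebook side, turning a response like the one above into a placeholder profile could be as simple as picking out the openid.ax.value.* pairs. The sketch below is illustrative only; a real implementation would resolve attribute aliases through the openid.ax.type.* entries rather than assume the aliases used in the example.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical relying-party code: extract profile attributes from a parsed
// attribute exchange response (already split into key/value pairs).
public class AxResponseSketch
{
    public static Dictionary<string, string> GetProfileValues(
        IDictionary<string, string> response)
    {
        Dictionary<string, string> profile = new Dictionary<string, string>();
        const string prefix = "openid.ax.value.";
        foreach (KeyValuePair<string, string> pair in response)
            if (pair.Key.StartsWith(prefix))
                profile[pair.Key.Substring(prefix.Length)] = pair.Value; // e.g. "fullname"
        return profile;
    }

    public static void Main()
    {
        Dictionary<string, string> response = new Dictionary<string, string>();
        response["openid.ns.ax"] = "http://openid.net/srv/ax/1.0";
        response["openid.ax.value.fullname"] = "Jenna";
        response["openid.ax.value.location"] = "Seattle, WA, United States";

        Dictionary<string, string> profile = GetProfileValues(response);
        Console.WriteLine(profile["fullname"]); // Jenna
    }
}
```

The openid.ax.update_url field in the example response is the interesting bit for keeping the placeholder fresh: the provider can push changes to that URL so the pseudo-profile doesn't go stale.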

Why This Will Never Happen

The question at the tip of your tongue is probably “If we can do this with OpenID today, how come I haven’t heard of anyone doing this yet?”. As usual when it comes to interoperability, the primary reasons for the lack of it are business related and not technical. When you look at the long list of OpenID providers, you may notice that there is no similarly long list of sites that accept OpenID credentials. In fact, there is no such list of sites readily available because their number is an embarrassing fraction of the number of sites that act as OpenID providers. Why this discrepancy?

If you look around, you’ll notice that the major online services such as Yahoo! via BBAuth, Microsoft via Windows Live ID, and AOL via OpenID all provide ways for third party sites to accept user credentials from their sites. This increases the value of having an account on these services because it means that with a Windows Live ID I can not only log-in to various Microsoft properties across MSN and Windows Live but also non-Microsoft sites like Expedia. This increases the likelihood that I’ll get an account with the service, which makes it more likely that I’ll be a regular user of the service, which means $$$. On the other hand, accepting OpenIDs does the exact opposite. It actually reduces the incentive to create an account on the site, which reduces the likelihood I’ll be a regular user of the site, and less $$$. Why do you think there is no OpenID link on the AOL sign-in page even though the company is quick to brag about creating 63 million OpenIDs?

Why would Facebook implement a feature that reduced their user growth via network effects? Why would MySpace make it easy for sites to extract user profile information from their service? Because openness is great? Yeah…right.

Openness isn’t why Facebook is currently being valued at $6 billion nor is it why MySpace is currently expected to pull in about half a billion in revenue this year. These companies are doing just great being walled gardens and thanks to network effects, they will probably continue to do so unless something really disruptive happens.   

PS: Marc Canter asks if I can attend the Data Sharing Summit between Sept. 7th – 8th. I’m not sure I can since my wedding + honeymoon is next month. Consider this my contribution to the conversation if I don’t make it.

Now playing: Wu-Tang Clan - Can It Be All So Simple


 

Today some guy in the hallway mistook me for the other black guy that works in our building. Like we all look alike. Or there can only be one black guy that works in a building at Microsoft. Must be a quota. :)

Then I find this video in my RSS feeds and surprisingly I find my name mentioned in the comment threads.

Too bad it wasn't funny.


 

The speculation on LiveSide was right. Windows Live Folders is now Windows Live SkyDrive. You can catch the announcement on the product team's blog post Introducing Windows Live SkyDrive! which states

It’s been a month and a half since our first release, and today we’re making three major announcements!

First, we’re happy to announce our new name:



Second, we’ve been listening intently to your feedback and suggestions, and based directly on that feedback, we’re excited to bring you our next release, featuring:

  • An upgraded look and feel — new graphics to go along with your new features!
  • "Also on SkyDrive" — easily get back to the SkyDrives you’ve recently visited
  • Thumbnail images — we heard you loud and clear, and now you can see thumbnails of your image files
  • Drag and drop your files — sick of our five-at-a-time upload limit? Drag and drop your files right onto your SkyDrive
  • Embed your stuff anywhere — with just a few clicks, post your files and folders anywhere you can post html

Third, we’re excited to introduce SkyDrive in two additional regions: UK and India.

It's great to see this getting out to the general public. It's been pretty sweet watching this come together over the past year. I worked on some of the storage and permissioning platform aspects of this last year and I was quite impressed by a lot of the former members of the Microsoft Max team who are now working on this product.

We definitely have a winner here.  Check it out.

UPDATE: Someone asked for a video or screencast of the site in action. There's one on the Windows Vista team blog. It is embedded below


Demo: Windows Live SkyDrive

Now playing: 50 Cent - Outta Control (remix) (feat. Mobb Deep)


 

Categories: Windows Live

This weekend, I finally decided to step into the 21st century and began the process of migrating RSS Bandit to v2.0 of the .NET Framework. In addition, we've also moved our source code repository from CVS to Subversion and so far it's been a marked improvement. Since the .NET Framework is currently at v3.0 and v3.5 is in beta 1, I'm fairly out of date when it comes to the pet peeves in my favorite programming tools. At least one of my pet peeves has been fixed: in Visual Studio 2005 I finally have an IDE where "Find References to this Method" actually works. On the flip side, the introduction of generics has added a lot more frustrating moments than I expected. By now most .NET developers have seen the dreaded

Cannot convert from 'System.Collections.Generic.List<subtype of T>' to 'System.Collections.Generic.List<T>'

For those of you who aren't familiar with C# 2.0, here are examples of code that works and code that doesn't work. The difference is often subtle enough to be quite irritating when you first encounter it.

WORKS! - Array[subtype of T] implicitly cast to Array[T]

using System;
using Cybertron.Transformers;

public class TransformersTest{

  public static void GetReadyForBattle(Transformer[] robots){
    foreach(Transformer bot in robots){
        if(!bot.InRobotMode)
            bot.Transform();
    }
  }

  public static void Main(string[] args){

    Autobot OptimusPrime = new Autobot();
    Autobot[] autobots = new Autobot[1];
    autobots[0] = OptimusPrime;

    Decepticon Megatron = new Decepticon();
    Decepticon[] decepticons = new Decepticon[1];
    decepticons[0] = Megatron;

    GetReadyForBattle(decepticons);
    GetReadyForBattle(autobots);

  }
}

DOESN'T WORK - List<subtype of T> implicitly cast to List<T>

using System;
using System.Collections.Generic;
using Cybertron.Transformers;

public class TransformersTest{

  public static void GetReadyForBattle(List<Transformer> robots){
    foreach(Transformer bot in robots){
        if(!bot.InRobotMode)
            bot.Transform();
    }
  }

  public static void Main(string[] args){

    Autobot OptimusPrime = new Autobot();
    List<Autobot> autobots = new List<Autobot>(1);
    autobots.Add(OptimusPrime);

    Decepticon Megatron = new Decepticon();
    List<Decepticon> decepticons = new List<Decepticon>(1);
    decepticons.Add(Megatron);

    GetReadyForBattle(decepticons);
    GetReadyForBattle(autobots);

  }
}

The reason this doesn't work has been explained ad nauseam by various members of the CLR and C# teams, such as Rick Byers in his post Generic type parameter variance in the CLR where he argues

More formally, in C# v2.0 if T is a subtype of U, then T[] is a subtype of U[], but G<T> is not a subtype of G<U> (where G is any generic type).  In type-theory terminology, we describe this behavior by saying that C# array types are “covariant” and generic types are “invariant”. 

 

There is actually a reason why you might consider generic type invariance to be a good thing.  Consider the following code:

 

List<string> ls = new List<string>();
ls.Add("test");
List<object> lo = ls;   // Can't do this in C#
object o1 = lo[0];      // ok – converting string to object
lo[0] = new object();   // ERROR – can't convert object to string

 

If this were allowed, the last line would have to result in a run-time type-check (to preserve type safety), which could throw an exception (eg. InvalidCastException).  This wouldn’t be the end of the world, but it would be unfortunate.
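The array half of that trade-off is easy to demonstrate: the array analogue of Rick's snippet does compile, and pays for it with exactly the run-time check he describes. Storing an object into what is really a string[] throws ArrayTypeMismatchException. A small self-contained example:

```csharp
using System;

public class ArrayCovarianceDemo
{
    public static void Main()
    {
        string[] strings = new string[] { "test" };
        object[] objects = strings;      // legal: C# arrays are covariant

        object o1 = objects[0];          // ok – reading a string as object

        try
        {
            objects[0] = new object();   // compiles, but fails at run time
        }
        catch (ArrayTypeMismatchException)
        {
            Console.WriteLine("run-time type check fired");
        }
    }
}
```

So generic invariance trades the convenience the array code above enjoys for catching the bad assignment at compile time instead of at run time.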

Even if I buy that there is no good way to prevent the error scenario in the above code snippet without making generic types invariant, it seems that there were a couple of ways out of the problem that were shut out by the C# language team. One approach that I was sure would work was to create a subtype of System.Collections.Generic.List and define implicit and explicit cast operators for it. It didn't work.

WORKS! - ArrayList implicitly cast to MyList<T> via user-defined cast operator

using System;
using System.Collections;
using System.Collections.Generic;
using Cybertron.Transformers;

public class MyList<T>: List<T>{

  public static implicit operator MyList<T>(ArrayList target){
    MyList<T> newList = new MyList<T>();

    foreach(T item in target){
        newList.Add(item);
    }
    return newList;
  }
}

public class Test{

  public static void GetReadyForBattle(MyList<Transformer> robots){
    foreach(Transformer bot in robots){
        if(!bot.InRobotMode)
            bot.Transform();
    }
  }

  public static void Main(string[] args){

    Autobot OptimusPrime = new Autobot();
    ArrayList autobots = new ArrayList(1);
    autobots.Add(OptimusPrime);

    Decepticon Megatron = new Decepticon();
    ArrayList decepticons = new ArrayList(1);
    decepticons.Add(Megatron);

    GetReadyForBattle(decepticons);
    GetReadyForBattle(autobots);
  }
}

DOESN'T WORK - MyList<subtype of T> implicitly cast to MyList<T> via user-defined cast

using System;
using System.Collections.Generic;
using Cybertron.Transformers;

public class MyList<T>: List<T>{

  // This is the part that doesn't compile: a user-defined conversion operator
  // can't declare its own generic parameter or a constraint like "where U : T".
  public static implicit operator MyList<T>(MyList<U> target) where U:T{
    MyList<T> newList = new MyList<T>();

    foreach(T item in target){
        newList.Add(item);
    }
    return newList;
  }

}

public class Test{

  public static void GetReadyForBattle(MyList<Transformer> robots){
    foreach(Transformer bot in robots){
        if(!bot.InRobotMode)
            bot.Transform();
    }
  }

  public static void Main(string[] args){

   
    Autobot OptimusPrime = new Autobot();
    MyList<Autobot> autobots = new MyList<Autobot>();
    autobots.Add(OptimusPrime);

    Decepticon Megatron = new Decepticon();
    MyList<Decepticon> decepticons = new MyList<Decepticon>();
    decepticons.Add(Megatron);

    GetReadyForBattle(decepticons);
    GetReadyForBattle(autobots);

  }
}

I really wanted that last bit of code to work because it would have been quite a non-intrusive fix for the problem (ignoring the fact that I'd have to use my own subclasses of the .NET Framework's collection classes). At the end of the day I ended up creating a TypeConverter utility class which contains some of the dumbest code I've ever had to write to trick a compiler into doing the right thing. Here's what it ended up looking like

WORKS - Create a TypeConverter class that encapsulates calls to List.ConvertAll

using System;
using System.Collections.Generic;
using Cybertron.Transformers;

public class TypeConverter{

  public static List<Transformer> ToTransformerList<T>(List<T> target) where T: Transformer{
    return target.ConvertAll(new Converter<T,Transformer>(MakeTransformer));
  }

  public static Transformer MakeTransformer<T>(T target) where T:Transformer{
    return target; /* greatest conversion code ever!!!! */
  }

}

public class Test{

  public static void GetReadyForBattle(List<Transformer> robots){
    foreach(Transformer bot in robots){
        if(!bot.InRobotMode)
            bot.Transform();
    }
  }

 public static void Main(string[] args){

    Autobot OptimusPrime = new Autobot();
    List<Autobot> autobots = new List<Autobot>(1);
    autobots.Add(OptimusPrime);

    Decepticon Megatron = new Decepticon();
    List<Decepticon> decepticons = new List<Decepticon>(1);
    decepticons.Add(Megatron);

    GetReadyForBattle(TypeConverter.ToTransformerList(decepticons));
    GetReadyForBattle(TypeConverter.ToTransformerList(autobots));

 }

}

This works but it's ugly as sin. Anybody got any better ideas?

UPDATE: Lots of great suggestions in the comments. Since I don't want to modify a huge chunk of methods across our code base, I suspect I'll stick with the TypeConverter model. However John Spurlock pointed out that it is much smarter to implement the TypeConverter using generics for both input and output parameters instead of the way I hacked it together last night. So our code will look more like

using System;
using System.Collections.Generic;
using Cybertron.Transformers;

public class TypeConverter{

  /// <summary>
  /// Returns a delegate that can be used to cast a subtype back to its base type.
  /// </summary>
  /// <typeparam name="T">The derived type</typeparam>
  /// <typeparam name="U">The base type</typeparam>
  /// <returns>Delegate that can be used to cast a subtype back to its base type.</returns>
  public static Converter<T, U> UpCast<T, U>() where T : U {
    return delegate(T item) { return (U)item; };
  }

}


public class Test{

  public static void GetReadyForBattle(List<Transformer> robots){
    foreach(Transformer bot in robots){
        if(!bot.InRobotMode)
            bot.Transform();
    }
  }

 public static void Main(string[] args){

    Autobot OptimusPrime = new Autobot();
    List<Autobot> autobots = new List<Autobot>(1);
    autobots.Add(OptimusPrime);

    Decepticon Megatron = new Decepticon();
    List<Decepticon> decepticons = new List<Decepticon>(1);
    decepticons.Add(Megatron);

    GetReadyForBattle(decepticons.ConvertAll(TypeConverter.UpCast<Decepticon, Transformer>()));
    GetReadyForBattle(autobots.ConvertAll(TypeConverter.UpCast<Autobot, Transformer>()));

 }

}


 

Categories: Programming