From the Microsoft-Watch article Google Pinches Another Microsoft Exec 

Google continues to hire away top Microsoft talent. But this time, Microsoft is fighting back. On Tuesday, Google announced plans to open a product research-and-development center in China, and said it was appointing former Microsoft vice president Kai-Fu Lee to head the operation. On Wednesday, Microsoft announced it was filing a lawsuit against Lee and Google, claiming breach of both employee confidentiality and non-compete agreement.
...
Other mid-level Microsoft executives have joined Google over the past couple of years, as well. And
Google opened a product-development office in Kirkland, Wash., late last year. Some industry watchers speculated that Google did so in order to attract more hires from Microsoft, which is headquartered in nearby Redmond, Wash.

It's the mid-level product and program managers whom Microsoft and other tech companies should guard most jealously, said Directions on Microsoft analyst Michael Cherry.

"While a lot of people make a big thing about the executives and senior vice presidents that leave, these people rarely ship software," Cherry said. "I think it is a bigger issue when the group program managers and program managers with ten plus years of experience silently leave. No one mourns their departure, but these are the people that take the grandiose architectures and wild-eyed visions and actually make them into products that people can use—and do so in a timely manner.

"The loss of these silent but hard working employees who keep the teams working together may have a bigger effect on the schedules of products like Yukon and Longhorn, and have a bigger long term impact on the company than any of the growing number of VPs and visionary architects," Cherry added.

From the Seattle PI article Ex-Microsoft exec sued over Google job

Microsoft also accused Google of "intentionally assisting Lee."

"Accepting such a position with a direct Microsoft competitor like Google violates the narrow noncompetition promise Lee made when he was hired as an executive," Microsoft said in its lawsuit. "Google is fully aware of Lee's promises to Microsoft, but has chosen to ignore them, and has encouraged Lee to violate them."
...
Tom Burt, a lawyer for Microsoft, said Lee announced Monday that he was leaving for the Google job and had given no indication that he planned to honor an agreement not to work for a direct competitor for one year.

"To the contrary, they're saying, 'In your face,'" Burt told The Associated Press.

Google shot back with a statement saying: "We have reviewed Microsoft's claims and they are completely without merit. Google is focused on building the best place in the world for great innovators to work. We're thrilled to have Dr. Lee on board at Google. We will defend vigorously against these meritless claims."

Um, OK...

 


 

Categories: Life in the B0rg Cube

Andy Edmonds has a post over on the MSN Search blog entitled Tagging Feedback at MSN Search where he talks about the internal app he built that is used to track feature requests and bug reports about MSN Search.

When the MSN Search team gets feedback or bug reports, each item is "tagged" with multiple keywords/categories which can then be analyzed later by frequency. The example Andy shows in his post is the tag "ypResults", which is used to categorize feature requests for yellow page hits as part of web search results. With this system the search team has a simple yet effective way to keep track of their most hot-button issues.
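A minimal sketch of this kind of tag-frequency analysis in Python (the feedback items and the "localSearch"/"coverage" tags are made up for illustration; Andy's actual app is internal, and only the "ypResults" tag comes from his post):

```python
from collections import Counter

# Hypothetical feedback items, each tagged with multiple keywords.
# The "ypResults" tag mirrors the example from Andy's post.
feedback = [
    {"report": "Show yellow page listings for restaurant queries",
     "tags": ["ypResults", "localSearch"]},
    {"report": "Yellow page hits missing for plumbers",
     "tags": ["ypResults", "coverage"]},
    {"report": "Image search returns broken thumbnails",
     "tags": ["imageSearch", "bug"]},
]

# Count how often each tag appears so the hottest issues float to the top.
tag_counts = Counter(tag for item in feedback for tag in item["tags"])

for tag, count in tag_counts.most_common():
    print(tag, count)   # "ypResults" comes out first with a count of 2
```

The whole value of the approach is that the ranking falls out of a simple frequency count once the human tagging is done.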

Andy showed this to me a few months ago and I thought it was really cool. I'd have loved to have a system like this when I used to work on the XML team to figure out what features/bugs were most often requested by users in a quantitative way.

Below is a screenshot of the feature (names changed to protect the innocent).


 

Categories: MSN

I was at the Anger Management 3 concert last night and it was quite the show. Lil' Jon & The Eastside Boyz were a welcome surprise as the opening act. They cycled through the BME clique hits from "Get Low" to "Salt Shaker" for the 30 minutes they were on stage. The problem with Lil' Jon is that most of the hits you associate him with are collaborations, so at concerts half the performance ends up not being live for songs such as "Yeah!" or "Lovers & Friends".

The next set had the entire G-Unit record label including newly signed acts like Mobb Deep & M.O.P. performing for just over an hour. The first part of the G-Unit set sucked because we had to sit through the crap singles from the solo efforts of Tony Yayo, Young Buck and Lloyd Banks as well as some of the crud from The Massacre. Halfway through it picked up with the better songs from The Massacre (Disco Inferno, Candy Shop), old hits from Get Rich or Die Tryin' (P.I.M.P., In Da Club, Wanksta) and G-Unit's Beg for Mercy (I Wanna Get To Know Ya). M.O.P. did their hit from a few years ago, "Ante Up", and Mobb Deep hit the crowd with "Quiet Storm" without Lil' Kim. There was a momentary infusion of crap when a lot of time was devoted to a new 50 Cent & Mobb Deep song but the show got back on track after that. The G-Unit set was OK but I'd have loved to hear some of their mix tape cuts instead of just mainstream tracks.

Eminem killed. He made the concert go from OK to fantastic with almost an hour and a half of performances from himself and D12. Even 50 Cent got in on the act when they performed "Patiently Waiting" and "Gatman & Robin". The parts of the show where Eminem riffed with the audience about tabloids, Mariah Carey and Michael Jackson were also golden.

If this show is going to hit your town you should definitely check it out.


 

Categories: Music

My buddy Erik Meijer and Peter Drayton have written a paper on programming languages entitled Static Typing Where Possible, Dynamic Typing When Needed: The End of the Cold War Between Programming Languages. The paper is meant to seek a middle ground in the constant flame wars over dynamically typed vs. statically typed programming languages. The paper is pretty rough and definitely needs a bunch of work. Take the following excerpt from the first part of the paper

Static typing fanatics try to make us believe that “well-typed programs cannot go wrong”. While this certainly sounds impressive, it is a rather vacuous statement. Static type checking is a compile-time abstraction of the runtime behavior of your program, and hence it is necessarily only partially sound and incomplete. This means that programs can still go wrong because of properties that are not tracked by the type-checker, and that there are programs that while they cannot go wrong cannot be type-checked. The impulse for making static typing less partial and more complete causes type systems to become overly complicated and exotic as witnessed by concepts such as "phantom types" and "wobbly types"
...
In the
mother of all papers on scripting, John Ousterhout argues that statically typed systems programming languages make code less reusable, more verbose, not more safe, and less expressive than dynamically typed scripting languages. This argument is parroted literally by many proponents of dynamically typed scripting languages. We argue that this is a fallacy and falls into the same category as arguing that the essence of declarative programming is eliminating assignment. Or as John Hughes says, it is a logical impossibility to make a language more powerful by omitting features. Defending the fact that delaying all type-checking to runtime is a good thing, is playing ostrich tactics with the fact that errors should be caught as early in the development process as possible.

We are interested in building data-intensive three-tiered enterprise applications. Perhaps surprisingly, dynamism is probably more important for data intensive programming than for any other area where people traditionally position dynamic languages and scripting. Currently, the vast majority of digital data is not fully structured, a common rule of thumb is less than 5 percent. In many cases, the structure of data is only statically known up to some point, for example, a comma separated file, a spreadsheet, an XML document, but lacks a schema that completely describes the instances that a program is working on. Even when the structure of data is statically known, people often generate queries dynamically based on runtime information, and thus the structure of the query results is statically unknown.
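The point about data whose shape is only discovered at runtime is easy to see in a few lines of Python. This is my sketch, not the paper's; the CSV document and the "query" are invented, but they show why no static type can describe the result up front:

```python
import csv
import io

# A CSV document whose schema isn't known at compile time -- the
# program only discovers the column names when it reads the header.
raw = "name,city,amount\nAlice,Seattle,120\nBob,Redmond,85\n"

reader = csv.DictReader(io.StringIO(raw))
columns = reader.fieldnames          # structure discovered at runtime
rows = list(reader)

# A "query" whose projection is chosen dynamically (say, from user
# input) -- the shape of the result is statically unknown.
requested = ["name", "amount"]
result = [{col: row[col] for col in requested} for row in rows]

print(columns)
print(result)
```

A statically typed language has to fall back on something schema-less here (dictionaries, strings, untyped XML nodes), which is exactly the tension the paper is poking at.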

The comment about making programming languages more powerful by removing features being a logical impossibility seems rather bogus and out of place in an academic paper, especially when one can consider the 'removed features' to be restrictions which limit the capabilities of the programming language.

I do like the fact that the paper tries to dissect the features of statically and dynamically typed languages that developers like instead of simply arguing dynamic vs. static as most discussions of this form take. I assume the purpose of this dissection is to see if one could build a programming language with the best of both worlds. From personal experience, I know Erik has been interested in this topic from his days.

Their list of features runs the gamut from type inference and coercive subtyping to lazy evaluation and prototype inheritance. Although the list is interesting, I can't help but think that Erik and Peter had already come to a conclusion and tried to fit the list of features included in the paper to that conclusion. This is mainly taken from the fact that a lot of the examples and features are taken from  instead of popular scripting languages.

This is definitely an interesting paper but I'd like to see more inclusion of dynamic languages like Ruby, Python and Smalltalk instead of a focus on C# variants like . The paper currently looks like it is making an argument for Cω 2.0 as opposed to real research on what the bridge between dynamic and static programming languages should be.


 

Categories: Technology

Robert Scoble has posted a series of entries comparing the Bloglines Citations feature with Technorati.com for finding out how many sites link to a particular URL. His conclusion seems to be that Technorati sucks compared to Bloglines, which has led to an interesting back & forth discussion between him and David Berlind.

I've been frustrated by Technorati.com for quite a while and have been quietly using Bloglines Citations as an alternative when I want to get results from a web search and PubSub for results I want to subscribe to in my favorite RSS reader. Technorati seems to lack the breadth of either service when it comes to finding actual blog posts that link to a site, and neither of them brings up unrelated crap such as blogrolls in its results.

The only problem with Bloglines is that their server can't handle the load and the citations feature is typically down several times during the day. Technorati has also had similar problems recently.

At this point all that Technorati seems to have going for it is first mover advantage. Or is there some other reason to use Technorati over competitors like Bloglines or PubSub that I've missed?


 

From Omar's post on Sender ID I see that Forbes has an article entitled Microsoft, Yahoo! Fight Spam--Sort Of. The article gives a pretty even-handed description of the various approaches both Yahoo! and MSN are taking in dealing with phishing and spam.

In the article we learn

While some e-mail services have adopted SenderID, there are still many that have not. According to Cox, the other reason for the false positives is that not all users remain on a single server. “SPF says, ‘All of my mail should come from these servers,’” says Cox. For many of EarthLink’s customers, they can be legitimately on a variety of servers, such as a corporate server, and still send and receive mail using their EarthLink address. For those users, SPF fails.

EarthLink started testing DomainKeys in the first quarter of 2005 and now signs over 70% of all outgoing mail. Other companies are also testing DomainKeys. Yahoo! Mail claims to be receiving approximately 350 million inbound DomainKeys signed messages per day.

Critics have accused Microsoft of forcing SenderID on the industry without addressing questions about perceived shortcomings. The company drew fresh criticism recently when reports claimed that its Hotmail service would delete all messages without a valid SenderID record beginning in November. While AOL uses SPF, many e-mail systems do not. If Microsoft went through with this, for example, a significant portion of valid e-mails would never reach intended Hotmail recipients.

Microsoft says that Hotmail will not junk legitimate e-mail solely because the sending domain lacks an SPF record. The company says SenderID will be weighed more heavily in filtering e-mails, but will remain one of the many factors used when evaluating incoming e-mail. The company did say that with increased adoption of Sender ID and SPF, it will eventually become a more reliable indicator.

Both SenderID and DomainKeys filter messages with spoofed e-mail addresses in which the sender has changed the "From:" field to make it look like someone else has sent the e-mail. For example, many phishing scams come from individuals posing as banks. Under the SenderID framework, if the bank has published an SPF record, the receiving server can compare the originating server against the SPF record. If they don’t match, the receiving server flags it as spam. DomainKeys performs a similar comparison but uses an encrypted key in each message and the public key unique to each domain to check where the message originated.
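To make the SPF half of that comparison concrete, here's a toy Python sketch. The record syntax below is a simplified subset of real SPF (only `ip4` terms and `-all`; real SPF also has `a`, `mx`, `include`, CIDR ranges, etc.), and the domain and IP addresses are made up:

```python
# Simplified sketch of an SPF-style check: the receiving server looks
# up the sender domain's published record and asks whether the
# connecting IP is one the domain says may send its mail.

def spf_permits(connecting_ip, spf_record):
    for term in spf_record.split():
        if term.startswith("ip4:") and term[len("ip4:"):] == connecting_ip:
            return True             # IP is explicitly authorized
        if term == "-all":
            return False            # hard fail: nothing matched above
    return False

# Hypothetical record a bank might publish in DNS.
record = "v=spf1 ip4:203.0.113.25 ip4:203.0.113.26 -all"

print(spf_permits("203.0.113.25", record))   # legitimate bank server: True
print(spf_permits("198.51.100.9", record))   # phisher's server: False
```

DomainKeys attacks the same spoofing problem from the other direction: instead of matching the connecting server's address, the receiver verifies a cryptographic signature in the message against the sending domain's published public key.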

The amount of phony email I get per week claiming to be from Paypal & eBay and requesting that I 'confirm my account info or my account will be cancelled' is getting ridiculous. I welcome any technology that can be used to fight this flood of crap.


 

Categories: MSN

July 17, 2005
@ 05:54 AM

From Tim Bray's post entitled Atom 1.0 we learn

There are a couple of IETF process things to do, but this draft (HTML version) is essentially Atom 1.0. Now would be a good time for implementors to roll up their sleeves and go to work.

I'll add this to the list of things I need to support in the next version of RSS Bandit. The Longhorn RSS team will need to update their implementation as well. :)

I couldn't help but notice that Tim Bray has posted an entry entitled RSS 2.0 and Atom 1.0, Compared which is somewhat misleading and inaccurate. I find it disappointing that Tim Bray couldn't simply announce the upcoming release of Atom 1.0 without posting a FUD style anti-RSS post as well.

I'm not going to comment on Tim Bray's comparison post beyond linking to other opinions such as those from Alex Bosworth on Atom Failings and Don Park on Atom Pendantics.


 

The list of PDC 2005 sessions is out. The website is rather craptacular since I can't seem to link directly to search results or directly to sessions. However thanks to some inside information from my man Doug I found that if you search for "POX" in the session track list, you'll find the following session abstract

Indigo: Web Services for XML Programmers
If you love XML, you will love this session. Learn how to write services that range from Plain Old XML (POX) to WS-I Basic Profile and WS-* using Indigo. Learn best practices for transforming and manipulating XML data as well as how and when to expose strong-typed views. If you use XML, XSLT, XSD, and serialization directly in your Web services today, this session offers the information you need to successfully migrate your services to Indigo.
Session Level(s): 300
Track(s): Communications

Microsoft's next generation development platforms are looking good for web developers. AJAX support? check. RSS support? check. And now it looks like the Indigo folks will be enabling developers to build distributed applications on the Web using plain old XML (POX) over HTTP as well as SOAP. 

A number of popular services on the Web expose APIs using POX (although they mistakenly call them REST APIs). In my post Misunderstanding REST: A look at the Bloglines, del.icio.us and Flickr APIs I pointed out that the Flickr API, del.icio.us API and the Bloglines sync API are actually examples of POX web services not REST web services. This approach to building services on the Web has grown increasingly popular over the past year and it's great that Microsoft's next generation distributed computing platform will support this approach.
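For anyone wondering what POX looks like in practice, here's a self-contained Python sketch: plain XML over HTTP, no SOAP envelope, no WSDL. The `/photos` endpoint and payload are invented for illustration (loosely in the spirit of the Flickr API), and this obviously isn't Indigo code:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from xml.etree import ElementTree

# A toy POX endpoint: a GET returns a plain XML document -- just XML
# over HTTP, which is the whole point of the style.
class PoxHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'<photos><photo id="42" title="space needle"/></photos>'
        self.send_response(200)
        self.send_header("Content-Type", "application/xml")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), PoxHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client side is nothing but an HTTP GET plus an XML parse.
url = f"http://127.0.0.1:{server.server_port}/photos"
doc = ElementTree.fromstring(urllib.request.urlopen(url).read())
print(doc.tag, doc[0].get("title"))
server.shutdown()
```

Contrast that with a SOAP toolkit, where the same exchange would be wrapped in an envelope and typically mediated by generated proxy classes; POX trades that machinery for any client that can do HTTP and parse XML.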

I spent a bunch of time convincing the Indigo folks to consider widening their view of Web services and thanks to open minded folks like Doug, Don & Omri it looks like I was successful.

Of course, it isn't over yet. The icing on the cake would be the ability to get full support for using REpresentational State Transfer (REST) in Indigo. Wish me luck. :)

Update: I was going to let the Indigo guys break this themselves but I've been told that it is OK to mention that there will be first class support for building REpresentational State Transfer (REST) web services using Indigo.


 

Categories: XML Web Services

July 13, 2005
@ 01:36 PM

I stumbled on Bus Monster last week and even though I don't take the bus I thought it was a pretty cool application. There's a mapping application that I've been wanting for a few years and I instantly realized that given the Google Maps API I could just write it myself.

Before starting I shot a mail off to Chandu and Steve on the MSN Virtual Earth team and asked if their API would be able to support building the application I wanted. They were like "Hell Yeah" and instead of working on my review I started hacking on Virtual Earth. In an afternoon hacking session, I discovered that I could build the app I wanted and learned new words like geocoding.

My hack should be running internally on my web server at Microsoft before the end of the week. Whenever Virtual Earth goes live I'll move the app to my personal web site. I definitely learned something new with this application and will consider Hacking MSN Virtual Earth as a possible topic for a future Extreme XML column on MSDN. Would anyone be interested in that?


 

Categories: MSN | Web Development | XML