November 1, 2005
@ 10:05 PM

Today Microsoft announced Windows Live. The official blurb is in the press release, Microsoft Previews New Windows Live and Office Live Services:

SAN FRANCISCO — Nov. 1, 2005 — Microsoft Corp. today previewed two new Internet-based software services — Windows Live™ and Microsoft® Office Live — designed to deliver rich and seamless experiences to individuals and small businesses. The new offerings combine the power of software plus services and are compelling enhancements to the Microsoft Windows® and Microsoft Office products. In particular, Windows Live helps bring together all the elements of an individual’s digital world while Office Live helps small companies do business online.
...

Windows Live

Windows Live™ is a set of personal Internet services and software designed to bring together in one place all of the relationships, information and interests people care about most, with more safety and security features across their PC, devices and the Web. Microsoft demonstrated early versions of several new Windows Live offerings, some of which are accessible at http://ideas.live.com, a new Web site where people can try the latest Windows Live beta services:

Live.com serves as the personalized starting point for Windows Live services, powered by cutting-edge technologies such as RSS and Asynchronous JavaScript and XML (AJAX). Live.com offers complete choice and customization for individuals who want quick access to the people and information they care about most. Live.com, which will be a great place to experience Windows Live Search, is available for trial today.

Windows Live™ Mail is a new, global Web e-mail service, built from the ground up to be faster, safer and simpler. Existing MSN® Hotmail® users will be able to seamlessly upgrade to the new service. People can sign up for the beta waiting list at http://ideas.live.com.

Windows Live™ Messenger helps individuals deepen their connections with the people they care about through instant messaging, file and photo sharing, PC-based calling, and more. Windows Live Messenger will enter the beta stage later this year. More information is available at http://ideas.live.com.

Windows Live™ Safety Center is a Web site where users can scan for and remove viruses from their PC on demand. The service is currently in beta form, available at http://ideas.live.com.

Windows OneCare™ Live is a previously announced PC health subscription that helps protect and maintain PCs via an integrated service that includes anti-virus, firewall, PC maintenance, and data backup and restore capability. People can sign up for the beta waiting list at http://ideas.live.com.

Windows Live™ Favorites is a service that enables individuals to access their Microsoft Internet Explorer and MSN Explorer favorites from any PC that’s online. The service is currently in beta form at http://ideas.live.com.

Windows Live will be offered alongside MSN.com, a global leader in services with more than 215 million active MSN Hotmail accounts; more than 185 million MSN Messenger contacts worldwide; and over 25 million MSN Spaces created by individuals to share their photos, Web logs (blogs) and interests with friends. MSN.com will continue to deliver rich programmed content and provide access to Windows Live services.

As someone who works on Windows Live products, I've seen about ten hours of presentations over the past few months on what this means for us and have come up with a simple way of explaining it to the uninitiated.

From a practical perspective, when I think about Windows Live I think about three things:

  1. User-centric web applications with rich user interfaces: You can expect more applications with rich, dynamic user interfaces, such as those shown in the Mail beta and at http://www.live.com. For the geeks out there, this means you'll be seeing a lot more AJAX applications coming out of us and a focus on software that puts the user in control of their online experience (a minimal sketch of the AJAX pattern follows this list).

  2. Smart desktop applications that improve the Windows user experience: The MSN division has slowly become Microsoft's consumer software division. From desktop search to instant messaging, a number of key applications that were once thought of as bits that ship with the operating system are now being shipped more frequently by MSN. With Windows Live, this reality is being acknowledged and embraced. Expect to see more beneficial integration between Microsoft's consumer applications and our web properties, such as the existing integration between MSN Messenger and MSN Spaces.

  3. The Web as a platform: http://msdn.microsoft.com/msn was just the beginning; expect a lot more. Coincidentally, I just finished giving a presentation to a few hundred of my co-workers from across the company on Windows Live services as a Web platform. This is definitely an area I will be spending a lot of my time on in the coming months.
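Here is that minimal sketch of the AJAX pattern mentioned in item 1: script issuing an asynchronous HTTP request and updating part of the page in place instead of forcing a full reload. The endpoint and element ID below are made up purely for illustration.

```typescript
// Minimal AJAX sketch: fetch data asynchronously and update the page in place.
// The endpoint ("/api/mailbox/unread") and element id ("unreadCount") are
// hypothetical, used only to illustrate the pattern.
function refreshUnreadCount(): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "/api/mailbox/unread", true); // true = asynchronous
  xhr.onreadystatechange = () => {
    if (xhr.readyState === 4 && xhr.status === 200) {
      // Update a single element instead of reloading the whole page.
      const target = document.getElementById("unreadCount");
      if (target) {
        target.textContent = xhr.responseText;
      }
    }
  };
  xhr.send();
}

// Poll every 30 seconds so the UI stays fresh without any user action.
setInterval(refreshUnreadCount, 30000);
```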

Meeting this vision will require some new offerings from Microsoft as well as the reworking of some existing products. In some cases this will simply look like a branding change, while in others there will be deeper, more fundamental changes in how the applications work. You can try out some of the Windows Live applications today at http://ideas.live.com/.

Of course, this isn't an official explanation; that is what you'll find in the press release. Instead, this is my interpretation based on conversations with various folks who've been working on this and the presentations my team has gotten on the topic. There is going to be a lot written about Windows Live over the next couple of days, and a lot of it will be inaccurate or fueled by speculation. What I've written above is as accurate a picture as I can paint based on the knowledge I have as someone who now works on this stuff.


 

Categories: MSN

Danah Boyd has an excellent post on the differences between adults and adolescents when it comes to blogging and other forms of online expression. In her post designing for life stages she writes:

Identity formation

When youth are coming into a sense of self, they move away from the home and look to the social world to build a socio-culturally situated identity. In other words, they engage in the public in order to make sense of social boundaries/norms and to develop a sense of self in relation to the broader social context. Youth go to the public to see and be seen and they negotiate a presentation of self depending on the reactions of peers and adults. Public performance is about getting those reactions in order to make sense of the world.

A main role of things like MySpace and Facebook is to produce a public sphere in order for youth to negotiate their peers and learn about the social world. People often ask me why teens don't just go out in a physical public. Simply put, they can't. We live in a culture of fear where most parents won't allow their children to go anywhere without supervision. Youth no longer have access to the streets or even neighborhood gathering spots. They are always in controlled locations where the norms are strictly dictated by adults - this is not a public sphere in which teens can make sense of sociability. Thus, they create their own. (Note: the production of a public and its implications is the cornerstone of my dissertation.)...

Contributive Participant in Society

And then we become adults. The bulk of adult-hood is evaluated based on contribution to society, participation, what you can create and do. It's about being a good citizen, laborer, parent. It's about the act of doing things. Your identity gets wrapped up in how you contribute to society ("So, what do you do?"). We ask youth about their hobbies and friends; we ask adults about their jobs and children. When we speak, we think that we have to produce information, be relevant, be efficient, be contributive. (And people wonder why growing up sucks.)

Nowhere is this shift more apparent than blogging land. While youth are doing identity production in terms of sociability, adults are creating new tasks for themselves - documenting, informing, conversing. It's all wrapped up in being part of the conversation, not in simply figuring out who you are.

This is one of the reasons why I laugh whenever I see words like blogosphere. The world's largest blogging site is probably MySpace, and I suspect that MSN Spaces is the second largest, although I'd have to ping folks from work to confirm. Both sites have significant populations of young adults (aka teenagers or adolescents). However, whenever someone says blogosphere, they usually mean some specific subset of blogs, such as technology- or politics-focused blogs. Although A-list bloggers like Doc Searls, Dave Winer and Robert Scoble give the impression that blogging is about amateur punditry that competes with journalism and corporate blogging, the fact is that a large segment of the blogging population is just trying to express their identity and discover themselves online.

People building social software need to understand the needs of both classes of users. In fact, it's more complex than that, because you often have to factor in cultural differences as well since the Web is international. If you are interested in blogging and other aspects of social software, you really should subscribe to Danah's blog.


 

Categories: Social Software

It is interesting to see people rediscover old ideas. Robert Scoble has a post entitled Silicon Valley got my attention: the future of Web businesses where he writes

What is Zvents capturing? Well, they know you like football. They know you probably are in San Francisco this weekend. And, if you click on one or two of the events, they know you’re interested in them. Now, what if you see an ad for a pair of Nikon binoculars. If you click on that, then Zvents would be able to capture that as well.

Now, what other kinds of things might football fans, who are interested in binoculars, who are in San Francisco, want to do this weekend? Hmmm, Amazon sure knows how to figure that kind of problem out, right? (Ever buy a Harry Potter book on Amazon? They suggest other books for you to buy based on past customer behavior!!!)

It goes further. Let’s say this is 2007. Let’s say that Google (or Yahoo or MSN) has a calendar "branding" gadget out. Let’s say they have a video "monetization" gadget out. Zvents could build the calendar "branding" gadget into their page. What would they get out of that? Lots of great PR, and a Google (or MSN or Yahoo) logo in everyone’s face. But, they would also know where you’d be this weekend. Why? Cause you would have added the 49ers football game to your calendar. So, they would know where you are gonna be on Sunday. And, that you just bought binoculars. Over time Google/MSN/Yahoo would be able to learn even more about you and bring you even more ads. How?
...
It’s all attention. So, now, what if Zvents and Google shared their attention with everyone through an API. Now, let’s say I start a new Web business. Let’s call it "Scoble’s tickets and travel." You come to my site to book a trip to London, let’s say. Well, now, what do I know about you? I know you were in San Francisco, that you like coffee, that you just bought some binoculars, that you like football. So, now I can suggest hotels near Starbucks and I can suggest places where you’ll be able to use your binoculars (like, say, that big wheel that’s in the middle of London). Even the football angle might come in handy. Imagine I made a deal with the local soccer team. Wouldn’t it be useful to put on my page "49ers fans get $10 off European football tickets."

Four years ago, while interning at Microsoft, I saw a demo about Hailstorm which made me suspect the project's days were numbered. The demo involved a scenario very similar to what Robert describes in his post. Just substitute "online CD retailer" for Zvents and "upcoming concerts gadget" for calendar gadget, and Robert's scenario is essentially the Hailstorm demo I saw.

The obvious problem with this "Attention API" and Hailstorm is that it requires a massive database of customer behavior. At the time, Microsoft's pitch was that online retailers and other potential Hailstorm partners should give all this juicy customer data to Microsoft and then pay to access it. It doesn't take a rocket scientist to tell that most of them told Microsoft to take a long walk off a short pier.

Now let's use a more concrete example: Amazon. The folks at Amazon know exactly what kind of movies, music and books I like. It's possible to imagine them making a deal with TicketMaster to show me upcoming concerts I may be interested in when I visit their site. The reverse is also possible: Amazon may be able to do a better job of recommending music to me based on concerts whose tickets I purchased via TicketMaster.

What's the first problem you have to solve when trying to implement this? Identity. How do you connect my account on Amazon with my account on TicketMaster in a transparent manner? This is one of the reasons why Passport was such a big part of the Hailstorm vision; it was how Microsoft planned to solve the identity problem, which was key to making a number of the Hailstorm scenarios work. Almost half a decade later, the identity problem is still not solved.

Identity is just problem #1.
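To make the identity problem concrete, here is a toy sketch of the kind of shared identity service the Amazon/TicketMaster scenario needs before any data can flow. Every name and identifier in it is invented for illustration.

```typescript
// Toy sketch of the identity problem: before Amazon and TicketMaster can share
// anything about me, something has to assert that "amazon:dare123" and
// "ticketmaster:d.obasanjo" are the same person. All names here are invented.
type ServiceAccount = { service: string; localUserId: string };

interface IdentityProvider {
  // Resolve a service-local account to a shared identity, if one has been linked.
  resolve(account: ServiceAccount): string | null;
  // Link a service-local account to a shared identity (with the user's consent).
  link(sharedId: string, account: ServiceAccount): void;
}

// An in-memory stand-in for the kind of service Passport was meant to provide.
class ToyIdentityProvider implements IdentityProvider {
  private links = new Map<string, string>(); // "service/localUserId" -> sharedId

  resolve(account: ServiceAccount): string | null {
    return this.links.get(`${account.service}/${account.localUserId}`) ?? null;
  }

  link(sharedId: string, account: ServiceAccount): void {
    this.links.set(`${account.service}/${account.localUserId}`, sharedId);
  }
}
```

The hard parts, of course, are getting every partner to trust the same provider and getting users to consent to the linking, which is roughly where Passport ran into trouble.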

If you scratch at this problem a little, you will likely find an ontology problem as well. How do I map the concepts in Amazon's database (hip hop CDs) to related concepts in TicketMaster's database (rap concerts)? The Hailstorm solution was to skip solving this because it was all coming out of the same database. However, even simple things like mapping Rap to Hip Hop or Puff Daddy to P. Diddy can be fraught with problems if both databases weren't created by the exact same organization. Trying to scale this across different business partners is a big problem and is pretty much a cottage industry in the enterprise world.

Thus ontologies are problem #2.
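A hypothetical sketch of what gluing the two catalogs together looks like in practice; all category names and aliases below are invented for illustration, and in real integrations these mappings end up being maintained by hand.

```typescript
// Hypothetical sketch of the ontology problem: each partner names the same
// concept differently, so cross-catalog recommendations need an explicit,
// hand-maintained mapping. The category names and aliases are invented.
const genreMap: Record<string, string[]> = {
  // Amazon-style music category -> TicketMaster-style event categories
  "Hip-Hop & Rap": ["Rap Concerts", "Hip Hop / R&B"],
  "Classical": ["Symphony", "Opera"],
};

function relatedEventCategories(amazonCategory: string): string[] {
  // With no shared vocabulary, unmapped categories simply fall through.
  return genreMap[amazonCategory] ?? [];
}

// Aliases are a second layer of the same problem ("Puff Daddy" vs. "P. Diddy").
const artistAliases: Record<string, string> = {
  "Puff Daddy": "P. Diddy",
  "Puffy": "P. Diddy",
};
```

Multiply maps like these across dozens of partners and formats and you get the cottage industry the enterprise world already knows.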

There are more problems to discover as one attempts to build the Attention API and an Attention economy. At least it was a fun trip down memory lane remembering my intern days. :)


 

Categories: Technology

Scott Isaacs has written a series of posts pointing out one of the biggest limitations of applications that use Asynchronous JavaScript and XML (AJAX). The posts are XMLHttpRequest - Do you trust me?, XMLHttpRequest - Eliminating the Middleman, and XMLHttpRequest - The Ultimate Outliner (Almost). The limitation is discussed in the first post in the series, where Scott wrote

Many web applications that "mash-up" or integrate data from around the web hit the following issue: How do you request data from third party sites in a scalable and cost-effective way? Today, due to the cross-domain restrictions of xmlhttprequest, you must proxy all requests through a server in your domain.

Unfortunately, this implementation is very expensive. If you have 20 unique feeds across 10000 users, you are proxying 200,000 unique requests! Refresh even a percentage of those feeds on a schedule and the overhead and costs really start adding up. While mega-services like MSN, Google, Yahoo, etc., may choose to absorb the costs, this level of scale could ruin many smaller developers. Unfortunately, this is the only solution that works transparently (where user's don't have to install or modify settings).

This problem arises because the xmlhttprequest object can only communicate back to the originating domain. This restriction greatly limits the potential for building "mash-up" or rich aggregated experiences. While I understand the wisdom behind this restriction, I have begun questioning its value and am sharing some of my thoughts on this

I encountered this limitation in the first AJAX application I wrote, the MSN Spaces Photo Album Browser, which is why it requires you to add my domain to your trusted websites list in Internet Explorer to work. I agree with Scott that this is a significant limitation that hinders the potential of various mashups on the Web today, and I'd like to see a solution to this problem proposed.
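To make the cost Scott describes concrete, here is a minimal sketch of the proxy pattern he is talking about, written as a small TypeScript/Node handler purely for illustration; the route and the feed allow-list are invented.

```typescript
// Sketch of the same-domain proxy workaround: the browser can only call back
// to our own server, so our server has to fetch the third-party feed and relay
// it. The route and allow-list here are invented for illustration.
import * as http from "http";

const ALLOWED_FEEDS = new Set([
  "http://example.org/rss.xml",
  "http://feeds.example.com/news.xml",
]);

const server = http.createServer(async (req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");
  const feed = url.searchParams.get("feed");

  // Only relay feeds we know about; otherwise the proxy becomes an open relay.
  if (!feed || !ALLOWED_FEEDS.has(feed)) {
    res.writeHead(403).end("Feed not allowed");
    return;
  }

  // Every refresh of every feed by every user becomes a request *we* pay for.
  const upstream = await fetch(feed);
  const body = await upstream.text();
  res.writeHead(200, { "Content-Type": "text/xml" }).end(body);
});

server.listen(8080);
```

With Scott's numbers (20 feeds across 10,000 users) a handler like this ends up fielding the 200,000 upstream requests he mentions, which is exactly the overhead smaller sites can't absorb.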

In his post, Scott counters a number of the reasons usually given for why this limitation exists, such as phishing attacks, cross-site scripting and leakage of private data. However, Derek Denny-Brown describes the big reason why this limitation exists in Internet Explorer in his post XMLHttpRequest Security, where he wrote

I used to own Microsoft's XMLHttpRequest implementation, and I have been involved in multiple security reviews of that code. What he is asking for is possible, but would require changes to the way credentials (username/password) are stored in Windows' core URL resolution library: URLMON.DLL. Here is a copy of my comments that I posted on his blog entry:

The reason for blocking cross-site loading is primarily because of cached credentials. Today, username/password information is cached to avoid forcing you to reenter it for every HTTP reference, but that also means that script on yahoo.com would have full access to _everything_ in your gmail/hotmail/bank account, without a pop-up or any other indication that the yahoo page was doing so. You could fix this by associating saved credentials with a source URL (plus some trickery when the source was from the same site), but that would require changes to the guts of Windows' URL support libraries (urlmon.dll).

Comparing XML to CSS or images is unfair. While you can link to an image on another site, script can't really interact with that image; for example, posting that image back to the script's host site. CSS is a bit more complicated, since the DOM does give you an API for interacting with the CSS, but I have never heard of anyone storing anything private to the user in a CSS resource. At worst, you might be able to figure out the user's favorite color.

Ultimately, it gets back to the problem that there needs to be a way for the user to explicitly enable the script to access those resources. If done properly, it would actually be safer for the user than the state today, where the user has to give out their username and password to sites other than the actual host associated with that login.

I'd love to see Microsoft step up and provide a solution that addresses the security issues. I know I've run up against this implementation many times.

That makes sense; the real problem is that a script on my page could go to http://www.example.com/yourbankaccount and access your account info because of your cookies and cached credentials. That is a big problem and one that the browser vendors should work towards fixing instead of letting the status quo remain.

In fact, a proposal already exists for what this solution could look like from an HTTP protocol and API perspective. Chris Holland has a post entitled ContextAgnosticXmlHttpRequest - An informal RFC where he discusses some of the pros and cons of allowing cross-site access with IXmlHttpRequest while having the option to not send cookies and/or cached credentials.
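No browser implements anything like Chris's proposal yet, so the following is purely speculative: a sketch of what calling a "context agnostic" request object might look like if cross-site access were allowed but cookies and cached credentials were never sent. The function and option names are invented.

```typescript
// Purely speculative sketch of a "context agnostic" cross-site request along
// the lines of Chris Holland's informal RFC. None of this is a real browser
// API; the names are invented for illustration.
interface ContextAgnosticRequestOptions {
  sendCookies: false;           // never attach the target domain's cookies
  sendCachedCredentials: false; // never replay saved usernames/passwords
}

async function crossSiteRequest(
  url: string,
  options: ContextAgnosticRequestOptions
): Promise<string> {
  // A real implementation would live inside the browser; this stub just
  // documents the contract the proposal asks for.
  throw new Error(`No browser implements this today (requested ${url}).`);
}

// Because no credentials ride along, a request to
// http://www.example.com/yourbankaccount could only return what an anonymous
// visitor would see, which removes the threat Derek describes above.
async function loadThirdPartyFeed(): Promise<string> {
  return crossSiteRequest("http://example.org/rss.xml", {
    sendCookies: false,
    sendCachedCredentials: false,
  });
}
```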


 

Categories: Web Development

October 29, 2005
@ 04:33 PM

Yesterday I was at a Halloween event at a local grade school, and I was peeved by at least three things I saw:

  1. The lunch menu had "pizza", "cheese sticks and sauce" and "mini cheeseburgers" on it. Feeding growing kids junk food for lunch on a regular basis just seems like starting them off on the wrong foot nutritionally.

  2. The whiteboard in the gym had a list of reasons to exercise, and on it cardiorespiratory was incorrectly spelled as "cardiorespitory" several times.

  3. There were a few bean bag toss games set up. Each kid got the same prize regardless of how well or poorly they did; the kids who got the bean bag through the difficult holes all three times got the same amount of candy as the kid who missed all three. That seems to send the wrong message about competition.

I could actually see myself complaining about one or more of the above if I were a parent with kids at the school. It looks like I'm going to be one of those parents when my time comes.


 

Categories: Ramblings

October 27, 2005
@ 05:53 PM

I read Anil Dash's post The Interesting Economy a few days ago and didn't think much about it. Below is a key excerpt from his post:

Today, Flickr has interestingness, which is a measure of some combination of how many times a picture has been viewed, how many comments it has, how many times it's been tagged or marked as a favorite, and some other special sauce. I suppose revealing the exact mix would encourage even more people to game the system, but the fact that it's not disclosed has led to a number of attempts to reverse-engineer the system. I doubt any of them are/will be successful (Flickr can update/evolve fast enough to change the algorithm if they figure it out) but that's probably going to be an ongoing dialogue.
...
What I'm wondering is, how is Flickr's interestingness different than the economy in Game Neverending? Than Second Life? (Or in Evercrack or Neverwinter or any of the other gaming platforms.) Is interestingness its own reward? Why don't I get to level up or power up when I create something interesting?

More to the point, the in-game economies of these games translate pretty cleanly into real-world cash, with eBay amplifying the efficiency of the currency conversion. And interestingness in other online media (like blogs) is rewarded by cash in a pretty straightforward way; I can sign up for TypePad, check a box to enable text ads, and pay for my account or point the proceeds to my PayPal account when I start getting lots of visitors.

But interestingness in Flickr doesn't pay. At least not yet. Non-pro users are seeing ads around my photos, but Yahoo's not sharing the wealth with me, even though I've created a draw. Flickr's plenty open, they're doing the right thing by any measure of the web as we saw it a year ago, or two years ago. Today, though, openness around value exchange is as important as openness around data exchange.

Since I read it, there seems to have been a bunch of blog buzz about Anil's post. I found this out via Robert Scoble's post Anil Wants Flickr to Pay. Robert seems to think that the current trend towards "user generated content" is really about companies exploiting end users for money. I guess I'm biased because I work on services such as MSN Groups and MSN Spaces, but I disagree with Robert and Anil.

Using free services on websites like Flickr is a commercial exchange of goods and services. Flickr gives you a place to host your photos so you can share them with friends, and in return it gets paid for its services by placing ads around your photos. If you disagree with the terms of the service, you can choose another service such as Kodak's EasyShare Gallery (formerly Ofoto).

As with all things, some photo albums will be more popular than others. These photo albums will likely bring in more ad clickthroughs, and thus more money, than the average photo album. Is this unfair? I don't think so. Is it unfair that my use of Google's or MSN's search engines is subsidized by people who click on ads when I don't? Should people who click on more ads than the average search engine user be paid for doing so?

Getting back to Flickr: since using the service is a commercial exchange entered into willingly by both parties, I don't see how one could claim it is unfair. I can see the argument that Flickr should figure out how to reward customers who bring in substantially more ad revenue than the average user, but that would just be good business sense, not something they are obligated to do.
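As an aside, Flickr has never published the real interestingness formula, and the following is definitely not it, but the kind of score Anil describes could be as simple as a weighted blend of the public signals he lists:

```typescript
// Hypothetical "interestingness" score: a weighted blend of the signals Anil
// lists. The weights and the decay are invented; Flickr's actual algorithm is
// unpublished and presumably far more involved ("special sauce").
interface PhotoStats {
  views: number;
  comments: number;
  tags: number;
  favorites: number;
  ageInDays: number;
}

function interestingness(p: PhotoStats): number {
  const raw =
    0.1 * Math.log1p(p.views) + // views count, but with diminishing returns
    1.0 * p.comments +
    0.5 * p.tags +
    2.0 * p.favorites;
  // Decay older photos so the "interesting" pool keeps turning over.
  return raw / (1 + p.ageInDays / 30);
}
```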


 

Categories: Social Software

From the press release MSN Search Announces MSN Book Search we learn

SAN FRANCISCO — Oct. 25, 2005 — MSN Search today announced its intention to launch MSN® Book Search, which will support MSN Search’s efforts to help people find exactly what they’re looking for on the Web, including the content from books, academic materials, periodicals and other print resources. MSN Search intends to launch an initial beta of this offering next year. MSN also intends to join the Open Content Alliance (OCA) and work with the organization to scan and digitize publicly available print materials, as well as work with copyright owners to legally scan protected materials.

"With MSN Book Search, we are excited to be working with libraries worldwide to digitize and index information from the world’s printed materials, taking another step in our efforts to better answer people’s questions with trusted content from the best sources," said Christopher Payne, corporate vice president of MSN Search at Microsoft Corp. "We believe people will benefit from the ability to not just view a page, but to easily act on that data in contextually relevant ways, both online in the search experience and in the applications they are using."

MSN will first make available books that are in the public domain and is working with the Internet Archive to digitize the material. MSN will then work to extend its offering to other types of offline content. The digitized content will primarily be print material that has not been copyrighted, and Microsoft will clearly respect all copyrights and work with each partner providing the information to work out mutually agreeable protections for copyrights.

If you're keeping track, that means all three major search engines (Yahoo!, Google and MSN) have announced book search engines. So far only Google is facing lawsuits from publishers, because it plans to digitize copyrighted works unless the copyright holders explicitly opt out. Expecting publishers and authors to go to each search engine vendor that plans to offer a book search service and explicitly tell it not to redistribute their works places an unnecessary burden on copyright holders and runs counter to the spirit of copyright.

The lawsuits around Google Print may turn out to be an interesting turning point in how copyright is viewed in the digital era.


 

Categories: MSN

From the post Google Base Was Sort of Live on Google Blogoscoped we learn

Several people report Google Base (as predicted yesterday) went live, or at least, its login-screen. I can’t reach it at the moment as it seems Google took it down again, but Dirson posted a screenshot to Flickr. So what is it? Quoting from Dirson’s screenshot of the login screen:

Post your items on Google.

Google Base is Google’s database into which you can add all types of content. We’ll host your content and make it searchable online for free.

Examples of items you can find in Google Base:

• Description of your party planning service
• Articles on current events from your website
• Listing of your used car for sale
• Database of protein structures

You can describe any item you post with attributes, which will help people find it when they search Google Base. In fact, based on the relevance of your items, they may also be included in the main Google search index and other Google products like Froogle and Google Local.

This reminds me of Amazon Simple Queue Service, except that Google started with a web page instead of an API. I can't imagine that Google Base will be terribly useful without an API, so I expect one will show up with the release or shortly thereafter. I'm still unclear as to why this is an interesting idea, although I'm sure some clever geek will find a way to build something cool with it. I also wonder whether this will spur Amazon into doing more interesting things with its service as well.
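Reading between the lines of the screenshot, an "item with attributes" sounds like little more than a bag of typed name/value pairs. A speculative sketch of the data shape, none of which is a real Google API:

```typescript
// Speculative sketch of what a Google Base "item with attributes" might look
// like as plain data; this is not any real Google API, just a guess at the shape.
interface BaseItem {
  title: string;
  itemType: string; // e.g. "vehicle", "protein structure", "service listing"
  attributes: Record<string, string | number>;
}

const usedCarListing: BaseItem = {
  title: "1998 Honda Civic for sale",
  itemType: "vehicle",
  attributes: { make: "Honda", model: "Civic", year: 1998, price: 3500 },
};
```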

Who would have thought that online lists would be a hot area? I guess it's time for some BigCo to snap up Ta-da Lists for a couple million dollars. ;)


 

Categories: Web Development

Since releasing the installer for the alpha version of the Nightcrawler edition of RSS Bandit, we have fixed a number of bugs. We worked on several performance issues related to the responsiveness of the application, fixed a number of problems with our newsgroup support (including issues with password-protected newsgroups), and we think we tracked down the issue that led to some items occasionally showing up in the wrong feeds.

You can download the next iteration of the Nightcrawler alpha at RssBandit.1.3.0.36.Nightcrawler.Alpha.zip.

We aren't ready to release a beta version of the installer because we aren't feature-complete yet. Two features still need to be completely implemented: downloading of enclosures/podcasts and notifications of new comments on "watched" posts.

Also some features need fleshing out. Torsten pointed out this morning that I need to add support for RFC 2047 so that we can handle non-ASCII author names and post titles as part of the newsgroup support. I had hoped to find a free library with code that already does that but it seems that the only ones I can find are for sale. I guess writing that code must suck so much that no one wants to give it away for free. There goes a weekend or two. :)
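For the curious, the heart of RFC 2047 support is decoding "encoded-words" of the form =?charset?B|Q?data?= that show up in headers. Here's a rough sketch of the decoding step, shown in TypeScript for brevity (RSS Bandit's version would be C#); a real implementation also has to handle header folding, adjacent encoded-words and the full range of charsets.

```typescript
// Rough sketch of RFC 2047 encoded-word decoding: =?charset?B|Q?data?=
// A real implementation must also merge adjacent encoded-words and support
// more charsets than the two shortcuts taken here.
const ENCODED_WORD = /=\?([^?]+)\?([BbQq])\?([^?]*)\?=/g;

// "Q" encoding: underscore means space, =XX is a hex-encoded byte.
function decodeQ(data: string): Buffer {
  const bytes: number[] = [];
  for (let i = 0; i < data.length; i++) {
    if (data[i] === "_") {
      bytes.push(0x20);
    } else if (data[i] === "=" && i + 2 < data.length) {
      bytes.push(parseInt(data.substr(i + 1, 2), 16));
      i += 2;
    } else {
      bytes.push(data.charCodeAt(i));
    }
  }
  return Buffer.from(bytes);
}

function decodeRfc2047(header: string): string {
  return header.replace(ENCODED_WORD, (_match, charset, encoding, data) => {
    const bytes =
      encoding.toUpperCase() === "B"
        ? Buffer.from(data, "base64") // "B" encoding is plain Base64
        : decodeQ(data);
    // Shortcut: treat anything that isn't UTF-8 as Latin-1.
    return bytes.toString(charset.toLowerCase() === "utf-8" ? "utf8" : "latin1");
  });
}

// Example (made-up header value):
// decodeRfc2047("=?utf-8?Q?J=C3=BCrgen_M=C3=BCller?=") === "Jürgen Müller"
```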

NEW FEATURES IN NIGHTCRAWLER

  • NNTP Newsgroups support: Users can specify a public NNTP server such as news.microsoft.com and subscribe to newsgroups on that server. Permalinks in a newsgroup post point to the post on Google Groups.

  • Item Manipulation from Newspaper Views: Items can be marked as read or unread and flagged or unflagged directly from the newspaper view. This improves end-user workflow, as one no longer has to leave the newspaper view and right-click on the list view to flag or mark a post as unread.

  • Subscription Wizard: The process for subscribing to newsgroups, search results and web feeds has been greatly simplified. For example, users no longer need to know the web feed of a particular web site to subscribe to it but can instead specify the web page URL and discovery of its web feed is done automatically. 

  • Synchronization with Newsgator Online: Users can synchronize the state of their subscribed feeds (read/unread posts, new/deleted feeds, etc) between RSS Bandit and their account on Newsgator Online. This allows the best of both worlds where one can use both a rich desktop client (RSS Bandit) and a web-based RSS reader (Newsgator Online) without having to worry about marking things as read in both places.

  • Using back and forward arrows to view last post seen in reading pane: When navigating various feeds in the RSS Bandit tree view it is very useful to be able to go back to the last feed or item viewed especially when using the [spacebar] button to read unread items. RSS Bandit now provides a way to navigate back and forth between items read in the reading pane using the back and forward buttons in the toolbar. NEW!!!

  • Atom 1.0 support: The Atom 1.0 syndication format is now supported. 

  • Threaded Posts Now Optional: The feature where items that link to each other are shown as connected items, reminiscent of threaded discussions, can now be disabled and is off by default. This feature is very processor-intensive and can slow down the average computer to the point that it is unusable if one is subscribed to a large number of feeds.

  • UI Improvements: Icons in the tree view have been improved to make them more distinctive and also visually separate newsgroups from web feeds.

 

I've been wondering if we shouldn't just lock down this release and push out the podcasting features to the Jubilee release. What do y'all think?


 

Categories: RSS Bandit

October 24, 2005
@ 04:06 PM

I recently read two posts on the official Google blog about the recent hubbub around their efforts to digitize books and make them searchable over the Web. The posts are Why we believe in Google Print and The point of Google Print.

My immediate personal reaction was how different Google is from Microsoft when it comes to blogging. On the one hand, Google is quick to fire people who don't toe the party line in their blogs, while Microsoft encourages its employees to show their individual voices even when they disagree with the company line. On the other hand, Microsoft frowns on employees commenting on pending legal actions such as lawsuits, while Google has its employees blogging their side of the story in an official capacity. The common thread here is "controlling the message". Google is all about that.

The other thing that struck me about Google's messaging around Google Print was pointed out by Dave Winer in his post  A turning point for the Web?

It's time to realize that Google is no longer the little company we used to love. They're now a huge company that pushes individuals around like a lot of other huge companies. They need some balance to their power. And it's ridiculous to blindly take their side on every issue. Sometimes they're wrong, and I believe this is one of those times. It's certainly worth considering the possibility that they're wrong.

Here's where the point about controlling the message shows up. By any measure, Google is a multi-billion-dollar, multinational corporation. However, whenever its executives speak, they do an excellent job of portraying the company as if it were the altruistic side project of a bunch of geeky college kids. I don't just mean their corporate motto of "Don't be evil", although it is one manifestation of this strategy. Better examples are Sergey Brin's comments at the recent Web 2.0 conference, where he stated that their motive for creating the Google AdSense program was to help keep content-based websites in business. Of course, syndicating ads now brings them in about three quarters of a billion dollars in revenue per quarter.

So what does this have to do with Google Print? Well, I personally don't buy computer books anymore, thanks to the Web and search engines. The last book I bought was Beginning RSS and Atom Programming, and that's only because I wrote one of the forewords. The only time I've opened a computer book in the past year was recently, when I cracked open the reference section of Dynamic HTML while looking for some JavaScript minutiae. If I had a good Web-based search engine for content within the book, I wouldn't have needed the book. Also, I've been wanting a cheap or free Integrated Development Environment (IDE) for JavaScript for quite some time. If I'd found an ad for a JavaScript IDE while searching for content within the book in my 'hypothetical book search engine', I definitely would have clicked on it and maybe purchased the IDE. My 'hypothetical book search engine' would wean me completely off of buying computer books while probably making a tidy sum for itself by selling my eyeballs to software companies trying to sell me IDEs, profilers, debuggers and software training.

My point is that Google Print will likely make the company a lot of money and could cost certain publishers a lot of money in lost sales. Even if it doesn't, the publishing industry will likely cede some control to Google. That's what these lawsuits are about, and from that perspective I can understand why various publishers have initiated them against Google. To frame this as 'the evil publishing industry is trying to prevent us from completing our corporate mission of making information more accessible to users' is disingenuous at best and downright manipulative at worst.

Markets are conversations; to succeed in the marketplace you have to dominate the conversation and control it to suit your needs. Google is definitely good at that.


 

Categories: Current Affairs