The folks behind Outlook Express, Windows Mail and Windows Live Mail Desktop (beta) are blogging up a storm. If you haven't already, you should check out http://spaces.msn.com/morethanmail and subscribe to their RSS feed.

There are already two good posts: Where did we come from? Where are we going?, which talks about some of the RSS features they are adding to the next version of the client, and Hey, "Blog it!", which talks about blogging from your email client, a feature I've been working on with Vlada.

PS: Anyone else notice that Microsoft will be shipping four different RSS readers this year? There's the Onfolio integration in Windows Live Toolbar, Internet Explorer 7, Outlook 2007 and now Windows Live Mail Desktop. At this rate I may have to stop working on RSS Bandit...yeah, right. ;)


 

Categories: Windows Live

These are my notes from the session G/localization: When Global Information and Local Interaction Collide by danah boyd

danah boyd began by explaining what she means by G/localization: it is the ugliness that ensues when you bring the global and the local together. Today online spaces enable us to cross space and time. We can communicate with faraway peoples in the blink of an eye, but in truth most of us do not live our lives in a multicultural environment, which can cause problems when we build or participate in online communities. A culture is the artifacts, norms and values of a people. It isn't necessarily limited to nation-states, languages or ethnic groups: a company can have a 'corporate culture', and lots of attendees of ETech would probably identify themselves as being part of the 'geek culture'. In addition, people tend to exist in multiple cultural frames simultaneously but don't usually notice until they are extracted from their normal routine (e.g. going on vacation).

There was once an assumption that mass media would lead to cultural homogenization. Although this is true in some respects, it has also led to subcultures forming as a direct reaction to the mass culture, such as the raver and goth subcultures among adolescents. Similar subcultures occur in online forums dating as far back as USENET, where newsgroups like rec.motorcycles were very different from others like rec.golf. In that era, social software tended to come in two distinct flavors: homogeneous software that handled the generic communication needs of individual groups, such as mailing list software, and specialized software built to handle the needs of a particular community, such as Well.com.

Craig's List, Flickr and MySpace are examples of a new generation of successful social software. All three services share the following basic characteristics:

  • Passionate designers and users: The creators of the services are passionate about the service they've created and use it themselves. All three services were seeded by friends and family of the founder(s) who became the foundation of a strong base of passionate users.

  • Public Personalities: Tom (MySpace), Stewart (Flickr) and Craig (Craig's List) put a human face on the service by directly interacting with users either in support roles or to give updates on the status of the service.

  • Integrated feedback loop: Changes to the sites are driven by customer demand, which is often communicated directly to the people building the products.

In anthropology there is a notion of 'embedded observation', where the researcher lives with the society being studied so as to learn from within the community instead of from outside. The designers of all three services seem to live by this principle when it comes to the cultures they've fostered. One thing they do well is that they tend to watch, listen and learn directly from users instead of treating user research as something outside the core design and development process, done mainly for marketing purposes as is the case with many services. These services actually focus on 'real users' as opposed to personas or other caricatures of their user base. Another thing the above-mentioned services do well is that they tend to nudge the culture instead of trying to control it. The Fakester saga on Friendster is an example of where the designers of a service tried to control the burgeoning culture of the service instead of flowing with it. Great social software services support and engage the community that has grown around their service instead of trying to control it.

It isn't all plain sailing; there are some key problems facing sites such as Craig's List, Flickr and MySpace, including:

  • Creator burnout: Being passionate about the product you work on often leads to overworking which eventually leads to burning out. Once this happens, it is hard for creators to maintain cultural embeddedness which leads to disconnects between the designers and users of the services.

  • Scaling: As a user base becomes more diverse it is often hard to deal with the increased cultural or even linguistic diversity of a service. An example is Orkut, which became very popular amongst Brazilian users even though none of the people working on the product understood their language. Secondly, as services become larger they become harder to police, which can eventually have significant consequences. Both Craig's List and MySpace are facing lawsuits because people feel they haven't effectively policed their services.

danah boyd then gave some guidelines for creators of social software who want to design through embeddedness:

  1. Passion is everything
  2. Put safeguards in place to prevent burnout
  3. Diversify your staff
  4. Do not overdesign
  5. Enable and empower your users. Don't attempt to control them; sometimes they might go in a different direction from what you intended and that's OK.
  6. Integrate the development, design and customer support teams so they all know each other's pain.
  7. Stay engaged with the community
  8. Document the evolution of your community, especially what aspects of the culture have driven feature design

The next topic was why people join online communities. The fact is most people like hanging out with people who are like them, such as people who live in the same region, are of the same ethnic group or just share the same interests. Most people like to meet "new" people but not people who are "different" from them. There is also something about seemingly accidental or coincidental meetings that many people like. For example, two people can see each other on the bus every day for years and never talk, but once they meet somewhere else they can strike up a conversation about their shared identity (i.e. riders of a particular bus route). danah described this concept as the notion of familiar strangers.

danah boyd then showed some examples of the kinds of speech used on services like MySpace, which is similar to L33t5p34k. She asserted that the creation of such dialects by teenagers is an attempt to assert their independence and at the same time obfuscate their speech from grown-ups. In addition, she challenged the notion that machine translation would ever be able to bridge languages, due to cultural notions embedded in those languages. Simply translating teenage online speech to regular English in a mechanical manner loses some of the meanings of the words that are only understood by members of that community. One example she gave is the word 'nigga'. Depending on the culture of the speakers it can range from an affectionate term between males ("That's my nigga") to one which is intensely negative ("I can't believe Kimberly is a nigger lover"). Machine translation can't figure out the difference.

Another real-world example which affects online communities is defining obscenity and pornography. Even the U.S. Supreme Court has given up on being able to properly define obscenity and pornography, saying it depends on the standards of the community. However, when the community becomes anyone in the world with an Internet connection, things become tricky. In the United States it's obscene to show women's nipples in public; in Brazil you often find bare-breasted women in national magazines; while in the United Arab Emirates a bare belly button is considered obscene. A picture considered tame in one country could be considered raunchy and obscene in others. danah talked about a conversation she once saw on Flickr where women from the UAE were commenting on some photos of American women in tank tops and hot pants, expressing sorrow that women in the U.S. need to objectify themselves sexually to be accepted by mainstream society. People like to argue about morality when it comes to building online services but the question is "Whose morality, yours or theirs?" This question becomes important to answer because it can lead to serious ramifications, from lawsuits to your website being blocked in various countries.

In conclusion, danah boyd gave the following summary of what to do to design for G/localization:

  • Empower users to personalize their experience
  • Enable users to control access to their online expressions of their personality by being able to make things private, public, etc.
  • Let users control opportunities for meeting people like them

I loved this talk. This was the only talk I attended where the Q&A session went on for ten minutes past when the talk was supposed to end and no one seemed ready to leave. danah boyd r0cks.


 

Categories: Trip Report

These are my notes from the session Feeds as a Platform: More Data, Less Work by Niall Kennedy, Brent Simmons and Jane Kim.

Niall Kennedy started off by talking about implementations of subscriptions and syndication feeds from the early days of the Web, such as PointCast, Netscape NetCenter's Inbox Direct feature and BackWeb. That was back in 1997. Today in 2006, there are dozens of applications for subscribing to and consuming syndication feeds. Niall then broke feed readers up into four main categories: desktop, online, media center and mobile. Desktop RSS readers are usually text-centric and follow the 3-pane mail/news reader model. Consuming rich media from feeds (i.e. podcasts) is often handed off to other applications instead of being an integrated part of the experience. Finally, a problem with desktop feed readers is that one cannot access one's feeds from anywhere at any time. Online feed readers offer a number of advantages over desktop feed readers. An online reader is accessible from any computer, so there is no problem of having to keep one's feeds in sync across multiple machines. Feeds are polled without the user needing to have an application constantly running, as is the case with desktop readers. And the feed search capabilities are often more powerful since they can search feeds the user isn't subscribed to. Feed readers that are built into media centers enable people to consume niche content that may not be available from mainstream sources. Media center feed readers can take advantage of the user's playlists, recommendations/preferences and the time-shifting capabilities of the media center to provide a consistent and rich experience whether consuming niche content from the Web or mainstream content from regular television. Mobile readers tend to enable scenarios where the user needs to consume content quickly, since users either want highly targeted information or just a quick distraction when consuming mobile content. Niall then gave examples of various kinds of feed readers, such as iTunes which does audio and video podcasts, FireAnt which does video podcasts, and Screen3 which is a feed reader for mobile phones.

The talk then focused on some of the issues facing the ecosystem of feed providers and readers:

  • too many readers hitting feed providers
  • payloads to feeds getting bigger due to enclosures
  • users suffering from information overload due to too many feeds
  • subscription and discovery are complicated
  • multiple feed formats
  • XML is often invalid - the Google Reader team blogged that 15% of feeds are invalid
  • roaming subscriptions between PCs when using a desktop reader

There are also developer-specific issues, such as handling multiple formats, namespaced extension elements, feed search, synchronization and following HTTP best practices, all of which benefit from a platform that already does the heavy lifting. Companies such as Google, NewsGator and Microsoft provide different platforms for processing feeds which can be used to either enhance existing feed readers or build new ones.
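The "HTTP best practices" point is worth a concrete illustration: a well-behaved reader remembers the ETag and Last-Modified validators a feed server returns and sends them back on the next poll, so an unchanged feed costs a 304 response with no body. Here's a minimal sketch in Python (the feed URL is just a placeholder):

    import urllib.request
    import urllib.error

    def fetch_feed(url, etag=None, last_modified=None):
        """Fetch a feed with conditional GET; returns (body, etag, last_modified).

        body is None when the server says the feed hasn't changed (HTTP 304).
        """
        request = urllib.request.Request(url)
        if etag:
            request.add_header("If-None-Match", etag)
        if last_modified:
            request.add_header("If-Modified-Since", last_modified)
        try:
            with urllib.request.urlopen(request) as response:
                return (response.read(),
                        response.headers.get("ETag"),
                        response.headers.get("Last-Modified"))
        except urllib.error.HTTPError as error:
            if error.code == 304:  # Not Modified: nothing new was sent
                return None, etag, last_modified
            raise

    # The first poll stores the validators; later polls send them back.
    body, etag, modified = fetch_feed("http://example.org/feed.xml")
    body, etag, modified = fetch_feed("http://example.org/feed.xml", etag, modified)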

Jane Kim then took over the session to talk about the Windows RSS platform and how it is used by Internet Explorer 7. She began by explaining that one of the compelling things about web feeds is that the syndicated information doesn't have to be limited to text. Thanks to the power of enclosures one can subscribe to calendars, contact lists, and video and audio podcasts. For this reason the Internet Explorer team believes that consuming feeds goes beyond the browser, which is one of the reasons they decided to build the Windows RSS platform. The integration of feed consumption in Internet Explorer is primarily geared at enabling novice users to easily discover, subscribe to and read content from feeds. The Windows RSS platform consists of three core pieces that can be used by developers:

  • Common Feed List - list of feeds the user has subscribed to
  • Download Engine - this manages downloading of feeds and enclosures
  • Centralized Feed Store - this is where the downloaded feeds and enclosures are stored on disk. All feed formats are normalized to a custom amalgam of RSS 2.0 and Atom 1.0.
By offering these centralized services it is hoped that this will lead to more sharing between feed reading applications instead of the current practice where information about a user's subscriptions is siloed within each feed reading application they use.
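As a rough illustration of what these centralized services buy a developer, here is a sketch of walking the Common Feed List from Python over COM. It assumes the platform's Microsoft.FeedsManager ProgID and the property names shown (RootFolder, Feeds, Subfolders, Items, Name, Title), which should be double-checked against the Feeds API documentation, and it needs the pywin32 package on Windows:

    import win32com.client

    # Assumed ProgID for the Windows RSS platform's COM entry point.
    feeds_manager = win32com.client.Dispatch("Microsoft.FeedsManager")

    def walk(folder, indent=0):
        """Recursively print every subscribed feed and its downloaded items."""
        for feed in folder.Feeds:
            print(" " * indent + feed.Name)
            for item in feed.Items:
                print(" " * (indent + 2) + item.Title)
        for subfolder in folder.Subfolders:
            walk(subfolder, indent + 2)

    walk(feeds_manager.RootFolder)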

Jane then completed her talk by showing some demos. She showed subscribing to a Yahoo! News feed in Internet Explorer 7. She also showed some integration with the Windows RSS platform that is forthcoming in the next version of FeedDemon, as well as the desktop gadgets in Windows Vista.

Niall Kennedy followed up by talking about the currently undocumented Google Reader API. The Google Reader team built an API which allows them to build multiple user interfaces for viewing a reader's subscriptions. The current view, called 'lens', is just one of many views they have in development. One side effect of having this API is that third-party developers could also build desktop or web-based feed readers on top of the Google Reader API. The API normalizes all feeds to Atom 1.0 [which causes some data loss] and provides mechanisms for tagging feeds, flagging items, marking items read/unread, searching the user's feeds and ranking/rating of items.

Brent Simmons took over at this point to talk about the NewsGator API. Brent was standing in for Greg Reinacker, who couldn't make it to ETech but will be at MIX '06 to talk about some of the new things they are working on. Soon after Brent started working on NetNewsWire he began to get demands from users for a solution that enabled them to use NetNewsWire from multiple computers and keep the information in sync. He came up with a solution which involved being able to upload/download the state of a NetNewsWire instance to an FTP server, which could then be used to synchronize another instance of NetNewsWire. This is the same solution I originally settled on for RSS Bandit.
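Brent's original approach is easy to picture: serialize the reader's state (subscriptions plus read/unread flags) to a file, push it to an FTP server from one machine and pull it down on another. A minimal sketch, with the server details obviously made up:

    import ftplib
    import io
    import json

    STATE_FILE = "newsreader-state.json"  # subscriptions plus read-item IDs

    def upload_state(state, host, user, password):
        """Serialize the reader's state and push it to the FTP server."""
        payload = json.dumps(state).encode("utf-8")
        with ftplib.FTP(host, user, password) as ftp:
            ftp.storbinary(f"STOR {STATE_FILE}", io.BytesIO(payload))

    def download_state(host, user, password):
        """Pull the state file back down and parse it."""
        buffer = io.BytesIO()
        with ftplib.FTP(host, user, password) as ftp:
            ftp.retrbinary(f"RETR {STATE_FILE}", buffer.write)
        return json.loads(buffer.getvalue().decode("utf-8"))

    # Machine A: upload_state({"feeds": [...], "read": [...]}, "ftp.example.org", "me", "secret")
    # Machine B: download_state("ftp.example.org", "me", "secret"), then merge locally.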

After a while, Brent's customers started making more demands for synchronizing between feed readers on other platforms such as Windows or mobile phones, and he realized that his solution couldn't meet those needs. He looked around at some options before settling on the NewsGator API. The Bloglines Sync API didn't work for synchronization since it is mainly a read-only API for fetching unread items from Bloglines. It is further complicated by the fact that fetching unread items from Bloglines marks them as read even if the user never actually reads them in the retrieving application. So basically it sucks a lot. He also looked at Apple's .Mac Sync but that is limited to the Mac platform.

The NewsGator API met all of Brent's needs in that it

  • uses standard protocols (SOAP) which can be consumed from any platform
  • uses incremental synchronization [only downloading changes instead of the entire application state], which improves bandwidth utilization
  • supports granular operations [add feed, mark item read, rename feed, etc.]
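The incremental synchronization idea is worth spelling out. The sketch below is illustrative only, not the NewsGator API itself: the client keeps an opaque sync token, sends its own changes as small granular operations, and asks the server only for what changed since that token instead of re-uploading the whole application state.

    # Illustrative only: the shape of incremental sync, not the NewsGator API.
    class FakeServer:
        """Stands in for the sync service; in reality these would be SOAP calls."""
        def __init__(self):
            self.ops = []

        def apply(self, ops):
            self.ops.extend(ops)

        def changes_since(self, token):
            start = token or 0
            return self.ops[start:], len(self.ops)  # changes plus a new token

    class SyncClient:
        def __init__(self, server):
            self.server = server
            self.sync_token = None   # opaque marker of the last state we saw
            self.pending_ops = []    # granular local changes not yet sent

        def mark_read(self, feed_url, item_id):
            # Record a small, granular operation instead of dirtying the whole state.
            self.pending_ops.append({"op": "mark_read", "feed": feed_url, "item": item_id})

        def synchronize(self):
            # Push only the operations made since the last sync...
            self.server.apply(self.pending_ops)
            self.pending_ops = []
            # ...then pull only what changed on the server since our token.
            changes, self.sync_token = self.server.changes_since(self.sync_token)
            return changes  # the caller applies these to its local store

    server = FakeServer()
    desktop, laptop = SyncClient(server), SyncClient(server)
    desktop.mark_read("http://example.org/feed.xml", "item-42")
    desktop.synchronize()
    print(laptop.synchronize())  # the laptop learns about the read item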
Brent finished his talk by giving a demo of syncing NetNewsWire with the NewsGator Online Edition.


 

Categories: Trip Report

These are my notes from the session Search and the Network Effect by Christopher Payne and Frederick Savoye.

This session was to announce the debut of Windows Live Search, upgrades to the features of live.com, and the newly christened Windows Live Toolbar (formerly the MSN Search Toolbar) which now comes with Onfolio.

The user interface of the live.com personalized portal has undergone an overhaul; a number of gadgets, such as the weather and stock quote gadgets, now look a lot snazzier. To improve the RSS reading experience there is now the ability to expand the content of a news headline simply by hovering over it and then drill down into the content if necessary. In addition, the user interface for adding gadgets to one's page has been improved and is now more intuitive. Finally, a new feature is that one can now build multiple 'pages' to organize one's gadgets and feeds of interest. I like the idea of multiple pages. I'll probably end up with three on my start page: Gadgets, News, and Blogs. It'll definitely improve the cluttered look of my current start page.

Windows Live Search is the search experience you get when you do a search on live.com. When you do a web search, you no longer get a page of N results with a series of 'next' links to get more results. Instead you get a stream of results and a smart scroll bar which you can use to scroll up or down through them. So if you do a search and want to view the 105th result, instead of clicking next until you get to the page showing results 101 to 150, you just scroll down. However, as noted in some comments on Slashdot, this may cause some usability problems. For one, I can no longer bookmark or remember that my search result was on the third page of search results. Secondly, the fact that the scroll bar isn't relative (i.e. if there are 2,000,000 search results, moving the scroll bar halfway down doesn't jump you to the 1,000,000th result) is counter to how people expect scroll bars to behave. Another innovation in Windows Live Search is the slider that is used to show more results in web and image search. In image search, the slider can be used to increase the number of search results, which resizes the thumbnails on the fly as more or fewer results are shown. This was quite impressive. There is also a 'Feed' search tab which can be used to search within RSS feeds, which can then be added to one's live.com page.

However, the most interesting new feature of Windows Live Search is 'search macros'. A search macro is a shortcut for a complex search query which can then be added as a tab on the search page. For example, I can customize the search tabs to contain the default Web, Image, Local and Feed searches as well as a dare.define search. The dare.define search would expand out to (site:wikipedia.org | site:answers.com | site:webopedia.com) and I'd use it when I was searching for definitions. Users can create their own search macros and share them with others. Brady Forrest of the Windows Live Search team has already created a few, such as brady.gawkermedia which can be used to search all Gawker Media sites. Search macros basically allow people to build their own vertical search engines on top of the Windows Live Search experience, and it is accessible to regular users. There are already dozens of macros on the Microsoft Gadgets website. In his demo, Christopher Payne showed the difference in search results when searching for information about arctic oil drilling with the search limited by a macro over conservative sites versus a macro over liberal sites.

To perform a search using a macro, just type "macro:[macroname] [query]" in the Windows Live Search text box; for example, "macro:brady.seattle queen anne" searches a number of websites about Seattle for information about Queen Anne. There are a number of interesting operators one can use to build a macro besides site, such as linkdomain, prefer and contains, to name a few.
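If you wanted to drive macro searches from code, the query string is just the same macro: syntax URL-encoded. The search endpoint below is an assumption on my part, so substitute the real search URL:

    from urllib.parse import urlencode

    # Assumed endpoint; check what URL the Windows Live Search box actually submits to.
    SEARCH_URL = "http://search.live.com/results.aspx"

    def macro_query(macro_name, query):
        """Build a search URL that invokes a macro, e.g. macro:brady.seattle queen anne."""
        return SEARCH_URL + "?" + urlencode({"q": f"macro:{macro_name} {query}"})

    print(macro_query("brady.seattle", "queen anne"))
    # A definition macro like dare.define would itself expand to something like:
    #   widget (site:wikipedia.org | site:answers.com | site:webopedia.com)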

The Windows Live Toolbar has learned a few new tricks since it was called the MSN Search Toolbar. For one, it now integrates with Windows Live Favorites so you can access your browser favorites from any machine or from the Web. Another addition is an anti-phishing feature which warns users when they visit a suspect website. However, the most significant addition is the inclusion of Onfolio, an RSS reader which plugs into the browser.


 

Categories: Trip Report

These are my notes from the session eBay Web Services: A Marketplace Platform for Fun and Profit by Adam Trachtenberg.

This session was about the eBay developer program. The talk started by going over the business models for 'Web 2.0' startups. Adam Trachtenberg surmised that so far only two viable models have shown up: (i) get bought by Yahoo! and (ii) put a lot of Google AdSense ads on your site. The purpose of the talk was to introduce a third option, making money by integrating with eBay's APIs.

Adam Trachtenberg went on to talk about the differences between providing information and providing services. Information is read-only while services are read/write. Services have value because they encourage an 'architecture of participation'.

eBay is a global, online marketplace that facilitates the exchange of goods. The site started off as being a place to purchase used collectibles but now has grown to encompass old and new items, auctions and fixed price sales (fixed price sales are now a third of their sales) and even sales of used cars. There are currently 78 million items being listed at any given time on eBay.

As eBay has grown more popular they have come to realize that one size doesn't fit all when it comes to the website. It has to be customized to support different languages and markets as well as running on devices other than the PC. Additionally, they discovered that some companies had started screen scraping their site to give an optimized user experience for some power users. Given how fragile screen scraping is, the eBay team decided to provide a SOAP API that would be more stable and performant for them than having people screen scrape the website.

The API has grown to over 100 methods and about 43% of the items on the website are added via the SOAP API. The API enables one to build user experiences for eBay outside the web browser such as integration with cell phones, Microsoft Office, gadgets & widgets, etc. The API has an affiliate program so developers can make money for purchases that happen through the API. An example of the kind of mashup one can build to make money from the eBay API is https://www.dudewheresmyusedcar.com. Another example of a mashup that can be used to make money using the eBay API is http://www.ctxbay.com which provides contextual eBay ads for web publishers.

The aforementioned sites are just a few examples of the kinds of mashups that can be built with the eBay API. Since the API enables buying and listing of items for sale as well as obtaining inventory data from the service, one can build a very diverse set of applications.
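To make the API talk a little more concrete, here is a hedged sketch of calling the XML flavor of the eBay Trading API over plain HTTP. The endpoint, header names and the GeteBayOfficialTime call are taken from how the Trading API is commonly documented, but treat them (and the compatibility level) as assumptions to verify against the current docs, and you would need your own keys and auth token:

    import urllib.request

    ENDPOINT = "https://api.ebay.com/ws/api.dll"  # assumed Trading API endpoint

    request_xml = """<?xml version="1.0" encoding="utf-8"?>
    <GeteBayOfficialTimeRequest xmlns="urn:ebay:apis:eBLBaseComponents">
      <RequesterCredentials>
        <eBayAuthToken>YOUR_AUTH_TOKEN</eBayAuthToken>
      </RequesterCredentials>
    </GeteBayOfficialTimeRequest>"""

    headers = {
        "X-EBAY-API-CALL-NAME": "GeteBayOfficialTime",
        "X-EBAY-API-COMPATIBILITY-LEVEL": "967",  # schema version; check the docs
        "X-EBAY-API-SITEID": "0",                 # 0 = eBay US
        "Content-Type": "text/xml",
    }

    request = urllib.request.Request(ENDPOINT, data=request_xml.encode("utf-8"), headers=headers)
    with urllib.request.urlopen(request) as response:
        print(response.read().decode("utf-8"))  # XML response with the official eBay time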


 

Categories: Trip Report

These are my notes from the session Building a Participation Platform: Yahoo! Web Services Past, Present, and Future by Jeffrey McManus

This was a talk about the Yahoo! Developer Network. Over the past year, Yahoo!'s efforts to harness the creativity of the developer community have led to the creation of a healthy developer ecosystem with tens of thousands of developers in it. They've built their ecosystem by providing web APIs, technical support for developers and disseminating information to the developer community via http://developer.yahoo.com. Over the past year they have released a wide variety of APIs for search, travel and mapping (AJAX, Flash and REST-based). They have also provided language-specific support for JavaScript and PHP developers by offering custom libraries (JavaScript APIs for cross-browser AJAX, drag & drop, eventing and more) as well as output formats other than XML for their services (JSON and serialized PHP). They plan to provide specific support for other languages including Flash, VB.NET and C#.
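As a hedged example of the REST-plus-JSON style described above, here is what calling the V1 web search service looks like in Python. The endpoint, parameter names and result field names are assumptions based on how the service was documented at the time, and the service itself may no longer be available:

    import json
    import urllib.request
    from urllib.parse import urlencode

    params = urlencode({
        "appid": "YOUR_APP_ID",   # every Yahoo! web service call carries an application id
        "query": "etech 2006",
        "results": 5,
        "output": "json",         # the JSON output format mentioned above
    })
    url = "http://search.yahooapis.com/WebSearchService/V1/webSearch?" + params

    with urllib.request.urlopen(url) as response:
        data = json.loads(response.read().decode("utf-8"))

    # Field names as documented at the time; verify against the current docs.
    for result in data["ResultSet"]["Result"]:
        print(result["Title"], result["Url"])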

The Yahoo! APIs are available for both commercial and non-commercial use. Jeffrey McManus then showed demos of various Yahoo! Maps applications from hobbyist developers and businesses.

Providing APIs to their services fits in with Yahoo!'s plan to enable users to Find, Use, Share and Expand all knowledge. Their APIs will form the basis of a 'participation platform' by allowing users to interact with Yahoo!'s services on their own terms. They then announced a number of new API offerings:

  • Browser-based authentication: This is a mechanism to allow mashups to authenticate a Yahoo! user and then call APIs on the user's behalf without having the mashup author store the username and password. Whenever the mashup wants to authenticate the user, it redirects the user to a Yahoo! login page, and once the user signs in they are redirected back to the mashup page with a token in the HTTP header that the mashup can use for authentication when making API calls. This is pretty much how Microsoft Passport works. I pointed this out to Jeffrey McManus but he disagreed; I assume this is because he didn't realize the technical details of Passport authentication. The application is given permission to act on behalf of the user for two weeks at a time, after which the user has to sign in again. The user can also choose to withdraw permission from an application. A rough sketch of this kind of redirect-and-token flow appears after this list.
  • Yahoo! Shopping API v2.0: This API will allow people to make narrow searches such as "Find X in size 9 men's shoes". Currently the API doesn't let you get as granular as Shoes->Men's Shoes->Size 9. There will also be an affiliate program for the Yahoo! Shopping API so people who drive purchases via the API can get money for it.

  • My Web API: This is an API for Yahoo!'s bookmarking service, My Web.

  • Yahoo! Photos API: This will be a read/write API for the world's most popular photo sharing site.

  • Yahoo! Calendar API: A read/write API for interacting with a user's calendar
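The browser-based authentication flow above boils down to a redirect out to the identity provider and a token coming back. The sketch below is illustrative only, using a toy Flask app and a made-up login URL and parameter names rather than Yahoo!'s actual endpoints:

    # Illustrative only: the redirect-and-token pattern, not Yahoo!'s real endpoints.
    from flask import Flask, redirect, request, session

    app = Flask(__name__)
    app.secret_key = "change-me"

    LOGIN_URL = "https://login.identity-provider.example/authorize"  # hypothetical

    @app.route("/connect")
    def connect():
        # Step 1: send the user to the identity provider instead of asking for
        # their password ourselves.
        return redirect(LOGIN_URL + "?app_id=MY_APP_ID&return_to=https://mashup.example/callback")

    @app.route("/callback")
    def callback():
        # Step 2: the provider redirects back with a short-lived token that the
        # mashup stores and then presents on later API calls made on the user's behalf.
        session["api_token"] = request.args.get("token")
        return "Connected!"

    if __name__ == "__main__":
        app.run()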

Most of the announced APIs will be released shortly and will be dependent on the browser-based authentication mechanism. This means they won't be callable from applications that aren't web-based.

In addition, they announced http://gallery.yahoo.com which aims to be a unified gallery to showcase applications built with Yahoo! APIs, focused on end users instead of developers.

Jeffrey McManus then went on to note that APIs are important to Yahoo!, which may explain why a lot of the startups they've bought recently, such as del.icio.us, blo.gs, Flickr, Dialpad, Upcoming and Konfabulator, all have APIs.

As usual, I'm impressed by Yahoo!


 

Categories: Trip Report

These are my notes from the session Musical Myware by Felix Miller

This was a presentation about Last.fm which is a social music site. The value proposition of Last.fm is that it uses people's attention data (their musical interests) to make their use of the product better.

The first two questions people tend to ask about the Last.fm model are

  1. Why would people spy on themselves?
  2. Why would they give up their attention data to a company?
Last.fm gets around having to answer these questions by not explicitly asking users for their attention data (i.e. their musical interests). Instead, all the music they listen to on the site is recorded and used to build up a music profile for the user. Only songs the user listens to in their entirety are considered valid submissions, so as not to count songs the user skips through as something they like. The service currently gets about 8 million submissions a day and got over 1 billion submissions last year. One problem with submissions is that a lot of their songs have bad metadata; Felix showed examples of several misspellings of Britney Spears which exist in their song catalog today. For this reason, they only use the metadata from 8 million out of their 25 million songs for their recommendation engine.
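The "only complete plays count" rule is simple to express. This is not the actual Last.fm submission protocol, just an illustrative sketch of the filtering logic:

    # Illustrative only: queue a play for submission only if it was heard in full.
    submission_queue = []

    def track_finished(artist, title, track_length_secs, seconds_played):
        """Record a play only when the listener heard the whole track."""
        if track_length_secs > 0 and seconds_played >= track_length_secs:
            submission_queue.append({"artist": artist, "title": title})
        # Skipped or partially played tracks are simply never submitted.

    track_finished("Zap Mama", "Bandy Bandy", 251, 251)  # counted
    track_finished("Britney Spears", "Toxic", 198, 30)   # skipped, not counted
    print(submission_queue)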

The recommendation engine encourages users to explore new artists they might like as well as find other users with similar tastes. The site also has social networking features, but they were not discussed in detail since that was not the focus of the talk. However, the social networking features do show users one of the benefits of building up a music profile (i.e. giving up their attention data), since they can find new people with similar tastes. Another feature of the site is that since they have popularity rankings of artists and individual songs, they can recommend songs by obscurity or popularity. Appealing to the music snob in users by recommending obscure songs to them has been a cool feature.
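As a toy illustration of the two ideas in that paragraph (find people with overlapping taste, then recommend their artists you don't know yet, optionally weighted toward obscure ones), here is a sketch. It is emphatically not Last.fm's actual engine:

    from collections import Counter

    # profiles maps a user to the set of artists in their music profile.
    profiles = {
        "dare":  {"Zap Mama", "Outkast", "Radiohead"},
        "alice": {"Radiohead", "Outkast", "Boards of Canada"},
        "bob":   {"Radiohead", "Sigur Ros", "Boards of Canada"},
    }

    def recommend(user, prefer_obscure=False):
        mine = profiles[user]
        # How many profiles each artist appears in, i.e. site-wide popularity.
        popularity = Counter(a for artists in profiles.values() for a in artists)
        scores = Counter()
        for other, theirs in profiles.items():
            if other == user:
                continue
            overlap = len(mine & theirs)  # crude measure of shared taste
            for artist in theirs - mine:
                weight = 1.0 / popularity[artist] if prefer_obscure else 1.0
                scores[artist] += overlap * weight
        return [artist for artist, _ in scores.most_common()]

    print(recommend("dare"))                       # ranked by similar users
    print(recommend("dare", prefer_obscure=True))  # nudged toward obscure artists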

The site does allow people to delete their music profile and extract it as an XML file as well.


 

Categories: Trip Report

These are my notes from the session Who Is the Dick on My Site? by Dick Hardt

This was a talk about the services provided by Sxip Identity which identifies itself as an 'Identity 2.0' company. Dick Hardt started the talk by telling us his name 'Dick' and then showing us images of lots of other people named 'Dick' such as Dick Cheney, Dick Grayson, Dick Dastardly and a bunch of others. The question then is how to differentiate Dick Hardt from all the other Dicks out there on the Web.

In addition, Dick Hardt raised the point that people may have different personas they want to adopt online. He used women as a classic example of multiple persona syndrome given that they constantly change personalities as they change their clothes and hair. He used Madonna as a specific example of a woman who showed multiple personalities. I personally found this part of the presentation quite sexist and it colored my impression of the speaker for the rest of the talk.

So how does one tell a Web site who one is today? This usually involves the use of shared secrets such as username/password combinations but these are vulnerable to a wide variety of attacks such as phishing.

Besides telling sites who I am, it would be nice to also have a way to tell them about me so I can move from site to site and, just by logging in, they know my music tastes, favorite books, and so on. However this could lead to privacy issues reminiscent of scenes from Franz Kafka's The Trial or George Orwell's 1984. There should be a way to solve this problem without having to deal with the ensuing privacy or security issues. I can't help but note that at this point I felt like I had time warped into a sales pitch for Microsoft Hailstorm. The presentation seemed quite similar to Hailstorm presentations I saw back in 2001.

Dick Hardt then talked about various eras of identity technology on the Web

  • Identity 1.0 - directory services and X.500
  • Identity 1.5 - SAML and other technologies that enable business partners to assert identity information about individuals. They require trust between the identity provider and the relying party
  • Identity 2.0 - user-centric identity models such as InfoCard

Sxip Identity has shipped v1.0 of their technology but has gotten feedback that its customers would like it to be a standard. They have now begun to investigate what it would mean to standardize their solution. One of their customers is Ning who used their technology to add identity management to their site in 12 minutes.


 

Categories: Trip Report

These are my notes from the session Artificial, Artificial Intelligence: What It Is and What It Means for the Web by L. F. (Felipe) Cabrera, Ph.D.

This was a talk about Amazon's Mechanical Turk. The session began with the original story of the Mechanical Turk, which was a hoax perpetrated in 1769 by Wolfgang von Kempelen. The original Mechanical Turk was a chess-playing automaton that turned out to be powered by a diminutive human chess master as opposed to 'artificial intelligence'. In a sense it was artificial artificial intelligence.

There are lots of questions computers can answer for Amazon's customers, such as "where is my order?" or "what would you recommend for me based on my music/book tastes?", but there are other questions a computer can't answer well today, such as "is this a picture of a chair or a table?". Amazon's Mechanical Turk provides a set of web APIs that enable developers to harness human intelligence to answer questions that cannot be answered by computers. The service has grown moderately popular and now has about 23,000 people who answer questions asked via the API.
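To give a feel for what asking a question via the API looks like, here is a hedged sketch that posts the chair-or-table question as a HIT. It uses today's boto3 client as a stand-in for the web API described in the talk; the QuestionForm schema URL and parameters are assumptions to check against the current Mechanical Turk documentation, and AWS credentials are required:

    import boto3

    mturk = boto3.client("mturk", region_name="us-east-1")

    # QuestionForm XML; the schema URL is an assumption to verify against the docs.
    question_xml = """<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
      <Question>
        <QuestionIdentifier>chair_or_table</QuestionIdentifier>
        <QuestionContent><Text>Is the object in this photo a chair or a table?</Text></QuestionContent>
        <AnswerSpecification><FreeTextAnswer/></AnswerSpecification>
      </Question>
    </QuestionForm>"""

    hit = mturk.create_hit(
        Title="Identify the object in a photo",
        Description="Tell us whether the photo shows a chair or a table",
        Reward="0.05",                    # paid to the person who answers
        MaxAssignments=3,                 # ask three different people
        LifetimeInSeconds=86400,
        AssignmentDurationInSeconds=300,
        Question=question_xml,
    )
    print(hit["HIT"]["HITId"])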

The service offers benefits to developers by giving them a chance to enhance their applications with knowledge computers can't provide, to businesses by offering them new ways to solve business problems and to end users who can make money by answering questions asked via the API.

Examples of businesses which have used the API include a translation service that uses the API to check the accuracy of translations, polling companies testing opinion polls and purchasers of search advertising testing which search keywords best match their ads.


 

Categories: Trip Report

These are my notes from the session The Future of Interfaces Is Multi-Touch by Jeff Han.

This was mainly a demo of touch screen computing with a screen that supported multiple points of contact. Touchscreens and applications today only support a single point of contact (i.e. the current location of the mouse pointer). There is a lot of potential that can be explored when a touchscreen that supports multiple points of contact at once is used. For example, truly collaborative computer usage between multiple people becomes possible.

Most of the content of the talk can be gleaned from the MultiTouch Interaction Research video on YouTube. Since I had already seen the video, I zoned out during most of the talk.


 

Categories: Trip Report