Joel Spolsky has a blog post entitled Wasabi where he writes

In most deployed servers today, the lowest common denominators are VBScript (on Windows), PHP4, and PHP5 (on Unix). If we try to require anything fancier on the server, we increase our tech support costs dramatically. Even though PHP is available for Windows, it's not preinstalled, and I don't want to pay engineers to help all of our Windows customers install PHP. We could use .NET, but then I'd have to pay engineers to install Mono for all our Unix customers, and the .NET runtime isn't quite ubiquitous on Windows servers.

Since we don't want to program in VBScript or PHP4 or even PHP5 and we certainly don't want to have to port everything to three target platforms, the best solution for us is a custom language that compiles to our target platforms.

Since we are not blub programmers, we like closures, active records, lambdas, embedded SQL a la LINQ, etc. etc. and so those are the kinds of features we put into Wasabi.

And since FogBugz goes back many years and was originally written in VBScript, Wasabi is 100% backwards-compatible with VBScript but includes obvious improvements. """Multiline strings.""" Dim a = 0. And so on.

Most people don't realize that writing a compiler like this is only about 2 months work for one talented person who read the Dragon book. Since the compiler only has one body of code to compile, it is much easier to write. It doesn't have to be a general-purpose compiler. It doesn't have a math library, for example.
...
That said, there are major drawbacks. The documentation is a little bit thin and disorganized, because we've only documented the diffs to VBScript, not the whole language. Programmers that join Fog Creek might take a little bit longer getting up to speed. Our edit-compile-test loop got slower because there's one more step.

Should you write your own compiler? Maybe, if you're doing something that's different enough from the mainstream and if there's no good off-the-shelf technology for your problem. There's a good chance that the domain you're working in could really use a domain-specific language.

This sounds like one of those decisions that looks like a great way to reduce costs and improve productivity at first, until you've had to live with it for a few years. There are a couple of other drawbacks to consider that Joel doesn't mention in his post, either because they haven't had time to occur yet or because Fog Creek may be a special case due to Joel's reputation.

  1. Recruiting new employees: Getting four years of experience using Java, C#, or even Ruby is one thing. On the other hand, who wants to find out how marketable "4 years of Wasabi experience" is on their resume?

  2. Programming languages and runtimes evolve: My intern started yesterday and he mentioned that he knows Java but not C#. I gave him a link to C# from a Java developer's perspective, which was the most comprehensive comparison of the two languages I could find. Since that article was written, Java 1.5 has added generics, enumerations, boxing of value types and a new for loop syntax. Similarly, C# 2.0 has added generics, nullable types and anonymous methods. By C# 3.0 we'll have lambda expressions, type inference for local variables and embedded SQL-like queries all built into the language.

    Imagine that you rolled your own language because C# or Java didn't have some of the above features (e.g. closures, lambdas, embedded SQL, etc.). At what point do you decide that it makes more sense to keep going with your in-house programming language versus participating in the ecosystem of developer tools that exists around these technologies? (A short sketch of the C# 3.0 features in question follows this list.)

  3. Attrition: What happens when your compiler guru, who cut his teeth on a twenty-year-old textbook on compiler theory, decides to leave for greener pastures? Who's going to maintain and update your homegrown programming language as the field of software development evolves and customer requirements change?
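
To make the second point above concrete, here is a rough sketch of the kind of code the C# 3.0 previews promise, showing type inference for locals, a lambda expression, and the SQL-like query syntax. The Customer class and the sample data are invented purely for illustration.

    using System;
    using System.Linq;

    class Customer
    {
        public string Name;
        public string City;
    }

    class Program
    {
        static void Main()
        {
            // Implicitly typed local (type inference) plus object initializers
            var customers = new[]
            {
                new Customer { Name = "Ade", City = "Seattle" },
                new Customer { Name = "Bola", City = "Lagos" }
            };

            // Embedded SQL-like query (LINQ) over in-memory objects
            var locals = from c in customers
                         where c.City == "Seattle"
                         orderby c.Name
                         select c.Name;

            foreach (var name in locals)
                Console.WriteLine(name);

            // A lambda expression passed where a delegate is expected
            Array.ForEach(customers, c => Console.WriteLine(c.Name + " lives in " + c.City));
        }
    }

None of this requires rolling your own language; it just requires waiting for the platform to catch up.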

These are just some of the problems Joel glosses over as he makes the case for rolling your own programming language. What may have seemed like a good idea once upon a time often turns out to be a bad decision in hindsight. Only time will tell whether Wasabi becomes one of those stories.


 

Categories: Programming

I just got an email from J.J. Allaire pointing me to the blog post on the Windows Live Writer plugins blog entitled Windows Live Writer Blog This for RSS Bandit which states

And now comes Blog This for RSS Bandit.

RSS Bandit is a popular feed reader (what are feeds?) which by default can Blog This with w.bloggar (a desktop blogging client) and post to del.icio.us.


Installation is simple albeit manual. Extract the file, highlight the files, then copy and paste them into the RSS Bandit plugins folder, not into the Windows Live Writer plugin folder.

Start RSS Bandit. See something you would like to write about? Right-click on the headline and choose BlogThis using Windows Live Writer.

Windows Live Writer launches with the Select Destination Weblog window where you select which blog to post to. Once selected it takes a moment or two and then there is your screen with the text from the feed’s post:


I just wrote about wanting to write this plugin a few hours ago. I'll probably still write one on my own and replace the w.bloggar plugin in the default install of RSS Bandit with a Windows Live Writer plugin. Perhaps even an installer for existing users who don't want to wait until the next version of RSS Bandit to get this feature? 


 

Categories: RSS Bandit | Windows Live

In what seems like an interesting bit of corporate tit for tat, I noticed the following two announcements this week.

In his blog post entitled IronPython 1.0 released today! Jim Hugunin of Microsoft writes

I’m extremely happy to announce that we have released IronPython 1.0 today!

I started work on IronPython almost 3 years ago.  My initial motivation for the project was to understand all of the reports that I read on the web claiming that the Common Language Runtime (CLR) was a terrible platform for Python and other dynamic languages.  I was surprised to read these reports because I knew that the JVM was an acceptable platform for these languages.  About 9 years ago I’d built an implementation of Python that ran on the JVM originally called JPython and later shortened to Jython.  This implementation ran a little slower than the native C-based implementation of Python (CPython), but it was easily fast enough and stable enough for production use – testified to by the large number of Java projects that incorporate Jython today.
...
The more time I spent working on IronPython and with the CLR, the more excited I became about its potential to finally deliver on the vision of a single common platform for a broad range of languages.
...
IronPython is about bringing together two worlds.  The key value in IronPython is that it is both a true implementation of Python and is seamlessly integrated with the .NET platform. 

In other news, Tim Bray of Sun Microsystems has a blog post entitled JRuby Love where he writes

Charles Nutter and Thomas Enebo, better known as “The JRuby Guys”, are joining Sun this month. Yes, I helped make this happen, and for once, you’re going to be getting the Sun PR party line, because I wrote most of it.

Jacki DeCoster, one of our PR people, tried to imagine what kinds of questions people would have, and we went from there.

Why is Sun hiring JRuby developers Charles Nutter and Thomas Enebo? · First, they are excellent developers. Technologies like Ruby are getting intense interest from the developer community, and Sun is interested in anything that developers care about.

What will their new role be at Sun? · First, they have to get JRuby to 1.0 and make sure that the major applications are running smoothly and are performant.

Interesting times indeed. My time spent working with XML has made me appreciate the power of dynamic languages, and I'll definitely be giving IronPython a shot. I've started reading Dive Into Python, and once I'm done I think my first programming assignment will be to write an IBlogExtension plugin for RSS Bandit that lets you post to your blog using Windows Live Writer.


 

Categories: Programming

September 7, 2006
@ 05:33 PM

From Omar Shahine's blog post entitled Inline Search for Internet Explorer we learn

This is simply a must have add-in for IE. For those of us that used the FireFox Find feature and were like "OMG", you can now have the same thing in IE. 

You can see in the screen shot below how this works:

[Source: Paul Thurrott's Internet Nexus]

As an Emacs user, I've grown used to having inline search as a feature and often get frustrated when the application I'm using doesn't support it. It's great to see this feature added to Internet Explorer. I just downloaded it and it works great with the most recent beta of Internet Explorer 7. Give it a shot.

By the way, Jeremy Epling and Joshua Allen, who both work on the IE team, also told me to check out http://www.ieaddons.com if I'm interested in tricking out my Internet Explorer install. I've added the site to my bookmark list and will check it out later today to see if there are any other interesting IE extensions I've been missing out on.


 

While working on the favicon support for RSS Bandit I've been seeing some weird errors where the favicon for a particular website (e.g. Jamie Zawinski's favicon) causes an out of memory exception to be thrown when trying to load the image using the FromFile() method of the System.Drawing.Image class.

It turns out the problem is spelled out in KB 810109: You receive a "System.OutOfMemoryException" error message when you try to use the Bitmap.FromFile method in the .NET Framework 1.0 which reads

SYMPTOMS
When you try to load an image by using the Bitmap.FromFile method in the Microsoft .NET Framework 1.0, you receive the following error message:
    An unhandled exception of type 'System.OutOfMemoryException' occurred in system.drawing.dll

CAUSE
This problem may occur when you use the Bitmap.FromFile method and one of the following conditions is true:
•    The image file is corrupted.
•    The image file is incomplete.

Note You may experience this problem if your application is trying to use the Bitmap.FromFile method on a file stream that is not finished writing to a file.
•    The image file does not have a valid image format or GDI+ does not support the pixel format of the file.
•    The program does not have permissions to access the image file.
•    The BackgroundImage property is set directly from the Bitmap.FromFile method.

I'm posting this here so I have a handy pointer to it once I start getting bug reports about this feature not working correctly after we ship Jubilee. This feature definitely does jazz up the look of the application; so far my favorite favicon has been the one from the Dead 2.0 weblog.
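
For my own reference, here is a minimal sketch (not the actual RSS Bandit code) of how favicon loading could guard against this, treating the exception as "this image is unusable" rather than as a genuine out-of-memory condition:

    using System;
    using System.Drawing;

    static class FaviconLoader
    {
        // Attempts to load a favicon from disk. Per KB 810109, GDI+ reports
        // corrupt, incomplete or unsupported image files as OutOfMemoryException,
        // so we catch it and let the caller fall back to a default icon.
        public static Image LoadFavicon(string path)
        {
            try
            {
                return Image.FromFile(path);
            }
            catch (OutOfMemoryException)
            {
                // Not a real out-of-memory condition; the favicon is just bad.
                return null;
            }
        }
    }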


 

Categories: Programming | RSS Bandit

Mike Arrington of TechCrunch has a blog post entitled Facebook Users Revolt, Facebook Replies where he writes

There has been an overwhelmingly negative public response to Facebook’s launch of two new products yesterday. The products, called News Feed and Mini Feed, allow users to get a quick view of what their friends are up to, including relationship changes, groups joined, pictures uploaded, etc., in a streaming news format. Many tens of thousands of Facebook users are not happy with the changes. Frank Gruber notes that a Facebook group has been formed called “Students Against Facebook News Feed”. A commenter in our previous post said the group was closing in on 100,000 members as of 9:33 PM PST, less than a day after the new features were launched. There are rumors of hundreds of other Facebook groups calling for a removal of the new features.

A site calling to boycott Facebook on September 12 has also been put up, as well as a petition to have the features removed. Other sites are popping up as well. There seems to be no counterbalancing group or groups in favor of the changes.

Facebook founder and CEO Mark Zuckerberg has responded personally, saying “Calm down. Breathe. We hear you.” and “We didn’t take away any privacy options.”

I gave the new features a thumbs up yesterday and stick by my review. No new information is being made available about users. Facebook privacy settings remain in their previous state, meaning you can have your information available throughout the network or just among your closest friends. Don’t want a particular piece of information to be syndicated out even to them? Remove any single piece of data by simply clicking the “x” button next to it and it will not appear in the news feed.

If this feature had been part of Facebook since the beginning, their users would be screaming if Facebook tried to remove it. It’s a powerful way to quickly get lots of information about people you care about, with easy settings to remove that information for privacy reasons. No one can see anything that they couldn’t see yesterday. It’s just being distributed more efficiently.

I agree that the main problem with the feature is that it is "new", not that it has real privacy implications. We’ve faced similar problems when designing some of the features of http://spaces.live.com, and my advice to the Facebook team would be that it may be better to allow people to opt out of being in feeds than to argue with users about whether it is a privacy violation or not.

That’s a battle they are not likely to win. Better to be seen as respecting your users' wishes than as paternalistic overlords who think they know what's best for them. Don't make the same mistake Friendster made with fakesters.


 

I just read a blog post by Evan Williams, founder of Blogger entitled Pageviews are Obsolete where he writes

But it's this pageviews part that I think needs to be more seriously questioned. (This is not an argument that Blogger is as popular as MySpace—it's not.) Pageview counts are as susceptible as hit counts to site design decisions that have nothing to do with actual usage. As Mike Davidson brilliantly analyzed in April, part of the reason MySpace drives such an amazing number of pageviews is because their site design is so terrible.

As Mike writes: "Here's a sobering thought: If the operators of MySpace cleaned up the site and followed modern interface and web application principles tomorrow, here's what the graph would look like:"



Mike assumes a certain amount of Ajax would be involved in this more-modern MySpace interface, which is part of the reason for the pageview drop. And, as the Kiko guys wrote in their eBay posting, their pageview numbers were misleading because the site was built with Ajax. (Note: It's really easy to track Ajax actions in Google Analytics for your own edification.)

I've seen a lot of people repeat these claims about MySpace's poor design leading to increased page views. After glancing at the average number of page views per user on Alexa for MySpace (38.4 pageviews a day) and comparing it with competing sites such as Facebook (28.2 pageviews a day), Bebo (31 pageviews a day) and Orkut (38.6 pageviews a day), I don't think MySpace's numbers are out of the ordinary, especially once you factor in the site's popularity.

Recently my girlfriend created a MySpace profile and a space on Windows Live Spaces, which led me to consider the differences between how both sites are organized. After talking to her for a while about her experiences on both sites, it became clear to me that there were fundamental differences in how the sites were expected to be used. Windows Live Spaces concentrates a lot on content creation and sharing that content with people you know (primarily your Windows Live Messenger buddies). On the other hand, MySpace is organized a lot around getting you to "people watch" and explore different user profiles and spaces. Comparing the experience after signing into both services is illuminating.

Anyway, the key observation here is that social networking sites such as MySpace are page view generating engines, whereas blogging sites such as Blogger and, to a lesser extent, Windows Live Spaces are less about encouraging people to browse and explore other users on the site and more about a single user creating content for other users to consume. Go ahead and compare both of my girlfriend's spaces and see which one encourages you to click on other users more and which one is more about the owner of the site sharing their [written or digital media] content with you.

Think about that the next time you hear someone say MySpace gets a lot of page views because they don't use AJAX.


 

September 5, 2006
@ 10:05 PM

From the press release entitled Industry Testing of Windows Vista Release Candidate 1 Begins we learn

Microsoft Announces Estimated Retail Pricing for All Windows Vista Editions

With Windows XP, customers often had to make tradeoffs in features and functionality as the Windows XP editions were aligned with specific hardware types. With Windows Vista, customers now have the ability to make choices between editions based on the valuable features they desire, which are now available as standard features of mainstream editions. For example, 64-bit support and Tablet PC and touch technology are standard features of the Home Premium and Business editions.

Pricing information for all Windows Vista editions is available online, along with additional information on the various editions of Windows Vista.

It looks like my next choice of operating system will have a suggested retail price for full package product of $399.00 and a suggested upgrade retail price of $259.00. Given that I'm running Windows Server 2003 at home it looks like I'll be paying the higher price. 


 

Categories: Technology

Gabe Rivera, author of Techmeme, has a blog post entitled Why I don't offer a personal filter where he writes

I'm facing another round of inquiries on personal filtering, mostly from Techmeme fans who've read Ross Mayfield's or Dare Obasanjo's recent thoughts on the matter. (Just for the record, the first round included requests from Jeff Clavier and Ted Leung nearly a year ago!)

Why don't I offer a personal filter service aka "meMeme" aka "my.memeorandum"? Briefly, filters based on the editorial approach used for Techmeme/memeorandum don't work well outside of a few topic domains (like politics and tech), because cross linking is typically too sparse to produce a compelling mix of news. Sam Ruby unintentionally confirmed this yesterday should you pause to consider what sort of daily news selection could be derived from his Venus output. While it's true that cross linking is dense in some blogospheres, these are largely the same domains already covered by my existing sites.

Why not try editorial approaches based on new kinds of semantic analyses? My belief is that the requisite technology is harder than anything powering Google News, Topix, or my current sites. Attempts based on current technologies come up woefully short, with the resulting "Daily Me" consisting of a seemingly random mix of content missing most or all "must have" articles and posts. And having the "must haves" is essential for winning the earlier adopter types that would dominate the userbase of such a filter in the first place.

I reread the output from Sam's blogroll and it reminded me that there is a difference between the scenarios that sites like Techmeme and Tailrank are interested in and the goals of a personalized meme tracker. Here are a couple of questions to get you started on understanding the differences in implementation choices one might make when building a personalized meme tracker versus a topic-specific memetracker:

  1. Q: How do you deal with "noise" links such as http://del.icio.us/tag/rest or http://www.technorati.com/tag/AJAX which may be common in the feeds the user is interested in?

    A: In both cases, it would seem the first step is to hard-code the application to understand certain kinds of links as "noise". The interesting question is how to deal with the introduction of new types of "noise" links to the ecosystem. A web-based application can easily be updated as new "noisy" links enter the system, but things are a bit more difficult for a desktop application. Perhaps allowing users to nominate certain classes of links as noise?

  2. Q: What 'class' of news items or blog posts should be used in evaluating what is [currently] popular?

    A: It is quite obvious that simply using the entirety of the posts from a particular feed to calculate a link's popularity is flawed. Using that metric, I suspect that links such as http://adaptivepath.com/publications/essays/archives/000385.php or http://scobleizer.wordpress.com/2006/06/10/correcting-the-record-about-microsoft/ would always be the most popular links from my blogroll. Using a specific date or time range (e.g. the past 24-48 hours) seems to be what sites such as Techmeme and Tailrank do. An aggregator such as RSS Bandit or FeedDemon may use other techniques, such as only using 'unread' items to calculate currently popular topics.

  3. Q: How do you deal with link blogs?

    A: A number of people in my blogroll have blog posts that are basically a repost of all the links they have posted to del.icio.us that day (e.g. Stephen O'Grady and Mark Baker). Sites like Techmeme and Tailrank filter these posts because no one wants to see a bunch of headlines that are all of the form 'links for 2006-09-05' with no real content. On the other hand, if a large number of folks in my blogroll are linking to a particular news item, then it is likely to be interesting to me regardless of whether there are 'meaty' blog posts behind their links or just linkblog-style postings.

These are a couple of the questions that I've been pondering since I started thinking about this feature a couple of months ago. At the end of the day, although Gabe's perspective is useful since he built the site that inspired this thinking, I think the scenarios are different enough to change some of the implementation choices in ways that may surprise some people.
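
To make the questions above a bit more concrete, here is a rough sketch of the kind of scoring loop I have in mind. The FeedItem class and its members are invented for illustration; a real implementation would hang off RSS Bandit's internal item model.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical item shape, purely for illustration.
    class FeedItem
    {
        public string FeedTitle;
        public DateTime Published;
        public bool IsLinkBlogPost;                       // e.g. 'links for 2006-09-05' posts
        public List<string> OutboundLinks = new List<string>();
    }

    static class PopularTopics
    {
        // Links we never want to count, per question 1 above.
        static readonly string[] NoisePrefixes =
        {
            "http://del.icio.us/tag/",
            "http://www.technorati.com/tag/"
        };

        // Counts how many distinct feeds linked to each URL over the last 48 hours,
        // optionally keeping link blog posts in the mix (question 3 above).
        public static IEnumerable<KeyValuePair<string, int>> Rank(
            IEnumerable<FeedItem> unreadItems, bool includeLinkBlogs)
        {
            DateTime cutoff = DateTime.UtcNow.AddHours(-48);   // question 2: time window
            var votes = new Dictionary<string, HashSet<string>>();

            foreach (FeedItem item in unreadItems)
            {
                if (item.Published < cutoff) continue;
                if (item.IsLinkBlogPost && !includeLinkBlogs) continue;

                foreach (string link in item.OutboundLinks)
                {
                    if (NoisePrefixes.Any(p => link.StartsWith(p))) continue;

                    if (!votes.ContainsKey(link))
                        votes[link] = new HashSet<string>();
                    votes[link].Add(item.FeedTitle);            // one vote per feed per link
                }
            }

            return votes.Select(kv => new KeyValuePair<string, int>(kv.Key, kv.Value.Count))
                        .OrderByDescending(kv => kv.Value);
        }
    }

The includeLinkBlogs flag is exactly the kind of knob I have in mind; what Techmeme treats as noise may be the signal a personalized tracker cares about.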

PS: It seems Sam has already turned Gabe's feedback into code based on reading his blog post MeMeme 2.0. There are definitely interesting times ahead.


 

About six months ago, I wrote a blog post entitled Jubilee Thoughts: Tracking Hot Topics where I talked about adding meme tracking functionality similar to the features of Memeorandum and TailRank to RSS Bandit. Since I wrote that blog post I haven't written a lick of code that actually does this, but I've thought and talked about it a lot. And while all I've done is talk, I can't help but notice that a few others have been writing code while I've been pontificating in my blog.

In his blog post entitled Spyder Spots a Memetracker Nick Bradbury writes

Andy "Spyder" Herron writes about the "personal memetracker" that's hidden in FeedDemon 2.0.0.25.

I had hoped to complete this feature by now, but as Andy points out, it still needs some work (which is why I hid it and gave it a "beta" label). If you'd like to try it out, select "Popular Topics" from the Browse menu (or just add the "Popular Topics" toolbutton to the toolbar above FeedDemon's browser).

I should add that this feature will probably be useful only to people who subscribe to a lot of feeds since it relies on common links to determine popularity. So if you're not subscribed to feeds which link to the same articles, chances are it won't show you any results.

In another blog post entitled MeMeme Sam Ruby writes

Ross Mayfield: Cue up not what is popular, or what the people I subscribed to produced.  Cue up what my social network has found interesting.
Herewith, a simple demonstration of what aggressive canonicalization can produce.  Venus may be in Python, but suppose I’m in a Ruby mood.  The cache is simply files in Atom 1.0 format, with all textual content normalized to XHTML.

Let's make a few simplifying assumptions: all posts are created equal, each post can only vote once for any given link (this also takes care of things like summaries which partially repeat content), posts implicitly vote (once!) for themselves, and the weight of a vote degrades as the square of the distance between when the post was made and now.

Here’s the code, and here’s a snapshot of the output.  The output took 6.239 elapsed seconds to produce on my laptop.  I still have more work to do to eliminate some of the self-referential links (in fact, I a priori removed Bob Sutor’s blog from the analysis as otherwise it would dominate the results).  But I am confident that this is solvable; in fact, I am working on expanding what filters can do.  I’ll post more on that shortly.

With both Sam and Nick on the case, I'm quite sure that within the next few months it will be taken for granted that one of the features of a news aggregator is to provide personalized meme tracking. Although I'm sure that we'll all use the same set of basic rules for providing this feature, I suspect that the problems we are trying to solve will end up being different, which will influence how we implement the feature.

For example, the main reason I want this feature isn't to track what the popular topics are across multiple blogs but instead to find what the popular topics are across aggregated blog feeds such as blogs.msdn.com and the numerous planet sites. Reading Sam's blog, it seems he'd consider the same feed linking to the same news items as spam to filter out, while I consider it to be the only part of the feature I'd use. This issue illustrates the main problem I've had with designing the feature in my head. What "knobs" or options should we give users to control how the meme tracker decides what is interesting or not vs. what should be ignored when generating the list of 'hot topics' (e.g. the various meme trackers have said they filter out link blogs since they tend to dominate the results as well)?
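
As an aside, Sam's weighting rule (a vote that degrades as the square of the post's age) is simple enough to sketch. Something along these lines, where the '+ 1.0' is my own addition to keep brand-new posts from dividing by zero; Sam's actual code may differ:

    using System;

    static class VoteWeight
    {
        // Weight of a vote cast by a post published at 'published', following the
        // rule that a vote degrades as the square of its age in days.
        public static double For(DateTime published, DateTime now)
        {
            double ageInDays = (now - published).TotalDays;
            if (ageInDays < 0) ageInDays = 0;   // guard against future-dated posts
            return 1.0 / (ageInDays * ageInDays + 1.0);
        }
    }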

Since I've decided to be more focused with regards to RSS Bandit development, I won't touch this feature until podcasting support is done. However I'd like to hear thoughts from our users in the meantime.