My sister is paying me a surprise visit this weekend and I decided to look on the Web for ideas for things we could do together. My initial thought was that we'd go to the movies and perhaps check out Bodies: The Exhibition, but I wanted to see if I could get a better suggestion on the Web.

My first instinct was to try Seattle - City Search but I had to give up when I realized the only events listed for today were either announcements of what DJs would be at local clubs tonight or announcements of sales at local stores. Another thing that bugged me is how few ratings there were for events or locations on City Search. This reminds me of a blog post on Search Engine Land entitled Local And The Paradox of Participation which came to a set of incorrect conclusions about a poll claiming that people are equally likely to post a positive or negative review of an event or location. The incorrect conclusion was that it is a myth that few people are likely to post reviews. Given that locations and events attended by thousands of people tend to have only dozens of reviews on almost every review site I've ever seen, this looks like a fact, not a myth. The poll only seems to imply that people are willing to share their opinion if prompted, which is totally different from someone attending a nightclub or concert and then feeling compelled to visit one of umpteen review sites to share their opinion. What is surprising to me is that there doesn't seem to be even a small community of die-hard reviewers on City Search, which is unlike most review sites I've seen. Just compare Amazon or IMDB, which both seem to have a number of reviewers who are on top of certain categories of products.

Anyway, what does this have to do with Google? Well, I went to Rich Skrenta's much-vaunted starting point of the Internet and tried some queries such as "local events", "seattle events" and "events in seattle" with pathetic results. The only useful links in the search results page led me to a couple of event search engines (e.g. NWsource, Upcoming) that were pathetically underpopulated with event information. None of them even had a listing for Bodies: The Exhibition. Lame.

I tried Google Local which turned out to be a redirect to their mapping site. Shouldn't a local search engine be able to find events in my local area? Double lame.

Before you bother pointing it out, I realize that other search engines don't do a much better job either. This seems to point to an opportunity to add a lot of value in what must be a very lucrative search market. I'm surprised that Yahoo! hasn't figured out how to do more with their purchase of Upcoming.org. Then again Yahoo! hasn't figured out what to do with any of the Web 2.0 startups they've purchased, so maybe that is expecting too much. Maybe Google will purchase Eventful.com and fix this fairly big hole in their search offerings. Somehow I doubt it.


 

I checked out the official Apple iPhone site, especially the screencasts of the phone user interface and iPod capabilities. As an iPod owner, $500 is worth it just to get my hands on this next generation iPod which makes my Video iPod look old and busted. On the other hand, although the text messaging UI is pretty sweet, a cellphone without tactile feedback when pushing its buttons is a pain in the ass, especially when the layout of the buttons continually changes. I wouldn't wish that on my worst enemy. Maybe I'm just unusual in that I don't want to be required to look at the cellphone's screen when using it. I pull my phone out of my pocket, unlock it and call the last number dialed, often without looking at the screen before putting it to my ear. It's hard to imagine that my muscle memory would ever get used to doing that without tactile feedback from the phone when navigating its interface. It also hasn't been announced whether the phone will be able to sync with Microsoft Exchange. As someone who used his phone to keep on top of the goings-on at work while at CES, that's another non-starter for me.

That said, I have to agree with a lot of what's said in the article Macworld: Ten Myths of the Apple iPhone. A lot of the complaints about the iPhone just seem like sour grapes. Me, I'm going to wait until I can get an unlocked iPhone so I don't have to replace my Cingular 3125, or until Apple ships a 6th generation iPod (aka the iPhone sans phone features).


 

Categories: Technology

From the Reuters article R&B sales slide alarms music biz we learn

With the exception of new age, the smallest genre tracked by Nielsen SoundScan, R&B and rap suffered the biggest declines in 2006 of all styles of music. R&B, with album scans of 117 million units, was down 18.4% from 2005, while the rap subgenre's 59.5 million scans were down 20.7%. Total U.S. album sales fell 4.9% to 588.2 million units. Since 2000, total album sales have slid 25%, but R&B is down 41.4% and rap down 44.4%. In 2000, R&B accounted for 25.4% of total album sales, and rap 13.6%. In 2006, their respective shares fell to nearly 20% and 10%.
...
Merchants point to large second-week declines in new albums. For example, Jay-Z's 2006 "Kingdom Come" album debuted with 680,000 units in its first week and then dropped nearly 80%, to almost 140,000 units.
...
"Downloading and Internet file sharing is a problem and the labels are really late in fixing it," Czar Entertainment CEO and manager of the Game Jimmy Rosemond says. "With an artist like Game, his album leaked before it came out, and I had 4 million people downloading it."
...
In 2006, the best-selling rap album was T.I.'s "King," which sold 1.6 million copies, while the best-selling R&B album was Beyonce's "B'Day," which moved 1.8 million units. But those are exceptions.
...
A senior executive at one major label says ringtone revenue now exceeds track download revenue. And since Nielsen RingScan started tracking master ringtones in September, rap and R&B have comprised 87% of scans generated by the top 10 sellers.

Interscope's Marshall points out that Jibbs, for example, "has sold an incredible 1.4 million ringtones" -- a figure that might well offset lost album revenue. The rapper has moved 196,000 units of his "Jibbs Feat. Jibbs" album since its October 24 release. But figuring the ringtones he's sold at $2 apiece translates into $2.8 million in revenue, the equivalent of another 233,000 albums at a wholesale cost of $12 per unit.

And, Marshall adds, Chamillionaire has moved more than 3 million ringtones on top of scanning nearly 900,000 units of his "Sound of Revenge" album.

Some look at the above data and see it as an argument that the long tail spells the end of the hit. Others look at it and see it as more evidence that piracy is destroying the music industry. Or it may just be a sign that hip hop is finally played out. Me, I look at the ringtone industry and wonder whether it doesn't stand out as an example of where walled gardens and closed platforms have worked out quite well for the platform vendors and their partners, yet [almost] detrimentally for consumers.


 

Categories: Current Affairs

Recently an RSS Bandit user made a feature request on our forums about a Good Google Reader Feature and wrote

On RSS Bandit, while reading all the news from a feed at the same time on the reading pane (feed selected) and scrolling down to read all news, you scroll all the page and nothing is marked as readed. This only happen when you select the message on Feed Details. On Google Reader every new you scroll down became marked as readed automatically. It's a very simple and natural scheme. Works really well, just check out http://reader.google.com.

I checked out Google Reader and I had to agree that the feature is pretty hot, so yesterday I brushed up on my knowledge of the HTML DOM and added the feature to RSS Bandit. Below is a video showing the new feature in action.

What's funny is that Andy Edmonds asked me for this feature a couple of years ago and I never attempted to add it because at the time I was intimidated by Javascript and DHTML. It turned out to be a lot easier than I thought.
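
For the curious, the gist of the approach is something like the sketch below: hook the reading pane's scroll event and treat any item whose bottom edge has scrolled above the top of the visible area as read. This is not the actual RSS Bandit code, just an illustration of the idea, and markItemAsRead() is a hypothetical hook into the host application.

// Rough sketch only: mark feed items as read once they scroll out of view.
// Assumes each item is rendered in its own <div class="item"> inside the reading pane.
function hookScrollToMarkRead(pane) {
  pane.onscroll = function () {
    var items = pane.getElementsByTagName('div');
    for (var i = 0; i < items.length; i++) {
      var item = items[i];
      if (item.className !== 'item' || item.isRead) continue;
      // the item's bottom edge has scrolled above the top of the viewport
      if (item.offsetTop + item.offsetHeight < pane.scrollTop) {
        item.isRead = true;
        markItemAsRead(item.id);  // hypothetical callback into the newsreader
      }
    }
  };
}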

RSS Bandit users can expect to see this feature in the next beta of the Jubilee release which should be available in the next week and a half. It would be sooner but unfortunately I'm on my way to Las Vegas to attend CES for most of next week and Torsten is on vacation. By the way, users of Windows Vista should be glad to know that the next beta will finally run fine on that operating system.

NOTE: The slowness in the video is due to the fact that my CPU is pegged at 100% while capturing the screencast with Windows Media Encoder. This feature doesn't noticeably affect the performance of the application while running regularly.


 

Categories: RSS Bandit

Joel Spolsky has a seminal article entitled Don't Let Architecture Astronauts Scare You where he wrote

A recent example illustrates this. Your typical architecture astronaut will take a fact like "Napster is a peer-to-peer service for downloading music" and ignore everything but the architecture, thinking it's interesting because it's peer to peer, completely missing the point that it's interesting because you can type the name of a song and listen to it right away.

All they'll talk about is peer-to-peer this, that, and the other thing. Suddenly you have peer-to-peer conferences, peer-to-peer venture capital funds, and even peer-to-peer backlash with the imbecile business journalists dripping with glee as they copy each other's stories: "Peer To Peer: Dead!"

 The Architecture Astronauts will say things like: "Can you imagine a program like Napster where you can download anything, not just songs?" Then they'll build applications like Groove that they think are more general than Napster, but which seem to have neglected that wee little feature that lets you type the name of a song and then listen to it -- the feature we wanted in the first place. Talk about missing the point. If Napster wasn't peer-to-peer but it did let you type the name of a song and then listen to it, it would have been just as popular.

This article is relevant because I recently wrote a series of posts explaining why Web developers have begun to favor JSON over XML in Web services. My motivation for writing that series was a set of conversations I'd had with former co-workers who seemed intent on "abstracting" the discussion and comparing whether JSON is a better data format than XML in all the cases where XML is used today, instead of understanding the context in which JSON has become popular.

In the past two weeks, I've seen three different posts from various XML heavy hitters committing this very sin:

  1. JSON and XML by Tim Bray - This kicked things off, firing some easily refutable allegations about the extensibility and Unicode capabilities of JSON as a general data transfer format.
  2. Tim Bray on JSON and XML by Don Box - Refutes the allegations by Tim Bray above but still misses the point.
  3. All markup ends up looking like XML by David Megginson - argues that XML is just like JSON except that the former uses angle brackets while the latter uses curly braces and square brackets, making them "Turing" equivalent. Academically interesting, but not terribly useful information if you are a Web developer trying to get things done.

This is my plea to you: if you are an XML guru and you aren't sure why JSON seems to have come out of nowhere to threaten your precious XML, go read JSON vs. XML: Browser Security Model and JSON vs. XML: Browser Programming Models, then let's have the discussion.

If you're too busy to read them, here's the executive summary. JSON is a better fit for Web services that power Web mashups and AJAX widgets because it gets around the cross-domain limitations browsers place on XMLHttpRequest, and because it is essentially serialized JavaScript objects, which makes it a better fit for client-side scripting that is primarily done in JavaScript. That's it. XML will never fit the bill as well for these scenarios without changes to the existing browser ecosystem, which I doubt are forthcoming anytime soon.

Update: See comments by David Megginson and Steve Marx below.


 

Categories: XML

It's a new year and time for another brand new Windows Live service to show up in beta. This time it's the Windows Live for TV Beta which is described as follows

What it is
Windows Live™ for TV Beta is a rich, graphically-driven interface designed for people who use Windows Live Spaces and Messenger and Live Call on large-screen monitors and TVs. We're still in the early stages of this beta, so many of the features might not work properly yet. That's why we really need your feedback! This beta is in limited release, so you must request access to the trial group. After you’re in the beta, come back to this page and let us know what you think.

You can also find out more about the product on the team's blog at http://wlfortv.spaces.live.com which includes the following screenshot

Hey, I think I can see me in that screen shot. :)


 

Categories: Windows Live

A perennial topic for debate on certain mailing lists at work is rich client (i.e. desktop) software versus Web-based software. For every person who sings the praises of a Web-based program such as Windows Live Mail, there's someone wagging a finger and pointing out that "it doesn't work offline" and "not everyone has a broadband connection". A lot of these discussions have become permathreads on some of the mailing lists I'm on, and I can recite the detailed arguments for both sides in my sleep.

However I think both sides miss the point and agree more than they disagree. The fact is that in highly connected societies such as North America and Western Europe, computer usage overlaps almost completely with internet usage (see Nielsen statistics for U.S. homes and Top 25 most connected countries). This trend will only increase as internet penetration spreads across emerging markets.

What is important to understand is that for a lot of computer users, their computer is an overpriced paperweight if it doesn't have an Internet connection. They can't read the news, can't talk to their friends via IM, can't download music to their iPods or Zunes, can't people watch on Facebook or MySpace, can't share the pictures they just took with their digital cameras, can't catch up on the goings-on at work via email, can't look up driving directions, can't check the weather report, can't do research for any reports they have to write, and the list goes on. Keeping in mind that connectivity is key matters far more than whether the user experience is provided via a desktop app written using Win32 or a "Web 2.0" website powered by AJAX. Additionally, the value of approachability and ease of use over "features" and "richness" cannot be emphasized enough.

Taken from that perspective, a lot of things people currently consider "features" of desktop applications are actually bugs in today's Internet-connected world. For example, I have different files in the "My Documents" folders on the 3 or 4 PCs I use regularly. Copying files between PCs and keeping track of which version of which file is where is an annoyance. FolderShare to the rescue.

When I'm listening to my music on my computer I sometimes want to be able to find out what music my friends are listening to, recommend my music to friends or just find music similar to what I'm currently playing. Last.fm and iLike to the rescue.

The last time I was on vacation in Nigeria, I wanted to check up on what was going on at work but never had access to a computer with Outlook installed, nor could I have actually set it up to talk to my corporate account even if I had. Outlook Web Access to the rescue.

Are these arguments for Web-based or desktop software? No. Instead they are meant to point out that improving the lives of computer users should mean finding better ways of harnessing their internet connections and their social connections to others. Sometimes this means desktop software, sometimes it means Web-based software, and sometimes it means both.


 

Categories: Technology

Over the holidays I had a chance to talk to some of my old compadres from the XML team at Microsoft and we got to talking about JSON as an alternative to XML. I concluded that there are a small number of key reasons that JSON is now more attractive than XML for the kinds of data interchange that power Web-based mashups and Web gadgets/widgets. This is the second in a series of posts on what those key reasons are.

In my previous post, I mentioned that getting around limitations in cross domain requests imposed by modern browsers has been a key reason for the increased adoption of JSON. However this is only part of the story.

Early on in the adoption of AJAX techniques across various Windows Live services, I noticed that even for building pages with no cross-domain requirements, our Web developers favored JSON over XML. One response that kept coming up was the easier programming model when processing JSON responses on the client compared with XML. I'll illustrate this difference in ease of use via JScript code that shows how to process a sample document, in both XML and JSON formats, taken from the JSON website. Below is the code sample:

var json_menu = '{"menu": {' + '\n' +
'"id": "file",' + '\n' +
'"value": "File",' + '\n' +
'"popup": {' + '\n' +
'"menuitem": [' + '\n' +
'{"value": "New", "onclick": "CreateNewDoc()"},' + '\n' +
'{"value": "Open", "onclick": "OpenDoc()"},' + '\n' +
'{"value": "Close", "onclick": "CloseDoc()"}' + '\n' +
']' + '\n' +
'}' + '\n' +
'}}';


var xml_menu = '<menu id="file" value="File">' + '\n' +
'<popup>' + '\n' +
'<menuitem value="New" onclick="CreateNewDoc()" />' + '\n' +
'<menuitem value="Open" onclick="OpenDoc()" />' + '\n' +
'<menuitem value="Close" onclick="CloseDoc()" />' + '\n' +
'</popup>' + '\n' +
'</menu>';

WhatHappensWhenYouClick_Xml(xml_menu);
WhatHappensWhenYouClick_Json(json_menu);

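// JSON version: eval() turns the string into a JavaScript object graph that can be navigated directly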
function WhatHappensWhenYouClick_Json(data){

  var j = eval("(" + data + ")");

  WScript.Echo("When you click the " + j.menu.value + " menu, you get the following options");

  for(var i = 0; i < j.menu.popup.menuitem.length; i++){
   WScript.Echo((i + 1) + "." + j.menu.popup.menuitem[i].value
    + " aka " + j.menu.popup.menuitem[i].onclick);
  }

}

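// XML version: load the string into the MSXML DOM, then pull values out via getAttribute() and an XPath query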
function WhatHappensWhenYouClick_Xml(data){

  var x = new ActiveXObject( "Microsoft.XMLDOM" );
  x.loadXML(data);

  WScript.Echo("When you click the " + x.documentElement.getAttribute("value")
                + " menu, you get the following options");

  var nodes = x.documentElement.selectNodes("//menuitem");

  for(var i = 0; i < nodes.length; i++){
   WScript.Echo((i + 1) + "." + nodes[i].getAttribute("value") + " aka " + nodes[i].getAttribute("onclick"));
  }
}

When comparing both sample functions, it seems clear that the XML version takes more code and requires a layer of mental indirection, since the developer has to be knowledgeable about XML APIs and their idiosyncrasies. We should dig a little deeper into this.

A couple of people have already replied to my previous post to point out that any good Web application should validate JSON responses to ensure they are not malicious. This means my usage of eval() in the code sample should be replaced with a JSON parser that only accepts 'safe' JSON responses. Given that there are JSON parsers available that come in under 2KB, that particular security issue is not a deal breaker.
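
As an aside, here's a minimal sketch of the kind of validation such a parser performs, along the lines of the approach used by the parsers on json.org; the function name is mine and this only covers the safety check before falling back to eval().

// A minimal sketch of 'safe' JSON parsing in the spirit of the json.org parsers:
// strip out string literals and JSON values, then make sure nothing executable remains.
function parseJsonSafely(text) {
  var stripped = text
    .replace(/\\(?:["\\\/bfnrt]|u[0-9a-fA-F]{4})/g, '@')   // neutralize escape sequences
    .replace(/"[^"\\\n\r]*"|true|false|null|-?\d+(?:\.\d*)?(?:[eE][+\-]?\d+)?/g, ']') // collapse values
    .replace(/(?:^|:|,)(?:\s*\[)+/g, '');                  // drop structural brackets

  if (/^[\],:{}\s]*$/.test(stripped)) {
    return eval('(' + text + ')');  // nothing but JSON punctuation left, so eval is safe
  }
  throw new Error('parseJsonSafely: input is not valid JSON');
}

In the sample above, replacing eval("(" + data + ")") with parseJsonSafely(data) is all it would take to plug that hole.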

On the XML front, there is no off-the-shelf way to get a programming model as straightforward and as flexible as the one obtained from parsing JSON directly into objects using eval(). One light on the horizon would be if E4X became widely implemented in Web browsers. With E4X, the code for processing the XML version of the menu document above would be:

function WhatHappensWhenYouClick_E4x(data){

  var e = new XML(data);   // e is the root <menu> element

  WScript.Echo("When you click the " + e.@value + " menu, you get the following options");

  for each (var m in e.popup.menuitem){
   WScript.Echo( m.@value + " aka " + m.@onclick);
  }

}

However, as cool as the language seems to be, it is unclear whether E4X will ever see mainstream adoption. There is an initial implementation of E4X in the engine that powers the Firefox browser, although it seems to be incomplete. On the other hand, there is no indication that either Opera or Internet Explorer will support E4X in the future.

Another option for getting a simpler, object-centric programming model out of XML data could be to adopt a simple XML serialization format such as XML-RPC and provide off-the-shelf JavaScript parsers for this data format. A trivial implementation could be for the parser to convert XML-RPC to JSON using XSLT and then eval() the results. However it is unlikely that people would go through that trouble when they can just use JSON.
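
Instead of the XSLT route, a direct DOM-walking version could look something like the hedged sketch below, which handles only a subset of XML-RPC value types and reuses the same MSXML APIs as the earlier sample; the function name is mine and error handling is omitted.

// Sketch only: convert a (subset of) XML-RPC <value> node into a plain JavaScript value.
function xmlRpcValueToObject(valueNode) {
  var typed = valueNode.selectSingleNode('*');        // e.g. <string>, <int>, <struct>, <array>
  if (typed == null) return valueNode.text;           // bare <value>text</value> defaults to string

  switch (typed.nodeName) {
    case 'string':  return typed.text;
    case 'int':
    case 'i4':
    case 'double':  return Number(typed.text);
    case 'boolean': return typed.text === '1';
    case 'array':
      var arr = [], items = typed.selectNodes('data/value');
      for (var i = 0; i < items.length; i++) arr.push(xmlRpcValueToObject(items[i]));
      return arr;
    case 'struct':
      var obj = {}, members = typed.selectNodes('member');
      for (var j = 0; j < members.length; j++) {
        obj[members[j].selectSingleNode('name').text] =
          xmlRpcValueToObject(members[j].selectSingleNode('value'));
      }
      return obj;
    default:        return typed.text;                 // dateTime.iso8601, base64, etc. left as text
  }
}

Feeding it is just a matter of loading the XML-RPC response into the same Microsoft.XMLDOM object used earlier and calling the function on each <value> node.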

This may be another nail in the coffin of XML on the Web. 


 

Categories: Web Development | XML | XML Web Services

Over the holidays I had a chance to talk to some of my old compadres from the XML team at Microsoft and we got to talking about JSON as an alternative to XML. I concluded that there are a small number of key reasons that JSON is now more attractive than XML for the kinds of data interchange that power Web-based mashups and Web gadgets/widgets. This is the first in a series of posts on what those key reasons are.

The first "problem" that choosing JSON over XML as the output format for a Web service solves is that it works around security features built into modern browsers that prevent web pages from initiating certain classes of communication with web servers on domains other than the one hosting the page. This "problem" is accurately described in the XML.com article Fixing AJAX: XMLHttpRequest Considered Harmful, which is excerpted below

But the kind of AJAX examples that you don't see very often (are there any?) are ones that access third-party web services, such as those from Amazon, Yahoo, Google, and eBay. That's because all the newest web browsers impose a significant security restriction on the use of XMLHttpRequest. That restriction is that you aren't allowed to make XMLHttpRequests to any server except the server where your web page came from. So, if your AJAX application is in the page http://www.yourserver.com/junk.html, then any XMLHttpRequest that comes from that page can only make a request to a web service using the domain www.yourserver.com. Too bad -- your application is on www.yourserver.com, but their web service is on webservices.amazon.com (for Amazon). The XMLHttpRequest will either fail or pop up warnings, depending on the browser you're using.

On Microsoft's IE 5 and 6, such requests are possible provided your browser security settings are low enough (though most users will still see a security warning that they have to accept before the request will proceed). On Firefox, Netscape, Safari, and the latest versions of Opera, the requests are denied. On Firefox, Netscape, and other Mozilla browsers, you can get your XMLHttpRequest to work by digitally signing your script, but the digital signature isn't compatible with IE, Safari, or other web browsers.

This restriction is a significant annoyance for Web developers because of the compelling end user applications it rules out. However, there are a number of common workarounds, which are also listed in the article

Solutions Worthy of Paranoia

There is hope, or rather, there are gruesome hacks, that can bring the splendor of seamless cross-browser XMLHttpRequests to your developer palette. The three methods currently in vogue are:

  1. Application proxies. Write an application in your favorite programming language that sits on your server, responds to XMLHttpRequests from users, makes the web service call, and sends the data back to users.
  2. Apache proxy. Adjust your Apache web server configuration so that XMLHttpRequests can be invisibly re-routed from your server to the target web service domain.
  3. Script tag hack with application proxy (doesn't use XMLHttpRequest at all). Use the HTML script tag to make a request to an application proxy (see #1 above) that returns your data wrapped in JavaScript. This approach is also known as On-Demand JavaScript.

Although the first two approaches work, there are a number of problems with them. The first is that they add the requirement that the owner of the page also have webmaster-level access to a Web server and either tweak its configuration settings or be a savvy enough programmer to write an application to proxy requests between a user's browser and the third party web service. A second problem is that they significantly increase the cost and scalability impact of the page, because the Web page author now has to create a connection to the third party Web service for each user viewing their page instead of the user's browser making the connection. This can lead to a bottleneck, especially if the page becomes popular. A final problem is that if the third party service requires authentication [via cookies], then there is no way to pass this information through the Web page author's proxy due to browser security models.

The third approach avoids all of these problems without a significant cost to either the Web page author or the provider of the Web service. An example of how this approach is utilized in practice is described in Simon Willison's post JSON and Yahoo!’s JavaScript APIs where he writes

As of today, JSON is supported as an alternative output format for nearly all of Yahoo!’s Web Service APIs. This is a Really Big Deal, because it makes Yahoo!’s APIs available to JavaScript running anywhere on the web without any of the normal problems caused by XMLHttpRequest’s cross domain security policy.

Like JSON itself, the workaround is simple. You can append two arguments to a Yahoo! REST Web Service call:

&output=json&callback=myFunction

The page returned by the service will look like this:

myFunction({ JSON data here });

You just need to define myFunction in your code and it will be called when the script is loaded. To make cross-domain requests, just dynamically create your script tags using the DOM:

var script = document.createElement('script');
script.type = 'text/javascript';
script.src = '...' + '&output=json&callback=myFunction';
document.getElementsByTagName('head')[0].appendChild(script);
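
To make the excerpt concrete, the callback on your page might look something like the sketch below. The shape of the data object and the field names are just assumptions for illustration (check the actual service's documentation for the real ones), and the 'results' element is assumed to exist on the page.

// Hypothetical callback invoked when the dynamically inserted <script> tag loads;
// the service wraps its JSON response in a call to this function via the callback parameter.
function myFunction(data) {
  // 'data' is an already-evaluated JavaScript object; the field names below are assumed
  var results = (data && data.ResultSet) ? data.ResultSet.Result : [];
  var titles = [];
  for (var i = 0; i < results.length; i++) {
    titles.push(results[i].Title);          // assumed field, for illustration only
  }
  document.getElementById('results').innerHTML = titles.join('<br/>');
}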

People who are security minded will likely be shocked that this technique involves Web pages executing arbitrary code they retrieve from a third party site, since this seems like a security flaw waiting to happen, especially if the 3rd party site becomes compromised. One might also wonder what the point is of browsers restricting cross-domain HTTP requests if pages can load and run arbitrary JavaScript code [not just XML data] from any domain.

However despite these concerns, it gets the job done with minimal cost to all parties involved and more often than not that is all that matters.

Postscript: When reading articles like Tim Bray's JSON and XML, which primarily compares both data formats based on their physical qualities, it is good to keep the above information in mind, since it explains a key reason JSON is popular on the Web today, one that turns out to be independent of any physical qualities of the data format.


 

Categories: Web Development | XML | XML Web Services

Over the holidays I had a chance to talk to some of my old compadres from the XML team at Microsoft and we got to talking about JSON as an alternative to XML. I concluded that there are a small number of key reasons that JSON is now more attractive than XML for the kinds of data interchange that power Web-based mashups and Web gadgets/widgets. Expect a series of posts on this later today.

I wasn't sure I was going to write about this until I saw Mike Arrington's blog post about a GMail vulnerability, which implied that this is another data point in the XML vs. JSON debate. After reading about the vulnerability on Slashdot, I disagree. This seems like a novice cross site scripting vulnerability that is independent of JSON or XML, and it is succinctly described in the Slashdot comment by TubeSteak which states

Here's the super simple explanation

1. Gmail sets a cookie saying you're logged in
2. A [3rd party] javascript tells you to call Google's script
3. Google checks for the Gmail cookie
4. The cookie is valid
5. Google hands over the requested data to you

If [3rd party] wanted to keep your contact list, the javascript would pass it to a form and your computer would happily upload the list to [3rd party]'s server.

Mitigations to this problem are also well known and are also summarized in another Slashdot comment by buro9 who writes

When you surface data via Xml web services, you can only call the web service on the domain that the JavaScript calling it originates from. So if you write your web services with AJAX in mind exclusively, then you have made the assumption that JavaScript is securing your data.

The problem is created at two points:
1) When you rely on cookies to perform the implicit authentication that reveals the data.
2) When you allow rendering of the data in JSON which bypasses JavaScript cross-domain security.

This can be solved by doing two things:
1) Make one of the parameters to a web service a security token that authenticates the request.
2) Make the security token time-sensitive (a canary) so that a compromised token does not work if sniffed and used later.

The surprising thing is that I'd assumed the knowledge of using canary values was commonplace, but it took a lot longer than I expected to find a good bite-sized description of them. When I did, it came from co-worker Yaron Goland in a comment to Mark Nottingham's post on DOM vs. Web, where Yaron wrote

There are a couple of ways to deal with this situation:

Canaries - These are values that are generated on the fly and sent down with pages that contain forms. In the previous scenario evil.com wouldn't know what canary site X was using at that instant for that user and so its form post wouldn't contain the right value and would therefore be rejected. The upside about canaries is that they work with any arbitrary form post. The downside is that they require some server side work to generate and monitor the canary values. Hotmail, I believe, uses canaries.

Cookies - A variant on canaries is to use cookies where the page copies a value from a cookie into the form before sending it up. Since the browser security model only allows pages from the same domain to see that domain's cookie you know the page had to be from your domain. But this only works if the cookie header value isn't easily guessable so in practice it's really just canaries.

XMLHTTP - Using XMLHTTP it's possible to add HTTP headers so just throw in a header of any sort. Since forms can't add headers you know the request came from XMLHTTP and because of XMLHTTP's very strict domain security model you know the page that sent the request had to come from your site.

I guess just because something is common knowledge among folks building Web apps and toolkits at Microsoft doesn't mean it is common knowledge on the Web. This is another one of those things that everyone building Web applications should know about in order to secure their applications, but very few actually learn.
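
For the curious, here's a minimal browser-side sketch of the cookie and XMLHTTP variants Yaron describes. The cookie name and header name are placeholders I made up; the server would have to issue the canary cookie and check the header on every state-changing request.

// Sketch of the "cookie copy" + XMLHTTP header mitigation described above.
// 'canary' and 'X-Canary' are hypothetical names; the server sets the cookie
// and rejects any request whose header value doesn't match it.
function readCookie(name) {
  var match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
  return match ? decodeURIComponent(match[1]) : null;
}

function postWithCanary(url, body) {
  var xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                  : new ActiveXObject('Microsoft.XMLHTTP');
  xhr.open('POST', url, true);
  // Only pages served from this domain can read this cookie, and plain HTML
  // form posts can't add headers, so a matching header proves the request
  // came from our own page via XMLHTTP rather than a foreign form post.
  xhr.setRequestHeader('X-Canary', readCookie('canary'));
  xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
  xhr.send(body);
}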


 

Categories: Web Development