I’ve been blogging since before it was called “blogging”; I started around 1998 on a GeoCities site. I like browsers. The internet is awesome. Web pages are great. Web pages have literally changed the world. Still, call me old-fashioned, but I like Smart Clients. Whether you call them “Smart Clients”, “Fat Clients”, or “Client Server Applications”, they can often still offer the best user experience and the most rewarding developer experience out there.
In the ’90s and ’00s there were various reasons why companies were going online. A lot of it was hype: the desire to be part of the big movement that was going on. Some businesses found that some of their worst application deployment nightmares could be avoided by using the browser as a delivery mechanism. Some found that a standards-compliant web page could (sort of, with a lot of pain) reach a wider audience, such as Mac or Unix users. Some found that keeping things like database connections behind the firewall was a Good Thing.
Not all is well on the Web
Despite outrageous success, not everything is great in the land of browser-based applications. Is the in-browser, markup-based application the best choice, or is it being carried forward purely by momentum? If someone at your business announces that “We’re going to build a new Widget Widgetizer for our Agents”, is it an unspoken assumption that it will be delivered via HTML? Is it cheaper and easier to build, debug, deploy, and maintain a browser-based app than a Smart Client application? I wonder.
Doing layouts for web sites is still painful. HTML was not precisely meant to be a ubiquitous content-positioning mechanism. The post-<table/> world can be infuriating because CSS is still painful. Successfully doing CSS-only layouts seems to require Herculean efforts. Sure, people do it all the time, but these skills seem to be very hard won. The average CSS expert I talk to can rattle off an astounding list of browser-specific gotchas, and most will confess that testing is extremely time consuming. Firefox seems to mostly get it right, but that doesn’t matter if Safari and IE don’t. Hilariously, IE8 will get most things right, but this is a curse in disguise: all of the server-side “if browser is IE” code out there will potentially now break under IE8. Even the dynamic menus generated by ASP.NET (a Microsoft product) were broken in the first IE8 beta, and appear to be broken still in the first release candidate. Maybe it’s just the way my mind works, but I’d rather troubleshoot a threading issue than try to figure out which CSS attributes function differently on which DOM objects in which browser and scream about min-height:xx; being ignored.
HTML 5 promises to save us with real layout primitives, but if we can’t even get rid of IE6 after 7 years, what kind of adoption timetable can we expect with a whole new paradigm?
The Internet, which in its current incarnation was designed around stateless ideas, is being used to deliver applications that need to represent state to the user. It does this by delivering documents with complex layouts written in languages that were never meant to allow for complex layouts, supplemented by scripting languages pushed to the breaking point.
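The pattern is familiar enough to sketch. Here is a minimal model of how stateless HTTP is coerced into carrying state (all names are mine, illustrative, and not from any particular framework): each request arrives with no memory of the last one, so a token travels with every round trip and the server reconstitutes the user’s context from a store.

```python
import uuid

# Illustrative in-memory session store; a real server would use a
# database or distributed cache, but the shape is the same.
sessions = {}

def handle_request(token, action):
    """Simulate one stateless request: no context survives between
    calls, so everything is rebuilt from the token on each round trip."""
    if token not in sessions:
        token = str(uuid.uuid4())
        sessions[token] = {"cart": []}
    state = sessions[token]
    if action.startswith("add:"):
        state["cart"].append(action[len("add:"):])
    return token, list(state["cart"])

token, cart = handle_request(None, "add:widget")
token, cart = handle_request(token, "add:gadget")
print(cart)  # ['widget', 'gadget']
```

Every “stateful” web application you have used is some elaboration of this trick; the statefulness lives entirely in the store and the token, never in the connection itself.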
iPhone as indicator?
I had, for years, secretly held the theory that Microsoft avoidance was the primary factor propelling browser-based development in the direction it was going. Building native apps for the various Linux flavors isn’t all that fun. Java Swing is absolutely terrible. Visual Basic, C++, or .NET on Windows? Pretty good! Personally, if I wanted to build an application that could reach over 90% of PCs in America, and have a great developer experience while doing so, I’d say that’s a pretty good bet. A lot of people disagree with me.
With the iPhone having gone native with Cocoa Touch, and Google re-imagining ActiveX via Native Client, I wonder if we are seeing the beginning of a trend.
Silverlight to the Rescue?
I was meeting with a potential client who seemed to be somewhat familiar with some of my online antics. The client asked, “So, why are you so into Silverlight?” There are really two answers to this question. One, of course, is that my Nerd Brain gleefully laps up the serotonin released as a side effect of my diet of C# 3, VS2008, LINQ, lambdas, vector graphics, and so forth. That answer does not lend itself well to making a case for the business value of your expertise. So what came out of my mouth was something like this:
If we take a look at the application designs and user experiences in the current “web 2.0” world, we can make some observations. We are seeing an amazing level of creativity and high production quality in graphics and animation. We are seeing instant interaction: multiple simultaneous asynchronous actions happening by exchanging small messages with the server instead of whole-screen refreshes. We have Google Maps, several “Outlook in the browser” variants, and a host of other accomplishments I wouldn’t have believed without proof. The technologies allowing us to do this are all, in one way or another, hacks, round-about techniques, unintended extensions, or fragile houses of cards. They are steps toward real Smart Client technologies. They are band-aids and medications lavished on a wounded soldier who was never meant to fight this war. I prefer to take the most direct route and use a tool perfectly suited to the types of experiences we want to create. Silverlight is the direct route from A to B.
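That “small messages instead of whole-screen refreshes” idea reduces to something simple: the server sends only a delta, and the client merges it into the view it already holds. A toy sketch, with invented field names, of what every Ajax-style update amounts to:

```python
def apply_delta(view_state, delta):
    """Merge a small server message into the client's existing view,
    Ajax-style, instead of re-rendering the entire page."""
    merged = dict(view_state)
    merged.update(delta)
    return merged

# The client already holds a full view...
view = {"inbox_count": 3, "status": "online", "selected_folder": "Inbox"}
# ...and the server sends only what changed.
delta = {"inbox_count": 4}
view = apply_delta(view, delta)
print(view["inbox_count"])  # 4
```

The browser world achieves this with XMLHttpRequest and script; a Smart Client platform gets the same message-merge model as a first-class citizen.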
What’s your point?
At any rate, my goal here is not to start a war. Rather, let’s pretend your business is actually having a real discussion about whether an application should be browser based or not. What does the landscape look like today? I’ve already made some comments about the developer story, but what about:
Data, Firewalls, and you
I served as technical reviewer for a book called “Apache SOAP”, before SOAP became somewhat synonymous with “Web Services”. It seemed clear that HTTP-based message exchange could solve one of the issues associated with the client-server era. The firewall is already open for HTTP traffic, so why not send data instead of markup? There are a number of options today for exchanging data through the firewall: WCF, Remoting, RESTful services, ASMX services. Not only does this negate one of the old benefits of markup-based applications, but a lot of things are moving in the direction of distributed computing, grid computing, mashups, and crowd-sourcing. The success of Folding@home, for example, shows what a distributed, auto-updating smart client can do.
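The “send data instead of markup” point is easy to make concrete. Over the same firewall-friendly HTTP channel a web page would use, a smart client can pull bare data and own the presentation itself. A rough sketch (the payload and rendering function are invented for illustration):

```python
import json

# A smart client receives bare data over port 80, exactly as a
# web page would arrive (payload invented for illustration).
payload = '{"id": 42, "name": "Widget", "qty": 7}'
row = json.loads(payload)

# The client owns presentation: the same data can drive a grid, a
# chart, or an offline cache without another server round trip.
def render_row(r):
    return f"{r['name']} x{r['qty']} (#{r['id']})"

print(render_row(row))  # Widget x7 (#42)
```

A markup-based application, by contrast, resends the layout tangled up with the data on every refresh, and the client can do nothing with the data except display it.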
I often hear the words “performance doesn’t matter”, mostly from people for whom performance is a black eye on their worldview. Performance does matter. I currently have 250 movies in my Netflix queue, and simply scrolling the page up and down in IE maxes out the CPU. Pulling up damonpayne.com in Firefox and resizing the browser window maxes out the CPU. Yet in Silverlight I can play full-screen HD video and the CPU is around 25%. Animations? Visual tools? Large data sets? What about callbacks to a client without the client needing to constantly poll? I suppose network performance doesn’t matter either? In case you haven’t noticed, CPUs aren’t getting faster at the rate they used to. Performance matters, and the compiled-language camp has a big advantage here.
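The polling point deserves a back-of-the-envelope check. A page that polls for changes pays for every empty round trip; a client that receives server callbacks pays only for actual events. The numbers below are illustrative, not measurements:

```python
# Illustrative numbers: 1,000 clients polling every 5 seconds versus
# the server pushing a message only when something actually changes.
clients = 1_000
poll_interval_s = 5
events_per_hour = 20  # real changes worth telling anyone about

polling_requests_per_hour = clients * (3600 // poll_interval_s)
pushed_messages_per_hour = clients * events_per_hour

print(polling_requests_per_hour)  # 720000
print(pushed_messages_per_hour)   # 20000
```

Under these assumptions, polling generates 36 times the traffic of push, and almost all of those requests come back empty. That is the network-performance cost of pretending a request/response medium is a two-way channel.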
Through various techniques, it seems to me that the deploy-and-update issue has mostly been solved. Java had Java Web Start. There is a slew of aftermarket “application updaters” for various platforms. .NET had the Application Updater Block and now has ClickOnce and XBAP. A rich Silverlight (or Flash) app can still be delivered through the browser while keeping the advantages of a Smart Client platform. Perl has CPAN. Vista has Windows Update. In this late hour, what kind of deployment advantage can we say a web application has over a smart client?
Cross-Platform web apps
Standards (HTML, CSS, ECMAScript), no matter how well specified, will somehow manage to be interpreted differently by different vendors. The nice thing about a plug-in model (Silverlight 2, Flash) is that the plug-in tends to come from a single vendor for every browser and operating system. Most of the code will be shared, with a bare-minimum bootstrap rebuilt for each specific platform. There are some exceptions, such as Moonlight. In the case of Moonlight, Miguel, of whom I am a huge fan, actually has access to Microsoft’s internal specification and test suites. A spec PLUS a suite of tests that shows you whether you are rendering correctly is a good thing. I have much more confidence that Microsoft and Novell can get the Silverlight experience to be 99.9% universal than I do that IE/Firefox/Opera/Safari/Chrome will come close to converging.
Smart client software can give you great control over the presentation of your brand.
After 10 years of the browser-delivered, markup-based application getting all the attention, I think the Smart Client is making a comeback. Smart Client technology has gotten better. Having massive storage and gadgets like GPS and accelerometers in our phones has brought back to mind the advantages of access to the local system. Ideas like the App Store help close the usability gap between installing software and just hitting a webpage. Standards have emerged letting Smart Clients take advantage of the web.
Part of what I’ve said here can be chalked up to personal preference. If you like dynamic languages and don’t write unit tests, or you hate Microsoft, or you are anti-plug-in, you may disagree. I would appreciate any rational feedback pointing out where you think I’m wrong.
Note: As I was doing proofreading for this article before publication, Scott Hanselman posted some thoughts about Quake Live. At one point he states “There's no reason for QuakeLive to be shoe-horned into a browser plugin”. I couldn’t agree more.