In the beginning of computer science, people (at that time still scientists) tried to centralize knowledge. They tried to build systems capable of handling all the tasks that would arise in their daily work. Systems like IBM's famous System/360 were such machines. They grew into highly complex systems, capable of everything we know from the small PCs under our desks or on our laps today. But these systems had a big disadvantage: their size and handling. Of course they could solve a lot of problems, but nobody really had enough spare room for these big machines, and who could afford the knowledge needed to administrate such a system? Whoever has tried to run OS/360, even in a virtual machine, knows what I mean. So computers and, of course, their software evolved. In that beginning, we had large monolithic systems. Many users could connect, but the application logic was still bound to one machine, or rather one system.

The next phase in software development was the client/server era. Now developers and system architects tried to separate the application logic. Typically, UI logic and controls were moved into external programs. The main part of the application logic still remained on a single “server”, but parts of the workload were put on the client. One of the main advantages was that the clients were now strong enough to execute application logic themselves and not only to forward requests to the server.

But again, this client/server principle had a big disadvantage that software developers and their system architects tried to overcome – Deployment! In the early days only a single machine needed to be maintained, but now many thousands of clients connect to the server, and all of them need to follow distinct rules, rules defined by the version of the software they are using. The cost of maintaining the installations and monitoring the software now rises extremely fast with the number of clients connecting to the whole system.

Besides the software evolution from monolithic to scattered architectures, the network topology changed as well. Not only in big industries, but for almost everyone. The cost of the network bandwidth needed to connect to a service was and is constantly falling. As a result, transmitting more data became cheaper and opened up the way for another architectural principle. Well, let's say another interpretation of an old one. The Thin Client idea is not really new, but it changed dramatically. The original goal of saving IT costs remained, but it is now pursued in a different way: by developing Web Clients.

The first overwhelming advantage of web clients is that by now they can be deployed without any effort on almost every computer worldwide. On every computer some kind of browser is already installed. The communication protocol already exists, is reasonably stable, and is widely accepted. The user interface language is specified and accepted as well. So only the back-end and the application logic need to be developed. Deployment is now almost free. And applications can be deployed not only to internal clients; using the Internet and perhaps secured connections, the dream of a home office can come true.

Ok, this was a nice summary of application development over the past 40 years, up to the beginning of the magical Web 2.0. Web 2.0 revived patterns and principles that seemed to have been banned from the world wide web during the Internet revolution at the end of the 20th century. Things like Cookies – do you remember all the advice not to allow them? Languages like Javascript! It always seemed to be an ugly mistake, but now? Look at it! Do you know a site that does not use Javascript?

But that is not the point I want to make here. Web 2.0 had a revolutionary impact on the software development industry: it made the thin rich client possible!

The what? Yeah, you read that right: the Thin Rich Client – something you have been missing all along! Two of the main advantages of thin clients are the low deployment and IT costs; one of the main drawbacks is the high server load. The money you save by using thin clients, you will have to invest in new iron to beat that enemy with! And what do you save in the end? Not enough, is it?

Now let's have a look at a typical Thin Rich Client application – Google Mail. Almost everyone knows it (well, from the geek point of view), and most people love the ease of use and smoothness of the application. It feels like a real desktop application but is only a web application. But did you ever ask how it works? It uses lots of Javascript to achieve that! Javascript – isn't that the language used to hack my PC? No! Javascript has become a very usable language, a language that makes fast processing of information available to the browser and thus to your PC. Using asynchronous HTTP requests (the mythical AJAX) we can even reload data without reloading the page!
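To make that a bit less mythical, here is a minimal sketch of such an asynchronous request. The URL and the element id are purely made up for illustration; Google Mail's real internals are of course far more involved.

```javascript
// Minimal AJAX sketch: fetch new data and update part of the page in place.
// The URL "/inbox/messages" and the element id "inbox" are hypothetical.
function loadMessages() {
  var request = new XMLHttpRequest();
  request.open("GET", "/inbox/messages", true); // true = asynchronous
  request.onreadystatechange = function () {
    // readyState 4 means the response has fully arrived.
    if (request.readyState === 4 && request.status === 200) {
      // Only this one element is updated - no full page reload needed.
      document.getElementById("inbox").innerHTML = request.responseText;
    }
  };
  request.send(null);
}
```

A dozen lines like these are the technical core of what makes such a page feel like a desktop application.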

As a result, we can start building rich client applications using Javascript and use automatic deployment mechanisms to deliver them right onto your computer over standardized protocols. No more version problems. It just works! We will reduce the server load, since the client is able to process information itself. Part of the application logic can again be extracted to the clients while still keeping all the advantages of a web client, as the small sketch below illustrates. We have to change the way we look at Javascript and use the advantages it provides.
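As a small, admittedly contrived illustration of what "processing on the client" can mean: once a list of messages has been loaded, even something as simple as filtering it can happen entirely in the browser instead of asking the server again. The data and the element id here are again just placeholders.

```javascript
// Hypothetical example: filter an already loaded message list in the browser.
// No further request is sent - the server is not involved at all.
var messages = [
  { from: "alice@example.com", subject: "Meeting notes" },
  { from: "bob@example.com",   subject: "Lunch tomorrow?" }
];

function showMessagesFrom(sender) {
  var html = "";
  for (var i = 0; i < messages.length; i++) {
    if (messages[i].from === sender) {
      html += "<li>" + messages[i].subject + "</li>";
    }
  }
  document.getElementById("inbox").innerHTML = html;
}
```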

Using Javascript to build Thin Rich Clients can really save money, from my point of view. For large companies or large-scale web applications, thin rich clients can be a serious alternative to buying new servers. Furthermore, thin rich clients allow new UI paradigms that make web applications feel like desktop applications and thus better meet user expectations.

Examples like GMail, Google Spreadsheet and Flickr are only the beginning of a new type of web application, and we will need a new term for it – Thin Rich Clients.
