Move fast, move early.

And we are moving again. This site will not be continued, since we are moving to the official developer blog of my current employer, TravelIQ. But feel free to join us at our shiny new place at trafikant.de.

And of course all articles published here are available at our new home as well.

While working on stylesheets in an Ajax application, it is often necessary to reload the page and perform a lot of clicks to check whether your changes had the desired effect. Filling in the same forms all day long becomes especially annoying.

Mr. Clay provides a cross-browser solution that reloads your CSS via a bookmarklet every 2 seconds. This was a bit too hectic for me, especially since you could not stop it. That is why I removed the interval and made it a one-shot bookmarklet.

I had a tough fight with the WordPress editor, but I lost. Unfortunately I am unable to paste the correct code here. But get the bookmarklet from Mr. Clay, remove the setInterval at the beginning and the ,2000 at the end. Additionally, the last two )) should become )()). That'll be it.
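Since I cannot paste the real thing, here is a minimal sketch of the one-shot idea. The function name and the forceReload parameter are my own illustration, not Mr. Clay's actual code; the principle is simply to rewrite each stylesheet URL with a fresh cache-busting timestamp so the browser fetches it again.

```javascript
// A minimal sketch of the one-shot reload idea (my own illustration,
// not Mr. Clay's actual bookmarklet): rewrite a stylesheet URL with a
// fresh cache-busting parameter so the browser requests the file again.
function bustCache(href, stamp) {
  // drop any previous forceReload parameter, then append a new one
  var base = href.replace(/[?&]forceReload=\d+/, "");
  var sep = base.indexOf("?") === -1 ? "?" : "&";
  return base + sep + "forceReload=" + stamp;
}

// In the bookmarklet this would run once over every stylesheet link:
//   var links = document.getElementsByTagName("link");
//   for (var i = 0; i < links.length; i++) {
//     if (/stylesheet/i.test(links[i].rel)) {
//       links[i].href = bustCache(links[i].href, new Date().getTime());
//     }
//   }
```

Because there is no setInterval wrapper, the reload happens exactly once per click on the bookmarklet.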

Feel free to leave a comment if you have any problems with this tiny script. There are known issues listed on Steve's site.

In the beginning of computer science, people (at that time still scientists) tried to centralize knowledge; they tried to build systems capable of handling all the tasks that would arise in their daily work. Systems like IBM's famous System/360 were such machines. They grew into highly complex systems, capable of everything we know from the small PCs under our desks or on our laps today. But these systems had a big disadvantage: their size and handling. Of course they could solve a lot of problems, but nobody really had enough spare room for these big machines, and who could afford the expertise to administer such a system? Whoever has tried to run an OS/360, even on a virtual machine, knows what I mean. So computers, and of course their software, evolved. In the beginning already mentioned, we had large monolithic systems: many users could connect, but the application logic was still bound to one machine, or rather one system.

The next phase in software development was the client/server era. Developers and system architects now tried to separate the application logic. Typically, UI logic and controls were moved out to external programs. The main part of the application logic still remained on a single "server", but parts of the workload were put on the client. One of the main advantages was that the clients were now powerful enough to execute application logic themselves instead of merely forwarding requests to the server.

But again, this client/server principle had a big disadvantage that software developers and system architects tried to overcome: deployment! In the early days only a single machine needed to be maintained, but now many thousands of clients connect to the server, and all of them need to follow distinct rules, defined by the version of the software they are using. The cost of maintaining the installations and monitoring the software now rises extremely fast with the number of clients connecting to the system.

Besides the software evolution from monolithic to distributed architectures, the network topology changed as well, not only in big industries but for almost everyone. The cost of the network bandwidth needed to connect to a service was and still is constantly falling. As a result, transmitting more data became cheaper and opened the way for another architectural principle. Well, let's say another interpretation of an old one: the thin client idea is not really new, but it has changed dramatically. The original goal of saving IT costs remained, but it is pursued in a different way when developing web clients.

The first overwhelming advantage of web clients is that they can now be deployed without any effort to almost every computer worldwide. On every computer some kind of browser is already installed. The communication protocol exists, is reasonably stable, and is widely accepted. The user interface language is specified and accepted as well. So only the back end and the application logic need to be developed. Deployment is now almost free. And applications cannot only be deployed to internal clients; using the Internet, and perhaps secured connections, the dream of a home office can come true.

OK, this was a short summary of application development over the past 40 years, up to the beginning of the magical Web 2.0. Web 2.0 revived patterns and principles that seemed to have been banned from the World Wide Web during the Internet revolution at the end of the 20th century. Things like cookies: do you remember all the advice not to allow them? Languages like JavaScript! It always seemed to be an ugly mistake, but now? Look at it! Do you know a site that does not use JavaScript?

But that is not the point I want to make. Web 2.0 had a revolutionary impact on the software development industry: it made the thin rich client possible!

The what? Yes, you read that right: the Thin Rich Client, something you never knew you were missing! Two of the main advantages of thin clients are low deployment and IT costs; one of the main drawbacks is the high server load. The money you save by using thin clients has to be invested in new iron to handle that load! And what do you save in the end? Not enough, is it?

Now let's have a look at a typical Thin Rich Client application: Google Mail. Almost everyone knows it (well, from the geek point of view), and most people love the ease of use and smoothness of the application. It feels like a real desktop application, but it is only a web application. But did you ever ask how it works? It uses lots of JavaScript! JavaScript, isn't that the language used to hack my PC? No! JavaScript has become a very usable language, a language that brings fast processing of information to the browser and thus to your PC. Using asynchronous HTTP requests (the mythical Ajax) we can even reload data without reloading the page!
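The asynchronous request behind this can be sketched in a few lines. The URL and the callback below are illustrative examples of mine, not GMail's actual code:

```javascript
// A minimal sketch of an asynchronous HTTP request, the core of the
// Ajax idea (URL and callback are made-up examples of mine):
function fetchData(url, onSuccess) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url, true);            // true = asynchronous
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      onSuccess(xhr.responseText);       // update the page without reloading it
    }
  };
  xhr.send(null);
}
```

Internet Explorer 6 creates the request object via ActiveXObject instead of XMLHttpRequest; libraries like Prototype hide this browser difference behind Ajax.Request.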

As a result, we can start building rich client applications using JavaScript and use automatic deployment mechanisms to deploy them right onto your computer over standardized protocols: no more version problems, it just works! We reduce the server load, since the client is able to process information itself. Part of the application logic can again be moved to the clients while keeping all the advantages of a web client. We have to change the way we look at JavaScript and use the advantages it provides.

Using JavaScript to build Thin Rich Clients can really save money, from my point of view. For large companies or large-scale web applications, thin rich clients can be a serious alternative to new servers. Furthermore, thin rich clients allow new UI paradigms that make web applications feel like desktop applications and so better meet user expectations.

Examples like GMail, Google Spreadsheets and Flickr are only the beginning of a new type of web application, and we will need a new term for it: Thin Rich Clients.

I just read that the new Prototype release is out: version 1.5. Since it was very hard to follow the development of Prototype for quite a long time, the developer group gave Prototype a complete relaunch. They now provide complete API documentation, some tutorials and, what else would you expect in the world of Web 2.0, a blog of course.

To have a closer look at what has changed, go for the CHANGELOG.

Since Prototype is the JavaScript framework of my choice, the only thing I can say is: check it out!

Developing with JavaScript can get really ugly if you don't know how to do it. It is not a question of language abilities but of tool chains. Even the best developer will get tired of stupid alert() statements to debug an application.

My usual tool chain consisted of LiveHTTPHeaders, Venkman and Firebug. The best feature of Firebug is its JavaScript console: simply enter your code, execute it and see what happens.
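The console also replaces alert()-style debugging entirely. A tiny example (the data is made up):

```javascript
var user = { name: "Alice", id: 42 };   // made-up example data
// alert("user = " + user);             // blocks the page and prints "[object Object]"
console.log("user =", user);            // non-blocking, shows the whole object
```

In Firebug the logged object is clickable, so you can inspect its properties right in the console instead of stringifying them yourself.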

A brand new beta of Firebug 1.0 has just been released, and what more can I say than that this version is incredible. If you are a web developer, this tool provides more than you will ever need. The most impressive changes are:

  • Tab completion for the JavaScript console: enter a few characters and complete objects and their methods
  • Direct editing of HTML using Firebug's DOM inspector
  • Profiling: start the profiler, execute a piece of code and stop the profiler. What you get is a precise list of function calls and the time each of them took
  • Network monitor: always have an overview of what is loaded and how long each request took
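You can also drive such measurements from your own code through Firebug's console API. A small sketch; the function being measured is just an example of mine:

```javascript
// An example function to measure (made up for illustration)
function slowSum(n) {
  var total = 0;
  for (var i = 0; i < n; i++) {
    total += i;
  }
  return total;
}

console.time("slowSum");      // start a named timer in the console
slowSum(1000000);
console.timeEnd("slowSum");   // logs the elapsed milliseconds
// console.profile() / console.profileEnd() work the same way when you
// want the full list of function calls described above
```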

And there are so many other new features that you can only explore while using it. But be aware that you can really save time if you use Firebug. And I have not even mentioned all the other nice CSS and HTML features. So what's left to say?

Get Firebug, Now!

While developing your nice JavaScript application, you have to test whether it works with all the different browsers your future users might use. Today I stumbled across a problem with Opera.

Imagine the following code:

var form = $(document.createElement("form"));
var e = $(document.createElement("input"));  // "textfield" is not a valid element name
e.type = "text";
form.appendChild(e);
Event.observe(e, "keypress", function (event) { Event.stop(event); });

You would expect this code to stop the keypress event from being propagated. Usually this works fine in almost all browsers; well, there is the problem: almost all. In Opera it works as expected, but with one difference: if you press the Enter key, two events are fired, the keypress event for the text field and another event for the surrounding form. In this case the Event.stop() method will not help. The only way to prevent the form from being submitted is to stop the event directly on the form.
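A sketch of that workaround; the helper name is mine, and observing the form's "submit" event is my reading of "stop the event directly on the form" (Event.observe and Event.stop are Prototype's API):

```javascript
// Hypothetical helper illustrating the workaround: stop the event on
// the form itself, so Opera's extra Enter event cannot submit it.
function preventOperaSubmit(form) {
  Event.observe(form, "submit", function (event) {
    Event.stop(event);   // swallow the submit triggered by the Enter key
  });
}
```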

Now you may ask: where do we need such input fields? Take the InPlaceEditor and the Autocompleter from script.aculo.us as examples. For now, there is no correct support for Opera if you use these controls.