Are we finally heading back to code optimization?
Back in the old-school days when I was still actively involved in the demoscene, you had to cram all kinds of fancy stuff into 4K or 64K (on disk), and you had to make sure that your application ran fast and smooth enough on a 486DX50 with 8MB of RAM and an ET4000/W32p, nothing more, nothing less. We had to spend time thinking about the code we wrote and make sure we were using the absolute minimum of memory, disk space, and CPU cycles. With the introduction of faster PCs, more memory, and bigger hard disks, we stopped caring. Heck.. the bloated window manager most of you once ran prevented you from optimizing anything anyhow. Lots of people noted that even with machines 100 times faster, their word processor still ran at the same slow speed.
The Unix platforms also caught a bit of this disease: where we once had lightweight window managers such as fvwm or twm, we now have Gnome and KDE.
But it seems those days are over and people are starting to optimize code again..
It might be influenced by having to write code for mobile devices and other embedded platforms, but we are taking the lessons back to our desktop and server platforms.
I'm glad to see this trend.... I just hope it continues, and that hardware vendors, rather than making faster and faster hardware, spend time on cutting costs, so that within a couple of years we'll laugh at the $100 laptop project and call it an expensive device with limited functions. But where we are today.. I applaud the efforts.. they are steering us back in the right direction, away from bloat...