Evolution will come through: Innovation and Renovation

Posted on May 3, 2012

In the case of Innovation, the recipe is simple: all you need is an idea.

In the case of Renovation, though, more steps must be followed.

One has to fully understand and embrace the existing situation in order to improve it, make it more efficient, perhaps combine it with other ideas (which one must also have embraced in equal depth), and, in the end, take it to the next level. Computer science more or less follows this scheme.

Since genuinely new ideas are rare, most of computing's evolution comes through renovations and improvements: through combinations of ideas and the re-examination of old approaches from new perspectives. Until that day comes, though, the old ideas reside in old computers, dusty files, and "tired" minds.

Today's world is dominated by Linux kernels. The main trend is the remote-desktop philosophy: the user can enjoy the advantages of a supercomputer on a "low-hardware" device. In simple terms, the user's desktop is physically located on an enormous server, away from home, yet the user can securely access all of its services from a desktop, laptop, netbook, or even a smartphone. As a result, the hardware demands on the end-user terminal are reduced, since all the processing is done in the main computing core: the server. This is exactly what today's society needed to go one step further. Cloud computing is here; it is secure, it is easy, it is cheap; it works! And it is going to keep doing so for many years.
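
To make the thin-client picture concrete, here is a minimal sketch in Python. The host, port, and the tiny one-shot JSON protocol are all invented for illustration; no real cloud service is being described. The client merely ships a request over the network and displays the reply, while all the heavy processing runs on the server:

    # Minimal thin-client sketch: the server does all the processing,
    # the terminal only sends a request and displays the reply.
    # Host, port, and the one-shot JSON protocol are hypothetical.
    import json
    import socket
    import threading

    HOST, PORT = "127.0.0.1", 5000  # stand-in for the remote "cloud" server

    srv = socket.create_server((HOST, PORT))  # bind before the client connects

    def serve_once() -> None:
        """The 'cloud': all heavy computation happens here."""
        conn, _ = srv.accept()
        with conn:
            task = json.loads(conn.recv(4096).decode())
            result = sum(i * i for i in range(task["n"]))  # the heavy work
            conn.sendall(json.dumps({"result": result}).encode())

    def thin_client(n: int) -> int:
        """The terminal: no local processing beyond showing the answer."""
        with socket.create_connection((HOST, PORT)) as conn:
            conn.sendall(json.dumps({"n": n}).encode())
            return json.loads(conn.recv(4096).decode())["result"]

    threading.Thread(target=serve_once, daemon=True).start()
    print(thin_client(1_000_000))  # the heavy loop runs only on the server

The point of the design is that swapping the client device (desktop, laptop, netbook, or smartphone) changes nothing: the computation stays where the power is.
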
Now that the cloud has started to take shape, restless minds should already be on their way to the next chapter of evolution. Since we cannot base evolution on unstable and unpredictable innovation, how about we visit the dusty files of an old friend?

One of the now-abandoned trends of the '80s was the microkernel approach. A revolutionary idea indeed, it offered the most stable core for operating systems. The basic idea is to forward all non-essential processing out of the computer's core and up to the user levels; this way the system core was "relieved" of its heavy workload and became more stable and reliable. The microkernel approach rests on a decentralized view of process management. Computer science has since moved light-years away from this way of thinking, building instead on the efficient and secure Linux kernels, with astonishing performance, in fact. The high-speed society is moving forward, constantly improving and advancing its technologies, so going back is out of the question.
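
To get a feel for that decentralized, message-passing style, here is a toy sketch in Python. Every name in it is invented, and a real microkernel (Mach, MINIX, QNX) does this with OS-level IPC rather than application threads; the point is only the shape of the idea. The "kernel" does nothing but route messages, while a user-level server owns the actual work:

    # Toy model of the microkernel idea: the kernel only passes messages;
    # services (here, a "file server") run outside it, at user level.
    # All names are invented; real microkernels use OS-level IPC.
    import queue
    import threading

    kernel_mailbox: "queue.Queue[tuple[str, queue.Queue]]" = queue.Queue()

    def file_server() -> None:
        """User-level service: owns the actual work, outside the kernel."""
        files = {"notes.txt": "dusty old ideas"}
        while True:
            name, reply_box = kernel_mailbox.get()  # message routed in by the kernel
            reply_box.put(files.get(name, "<not found>"))

    def kernel_send(name: str) -> str:
        """The 'kernel': merely forwards the message and returns the reply."""
        reply_box: queue.Queue = queue.Queue()
        kernel_mailbox.put((name, reply_box))  # route the request to the service
        return reply_box.get()                 # route the reply back to the caller

    threading.Thread(target=file_server, daemon=True).start()
    print(kernel_send("notes.txt"))  # -> dusty old ideas

Because the kernel holds no service logic of its own, a crashing service takes down only itself, which is exactly the stability gain described above.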

Now, let's take a breath and explore the diagram below, as it presents a few interesting facts:

DIAGRAM A: Server hardware, networking capabilities, and end-user hardware over time

This diagram exhibits the advancement of the three main hardware categories of IT: server hardware, networking capabilities, and end-user hardware. As we may observe, when the microkernel approach was conceived, networking know-how was in its infancy. Network structures were not sophisticated enough to support all the traffic that massive process-forwarding was causing. The high traffic that microkernels demanded from the networking structure was simply too much for it to handle; microkernels were abandoned, and the more minimalistic Linux took the center of attention. Linux systems have thrived since: their centralized approach fitted the existing hardware better, and the natural order of things left Linux dominating the world of servers.
Moreover, when network science got its boost in the mid-'90s, it drifted servers, and Linux with them, upwards, reaching today's highly sophisticated cloud computing. The cloud-computing era we are living through is characterized by sophisticated and advanced networking structures, enabling computers to achieve new high-speed thresholds in data transfer.

Another important thing we can notice in the diagram is the "abandoned" end-user technology. Of course, by "abandoned" we do not mean obsolete; with all the smartphones and netbooks around, it would be hard to call it that. It is, however, improving only little by little, since it is simply no longer vital for it to be so powerful. Within fifteen or twenty years, however, cloud computing will also reach its ceiling, just as every other technology has and will. And since innovation is difficult to predict, the only logical next step will be maximizing already-existing capabilities. End-user technology will have been left behind, since attention will have gone to improving the cloud. This will create a large opening for further research and development. Now here is the juicy part: what will be the point of improving client hardware while all processing and storage take place on the server? Is cloud computing entangling itself in a process which may eventually lead computer science to a dead end?

Exploiting the enormously powerful servers of that era will rest on the assumption that network bandwidth will be sufficient to support their process-forwarding needs. A microkernel-based operating system would be able to divide the processing workload at least in half, possibly offering a new, alternative point of view and taking computer science to the next level.

Just think about it!

Panos Rokkas, DEREE – The American College of Greece
Undergraduate, Information Technology
afceayouth.com Representative
rokkasp@gmail.com
