Mainframes are a part of computing history that we don’t often think about in our modern world of mobile, social and cloud computing. But mainframes are still very much around; we owe a lot to this foundational technology, and valuable lessons can still be learnt from the disciplines that mainframes instilled in those who use, or used, them.
The mainframe today is very much part of the overall computing landscape, as demonstrated by second-quarter results posted by IBM, where mainframe sales increased by 11 percent while overall revenues dropped by 3.3 percent. So mainframes are still going strong.
But what can we learn from mainframes that might be relevant today? Mainframes were extremely expensive, technically complex, and you had to work with very limited computing resources. By way of example, the first mainframe I worked on was an IBM 4331 with 8MB of memory, and this merrily ran the computing requirements for an entire company, costing several million rand at the time. But with such limited resources, a lot of attention was paid to optimisation and effective use of resources.
As we architect our computing solutions, we can meet demand by scaling up (larger servers) or scaling out (many servers). It has been popular to use the scale-out approach because hardware is inexpensive and it is easier to just allocate more. And that is true: provisioning another server is quick, and the running cost of a single server is fairly insignificant.
Over time, this thinking has resulted in a plethora of servers and systems because, after all, you just fire up another one. Then we came to realise that these server farms were cumbersome, and we borrowed some technology from the mainframes in the form of virtualisation (a technology mainframes have had since the 1970s). This has been a fairly large focus in recent years, with vendors expounding the value of virtualisation, and we all chased the cost savings and management benefits. But we did not fundamentally challenge our consumptive behaviour. All we did was apply a technical solution to the problem to make it a bit more efficient.
My early computing career was as a systems programmer, where one of the most important functions was to performance-tune and troubleshoot the system. Gluttonous consumption of resources by users or programmers was seriously frowned upon. A successful application was one which fulfilled functional requirements AND performed efficiently whilst consuming as few resources as possible. This is what I think we lack in today’s computing environment. We just buy more, and yes, the individual cost is not so high, but put all the systems together, with all the people required to run and manage them, and you end up with the large IT budgets of today.
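To put rough numbers on that, here is a back-of-the-envelope sketch of how modestly priced servers, plus the people needed to run them, compound into a sizeable annual bill. Every figure in it is an illustrative assumption, not a number from this article.

```python
# A back-of-the-envelope sketch of how "cheap" servers add up.
# All figures are illustrative assumptions, not data from the article.
servers = 200
run_cost_per_server = 3_000      # assumed annual power, hosting and licensing per server
servers_per_admin = 40           # assumed number of servers one administrator can manage
admin_salary = 80_000            # assumed annual cost of one administrator

infrastructure = servers * run_cost_per_server
people = (servers / servers_per_admin) * admin_salary

print(f"Infrastructure per year: {infrastructure:,.0f}")   # 600,000
print(f"People per year:         {people:,.0f}")            # 400,000
print(f"Total per year:          {infrastructure + people:,.0f}")
```

The point is not the specific numbers, but that the people cost scales with the server count just as surely as the hardware does.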
This consumptive approach is rampant in the industry today. Buy any server-based application and, more often than not, it requires its own server. Ask the vendor if you can install it on an existing server alongside a bunch of other things and you will generally get the reaction of “er…no, we don’t have any customers who do that, it is meant to run on its own.”
I have seen significant savings achieved many times simply by having one person look closely at usage patterns and drive optimisation efforts. But business demands faster delivery, which drives us towards completing the current project and moving on to the next. No time to optimise. Accept the bloat and move on.
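To make that kind of usage review concrete, here is a minimal sketch that reads a hypothetical CSV export of CPU utilisation samples and flags servers that rarely get busy, i.e. candidates for consolidation. The file name, column layout and thresholds are all assumptions for illustration.

```python
# Sketch: flag under-used servers from a (hypothetical) utilisation export.
import csv
from collections import defaultdict

samples = defaultdict(list)
with open("cpu_utilisation.csv") as f:        # assumed columns: server,timestamp,cpu_percent
    for row in csv.DictReader(f):
        samples[row["server"]].append(float(row["cpu_percent"]))

for server, usage in sorted(samples.items()):
    avg = sum(usage) / len(usage)
    peak = max(usage)
    if avg < 10 and peak < 40:                # thresholds chosen purely for illustration
        print(f"{server}: avg {avg:.1f}%, peak {peak:.1f}% -> consolidation candidate")
```

One afternoon with a report like this is often enough to start the conversation about which workloads really need their own box.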
Virtualisation allows us to make the best use of the hardware, but it does not really reduce the number of people required to manage all these servers. We also don’t really know how systems interact with each other, so they are treated like black boxes and we don’t challenge vendors to make their products coexist with each other.
Being an old mainframe minimalist, I think we should challenge ourselves to go back to the disciplines of optimisation and treat resources as scarce. But it does take time and effort. My experience has been that CPU savings can be achieved in relatively short time frames (days to weeks) with dedicated focus and developers who are prepared to revisit their code. Disk space is another story though. Think how hard it is to clean up the hard drive of your PC or laptop – it takes hours and hours to work out what to keep and what to safely delete. Scale that problem to a corporate computer system with hundreds of owners of data and it becomes very challenging. To get meaningful savings you are probably looking at many months of effort.
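Even at laptop scale, the clean-up problem looks something like the rough sketch below: walk a directory tree and list the largest files untouched for more than a year. The starting path and the one-year cutoff are illustrative assumptions; at corporate scale the hard part is not the scan, it is finding the hundreds of data owners who can say what is safe to delete.

```python
# Sketch: list the largest files not modified in roughly a year.
import os
import time

ROOT = "/data"                                 # hypothetical starting point
cutoff = time.time() - 365 * 24 * 3600         # roughly one year ago
stale = []

for dirpath, _, filenames in os.walk(ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            st = os.stat(path)
        except OSError:
            continue                           # skip files we cannot read
        if st.st_mtime < cutoff:
            stale.append((st.st_size, path))

for size, path in sorted(stale, reverse=True)[:20]:
    print(f"{size / 1_048_576:8.1f} MB  {path}")
```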
The trick is to start: don’t boil the ocean, but look a little closer and develop a culture of lean computing. If you want to extend the life of expensive computing resources, treat them as such, and you are guaranteed to find wastage you did not know about, and money you could be saving.