Traditional Hardware Management

Brief history

In its early days, computer hardware served simple functions such as the addition of numbers. Since then it has evolved drastically with every generation of computers. Storage capacity, processing speed, and the ability to communicate with other computers improved dramatically from the first generation to the second and beyond. Each new generation attempted to solve new use cases that made human life better, and great success was achieved in bringing ease to everyday life, be it transportation, finance, health care, or education.

While technology transformation is shaping and redefining the world, hardware infrastructure and operations cannot be left behind. They have to match the velocity of business demands, keep pace with emerging trends in software, and, most importantly, remain cost effective in a competitive world.

Tomorrow’s promises

The advent of machine learning and artificial intelligence has transformed experiences in all walks of life, and we are yet to see the pinnacle of success in these areas.

Today, businesses face tremendous challenges and immense pressure to keep up with industry demands, to be disruptive and experiential, and to offer affordable, cutting-edge solutions to consumers.

Traditional infrastructure provisioning methods make it almost impossible for businesses to achieve these goals. Let us look at the challenges.

Challenges with traditional approaches

Businesses procured hardware as they grew and created different environments. Bigger businesses would have data centers in multiple locations. An operations team would manage upgrades to the environments, provisioning of new servers, deployment of applications, and changes to those applications. Most of this happened manually.

As these activities were manual and spread across many applications and environments, a controlled and regulated process was needed to keep things running smoothly. This process required change management and an approval committee.

This structure attracted a lot of challenges:

  • Any simple change or new release had to be planned well ahead of time.
  • A change impacting multiple applications needed coordination and approval among different groups.
  • Since a single server could host multiple applications, any error in the process created larger unplanned and undesired downtime.
  • A request for an additional server or extra capacity took a long time to procure and provision for the team.
  • Over-provisioning of servers attracted a lot of operational cost.

Change cycles were long, so businesses had to wait for weeks or months until a deployable artifact crossed the hurdle race of all environments, and that too at a snail's pace, since testing and deployment were entirely manual.

Despite the slow and controlled process, deployments failed, environments went down, traffic received slower responses, and reverting a deployment was painful; as a result, customers experienced downtime and failures.

Reacting to dynamic server demand was neither possible nor economical. During peak seasons, businesses would need additional capacity for a short duration, but such elasticity of infrastructure was not easily achievable.

The operations team was isolated from the rest of the software-making process and had no awareness of upcoming demands; it only acted on requests from the development or testing teams. This structure made it difficult to understand and analyze issues when things went wrong. It took considerable time just to provide logs to the application team for analysis, and it was hard to tell whether an issue was a requirement miss, an application issue, a test-approach issue, or a server issue.

Operations teams were also organized per domain or business unit, so there could be different and inconsistent ways to provision servers and to deploy and monitor applications. There was no standardized, automated way to do it.
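For contrast, the sketch below shows what a minimal, standardized, script-driven deployment step could look like: every team supplies the same kind of configuration file and runs the same commands. This is purely illustrative; the configuration keys, host names, and file paths are hypothetical assumptions, not a prescribed tool or process.

```python
# Illustrative sketch of a standardized, config-driven deployment step.
# The config file name, keys, and target host below are hypothetical.
import subprocess
import yaml  # assumes PyYAML is available


def deploy(config_path: str) -> None:
    """Deploy an application the same way for every team, driven by one config file."""
    with open(config_path) as f:
        cfg = yaml.safe_load(f)

    app = cfg["app_name"]        # e.g. "orders-service"
    host = cfg["target_host"]    # e.g. "prod-web-01"
    artifact = cfg["artifact"]   # e.g. "orders-service-1.4.2.tar.gz"

    # Copy the build artifact and restart the service over SSH.
    # Because every team runs these same two steps, behavior stays consistent.
    subprocess.run(["scp", artifact, f"{host}:/opt/{app}/"], check=True)
    subprocess.run(["ssh", host, f"sudo systemctl restart {app}"], check=True)


if __name__ == "__main__":
    deploy("deploy.yaml")
```

Even a small wrapper like this removes the per-team variation described above, because the procedure lives in one reviewed script rather than in each operator's head.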

A change in technology would, at times, leave hardware redundant.

Solving key challenges

For a business to succeed, it is essential to find answers to some key questions.

  • How can operations help reduce the time to market?
  • How to expedite provisioning and give rapid access to resources?
  • How to govern and optimize operational costs?
  • How to scale the infrastructure as demand increases?
  • How to eliminate manual failures and increase the availability of the overall platform?
  • How to bring disruption to the operations world?
  • How to achieve better collaboration across development, testing, and operations teams?
  • How to standardize operations and align the teams to that standard?

Well, DevOps is the silver bullet here. With this premise, let us dive into the DevOps pool.