The History, Evolution, and Future of Modern IT


Modern IT has changed enormously over the last few decades and created a plethora of new opportunities for individuals and businesses. In the process, what were once considered traditional “IT” roles evolved significantly from those of prior eras. Mainframes, personal computers, datacenters, Cloud computing, DevOps, and modern automation tools have created an environment where developers are now able to rapidly deploy their applications without the overhead that used to exist with traditional IT.

Modern IT

According to Wikipedia, the term “information technology” first appeared in the Harvard Business Review in 1958. In that article, the authors discuss how the application of technology in organizations would change the nature of the roles within the organization. Various management roles would evolve, with some becoming more influential within a business, while others would become less so – or even disappear – as technology itself evolved to automate those roles.

Organizations would change their processes and methodologies as more and more decisions came to rely on the automation of common everyday tasks rather than on manual intervention by workers or managers. As new ways to process and analyze information were discovered, more opportunities would be created for the organization or business using that information.

Before the Personal Computer

There were a number of innovations in the field of IT prior to 1958. Two notable examples were the Harvard Mark I and the ENIAC, some of the first general-purpose computing devices in use at the time.

They were massive (roughly the size of a classroom), weighed many tons, were built from electromechanical relays and vacuum tubes, and were operated by highly educated and skilled scientists and engineers. Many of the people operating these behemoths were considered pioneers in their field. The Mark I and ENIAC were primarily used in military applications such as calculating artillery firing tables or performing calculations for nuclear weapons research.

The 1950s saw the advent of the mainframe computer. Mainframes were (and still are) used by big business and governmental agencies for heavy-duty computing tasks, such as financial transactions and global commerce, that require a high degree of reliability. Operators typically require special training in order to properly administer a mainframe system, though they do not necessarily have to be highly skilled engineers or scientists in order to do so.

The Personal Computer

While there were a number of other notable innovations during the 1960s and 1970s, one of the largest shifts in Information Technology occurred during the late 1970s and early 1980s with the personal computer.

Organizational roles began to change as the PC became more ubiquitous and landed on the desks of everyday business users. Many data processing tasks could now be handled by those workers, and there was a rise in corporate IT Administrators who helped support those workers and the devices they used.

When PCs became networked, the role of the IT Administrator evolved yet again to also support the network, as well as the servers on the network that would process and store the data generated by business users.

Computer Networks and the Internet

The Internet was a big game changer for everyone. Local computer networks were connected to larger networks, and information became available to anyone who had access to those networks.

At the same time, IT roles adapted to take advantage of new paradigms. Admins would now be responsible not only for their end users, their devices, and the networking equipment that supported them, but also for connectivity to other networks.

Where software was once installed on individual computers, it would now go online. Software development would shift towards web development. Developers would create the software, and Operations or IT would install and manage the servers that ran it.

The growth of datacenters in the mid to late 1990s during the dotcom boom created new opportunities. Some IT workers would now work solely in datacenters managing the servers and networks in those environments. Physical servers could be installed into colocation facilities, and then managed remotely using tools such as telnet or SSH (Secure Shell).

An evolution in roles was happening. Software developers would be responsible for creating web application code. Operations personnel would manage the backend where these applications ran in datacenters. Corporate IT administrators would manage the end users and the computing devices they used. Network administration would evolve as well, often becoming a separate discipline from server administration.

Cloud Computing

Amazon Web Services released their Elastic Compute Cloud (EC2) service in 2006. IT would evolve again to embrace this new paradigm. Things began to change even more rapidly. Now organizations didn’t need to endure long hardware procurement cycles in order to deploy server farms and applications.

It also became possible to maintain physical infrastructure but use the Cloud to dynamically expand capacity when needed, then destroy the extra capacity when it was no longer required. Whether the Cloud infrastructure was temporary or semi-permanent, this ability allowed companies to avoid large capital expenditures to handle increases in load and instead shift those costs to operational expenses.

With the rise of Cloud computing, IT would see another evolution in roles. Many of the administrators and operators that worked on systems in datacenters would start to shift their efforts to managing compute architectures in the Cloud. They would no longer be installing physical servers or networks into fixed locations. Infrastructure was instantiated virtually via remote APIs using graphical web interfaces or command line tools.

Compute instances could be created and destroyed at will. Secure networks could now be deployed in the cloud without the need for specialized equipment and miles of cable. Network functions such as load balancing and firewalls would become virtualized. Storage no longer required large arrays of disks and racks of servers.
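
To make that concrete, here is a minimal sketch of what that API-driven workflow can look like, assuming AWS and its boto3 Python SDK; the region, AMI ID, and instance type below are placeholders rather than a prescription:

```python
# Hypothetical example: launch a small EC2 instance via the cloud API,
# then terminate it once the extra capacity is no longer needed.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance = instances[0]
instance.wait_until_running()
print(f"Launched {instance.id}")

# Destroy the instance when demand drops.
instance.terminate()
instance.wait_until_terminated()
```

The same create-and-terminate pattern is available from the web console or the provider’s command line tools; the point is that capacity becomes a function call rather than a purchase order.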

IT workers no longer needed to be located in close physical proximity to the compute infrastructure they managed. Organizations would see a push towards migrating their physical infrastructure to a rapidly diversifying selection of Cloud providers, and IT staff would become focused on moving functions that used to live in datacenters into the Cloud. Many of these organizations, especially startups, would have little to no physical infrastructure at all.

DevOps and SRE

Starting a little over a decade ago, IT started to embrace the concepts of DevOps and SRE. As Cloud adoption and DevOps began to gain acceptance, the lines between developers and operations personnel began to blur.

The DevOps philosophy sought to break down the organizational silos that had arisen in IT organizations and were slowing application development and delivery. Acceptance of failure became normal, and processes were implemented to mitigate that failure. By introducing gradual changes instead of a waterfall approach to hardware and software deployment, risk could be reduced.

Tooling and automation became key to reducing toil and introducing repeatability and reliability. The concept of measuring everything provided greater visibility into whether new processes and automation were actually providing value to the business.

There was a small problem, though. As it turned out, DevOps meant a lot of things to a lot of people. There wasn’t a universally accepted and codified methodology for achieving “DevOps”. Implementing DevOps as a culture in an organization would become a challenge for many.

Site Reliability Engineering, or SRE, came to the rescue. Google has been a pioneer in evangelizing the practice of SRE. According to Google, SRE provides a methodology to implement DevOps within an organization by treating infrastructure and operations problems as software engineering problems.

Developers and Operations teams began to merge into cross-functional teams delivering both the application software and the infrastructure required to run it. Operations became a development problem, and Systems Administrators would start to become developers of the infrastructure they deployed. With tools like Chef, Terraform, and GitHub, infrastructure became a coding problem that could live alongside or within the application code repositories formerly managed only by developers.

The Future of IT

What does the future of IT look like? The future is already here.

IT is already seeing the next evolution into serverless computing and containers. Cloud capacity management and orchestration tools will become key to managing costs and availability of compute capacity across disparate geolocations and providers.

Container adoption has created a need for orchestration as well. Tools such as Kubernetes – or, on a larger scale, Apache Mesos – are gaining acceptance. IT skill sets will need to adjust as the need to manage these new tools comes to the fore.
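
As a small illustration of what working with such an orchestrator can look like, here is a sketch using the official Kubernetes Python client to list the Deployments a cluster is running; it assumes a cluster and a local kubeconfig file are already in place, and it is not a full operational workflow:

```python
# Minimal sketch: connect to a cluster and list its Deployments
# using the official Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()  # reads credentials from ~/.kube/config
apps = client.AppsV1Api()

for dep in apps.list_deployment_for_all_namespaces().items:
    ready = dep.status.ready_replicas or 0
    print(f"{dep.metadata.namespace}/{dep.metadata.name}: "
          f"{ready}/{dep.spec.replicas} replicas ready")
```

Managed Kubernetes services from Cloud providers expose the same API, so much of the day-to-day work shifts toward automating against it.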

Due to many of the factors discussed earlier, such as Cloud adoption, containers, DevOps, and SRE, the ability to rapidly deploy applications and infrastructure into the Cloud created a gap: application and infrastructure security emerged as a critical issue that needed to be addressed. DevSecOps has come about to help close that gap.

DevSecOps seeks to do this by treating security problems in much the same way that DevOps and SRE alleviate developmental and operational silos: it treats security as a software and automation problem. Business processes such as monitoring and incident response are also being automated as DevOps, DevSecOps, and Cloud usage evolve.

Improvements in automation and CI/CD pipelines will empower developers to deploy applications with less reliance on intervention from operations or security teams. Operations and SRE teams will focus more and more on automating and building the reliable platforms those developers will use. Security teams will work directly with developers and operators to reduce the attack surface of applications and improve incident response and resolution times.


Steve Tidwell has been working in the tech industry for over two decades, and has done everything from end-user support to scaling a global data ingestion and analysis platform to handle data analysis for some of the largest streaming events on the Web. He has worked for a number of companies, helping to improve their operations and automate their infrastructure. At the moment, Steve is plotting to take over the world with cloud-based technologies from his corner of the office.

