I, ROBO: The Incredible History of Branch Office Computing
From X.25 to hybrid cloud
If your company has offices around the country (or around the world), you know that the network has to enable your people to communicate, share, and store information quickly and easily, and to do it securely and cost-effectively. Fortunately, remote office/branch office (ROBO) infrastructure has come a long way since its beginnings in the 1960s and ’70s. In this article, I’d like to review the incredible history of branch office IT.
In the Beginning: The Era of the Mainframe
In the beginning, distributed enterprises took a centralized approach to computing, in which remote offices connected to a backbone network powered by high-capacity production and backup mainframes.
Mainframes with X.25 wide area networks were once the only option for supporting remote locations, but they required a great deal of time, money, and labor to install, manage, and support. For example, the hardware was (and still is) expensive and requires a special operating system as well as hard-to-come-by professional expertise.
Mainframes also take up a great deal of real estate and require far more energy than smaller computers to run and to cool. Moreover, companies that wished to connect to mainframes back in the day had to purchase, implement, and support user terminals, although the Internet has supplanted that necessity today. If you’ve ever worked on a green screen — and we don’t mean today’s video backdrop for special effects — you know what we’re talking about.
The Rise of Desktop Computing
In the 1980s, the market shifted from mainframes to minicomputers, which were in turn supplanted by desktop computers, most notably the IBM PC, introduced in 1981. Despite its original cost of more than $5,000 (in today’s money), the IBM PC became one of the most influential computing products in the history of branch office IT, with more than three million sold. With this shift toward commoditized small computers, companies were able to build their IT networks for much less money than they had earlier invested in mainframes.
Desktops also had the advantage of being more scalable and user-friendly, and they reduced the skill needed to operate and support branch offices. Moreover, this approach moved organizational processing power from the centralized mainframe to PCs on employees’ desks. This brought a huge boost in employee productivity, as everything was now a keyboard tap or mouse click away. But it also meant that IT needed to become much more decentralized in order to support distributed systems — updating hardware, patching software, and helping users.
On one hand, reliability as a whole improved, since the organization became much less dependent on failure-prone WAN links and centralized datacenter downtime. On the other hand, supporting such a decentralized network is a pain. Because a user’s work could be saved to their own machine, shadow IT, data silos, and user-introduced malware (from the promiscuous use of floppy disks and pirated software) began to spread, making many IT professionals pine for the robust world of the corporate mainframe.
The Early Days of the Internet
By the late 1980s computers were used for activities such as accounting, business productivity, and word processing. It was also the early days of Frame Relay and fractional T1 Wide Area Networks (WANs) that could connect remote offices. The first modern routers in the history of branch office IT were popularized by Cisco, which released its first product, the AGS router, in 1986.
WANs were increasingly used to connect LANs (Local Area Networks) from disparate offices, and LANs increasingly consolidated around Ethernet with the 10BASE-T twisted-pair standard of 1990, which was far more practical because it used thinner, more flexible cabling than the coaxial cable of the earlier 10BASE2 and 10BASE5 standards.
The installation and overhead of proprietary WAN networks proved to be expensive for most organizations. And they still weren’t connected enough to fully support global communication and file sharing. That all started to change in 1990, when Sir Tim Berners-Lee created three technologies that power today’s World Wide Web: HTML, URL, and HTTP. Since then, connection speeds have increased from dial-up (remember 300 and 1200 baud?) to broadband; Web 2.0 brought more interactivity; and search engines have made the web the place to find answers and conduct business.
The Early Cloud Era
Since the turn of the millennium, computers have become dramatically smaller and more energy-efficient. Mobile devices are now the computing vehicle of choice, especially for remote workers who need to stay connected. Today, we’re “always on” 24/7 regardless of where we are. Meanwhile, the infrastructure to support all of this has evolved as well, giving rise to cloud and edge computing.
With rapidly improving communications, we are in a sense returning to ROBOs working within a more centralized approach akin to the mainframe era. But the mainframe itself has been transformed into a much more flexible, distributed, world-encompassing cloud, and remote terminals are being replaced by smart edge devices.
Globally distributed organizations today have several options for secure communications, file sharing, and storage. They can rely on the Internet to create VPNs for secure site-to-site communications that are less expensive than private WANs. They can use remote desktops or deploy VDI environments, which allow applications to run remotely on a server while being displayed locally.
In the cloud era, distributed companies moving to a more centralized approach have begun to re-encounter some of the challenges of the mainframe era, including latency when accessing applications from remote locations and the inability to operate during transient losses of connectivity. Companies focusing on local computing face challenges with local storage capacity, data performance, and edge infrastructure management.
And so, industry consensus has started to form that the true solution for branch office IT is a hybrid of edge and cloud computing.
Looking Forward to Hybrid Edge-to-Cloud
As the 2010s come to an end, virtually all large distributed enterprises are in the process of modernizing their networks with hybrid cloud solutions that combine local computing for latency- and downtime-sensitive applications, backed by infrastructure services hosted in a public or private cloud.
ROBOs are becoming mostly stateless, meaning that a “golden copy” of every piece of data is securely stored in the corporate cloud while local caches, or replicas, are used for fast local access.
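To make that idea concrete, here is a minimal sketch of the golden-copy-plus-local-cache pattern. It’s purely illustrative — the CloudStore and EdgeCache names are hypothetical stand-ins, not any vendor’s API — but it shows the flow: reads are served from a local replica when possible and fetched from the cloud on a miss, while writes land locally and are pushed back so the cloud copy stays authoritative.

```python
# Illustrative sketch of the "golden copy plus local cache" pattern.
# CloudStore and EdgeCache are hypothetical names, not a real product API.

class CloudStore:
    """Stands in for the authoritative object store holding the golden copy."""
    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        self._objects[key] = data

    def get(self, key):
        return self._objects[key]


class EdgeCache:
    """A branch-office cache: reads hit the local replica when possible,
    writes update the local copy and are pushed back to the golden copy."""
    def __init__(self, cloud):
        self._cloud = cloud
        self._local = {}

    def read(self, key):
        if key not in self._local:            # cache miss: fetch from the cloud
            self._local[key] = self._cloud.get(key)
        return self._local[key]

    def write(self, key, data):
        self._local[key] = data               # fast local write
        self._cloud.put(key, data)            # golden copy stays authoritative


cloud = CloudStore()
cloud.put("q3-report.docx", b"draft v1")

branch = EdgeCache(cloud)
print(branch.read("q3-report.docx"))          # first read pulls from the cloud
branch.write("q3-report.docx", b"draft v2")   # update flows back to the golden copy
```

A real edge device also has to handle eviction, file locking, and conflict resolution, but this read-through, write-back flow is the essence of keeping the branch itself “stateless.”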
Toward that goal, organizations are deploying cloud storage gateways to solve a wide variety of ROBO IT challenges. According to a recent IDC survey, 91 percent of enterprises have deployed or are planning to deploy a cloud storage gateway at their remote sites to reduce costs, centrally manage users and data, and consolidate infrastructure at the edge.
Cloud storage gateways, also known as edge filers, are modern NAS appliances that streamline cloud storage access for remote sites. Caching filers seamlessly transfer cold file data to less expensive and highly scalable cloud object storage and connect users to a single namespace and a global file system that IT can centrally manage.
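As a rough illustration of the tiering idea, the sketch below (again hypothetical — the share path and the upload_to_object_storage helper are placeholders, not a real gateway API) walks a local share and moves files that haven’t been read in 90 days to object storage, leaving a small stub behind.

```python
# Hypothetical sketch of the cold-data tiering behind an edge filer:
# files untouched for N days are copied to object storage and replaced by a stub.
import os
import time

COLD_AFTER_DAYS = 90

def upload_to_object_storage(path):
    """Placeholder for a real object-storage upload (e.g., an S3-compatible API)."""
    print(f"uploading {path} to object storage")

def tier_cold_files(root):
    cutoff = time.time() - COLD_AFTER_DAYS * 86400
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.path.getatime(path) < cutoff:   # not read recently -> cold
                upload_to_object_storage(path)
                # leave a tiny stub behind so the namespace stays intact locally
                with open(path, "w") as stub:
                    stub.write("tiered to object storage\n")

if __name__ == "__main__":
    tier_cold_files("/srv/branch-share")   # hypothetical local share path
```

In a production gateway the stub would be resolved transparently on access, so users still see one namespace whether a file is cached locally or lives only in the cloud.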
Putting It All Together
Through these transitions across the history of branch office IT, we see the move to smaller, less expensive platforms and the shift from centralization (mainframes) to decentralization (desktops, mobile devices, and the Internet of Things), then back again toward centralization (cloud), and ultimately to hybrid edge-to-cloud infrastructure.