Everything in IT depends on the network--and not just in an abstract, "need it occasionally" sort of way. The packets must flow for virtually every operation, every job, every transaction. Whenever packets drop or links go down, we're disconnected and isolated. Information doesn't flow; apps don't work; users don't proceed. We need the network up and running, millisecond by millisecond, every millisecond of every day.

(Credit: OpenClipArt.org - Pierre-Yves Dubreucq)

Our utter, urgent dependency won't lessen in the coming years. It will intensify--redoubling and redoubling again. Cisco calls its vision of the future "together." HP calls its "converged infrastructure." IBM calls its "Smarter Planet." All have interconnectedness at their core. Or take it out of the vendor realm, toward the technologies and trends: Web 2.0. Cloud. Virtual desktop infrastructure. ITaaS. Smart mobile devices. Embedded computing. Wherever you look, to whatever vision of the future, the network is central. Not only will the IT estate increasingly coordinate via the network, so will more and more of the global economy, and indeed, the entire scope of human activity.

They say you don't really know how valuable a thing is until you miss it and have to do without it. I missed the network a few times this week and I can tell you, it sucks.

I use a voice-over-IP (VoIP) telephone system. I could use the AT&T or Verizon cellular networks, but Google Voice is easier, is better integrated with my applications, often has better call quality, and generally is more reliable. Except when the network goes. Then everything goes, all at once. Twice this week, that happened. Once on a multi-hour conference call, once when I told a colleague "sure, we can talk now; call me!"--12 seconds before network access dropped completely, and stayed down for 20 angst-ridden minutes.

I use Amazon Web Services (AWS) servers for development. An entire work session this week was scrapped because, while I could get to the console to start up my "cloud servers" just fine, my development workstation couldn't actually "see" or access the servers. Some problem inside AWS? Some fluke of the Domain Name System (DNS)? Something between me and Amazon? Who knows, really? Network configurations are famously hard to visualize and troubleshoot. Since Amazon's status board showed all services working, it seemed easier to come back and try again later. But when your use is production rather than development, "come back later" is a lot harder.
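When "who knows, really?" is the diagnosis, it helps to at least separate the suspects the paragraph above names: DNS, the path to Amazon, or the service itself. Here is a minimal sketch in Python that does that triage; the hostname and port are placeholders standing in for a real instance's public DNS name, not anything from my actual setup.

import socket

# Placeholder endpoint -- substitute your own instance's public DNS name and port.
HOST = "ec2-198-51-100-1.compute-1.amazonaws.com"
PORT = 22  # SSH; use 443 for HTTPS, etc.

def check_dns(host):
    """Can we resolve the name at all? Separates DNS flukes from routing problems."""
    try:
        addr = socket.gethostbyname(host)
        print(f"DNS OK: {host} -> {addr}")
        return addr
    except socket.gaierror as err:
        print(f"DNS failure: {err}")
        return None

def check_tcp(addr, port, timeout=5):
    """Can we open a TCP connection? Separates reachability from application trouble."""
    try:
        with socket.create_connection((addr, port), timeout=timeout):
            print(f"TCP OK: {addr}:{port} is reachable")
            return True
    except OSError as err:
        print(f"TCP failure: {err}")
        return False

if __name__ == "__main__":
    addr = check_dns(HOST)
    if addr:
        check_tcp(addr, PORT)

If DNS resolves but the TCP connection times out, the problem is likely routing or a firewall/security-group rule; if resolution itself fails, suspect DNS; if both succeed, look at the application layer. It won't fix anything, but it turns a shrug into a narrower question.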

Critics of cloud services often point to the possibility that the network will be down, or performing poorly, as proof that on-site, owned deployments are better. About a year ago, we converted the majority of our in-house IT to cloud services; having lived with cloud's trade-offs for a year, overall we're very happy to have made the switch. But when the net goes, it is frustrating. And it's still true that the greater control of in-house resources makes it easier to guarantee a certain level of availability. But in-house has its own trade-offs--higher costs, less flexibility, and even some reliability gotchas of its own. Neither approach is invariably superior--it's a case of "for what, and by what measures?"

Everyone increasingly depends on the end-to-end global network being up and performing well every millisecond. So we have to invest in the multiple routes, management tools, troubleshooting skills, and so on that will give us always-there, count-on-it Internet access to our resources--just as we can establish in more constrained enterprise data centers today. Until then, I'm delighted to depend on the network 99.9 percent of the time. But when that page won't load, that app falters, or the connection flutters, I want to light a candle and intone: network, don't fail me now!

