I confess I still use dialup. I hope it didn’t hurt too much when your jaw dropped, but please close your mouth now; it’s unbecoming.
The recent article Broadband Internet? No thanks (CNN) [h/t to Slashdot] inspired this post. I have nothing against high speed; it’s just that I’ve never needed it badly enough to get around to signing up for it. I’ve never gotten around to cable (for TV) either.
There are three common reasons given for needing high speed — media, gaming, and just getting things done.
I’m just not much into online media, nor do I play online games, so that takes care of the first two.
The third one I’ve never really understood — can you say multitasking? I generally don’t stare slack-jawed at my screen while I wait for a large file to download, my email to update, or some process to complete. There are a lot of things I can do meanwhile (like write this post, for instance).
I generally have several workspaces open: one for email and other communication; one for web browsing and feeds; usually one for coding; and a couple of others I pop into for a one-off task or two. All those little network thingies keep doing their thing while I do something else.
Since I do occasionally sleep or get out of the house, I schedule intermediate stuff, like yum updates, or full backups, to run during my downtime. Really big stuff, like large update sets or retrieving a new distro, I’ll save for the office, since those are work-related tasks anyway.
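For what it’s worth, on a Linux box that kind of off-hours scheduling is a couple of lines in cron. A rough sketch — the times, log paths, and backup script name here are hypothetical placeholders, not my actual setup, and the update and backup entries would normally go in root’s crontab:

```shell
# Hypothetical crontab entries (edit with `crontab -e`).
# Times and paths are placeholders, not recommendations.

# Pull package updates at 3:30 AM, while the line is otherwise idle.
30 3 * * * yum -y update >> /var/log/nightly-update.log 2>&1

# Full backup early Sunday morning; assumes a backup script at this path.
15 4 * * 0 /usr/local/bin/full-backup.sh >> /var/log/backup.log 2>&1
```

The point is just that a slow link overnight is effectively a fast link: the transfer finishes before you’re back at the keyboard either way.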
I really only have one complaint about dialup (other than speed): it seems less robust than in the past. I suspect that is a side effect of current application and network coding practices in general rather than a problem with dialup per se.
Does anyone ever check an application on a slow (or fully congested high-speed) network anymore? Great that you’ve got a 100Gbit link on your local network, but don’t forget about the overloaded remote servers and slow T-1 (and, gasp, dialup) circuits out there.
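If you’d like to see how your own app copes with a link like mine without actually dialing in, Linux traffic control can fake one. A minimal sketch using the `netem` qdisc — the interface name, rate, delay, and loss figures are assumptions to adjust for your machine, and the commands need root:

```shell
# Emulate a dialup-ish link on eth0: ~56kbit/s, 150ms latency, 1% packet loss.
# Requires root; "eth0" is a placeholder for your actual interface.
tc qdisc add dev eth0 root netem rate 56kbit delay 150ms loss 1%

# ...exercise your application here and watch how it behaves...

# Remove the emulation when done.
tc qdisc del dev eth0 root
```

Note that the `rate` option is a feature of newer netem versions; on older kernels you’d chain a `tbf` qdisc after netem to get the rate limit.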
Robustness, survivability, and graceful degradation were part of the design of the early internetworking protocols — based on the quality, or lack thereof, of the equipment, not the semi-mythical “survive a nuclear war” meme (see ARPANET).
That includes getting all the bits through teh tubes, even in the face of a slow or noisy connection. I’ve used 300 baud in the distant past, but if you want really slow, there’s RFC 1149 for using carrier pigeons (see IP over Avian Carriers, including a “real” implementation, lol). You might also be entertained by the security-related Carrier Pigeons Bringing Contraband into Prisons.
If it were simply timing out, I could adjust (or live with) that. What seems to be the issue is that when multiple applications are using a fully loaded network connection, some just give up, others hang, and some (rarely) trash the entire connection. In my current setup, Pidgin is particularly bad with this: if another app or two is using most of the bandwidth, Pidgin will suddenly drop one or more of its sessions and begin consuming 100% of the CPU. WTF is that? It’s not just Pidgin; I’ve occasionally seen other network apps exhibit unusual problems (with no sensible relationship to networking) that only show up when contending for bandwidth.
This lack of robustness is probably what will spur me to get a high speed service. Unfortunately, that’s just moving the breaking point, not addressing the underlying problem.
image: 1896 telephone.jpg, Wikimedia Commons