Internet Access: A Necessary Evil, Or Just Evil?
"Internet connection" redirects here. If you're looking for the song of that name, you want Internet Connection instead.
The General Landscape
At its most basic, Internet access is the conduit, the invisible tether that connects your device, your network, to the vast, sprawling, chaotic entity that is the Internet. It’s the mechanism by which we, as individuals and organizations, can engage with the digital universe – sending an email, getting lost in the labyrinthine pathways of the World Wide Web, or whatever else passes for meaningful interaction these days. This access isn't some free-for-all; it's a hierarchical marketplace, orchestrated by a global network of Internet service providers (ISPs), each peddling their wares through a dizzying array of technologies. And yes, even municipal entities, in their infinite, misguided generosity, sometimes offer this connection for free. Because nothing says "progress" like forcing everyone to be perpetually online.
Connections come in all shapes and sizes, from the stubbornly fixed lines—think DSL and the blinding speed of fiber optic—to the more ephemeral dance of mobile via cellular networks, and even the distant, often temperamental embrace of satellite. It’s a veritable buffet of connectivity, and you’re expected to choose.
The public’s dive into this digital ocean began its real commercial surge in the early 1990s, a trickle that became a flood as useful applications, like the aforementioned World Wide Web, started appearing. Back in 1995, a pathetic 0.04 percent of the global population dared to connect, with the United States hogging more than half of that meager slice. Connectivity back then meant the agonizing screech of dial-up. Fast forward a decade, and suddenly, in developed nations at least, "faster" broadband became the norm. By 2014, a more respectable 41 percent of the world was plugged in, and the average connection speed, bless its heart, had finally nudged past one megabit per second. We’re practically living in the future now, aren't we?
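To make that speed jump concrete, a quick back-of-the-envelope sketch (Python, with an arbitrary illustrative file size) compares transfer times at a dial-up rate versus an early-broadband rate:

```python
def transfer_seconds(size_bytes: float, rate_bits_per_s: float) -> float:
    """Ideal transfer time: total bits divided by line rate.
    Ignores protocol overhead, latency, and congestion."""
    return size_bytes * 8 / rate_bits_per_s

# A hypothetical 5 MB file over a 56 kbit/s modem vs. a 1 Mbit/s broadband line:
print(transfer_seconds(5_000_000, 56_000))     # ~714 s, roughly twelve minutes
print(transfer_seconds(5_000_000, 1_000_000))  # 40 s
```

The ratio of the two results is just the ratio of the rates; the "future" was mostly a matter of moving the decimal point.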
A History of Unnecessary Complexity
The Internet itself, this grand tapestry of data, traces its lineage back to the ARPANET, a government-funded project designed to serve the needs of… well, the government, and its academic and research allies. It was an exclusive club, mostly. Eventually, it broadened its reach, encompassing most of the world’s major universities and the R&D departments of tech titans. But it wasn't until 1995, when the shackles of commercial traffic restrictions were finally pried off, that the wider audience was permitted to partake.
In the early-to-mid-1980s, access was a more tangible, less ubiquitous affair. Most users were tethered to local area networks (LANs) via personal computers and workstations, or endured the agonizing crawl of dial-up connections through ancient analog telephone lines. LANs hummed along at a respectable 10 megabits per second, while modems, those quaint relics, clawed their way from a glacial 1200 bits per second in the early 80s to a blistering 56 kilobits per second by the late 90s. Initially, these dial-up connections were a crude affair, funneling users through terminals or their software equivalents to distant terminal servers on LANs. This wasn't true end-to-end Internet protocol; it was more like a glorified remote login. The introduction of network access servers and protocols like the Serial Line Internet Protocol (SLIP) and, later, the point-to-point protocol (PPP), finally allowed the full suite of Internet protocols to reach dial-up users. Slower, yes, but at least it was the real Internet.
A significant, albeit often overlooked, catalyst for the rapid ascent in Internet access speeds has been the relentless march of MOSFET (Metal-Oxide-Semiconductor Field-Effect Transistor) technology. These little marvels, born from the discoveries at Bell Labs between 1955 and 1960, are the fundamental building blocks of the entire Internet telecommunications network. The laser, first demonstrated in 1960, found its way into MOS light-wave systems around 1980, igniting an exponential surge in Internet bandwidth. This continuous MOSFET scaling, a phenomenon akin to Moore's law but specific to telecommunications, has led to bandwidth doubling every 18 months—a trend known as Edholm's law. We’ve gone from bits per second to the dizzying heights of terabits per second, all thanks to these microscopic switches.
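Edholm's doubling is easy to sketch; the starting rate below is an arbitrary illustration, not a measured figure:

```python
def edholm_bandwidth(r0_bits_per_s: float, years: float,
                     doubling_years: float = 1.5) -> float:
    """Projected bandwidth after `years`, assuming a doubling every
    `doubling_years` (Edholm's law: roughly every 18 months)."""
    return r0_bits_per_s * 2 ** (years / doubling_years)

# Ten doublings in fifteen years: a 1 Mbit/s link projects to ~1 Gbit/s.
print(edholm_bandwidth(1e6, 15))  # 1.024e9 bits per second
```

Which is how you get from bits per second to terabits per second in a few decades without anyone quite noticing the individual steps.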
Broadband Internet access, or simply "broadband" if you're feeling lazy, is defined by its "always on" nature and its superior speed over the archaic dial-up. It encompasses a vast array of technologies, but at their core lie complementary MOS (CMOS) digital circuits, whose speed capabilities have been amplified by ingenious design techniques. These broadband connections typically hook into your computer via its built-in Ethernet port or a dedicated NIC expansion card.
The beauty of broadband lies in its constant availability; no more tedious dial-in rituals. It also means your phone line remains blissfully free for actual conversations. The benefits are manifold:
- Blazing-fast World Wide Web browsing.
- Rapid downloads of everything from mundane documents to those crucial, yet often massive, videos.
- The advent of telephony, radio, television, and videoconferencing over the same lines.
- The ability to securely connect to remote systems via virtual private networks and manage them remotely.
- The immersive, interaction-heavy world of online gaming, particularly those sprawling massively multiplayer online role-playing games.
In the 1990s, the U.S. National Information Infrastructure initiative thrust broadband into the public policy arena. By 2000, dial-up was still the reigning king for home users, while businesses and schools were already embracing the speed of broadband. In that year, the OECD countries had nearly 150 million dial-up subscriptions and a mere 20 million broadband ones. By 2004, the scales had tipped, with both dial-up and broadband hovering around 130 million subscriptions each. By 2010, the OECD landscape had shifted dramatically: over 90% of Internet access subscriptions were broadband, totaling over 300 million, while dial-up had dwindled to under 30 million.
The dominant broadband technologies have largely been digital subscriber line (DSL) and its variants like ADSL, alongside cable Internet access. Newer contenders, such as VDSL and the ever-closer reach of optical fiber into our homes, are pushing the boundaries. Fiber-optic communication, though a relatively recent player in the "last mile" deployment, has been instrumental in enabling broadband by making high-speed, long-distance data transmission far more cost-effective than the copper wire's limitations.
For those unfortunate souls in areas bypassed by ADSL or cable, community organizations and local governments have sometimes stepped in to deploy Wi-Fi networks. Wireless, satellite, and microwave technologies remain the last resorts for rural, undeveloped, or otherwise hard-to-reach locales where wired infrastructure is simply a pipe dream.
Emerging technologies for both stationary and mobile broadband include WiMAX, LTE, and fixed wireless.
And, of course, starting around 2006, the mobile world exploded with "3G" and "4G" technologies like HSPA, EV-DO, HSPA+, and LTE, transforming our phones into pocket-sized portals to the Internet.
Where to Connect, If You Must
Beyond the confines of home, school, or the soul-crushing office, public places like libraries and the increasingly rare Internet cafés offer refuge for the connectivity-starved. Some libraries even provide direct LAN connections for your beleaguered laptops.
Then there are the ubiquitous wireless access points scattered across airport lounges and other public thoroughfares, offering fleeting moments of connectivity, sometimes for a price. These are often labeled as "public Internet kiosk," "public access terminal," or the charmingly retro "Web payphone." Hotels, too, have joined the fray, usually with a fee attached.
Coffee shops, shopping malls, and other consumer-centric hubs are increasingly becoming hotspots, providing wireless access for those brave enough to bring their own devices—laptops or PDAs. This access might be a free-for-all, a perk for paying customers, or a straight-up charge. These hotspots can even coalesce, forming a blanket of connectivity across entire campuses or cities.
And let’s not forget the ever-present smartphones, those constant companions that allow us to tap into the Internet from virtually anywhere a mobile phone signal can reach, provided your network cooperates.
The Illusion of Speed
The metrics of speed—bit rates and bandwidth—are a constant source of both progress and frustration. Dial-up modems, those ancient artifacts, limped along from a meager 110 bit/s in the late 1950s to a peak of 33 to 64 kilobits per second (V.90 and V.92) by the late 90s. These connections demanded a dedicated phone line, a quaint notion now. Data compression, a desperate attempt to squeeze more out of the old pipes, could theoretically boost speeds to 220 or even 320 kilobits per second (V.42bis and V.44), but in reality rarely surpassed 150 kilobits per second.
Broadband, of course, shattered these limitations, offering significantly higher bit rates without sacrificing your phone line. The definition of "broadband" itself has been a moving target, with various bodies proposing minimum data rates and maximum latencies, ranging from a measly 64 kilobits per second to a more robust 4.0 megabits per second. The CCITT in 1988 defined it as exceeding the primary rate (around 1.5 to 2 megabits per second). By 2006, the Organisation for Economic Co-operation and Development (OECD) set the bar at 256 kilobits per second. In 2015, the U.S. Federal Communications Commission (FCC) declared "Basic Broadband" to be at least 25 megabits per second downstream and 3 megabits per second upstream. The trend, predictably, is to keep raising that bar.
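The moving target is simple to encode. A minimal sketch of the FCC's 2015 "Basic Broadband" check (the example connections are made up):

```python
def is_basic_broadband(down_mbps: float, up_mbps: float) -> bool:
    """Check a connection against the FCC's 2015 'Basic Broadband' benchmark:
    at least 25 Mbit/s downstream and 3 Mbit/s upstream."""
    return down_mbps >= 25 and up_mbps >= 3

print(is_basic_broadband(20, 5))    # False: downstream below the bar
print(is_basic_broadband(100, 10))  # True
```

Note that the same connection would have sailed past the OECD's 2006 bar of 256 kilobits per second; the definition tells you more about the era than about the wire.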
A common characteristic of many broadband services, and even those higher-speed dial-up modems, is their "asymmetry"—significantly faster download speeds than upload.
It’s crucial to remember that advertised data rates are often peak performance figures. Actual speeds can be a far cry from those marketing claims, affected by a cascade of factors: the quality of your physical link, the distance from the exchange, the vagaries of wireless transmission, and, inevitably, network congestion at any point along the path from your device to the server you're trying to reach. As of late June 2016, the global average connection speed was hovering around 6 megabits per second. A number that, frankly, feels both too high and too low simultaneously.
The Congestion Conundrum
The shared nature of network infrastructure means that users often find themselves in a state of "contended service." Most of the time, this works because not everyone is maxing out their connection simultaneously. But then comes peer-to-peer (P2P) file sharing and high-definition video streaming. These bandwidth hogs, demanding sustained high speeds, can push a service past its breaking point, leading to congestion and a frustrating crawl. The TCP protocol, in its infinite wisdom, throttles bandwidth during these periods, which is fair, I suppose, but hardly satisfying. When traffic becomes unbearable, ISPs might resort to "traffic shaping," deliberately slowing down certain users or services. While this can, in theory, ensure a better quality of service for critical applications, it also raises thorny questions about fairness and network neutrality, and can even be construed as censorship when certain traffic types are effectively silenced.
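The arithmetic of contention is brutally simple. A hedged sketch (the capacity and user counts are illustrative; real TCP flows only roughly approximate a fair split):

```python
def per_user_share(link_capacity_mbps: float, active_users: int) -> float:
    """Fair-share bandwidth on a contended link when `active_users`
    transmit simultaneously. A simplification: TCP congestion control
    approximates, but does not guarantee, an equal division."""
    return link_capacity_mbps / max(active_users, 1)

# A 1000 Mbit/s aggregation link shared by many subscribers:
print(per_user_share(1000, 5))   # 200.0 Mbit/s each — fine on a quiet evening
print(per_user_share(1000, 50))  # 20.0 Mbit/s each — everyone streaming at once
```

Oversubscription works precisely until it doesn't, which is the whole congestion conundrum in one division.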
The Dreaded Outage
Internet blackouts, whether localized or widespread, are a persistent threat. Disruptions to submarine communications cables, those vital arteries beneath the ocean, can plunge vast regions into digital darkness, as seen in the 2008 submarine cable disruption. Less-developed nations, with fewer high-capacity links, are particularly vulnerable. Even mundane events can have catastrophic consequences: in 2011, a woman in Georgia digging for scrap metal severed a land cable and knocked most of Armenia offline. Governments, too, wield the power of the blackout as a tool of Internet censorship, as demonstrated in Egypt in 2011, where access was cut to quell anti-government protests.
Even human error, combined with a software bug, can bring the Internet to its knees. In 1997, an incorrect routing table at a Virginia Internet service provider cascaded through backbone routers, causing a major Internet traffic disruption for hours. It's a stark reminder that this supposedly robust global network is often as fragile as spun glass.
The Technological Underpinnings
When you connect via a modem, your computer's pristine digital data is unceremoniously converted into an analog form for transit over networks like the telephone and cable systems. This modem then either connects directly to your Internet service provider (ISP) or shares its connection across a LAN, a localized bubble of connectivity within your home, school, or office.
While LANs can achieve impressive speeds internally, the gateway to the wider Internet is ultimately dictated by the upstream link to your ISP. LANs themselves can be wired, with Ethernet over twisted pair cabling being the current standard, or wireless, using Wi-Fi, which adheres to the IEEE 802.11 standards. Older technologies like ARCNET and Token Ring are now relegated to the digital archives. Ethernet relies on switches and routers for its connections, while Wi-Fi utilizes access points.
Modern "modems"—be they cable modems, DSL gateways, or Optical Network Terminals (ONTs)—often integrate LAN functionality, meaning most Internet access today flows through a Wi-Fi router linked to a modem. This creates a small, personal LAN, but the crucial question remains: how is this network itself connected to the global Internet, and at what speed? The technologies detailed below are the answer to that question, describing how your customer-premises equipment ultimately interfaces with your ISP.
Dial-Up Technologies: The Symphony of Connection
Dial-up Access: Ah, dial-up. The sound of a modem establishing a connection, a symphony of clicks and screeches that once signaled entry into the digital realm. This method uses a modem to translate your computer's digital signals into analog ones, which then travel over the public switched telephone network (PSTN) to an ISP's modem pool. It’s a single-channel affair, hogging your phone line and offering speeds that, by today's standards, are glacial. Yet, it persists, particularly in rural areas where it’s often the only option, requiring no infrastructure beyond the existing telephone network. Speeds rarely exceed 56 kilobits per second downstream, with uploads even more anemic.
Multilink Dial-up: A desperate attempt to double down on the agony, Multilink dial-up bonds multiple dial-up connections together, creating a single, slightly less pathetic data channel. This required multiple modems, phone lines, and ISP support, a luxury few could afford or tolerate. It was a brief flirtation before the advent of ISDN and DSL.
Hardwired Broadband: The Wired World
The term "broadband" covers a spectrum of technologies delivering faster Internet. These primarily rely on wires and cables:
Integrated Services Digital Network (ISDN): Once a popular choice, especially in Europe, ISDN was a telephone service capable of handling both voice and digital data. It was lauded for its potential in voice, video conferencing, and data applications. Its peak popularity was in the late 1990s, before DSL and cable modems muscled their way in. ISDN-BRI offered two 64 kilobit/s channels, combinable for a 128 kilobits/s service. For the more demanding, ISDN-PRI provided up to 1.5 megabits per second (US) or 1.9 megabits per second (Europe). ISDN has largely been superseded by DSL, and its implementation required specialized telephone switches.
Leased Lines: Primarily the domain of ISPs, businesses, and large enterprises, leased lines are dedicated connections, often forming the backbone of Internet access. They utilize the existing telephone network or other provider infrastructure, delivered via wire, optical fiber, or radio.
- T-carrier technology: Dating back to 1957, T-carrier provides data rates from 56/64 kilobits per second (DS0) up to 1.5 megabits per second (DS1 or T1), and even 45 megabits per second (DS3 or T3). A T1 line can carry 24 channels, usable for voice or data, while a DS3 packs 28 T1s. Fractional T1s offered intermediate speeds. These lines required specific termination equipment like Data service units. Japan uses the J1/J3 standard, while Europe employs the E-carrier system (E1 at 2.0 megabits per second, E3 at 34.4 megabits per second).
- SONET/SDH: Synchronous Optical Networking (SONET) and Synchronous Digital Hierarchy (SDH) are the standards for transmitting high-speed digital data over optical fiber using lasers or LEDs. The basic unit, OC-3c, handles 155.520 megabits per second, with multiples like OC-12c (622.080 megabits per second), OC-48c (2.488 gigabits per second), and OC-192c (9.953 gigabits per second) scaling up capacity. Optical transport network (OTN) is a newer standard capable of even higher speeds.
- Gigabit Ethernet: Standards like 1, 10, 40, and 100 Gigabit Ethernet push data over copper up to 100 meters and over fiber for much greater distances.
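The carrier arithmetic above is easy to verify. A small illustrative sketch checks the T-carrier channel counts and the SONET OC-n line rates, which are exact multiples of the 51.84 Mbit/s base rate:

```python
DS0 = 64_000              # one 64 kbit/s voice/data channel, in bit/s
T1_PAYLOAD = 24 * DS0     # 1,536,000 bit/s of payload; the 1.544 Mbit/s
                          # T1 line rate adds 8 kbit/s of framing overhead
DS3_CHANNELS = 28 * 24    # a DS3/T3 carries 28 T1s = 672 DS0 channels

OC1_MBPS = 51.84          # SONET base rate (STS-1)

def oc_rate_mbps(n: int) -> float:
    """Line rate of OC-n: n times the 51.84 Mbit/s base rate."""
    return n * OC1_MBPS

for n in (3, 12, 48, 192):
    print(f"OC-{n}: {oc_rate_mbps(n):.2f} Mbit/s")  # 155.52, 622.08, 2488.32, 9953.28
```

The hierarchy is multiplication all the way up, which is rather the point of a synchronous standard.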
Cable Internet Access: This technology leverages the existing hybrid fiber coaxial (HFC) networks built for cable television. A cable modem connects your home to the network, which then aggregates traffic at a central office. Downstream speeds, thanks to DOCSIS 3.1, can reach up to 1000 megabits per second in some regions, though upstream speeds are considerably more modest. While prevalent in residential areas, its adoption in commercial buildings can be limited. A notable security concern is the shared nature of the local line, potentially exposing data to neighboring subscribers, though encryption schemes are in place. DOCSIS 4.0 promises even greater speeds, but its real-world implementation is still on the horizon.
Digital Subscriber Line (DSL, ADSL, SDSL, VDSL): DSL taps into the existing telephone network, crucially allowing simultaneous voice and data use on a single line. It utilizes higher frequencies, leaving the audible range free for phone calls, with filters separating the signals.
- ADSL (Asymmetric Digital Subscriber Line): The most common form, ADSL offers faster download speeds than upload speeds, typically ranging from 256 kilobits per second to 20 megabits per second downstream.
- SDSL (Symmetric Digital Subscriber Line): As the name implies, SDSL offers equal upload and download speeds.
- VDSL (Very-High-Bit-Rate Digital Subscriber Line): Introduced around 2001, VDSL significantly boosts speeds, reaching up to 52 megabits per second downstream and 16 megabits per second upstream over copper. The even more advanced VDSL2 (2006) can exceed 100 megabits per second in both directions, though performance degrades rapidly with distance.
- DSL Rings (DSLR): This topology uses DSL technology in a ring configuration to achieve speeds up to 400 megabits per second.
Fiber to the Home (FTTH): Part of the broader Fiber-to-the-x (FTTx) family, FTTH brings optical fiber all the way to the subscriber's premises. This technology offers substantial advantages in terms of speed and distance, leveraging the high-capacity backbone networks already in place. Fiber is immune to electromagnetic interference, a significant benefit over copper. Several countries, including Australia, Italy, and Canada, have undertaken ambitious national fiber rollout projects.
Power-line Internet: This unconventional approach uses the existing electrical grid to transmit Internet data. While potentially cost-effective for rural areas due to the pre-existing infrastructure, it faces challenges with interference and signal degradation across transformers. Data rates are typically asymmetric, and standards like IEEE P1901 aim to mitigate interference issues. Its deployment has been more rapid in Europe than in the U.S. due to differences in power distribution architecture.
ATM and Frame Relay: These wide-area networking standards, while still in use, have largely been supplanted by Ethernet over fiber, MPLS, and other broadband services. They often served as foundational layers for technologies like DSL.
Wireless Broadband: The Untethered Frontier
Satellite Broadband: Offering connectivity across vast distances, Satellite Internet access serves fixed, portable, and mobile users. Data rates vary widely, but a significant drawback is latency—the delay caused by the vast distance signals must travel to and from geostationary satellites. This makes real-time applications like online gaming and voice over IP challenging, though techniques like TCP tuning can help. Low Earth orbit (LEO) satellite constellations, like Starlink, promise lower latency but require a much larger number of satellites.
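The latency gap is pure physics, so it can be estimated from orbital altitude alone. A lower-bound sketch (real paths are longer: slant angles, ground segments, and processing all add delay):

```python
C_KM_PER_S = 299_792.458  # speed of light in vacuum

def min_rtt_ms(orbit_altitude_km: float) -> float:
    """Lower bound on round-trip time via one satellite directly overhead:
    up and down on the request plus up and down on the reply = 4 legs."""
    return 4 * orbit_altitude_km / C_KM_PER_S * 1000

print(min_rtt_ms(35_786))  # geostationary: ~477 ms before any network delay
print(min_rtt_ms(550))     # a ~550 km LEO shell (Starlink-like): ~7 ms
```

No amount of TCP tuning repeals the speed of light, which is why the LEO constellations exist at all.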
Mobile Broadband: This is the ubiquitous wireless Internet access delivered via cellular networks to a multitude of devices. From the early days of 2G's modest speeds to the current era of 4G and the nascent promise of 5G, mobile broadband has transformed how we connect.
- 2G (circa 1991): Introduced basic data services like GSM CSD and the faster GPRS (often called 2.5G) and EDGE (2.75G).
- 3G (circa 2001): Marked a significant leap with technologies like UMTS and HSPA, offering megabit-per-second speeds.
- 4G (circa 2009): Ushered in the era of true mobile broadband with LTE and LTE-Advanced, delivering speeds suitable for high-definition video streaming and more demanding applications.
- 5G (circa 2018): The latest generation, designed for even greater speeds, lower latency, and the ability to connect a massive number of devices, enabling applications like enhanced mobile broadband and fixed wireless access.
Fixed Wireless: Unlike mobile broadband, fixed wireless uses directional antennas to establish a stable connection to a base station, suitable for stationary users. Technologies like WiMAX and newer iterations of 5G can be employed for this purpose.
WiMAX: A set of standards for wireless broadband, WiMAX aimed to provide "last mile" connectivity as an alternative to cable and DSL. It offered higher data rates and a greater range than traditional Wi-Fi, with mobility support added later.
Wireless ISP (WISP): Independent providers often use Wi-Fi technology, sometimes with directional antennas to extend range, to connect remote locations. While offering flexibility, these connections can be susceptible to interference, weather, and line-of-sight issues.
Local Multipoint Distribution Service (LMDS): Operating in the microwave frequency bands, LMDS was designed for broadband wireless access, though it has largely been superseded by LTE and WiMAX.
Hybrid Access Networks: The Best of Both Worlds?
In areas where traditional wired infrastructure struggles to deliver high bandwidth—particularly rural regions—Hybrid Access Networks combine fixed-line technologies like XDSL with wireless solutions such as LTE. This approach aims to provide a more robust and faster connection where one technology alone would fall short.
Non-Commercial Alternatives: When the System Fails
Grassroots Wireless Networking: Community-driven initiatives sometimes deploy networks of Wi-Fi access points to create city-wide or neighborhood-level connectivity. These "wireless community networks" are often built by enthusiasts using readily available equipment, sometimes even employing free-space optical communication for point-to-point links.
Packet Radio: A niche application within the amateur radio community, packet radio allows for the transmission of data between computers and networks, with the potential for limited Internet access under strict regulatory guidelines.
Sneakernet: A delightfully anachronistic term, "Sneakernet" refers to the physical transfer of data via portable storage media. In situations where broadband is unavailable or unaffordable, information is disseminated through this method, often involving downloading files at a place of work or a library and then transporting them. The Cuban El Paquete Semanal is a well-known, organized example. Decentralized peer-to-peer applications are also emerging to automate this process across various interfaces.
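Sneakernet's effective bandwidth is easy to compute, and the classic punchline—never underestimate a vehicle full of storage—falls out immediately. An illustrative sketch with made-up figures:

```python
def sneakernet_mbps(storage_gb: float, trip_hours: float) -> float:
    """Effective bandwidth of physically carrying storage:
    capacity (converted GB -> megabits) over travel time in seconds."""
    return storage_gb * 8_000 / (trip_hours * 3600)

# A hypothetical 1 TB drive carried on a two-hour trip:
print(sneakernet_mbps(1000, 2))  # ~1111 Mbit/s sustained
```

Appalling latency, of course—the first byte arrives two hours after you ask for it—but the throughput beats many broadband links.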
The Price of Admission: Pricing and Spending
The accessibility of Internet access is inextricably linked to affordability. In many developing nations, the amount individuals can spend on information and communications technology (ICT) is meager, often falling far below the perceived "essential" spending threshold. This starkly contrasts with the pricing models of many Internet services.
While dial-up users typically paid for phone calls, monthly subscriptions, and sometimes per-minute charges, fixed broadband often operates on an "unlimited" or flat-rate model, with pricing tiered by speed. Mobile broadband, however, frequently retains per-minute or traffic-based charges and data caps.
To expand reach into developing markets, services like Facebook, Wikipedia, and Google have partnered with mobile operators to offer "zero-rating," making their data usage free.
The insatiable demand for streaming content and P2P file sharing has strained the sustainability of flat-rate pricing for some ISPs. However, with fixed costs representing the bulk of broadband provision, the marginal cost of carrying additional traffic is low. This has led to debates about the necessity of traffic-based pricing, bandwidth caps, and time-of-day charges. ISPs in countries like Canada and the US have experimented with or implemented such measures, often facing public resistance.
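The tension between flat-rate speeds and capped volumes is easy to quantify. A hedged sketch (the cap and rate below are hypothetical, not any particular ISP's plan):

```python
def hours_to_exhaust_cap(cap_gb: float, rate_mbps: float) -> float:
    """Hours of sustained full-rate transfer before a monthly data cap is hit.
    Converts the cap from GB to megabits, then divides by rate and seconds."""
    return cap_gb * 8_000 / rate_mbps / 3600

# A hypothetical 300 GB monthly cap on a 100 Mbit/s line:
print(hours_to_exhaust_cap(300, 100))  # ~6.7 hours of full-rate use
```

The faster the line, the faster the cap evaporates—which is precisely why caps and speed tiers make such uneasy bedfellows.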
The Chasm: The Digital Divide
Despite the Internet's explosive growth, access remains woefully unequal. The digital divide isn't just about having a connection; it's about "effective access" to information and communications technology (ICT). Financial status, geographical location, and government policies all play crucial roles in determining who is connected and who is left behind. Low-income, rural, and minority populations are frequently on the wrong side of this divide.
Government policies are a double-edged sword, capable of either expanding or restricting access. Pakistan, with its aggressive IT policies, has seen a dramatic increase in user numbers, while North Korea maintains tight control, fearing the destabilizing influence of global connectivity. Trade embargoes, like the one against Cuba, also act as significant barriers.
The availability of computers remains a primary determinant of Internet access. In developing countries, household computer ownership and Internet access figures are considerably lower than in developed nations, leaving billions without a connection. While Cuba legalized private computer ownership in 2007, access remains restricted.
The Internet has fundamentally reshaped how we interact, conduct business, and engage politically. The United Nations recognizes its potential to unlock opportunities and has championed efforts to bridge the digital divide through initiatives like national broadband plans and the Global Gateway project.
The Ever-Expanding User Base
The number of Internet users has surged dramatically, from an estimated 10 million in 1993 to nearly 40 million in 1995, reaching 670 million by 2002 and a staggering 2.7 billion by 2013. While growth is slowing in saturated industrialized markets, it continues apace in Asia, Africa, Latin America, and the Middle East. Africa, in particular, faces a dual challenge: a massive unconnected population and high connectivity costs, though mobile customer growth is soaring, accompanied by the rise of mobile financial services.
In 2011, fixed broadband subscribers numbered around 0.6 billion, with mobile broadband subscribers nearly doubling that figure. Developed nations often utilize both fixed and mobile access, while in developing countries, mobile broadband is frequently the sole option.
The Bandwidth Chasm: A Widening Gap?
Traditionally, the digital divide was measured by subscription numbers and device ownership. More recent studies focus on bandwidth per individual, revealing a more complex picture. The divide isn't a simple linear progression; it fluctuates, re-opening with each new technological wave. The widespread adoption of narrow-band Internet and mobile phones in the late 90s, and later the initial rollout of broadband DSL and cable, paradoxically widened inequality as new technologies diffused unevenly. The latest innovations in fixed and mobile broadband are again contributing to a more unequal distribution of bandwidth. In 2014, bandwidth access was more unequally distributed than in the mid-1990s. For instance, Africa has a mere 0.4% of global fixed broadband subscriptions, with the majority relying on mobile broadband.
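Measuring "more unequally distributed" usually means something like a Gini coefficient over per-capita bandwidth. A minimal sketch (the regional shares below are invented for illustration, not real data):

```python
def gini(values: list[float]) -> float:
    """Gini coefficient: 0 = perfectly equal shares,
    approaching 1 = one holder has nearly everything."""
    xs = sorted(values)
    n = len(xs)
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return (2 * weighted) / (n * sum(xs)) - (n + 1) / n

print(gini([1, 1, 1, 1]))     # 0.0 — equal distribution
print(gini([0.4, 1, 5, 50]))  # ~0.68 — heavily skewed toward one region
```

Run over successive years of bandwidth data, a rising coefficient is exactly the "re-opening divide" the studies describe.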
The Rural Conundrum: Reaching the Unreachable
Providing Internet and broadband access to sparsely populated rural areas presents a significant challenge. Unlike dense urban centers where cost recovery is easier, rural customers may require expensive individual connections. While a majority of Americans had Internet in 2010, rural penetration lagged significantly.
Wireless Internet service providers (WISPs) have emerged as a popular solution for rural connectivity, though terrain and foliage can impede signal transmission. Projects like the "Broadband for Rural Nova Scotia initiative" have demonstrated successful public-private partnerships in guaranteeing access. New Zealand is also investing in rural broadband, exploring fiber, cellular, and wireless options. Hybrid Access Networks are also being employed to combine DSL and LTE for better rural coverage.
Access as a Right: The Modern Imperative
The very nature of Internet access, its profound impact on participation in society, has led to its recognition, in some quarters, as a civil or even a human right. Several countries have enacted laws or court rulings affirming this principle:
- Costa Rica: The Supreme Court declared access to ICT, particularly the Internet, a fundamental tool for exercising rights and participating in democracy.
- Estonia: Launched a massive program in 2000 to expand Internet access nationwide, recognizing its essential role in modern life.
- Finland: Mandated one-megabit-per-second broadband access for all by 2010, with a goal of 100 megabits per second by 2015.
- France: The Constitutional Council declared Internet access a basic human right in 2009, striking down aspects of a law that could have led to automatic network disconnection.
- Greece: The Constitution guarantees the right to participate in the Information Society and obliges the state to facilitate access to information.
- Spain: Telefónica, under its universal service contract, must offer reasonably priced broadband of at least one megabit per second nationwide.
The World Summit on the Information Society (WSIS), convened by the United Nations in 2003, adopted principles emphasizing the importance of the Information Society for human rights and the need to build an inclusive society where everyone can participate. The summit's declaration explicitly referenced the right to freedom of expression as outlined in Article 19 of the Universal Declaration of Human Rights, highlighting the Internet's role in imparting information and ideas.
A global poll for the BBC World Service in 2010 found that nearly four in five people, both Internet users and non-users, believed Internet access to be a fundamental right.
Furthermore, a 2011 report by the United Nations Special Rapporteur on freedom of expression underscored the Internet's indispensable role in realizing human rights and accelerating development. The report strongly condemned cutting off Internet access entirely, deeming it a disproportionate violation of human rights law and urging states to maintain access at all times, especially during periods of unrest. It called for universal access to be a priority, with states developing concrete policies to make the Internet widely available, accessible, and affordable.
The Net Neutrality Debate: A Battle for Openness
Network neutrality, or net neutrality, is the principle that all Internet traffic should be treated equally, without discrimination based on user, content, or service. Proponents argue that without net neutrality, broadband providers could block or throttle competing applications and content, stifling innovation and consumer choice. Opponents, conversely, contend that such regulations would deter investment in broadband infrastructure and are an unnecessary intervention in a functioning market. The debate over net neutrality, particularly in the United States, has been a protracted legal and political battle, with significant regulatory shifts and ongoing discussions about the classification of broadband services.
When Disaster Strikes: Natural Disasters and Access
Natural disasters wreak havoc on Internet connectivity, impacting not only telecommunication companies and businesses but also emergency services and displaced populations. The loss of connectivity to critical infrastructure like hospitals exacerbates the crisis. Studies of past disruptions, such as those following Hurricane Katrina, reveal the extent of damage to local networks, with a significant percentage of subnets becoming unreachable for extended periods, often impacting crucial emergency organizations. The severing of submarine communications cables, as seen in the 2008 submarine cable disruption and incidents affecting Taiwan, can cripple regional and international connectivity.
The rise of cloud computing has introduced new vulnerabilities. Major outages at providers like Amazon Web Services (AWS), sometimes compounded by human error during disaster recovery, have affected numerous prominent online services, highlighting the need for robust network resiliency planning.