The rapid growth in consumer broadband seen worldwide today would not be possible without a major shift in the practices for provisioning access infrastructure. Copper loop and CATV cable were once the only means of transporting information from a provider central office or head end to the customer. Today, both these media are being "shortened" or even eliminated by the use of fiber optics.
Fiber is not a new development in access networks. Not only has it been used for almost two decades to provision high-speed commercial and enterprise customers, but service providers in the 1990s also found that replacing large bundles of copper with a few fiber strands could improve service reliability and lower craft costs. BellSouth took the lead in deploying access fiber in that period, and the move was justified entirely by cost savings.
The traditional access fiber architecture has been the fiber remote: a high-speed fiber trunk (SONET or Ethernet) that terminates in an electro-optical multiplexer. In analog phone days, these devices were called "digital loop carriers" (DLCs), and the term "next-generation DLC" was used for a time, but most such devices today deliver DSL services and so are usually called "remote DSLAMs." A remote DSLAM's primary benefit is to shorten the access copper, which allows higher DSL speeds and improves reliability. Most providers would counsel against offering premium DSL on loops over 8,000 feet, and the highest DSL speeds demand loops only a fraction of that length.
Pushing fiber close to the customer is generically called "deep fiber," and various acronyms indicate just how deep the fiber goes. FTTH means "fiber to the home," the extreme case in which every user gets an optical-electrical termination. FTTC takes "fiber to the curb," with one termination serving a small group of homes, while FTTN means "fiber to the node" (or "neighborhood") and allows each fiber remote to serve a larger population.
The problem with all deep fiber strategies, and the reason providers don't simply run fiber to every home, is cost. If loops are kept to a length of 5,000 feet, a single remote can serve customers in an area of almost 2,000 acres. Shorten the loop to 1,000 feet and it serves only a little over 70 acres. Since the user population is generally proportional to the area served, this reduction means the cost per user could rise 50 times or more. Shorter loops mean higher speeds, however, and for video over IP most operators would require at least 24 Mbps (ADSL2+) connections. In Asia and some other markets, VDSL is used at speeds of 50 Mbps or more. Both require much shorter loops: 8,000 feet is reported as the practical limit for ADSL2+, and 500-800 feet for 50+ Mbps VDSL.
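The geometry behind these serving-area numbers is easy to check. A minimal sketch, treating the serving area as a circle whose radius is the loop length (the loop lengths are from the text above; the square-feet-per-acre conversion is standard, and the function name is illustrative):

```python
import math

SQFT_PER_ACRE = 43_560  # standard conversion factor

def serving_area_acres(loop_feet):
    """Acres reachable from one remote if loops can run in any direction."""
    return math.pi * loop_feet ** 2 / SQFT_PER_ACRE

long_loop = serving_area_acres(5_000)   # almost 2,000 acres
short_loop = serving_area_acres(1_000)  # a little over 70 acres

print(f"5,000 ft loops: {long_loop:,.0f} acres per remote")
print(f"1,000 ft loops: {short_loop:,.0f} acres per remote")
print(f"Remotes needed rise roughly {long_loop / short_loop:.0f}x")
```

The raw geometry gives roughly a 25x increase in the number of remotes; the 50-times-or-more cost figure cited above also reflects the extra electronics and backhaul that each additional remote requires.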
Balancing cost and performance is the goal of the various passive optical networking (PON) systems. PON creates a "tree" of fiber connections using passive optical splitters, with no electrical termination or handling along the way. A PON typically supports 32 branches, each of which can in theory be a remote or a home. A single PON tree supporting 32 branches needs only 33 electrical devices, counting the head end. Serving 32 locations with point-to-point fiber would require 64 electrical devices, generating higher costs and greater reliability risk.
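The device counts in this paragraph can be sketched directly. The counting rule below follows the text (one powered transceiver per termination; splitters are passive), and the helper name is illustrative:

```python
def electrical_devices(branches, point_to_point=False):
    """Count the opto-electronic terminations needed to serve `branches` locations.

    PON tree: one head-end device plus one per branch (the splitters in
    between are passive and need no power or electronics).
    Point-to-point: a dedicated device at each end of every fiber run.
    """
    if point_to_point:
        return 2 * branches
    return branches + 1

print(electrical_devices(32))                       # PON tree: 33
print(electrical_devices(32, point_to_point=True))  # home-run fiber: 64
```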
PON systems share a common fiber architecture but differ in their opto-electronic approaches. The original broadband PON (BPON) is based on ATM, and its successor, Gigabit PON (GPON), retains ATM support within its own framing. The newer Ethernet PON (EPON) standard has been ratified, and most operators contemplating major new PON deployments are conducting assessments and procurements of EPON. Both GPON and EPON have sufficient capacity for video delivery and high-speed Internet. Some providers like the ATM framework of GPON for its ability to create multiple independent service channels to the user via virtual circuits; others prefer EPON because it matches Ethernet-based metro architectures more naturally.
Planning for access network fiber deployment demands a careful consideration of the following:
- The demographics of the area to be served, including household income, family size, and age distribution. This data is critical in establishing the service market opportunity. In general, favorable demographics justify deeper fiber deployment.
- The geography and topology of the service area, including the household density (average lot size), the rights of way available, and whether cabling is underground or above ground. This data is critical to set the cost points for each approach; poor characteristics here will create profit-margin challenges if not taken into account. Studies in Japan, where fiber deployment is high, indicate that even whether the ground is flat or hilly has an impact on deployment cost.
- The service mix to be provided, over at least a five-year period, considering trends in both demand and competition. The worst possible outcome in an access fiber deployment is a new set of requirements that the deployed fiber architecture cannot effectively support.
In the installation and maintenance phase, access networks present special problems because of the high cost of rolling a truck to fix a problem: a broadband consumer may take three years to pay back the cost of a single service call. It is therefore absolutely critical that each fiber strand be properly installed and, in particular, that the splicing used in PON installations be carefully done and verified. Fiber should also be tested end to end before it is committed to customers. Unlike copper, whose problems tend to develop over time, most fiber problems, operators report, are uncovered shortly after installation and result from improper practices.
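The three-year payback claim is easy to sanity-check. As an illustration only, assume a fully loaded truck-roll cost of about $200 and a per-subscriber monthly margin of around $6 (both figures are hypothetical assumptions, not from the article); the arithmetic lands close to three years:

```python
TRUCK_ROLL_COST = 200.0  # assumed fully loaded cost of one service call, USD
MONTHLY_MARGIN = 6.0     # assumed per-subscriber monthly margin, USD (hypothetical)

months_to_recover = TRUCK_ROLL_COST / MONTHLY_MARGIN
print(f"One truck roll erases about {months_to_recover / 12:.1f} years of margin")
```

Different cost and margin assumptions shift the result, but any realistic pairing makes a single avoidable service call expensive enough to justify rigorous installation testing.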
About the author: Tom Nolle is president of CIMI Corporation, a strategic consulting firm
specializing in telecommunications and data communications since 1982. He is a member of the IEEE,
ACM and the IPsphere Forum, and the publisher of Netwatcher, a journal in advanced
telecommunications strategy issues. Tom is actively involved in LAN, MAN and WAN issues for both
enterprises and service providers and also provides technical consultation to equipment vendors on
standards, markets and emerging technologies.
This was first published in July 2007