The explosion in broadband and high-bandwidth content demand is making upgrades to carriers' optical networks more essential than ever, despite the high cost of deploying glass to neighborhoods and individual residences. Long a staple of core and metro-area networks, fiber is now looking like a must-have in the access network as well, whether as fiber to the home (FTTH) or fiber to the node (FTTN). Beyond the fiber itself, however, the kind of passive optical networking (PON) deployed gives telecom service providers options in terms of capital expense, projected ROI and electrical/optical network maintenance costs. Each provider's business plan will dictate which path to take, but the expert advice in this Telecom Insight can help sort out what should be driving your planning decisions.
Network modernization in an optical era
PON evolution presents provider planning choices
Optical networks: Core network design best practices
Optical networks: Metro network design best practices
Fiber-optic networks: Access network design
|Network modernization in an optical era
by Tom Nolle
In networking, as in all businesses, everything comes down to the spread between cost and price. Since the early 1980s, optical advances have driven down the cost of capacity by driving up the number of bits that can be sent along a path. It's hard to believe it today, but only 15 years ago, the average corporate headquarters site had less access bandwidth available than a consumer with fiber-based Internet access could buy today for about $150 per month. Optics has changed everything.
Despite the revolution that network optics has created, network planners still sometimes fail to fully factor in the impacts of optical trends. While this rarely leads to outright project failures, it often leads to deploying less-than-optimal solutions for increasingly complex problems and opportunities.
It would be fair to say that the most significant technology planning challenge today is the effective use of optical networking. The three areas of optical impact most likely to be problematic are:
- The PON revolution in access
- The metro bit gradient
- The new role of the core network
Up to now, optical improvements have been felt deep in the network, where traffic could be concentrated to multi-gigabit speeds. Passive optical networking (PON) brings optical capacity directly to the access and outside plant, increasing the potential connection speed for consumers by 1,000% or more. PON has evolved from an early ATM-based version (called APON at the time, since renamed broadband PON, or BPON) to today's dominant gigabit PON (GPON), with 10G-PON on the horizon and an Ethernet-based alternative (EPON) in the market. All of these technologies create an optically split multi-site connection path that reduces per-user cost by largely eliminating electrical devices in the access connection.
PON planning challenges
The challenge that PON poses to planners is one of current cost versus opportunity cost. A PON fiber connection can support gigabits of transmission to each home or business site without being replaced, whereas an access connection based on electrical technology such as ADSL or VDSL requires regular maintenance and replacement, and must be upgraded in the outside plant (the remote) to take advantage of higher-performance standards.
There are also basic physical limits on these electrical/copper hybrid access architectures in terms of maximum capacity. But PON today costs between four and 10 times as much as DSL to "pass" a customer. Will the bandwidth headroom it offers in the future justify its cost in the present? The answer probably lies in the economic density of the area to be served: the more potential revenue is concentrated in a given area, the more likely PON is the best way to serve it.
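One rough way to frame the current-versus-opportunity-cost question is a simple payback model. The sketch below uses entirely hypothetical pass costs, take rate and monthly revenue; only the four-to-10-times cost ratio comes from the text.

```python
# Illustrative payback model for the PON-versus-DSL pass-cost decision.
# The dollar figures and take rate below are hypothetical assumptions;
# only the 4x-10x PON cost multiple comes from the article.

def payback_years(pass_cost, take_rate, monthly_arpu):
    """Years to recover the cost of passing one home from the revenue
    of the fraction of passed homes that actually subscribe."""
    annual_revenue_per_home_passed = take_rate * monthly_arpu * 12
    return pass_cost / annual_revenue_per_home_passed

# Assumed: DSL costs $200 per home passed; PON 4x to 10x that.
dsl = payback_years(pass_cost=200, take_rate=0.30, monthly_arpu=40)
pon_low = payback_years(pass_cost=4 * 200, take_rate=0.30, monthly_arpu=40)
pon_high = payback_years(pass_cost=10 * 200, take_rate=0.30, monthly_arpu=40)

print(f"DSL payback: {dsl:.1f} years")                       # ~1.4 years
print(f"PON payback: {pon_low:.1f} to {pon_high:.1f} years")
```

The point of the exercise is that raising either take rate or per-user revenue (the "economic density" of the area) shrinks the PON payback window proportionally, which is exactly why dense, high-income areas favor PON.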
Changes in metro area traffic
The second problem area for planners is the sharp change in traffic density in the metro network relative to the core. This "bit gradient" is exacerbated by the delivery of commercial content like IPTV, which is normally served from a metro service center in each major population zone and not centrally distributed through core network connections. Three hundred HDTV channels could require as much as 2.4 Gbps of programming delivered to every central office, and yet would not necessarily generate any "core" traffic at all. Today's IPTV schemes use IP multicasting and thus require IP devices in the metro network.
The problem is that personalized or video-on-demand (VoD) services could totally change the requirements for multicasting TV channels, substituting personal video streams for broadcasting. Modeling today's TV habits, a central office (CO) with a 20,000-household service area would likely require about 64 channels to be delivered, or roughly 0.5 Gbps if they were broadcast. If the same population of users instead consumed personalized HD streams, they would require 160 Gbps, a more than 300-fold increase.
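The arithmetic behind these figures is easy to check. It assumes roughly 8 Mbps per HD stream, the rate implied by the earlier figure of 2.4 Gbps for 300 HDTV channels.

```python
# Broadcast-versus-unicast load arithmetic for one central office,
# assuming 8 Mbps per HD stream (the rate implied by 300 channels
# requiring 2.4 Gbps in the article).

HD_STREAM_MBPS = 8
households = 20_000
broadcast_channels = 64

broadcast_gbps = broadcast_channels * HD_STREAM_MBPS / 1000  # 0.512 Gbps
unicast_gbps = households * HD_STREAM_MBPS / 1000            # 160 Gbps

print(f"Broadcast load per CO: {broadcast_gbps:.1f} Gbps")
print(f"Unicast load per CO:   {unicast_gbps:.0f} Gbps")
print(f"Increase: {unicast_gbps / broadcast_gbps:.0f}x")     # ~312x
```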
PON deployment could have a dramatic impact as well. PON in the access network permits broadcast channels to be sent using traditional RF-over-fiber techniques, so no actual data traffic is generated at all. Neglecting Internet traffic, such a video strategy would generate no metro load at all. But if VoD in a data-delivered form became the exclusive video service, it would again generate a 160 Gbps load per CO.
These variables show that the metro infrastructure will have to contend with enormous potential shifts in traffic, which argues strongly for an infrastructure that is rich in agile optics, including ROADMs, and also one that uses DWDM heavily to create a large capacity pool for allocation.
Metro traffic effects on the core
The traffic variables in the metro picture raise the third point, which is the role of the core network in the whole process. A transition of users to video-on-demand entertainment would also make it more likely that specialized content would be required for a given CO, content that would not likely be stored within the metro area. Again forgetting Internet traffic for the moment, the core traffic for the local broadcast entertainment approaches above would be zero. And yet in theory, a large percentage of the 160 Gbps of per-CO traffic from personalized entertainment could be fulfilled from outside the metro area, and thus it would generate core traffic.
The question of where that incremental traffic would then originate becomes the key question for core planning. If we assume that the content everyone wants delivered is the content that someone will pay for, then the critical question for core design may well be the question of the advertising paradigms that would be successful in sponsoring content delivery. If there are paradigms that are so flexible they can even accommodate user-generated content, then core traffic could end up distributed as broadly as it is today. However, while there is considerable viewer interest in free user-generated content, it is not clear whether there is any way that delivering this content could be linked to a business case. Thus it is not clear that building out infrastructure to support it would be a prudent decision.
Based on current user behavior, it is likely that a few central points would serve "long-tailed" commercial content to metro areas and that content delivery networks (CDNs) would shortstop traffic from the core. In these cases, it might well be most efficient to simply create an optical mesh of metro networks and call it a "core." In fact, if video traffic were to swamp other forms of traffic in the future, it is very possible that video delivery (via CDNs or other architectures) would create a series of parallel "core-lets," each focused on delivering a specific set of content to a specific set of metro areas.
Meshing the metro areas with optical links is an exercise in DWDM, and it seems likely that the high cost of providing broadband access infrastructure to users will continue to reduce the number of access providers, making optical meshing an increasingly attractive core technology. However, an electrical device (a switch or router) would be needed to couple core routes into the metro network and on to the user. As optical capabilities increase, this hand-off device is likely to become more massive.
Network planning and design is driven by changes in optical technology both as a driver of equipment change and as a facilitator of new applications and new traffic. That is a significant change from the early days of SONET, and it is also a trend that seems certain not only to continue but to accelerate.
|PON evolution presents provider planning choices
by Tom Nolle
Broadband networking of any sort, but especially broadband to the individual consumer, demands new access strategies. Traditional copper loops were designed for analog voice service. Voice provisioning has evolved over the years to rely increasingly on fiber-linked remote digital loop carriers (DLCs). And with the need to develop broadband connections to consumers, DLCs were upgraded to so-called "new-generation DLCs" (NGDLCs) that supported digital subscriber line (DSL) connections over the same copper pairs. These NGDLCs are fed by a fiber connection, creating what is popularly called a "fiber-to-the-node" (FTTN) architecture.
Fiber is an attractive element in modern outside plant design because it can support very high bandwidth (terabits per second) and is more resistant to corrosion and other in-ground problems (though it is just as susceptible to being cut). Logically, running fiber closer to the home reduces copper run length, but it also means deploying more NGDLCs, which are also called "fiber remotes."
This increase in the number of optical-to-electrical transformations increases the per-customer cost. Nowhere would that be more evident than in the limiting case of fiber deployment, when it is run all the way to the home. Passive optical networks (PONs) were developed to solve this problem of electrical equipment cost multiplication. An industry group, the Full Service Access Network (FSAN) group, has been the driver in PON standardization, though the IEEE and the ITU now play major roles.
A PON network is a "tree configuration" of fiber created with passive optical splitters rather than electrical devices. Each splitter divides the downstream optical signal among the branches and combines the upstream optical signals.
All PON architectures have a mechanism (time-, frequency- or wavelength-division) to keep the individual traffic for each branch/user separate. The maximum number of "splits" or branches in a PON tree is limited by the optical loss introduced both by each splitter stage and by the simple division of optical power across a larger number of branches. The currently accepted upper limit is 64 splits, but in most PON installations the number of splits is held to 32 or even 16.
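The split limit follows from simple power arithmetic: each doubling of branches halves the optical power reaching any one branch (about 3 dB), plus some excess loss per splitter stage. In the sketch below, the per-stage excess-loss figure and the 28 dB budget in the closing comment are illustrative assumptions.

```python
import math

# Why PON split counts top out around 64: splitter loss grows with the
# logarithm of the branch count. The excess-loss figure per 1:2 stage
# is an assumed, illustrative value.

EXCESS_DB_PER_STAGE = 0.5  # assumed excess loss per 1:2 splitter stage

def split_loss_db(branches):
    """Total splitter loss in dB for a power-of-two branch count:
    ideal power division (10*log10 N) plus per-stage excess loss."""
    stages = math.log2(branches)
    return 10 * math.log10(branches) + EXCESS_DB_PER_STAGE * stages

for n in (16, 32, 64):
    print(f"1:{n} split loss ~ {split_loss_db(n):.1f} dB")

# Against an assumed 28 dB end-to-end loss budget, a 64-way split
# alone consumes ~21 dB, leaving little margin for fiber, connectors
# and aging, which is why most deployments stop at 32 or 16.
```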
PON system capacity depends on electrical overlay
All PON systems have essentially the same theoretical capacity at the optical level. The limits on upstream and downstream bandwidth are set by the electrical overlay, the protocol used to allocate the capacity and manage the connection. The first PON systems that achieved significant commercial deployment had an electrical layer built on Asynchronous Transfer Mode (ATM, or "cell switching") and were called "APON." These are still being used today, although the term "broadband PON" or BPON is now applied. APON/BPON systems typically have downstream capacity of 155 Mbps or 622 Mbps, with the latter now the most common. Upstream transmission is in the form of cell bursts at 155 Mbps.
The successor to APON/BPON is GPON, which offers a variety of speed options ranging from 622 Mbps symmetrical (the same upstream/downstream capacity) to 2.5 Gbps downstream and 1.25 Gbps upstream. GPON replaces pure ATM transport with the GPON Encapsulation Method (GEM), which can carry both ATM cells and Ethernet frames. It is the type of PON most widely deployed in today's new fiber-to-the-home (FTTH) installations, and it is generally viewed as suitable for consumer broadband services for the next five to 10 years. From GPON, the future could take two branches: 1) 10G-PON would increase the speed of a single electrical broadband feed to 10 Gbps; and 2) WDM-PON would use wavelength-division multiplexing (WDM) to give each branch of the tree its own dedicated wavelength.
A rival to GPON is Ethernet PON (EPON), which uses Ethernet frames instead of ATM cells. EPON should be cheaper to deploy, according to its supporters, but it has not garnered the level of acceptance of GPON, so it is not clear how EPON will figure in the future of broadband access.
Broadcast TV signals over PON
One of the attributes of all PON architectures is the ability to carry broadcast television signals on a separate wavelength, creating what some call "CATV overlay" and others call "linear RF over fiber" delivery of multi-channel TV. This PON attribute can create a planner's dilemma when it is related to architectures for multicasting TV at the IP layer. Some operators believe that the question of CATV-overlay versus multicast is the most significant question in carrier television infrastructure planning today.
Video delivery today can be categorized as broadcast or video on-demand (VoD), where the former is sent to all qualified customers at the same time on a schedule, and the latter is selected on a per-customer basis when needed. Traditional television is broadcast-based and both cable and satellite TV viewing is dominated by broadcast channels. PON systems that reach the home can deliver broadcast channels outside the data pathway, in the same way that cable systems deliver channels today. In contrast, FTTN systems must either use some multicast IP mechanism to deliver broadcast channel programming (Microsoft TV is an example) or must rely on a parallel broadcast delivery system like satellite. Both Verizon and AT&T in the U.S. offer hybrid satellite/DSL services where VoD is delivered as DSL data and broadcast channels are received through a relationship between the carrier and a satellite TV company.
Deciding which approach is best for a given geography requires carefully balancing a number of factors. There is no question that FTTH deployment is less likely to be rendered obsolete than any FTTN approach, but it is also more costly up front. Today's estimates are that FTTH costs approximately four times as much as FTTN in "pass cost," but it may pay most of that back within 10 years through savings in outside plant maintenance.
PON technologies support distribution of broadband services over a considerable distance, far more than could be supported using a combination of FTTN and DSL, and the advantage grows as the speed of the connection increases. DSL at 20 Mbps or so can be delivered to 10-to-15 kilofeet depending on the condition of the loop, but at 50 Mbps, most operators would try to keep loop length to 2 kilofeet or less, and many to 1 kilofoot.
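Those loop-length rules of thumb can be captured in a small lookup; the table below simply encodes the figures quoted above, conservatively using the shorter end of each quoted range.

```python
# Rough DSL reach-versus-rate lookup encoding the article's rules of
# thumb. Real deployments require actual loop qualification; this is
# only a planning sketch.

REACH_LIMITS_KFT = [
    (50, 2),   # ~50 Mbps (VDSL): keep loops to ~2 kilofeet or less
    (20, 10),  # ~20 Mbps (ADSL-class): deliverable to ~10-15 kilofeet
]

def max_loop_kft(target_mbps):
    """Longest loop (kilofeet) consistent with the target rate; rates
    below the table fall back to the longest tabulated reach."""
    for rate, kft in REACH_LIMITS_KFT:
        if target_mbps >= rate:
            return kft
    return REACH_LIMITS_KFT[-1][1]

print(max_loop_kft(50))  # 2
print(max_loop_kft(20))  # 10
```

The steep drop in allowable loop length between 20 Mbps and 50 Mbps is what drives the remote-count explosion discussed next.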
As the loop length for DSL shortens, the number of FTTN remotes needed to support a given population of users increases, and the cost advantage of PON grows. Interestingly, where population densities are very high and many users can be reached with 50 Mbps VDSL technology at acceptable loop lengths, the cost difference between PON and FTTN may also shrink, because the PON tree branches are short. PON may thus be most economical for both very thin and very dense populations of users, leaving alternatives to PON attractive in a middle zone.
PON deployment depends on needed Internet access speeds
Because of the effect of access speed on PON's benefits, an economic assessment of the total opportunity is essential in planning PON deployment, and the primary question facing planners is whether the future is likely to demand Internet access speeds of 50 Mbps or more. If network operators expect to offer Internet services, or a combination of Internet and IP video services, whose combined bandwidth exceeds 50 Mbps, it is very unlikely that anything but fiber/PON will be suitable.
The requirement for high-speed broadband could be generated by aggressive multicast IPTV plans, expected competition from cable operators converting to DOCSIS 3.0, migration to HDTV, and increased consumer demand for Internet bandwidth, in any combination. In areas of high demand density (suburban/urban areas with above-average household income levels), it is unlikely that many operators will be able to avoid FTTH and PON in the next decade, and they would likely benefit from at least selective deployment of PON in the near term.
Fiber is an asset that doesn't become obsolete, and PON technology shows every sign of offering those who deploy it a steady increase in available per-customer capacity not only for residential services but for many business sites as well. The question operators should likely answer is not "Why PON?" but "Why not?"
|Optical networks: Core network design best practices
by Tom Nolle
Optical networking is the only relevant Layer 1 technology today in the network's core except in very unusual markets or geographic conditions where terrestrial microwave may still be deployed. While core optical network deployments in some areas may be literally indistinguishable from metro fiber, there are other networks where the key requirements are totally different. Thus, the first question in optical network design and deployment for core networks is the nature of the network itself, and how core and metro requirements might differ.
The "core" of a network is a place where aggregated traffic moves among on- and off-ramps. Because the traffic is highly aggregated and thus represents thousands or millions of user relationships, core network nodes are likely to have traffic destined for virtually all other core nodes, meaning that the nodes are highly interconnected. This may contrast sharply with metro networks, where "preferred topologies" often involve simply connecting serving or edge offices with points of presence (POPs) for connection to the core -- a star topology instead of a mesh.
The aggregation of traffic onto the core creates another difference in core optical networks: It is unlikely that a single network user will contribute a large portion of total traffic, and thus adding new users will generally not change the core significantly. In contrast, a large metro user may require reconfiguration of network bandwidth to accommodate the traffic. For this reason, and because of the "mesh" factor above, reconfigurability is likely to be less of an issue in core networks.
Core networks also typically do not need fast failover, the 50 ms optical alternate routing available with SONET. Ring configurations using fully redundant fiber paths are harder to create and more expensive to maintain in core networks, so resilience is typically left to the electrical layer.
The final difference is that of geographic scope. A large metro network might span 50 to 100 km; a large core optical network can circle the globe. This long reach necessarily means that core fiber may have to span great distances without intermediate repeaters, including submarine environments, deserts, etc. Thus, ultra-long-haul fiber technology is often critical in core networks. The greater geographic scope of the core network also means getting craft personnel to an area to fix a problem may require days or weeks, and so it is critical to have some form of backup plan and to reduce outages as much as possible through design.
One issue that core and metro networks share is the issue of synchronous or circuit-switched traffic. Where PSTN calls and T1/E1 lines are to be supported over the core, it will likely be essential to utilize SONET/SDH transport for at least some of the optical paths to provide for synchronous end-to-end delivery. SONET/SDH services over global distances also require very accurate clocking to ensure that bit errors are not created through "clock slips." These SONET/SDH trunks can either use the standard 1310 nm wavelength or one of the 1550 nm WDM wavelengths. Packet traffic does not require SONET/SDH, but many core network operators continue to use some SONET/SDH ADMs and switching in the core to preserve the option of circuit-switched services.
A "pure packet core" can be made up of single-wavelength or WDM fiber, and thus it may be possible to create a virtual optical topology that approaches a mesh to avoid electrical handling. However, routers often have "adjacency problems" when installed in a full physical mesh, creating very long convergence times in the case of a failure. This, combined with the fact that reconfigurability in the core is often not a major requirement, means that core networks are more likely to use very high-speed fiber paths (OC-768, or 40 Gbps, for example) if the economies of these single electrical interfaces are better than the sum of the cost of WDM and a larger number of slower interfaces (4x10Gbps Ethernet).
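The interface-economics comparison at the end of that paragraph reduces to a simple cost inequality. All prices in the sketch below are hypothetical placeholders, not vendor figures.

```python
# Sketch of the core interface trade-off: one fast electrical
# interface (OC-768-class, 40 Gbps) versus WDM plus several slower
# 10 Gbps interfaces. All prices are hypothetical placeholders.

def single_fast_cost(port_40g):
    """Cost of one 40 Gbps electrical interface."""
    return port_40g

def wdm_bundle_cost(port_10g, n_ports, wdm_mux_pair):
    """Cost of n slower ports plus a WDM mux/demux pair for the span."""
    return n_ports * port_10g + wdm_mux_pair

fast = single_fast_cost(port_40g=90_000)
bundle = wdm_bundle_cost(port_10g=15_000, n_ports=4, wdm_mux_pair=20_000)

# With these assumed prices the WDM bundle wins; flip the numbers and
# the single fast interface wins instead, which is the article's point:
# the choice is purely an economic crossover.
print("40G single:", fast, "| 4 x 10G + WDM:", bundle)
```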
The router adjacency issue is an example of an important point in core fiber design, which is that the needs of the electrical layer and even the service goals must be considered. Current trends in service provider Ethernet, spearheaded by work in the IEEE and the Metro Ethernet Forum, are making Ethernet a strong candidate for core network deployment, both to provide flexible virtual routes for higher-layer protocols like IP and to serve as the basis for actual customer services. This approach allows operators to create meshed optical networks for resiliency and add packet routing and even multicasting without creating additional router adjacencies.
A major consideration in optical core networking is the location of major points of service interconnection. The larger a provider core network is in terms of geographic scope, the more likely it will interconnect with networks of other providers, especially for local access in other geographies. These interconnection points are obviously both major traffic points requiring special capacity planning and points of major vulnerability. No interconnection point with another operator should be single-homed in fiber connection, nor of course should metro connections with the core provider's own metro infrastructure be single-homed.
The final point in core optical design is the management framework. Core networks carry aggregated traffic from millions of users, and failures will result in a flood of customer complaints. In addition, optical failures will trigger an avalanche of faults at the higher protocol layers, generating so many alerts that the network operations personnel may be overwhelmed. Many operators have insufficient integration between packet and optical layer management, and this increases vulnerability to alert storms and also makes customer support personnel less likely to have ready answers to complaints. The best optical core is no better than the operator's ability to manage it properly.
|Optical networks: Metro network design best practices
by Tom Nolle
The trends in telecommunications today show clearly that the largest incremental amount of fiber deployed in the next decade will not be in the network core but in the access network and metro network. Content, the fuel of consumer broadband traffic growth, is an application that delivers a relatively small number of movies or programs to a large population of users. In most cases, this means that content will be cached at a metro level and that the greatest traffic growth created by content will be in the metro area.
Metro fiber today is based largely on SONET, which is 1310nm single-wavelength deployment. SONET networks are usually constructed as a series of protected rings that allow fast failover to the alternate "rotation" in the event of a fiber cut. Rings are connected via optical add/drop multiplexers (ADMs).
The advent of wavelength division multiplexing (WDM) -- coarse or dense -- deployed in the 1550nm range has added versatility to metro optics by providing multiple lightpaths per fiber and greatly increasing the capacity of a given fiber strand. At the same time, the increased volume of packet traffic, which does not require SONET's synchronous delivery behavior, has changed the traffic profile for the metro network of the future. Today, Ethernet is more likely to be the planned electrical layer of metro networks, and WDM the optical. This shift is changing the balance of tasks between electrical and optical components and the best practices for deployment.
SONET rings can be replicated in metro Ethernet and dense wavelength division multiplexing (DWDM) networks by simply using the same fiber and relying on wavelength separation or by running multiple SONET paths over WDM. Since there are probably no major metro networks worldwide without any traditional synchronous TDM traffic, planners should expect to use a hybrid of SONET and Ethernet technology. Where there is a large installed base of SONET equipment, no plans to eliminate PSTN switches, and major customers with direct SONET access, it may be advisable to plan a transition in the metro optical network from SONET-over-1310 to SONET/WDM and then to begin to integrate Ethernet-over-SONET, finally moving portions of the network to Ethernet-over-WDM.
The changing economics of WDM appear to be defusing the "SONET replacement" issue. Most operators now expect to maintain SONET for PSTN transport for as long as those services are offered, moving to non-SONET architectures only as packet voice displaces TDM voice. However, the gradual evolution is most likely to be compromised by exploding consumer broadband use, particularly by IPTV plans. Operators report voice traffic is stable while data traffic is growing at often triple-digit rates. The faster packet traffic grows relative to "circuit" or TDM traffic, the more likely it is that hybrids of SONET and Ethernet (Ethernet over SONET) will have too small a window of value to justify investment. This is probably the reason why more and more optical vendors are offering hybrid products with reconfigurable add-drop multiplexing (ROADM) and Ethernet.
A WDM issue that receives considerable media play is the way in which transit optical connections are handled. Most products have converted between optical and electrical (O-E-O) to perform a wavelength transit connection because pure optical cross-connect (O-O-O) has been expensive. While O-O-O products have been available for five years or so, most vendors still use O-E-O technology.
The primary issue today is still cost, and service providers expect that future ROADM products will provide all-optical transit connections. The key requirement for designers, regardless of the mechanism used, is that a wavelength can be converted to a different wavelength across the switch. A system that fails to provide this is too complex to manage, because wavelength assignments on the various fibers become interdependent and some reconfiguration modes may be blocked by collisions.
Metro optical deployment is affected by the service mix to be supported, but the service topology has an equal or greater impact. A primary question to be addressed is the amount of intra-metro traffic to be carried relative to the volume of traffic that will simply be connected to a metroPOP for transport outside the metro area.
In areas where consumer broadband traffic makes up the bulk of total traffic, most fiber deployment will focus on linking serving offices to a POP for core network interconnect. While these connections have to provide resiliency, they will rarely require the SONET failover standard of 50 ms, since they will carry Ethernet traffic. The introduction of IPTV may change this picture, because loss of connectivity causes pixelization that may produce viewer dissatisfaction, particularly on pay-per-view systems. If user buffering is available, the failover time should be no more than about two-thirds of the buffer interval. Often, consumer broadband failover is best accomplished at the Ethernet level.
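That buffering rule of thumb is easy to quantify: the failover budget is two-thirds of whatever the set-top buffers.

```python
# Failover budget implied by the two-thirds-of-buffer rule of thumb:
# if the receiver buffers `buffer_s` seconds of video, restoration
# must complete within about two-thirds of that interval.

def max_failover_ms(buffer_s):
    """Maximum tolerable failover time in milliseconds for a given
    receive buffer, per the two-thirds rule."""
    return (2 / 3) * buffer_s * 1000

for buf in (0.075, 1.0, 3.0):
    print(f"{buf} s buffer -> failover within {max_failover_ms(buf):.0f} ms")

# Only a very small buffer (~75 ms) actually demands SONET-class 50 ms
# restoration; a multi-second buffer relaxes the budget to seconds,
# which Ethernet-layer recovery can meet.
```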
Where there is significant synchronous (TDM) traffic and significant corporate packet traffic, it may be necessary to provide optical failover at SONET 50ms levels, in which case SONET or resilient packet ring (RPR) may be required. As noted, WDM may allow metro optical designers to separate traffic according to optical failover requirements and provide improved failover only where needed.
Reconfigurability, meaning the ability to create variable metro optical topologies by interconnecting wavelengths in various ways, is most likely to be needed either to accommodate a large amount of business traffic (metro Ethernet services) or to support alternate routing between serving offices and metroPOPs where the core network connection is made. Where IPTV is delivered, this multi-homing may also be needed for content service points.
At the optical layer, reconfigurability and fast failover are very different things. ROADMs offer a great amount of topology flexibility to adapt to changes in traffic demands, to the point where wavelength services can be offered to metro customers and where even Gigabit Ethernet customers can be quickly accommodated. Adding rapid and multiple spanning trees to Ethernet can provide resiliency at the electrical layer for everything but the most stringent failover requirements.
Many believe that metro optics will, over time, migrate away from the 50ms failover standard of SONET as circuit-switched and TDM traffic become a smaller portion of network load. If this is true, then a pure ROADM-and-Ethernet solution, particularly one based on one-box optical/electrical approaches, may be the best long-term solution.
|Fiber-optic networks: Access network design
by Tom Nolle
The rapid growth in consumer broadband seen worldwide today would not be possible without a major shift in the practices for provisioning access infrastructure. Copper loop and CATV cable were once the only means of transporting information from a provider central office or head end to the customer. Today, both these media are being "shortened" or even eliminated by the use of fiber optics.
Fiber is not a new development in access networks. It has been used for almost two decades to provision high-speed commercial and enterprise customers, and service providers in the 1990s found that replacing large bundles of copper with a few fiber strands could improve service reliability and lower craft costs. BellSouth took the lead in deploying access fiber in that period, and the move was justified entirely on cost savings.
The traditional access fiber architecture has been the fiber remote, which is a high-speed fiber trunk (SONET or Ethernet) that terminates in an electro-optical multiplexer. In analog phone days, these were called "digital loop carriers" (DLCs), and the term "new generation DLC" was used for a time, but most such devices today deliver DSL services and so are usually called "remote DSLAMs." A remote DSLAM's primary benefit is to shorten the access copper to allow higher DSL speeds and improve reliability. Most providers would counsel against offering premium DSL on loops over 8,000 feet, and the highest DSL speeds may be achievable only on loops 1,000 feet or less in length.
Pushing fiber close to the customer is generically called "deep fiber," and various acronyms indicate just how deep the fiber goes. FTTH means "fiber to the home," the extreme case of giving every user an optical-electrical termination. FTTC takes "fiber to the curb," serving a group of homes, while FTTN means "fiber to the node" (or "neighborhood") and allows each fiber remote to serve a larger population.
The problem with all deep fiber strategies, and the reason why providers don't simply run fiber to every home, is cost. If loops are kept to a length of 5,000 feet, a single remote can serve customers in an area of almost 2,000 acres. Shorten the loop to 1,000 feet and it serves only a little over 70 acres. Since the user population is generally proportional to the area served, this reduction means the per-user cost of the remote could rise by a factor of 25 or more. Shorter loops mean higher speeds, however, and for video over IP, most operators would require at least 24 Mbps (ADSL2+) connections. In Asia and some other areas, VDSL is used at speeds of 50 Mbps or more. Both require much shorter loops (8,000 feet is reportedly the practical limit for ADSL2+, and 500-800 feet for 50+ Mbps VDSL).
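The acreage figures follow from circle geometry: a remote serves roughly a circular area whose radius is the maximum loop length, so serving area shrinks with the square of that length.

```python
import math

# Serving-area geometry behind the deep-fiber cost argument: a remote
# covers roughly a circle whose radius is the maximum loop length.

SQFT_PER_ACRE = 43_560

def serving_area_acres(loop_ft):
    """Area (acres) of a circle with radius equal to the loop length."""
    return math.pi * loop_ft ** 2 / SQFT_PER_ACRE

a5000 = serving_area_acres(5_000)  # ~1,800 acres
a1000 = serving_area_acres(1_000)  # ~72 acres

print(f"5,000 ft loop: {a5000:,.0f} acres")
print(f"1,000 ft loop: {a1000:,.0f} acres")
print(f"Remotes needed rise ~{a5000 / a1000:.0f}x for the same area")
```

Because area scales with the square of loop length, a 5x cut in loop length means 25x the remotes, and roughly 25x the per-user remote cost.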
Balancing cost and performance is the goal of the various passive optical networking (PON) systems. PON creates a "tree" of fiber connections using passive optical splitters, with no electrical termination or handling at the branch points. PON typically supports 32 branches, each of which can in theory be a remote or a home. A single PON tree with 32 branches has 33 electrical devices, counting the head end. Serving the same 32 locations with point-to-point fiber would require 64 electrical devices, generating higher costs and greater reliability risk.
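The device-count comparison above can be written out explicitly. This is an illustrative sketch with hypothetical function names, using the standard PON terms OLT (optical line terminal, at the head end) and ONT (optical network terminal, at each branch):

```python
def pon_electrical_devices(branches: int) -> int:
    # One OLT at the head end plus one ONT per branch; the optical
    # splitters in between are passive and need no electronics.
    return 1 + branches

def p2p_electrical_devices(endpoints: int) -> int:
    # A point-to-point fiber needs an electrical terminal at both ends.
    return 2 * endpoints

print(pon_electrical_devices(32))  # 33
print(p2p_electrical_devices(32))  # 64
```

The gap widens linearly with the split count, which is why PON's economics improve as more subscribers share a tree.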
PON systems share a common fiber architecture but differ in their opto-electric approaches. The original broadband PON (BPON) is ATM-based; its successor, Gigabit PON (GPON), uses the GPON encapsulation method (GEM), which can carry ATM cells as well as Ethernet frames. The Ethernet PON (EPON) standard has since been ratified, and most operators contemplating major new PON deployments are conducting assessments and procurements of EPON. Both GPON and EPON have sufficient capacity for video delivery and high-speed Internet. Some providers favor GPON's ATM heritage for its ability to create multiple independent service channels to the user via virtual circuits; others prefer EPON because it aligns better with Ethernet-based metro architectures.
Planning for access network fiber deployment demands careful consideration of the following:
- The demographics of the area to be served, including household income, family size, and age distribution. This data is critical in establishing the service market opportunity. In general, favorable demographics justify deeper fiber deployment.
- The geography and topography of the service area, including household density (average lot size), the rights of way available, and whether cabling runs underground or overhead. This data sets the cost points for each approach. Poor characteristics here will create profit-margin challenges if not taken into account. Studies in Japan, where fiber deployment is high, indicate that even whether the ground is flat or hilly affects deployment cost.
- The service mix to be provided, over at least a five-year period, considering both trends in demand and in competition. The worst possible outcome in an access fiber deployment is a new set of requirements that the fiber architecture deployed cannot effectively support.
In the installation and maintenance phase, access networks present special problems because of the high cost of rolling a truck to fix a problem; a broadband consumer may take three years to pay back the cost of a single service call. It is therefore critical that each fiber strand be properly installed and, in particular, that the splices in PON installations be carefully made and verified. Fiber should also be tested end to end before being committed to customers. Unlike copper problems, which tend to develop over time, most fiber problems, operators report, surface shortly after installation and result from improper practices.
About the author: Tom Nolle is president of CIMI Corporation, a strategic consulting firm specializing in telecommunications and data communications since 1982. He is a member of the IEEE, ACM, TeleManagement Forum, and the IPsphere Forum, and the publisher of Netwatcher, a journal on advanced telecommunications strategy. Tom is actively involved in LAN, MAN and WAN issues for both enterprises and service providers and also provides technical consultation to equipment vendors on standards, markets and emerging technologies. Check out his SearchTelecom networking blog Uncommon Wisdom.