RBFS C-BNG with Redundancy for IPoE Subscribers

RtBrick provides reference design architectures that are designed and validated with in-house testing tools and methods. This document guides you through validating the RBFS Consolidated BNG (C-BNG) IPoE with Redundancy implementation.

About this Guide

This document provides information about the RBFS C-BNG IPoE deployment in Redundancy (high availability) mode. The guide covers general platform configuration, the configuration of various access and routing protocols, subscriber management, Quality of Service (QoS), and so on. The document presents a single use-case scenario and describes specifically how to validate this particular implementation. For more information on any specific application, refer to https://documents.rtbrick.com/.

Currently, the guide’s scope is limited to the basic features and configurations needed for validation purposes. This guide does not provide information about advanced RBFS features such as multicast.

About the RBFS Consolidated BNG

The RtBrick C-BNG is delivered as a container running on Open Network Linux (ONL), which is provided by the hardware ODMs. Platforms that support C-BNG include the Edgecore AGR 400 and CSR 320, and the UfiSpace S9600. The RtBrick C-BNG software runs on powerful bare-metal switches as an open BNG.

The BNG is designed to dynamically deliver the following services:

  1. Discovering and managing subscriber sessions for IPoE subscribers

  2. Providing authentication, authorization and accounting (AAA)

Deployment

A C-BNG provides BNG functionality on a single bare-metal switch and eliminates the need for a chassis-based system. Based on Broadcom (BRCM) chipsets, it combines a small footprint and low power consumption with complete BNG and routing feature support, a compelling value proposition.

C-BNG runs on small-form-factor, temperature-hardened hardware that allows deployments in street-side cabinets.

The rtbrick-toolkit is a meta package that can be used to install all the tools needed to work with RBFS images (container or ONL installer) and the RBFS APIs.

For more information, see the RBFS Installation Guide.
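
For example, on a Debian or Ubuntu host on which the RtBrick package repository is already configured, the toolkit can typically be installed with APT (the repository setup itself is covered in the RBFS Installation Guide):

$ sudo apt-get update
$ sudo apt-get install rtbrick-toolkit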

Using the RBFS CLI

Connect to the C-BNG node.

$ ssh <C-BNG-management-ip> -l supervisor
supervisor@<C-BNG-management-ip>'s password:

Enter the password of the supervisor user when prompted.

After logging in, the shell prompt looks like this:

supervisor@rtbrick>C-BNG.rtbrick.net:~ $

Open the RBFS CLI.

supervisor@rtbrick>C-BNG.rtbrick.net:~ $ cli
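
The CLI starts in the operation mode; the prompt typically changes to indicate the current mode, for example:

supervisor@rtbrick>C-BNG.rtbrick.net: op>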

The CLI has three different modes:

  • operation mode is a read-only mode to inspect and analyse the system state

  • config mode allows modifying the RBFS configuration

  • debug mode provides advanced tools for troubleshooting

The switch-mode command allows switching between the different modes. The show commands allow inspecting the system state. The set and delete commands, which are only available in the configuration mode, allow modifying or deleting the current configuration. The commit command persists the changes. RBFS provides a commit history which allows reviewing changes (show commit log) and restoring a previous configuration (rollback). There are also commands to ping destinations, capture network traffic, and save the configuration to or load it from a file.
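
As an illustration, a typical configuration change follows this pattern (the host-name value is only an example):

supervisor@rtbrick>C-BNG.rtbrick.net: op> switch-mode config
supervisor@rtbrick>C-BNG.rtbrick.net: cfg> set system host-name cbng1
supervisor@rtbrick>C-BNG.rtbrick.net: cfg> commit
supervisor@rtbrick>C-BNG.rtbrick.net: cfg> show commit log

If a change needs to be reverted, the rollback command restores a previous configuration from the commit history.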

The CLI supports abbreviating commands, provides suggestions when you hit the [tab] key, and displays context help when you enter a ?.

For more information on how to use the RBFS CLI, see the RBFS CLI User Guide.

About RBFS Redundancy

RBFS Redundancy protects subscriber services from node and link outages. It provides mechanisms that enhance network resiliency and keep subscriber workloads functional by ensuring a reliable switchover in the event of a node or link outage. With RBFS Redundancy, if a node or a protected link goes down, the peer node automatically takes over the services of the affected redundancy group with minimal downtime.

RBFS Redundancy protects subscriber groups using an active-standby node cluster model. The active node for a redundancy group runs the workload, while the standby node mirrors the subscriber state data from the active node and takes over the workload in the event of a node or link failure. This ensures that traffic keeps flowing during an outage. For more information about the RBFS redundancy solution, see the RBFS Redundancy Solution Guide.

Redundancy Pair BNG Devices and Inter-BNG Connectivity

C-BNG platforms paired for redundancy are connected with a Redundancy (RD) TCP connection, which can be established either directly or through the core network. IS-IS (for unicast reachability) and LDP (for labeled unicast reachability) run between the active and standby nodes. Over this connection, RD keepalive messages are exchanged between the two C-BNG devices, and subscriber state data is mirrored for synchronization.

Devices paired for redundancy can run multiple redundancy sessions. A C-BNG node can be the active node for one or more redundancy sessions serving subscribers and, at the same time, act as the standby node for other subscriber groups.
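
As a rough illustration of the pairing, each node references its peer through a redundancy profile that is bound to a redundancy session. The sketch below uses placeholder names and addresses, and the exact configuration hierarchy and keywords are assumptions; refer to the RBFS Redundancy Solution Guide for the authoritative syntax:

set redundancy profile rd_ipoe peer ipv4 remote-address <peer-loopback-ip>
set redundancy profile rd_ipoe peer ipv4 update-source <local-loopback-ip>
set redundancy profile rd_ipoe peer ipv4 instance default
set redundancy session 1 profile rd_ipoe
set redundancy session 1 priority 10
commit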

RBFS C-BNG with Redundancy for IPoE Subscribers Implementation Architecture

The following topology diagram for RBFS C-BNG IPoE shows two bare-metal switches installed with the RBFS C-BNG software. The topology demonstrates complete IPoE subscriber emulation with redundancy, along with the routing needed to connect to the network uplink.

The devices are deployed in active-standby mode with respect to the LAG bundles: cbng1 is configured as the active device for lag-1, and cbng2 is the standby device for lag-1. If one node goes down, the other node continues stand-alone and takes over the subscribers of the affected subscriber group. The RBFS devices (C-BNGs) are connected to core routers on one side and to OLTs (simulated by BNG Blaster) on the other.

When the active device goes down, or a link failure occurs between the active C-BNG device and the OLT device, the standby C-BNG device detects the failure and takes over the IPoE subscriber sessions from the previously active device.
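
To validate the failover behavior, the redundancy state can be inspected on both nodes before and after the switchover. Assuming a show command along the lines of the RBFS redundancy documentation (the exact command name is an assumption), this could look like:

supervisor@rtbrick>C-BNG.rtbrick.net: op> show redundancy session

The output is expected to report one node as active and its peer as standby for each redundancy session.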

Figure: RBFS C-BNG IPoE high-availability topology

In this topology:

  • SN-19-RR is the Service Node, a Linux container running on an Ubuntu server. BNG Blaster, which emulates routing and access functions, runs in this container, along with FreeRADIUS, which emulates the RADIUS functions. BNG Blaster is an open-source network tester developed by RtBrick to validate the performance and scalability of BNG devices. It can emulate both subscribers at scale and network elements such as routers, so complex topologies can be created with just the DUT and a server that hosts BNG Blaster, minimizing the equipment needed to test the DUT. For more information, refer to the BNG Blaster documentation.

  • There are three routers: two acting as core routers (R-1 and R-2) and the third as the edge router. R-1, R-2, and the edge router are RBFS virtual helper nodes; these devices need not be RtBrick C-BNGs.

  • cbng1 and cbng2 are the DUTs: two bare-metal switches installed with the RBFS C-BNG software. On one side they are connected to the access network, and on the other side to the core network.

  • The C-BNGs establish IS-IS adjacencies and LDP sessions with the core routers R-1 and R-2. BNG Blaster simulates four route reflectors (RRs) that have BGP sessions with the C-BNGs, which operate in redundant mode. Two of the emulated RRs advertise 1 million IPv4 prefixes each, and the other two advertise 250,000 IPv6 prefixes each. These prefixes are advertised to the C-BNGs over the BGP sessions.

  • The topology brings up 20K IPoE subscribers (two OLTs with 10K subscribers each, simulated by BNG Blaster) over a LAG interface. A failure of one C-BNG does not affect subscriber services, since the subscriber state is synchronized between the C-BNGs.
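
On the Service Node, each emulated OLT corresponds to a BNG Blaster instance started against an IPoE test configuration. A minimal invocation could look like the following, where olt1-ipoe.json is a placeholder for the test configuration file, -I starts the interactive user interface, and -c overrides the emulated session count:

$ sudo bngblaster -C olt1-ipoe.json -I -c 10000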