RBFS Tour

Welcome

Welcome to the RBFS Tour, an interactive guide through the RtBrick Full Stack (RBFS) software and its CLI configuration. The tour contains training modules that explain step by step which commands you need to get your switch running, along with hands-on exercises and troubleshooting guidelines. The material covers interface configuration, routing protocols (e.g., IS-IS and BGP), MPLS, VPN services, and subscriber management.

RBFS Overview

RtBrick Full Stack (RBFS) is a disaggregated and open network operating system that allows faster deployment of new features and services and promotes a collaborative ecosystem of hardware and other component vendors. By separating the hardware from the software, RBFS enables you to choose the whitebox switches of your choice without any vendor lock-in.

RBFS has been designed based on a microservices architecture, which offers some key benefits compared to traditional monolithic systems. It offers greater agility and provides a higher degree of automation that reduces operational overheads. RBFS works well with continuous integration (CI) and continuous delivery (CD) practices and tools.

The key components of RBFS are:

  • a schema-driven, in-memory database called the Brick Data Store (BDS). BDS acts as the control plane and provides all required data and instructions to the daemons. It is architecturally designed to minimize response time by eliminating disk access.

  • multiple microservices, known as brick daemons (BD), which decouple the various functionalities and services. For example, the ribd daemon is responsible for route selection, next-hop resolution, tunnel selection, and recursion, while the forwarding daemon (fibd) handles packet forwarding, route next-hop download, and VPP and PD layer programming. Daemons such as confd and ifmd take care of configuration management and interface management, respectively.

Figure 1. RBFS Architectural Overview

Daemons such as CtrlD (Controller) and ApiGwD (API Gateway) are also part of the RBFS ecosystem. They reside on the host operating system (ONL) and manage all communication between clients and the backend services running in the container. The API Gateway (ApiGwD) daemon provides a single point of access to the services running inside the RBFS container.

RBFS provides a CLI and a rich set of commands that you can use to operate, configure, monitor, and manage the system and its various components. In addition to the CLI, RBFS also offers support for REST-based industry-standard tools such as RESTCONF and Operational State API to enable communication with the software and underlying devices.

How to use this Tour Guide

Each training module can be used independently. Nevertheless, if you are new to RBFS, it is recommended to go through these modules in the given order.

Each module consists of some background information and exercises that you should try to implement yourself. The exercise sections have the following style:

Exercise 1: Sample Exercise

Configure something on your node.

In case you are stuck and unsure what exactly needs to be done, each exercise is followed by a hidden section that explains the steps and the expected solution. You can expand it by clicking the Click to reveal the answer link below the exercise:

Click to reveal the answer

Here are some more details.

Virtual Training Environment

Prerequisites

In order to build a local test environment, your organization needs access to the RtBrick image servers. Please follow the instructions in the RBFS Installation, ZTP, and Licensing guide to get the required permissions and to install RtBrick's toolkit for running RBFS instances.

The system requirements for a minimal virtual lab environment are a VM with 8 cores, 16 GB of memory, Ubuntu 18.04, and Docker installed. Please follow the instructions in the official Docker installation guide to install Docker.

Setup

Provided that all prerequisites for running a local RBFS environment are fulfilled, you can download the files necessary to run the virtual lab environment with the command

~$ sudo rtb-image pull --here -r trainings_resources -v latest

This command downloads an entire directory. The files it contains have interdependencies, i.e., they should not be moved individually.

In addition, the commands of the rtb-tools should be run as a normal user with sudo permissions, not as the root user.

Afterwards, the following commands spawn the local test environment: they create four RBFS containers running the systems R1, R2, R3, and R4 as well as a service node, and start the ctrld service. The topology is defined in the file topology.yaml.

~$ cd trainings_resources/infra
~/trainings_resources/infra$ rtb-ansible full-setup
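As a rough illustration of what such a topology definition might contain, here is a purely hypothetical sketch. The node names match the lab, but the keys and overall schema are invented for illustration; the actual topology.yaml shipped in trainings_resources may look different:

```yaml
# Hypothetical sketch only -- the real topology.yaml schema may differ.
nodes:
  - name: R1          # device under test
  - name: R2
  - name: R3
  - name: R4
  - name: service     # service node running ctrld
links:                # link list and key names invented for illustration
  - endpoints: [R1, R2]
  - endpoints: [R1, R3]
```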

The figure below shows the lab network topology.

Figure 2. Topology of Virtual Lab

Running the Exercises

Each module comes with two Robot files: a setup file and a check file. Before you start a module, execute the corresponding setup file to prepare your lab environment, e.g., for the first BGP module:

~/trainings_resources/robot$ robot bgp_part1/bgp1_setup.robot

It is not necessary to have knowledge of the Robot Framework in order to use it. For those who are interested, we have put together a small introduction under Robot Framework.
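For orientation, a minimal Robot Framework suite has the following shape. This is a hypothetical sketch, not the contents of the actual setup files in trainings_resources/robot:

```robot
*** Settings ***
Documentation    Hypothetical example of a module setup suite.
...              The real setup files differ from this sketch.

*** Test Cases ***
Prepare Lab For Module
    Log    Reset R2, R3, and R4 to the full configuration
    Log    Load the initial configuration on R1
```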

The nodes R2, R3, and R4 are loaded with the full configuration at the beginning of each module. Node R1, which is your device under test (DUT), is loaded with an initial configuration only, so that you do not have to repeat steps that were already done in previous modules.

You can log in to R1 to perform your configuration exercises using either ssh or rtb-ssh:

~/trainings_resources$ rtb-ssh R1

After you have completed the module, execute the check file to see if everything was done correctly, e.g., for the first BGP module:

~/trainings_resources/robot$ robot bgp_part1/bgp1_verify.robot