Application Layer

Fabric Operation

Fabric Protocols

Verify L2 connectivity using LLDP. LLDP allows you to quickly verify the topology and connectivity between your devices:

supervisor@rtbrick>LEAF01: op> show lldp neighbor
Neighbor name      Status  Remote port ID     Local port ID      Neighbor MAC address  Last received     Last sent
spine2             Up      memif-0/1/1        memif-0/1/1        7a:52:68:60:01:01     0:00:11 ago       0:00:12 ago
spine2             Up      memif-0/1/2        memif-0/1/2        7a:52:68:60:01:02     0:00:06 ago       0:00:09 ago
leaf1              Up      memif-0/1/1        memif-0/2/1        7a:47:fc:60:01:01     0:00:07 ago       0:00:10 ago
leaf2              Up      memif-0/1/1        memif-0/2/2        7a:28:3b:60:01:01     0:00:13 ago       0:00:14 ago

Verify IPv6 neighbor discovery:

supervisor@rtbrick>LEAF01: op> show neighbor ipv6
Instance                MAC Address          Interface             IP Address                Dynamic  Entry Time
default                 7a:52:68:60:01:01    memif-0/1/1/1         fd3d:3d:100:a::2          true     Wed Nov 18 18:33:28
default                 7a:52:68:60:01:01    memif-0/1/1/1         fe80::7852:68ff:fe60:101  true     Wed Nov 18 18:32:30
default                 7a:52:68:60:01:02    memif-0/1/2/1         fe80::7852:68ff:fe60:102  true     Wed Nov 18 18:32:30
default                 7a:47:fc:60:01:01    memif-0/2/1/1         fe80::7847:fcff:fe60:101  true     Wed Nov 18 18:32:32
default                 7a:28:3b:60:01:01    memif-0/2/2/1         fe80::7828:3bff:fe60:101  true     Fri Nov 20 14:23:27

If there is no LLDP peer or no IPv6 neighbor discovered on an interface, it typically indicates a connectivity issue, and BGPv6 peers cannot be established. In this case, proceed with the following steps:

  • Verify the interface as described in section 3.1

  • Verify connectivity to the neighbor using Ping as described in section 4.1.3 (see the example after this list)

  • Check the running configuration for the fabric interfaces.
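
For the Ping step, you can use the link-local ping shown in the Fabric Connectivity section; the address and interface in this example are taken from the outputs above and need to be adjusted to your topology:

supervisor@rtbrick>LEAF01: op> ping fe80::7852:68ff:fe60:101 source-interface memif-0/1/1/1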

If IPv6 neighbors have been discovered, verify the BGP sessions. BGP peers should have been auto-discovered on all fabric interfaces. If a BGP session is operational, it will be in the "Established" state. The "PfxRcvd" and "PfxSent" columns show that BGP routes are being exchanged:

supervisor@rtbrick>LEAF01: op> show bgp peer
Instance name: default
Peer                                 Remote AS    State         Up/Down Time               PfxRcvd              PfxSent
leaf1                                4200000201   Established   4d:17h:00m:27s             4                    14
spine2                               4200000100   Established   0d:00h:05m:11s             8                    14

If IPv6 neighbors have been discovered, but BGP sessions are not established, perform the following steps:

  • Inspect the output of the 'show bgp peer detail' command (see the example below)

  • Check the BGP running configuration

  • Enable and verify BDS logging for bgp.iod and the BGP module as described in section 5
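
For the first step, run the command referenced above. Its detailed output typically includes the session state, timers, and negotiated capabilities, which helps to narrow down why a session is stuck:

supervisor@rtbrick>LEAF01: op> show bgp peer detail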

If BGP sessions are established and routes are being exchanged, BGP will typically be fully operational. Next, verify BGP routes for IPv6 unicast and IPv6 labeled unicast:

supervisor@rtbrick>LEAF01: op> show bgp rib-local ipv6 unicast
supervisor@rtbrick>LEAF01: op> show bgp rib-local ipv6 labeled-unicast

If you have multiple instances with a high number of routes, you can optionally filter the output using the 'instance default' command option. For both IPv6 and IPv6 LU, there should be one or more routes for each of the spine and leaf IPv6 loopback addresses, as in this example:

supervisor@rtbrick>LEAF01: op> show bgp rib-local ipv6 unicast
Instance: default, AFI: ipv6, SAFI: unicast
Prefix                              Snd-Path-ID     Rcv-Path-ID     Peer                                Next-Hop                            Up Time
fd3d:3d:0:99::1/128                 513421047       2               ::                                                                      0d:00h:00m:48s
fd3d:3d:0:99::2/128                 748525752       0               fe80::7852:68ff:fe60:101            fe80::7852:68ff:fe60:101            0d:00h:00m:36s
fd3d:3d:0:99::3/128                 30278035        0               fe80::7847:fcff:fe60:101            fe80::7847:fcff:fe60:101            0d:00h:00m:36s
fd3d:3d:0:99::4/128                 748525752       0               fe80::7852:68ff:fe60:101            fe80::7852:68ff:fe60:101            0d:00h:00m:36s

Transport-layer Routing

The BGP routes described in section 4.1.1 above are subscribed to by ribd, which then selects the best routes from multiple sources and adds them to the actual routing table. In this guide, we refer to the connectivity between the fabric devices as the transport layer, as opposed to the service-layer connectivity in the VPNs deployed on top.

Verify the IPv6 unicast and IPv6 labeled unicast routing tables. As with the BGP commands, you can optionally filter the output using the 'instance default' command option:

supervisor@rtbrick>LEAF01: op> show route ipv6 unicast
supervisor@rtbrick>LEAF01: op> show route ipv6 labeled-unicast
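
For example, to restrict the output to the default instance, append the instance option in the same way as it is used with the show route command in the Service-Layer Connectivity section below:

supervisor@rtbrick>LEAF01: op> show route ipv6 unicast instance default
supervisor@rtbrick>LEAF01: op> show route ipv6 labeled-unicast instance default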

For both IPv6 and IPv6 LU, there should be one or more routes for each of the spine and leaf IPv6 loopback addresses, each with a valid IPv6 nexthop address and exit interface. Assuming you are using BGPv6 as the fabric protocol, i.e. no additional protocols like IS-IS in the default instance, these will be BGP routes only. If all expected routes exist, it typically indicates that the fabric is working fine from a control-plane perspective.

Fabric Connectivity

In order to troubleshoot data-plane issues, you can verify connectivity using the RBFS ping tool. First, verify connectivity to the auto-discovered link-local neighbors. Specify the interface on which the neighbor has been discovered as the source interface. Example:

supervisor@rtbrick>LEAF01: op> show neighbor ipv6
Instance                MAC Address          Interface             IP Address                Dynamic  Entry Time
default                 7a:52:68:60:01:01    memif-0/1/1/1         fe80::7852:68ff:fe60:101  true     Wed Nov 18 18:32:30
<...>

supervisor@rtbrick>LEAF01: op> ping fe80::7852:68ff:fe60:101 source-interface memif-0/1/1/1
68 bytes from fe80::7852:68ff:fe60:101: icmp_seq=1 ttl=63 time=8.6318 ms
<...>
Statistics: 5 sent, 5 received, 0% packet loss

Second, verify connectivity to the spine and leaf loopback addresses learned via BGP. Use a source address that is advertised via BGP in the default instance, so that it is reachable from the remote device. Which address to use depends on your deployment, but typically it is the loopback interface in the default instance. Example:

supervisor@rtbrick>LEAF01: op> show route ipv6 unicast
Instance: default, AFI: ipv6, SAFI: unicast
Prefix/Label                             Source            Pref    Next Hop                                 Interface
fd3d:3d:0:99::1/128                      direct            0       fd3d:3d:0:99::1                          lo-0/0/0/1
fd3d:3d:0:99::3/128                      bgp               20      fe80::7847:fcff:fe60:101                 memif-0/2/1/1
fd3d:3d:0:99::4/128                      bgp               200
 <...>

supervisor@rtbrick>LEAF01: op> ping fd3d:3d:0:99::3 source-interface lo-0/0/0/1
68 bytes from fd3d:3d:0:99::3: icmp_seq=1 ttl=63 time=10.0001 ms
<...>

Next, verify MPLS connectivity by specifying IPv6 LU with the ping tool. Example:

supervisor@rtbrick>LEAF01: op> ping fd3d:3d:0:99::3 instance default afi ipv6 safi labeled-unicast source-interface lo-0/0/0/1
68 bytes from fd3d:3d:0:99::3: icmp_seq=1 ttl=63 time=2.8520 ms
<...>

If the fabric connectivity is broken, use the RBFS traceroute tool to narrow down the location of the issue. As with ping, you need to use a source address that is advertised via BGP in the default instance and reachable from the remote device. Example:

supervisor@rtbrick>LEAF01: op> traceroute fd3d:3d:0:99::4 source-interface lo-0/0/0/1
traceroute to fd3d:3d:0:99::4 30 hops max, 60 byte packets
1    fd3d:3d:100:a::2    13.270 ms      4.973 ms      6.294 ms
2    fd3d:3d:0:99::4    18.825 ms      17.058 ms      17.764 ms

Subscriber Services

The term subscriber describes an access user or session at a higher level, decoupled from underlying protocols like PPPoE or IPoE. Subscribers in RBFS can be managed locally or remotely via RADIUS. Each subscriber is uniquely identified by a 64-bit number called the subscriber-id.

Subscriber Sessions

A good starting point for troubleshooting subscriber services is to verify the status of the subscriber sessions. If a session is fully operational, its state will be ESTABLISHED like in the following example:

supervisor@rtbrick>LEAF01: op> show subscriber
Subscriber-Id          Interface        VLAN      Type   State
72339069014638600      ifp-0/0/1        1:1       PPPoE  ESTABLISHED
72339069014638601      ifp-0/0/1        1:2       PPPoE  ESTABLISHED
72339069014638602      ifp-0/0/1        1:3       PPPoE  ESTABLISHED
72339069014638603      ifp-0/0/3        2000:7    L2TP   ESTABLISHED

Alternatively, use show subscriber detail, which shows further details like username, Agent-Remote-Id (aka Line-Id) or Agent-Circuit-Id if the screen width is large enough to print all of this information.
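
For example:

supervisor@rtbrick>LEAF01: op> show subscriber detail

The following table describes all possible subscriber session states: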

State               Description
INIT                Initial subscriber state.
AUTHENTICATING      The subscriber is waiting for an authentication response.
AUTH ACCEPTED       Authentication has been accepted.
AUTH REJECTED       Authentication has failed.
TUNNEL SETUP        The subscriber is tunnelled via L2TPv2 and waits for the L2TP session setup to complete.
ADDRESS ALLOCATED   IP addresses have been allocated.
ADDRESS REJECTED    IP addresses have been rejected (pool exhausted, duplicate or wrong addresses).
FULL                Subscriber forwarding state established.
ACCOUNTING          Subscriber accounting has started and a RADIUS Accounting-Request-Start has been sent.
ESTABLISHED         The subscriber becomes ESTABLISHED after the response to the RADIUS Accounting-Request-Start if RADIUS accounting is enabled, otherwise immediately after FULL.
TERMINATING         The subscriber is terminating and remains in this state until the response to the RADIUS Accounting-Request-Stop is received, if RADIUS accounting is enabled.

Further details per subscriber can be shown with the following commands.

supervisor@rtbrick>LEAF01: op> show subscriber 72339069014638600
  <cr>
  access-line           Subscriber access line information
  accounting            Subscriber accounting information
  acl                   Subscriber ACL information (filter)
  detail                Detailed subscriber information
  qos                   Subscriber QoS information
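
For example, to display the detailed information or the accounting counters for the subscriber shown above:

supervisor@rtbrick>LEAF01: op> show subscriber 72339069014638600 detail
supervisor@rtbrick>LEAF01: op> show subscriber 72339069014638600 accounting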

If a subscriber has been torn down or is not able to set up, inspect the termination history, which indicates the actual teardown reason.

supervisor@rtbrick>LEAF01: op> show subscriber history
Subscriber-Id          Timestamp                            Terminate Code
72339069014638594      Tue Nov 17 08:13:17 GMT +0000 2020   PPPoE LCP Terminate Request Received
72339069014638595      Tue Nov 17 08:13:17 GMT +0000 2020   PPPoE LCP Terminate Request Received
72339069014638596      Tue Nov 17 08:13:17 GMT +0000 2020   PPPoE LCP Terminate Request Received
72339069014638597      Tue Nov 17 08:13:17 GMT +0000 2020   PPPoE LCP Terminate Request Received
72339069014638598      Tue Nov 17 08:13:17 GMT +0000 2020   PPPoE LCP Terminate Request Received
72339069014638599      Tue Nov 17 08:13:46 GMT +0000 2020   L2TP CDN Request
72339069014638600      Tue Nov 17 08:39:01 GMT +0000 2020   PPPoE Clear Session

This command also shows further information like interface, VLAN, and MAC address if the screen is wide enough.

Optionally you can view even more detailed information by inspecting the following key BDS tables used for subscriber management:

  • Subscriber table – main table including all subscribers with all states and parameters:

supervisor@rtbrick>LEAF01: op> show datastore subscriberd.1 table local.access.subscriber

  • Subscriber interface table:

supervisor@rtbrick>LEAF01: op> show datastore subscriberd.1 table global.access.1.subscriber.ifl

  • Subscriber termination history table:

supervisor@rtbrick>LEAF01: op> show datastore subscriberd.1 table local.access.subscriber.terminate.history

RADIUS

Remote Authentication Dial-In User Service (RADIUS) is a networking protocol that provides centralized Authentication, Authorization and Accounting (AAA) management for all types of subscribers (PPPoE or IPoE). RADIUS servers can perform as authentication and accounting servers or change of authorization (CoA) clients. Authentication servers maintain authentication records for subscribers.

The subscriber daemon requests authentication in RADIUS access-request messages before permitting subscribers access. Accounting servers handle accounting records for subscribers. The subscriber daemon transmits RADIUS accounting-start, interim, and stop messages to the servers. Accounting is the process of tracking subscriber activity and network resource usage in a subscriber session. This includes the session time, called time accounting, and the number of packets and bytes transmitted during the session, called volume accounting. A RADIUS server can act as a change of authorization (CoA) client, allowing dynamic changes to subscriber sessions. The subscriber daemon supports both RADIUS CoA messages and disconnect messages: CoA messages can modify the characteristics of existing subscriber sessions without loss of service, while disconnect messages can terminate subscriber sessions.

RBFS supports multiple RADIUS servers for high availability and scaling, which are bundled using RADIUS profiles. The status of these profiles can be shown with the following command.

supervisor@rtbrick>LEAF01: op> show radius profile
RADIUS Profile: radius-default
    NAS-Identifier: BNG
    NAS-Port-Type: Ethernet
    Authentication:
        Algorithm: ROUND-ROBIN
        Server:
            radius-server-1
            radius-server-2
    Accounting:
        State: UP
        Stop on Reject: True
        Stop on Failure: True
        Backup: True
        Algorithm: ROUND-ROBIN
        Server:
            radius-server-1
            radius-server-2

The profile accounting state immediately becomes ACTIVE if at least one of the referenced RADIUS accounting servers is enabled for accounting. Otherwise, the profile remains DISABLED, which may indicate a wrong configuration.

If RADIUS Accounting-On is enabled, the profile state becomes STARTING before UP. It is not permitted to send any accounting request (start, interim, or stop) related to a profile in this state. It is also not permitted to send authentication requests if accounting-on-wait is configured in addition. The state becomes UP if at least one server in the accounting server list is in state UP or higher.

A newly added profile that references RADIUS servers already in use does not trigger a RADIUS Accounting-On request if at least one of the referenced servers is in state UP or higher.

The state of the RADIUS servers can be shown with the following command.

supervisor@rtbrick>LEAF01: op> show radius server
RADIUS Server            Address          Authentication State Accounting State
radius-server-1          100.0.0.1        UP                   UP
radius-server-2          100.0.0.3        ACTIVE               ACTIVE
radius-server-3          100.0.0.4        ACTIVE               ACTIVE

The following table explains the meaning of the different states; some of these states are applicable to accounting only.

State           Description
DISABLED        RADIUS authentication (authentication_state) or accounting (accounting-state) is disabled, or the server is not referenced by a profile.
ACTIVE          Server is referenced by a RADIUS profile but no valid response has been received.
STARTING        Accounting (accounting-state) only: Accounting-On is being sent (waiting for the Accounting-On response).
STOPPING        Accounting (accounting-state) only: Accounting-Off is being sent (waiting for the Accounting-Off response).
FAILED          Accounting (accounting-state) only: an Accounting-On/Off timeout has occurred.
UP              A valid RADIUS response has been received.
UNREACHABLE     No response received (timeout), but the server is still usable.
DOWN            Server is down but can still be selected.
TESTING         A single request is sent to test whether the server is back again; the server is not selected for other requests while in this state.
DEAD            Server is down and should not be selected.

Alternatively, use show radius server <radius-server> for further details and statistics per RADIUS server. These statistics can be cleared with clear radius server-statistics without any service impact.
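
For example, using one of the servers from the output above:

supervisor@rtbrick>LEAF01: op> show radius server radius-server-1
supervisor@rtbrick>LEAF01: op> clear radius server-statistics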

PPPoE Sessions

For PPPoE sessions, the state should be ESTABLISHED if locally terminated, or TUNNELLED for L2TPv2 tunnelled sessions.

supervisor@rtbrick>LEAF01: op> show pppoe session
Subscriber-Id          Interface        VLAN      MAC               State
72339069014638604      ifp-0/0/1        1:1       00:04:0e:00:00:01 ESTABLISHED
72339069014638601      ifp-0/0/1        1:2       00:04:0e:00:00:02 ESTABLISHED
72339069014638602      ifp-0/0/1        1:3       00:04:0e:00:00:03 ESTABLISHED
72339069014638603      ifp-0/0/3        2000:7    52:54:00:57:c8:29 TUNNELLED

Alternatively, use show pppoe session detail, which shows further details like username, Agent-Remote-Id (aka Line-Id) or Agent-Circuit-Id if the screen width is large enough to print all of this information.
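
For example:

supervisor@rtbrick>LEAF01: op> show pppoe session detail

The following table describes the possible PPPoE session states: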

State            Description
LINKING          PPP LCP setup.
AUTHENTICATING   PPP authentication (PAP or CHAP).
NETWORKING       PPP IPCP (IPv4) and IP6CP (IPv6) setup.
ESTABLISHED      The PPPoE session becomes established if at least one NCP (IPCP or IP6CP) is established (state OPEN).
TUNNELLED        The PPPoE session is tunnelled via L2TPv2.
TERMINATING      PPP session teardown.
TERMINATED       PPPoE session terminated.

If a PPPoE session remains in the TERMINATED state, check the subscriber state. Typically this happens if a RADIUS Accounting-Request-Stop is still pending.

Further details per PPPoE session can be shown with the following commands.

supervisor@rtbrick>LEAF01: op> show pppoe session 72339069014638601
  <cr>
  detail                Detailed session information
  statistics            Protocol statistics

The detail command shows the states of the session and all sub-protocols with extensive information and negotiated parameters.
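
For example, using the session from the command above:

supervisor@rtbrick>LEAF01: op> show pppoe session 72339069014638601 detail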

Session statistics are available globally and per session.

supervisor@rtbrick>LEAF01: op> show pppoe session statistics
supervisor@rtbrick>LEAF01: op> show pppoe session 72339069014638601 statistics

The PPPoE discovery statistics are helpful if session setup fails during the initial PPPoE discovery, before the actual PPP negotiation starts.

supervisor@rtbrick>LEAF01: op> show pppoe discovery packets
Packet           Received         Sent
PADI             17               0
PADO             0                17
PADR             17               0
PADS             0                17
PADT             1                13

supervisor@rtbrick>LEAF01: op> show pppoe discovery errors
PADI Drop No Config            : 0
PADI Drop Session Protection   : 0
PADI Drop Session Limit        : 0
PADI Drop Dup Session          : 0
PADI Drop Interface Down       : 0
PADR Drop No Config            : 0
PADR Drop Wrong MAC            : 0
PADR Drop Interface Down       : 0
PADR Drop Session Limit        : 0
PADR Drop Session Protection   : 0
PADR Drop Bad Cookie           : 0
PADR Drop Bad Session          : 0
PADR Drop Dup Session          : 0
PADR Drop No mapping Id        : 0
PADT Drop No Session           : 0
PADT Drop Wrong MAC            : 0
PADX Interface Get Failure     : 0

If PPPoE session protection is enabled in the access configuration profile, short-lived or failed sessions are logged in the PPPoE session protection table (local.pppoe.session.protection).

By default, every session that is not established for at least 60 seconds is considered a failed or short-lived session. This blocks new sessions on this IFP and VLANs for one second per default, increasing exponentially with every further failed session until the default maximum of 300 seconds is reached. The interval is reset after 900 seconds without failed sessions.

The PPPoE session protection table also includes the last subscriber-id and terminate code, which indicate the reason for session failures.

supervisor@rtbrick>LEAF01: op> show pppoe discovery protection
Interface        VLAN      Status  Attempts   Last Terminate Code
ifp-0/0/1        1:1       OK      1          PPPoE LCP Terminate Request Received
ifp-0/0/1        1:2       OK      1          PPPoE LCP Terminate Request Received
ifp-0/0/1        1:3       OK      1          PPPoE LCP Terminate Request Received

Status OK indicates that new sessions are accepted, whereas BLOCKED means that sessions will be rejected.

L2TP Sessions

For L2TPv2 tunnelled PPPoE sessions, the globally unique subscriber-id can be used to get information about the L2TP session.

supervisor@rtbrick>LEAF01: op> show l2tp subscriber 72339069014638621
Subscriber-Id: 72339069014638621
    State: ESTABLISHED
    Local TID: 45880
    Local SID: 39503
    Peer TID: 1
    Peer SID: 1
    Call Serial Number: 10
    TX Speed: 10007000 bps
    RX Speed: 1007000 bps
    CSUN: disabled

The following command gives a good overview of the corresponding tunnels.

supervisor@rtbrick>LEAF01: op> show l2tp tunnel sessions
Role Local TID Peer TID State        Preference Sessions Established Peer Name
LAC       2022        1 ESTABLISHED       10000        1           1 LNS3
LAC       3274        1 ESTABLISHED       10000        1           1 LNS8
LAC      14690        1 ESTABLISHED       10000        1           1 LNS6
LAC      29489        1 ESTABLISHED       10000        1           1 LNS9
LAC      33323        1 ESTABLISHED       10000        1           1 LNS4
LAC      35657        1 ESTABLISHED       10000        1           1 LNS10
LAC      37975        1 ESTABLISHED       10000        1           1 LNS1
LAC      45880        1 ESTABLISHED       10000        1           1 LNS7
LAC      46559        1 ESTABLISHED       10000        1           1 LNS2
LAC      58154        1 ESTABLISHED       10000        1           1 LNS5

Detailed information per tunnel is available via show l2tp tunnel <TID> detail.
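
For example, using one of the tunnel IDs from the output above:

supervisor@rtbrick>LEAF01: op> show l2tp tunnel 37975 detail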

L2TP tunnel statistics are available globally and per tunnel.

supervisor@rtbrick>LEAF01: op> show l2tp tunnel statistics
supervisor@rtbrick>LEAF01: op> show l2tp tunnel 37975 statistics

Service-Layer Connectivity

A different type of issue can occur if a subscriber has successfully connected to a leaf switch but does not have connectivity to their services, for example connecting to the Internet. The actual user traffic is carried in a VPN across the RBFS fabric. First, verify the VPN routing table on both the spine and the leaf switches. Depending on your design, there will be specific routes and/or a default route only:

supervisor@rtbrick>LEAF01: op> show route ipv4 unicast instance services
Instance: services, AFI: ipv4, SAFI: unicast
Prefix/Label                    Source      Pref    Next Hop            Interface
192.168.0.3/32                  direct      0       192.168.0.3         lo-0/0/0/2
192.168.0.4/32                  bgp         20      fd3d:3d:0:99::4     memif-0/1/1/1
 <...>

If routes are already missing on the spine switch, there might be a routing issue between the spine and the upstream core routers or route reflectors. Further troubleshooting steps will depend on how the fabric is connected to the upstream network in your deployment. If all expected routes exist on the spine switch but are missing on the leaf switch, verify the VPN route exchange between them. Example for verifying VPN routes advertised by the spine switch:

supervisor@rtbrick>LEAF01: op> show bgp rib-out ipv4 vpn-unicast peer leaf1
Instance: default, AFI: ipv4, SAFI: vpn-unicast
  Peer: leaf1, Sent routes: 2
    Prefix                      MED     LocalPref   Origin          Next Hop                AS Path
    192.168.0.3/32              0       -           Incomplete      fd3d:3d:0:99::3         4200000100, 4200000201
    192.168.0.4/32              1       -           Incomplete      fd3d:3d:0:99::4         4200000100, 4200000202
<...>

Example for verifying VPN routes received by the leaf switch:

supervisor@rtbrick>LEAF01: op> show bgp rib-in ipv4 vpn-unicast peer spine1
Instance: default, AFI: ipv4, SAFI: vpn-unicast
  Peer: spine1, Received routes: 1
    Prefix                      Path ID    Next Hop                 MED        LocalPref  AS Path
    192.168.0.4/32              0          fd3d:3d:0:99::4          1          -          4200000100, 4200000202

If you have a publicly routed loopback address in the services VPN, you can verify the connectivity to any well-known destination address using the RBFS Ping tool within the VPN instance:

supervisor@rtbrick>LEAF01: op> ping 8.8.8.8 instance services source-interface lo-0/0/1/0
68 bytes from 8.8.8.8: icmp_seq=1 ttl=63 time=21.6622 ms
<...>
Statistics: 5 sent, 5 received, 0% packet loss

If there is no connectivity to the IP address of your service, verify connectivity across the fabric within the instance by sending a ping between two leaf switches. This will indicate if the connectivity problem lies in the spine/leaf fabric or in the upstream network:

supervisor@rtbrick>LEAF01: op> ping 192.168.21.5 instance services source-interface lo-0/0/1/0
68 bytes from 192.168.21.5: icmp_seq=1 ttl=63 time=1.5511 ms
<...>
Statistics: 5 sent, 5 received, 0% packet loss

Host Path Capturing

You can use the RBFS built-in capture tool to verify and troubleshoot fabric as well as services protocol operation. It captures and displays all host-path traffic, that is, control-plane packets sent to the CPU. It does not apply to transit traffic. This section explains the options available in the capture tool to troubleshoot host-path issues.

Physical Interface

You can capture all host path packets on a physical interface, including all sub-interfaces, by specifying the physical interface (IFP) name with the capture command.

capture interface <physical-interface-name> direction <dir>

Example
capture interface ifp-0/0/52 direction both

Logical Interface

If you specify a logical interface (IFL) name with the capture command, only the traffic on that sub-interface is captured. This allows you to filter, for example, on a specific VLAN.

capture interface <logical-interface-name> direction <dir>

Example
capture interface ifl-0/0/52/1 direction both

Shared Memory Interface

There is no BDS packet table in fibd. Instead, there is a pseudo network interface of the form shm-0/0/<trap-id>, where the trap ID identifies the protocol (BGP, ISIS, PPPoE, L2TP, RADIUS). You can use the VPP internal command show rtb-shm to find the mapping of protocols to trap IDs. Capturing on such an interface shows the packet exchange between fibd and the respective protocol daemon.

capture interface <shm-interface-name> direction <dir>

Example
capture interface shm-0/0/1 direction both

ACL-based Packet Capturing

You can use an ACL to more granularly define the traffic to be captured.

capture acl <acl-name> direction <direction> interface <interface> <options>

Option                   Description
<acl-name>               ACL name.
<direction>              Direction of the packet. The supported values are: in, out, or both.
<interface>              Interface used to capture packets to the console. For ACL-based packet capturing, the interface is mandatory.
file <filename> start    Saves the packets to a PCAP file, so that a tool like Wireshark can later be used to analyse the captured traffic.
raw                      Raw packet capture.

Example
{
  "rtbrick-config:acl": {
    "l3v6": {
      "rule": [
        {
          "rule-name": "from_to_spine1",
          "ordinal": [
            {
              "ordinal-value": 10,
              "match": {
                "direction": "ingress",
                "source-ipv6-prefix": "fd3d:3d:100:a::1/128"
              },
              "action": {
                "capture": "true"
              }
            },
            {
              "ordinal-value": 20,
              "match": {
                "destination-ipv6-prefix": "fd3d:3d:100:a::1/128",
                "direction": "ingress"
              },
              "action": {
                "capture": "true"
              }
            }
          ]
        }
      ]
    }
  }
}

supervisor@rtbrick>LEAF01: op> capture acl from_to_spine1 direction both interface hostif-0/0/3
Success : ifp capture started

2022-06-09T07:45:48.358898+0000 7a:2f:78:c0:00:03 > 7a:3f:3e:c0:00:03, ethertype IPv6 (0x86dd), length 122: (hlim 255, next-header ICMPv6 (58) payload length: 68) fd3d:3d:100:a::1 > fd3d:3d:100:a::2: [icmp6 sum ok] ICMP6, echo request, seq 1

2022-06-09T07:45:48.359027+0000 7a:3f:3e:c0:00:03 > 7a:2f:78:c0:00:03, ethertype IPv6 (0x86dd), length 122: (hlim 64, next-header ICMPv6 (58) payload length: 68) fd3d:3d:100:a::2 > fd3d:3d:100:a::1: [icmp6 sum ok] ICMP6, echo reply, seq 1

<...>

Filtering by Protocol

In most cases, when capturing on a logical or physical interface, you may want to select only packets belonging to a specific protocol. In that case, you can use the protocol filter option.

capture interface <interface-name> direction <direction> protocol <protocol-name>

Example
supervisor@rtbrick>LEAF01: op> capture interface ifp-0/0/52 direction both protocol bgp

supervisor@rtbrick>LEAF01: op> capture interface ifl-0/0/52/1 direction both protocol bgp
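
Assuming the protocol option also accepts the other host-path protocols listed in the shared memory section above, such as pppoe (this keyword is not shown in the examples in this guide), a capture on an access interface might look like this:

supervisor@rtbrick>LEAF01: op> capture interface ifp-0/0/1 direction both protocol pppoe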

Raw Format

The raw option of the capture tool decodes the packet and additionally dumps it in raw format. This is useful if you want to examine packets in hex, for example to check for malformed packets.

capture interface <interface-name> direction <direction> raw

Example
supervisor@rtbrick>LEAF01: op> capture interface ifl-0/0/52/1 direction both raw

supervisor@rtbrick>LEAF01: op> capture interface ifp-0/0/52 direction both raw

PCAP File

While debugging a setup with real traffic, analysing all packets on a terminal might be cumbersome. You can use the file option to save the packets in a PCAP file and later use a tool like Wireshark to analyse the captured traffic.

To start capturing the traffic in a file, enter the following command:

capture interface <interface-name> direction <direction> file <file_name.pcap> start

To stop capturing the traffic in a file, enter the following command:

capture interface <interface-name> direction <direction> file <file_name.pcap> stop

Example
supervisor@rtbrick>LEAF01: op> capture interface ifp-0/0/52 direction both file test.pcap start

supervisor@rtbrick>LEAF01: op> capture interface ifp-0/0/52 direction both file test.pcap stop