You should understand these 6 core switch concepts!
2023-02-13

1. Backplane bandwidth

 

Backplane bandwidth, also known as switching capacity, is the maximum amount of data that can be exchanged between the switch's interface processor (or interface card) and the data bus, much like the total number of lanes on an overpass. Because all communication between ports must pass through the backplane, the bandwidth the backplane provides becomes the bottleneck for concurrent port-to-port communication.

 

The larger the backplane bandwidth, the more bandwidth is available to each port and the faster data can be exchanged; the smaller it is, the less bandwidth each port gets and the slower the exchange. In other words, backplane bandwidth determines the switch's data-processing capability: the higher the backplane bandwidth, the stronger that capability. To achieve full-duplex, non-blocking transmission across the network, the switch must meet a minimum backplane-bandwidth requirement.

It is calculated as follows:

 

Backplane bandwidth = number of ports × port rate × 2

Tip: A Layer 3 switch is qualified only if both its forwarding rate and its backplane bandwidth meet the minimum requirements; neither can be dispensed with.

 

For example,

For a switch with 24 Gigabit ports:

Backplane bandwidth = 24 × 1000 Mbps × 2 ÷ 1000 = 48 Gbps.
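The calculation above can be sketched as a small helper. This is just the article's formula (ports × rate × 2, converted to Gbps) wrapped in a function; the function name is my own.

```python
def backplane_bandwidth_gbps(num_ports: int, port_rate_mbps: int) -> float:
    """Minimum backplane bandwidth (Gbps) for full-duplex, non-blocking
    operation: number of ports x port rate x 2, converted from Mbps to Gbps."""
    return num_ports * port_rate_mbps * 2 / 1000

# The 24-port Gigabit example from the text:
print(backplane_bandwidth_gbps(24, 1000))  # 48.0
```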

 


 

2. Layer 2/Layer 3 packet forwarding rate

 

The data in a network consists of packets, and processing each packet consumes resources. Forwarding rate (also called throughput) is the number of packets that can pass per unit time without loss. Throughput is like the traffic flow over an overpass; it is the most important parameter of a Layer 3 switch and marks its real performance. If the throughput is too small, the switch becomes a network bottleneck and hurts the transmission efficiency of the whole network. A switch should be capable of wire-speed switching, i.e., switching at the data rate of the transmission line, so that the switching bottleneck is eliminated as far as possible. For a Layer 3 core switch to achieve non-blocking transmission, the required rate must be ≤ the nominal Layer 2 packet forwarding rate and ≤ the nominal Layer 3 packet forwarding rate; in that case the switch can achieve wire speed for both Layer 2 and Layer 3 switching.

 

The formula is as follows:

Throughput (Mpps) = Number of 10-Gigabit ports × 14.88 Mpps + Number of Gigabit ports × 1.488 Mpps + Number of 100-Mbit ports × 0.1488 Mpps.

 

If the calculated value is less than or equal to your switch's nominal throughput, the switch can achieve wire speed.

 

Likewise, if the switch also has 10 Mbit or 100 Mbit ports, add them into the sum in the same way; if not, they can be ignored.

 

For example,

For a switch with 24 Gigabit ports, its full-configuration throughput should reach 24 × 1.488 Mpps = 35.71 Mpps to guarantee non-blocking packet switching when all ports run at wire speed. Similarly, if a switch can provide up to 176 Gigabit ports, its throughput should be at least 176 × 1.488 Mpps ≈ 261.89 Mpps to be a truly non-blocking design.
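The throughput formula and both examples can be checked with a short script. The per-port rates are the ones given in the text; the dictionary layout and function name are my own.

```python
# Line-rate forwarding for 64-byte packets, per port speed (Mbps -> Mpps),
# as given in the text: 10GigE = 14.88, GigE = 1.488, FastE = 0.1488.
RATE_MPPS = {10_000: 14.88, 1_000: 1.488, 100: 0.1488}

def min_throughput_mpps(port_counts: dict) -> float:
    """port_counts maps port speed in Mbps to the number of such ports."""
    return sum(RATE_MPPS[speed] * count for speed, count in port_counts.items())

# The two examples from the text:
print(round(min_throughput_mpps({1_000: 24}), 2))   # 35.71
print(round(min_throughput_mpps({1_000: 176}), 2))  # 261.89
```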

 

So, where does the 1.488 Mpps figure come from?

 

The benchmark for measuring wire-speed packet forwarding is the number of 64-byte (minimum-size) packets sent per unit time. For Gigabit Ethernet, the calculation is: 1,000,000,000 bps ÷ 8 bits ÷ (64 + 8 + 12) bytes = 1,488,095 pps. Note: on top of the 64-byte frame itself, each frame carries a fixed overhead of an 8-byte preamble and a 12-byte inter-frame gap. Therefore, when a wire-speed Gigabit Ethernet port forwards 64-byte packets, the packet forwarding rate is 1.488 Mpps. The port forwarding rate of Fast Ethernet is exactly one tenth of this, 148.8 kpps.

 

1. For 10 Gigabit Ethernet, the packet forwarding rate of a wire-speed port is 14.88Mpps.

2. For Gigabit Ethernet, the packet forwarding rate of a wire-speed port is 1.488Mpps.

3. For Fast Ethernet, the packet forwarding rate of a wire-speed port is 0.1488Mpps.

 

We can use this data.
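The derivation above can be reproduced for all three port speeds with one function. This simply encodes the text's arithmetic (link rate ÷ 8 ÷ (64 + 8 + 12) bytes); the function name is my own.

```python
def line_rate_pps(link_bps: int, frame_bytes: int = 64) -> float:
    """Packets per second at line rate for a given frame size: each frame
    occupies the frame itself plus an 8-byte preamble and a 12-byte
    inter-frame gap on the wire."""
    overhead_bytes = 8 + 12  # preamble + inter-frame gap
    return link_bps / 8 / (frame_bytes + overhead_bytes)

print(round(line_rate_pps(10_000_000_000)))  # 14880952 -> ~14.88 Mpps (10GigE)
print(round(line_rate_pps(1_000_000_000)))   # 1488095  -> ~1.488 Mpps (GigE)
print(round(line_rate_pps(100_000_000)))     # 148810   -> ~148.8 kpps (FastE)
```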

 

Therefore, if both of the above conditions (backplane bandwidth and packet forwarding rate) are met, we can say the core switch is truly wire-speed and non-blocking.

 

Generally, a switch that satisfies both requirements is a qualified switch.

A switch with a relatively large backplane but relatively small throughput either retains room for upgrades and expansion or has problems with its software efficiency or dedicated chip design; a switch with a relatively small backplane but relatively large throughput has relatively high overall performance. The manufacturer's claimed backplane bandwidth can generally be trusted, but the claimed throughput cannot, because throughput is a design value that is very difficult to test and of limited significance.

 


 

3. Scalability

 

Scalability should include two aspects:

1. Slots are used to install various functional modules and interface modules. Since each interface module provides a fixed number of ports, the number of slots fundamentally determines how many ports the switch can accommodate. In addition, every functional module (such as the supervisor engine module, IP voice module, extended service module, network monitoring module, security service module, etc.) also occupies a slot, so the number of slots fundamentally determines the switch's scalability.

 

 

2. There is no doubt that the more module types supported (LAN interface modules, WAN interface modules, ATM interface modules, extended function modules, etc.), the stronger the switch's scalability. Taking LAN interface modules as an example, they should include RJ-45 modules, GBIC modules, SFP modules, 10 Gbps modules, and so on, to meet the needs of complex environments and network applications in large and medium-sized networks.

 

4. Layer 4 switching

 

Layer 4 switching enables fast access to network services. In Layer 4 switching, the forwarding decision is based not only on the MAC address (Layer 2 bridging) or the source/destination IP address (Layer 3 routing), but also on the TCP/UDP (Layer 4) application port number; it is designed for high-speed intranet applications. Besides load balancing, Layer 4 switching also supports traffic control based on application type and user ID. Moreover, a Layer 4 switch sits directly in front of the servers and understands application session content and user permissions, making it an ideal platform for preventing unauthorized access to servers. Layer 4 switching involves both software design and circuit processing capability.
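As a toy illustration of the idea (not how a real switch ASIC works), the sketch below makes a forwarding decision from the TCP/UDP destination port and load-balances across a server pool per service. The pool addresses, flow fields, and hashing policy are all hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src_ip: str
    dst_ip: str
    proto: str      # "tcp" or "udp"
    dst_port: int

# Hypothetical server pools, keyed by (protocol, application port).
POOLS = {
    ("tcp", 80):  ["10.0.1.1", "10.0.1.2", "10.0.1.3"],
    ("tcp", 443): ["10.0.2.1", "10.0.2.2"],
}

def select_server(flow: Flow):
    """Pick a backend by Layer 4 port; None means no L4 rule applies and the
    packet falls back to ordinary L2/L3 forwarding."""
    pool = POOLS.get((flow.proto, flow.dst_port))
    if pool is None:
        return None
    # Hash the source address so one client's flows stick to one server.
    return pool[hash(flow.src_ip) % len(pool)]

server = select_server(Flow("192.0.2.7", "10.0.0.10", "tcp", 80))
print(server in POOLS[("tcp", 80)])  # True
```

The key point the sketch shows: the same destination IP can be steered differently depending on the Layer 4 port, which is exactly what Layer 2 or Layer 3 forwarding alone cannot do.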

 

5. Module redundancy

 

Redundancy is the guarantee of safe network operation. No manufacturer can guarantee that its products will never fail during operation; how quickly the network can recover when a failure occurs depends on the equipment's redundancy. For core switches, important components should be redundant, for example redundant management modules and redundant power supplies, to keep the network running stably to the greatest extent.

 

6. Routing redundancy

 

HSRP and VRRP are used to provide load sharing and hot backup for core devices. When one of the core or dual-aggregation switches fails, the Layer 3 routing devices and the virtual gateway can switch over quickly, providing dual-line redundant backup and ensuring the stability of the entire network.
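The failover rule behind these protocols can be sketched in a few lines: the live router with the highest priority owns the virtual gateway, and a backup takes over when the master fails. Real VRRP (RFC 5798) also involves advertisements, timers, and preemption; the router names and priorities below are made up.

```python
def elect_master(routers: dict, alive: set):
    """routers maps router name -> priority (VRRP uses 1-254);
    alive is the set of routers currently reachable. Returns the name of
    the router that should own the virtual gateway, or None."""
    candidates = {name: prio for name, prio in routers.items() if name in alive}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)  # highest priority wins

routers = {"core-A": 120, "core-B": 100}  # hypothetical names/priorities
print(elect_master(routers, {"core-A", "core-B"}))  # core-A
print(elect_master(routers, {"core-B"}))            # core-B (A has failed)
```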

A bit of background:

The main functions of a switch's aggregation layer are as follows:
1. Aggregating user traffic from the access layer, and performing aggregation, forwarding, and switching of packets;
2. Performing local routing, filtering, traffic balancing, QoS priority management, security mechanisms, IP address translation, traffic shaping, multicast management, and other processing;
3. Forwarding user traffic to the core switching layer, or routing it locally, according to the processing results;
4. Completing conversions between various protocols (such as route summarization and redistribution) to ensure that the core layer connects areas running different protocols.