Linux NIC configuration: vlan / bond / bridge / macvlan / ipvlan / macvtap modes

Linux NIC modes

Linux NICs can be configured in non-VLAN (plain), vlan, bond, bridge, macvlan, ipvlan and other modes. The sections below walk through configuration examples on both the switch side and the server side.

Prerequisites:

  • One physical switch; the example uses an H3C S5130 Layer 3 switch
  • One physical server; the example uses Ubuntu 22.04 LTS as the operating system

On the switch, create two example VLANs, vlan 10 and vlan 20, together with their VLAN interfaces.

<H3C>system-view

[H3C]vlan 10 20

[H3C]interface Vlan-interface 10
[H3C-Vlan-interface10]ip address 172.16.10.1 24
[H3C-Vlan-interface10]undo shutdown
[H3C-Vlan-interface10]exit
[H3C]

[H3C]interface Vlan-interface 20
[H3C-Vlan-interface20]ip address 172.16.20.1 24
[H3C-Vlan-interface20]undo shutdown
[H3C-Vlan-interface20]exit
[H3C]

NIC non-VLAN mode

In non-VLAN mode the NIC is usually assigned an IP address directly, and the uplink switch port is configured as an access port. Access ports are typically used to connect bare-metal servers or office terminal devices.
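
For a quick, non-persistent test, the same addressing can be applied with iproute2 before committing it to netplan. This is only a sketch, assuming the interface name enp1s0 and the addresses used in this example; it does not survive a reboot:

# assign the address, bring the link up, add the default route (throwaway test)
ip addr add 172.16.10.10/24 dev enp1s0
ip link set enp1s0 up
ip route add default via 172.16.10.1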

The topology is shown below:
[Topology diagram]

Switch configuration: configure the switch ports in access mode and assign them to the corresponding VLANs.

<H3C>system-view
[H3C]interface GigabitEthernet 1/0/1
[H3C-GigabitEthernet1/0/1]port link-type access
[H3C-GigabitEthernet1/0/1]port access vlan 10
[H3C-GigabitEthernet1/0/1]exit
[H3C]
[H3C]interface GigabitEthernet 1/0/2
[H3C-GigabitEthernet1/0/2]port link-type access
[H3C-GigabitEthernet1/0/2]port access vlan 20
[H3C-GigabitEthernet1/0/2]exit
[H3C]

Server 1 configuration: the server NIC is assigned an IP address directly.

root@server1:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: false
      addresses:
        - 172.16.10.10/24
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      routes:
        - to: default
          via: 172.16.10.1
  version: 2

Server 2 configuration: the server NIC is assigned an IP address directly.

root@server2:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: false
      addresses:
        - 172.16.20.10/24
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      routes:
        - to: default
          via: 172.16.20.1
  version: 2

Apply the network configuration

netplan apply

Check the server's interface information

root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global enp1s0
       valid_lft forever preferred_lft forever
    inet6 fe80::7eb5:9bff:fe59:a71/64 scope link 
       valid_lft forever preferred_lft forever

Test connectivity by pinging server2 from server1. Because the Layer 3 switch can route between VLANs, the two Layer 2-isolated subnets can reach each other.

root@server1:~# ping 172.16.20.10 -c 4
PING 172.16.20.10 (172.16.20.10) 56(84) bytes of data.
64 bytes from 172.16.20.10: icmp_seq=1 ttl=64 time=0.033 ms
64 bytes from 172.16.20.10: icmp_seq=2 ttl=64 time=0.048 ms
64 bytes from 172.16.20.10: icmp_seq=3 ttl=64 time=0.048 ms
64 bytes from 172.16.20.10: icmp_seq=4 ttl=64 time=0.047 ms

--- 172.16.20.10 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3061ms
rtt min/avg/max/mdev = 0.033/0.044/0.048/0.006 ms

NIC VLAN mode

In VLAN mode, the uplink switch port must be configured as a trunk port that permits multiple VLANs.
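
To verify the trunk before writing the netplan file, an 802.1Q subinterface can also be created ad hoc with iproute2. This is a throwaway sketch assuming the enp1s0 interface and VLAN 10 from this example:

# create a tagged subinterface, assign an address, bring it up
ip link add link enp1s0 name vlan10 type vlan id 10
ip addr add 172.16.10.10/24 dev vlan10
ip link set vlan10 up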

The topology is shown below:
[Topology diagram]

Switch configuration: the switch port must be configured as a trunk and permit the required VLANs.

<H3C>system-view
[H3C]interface GigabitEthernet 1/0/1
[H3C-GigabitEthernet1/0/1]port link-type trunk
[H3C-GigabitEthernet1/0/1]port trunk permit vlan 10 20
[H3C-GigabitEthernet1/0/1]exit
[H3C]

Server configuration: VLAN subinterfaces must be created on the server.

root@server1:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: true
  vlans:
    vlan10:
      id: 10
      link: enp1s0
      addresses: [ "172.16.10.10/24" ]
      routes:
        - to: default
          via: 172.16.10.1
          metric: 200
    vlan20:
      id: 20
      link: enp1s0
      addresses: [ "172.16.20.10/24" ]
      routes:
        - to: default
          via: 172.16.20.1
          metric: 300
  version: 2

Check the interface information: two VLAN subinterfaces, vlan10 and vlan20, have been created.

root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::7eb5:9bff:fe59:a71/64 scope link 
       valid_lft forever preferred_lft forever
10: vlan10@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global vlan10
       valid_lft forever preferred_lft forever
    inet6 fe80::7eb5:9bff:fe59:a71/64 scope link 
       valid_lft forever preferred_lft forever
11: vlan20@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.20.10/24 brd 172.16.20.255 scope global vlan20
       valid_lft forever preferred_lft forever
    inet6 fe80::7eb5:9bff:fe59:a71/64 scope link 
       valid_lft forever preferred_lft forever

Test connectivity to the gateways through vlan10 and vlan20

root@server1:~# ping 172.16.10.1 -c 4
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=64 time=0.033 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=64 time=0.048 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=64 time=0.048 ms
64 bytes from 172.16.10.1: icmp_seq=4 ttl=64 time=0.047 ms

--- 172.16.10.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3061ms
rtt min/avg/max/mdev = 0.033/0.044/0.048/0.006 ms
root@server1:~#
root@server1:~# ping 172.16.20.1 -c 4
PING 172.16.20.1 (172.16.20.1) 56(84) bytes of data.
64 bytes from 172.16.20.1: icmp_seq=1 ttl=64 time=0.033 ms
64 bytes from 172.16.20.1: icmp_seq=2 ttl=64 time=0.048 ms
64 bytes from 172.16.20.1: icmp_seq=3 ttl=64 time=0.048 ms
64 bytes from 172.16.20.1: icmp_seq=4 ttl=64 time=0.047 ms

--- 172.16.20.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3061ms
rtt min/avg/max/mdev = 0.033/0.044/0.048/0.006 ms

NIC bond mode

In bond mode, the peer switch must be configured with a matching link-aggregation port.
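
The netplan bond below can also be built manually with iproute2 for one-off testing. This is only a sketch, assuming enp1s0 and enp2s0 are the member NICs and the switch side is already running LACP:

# create an LACP bond, enslave both NICs (members must be down first), bring it up
ip link add bond0 type bond mode 802.3ad miimon 100 lacp_rate fast xmit_hash_policy layer2+3
ip link set enp1s0 down
ip link set enp1s0 master bond0
ip link set enp2s0 down
ip link set enp2s0 master bond0
ip link set bond0 up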

The topology is shown below:
[Topology diagram]

Switch configuration: configure dynamic link aggregation and add ports G1/0/1 and G1/0/3 to the aggregation group, then configure the aggregate port as a trunk.

<H3C>system-view
[H3C]interface Bridge-Aggregation 1
[H3C-Bridge-Aggregation1]link-aggregation mode dynamic
[H3C-Bridge-Aggregation1]quit

[H3C]interface GigabitEthernet 1/0/1
[H3C-GigabitEthernet1/0/1]port link-aggregation group 1
[H3C-GigabitEthernet1/0/1]exit

[H3C]interface GigabitEthernet 1/0/3
[H3C-GigabitEthernet1/0/3]port link-aggregation group 1
[H3C-GigabitEthernet1/0/3]exit

[H3C]interface Bridge-Aggregation 1
[H3C-Bridge-Aggregation1]port link-type trunk
[H3C-Bridge-Aggregation1]port trunk permit vlan 10 20
[H3C-Bridge-Aggregation1]exit

Server configuration

root@server1:~# cat /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    enp1s0:
      dhcp4: no
    enp2s0:
      dhcp4: no
  bonds:
    bond0:
      interfaces:
        - enp1s0
        - enp2s0
      parameters:
        mode: 802.3ad
        lacp-rate: fast
        mii-monitor-interval: 100
        transmit-hash-policy: layer2+3
  vlans:
    vlan10:
      id: 10
      link: bond0
      addresses: [ "172.16.10.10/24" ]
      routes:
        - to: default
          via: 172.16.10.1
          metric: 200
    vlan20:
      id: 20
      link: bond0
      addresses: [ "172.16.20.10/24" ]
      routes:
        - to: default
          via: 172.16.20.1
          metric: 300

Check the NIC information: a bond0 interface has been created, with two VLAN subinterfaces vlan10 and vlan20 on top of it. enp1s0 and enp2s0 show "master bond0", indicating that both NICs are member interfaces of bond0.

root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff permaddr 7c:b5:9b:59:0a:71
3: enp2s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff permaddr e4:54:e8:dc:e5:88
7: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff
    inet6 fe80::acfd:60ff:fe48:841a/64 scope link 
       valid_lft forever preferred_lft forever
8: vlan10@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global vlan10
       valid_lft forever preferred_lft forever
    inet6 fe80::acfd:60ff:fe48:841a/64 scope link 
       valid_lft forever preferred_lft forever
9: vlan20@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff
    inet 172.16.20.10/24 brd 172.16.20.255 scope global vlan20
       valid_lft forever preferred_lft forever
    inet6 fe80::acfd:60ff:fe48:841a/64 scope link 
       valid_lft forever preferred_lft forever

Check the bond status: Bonding Mode shows IEEE 802.3ad Dynamic link aggregation, and the Slave Interface sections below show the details of the two member interfaces.

root@server1:~# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v5.15.0-60-generic

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2+3 (2)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

802.3ad info
LACP active: on
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: ae:fd:60:48:84:1a
Active Aggregator Info:
        Aggregator ID: 1
        Number of ports: 2
        Actor Key: 9
        Partner Key: 1
        Partner Mac Address: fc:60:9b:35:ad:18

Slave Interface: enp1s0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 2
Permanent HW addr: 7c:b5:9b:59:0a:71
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: ae:fd:60:48:84:1a
    port key: 9
    port priority: 255
    port number: 1
    port state: 63
details partner lacp pdu:
    system priority: 32768
    system mac address: fc:60:9b:35:ad:18
    oper key: 1
    port priority: 32768
    port number: 2
    port state: 61

Slave Interface: enp2s0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 3
Permanent HW addr: e4:54:e8:dc:e5:88
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: ae:fd:60:48:84:1a
    port key: 9
    port priority: 255
    port number: 2
    port state: 63
details partner lacp pdu:
    system priority: 32768
    system mac address: fc:60:9b:35:ad:18
    oper key: 1
    port priority: 32768
    port number: 1
    port state: 61

Test connectivity to the switch's gateway address:

root@server1:~# ping 172.16.10.1 -c 4
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=1.64 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=1.59 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=1.95 ms
64 bytes from 172.16.10.1: icmp_seq=4 ttl=255 time=1.93 ms

--- 172.16.10.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3006ms
rtt min/avg/max/mdev = 1.589/1.776/1.953/0.165 ms
root@server1:~# 

Shut down one member interface and test connectivity again; the gateway can still be pinged.

root@server1:~# ip link set dev enp2s0 down
root@server1:~# ip link show enp2s0
3: enp2s0: <BROADCAST,MULTICAST,SLAVE> mtu 1500 qdisc fq_codel master bond0 state DOWN mode DEFAULT group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff permaddr e4:54:e8:dc:e5:88
root@server1:~# 
root@server1:~# ping 172.16.10.1 -c 4
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=1.54 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=1.64 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=2.73 ms
64 bytes from 172.16.10.1: icmp_seq=4 ttl=255 time=1.47 ms

--- 172.16.10.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3006ms
rtt min/avg/max/mdev = 1.470/1.844/2.732/0.516 ms

NIC bridge mode

In bridge mode, the peer switch port can be configured in either access or trunk mode.
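
As a rough iproute2 equivalent of the netplan bridge configuration below (assuming enp1s0 and the VLAN 10 addressing used in this example), the bridge can also be created manually:

# create a bridge with STP enabled, enslave the physical NIC, move the IP onto the bridge
ip link add br0 type bridge stp_state 1
ip link set enp1s0 master br0
ip link set br0 up
ip addr add 172.16.10.10/24 dev br0
ip route add default via 172.16.10.1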

The topology is shown below:

[Topology diagram]

Switch configuration: as an example, the switch port is configured in access mode and assigned to the corresponding VLAN.

<H3C>system-view
[H3C]interface GigabitEthernet 1/0/1
[H3C-GigabitEthernet1/0/1]port link-type access
[H3C-GigabitEthernet1/0/1]port access vlan 10
[H3C-GigabitEthernet1/0/1]exit
[H3C]

Server configuration: the physical NIC is added to a bridge, and the IP address is configured on the bridge interface br0.

root@server1:~# cat /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    enp1s0:
      dhcp4: no
      dhcp6: no
  bridges:
    br0:
      interfaces: [enp1s0]
      addresses: [172.16.10.10/24]
      routes:
      - to: default
        via: 172.16.10.1
        metric: 100
        on-link: true
      mtu: 1500
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      parameters:
        stp: true
        forward-delay: 4

Check the NIC information

root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
12: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0e:d0:7e:31:9c:74 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::cd0:7eff:fe31:9c74/64 scope link 
       valid_lft forever preferred_lft forever

Check the bridge and its ports; currently the bridge contains only the physical interface enp1s0.

root@server1:~# apt install -y bridge-utils
root@ubuntu:~# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.0ed07e319c74       yes             enp1s0
root@server1:~# 

In a KVM virtualization environment, once a virtual machine instance is attached to this bridge it can be assigned an IP address in the same subnet as the physical NIC, so the VM can be reached just as conveniently as a physical host.
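
For example, an existing VM (hypothetically named vm1 here) could be given an extra NIC on br0 with virsh; the domain name and whether to also pass --live depend on your environment:

# attach a virtio NIC on bridge br0 to the VM definition (takes effect on next boot)
virsh attach-interface vm1 bridge br0 --model virtio --config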

NIC macvlan mode

macvlan (MAC Virtual LAN) is a network virtualization technology provided by the Linux kernel. It allows multiple virtual interfaces to be created on top of one physical NIC; each virtual interface has its own independent MAC address and can also be assigned an IP address for communication. With macvlan, virtual machine or container networks sit in the same subnet as the host and share the same broadcast domain.

In macvlan mode, the peer switch port can be configured in access or trunk mode; in trunk mode macvlan combines well with VLANs.

The topology is shown below:

[Topology diagram]

macvlan IP mode

In this mode, the uplink switch port is configured as an access port, and the macvlan parent NIC and its subinterfaces are directly assigned IP addresses in the same subnet.

Switch configuration

<H3C>system-view
[H3C]interface GigabitEthernet 1/0/1
[H3C-GigabitEthernet1/0/1]port link-type access
[H3C-GigabitEthernet1/0/1]port access vlan 10
[H3C-GigabitEthernet1/0/1]exit

Server configuration: macvlan supports several modes; bridge mode is used here, and the configuration is made persistent.

cat >/etc/networkd-dispatcher/routable.d/10-macvlan-interfaces.sh<<EOF
#! /bin/bash
ip link add macvlan0 link enp1s0 type macvlan mode bridge
ip link add macvlan1 link enp1s0 type macvlan mode bridge
EOF

chmod o+x,g+x,u+x /etc/networkd-dispatcher/routable.d/10-macvlan-interfaces.sh

Configure netplan

root@server1:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: false
      addresses:
        - 172.16.10.10/24
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      routes:
        - to: default
          via: 172.16.10.1
    macvlan0:
      addresses:
        - 172.16.10.11/24
    macvlan1:
      addresses:
        - 172.16.10.12/24
  version: 2

Apply the network configuration

netplan apply

Check the NIC information: two macvlan interfaces have been created; their IP addresses are in the same subnet as the parent NIC, and each interface has its own independent MAC address.

root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global enp1s0
       valid_lft forever preferred_lft forever
13: macvlan0@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 32:e8:b4:0a:47:62 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.11/24 brd 172.16.10.255 scope global macvlan0
       valid_lft forever preferred_lft forever
    inet6 fe80::30e8:b4ff:fe0a:4762/64 scope link 
       valid_lft forever preferred_lft forever
14: macvlan1@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d2:73:75:14:b2:04 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.12/24 brd 172.16.10.255 scope global macvlan1
       valid_lft forever preferred_lft forever
    inet6 fe80::d073:75ff:fe14:b204/64 scope link 
       valid_lft forever preferred_lft forever

Test connectivity to the gateway

root@server1:~# ping -c 3 172.16.10.1
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=3.60 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=1.45 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=1.44 ms

--- 172.16.10.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 1.441/2.163/3.602/1.017 ms
root@server1:~# 

macvlan VLAN mode

In this mode, the uplink switch port is configured as a trunk, the macvlan parent NIC carries no IP address, and a different VLAN subinterface is configured on each macvlan interface.

Switch configuration

<H3C>system-view
[H3C]interface GigabitEthernet 1/0/1
[H3C-GigabitEthernet1/0/1]port link-type trunk
[H3C-GigabitEthernet1/0/1]port trunk permit vlan 10 20
[H3C-GigabitEthernet1/0/1]exit
[H3C]

Server configuration: macvlan supports several modes; bridge mode is used here, and the configuration is made persistent.

cat >/etc/networkd-dispatcher/routable.d/10-macvlan-interfaces.sh<<EOF
#! /bin/bash
ip link add macvlan0 link enp1s0 type macvlan mode bridge
ip link add macvlan1 link enp1s0 type macvlan mode bridge
EOF

chmod o+x,g+x,u+x /etc/networkd-dispatcher/routable.d/10-macvlan-interfaces.sh

Configure netplan: VLAN subinterfaces vlan10 and vlan20 are created on the two macvlan interfaces macvlan0 and macvlan1 respectively.

root@ubuntu:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: false
    macvlan0:
      dhcp4: false
    macvlan1:
      dhcp4: false
  vlans:
    vlan10:
      id: 10
      link: macvlan0
      addresses: [ "172.16.10.10/24" ]
      routes:
        - to: default
          via: 172.16.10.1
          metric: 200
    vlan20:
      id: 20
      link: macvlan1
      addresses: [ "172.16.20.10/24" ]
      routes:
        - to: default
          via: 172.16.20.1
          metric: 300
  version: 2

Apply the network configuration

netplan apply

Check the NIC information: two macvlan interfaces have been created, along with the two corresponding VLAN subinterfaces.

root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::7eb5:9bff:fe59:a71/64 scope link 
       valid_lft forever preferred_lft forever
11: macvlan0@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 32:e8:b4:0a:47:62 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::30e8:b4ff:fe0a:4762/64 scope link 
       valid_lft forever preferred_lft forever
12: macvlan1@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d2:73:75:14:b2:04 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::d073:75ff:fe14:b204/64 scope link 
       valid_lft forever preferred_lft forever
13: vlan10@macvlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 32:e8:b4:0a:47:62 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global vlan10
       valid_lft forever preferred_lft forever
    inet6 fe80::30e8:b4ff:fe0a:4762/64 scope link 
       valid_lft forever preferred_lft forever
14: vlan20@macvlan1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d2:73:75:14:b2:04 brd ff:ff:ff:ff:ff:ff
    inet 172.16.20.10/24 brd 172.16.20.255 scope global vlan20
       valid_lft forever preferred_lft forever
    inet6 fe80::d073:75ff:fe14:b204/64 scope link 
       valid_lft forever preferred_lft forever

Test connectivity from the two VLAN interfaces to the external gateways

root@server1:~# ping -c 3 172.16.10.1
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=3.60 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=1.45 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=1.44 ms

--- 172.16.10.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 1.441/2.163/3.602/1.017 ms
root@server1:~# 
root@server1:~# ping -c 3 172.16.20.1 
PING 172.16.20.1 (172.16.20.1) 56(84) bytes of data.
64 bytes from 172.16.20.1: icmp_seq=1 ttl=255 time=1.35 ms
64 bytes from 172.16.20.1: icmp_seq=2 ttl=255 time=1.48 ms
64 bytes from 172.16.20.1: icmp_seq=3 ttl=255 time=1.46 ms

--- 172.16.20.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 1.353/1.429/1.477/0.054 ms
root@server1:~# 

NIC ipvlan mode

ipvlan (IP Virtual LAN) is a network virtualization technology provided by the Linux kernel. It can create multiple virtual interfaces on top of one physical NIC, each with its own independent IP address.

ipvlan is similar to macvlan: both carve multiple virtual interfaces out of a single host interface. The main difference is that all ipvlan subinterfaces share the same MAC address (the parent NIC's MAC) while carrying different IP addresses.
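
A quick throwaway check makes this MAC behaviour visible (assuming enp1s0 exists; the test interfaces are deleted again afterwards):

ip link add mv0 link enp1s0 type macvlan mode bridge
ip link add iv0 link enp1s0 type ipvlan mode l2
ip -br link show dev mv0    # has its own randomly generated MAC
ip -br link show dev iv0    # shows the same MAC as enp1s0
ip link del mv0
ip link del iv0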

In ipvlan mode, the peer switch port can likewise be configured in access or trunk mode; in trunk mode ipvlan combines well with VLANs.

The topology is shown below:
[Topology diagram]

Switch configuration

<H3C>system-view
[H3C]interface GigabitEthernet 1/0/1
[H3C-GigabitEthernet1/0/1]port link-type access
[H3C-GigabitEthernet1/0/1]port access vlan 10
[H3C-GigabitEthernet1/0/1]exit
[H3C]

Server configuration: ipvlan supports three modes (l2, l3, l3s); l3 mode is used here, and the configuration is made persistent.

cat >/etc/networkd-dispatcher/routable.d/10-ipvlan-interfaces.sh<<EOF
#! /bin/bash
ip link add ipvlan0 link enp1s0 type ipvlan mode l3
ip link add ipvlan1 link enp1s0 type ipvlan mode l3
EOF
chmod o+x,g+x,u+x /etc/networkd-dispatcher/routable.d/10-ipvlan-interfaces.sh

Configure netplan

root@server1:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: false
      addresses:
        - 172.16.10.10/24
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      routes:
        - to: default
          via: 172.16.10.1
    ipvlan0:
      addresses:
        - 172.16.10.11/24
    ipvlan1:
      addresses:
        - 172.16.10.12/24
  version: 2

Apply the network configuration

netplan apply

Check the NIC information: two ipvlan interfaces have been created; their IP addresses are in the same subnet as the parent NIC, and each interface carries the same MAC address as the parent NIC.

root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global enp1s0
       valid_lft forever preferred_lft forever
    inet6 fe80::7eb5:9bff:fe59:a71/64 scope link 
       valid_lft forever preferred_lft forever
9: ipvlan0@enp1s0: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.11/24 brd 172.16.10.255 scope global ipvlan0
       valid_lft forever preferred_lft forever
    inet6 fe80::7cb5:9b00:159:a71/64 scope link 
       valid_lft forever preferred_lft forever
10: ipvlan1@enp1s0: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.12/24 brd 172.16.10.255 scope global ipvlan1
       valid_lft forever preferred_lft forever
    inet6 fe80::7cb5:9b00:259:a71/64 scope link 
       valid_lft forever preferred_lft forever

Test connectivity to the gateway

root@server1:~# ping -c 3 172.16.10.1
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=3.60 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=1.45 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=1.44 ms

--- 172.16.10.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 1.441/2.163/3.602/1.017 ms
root@server1:~# 

NIC macvtap mode

An alternative to using a bridge for giving KVM virtual machines external connectivity is the Linux macvtap driver. macvtap is useful when you do not want to create a regular bridge but still want hosts on the local network to reach the virtual machines.

A major difference from using a bridge is that macvtap attaches directly to a network interface of the KVM host. This direct attachment bypasses most of the code and components involved in connecting to and using a software bridge, effectively shortening the code path. The shorter code path usually improves throughput and reduces latency to external systems.
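
A macvtap interface can also be created manually with iproute2, which is roughly what libvirt does behind the scenes for a direct (macvtap) VM network. This sketch assumes the parent NIC enp1s0:

ip link add link enp1s0 name macvtap0 type macvtap mode bridge
ip link set macvtap0 up
# a matching character device /dev/tapN (N = ifindex) appears for the VM to read/write
ls -l /dev/tap$(cat /sys/class/net/macvtap0/ifindex)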

The topology is shown below:
[Topology diagram]

Switch configuration

<H3C>system-view
[H3C]interface GigabitEthernet 1/0/1
[H3C-GigabitEthernet1/0/1]port link-type access
[H3C-GigabitEthernet1/0/1]port access vlan 10
[H3C-GigabitEthernet1/0/1]exit

Host NIC configuration

root@server1:~# cat /etc/netplan/00-installer-config.yaml
# This is the network config written by 'subiquity'
network:
  ethernets:
    enp1s0:
      dhcp4: false
      addresses:
        - 172.16.10.10/24
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      routes:
        - to: default
          via: 172.16.10.1
  version: 2

Install a KVM virtualization environment and create two virtual machines, specifying that each gets a macvtap subinterface allocated from the enp1s0 parent NIC.

virt-install \
  --name vm1 \
  --vcpus 1 \
  --memory 2048 \
  --disk path=/var/lib/libvirt/images/vm1/jammy-server-cloudimg-amd64.img \
  --os-variant ubuntu22.04 \
  --noautoconsole \
  --import \
  --autostart \
  --network type=direct,source=enp1s0,source_mode=bridge,model=virtio

virt-install \
  --name vm2 \
  --vcpus 1 \
  --memory 2048 \
  --disk path=/var/lib/libvirt/images/vm2/jammy-server-cloudimg-amd64.img \
  --os-variant ubuntu22.04 \
  --noautoconsole \
  --import \
  --autostart \
  --network type=direct,source=enp1s0,source_mode=bridge,model=virtio

Check the NIC information: two macvtap interfaces have been created

root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global enp1s0
       valid_lft forever preferred_lft forever
    inet6 fe80::7eb5:9bff:fe59:a71/64 scope link 
       valid_lft forever preferred_lft forever
5: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:bb:15:22 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
6: macvtap0@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 500
    link/ether 52:54:00:41:8f:a3 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe41:8fa3/64 scope link 
       valid_lft forever preferred_lft forever
7: macvtap1@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 500
    link/ether 52:54:00:93:2c:4a brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe93:2c4a/64 scope link 
       valid_lft forever preferred_lft forever

Configure an IP address on VM 1

root@vm1:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: false
      addresses:
        - 172.16.10.11/24
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      routes:
        - to: default
          via: 172.16.10.1
  version: 2

Configure an IP address on VM 2

root@vm2:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: false
      addresses:
        - 172.16.10.12/24
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      routes:
        - to: default
          via: 172.16.10.1
  version: 2

Test connectivity to the gateway

root@vm1:~# ping 172.16.10.1 -c 3
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=1.38 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=1.75 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=4.34 ms

--- 172.16.10.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 1.382/2.491/4.344/1.318 ms

Mixed bond, VLAN and bridge configuration

Combine the server's two NICs into a bond interface, create two VLAN subinterfaces on top of the bond, add each subinterface to its own Linux bridge, and then create virtual machines under the different bridges so that the VMs end up in different VLANs.

The topology is shown below:
[Topology diagram]
Switch configuration: configure dynamic link aggregation and add ports G1/0/1 and G1/0/3 to the aggregation group. Configure the aggregate port as a trunk permitting VLANs 8, 10 and 20, and set VLAN 8 as the aggregate port's native VLAN (PVID), used for management.

<H3C>system-view
[H3C]interface Vlan-interface 8
[H3C-Vlan-interface8]ip address 172.16.8.1 24
[H3C-Vlan-interface8]exit
[H3C]

[H3C]interface Bridge-Aggregation 1
[H3C-Bridge-Aggregation1]link-aggregation mode dynamic
[H3C-Bridge-Aggregation1]quit

[H3C]interface GigabitEthernet 1/0/1
[H3C-GigabitEthernet1/0/1]port link-aggregation group 1
[H3C-GigabitEthernet1/0/1]exit

[H3C]interface GigabitEthernet 1/0/3
[H3C-GigabitEthernet1/0/3]port link-aggregation group 1
[H3C-GigabitEthernet1/0/3]exit

[H3C]interface Bridge-Aggregation 1
[H3C-Bridge-Aggregation1]port link-type trunk 
[H3C-Bridge-Aggregation1]port trunk permit vlan 8 10 20 
[H3C-Bridge-Aggregation1]port trunk pvid vlan 8
[H3C-Bridge-Aggregation1]undo port trunk permit vlan 1
[H3C-Bridge-Aggregation1]exit
[H3C]

Server NIC configuration. Note that bond0 carries the management IP address, matching the switch's native VLAN 8.

root@server1:~# cat /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    enp1s0:
      dhcp4: false
    enp2s0:
      dhcp4: false
  bonds:
    bond0:
      dhcp4: false
      dhcp6: false
      interfaces:
        - enp1s0
        - enp2s0
      addresses:
        - 172.16.8.10/24
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      routes:
        - to: default
          via: 172.16.8.1
      parameters:
        mode: 802.3ad
        lacp-rate: fast
        mii-monitor-interval: 100
        transmit-hash-policy: layer2+3
  bridges:
    br10:
      interfaces: [ vlan10 ]
    br20:
      interfaces: [ vlan20 ]
  vlans:
    vlan10:
      id: 10
      link: bond0
    vlan20:
      id: 20
      link: bond0

Check the NIC information: a bond0 interface has been created, with the two VLAN subinterfaces vlan10 and vlan20 created on top of it.

root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff permaddr 7c:b5:9b:59:0a:71
3: enp2s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff permaddr e4:54:e8:dc:e5:88
15: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff
    inet 172.16.8.10/24 brd 172.16.8.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::acfd:60ff:fe48:841a/64 scope link 
       valid_lft forever preferred_lft forever
16: br10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ee:df:66:ab:c2:4b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ecdf:66ff:feab:c24b/64 scope link 
       valid_lft forever preferred_lft forever
17: br20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 9e:4d:f4:0a:6d:13 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::9c4d:f4ff:fe0a:6d13/64 scope link 
       valid_lft forever preferred_lft forever
18: vlan10@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br10 state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff
19: vlan20@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br20 state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff

Check the bridges that were created

root@server1:~# brctl show
bridge name     bridge id               STP enabled     interfaces
br10            8000.eedf66abc24b       no              vlan10
br20            8000.9e4df40a6d13       no              vlan20

Test connectivity from the bond0 IP address to the external gateway

root@server1:~# ping 172.16.8.1 -c 3
PING 172.16.8.1 (172.16.8.1) 56(84) bytes of data.
64 bytes from 172.16.8.1: icmp_seq=1 ttl=255 time=1.55 ms
64 bytes from 172.16.8.1: icmp_seq=2 ttl=255 time=1.61 ms
64 bytes from 172.16.8.1: icmp_seq=3 ttl=255 time=1.62 ms

--- 172.16.8.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 1.550/1.593/1.620/0.030 ms
root@server1:~# 

Install a KVM virtualization environment on server1, then define two new libvirt networks, each bound to a different bridge

cat >br10-network.xml<<EOF
<network>
  <name>br10-net</name>
  <forward mode="bridge"/>
  <bridge name="br10"/>
</network>
EOF
cat >br20-network.xml<<EOF
<network>
  <name>br20-net</name>
  <forward mode="bridge"/>
  <bridge name="br20"/>
</network>
EOF

virsh net-define br10-network.xml
virsh net-define br20-network.xml
virsh net-start br10-net
virsh net-start br20-net
virsh net-autostart br10-net
virsh net-autostart br20-net

Check the newly created networks

root@server1:~# virsh net-list
 Name       State    Autostart   Persistent
---------------------------------------------
 br10-net   active   yes         yes
 br20-net   active   yes         yes
 default    active   yes         yes

Create two virtual machines, each attached to a different network

virt-install \
  --name vm1 \
  --vcpus 1 \
  --memory 2048 \
  --disk path=/var/lib/libvirt/images/vm1/jammy-server-cloudimg-amd64.img \
  --os-variant ubuntu22.04 \
  --import \
  --autostart \
  --noautoconsole \
  --network network=br10-net

virt-install \
  --name vm2 \
  --vcpus 1 \
  --memory 2048 \
  --disk path=/var/lib/libvirt/images/vm2/jammy-server-cloudimg-amd64.img \
  --os-variant ubuntu22.04 \
  --import \
  --autostart \
  --noautoconsole \
  --network network=br20-net

Check the created virtual machines

root@server1:~# virsh list
 Id   Name   State
----------------------
 13   vm1    running
 14   vm2    running

Configure a VLAN 10 IP address on vm1

virsh console vm1
cat >/etc/netplan/00-installer-config.yaml<<EOF
network:
  ethernets:
    enp1s0:
      addresses:
      - 172.16.10.10/24
      nameservers:
        addresses:
        - 223.5.5.5
      routes:
      - to: default
        via: 172.16.10.1
  version: 2
EOF
netplan apply

Configure a VLAN 20 IP address on vm2

virsh console vm2
cat >/etc/netplan/00-installer-config.yaml<<EOF
network:
  ethernets:
    enp1s0:
      addresses:
      - 172.16.20.10/24
      nameservers:
        addresses:
        - 223.5.5.5
      routes:
      - to: default
        via: 172.16.20.1
  version: 2
EOF
netplan apply

Log in to vm1 and test connectivity from vm1 to the external gateway

root@vm1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:a4:aa:9d brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global enp1s0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fea4:aa9d/64 scope link 
       valid_lft forever preferred_lft forever
root@vm1:~# 
root@vm1:~# ping 172.16.10.1 -c 3
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=1.51 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=7.10 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=2.10 ms

--- 172.16.10.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.505/3.568/7.101/2.509 ms
root@vm1:~# 

Log in to vm2 and test connectivity from vm2 to the external gateway

root@vm2:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:89:61:da brd ff:ff:ff:ff:ff:ff
    inet 172.16.20.10/24 brd 172.16.20.255 scope global enp1s0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe89:61da/64 scope link 
       valid_lft forever preferred_lft forever
root@vm2:~# 
root@vm2:~# ping 172.16.20.1 -c 3
PING 172.16.20.1 (172.16.20.1) 56(84) bytes of data.
64 bytes from 172.16.20.1: icmp_seq=1 ttl=255 time=1.73 ms
64 bytes from 172.16.20.1: icmp_seq=2 ttl=255 time=2.00 ms
64 bytes from 172.16.20.1: icmp_seq=3 ttl=255 time=2.00 ms

--- 172.16.20.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.732/1.911/2.003/0.126 ms
root@vm2:~# 
