Rancher Calico BGP Peering with F5
Permanent link to this article: https://www.xtplayer.cn/f5/f5-calico-bgp-ingress/
This document was written against BIGIP-16.0.0.1-0.0.3.ALL-virtual-edition. Enabling the BGP feature on F5 requires a license for the Advanced Routing Modules.
If you only need it for testing, you can request a trial registration key at https://www.f5.com/zh_cn/trials/big-ip-virtual-edition.
Rancher Calico BGP Configuration
K8S Cluster Deployment
At the moment, deploying the Calico network plugin from the Rancher UI does not support custom advanced configuration; some advanced features can only be enabled by manually editing the network plugin workloads. However, those manual customizations may be lost the next time the K8S cluster is upgraded.
If the network components need custom configuration, it is recommended that, when creating the cluster through the Rancher UI, you edit the cluster YAML (or the RKE config file) and set the network plugin to none:
network:
  plugin: none
This disables the cluster's built-in network plugin deployment. After the cluster is created, deploy the network plugin manually; the deployment YAML files are provided in the attachments.
Because no network plugin has been deployed yet, the custom cluster will report an error; this is expected.
Install the kubectl and calicoctl Tools
kubectl can be downloaded from http://mirror.cnrancher.com/; copy kubectl to /usr/local/bin/kubectl and make it executable: chmod +x /usr/local/bin/kubectl.
Download calicoctl from https://github.com/projectcalico/calicoctl/releases, copy it to /usr/local/bin/calicoctl, and make it executable: chmod +x /usr/local/bin/calicoctl.
Create the kubectl configuration directory:
mkdir -p ~/.kube/
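For convenience, the steps above can be scripted roughly as follows. The calicoctl version (v3.16.1) and the release asset name are assumptions; pick the release that matches your Calico deployment:
# download calicoctl (version and asset name are assumptions; check the releases page)
curl -L -o /usr/local/bin/calicoctl \
  https://github.com/projectcalico/calicoctl/releases/download/v3.16.1/calicoctl
chmod +x /usr/local/bin/calicoctl
# kubectl downloaded from the mirror and copied into place, then made executable
chmod +x /usr/local/bin/kubectl
# kubectl configuration directory
mkdir -p ~/.kube/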
Configure calicoctl
Run the following commands, then log out of the terminal and log back in.
cat>>~/.profile<<EOF
export DATASTORE_TYPE=kubernetes
export KUBECONFIG=~/.kube/config
EOF
Copy the kubeconfig File
Once the cluster has finished installing, click Kubeconfig File on the cluster page, copy the contents, and save them to ~/.kube/config. With the kubeconfig in place, running kubectl get no and calicoctl get no should show output like the following:
root@vagrant2:~# kubectl get no
NAME STATUS ROLES AGE VERSION
vagrant1 NotReady controlplane,etcd,worker 10m v1.18.8
vagrant2 NotReady worker 8m27s v1.18.8
root@vagrant2:~# calicoctl get no
NAME
vagrant1
vagrant2
The nodes show NotReady because no network plugin has been deployed yet.
It is recommended to perform the steps above on every node.
Deploy the Network Plugin
Copy the attached calicotemplate.yml and BGPConfiguration.yml to any host where kubectl and calicoctl have been configured, then run kubectl apply -f calicotemplate.yml and calicoctl create -f BGPConfiguration.yml.
root@vagrant2:~# kubectl apply -f calicotemplate.yml
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
daemonset.apps/calico-node created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
root@vagrant2:~# calicoctl create -f BGPConfiguration.yml
Successfully created 2 resource(s)
root@vagrant2:~#
Note: CALICO_IPV4POOL_CIDR in calicotemplate.yml and asNumber in BGPConfiguration.yml must be adjusted to your environment (a sketch of the BGP resources is shown below). Back on the cluster home page, the cluster and its nodes should now show as healthy.
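The attachments themselves are not reproduced here. Purely as an illustration, the two resources that BGPConfiguration.yml creates above might look like the following; the BGPPeer name is hypothetical, while AS 64512 and peer address 192.168.50.176 match the values used later in this article:
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default                  # global BGP settings must be named "default"
spec:
  nodeToNodeMeshEnabled: true    # keep the full mesh between cluster nodes
  asNumber: 64512                # local AS used by the Calico nodes
---
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: f5-peer                  # hypothetical name
spec:
  peerIP: 192.168.50.176         # the F5 Self IP
  asNumber: 64512                # the F5's AS, matching "router bgp 64512"
A BGPPeer without a node selector applies to every node, which is why it later shows up as a global peer in calicoctl node status.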
Calico Network Verification
Run route -n on a node to inspect the routing table: the gateway for each destination Pod subnet is the IP of the node that owns that subnet, and traffic is forwarded out the host's eth0 interface. Pick a Pod IP on node 1 and ping it from node 2 to verify connectivity.
Use calicoctl to check the bgpPeer and bgpConfiguration resources.
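For example, with standard calicoctl get calls:
calicoctl get bgpPeer -o yaml
calicoctl get bgpConfiguration -o yaml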
Run calicoctl node status on every node to check its status.
root@vagrant1:~# calicoctl node status
Calico process is running.
IPv4 BGP status
+----------------+-------------------+-------+----------+--------------------------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+----------------+-------------------+-------+----------+--------------------------------+
| 192.168.50.132 | node-to-node mesh | up | 14:02:48 | Established |
| 192.168.50.176 | global | start | 14:28:14 | Active Socket: Host is |
| | | | | unreachable |
+----------------+-------------------+-------+----------+--------------------------------+
IPv6 BGP status
No IPv6 peers found.
root@vagrant1:~#
root@vagrant2:~# calicoctl node status
Calico process is running.
IPv4 BGP status
+----------------+-------------------+-------+----------+--------------------------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+----------------+-------------------+-------+----------+--------------------------------+
| 192.168.50.120 | node-to-node mesh | up | 14:02:44 | Established |
| 192.168.50.176 | global | start | 14:27:51 | Connect Socket: Host is |
| | | | | unreachable |
+----------------+-------------------+-------+----------+--------------------------------+
IPv6 BGP status
No IPv6 peers found.
root@vagrant2:~#
192.168.50.176 is the BGPPeer added through BGPConfiguration.yml. BGP has not been configured on the F5 yet, so the session cannot be established.
F5 Configuration
Create a VLAN Virtual Interface
To communicate with the K8S cluster, a virtual interface needs to be created.
Navigate to Network | VLANs and click the plus sign (➕) next to VLAN List.
Configure the interface parameters:
Name: the interface name, anything you like;
Description: optional;
Tag: if Tagged is selected under Interfaces | Tagging, the tag set here is applied to the selected interface; defaults to 4094 (optional);
Interfaces: pick a NIC under Interface, choose under Tagging whether to apply a VLAN tag, then click Add;
Leave the other parameters at their defaults and click Finished at the bottom (a tmsh equivalent is sketched below).
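If you prefer the CLI over the GUI, roughly the same VLAN can be created from tmsh. The VLAN name rancher-k8s-bgp matches the name that appears later in the F5 routing table; the interface number 1.1 and the untagged setting are assumptions:
# create the VLAN (interface 1.1 / untagged are assumptions; adjust to your wiring)
tmsh create net vlan rancher-k8s-bgp interfaces add { 1.1 { untagged } }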
Create Self IPs
The previous step created the virtual VLAN interface; next, give it an IP that will be used to communicate with the K8S cluster.
Navigate to Network | Self IPs and click the plus sign (➕).
Configure the parameters:
Name: a name, anything you like;
IP Address: set according to your environment;
Netmask: set according to your environment;
VLAN/Tunnel: select the VLAN virtual interface created in the previous step;
Port Lockdown: select Allow All;
Leave the rest at their defaults and click Finished at the bottom.
Note: because of how the K8S container network behaves, it is recommended that the Self IP be in the same subnet as the K8S node IPs. If the Self IP is in a different subnet, you will have to take care of the routing in between yourself (a tmsh equivalent for this step is sketched below).
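As with the VLAN, the Self IP can also be created from tmsh. The address 192.168.50.176/24 matches the BGP peer address used elsewhere in this article; the object name selfip-k8s is hypothetical:
# create the Self IP on the VLAN; allow-service all corresponds to Port Lockdown: Allow All
tmsh create net self selfip-k8s address 192.168.50.176/24 vlan rancher-k8s-bgp allow-service all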
Configure Route Domains
Navigate to Network | Route Domains;
Click the entry whose name is 0;
Under Dynamic Routing Protocols, click BGP and then the left arrow (<<) to move it to Enabled;
Leave the other parameters at their defaults and click Update (a tmsh sketch follows).
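The same change can likely be made from tmsh as well (a sketch, not verified against this exact BIG-IP build):
# enable the BGP dynamic routing protocol on route domain 0
tmsh modify net route-domain 0 routing-protocol add { BGP }
# persist the configuration
tmsh save sys config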
Enable BGP on the F5
SSH into the F5 device;
Run the following commands to enable BGP:
# enter the IMI Shell
imish
# switch to privileged mode
enable
# enter configuration mode
config terminal
# configure BGP with local AS number 64512
router bgp 64512
# create a BGP peer group
neighbor calico-k8s peer-group
# set the remote AS for the peer group
neighbor calico-k8s remote-as 64512
# add every peer, i.e. each K8S node, to the peer group
neighbor 192.168.50.120 peer-group calico-k8s
neighbor 192.168.50.132 peer-group calico-k8s
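# redistribute kernel routes into BGP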
redistribute kernel
# save the configuration
write
# exit configuration mode
end
Overall Status Check and Testing
F5 Status Check
Enter the IMI Shell:
imish
Check the BGP configuration and session state:
show ip bgp neighbors
Note: if the F5 is configured before Calico, the remote router ID will be 0.0.0.0 and the BGP state will be Active, because Calico has not been configured yet.
Check the F5 routing table:
f5-test.local[0]>show ip route
Codes: K - kernel, C - connected, S - static, R - RIP, B - BGP
O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2
i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, ia - IS-IS inter area
* - candidate default
B 10.42.105.192/26 [200/0] via 192.168.50.132, rancher-k8s-bgp, 00:08:22
B 10.42.192.0/26 [200/0] via 192.168.50.120, rancher-k8s-bgp, 00:08:22
C 127.0.0.1/32 is directly connected, lo
C 127.1.1.254/32 is directly connected, tmm
C 192.168.50.0/24 is directly connected, rancher-k8s-bgp
Gateway of last resort is not set
f5-test.local[0]>
Calico Node Status Check
On the K8S nodes, run calicoctl node status again to check the node status.
root@vagrant2:~# calicoctl node status
Calico process is running.
IPv4 BGP status
+----------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+----------------+-------------------+-------+----------+-------------+
| 192.168.50.120 | node-to-node mesh | up | 14:02:51 | Established |
| 192.168.50.176 | global | up | 14:43:10 | Established |
+----------------+-------------------+-------+----------+-------------+
IPv6 BGP status
No IPv6 peers found.
root@vagrant2:~#
root@vagrant1:~# calicoctl node status
Calico process is running.
IPv4 BGP status
+----------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+----------------+-------------------+-------+----------+-------------+
| 192.168.50.132 | node-to-node mesh | up | 14:02:53 | Established |
| 192.168.50.176 | global | up | 14:43:10 | Established |
+----------------+-------------------+-------+----------+-------------+
IPv6 BGP status
No IPv6 peers found.
root@vagrant1:~#
All BGP peers are now in the Established state.
Connectivity Test
Create a Pod in the K8S cluster, or pick an existing one, and get its Pod IP (a kubectl one-liner is shown below).
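One way to list Pod IPs across all namespaces is with standard kubectl flags:
kubectl get pods -A -o wide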
Then run a ping test against that Pod IP from the F5.
[root@f5-test:Active:Standalone] config # ping 10.42.105.194 -c 6
PING 10.42.105.194 (10.42.105.194) 56(84) bytes of data.
64 bytes from 10.42.105.194: icmp_seq=1 ttl=63 time=3.16 ms
64 bytes from 10.42.105.194: icmp_seq=2 ttl=63 time=2.53 ms
64 bytes from 10.42.105.194: icmp_seq=3 ttl=63 time=2.99 ms
64 bytes from 10.42.105.194: icmp_seq=4 ttl=63 time=3.01 ms
64 bytes from 10.42.105.194: icmp_seq=5 ttl=63 time=3.49 ms
64 bytes from 10.42.105.194: icmp_seq=6 ttl=63 time=2.44 ms
--- 10.42.105.194 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5002ms
rtt min/avg/max/mdev = 2.443/2.941/3.496/0.365 ms
[root@f5-test:Active:Standalone] config #