X520-T1: 14 Mpps TX from the Linux kernel

Purpose

Test and optimize packet transmission (TX) from the Linux kernel.

Environment

Motherboard: GA-B250M-D3H
CPU: i7-7700K, 4 cores / 8 threads, no_turbo=1, i.e. capped at the 4.2 GHz base clock
NIC: Intel X520-T1
OS: Ubuntu 18.04, linux-image-4.15.0-XX

netmap

netmap is a kernel-bypass method for the NIC. With netmap, TX reaches 14.3 Mpps at 60% of a single CPU. Test command:

./build-apps/pkt-gen/pkt-gen -i enp3s0 -f tx -c 1 -p 1 -z -d 12.0.0.100:80

Multithreaded TX from the Linux kernel

In a kernel module, create several threads with kthread; each thread builds an skb and calls dev_queue_xmit(skb). In testing all CPUs sit at 100%, and the TX rate is mostly around 12 Mpps:

qdisc        TCP (Mpps / CPU)   UDP (Mpps / CPU)
fq_codel     12.0 / 100%        12.8 / 100%
pfifo_fast   12.7 / 100%        14.0 / 100%
noqueue      12.0 / 100%        12.8 / 100%
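
A minimal sketch of that TX loop, assuming a hypothetical helper build_test_skb() that allocates and fills one test frame (error handling and module boilerplate omitted; the threads run flat out, as a benchmark should):

#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* One TX kthread: build an skb and hand it to dev_queue_xmit(). */
static int tx_thread_fn(void *data)
{
    struct net_device *dev = data;

    while (!kthread_should_stop()) {
        /* build_test_skb() is a hypothetical helper: alloc_skb() + headers + payload */
        struct sk_buff *skb = build_test_skb(dev);

        if (!skb)
            continue;
        skb->dev = dev;
        dev_queue_xmit(skb);    /* goes through the qdisc, then the driver */
    }
    return 0;
}

/* One thread per online CPU, pinned with kthread_bind(). */
static void start_tx_threads(struct net_device *dev)
{
    int cpu;

    for_each_online_cpu(cpu) {
        struct task_struct *t = kthread_create(tx_thread_fn, dev, "tx-bench/%d", cpu);

        if (!IS_ERR(t)) {
            kthread_bind(t, cpu);
            wake_up_process(t);
        }
    }
}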

Likely optimization points

  1. qdisc flow control
  2. skb allocation and freeing

Optimizing qdisc flow control

qdisc flow control can be viewed simply as queueing the packet and sending it out later. Removing the qdisc should save one layer of overhead. The relevant call chain:

dev_queue_xmit() -> dev_hard_start_xmit() -> xmit_one() -> netdev_start_xmit() -> __netdev_start_xmit()

Copying that part of the call chain into our module bypasses the qdisc. But noqueue actually follows the same path, so why does noqueue improve neither pps nor CPU usage?
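
What "copying that part into the module" amounts to is roughly the following sketch: take the TX-queue lock and call the driver's ndo_start_xmit via netdev_start_xmit(), skipping dev_queue_xmit() and the qdisc. BQL accounting and retries on NETDEV_TX_BUSY are omitted here:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static netdev_tx_t xmit_skip_qdisc(struct sk_buff *skb, struct net_device *dev)
{
    struct netdev_queue *txq = skb_get_tx_queue(dev, skb);
    netdev_tx_t ret = NETDEV_TX_BUSY;

    local_bh_disable();
    __netif_tx_lock(txq, smp_processor_id());
    if (!netif_xmit_frozen_or_stopped(txq))
        /* last argument ("more") is false: no further packets are pending */
        ret = netdev_start_xmit(skb, dev, txq, false);
    __netif_tx_unlock(txq);
    local_bh_enable();

    if (ret != NETDEV_TX_OK)
        kfree_skb(skb);    /* this sketch simply drops on failure */
    return ret;
}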

Testing shows that skb->xmit_more is the key. The xmit_more-related code in ixgbe:

    if (netif_xmit_stopped(txring_txq(tx_ring)) || !skb->xmit_more) {
            writel(i, tx_ring->tail);

            /* we need this if more than one processor can write to our tail
             * at a time, it synchronizes IO on IA64/Altix systems
             */
            mmiowb();
    }

noqueue never sets xmit_more, but fq_codel, pfifo_fast and the other qdiscs that go through the qdisc framework do: the qdisc layer chains skbs together with skb->next, and when skb->next != NULL, dev_hard_start_xmit() sets xmit_more = 1.

Following the same idea, our module now allocates several packets at a time, chains them with skb->next, and sends the chain in one go. With this optimization, TX reaches 14.2 Mpps with every CPU at about 60%.
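
A minimal sketch of that batching, reusing the hypothetical build_test_skb() helper. It replicates the dev_hard_start_xmit() loop: "more" stays true until the last skb of the chain, so ixgbe writes the tail register once per burst rather than once per packet:

#define TX_BURST 32

static void xmit_burst(struct net_device *dev)
{
    struct sk_buff *head = NULL, **tail = &head;
    struct netdev_queue *txq;
    int i;

    /* Build a burst and chain it via skb->next, as the qdisc layer does. */
    for (i = 0; i < TX_BURST; i++) {
        struct sk_buff *skb = build_test_skb(dev);    /* hypothetical helper */

        if (!skb)
            break;
        *tail = skb;
        tail = &skb->next;
    }
    if (!head)
        return;

    txq = skb_get_tx_queue(dev, head);
    local_bh_disable();
    __netif_tx_lock(txq, smp_processor_id());
    while (head && !netif_xmit_frozen_or_stopped(txq)) {
        struct sk_buff *skb = head;

        head = head->next;
        skb->next = NULL;
        /* "more" is true while the chain still has packets behind this one. */
        netdev_start_xmit(skb, dev, txq, head != NULL);
    }
    __netif_tx_unlock(txq);
    local_bh_enable();

    while (head) {    /* queue stopped mid-burst: drop the rest in this sketch */
        struct sk_buff *skb = head;

        head = head->next;
        kfree_skb(skb);
    }
}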

Optimizing skb allocation and freeing

Allocating and freeing 14M skbs per second is a bit much...

We can build a per-CPU skb pool: take an skb from the pool when "allocating" and put it back when "freeing".

  1. How do we get the skb back into the pool when it is freed? Point skb->destructor at our own callback.
  2. How do we stop the kernel from freeing the skb and its skb->data buffer? Apply the patch below and rebuild the kernel, then set skb->fclone = SKB_FCLONE_USER on the pooled skbs (a sketch of such a pool follows the patch).
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 2742528..cd240a9 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -532,6 +532,7 @@ enum {
  SKB_FCLONE_UNAVAILABLE, /* skb has no fclone (from head_cache) */
  SKB_FCLONE_ORIG,    /* orig skb (from fclone_cache) */
  SKB_FCLONE_CLONE,   /* companion fclone skb (from fclone_cache) */
+ SKB_FCLONE_USER,
 };
 
 enum {
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index cac95aa..b8fb200 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -585,6 +585,8 @@ static void kfree_skbmem(struct sk_buff *skb)
  case SKB_FCLONE_UNAVAILABLE:
      kmem_cache_free(skbuff_head_cache, skb);
      return;
+ case SKB_FCLONE_USER:
+     return;
 
  case SKB_FCLONE_ORIG:
      fclones = container_of(skb, struct sk_buff_fclones, skb1);
@@ -627,7 +629,7 @@ void skb_release_head_state(struct sk_buff *skb)
 static void skb_release_all(struct sk_buff *skb)
 {
  skb_release_head_state(skb);
- if (likely(skb->head))
+ if (likely(skb->head) && skb->fclone != SKB_FCLONE_USER)
      skb_release_data(skb);
 }
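
A minimal sketch of a pool built on top of that patch. The names (skb_pool_get(), skb_pool_put(), fill_test_frame()) and the prefill logic are illustrative, not the exact module code; cross-CPU pool imbalance and error handling are ignored:

#include <linux/netdevice.h>
#include <linux/percpu.h>
#include <linux/skbuff.h>

static DEFINE_PER_CPU(struct sk_buff_head, skb_pool);

static void skb_pool_put(struct sk_buff *skb);

/* Prefill each CPU's pool once; fill_test_frame() (hypothetical) writes the
 * Ethernet/IP/UDP headers and payload, which are then reused verbatim. */
static void skb_pool_init(struct net_device *dev, int per_cpu_count)
{
    int cpu, i;

    for_each_possible_cpu(cpu) {
        struct sk_buff_head *pool = per_cpu_ptr(&skb_pool, cpu);

        skb_queue_head_init(pool);
        for (i = 0; i < per_cpu_count; i++) {
            struct sk_buff *skb = alloc_skb(2048, GFP_KERNEL);

            if (!skb)
                break;
            fill_test_frame(skb, dev);
            skb_queue_head(pool, skb);
        }
    }
}

/* "Allocate": take a prebuilt skb from this CPU's pool and re-arm it. */
static struct sk_buff *skb_pool_get(void)
{
    struct sk_buff *skb = skb_dequeue(this_cpu_ptr(&skb_pool));

    if (!skb)
        return NULL;
    skb->destructor = skb_pool_put;    /* return to the pool instead of freeing */
    skb->fclone = SKB_FCLONE_USER;     /* patched kernel keeps skb and skb->data */
    refcount_set(&skb->users, 1);
    return skb;
}

/* skb->destructor: runs from skb_release_head_state(); with the patch above,
 * neither skb->data nor the skb itself is freed, so just queue it again. */
static void skb_pool_put(struct sk_buff *skb)
{
    skb_queue_head(this_cpu_ptr(&skb_pool), skb);
}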

After optimization:

  skb alloc/free optimization alone:    14.2 Mpps TX, each CPU at 50%
  qdisc optimization + skb alloc/free:  14.2 Mpps TX, each CPU at 20%
  with only 1 CPU:                      9.6 Mpps TX, CPU at 100%
  with 2 CPUs:                          14.2 Mpps TX, each CPU at 80%

Compared with netmap (14.3 Mpps at 60% of a single CPU), this is still slightly worse after the two rounds of optimization, but perf shows the hotspots are now inside ixgbe.

References

linux-source-4.15.0_4.15.0-65.74_all.deb
二层报文发送之qdisc实现分析 (analysis of the qdisc implementation in layer-2 packet transmission)
netmap配置 (netmap setup)
基于82599网卡的二层网络数据包接收 (layer-2 packet reception with the 82599 NIC)
基于82599网卡的二层网络数据包发送 (layer-2 packet transmission with the 82599 NIC)
如何在一秒之内丢弃1000万个网络数据包 (how to drop 10 million packets in one second)

X520-T1: 14 Mpps RX/TX with the Linux kernel

RX

ethtool -K enp3s0 gro off
Dropping at PRE_ROUTING: 14 Mpps
Dropping at LOCAL_IN: still to be optimized
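
"Dropping at PRE_ROUTING" means a netfilter hook that discards the test traffic as early as possible. A minimal sketch (the blanket NF_DROP and the priority value are illustrative; a real test would match only the benchmark flow):

#include <linux/module.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
#include <linux/skbuff.h>
#include <net/net_namespace.h>

static unsigned int drop_hook(void *priv, struct sk_buff *skb,
                              const struct nf_hook_state *state)
{
    return NF_DROP;    /* drop everything that reaches this hook */
}

static struct nf_hook_ops drop_ops = {
    .hook     = drop_hook,
    .pf       = NFPROTO_IPV4,
    .hooknum  = NF_INET_PRE_ROUTING,
    .priority = NF_IP_PRI_FIRST,    /* run before conntrack and friends */
};

static int __init drop_init(void)
{
    return nf_register_net_hook(&init_net, &drop_ops);
}

static void __exit drop_exit(void)
{
    nf_unregister_net_hook(&init_net, &drop_ops);
}

module_init(drop_init);
module_exit(drop_exit);
MODULE_LICENSE("GPL");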

TX (i7-7700K, no_turbo=1)

timer                                8 Mpps
timer + GSO                          TCP: 11 Mpps; UDP: 14 Mpps, but as IP fragments
(GSO seems to require TSO to be disabled??  ethtool -K enp3s0 tso off gso off)


kthread pfifo                        14 Mpps      cpu: 80%
kthread fq_codel                     12~14 Mpps   cpu: 100%
kthread noqueue                      12 Mpps      cpu: 100%


kthread pfifo static_skb             14 Mpps      cpu: 40%
kthread fq_codel static_skb          14 Mpps      cpu: 40%
kthread noqueue static_skb           12 Mpps      cpu: 100%


kthread noqueue static_skb skb_list  14 Mpps      cpu: 20%
                                     1 CPU: 9 Mpps, cpu 100%
                                     2 CPUs: 14 Mpps, cpu 60% each


An M.2 SSD adds about 20% softirq load on RX and about 10% on TX.
An i5-6500 only reaches 12.5 Mpps TX, and netmap is no faster there ???

Forwarding

Above 12 Mpps? Still to be measured.
Forwarding a received packet should be cheaper than allocating a new one to send.

Details to be filled in later.
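
One way the "forward the received skb instead of allocating a new one" idea could look (a sketch, not the measured code; from a PRE_ROUTING hook you would return NF_STOLEN after handing the skb off):

#include <linux/etherdevice.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Bounce a received frame out through out_dev, reusing the skb.
 * A real forwarder would also rewrite the destination MAC / IP fields. */
static int forward_skb(struct sk_buff *skb, struct net_device *out_dev)
{
    struct ethhdr *eth;

    if (skb_cow_head(skb, 0))        /* make sure the header is writable */
        return -ENOMEM;

    skb_push(skb, ETH_HLEN);         /* put the Ethernet header back in front */
    eth = eth_hdr(skb);
    ether_addr_copy(eth->h_source, out_dev->dev_addr);
    /* eth->h_dest is left untouched in this sketch. */

    skb->dev = out_dev;
    return dev_queue_xmit(skb);
}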

GPU temperature control

The sysfs directory is /sys/class/drm/card0/device/hwmon/hwmonX/

Operations such as switching kernels can change the hwmonX index.

Adjustment script

#!/usr/bin/python
# Fan-speed control for the GPU via hwmon sysfs (Python 2).
# Note: the hwmon index (hwmon3 below) can change after a kernel update.

import commands
import time

HWMON = "/sys/class/drm/card0/device/hwmon/hwmon3"

# Temperature thresholds (millidegrees C) and the PWM value applied at each
# level. Separate tables for rising and falling temperature give hysteresis.
temp_inc = [90000, 85000, 80000, 70000, 60000, 50000, 40000, 0]
pwm_inc  = [  245,   205,   165,   125,   105,    85,    65, 45]

temp_dec = [89000, 84000, 79000, 67000, 57000, 47000, 37000, 0]
pwm_dec  = [  245,   205,   165,   125,   105,    85,    65, 45]

t0 = 0
pwm1 = 0

def set_pwm(newpwm):
    global pwm1
    if newpwm != pwm1:
        commands.getstatusoutput("echo " + str(newpwm) + " > " + HWMON + "/pwm1")
        pwm1 = newpwm

# Switch pwm1 to manual control, then adjust every 10 seconds.
commands.getstatusoutput("echo 1 > " + HWMON + "/pwm1_enable")
while 1:
    r, t = commands.getstatusoutput("cat " + HWMON + "/temp1_input")
    t = int(t)
    if t > t0:                      # temperature rising
        for i in range(0, 8):
            if t >= temp_inc[i]:
                break
        set_pwm(pwm_inc[i])
    elif t < t0:                    # temperature falling
        for i in range(0, 8):
            if t >= temp_dec[i]:
                break
        set_pwm(pwm_dec[i])

    t0 = t
    time.sleep(10)