kk Blog - General Fundamentals



Reentrant and Non-Reentrant Functions

Reentrant functions matter mainly in multitasking environments. Put simply, a reentrant function is one that can be interrupted: execution can be suspended at any point, the OS scheduler can run some other code, and when control comes back nothing goes wrong. This also means the function depends on nothing but the variables on its own stack (in particular, no static data). Such a function is reentrant as pure code: multiple copies of it may run at once, and since each copy uses a separate stack, they do not interfere with one another. A non-reentrant function, by contrast, uses shared system resources such as the global data area or the interrupt vector table, so it may misbehave if interrupted; functions like that cannot run safely in a multitasking environment.

If a function that must stay reentrant really does need to access global variables (including static data), mutual exclusion has to be enforced. Reentrancy is essential in concurrent environments, but access to globals generally costs some performance. When writing a reentrant function, protect any global variable it uses by disabling interrupts, by semaphores (P/V operations), or by similar means. Without that protection the function is not reentrant: when multiple processes call it, the globals involved can end up in an indeterminate state.

Example: suppose Exam is a global int, and the function Square_Exam returns the square of Exam. Then the following function is not reentrant.

int Exam = 0;

unsigned int example( int para )
{
	unsigned int temp;
	Exam = para;            /* (**) write to the shared global */
	temp = Square_Exam( );  /* returns the square of Exam */
	return temp;
}

If several processes call this function, the result may be unpredictable: right after the statement marked (**) completes, another process using the same function may be scheduled. That newly activated process assigns a different para value to Exam, so when control returns to "temp = Square_Exam( )" in the first process, the computed temp is quite possibly not the expected result. The function should be improved as follows.

int Exam = 0;

unsigned int example( int para )
{
	unsigned int temp;
	[acquire semaphore]     /* (1) lock   */
	Exam = para;
	temp = Square_Exam( );
	[release semaphore]     /*     unlock */
	return temp;
}

If the semaphore cannot be acquired, some other process is in the middle of assigning Exam and computing its square (i.e., it currently holds the semaphore), and this process must wait until the semaphore is released before continuing. Once the semaphore is acquired, execution proceeds, and other processes must in turn wait for this process to release it. To guarantee that a function is reentrant:
1. Prefer local variables (registers, stack variables) when writing the function;
2. Protect any global variables the function must use (e.g., by disabling interrupts or using semaphores); a function built this way is guaranteed to be reentrant. A concrete version of the example appears below.
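
Here is a minimal sketch of that protected version using a POSIX semaphore; sem_wait/sem_post play the P and V roles, and the semaphore object exam_sem (initialized to 1 at startup) is an illustrative addition rather than part of the original pseudocode.

#include <semaphore.h>

int Exam = 0;
static sem_t exam_sem;   /* binary semaphore; sem_init(&exam_sem, 0, 1) at startup */

extern unsigned int Square_Exam( void );

unsigned int example( int para )
{
	unsigned int temp;

	sem_wait( &exam_sem );   /* (1) lock: P operation, serializes access to Exam */
	Exam = para;
	temp = Square_Exam( );
	sem_post( &exam_sem );   /*     unlock: V operation */

	return temp;
}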

A function is usually not reentrant if any of the following holds (see the strtok_r sketch after this list for a concrete case of the first condition):
1. the function body uses a static data structure;
2. the function body calls malloc() or free();
3. the function body calls standard I/O functions.
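
Condition 1 deserves a concrete illustration. strtok() in the C library keeps its parsing position in hidden static state, so interleaved callers corrupt each other; the POSIX variant strtok_r() is the standard reentrant rewrite, moving that state into a caller-supplied pointer. A minimal sketch (count_tokens is a hypothetical helper):

#include <string.h>

/* reentrant: the parsing position lives on this call's stack,
 * not in a static variable inside the library */
int count_tokens( char *line )
{
	char *save = NULL;
	int n = 0;
	char *tok;

	for ( tok = strtok_r( line, " ", &save ); tok != NULL;
	      tok = strtok_r( NULL, " ", &save ) )
		n++;

	return n;
}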

How do you rewrite a non-reentrant function as a reentrant one? The only way is to rewrite it following the rules of reentrancy. This is actually quite simple; stick to a few easy rules and the resulting function is reentrant:
1. Do not use global variables, because other code may overwrite their values.
2. When interacting with hardware, remember to perform an operation along the lines of disinterrupt(), i.e., disable hardware interrupts, and re-enable them once the interaction is done. On some systems this is called "entering/exiting the core".
3. Do not call any other non-reentrant function.
4. Use the stack carefully. Preferably call OS_ENTER_KERNAL first.

Tail Calls and Tail Recursion

Tail calls

A tail call is roughly the following situation:
Function A calls function B.
As soon as B finishes, A returns.
In other words, calling B (and returning its result) is the last thing A does;
once B has run, A is effectively done too.
Therefore, while B executes, most of A's stack frame is already dead and may be modified or overwritten. The compiler can exploit this as an optimization: when B finishes, it returns directly to A's caller. A sketch follows.
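
In C this shape looks like the sketch below; f and g are hypothetical names, and whether the frame is actually reused is entirely up to the compiler.

/* g is called in tail position: nothing in f runs after it returns,
 * so the compiler may reuse f's frame and jump straight into g */
int g(int x)
{
	return x + 1;
}

int f(int x)
{
	return g(x * 2);   /* tail call: f's last action */
}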

One thing to note: this is a compiler optimization. For an ordinary tail call this small optimization may not mean much, but for tail recursion it matters a lot.

Tail recursion

Tail recursion is recursion in tail-call form; in terms of the description above, function B is function A itself.

Ordinary recursion has some problems: each level of recursion consumes stack space, deep enough recursion can overflow the stack, and since every level is a function call, all the pushing and popping costs time.

Implementing recursion in tail-call form, i.e., tail recursion, can in principle fix these problems: the stack frame for the next recursion level can overlap the current one (implemented with a jmp), local variables get reused, no extra stack space is consumed, and there is no push/pop traffic.

Once again, the actual effect comes from compiler optimization (as I currently understand it). Given tail recursion, the compiler turns the recursive call into a loop when it sees fit. Most compilers today support tail-recursion optimization, and some languages even depend heavily on tail calls, e.g. Erlang and other functional languages (reportedly to better support continuation-passing style).

If no such optimization existed and every call were made for real, tail recursion would have no advantage at all; it would even have a drawback: the code is less intuitive to write.

Modern compilers are strong optimizers and in many cases optimize without mercy (hence volatile exists). But sometimes the compiler is coy: you have to give it a hint before it will optimize. Tail recursion is exactly such a signal to the compiler: go ahead, optimize!

Testing

To verify tail-recursion optimization we can write a small test program. Under VS2010, the /O1 optimization level or above generally enables it; gcc 3.2.2 (a rather old version) generally needs -O2.
First, ordinary recursion:

// ordinary recursion (note: despite its name, this function actually
// computes a Fibonacci-style sum, not a factorial)
int factorial(int n)
{
	if(n <= 2)
	{
		return 1;
	}
	else
	{
		return factorial(n-1) + factorial(n-2);
	}
}

Its assembly:

00401371    push   %ebp
00401372    mov    %esp,%ebp
00401374    push   %ebx
00401375    sub    $0x14,%esp
00401378    cmpl   $0x2,0x8(%ebp)
0040137C    jg     0x401385 <factorial+20>
0040137E    mov    $0x1,%eax
00401383    jmp    0x4013a4 <factorial+51>
00401385    mov    0x8(%ebp),%eax
00401388    dec    %eax
00401389    mov    %eax,(%esp)
0040138C    call   0x401371 <factorial>
00401391    mov    %eax,%ebx
00401393    mov    0x8(%ebp),%eax
00401396    sub    $0x2,%eax
00401399    mov    %eax,(%esp)
0040139C    call   0x401371 <factorial>
004013A1    lea    (%ebx,%eax,1),%eax
004013A4    add    $0x14,%esp
004013A7    pop    %ebx
004013A8    leave
004013A9    ret

At 0040138C and 0040139C we can see that the recursion still uses the call instruction. So what does tail recursion look like at the assembly level?
Tail recursion:

// tail recursion: the recursive call is the last thing the function does
int factorial_tail(int n, int acc1, int acc2)
{
	if (n < 2)
	{
		return acc1;
	}
	else
	{
		return factorial_tail(n-1, acc2, acc1+acc2);
	}
}

Its assembly:

00401381    push   %ebp
00401382    mov    %esp,%ebp
00401384    sub    $0x18,%esp
00401387    cmpl   $0x1,0x8(%ebp)
0040138B    jg     0x401392 <factorial_tail+17>
0040138D    mov    0xc(%ebp),%eax
00401390    jmp    0x4013b2 <factorial_tail+49>
00401392    mov    0x10(%ebp),%eax
00401395    mov    0xc(%ebp),%edx
00401398    lea    (%edx,%eax,1),%eax
0040139B    mov    0x8(%ebp),%edx
0040139E    dec    %edx
0040139F    mov    %eax,0x8(%esp)
004013A3    mov    0x10(%ebp),%eax
004013A6    mov    %eax,0x4(%esp)
004013AA    mov    %edx,(%esp)
004013AD    call   0x401381 <factorial_tail>
004013B2    leave
004013B3    ret

Note, though, what this listing actually shows: the jmp at 00401390 is just the base-case exit (it jumps to the leave/ret at 004013B2), and the recursive call at 004013AD is still a plain call, so this particular compile did not apply the optimization. When the compiler does optimize the tail call, that trailing call is replaced by a jmp back to the top of the function, turning the recursion into a loop.
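
To make the effect concrete, here is a sketch of the loop the compiler effectively produces from factorial_tail once the trailing call becomes a jmp; the _loop suffix on the name is mine, and a caller would seed both accumulators with 1, e.g. factorial_tail(n, 1, 1).

// hand-written equivalent of the optimized tail recursion: each
// "recursive call" just updates (n, acc1, acc2) and jumps back to
// the top, i.e. a loop
int factorial_tail_loop(int n, int acc1, int acc2)
{
	while (n >= 2)
	{
		int next = acc1 + acc2;
		n = n - 1;
		acc1 = acc2;
		acc2 = next;
	}
	return acc1;
}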

kprobes Documentation

https://www.kernel.org/doc/Documentation/kprobes.txt

Documentation/kprobes.txt

1.4 How Does Jump Optimization Work?

If your kernel is built with CONFIG_OPTPROBES=y (currently this flag
is automatically set ‘y’ on x86/x86-64, non-preemptive kernel) and
the “debug.kprobes_optimization” kernel parameter is set to 1 (see
sysctl(8)), Kprobes tries to reduce probe-hit overhead by using a jump
instruction instead of a breakpoint instruction at each probepoint.

1.4.1 Init a Kprobe

When a probe is registered, before attempting this optimization,
Kprobes inserts an ordinary, breakpoint-based kprobe at the specified
address. So, even if it’s not possible to optimize this particular
probepoint, there’ll be a probe there.
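
For reference, registering such a breakpoint-based kprobe from a module looks roughly like the sketch below, modeled on samples/kprobes/kprobe_example.c in the kernel tree; the probed symbol name "_do_fork" is an illustrative choice that varies across kernel versions.

#include <linux/module.h>
#include <linux/kprobes.h>

static struct kprobe kp = {
	.symbol_name = "_do_fork",	/* probepoint: function entry (version-dependent) */
};

/* runs just before the probed instruction executes */
static int handler_pre(struct kprobe *p, struct pt_regs *regs)
{
	pr_info("kprobe hit at %p\n", p->addr);
	return 0;	/* 0 = let execution continue normally */
}

static int __init kprobe_init(void)
{
	kp.pre_handler = handler_pre;
	return register_kprobe(&kp);	/* inserts the breakpoint; may be optimized later */
}

static void __exit kprobe_exit(void)
{
	unregister_kprobe(&kp);
}

module_init(kprobe_init);
module_exit(kprobe_exit);
MODULE_LICENSE("GPL");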

1.4.2 Safety Check

Before optimizing a probe, Kprobes performs the following safety checks:

  • Kprobes verifies that the region that will be replaced by the jump
    instruction (the “optimized region”) lies entirely within one function.
    (A jump instruction is multiple bytes, and so may overlay multiple
    instructions.)

  • Kprobes analyzes the entire function and verifies that there is no
    jump into the optimized region. Specifically:

    • the function contains no indirect jump;
    • the function contains no instruction that causes an exception (since
      the fixup code triggered by the exception could jump back into the
      optimized region – Kprobes checks the exception tables to verify this);
      and
    • there is no near jump to the optimized region (other than to the first
      byte).
  • For each instruction in the optimized region, Kprobes verifies that
    the instruction can be executed out of line.

1.4.3 Preparing Detour Buffer

Next, Kprobes prepares a “detour” buffer, which contains the following
instruction sequence:
- code to push the CPU’s registers (emulating a breakpoint trap)
- a call to the trampoline code which calls user’s probe handlers.
- code to restore registers
- the instructions from the optimized region
- a jump back to the original execution path.

1.4.4 Pre-optimization

After preparing the detour buffer, Kprobes verifies that none of the
following situations exist:
- The probe has either a break_handler (i.e., it’s a jprobe) or a post_handler.
- Other instructions in the optimized region are probed.
- The probe is disabled.

In any of the above cases, Kprobes won’t start optimizing the probe.
Since these are temporary situations, Kprobes tries to start
optimizing it again if the situation is changed.

If the kprobe can be optimized, Kprobes enqueues the kprobe to an
optimizing list, and kicks the kprobe-optimizer workqueue to optimize
it. If the to-be-optimized probepoint is hit before being optimized,
Kprobes returns control to the original instruction path by setting
the CPU’s instruction pointer to the copied code in the detour buffer
– thus at least avoiding the single-step.

1.4.5 Optimization

The Kprobe-optimizer doesn’t insert the jump instruction immediately;
rather, it calls synchronize_sched() for safety first, because it’s
possible for a CPU to be interrupted in the middle of executing the
optimized region(*). As you know, synchronize_sched() can ensure
that all interruptions that were active when synchronize_sched()
was called are done, but only if CONFIG_PREEMPT=n. So, this version
of kprobe optimization supports only kernels with CONFIG_PREEMPT=n.(**)

After that, the Kprobe-optimizer calls stop_machine() to replace
the optimized region with a jump instruction to the detour buffer,
using text_poke_smp().

1.4.6 Unoptimization

When an optimized kprobe is unregistered, disabled, or blocked by
another kprobe, it will be unoptimized. If this happens before
the optimization is complete, the kprobe is just dequeued from the
optimized list. If the optimization has been done, the jump is
replaced with the original code (except for an int3 breakpoint in
the first byte) by using text_poke_smp().

(*) Please imagine that the 2nd instruction is interrupted and then
the optimizer replaces the 2nd instruction with the jump *address*
while the interrupt handler is running. When the interrupt
returns to the original address, there is no valid instruction,
and it causes an unexpected result.

(**)This optimization-safety checking may be replaced with the
stop-machine method that ksplice uses for supporting a CONFIG_PREEMPT=y
kernel.

NOTE for geeks:
The jump optimization changes the kprobe’s pre_handler behavior.
Without optimization, the pre_handler can change the kernel’s execution
path by changing regs->ip and returning 1. However, when the probe
is optimized, that modification is ignored. Thus, if you want to
tweak the kernel’s execution path, you need to suppress optimization,
using one of the following techniques:
- Specify an empty function for the kprobe’s post_handler or break_handler.
or
- Execute ‘sysctl -w debug.kprobes_optimization=n’
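
In code, the first technique amounts to something like the fragment below (a hedged sketch building on the registration example in 1.4.1, not a complete module): the empty handler's mere presence keeps the probe breakpoint-based.

/* intentionally empty post_handler: exists only to suppress optimization */
static void handler_post(struct kprobe *p, struct pt_regs *regs,
			 unsigned long flags)
{
}

/* ... and in the init function, before register_kprobe(&kp): */
/*	kp.post_handler = handler_post;                          */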

...

5. Kprobes Features and Limitations

Kprobes allows multiple probes at the same address. Currently,
however, there cannot be multiple jprobes on the same function at
the same time. Also, a probepoint for which there is a jprobe or
a post_handler cannot be optimized. So if you install a jprobe,
or a kprobe with a post_handler, at an optimized probepoint, the
probepoint will be unoptimized automatically.

In general, you can install a probe anywhere in the kernel.
In particular, you can probe interrupt handlers. Known exceptions
are discussed in this section.

The register_*probe functions will return -EINVAL if you attempt
to install a probe in the code that implements Kprobes (mostly
kernel/kprobes.c and arch/*/kernel/kprobes.c, but also functions such
as do_page_fault and notifier_call_chain).

If you install a probe in an inline-able function, Kprobes makes
no attempt to chase down all inline instances of the function and
install probes there. gcc may inline a function without being asked,
so keep this in mind if you’re not seeing the probe hits you expect.

A probe handler can modify the environment of the probed function
– e.g., by modifying kernel data structures, or by modifying the
contents of the pt_regs struct (which are restored to the registers
upon return from the breakpoint). So Kprobes can be used, for example,
to install a bug fix or to inject faults for testing. Kprobes, of
course, has no way to distinguish the deliberately injected faults
from the accidental ones. Don’t drink and probe.

Kprobes makes no attempt to prevent probe handlers from stepping on
each other – e.g., probing printk() and then calling printk() from a
probe handler. If a probe handler hits a probe, that second probe’s
handlers won’t be run in that instance, and the kprobe.nmissed member
of the second probe will be incremented.

As of Linux v2.6.15-rc1, multiple handlers (or multiple instances of
the same handler) may run concurrently on different CPUs.

Kprobes does not use mutexes or allocate memory except during
registration and unregistration.

Probe handlers are run with preemption disabled. Depending on the
architecture and optimization state, handlers may also run with
interrupts disabled (e.g., kretprobe handlers and optimized kprobe
handlers run without interrupt disabled on x86/x86-64). In any case,
your handler should not yield the CPU (e.g., by attempting to acquire
a semaphore).

Since a return probe is implemented by replacing the return
address with the trampoline’s address, stack backtraces and calls
to __builtin_return_address() will typically yield the trampoline’s
address instead of the real return address for kretprobed functions.
(As far as we can tell, __builtin_return_address() is used only
for instrumentation and error reporting.)

If the number of times a function is called does not match the number
of times it returns, registering a return probe on that function may
produce undesirable results. In such a case, a line:
kretprobe BUG!: Processing kretprobe d000000000041aa8 @ c00000000004f48c
gets printed. With this information, one will be able to correlate the
exact instance of the kretprobe that caused the problem. We have the
do_exit() case covered. do_execve() and do_fork() are not an issue.
We’re unaware of other specific cases where this could be a problem.

If, upon entry to or exit from a function, the CPU is running on
a stack other than that of the current task, registering a return
probe on that function may produce undesirable results. For this
reason, Kprobes doesn’t support return probes (or kprobes or jprobes)
on the x86_64 version of __switch_to(); the registration functions
return -EINVAL.

On x86/x86-64, since the Jump Optimization of Kprobes modifies
instructions widely, there are some limitations to optimization. To
explain it, we introduce some terminology. Imagine a 3-instruction
sequence consisting of two 2-byte instructions and one 3-byte
instruction.

        IA  
         |  
[-2][-1][0][1][2][3][4][5][6][7]  
        [ins1][ins2][  ins3 ]  
        [<-     DCR       ->]  
           [<- JTPR ->]  

ins1: 1st Instruction
ins2: 2nd Instruction
ins3: 3rd Instruction
IA: Insertion Address
JTPR: Jump Target Prohibition Region
DCR: Detoured Code Region

The instructions in DCR are copied to the out-of-line buffer
of the kprobe, because the bytes in DCR are replaced by
a 5-byte jump instruction. So there are several limitations.

a) The instructions in DCR must be relocatable.
b) The instructions in DCR must not include a call instruction.
c) JTPR must not be targeted by any jump or call instruction.
d) DCR must not straddle the border between functions.

Anyway, these limitations are checked by the in-kernel instruction
decoder, so you don’t need to worry about that.

6. Probe Overhead

On a typical CPU in use in 2005, a kprobe hit takes 0.5 to 1.0
microseconds to process. Specifically, a benchmark that hits the same
probepoint repeatedly, firing a simple handler each time, reports 1-2
million hits per second, depending on the architecture. A jprobe or
return-probe hit typically takes 50-75% longer than a kprobe hit.
When you have a return probe set on a function, adding a kprobe at
the entry to that function adds essentially no overhead.

Here are sample overhead figures (in usec) for different architectures.
k = kprobe; j = jprobe; r = return probe; kr = kprobe + return probe
on same function; jr = jprobe + return probe on same function

i386: Intel Pentium M, 1495 MHz, 2957.31 bogomips
k = 0.57 usec; j = 1.00; r = 0.92; kr = 0.99; jr = 1.40

x86_64: AMD Opteron 246, 1994 MHz, 3971.48 bogomips
k = 0.49 usec; j = 0.76; r = 0.80; kr = 0.82; jr = 1.07

ppc64: POWER5 (gr), 1656 MHz (SMT disabled, 1 virtual CPU per physical CPU)
k = 0.77 usec; j = 1.31; r = 1.26; kr = 1.45; jr = 1.99

6.1 Optimized Probe Overhead

Typically, an optimized kprobe hit takes 0.07 to 0.1 microseconds to
process. Here are sample overhead figures (in usec) for x86 architectures.
k = unoptimized kprobe, b = boosted (single-step skipped), o = optimized kprobe,
r = unoptimized kretprobe, rb = boosted kretprobe, ro = optimized kretprobe.

i386: Intel® Xeon® E5410, 2.33GHz, 4656.90 bogomips
k = 0.80 usec; b = 0.33; o = 0.05; r = 1.10; rb = 0.61; ro = 0.33

x86-64: Intel® Xeon® E5410, 2.33GHz, 4656.90 bogomips
k = 0.99 usec; b = 0.43; o = 0.06; r = 1.24; rb = 0.68; ro = 0.30