
Troubleshooting High Load


What the Load Metric Means

The Load metric on Linux represents the system's workload, and it is a concept newcomers often find hard to grasp. Simply put, Load indicates how many active tasks the system has had over a period of time, including:

1. Tasks currently running on a CPU

2. Tasks waiting to be scheduled onto a CPU

3. Tasks waiting for I/O to complete

As a rule of thumb, when the Load value exceeds 1.5 to 2 times the number of CPUs in the system, the system is heavily loaded; you should analyze the cause and take measures to bring the Load down, or the system may start to feel sluggish. For example, on a 4-CPU machine, a sustained Load above 6 to 8 deserves attention. If the Load value exceeds 3 times the number of CPUs, the system load is severe.

As the number of CPUs in multi-core systems grows and container virtualization matures, different workloads increasingly run on the same system and inevitably affect one another. Operations teams therefore often use the system Load as a key health metric: if it crosses a warning threshold, they take measures such as migrating workloads away, and they ask engineers to analyze why Load is high.

It is therefore worth an engineer's while to master techniques for analyzing high Load.

How to Observe the Load Metric

We usually check the 1-, 5-, and 15-minute load averages with the top or uptime command.

Here is sample output from top:

top - 13:54:54 up  1:49,  1 user,  load average: 2.18, 1.72, 1.00

Tasks: 201 total,   4 running, 154 sleeping,   0 stopped,   0 zombie

%Cpu(s):  8.0 us,  1.7 sy,  0.1 ni, 90.1 id,  0.1 wa,  0.0 hi,  0.1 si,  0.0 st

KiB Mem :  4039160 total,   624264 free,  1616064 used,  1798832 buff/cache

KiB Swap:  1045500 total,  1045500 free,        0 used.  2007432 avail Mem

The load average field shows the system's 1-, 5-, and 15-minute load averages: 2.18, 1.72, and 1.00.
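
These numbers can also be read programmatically. Below is a minimal sketch (an illustration added here, not from the original article) that uses the sysinfo(2) system call; its loads[] fields hold the same three averages in fixed-point form, scaled by 1 << SI_LOAD_SHIFT:

/* loadavg.c - print the 1/5/15-minute load averages via sysinfo(2).
 * Build: gcc -o loadavg loadavg.c
 */
#include <stdio.h>
#include <sys/sysinfo.h>

int main(void)
{
	struct sysinfo si;

	if (sysinfo(&si) != 0) {
		perror("sysinfo");
		return 1;
	}
	/* si.loads[] is fixed-point, scaled by 1 << SI_LOAD_SHIFT (65536) */
	printf("load average: %.2f, %.2f, %.2f\n",
	       si.loads[0] / (double)(1 << SI_LOAD_SHIFT),
	       si.loads[1] / (double)(1 << SI_LOAD_SHIFT),
	       si.loads[2] / (double)(1 << SI_LOAD_SHIFT));
	return 0;
}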

The Load metric is influenced by both CPU utilization and I/O wait time.

Let's look at CPU utilization first. In the output above, three fields tell us about CPU utilization:

%Cpu(s): 8.0 us, 1.7 sy, 0.1 ni, 90.1 id, 0.1 wa, 0.0 hi, 0.1 si, 0.0 st

The us, sy, and ni fields are the share of time spent running in user mode, in kernel mode, and in user mode at an adjusted nice value, respectively. You can roughly treat the sum of these three values as the system's CPU utilization.
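
These percentages are computed from the cumulative tick counters in /proc/stat. The sketch below (an illustration, not part of the original article) samples the aggregate cpu line twice and derives overall utilization from the deltas:

/* cpu_util.c - approximate overall CPU utilization from /proc/stat.
 * utilization = 1 - idle_delta / total_delta over the sampling interval
 * Build: gcc -o cpu_util cpu_util.c
 */
#include <stdio.h>
#include <unistd.h>

/* Read the aggregate "cpu" line: user nice system idle iowait irq softirq steal */
static int read_cpu(unsigned long long *idle, unsigned long long *total)
{
	unsigned long long v[8] = {0};
	int i;
	FILE *f = fopen("/proc/stat", "r");

	if (!f)
		return -1;
	if (fscanf(f, "cpu %llu %llu %llu %llu %llu %llu %llu %llu",
		   &v[0], &v[1], &v[2], &v[3], &v[4], &v[5], &v[6], &v[7]) != 8) {
		fclose(f);
		return -1;
	}
	fclose(f);
	*idle = v[3] + v[4];	/* idle + iowait */
	for (*total = 0, i = 0; i < 8; i++)
		*total += v[i];
	return 0;
}

int main(void)
{
	unsigned long long i0, t0, i1, t1;

	if (read_cpu(&i0, &t0))
		return 1;
	sleep(1);
	if (read_cpu(&i1, &t1))
		return 1;
	printf("CPU utilization: %.1f%%\n",
	       100.0 * (1.0 - (double)(i1 - i0) / (double)(t1 - t0)));
	return 0;
}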

If I/O-intensive programs are running on the system, the iowait (wa) field will be elevated:

%Cpu(s): 8.0 us, 1.7 sy, 0.1 ni, 90.1 id, 0.1 wa, 0.0 hi, 0.1 si, 0.0 st

How to Construct a Test Case

First, run the following command in a shell, four times in total:
while :;do :;done &
This starts four CPU-bound background tasks, keeping four CPUs busy.

Then launch fio with the following command to run an I/O-intensive workload:

sudo sh -c "while :; do fio -name=mytest -filename=test.data -direct=0 -thread -rw=randread -ioengine=psync -bs=4k -size=1M -numjobs=2 -runtime=10 -group_reporting;echo 1 > /proc/sys/vm/drop_caches; done"

top now shows something like this:

top - 14:31:34 up  2:26,  1 user,  load average: 6.51, 5.14, 3.93
Tasks: 206 total,   7 running, 156 sleeping,   0 stopped,   0 zombie
%Cpu(s): 86.3 us, 12.9 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.7 si,  0.0 st
KiB Mem :  4039160 total,  1382444 free,  1599384 used,  1057332 buff/cache
KiB Swap:  1045500 total,  1045500 free,        0 used.  1743056 avail Mem

Food for thought: we clearly started fio, an I/O-intensive task, so why does the wa field above read 0?

There are quite a few ways to investigate Load problems, but the method presented in this article is well suited to engineering practice, because it is:

1. Lightweight, with negligible impact on system performance

2. Free of third-party tools, and thus easy to deploy

3. Suitable for large-scale deployment and long-running operation

4. Flexible, so the tool can be adjusted at any time to fit the situation

Now let's track down the culprit behind high Load, step by step!

Building the Tool Step by Step

Our tool is a kernel module. Let's start with the simplest possible module, consisting of three files:

1. The Makefile

OS_VER := UNKNOWN
UNAME := $(shell uname -r)
ifneq ($(findstring 4.15.0-39-generic,$(UNAME)),)
             OS_VER := UBUNTU_1604
endif

ifneq ($(KERNELRELEASE),)
             obj-m += $(MODNAME).o
             $(MODNAME)-y := main.o
             ccflags-y := -I$(PWD)/
else
             export PWD=`pwd`

ifeq ($(KERNEL_BUILD_PATH),)
             KERNEL_BUILD_PATH := /lib/modules/`uname -r`/build
endif
ifeq ($(MODNAME),)
             export MODNAME=load_monitor
endif

all:
             make CFLAGS_MODULE=-D$(OS_VER) -C $(KERNEL_BUILD_PATH) M=$(PWD) modules
clean:
             make -C $(KERNEL_BUILD_PATH) M=$(PWD) clean
endif

Food for thought: why define the OS_VER variable at all?

2. main.c

/**
 * Baoyou Xie's load monitor module
 *
 * Copyright (C) 2018 Baoyou Xie.
 *
 * Author: Baoyou Xie <baoyou.xie@gmail.com>
 *
 * License terms: GNU General Public License (GPL) version 2
 */

#include <linux/version.h>
#include <linux/module.h>
#include "load.h"

static int load_monitor_init(void)
{
             printk("load-monitor loaded.\n");
             return 0;
}

static void load_monitor_exit(void)
{
             printk("load-monitor unloaded.\n");
}

module_init(load_monitor_init)
module_exit(load_monitor_exit)

MODULE_DESCRIPTION("Baoyou Xie's load monitor module");
MODULE_AUTHOR("Baoyou Xie <baoyou.xie@gmail.com>");
MODULE_LICENSE("GPL v2");

Note the comment block in the file and the module declarations at the end.

3. load.h

/**
 * Baoyou Xie's load monitor module
 *
 * Copyright (C) 2018 Baoyou Xie.
 *
 * Author: Baoyou Xie <baoyou.xie@gmail.com>
 *
 * License terms: GNU General Public License (GPL) version 2
 */

Food for thought: load.h contains nothing at all. What is it for?

Run make in the source directory to build the load_monitor.ko module.

Insert and remove the module with:

insmod load_monitor.ko; rmmod load_monitor.ko

Then check the module's output with:

root# dmesg -c
[ 9984.791895] load-monitor loaded.
[ 9984.794608] load-monitor unloaded.

At this point our module does nothing useful yet.

Now let's add functionality, one step at a time.

First, add a timer to the system that fires every 10 milliseconds. The main changes in main.c are:

/* These additions require <linux/hrtimer.h> and <linux/ktime.h>. */
struct hrtimer timer;
static void check_load(void)
{
             printk("debug in check_load\n");
}

static enum hrtimer_restart monitor_handler(struct hrtimer *hrtimer)
{
             enum hrtimer_restart ret = HRTIMER_RESTART;
             check_load();
             hrtimer_forward_now(hrtimer, ms_to_ktime(10));
             return ret;
}

static void start_timer(void)
{
             hrtimer_init(&timer, CLOCK_MONOTONIC, HRTIMER_MODE_PINNED);
             timer.function = monitor_handler;
             hrtimer_start_range_ns(&timer, ms_to_ktime(10),
                    0, HRTIMER_MODE_REL_PINNED);
}

static int load_monitor_init(void)
{
             start_timer();
             printk("load-monitor loaded.\n");
             return 0;
}

static void load_monitor_exit(void)
{
             hrtimer_cancel(&timer);
             printk("load-monitor unloaded.\n");
}

All this does is start a timer that prints a line every time it fires.

Food for thought: we've come this far without getting to the point. Has the author forgotten that we are here to solve a high-Load problem?

Insert and remove the module:

insmod load_monitor.ko;sleep 0.5; rmmod load_monitor.ko

Then check the result:

root# dmesg -c
[  169.840276] load-monitor loaded.
[  169.850579] debug in check_load
..........
[  170.343457] load-monitor unloaded.

Now for the crucial step: reading the system's Load values. The Linux kernel keeps the 1-, 5-, and 15-minute Load values in the following variable:

unsigned long avenrun[3];
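
For context, the kernel refreshes these three values roughly every five seconds using an exponentially damped moving average over fixed-point numbers. The sketch below closely follows the kernel's calc_load() and the constants in include/linux/sched/loadavg.h; it is shown here for illustration only:

#define FSHIFT   11                 /* bits of fractional precision */
#define FIXED_1  (1 << FSHIFT)      /* 1.0 in fixed-point */
#define EXP_1    1884               /* fixed-point exp(-5s/1min)  */
#define EXP_5    2014               /* fixed-point exp(-5s/5min)  */
#define EXP_15   2037               /* fixed-point exp(-5s/15min) */

/* One damping step: load = load * exp + active * (1 - exp), in fixed-point. */
static unsigned long
calc_load(unsigned long load, unsigned long exp, unsigned long active)
{
	unsigned long newload;

	newload = load * exp + active * (FIXED_1 - exp);
	if (active >= load)
		newload += FIXED_1 - 1;	/* round up while the load is rising */
	return newload / FIXED_1;
}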

We could read this variable directly from our module, but some kernel builds may not export it via EXPORT_SYMBOL, so we obtain its address like this instead:

unsigned long *ptr_avenrun;
ptr_avenrun = (void *)kallsyms_lookup_name("avenrun"); /* needs <linux/kallsyms.h> */

Food for thought: on some kernel versions a module cannot call kallsyms_lookup_name directly. What other ways are there to obtain the address of a kernel function or variable?
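
One commonly used workaround, sketched below for illustration (it is not part of the original tool), is to let the kprobe core resolve the symbol for you: register_kprobe() fills in kp.addr for the named symbol. On many kernels this works even after v5.7 stopped exporting kallsyms_lookup_name:

#include <linux/kprobes.h>

typedef unsigned long (*kallsyms_lookup_name_t)(const char *name);

/* Recover the address of kallsyms_lookup_name via a kprobe, then call it. */
static kallsyms_lookup_name_t get_kallsyms_lookup_name(void)
{
	struct kprobe kp = { .symbol_name = "kallsyms_lookup_name" };
	kallsyms_lookup_name_t fn;

	if (register_kprobe(&kp) < 0)
		return NULL;
	fn = (kallsyms_lookup_name_t)kp.addr;
	unregister_kprobe(&kp);
	return fn;
}

/* usage sketch (check for NULL in real code):
 * ptr_avenrun = (void *)get_kallsyms_lookup_name()("avenrun");
 */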

The updated code looks like this:

static unsigned long *ptr_avenrun;

#define FSHIFT              11            /* nr of bits of precision */
#define FIXED_1             (1<<FSHIFT)   /* 1.0 as fixed-point */
#define LOAD_INT(x) ((x) >> FSHIFT)
#define LOAD_FRAC(x) LOAD_INT(((x) & (FIXED_1-1)) * 100)

static void print_all_task_stack(void)
{
              printk("!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n");
              printk("Load: %lu.%02lu, %lu.%02lu, %lu.%02lu\n",
                     LOAD_INT(ptr_avenrun[0]), LOAD_FRAC(ptr_avenrun[0]),
                     LOAD_INT(ptr_avenrun[1]), LOAD_FRAC(ptr_avenrun[1]),
                     LOAD_INT(ptr_avenrun[2]), LOAD_FRAC(ptr_avenrun[2]));
              printk("dump all task: balabala\n");
}

static void check_load(void)
{
              static ktime_t last;
              u64 ms;
              int load = LOAD_INT(ptr_avenrun[0]); /* 1-minute load average */
              if (load < 4) /* threshold for this 4-CPU test box; tune to your CPU count */
                     return;
              /*
               * Skip if fewer than 20 seconds have passed since the
               * last dump.
               */
              ms = ktime_to_ms(ktime_sub(ktime_get(), last));
              if (ms < 20 * 1000)
                     return;
              last = ktime_get();
              print_all_task_stack();
}

static enum hrtimer_restart monitor_handler(struct hrtimer *hrtimer)
{
              enum hrtimer_restart ret = HRTIMER_RESTART;
              check_load();
              hrtimer_forward_now(hrtimer, ms_to_ktime(10));
              return ret;
}
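
A quick worked example of the fixed-point conversion (the raw value here is hypothetical): with FSHIFT = 11 and FIXED_1 = 2048, a raw avenrun value of 8460 yields

LOAD_INT(8460)  = 8460 >> 11                     = 4
LOAD_FRAC(8460) = ((8460 & 2047) * 100) >> 11
                = (268 * 100) >> 11              = 13

so it prints as 4.13.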

Reload the module and run dmesg, and you should see something like:

[ 2300.730884] !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
[ 2300.730888] Load: 4.13, 1.51, 0.66
[ 2300.730888] dump all task: balabala

We can now detect when Load climbs. All that remains is to flesh out the print_all_task_stack function so that it dumps the stack of every thread.

The revised print_all_task_stack function looks like this:

/*
 * Needs <linux/stacktrace.h> and <linux/sched/signal.h>. BACKTRACE_DEPTH is
 * ours to define (e.g. #define BACKTRACE_DEPTH 20 in load.h). This listing
 * targets 4.15-era kernels; v5.2+ replaced struct stack_trace with
 * stack_trace_save_tsk().
 */
static void print_all_task_stack(void)
{
      struct task_struct *g, *p;
      unsigned long backtrace[BACKTRACE_DEPTH];
      struct stack_trace trace;
      memset(&trace, 0, sizeof(trace));
      memset(backtrace, 0, BACKTRACE_DEPTH * sizeof(unsigned long));
      trace.max_entries = BACKTRACE_DEPTH;
      trace.entries = backtrace;
      printk("!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n");
      printk("\tLoad: %lu.%02lu, %lu.%02lu, %lu.%02lu\n",
             LOAD_INT(ptr_avenrun[0]), LOAD_FRAC(ptr_avenrun[0]),
             LOAD_INT(ptr_avenrun[1]), LOAD_FRAC(ptr_avenrun[1]),
             LOAD_INT(ptr_avenrun[2]), LOAD_FRAC(ptr_avenrun[2]));
      rcu_read_lock();    
      printk("dump running task.\n");
      do_each_thread(g, p) {
             if (p->state == TASK_RUNNING) {
                    printk("running task, comm: %s, pid %d\n",
                            p->comm, p->pid);
                    save_stack_trace_tsk(p, &trace);
                    print_stack_trace(&trace, 0);
             }
      } while_each_thread(g, p);
      printk("dump uninterrupted task.\n");
      do_each_thread(g, p) {
             if (p->state & TASK_UNINTERRUPTIBLE) {
                    printk("uninterrupted task, comm: %s, pid %d\n",
                            p->comm, p->pid);
                    save_stack_trace_tsk(p, &trace);
                    print_stack_trace(&trace, 0);
             }
      } while_each_thread(g, p);
      rcu_read_unlock();
}

Recompile, insert and remove the module, and inspect the output with dmesg -c. You should see call chains like the ones below.

Among these stacks you can find the threads named bash and fio; they are exactly the threads started by our test case.

[ 6140.570181] !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
[ 6140.570185]   Load: 5.48, 5.80, 5.60
[ 6140.570185] dump running task.
[ 6140.570214] running task, comm: llvmpipe-0, pid 2437
[ 6140.570225]  futex_wait_queue_me+0xd6/0x120
[ 6140.570227]  futex_wait+0x240/0x260
[ 6140.570229]  do_futex+0x128/0x500
[ 6140.570231]  SyS_futex+0x7f/0x180
[ 6140.570234]  do_syscall_64+0x73/0x130
[ 6140.570239]  swapgs_restore_regs_and_return_to_usermode+0x0/0x87
[ 6140.570240]  0xffffffffffffffff
[ 6140.570242] running task, comm: llvmpipe-1, pid 2438
[ 6140.570244]  exit_to_usermode_loop+0x66/0xd0
[ 6140.570245]  prepare_exit_to_usermode+0x5a/0x80
[ 6140.570247]  swapgs_restore_regs_and_return_to_usermode+0x0/0x87
[ 6140.570248]  0xffffffffffffffff
[ 6140.570249] running task, comm: llvmpipe-2, pid 2439
[ 6140.570251]  exit_to_usermode_loop+0x66/0xd0
[ 6140.570252]  prepare_exit_to_usermode+0x5a/0x80
[ 6140.570254]  swapgs_restore_regs_and_return_to_usermode+0x0/0x87
[ 6140.570255]  0xffffffffffffffff
[ 6140.570255] running task, comm: llvmpipe-3, pid 2440
[ 6140.570257]  exit_to_usermode_loop+0x66/0xd0
[ 6140.570258]  prepare_exit_to_usermode+0x5a/0x80
[ 6140.570260]  swapgs_restore_regs_and_return_to_usermode+0x0/0x87
[ 6140.570261]  0xffffffffffffffff
[ 6140.570269] running task, comm: bash, pid 4687
[ 6140.570271]  exit_to_usermode_loop+0x66/0xd0
[ 6140.570273]  prepare_exit_to_usermode+0x5a/0x80
[ 6140.570274]  swapgs_restore_regs_and_return_to_usermode+0x0/0x87
[ 6140.570275]  0xffffffffffffffff
[ 6140.570276] running task, comm: bash, pid 4688
[ 6140.570287]  exit_to_usermode_loop+0x66/0xd0
[ 6140.570288]  prepare_exit_to_usermode+0x5a/0x80
[ 6140.570290]  swapgs_restore_regs_and_return_to_usermode+0x0/0x87
[ 6140.570291]  0xffffffffffffffff
[ 6140.570291] running task, comm: bash, pid 4689
[ 6140.570293]  exit_to_usermode_loop+0x66/0xd0
[ 6140.570295]  prepare_exit_to_usermode+0x5a/0x80
[ 6140.570296]  swapgs_restore_regs_and_return_to_usermode+0x0/0x87
[ 6140.570297]  0xffffffffffffffff
[ 6140.570298] running task, comm: bash, pid 4690
[ 6140.570303]  print_all_task_stack+0x17f/0x270 [load_monitor]
[ 6140.570305]  monitor_handler+0x86/0x90 [load_monitor]
[ 6140.570307]  __hrtimer_run_queues+0xe7/0x230
[ 6140.570308]  hrtimer_interrupt+0xb1/0x200
[ 6140.570309]  smp_apic_timer_interrupt+0x6f/0x140
[ 6140.570311]  apic_timer_interrupt+0x84/0x90
[ 6140.570311]  0xffffffffffffffff
[ 6140.570313] running task, comm: sh, pid 4703
[ 6140.570317]  invalidate_mapping_pages+0x23e/0x330
[ 6140.570320]  drop_pagecache_sb+0xb2/0xf0
[ 6140.570323]  iterate_supers+0xbb/0x110
[ 6140.570324]  drop_caches_sysctl_handler+0x51/0xb0
[ 6140.570327]  proc_sys_call_handler+0xf1/0x110
[ 6140.570328]  proc_sys_write+0x14/0x20
[ 6140.570329]  __vfs_write+0x1b/0x40
[ 6140.570330]  vfs_write+0xb8/0x1b0
[ 6140.570331]  SyS_write+0x55/0xc0
[ 6140.570333]  do_syscall_64+0x73/0x130
[ 6140.570334]  entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[ 6140.570335]  0xffffffffffffffff
[ 6140.570338] running task, comm: insmod, pid 10903
[ 6140.570340]  __blocking_notifier_call_chain+0x32/0x60
[ 6140.570341]  blocking_notifier_call_chain+0x16/0x20
[ 6140.570343]  do_init_module+0xa4/0x219
[ 6140.570345]  load_module+0x1937/0x1d80
[ 6140.570346]  SYSC_finit_module+0xe5/0x120
[ 6140.570347]  SyS_finit_module+0xe/0x10
[ 6140.570349]  do_syscall_64+0x73/0x130
[ 6140.570350]  entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[ 6140.570351]  0xffffffffffffffff
[ 6140.570352] dump uninterrupted task.
[ 6140.570353] uninterrupted task, comm: kworker/0:0, pid 3
[ 6140.570355]  worker_thread+0xc9/0x460
[ 6140.570357]  kthread+0x105/0x140
[ 6140.570358]  ret_from_fork+0x35/0x40
[ 6140.570359]  0xffffffffffffffff
[ 6140.570360] uninterrupted task, comm: kworker/0:0H, pid 4
[ 6140.570361]  worker_thread+0xc9/0x460
[ 6140.570363]  kthread+0x105/0x140
[ 6140.570364]  ret_from_fork+0x35/0x40
[ 6140.570365]  0xffffffffffffffff
[ 6140.570366] uninterrupted task, comm: mm_percpu_wq, pid 6
[ 6140.570368]  rescuer_thread+0x337/0x380
[ 6140.570369]  kthread+0x105/0x140
[ 6140.570370]  ret_from_fork+0x35/0x40
[ 6140.570371]  0xffffffffffffffff
[ 6140.570372] uninterrupted task, comm: rcu_sched, pid 8
[ 6140.570375]  rcu_gp_kthread+0x5b4/0x960
[ 6140.570376]  kthread+0x105/0x140
[ 6140.570378]  ret_from_fork+0x35/0x40
[ 6140.570378]  0xffffffffffffffff
[ 6140.570379] uninterrupted task, comm: rcu_bh, pid 9
[ 6140.570380]  rcu_gp_kthread+0x83/0x960
[ 6140.570382]  kthread+0x105/0x140
[ 6140.570383]  ret_from_fork+0x35/0x40
[ 6140.570384]  0xffffffffffffffff
[ 6140.570385] uninterrupted task, comm: kworker/1:0H, pid 18
[ 6140.570387]  worker_thread+0xc9/0x460
[ 6140.570388]  kthread+0x105/0x140
[ 6140.570390]  ret_from_fork+0x35/0x40
[ 6140.570390]  0xffffffffffffffff
[ 6140.570391] uninterrupted task, comm: kworker/2:0H, pid 24
[ 6140.570393]  worker_thread+0xc9/0x460
[ 6140.570394]  kthread+0x105/0x140
[ 6140.570396]  ret_from_fork+0x35/0x40
[ 6140.570396]  0xffffffffffffffff
[ 6140.570397] uninterrupted task, comm: kworker/3:0H, pid 30
[ 6140.570399]  worker_thread+0xc9/0x460
[ 6140.570400]  kthread+0x105/0x140
[ 6140.570402]  ret_from_fork+0x35/0x40
[ 6140.570403]  0xffffffffffffffff
[ 6140.570403] uninterrupted task, comm: netns, pid 32
[ 6140.570405]  rescuer_thread+0x337/0x380
[ 6140.570407]  kthread+0x105/0x140
[ 6140.570408]  ret_from_fork+0x35/0x40
[ 6140.570409]  0xffffffffffffffff
[ 6140.570410] uninterrupted task, comm: writeback, pid 38
[ 6140.570412]  rescuer_thread+0x337/0x380
[ 6140.570413]  kthread+0x105/0x140
[ 6140.570415]  ret_from_fork+0x35/0x40
[ 6140.570415]  0xffffffffffffffff
[ 6140.570416] uninterrupted task, comm: crypto, pid 42
[ 6140.570418]  rescuer_thread+0x337/0x380
[ 6140.570420]  kthread+0x105/0x140
[ 6140.570421]  ret_from_fork+0x35/0x40
[ 6140.570422]  0xffffffffffffffff
[ 6140.570423] uninterrupted task, comm: kintegrityd, pid 43
[ 6140.570424]  rescuer_thread+0x337/0x380
[ 6140.570426]  kthread+0x105/0x140
[ 6140.570427]  ret_from_fork+0x35/0x40
[ 6140.570428]  0xffffffffffffffff
[ 6140.570429] uninterrupted task, comm: kblockd, pid 44
[ 6140.570431]  rescuer_thread+0x337/0x380
[ 6140.570432]  kthread+0x105/0x140
[ 6140.570434]  ret_from_fork+0x35/0x40
[ 6140.570434]  0xffffffffffffffff
[ 6140.570435] uninterrupted task, comm: ata_sff, pid 45
[ 6140.570437]  rescuer_thread+0x337/0x380
[ 6140.570439]  kthread+0x105/0x140
[ 6140.570440]  ret_from_fork+0x35/0x40
[ 6140.570441]  0xffffffffffffffff
[ 6140.570441] uninterrupted task, comm: md, pid 46
[ 6140.570443]  rescuer_thread+0x337/0x380
[ 6140.570445]  kthread+0x105/0x140
[ 6140.570446]  ret_from_fork+0x35/0x40
[ 6140.570447]  0xffffffffffffffff
[ 6140.570447] uninterrupted task, comm: edac-poller, pid 47
[ 6140.570449]  rescuer_thread+0x337/0x380
[ 6140.570468]  kthread+0x105/0x140
[ 6140.570470]  ret_from_fork+0x35/0x40
[ 6140.570471]  0xffffffffffffffff
[ 6140.570472] uninterrupted task, comm: devfreq_wq, pid 48
[ 6140.570474]  rescuer_thread+0x337/0x380
[ 6140.570475]  kthread+0x105/0x140
[ 6140.570477]  ret_from_fork+0x35/0x40
[ 6140.570477]  0xffffffffffffffff
[ 6140.570478] uninterrupted task, comm: watchdogd, pid 49
[ 6140.570480]  rescuer_thread+0x337/0x380
[ 6140.570481]  kthread+0x105/0x140
[ 6140.570483]  ret_from_fork+0x35/0x40
[ 6140.570483]  0xffffffffffffffff
[ 6140.570484] uninterrupted task, comm: kworker/3:1, pid 58
[ 6140.570486]  worker_thread+0xc9/0x460
[ 6140.570488]  kthread+0x105/0x140
[ 6140.570489]  ret_from_fork+0x35/0x40
[ 6140.570490]  0xffffffffffffffff
[ 6140.570490] uninterrupted task, comm: kworker/1:1, pid 67
[ 6140.570492]  worker_thread+0xc9/0x460
[ 6140.570494]  kthread+0x105/0x140
[ 6140.570495]  ret_from_fork+0x35/0x40
[ 6140.570496]  0xffffffffffffffff
[ 6140.570496] uninterrupted task, comm: kthrotld, pid 98
[ 6140.570498]  rescuer_thread+0x337/0x380
[ 6140.570500]  kthread+0x105/0x140
[ 6140.570501]  ret_from_fork+0x35/0x40
[ 6140.570502]  0xffffffffffffffff
[ 6140.570503] uninterrupted task, comm: acpi_thermal_pm, pid 99
[ 6140.570505]  rescuer_thread+0x337/0x380
[ 6140.570506]  kthread+0x105/0x140
[ 6140.570507]  ret_from_fork+0x35/0x40
[ 6140.570508]  0xffffffffffffffff
[ 6140.570509] uninterrupted task, comm: scsi_tmf_0, pid 101
[ 6140.570555]  rescuer_thread+0x337/0x380
[ 6140.570558]  kthread+0x105/0x140
[ 6140.570561]  ret_from_fork+0x35/0x40
[ 6140.570563]  0xffffffffffffffff
[ 6140.570565] uninterrupted task, comm: scsi_tmf_1, pid 103
[ 6140.570568]  rescuer_thread+0x337/0x380
[ 6140.570570]  kthread+0x105/0x140
[ 6140.570572]  ret_from_fork+0x35/0x40
[ 6140.570573]  0xffffffffffffffff
[ 6140.570575] uninterrupted task, comm: ipv6_addrconf, pid 110
[ 6140.570577]  rescuer_thread+0x337/0x380
[ 6140.570579]  kthread+0x105/0x140
[ 6140.570582]  ret_from_fork+0x35/0x40
[ 6140.570583]  0xffffffffffffffff
[ 6140.570584] uninterrupted task, comm: kstrp, pid 119
[ 6140.570586]  rescuer_thread+0x337/0x380
[ 6140.570588]  kthread+0x105/0x140
[ 6140.570590]  ret_from_fork+0x35/0x40
[ 6140.570591]  0xffffffffffffffff
[ 6140.570592] uninterrupted task, comm: charger_manager, pid 136
[ 6140.570595]  rescuer_thread+0x337/0x380
[ 6140.570597]  kthread+0x105/0x140
[ 6140.570599]  ret_from_fork+0x35/0x40
[ 6140.570600]  0xffffffffffffffff
[ 6140.570601] uninterrupted task, comm: kworker/2:2, pid 176
[ 6140.570604]  worker_thread+0xc9/0x460
[ 6140.570606]  kthread+0x105/0x140
[ 6140.570608]  ret_from_fork+0x35/0x40
[ 6140.570609]  0xffffffffffffffff
[ 6140.570610] uninterrupted task, comm: kworker/1:2, pid 195
[ 6140.570613]  worker_thread+0xc9/0x460
[ 6140.570615]  kthread+0x105/0x140
[ 6140.570617]  ret_from_fork+0x35/0x40
[ 6140.570618]  0xffffffffffffffff
[ 6140.570619] uninterrupted task, comm: scsi_tmf_2, pid 198
[ 6140.570622]  rescuer_thread+0x337/0x380
[ 6140.570624]  kthread+0x105/0x140
[ 6140.570626]  ret_from_fork+0x35/0x40
[ 6140.570627]  0xffffffffffffffff
[ 6140.570628] uninterrupted task, comm: kworker/2:1H, pid 199
[ 6140.570631]  worker_thread+0xc9/0x460
[ 6140.570633]  kthread+0x105/0x140
[ 6140.570635]  ret_from_fork+0x35/0x40
[ 6140.570636]  0xffffffffffffffff
[ 6140.570637] uninterrupted task, comm: kworker/0:1H, pid 201
[ 6140.570640]  worker_thread+0xc9/0x460
[ 6140.570642]  kthread+0x105/0x140
[ 6140.570644]  ret_from_fork+0x35/0x40
[ 6140.570645]  0xffffffffffffffff
[ 6140.570646] uninterrupted task, comm: kworker/3:1H, pid 211
[ 6140.570648]  worker_thread+0xc9/0x460
[ 6140.570650]  kthread+0x105/0x140
[ 6140.570652]  ret_from_fork+0x35/0x40
[ 6140.570653]  0xffffffffffffffff
[ 6140.570654] uninterrupted task, comm: kworker/1:1H, pid 215
[ 6140.570657]  worker_thread+0xc9/0x460
[ 6140.570659]  kthread+0x105/0x140
[ 6140.570661]  ret_from_fork+0x35/0x40
[ 6140.570662]  0xffffffffffffffff
[ 6140.570663] uninterrupted task, comm: ext4-rsv-conver, pid 227
[ 6140.570666]  rescuer_thread+0x337/0x380
[ 6140.570668]  kthread+0x105/0x140
[ 6140.570669]  ret_from_fork+0x35/0x40
[ 6140.570671]  0xffffffffffffffff
[ 6140.570672] uninterrupted task, comm: iprt-VBoxWQueue, pid 328
[ 6140.570674]  rescuer_thread+0x337/0x380
[ 6140.570676]  kthread+0x105/0x140
[ 6140.570678]  ret_from_fork+0x35/0x40
[ 6140.570679]  0xffffffffffffffff
[ 6140.570681] uninterrupted task, comm: ttm_swap, pid 407
[ 6140.570683]  rescuer_thread+0x337/0x380
[ 6140.570685]  kthread+0x105/0x140
[ 6140.570687]  ret_from_fork+0x35/0x40
[ 6140.570688]  0xffffffffffffffff
[ 6140.570702] uninterrupted task, comm: kworker/0:1, pid 3230
[ 6140.570705]  worker_thread+0xc9/0x460
[ 6140.570707]  kthread+0x105/0x140
[ 6140.570709]  ret_from_fork+0x35/0x40
[ 6140.570710]  0xffffffffffffffff
[ 6140.570712] uninterrupted task, comm: kworker/u8:2, pid 3804
[ 6140.570714]  worker_thread+0xc9/0x460
[ 6140.570716]  kthread+0x105/0x140
[ 6140.570718]  ret_from_fork+0x35/0x40
[ 6140.570719]  0xffffffffffffffff
[ 6140.570720] uninterrupted task, comm: kworker/3:0, pid 3838
[ 6140.570723]  worker_thread+0xc9/0x460
[ 6140.570725]  kthread+0x105/0x140
[ 6140.570727]  ret_from_fork+0x35/0x40
[ 6140.570728]  0xffffffffffffffff
[ 6140.570730] uninterrupted task, comm: kworker/u8:0, pid 4702
[ 6140.570732]  worker_thread+0xc9/0x460
[ 6140.570734]  kthread+0x105/0x140
[ 6140.570736]  ret_from_fork+0x35/0x40
[ 6140.570738]  0xffffffffffffffff
[ 6140.570739] uninterrupted task, comm: kworker/2:0, pid 17180
[ 6140.570741]  worker_thread+0xc9/0x460
[ 6140.570743]  kthread+0x105/0x140
[ 6140.570745]  ret_from_fork+0x35/0x40
[ 6140.570746]  0xffffffffffffffff
[ 6140.570747] uninterrupted task, comm: kworker/u8:3, pid 23671
[ 6140.570750]  worker_thread+0xc9/0x460
[ 6140.570752]  kthread+0x105/0x140
[ 6140.570754]  ret_from_fork+0x35/0x40
[ 6140.570755]  0xffffffffffffffff
[ 6140.600832] sh (4703): drop_caches: 1

This article comes from the Linux内核之旅 (Linux Kernel Journey) public account. The author, Baoyou Xie, has spent the last 10 years working on the Linux kernel. He is also a member of the expert committee of the China OSS Promotion Union and the maintainer of the ZTE architecture in the Linux kernel, with more than 4,000 lines of code merged into Linux.
