Introduction

(The original version of this article lives on my blog, CheapTalks. Feel free to drop by.)

On Android, the problem ordinary developers run into most often is the ANR (Application Not Responding), i.e. the application's main thread has stopped responding. The root cause lies in a deliberate design choice of the Android framework: all UI-related work that the user is acutely sensitive to is placed on a single dedicated thread, the main thread. Once that thread fails to finish a task within the allotted time, such as a broadcast's onReceive, an Activity transition, or bindApplication, the framework raises an ANR and shows a dialog letting the user either keep waiting or kill the unresponsive application's process.

The textbook explanation is that the developer performed a time-consuming operation on the main thread, so the task simply took too long. That kind of problem is usually easy to locate and fix: capture a bugreport and the root cause of the ANR reveals itself, and failing that, the logs will still point to the slow spot.
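Before blaming the system, it helps to make the app-side case easy to rule out. Below is a minimal sketch (my own illustration, not code from this article's framework sources) of enabling StrictMode in a debug build so that accidental disk or network work on the main thread shows up in logcat; the Application subclass name is a hypothetical placeholder.

import android.app.Application;
import android.os.StrictMode;

// Hypothetical Application subclass used only for illustration.
public class DebugApp extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        // Flag slow calls, disk reads/writes and network access on the main thread.
        StrictMode.setThreadPolicy(new StrictMode.ThreadPolicy.Builder()
                .detectAll()      // disk, network and custom slow-call checks
                .penaltyLog()     // log a stack trace instead of crashing
                .build());
    }
}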

Today we look at another common situation, one usually caused by underpowered CPU hardware or a poorly tuned low-level CPU governor policy. This class of problem is maddening: code that by any reasonable standard should never be slow can still produce ANRs and jank, giving users a terrible experience.

In this article I will first walk through an example, using the ANR logs to reflect the state of the system, and then look at the Android framework source to see how these logs are produced.

Example

Below is a log excerpt I captured. The system keeps emitting timing logs for BIND_APPLICATION; they show up on average once every two seconds, and a single bindApplication call can sometimes take more than 10 seconds.

This is a serious situation. When the system dispatches a foreground broadcast, the receiver must finish within 10 seconds or an ANR is raised. If that foreground broadcast has to run in a process that has not been started yet, the system must first spawn the process and then call bindApplication to trigger Application.onCreate. During this phase BIND_APPLICATION and RECEIVER are enqueued, in that order, onto the ActivityThread$H main-thread message queue, so if handling BIND_APPLICATION takes too long, the RECEIVER message is indirectly starved and the result is an ANR. By the same mechanism, input events can also be left waiting, which surfaces as jank the user can feel.

08-28 20:35:58.737  4635  4635 I tag_activity_manager: [0,com.android.providers.calendar,110,3120]
08-28 20:35:58.757  4653  4653 I tag_activity_manager: [0,com.xiaomi.metoknlp,110,3073]
08-28 20:35:58.863  4601  4601 I tag_activity_manager: [0,android.process.acore,110,3392]
08-28 20:36:00.320  5040  5040 I tag_activity_manager: [0,com.lbe.security.miui,110,3045]
08-28 20:36:00.911  4233  4233 I tag_activity_manager: [0,com.miui.securitycenter.remote,110,8653]
08-28 20:36:03.254  4808  4808 I tag_activity_manager: [0,com.android.phone,110,7059]
08-28 20:36:05.538  5246  5246 I tag_activity_manager: [0,com.xiaomi.market,110,3406]
08-28 20:36:09.006  5153  5153 I tag_activity_manager: [0,com.miui.klo.bugreport,110,10166]
08-28 20:36:09.070  5118  5118 I tag_activity_manager: [0,com.android.settings,110,10680]
08-28 20:36:11.259  5570  5570 I tag_activity_manager: [0,com.miui.core,110,4895]

ActivityManagerService reaches the app's ActivityThread through a binder call, which then enqueues the task onto the main-thread message queue:

// The binder call from AMS lands in this method
public final void bindApplication(String processName, ApplicationInfo appInfo,
        List<ProviderInfo> providers, ComponentName instrumentationName,
        ProfilerInfo profilerInfo, Bundle instrumentationArgs,
        IInstrumentationWatcher instrumentationWatcher,
        IUiAutomationConnection instrumentationUiConnection, int debugMode,
        boolean enableOpenGlTrace, boolean isRestrictedBackupMode, boolean persistent,
        Configuration config, CompatibilityInfo compatInfo, Map<String, IBinder> services,
        Bundle coreSettings) {
    ...
    // Wrap the arguments into an AppBindData object
    AppBindData data = new AppBindData();
    ...
    sendMessage(H.BIND_APPLICATION, data);
}
...
private class H extends Handler {
    ...
    public static final int BIND_APPLICATION = 110;
    ...
    public void handleMessage(Message msg) {
        if (DEBUG_MESSAGES) Slog.v(TAG, ">>> handling: " + codeToString(msg.what));
        switch (msg.what) {
            ...
            case BIND_APPLICATION:
                // Processed later, when the message is pulled off the main-thread queue
                Trace.traceBegin(Trace.TRACE_TAG_ACTIVITY_MANAGER, "bindApplication");
                AppBindData data = (AppBindData)msg.obj;
                handleBindApplication(data);
                Trace.traceEnd(Trace.TRACE_TAG_ACTIVITY_MANAGER);
                break;
            ...
        }
    ...
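Because BIND_APPLICATION, RECEIVER and input messages all share this one queue, a slow message delays everything queued behind it. Below is a hedged sketch of how an app can observe that head-of-line blocking for itself, using the public Looper.setMessageLogging hook; the 200 ms threshold and class name are my own assumptions, not framework values.

import android.os.Looper;
import android.os.SystemClock;
import android.util.Log;
import android.util.Printer;

// Rough main-thread dispatch timer; call install() once, e.g. from Application.onCreate().
public final class MainLooperMonitor {
    private static final long SLOW_DISPATCH_MS = 200;  // assumed threshold, tune as needed

    public static void install() {
        Looper.getMainLooper().setMessageLogging(new Printer() {
            private long dispatchStart;

            @Override
            public void println(String x) {
                // Looper.loop() prints ">>>>> Dispatching to ..." before each message
                // and "<<<<< Finished to ..." after it.
                if (x.startsWith(">>>>> Dispatching")) {
                    dispatchStart = SystemClock.uptimeMillis();
                } else if (x.startsWith("<<<<< Finished")) {
                    long took = SystemClock.uptimeMillis() - dispatchStart;
                    if (took > SLOW_DISPATCH_MS) {
                        Log.w("MainLooperMonitor", "Slow main-thread message: " + took + "ms " + x);
                    }
                }
            }
        });
    }
}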

In the example above the system, unsurprisingly, soon produced an ANR, and the problem was not caused by time-consuming work inside the app but by excessive system-wide CPU load. Here is the CPU load log:

// bindApplication has been blocking the main thread for 57s+
Running message is { when=-57s68ms what=110 obj=AppBindData{appInfo=ApplicationInfo{be09873 com.android.settings}} target=android.app.ActivityThread$H planTime=1504652580856 dispatchTime=1504652580897 finishTime=0 }
Message 0: { when=-57s35ms what=140 arg1=5 target=android.app.ActivityThread$H planTime=1504652580890 dispatchTime=0 finishTime=0 }
// A look at the system state when the ANR occurred
08-28 20:36:13.000 2692 2759 E ActivityManager: ANR in com.android.settings
08-28 20:36:13.000 2692 2759 E ActivityManager: PID: 5118
// Left to right: CPU load averages (runnable plus uninterruptible tasks) over the last 1, 5 and 15 minutes; here anything above 11 counts as overloaded
08-28 20:36:13.000 2692 2759 E ActivityManager: Load: 20.12 / 13.05 / 6.96
// CPU usage around the time of the ANR
08-28 20:36:13.000 2692 2759 E ActivityManager: CPU usage from 2967ms to -4440ms ago:  
// system_server is far too busy
08-28 20:36:13.000 2692 2759 E ActivityManager: 73% 2692/system_server: 57% user + 15% kernel / faults: 16217 minor 11 major
08-28 20:36:13.000 2692 2759 E ActivityManager: 61% 4840/com.miui.home: 55% user + 5.4% kernel / faults: 26648 minor 17 major
08-28 20:36:13.000 2692 2759 E ActivityManager: 19% 330/mediaserver: 17% user + 2.1% kernel / faults: 5180 minor 18 major
08-28 20:36:13.000 2692 2759 E ActivityManager: 18% 4096/com.android.systemui: 14% user + 4% kernel / faults: 12965 minor 30 major
...

Of course, proving that the system caused the ANR takes more than the CPU load log; you also have to rule out the app itself, i.e. time-consuming operations, slow binder calls, lock contention on the main thread, and so on. Once those are excluded, you can conclude that the problem is scarce CPU resources: bindApplication never gets enough time slices to finish promptly, so the broadcast or service messages queued behind it eventually time out with an ANR.
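One practical way to rule the app in or out is a small watchdog thread that pings the main looper and dumps the main thread's stack when the ping goes unanswered: if the stack shows a binder call or a lock wait, there is an app-side suspect; if the main thread is just starved, the CPU-load theory stands. A minimal sketch under those assumptions (class name and timeout are mine, not from the framework):

import android.os.Handler;
import android.os.Looper;
import android.util.Log;

// Hypothetical watchdog: posts a token to the main thread and checks it ran in time.
public final class MainThreadWatchdog extends Thread {
    private static final long TIMEOUT_MS = 4000;  // assumed threshold
    private final Handler mainHandler = new Handler(Looper.getMainLooper());
    private volatile boolean ticked;

    @Override
    public void run() {
        while (!isInterrupted()) {
            ticked = false;
            mainHandler.post(() -> ticked = true);
            try {
                Thread.sleep(TIMEOUT_MS);
            } catch (InterruptedException e) {
                return;
            }
            if (!ticked) {
                // The main thread did not reach our runnable within TIMEOUT_MS;
                // dump its stack to see whether it is blocked or merely starved.
                StringBuilder sb = new StringBuilder("Main thread stalled:\n");
                for (StackTraceElement el : Looper.getMainLooper().getThread().getStackTrace()) {
                    sb.append("    at ").append(el).append('\n');
                }
                Log.w("MainThreadWatchdog", sb.toString());
            }
        }
    }
}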

Diving into the source

AMS.appNotResponding

private static final String TAG = TAG_WITH_CLASS_NAME ? "ActivityManagerService" : TAG_AM;

final void appNotResponding(ProcessRecord app, ActivityRecord activity,
        ActivityRecord parent, boolean aboveSystem, final String annotation) {
    ...
    long anrTime = SystemClock.uptimeMillis();
    // If CPU monitoring is enabled, refresh the CPU statistics first
    if (MONITOR_CPU_USAGE) {
        updateCpuStatsNow();
    }
    ...
    // Create a tracker for CPU usage; constructed with true,
    // it will also print the CPU usage of every thread
    final ProcessCpuTracker processCpuTracker = new ProcessCpuTracker(true);
    ...
    String cpuInfo = null;
    if (MONITOR_CPU_USAGE) {
        updateCpuStatsNow();
        synchronized (mProcessCpuTracker) {
            cpuInfo = mProcessCpuTracker.printCurrentState(anrTime);
        }
        info.append(processCpuTracker.printCurrentLoad());
        info.append(cpuInfo);
    }
    info.append(processCpuTracker.printCurrentState(anrTime));
    // Dump the whole report straight to the log
    Slog.e(TAG, info.toString());
    ...
}

AMS.updateCpuStatsNow

    void updateCpuStatsNow() {
        synchronized (mProcessCpuTracker) {
            mProcessCpuMutexFree.set(false);
            final long now = SystemClock.uptimeMillis();
            boolean haveNewCpuStats = false;

            // Only refresh the CPU data if at least MONITOR_CPU_MIN_TIME (5 seconds) has passed
            if (MONITOR_CPU_USAGE &&
                    mLastCpuTime.get() < (now - MONITOR_CPU_MIN_TIME)) {
                mLastCpuTime.set(now);
                mProcessCpuTracker.update();
                if (mProcessCpuTracker.hasGoodLastStats()) {
                    haveNewCpuStats = true;
                    //Slog.i(TAG, mProcessCpu.printCurrentState());
                    //Slog.i(TAG, "Total CPU usage: "
                    //        + mProcessCpu.getTotalCpuPercent() + "%");

                    // Slog the cpu usage if the property is set.
                    if ("true".equals(SystemProperties.get("events.cpu"))) {
                        // user-mode time
                        int user = mProcessCpuTracker.getLastUserTime();
                        // kernel-mode time
                        int system = mProcessCpuTracker.getLastSystemTime();
                        // IO wait time
                        int iowait = mProcessCpuTracker.getLastIoWaitTime();
                        // hardware interrupt time
                        int irq = mProcessCpuTracker.getLastIrqTime();
                        // software interrupt time
                        int softIrq = mProcessCpuTracker.getLastSoftIrqTime();
                        // idle time
                        int idle = mProcessCpuTracker.getLastIdleTime();
                        int total = user + system + iowait + irq + softIrq + idle;
                        if (total == 0) total = 1;
                        // write out the percentages
                        EventLog.writeEvent(EventLogTags.CPU,
                                ((user + system + iowait + irq + softIrq) * 100) / total,
                                (user * 100) / total,
                                (system * 100) / total,
                                (iowait * 100) / total,
                                (irq * 100) / total,
                                (softIrq * 100) / total);
                    }
                }
            }

            // Attribute and record the various CPU times
            final BatteryStatsImpl bstats = mBatteryStatsService.getActiveStatistics();
            synchronized (bstats) {
                synchronized (mPidsSelfLocked) {
                    if (haveNewCpuStats) {
                        if (bstats.startAddingCpuLocked()) {
                            int totalUTime = 0;
                            int totalSTime = 0;
                            // Iterate over every stats entry held by the ProcessCpuTracker
                            final int N = mProcessCpuTracker.countStats();
                            for (int i = 0; i < N; i++) {
                                ProcessCpuTracker.Stats st = mProcessCpuTracker.getStats(i);
                                if (!st.working) {
                                    continue;
                                }
                                // Update the CPU time recorded on the ProcessRecord
                                ProcessRecord pr = mPidsSelfLocked.get(st.pid);
                                totalUTime += st.rel_utime;
                                totalSTime += st.rel_stime;
                                if (pr != null) {
                                    BatteryStatsImpl.Uid.Proc ps = pr.curProcBatteryStats;
                                    if (ps == null || !ps.isActive()) {
                                        pr.curProcBatteryStats = ps = bstats.getProcessStatsLocked(
                                                pr.info.uid, pr.processName);
                                    }
                                    ps.addCpuTimeLocked(st.rel_utime, st.rel_stime);
                                    pr.curCpuTime += st.rel_utime + st.rel_stime;
                                } else {
                                    BatteryStatsImpl.Uid.Proc ps = st.batteryStats;
                                    if (ps == null || !ps.isActive()) {
                                        st.batteryStats = ps = bstats.getProcessStatsLocked(
                                                bstats.mapUid(st.uid), st.name);
                                    }
                                    ps.addCpuTimeLocked(st.rel_utime, st.rel_stime);
                                }
                            }

                            // Push the totals into BatteryStatsImpl
                            final int userTime = mProcessCpuTracker.getLastUserTime();
                            final int systemTime = mProcessCpuTracker.getLastSystemTime();
                            final int iowaitTime = mProcessCpuTracker.getLastIoWaitTime();
                            final int irqTime = mProcessCpuTracker.getLastIrqTime();
                            final int softIrqTime = mProcessCpuTracker.getLastSoftIrqTime();
                            final int idleTime = mProcessCpuTracker.getLastIdleTime();
                            bstats.finishAddingCpuLocked(totalUTime, totalSTime, userTime,
                                    systemTime, iowaitTime, irqTime, softIrqTime, idleTime);
                        }
                    }
                }

                // Persist the battery stats to disk every 30 minutes (BATTERY_STATS_TIME)
                if (mLastWriteTime < (now - BATTERY_STATS_TIME)) {
                    mLastWriteTime = now;
                    mBatteryStatsService.scheduleWriteToDisk();
                }
            }
        }
    }

BatteryStatsImpl.finishAddingCpuLocked

    public void finishAddingCpuLocked(int totalUTime, int totalSTime, int statUserTime,
            int statSystemTime, int statIOWaitTime, int statIrqTime,
            int statSoftIrqTime, int statIdleTime) {
        if (DEBUG) Slog.d(TAG, "Adding cpu: tuser=" + totalUTime + " tsys=" + totalSTime
                + " user=" + statUserTime + " sys=" + statSystemTime
                + " io=" + statIOWaitTime + " irq=" + statIrqTime
                + " sirq=" + statSoftIrqTime + " idle=" + statIdleTime);
        mCurStepCpuUserTime += totalUTime;
        mCurStepCpuSystemTime += totalSTime;
        mCurStepStatUserTime += statUserTime;
        mCurStepStatSystemTime += statSystemTime;
        mCurStepStatIOWaitTime += statIOWaitTime;
        mCurStepStatIrqTime += statIrqTime;
        mCurStepStatSoftIrqTime += statSoftIrqTime;
        mCurStepStatIdleTime += statIdleTime;
    }

ProcessCpuTracker.update

The update method mainly reads /proc/stat and /proc/loadavg to refresh the current CPU times. The load callback onLoadChanged is used by LoadAverageService, which draws a live overlay View on screen so the CPU figures can be watched in real time.

As for the two files themselves, I list sample data from both proc nodes at the end of the article, together with a brief explanation.

The /proc directory is a virtual filesystem: its subdirectories and files are virtual too, occupy no real storage, and allow real-time system information to be read on demand.

    public void update() {
        if (DEBUG) Slog.v(TAG, "Update: " + this);

        final long nowUptime = SystemClock.uptimeMillis();
        final long nowRealtime = SystemClock.elapsedRealtime();

        // Reuse the size-7 long[] buffer
        final long[] sysCpu = mSystemCpuData;
        // Read /proc/stat
        if (Process.readProcFile("/proc/stat", SYSTEM_CPU_FORMAT,
                null, sysCpu, null)) {
            // Total user time is user + nice time.
            final long usertime = (sysCpu[0]+sysCpu[1]) * mJiffyMillis;
            // Total system time is simply system time.
            final long systemtime = sysCpu[2] * mJiffyMillis;
            // Total idle time is simply idle time.
            final long idletime = sysCpu[3] * mJiffyMillis;
            // Total irq time is iowait + irq + softirq time.
            final long iowaittime = sysCpu[4] * mJiffyMillis;
            final long irqtime = sysCpu[5] * mJiffyMillis;
            final long softirqtime = sysCpu[6] * mJiffyMillis;

            // This code is trying to avoid issues with idle time going backwards,
            // but currently it gets into situations where it triggers most of the time. :(
            if (true || (usertime >= mBaseUserTime && systemtime >= mBaseSystemTime
                    && iowaittime >= mBaseIoWaitTime && irqtime >= mBaseIrqTime
                    && softirqtime >= mBaseSoftIrqTime && idletime >= mBaseIdleTime)) {
                mRelUserTime = (int)(usertime - mBaseUserTime);
                mRelSystemTime = (int)(systemtime - mBaseSystemTime);
                mRelIoWaitTime = (int)(iowaittime - mBaseIoWaitTime);
                mRelIrqTime = (int)(irqtime - mBaseIrqTime);
                mRelSoftIrqTime = (int)(softirqtime - mBaseSoftIrqTime);
                mRelIdleTime = (int)(idletime - mBaseIdleTime);
                mRelStatsAreGood = true;
                ...
                mBaseUserTime = usertime;
                mBaseSystemTime = systemtime;
                mBaseIoWaitTime = iowaittime;
                mBaseIrqTime = irqtime;
                mBaseSoftIrqTime = softirqtime;
                mBaseIdleTime = idletime;
            } else {
                mRelUserTime = 0;
                mRelSystemTime = 0;
                mRelIoWaitTime = 0;
                mRelIrqTime = 0;
                mRelSoftIrqTime = 0;
                mRelIdleTime = 0;
                mRelStatsAreGood = false;
                Slog.w(TAG, "/proc/stats has gone backwards; skipping CPU update");
                return;
            }
        }

        mLastSampleTime = mCurrentSampleTime;
        mCurrentSampleTime = nowUptime;
        mLastSampleRealTime = mCurrentSampleRealTime;
        mCurrentSampleRealTime = nowRealtime;

        final StrictMode.ThreadPolicy savedPolicy = StrictMode.allowThreadDiskReads();
        try {
            // Collect per-process stats from the /proc nodes
            mCurPids = collectStats("/proc", -1, mFirst, mCurPids, mProcStats);
        } finally {
            StrictMode.setThreadPolicy(savedPolicy);
        }

        final float[] loadAverages = mLoadAverageData;
        // Read /proc/loadavg, i.e. the CPU load averages
        // over the last 1, 5 and 15 minutes
        if (Process.readProcFile("/proc/loadavg", LOAD_AVERAGE_FORMAT,
                null, null, loadAverages)) {
            float load1 = loadAverages[0];
            float load5 = loadAverages[1];
            float load15 = loadAverages[2];
            if (load1 != mLoad1 || load5 != mLoad5 || load15 != mLoad15) {
                mLoad1 = load1;
                mLoad5 = load5;
                mLoad15 = load15;
                // onLoadChanged is a no-op here; an inner class in LoadAverageService
                // overrides it to refresh the on-screen load display
                onLoadChanged(load1, load5, load15);
            }
        }
        ...
    }
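ProcessCpuTracker goes through the framework-internal readProcFile helper shown later, but the same load numbers can be read with plain Java IO. A minimal sketch, assuming only that /proc/loadavg is readable from the caller's context (it usually is, though SELinux policy on newer Android releases may restrict it):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public final class LoadAvgReader {
    // Returns the 1-, 5- and 15-minute load averages, mirroring what
    // ProcessCpuTracker feeds into onLoadChanged().
    public static float[] read() throws IOException {
        try (BufferedReader r = new BufferedReader(new FileReader("/proc/loadavg"))) {
            // Example line: "10.55 19.87 25.93 2/2082 7475"
            String[] fields = r.readLine().trim().split("\\s+");
            return new float[] {
                    Float.parseFloat(fields[0]),
                    Float.parseFloat(fields[1]),
                    Float.parseFloat(fields[2]),
            };
        }
    }
}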

ProcessCpuTracker.collectStats

    private int[] collectStats(String statsFile, int parentPid, boolean first,
            int[] curPids, ArrayList<Stats> allProcs) {

        // Get the pids we are interested in
        int[] pids = Process.getPids(statsFile, curPids);
        int NP = (pids == null) ? 0 : pids.length;
        int NS = allProcs.size();
        int curStatsIndex = 0;
        for (int i=0; i<NP; i++) {
            int pid = pids[i];
            if (pid < 0) {
                NP = pid;
                break;
            }
            Stats st = curStatsIndex < NS ? allProcs.get(curStatsIndex) : null;
            if (st != null && st.pid == pid) {
                // Update an existing process...
                st.added = false;
                st.working = false;
                curStatsIndex++;
                if (DEBUG) Slog.v(TAG, "Existing "
                        + (parentPid < 0 ? "process" : "thread")
                        + " pid " + pid + ": " + st);

                if (st.interesting) {
                    final long uptime = SystemClock.uptimeMillis();
                    // Scratch buffer for the process stats
                    final long[] procStats = mProcessStatsData;
                    if (!Process.readProcFile(st.statFile.toString(),
                            PROCESS_STATS_FORMAT, null, procStats, null)) {
                        continue;
                    }

                    final long minfaults = procStats[PROCESS_STAT_MINOR_FAULTS];
                    final long majfaults = procStats[PROCESS_STAT_MAJOR_FAULTS];
                    final long utime = procStats[PROCESS_STAT_UTIME] * mJiffyMillis;
                    final long stime = procStats[PROCESS_STAT_STIME] * mJiffyMillis;

                    if (utime == st.base_utime && stime == st.base_stime) {
                        st.rel_utime = 0;
                        st.rel_stime = 0;
                        st.rel_minfaults = 0;
                        st.rel_majfaults = 0;
                        if (st.active) {
                            st.active = false;
                        }
                        continue;
                    }

                    if (!st.active) {
                        st.active = true;
                    }
                    ...
                    st.rel_uptime = uptime - st.base_uptime;
                    st.base_uptime = uptime;
                    st.rel_utime = (int)(utime - st.base_utime);
                    st.rel_stime = (int)(stime - st.base_stime);
                    st.base_utime = utime;
                    st.base_stime = stime;
                    st.rel_minfaults = (int)(minfaults - st.base_minfaults);
                    st.rel_majfaults = (int)(majfaults - st.base_majfaults);
                    st.base_minfaults = minfaults;
                    st.base_majfaults = majfaults;
                    st.working = true;
                }

                continue;
            }

            if (st == null || st.pid > pid) {
                // We have a new process!
                st = new Stats(pid, parentPid, mIncludeThreads);
                allProcs.add(curStatsIndex, st);
                curStatsIndex++;
                NS++;
                ...
                final String[] procStatsString = mProcessFullStatsStringData;
                final long[] procStats = mProcessFullStatsData;
                st.base_uptime = SystemClock.uptimeMillis();
                String path = st.statFile.toString();
                //Slog.d(TAG, "Reading proc file: " + path);
                if (Process.readProcFile(path, PROCESS_FULL_STATS_FORMAT, procStatsString,
                        procStats, null)) {
                    // This is a possible way to filter out processes that
                    // are actually kernel threads...  do we want to?  Some
                    // of them do use CPU, but there can be a *lot* that are
                    // not doing anything.
                    st.vsize = procStats[PROCESS_FULL_STAT_VSIZE];
                    if (true || procStats[PROCESS_FULL_STAT_VSIZE] != 0) {
                        st.interesting = true;
                        st.baseName = procStatsString[0];
                        st.base_minfaults = procStats[PROCESS_FULL_STAT_MINOR_FAULTS];
                        st.base_majfaults = procStats[PROCESS_FULL_STAT_MAJOR_FAULTS];
                        st.base_utime = procStats[PROCESS_FULL_STAT_UTIME] * mJiffyMillis;
                        st.base_stime = procStats[PROCESS_FULL_STAT_STIME] * mJiffyMillis;
                    } else {
                        Slog.i(TAG, "Skipping kernel process pid " + pid
                                + " name " + procStatsString[0]);
                        st.baseName = procStatsString[0];
                    }
                } else {
                    Slog.w(TAG, "Skipping unknown process pid " + pid);
                    st.baseName = "<unknown>";
                    st.base_utime = st.base_stime = 0;
                    st.base_minfaults = st.base_majfaults = 0;
                }

                if (parentPid < 0) {
                    getName(st, st.cmdlineFile);
                    if (st.threadStats != null) {
                        mCurThreadPids = collectStats(st.threadsDir, pid, true,
                                mCurThreadPids, st.threadStats);
                    }
                } else if (st.interesting) {
                    st.name = st.baseName;
                    st.nameWidth = onMeasureProcessName(st.name);
                }

                if (DEBUG) Slog.v("Load", "Stats added " + st.name + " pid=" + st.pid
                        + " utime=" + st.base_utime + " stime=" + st.base_stime
                        + " minfaults=" + st.base_minfaults + " majfaults=" + st.base_majfaults);

                st.rel_utime = 0;
                st.rel_stime = 0;
                st.rel_minfaults = 0;
                st.rel_majfaults = 0;
                st.added = true;
                if (!first && st.interesting) {
                    st.working = true;
                }
                continue;
            }

            // This process has gone away!
            st.rel_utime = 0;
            st.rel_stime = 0;
            st.rel_minfaults = 0;
            st.rel_majfaults = 0;
            st.removed = true;
            st.working = true;
            allProcs.remove(curStatsIndex);
            NS--;
            if (DEBUG) Slog.v(TAG, "Removed "
                    + (parentPid < 0 ? "process" : "thread")
                    + " pid " + pid + ": " + st);
            // Decrement the loop counter so that we process the current pid
            // again the next time through the loop.
            i--;
            continue;
        }

        while (curStatsIndex < NS) {
            // This process has gone away!
            final Stats st = allProcs.get(curStatsIndex);
            st.rel_utime = 0;
            st.rel_stime = 0;
            st.rel_minfaults = 0;
            st.rel_majfaults = 0;
            st.removed = true;
            st.working = true;
            allProcs.remove(curStatsIndex);
            NS--;
            if (localLOGV) Slog.v(TAG, "Removed pid " + st.pid + ": " + st);
        }

        return pids;
    }

Process.readProcFile

jboolean android_os_Process_readProcFile(JNIEnv* env, jobject clazz,
        jstring file, jintArray format, jobjectArray outStrings,
        jlongArray outLongs, jfloatArray outFloats)
{
    ...
    int fd = open(file8, O_RDONLY);
    ...
    env->ReleaseStringUTFChars(file, file8);

    // Read the file contents into a fixed 256-byte buffer
    char buffer[256];
    const int len = read(fd, buffer, sizeof(buffer)-1);
    close(fd);
    ...
    buffer[len] = 0;

    return android_os_Process_parseProcLineArray(env, clazz, buffer, 0, len,
            format, outStrings, outLongs, outFloats);
}

Process.cpp parseProcLineArray

jboolean android_os_Process_parseProcLineArray(JNIEnv* env, jobject clazz,
        char* buffer, jint startIndex, jint endIndex, jintArray format,
        jobjectArray outStrings, jlongArray outLongs, jfloatArray outFloats)
{
    // First get the lengths of the format and output arrays
    const jsize NF = env->GetArrayLength(format);
    const jsize NS = outStrings ? env->GetArrayLength(outStrings) : 0;
    const jsize NL = outLongs ? env->GetArrayLength(outLongs) : 0;
    const jsize NR = outFloats ? env->GetArrayLength(outFloats) : 0;

    jint* formatData = env->GetIntArrayElements(format, 0);
    jlong* longsData = outLongs ?
        env->GetLongArrayElements(outLongs, 0) : NULL;
    jfloat* floatsData = outFloats ?
        env->GetFloatArrayElements(outFloats, 0) : NULL;
    ...
    jsize i = startIndex;
    jsize di = 0;

    jboolean res = JNI_TRUE;

    // Loop over the format tokens, parsing values out of the buffer into the data arrays
    for (jsize fi=0; fi<NF; fi++) {
        jint mode = formatData[fi];
        if ((mode&PROC_PARENS) != 0) {
            i++;
        } else if ((mode&PROC_QUOTES) != 0) {
            if (buffer[i] == '"') {
                i++;
            } else {
                mode &= ~PROC_QUOTES;
            }
        }
        const char term = (char)(mode&PROC_TERM_MASK);
        const jsize start = i;
        ...
        jsize end = -1;
        if ((mode&PROC_PARENS) != 0) {
            while (i < endIndex && buffer[i] != ')') {
                i++;
            }
            end = i;
            i++;
        } else if ((mode&PROC_QUOTES) != 0) {
            while (buffer[i] != '"' && i < endIndex) {
                i++;
            }
            end = i;
            i++;
        }
        while (i < endIndex && buffer[i] != term) {
            i++;
        }
        if (end < 0) {
            end = i;
        }

        if (i < endIndex) {
            i++;
            if ((mode&PROC_COMBINE) != 0) {
                while (i < endIndex && buffer[i] == term) {
                    i++;
                }
            }
        }

        if ((mode&(PROC_OUT_FLOAT|PROC_OUT_LONG|PROC_OUT_STRING)) != 0) {
            char c = buffer[end];
            buffer[end] = 0;
            if ((mode&PROC_OUT_FLOAT) != 0 && di < NR) {
                char* end;
                floatsData[di] = strtof(buffer+start, &end);
            }
            if ((mode&PROC_OUT_LONG) != 0 && di < NL) {
                char* end;
                longsData[di] = strtoll(buffer+start, &end, 10);
            }
            if ((mode&PROC_OUT_STRING) != 0 && di < NS) {
                jstring str = env->NewStringUTF(buffer+start);
                env->SetObjectArrayElement(outStrings, di, str);
            }
            buffer[end] = c;
            di++;
        }
    }

    // Copy the parsed values back into the out arrays and release them
    env->ReleaseIntArrayElements(format, formatData, 0);
    if (longsData != NULL) {
        env->ReleaseLongArrayElements(outLongs, longsData, 0);
    }
    if (floatsData != NULL) {
        env->ReleaseFloatArrayElements(outFloats, floatsData, 0);
    }
    return res;
}

Sample data

/proc/stat

// [1]user, [2]nice, [3]system, [4]idle, [5]iowait, [6]irq, [7]softirq
// 1. User-mode CPU time accumulated since boot
// 2. CPU time of processes running with a negative nice value
// 3. Kernel-mode CPU time
// 4. Idle time (everything except IO-wait time)
// 5. Time spent waiting for disk IO
// 6. Hardware interrupt time
// 7. Software interrupt time
cpu  76704 76700 81879 262824 17071 10 15879 0 0 0
cpu0 19778 22586 34375 106542 7682 7 10185 0 0 0
cpu1 11460 6197 7973 18043 2151 0 1884 0 0 0
cpu2 17438 20917 13339 24945 2845 1 1822 0 0 0
cpu3 28028 27000 26192 113294 4393 2 1988 0 0 0
intr 4942220 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 602630 0 0 0 0 0 0 0 0 0 0 0 0 0 15460 0 0 0 0 0 0 67118 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1854 8 5 0 10 0 0 0 6328 0 0 0 0 0 0 0 0 0 0 892 0 0 0 0 2 106 2 0 2 0 0 0 2 2 0 0 0 0 0 0 0 0 0 0 0 0 0 7949 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 10256 3838 0 0 0 0 0 0 0 499 69081 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 725052 0 14911 0 0 0 0 0 1054 0 0 0 0 0 0 2073 0 0 0 1371 5 0 659329 654662 0 0 0 0 0 0 0 0 0 6874 0 7 0 0 0 0 913 312 0 0 0 245372 0 0 2637 0 0 0 0 0 0 0 0 0 0 0 0 96 0 0 0 0 0 13906 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 8804 0 0 0 0 0 0 0 0 0 0 0 0 2294 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 6 0 0 0 0 0 0 17 0 0 0 0 0 0 0 0 0 0 0 0 0 0 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 13860 0 0 5 5 0 0 0 0 1380 362 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 7069 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 // 中断信息
ctxt 11866606 // number of context switches since boot
btime 1507554066 // time the system booted, in seconds since the Unix epoch
processes 38582    // number of tasks (processes and threads) created since boot
procs_running 1 // number of tasks currently on the run queue
procs_blocked 0 // number of tasks currently blocked
softirq 2359224 2436 298396 2839 517350 2436 2436 496108 329805 2067 705351 // total and per-type softirq counts
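These are the counters that update() turns into mRelUserTime, mRelSystemTime and friends, and that updateCpuStatsNow() turns into the EventLog percentages. Below is a minimal sketch of the same delta-and-percentage arithmetic done with plain Java IO (the class name and sampling interval are my own choices, not framework code):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public final class CpuStatSampler {
    // user, nice, system, idle, iowait, irq, softirq from the aggregate "cpu" line.
    private static long[] sample() throws IOException {
        try (BufferedReader r = new BufferedReader(new FileReader("/proc/stat"))) {
            String[] f = r.readLine().split("\\s+");   // first line is the aggregate "cpu" row
            long[] v = new long[7];
            for (int i = 0; i < 7; i++) {
                v[i] = Long.parseLong(f[i + 1]);       // f[0] is the literal "cpu"
            }
            return v;
        }
    }

    // Overall busy percentage over a short interval, same formula as updateCpuStatsNow():
    // (user + system + iowait + irq + softirq) * 100 / total.
    public static long busyPercent(long intervalMs) throws IOException, InterruptedException {
        long[] a = sample();
        Thread.sleep(intervalMs);
        long[] b = sample();
        long user = (b[0] + b[1]) - (a[0] + a[1]);     // user + nice, as in ProcessCpuTracker
        long system = b[2] - a[2];
        long idle = b[3] - a[3];
        long iowait = b[4] - a[4];
        long irq = b[5] - a[5];
        long softirq = b[6] - a[6];
        long total = user + system + idle + iowait + irq + softirq;
        if (total == 0) total = 1;
        return (user + system + iowait + irq + softirq) * 100 / total;
    }
}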

/proc/loadavg

// 1-, 5- and 15-minute load averages, runnable/total scheduling entities, and the most recently created PID
10.55 19.87 25.93 2/2082 7475

/proc/1/stat

1 (init) S 0 0 0 0 -1 4194560 2206 161131 0 62 175 635 175 244 20 0 1 0 0 2547712 313 4294967295 32768 669624 3196243632 3196242928 464108 0 0 0 65536 3224056068 0 0 17 3 0 0 0 0 0 676368 693804 712704
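This is the per-process file that collectStats() reads through PROCESS_STATS_FORMAT / PROCESS_FULL_STATS_FORMAT. The second field (the command name in parentheses) may itself contain spaces, which is exactly why the native parser has a PROC_PARENS mode. A minimal sketch of pulling out the fields the tracker cares about, minor/major faults and utime/stime, with plain Java (field indices follow proc(5); the jiffy-to-millisecond scale is device dependent, so it is left as a parameter):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public final class PidStatReader {
    // Simple value holder for the fields ProcessCpuTracker tracks per process.
    public static final class Snapshot {
        public long minorFaults, majorFaults, utimeMs, stimeMs;
    }

    public static Snapshot read(int pid, long jiffyMillis) throws IOException {
        try (BufferedReader r = new BufferedReader(new FileReader("/proc/" + pid + "/stat"))) {
            String line = r.readLine();
            // Skip past "pid (comm)" first, because comm may contain spaces.
            String rest = line.substring(line.lastIndexOf(')') + 2);
            String[] f = rest.split("\\s+");
            // rest starts at field 3 (state), so field N of proc(5) is f[N - 3].
            Snapshot s = new Snapshot();
            s.minorFaults = Long.parseLong(f[10 - 3]);           // field 10: minflt
            s.majorFaults = Long.parseLong(f[12 - 3]);           // field 12: majflt
            s.utimeMs = Long.parseLong(f[14 - 3]) * jiffyMillis; // field 14: utime, in jiffies
            s.stimeMs = Long.parseLong(f[15 - 3]) * jiffyMillis; // field 15: stime, in jiffies
            return s;
        }
    }
}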
