Memory Usage and Monitoring on Android
(0). Two important memory-allocation strategies in Android/Linux.
To save memory and allocate it only on demand, Linux uses two strategies when handing out memory: lazy (deferred) allocation and Copy-On-Write.
Lazy allocation means that when user space requests memory, at first only virtual address space is nominally handed out; the physical memory is allocated only when the memory is actually accessed. This relies on the MMU turning the data abort into a page fault. It largely avoids the memory waste caused by user space over-requesting, or mistakenly requesting, memory.
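The effect of lazy allocation can be observed directly from /proc/self/status: creating an anonymous mapping barely moves RSS, while touching the pages faults them in. A minimal sketch (Linux-specific; assumes /proc is available):

```python
import mmap
import re

def vm_kb(field):
    # Read a "VmSize:"/"VmRSS:"-style field (in kB) from /proc/self/status.
    with open("/proc/self/status") as f:
        return int(re.search(rf"^{field}:\s+(\d+) kB", f.read(), re.M).group(1))

def demo_lazy_alloc(size=64 * 1024 * 1024):
    rss_start = vm_kb("VmRSS")
    buf = mmap.mmap(-1, size)              # only virtual space is handed out here
    rss_mapped = vm_kb("VmRSS") - rss_start
    for off in range(0, size, mmap.PAGESIZE):
        buf[off] = 1                       # each write faults in one physical page
    rss_touched = vm_kb("VmRSS") - rss_start
    buf.close()
    return rss_mapped, rss_touched
```

On a typical Linux machine the first delta is a handful of kB while the second is close to the full 64 MB — the gap between VSS and RSS described above.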
Copy-On-Write means that at fork time the child and parent process use the same memory; only when a block of memory is written is a fresh copy made. This is very visible on Android: upper-layer apps, and system_server as well, are forked from zygote without exec'ing a new binary, so the ART VM/library memory is all shared, which saves a great deal of memory.
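Copy-on-write is easy to see with fork(): parent and child start out sharing pages, and a write in the child gives the child its own private copy without disturbing the parent. A small sketch (POSIX-only):

```python
import os

def demo_cow():
    data = bytearray(b"parent")
    pid = os.fork()
    if pid == 0:
        # Child: this write triggers copy-on-write; only the child's page changes.
        data[:] = b"child!"
        os._exit(0 if bytes(data) == b"child!" else 1)
    _, status = os.waitpid(pid, 0)
    # Parent: its view of the (originally shared) page is untouched.
    return bytes(data), os.WEXITSTATUS(status) == 0
```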
Correspondingly, when evaluating a process's memory use we usually need to look at its virtual address space, the physical memory it really uses, and how much memory it shares (prorated) with other processes, i.e.:
VSS - Virtual Set Size: virtual memory consumed (includes memory used by shared libraries)
RSS - Resident Set Size: physical memory actually used (includes memory used by shared libraries)
PSS - Proportional Set Size: physical memory actually used (shared-library memory prorated across its users)
USS - Unique Set Size: physical memory private to the process (excludes shared libraries)
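The four metrics differ only in how shared pages are charged; PSS in particular splits each shared page evenly among the processes using it. A toy calculation with hypothetical numbers:

```python
def metrics_kb(private_kb, shared_kb, sharers):
    # USS: private pages only; PSS: private plus a fair share of shared pages;
    # RSS: private plus every shared page counted in full.
    uss = private_kb
    pss = private_kb + shared_kb / sharers
    rss = private_kb + shared_kb
    return uss, pss, rss
```

For example a process with 100 kB private memory and 300 kB shared among 3 processes has USS 100, PSS 200 and RSS 400, and in general USS <= PSS <= RSS <= VSS.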
(1). Overall memory usage.
To analyze memory leaks you need to know the overall memory usage and how it is divided, so that you can judge whether the leak is in memory used by user space, kernel space, multi-media, etc., and from there narrow down the specific leak.
User-space memory usually includes memory requested directly by processes — e.g. malloc, which first mmap/sbrk's large chunks and then carves them up, or stack memory, obtained from the system directly via mmap — plus the page cache used by files that user-space processes open, and the memory ZRAM occupies to store compressed user-space memory.
Kernel-space memory usually includes kernel stacks, slub, page tables, vmalloc, shmem, etc.
Multi-media memory is usually allocated through ion, gpu, etc.
Other memory uses generally allocate page-sized units straight from the buddy system; a common Android example is ashmem.
From a process's point of view, the memory a process uses is normally mmap'ed into the process address space before being accessed (note: there are also some very unusual flows where memory is not mmap'ed into the process space), so the process's memory maps are crucial. The corresponding file in an AEE DB is PROCESS_MAPS.
Some key segments:
b1100000-b1180000 rw-p 00000000 00:00 0   [anon:libc_malloc]
This is the space malloc manages through jemalloc; with common malloc leaks you will see such [anon:libc_malloc] regions grow markedly.
address           perms offset   dev    inode pathname
aefe5000-af9fc000 r-xp 00000000 103:0a 25039 /data/app/in.startv.hotstar-c_zk-AatlkkDg2B_FSQFuQ==/lib/arm/libAVEAndroid.so
af9fc000-afa3e000 r--p 00a16000 103:0a 25039 /data/app/in.startv.hotstar-c_zk-AatlkkDg2B_FSQFuQ==/lib/arm/libAVEAndroid.so
afa3e000-afad2000 rw-p 00a58000 103:0a 25039 /data/app/in.startv.hotstar-c_zk-AatlkkDg2B_FSQFuQ==/lib/arm/libAVEAndroid.so
The first segment, "r-xp", is the library's read-only, executable code; the second, "r--p", holds its read-only data; the third, "rw-p", is its writable data segment.
7110f000-71110000 rw-p 00000000 00:00 0   [anon:.bss]
71712000-71713000 rw-p 00000000 00:00 0   [anon:.bss]
71a49000-71a4a000 rw-p 00000000 00:00 0   [anon:.bss]
The BSS (Block Started by Symbol) segment holds the process's uninitialized static and global variables, all zero-filled at initialization. It rarely leaks; its size is essentially fixed at program start.
// java thread
6f5b0b2000-6f5b0b3000 ---p 00000000 00:00 0   [anon:thread stack guard]
6f5b0b3000-6f5b0b4000 ---p 00000000 00:00 0
6f5b0b4000-6f5b1b0000 rw-p 00000000 00:00 0
// native thread
74d0d0e000-74d0d0f000 ---p 00000000 00:00 0   [anon:thread stack guard]
74d0d0f000-74d0e0c000 rw-p 00000000 00:00 0
Memory used by pthread stacks. Note that pthread_create currently only labels the "thread stack guard" at the bottom; the default pthread stack size is 1 MB - 16 KB, and the guard is 4 KB. Note also that for a Java thread, ART reserves one more protected page, used to decide whether a received SIGSEGV is a StackOverflowError.
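On POSIX systems CPython forwards threading.stack_size() to pthread_attr_setstacksize(), so the per-thread stack reservation described above can be tuned explicitly. A small sketch (the 1 MB figure mirrors bionic's default of 1 MB - 16 KB plus a 4 KB guard; glibc defaults differ):

```python
import threading

def run_with_stack(size=1024 * 1024):
    # Request a ~1 MB stack for threads created from here on.
    old = threading.stack_size(size)
    result = []
    t = threading.Thread(target=lambda: result.append("ran"))
    t.start()
    t.join()
    threading.stack_size(old)  # restore the previous setting
    return result[0]
```

Shrinking thread stacks is a common way to cut VSS in processes that spawn many threads.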
7e9cf16000-7e9cf17000 ---p 00000000 00:00 0   [anon:thread signal stack guard]
7e9cf17000-7e9cf1b000 rw-p 00000000 00:00 0   [anon:thread signal stack]
This is the pthread signal stack, 16 KB in size, likewise protected by a guard at the bottom.
7f31245000-7f31246000 ---p 00000000 00:00 0   [anon:bionic TLS guard]
7f31246000-7f31249000 rw-p 00000000 00:00 0   [anon:bionic TLS]
This is the pthread TLS, 12 KB in size, likewise protected by a guard at the bottom.
edce5000-edce6000 rw-s 00000000 00:05 1510969   /dev/ashmem/shared_memory/443BA81EE7976CA437BCBFF7935200B2 (deleted)
This is ashmem, allocated by opening /dev/ashmem. The key is usually to check its name, from which the allocation site can generally be identified. The (deleted) marker means the mmap was done with the MAP_FILE flag and the corresponding path has since been unlinked or no longer exists.
7e8d008000-7e8d306000 rw-s 00000000 00:0a 7438   anon_inode:dmabuf
7e8d306000-7e8d604000 rw-s 00000000 00:0a 7438   anon_inode:dmabuf
7e8d604000-7e8d902000 rw-s 00000000 00:0a 7438   anon_inode:dmabuf
7e8d902000-7e8dc00000 rw-s 00000000 00:0a 7438   anon_inode:dmabuf
These are ion memory segments. An ion buffer's vma is labeled dmabuf, so the ion memory that has been mmap'ed can be totalled directly from these entries.
Note that maps only prints address-space information, i.e. virtual address-space occupancy; how much memory is actually consumed must be checked in /proc/pid/smaps. For example:
7e8ea00000-7e8ee00000 rw-p 00000000 00:00 0   [anon:libc_malloc]
Name:           [anon:libc_malloc]
Size:               4096 kB
Rss:                 888 kB
Pss:                 888 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:       888 kB
Referenced:          888 kB
Anonymous:           888 kB
AnonHugePages:         0 kB
ShmemPmdMapped:        0 kB
Shared_Hugetlb:        0 kB
Private_Hugetlb:       0 kB
Swap:                  0 kB
SwapPss:               0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Locked:                0 kB
VmFlags: rd wr mr mw me nr
This jemalloc region, for instance, spans 4 MB of address space, but its actual RSS = PSS = 888 KB; most of it has not yet been backed by physical memory.
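smaps can also be totalled programmatically; a quick sanity check is that summed Pss can never exceed summed Rss, since PSS only down-weights shared pages. A sketch that sums one field over all mappings of the current process (Linux-only; assumes /proc/self/smaps is readable):

```python
def smaps_total_kb(field):
    # Sum e.g. "Rss:" or "Pss:" lines (values are in kB) across all mappings.
    total = 0
    with open("/proc/self/smaps") as f:
        for line in f:
            if line.startswith(field + ":"):
                total += int(line.split()[1])
    return total
```

The same loop, pointed at /proc/&lt;pid&gt;/smaps, is essentially what showmap and dumpsys meminfo do when they aggregate per-process memory.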
Reading maps by hand is time-consuming; Android provides commands such as procrank, showmap and pmap. procrank sorts the system's processes by memory usage; it generally does not count ion and similar memory. Note that this command is only built into debug images by default.
k71v1_64_bsp:/ # procrank -h
Usage: procrank [ -W ] [ -v | -r | -p | -u | -s | -h ]
    -v  Sort by VSS.
    -r  Sort by RSS.
    -p  Sort by PSS.
    -u  Sort by USS.
    -s  Sort by swap.
        (Default sort order is PSS.)
    -R  Reverse sort order (default is descending).
    -c  Only show cached (storage backed) pages
    -C  Only show non-cached (ram/swap backed) pages
    -k  Only show pages collapsed by KSM
    -w  Display statistics for working set only.
    -W  Reset working set of all processes.
    -o  Show and sort by oom score against lowmemorykiller thresholds.
    -h  Display this help screen.
showmap aggregates and sorts a process's maps/smaps. Note that this command, too, is only built into debug images by default.
k71v1_64_bsp:/ # showmap
showmap [-t] [-v] [-c] [-q]
    -t = terse (show only items with private pages)
    -v = verbose (don't coalesce maps with the same name)
    -a = addresses (show virtual memory map)
    -q = quiet (don't show error if map could not be read)
pmap prints every segment of maps; with -x it also matches the data in smaps and totals PSS, SWAP, etc.
OP46E7:/ # pmap --help
usage: pmap [-xq] [pids...]
Reports the memory map of a process or processes.
    -x  Show the extended format
    -q  Do not display some header/footer lines
To look at memory usage from the system side, people habitually start with a quick look at proc/meminfo; the specific meaning of each field is shared below.
k71v1_64_bsp:/ # cat proc/meminfo
MemTotal:        3849612 kB
MemFree:          206920 kB
MemAvailable:    1836292 kB
Buffers:           73472 kB
Cached:          1571552 kB
SwapCached:        14740 kB
Active:          1165488 kB
Inactive:         865688 kB
Active(anon):     202140 kB
Inactive(anon):   195580 kB
Active(file):     963348 kB
Inactive(file):   670108 kB
Unevictable:        5772 kB
Mlocked:            5772 kB
SwapTotal:       1048572 kB
SwapFree:         787780 kB
Dirty:                32 kB
Writeback:             0 kB
AnonPages:        383924 kB
Mapped:           248488 kB
Shmem:              6488 kB
Slab:             391060 kB
SReclaimable:     199712 kB
SUnreclaim:       191348 kB
KernelStack:       22640 kB
PageTables:        28056 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     2973376 kB
Committed_AS:   42758232 kB
VmallocTotal:   258867136 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
CmaTotal:        2093056 kB
CmaFree:           78916 kB
I will reuse the notes from the kernel documentation: /kernel/Documentation/filesystems/proc.txt
MemTotal:     Total usable ram (i.e. physical ram minus a few reserved bits and the kernel binary code)
MemFree:      The sum of LowFree+HighFree
MemAvailable: An estimate of how much memory is available for starting new applications, without swapping. Calculated from MemFree, SReclaimable, the size of the file LRU lists, and the low watermarks in each zone. The estimate takes into account that the system needs some page cache to function well, and that not all reclaimable slab will be reclaimable, due to items being in use. The impact of those factors will vary from system to system.
Buffers:      Relatively temporary storage for raw disk blocks; shouldn't get tremendously large (20MB or so)
Cached:       in-memory cache for files read from the disk (the pagecache). Doesn't include SwapCached
SwapCached:   Memory that once was swapped out, is swapped back in but still also is in the swapfile (if memory is needed it doesn't need to be swapped out AGAIN because it is already in the swapfile. This saves I/O)
Active:       Memory that has been used more recently and usually not reclaimed unless absolutely necessary.
Inactive:     Memory which has been less recently used. It is more eligible to be reclaimed for other purposes
HighTotal/HighFree: Highmem is all memory above ~860MB of physical memory. Highmem areas are for use by userspace programs, or for the pagecache. The kernel must use tricks to access this memory, making it slower to access than lowmem.
LowTotal/LowFree: Lowmem is memory which can be used for everything that highmem can be used for, but it is also available for the kernel's use for its own data structures. Among many other things, it is where everything from the Slab is allocated. Bad things happen when you're out of lowmem.
SwapTotal:    total amount of swap space available
SwapFree:     Memory which has been evicted from RAM, and is temporarily on the disk
Dirty:        Memory which is waiting to get written back to the disk
Writeback:    Memory which is actively being written back to the disk
AnonPages:    Non-file backed pages mapped into userspace page tables
AnonHugePages: Non-file backed huge pages mapped into userspace page tables
Mapped:       files which have been mmaped, such as libraries
Slab:         in-kernel data structures cache
SReclaimable: Part of Slab, that might be reclaimed, such as caches
SUnreclaim:   Part of Slab, that cannot be reclaimed on memory pressure
PageTables:   amount of memory dedicated to the lowest level of page tables.
NFS_Unstable: NFS pages sent to the server, but not yet committed to stable storage
Bounce:       Memory used for block device "bounce buffers"
WritebackTmp: Memory used by FUSE for temporary writeback buffers
CommitLimit:  Based on the overcommit ratio ('vm.overcommit_ratio'), this is the total amount of memory currently available to be allocated on the system. This limit is only adhered to if strict overcommit accounting is enabled (mode 2 in 'vm.overcommit_memory'). The CommitLimit is calculated with the following formula: CommitLimit = ([total RAM pages] - [total huge TLB pages]) * overcommit_ratio / 100 + [total swap pages]. For example, on a system with 1G of physical RAM and 7G of swap with a `vm.overcommit_ratio` of 30 it would yield a CommitLimit of 7.3G. For more details, see the memory overcommit documentation in vm/overcommit-accounting.
Committed_AS: The amount of memory presently allocated on the system. The committed memory is a sum of all of the memory which has been allocated by processes, even if it has not been "used" by them as of yet. A process which malloc()'s 1G of memory, but only touches 300M of it will show up as using 1G. This 1G is memory which has been "committed" to by the VM and can be used at any time by the allocating application.
With strict overcommit enabled on the system (mode 2 in 'vm.overcommit_memory'), allocations which would exceed the CommitLimit (detailed above) will not be permitted. This is useful if one needs to guarantee that processes will not fail due to lack of memory once that memory has been successfully allocated.
VmallocTotal: total size of vmalloc memory area
VmallocUsed:  amount of vmalloc area which is used
VmallocChunk: largest contiguous block of vmalloc area which is free
From these we can derive some rough "identities":
MemAvailable = free - kernel reserved memory + active file + inactive file + SReclaimable - 2 * zone low watermark
Cached = all file pages - buffers - swapping = Active(file) + Inactive(file) + Unevictable file - Buffers
Slab = SReclaimable + SUnreclaimable
Active = Active(anon) + Active(file)
Inactive = Inactive(anon) + Inactive(file)
AnonPages + Buffers + Cached = Active + Inactive
Buffers + Cached = Active(file) + Inactive(file)
SwapTotal = SwapFree + SwapUsed (not SwapCached)
KernelStack = number of kernel tasks * stack size (16K)
Kernel memory usage = KernelStack + Slab + PageTables + Shmem + Vmalloc
Native memory usage = Mapped + AnonPages + Others

(2). Parsing Android dumpsys meminfo.
From the Android side, Google provides the dumpsys meminfo command to obtain global and per-process memory information. Android exposes a meminfo service in ActivityManagerService that captures a summary of a process's memory use; this has gradually become the mainstream way to judge memory at the Android layer.
adb shell dumpsys meminfo      ==> dump global memory usage.
adb shell dumpsys meminfo pid  ==> dump a single process's memory usage.
One advantage: on a user build without root, the dump can still be taken via sh ==> system_server ==> binder ==> process, sidestepping the permission problem. The complete options are:
OP46E7:/ # dumpsys meminfo -h
meminfo dump options: [-a] [-d] [-c] [-s] [--oom] [process]
  -a: include all available information for each process.
  -d: include dalvik details.
  -c: dump in a compact machine-parseable representation.
  -s: dump only summary of application memory usage.
  -S: dump also SwapPss.
  --oom: only show processes organized by oom adj.
  --local: only collect details locally, don't call process.
  --package: interpret process arg as package, dumping all processes that have loaded that package.
  --checkin: dump data for a checkin
  --proto: dump data to proto
If [process] is specified it can be the name or pid of a specific process to dump.
Below we break down where dumpsys meminfo's numbers come from, to make the output easier to read.
(2.1) Where the system-level numbers come from.
Total RAM: 3,849,612K (status moderate)
 Free RAM: 1,870,085K (   74,389K cached pss + 1,599,904K cached kernel +   195,792K free)
 Used RAM: 1,496,457K (  969,513K used pss +   526,944K kernel)
 Lost RAM:   686,331K
     ZRAM:    48,332K physical used for   260,604K in swap (1,048,572K total swap)
   Tuning: 384 (large 512), oom   322,560K, restore limit   107,520K (high-end-gfx)

Total RAM: /proc/meminfo.MemTotal
Free RAM:
  cached pss    = sum of the PSS of all processes with oom_score_adj >= 900
  cached kernel = /proc/meminfo.Buffers + /proc/meminfo.Cached + /proc/meminfo.SReclaimable - /proc/meminfo.Mapped
  free          = /proc/meminfo.MemFree
Used RAM:
  used pss = total pss - cached pss
  kernel   = /proc/meminfo.Shmem + /proc/meminfo.SUnreclaim + VmallocUsed + /proc/meminfo.PageTables + /proc/meminfo.KernelStack
Lost RAM: /proc/meminfo.MemTotal - (totalPss - totalSwapPss) - /proc/meminfo.MemFree - /proc/meminfo.Cached - kernel used - zram used

(2.2) Where the per-process numbers come from
A single process is reached over binder: the dump calls into the app's ActivityThread.dumpMeminfo, which gathers the statistics.
Native Heap comes from jemalloc; the call path is:
android_os_Debug_getNativeHeapSize() ==> mallinfo() ==> jemalloc
Dalvik Heap is taken from the Java heap via Runtime.
In addition, Pss, Private Dirty, Private Clean and SwapPss are parsed out of the process's smaps.
** MEMINFO in pid 1138 [system] **
                   Pss  Private  Private  SwapPss     Heap     Heap     Heap
                 Total    Dirty    Clean    Dirty     Size    Alloc     Free
                ------   ------   ------   ------   ------   ------   ------
  Native Heap    62318    62256        0        0   137216    62748    74467
  Dalvik Heap    21549    21512        0        0    28644    16356    12288
 Dalvik Other     4387     4384        0        0
        Stack       84       84        0        0
       Ashmem      914      884        0        0
    Other dev      105        0       56        0
     .so mmap    10995     1112     4576        0
    .apk mmap     3912        0     2776        0
    .ttf mmap       20        0        0        0
    .dex mmap    60297       76    57824        0
    .oat mmap     2257        0       88        0
    .art mmap     3220     2788       12        0
   Other mmap     1944        4      672        0
    GL mtrack     5338     5338        0        0
      Unknown     3606     3604        0        0
        TOTAL   180946   102042    66004        0   165860    79104    86755

 App Summary
                       Pss(KB)
                        ------
           Java Heap:    24312
         Native Heap:    62256
                Code:    66452
               Stack:       84
            Graphics:     5338
       Private Other:     9604
              System:    12900
               TOTAL:   180946       TOTAL SWAP PSS:        0

 Objects
               Views:       11         ViewRootImpl:        2
         AppContexts:       20           Activities:        0
              Assets:       15        AssetManagers:        0
       Local Binders:      528        Proxy Binders:     1134
       Parcel memory:      351         Parcel count:      370
    Death Recipients:      627      OpenSSL Sockets:        0
            WebViews:        0

 SQL
         MEMORY_USED:      384
  PAGECACHE_OVERFLOW:       86          MALLOC_SIZE:      117

 DATABASES
      pgsz     dbsz   Lookaside(b)          cache  Dbname
         4       64             85        12/29/8  /data/system_de/0/accounts_de.db
         4       40                         0/0/0  (attached) ceDb: /data/system_ce/0/accounts_ce.db
         4       20             27        54/17/3  /data/system/notification_log.db
Let me explain:
Java Heap:     24312  = Dalvik Heap + .art mmap
Native Heap:   62256
Code:          66452  = .so mmap + .jar mmap + .apk mmap + .ttf mmap + .dex mmap + .oat mmap
Stack:            84
Graphics:       5338  = Gfx dev + EGL mtrack + GL mtrack
Private Other:  9604  = TotalPrivateClean + TotalPrivateDirty - java - native - code - stack - graphics
System:        12900  = TotalPss - TotalPrivateClean - TotalPrivateDirty
The explanations below come from:
https://developer.android.com/studio/profile/investigate-ram?hl=zh-cn
Dalvik Heap
The RAM used by Dalvik allocations in your app. Pss Total includes all Zygote allocations (weighted by the amount of memory shared between processes, per the PSS definition above). The Private Dirty number is the actual RAM committed to only your app's heap, composed of your own allocations and any Zygote allocation pages that have been modified since your app's process was forked from Zygote.
Heap Alloc
The amount of memory that the Dalvik and native heap allocators are tracking for your app. This value is larger than Pss Total and Private Dirty because your process was forked from Zygote and it includes allocations that your process shares with all the others.
.so mmap and .dex mmap
The RAM used by mapped .so (native) and .dex (Dalvik or ART) code. The Pss Total number includes platform code shared across apps; Private Clean is your app's own code. Generally, the actually mapped size is larger — the RAM here is only what currently needs to be resident for the code the app has executed. However, .so mmap has a large private dirty portion, due to fix-ups applied to the native code when it is loaded at its final address.
.oat mmap
The amount of RAM used by the code image, based on the preloaded classes commonly used by multiple apps. This image is shared across all apps and is unaffected by any particular app.
.art mmap
The amount of RAM used by the heap image, based on the preloaded classes commonly used by multiple apps. This image is shared across all apps and is unaffected by any particular app. Even though the ART image contains Object instances, it does not count toward your heap size.
(3). Monitoring memory usage
There are generally two monitoring mechanisms. The first is polling: periodically inspecting memory usage, usually via a script or a daemon. The data collected typically includes:
/proc/meminfo                          overall system memory usage
/proc/zoneinfo                         per-zone memory usage
/proc/buddyinfo                        buddy-system memory status
/proc/slabinfo                         slub memory distribution
/proc/vmallocinfo                      vmalloc memory usage
/proc/zraminfo                         zram usage and the memory it occupies
/proc/mtk_memcfg/slabtrace             detailed slab memory distribution
/proc/vmstat                           system memory broken down by usage type
/sys/kernel/debug/ion/ion_mm_heap      MTK multi-media ion memory usage
/sys/kernel/debug/ion/client_history   rough per-client ion usage statistics
/proc/mali/memory_usage                ARM Mali GPU memory usage, per process
/sys/kernel/debug/mali0/gpu_memory     ARM Mali GPU memory usage, per process
ps -A -T                               all processes/threads: per-process thread counts plus VSS/RSS
dumpsys meminfo                        system memory usage from the Android side
/sys/kernel/debug/mlog                 MTK statistics over a period (~60 s): kernel, user space, ion, gpu, etc.
You can write a script to capture these periodically. mlog deserves a separate mention: it is MTK's lightweight memory log that captures the common memory statistics in one go — kernel (vmalloc, slub, ...), user space (per-process VSS/RSS, ...), ion, gpu — over a window of time, and ships with a graphical tool to display the memory distribution and usage. It is very convenient; please prefer it (tool_for_memory_analysis).
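A polling monitor only needs to snapshot these files on a timer; nodes that don't exist on a given kernel or build are simply skipped. A minimal sketch (the source list is illustrative, not exhaustive):

```python
import time

DEFAULT_SOURCES = ("/proc/meminfo", "/proc/vmstat", "/proc/zoneinfo")

def snapshot(paths=DEFAULT_SOURCES):
    # Read each node in one gulp; missing nodes (e.g. vendor-specific ones)
    # yield None instead of raising.
    out = {}
    for p in paths:
        try:
            with open(p) as f:
                out[p] = f.read()
        except OSError:
            out[p] = None
    return out

def poll(rounds, interval_s, paths=DEFAULT_SOURCES):
    history = []
    for _ in range(rounds):
        history.append(snapshot(paths))
        time.sleep(interval_s)
    return history
```

Diffing successive snapshots of, say, AnonPages or SUnreclaim over hours is usually enough to tell a user-space leak from a kernel one.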
Some screenshots of the tool:
How each class of memory changes over a period of time.
Kernel/User/HW memory statistics over time, and the memory changes of the captured processes over a period.
You are welcome to use it by hand as well.
The second mechanism is a circuit breaker: cap memory usage, and when the cap is reached, deliberately raise an exception and report the error. Normally a system-wide memory leak eventually comes with an OOM, or in severe cases a KE directly. For a single leaking process: if its oom adj < 0 (a daemon service or persist app), its leak usually also ends in a system OOM, because LMK can hardly kill it; an ordinary app that leaks is usually just killed by LMK and rarely harms the system directly. Of course, the process may also fail to obtain memory and hit a JE, NE, or other exception.
For total system memory, we can cap the overall amount by configuration, e.g. limit the system to 2 GB:
(1). ProjectConfig.mk
CUSTOM_CONFIG_MAX_DRAM_SIZE = 0x80000000
Note: CUSTOM_CONFIG_MAX_DRAM_SIZE must be included by AUTO_ADD_GLOBAL_DEFINE_BY_NAME_VALUE
(2). preloader project config file
vendor/mediatek/proprietary/bootable/bootloader/preloader/custom/{project}/{project}.mk
CUSTOM_CONFIG_MAX_DRAM_SIZE = 0x80000000
Note: CUSTOM_CONFIG_MAX_DRAM_SIZE must be exported
For the memory used by an individual process, we can impose limits with setrlimit, e.g. for camerahalserver, via init's rlimit support:
service camerahalserver /vendor/bin/hw/camerahalserver
    class main
    user cameraserver
    group audio camera input drmrpc sdcard_rw system media graphics
    ioprio rt 4
    capabilities SYS_NICE
    writepid /dev/cpuset/camera-daemon/tasks /dev/stune/top-app/tasks
    # limit VSS to 4GB
    rlimit as 0x100000000 0x100000000
    # limit malloc to 1GB
    rlimit data 0x40000000 0x40000000
This caps camerahalserver's VSS at 4 GB and malloc at 1 GB. Once a limit is exceeded, allocations return ENOMEM, which normally produces an NE automatically, so that more information about camerahalserver can be captured.
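The effect of those rlimit lines can be reproduced from any process with setrlimit(): once RLIMIT_AS is capped, a mapping that would push the address space past the cap fails with ENOMEM. A sketch (the 512 MB cap is an arbitrary illustrative value, not the rc file's 4 GB):

```python
import mmap
import resource

def demo_rlimit_as(cap=1 << 29):  # 512 MB address-space cap
    soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    resource.setrlimit(resource.RLIMIT_AS, (cap, hard))
    try:
        mmap.mmap(-1, cap)  # alone as large as the whole cap -> must fail
        return False        # unexpectedly succeeded
    except (OSError, MemoryError):
        return True         # refused with ENOMEM, as init's rlimit would be
    finally:
        resource.setrlimit(resource.RLIMIT_AS, (soft, hard))
```

In a native service the failed allocation typically surfaces as a malloc/mmap error and then an NE, which is exactly the "circuit breaker" behavior described above.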
Note that because services under vendor are started by vendor_init, vendor_init needs the sepolicy below, or the limits will fail to take effect:
/device/mediatek/sepolicy/basic/non_plat/vendor_init.te
allow vendor_init self:global_capability_class_set sys_resource;
The limits can also be hard-coded; see for example:
/frameworks/av/media/libmedia/MediaUtils.cpp
For Java-heap leaks in an app, we can bound the dalvik heap size via system properties. Note that, done this way, the setting affects all Java processes:
[dalvik.vm.heapgrowthlimit]: [384m]
[dalvik.vm.heapsize]: [512m]