Ashmem: Enhancing Android Memory Allocation and Sharing


What is Ashmem?


Ashmem (Anonymous Shared Memory) is a mechanism provided by Android's memory management. It is built on the mmap system call: different processes map the same region of physical memory into their own virtual address spaces, and thereby share it.

The mmap mechanism

The mmap system call maps an open file into a process's user address space, so processes can share memory by mapping the same ordinary file. Once a file has been mapped into the address space, the process accesses it like ordinary memory, without further read() or write() calls. (A short user-space example follows the parameter list below.)

The mmap prototype:

void *mmap(void *addr, size_t length, int prot, int flags, int fd, off_t offset);

addr: the starting address in the process's address space at which fd should be mapped. It is usually passed as NULL, which tells the kernel to choose the starting address itself.

length: the number of bytes to map into the calling process's address space, counted from offset bytes into the mapped file.

prot: the protection of the mapped region. The common value is PROT_READ | PROT_WRITE for read/write access; PROT_EXEC (executable) and PROT_NONE (no access) are also available.

flags: exactly one of MAP_SHARED or MAP_PRIVATE must be chosen, optionally combined with MAP_FIXED. With MAP_PRIVATE, changes the calling process makes to the mapped data are visible only to that process and do not modify the underlying object. With MAP_SHARED, changes are visible to all processes sharing the object and do modify the underlying object.

fd: the file descriptor of the file to map.

offset: the offset into the file at which the mapping starts; usually 0.
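
To make the call concrete, here is a minimal user-space sketch of two processes sharing memory by mapping the same ordinary file. The file path and size are arbitrary demo values:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define SHARED_SIZE 4096          /* one page, chosen arbitrarily for the demo */

int main(void)
{
    /* Any ordinary file works; "/tmp/mmap-demo" is just an example path. */
    int fd = open("/tmp/mmap-demo", O_CREAT | O_RDWR, 0600);
    if (fd < 0 || ftruncate(fd, SHARED_SIZE) < 0)
        return 1;

    /* MAP_SHARED: writes are visible to every process mapping this file. */
    char *mem = mmap(NULL, SHARED_SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED)
        return 1;

    if (fork() == 0) {                 /* child writes through its mapping */
        strcpy(mem, "hello from child");
        _exit(0);
    }

    wait(NULL);                        /* parent reads the same physical pages */
    printf("parent sees: %s\n", mem);

    munmap(mem, SHARED_SIZE);
    close(fd);
    return 0;
}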

Ashmem's improvements over mmap

Through its kernel driver, ashmem adds a mechanism (pin/unpin) that assists the kernel's memory-reclaim algorithm.

What are pin and unpin?

Concretely: if you have allocated a block of memory with Ashmem but part of it is not currently in use, you can unpin that part. Once unpinned, the kernel may reclaim the corresponding physical pages for other purposes. The process can still access the unpinned memory again later, because the pages can be faulted back in through the page-fault handler; unpinning does not change the already-mmap'ed address range.
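
To make pin/unpin concrete, here is a hedged user-space sketch. It assumes the uapi header is reachable as <linux/ashmem.h> (on Android builds it may live elsewhere or be copied locally), and the sizes and offsets are arbitrary demo values; struct ashmem_pin and the ioctl names appear in the header excerpt in the next section:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/ashmem.h>   /* assumed location of the uapi header */

int main(void)
{
    int fd = open("/dev/ashmem", O_RDWR);
    if (fd < 0)
        return 1;

    /* The size must be set before mmap (see ashmem_mmap later). */
    ioctl(fd, ASHMEM_SET_SIZE, 2 * 4096);

    char *p = mmap(NULL, 2 * 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED)
        return 1;

    /* Tell the kernel the second page is not needed for now. */
    struct ashmem_pin pin = { .offset = 4096, .len = 4096 };
    ioctl(fd, ASHMEM_UNPIN, &pin);

    /* ... later, before touching that page again, pin it back and check
     * whether its contents were discarded while it was unpinned. */
    if (ioctl(fd, ASHMEM_PIN, &pin) == ASHMEM_WAS_PURGED)
        printf("page was purged, must be re-initialized\n");

    munmap(p, 2 * 4096);
    close(fd);
    return 0;
}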

The Ashmem definitions

Let us first look at part of the ashmem header file (ashmem.h):

#define ASHMEM_NAME_LEN		256		/* maximum region name length */
#define ASHMEM_NAME_DEF		"dev/ashmem"	/* default region name */

/* Return values from ASHMEM_PIN: was the memory purged while unpinned? */
#define ASHMEM_NOT_PURGED	0
#define ASHMEM_WAS_PURGED	1

/* Return values from ASHMEM_GET_PIN_STATUS: is the memory pinned or unpinned? */
#define ASHMEM_IS_UNPINNED	0
#define ASHMEM_IS_PINNED	1

struct ashmem_pin {
	__u32 offset;	/* offset into the region, in bytes */
	__u32 len;	/* length from offset, in bytes */
};

How is Ashmem implemented?

Let us now walk through the Ashmem implementation code (ashmem.c) to see how it works.

First, look at the two structures ashmem_area and ashmem_range:

/*
 * ashmem_area - an anonymous shared memory area
 * Lifecycle: from our parent file's open() until its release()
 * Locking: protected by `ashmem_mutex'
 * Big Note: mappings do NOT pin this structure; it dies on close()
 */
struct ashmem_area {
	char name[ASHMEM_FULL_NAME_LEN];	/* optional name shown in /proc/<pid>/maps */
	struct list_head unpinned_list;		/* list of this area's unpinned ranges */
	struct file *file;			/* the shmem-based backing file */
	size_t size;				/* size of the area, in bytes */
	unsigned long prot_mask;		/* allowed protection bits for mappings */
};

As we can see, ashmem_area describes one shared memory area. Its lifetime runs from the file's open() until its release(), and the structure is protected by the ashmem_mutex lock.

/*
 * ashmem_range - represents an interval of unpinned (evictable) pages
 * Lifecycle: from unpin to pin
 * Locking: protected by `ashmem_mutex'
 */
struct ashmem_range {
	struct list_head lru;		/* entry in the global LRU list */
	struct list_head unpinned;	/* entry in the owning area's unpinned list */
	struct ashmem_area *asma;	/* the ashmem area this range belongs to */
	size_t pgstart;			/* first page of the range */
	size_t pgend;			/* last page of the range */
	unsigned int purged;		/* ASHMEM_NOT_PURGED or ASHMEM_WAS_PURGED */
};

Note that an ashmem_range lives from the moment its pages are unpinned until they are pinned again.

Initialization - ashmem_init(void)

static int __init ashmem_init(void)
{
	int ret;

	ashmem_area_cachep = kmem_cache_create("ashmem_area_cache",
					       sizeof(struct ashmem_area),
					       0, 0, NULL);
	if (unlikely(!ashmem_area_cachep)) {
		pr_err("failed to create slab cache\n");
		return -ENOMEM;
	}

	ashmem_range_cachep = kmem_cache_create("ashmem_range_cache",
						sizeof(struct ashmem_range),
						0, 0, NULL);
	if (unlikely(!ashmem_range_cachep)) {
		pr_err("failed to create slab cache\n");
		return -ENOMEM;
	}

	ret = misc_register(&ashmem_misc);
	if (unlikely(ret)) {
		pr_err("failed to register misc device!\n");
		return ret;
	}

	register_shrinker(&ashmem_shrinker);

	pr_info("initialized\n");

	return 0;
}

From the code we can see that ashmem_init(void) mainly does the following (the shrinker it registers is sketched after the list):

Create the ashmem_area slab cache via kmem_cache_create[1]

Create the ashmem_range slab cache via kmem_cache_create

Register Ashmem as a misc device[2] via misc_register

Register the memory-reclaim callback via register_shrinker
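
For reference, the shrinker handed to register_shrinker is, roughly as declared in the 3.10 staging driver, a struct shrinker whose callback is the ashmem_shrink function discussed at the end of this article:

/* Approximate excerpt from the same 3.10-era ashmem.c: the shrinker that
 * register_shrinker() installs. "seeks" weights how aggressively the core
 * page-reclaim code will call back into ashmem_shrink(). */
static struct shrinker ashmem_shrinker = {
	.shrink = ashmem_shrink,
	.seeks = DEFAULT_SEEKS * 4,
};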

Exit - ashmem_exit(void)

static void __exit ashmem_exit(void)
{
	int ret;

	unregister_shrinker(&ashmem_shrinker);

	ret = misc_deregister(&ashmem_misc);
	if (unlikely(ret))
		pr_err("failed to unregister misc device!\n");

	kmem_cache_destroy(ashmem_range_cachep);
	kmem_cache_destroy(ashmem_area_cachep);

	pr_info("unloaded\n");
}

The code shows everything that is done at exit time:

Unregister the reclaim callback with unregister_shrinker

Deregister the misc device with misc_deregister

Destroy the two slab caches (ashmem_area & ashmem_range) with kmem_cache_destroy

Allocating, releasing, and reclaiming memory

Let us first look at how memory is allocated with Ashmem (a user-space sketch follows the list):

Open the "/dev/ashmem" device file

Use ioctl to set the name, size, and so on

Call mmap to map the space allocated by Ashmem into the process's address space

Each time /dev/ashmem is opened and mmap'ed, an independent region is obtained
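
A minimal user-space sketch of this allocation flow (essentially what the ashmem_create_region helper in libcutils does; the helper name and error handling here are simplified for illustration, and the <linux/ashmem.h> include path is an assumption):

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/ashmem.h>   /* assumed uapi header location */

/* Open /dev/ashmem, name and size the region, and map it into this process.
 * The returned fd can be passed to another process (e.g. over Binder), which
 * then mmap()s the same fd to share the same physical pages. */
static void *ashmem_alloc(const char *name, size_t size, int *out_fd)
{
    int fd = open("/dev/ashmem", O_RDWR);
    if (fd < 0)
        return NULL;

    char buf[ASHMEM_NAME_LEN];
    strncpy(buf, name, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';
    ioctl(fd, ASHMEM_SET_NAME, buf);   /* must happen before mmap */
    ioctl(fd, ASHMEM_SET_SIZE, size);  /* ditto: ashmem_mmap rejects size 0 */

    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) {
        close(fd);
        return NULL;
    }
    *out_fd = fd;
    return p;
}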

When initializing Ashmem we registered the Ashmem misc device; the file operations it provides, and their roles, are shown in the code below:

static const struct file_operations ashmem_fops = {
	.owner = THIS_MODULE,
	.open = ashmem_open,
	.release = ashmem_release,
	.read = ashmem_read,
	.llseek = ashmem_llseek,
	.mmap = ashmem_mmap,
	.unlocked_ioctl = ashmem_ioctl,
#ifdef CONFIG_COMPAT
	.compat_ioctl = compat_ashmem_ioctl,
#endif
};

static struct miscdevice ashmem_misc = {
	.minor = MISC_DYNAMIC_MINOR,
	.name = "ashmem",
	.fops = &ashmem_fops,
};

ashmem_open mainly initializes the unpinned list and stores the newly allocated ashmem_area in the file structure's private_data, so every open() produces its own independent area; sharing it with another process requires passing the same file descriptor (for example over Binder). ashmem_release removes the area's ranges from the list, frees them, and then frees the area itself.

The ashmem_open method

static int ashmem_open(struct inode *inode, struct file *file)
{
	struct ashmem_area *asma;
	int ret;

	ret = generic_file_open(inode, file);
	if (unlikely(ret))
		return ret;

	asma = kmem_cache_zalloc(ashmem_area_cachep, GFP_KERNEL);
	if (unlikely(!asma))
		return -ENOMEM;

	INIT_LIST_HEAD(&asma->unpinned_list);
	memcpy(asma->name, ASHMEM_NAME_PREFIX, ASHMEM_NAME_PREFIX_LEN);
	asma->prot_mask = PROT_MASK;
	file->private_data = asma;

	return 0;
}

The ashmem_release method

static int ashmem_release(struct inode *ignored, struct file *file)
{
	struct ashmem_area *asma = file->private_data;
	struct ashmem_range *range, *next;

	mutex_lock(&ashmem_mutex);
	list_for_each_entry_safe(range, next, &asma->unpinned_list, unpinned)
		range_del(range);
	mutex_unlock(&ashmem_mutex);

	if (asma->file)
		fput(asma->file);
	kmem_cache_free(ashmem_area_cachep, asma);

	return 0;
}

Note that list_for_each_entry_safe(pos, n, head, member) requires the caller to supply an extra pointer n of the same type as pos. On each iteration it stores the address of pos's next node in n, so the list is not broken when the pos node itself is freed.
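
A tiny kernel-style illustration of why the _safe variant matters when nodes are deleted during iteration (the node type and function here are invented for the example):

#include <linux/list.h>
#include <linux/slab.h>

struct demo_node {
	int value;
	struct list_head link;      /* the "member" argument */
};

static void drop_negative(struct list_head *head)
{
	struct demo_node *pos, *n;  /* n caches pos->link.next each iteration */

	/* Plain list_for_each_entry() would dereference pos->link.next AFTER
	 * pos has been freed; the _safe variant reads it into n first. */
	list_for_each_entry_safe(pos, n, head, link) {
		if (pos->value < 0) {
			list_del(&pos->link);
			kfree(pos);
		}
	}
}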

Next, the allocated space is mapped into the process's address space. The notable point in ashmem_mmap is that it relies on the Linux kernel's shmem_file_setup helper to create the backing file, so the driver does not have to implement that complexity itself. The whole of ashmem_mmap is therefore quite simple; here is its source:

static int ashmem_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct ashmem_area *asma = file->private_data;
	int ret = 0;

	mutex_lock(&ashmem_mutex);

	/* user needs to SET_SIZE before mapping */
	if (unlikely(!asma->size)) {
		ret = -EINVAL;
		goto out;
	}

	/* requested protection bits must match our allowed protection mask */
	if (unlikely((vma->vm_flags & ~calc_vm_prot_bits(asma->prot_mask)) &
		     calc_vm_prot_bits(PROT_MASK))) {
		ret = -EPERM;
		goto out;
	}
	vma->vm_flags &= ~calc_vm_may_flags(~asma->prot_mask);

	if (!asma->file) {
		char *name = ASHMEM_NAME_DEF;
		struct file *vmfile;

		if (asma->name[ASHMEM_NAME_PREFIX_LEN] != '\0')
			name = asma->name;

		/* ... and allocate the backing shmem file */
		vmfile = shmem_file_setup(name, asma->size, vma->vm_flags);
		if (unlikely(IS_ERR(vmfile))) {
			ret = PTR_ERR(vmfile);
			goto out;
		}
		asma->file = vmfile;
	}
	get_file(asma->file);

	if (vma->vm_flags & VM_SHARED)
		shmem_set_file(vma, asma->file);
	else {
		if (vma->vm_file)
			fput(vma->vm_file);
		vma->vm_file = asma->file;
	}

out:
	mutex_unlock(&ashmem_mutex);
	return ret;
}

Finally, let us look at how a mapped range is pinned and unpinned through ioctl. ashmem_ioctl dispatches on its cmd argument and handles many operations: setting and getting the name and size, pin/unpin, and querying pin status. The pin/unpin work ultimately ends up in the two functions below:

static int ashmem_pin(struct ashmem_area *asma, size_t pgstart, size_t pgend)
{
	struct ashmem_range *range, *next;
	int ret = ASHMEM_NOT_PURGED;

	list_for_each_entry_safe(range, next, &asma->unpinned_list, unpinned) {
		/* moved past last applicable page; we can short circuit */
		if (range_before_page(range, pgstart))
			break;

		/*
		 * The user can ask us to pin pages that span multiple ranges,
		 * or to pin pages that aren't even unpinned, so this is messy.
		 *
		 * Four cases:
		 * 1. The requested range subsumes an existing range, so we
		 *    just remove the entire matching range.
		 * 2. The requested range overlaps the start of an existing
		 *    range, so we just update that range.
		 * 3. The requested range overlaps the end of an existing
		 *    range, so we just update that range.
		 * 4. The requested range punches a hole in an existing range,
		 *    so we have to update one side of the range and then
		 *    create a new range for the other side.
		 */
		if (page_range_in_range(range, pgstart, pgend)) {
			ret |= range->purged;

			/* Case #1: Easy. Just nuke the whole thing. */
			if (page_range_subsumes_range(range, pgstart, pgend)) {
				range_del(range);
				continue;
			}

			/* Case #2: We overlap from the start, so adjust it */
			if (range->pgstart >= pgstart) {
				range_shrink(range, pgend + 1, range->pgend);
				continue;
			}

			/* Case #3: We overlap from the rear, so adjust it */
			if (range->pgend <= pgend) {
				range_shrink(range, range->pgstart, pgstart - 1);
				continue;
			}

			range_alloc(asma, range, range->purged,
				    pgend + 1, range->pgend);
			range_shrink(range, range->pgstart, pgstart - 1);
			break;
		}
	}

	return ret;
}

static int ashmem_unpin(struct ashmem_area *asma, size_t pgstart, size_t pgend)
{
	struct ashmem_range *range, *next;
	unsigned int purged = ASHMEM_NOT_PURGED;

restart:
	list_for_each_entry_safe(range, next, &asma->unpinned_list, unpinned) {
		/* short circuit: this is our insertion point */
		if (range_before_page(range, pgstart))
			break;

		if (page_range_subsumed_by_range(range, pgstart, pgend))
			return 0;
		if (page_range_in_range(range, pgstart, pgend)) {
			pgstart = min_t(size_t, range->pgstart, pgstart),
			pgend = max_t(size_t, range->pgend, pgend);
			purged |= range->purged;
			range_del(range);
			goto restart;
		}
	}

	return range_alloc(asma, range, purged, pgstart, pgend);
}

One last point: the reclaim callback ashmem_shrink (registered through ashmem_shrinker) follows the same shrinker interface the Linux kernel's slab/page-reclaim path uses for its callbacks. Its implementation is as follows:

static int ashmem_shrink(struct shrinker *s, struct shrink_control *sc)
{
	struct ashmem_range *range, *next;

	/* We might recurse into filesystem code, so bail out if necessary */
	if (sc->nr_to_scan && !(sc->gfp_mask & __GFP_FS))
		return -1;
	if (!sc->nr_to_scan)
		return lru_count;

	if (!mutex_trylock(&ashmem_mutex))
		return -1;

	list_for_each_entry_safe(range, next, &ashmem_lru_list, lru) {
		loff_t start = range->pgstart * PAGE_SIZE;
		loff_t end = (range->pgend + 1) * PAGE_SIZE;

		range->asma->file->f_op->fallocate(range->asma->file,
				FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
				start, end - start);
		range->purged = ASHMEM_WAS_PURGED;
		lru_del(range);

		sc->nr_to_scan -= range_size(range);
		if (sc->nr_to_scan <= 0)
			break;
	}
	mutex_unlock(&ashmem_mutex);

	return lru_count;
}

ashmem_shrink likewise takes ashmem_mutex first (via mutex_trylock) and walks the LRU list with list_for_each_entry_safe so ranges can be removed safely during iteration. The callback is invoked from mm/vmscan.c::shrink_slab. The nr_to_scan parameter says how many page objects should be scanned; when it is 0 the call is only a query, and the function returns the total number of reclaimable pages (lru_count). gfp_mask describes the allocation context of the caller. The return value is the number of reclaimable pages left after the scan; -1 is returned either when gfp_mask does not allow recursing into filesystem code (__GFP_FS not set) or when ashmem_mutex cannot be acquired without risking deadlock.
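
For completeness, the lru_del()/lru_count bookkeeping used above is straightforward; the helpers look approximately like the following (reconstructed from memory of the 3.10 driver, so treat it as a sketch and check the linked ashmem.c for the exact code):

static LIST_HEAD(ashmem_lru_list);	/* all unpinned, not-yet-purged ranges */
static unsigned long lru_count;		/* number of pages currently on the LRU */

/* range_size() is the page count of a range: pgend - pgstart + 1. */

static inline void lru_add(struct ashmem_range *range)
{
	list_add_tail(&range->lru, &ashmem_lru_list);
	lru_count += range_size(range);
}

static inline void lru_del(struct ashmem_range *range)
{
	list_del(&range->lru);
	lru_count -= range_size(range);
}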

The code analyzed in this article is from the Android 3.10 kernel (ashmem.c[3] & ashmem.h[4]).

====

[1] kmem_cache_create(const char *name, size_t size, size_t align, unsigned long flags, void (*ctor)(void *)) creates a slab cache.

[2] "misc" here refers to a Linux miscellaneous character device: a simple character device registered under the shared misc major number with its own minor number; registering one is what creates /dev/ashmem.

[3] https://github.com/android/kernel_common/blob/android-3.10/drivers/staging/android/ashmem.c

[4] https://github.com/android/kernel_common/blob/android-3.10/drivers/staging/android/uapi/ashmem.h
