author     Damien George <damien.p.george@gmail.com>  2018-08-10 15:46:45 +1000
committer  Damien George <damien.p.george@gmail.com>  2018-08-14 16:11:21 +1000
commit     91041945c91bae96d918876547cae48484cd8953 (patch)
tree       24680cd073592edda599d5a3cad7c53bcef762e7 /py/gc.c
parent     02fbb0a4553e5a03bd5a818fa511487ff72e6753 (diff)
py/gc: In gc_alloc, reset n_free var right before search for free mem.
Otherwise there is the possibility that n_free starts out non-zero from the previous iteration, which may have found a few (but not enough) free blocks at the end of the heap. If this is the case, and if the very first blocks that are scanned the second time around (starting at gc_last_free_atb_index) are found to give enough memory (including the blocks at the end of the heap from the previous iteration that left n_free non-zero), then memory will be allocated starting before the location that gc_last_free_atb_index points to, most likely leading to corruption.

This serious bug did not manifest itself in the past because a gc_collect always resets gc_last_free_atb_index to point to the start of the GC heap, and the first block there is almost always allocated to a long-lived object (eg entries from sys.path, or mounted filesystem objects), which means that n_free would be reset at the start of the search loop. But with threading enabled and the GIL disabled it is possible to trigger the bug via the following sequence of events:

1. Thread A runs gc_alloc, fails to find enough memory, and has a non-zero n_free at the end of the search.
2. Thread A calls gc_collect and frees a bunch of blocks on the GC heap.
3. Just after gc_collect finishes in thread A, thread B takes gc_mutex and does an allocation, moving gc_last_free_atb_index to point to the interior of the heap, to a place where there is most likely a run of available blocks.
4. Thread A regains gc_mutex and does its second search for free memory, starting with a non-zero n_free. Since it's likely that the first block it searches is available, it will allocate memory which overlaps with the memory before gc_last_free_atb_index.
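To make the failure mode concrete, the following is a minimal, self-contained sketch of the kind of retry loop involved. It is not the real allocator from py/gc.c: the one-flag-per-block table and the names block_free, last_free_index and alloc_run are simplified stand-ins for the ATB encoding, gc_last_free_atb_index and the gc_collect-and-retry logic. The point it illustrates is that n_free must be zeroed at the top of every search pass; if the count carried over from a failed pass, a partial run at the end of the table could be credited to blocks scanned at the start of the next pass, producing a "run" that begins before the index the scan started from.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define TABLE_LEN 16            /* illustrative: one free/used flag per block */

static bool block_free[TABLE_LEN];
static size_t last_free_index;  /* stand-in for gc_last_free_atb_index */

/* Search for a run of n_blocks consecutive free blocks, scanning from
 * last_free_index.  Returns the index of the first block of the run, or
 * (size_t)-1 if no run is found after two passes. */
static size_t alloc_run(size_t n_blocks) {
    size_t n_free;
    for (int pass = 0; pass < 2; pass++) {
        /* Reset the run counter before every pass (the point of the patch).
         * If n_free kept its value from a failed first pass, free blocks at
         * the end of the table would be counted together with free blocks at
         * the start of the second pass, and the computed run start could lie
         * before last_free_index. */
        n_free = 0;
        for (size_t i = last_free_index; i < TABLE_LEN; i++) {
            if (block_free[i]) {
                if (++n_free >= n_blocks) {
                    return i + 1 - n_blocks;    /* first block of the run */
                }
            } else {
                n_free = 0;
            }
        }
        /* First pass failed: the real code would run gc_collect() here,
         * which resets the scan index to the start of the heap. */
        last_free_index = 0;
    }
    return (size_t)-1;
}

int main(void) {
    for (size_t i = 0; i < TABLE_LEN; i++) {
        block_free[i] = (i >= 12);  /* only a 4-block free run at the very end */
    }
    last_free_index = 8;

    size_t start = alloc_run(6);
    if (start == (size_t)-1) {
        printf("no run of 6 free blocks (correct: only 4 are free)\n");
    } else {
        printf("run starts at block %zu\n", start);
    }
    return 0;
}

With the buggy ordering (n_free initialised once, outside the pass loop) the same setup could report a run whose start index is smaller than the index the scan began at, which in the real allocator means handing out memory that overlaps an existing allocation.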
Diffstat (limited to 'py/gc.c')
-rw-r--r--  py/gc.c  3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/py/gc.c b/py/gc.c
index 4f5793bf5..072572435 100644
--- a/py/gc.c
+++ b/py/gc.c
@@ -453,7 +453,7 @@ void *gc_alloc(size_t n_bytes, bool has_finaliser) {
     size_t i;
     size_t end_block;
     size_t start_block;
-    size_t n_free = 0;
+    size_t n_free;
     int collected = !MP_STATE_MEM(gc_auto_collect_enabled);
 
 #if MICROPY_GC_ALLOC_THRESHOLD
@@ -468,6 +468,7 @@ void *gc_alloc(size_t n_bytes, bool has_finaliser) {
     for (;;) {
         // look for a run of n_blocks available blocks
+        n_free = 0;
         for (i = MP_STATE_MEM(gc_last_free_atb_index); i < MP_STATE_MEM(gc_alloc_table_byte_len); i++) {
             byte a = MP_STATE_MEM(gc_alloc_table_start)[i];
             if (ATB_0_IS_FREE(a)) { if (++n_free >= n_blocks) { i = i * BLOCKS_PER_ATB + 0; goto found; } } else { n_free = 0; }
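For readers unfamiliar with the allocation table touched by the second hunk: py/gc.c tracks heap blocks two bits at a time, packing four blocks into each allocation-table byte, which is what BLOCKS_PER_ATB and ATB_0_IS_FREE() refer to. The snippet below restates that encoding as a small standalone example; the macro values are reproduced from memory of the file around the time of this commit, so treat them as an assumption and check the actual source.

#include <stdio.h>

/* Illustrative only: the 2-bits-per-block allocation-table encoding that
 * ATB_0_IS_FREE() and BLOCKS_PER_ATB in the hunk above refer to. */
#define AT_FREE          (0)     /* block is unused */
#define AT_HEAD          (1)     /* first block of an allocation */
#define AT_TAIL          (2)     /* continuation block of an allocation */
#define AT_MARK          (3)     /* head block, marked during gc_collect */

#define BLOCKS_PER_ATB   (4)     /* 4 blocks x 2 bits fit in one table byte */
#define ATB_MASK_0       (0x03)  /* bits of the first block tracked by a byte */
#define ATB_0_IS_FREE(a) (((a) & ATB_MASK_0) == 0)

int main(void) {
    /* One table byte describing 4 blocks: block 0 = HEAD, block 1 = TAIL,
     * blocks 2 and 3 = FREE (their bit pairs are zero). */
    unsigned char a = (AT_HEAD << 0) | (AT_TAIL << 2) | (AT_FREE << 4) | (AT_FREE << 6);
    printf("block 0 free? %s\n", ATB_0_IS_FREE(a) ? "yes" : "no");
    return 0;
}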