X-Git-Url: https://git.llucax.com/personal/website.git/blobdiff_plain/32cb310b42235f42ba8f9f70bb7ba3fe07b99377..2270f1fb21528602c4053cfca0c1d109088b5ddf:/source/blog/posts/2010/08/14-memory-allocation-patterns.rst

diff --git a/source/blog/posts/2010/08/14-memory-allocation-patterns.rst b/source/blog/posts/2010/08/14-memory-allocation-patterns.rst
index c8d3442..d3b8ec0 100644
--- a/source/blog/posts/2010/08/14-memory-allocation-patterns.rst
+++ b/source/blog/posts/2010/08/14-memory-allocation-patterns.rst
@@ -27,13 +27,13 @@ affected by changes like `memory addresses returned by the OS`__ or by some
 information about how much and what kind of memory are requested by the
 different benchmarks.
 
-__ https://www.llucax.com.ar/blog/blog/post/-7a56a111
-__ https://www.llucax.com.ar/blog/blog/post/1490c03e
+__ /blog/blog/post/-7a56a111
+__ /blog/blog/post/1490c03e
 
 I used the information provided by the ``malloc_stats_file`` CDGC__ option, and
 generated some stats.
 
-__ https://www.llucax.com.ar/blog/blog/post/-2c067531
+__ /blog/blog/post/-2c067531
 
 The analysis is done on the allocations requested by the program (calls to
 ``gc_malloc()``) and contrasting that with the real memory allocated by the GC.
@@ -58,7 +58,7 @@ of the blocks). So the idea here is to measure two major things:
 
 * The extra amount of memory wasted by the GC when using precise mode because
   it stores the type information pointer at the end of the blocks.
 
-__ https://www.llucax.com.ar/blog/blog/post/250bf643
+__ /blog/blog/post/250bf643
 
 I've selected a few representative benchmarks. Here are the results:
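
The context lines in the hunks above describe contrasting the memory requested
through ``gc_malloc()`` with the real memory the GC hands out, plus the extra
space precise mode spends on a type-information pointer stored at the end of
each block. Below is a minimal sketch of that kind of comparison, in Python
rather than the post's actual tooling; the bin sizes (powers of two up to
a 4096-byte page), the 8-byte pointer, and every function name here are
assumptions made purely for illustration, not figures taken from the post::

    # Sketch only: compare requested allocation sizes against the block
    # sizes a binned GC would actually hand out, with and without an
    # extra type-info pointer appended in "precise" mode.

    PAGE = 4096
    BINS = [16, 32, 64, 128, 256, 512, 1024, 2048, 4096]  # assumed bins
    POINTER_SIZE = 8  # assumed 64-bit pointer


    def real_block_size(requested, precise=False):
        """Round a request up to the assumed GC block size.

        In precise mode the type-information pointer is assumed to live at
        the end of the block, so it is added before rounding; that is what
        can push a request into the next (larger) bin.
        """
        needed = requested + (POINTER_SIZE if precise else 0)
        for b in BINS:
            if needed <= b:
                return b
        # Larger requests are assumed to take whole pages.
        return (needed + PAGE - 1) // PAGE * PAGE


    def waste_report(requests):
        requested = sum(requests)
        conservative = sum(real_block_size(r) for r in requests)
        precise = sum(real_block_size(r, precise=True) for r in requests)
        print("requested:           %d bytes" % requested)
        print("real (conservative): %d bytes (%d wasted by rounding)"
              % (conservative, conservative - requested))
        print("real (precise):      %d bytes (%d extra for type-info pointers)"
              % (precise, precise - conservative))


    if __name__ == "__main__":
        # A 64-byte request fits the 64-byte bin exactly in conservative
        # mode, but the appended pointer pushes it into the 128-byte bin
        # in precise mode.
        waste_report([10, 30, 64, 100, 5000])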