Hi Ryan,
On Sat, Oct 14, 2017 at 11:35 PM, Ryan Macnak rmacnak@gmail.com wrote:
Why is filling the remembered set fatal? It's possible to empty the remembered set by promoting everything in new space.
It should not be fatal unless there is no space to grow the remembered set. Assuming we don't want to recover from an overflowed remembered set by scanning all pointer objects in old space at the next scavenge, the remembered set must always remain valid: all old-space objects with a new-space reference must be in the remembered set, and hence the remembered set must grow on demand.
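For clarity, the grow-on-demand behavior described above can be sketched as follows. This is only an illustrative sketch in C, not the actual Cog/Spur VM code; all names here are hypothetical, and the real VM allocates the remembered set from old-space memory rather than via malloc:

```c
#include <stdlib.h>

/* Hypothetical sketch of a remembered set that grows on demand.
 * Entries are old-space objects holding at least one new-space reference. */
typedef struct {
    void **entries;
    size_t count;
    size_t capacity;
} RememberedSet;

/* Returns 1 on success, 0 on allocation failure. */
int rememberedSetInit(RememberedSet *rs, size_t capacity) {
    rs->entries = malloc(capacity * sizeof *rs->entries);
    if (!rs->entries)
        return 0;
    rs->count = 0;
    rs->capacity = capacity;
    return 1;
}

/* Record an old-space object with a new-space reference.
 * Doubles the backing store when full. A failure to grow is what
 * would be fatal in the VM: the set must stay complete, or the next
 * scavenge would miss live new-space objects. */
int rememberedSetAdd(RememberedSet *rs, void *obj) {
    if (rs->count == rs->capacity) {
        size_t newCapacity = rs->capacity * 2;
        void **grown = realloc(rs->entries, newCapacity * sizeof *grown);
        if (!grown)
            return 0; /* no room to grow: cannot keep the set valid */
        rs->entries = grown;
        rs->capacity = newCapacity;
    }
    rs->entries[rs->count++] = obj;
    return 1;
}
```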
I don't know what's happening here, but indeed it looks like no old-space heap space is available to grow the remembered set. It could be because old space is full, or because the remembered set is very large; I don't know which yet. But on remembered set management, the scavenger "does run" (is intended to run) in a mode designed to reduce the remembered set size whenever the remembered set is more than 3/4 full.
I think I may have some poor code in setting the tide limit for this 3/4-full mark. Further, the algorithm to shrink the remembered set, which uses reference counting to find the set of new-space objects whose tenuring will most profitably reduce the remembered set size, may be buggy.
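The shrink policy described in the last two paragraphs can be sketched roughly like this. Again a hypothetical illustration, not the VM's code: the 3/4-full threshold is from the discussion above, and the "pick the most-referenced new-space object" heuristic stands in for the real reference-counting pass over the remembered set:

```c
#include <stddef.h>

/* Hypothetical check: should the next scavenge run in
 * "tenure to shrink the remembered set" mode?
 * Triggers when the set is more than 3/4 full. */
int shouldShrinkRememberedSet(size_t count, size_t capacity) {
    return count > (capacity / 4) * 3;
}

/* Hypothetical selection step: given, for each new-space object, a
 * count of how many remembered old-space objects reference it, pick
 * the object whose tenuring removes the most remembered-set entries
 * (once tenured, those referents no longer need remembering). */
size_t mostProfitableToTenure(const size_t refCounts[], size_t n) {
    size_t best = 0;
    for (size_t i = 1; i < n; i++) {
        if (refCounts[i] > refCounts[best])
            best = i;
    }
    return best;
}
```

A bug in either piece, a wrong threshold, or miscounted references would match the symptom discussed here: the set keeps growing until old space can no longer accommodate it.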
On Oct 9, 2017 10:12 PM, "Clément Bera" bera.clement@gmail.com wrote:
Hi,
This is a different limit from the out-of-memory error: too many references from old objects to young objects. The limit cannot be changed directly from the image. However, if you are using Spur, you can try changing the young space size, which also changes the remembered table size and might fix your problem. To do so, evaluate: Smalltalk vm parameterAt: 45 put: (Smalltalk vm parameterAt: 44) * 4. And then *restart the image.* See the section TUNING NEW SPACE SIZE at https://clementbera.wordpress.com/2017/03/12/tuning-the-pharo-garbage-collector/ for more info about that.
If you are using Spur, could you send us the image (if it is 500 MB, could you put it up for download on Dropbox or something like that)? That way we can reproduce the problem and see what is possible. Normally in Spur, if the remembered table grows too big, a tenure pass to shrink it happens, so that error should not occur. Eliot is currently moving to another place, so he might be busy. If he is available to answer, I guess he will have a look; if not, I can have a look today or Thursday. However, I am not interested in fixing pre-Spur VMs.
Regards,
On Mon, Oct 9, 2017 at 11:57 PM, Phil B pbpublist@gmail.com wrote:
Is this effectively an out of memory error or am I hitting some other internal VM limit? (I.e. can the limit be increased or is it a hard limit?) I'm running into this when using the reference finder tool in a Cuis image. (It's a moderately large image at ~500 meg)