Hi Dave,

I just downloaded squeaksource.8.image from dan and took a look.  I see you abandoned the PersonalSqueakSource codebase back in Nov-2022.  That's too bad.  What I'd hoped to accomplish with the renovation was not only a more responsive and resilient server, but also to have the relocation to /ss on source.squeak.org encourage collaboration from you and the community, so that we would eventually reach a point where questions like this:

> I’d happily collaborate on this but I need pointers to the code and instructions on how to interact 
> with the running server.

would be as universally known and natural as the Inbox process (although maybe that isn't saying much anyway).  Your comment in the unmerge version (SqueakSource.sscom-dtl.1147) mentions merge issues and startup problems.  I would've tried to help if you'd reached out.  Perhaps we can learn and gain just as much by remaining forked and cherry-picking from each other whatever we deem most appropriate.  I just noticed the performance improvement from Levente last September.  You see, I had dreamt that something like that would simply be committed to /ss by him, and maybe it would trigger an email, as with /trunk and /inbox.  Then we admins could merge fixes into the servers whenever it was worthwhile to do so.

Note that my observations were based on watching files being slowly written to disk while also watching /usr/bin/top.  The activity also correlates with log messages written to ss.log, which is what made me suspect issues with the repository save mechanism.

I don't think saving data.obj was/is related to the client slowness issues.  Why?  Because you're still rightly using SSFilesystem from PersonalSqueakSource (which is good!), which essentially does what Eliot described.  It forks the save at Processor userBackgroundPriority - 1 (29), which is lower than client operations (30).  And although there appears to be a bug that will cause other client save operations to be blocked during the long serialization process (see the attached fix for that, if you wish), *read* operations don't wait on any mutex, so they should remain completely unblocked.  You'd still see 100% CPU during serialization, yes, but client responsiveness should still be fine because the priority-30 client processes preempt the serialization process.
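For illustration, here's a minimal Smalltalk sketch of that pattern.  To be clear, this is not the actual SSFilesystem code; the selectors and variables (saveMutex, serializeTo:, dataFile, projects) are hypothetical names I made up for the sketch:

```smalltalk
saveRepositoryInBackground
	"Fork the serialization one notch below userBackgroundPriority (i.e.,
	at 29), so any runnable priority-30 client process preempts the save.
	The mutex only guards writers; this is where the blocked-save bug
	mentioned above would bite."
	[saveMutex critical: [self serializeTo: self dataFile]]
		forkAt: Processor userBackgroundPriority - 1

readProject: aName
	"Reads take no mutex, so they never wait on a save in progress."
	^ projects at: aName
```

The key design point is just the priority arithmetic: the Squeak scheduler always runs the highest-priority runnable process, so a CPU-bound serialization at 29 burns idle cycles but yields immediately whenever a client request at 30 becomes runnable.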

 - Chris