On 31/10/2007, Joshua Gargus <schwa@fastmail.us> wrote:
Then I wonder, why don't they drop the idea of having shared memory at all?
It's convenient for programmers. Aside from the huge complexity of programming everything this way, we might also have to program AMD chips differently from Intel ones (at least until a standard emerged).
Each CPU could then have its own memory, and they could interact by sending messages in a network-style fashion. We would then write code which uses such an architecture in the best way. But while this is not the case, should we assume that such code will run faster than code which 'knows' that there is a single shared memory for all CPUs and uses that knowledge in the best way?
Could you restate this? I don't understand what you mean.
I don't know what to add to the above. I just said that we should use the approach which best fits the architecture our project(s) will run on. Of course, what the best fit is is arguable. But I don't think we should drop shared-memory support when we are building a system on top of an architecture which has it.
I think that your proposal is very "clever", elegant, and fun to think about.
Thanks :)
You're welcome :-)
Very briefly, because this is Andreas's idea (my original one was similar but worse), and I think I convinced him to write it up. My take on it is to rework the VM so that it can support multiple images in one process, each with its own thread. Give each image an event loop to process messages from other images. Make it easy for an image to spawn another image. A small Spoon-like image could be spawned in very few milliseconds.
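To make the event-loop idea a bit more concrete, here is a rough sketch of the messaging style using only things that exist in Squeak today (SharedQueue and block fork). In the proposed design each loop would live in its own image and native thread; here both sides run in one image, so this only illustrates the shape of the idea, not the multi-image VM itself:

    "Hypothetical single-image sketch of the event-loop messaging style.
     SharedQueue and fork are existing Squeak facilities; the multi-image
     VM and its spawn/send API do not exist yet."
    | inbox |
    inbox := SharedQueue new.
    [ | msg |
      [ (msg := inbox next) = #stop ] whileFalse:
          [ Transcript show: 'worker got: ', msg printString; cr ] ] fork.
    inbox nextPut: 42.
    inbox nextPut: #stop.

In the actual proposal, the nextPut: would instead enqueue a message on another image's event loop, and spawning a small Spoon-like image would take the place of the fork.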
I like this approach because it is dead simple, and it keeps all objects that can talk directly to one another on the same processor. Because of its simplicity, it is a low-risk way to quickly get somewhere useful. Once we have some experience with it, we can decide whether we need finer-grained object access between threads (as I've said, I don't think that we will).
Ah, yes, this was mentioned before. And I like that idea in general because of its simplicity, but I don't like the memory overhead and some other issues, like:
- persistence
- generic usage
Let's imagine that I run Squeak, start a couple of servers, then start a Tetris game and play it. Then at some moment I feel the urgent need to shut down my PC to insert an additional 64 cores into my CPU, so I can run two Tetrises instead of one. ;) What is interesting now is how I can have a dead-simple, one-click 'save image(s)' action, so that after I restart the system I am able to continue running the same set of images from the same point, as we can in current single-image Squeak. If we store each image in its own file(s), then soon there will be thousands of them polluting the same directory, and I will be lost as to what I actually need in order to move my project to a different PC. Or maybe we should merge multiple images into the same file? That is much better. But what about the .changes file?
About generic usage: small images, like ants, are good when you deal with small tasks and the job of each ant is simple and monotonous. But what about real-world applications such as CAD systems, or business applications, whose images could be a couple of hundred megabytes in size? Spawning multiple images of that size is a waste of resources. OK, then let's suppose we can have a single 'queen' image and smaller 'ant' images. But now we need a sophisticated framework for coordinating them. And it is also clear that most of the workload will still be in the queen image (because most of the objects are located there), which means it does not scale well.