On Oct 30, 2007, at 7:56 PM, Igor Stasenko wrote:
On 31/10/2007, Joshua Gargus schwa@fastmail.us wrote:
Then I wonder: why don't they drop the idea of having shared memory at all?
It's convenient for programmers. Aside from the huge complexity of programming everything this way, we might also have to program AMD chips differently from Intel ones (at least until a standard emerges).
Each CPU could then have its own memory, and they could interact by sending messages in a network-style fashion. And we would then write code that uses such an architecture in the best way. But while this is not the case, should we assume that such code will run faster than code which 'knows' that there is a single shared memory for all CPUs and uses that knowledge in the best way?
Could you restate this? I don't understand what you mean.
I don't know what to add to the above. I just said that we should use the approaches that best fit the architecture our project(s) will run on. Of course, what fits best is arguable. But I don't think we should drop shared-memory support when we are building a system on top of an architecture that has it.
OK, this is basically what I thought you meant, but wasn't sure.
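Just so we're picturing the same thing: each image owns its memory outright, and all interaction is messages. Here is a toy sketch in Python (illustrative only; threads and queues stand in for separate images here, but in the real model each worker would be its own process/image with genuinely private memory):

```python
import threading
import queue

def worker(inbox: queue.Queue, outbox: queue.Queue) -> None:
    # Stand-in for an image running on its own core: it touches no
    # shared objects, only what arrives through its inbox.
    for _ in range(3):
        msg = inbox.get()        # receive a message
        outbox.put(msg * 2)      # send back a result

def run_demo():
    inbox, outbox = queue.Queue(), queue.Queue()
    t = threading.Thread(target=worker, args=(inbox, outbox))
    t.start()
    for n in (1, 2, 3):
        inbox.put(n)             # all interaction is explicit messages
    results = [outbox.get() for _ in range(3)]
    t.join()
    return results               # [2, 4, 6]
```

The point is that nothing in `worker` reaches into the caller's objects; whether the transport is an in-process queue or a socket between machines, the program's shape stays the same.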
<snip>
Let's imagine that I run Squeak, start a couple of servers, start a Tetris game, and play it. Then at some moment I feel the urgent need to shut down my PC and insert an additional 64 cores into my CPU, to be able to run two Tetrises instead of one. ;)
Only 2 tetrises? I hope we can do better than that :-)
What is interesting now is how I can have a dead-simple, one-click 'save image(s)' action, so that after I restart the system I will be able to continue running the same set of images from the same point, as we have in the current single-image Squeak. If we store each image in its own file(s), then soon there will be thousands of them polluting the same directory, and I will be lost as to what I really need in order to move my project to a different PC. Or maybe we should merge multiple images into the same file? That is much better. But what about the .changes file?
Or source code control in general. Good point.
There are a number of options. For simplicity, I'd connect a single blessed development image to the single changes file. Another option would be a Monticello-like approach (although there are still things that changesets are needed for, and we don't want to bring in all sorts of new fundamental dependencies). These are just the first thoughts that popped into my head... you seem to have thought about this more than I have.
Re: persistence in general... I used to do all of my thinking in a Squeak image, using morphs and projects. I grew tired of my media breaking when I updated my code to the latest version, or of screwing up my image in some other way. I much prefer the way we deal with data in Croquet, using a separate serialization format instead of the Squeak image format. So for me, it's acceptable for many of the images to be transient... created on startup and discarded ("garbage-collected" :-) ) at shutdown.
(BTW, I hope you'll excuse me for continually saying "we" even though there's no chance I'll have time to work on it with you all... I'll have to be content with cheering from the sidelines :-( )
About generic usage. Small images, like ants, are good when you deal with small tasks and the job of each ant is simple and monotonous. But what about real-world applications such as CAD systems, or business applications, which can have an image a couple of hundred megabytes in size? Spawning multiple images of such size is a waste of resources.
Each image can be as big as it needs to be. We don't need to spawn off identical clones of a single image.
I don't have a lot of first-hand experience with business systems. I'm picturing large tables of relational data... I'm not sure how I would approach this with the model I've described.
Honestly, it's difficult for me to think about this stuff outside the context of Croquet. There might be use cases that aren't addressed well. But my instinct is that if the model is good enough for a world-wide network of interconnected, collaborative 3D spaces (and I believe that it is), then it is good enough for most anything you'd want to do with it.
OK, then let's suppose we can have a single 'queen' image and smaller 'ant' images. But now we need a sophisticated framework for coordinating their moves.
If the application is that sophisticated, it will need a sophisticated framework for coordinating concurrency anyway. No way around it. It doesn't matter if it is one queen and many ants, or a heterogeneous mix of islands.
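And the coordination layer ends up with the same shape either way: submit work, collect results, and let the topology hide behind the pool. A minimal sketch in Python (illustrative; `ThreadPoolExecutor` stands in for whatever actually dispatches work to the images):

```python
from concurrent.futures import ThreadPoolExecutor

def dispatch(tasks, pool):
    # Submit everything, then gather the results. Whether the pool
    # wraps one queen plus many ants or a mixed bag of islands,
    # callers look exactly the same.
    futures = [pool.submit(fn, arg) for fn, arg in tasks]
    return [f.result() for f in futures]

def run_demo():
    with ThreadPoolExecutor(max_workers=4) as pool:
        # Five independent tasks; the pool decides who runs what.
        return dispatch([(lambda x: x * x, n) for n in range(5)], pool)
```

The queen-vs-heterogeneous question then becomes a question about what sits behind `pool`, not about how the application code is written.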
Cheers, Josh
And it is also clear that most of the workload will still be in the queen image (because most of the objects are located there), which means it does not scale well.
-- Best regards, Igor Stasenko AKA sig.