Hi Levente,
Yes, the SqueakMap server image is one part of the dynamic, but I think another is a bug in the trunk image. I think the reason Tim is not seeing 45 seconds before the error is that the timeout setting of the high-level client is not being passed all the way down to the lowest-level layers -- e.g., from HTTPSocket --> WebClient --> SocketStream --> Socket. By the time the request gets down to Socket, which does the actual work, it's operating on its own 30-second timeout.
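To make the suspected bug concrete, here is a minimal sketch (in Python, not Squeak -- the layer names and defaults are placeholders, not the actual HTTPSocket/WebClient code) of the pattern where an outer layer accepts a timeout but an inner layer silently falls back to its own default:

```python
DEFAULT_SOCKET_TIMEOUT = 30  # the low-level layer's own default, in seconds

def socket_request(host, timeout=DEFAULT_SOCKET_TIMEOUT):
    # Lowest layer: does the actual work, honoring whatever timeout it is given.
    return f"connect to {host} with timeout={timeout}s"

def stream_request(host, timeout=None):
    # BUG: the caller's timeout is accepted but never forwarded, so
    # socket_request always runs with its own 30-second default.
    return socket_request(host)

def stream_request_fixed(host, timeout=None):
    # FIX: forward the caller's timeout all the way down when one is supplied.
    if timeout is None:
        timeout = DEFAULT_SOCKET_TIMEOUT
    return socket_request(host, timeout=timeout)

print(stream_request("map.squeak.org", timeout=45))        # still uses 30s -- the bug
print(stream_request_fixed("map.squeak.org", timeout=45))  # honors the 45s request
```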
I would expect subsecond response times. 30 seconds is just unacceptably long.
Well, it depends on whether, for example, you're in the middle of Antarctica with a slow internet connection or in an office with a fast connection. A 30-second timeout is just the maximum amount of time the client will wait for the entire process before presenting a debugger; that's all it can do.
It is a fixed amount of time, I *think* still between 30 and 45 seconds, that it takes the SqueakMap server to save its model after an update (e.g., adding a Release, etc.). It's so long because the server is running on a very old 3.x image on an interpreter VM. It's running an HttpView2 app which doesn't even compile in modern Squeak. That's why it hasn't been brought forward yet, but I am working on a new API service to replace it, with the eventual goal of SqueakMap being an "App Store" experience, and it will not suffer timeouts.
but also:
- we can cache: https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache
- we could make alan not even ask ted when we know the answer already.
- Attention: we need a lot of information about what is stable and what is not before we can do this.
- (it's tempting to try, though)
- (we probably want that for squeaksource/source.squeak for the MCZ requests. but we lose the download statistics then…)
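For the caching option, a minimal sketch of what the proxy_cache setup linked above could look like -- the cache path, zone name, and backend address here are made-up placeholders for illustration, not our actual configuration:

```nginx
# Hypothetical nginx caching front for the SqueakMap image.
proxy_cache_path /var/cache/nginx/squeakmap levels=1:2 keys_zone=sqmap:10m
                 max_size=1g inactive=7d use_temp_path=off;

server {
    listen 80;
    server_name map.squeak.org;

    location / {
        proxy_pass http://127.0.0.1:8080;   # the old SqueakMap image
        proxy_cache sqmap;
        proxy_cache_valid 200 10m;          # serve cached answers for 10 minutes
        # Keep answering from cache while the backend is busy saving its model:
        proxy_cache_use_stale error timeout updating;
    }
}
```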
If squeaksource/mc used ETags, then the squeaksource image could simply return 304 and let nginx serve the cached mczs while keeping the statistics updated.
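A hedged sketch of that idea (Python, not the actual SqueakSource code; the function names and tuple-shaped responses are illustrative assumptions): the image derives a strong ETag from the MCZ bytes, answers 304 with no body when the proxy revalidates with If-None-Match, and still bumps its download counter on every request.

```python
download_counts = {}  # package name -> number of requests seen

def etag_for(mcz_bytes):
    import hashlib
    # A strong validator derived from the file contents.
    return '"%s"' % hashlib.sha1(mcz_bytes).hexdigest()

def handle_mcz_request(mcz_name, mcz_bytes, if_none_match=None):
    # Count every request, hit or miss, so the statistics survive caching.
    download_counts[mcz_name] = download_counts.get(mcz_name, 0) + 1
    tag = etag_for(mcz_bytes)
    if if_none_match == tag:
        return (304, tag, b"")      # no body: nginx re-serves its cached MCZ
    return (200, tag, mcz_bytes)    # full response: nginx caches the body
```

The first request returns 200 with the file; later revalidations return 304, so nginx ships its cached copy while the image still sees (and counts) each download.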
Tim's email was about SqueakMap, not SqueakSource.
That part of the thread changed direction. It happens sometimes.
SqueakSource serves the MCZs straight off the hard-drive platter. We don't need to trade away download statistics to save a few ms on an MCZ request.
Download statistics would stay the same despite being flawed (e.g. you'll download everything multiple times even if those files are sitting in your package cache).
Not if we fix the package-cache (more about this, below).
You would save seconds, not milliseconds, by not downloading files again.
IIUC, you're saying we would save one hop in the "download" -- instead of client <--> alan <--> andreas, it would just be client <--> alan. Is that right?
I don't know what the speed between alan <--> andreas is, but I doubt it's much slower than client <--> alan in most cases, so the savings would seem to be minimal?
That would also let us save bandwidth by not downloading files already sitting in the client's package cache.
How so? Isn't the package-cache checked before hitting the server at all? It certainly should be.
No, it's not. Currently that's not possible, because different files can have the same name. And currently we have no way to tell them apart.
No. No two MCZs may have the same name, certainly not within the same repository, because MCRepository cannot support that. So maybe we need project subdirectories under package-cache to properly simulate each cached Repository. I had no idea we were neutering 90% of the benefits of our package-cache because of this, and just sitting here, I can't help wondering whether this is why MCProxy doesn't work properly either!
The primary purpose of a cache is to *check it first* to speed up access to something, right? What you say about package-cache sounds really bad; we should fix that, not surrender to it.
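The check-first behavior with per-repository subdirectories could look something like this sketch (Python, not Monticello's actual code; the directory layout and function names are assumptions for illustration):

```python
import os

def cached_fetch(cache_root, repo_name, mcz_name, download):
    """Return the MCZ bytes, consulting package-cache before the network."""
    # One subdirectory per repository, so same-named MCZs from different
    # repositories cannot shadow each other.
    repo_dir = os.path.join(cache_root, repo_name)
    path = os.path.join(repo_dir, mcz_name)
    if os.path.exists(path):              # cache hit: skip the server entirely
        with open(path, "rb") as f:
            return f.read()
    os.makedirs(repo_dir, exist_ok=True)
    data = download(mcz_name)             # cache miss: fetch once...
    with open(path, "wb") as f:           # ...then remember it for next time
        f.write(data)
    return data
```

A second fetch of the same name from the same repository never touches the network, while the same name under a different repository is a distinct cache entry.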
- Chris