We managed to figure out that OSProcess works when we use gcc <= 4.8 on Debian. We are happy to use 4.8 for now, so we're good. It would of course be super cool if we could use the series 6 gcc as that will soon ship with Debian 9 (stretch) but it's probably not trivial to just move to a new compiler version (as seems evident from the fact that a minor version change can mess up compilation).
Thanks for your help Alistair and Eliot.
Cheers, Max
On 18 May 2017, at 11:00, vm-dev-request@lists.squeakfoundation.org wrote:
On 18 May 2017, at 00:50, vm-dev-request@lists.squeakfoundation.org wrote:
Hi Max, Hi Alistair,
On Wed, May 17, 2017 at 1:06 AM, Alistair Grant <akgrant0710@gmail.com> wrote:
On Tue, May 16, 2017 at 04:59:24PM +0200, Alistair Grant wrote:
Hi Max,
On 16 May 2017 15:40, "Max Leske" <maxleske@gmail.com> wrote:
Hi Alistair,
On 16 May 2017, at 15:32, vm-dev-request@lists.squeakfoundation.org wrote:
Hi Max, I can't answer your question directly, but just wondering why you are using the itimer VM when there are known issues with external calls, and not the heartbeat VM?
Because of the root user issue, and also because I don't care about that much at the moment. I'm still experimenting and for those experiments it doesn't matter which VM I use. Thirdly, the itimer VM is the one I get when I use 'curl get.pharo.org/60+vmLatest | bash', which is convenient for getting the latest VM, and to minimise differences between the VMs we built the same one. I will definitely consider using the threaded VM for production.
P.S. I would love to see OSProcess working in 32 bit mode.
Well, it does work already, just not when we build the VM ourselves :/
Interesting, I had the impression that for Pharo 6 OSProcess didn't work in 32 bits, only 64, but I'm also building my own VM. I'm away from my PC, but I'll try and take a look.
I'm seeing the same behaviour as you, i.e. OSProcess works in a VM downloaded from get.pharo.org, but locks up when using the VM I compiled.
Have you looked at the build logs and eliminated compiler version, command line flags, etc.? One important file is the config.h that is produced in the build directory. It might be informative to compare the one configure produces on your system with the one the binary build creates.
Thanks for the pointer. I'll look into it.
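[To make the config.h comparison above concrete, here is a minimal sketch; the directory names and macro contents are stand-ins for illustration, not real build output. In practice the two files would come from your own build tree and from the binary build's build directory.]

```shell
# Stand-in config.h files (illustrative only); real ones are generated
# by configure in each build directory.
mkdir -p official-build local-build
printf '#define HAVE_ALLOCA 1\n#define HAVE_NANOSLEEP 1\n' > official-build/config.h
printf '#define HAVE_ALLOCA 1\n#define HAVE_NANOSLEEP 0\n' > local-build/config.h

# Show only the macro definitions that differ between the two builds;
# diff exits non-zero when files differ, hence the trailing `|| true`.
diff -u official-build/config.h local-build/config.h | grep '^[+-]#define' || true
```

A difference surfaced this way (here the stand-in HAVE_NANOSLEEP macro) is exactly the kind of configure-time divergence that can make one VM hang where the other works.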
Both VMs (threaded heartbeat) are based on the same source code, i.e.:
VM: 201705022326 https://github.com/OpenSmalltalk/opensmalltalk-vm.git $ Date: Tue May 2 16:26:41 2017 -0700 $
I'll try and take a look at this eventually, but I'm not sure how long that will be (several weeks away, at least).
If you figure it out, please let me know.
Thanks! Alistair
-- _,,,^..^,,,_ best, Eliot
Hi Max,
On Thu, May 18, 2017 at 4:32 AM, Max Leske <maxleske@gmail.com> wrote:
We managed to figure out that OSProcess works when we use gcc <= 4.8 on Debian. We are happy to use 4.8 for now, so we're good. It would of course be super cool if we could use the series 6 gcc as that will soon ship with Debian 9 (stretch) but it's probably not trivial to just move to a new compiler version (as seems evident from the fact that a minor version change can mess up compilation).
For the sake of revisiting this when we have time and can debug it, can you state which version(s) you tried to use that didn't work? Also, what are the compilation flags (full gcc invocation example) for the case(s) that work and the case(s) that don't?
Debugging this can be straightforward if one can build the two versions and execute them side-by-side to pinpoint the failure. Coming up with a fix may be more challenging ;-)
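[A sketch of the kind of record Eliot asks for here; the helper name, file names, and flag values are made up for illustration. The idea is to capture the compiler version and flags for each build so the working (gcc 4.8) and failing (newer gcc) configurations can be diffed later.]

```shell
# Record the toolchain details for one build. The flag values below are
# illustrative, not the real VM build flags; in a real build they would be
# taken from the full gcc invocations (e.g. via a verbose make).
record_build_info() {
    compiler="$1"; flags="$2"; out="$3"
    printf 'compiler: %s\nflags: %s\n' "$compiler" "$flags" > "$out"
}

record_build_info 'gcc-4.8' '-m32 -O2' works.txt
record_build_info 'gcc-6'   '-m32 -O2' fails.txt
diff works.txt fails.txt || true    # only the compiler line should differ
```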
-- _,,,^..^,,,_ best, Eliot
Hi Max,
On 18 May 2017 at 13:32, Max Leske <maxleske@gmail.com> wrote:
We managed to figure out that OSProcess works when we use gcc <= 4.8 on Debian. We are happy to use 4.8 for now, so we're good. It would of course be super cool if we could use the series 6 gcc as that will soon ship with Debian 9 (stretch) but it's probably not trivial to just move to a new compiler version (as seems evident from the fact that a minor version change can mess up compilation).
Thanks for tracking this problem down and sharing the details. I came upon exactly the same problem with the Nix package and was able to work around it by also downgrading to gcc 4.8. I am sure it would have taken a long time to work this out without the benefit of your experience.
Relatedly: Is there a suitable automated test that we can use to check that our VMs are basically okay? I am pushing the switch to gcc 4.8 to resolve this problem but I don't know whether this is causing other regressions or what other obvious problems I have missed. I'd love a bit of test coverage for the builds.
Hi Luke,
On Sun, Jul 9, 2017 at 8:57 PM, Luke Gorrie <luke@snabb.co> wrote:
Relatedly: Is there a suitable automated test that we can use to check that our VMs are basically okay? I am pushing the switch to gcc 4.8 to resolve this problem but I don't know whether this is causing other regressions or what other obvious problems I have missed. I'd love a bit of test coverage for the builds.
Tests that stress the VM:
- Running the complete test suite
- running update to Squeak trunk tip from Squeak 5.0
- recompiling the entire system
- building a Moose image
Having these run automatically by a CI server would be very nice.
There are others (Cadence run an aggressive regression suite every night) but they are not publicly available.
_,,,^..^,,,_ best, Eliot
Hi Eliot,
Sorry about the slow reply -
On 11 July 2017 at 21:44, Eliot Miranda <eliot.miranda@gmail.com> wrote:
Tests that stress the VM:
- Running the complete test suite
- running update to Squeak trunk tip from Squeak 5.0
- recompiling the entire system
- building a Moose image
Having these run automatically by a CI server would be very nice.
What are the success criteria for the tests? Or is it just that the VM doesn't crash?
Are there scripts anywhere for running these tests and checking the results that could be used for reference?
Once upon a time I included the built-in test suite in my build, but it didn't have a 100% success rate and I was not sure whether that was a problem, or how to make a suitable pass/fail decision.
Hi Luke.
On Jul 24, 2017, at 10:04 AM, Luke Gorrie <luke@snabb.co> wrote:
What are the success criteria for the tests?
The success criterion is that they all succeed. By extension this implies that the VM doesn't crash.
Or is it just that the VM doesn't crash?
Are there scripts anywhere for running these tests and checking the results that could be used for reference?
Ask on the list; folks like Fabio Niephaus may have them.
On 29 July 2017 at 18:31, Eliot Miranda <eliot.miranda@gmail.com> wrote:
The success criterion is that they all succeed. By extension this implies that the VM doesn't crash.
Does this mean that the Test Runner should report 100% pass rate? I have never seen that even with the stock image+vm from pharo.org. So I have a bootstrapping problem if I don't have a baseline that passes the CI test.
Just now I ran - randomly - a clean Pharo 5.0 image that I have on my Mac and the result I see is: 9098 run, 7408 passes, 8 skipped, 81 expected failures, 22 failures, 587 errors, 0 unexpected passes. If resolving those 22 failures and 587 errors is a blocker for running a CI then that is quite an obstacle from my perspective.
Are other people really seeing 100% pass rates?
On Thu, Aug 03, 2017 at 09:47:49AM +0200, Luke Gorrie wrote:
Are other people really seeing 100% pass rates?
Nope, I don't remember ever seeing a 100% pass rate for the entire test suite. (I don't know how this is handled in CI environments)
Cheers, Alistair
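[One way past the bootstrapping problem discussed above is a baseline comparison: record the failure and error counts from a trusted VM/image combination and have CI fail only on regressions, rather than demanding a 100% pass rate. A sketch using the counts from Luke's TestRunner summary; the threshold logic is an illustration, not the practice of any project in this thread.]

```shell
# Baseline recorded from a known-good VM/image combination
# (the numbers from Luke's Pharo 5.0 run above).
baseline_failures=22
baseline_errors=587

# Counts from the current CI run; hard-coded here for illustration,
# in practice parsed from the test runner's output.
current_failures=22
current_errors=587

# Fail the build only if the new VM does worse than the baseline.
if [ "$current_failures" -gt "$baseline_failures" ] || \
   [ "$current_errors" -gt "$baseline_errors" ]; then
    echo 'regression against baseline'
    exit 1
fi
echo 'no regression against baseline'
```

A crash of the VM during the run would surface separately as a non-zero exit from the test runner itself, so this gate catches both regressions and hangs/crashes.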
vm-dev@lists.squeakfoundation.org