Hi,
I am experiencing crashes with Balloon3D. When I enable hardware acceleration, sooner or later Squeak crashes. I found some comments on the swiki (http://minnow.cc.gatech.edu/squeak/2904) about using older NVidia drivers, but this semi-solution doesn't work for me, as my card is not supported by the old driver. The behavior explained there seems to be similar to my case.
I tried various VM and image versions, including the 3.4 RPM available at http://www-sor.inria.fr/~piumarta/squeak/, without success. It still crashes. Other OpenGL applications run without problems, so it seems to be a Squeak bug/problem so far.
Any idea what could be wrong and/or how to fix it?
Cheers Radek
Additional information:
[rodo@aquarius rodo]$ cat /proc/driver/nvidia/version
NVRM version: NVIDIA Linux x86 nvidia.o Kernel Module 1.0-4363 Sat Apr 19 17:46:46 PDT 2003
GCC version: gcc version 3.2.2 20030222 (Red Hat Linux 3.2.2-5)
[rodo@aquarius rodo]$ cat /proc/driver/nvidia/cards/0
Model:      GeForce4 Ti 4200
IRQ:        11
Video BIOS: 04.25.00.34.00
Card Type:  AGP
[rodo@aquarius test]$ inisqueak
No default image, looking for alternatives...
I found the following images:
  1  Squeak3.4-5170
(of which I might recommend Squeak3.4-5170, unless you know better).
Which one should I install [1-1]?
Let's try that again, with a NUMBER between 1 and 1.
I found the following images:
  1  Squeak3.4-5170
(of which I might recommend Squeak3.4-5170, unless you know better).
Which one should I install [1-1]? 1
Installing Squeak3.4-5170.image.gz in /home/rodo/test
+ ln -s /usr/lib/squeak/SqueakV3.sources SqueakV3.sources
+ gunzip -dc /usr/lib/squeak/Squeak3.4-5170.image.gz > squeak.image
+ gunzip -dc /usr/lib/squeak/Squeak3.4-5170.changes.gz > squeak.changes
Running /usr/bin/squeak
Segmentation fault
1091687564 B3DVertexBuffer>loadIndexed:vertices:normals:colors:texCoords:
1091681980 B3DRenderEngine>drawIndexedTriangles:vertices:normals:colors:texCoords:
1091681528 B3DIndexedTriangleMesh>renderOn:
1091681436 B3DSceneObject>renderOn:
1091687472 [] in B3DScene>renderOn:
1091681344 OrderedCollection>do:
1091681076 B3DScene>renderOn:
1091680984 B3DSceneMorph>renderOn:
1091680892 B3DSceneMorph>drawAcceleratedOn:
1091679788 B3DSceneMorph>drawOn:
1091679696 Canvas>draw:
1091679604 Canvas>drawMorph:
1091678868 [] in Morph>fullDrawOn:
1091678776 FormCanvas>roundCornersOf:in:during:
1091678684 Canvas>roundCornersOf:during:
1091676192 Morph>fullDrawOn:
1091676100 Canvas>fullDraw:
1091676008 Canvas>fullDrawMorph:
1091613720 [] in WorldState>drawWorld:submorphs:invalidAreasOn:
1091612088 Rectangle>allAreasOutsideList:startingAt:do:
1091611996 Rectangle>allAreasOutsideList:do:
1091613468 [] in WorldState>drawWorld:submorphs:invalidAreasOn:
1091611904 SequenceableCollection>do:
1091603568 WorldState>drawWorld:submorphs:invalidAreasOn:
1091611260 [] in WorldState>displayWorld:submorphs:
1091611076 FormCanvas>roundCornersOf:in:during:
1091610984 Canvas>roundCornersOf:during:
1091607520 WorldState>displayWorld:submorphs:
1091607428 PasteUpMorph>privateOuterDisplayWorld
1091607336 PasteUpMorph>displayWorld
1091606968 [] in WorldState>displayWorldSafely:
1091606876 BlockContext>on:do:
1091606784 BlockContext>ifError:
1091606692 WorldState>displayWorldSafely:
1091564816 WorldState>doOneCycleNowFor:
1091564172 WorldState>doOneCycleFor:
1091564080 PasteUpMorph>doOneCycle
1090562648 [] in Project class>spawnNewProcess
1090562832 [] in BlockContext>newProcess
[rodo@aquarius test]$
Am Dienstag, 15.07.03 um 23:42 Uhr schrieb Radek Doulik:
Hi,
I am experiencing crashes with Balloon3D. When I enable hardware acceleration, sooner or later Squeak crashes. I found some comments on the swiki (http://minnow.cc.gatech.edu/squeak/2904) about using older NVidia drivers, but this semi-solution doesn't work for me, as my card is not supported by the old driver.
Are you sure? The 1541s might work. As far as I know, NVIDIA's chips are both upwards and downwards compatible.
The behavior explained there seems to be similar to my case.
I tried various VM and image versions, including the 3.4 RPM available at http://www-sor.inria.fr/~piumarta/squeak/, without success. It still crashes. Other OpenGL applications run without problems, so it seems to be a Squeak bug/problem so far.
Any idea what could be wrong and/or how to fix it?
Nope. I still tend to think that Squeak triggers some untested sequence in the driver. If you could create a test case, you could mail Andy Mecham of NVIDIA. Or just post it to the NV Linux forum (http://www.nvnews.net/vbulletin/forumdisplay.php?s=&forumid=14).
-- Bert
On Wed, 2003-07-16 at 02:27, Bert Freudenberg wrote:
Am Dienstag, 15.07.03 um 23:42 Uhr schrieb Radek Doulik:
Hi,
I am experiencing crashes with Balloon3D. When I enable hardware acceleration, sooner or later Squeak crashes. I found some comments on the swiki (http://minnow.cc.gatech.edu/squeak/2904) about using older NVidia drivers, but this semi-solution doesn't work for me, as my card is not supported by the old driver.
Are you sure? The 1541s might work. As far as I know, NVIDIA's chips are both upwards and downwards compatible.
Yeah, I tried that version, but it doesn't work. It probably doesn't know the newer cards' PCI IDs, plus maybe there are more issues.
I have since updated the nvidia driver to 4496 and it now works OK for me, so it really seems to be a driver problem.
Cheers Radek
On Tuesday, July 15, 2003 11:42 PM, Radek Doulik rodo@matfyz.cz wrote about crashes with Balloon3D under Linux. Is this still an open question?
Radek wrote:
I am experiencing crashes with Balloon3D. When I enable hardware acceleration, sooner or later Squeak crashes. I found some comments on the swiki (http://minnow.cc.gatech.edu/squeak/2904) about using older NVidia drivers, but this semi-solution doesn't work for me, as my card is not supported by the old driver. The behavior explained there seems to be similar to my case.
I tried various VM and image versions, including the 3.4 RPM available at http://www-sor.inria.fr/~piumarta/squeak/, without success. It still crashes. Other OpenGL applications run without problems, so it seems to be a Squeak bug/problem so far.
Any idea what could be wrong and/or how to fix it?
Not really. From the wording of your message I conclude that you do not experience crashes when you do not activate hardware acceleration. Is that right?
Your crash dump shows that you tried to draw an IndexedTriangleMesh. I remember that a long time ago I saw a problem with such a mesh that I could not solve, but the problem was a constructed one without practical importance, so I did not pay much attention to it. Nevertheless, I am not convinced that your problem is a driver problem - it may as well be a problem with Squeak.
Today I reconstructed the problem that I saw a year ago. I checked it with Squeak 3.6a-5331 and the new Balloon3D package that Andreas published today, and I was able to reliably crash the VM! (With Windows 98 on a not-so-modern computer with an Intel 82810 graphics chip and quite up-to-date OpenGL drivers.) The attached change set contains that example; please read the preamble to find out what it does.
(I should mention that the new Balloon3D package is otherwise usable. I tried all my 3D stuff and it worked.)
Radek, can you perhaps prepare a change set with the scene that causes the crash? I can try it on my computer.
Greetings, Boris
On Mon, 28 Jul 2003 22:53:04 +0200, "Boris Gaertner" Boris.Gaertner@gmx.net wrote:
Radek, can you perhaps prepare a change set with the scene that causes the crash? I can try it on my computer.
I ran into this while doing some 3D stuff for my AUV simulator. If you open any of the example morphs in the AdvancedB3DSceneMorph viewer, you should then be able to open your example without crashing.
As well, I found that in my code, if I open the morph with hardware acceleration already turned on, it doesn't crash.
Later, Jon
--------------------------------------------------------------
Jon Hylands      Jon@huv.com      http://www.huv.com/jon
Project: Micro Seeker (Micro Autonomous Underwater Vehicle) http://www.huv.com
Hm ... I wonder if that might be a GCC optimization issue? IIRC, we had problems with GCC 3+ in some areas, and trying to optimize some of this C code _might_ get you in trouble if the optimizations aren't rock-solid.
There's another possibility, though. I found on Windows that under some circumstances (which I could never figure out completely) the FPU flags get changed to signal exceptions instead of silently over/underflowing. There _are_ situations in which some computations can over/underflow, but checking for this explicitly would be a huge pain (given the non-existence of cross-platform exception handling, you'd have to manually put in a number of tests on each of the operations affected). Is there any chance that the crash is caused by a SIGNAN (or whatever it may be called)?
Cheers, - Andreas
-----Original Message-----
From: squeak-dev-bounces@lists.squeakfoundation.org [mailto:squeak-dev-bounces@lists.squeakfoundation.org] On Behalf Of Jon Hylands
Sent: Monday, July 28, 2003 11:08 PM
To: The general-purpose Squeak developers list
Subject: Re: crashes with Ballon3D (was: accelerated OpenGL on linux/nvidia (crash))
On Mon, 28 Jul 2003 22:53:04 +0200, "Boris Gaertner" Boris.Gaertner@gmx.net wrote:
Radek, can you perhaps prepare a change set with the scene that causes the crash? I can try it on my computer.
I ran into this while doing some 3D stuff for my AUV simulator. If you open any of the example morphs in the AdvancedB3DSceneMorph viewer, you should then be able to open your example without crashing.
As well, I found that in my code, if I open the morph with hardware acceleration already turned on, it doesn't crash.
Later, Jon
Jon Hylands Jon@huv.com http://www.huv.com/jon
Project: Micro Seeker (Micro Autonomous Underwater Vehicle) http://www.huv.com
"Andreas Raab" andreas.raab@gmx.de wrote:
There's another possibility, though. I found on Windows that under some circumstances (which I could never figure out completely) the FPU flags get changed to signal exceptions instead of silently over/underflowing.
Good grief, this surely can't still be a problem in Windows; we (as in PPS, in particular Jan Bottorff, Daniel Lanovaz & I) found this afflicting VW back in '93 or thereabouts. Certain library calls (all via DLLs, IIRC) would change the FPU flags and _not restore them_ on return. Wham, bang, floating point arithmetic goes weird on you. I suppose it might be worth emailing Eliot to ask if an answer was ever found.
tim -- Tim Rowledge, tim@sumeru.stanford.edu, http://sumeru.stanford.edu/tim Useful random insult:- Runs squares around the competition.
Tim,
There's another possibility, though. I found on Windows that under some circumstances (which I could never figure out completely) the FPU flags get changed to signal exceptions instead of silently over/underflowing.
Good grief, this surely can't still be a problem in Windows;
Well, maybe not, but would I want to put my money on this bet? ;-)
we (as in PPS, in particular Jan Bottorff, Daniel Lanovaz & I) found this afflicting VW back in '93 or thereabouts. Certain library calls (all via DLLs, IIRC) would change the FPU flags and _not restore them_ on return. Wham, bang, floating point arithmetic goes weird on you. I suppose it might be worth emailing Eliot to ask if an answer was ever found.
I actually solved this problem by merely establishing an FPU exception filter and resetting the flags if they got changed. Note that the question was about Linux, so I was mostly wondering if there might be a similar problem there.
Cheers, - Andreas
On Monday 28 July 2003 2:23 pm, Andreas Raab wrote:
Hm ... I wonder if that might be a GCC optimization issue? IIRC, we had problems with GCC 3+ in some areas, and trying to optimize some of this C code _might_ get you in trouble if the optimizations aren't rock-solid.
I think I figured out the problems with this. I was having 3D problems with a 3.6g-2 VM compiled with gcc 3.3.2 and -O2; I did a bit of research, and figured out what was happening.
Ian had some time ago narrowed the problem down to the fetchFloatAtinto() (and maybe the storeFloatAtfrom()) macros, which looked like this:
# define storeFloatAtfrom(i, floatVarName) \
    *((int *) (i) + 0) = *((int *) &(floatVarName) + 1); \
    *((int *) (i) + 1) = *((int *) &(floatVarName) + 0);
# define fetchFloatAtinto(i, floatVarName) \
    *((int *) &(floatVarName) + 0) = *((int *) (i) + 1); \
    *((int *) &(floatVarName) + 1) = *((int *) (i) + 0);
If you compile the VM with enough warnings turned on, you'll see warnings about strict-aliasing on these macro invocations.
So I changed only the CFLAGS to add -fno-strict-aliasing, and the problems went away.
A small change to the definition of these macros removes the warning and the problems, and works with -O2 or -O3:
typedef union { double d; int i[sizeof(double) / sizeof(int)]; } _swapper;
# define storeFloatAtfrom(intPointerToFloat, floatVarName) \
    *((int *)(intPointerToFloat) + 0) = ((_swapper *)(&floatVarName))->i[1]; \
    *((int *)(intPointerToFloat) + 1) = ((_swapper *)(&floatVarName))->i[0];
# define fetchFloatAtinto(intPointerToFloat, floatVarName) \
    ((_swapper *)(&floatVarName))->i[1] = *((int *)(intPointerToFloat) + 0); \
    ((_swapper *)(&floatVarName))->i[0] = *((int *)(intPointerToFloat) + 1);
Patch is enclosed.
Am Mittwoch, 12. November 2003 21:10 schrieb Ned Konz:
I think I figured out the problems with this. I was having 3D problems with a 3.6g-2 VM compiled with gcc 3.3.2 and -O2; I did a bit of research, and figured out what was happening.
I guess these were the problems I struggled with yesterday. Turning off optimization or compiling the entire enchilada with gcc 2.95 made the problem (insane B3DVectors et al.) go away. As much as I had my good share of frustration yesterday, your patch today definitely made my day. Good catch. The VM works, feels fast, and I'm a happy camper now. :)) Thanks to you and everybody involved.
Alex
A stock Unix VM crashes when loading update 5501, which re-creates the special objects array. Ned's patch for the "float bug" appears to cure this problem, in addition to whatever issue had originally been reported.
The patch is for platforms/Cross/vm/sq.h, hence it needs to be verified on all platforms before approval.
<This post brought to you by BFAV2>
Am 08.02.2004 um 19:12 schrieb lewis@mail.msen.com:
A stock Unix VM crashes when loading update 5501, which re-creates the special objects array. Ned's patch for the "float bug" appears to cure this problem, in addition to whatever issue had originally been reported.
I really doubt that. As far as I understand it, the float patch has nothing to do with this: just recreating the VM sources with VMMaker is enough to cure the bug.
(All other platforms use the 3.6 VMMaker sources and do *not* crash on 5501.)
(I think the cause for the crash is Andreas' experimental Ephemeron implementation, which is part of Ian's 3.6-g, but not harvested for VMMaker).
I'd really like to see an official 3.6 Unix VM. That would cure the bug and stop everybody from wasting more time on this.
Marcus
-- Marcus Denker marcus@ira.uka.de
On Sun, Feb 08, 2004 at 06:53:40PM +0100, Marcus Denker wrote:
Am 08.02.2004 um 19:12 schrieb lewis@mail.msen.com:
A stock Unix VM crashes when loading update 5501, which re-creates the special objects array. Ned's patch for the "float bug" appears to cure this problem, in addition to whatever issue had originally been reported.
I really doubt that. As far as I understand it, the float patch has nothing to do with this: just recreating the VM sources with VMMaker is enough to cure the bug.
Confirmed, Marcus is correct. The "float bug" patch does not fix the "crash on update 5501" problem. Just rebuilding the VM is sufficient.
However, the VM that I built on my Linux system without Ned's patch is useless (missing display colors, display pretty much scrambled), so I do recommend that the float bug patch for sq.h be checked on other platforms and included in the platform sources if nothing else breaks.
Dave