Bryce Kampjes wrote: Colin Putney writes:
I'd suggest to you, however, that the functionality you describe is not "missing" from SUnit, so much as unnecessary. Bringing up a debugger on the site of a failed assertion or an error is a very efficient way to find and fix problems. It's even easier to deduce why the test failed in a debugger, and you can fix it on the spot.
I'm not trying to suggest that what you describe isn't a very efficient way to find and fix problems. I'm just suggesting that being able to view a summary of the expected and actual results of failed tests within SUnit's TestRunner window (or in a Transcript) could be useful. Often, I've found it isn't necessary to step through the test code in order to see what's broken. Bringing up the debugger and discovering the actual and expected values of failed tests takes several more clicks than it would have taken if that info were displayed in SUnit. Having browsers open on one's test case and the class under test, along with SUnit displaying actual and expected info, would in many instances allow for a faster run test -> write/fix code (or even fix test) cycle. In such a setup you could fix the problem "on the spot" faster in cases where the comparison of expected and actual results leads to an immediate, correct deduction of what was wrong (such as a spelling mistake made in the test's expected result).
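A sketch of the kind of assertion Anthony describes: an equality check that puts both values into the failure description, so a test runner or the Transcript can summarise them without opening a debugger. The helper selector and its wording are hypothetical here; later SUnit versions ship a similar `assert:equals:`.

```smalltalk
"Hypothetical helper on TestCase: fail with a description that
carries both the expected and the actual value, so the failure
list itself shows what went wrong."
TestCase >> assert: actual equals: expected
	actual = expected ifFalse: [
		self signalFailure: 'Expected: ', expected printString,
			' but was: ', actual printString ]
```

A test written as `self assert: parser result equals: 42` would then report both sides directly in the runner's failure summary.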
The best way to explain this would probably be to pair...
Yes, that sounds like a good way to make clear the efficient use of debugger and sUnit. However, I don't know any Smalltalkers where I live. Perhaps, I should look into seeing if there's a local Smalltalker Users Group.
Thanks for your advice,
Anthony
Anthony Adachi writes:
It's good practice in Smalltalk to write code inside the debugger. Often I'll write a test and it fails. Knowing that I'm going to extend an existing method, I add a "self halt." to the start of it. When I rerun the test, it stops at the halt in a debugger. Then I add the new code inside the debugger. This way I can inspect the current state and check that it's what I think it should be. It also confirms that the method was really used by that test.
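As a concrete sketch of that workflow (the class and method names here are invented for illustration):

```smalltalk
"Suppose a new test exercises Account>>withdraw: and fails.
Before extending the method, drop a halt at its start:"
Account >> withdraw: anAmount
	self halt.	"rerunning the test opens a debugger here"
	balance := balance - anAmount
```

From the debugger that opens at the halt you can inspect `balance`, confirm the receiver is the object the test built, write the new code in the method pane, and then proceed or restart the frame.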
Learn to ask the machine questions. Sometimes figuring out how to ask slows you down a little, but soon you stop having to think so hard. Then, when something unexpected happens, you have the time to think really hard.
If a method breaks, I just click on it. A debugger opens up, where I make my changes. That way I don't need to remember where the method is. (I've removed the halt from the interface; most of the time I don't want to step through the method, so stepping is usually a waste of time.)
With good tests that use mocks often the method in the debugger is the one that needs to be changed. In fact I'm thinking about adding a mock to the assembler in Exupery for just this reason.
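A minimal hand-rolled mock in that spirit might look like the following. All names are illustrative, and this is not Exupery's actual code; it just shows the idea of recording messages sent to a collaborator so the test fails in the method under test rather than deep inside the real object.

```smalltalk
"Record every message sent to the collaborator, so a test can
assert on the recorded selectors instead of running the real
assembler."
Object subclass: #MockAssembler
	instanceVariableNames: 'sentMessages'
	classVariableNames: ''
	category: 'Example-Mocks'

MockAssembler >> initialize
	sentMessages := OrderedCollection new

MockAssembler >> doesNotUnderstand: aMessage
	"Accept any message; remember its selector for later checking."
	sentMessages add: aMessage selector.
	^ self

MockAssembler >> sentMessages
	^ sentMessages
```

A test could then assert, say, `self assert: (mock sentMessages includes: #emitMove)`, so a failure shows up in the code that drives the assembler rather than inside it.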
When writing code in the debugger it is possible to execute parts of it there and then. So if an expression is causing you grief, just execute that single expression. Very fast feedback.
I try to write all new code in the debugger. Code browsers are only for reading and refactoring code.
Bryce
On Friday 02 May 2003 03:23 pm, Bryce Kampjes wrote:
There is a package on SqueakMap that lets you set breakpoints on entry without changing the source.
It's called "Breakpoint support".
Ned Konz writes:
Sometimes I get very conservative about my development environment, normally when I'm starting to worry that I've corrupted the image again.
But thanks for that; it sounds very useful. I'll look at it when I'm building my next development image.
Bryce
squeak-dev@lists.squeakfoundation.org