Has anyone looked into porting JUnit 4's Theories into SUnit? (NUnit also uses theories, in 2.5)
In brief, a Theory is a test that takes a parameter. So what before might say
testMyFooPrintsIntegersHomoiconically
	-1 to: 1 do: [:i |
		self assert: i myFoo = i printString
			description: 'Failure for integer ', i printString]
becomes
testMyFooPrintsIntegersHomoiconically: anInteger
	self assert: anInteger myFoo = anInteger printString
		description: 'Failure for integer ', anInteger printString
You define a bunch of DataPoints, and then the runner runs the test for every data point. In JUnit, data points are defined as constants marked with @DataPoint/@DataPoints annotations, but of course we can define them however we want. Further, theories can make assumptions, which are essentially pre-test filters. For instance, in a TestCase dealing with real algebra, a test for square roots might say
testSquareRootReturnsRoot: anInteger
	self assumeThat: [anInteger > 0].
	"Rest of test"
and then the test would only run on positive data points.
The essential idea is simply decoupling the test itself - the theory - from the data, so you don't have to roll your own looping construct when testing multiple data points.
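To make the idea concrete, here is a minimal sketch of what such a theory class could look like in SUnit. This is purely hypothetical: #dataPoints and the one-argument test selector convention are invented names, not existing SUnit API, and #myFoo is the fictional method under test.

```smalltalk
"Hypothetical sketch: #dataPoints is an invented hook, not stock SUnit.
A theory-aware runner would send the one-argument test selector once
per data point."
TestCase subclass: #MyFooTheories
	instanceVariableNames: ''
	classVariableNames: ''
	category: 'MyFoo-Tests'.

MyFooTheories class >> dataPoints
	"The values the runner feeds to each theory."
	^ #(-1 0 1 42)

MyFooTheories >> testMyFooPrintsIntegersHomoiconically: anInteger
	self assert: anInteger myFoo = anInteger printString
		description: 'Failure for integer ', anInteger printString
```

A runner would then report each data point as its own pass/fail, which is exactly the "which subcase failed" information a hand-rolled loop loses.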
frank
On Mon, 11 Jul 2011, Frank Shearar wrote:
I usually roll my own loops and use a single test method for a gazillion different cases. The drawback of this style is that if you're not running the tests yourself, you won't know which "subcase" is failing. So I see some value in Theories, if the test runner can tell which "subcase" (data point) failed.
AFAIK our version of SUnit is a modified version of SUnit 3 (which is not the latest and greatest), and I miss some basic features of the test runner (and of the framework itself), so enhancing it is welcome. The features I miss most are:
- differentiate between timeouts and failures
- save the process for each failure/error (as a partial continuation?) and resume that, instead of re-running the test (which may pass on the second run), when checking the failing test
- measure the runtime of each individual test
- easily create a report of the results
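Measuring the runtime of each individual test, at least, needs only a tiny hook in the runner. A sketch: Time class>>millisecondsToRun: is standard Squeak; where the measurement gets recorded is the open design question.

```smalltalk
"Sketch: time a single test case and log the result to the Transcript.
aTestCase is assumed to be an ordinary TestCase instance."
| runtime |
runtime := Time millisecondsToRun: [aTestCase runCase].
Transcript show: aTestCase printString, ' took ', runtime printString, ' ms'; cr.
```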
Levente
On 11 July 2011 21:27, Levente Uzonyi leves@elte.hu wrote:
AFAIK our version of SUnit is a modified version of SUnit 3 (which is not the latest and greatest) and I miss some basic features of the test runner (and the framework itself), so enhancing it is welcome. The features I miss the most are:
- differentiate between timeouts and failures
- save the process for each failure/error (as a partial continuation?) and
If only we had a library knocking around for this sort of thing! But seriously, with Ralph Boland's generator approach, it should be quite simple to
- generate random data (possibly based on types, a la Haskell's QuickCheck)
- run the parameterised test on that data
- capture the continuations for failing tests, giving decent error messages and resumable failing tests
resume that, instead of re-running the test (which may pass on the second run), when checking the failing test
- measure the runtime of each individual test
- easily create a report of the results
Yes, especially if they can be dumped in JUnit standard XML format.
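The generator idea above could start as small as this. A sketch only: #runTheory:times: is an invented selector, and real QuickCheck-style generation would pick data by type rather than from a fixed integer range.

```smalltalk
"Sketch: drive a one-argument theory with random integers.
Random>>nextInt: answers an integer in [1, n] in Squeak."
runTheory: aSelector times: n
	| random |
	random := Random new.
	n timesRepeat: [
		self perform: aSelector
			with: (random nextInt: 1000) - 500]
```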
frank
On 12 July 2011 21:05, Frank Shearar frank.shearar@gmail.com wrote:
http://www.lshift.net/blog/2011/09/13/checking-squeak-quickly
I cheated a bit: because I subclassed TestCase, one should be able to use Pharo's SUnit extensions to print out JUnit XML.
I didn't implement a big generator combinator library: this is a small, minimal implementation of theories. I expect to find lots of things that could be improved, but it's not too bad (IMO at least) for a few hours' hacking.
frank
Maybe worth studying is the Assessments framework from Andres Valloud's "A Mentoring Course on Smalltalk".
Just google for Andres Valloud Assessments to get an idea.
Hope it inspires.
Bye,
Enrico
On Mon, Jul 11, 2011 at 16:02, Frank Shearar frank.shearar@gmail.com wrote: