Hello Andreas,
Sunday, March 26, 2006, 5:08:17 PM, you wrote:
AR> I am in the interesting situation that I'm writing a few tests
AR> that require large data sets for input and where I don't want
AR> people to require to download the data sets. My problem is while
AR> it's easy to determine that the data is missing and skip the test
AR> there isn't a good way of relaying this to the user. From the
AR> user's point of view "all tests are green" even though that
AR> statement is completely meaningless and I'd rather communicate
AR> that in a way that says "X tests skipped" so that one can look at
AR> and decide whether it's useful to re-run the tests with the data
AR> sets or not.
Would it make sense to put the tests that need a lot of data into separate test case classes, so that users would only get those test cases if they download the large data sets?
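As a concrete sketch of the "skip, and say so" behavior you're after (in Python's unittest, purely for illustration — the `large_data.bin` file name and the test itself are made up):

```python
import os
import unittest

DATA_FILE = "large_data.bin"  # hypothetical path to the downloadable data set


class LargeDataTests(unittest.TestCase):
    """Tests that only make sense when the large data set is present."""

    def setUp(self):
        # Skip (rather than silently pass) when the data is missing, so the
        # run summary says "skipped=1" instead of a misleading all-green.
        if not os.path.exists(DATA_FILE):
            self.skipTest("data set %s not downloaded" % DATA_FILE)

    def test_data_is_readable(self):
        # Only reached when the data set actually exists.
        with open(DATA_FILE, "rb") as f:
            self.assertTrue(len(f.read(1)) >= 0)

# run with: python -m unittest <thisfile>
```

Without the data set, the runner reports something like "OK (skipped=1)", which is exactly the distinction between "green" and "meaningfully green" that you describe.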
AR> Another place where I've seen this to happen is when platform
AR> specific tests are involved. A test which cannot be run on some
AR> platform should be skipped meaningfully (e.g., by telling the user
AR> it was skipped) rather than to appear green and working.
Personally, I'd make a subclass to hold the platform-specific tests and have the whole thing pass if the platform doesn't match the one the tests were designed for. If a test doesn't even apply, why should I be concerned about why something was skipped when it's ok?
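Both conventions can be written down side by side — here in Python's unittest, with an invented `Win32OnlyTests` class just to make the contrast concrete:

```python
import sys
import unittest


class Win32OnlyTests(unittest.TestCase):
    """Hypothetical platform-specific tests, shown in two styles."""

    def test_pass_silently_elsewhere(self):
        # The "just pass" convention argued for above: on a non-matching
        # platform the test returns early and counts as plain green.
        if sys.platform != "win32":
            return
        # real win32-only assertions would go here

    @unittest.skipUnless(sys.platform == "win32", "requires win32")
    def test_skip_loudly_elsewhere(self):
        # The same gate, but non-win32 runs report it as skipped,
        # which is the visibility Andreas is asking for.
        pass
```

Which style is right arguably depends on who reads the summary: a plain pass keeps the output quiet for users the test can never concern, while a reported skip tells a porter that coverage on their platform is incomplete.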