Hello Lukas,
On 2008-05-22 at 21:39, Lukas Renggli wrote:
SwaLint features a plug-in oriented model to provide its so-called tests. It provides an API that enables a developer to include his own tests more easily than with SmallLint.
SmallLint is plug-in based as well. Have a look at Slime [1], this is an extension to SmallLint that detects Seaside specific defects and code smells.
When we started developing SwaLint, we considered incorporating our ideas into SmallLint. Given the requirements we had [1], we evaluated the plug-in architecture provided by SmallLint. From our point of view, it did not fit our needs; it did not provide the kind of API we wanted to write plug-ins against.
…it. Additionally, a metrics-based plug-in is included, which currently should detect god classes and data classes.
The Disharmonies are interesting, but could be trivially added as a SmallLint plug-in.
So, how do you reuse results from other SmallLint tests? That was one of our most important goals. Within SwaLint, one is able to use the results of other tests (and plug-ins as well) to generate one's own results. At the moment, we provide Boolean, Integer, Percentage, and Uniformity results. These results may have thresholds, so you can 'ask', e.g., whether the ATFD (access to foreign data, a class metric) of the class your test is currently processing is greater than 'high'.
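To illustrate what we mean, here is a rough sketch; note that the class and message names (SwaLintTest, #resultFor:metric:, #exceedsThreshold:, #reportViolationOn:) are invented for this example and are not our actual API:

```smalltalk
"Hypothetical sketch only -- these names are invented
 for illustration, not SwaLint's real API."
SwaLintTest subclass: #SwaGodClassTest
	instanceVariableNames: ''
	classVariableNames: ''
	poolDictionaries: ''
	category: 'SwaLint-Examples'

SwaGodClassTest >> runOn: aClass
	| atfd |
	"Reuse the Integer result produced by the ATFD metric plug-in."
	atfd := self resultFor: aClass metric: #ATFD.
	"Ask whether the value exceeds the preconfigured 'high' threshold."
	(atfd exceedsThreshold: #high)
		ifTrue: [ self reportViolationOn: aClass ]
```

The point is that the god-class test does not recompute ATFD itself; it consumes the result object another plug-in already produced, together with its threshold configuration.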
Besides, we tried to make SwaLint more usable than SmallLint.
Many years ago I integrated SmallLint into SUnit [2], so that lint rules can be run as part of the unit tests. I added pragmas to ignore certain rules at specific places.
That seems like a nice way to suppress test results that are actually expected and intended.
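If I understand the idea correctly, such an exclusion might look like the following sketch; the pragma selector and rule name below are my guesses at the mechanism from [2], not the actual ones:

```smalltalk
"Sketch only -- #ignoreLintRule: and the rule symbol are
 invented names, guessed from the description in [2]."
MyClass >> initialize
	"Suppress one specific lint rule for this method only;
	 the rule still applies everywhere else."
	<ignoreLintRule: #sendsSuperInitialize>
	self setUp
```

This keeps the exclusion right next to the code it concerns, which is indeed attractive.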
This is all done in an ad-hoc manner, but I think this is a nice way to go without requiring new tools.
Why do you think SmallLint is not usable?
First of all, we don't want to argue that SmallLint is a bad tool. Actually, we believe that it is a great programming aid for a developer. But using it revealed several problems.

One example is the UI. Using the supplied tool window can be very confusing: you don't have an overview of the subjects under test and the selected tests at the same time. When you do multiple testing passes, you cannot tell which results belong to which testing session, nor which tests were run. When viewing the results, looking at "…description of the test…[3]" does not give a clear overview of which subjects are affected. We tried to address these issues by providing a consistent UI as well as a clear connection between test and subject. We even incorporated some highlighting for the convenience of the tester.

We also encountered a problem with several SmallLint rules: they simply failed, in the sense that a debugger popped up. Sadly, all other test results were then gone. SwaLint provides the possibility to run the complete test and afterwards inspect the errors that occurred. SwaLint will not crash because of a plug-in crashing (provided you don't do the "true become: false" sort of thing). As SwaLint proxies all SmallLint rules, we included default preferences that disable most of the SmallLint rules known to often produce errors in our environment. As mentioned, every test in SwaLint can have its own preferences. This is extremely useful for the metrics, because this way we are able to provide customized thresholds persistently.
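The error isolation could be sketched roughly like this; the class and selector names are illustrative, not SwaLint's actual implementation:

```smalltalk
"Sketch: collect plug-in failures instead of letting one failing
 rule abort the whole run. Names are illustrative, not SwaLint's."
SwaLintRunner >> runAll: tests on: aSubject
	| errors |
	errors := OrderedCollection new.
	tests do: [ :each |
		[ each runOn: aSubject ]
			on: Error
			do: [ :ex | errors add: each -> ex ] ].
	^ errors
	"inspected afterwards, instead of a debugger per failure"
```

All other results survive a failing rule, and the collected errors can be examined once the complete run has finished.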
How do you do the scoping?
What do you mean by scoping? Do you mean the different kinds of test subjects supported?
Interested in your answer and with kind regards, -Tobias on behalf of the SwaLint team
[1] Actually, this project started as an assignment for the software engineering course the team attended recently.