Dear Squeak community,
we would like to announce the first release of SwaLint.
SwaLint is a Squeak source code linting tool in the spirit of (C) Lint or SmallLint. It is intended as a developer aid to
- test source code for stylistic coherence and
- identify possible errors.
It has been developed by Johannes Dyck, Christoph Neijenhuis, Tobias Pape, Nico Rehwaldt, and Arian Treffer during a course on software engineering held by the Software Architecture Group at the HPI, Potsdam, Germany.
SwaLint is available from SqueakMap. Documentation can be found at http://swalint.netshed.de/.
Tobias Pape on behalf of the SwaLint team
Hi,
I really appreciate this kind of tool. How does it differ from SmallLint?
Bye
Hi, On 2008-05-22 at 17:17, Damien Cassou wrote:
I really appreciate this kind of tool. How does it differ from SmallLint?
SwaLint features a plug-in-oriented model to provide its so-called tests. It offers an API that enables developers to include their own tests more easily than with SmallLint.
Currently, we have implemented a SmallLint plug-in, which provides all of SmallLint's tests. Additionally, a metrics-based plug-in is included, which currently should detect god classes and data classes.
Find more information at: http://swalint.netshed.de/wiki/Available_tests
Soon, a plug-in development tutorial will be published on the SwaLint Wiki.
Besides, we tried to make SwaLint more usable than SmallLint.
Regards, -Tobias
SwaLint features a plug-in-oriented model to provide its so-called tests. It offers an API that enables developers to include their own tests more easily than with SmallLint.
SmallLint is plug-in based as well. Have a look at Slime [1], an extension to SmallLint that detects Seaside-specific defects and code smells.
[…] Additionally, a metrics-based plug-in is included, which currently should detect god classes and data classes.
The Disharmonies are interesting, but could be trivially added as a SmallLint plugin.
Besides, we tried to make SwaLint more usable than SmallLint.
Many years ago I integrated SmallLint into SUnit [2], so that lint rules can be run as part of the unit tests. I added pragmas to ignore certain rules at specific places. This is all done in an ad-hoc manner, but I think this is a nice way to go without requiring new tools.
Why do you think SmallLint is not usable? How do you do the scoping?
Cheers, Lukas
[1] http://www.lukas-renggli.ch/blog/slime [2] http://source.lukas-renggli.ch/essential.html
Hello Lukas,
On 2008-05-22 at 21:39, Lukas Renggli wrote:
SwaLint features a plug-in-oriented model to provide its so-called tests. It offers an API that enables developers to include their own tests more easily than with SmallLint.
SmallLint is plug-in based as well. Have a look at Slime [1], an extension to SmallLint that detects Seaside-specific defects and code smells.
When we started developing SwaLint, we considered incorporating our ideas into SmallLint. Given the requirements we had [1], we evaluated the plug-in architecture provided by SmallLint. From our point of view, it did not fit our needs; it did not provide the kind of API we wanted to write plug-ins against.
[…] Additionally, a metrics-based plug-in is included, which currently should detect god classes and data classes.
The Disharmonies are interesting, but could be trivially added as a SmallLint plugin.
So, how do you reuse results from other SmallLint tests? That was one of our most important goals. Within SwaLint, one is able to use results from other tests (and plug-ins as well) to generate one’s own results. At the moment, we provide Boolean, Integer, Percentage, and Uniformity results. These results may have thresholds, so you can ‘ask’, e.g., whether the ATFD (access to foreign data, a class metric) for the class your test is currently processing is greater than ‘high’.
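Roughly, that result-reuse idea could be sketched like this in Python (all names are invented for illustration; this is not the actual SwaLint API):

```python
# Hypothetical sketch of SwaLint-style result reuse; all names are
# invented, this is not the actual SwaLint API. A metric test publishes
# a result carrying named thresholds, and a later test can ask how the
# value compares to them instead of recomputing the metric.

class MetricResult:
    """A metric value together with named thresholds."""

    def __init__(self, value, thresholds):
        self.value = value
        self.thresholds = thresholds  # e.g. {'low': 2, 'average': 4, 'high': 7}

    def is_greater_than(self, name):
        return self.value > self.thresholds[name]


# An ATFD (access to foreign data) test stores its result under a key...
results = {'ATFD': MetricResult(9, {'low': 2, 'average': 4, 'high': 7})}

# ...and a god-class detector reuses it:
if results['ATFD'].is_greater_than('high'):
    print('possible god class: ATFD above the high threshold')
```

The point is that the downstream test never touches the source code itself; it only queries the already computed, threshold-annotated result.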
Besides, we tried to make SwaLint more usable than SmallLint.
Many years ago I integrated SmallLint into SUnit [2], so that lint rules can be run as part of the unit tests. I added pragmas to ignore certain rules at specific places.
That seems like a nice idea for suppressing findings that are actually expected and intended.
This is all done in an ad-hoc manner, but I think this is a nice way to go without requiring new tools.
Why do you think SmallLint is not usable?
First of all, we don’t want to argue that SmallLint is a bad tool. Actually, we believe it is a great programming aid for a developer. But using it showed several problems.

One example is the UI. Using the supplied tool window can be very confusing: you don’t have an overview of the subjects under test and the selected tests at the same time. When you do multiple testing passes, you do not know which results belong to which testing session, nor which tests were run. When viewing the results, looking at “…description of the test…[3]” does not give a clear overview of which subjects are affected. We tried to address these issues by providing a consistent UI as well as a clear connection between test and subject. We even incorporated some highlighting for the convenience of the tester.

We also encountered a problem concerning several SmallLint rules: they simply failed, in the sense that a debugger popped up. Sadly, all other test results were gone then. SwaLint provides the possibility to run the complete test and inspect the errors that occurred afterwards. SwaLint will not crash because of a crashing plug-in (given you don’t do the “true become: false” sort of thing). As SwaLint proxies all SmallLint rules, we included default preferences that disable most of the SmallLint rules known to often produce errors in our environment.

As mentioned, every test in SwaLint can have its own preferences. This is extremely useful for the metrics, because this way we are able to provide customized thresholds persistently.
How do you do the scoping?
What do you mean by scoping? Do you mean the different kinds of test subjects supported?
Interested in your answer and with kind regards, -Tobias on behalf of the SwaLint team
[1] Actually, this project started as an assignment for the software engineering course the team attended recently.
"Tobias Pape" Das.Linux@gmx.de wrote in message news:937EDCAE-2C7E-42B9-90CD-CDDFA4BB31D3@gmx.de...
Actually, we believe it is a great programming aid for a developer. But using it showed several problems. One example is the UI: using the supplied tool window can be very confusing.
I discovered Lukas' tool set (not just SmallLint, but also the scoped environment browser, AST-based search, replace, refactor, etc.) recently. It is such a *huge* improvement that I would urge you to contribute towards improving that stream rather than fork a different one.
Sophie
Hello Sophie,
On 2008-05-25 at 16:54, itsme213 wrote:
"Tobias Pape" Das.Linux@gmx.de wrote in message news:937EDCAE-2C7E-42B9-90CD-CDDFA4BB31D3@gmx.de...
Actually, we believe it is a great programming aid for a developer. But using it showed several problems. One example is the UI: using the supplied tool window can be very confusing.
I discovered Lukas' tool set (not just SmallLint, but also the scoped environment browser, AST-based search, replace, refactor, etc.) recently. It is such a *huge* improvement that I would urge you to contribute towards improving that stream rather than fork a different one.
I want to emphasize that SwaLint is not a fork of SmallLint; its codebase is not based on it in any way. Moreover, SwaLint is capable of using every test provided by SmallLint; thus, I hope SwaLint will also benefit from SmallLint improvements. Regarding the environments, I wanted to say that the notion of scoping is slightly different in SwaLint. And, well, some plug-ins in SwaLint use AST-based searches as well.
To share my personal opinion, I don’t think it is necessary to incorporate refactoring or any other code-changing into code critique tools. I appreciate the fact that SmallLint is capable of it, but for SwaLint we’d like to follow the “do one thing and do it right” approach as best we are able to.
Have a nice week. So long, -Tobias
SwaLint is available from SqueakMap. Documentation can be found at http://swalint.netshed.de/.
hello,
trying to load SwaLint (mcz 239) in a 3.10 image, I got a failure because the class MultipleSelectionModel does not exist. I guess there is a dependency I overlooked... where can we get MultipleSelectionModel?
Stef
Hello,
On 2008-05-26 at 10:44, Stéphane Rollandin wrote:
SwaLint is available from SqueakMap. Documentation can be found at http://swalint.netshed.de/.
hello,
trying to load SwaLint (mcz 239) in a 3.10 image, I got a failure because the class MultipleSelectionModel does not exist. I guess there is a dependency I overlooked... where can we get MultipleSelectionModel?
Excuse me, I may have forgotten to list the dependencies.
SwaLint requires the RefactoringEngine and the AST-Package to work.
Besides, a tutorial on writing SwaLint plug-ins will be available soon on http://swalint.netshed.de/
Have a nice week. So long, -Tobias
I think what Lukas wanted to say is that it is difficult to produce good tools and to maintain them over a long period of time, and that it is often more rewarding to pool effort so that in the end we actually get something. This is why having more rules under SmallLint is interesting.
Now, why should we use SwaLint when we have SmallLint? What is your selling argument? Because so far this is not clear.
Stef
Hi Stéphane,
On Mon, May 26, 2008 at 4:17 PM, stephane ducasse stephane.ducasse@free.fr wrote:
[...] This is why having more rules under SmallLint is interesting.
as far as I understood it, essentially, there are more rules now, only the other way around. ;-)
Now, why should we use SwaLint when we have SmallLint? What is your selling argument? Because so far this is not clear.
Ach, economy. Nobody is trying to sell anything; this was an announcement. It's available. If anybody is interested, they are free to try it. If we really really REALLY needed selling arguments for all we do, we wouldn't have the time to build cool systems out of the blue any more.
I really wonder what all this questioning is supposed to be about. Please enlighten me. :-)
Best,
Michael
"Michael Haupt" mhaupt@gmail.com wrote in message
I really wonder what all this questioning is supposed to be about.
At least from me, just an enthusiastic user of great tools who is dismayed by divergence in tool efforts ... I, and likely others who are not building these tools, would benefit more from convergence, from a purely selfish perspective :-)
Sophie
Hi Sophie,
On Tue, May 27, 2008 at 4:42 PM, itsme213 itsme213@hotmail.com wrote:
At least from me, just an enthusiastic user of great tools who is dismayed by divergence in tool efforts ... I, and likely others who are not building these tools, would benefit more from convergence, from a purely selfish perspective :-)
This makes me wonder why those who came up with Squeak didn't stick with VisualWorks, for instance. You can name many many more examples. I completely fail to understand what should be undesirable about alternatives. Hence, again, why the questioning? And why in this particular case?
Best,
Michael
On 27.05.2008, at 16:50, Michael Haupt wrote:
I completely fail to understand what should be undesirable about alternatives.
I'm not implying this is the case for SwaLint, but, there are cases where started-from-scratch alternatives take away resources that would have been better spent improving an existing implementation. Also known as CADT (http://www.jwz.org/doc/cadt.html).
- Bert -
Hi Bert,
On Tue, May 27, 2008 at 5:32 PM, Bert Freudenberg bert@freudenbergs.de wrote:
I'm not implying this is the case for SwaLint, but, there are cases where started-from-scratch alternatives take away resources that would have been better spent improving an existing implementation. Also known as CADT (http://www.jwz.org/doc/cadt.html).
I am aware of that phenomenon. :-)
In this case, the entire thing started as a student project in a course on software technology. The task was to construct a new system. In the end, it turned out to be a very nice one - so why not make it public?
Best,
Michael
On 27.05.2008, at 17:33, Bert Freudenberg wrote:
I'm not implying this is the case for SwaLint, but, there are cases where started-from-scratch alternatives take away resources that would have been better spent improving an existing implementation.
In many ways it is better to spend time improving existing software; however, let me mention a few things concerning SwaLint:
It was a student project which aimed at "improving SmallLint", or providing a tool that integrates SmallLint's capabilities.
The main goals of the project were to produce software with
- an intuitive UI
- support for metrics
- support for test configurations
- the above-mentioned SmallLint integration
We spent a lot of time analysing SmallLint and its architecture in order to find out whether it is feasible to extend it. We found that it would require a major architectural change to make SmallLint capable of reusing results from other tests (which is a definite must if you want to provide metrics support). It also has no support for test configurations, and an unintuitive UI (that is just my opinion). Therefore we created our own tool.
SmallLint is a great tool for the sophisticated developer. Our tool, however, is also intended to be usable by the normal programmer: "Select your classes and tests on one screen, click, and here is a well-structured overview of the problems in your application. (Maybe do some configuration.)"
That is why we will definitely not support scoping in the way SmallLint allows it. Also, we will not support refactorings. However, it takes about five minutes to integrate new SmallLint tests into our program (the tests written by Lukas for Seaside applications are already in the development build of SwaLint).
These are then "easy to use" and integrated into a cool UI, accessible to everybody, even the developer who does not care about ASTs, scoped environment browsers, and so on.
Maybe usability is that kind of "selling argument", even if it does not appeal to everybody.
Have a nice week, Nico
I seriously just want to know what is in SwaLint.
I do not have an internet connection, so I cannot check on the web. Now, when people make an announcement, it seems to me that they want to present something to others. So I simply asked: OK, as a SmallLint user, what would be the incentive to switch? You see, I'm even ready to use something better. But knowing what it offers would help.
Stef
Sure, I understand that. I thought this was the case. Now just tell us what is cool about it. Why do you read my questions as doubtful? They are not. So reread what I wrote and you will see.
Stef
Hi nico
Thanks, finally some information... What is a test configuration?
It was a student project which aimed at "improving SmallLint", or providing a tool that integrates SmallLint's capabilities.
The main goals of the project were to produce software with
- An intuitive UI
indeed this is needed for SmallLint.
- Support for metrics
- Support for test configurations
- The above-mentioned SmallLint integration
We spent a lot of time analysing SmallLint and its architecture in order to find out whether it is feasible to extend it. We found that it would require a major architectural change to make SmallLint capable of reusing results from other tests (which is a definite must if you want to provide metrics support). It also has no support for test configurations, and an unintuitive UI (that is just my opinion). Therefore we created our own tool.
SmallLint is a great tool for the sophisticated developer. Our tool, however, is also intended to be usable by the normal programmer: "Select your classes and tests on one screen, click, and here is a well-structured overview of the problems in your application. (Maybe do some configuration.)"
That is why we will definitely not support scoping in the way SmallLint allows it. Also, we will not support refactorings. However, it takes about five minutes to integrate new SmallLint tests into our program (the tests written by Lukas for Seaside applications are already in the development build of SwaLint).
These are then "easy to use" and integrated into a cool UI, accessible to everybody, even the developer who does not care about ASTs, scoped environment browsers, and so on.
Maybe usability is that kind of "selling argument", even if it does not appeal to everybody.
it appeals to me.
Have a nice week, Nico
Hi Stéphane,
On Fri, May 30, 2008 at 5:26 PM, stephane ducasse stephane.ducasse@free.fr wrote:
I do not have an internet connection so I cannot check on the web.
??!!
Baffled,
Michael ;-)
I suppose Stephane must be using an email client utilising RFC 1149 ;)
Hello,
On 2008-05-30 at 17:26, stephane ducasse wrote:
Hi nico
Thanks, finally some information...
I’m sorry that I seemed unable to give any valuable information before.
What is a test configuration?
I’m afraid “What” is no test configuration ;). Actually, with the need for metric-based tests, the need for average thresholds came up. Thus, we provided the possibility to have preferences for your tests permanently stored. We sometimes refer to these preferences as a “test configuration”.
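As a rough illustration, a test configuration amounts to per-test preferences that survive between sessions. A Python sketch of the idea (the JSON file format and all names are invented here, not SwaLint's actual persistence mechanism):

```python
# Hypothetical sketch of a persistent "test configuration": per-test
# preferences such as metric thresholds, stored between runs.
# The JSON format and the names are invented for illustration.
import json
import os
import tempfile

prefs_path = os.path.join(tempfile.gettempdir(), 'swalint-prefs.json')

# Save a configuration at the end of one session...
config = {'LongMethod': {'averageLOC': 7}, 'ATFD': {'high': 7}}
with open(prefs_path, 'w') as f:
    json.dump(config, f)

# ...and load it again in the next session, overriding one threshold:
with open(prefs_path) as f:
    loaded = json.load(f)
loaded['LongMethod']['averageLOC'] = 12  # this project tolerates longer methods
```

The benefit is that customized thresholds do not have to be re-entered on every run.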
[…]
Maybe usability is that kind of "selling argument", even if it does not appeal to everybody.
it appeals to me.
Glad to hear.
Have a nice week and so long, -Tobias
Hi!
http://swalint.netshed.de/ is down, and I would like to see the documentation.
Any suggestions?
Thanks,
Mariano
Hello Mariano,
On 2009-01-27 at 04:27, Mariano Martinez Peck wrote:
Hi!
http://swalint.netshed.de/ is down. I would like to see the documentation.
Any suggestions?
I’m afraid the SwaLint Wiki is down for migration. I’d like to apologize for the inconvenience. I hope to have it up and running again by mid-February.
Have a nice evening -Tobias
OK, thanks. Let me know when it is up. I would like to do a review with SwaLint before the first stable release of SqueakDBX, and I don't know how to install and use it.
Cheers,
Mariano
Hello, Mariano
On 2009-01-28 at 02:00, Mariano Martinez Peck wrote:
OK, thanks. Let me know when it is up.
I will do.
I would like to do a review with SwaLint before the first stable release of SqueakDBX, and I don't know how to install and use it.
Here is a short synopsis. You need:
- a Squeak image >= 3.9
- AST from SqueakMap
- Refactoring Engine from SqueakMap (provides SmallLint, btw)
- SwaLint from SqueakMap
Then you should be able to find SwaLint in the open menu (or do 'SwaLint open').
Usage is as follows:
- Select the categories to test in the leftmost pane (right-click gives you more advanced selection possibilities).
- Select the classes to test in the pane next to it (again, right-click gives you more options).
- Select the test categories in the upper right pane.
- Select the actual tests to carry out in the lower right pane (again, right-click gives you more options).
- Click "Configure" to set preferences for the tests (e.g., which SmallLint tests shall show up).
- Click "Run Tests" to run the tests.
Then a window should appear, presenting the test results.
- Selecting in the upper panes selects which tests' or classes' results will be shown.
- In the lower pane, right-clicking gives you advanced sorting and selection options; you can browse the targets of the tests there, too.
I hope this provides enough information to start using SwaLint. Don't hesitate to ask further questions.
So long, -Tobias
Ok. Thanks a lot. I will then try it.
Cheers,
Mariano
Tobias: I managed to get some time to try SwaLint. The truth is that, for me, it is very user-friendly, intuitive, and easy to use. In addition, it is very customizable, as you can modify your preferences for the tests. In fact, I was very surprised and happy with it.
Ahh I have some questions:
1) I see there is one test that checks the "length" of a method. I like to document and put all the necessary comments in my tests. So, for many of them I get "long method", but actually most of those methods consist largely of comments (written between double quotes). So, the question is: can comments be ignored in this test?
2) I think when you take care of the results and analyse them in order to fix them, the common way of doing this is going test by test. So it would be fantastic to be able to browse a test, such that this browse takes all the classes that have occurrences. Example: "long methods", right button, "browse all", and this opens the browser with all the classes that have long-method occurrences.
3) What does "Data class" mean? Are all the tests described in detail somewhere, like a webpage, a wiki, or just the SwaLint browser?
4) What does "Dot after return consistency" mean? I don't understand the % and when it is average, low, or high.
Thanks a lot.
Cheers,
Mariano
Hello Mariano,
On 2009-03-01 at 19:12, Mariano Martinez Peck wrote:
Tobias: I managed to get some time to try SwaLint. The truth is that, for me, it is very user-friendly, intuitive, and easy to use. In addition, it is very customizable, as you can modify your preferences for the tests. In fact, I was very surprised and happy with it.
Thank you, in the name of my team members. In fact, we are glad to hear that what we intended to accomplish actually works.
Ahh I have some questions:
- I see there is one test that checks the "length" of a method. I like to document and put all the necessary comments in my tests. So, for many of them I get "long method", but actually most of those methods consist largely of comments (written between double quotes). So, the question is: can comments be ignored in this test?
In fact, we included comments intentionally (albeit I do not understand the "" remark; please help me there). Our reasoning was that comments normally serve the purpose of clarifying code; thus, if they were omitted, the code would not be easily understandable. Therefore, we treat comments as code. IIRC, we assumed an average of 7 LOC per method in the beginning. Yet, you can set your desired average LOC in the preferences.
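The policy can be illustrated with a toy length check; this is a Python sketch with an invented helper, whereas the real test of course operates on parsed Squeak methods:

```python
# Toy sketch of a "long method" check that, as described above, counts
# comment lines as code. The helper name is invented for illustration;
# the real SwaLint test works on the Squeak AST, not on raw text.

def method_length(source, count_comments=True):
    """Count non-blank lines; optionally drop whole-line comments."""
    lines = [l.strip() for l in source.splitlines() if l.strip()]
    if not count_comments:
        # Squeak comments are delimited by double quotes; crudely drop
        # lines that consist entirely of one comment.
        lines = [l for l in lines
                 if not (l.startswith('"') and l.endswith('"'))]
    return len(lines)

src = '''printOn: aStream
    "Append a textual description to aStream"
    super printOn: aStream.
    aStream nextPutAll: ' (lint result)' '''

print(method_length(src))                        # 4: the comment counts
print(method_length(src, count_comments=False))  # 3: the comment is dropped
```

With the comment-as-code policy, the method above is one line closer to the 7-LOC threshold than it would be otherwise.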
- I think when you take care of the results and analyse them in order to fix them, the common way of doing this is going test by test. So it would be fantastic to be able to browse a test, such that this browse takes all the classes that have occurrences. Example: "long methods", right button, "browse all", and this opens the browser with all the classes that have long-method occurrences.
Ok, this might be useful. Did I get that right? You'd like to right-click on the test group and browse all of them? Currently, you can select multiple (or one, depending on your personal settings) "occurrences" of test results and right-click -> browse them all. By the way, you can sort the results pane by class, too. Then you can have a look at which classes are most affected by the selected tests.
- What does "Data class" mean? Are all the tests described in detail somewhere? Like a webpage, a wiki, or just the SwaLint browser?
This notion has been taken from Michele Lanza's and Radu Marinescu's nice book "Object-Oriented Metrics in Practice". If you would like to see the underlying metrics used to "calculate" the Data class, just enable them in the Preferences. You will notice that they are named after the ones described in the book. Our mechanism of reusing results allows for simple creation of new test plugins. Just play around with them. There have been plans to cover all metrics described in the book; however, time is short and I will not be able to implement them before summer, I presume.
As you mention the Wiki: I'm looking forward to bringing it up again around mid March, or late March, depending on how my new server is running.
- What does "Dot after return consistency" mean? I don't understand the % and when it is average, low, or high.
Oh, that's a nice one :) This is a style test. It simply counts how many returns are written
^ anObject aMessage
and how many are written
^ anObject aMessage.
Here, 100% means "all with" and 0% means "all without" a dot after the return. Thus, 50% is the worst value you can get in this test, as it implies that every second return statement is written in the opposing style.
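(Editor's sketch, not SwaLint's actual code: the percentage can be made concrete with a small workspace snippet. The counts here are made up for illustration.)
<code>
"Hypothetical sketch: consistency percentage from counts of
dotted and undotted return statements."
| withDot withoutDot |
withDot := 18.      "returns written as:  ^ anObject aMessage."
withoutDot := 2.    "returns written as:  ^ anObject aMessage"
(withDot * 100 / (withDot + withoutDot)) rounded    "=> 90, fairly consistent"
</code>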
Thanks a lot.
You're welcome.
Have a nice day -Tobias
Ah, I have some questions:
- I see there is one test that checks the "length" of a method. I like to document and put all the necessary comments in my tests. So, in many tests, I get "long method", but actually most of those methods' lines are comments (written with ""). So, the question is: can comments be ignored in this test?
In fact, we included comments intentionally. However, I do not understand the "" comment idea (please help me there).
Sorry. What is it that you don't understand? I don't understand what you didn't understand hahaha. I'll give you an example (the first one I found):
<code>
executeDDLScript: aDDLScript
	"Its very common you need to execute a complete DDL script: create, drop or alter tables. In these cases, you don't have any interesting results from each query. In such a case, you should use this method. Remember SqueakDBX doesn't do any translation so your statement delimiter must be understood by the backend. In order to know which delimiter we use, you can see the message queryDelimiter of the current platform backend, for example DBXPostgresPlatform. This message doesn't use the multistatements option of openDBX, it is all done by SqueakDBX so you don't have to care about it''"
	| ddlStatements |
	ddlStatements := aDDLScript findTokens: self platform queryDelimiter.
	ddlStatements do: [:ddlStatement | self execute: ddlStatement]
</code>
Our reasoning was that comments normally serve the purpose of clarifying code; thus, if they were omitted, the code would not be easily understandable. Therefore, we treat comments as code. IIRC, we assumed an average of 7 LOC per method in the beginning.
I don't totally agree, but it doesn't matter. Your answer is very clear.
- I think when you take care of the results and analyse them in order to fix them, the common way of doing this is going test by test. So, it would be fantastic to be able to browse a test; and when you do this, the browser takes all the classes that have occurrences. Example: "long methods", right button, "browse all", and this opens the browser with all the classes that have "long method" occurrences.
Ok, this might be useful. Did I get that right? You'd like to right-click on the test group and browse all of them?
EXACTLY. I don't know if it was just me, but that was actually my way of resolving the issues: go test by test, and for each one browse all the classes. Perhaps I am the only one, though, so maybe it just isn't worth it.
Currently, you can select multiple (or one, depending on your personal settings) "occurrences" of test results and right-click -> browse them all.
Exactly. That is what I did for each test. I don't know if it was the easiest way; so, because of this, I imagined 2) :)
By the way, you can sort the results pane by class, too. Then you can have a look at what classes are most affected by the tests selected.
- What does "Data class" mean? Are all the tests described in detail somewhere? Like a webpage, a wiki, or just the SwaLint browser?
This notion has been taken from Michele Lanza's and Radu Marinescu's nice book "Object-Oriented Metrics in Practice". If you would like to see the underlying metrics used to "calculate" the Data class, just enable them in the Preferences. You will notice that they are named after the ones described in the book. Our mechanism of reusing results allows for simple creation of new test plugins. Just play around with them. There have been plans to cover all metrics described in the book; however, time is short and I will not be able to implement them before summer, I presume.
Ok, perfect. Thanks!
As you mention the Wiki: I'm looking forward to bringing it up again around mid March, or late March, depending on how my new server is running.
Excellent news!
- What does "Dot after return consistency" mean? I don't understand the % and when it is average, low, or high.
Oh, that's a nice one :) This is a style test. It simply counts how many returns are written
^ anObject aMessage
and how many are written
^ anObject aMessage.
Here, 100% means "all with" and 0% means "all without" a dot after the return. Thus, 50% is the worst value you can get in this test, as it implies that every second return statement is written in the opposing style.
And are there differences between the two ways? I mean, a real difference? Or is it just about doing it the same way in all the code?
Thanks a lot.
You're welcome.
Have a nice day -Tobias
Hello Mariano,
sorry for the late reply; I have been busy these past weeks.
Am 2009-03-03 um 23:08 schrieb Mariano Martinez Peck:
[…]
Sorry. What is it that you don't understand? I don't understand what you didn't understand hahaha. I'll give you an example (the first one I found):
<code>
executeDDLScript: aDDLScript
	"Its very common you need to execute a complete DDL script: create, drop or alter tables. In these cases, you don't have any interesting results from each query. In such a case, you should use this method. Remember SqueakDBX doesn't do any translation so your statement delimiter must be understood by the backend. In order to know which delimiter we use, you can see the message queryDelimiter of the current platform backend, for example DBXPostgresPlatform. This message doesn't use the multistatements option of openDBX, it is all done by SqueakDBX so you don't have to care about it''"
	| ddlStatements |
	ddlStatements := aDDLScript findTokens: self platform queryDelimiter.
	ddlStatements do: [:ddlStatement | self execute: ddlStatement]
</code>
Ah, now I get it. In fact, I understood your question as meaning that you put empty comments into your code, as in:
aMessage ""
self anotherMessage; aThirdMessage: true.
Which would not make sense to me. Regarding your example method: in my humble opinion, this is a long method, even though it is long only due to its comment. I (or we, in that case) as SwaLint developers thought that an average of 7 lines per method in a class would fit Smalltalk style and should cope with small getters/setters and rather large initializers. Well, I presume this average doesn't fit for you, so simply change it in the Preferences. In fact, it is merely a matter of taste and/or style, so don't hesitate to change it if it doesn't fit you.
[…]
Ok, this might be useful. Did I get that right? You'd like to right-click on the test group and browse all of them?
EXACTLY. I don't know if it was just me, but that was actually my way of resolving the issues: go test by test, and for each one browse all the classes. Perhaps I am the only one, though, so maybe it just isn't worth it.
I consider it a useful feature. Yet, currently my time to work on SwaLint is fairly limited, so please don't expect it before summer.
Currently, you can select multiple (or one, depending on your personal settings) "occurrences" of test results and right-click -> browse them all.
Exactly. That is what I did for each test. I don't know if it was the easiest way; so, because of this, I imagined 2) :)
Point taken.
[…]
- What does "Data class" mean? Are all the tests described in detail somewhere? Like a webpage, a wiki, or just the SwaLint browser?
This notion has been taken from Michele Lanza's and Radu Marinescu's nice book "Object-Oriented Metrics in Practice". If you would like to see the underlying metrics used to "calculate" the Data class, just enable them in the Preferences. You will notice that they are named after the ones described in the book. Our mechanism of reusing results allows for simple creation of new test plugins. Just play around with them. There have been plans to cover all metrics described in the book; however, time is short and I will not be able to implement them before summer, I presume.
Ok, perfect. Thanks!
You're welcome.
As you mention the Wiki: I'm looking forward to bringing it up again around mid March, or late March, depending on how my new server is running.
Excellent news!
It depends. It actually depends on some friends of mine in this case, because they haven't settled on buying the server yet *getting_nervous* ;)
- What does "Dot after return consistency" mean? I don't understand the % and when it is average, low, or high.
Oh, that's a nice one :) This is a style test. It simply counts how many returns are written
^ anObject aMessage
and how many are written
^ anObject aMessage.
Here, 100% means "all with" and 0% means "all without" a dot after the return. Thus, 50% is the worst value you can get in this test, as it implies that every second return statement is written in the opposing style.
And are there differences between the two ways? I mean, a real difference? Or is it just about doing it the same way in all the code?
The latter. It's "just" about style consistency.
Have a nice weekend, -Tobias
On Fri, Mar 13, 2009 at 10:04 AM, Tobias Pape Das.Linux@gmx.de wrote:
Hello Mariano,
sorry for the late reply; I have been busy these past weeks.
better late than never ;)
Am 2009-03-03 um 23:08 schrieb Mariano Martinez Peck:
[…]
Sorry. What is it that you don't understand? I don't understand what you didn't understand hahaha. I'll give you an example (the first one I found):
<code>
executeDDLScript: aDDLScript
	"Its very common you need to execute a complete DDL script: create, drop or alter tables. In these cases, you don't have any interesting results from each query. In such a case, you should use this method. Remember SqueakDBX doesn't do any translation so your statement delimiter must be understood by the backend. In order to know which delimiter we use, you can see the message queryDelimiter of the current platform backend, for example DBXPostgresPlatform. This message doesn't use the multistatements option of openDBX, it is all done by SqueakDBX so you don't have to care about it''"
	| ddlStatements |
	ddlStatements := aDDLScript findTokens: self platform queryDelimiter.
	ddlStatements do: [:ddlStatement | self execute: ddlStatement]
</code>
Ah, now I get it. In fact, I understood your question as meaning that you put empty comments into your code, as in:
aMessage ""
self anotherMessage; aThirdMessage: true.
Which would not make sense to me. Regarding your example method: in my humble opinion, this is a long method, even though it is long only due to its comment. I (or we, in that case) as SwaLint developers thought that an average of 7 lines per method in a class would fit Smalltalk style and should cope with small getters/setters and rather large initializers. Well, I presume this average doesn't fit for you, so simply change it in the Preferences. In fact, it is merely a matter of taste and/or style, so don't hesitate to change it if it doesn't fit you.
Ok. I will do that.
[…]
Ok, this might be useful. Did I get that right? You'd like to right-click on the test group and browse all of them?
EXACTLY. I don't know if it was just me, but that was actually my way of resolving the issues: go test by test, and for each one browse all the classes. Perhaps I am the only one, though, so maybe it just isn't worth it.
I consider it a useful feature. Yet, currently my time to work on SwaLint is fairly limited, so please don't expect it before summer.
I am not in a hurry at all :)
- What does "Dot after return consistency" mean? I don't understand the % and when it is average, low, or high.
Oh, that's a nice one :) This is a style test. It simply counts how many returns are written
^ anObject aMessage
and how many are written
^ anObject aMessage.
Here, 100% means "all with" and 0% means "all without" a dot after the return. Thus, 50% is the worst value you can get in this test, as it implies that every second return statement is written in the opposing style.
And are there differences between the two ways? I mean, a real difference? Or is it just about doing it the same way in all the code?
The latter. It's "just" about style consistency.
Ok, excellent.
Have a nice weekend, -Tobias
The same to you, thanks
squeak-dev@lists.squeakfoundation.org