Further Development of TWiki Test Strategies
Just as every plugin has its *Dev topic, and TWiki as a whole has the Codev web, this is the Dev topic for the TestCasesTutorial.
Room for Improvement
This is just a raw collection of ideas I'd like to see improved, and I'll probably check in some of them myself:
- Some test cases need extra CPAN libraries (HTML::TreeBuilder) installed, in addition to what comes with Test::Unit. Some test cases need TWiki extensions (TWiki::Plugins::TestFixturePlugin, TWiki::Contrib::CliRunnerContrib) installed. In my opinion, tests should fail gracefully, with meaningful error messages, if a test cannot be performed. It may be desirable for every tester to be able to run every test, but this shouldn't be enforced (Bugs:Item3219 will take care of some cases).
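One way to fail gracefully is to probe optional prerequisites at load time and report a clear skip message instead of dying with a compile error. A minimal sketch; the package name and test method are invented, and the module list is just the one mentioned above:

```perl
package PrerequisiteAwareTests;  # hypothetical test case

use strict;
use warnings;
use base 'Test::Unit::TestCase';

# Probe optional prerequisites once, at load time.
our $missing;

BEGIN {
    for my $module (qw(HTML::TreeBuilder TWiki::Contrib::CliRunnerContrib)) {
        eval "require $module";
        if ($@) {
            $missing = $module;
            last;
        }
    }
}

sub test_render_with_treebuilder {
    my $this = shift;

    # Instead of an obscure compile failure, emit a meaningful
    # message and return without asserting anything.
    if ($missing) {
        print STDERR "Skipped: optional module $missing is not installed\n";
        return;
    }

    # ... real assertions using HTML::TreeBuilder would go here ...
}

1;
```

Whether a skipped test should count as a pass or be reported separately is exactly the policy question discussed below.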
- In many cases, "low level" unit tests are not sufficient. Bugs:Item3205 was a nasty example: there is a bunch of tests for access control in test/unit/AccessControlTests, and all of them pass at the time of this writing. Yet a simple topic protected with ALLOWTOPICVIEW would produce an internal server error. We need to add tests spanning a whole view.
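A "whole view" test could drive the actual bin/view script end to end. A rough sketch, assuming the CGI script can be run from the command line with CGI-style key=value parameters (as CGI.pm scripts generally can); the installation path and topic name are made up:

```perl
use strict;
use warnings;

# Run the view script the way the web server would, and check
# that an ALLOWTOPICVIEW-protected topic does not blow up with
# an internal server error.
chdir '/var/www/twiki/bin' or die "no bin dir: $!";

my $output = `perl view topic=TestCases/ProtectedTopic 2>&1`;

# A denied request should still render a complete response
# (headers included), not crash before producing any output.
die "view script crashed:\n$output"
    unless $output =~ /^Content-type:/mi;
```

The same check could of course be wrapped in a Test::Unit assertion rather than a bare die.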
- In SVN we have a TestCases web, consisting of topics for both automatic and manual tests, plus fixtures. The test cases in this web are very difficult to maintain, and many of them fail. Though some effort has been spent on showing actual versus expected content, I always find it extremely difficult to draw conclusions from what I see. Most of the time the code is "correct", sort of, and it is the test topic which needs to be fixed.
- I'd like to see the TestCases web also used as a store of readonly, "standardized" topics for unit tests. Test cases which need to create temporary webs, then store constant strings into topics, only to read them again, are awfully slow. Due to the architecture of Test::Unit, the temporary webs are usually created and deleted for each test case.
- TWiki's logging mechanism gets in the way more than it helps: at the beginning of each month, if I run my first test from the browser in the TestCases web, a log file like data/log200612.txt is created, owned by the web server's user id. If I then start unit tests under my own user id, almost every test case clutters STDERR because it is not allowed to write to the log file(s). I suggest simply redirecting log records from unit tests to a temporary file, or to /dev/null. This has been implemented for tests based on TWikiFnTestCase.pm and should be extended to TWikiTestCase.pm as well.
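The redirection itself is only a few lines in a test base class's set_up. A sketch; the $TWiki::cfg key names below follow the usual conventions but should be verified against TWiki.spec for the release in question:

```perl
# Sketch for a set_up override: point all log and warning output
# at throwaway files, so tests never touch the real monthly log
# file owned by the web server's user id.
use File::Temp qw(tempfile);

sub set_up {
    my $this = shift;
    $this->SUPER::set_up();

    my ( undef, $logfile )  = tempfile( UNLINK => 1 );
    my ( undef, $warnfile ) = tempfile( UNLINK => 1 );

    # Assumed key names - check TWiki.spec for your release.
    $TWiki::cfg{LogFileName}     = $logfile;
    $TWiki::cfg{WarningFileName} = $warnfile;
}
```

Since set_up runs before every test case, each case gets its own disposable files and nothing leaks into data/.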
--
Contributors: HaraldJoerg - 01 Dec 2006
Discussion
On 1; I think this is a double-edged sword. Some people seem to see "graceful failure" as an excuse to ignore the test :-(. How about writing a "set up my environment for testing" script instead? You could do it fairly trivially, I think, by writing a TWikiTestingContrib.
- That is likely to get us into the Installer dilemma again. And today people seem to ignore unit tests even without that excuse, so there is little to lose. BTW, I plan to write test cases which take incredibly long to run, and would accept that these are not run by everybody before every svn commit. -- HaraldJoerg - 01 Dec 2006
- OK, fair enough. It seems to me that the people who are committed enough to run the tests don't have too much trouble setting up an environment, though there is frequent whining. I agree there needs to be a distinction between "smoke tests" and "all tests" - that's why I defined TWikiTestSuite the way I did (you have to list the cases to run explicitly, rather than using readdir) - C.
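For reference, the explicit-list pattern in Perl's Test::Unit looks roughly like this (the suite name mirrors the one mentioned above, but the member modules are illustrative):

```perl
package TWikiTestSuite;  # sketch of the explicit-list idea

use strict;
use warnings;
use base 'Test::Unit::TestSuite';

# Cases are listed by hand: nothing is picked up by accident
# via readdir, and deliberately slow suites can be left out of
# the default "smoke test" run.
sub include_tests {
    qw(
        AccessControlTests
        ViewScriptTests
    );
}

1;
```

A separate suite module with a longer include_tests list could then serve as the "all tests" entry point.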
On 2 and 3; agreed, definitely agreed. Originally this was the purpose of the TestCases web, but it has become clear over time that it just doesn't work well enough. There needs to be some way to script tests that exercise TWiki over a series of operations; for example, topic renaming.
Another problem I perceive is that testcases at the higher levels tend to be very fragile; they are often testing formatting, and things as simple as Arthur adding a class to some HTML tags, or a minor change to a message text, have been known to cause spontaneous massive test failures :-(. Not sure what the solution is here, other than making it easier to run the tests.
- Agreed, comparing TWiki-rendered HTML can be fragile. If we replace the TestCases web with unit tests, we can more easily leverage Perl's regex capability to assert only the important bits of the result. And if Arthur should ever add a class, then there ought to be testcases verifying that the class is present where it should be, and only there. -- HaraldJoerg - 01 Dec 2006
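As a sketch of "asserting only the important bits": a unit test can match the rendered HTML against a regex instead of comparing the whole page (assert_matches and assert_equals come from Test::Unit's assertion interface; the class name, markup, and the render_fixture_topic helper are invented for illustration):

```perl
sub test_class_is_present {
    my $this = shift;

    # Render a topic; details depend on the fixture in use.
    my $html = $this->render_fixture_topic();  # hypothetical helper

    # Assert only what this test is about: the class appears on
    # the intended tag ...
    $this->assert_matches( qr/<div[^>]*\bclass="twikiAlert"/, $html );

    # ... and exactly once, so unrelated formatting tweaks
    # elsewhere in the page cannot break this test.
    my @hits = $html =~ /class="twikiAlert"/g;
    $this->assert_equals( 1, scalar @hits );
}
```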
- Right. The TWikiTestCase already has HTML comparison facilities. These could easily be extended. - C.
On 4; my feeling is that there are very few useful tests that would be able to leverage this. My approach has always been to keep test data and test code as close together as possible, and to clean down fixtures as thoroughly as possible. I have found that if you allow any separation between code and data in the tests, they easily get out of step, giving false failures and sometimes (worse) false positives. There is definitely room for better support for fixture generation in TWikiTestCase; for example, generating fixtures from raw topic text in __DATA__. One mistake I made early on was using fixtures to generate topic caches (.txt files) instead of database items (.txt,v files). This has resulted in a number of test gaps, and excessively complex fixtures.
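Generating a fixture from raw topic text in __DATA__ could look roughly like this. The web and topic names are placeholders, and saveTopicText is the TWiki::Func API; inside a unit test a store-level call may be more appropriate:

```perl
# Sketch: build a test topic from raw text kept right next to
# the test code, so data and code cannot drift apart.
sub set_up_fixture_topic {
    my $this = shift;

    local $/;            # slurp mode
    my $text = <DATA>;   # the raw topic text below

    TWiki::Func::saveTopicText( 'TemporaryTestWeb', 'FixtureTopic', $text );
}

__DATA__
   * Set ALLOWTOPICVIEW = TWikiAdminGroup
This topic is a readonly fixture for access control tests.
```

Note that saving through the store this way would also produce the ,v history file, avoiding the topic-cache-only mistake described above.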
- Agreed, test code and test topics need to be in sync. What I had in mind were some very simple topics, intended only for readonly tests, and maintained in SVN. For example, the performance of ViewScriptTests.pm can be improved by a factor of two if it reads topics from TestCases instead of creating and deleting a temporary web for its one and only test. -- HaraldJoerg - 02 Dec 2006
On 5; absolutely. The easiest approach is to modify TWikiTestCase to disable logging in $TWiki::cfg.
- Done (Bugs:Item3225). The code was already there in TWikiFnTestCase. -- HaraldJoerg - 01 Dec 2006
--
CrawfordCurrie - 01 Dec 2006