Authors of large-scale changes are in an impossible situation: they can always be knocked back on the basis that the proposed change "may" break something, and even small changes can have knock-on effects. The only way to solve this problem is to define what "something" is, and to have a set of tests and test cases that cover it.
There is already a set of tests suitable for testing refactorings, which works by comparing the output of existing "golden" code against the output of the refactored code. These tests are in the repository, in tools/test/script_tests. However they are rather fragile, because they compare raw HTML, and two different HTML fragments can render identically (for example, the same markup with attributes in a different order fails the comparison without changing what the user sees). They also don't help when the change is actually intended to change something. In that case the change author needs a set of test cases that demonstrate what is expected from the code they thought they hadn't touched.
Personally I have always made heavy use of certain topics, such as TextFormattingRules, in this role, but this approach has the problem that it's not easy to automate tests based on it, and it's not a particularly friendly or complete test; these topics were never designed as testcases.
So I've started writing proper test cases. The first is in TestCaseEmbeddedTags. Yes, I know the second test fails; I put it in because it exhibits what I think we would all agree should be the correct behaviour (and because it passes with LocationLocationLocation, of course).
I have used the naming convention "TestCase...." and a wikibadge CategoryTestCase.
I know the Codev web is not the right place, but I can't create a better place.
Note that, to support automation, I used HTML comments to indicate expected and actual output.
-- CrawfordCurrie - 21 Oct 2004
Thank you very much for this, Crawford.

If I don't hear any complaints, I'd like to create a Testcases web to store these in (I'll do it in a week's time).
For some reason I had a preference for one test case per topic (when I was thinking about this a long time ago); in fact I even separated the tests, the expected results, and a config/description topic.
Hopefully I can find the test cases I wrote for Unisys some time too.
-- SvenDowideit - 21 Oct 2004
That is a move in the right direction; we desperately need a TWikiTestInfrastructure, focusing first on a TestCase infrastructure, not unit testing.
-- PeterThoeny - 21 Oct 2004
Yeah, I thought about one testcase per topic, but I didn't do it because it would be a pain in the butt for manual testing. If we follow Peter's idea of TestSets, then they could %INCLUDE all the TestCases, I suppose. But for now, it's quicker and easier to create the testcases with many cases in each topic.
- We talk about the same thing, different terminology. In TWikiTestInfrastructure I proposed this hierarchy: -- PTh
   - Index topic: shows the result of a set of test topics
   - TestSet: one topic containing test cases (what you call a test case)
- Okey-doke. Well, we can always change the topic naming standard when we move the topics into the Testcases web (assuming Sven creates it).
Anyone fancy writing a TWikiTestRunnerPlugin? I'm not sure how you'd do it (CPAN:WWW::Mechanize, perhaps?) but that's part of the challenge! It would be of huge benefit to the community.
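To make the idea concrete, here is a minimal sketch of what an external runner might look like, using CPAN:WWW::Mechanize. Everything specific in it is an assumption rather than an agreed convention: the base URL, the TestCases web name, and the <!-- expected --> / <!-- actual --> marker comments are all invented for illustration; a real plugin would presumably run inside TWiki rather than scraping rendered pages.

<verbatim>
#!/usr/bin/perl
# Hypothetical external testcase runner -- a sketch, not the plugin.
# Assumptions: the base URL below, and that each testcase topic wraps its
# sections in <!-- expected --> ... <!-- /expected --> and
# <!-- actual --> ... <!-- /actual --> HTML comments (marker names invented).
use strict;
use warnings;
use WWW::Mechanize;

my $base   = 'http://localhost/twiki/bin/view/TestCases';   # assumed URL
my @topics = @ARGV or die "usage: $0 TestCaseTopic ...\n";

my $mech = WWW::Mechanize->new( autocheck => 1 );

for my $topic (@topics) {
    $mech->get("$base/$topic");
    my $html = $mech->content();

    # Pull out the marked sections.
    my ($expected) = $html =~ m{<!--\s*expected\s*-->(.*?)<!--\s*/expected\s*-->}s;
    my ($actual)   = $html =~ m{<!--\s*actual\s*-->(.*?)<!--\s*/actual\s*-->}s;

    unless ( defined $expected && defined $actual ) {
        print "$topic: SKIP (markers not found)\n";
        next;
    }

    # Normalise whitespace so purely cosmetic differences don't fail the test.
    for ( $expected, $actual ) {
        s/\s+/ /g;
        s/^ //;
        s/ $//;
    }

    print "$topic: ", ( $expected eq $actual ? 'PASS' : 'FAIL' ), "\n";
}
</verbatim>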
-- CrawfordCurrie - 22 Oct 2004
Can those with ideas about which test cases should be written please list them out, so that others can write the definitions? I'd like to help but am stumped as to where to start. Also, if we list them up front then people can claim them as they start, which has the benefit that nobody risks duplicating someone else's effort.
-- MartinCleaver - 22 Oct 2004
Off the top of my head: the testcases coloured red are non-trivial, i.e. not as simple as viewing a page (or pressing a button on a page), usually because they require dynamic fixtures.
Test cases defined so far can be found in the TWiki Subversion repository, in the data/TestCases web (DEVELOP branch ATM). Please create new testcases there, and mark them DONE above.