
Integration Tests Using Unit Test Techniques

Summary

TWiki's unit test mechanism, intended to test individual components, can be extended both to automate the current web-based tests in the Testcases web and to perform integration tests with a modified TWiki configuration or Perl library environment.

Introduction

TWiki developers working from SVN have two tools for testing:

  1. The Testcases web, which contains a collection of topics. Most of them generate the string All tests passed if everything went OK. In a browser which supports tabbed browsing, like Firefox, you open the WebHome topic of the Testcases web, fire off all the tests from its table in separate tabs, and only then start to inspect them.
  2. The unit tests in SVN's test/unit directory, described at length in TestCasesTutorial. They run automatically, with more or less verbose output while they are processing, and yield a fine summary at the end of which tests failed, and in which way.

When experimenting with benchmarking, and when debugging test failures, I started to develop the idea of something in between. I now think that between the Testcases web and the unit tests there is space for two new scenarios:

  1. Use the test cases from the Testcases web, but instead of using a browser retrieve the pages with Perl's CPAN:LWP, and wrap that call into some Perl to evaluate the result.
  2. Fire off TWiki scripts from the command line, passing along a couple of parameters which you can't pass via a web server.

Using LWP

The difference between using a browser and using LWP is that with LWP you can do the checking automatically. In the easiest case, retrieve any of the "automatic" pages from the Testcases web into a Perl scalar and check whether the output matches "All tests passed". Complain if it doesn't.
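Such a check could be scripted in a few lines of Perl. The sketch below assumes a TWiki running on localhost and a hypothetical Testcases topic name; the pass marker follows the /ALL TESTS PASSED/ convention mentioned in the discussion below.

```perl
#!/usr/bin/perl
# Sketch only: URL and topic name are assumptions, adjust them for your server.
use strict;
use warnings;
use LWP::UserAgent;

my $ua  = LWP::UserAgent->new;
my $url = 'http://localhost/twiki/bin/view/Testcases/TestCaseAutoFormatting';

my $response = $ua->get($url);
die "Could not retrieve $url: ", $response->status_line, "\n"
    unless $response->is_success;

# Complain if the golden substring is not found in the rendered page
if ( $response->decoded_content =~ /ALL TESTS PASSED/ ) {
    print "PASS: $url\n";
}
else {
    print "FAIL: $url\n";
}
```

A cron job or a small driver script could loop this over all automatic topics in the Testcases web.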

But the story doesn't end here, especially when LWP is running on the same computer where TWiki resides: encoding the "golden result" in a Testcases topic together with the TML which generates it is just a special case. As soon as we have a Perl wrapper, this wrapper can, for example, create testcases on the fly, store them using simple file system interfaces, retrieve them, and compare the resulting HTML to a golden result stored in the Perl routine running the testcase. Don't verify just the HTML, but also have a look at the cookies and the HTTP headers.

The more Perl is used in the wrapper, the more the technique resembles today's unit tests. Test cases can be written as modules, and result checking can be done with the same technique from CPAN:Test::Unit as today, by calling Assert.
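Such a testcase might look like the following sketch. It assumes the CPAN Test::Unit framework and a hypothetical topic URL; the assertion methods are those provided by Test::Unit's assertion interface.

```perl
package LWPTestCase;
# Sketch: wrap an LWP retrieval in a Test::Unit testcase, so result
# checking uses the same assertion style as today's unit tests.
# URL and topic name are assumptions.
use strict;
use warnings;
use base 'Test::Unit::TestCase';
use LWP::UserAgent;

sub test_testcases_topic {
    my $this = shift;

    my $response = LWP::UserAgent->new->get(
        'http://localhost/twiki/bin/view/Testcases/TestCaseAutoFormatting');

    # Fail with the HTTP status line if the request itself went wrong
    $this->assert( $response->is_success, $response->status_line );

    # Check the golden substring, and optionally headers and cookies too
    $this->assert_matches( qr/ALL TESTS PASSED/, $response->decoded_content );
}

1;
```

The testcase can then be run by the same test runner as the existing unit tests, and its failures show up in the same summary.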

Some things are different from the unit test philosophy:

  • These tests don't test individual components; they always require most of TWiki to be running properly.
  • They need additional configuration information so that the Perl testcases know the URL, and maybe credentials, under which LWP can retrieve the pages. They could steal it from LocalSite.cfg if that is properly configured.
  • Of course, they need a working web server with a decent configuration.
  • Care needs to be taken when mixing topic access from Perl with access from the web server since usually both are running under different user ids. But again, that's a matter of test case configuration and environment.

Calling TWiki Scripts from the Command Line

Almost every developer has done it for debugging purposes: TWiki scripts from the bin directory can be invoked from the command line, for example as follows (note that the -T switch is always required since TWiki scripts run with taint checking enabled):

/home/haj/twiki/bin $ perl -T -d view Sandbox.SomeTopic

Of course, -T -d isn't the only way to provide interesting parameters. Think of profiling with -T -d:DProf, for example. Or provide separate module directories with the -I switch. Or pass debugging information in the environment variable PERL5OPT. Or.... Working on BenchmarkFramework provided some interesting use cases where the command line call to the script is wrapped into some Perl code which calculates the parameters to be passed:

  • Modify the configuration just for one run, without affecting online business. This is similar to what I have in mind with one-off changes to the configuration for benchmarking. Unit tests have their special method to achieve that by just copying the Perl hash, but this method isn't available if the whole script is called from the command line.
  • Test your code with different CPAN modules in place or absent: I have done that recently for some tests of the new configure script, but have not yet automated it. Simple unit tests can't do that: they run as one single Perl process, and once one of the tests has found a module, the module is compiled and can't reliably be "uncompiled". Developer machines tend to have a rather rich CPAN repository, but there ought to be a way to test the behaviour of TWiki, or an extension, if some module is missing from an installation. Preferably this test should not require renaming or deleting the module: first, if you aren't root, you might not be allowed to do that, and second, other CGI processes might behave strangely if a module just "vanishes" for a couple of seconds.
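The "module absent" scenario can be simulated without touching the installation at all, using an @INC hook that makes one require fail. The module name below is just an example picked for illustration; the hook could equally be injected into a command-line invocation of a TWiki script via a small -I stub or PERL5OPT.

```perl
#!/usr/bin/perl
# Sketch: pretend a CPAN module is missing, without renaming or
# deleting anything. The module chosen here is an arbitrary example.
use strict;
use warnings;

my $hidden = 'Net/SMTP.pm';    # pretend Net::SMTP is not installed

# An @INC hook runs before the normal directory search; dying inside
# the hook makes the require fail as if the file could not be found.
unshift @INC, sub {
    my ( undef, $file ) = @_;
    die "Can't locate $file (hidden for testing)\n" if $file eq $hidden;
    return;    # fall through to the normal search for everything else
};

eval { require Net::SMTP };
print $@
    ? "Net::SMTP appears missing: $@"
    : "Net::SMTP loaded normally\n";
```

Because nothing in the file system changes, other CGI processes on the same machine are completely unaffected by the test.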

As in the case of LWP, the Perl wrappers which calculate the parameters could use the Test::Unit framework to check the output and report their results.

You can send parameters to the TWiki process by prefixing the parameter name with a "-" and putting the parameter value in quotes. For example, the typical way of checking a particular page would be

perl -T view -topic "Sandbox.SomeTopic" -user "guest"

but any arbitrary URL parameter can be passed this way. If you omit the parameters, the user defaults to the admin user, and the topic defaults to Main.WebHome.
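A wrapper which calculates such parameters and checks the result could look like this sketch. The bin directory path is taken from the example above; the output check is deliberately crude and just an assumption about what a successful view run produces.

```perl
#!/usr/bin/perl
# Sketch: build the command line for a TWiki script, run it, and do a
# rough sanity check on the output. Paths and topic are assumptions.
use strict;
use warnings;

my @cmd = ( 'perl', '-T', 'view',
            '-topic', 'Sandbox.SomeTopic',
            '-user',  'guest' );

chdir '/home/haj/twiki/bin' or die "chdir failed: $!";

my $output = qx(@cmd);
die "view exited with status $?\n" if $?;

# A real wrapper would use Test::Unit assertions here instead
print $output =~ /<html/i
    ? "Looks like an HTML page was produced\n"
    : "Unexpected output\n";
```

From here it is a small step to vary @cmd per run, adding -I directories, -d:DProf, or PERL5OPT settings as described above.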

Extending Tests to Multi-Request Operations

Both methods can be used to wrap more than one call to TWiki in a single test case, so full "workflows" are available:

  1. Full registration cycle
    • Verification on and off
  2. Full edit cycle
    • Including forms editing
  3. Full "More..." pages cycles
  4. Login using different login managers
    • ApacheLogin (only via the LWP path), TemplateLogin
  5. Password management
Developing automated test cases for these scenarios will be a challenge in itself, but it may be manageable with CPAN modules like HTTP::Recorder, WWW::Mechanize, and the like.
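As a taste of what such a workflow test could look like, here is a sketch of a "full edit cycle" using WWW::Mechanize. The URLs, the sandbox topic name, and the form field name are assumptions which depend on the server setup and the skin in use.

```perl
#!/usr/bin/perl
# Sketch of a multi-request "full edit cycle" via WWW::Mechanize.
# Base URL, topic name and form field names are assumptions.
use strict;
use warnings;
use WWW::Mechanize;

my $mech = WWW::Mechanize->new( autocheck => 1 );
my $base = 'http://localhost/twiki/bin';
my $mark = "Saved by WWW::Mechanize at " . localtime();

# Step 1: open the edit page for a sandbox topic
$mech->get("$base/edit/Sandbox/MechanizeTestTopic");

# Step 2: fill in the topic text and submit the edit form
$mech->submit_form( with_fields => { text => $mark } );

# Step 3: view the topic again and verify the round trip
$mech->get("$base/view/Sandbox/MechanizeTestTopic");
print $mech->content =~ /\Q$mark\E/
    ? "Edit cycle OK\n"
    : "Edit cycle FAILED\n";
```

The registration, "More...", and login workflows would follow the same pattern, each as a sequence of get/submit_form steps with assertions in between.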

-- Contributors: HaraldJoerg, ThomasWeigert, CrawfordCurrie

Discussion

Glad to see you taking the initiative on this, Harald!

Note that you don't need a remote server to execute the scripts. For example, the SaveScriptTests invoke the save script by building queries and running them within the unit test framework.

  • Yes, but it still is not the same as running something like `perl -T view ...` or going via LWP. The unit test framework provides the directories where TWiki.pm and friends reside, whereas in command line mode, as in web operation, the view script has to figure that out itself. And especially for testing whether an installation recipe works, it is helpful to run through LWP for realistic directory/file permission verification.

While the "golden output" approach is of interest, to my mind it really doesn't work very well for TWiki for a number of reasons.

  • First, the "golden output" tests are usually the easiest to automate in a unit testcase, so I question the value of having the TestCases web at all. I think it diffuses what little testing effort we have available. I originally went along with the idea of a testcases web because some developers indicated that this was the only way they would write tests. They haven't delivered on a single testcase to date, so it rather blows that idea out of the water. If anything, more developers have picked up on unit testing than "golden output" testing.
    • Agreed. But having a CLI or LWP wrapper around the TestCases tests and matching the output against /ALL TESTS PASSED/ should be pretty low-hanging fruit. I should have said "golden substrings" :-)
  • Second, running tests on a remote server is fine as long as you can control the environment remotely, as e.g. skins can have a huge impact on the output. While you can control much through the script parameters, you can't control everything.
    • Agreed. But what I had in mind was to run the tests on the same server. So control is pretty much the same as in unit tests, just retrieval is by LWP. Same server, but different processes and different user id.
  • Third, the golden output tests currently require a plugin to be installed (TestFixturePlugin). Thus the tester is not an "objective observer", as that plugin will modify behaviours in the server.
    • Ah, yes. I have missed that. Shouldn't hit very hard in many cases, though.
  • Fourth, setting up and tearing down test fixtures is a nightmare in this approach. Really, your only choice is to test as TWikiGuest, and accept the fact that you can't safely change any topics - which kinda limits test scope.
    • I'd challenge that statement, especially when tests are run from the same server. There are unit tests e.g. in RegisterTests.pm which set up and tear down no less than three temporary webs for every testcase. It can't be much more complicated than that.
That's not to say we don't need a remote test server strategy. We do. Here are some of the things I think the remote test server approach is essential to help test adequately:
  1. HTTP headers
    • As you describe above
  2. Skins
    • While just firing queries at the server is definitely of value, what we really need here are UI tests - i.e. "what happens when I press this button" tests.
      • Yes, these will always remain heavy lifting. HTTP::Recorder and WWW::Mechanize may help to resolve some of that.
  3. Performance impacts e.g. plugin on/plugin off
    • That should be available soonish with SVN's BenchmarkContrib.
  4. mod_perl / speedy cgi
    • This may depend too heavily on both the Apache config and the TWiki config to be a candidate for test automation. Maybe we should consider a standardized mod_perl config template?
  5. I've moved a bunch of those to the article above
Unfortunately these tests are rather more than simple golden output tests :-(. At the very least we need a way to remotely build and tear down test fixtures (such as fake users, test webs and topics etc)

-- CrawfordCurrie - 19 Oct 2006

Thanks for the rich feedback! I've woven some comments into your statements in another color, and some of them have been merged into the description.

-- HaraldJoerg - 19 Oct 2006

Topic revision: r4 - 2006-10-19 - HaraldJoerg
 