incorporating the test suite

So, does anyone have any thoughts on how best to integrate the test
suite into the other implementations? I’d like to add it to
citeproc-py soon-ish, but I’m really not sure how best to do it.

Bruce

I should be done with refactoring the processor startup stuff soon
(later today, I hope). Once that’s out of the way, I’ll redo the
grind.sh thing in Python to pre-process the name elements in the JSON
version of the test fixtures. That will save you one headache, at
least. The data format for cite locators is all set and working, so
it should be ready to go as soon as the names fix is in place.
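
Roughly, I’m imagining something along these lines for that pass (the
fixture layout and the name fields below are only placeholders, not the
real format):

import glob
import json

def split_name(raw):
    """Very naive split of a "Family, Given" string into parts; the real
    grind step would follow the CSL name model instead."""
    family, _, given = raw.partition(",")
    return {"family": family.strip(), "given": given.strip()}

def preprocess(fixture_dir="fixtures"):
    # Hypothetical layout: one JSON file per test, each item carrying an
    # "author" list of raw name strings that need to become structured names.
    for path in glob.glob("%s/*.json" % fixture_dir):
        with open(path) as f:
            fixture = json.load(f)
        for item in fixture.get("items", []):
            names = item.get("author", [])
            if names and not isinstance(names[0], dict):
                item["author"] = [split_name(n) for n in names]
        with open(path, "w") as f:
            json.dump(fixture, f, indent=2)

if __name__ == "__main__":
    preprocess()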

The fixture layout in the Python unittest framework is very similar to
DOH (they apparently share the same ancestry), so something resembling
the std_* tests in citeproc-js (and supporting code) should do the
trick.
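
For the unittest side, a bare-bones version of what I mean would be
something like this (the paths and the process_fixture() call are
stand-ins, not real citeproc-py code):

import glob
import json
import re
import unittest

def process_fixture(fixture):
    """Stand-in for the actual processor call under test."""
    raise NotImplementedError

def make_test(path):
    def test(self):
        with open(path) as f:
            fixture = json.load(f)
        # "result" is an assumed key holding the expected rendered output.
        self.assertEqual(process_fixture(fixture), fixture["result"])
    return test

class StdTests(unittest.TestCase):
    pass

# Attach one test method per fixture file so each failure is reported
# individually, much like the std_* tests in citeproc-js.
for path in glob.glob("fixtures/*.json"):
    name = "test_" + re.sub(r"\W", "_", path.split("/")[-1])
    setattr(StdTests, name, make_test(path))

if __name__ == "__main__":
    unittest.main()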

Frank

I was looking for an easy solution to dynamically generating the
tests. It seems nose offers this functionality:

http://www.somethingaboutorange.com/mrl/projects/nose/0.11.1/writing_tests.html#test-generators

The trick is just to make sure the suite is set up to make this as
easy/useful as possible.
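
Something roughly like this, I’d guess (process_fixture() is just a
stand-in for the real citeproc-py entry point, and the fixture paths and
keys are made up):

import glob
import json

def process_fixture(fixture):
    """Placeholder for the real processor call."""
    raise NotImplementedError

def check_fixture(path):
    with open(path) as f:
        fixture = json.load(f)
    # "result" is an assumed key holding the expected output.
    assert process_fixture(fixture) == fixture["result"]

def test_fixtures():
    # nose collects each yielded (callable, argument) pair as its own test,
    # so every fixture file shows up as a separate pass/fail.
    for path in glob.glob("fixtures/*.json"):
        yield check_fixture, path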

Bruce

If you need any changes in the setup, just let me know.

Frank

So I’ve been poking around, trying to see how other projects deal
with our sort of test suite situation. The closest example, it seems
to me, is Markdown.

So here’s the “test” part of the Makefile for a C-based Markdown processor:

test: $(PROGRAM)
	cd MarkdownTest_1.0.3; \
	./MarkdownTest.pl --script=../$(PROGRAM) --tidy

In other words, they have a script (written in Perl) which runs the
tests. From what I can tell from the command, the test script
iterates through the test directory, feeds each test to the “script”
(in this case a C binary), and reports the results and processing
times. So the output here is:

$ make test
cd MarkdownTest_1.0.3; \
./MarkdownTest.pl --script=../markdown --tidy
Amps and angle encoding … OK
Auto links … OK
Backslash escapes … OK
Blockquotes with code blocks … OK
Code Blocks … OK
Code Spans … OK
Hard-wrapped paragraphs with list-like lines … OK
Horizontal rules … OK
Inline HTML (Advanced) … OK
Inline HTML (Simple) … OK
Inline HTML comments … OK
Links, inline style … OK
Links, reference style … OK
Links, shortcut references … OK
Literal quotes in titles … OK
Markdown Documentation - Basics … OK
Markdown Documentation - Syntax … OK
Nested blockquotes … OK
Ordered and unordered lists … FAILED

138c138,139
< Second:
---
> Second:
156c157,158
< that
---
> that

Strong and em together … OK
Tabs … OK
Tidyness … OK

21 passed; 1 failed.
Benchmark: 1 wallclock secs ( 0.01 usr 0.05 sys + 0.36 cusr 0.35 csys = 0.77 CPU)

Might this be a better approach for CSL, rather than everyone having
to maintain their own test code?

Bruce

Followup …

So this is the structure of the Markdown test suite:

$ ls Tests/
Amps and angle encoding.html
Amps and angle encoding.text
Auto links.html
Auto links.text

So you have:

1. the description of the test, pulled from the root file name
2. the input source, with the “text” extension
3. the expected output, with the “html” extension

Conceptually, this actually maps very well to what we’re doing; it’s
just that we have one additional input: the data.

So in theory, Frank’s humans-to-machines script could well output something like:

group-Test_name.json
group-Test_name.csl
group-Test_name.html

… where we might have groups like:

date
macro
names
sort
substitution
text

… etc.
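
A Markdown-style runner over that layout could then be pretty small.
Here’s a sketch (render() is just a stand-in for whatever processor is
under test, and the directory name is made up):

import glob
import json
import os

def render(csl, data):
    """Stand-in for the processor under test."""
    raise NotImplementedError

def run_suite(test_dir="tests"):
    passed = failed = 0
    for json_path in sorted(glob.glob(os.path.join(test_dir, "*.json"))):
        base = json_path[:-len(".json")]
        name = os.path.basename(base)
        with open(json_path) as f:
            data = json.load(f)
        with open(base + ".csl") as f:
            csl = f.read()
        with open(base + ".html") as f:
            expected = f.read()
        if render(csl, data) == expected:
            print("%s ... OK" % name)
            passed += 1
        else:
            print("%s ... FAILED" % name)
            failed += 1
    print("%d passed; %d failed." % (passed, failed))

if __name__ == "__main__":
    run_suite()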

Admittedly, though, the file layout itself is orthogonal to the larger
question of how to run the tests.

Bruce