(Re)organizing the Test Suite (was "Re: Author-Editor collapse")

It occurs to me that perhaps we ought to have a category of tests
labelled “experimental” which are understood as proposed solutions to
problems not yet addressed in the spec?

I like that idea. How might they best be classified? They could be
divided up by proposal, or by processor (assuming that proposals will
be run on at least one system before adoption) and then by proposal.

Not exactly sure, since I haven’t used the test suite. But a reference
to the schema issue tracker item would be good.

Maybe this directory structure would work?:

processor-tests/csl-1.0/
processor-tests/csl-1.0/extensions/citeproc-js/
processor-tests/csl-1.0/extensions/citeproc-hs/
processor-tests/csl-1.0.1/
processor-tests/csl-1.1/
processor-tests/proposals/

So we first group tests by CSL version, with a “proposals” folder for
proposed new CSL features. The versioned folders will contain the “core” CSL
tests, as well as the tests for processor-specific extensions.
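
For what it’s worth, here is a rough Python sketch (the helper name is made up;
the directory names simply follow the layout proposed above) of how a runner
might collect the core tests for one CSL version plus the extension tests of
selected processors:

import glob
import os

# Hypothetical helper; paths follow the directory layout proposed above.
def collect_tests(base="processor-tests", version="csl-1.0", processors=("citeproc-js",)):
    paths = glob.glob(os.path.join(base, version, "*.txt"))
    for processor in processors:
        paths += glob.glob(os.path.join(base, version, "extensions", processor, "*.txt"))
    return sorted(paths)

# e.g. collect_tests(version="csl-1.0", processors=("citeproc-js", "citeproc-hs"))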

Alternatively, we could just rely on descriptive prefixes to the file names,
and keep all tests in a single directory. With a minimal change to test.py of
citeproc-js (m = re.match("([a-z0-9-]*)_.*", filename) at
https://bitbucket.org/fbennett/citeproc-js/src/a09ad9f1368f/test.py#cl-257),
we could use the following scheme (a sketch of the matching logic follows the
list below):
“1-0-name_Delimiter.txt” (was “name_Delimiter.txt”): part of the core CSL
1.0 test suite
“1-0-js_AbbreviationList.txt” (was “variables_ShortForm.txt”): citeproc-js
specific extension to CSL 1.0
“1-0-1-name_EtAlUseLast.txt” (was “name_EtAlUseLast.txt”): part of the core
CSL 1.0.1 test suite
“proposal-number_DateOrdinalMasculine.txt” (was
“number_DateOrdinalMasculine.txt”): proposal that hasn’t yet been approved
for inclusion in CSL
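
To illustrate the idea (this is only a sketch, not the actual test.py code; the
version list and returned field names are made up), the prefix could be split
off and mapped back to a CSL version and category roughly like this:

import re

# Sketch only; the real test.py may use a different pattern.
FILENAME_RE = re.compile(r"([a-z0-9-]+)_(.+)\.txt$")
KNOWN_VERSIONS = ("1-0-1", "1-1", "1-0")  # most specific first

def classify(filename):
    m = FILENAME_RE.match(filename)
    if m is None:
        return None
    prefix, testname = m.groups()
    for version in KNOWN_VERSIONS:
        if prefix.startswith(version):
            category = prefix[len(version):].lstrip("-") or "core"
            return {"version": version, "category": category, "name": testname}
    # Anything else (e.g. "proposal-number") is not tied to a released version.
    return {"version": None, "category": prefix, "name": testname}

# classify("1-0-js_AbbreviationList.txt")
#   -> {'version': '1-0', 'category': 'js', 'name': 'AbbreviationList'}
# classify("proposal-number_DateOrdinalMasculine.txt")
#   -> {'version': None, 'category': 'proposal-number', 'name': 'DateOrdinalMasculine'}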

Related 1:
I’m considering adding a

>>===== COMMENTS =====>>

<<===== COMMENTS =====<<
section to tests so we have a place to discuss the reasoning behind each
test (currently these comments are just placed in between the other sections;
see e.g. “bugreports_DuplicateSpaces.txt”).
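
In case it helps, a minimal Python sketch (assuming the
>>===== NAME =====>> … <<===== NAME =====<< markers already used by the
fixtures) of how such a COMMENTS section could be read back out of a test file:

import re

# Minimal sketch; assumes the >>===== NAME =====>> ... <<===== NAME =====<< markers.
def read_section(text, name="COMMENTS"):
    pattern = re.compile(
        r">>=====\s*" + re.escape(name) + r"\s*=====>>\n(.*?)\n<<=====\s*"
        + re.escape(name) + r"\s*=====<<",
        re.DOTALL,
    )
    match = pattern.search(text)
    return match.group(1) if match else None

# read_section(open("bugreports_DuplicateSpaces.txt").read(), "COMMENTS")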

Related 2:
I started on a JSON schema to validate the test input data. The very early
draft version at http://pastebin.com/jdEftLZ3 validates the input data of
“variables_ShortForm.txt” and “bugreports_DuplicateSpaces.txt” (for
validation I’m using the JSV JSON Schema validator,
https://github.com/garycourt/JSV, in combination with the web form example
from the vickeryj/JSV fork, commit 1958455).
This might help with defining the input data model.
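
Purely as an illustration of the direction (this uses Python’s jsonschema
package instead of JSV, and the fields shown are only examples, not the draft
at the pastebin link), an input-data schema fragment could look like:

from jsonschema import validate  # third-party: pip install jsonschema

# Example fragment only; the actual draft schema is more complete.
INPUT_SCHEMA = {
    "type": "array",
    "items": {
        "type": "object",
        "required": ["id", "type"],
        "properties": {
            "id": {"type": ["string", "number"]},
            "type": {"type": "string"},
            "title": {"type": "string"},
        },
    },
}

# Raises jsonschema.ValidationError if the input data does not conform.
validate([{"id": "ITEM-1", "type": "article-journal", "title": "Example"}], INPUT_SCHEMA)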

Rintze

What do you think about encoding this (and similar) information in the tests themselves? For example, we could use a field to indicate the version of CSL for which the test is intended; additional fields could include whether or not this is an experimental, optional or recommended test case or scenario, the result format (e.g., HTML) etc.

By defining sensible default values, the transition to using extra fields would be extremely easy to handle, as would maintenance and future extensions. The downside would be that a structural overview (such as the directory structure proposed above) wouldn’t be explicitly available.
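
To make the default-value idea concrete (the field names and defaults below are
just placeholders, not an agreed set), a runner could merge whatever metadata a
test declares with suite-wide defaults:

# Placeholder defaults; a test would only need to declare fields that differ.
DEFAULTS = {
    "CSL-version": "1.0",
    "type": "core",          # e.g. "core", "extension", "experimental"
    "result-format": "html",
}

def effective_metadata(declared=None):
    merged = dict(DEFAULTS)
    merged.update(declared or {})
    return merged

# effective_metadata({"type": "experimental"})
#   -> {'CSL-version': '1.0', 'type': 'experimental', 'result-format': 'html'}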

Could we use JSON for that as well? Also, I’d like to stress that I’m not a
programmer, so feedback from those actually using the test suite for real
work is very welcome ;).

For example, perhaps we could use something like:

>>===== METADATA =====>>
{
    "explanation": "Journal title abbreviation using an external list",
    "links": [
        "Getting Journal Abbreviations from a repository - Zotero Forums",
        "http://xbiblio-devel.2463403.n2.nabble.com/abbreviation-lists-td3696547.html"
    ],
    "author": "Frank G. Bennett, Jr.",
    "CSL-version": "1.0",
    "type": "experimental"
}
<<===== METADATA =====<<

Rintze