Reading Frank’s post on abbreviations, I noticed the ‘isInstitution’ predicate on a name. Looking further, I also noticed ‘journalAbbreviation’ and ‘shortTitle’, even though most other terms use a hyphen to connect words (this includes predicates, e.g., ‘comma-suffix’, ‘static-ordering’). Implementing the format and providing a consistent API therefore requires a ridiculous amount of converting back and forth between naming conventions and sanitizing input.

Now, I know that religious wars have been fought over this; still, it would make all our lives much easier if there were a standard rule for attribute names. Barring perhaps Lisp, hyphens in names are not very practical, but since a majority of the terms use them (and the CSL attributes do, too), I would suggest sticking with that convention.
The problem is there is no “CSL JSON”: there’s just some ad hoc stuff
that different people (mostly Frank, working apart from CSL per se)
have worked on. That schema, for example, merely documents what’s in
the test suite for purposes of validation. It is not really meant to
be normative.
FWIW, if we want the possibility of using CSL JSON more widely (say, as
microdata in HTML5?), we might want to adopt camel casing as the
preferred convention. If we did that, it would be easy enough to map
to CSL terms: each uppercase letter just gets lower-cased and prefixed
with a dash.
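To make that rule concrete, here is a rough sketch in JavaScript; the function name is mine, not part of any spec:

```javascript
// Hypothetical helper: map a camel-cased property name to the
// hyphenated CSL attribute form, by lower-casing each uppercase
// letter and prefixing it with a dash.
function camelToCslTerm(name) {
  // e.g. "journalAbbreviation" -> "journal-abbreviation"
  return name.replace(/[A-Z]/g, function (upper) {
    return '-' + upper.toLowerCase();
  });
}
```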
On Sat, Nov 5, 2011 at 12:15 PM, Sylvester Keil <@Sylvester_Keil> wrote:
Right, I’ll do the latter and post the link here later on.
Great.
Yes, absolutely – although I personally (and irrationally) dislike camelCase (PascalCase is fine).
But as long as there is consistency, it’s very easy to convert back and forth between different conventions.
Because we have names like ISBN, we can’t just lower-case, but it’s still easy enough to map camel-cased words using regular expressions.
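A sketch of what such a regex-based round trip could look like; the exception list for all-caps names like ISBN is an assumption for illustration, not a defined part of any schema:

```javascript
// Names that are all-caps acronyms and must not be split or lower-cased.
// This list is illustrative, not normative.
var EXCEPTIONS = ['ISBN', 'ISSN', 'DOI', 'URL', 'PMID'];

function camelToHyphen(name) {
  if (EXCEPTIONS.indexOf(name) !== -1) return name;
  // insert a dash at each lower/upper boundary, then lower-case
  return name.replace(/([a-z])([A-Z])/g, '$1-$2').toLowerCase();
}

function hyphenToCamel(name) {
  if (EXCEPTIONS.indexOf(name) !== -1) return name;
  // upper-case each letter that follows a dash
  return name.replace(/-([a-z])/g, function (_, c) { return c.toUpperCase(); });
}
```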
Where camel casing is used, pascal casing is often used to distinguish
classes. This is common in the RDF world, where you might have an item
with a class like “EditedBook”, but properties like “isPartOf”.
But CSL is an awfully simple model. Still, that might be the way to go
in distinguishing types and properties.
I don’t have a strong opinion on this. csl-date.json is purely based on
what I found in the citeproc-js documentation, and I agree it could be
cleaned up a bit. But I do wonder how much hassle it is to change things
now that Zotero and Mendeley embed metadata in Word/LibreOffice documents
using this format. How would existing documents be dealt with?
We can always add a mapping layer to the implementations. Would it
make sense to add a version field to the input schema?
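Something like this, perhaps – the ‘schemaVersion’ field name and the mapping table below are placeholders, not part of any released schema:

```javascript
// Hypothetical mapping layer: items without a current version marker
// are treated as legacy and have their outlier field names rewritten
// before processing. Field names here are illustrative only.
var LEGACY_FIELD_MAP = {
  'journalAbbreviation': 'journal-abbreviation',
  'shortTitle': 'short-title'
};

function upgradeItem(item) {
  if (item.schemaVersion >= 2) return item;  // already current
  var upgraded = {};
  for (var key in item) {
    upgraded[LEGACY_FIELD_MAP[key] || key] = item[key];
  }
  upgraded.schemaVersion = 2;
  return upgraded;
}
```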
New Mendeley versions could read both the old and the new JSON format.
Not ideal (devel time, testing time, etc.), but it could be done.
Old Mendeley versions would not be able to read the new JSON format –
unless the new Mendeley writes the old JSON alongside the new one,
which would be possible…
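The “write both formats side by side” idea could look roughly like this; the envelope keys and the converter are assumptions for the sake of the sketch, not Mendeley’s actual storage format:

```javascript
// Revert the two renamed outliers discussed in this thread so that
// an old reader still finds the field names it expects.
function toLegacy(item) {
  var map = {
    'journal-abbreviation': 'journalAbbreviation',
    'short-title': 'shortTitle'
  };
  var out = {};
  for (var key in item) out[map[key] || key] = item[key];
  return out;
}

// Embed both representations in one envelope: old readers use
// "legacy", new readers use "current".
function writeDualFormat(item) {
  return { legacy: toLegacy(item), current: item };
}
```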