Citation Style Language

CSL Funds & Projects

Okay, just checking. (For a free solution, we could also, e.g., introduce quarterly releases of the “styles” and “locales” repos and have those automatically deposited into Zenodo.)

We currently use Travis CI on various repos, with e.g. RSpec tests for the “styles” and “locales” repos for quality control, and webhooks to alert Zotero (among others) of build results.

Sheldon posts GitHub comments to pull requests in our “styles” and “locales” repos to assist contributors; see the posts by csl-bot for an example. distribution-updater updates the “styles-distribution” repo whenever a Travis build for the “master” branch completes successfully. (The styles-distribution repo is a bit redundant now that GitHub offers protected branches, but we haven’t bothered making the change yet.)

That looks pretty comprehensive already. Does csl-bot not already do what’s requested above? I see it producing differences in the GH issue.

Sheldon/csl-bot just links to the Travis CI build reports of failing builds. The tests in these builds currently don’t include any CSL processor-based citation rendering. The differences reported there are just the result of some string matching within the CSL XML code.

I’ve created a GH issue with a mock-up for the citation rendering part of the request. I agree with Rintze that easy access to the diffs would be great, but don’t currently have a good idea how that would even look, so will leave that to him to add either in the same mock-up or in a separate ticket.

Edit: One more idea in this issue:

There are auto-generated diffs that basically show up as split green/red when there are differences in a normalised HTML string. There’s a lot of code in jest-csl to build on, similar to Frank’s test runner, and there are also React components for laying out test results (not much different from diffing) that could be made into a static site with one page per style file (with Gatsby.js) that the CI bot embeds in a comment or links to.
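The normalise-then-compare idea can be sketched in plain JavaScript. The normalisation rules below (collapsing whitespace, dropping space between adjacent tags) are my assumption for illustration, not what jest-csl actually does:

```javascript
// Sketch: compare two rendered citations after normalising the HTML
// string, so insignificant whitespace doesn't show up as a diff.
// The normalisation rules here are illustrative placeholders.
function normalizeHtml(html) {
  return html
    .replace(/\s+/g, ' ')   // collapse runs of whitespace
    .replace(/> </g, '><')  // drop space between adjacent tags
    .trim();
}

function citationsMatch(expected, actual) {
  return normalizeHtml(expected) === normalizeHtml(actual);
}
```

When the strings don't match, the two normalised forms would be what gets fed into the green/red split diff.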

@retorquere – no particular rush, but I just wanted to see if the tickets & mock-ups make sense and if you think something along these lines might be doable?

Sorry – I have been slammed the past week, but that should clear up around next Monday.

The mockup looks OK, but that seems like a fairly simple thing to do? If I’m reading it right, it’s just to add a rendered citation/bibliography when the test passes?

… and ideally better (customizable) error reports, right on github, when it fails. Might well be simple – all the better. It’ll save us a ton of time and, more importantly, reduce poor quality styles.

Just so I have a clear picture – scope-wise, this would then be a modification of Sheldon. Would it be OK to introduce Node there? That would be the easiest way to make sure it actually goes through citeproc-js.

Absolutely, yes. Keep an eye towards ease of maintenance, but we’re agnostic in terms of tooling.

Wait – Sheldon is just a separate bot that picks up on Travis results, correct? That’s trickier, because it doesn’t have easy access to the PR context to generate the diffs. It seems to me it would be a lot easier to add this to the actual test runner – the test runner could either actively push out comments to the GH issue (that’s how the BBT builds do it), or actively ping the bot with build assets, which then takes care of announcing to the GH issue.

I haven’t worked with Travis bots before – I’d have to dig into that first.

We’re completely agnostic about methods. We don’t want people to have to click through to Travis etc., but this absolutely doesn’t have to be a Travis bot/app. Sheldon is some years old; it’s possible the testing framework simply wasn’t sufficiently advanced to do this at the time, or that Sylvester simply was more comfortable doing it with a bot. In either case, in spite of it having a name, we’re not terribly attached to Sheldon per se, as long as we get similar output (and can customize the message text with reasonable effort).

WRT backing up the repos: pushing a copy of the repo to Backblaze in a nightly Travis job (or anything else where a clone can be fetched and copied to B2) would run about $5 for a rolling full year of daily full snapshots, if my calculations are correct (locales + styles is currently 156 MB, but let’s call that 200 MB to factor in growth; times 365 days makes some 73 GB for one year, which at $0.005/GB/month would be $4.38/year).
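For what it’s worth, the arithmetic checks out; as a quick sanity check of the estimate:

```javascript
// Sanity-check the backup cost estimate: ~200 MB/day of full snapshots,
// kept for a rolling year, at B2's $0.005/GB/month storage price.
const snapshotGB = 0.2;          // ~200 MB per daily snapshot (growth padded in)
const daysKept = 365;            // rolling one-year retention
const pricePerGBMonth = 0.005;   // Backblaze B2 storage price

const storedGB = snapshotGB * daysKept;               // ≈ 73 GB on hand
const yearlyCost = storedGB * pricePerGBMonth * 12;   // ≈ $4.38/year

console.log(`${storedGB.toFixed(0)} GB stored, $${yearlyCost.toFixed(2)}/year`);
```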

As an update, @retorquere has done an amazing job updating Sheldon so that we now get previews of changes in PRs. He’s put a lot of work into this and still offered to do it at the low end of our suggested rate above, i.e. for US$1,000.

This is already making our work reviewing style PRs easier, so Rintze and I would be more than happy to pay this out. We’ll wait a week for any concerns raised here and then, absent objections, pay Emiliano.

We’re still looking for someone who wants to take on the csl-editor update. Please post here and/or in a separate thread.


These previews are pretty amazing!

I’m happy they help.

In what sense are you guys looking to modernize the CSL editor? Just an update to ES6? Of the demo site or of the cslEditorLib? I could do that but that seems like a marginal win. Do you mean streamlining the deployment of the demo site (stuff like webpacking for example)?
Or do you mean something like React-ifying the editor?


  1. Seeing no objections to releasing the payment – are you able to send me an invoice for US$1,000? It can be informal and by email.
  2. For the web editor, we’re honestly not 100% sure. Here are some general thoughts:
    a) cslEditorLib relies on a bunch of dependencies, some of them outdated. It’d be nice to make sure this is all in shape, and more generally, that the JS used is up to date (including ES6)
    b) the whole deploy process is tedious: in cslEditor, update submodules, update citeproc separately, then regenerate the example citations, then go over to the demo site, update the processor, then deploy. There has got to be a way to make that simpler or even automate it
    c) The generation of sample citations is ugly – it writes one single massive (and ever-growing) JSON file using absurd amounts of memory (IIRC that’s a common problem when writing JSON in JavaScript, but there are solutions). Speeding that up and making it more robust (i.e. less memory intensive) would be great.
    d) While a–c would just be updates, the real vision would be to fundamentally change the basic functionality: instead of requiring specific citations, we would use something like a citation parser and allow people to just paste any citation (e.g. the samples from the author guidelines). Sylvester seemed to think that was totally doable, and if it were, that’d be a total game changer.
  1. I’ve never sent an invoice in my life. If you can give me text that would work for you, I’ll gladly bounce that back to you.

2.a AFAICT, all dependencies are git submodules, not npm packages, so the best that can be achieved there is a pull from their heads. Codemirror is available as an npm package, as is jstree, but jstree for example hasn’t been updated in 7 years, so I’m not sure package versioning would help there. I’m not familiar with the module system used in the editor (I see a “define”, for example, that looks like it brings in libraries, but I’m not sure where that comes from – maybe we should bring in Steve Ridout as the lead on this stuff and I could just assist him where it’s helpful).

2.b should be doable using Travis

2.c Why does it write one massive (and growing) JSON file? generateExampleCitations.js doesn’t currently run for me, so it’s not easy to establish what it does exactly.

2.d I don’t yet understand what the specific citations (from 2.c?) do, and how they are used in the CSL editor. Also, that parser is written in Ruby, so I don’t yet grok how it would be incorporated in the JS-based CSL editor – this seems best left to Sylvester if you ask me.

From what I’ve seen, the parser only determines which parts of the citation correspond to which properties, not the style. If it’s going to be used, we’d still have to generate citations in multiple styles to match the input citation against the styles. So the big JSON file would probably not be needed (unless we want to do both), and we would have to decide whether to host an API for citation generation or run it in the browser. The generation is probably pretty costly, so I’m not sure about an approach there.
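One cheap way the matching step could work, assuming we already have the same item rendered in each candidate style: normalise both strings and score their similarity, keeping the best-scoring style. The similarity measure here (Dice coefficient over character bigrams) and the sample renderings are made up purely for illustration:

```javascript
// Illustrative sketch: given a pasted citation and the same item rendered
// in several candidate styles, pick the style whose output is most similar.
function bigrams(s) {
  const counts = new Map();
  const t = s.toLowerCase().replace(/\s+/g, ' ').trim();
  for (let i = 0; i < t.length - 1; i++) {
    const g = t.slice(i, i + 2);
    counts.set(g, (counts.get(g) || 0) + 1);
  }
  return counts;
}

// Dice coefficient over bigram multisets: 1.0 for identical strings.
function similarity(a, b) {
  const A = bigrams(a), B = bigrams(b);
  let overlap = 0, total = 0;
  for (const [g, n] of A) { overlap += Math.min(n, B.get(g) || 0); total += n; }
  for (const n of B.values()) total += n;
  return total === 0 ? 0 : (2 * overlap) / total;
}

function bestStyle(pasted, renderedByStyle) {
  let best = null, bestScore = -1;
  for (const [style, rendered] of Object.entries(renderedByStyle)) {
    const score = similarity(pasted, rendered);
    if (score > bestScore) { best = style; bestScore = score; }
  }
  return best;
}
```

The expensive part would still be producing `renderedByStyle` for thousands of styles, which is where the API-vs-browser question above comes in.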