Performance testing?

This post began as a wordy series of observations jotted down as I explored the performance issue. Some of the “facts” that I declared along the way were quite wrong, so I am replacing the original with this summary, which I hope will be more readable, and know to be less misleading.

A pair of traces dumped from the processor while rendering a single citation showed the CSL nodes hit by the old and new versions of the APA style (originally linked to hastebin, but since removed by their expiry rules, oops). The traces show the greater complexity of the new version: it hits 302 nodes, compared with 175 in the older version, and its groups nest more deeply, reaching a maximum depth of 10 levels against 6 in the previous version.

So the new APA version is more complex. But does that have a severe impact on performance? I initially thought that we were in deep water here, but it turns out that the answer is, “No, not really.”

The new APA version does take substantially longer to instantiate when it is installed into the processor: roughly three times longer, at 257 ms against 87 ms. In normal operation, though, instantiation is infrequent, since a built processor instance can and should be reused, so a rise in build-time latency isn't particularly worrying.
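The build-once-and-reuse pattern is easy to enforce with a small cache keyed by style. Here is a minimal sketch in plain JavaScript; the `buildEngine` stub is hypothetical and stands in for the expensive `new CSL.Engine(sys, styleXML)` construction in citeproc-js:

```javascript
// Cache of built processor instances, keyed by style ID,
// so the expensive instantiation step runs at most once per style.
const engineCache = new Map();

let buildCount = 0; // counts how often the expensive path actually runs

// Hypothetical stand-in for citeproc-js engine construction.
function buildEngine(styleId) {
  buildCount += 1;
  // In real code this would be: new CSL.Engine(sys, styleXML)
  return { style: styleId, render: (item) => `[${item}]` };
}

function getEngine(styleId) {
  // Pay the instantiation cost only the first time a style is seen.
  if (!engineCache.has(styleId)) {
    engineCache.set(styleId, buildEngine(styleId));
  }
  return engineCache.get(styleId);
}

// Rendering 1,000 citations triggers a single build, not 1,000.
for (let i = 0; i < 1000; i++) {
  getEngine("apa").render(i);
}
console.log(buildCount); // prints 1
```

With reuse in place, the 257 ms build cost is amortized across the whole session rather than paid per citation.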

At the rendering level, the picture is much brighter. On my laptop, for a run of 1,000 citations plus a bibliography, I'm getting timings like 18,803 ms for the old APA version against 19,248 ms for the new one: a difference of roughly 0.45 ms (0.000445 seconds) per citation.
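The per-citation figure falls straight out of the totals reported above; a quick check of the arithmetic:

```javascript
// Per-citation overhead of the new APA style, from the run totals above.
const oldTotalMs = 18803; // 1,000 citations + bibliography, old APA
const newTotalMs = 19248; // same run, new APA
const runs = 1000;

const perCitationMs = (newTotalMs - oldTotalMs) / runs;
console.log(perCitationMs); // prints 0.445 (i.e. 0.000445 s per citation)
```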

I did trial runs with older and newer processor versions, and there isn’t much of a difference. In the course of exploring this issue, I made a few small tweaks that might speed things up very slightly for all styles, but the difference will be so small as to be lost in noise on anything short of a massive data set.

So congratulations to Brenton on a job well done!

PS: @Sebastian_Karcher Is that responsive?

PPS: As a follow-up note on this issue, I note an enviable report of 7–8 ms rendering time in @Sylvester_Keil's citeproc-ruby, so citeproc-js is lagging a bit there. On the other hand, the test run discussed here was of incremental addition of citations to a document followed by output of the bibliography, so (apart from hardware differences) there would be some additional overhead for disambiguation operations, update reformatting, and DOM latency. All things considered, and to my surprise, citeproc-js maybe isn't doing too badly on the speed thing.