We don't really have to pull the entire style list down and do this
client-side. Our SVN post-commit hook already parses the CSL XML and
adds the relevant info to the database (along with updating the title
(displaying "(dev)" as necessary) and the timestamp (so authors don't
have to fiddle with it on every edit)). So we could provide a simple
DB-backed search API that could be called from JS (or used as
a style query service). If someone wanted to take a crack at that (in
the style of the web-based citation processor API document on Google
Docs that Bruce, Rintze, and Frank have access to, and we're happy to
give others access as requested), it would be fairly trivial to
implement. It probably would make sense for it to speak Atom.
But in that case real-time filtering would be off the table, right?
No. You’d just call the search API from JS with the specified search
string, start/limit, sort field, and order, and you’d get the set of
results to display. So you’re always just populating the table with the
returned results; it’s just a question of building the right query URI.
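As a sketch, the client-side call might look like this. The endpoint path and parameter names (q, start, limit, sort, order) are illustrative assumptions, not an existing API:

```javascript
// Build the query URI for a hypothetical style-search API.
// Parameter names here are illustrative only; the real API would
// define its own (and, per the above, might return Atom rather than JSON).
function buildSearchUri(base, { q, start = 0, limit = 25, sort = "title", order = "asc" }) {
  const params = new URLSearchParams({ q, start, limit, sort, order });
  return `${base}?${params}`;
}

// The page would then fetch this URI on each keystroke or page change
// and repopulate the table with whatever comes back, e.g.:
// fetch(buildSearchUri("/styles/search", { q: "nature" }))
//   .then(r => r.json())
//   .then(results => renderTable(results));
```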
Alternatively, the post-commit hook could just generate a static
JSON/XML file containing the full repo data that could be pulled down
from JS. That would admittedly be easier for everyone, but the initial
load of the styles section would be slower. Even served with gzip,
the full list would be pretty big.
Just zipping/rarring the dependent styles folder with WinRar already
results in a 0.8-1.0 MB file (that said, the current Style Repository
page is also quite big at 1.3 MB). But maybe we can slim that down a
bit. E.g. a JSON file would only need to contain:
title, style-id/URI, categories (citation-format, field), issn, issn-l,
and the timestamp
plus perhaps a preview of the style (but you could load that
separately per selected style to save some bandwidth)
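To make that concrete, one entry in such a static file might look like the sketch below. The field names just mirror the list above, and the filtering helper is a hypothetical example of what the client would do with the data:

```javascript
// Illustrative shape of one entry in the static repo-data file.
// Field names follow the list above; this is not a fixed schema.
const styles = [
  {
    title: "American Psychological Association 7th edition",
    id: "http://www.zotero.org/styles/apa",            // style-id/URI
    categories: { "citation-format": "author-date", field: "psychology" },
    issn: [],
    "issn-l": null,
    updated: "2012-01-15T12:00:00Z"                    // timestamp
  }
];

// With the whole list in memory, filtering is a simple client-side scan:
function filterByFormat(list, format) {
  return list.filter(s => s.categories["citation-format"] === format);
}
```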
Well, you certainly don’t need to include the styles themselves. But
even the list itself may get too big. A table dump that’s fairly close
to JSON puts the uncompressed data (not counting categories, which
aren’t currently stored in the database, or previews, which I discuss
below) at 261KB, and it gzips down to 30KB. That’s doable, but 1) that
will get bigger, and potentially a lot bigger if we add more dependent
styles, and 2) it would require not-insignificant processing time
(unzipping, sorting, formatting) on the client. Of course, that becomes
less of an issue as time goes on. But for now it might be a bit laggy.
For style previews, on the current page I decided against asynchronous
fetching, since waiting for each preview was getting annoying. Instead I
bundle unique previews indexed by hash and preview hashes indexed by
style. This totals 767KB and gzips to 50KB. We’ll probably want more
examples in each preview, however.
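That bundling scheme can be sketched as two maps: unique previews keyed by hash, and a hash per style. The keys and markup below are made-up placeholders, just to show the lookup:

```javascript
// Unique previews indexed by hash: identical previews are stored once,
// which is what keeps the bundle down to one copy per distinct rendering.
const previewsByHash = {
  "a1b2c3": "<p>Doe, J. (2010). Some title. <i>Journal</i>, 1(2), 3-4.</p>"
};

// Preview hashes indexed by style id: many styles share one hash.
const hashByStyle = {
  "apa": "a1b2c3",
  "apa-5th-edition": "a1b2c3"
};

// Looking up a selected style's preview is then two O(1) map accesses,
// with no per-style fetch needed.
function previewFor(styleId) {
  const hash = hashByStyle[styleId];
  return hash ? previewsByHash[hash] : null;
}
```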