We built it in Django with PostgreSQL. The tables are sorted with jQuery's tablesorter plugin, and the charts are drawn with Flot. We started this the first week of July. I left the paper at the end of September, and now it's being launched in December. I'm pretty sure they waited so long to get the most recently updated bank ratings data, which came out every three months.
The project was sort of two parts: make it so an editor could upload a spreadsheet, and then spit that spreadsheet out. Michael Strickland, the amazing summer intern we had, and I worked on a scraper that would go through a spreadsheet file that the paper received every quarter. We made it automatic, so when someone added a .csv file to a "Quarter" model in the Django admin and clicked save, it would scrape the data and spit it into the appropriate fields.
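The scraping step can be sketched in plain Python. This is a hypothetical reconstruction, not the project's actual code: the function and column names ("Bank Name", "City", "Rating") are invented, and the real spreadsheet's headers surely differed. In the Django flow, an overridden `Quarter.save()` (or a `post_save` signal) would run something like this over the uploaded file and create a record per row.

```python
import csv
import io


def scrape_quarter(csv_bytes):
    """Yield one dict per bank from a quarterly ratings CSV (sketch)."""
    text = io.TextIOWrapper(io.BytesIO(csv_bytes), encoding="utf-8")
    for row in csv.DictReader(text):
        # Strip stray whitespace that tends to creep into spreadsheets.
        yield {
            "bank_name": row["Bank Name"].strip(),
            "city": row["City"].strip(),
            "rating": row["Rating"].strip(),
        }
```

Hooking it to the admin is then just a matter of calling it from the model's `save()` after the file has been stored, so clicking save is all an editor ever has to do.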
The big problem, which Jeremy Bowers solved after I left the paper, was normalizing the data. If a new bank was founded (which, inexplicably, sometimes happened despite our wonderful economy), the scraper detected it when it encountered a bank name in a city it didn't previously have. I originally went through each spreadsheet by hand, changing DAYTONA BCH to Daytona Beach and whatnot. This was cumbersome, and it missed a few things. I believe Jeremy wrote a script that normalized the data in a wonderful manner.
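I don't have Jeremy's script, but the general shape of that kind of cleanup is easy to sketch: a lookup table of known abbreviations, with a title-cased fallback for everything else. The table entries here are invented examples.

```python
import re

# Hypothetical fixes for the all-caps abbreviations the spreadsheets used.
CITY_FIXES = {
    "DAYTONA BCH": "Daytona Beach",
    "ST PETERSBURG": "St. Petersburg",
}


def normalize_city(raw):
    """Map a raw spreadsheet city value to a canonical display name."""
    key = re.sub(r"\s+", " ", raw.strip().upper())  # collapse whitespace
    if key in CITY_FIXES:
        return CITY_FIXES[key]
    return key.title()  # fallback: DAYTONA BEACH -> Daytona Beach
```

The fallback handles the easy cases automatically, and every oddball you catch by hand only has to be fixed once, in the table, instead of once per quarter.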
Darla Cameron did a great job with the original CSS/design and did all of the jQuery and charts. Michael and I did the Django back-end, with him doing the majority of the scraping work. I did the models and the views and spit out the data in the templates (with help from Michael). Jeff Harrington wrote the explanatory text throughout, and Becky Bowers oversaw the project and did the QA. And lastly, Jeremy made the search work and cleaned it all up before launch, making some major style changes and adding to its overall sleekness. And, as always, Jeremy made the server work.
I asked Jeremy if there was anything I missed, and he said: “I’m not sure I remember very clearly what I needed to change, besides ‘lots.’”
I love the application's simplicity. It gives you a large amount of data with context. I can see where ProPublica's TableSetter could've made it a bit easier, but we wanted to have an individual URL for each bank. That way, when Jeff and others write about certain financial institutions, they can link directly to the individual organization's page, thus providing more context. Back when I was a cops reporter, I would do the same thing, only linking to the mugshots of those arrested.
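Giving each bank its own URL mostly comes down to generating a stable slug from the bank's name. A minimal sketch (Django ships `slugify` for this, but the pure-Python version is the same idea; the function name is mine):

```python
import re


def bank_slug(name):
    """Turn a bank name into a URL path segment, e.g. /banks/<slug>/."""
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")
```

A urls.py pattern like `banks/<slug:slug>/` then routes each slug to that bank's detail page, which is what makes the direct story links possible.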
My only question with the web app concerns attribution. I don't recall if the St. Pete Times had a style for giving bylines on web apps. Some apps I had worked on at the paper had bylines, while others did not. Obviously I prefer that credit be given to the people who work on any project, whether it's a story or graphic for the paper or a web application online. But that doesn't mean I'm right.
So my question is this, dear readers: How does your news organization give credit for web applications? Do you think standards should differ from the print product? Feel free to comment below.