Category: Journalism

  • Explaining my Weird, Uncontrollable Podcasting Workflow

    A little background

    I’ve been podcasting lately, mostly for fun and to play with technology that I haven’t had an excuse to play with before. I hadn’t had much time to listen to podcasts until I moved down to Austin, Texas last summer. We live in the northern suburbs, and I take a train downtown every workday.

    After listening to some podcasts I wanted to see if I could fill that commute time with something productive, so I started recording short podcasts on my phone from the front seat of my car after writing scripts on the train. This quickly turned into buying some dedicated equipment: a TASCAM DR-44WL recorder, an Audio Technica AT897 shotgun microphone, and a rotating array of studio microphones rented from a local rental house or borrowed from friends.

    I also started learning everything I could about podcasting production and audio storytelling, basically slurping it all up with a straw, and I continue to do so on a regular basis. I followed every link on This American Life’s Make Radio page, followed a bunch of people that make great podcasts and great radio, signed up for a bunch of newsletters, and generally immersed myself in this world I was learning more and more about.

    I still struggle a little with bringing the drama and spinning a great narrative, but I think I’ve got a lot of the fundamental skills down; it’s just time to iterate and get better. I’ve settled on a format that I like, covering a single subject in about 4-6 minutes. Some of my best shows so far cover a disastrous software bug in a radiation therapy machine called the Therac-25 that killed people, and a kind of personal essay about me dealing with perfectionist paralysis. You can listen to more shows or subscribe wherever you get your podcasts.

    The Uncontrollable Workflow

    I’m a little weirded out by the workflow of a typical Tinycast episode, mostly because it feels like parts of it are somehow beyond my control. More accurately, I’m not quite sure how my brain works sometimes.

    Pretty much every episode starts out as a single line entry in a note in the Notes app that’s with me all the time. This is where I jot down a rough thought or topic I might like to cover. Sometimes it’s just a word or two, sometimes it’s a paragraph if I have some initial thoughts on direction or specific things to consider.

    Ideas tend to ferment there. Sometimes I’ll do a little ambient research to see if there’s a unique story or a new way of looking at it, or if the subject itself is random enough that most people have probably never heard of it.

    Then, at some undetermined point in time, inspiration strikes. I’ll start doing research in earnest and start collecting an outline of ideas and links in Google Docs. A lot of the time I’ll also take a trip to the Perry-Castañeda Library at UT Austin. The sixth floor is my favorite.

    From there I turn the outline into a script, writing like I speak. Given the format and the time (and my ability to say uhh and um a lot), scripting, editing, then recording works well for me.

    Once I have about two to two and a half pages of script that have gone through a couple of rounds of edits, it’s time to record. This involves our awesomely huge walk-in closet, which has just the right amount of stuff in it to provide an acoustically dead space to record in. I usually do one or two full takes through the script (reading from an iPad), re-recording any mistakes I make and sometimes trying different approaches to certain areas of the script.

    Every once in a while I’ll have a listen and decide to try again, but usually it’s time to head to the next step: a rough edit of the vocal track. For this I use a digital audio workstation (DAW) called Auria, which works on the iPad. It’s fully featured and has a selection of plug-ins as well. I also make use of FabFilter’s compressor, limiter, and EQ plug-ins. If you’re looking to do the same on a computer, Audacity is the obvious free choice, Reaper looks like a great low-cost option, and Pro Tools is the crazy-expensive but industry-standard option if you’re going to be doing a lot of collaboration.

    The rough edit involves removing any mistakes I’ve made and choosing between the two or three takes of any passage that either gave me trouble or that I thought might support multiple interpretations. I move edits and removals to a second, muted track in case I want to revisit them later.

    You’re Almost Done/So Far to Go

    Once a rough edit is in place and I’ve confirmed that I’m in the right ballpark time-wise, it’s time to find some music beds and apply any sounds or ambience that are appropriate for the episode. Licensing music for podcasts can be tricky. I’m a pretty conservative guy when it comes to laws and licensing, but I think I’ve landed on some personal guidelines that I’m comfortable with. I’m not a lawyer and this isn’t advice, but it works for me.

    First off, I’m comfortable using Creative Commons Attribution-only licenses, commonly abbreviated CC-BY. For content licensed CC-BY, the simple act of announcing the author and work during the credits and linking back in the show notes more than covers both the letter and the spirit of the license. Kevin MacLeod has an amazing selection of music licensed this way. I’ve also used tracks from Josh Woodward and Chris Zabriskie. I’ve also made sure to pick up their music on Bandcamp or otherwise let them know how much I appreciate them licensing their music the way they do.

    Free Music Archive is a great way to discover CC-BY music, but you have to be careful since there’s a lot of stuff licensed under a non-commercial license (CC-BY-NC) or marked no-derivatives (CC-BY-ND). Creative Commons Search also links out to custom searches for SoundCloud and other sources.

    There’s also a lot of really good stuff that can be licensed without costing an arm and a leg. Chad Crouch has a great collection of production music at Sound of Picture with great rates for podcasts. Kevin MacLeod’s music can be licensed on his site as well. The mysterious Breakmaster Cylinder licenses ridiculously great beats and production music via Person B Productions.

    Selecting and using music is another extremely unscientific part of the process. A lot of the time I just know when something is “it” or works for a specific tone or cadence I’m looking for. Often I’ll move words and music around a little bit until they line up and just work. I wish I could explain this part of the process a little better, but that’s all I’ve got.

    Wrapping Up

    Once a mix feels right in my Sony MDR-7506 headphones or my PreSonus Eris E5 monitors, it’s time to walk the mix over to stock iPhone earbuds and the car stereo, two places where everything has to sound correct. This is also the time that I compare the loudness of the episode to other podcasts I listen to. Loudness is something I understand at a high level but still sometimes struggle with in the details. Rob Byers has a solid intro on Transom, and Paul Figgiani has written some great stuff in the Google+ Podcast Technology Resources community. I try to stay a little quieter than -16 LUFS, but I recently messed up and shipped an episode with the music beds way too quiet while trying to hit that number. ALWAYS walk the final final mix.
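
    If you’d rather script that loudness check than eyeball meters, here’s a minimal sketch that shells out to ffmpeg’s ebur128 filter. This isn’t part of my workflow above; it assumes ffmpeg is installed, and the filename is made up:

    # Minimal sketch: measure integrated loudness (LUFS) with ffmpeg's
    # ebur128 filter. Assumes ffmpeg is on the PATH; "episode.wav" is a
    # hypothetical filename.
    import subprocess

    def integrated_lufs(path):
        proc = subprocess.run(
            ["ffmpeg", "-nostats", "-i", path, "-af", "ebur128", "-f", "null", "-"],
            stderr=subprocess.PIPE, text=True,
        )
        # The filter prints a summary block containing a line like "I: -16.3 LUFS"
        for line in proc.stderr.splitlines():
            line = line.strip()
            if line.startswith("I:") and line.endswith("LUFS"):
                return float(line.split()[1])
        return None

    print(integrated_lufs("episode.wav"))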

    Once the mix is locked down I export WAVs and m4as. The m4a file gets uploaded via Transmit for iOS to the Amazon S3 bucket behind my Amazon CloudFront distribution, which acts as my content delivery network (CDN). I also upload the m4a to SoundCloud. The WAV gets converted to an MP2 file for PRX, the Public Radio Exchange.
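
    The upload step could be scripted too. Here’s a rough sketch using boto3 to push an episode to the S3 bucket behind the CloudFront distribution; the bucket and file names are made up:

    # Rough sketch: upload an episode to the S3 bucket behind a CloudFront
    # distribution. Assumes boto3 and configured AWS credentials; the bucket
    # and file names are hypothetical.
    import boto3

    s3 = boto3.client("s3")
    s3.upload_file(
        "episode.m4a",
        "my-tinycast-bucket",
        "episodes/episode.m4a",
        ExtraArgs={"ContentType": "audio/mp4"},  # so podcast clients see the right type
    )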

    As soon as all that is done, I copy the script (now the transcript) over to my WordPress install and add the link to the audio file so that it gets picked up as an enclosure in podcast clients. I also add any links or other references in addition to the hyperlinked transcript. Then I push the publish button.

    Actually Concluding

    It turns out that all of this is a pretty huge amount of work for what amounts to about a five-minute podcast. I really like the level of polish that episodes have, but I do still miss some of the spontaneity of my earlier episodes. I may break out shorter, quicker episodes elsewhere at some point. They’re a different kind of fun.

    There’s also a lot of room for innovation, streamlining, and pain point reduction in the mobile podcast production tooling space. Lots of people are working on it but I don’t think anyone has landed on the right features that would allow me to produce something like The Tinycast all from a single app without a ton of steps. I’d probably settle for two: one for production and the other for distribution.

    There you have it: a little look into my process, and maybe more about my brain than you cared to know. If you’re interested in creating a podcast or other thing of your own, the best advice I can give you is to just do it, get it out there, and if you stick with it you’ll likely want to make it better. Everything else should fall into place from there.

  • Kansas Primary 2008 recap

    I’m winding down after a couple of very long days preparing for our coverage of the 2008 Kansas (and local) primaries. As always it’s been an exhausting but rewarding time. We’ve come a long way since the first election I wrote software for and was involved with back in 2006 (where election night involved someone accessing an AS/400 terminal and shouting numbers at me for entry). Our election app has become a lot more sophisticated, our data import process more refined, and election night is a whole lot more fun and loads less stressful than it used to be. I thought I’d go over some of the highlights while they’re still fresh in my mind.

    Douglas County Commission 2nd District Democratic primary section

    Our election app is definitely a success story for both the benefits of structured data and incremental development. Each time, the app gets a little more sophisticated and a little smarter. What once wasn’t used until the night of the election has become a key part of our election coverage both before and after the event. For example, this year we had an overarching election section and also sections for individual races, like this section for the Douglas County Commission 2nd District Democratic primary. These sections tie together our coverage of the individual races: stories, photos, and videos about the race; our candidate profiles; any chats we’ve had with the candidates; campaign finance documents; and candidate selectors, an awesome app that has been around longer than I have that lets users see which candidates they most agree with. On election night these sections are smart enough to display results as they come in.

    Election results start coming in · Results rolling in · County commission races almost done

    This time around, the newsroom also used our tools to swap out which races were displayed on the homepage throughout the night. We led the night with results from Leavenworth County, since they were the first to report. The newsroom spent the rest of the night swapping one or more races onto the homepage as they saw fit. This was a huge improvement over past elections, where we chose ahead of time which races would be featured on the homepage. It was great to see the newsroom exercise editorial control throughout the night without having to edit templates.

    More results

    On the television side, 6 News Lawrence took advantage of some new hardware and software to display election results prominently throughout the night. I kept catching screenshots during commercial breaks: the name of the race appeared on the left-hand side of the screen with results paging through along the bottom. The new hardware and software allowed them to use more screen real estate to provide better information to our viewers. In years past we’ve had to jump through some hoops to get election results on the air, but this time was much easier. We created a custom XML feed of election data that their new hardware/software ingested continuously and pulled results from. As soon as results were in our database they were on the air.
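
    I can’t share the actual feed, but generating something in that spirit takes very little code. Here’s a sketch with invented element names; the real format was dictated by the television system:

    # Sketch of a continuously regenerated election results feed. The element
    # names and data shapes are invented; the real feed's format was defined
    # by the TV hardware/software that consumed it.
    import xml.etree.ElementTree as ET

    def results_feed(races):
        root = ET.Element("results")
        for race in races:
            race_el = ET.SubElement(root, "race", name=race["name"])
            for cand in race["candidates"]:
                ET.SubElement(race_el, "candidate",
                              name=cand["name"], votes=str(cand["votes"]))
        return ET.tostring(root, encoding="unicode")

    print(results_feed([{"name": "County Commission 2nd District",
                         "candidates": [{"name": "Candidate A", "votes": 1234}]}]))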

    The way that election results make their way into our database has also changed for the better over the past few years. We have developed a great relationship with the Douglas County Clerk, Jamie Shew, and his awesome staff. For several elections now they have provided us with timely access to detailed election results that allow us to provide precinct-by-precinct results. It’s also great to be able to compare local results with statewide results in state races. We get the data in a structured and well-documented fixed-width format and import it using a custom parser we wrote several elections ago.
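
    Fixed-width parsing is mostly a matter of slicing. A toy version of that kind of parser, with an invented column layout, looks something like this:

    # Toy fixed-width parser. The column layout is invented for illustration;
    # the real layout comes from the county's format documentation.
    LAYOUT = [("precinct", 0, 30), ("candidate", 30, 60), ("votes", 60, 68)]

    def parse_line(line):
        record = {name: line[start:end].strip() for name, start, end in LAYOUT}
        record["votes"] = int(record["votes"])
        return record

    sample = "%-30s%-30s%8d" % ("Grant Township", "John Smith", 1234)
    print(parse_line(sample))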

    State results flow in via a short script that uses BeautifulSoup to parse and import data from the Kansas Secretary of State site. That script ran every few minutes throughout the night and was updating results well after I went to bed. In fact it’s running right now while we wait for the last few precincts in Hodgeman County to come in. This time around we did enter results from a few races in Leavenworth and Jefferson counties by hand, but we’ll look to automate that in November.
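
    The scraper itself is nothing fancy. Here’s a rough sketch of the approach using today’s requests and bs4 rather than the 2008-era BeautifulSoup; the URL and table structure are placeholders, not the Secretary of State’s actual markup:

    # Sketch of the scrape loop, using modern requests/bs4. The table
    # structure (three cells: race, candidate, votes) is a placeholder.
    import requests
    from bs4 import BeautifulSoup

    def scrape_results(url):
        soup = BeautifulSoup(requests.get(url).text, "html.parser")
        results = []
        for row in soup.find_all("tr"):
            cells = [td.get_text(strip=True) for td in row.find_all("td")]
            if len(cells) == 3:
                race, candidate, votes = cells
                results.append((race, candidate, int(votes.replace(",", ""))))
        return results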

    As always, election night coverage was a team effort. I’m honored to have played my part as programmer and import guru. It was great to watch Christian Metts take the data and make it both beautiful and meaningful in such a short amount of time. Many thanks go out to the fine folks at Douglas County and all of the reporters, editors, and technical folk that made our coverage last night possible.

  • DjangoCon!

    I’m a little late to the announcement party, but I’ll be attending DjangoCon and sitting on a panel about Django in Journalism with Maura Chace and Matt Waite. The panel will be moderated by our own Adrian Holovaty.

    I think the panel will be pretty fantastic, but I can’t help but be just as terrified as my fellow panelists. I love that we’ll have both journalist-programmers and programmer-journalists on the panel, and I love that Django is so often the glue that brings the two together.

    DjangoCon is going to be awesome.

  • Covering Kansas Democratic Caucus Results

    I think we’re about ready for caucus results to start coming in.

    We’re covering the caucus results on our site and on Twitter.

    Turnout is extremely heavy. So much so that they had to split one of the caucus sites in two because the venue was full.


    How did we do it?

    We gained access to the media results page from the Kansas Democratic Party on Friday afternoon. On Sunday night I started writing a scraper/importer using BeautifulSoup and roughing out the Django models to represent the caucus data. I spent Monday refining the models, helper functions, and front-end hooks that our designers would need to visualize the data. Monday night and into Tuesday morning were spent finishing off the importer script, exploring Google Charts, and making sure that Ben and Christian had everything they needed.
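
    The models were nothing exotic. Here’s a rough guess at their shape; the actual fields and names in our app were surely different:

    # A guess at the rough shape of the caucus models, not the actual code.
    from django.db import models

    class CaucusSite(models.Model):
        name = models.CharField(max_length=200)
        county = models.CharField(max_length=100)
        reported = models.BooleanField(default=False)

    class CandidateResult(models.Model):
        site = models.ForeignKey(CaucusSite, on_delete=models.CASCADE)
        candidate = models.CharField(max_length=100)
        delegates = models.IntegerField(default=0)

        class Meta:
            unique_together = [("site", "candidate")]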

    After a few hours of sleep, most of the morning was spent testing everything out on our staging server, fixing bugs, and improving performance. By early afternoon Ben was wrapping up KTKA and Christian was still tweaking his design in Photoshop. Somewhere between 1 and 2 p.m. he started coding it up, and pretty soon we had our results page running on test data on the staging server.

    While the designers were finishing up I turned my focus to the planned Twitter feed. Thanks to some handy wrappers from James, I wrote a quick script that generated a short message based on the caucus results we had, compared it to the last version of the message, and sent a post to Twitter if the message had changed.
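
    The compare-and-post logic is simple. Here’s a sketch, with post_update() standing in for James’ wrappers and an invented message format:

    # Sketch of the "only tweet when something changed" loop. post_update()
    # stands in for the Twitter wrappers mentioned above; the message format
    # is invented.
    import os

    LAST_MESSAGE_FILE = "last_tweet.txt"

    def maybe_tweet(results, post_update):
        message = "Caucus results so far: " + ", ".join(
            "%s %d" % (candidate, delegates) for candidate, delegates in results)
        last = ""
        if os.path.exists(LAST_MESSAGE_FILE):
            last = open(LAST_MESSAGE_FILE).read()
        if message != last:
            post_update(message)
            open(LAST_MESSAGE_FILE, "w").write(message)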

    Once results started coming in, we activated our coverage. After fixing one quick bug, I’ve been spending most of the evening watching importers feed data into our databases and watching the Twitter script send out updates. Because we’ve been scraping the Kansas Democratic Party media results all night and showing them immediately, we’ve been picking up caucuses seconds after they’ve been reported and have been ahead of everything else I’ve looked at.

    Because we just recently finished moving our various Kansas weekly papers to Ellington and a unified set of templates, it was quite trivial to include detailed election results on the websites for The Lansing Current, Baldwin City Signal, Basehor Sentinel, The Chieftain, The De Soto Explorer, The Eudora News, Shawnee Dispatch, and The Tonganoxie Mirror.

    While there are definitely things we could have done better as a news organization (there always are), I’m quite pleased at what we’ve done tonight. Our servers hummed along quite nicely all night, we got information to our audience as quickly as possible, and generally things went quite smoothly. Many thanks to everyone involved.

  • We’re hiring!

    Wow, the Django job market is heating up. I posted a job opening for both junior and senior-level Django developers on djangogigs just a few days ago, and it has already fallen off the front page.

    So I’ll mention it again: We’re hiring! We’re growing and we have several positions open at both the junior and senior level. We’d love to talk to you if you’ve been working with Django since back in the day when everything was a tuple. We’d love to talk to you if you’re smart and talented but don’t have a lot of (or any) Django experience.

    Definitely check out the listing at djangogigs for more, or feel free to drop me a line if you’d like to know more.

  • Google apps for your newsroom

    I like to think that I’m pretty good at recognizing trends. One thing that I’ve been seeing a lot recently in my interactions with the newsroom is that we’re no longer exchanging Excel spreadsheets, Word files, and other binary blobs via email. Instead we’re sending invites to spreadsheets and documents on Google Docs, links to data visualization sites like Swivel and ManyEyes, and links to maps created with Google MyMaps.

    Using these lightweight webapps has definitely increased productivity on several fronts. As much as we would love every FOIA request and data source to come in a digital format, we constantly see data projects start with a big old stack of paper. Google Spreadsheets has allowed us to parallelize and coordinate data entry in a way that just wasn’t possible before. We can create multiple spreadsheets and have multiple web producers enter data in their copious spare time. I did some initial late-night data entry for the KU flight project (Jacob and Christian rocked the data visualization house on that one), but we were able to take advantage of web producers to enter the vast majority of the data.

    Sometimes the data entry is manageable enough (or the timeline is tight enough) that the reporter or programmer can handle it on their own. In this case, it allows us to quickly turn spreadsheet-style data entry into CSV, our lingua franca for data exchange. Once we have the data in CSV form we can visualize it with Swivel or play with it in ManyEyes. If all we’re looking for is a tabular listing of the data, we’ve written some tools that make that easy and look good too. On larger projects, CSV is often the first step to importing the data and mapping it to Django objects for further visualization.
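
    Working with the CSV once it exists takes only a few lines. A sketch, with invented file and column names:

    # Sketch: CSV as the lingua franca. The file and column names are invented.
    import csv

    with open("parcels.csv") as f:
        rows = list(csv.DictReader(f))

    # From here the rows can feed a table, a visualization upload, or a
    # Django import loop, e.g. (Parcel is a hypothetical model):
    # for row in rows:
    #     Parcel.objects.create(address=row["address"], value=int(row["value"]))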

    Awesome webapps that increase productivity aren’t limited to things that resemble spreadsheets from a distance. A few weeks back we had a reporter use Google’s awesome MyMaps interface to create a map of places to enjoy and avoid while traveling from Lawrence, KS to Miami, FL for the Orange Bowl. We pasted the KML link into our Ellington map admin and instantly had an interactive map on our site. A little custom template work completed the project quite quickly.

    It all boils down to apps that facilitate collaboration, increase productivity, and foster data flow. Sometimes the best app for the job sits on the desktop (or laptop). Increasingly, I’ve found that those apps live online—accessible anywhere, anytime.

  • 2008 Digital Edge Award Finalists

    The 2008 Digital Edge Award finalists were just announced, and I’m excited to see several World Company sites and projects on there, as well as a couple of sites running Ellington and even the absolutely awesome Django-powered

    At work we don’t do what we do for awards. We do it to serve our readers, tell a story, get information out there, and do it as best we can. At the same time even being nominated as finalists is quite an honor, and evokes warm fuzzy feelings in this programmer.

    Here are the various World Company projects and sites that were nominated (in the less than 75,000 circulation category):

    Not too shabby for a little media company in Kansas. I’m particularly excited about the nominations since it hasn’t been too long since we re-designed and re-launched the site with a lot of new functionality. Scanning the finalists I also see a couple of other sites running Ellington as well as several special projects by those sites.

    As someone who writes software for news organizations for a living I’m definitely going to take some time this morning to take a look at the other finalists. I’m particularly excited to check out projects from names that I’m not familiar with.

  • Google Maps GeoXml crash course

    Over the weekend I added KML/GeoRSS data loading via GGeoXml to some mapping software for work. I ran into a couple of gotchas and a couple of things that I thought were really interesting, so I thought I’d share.

    Getting started

    GGeoXml and particularly clickable polylines are relatively new features, so we need to specify that we want to use beta features when we grab our Google Maps code:

    <script src="http://maps.google.com/maps?file=api&v=2.x&key=YOUR_API_KEY" type="text/javascript"></script>

    The key here is v=2.x, which specifies that we want 2.x, or somewhat bleeding-edge, features (substitute your own Maps API key for YOUR_API_KEY). Now that we’ve got that loaded up, we’ll want to define a couple of variables:

    var map;
    var geoXml;

    This sets up two global variables that we’ll be using. While I’m not a big fan of polluting global namespaces, this will allow me to play with these variables via Firebug once the page has loaded. For non-sample production code you’ll want to properly namespace all of your work. Next we’ll grab some data using GGeoXml:

    geoXml = new GGeoXml("");  // pass the URL of the KML/GeoRSS file to load

    This will grab data from the XML file. In my case it’s a list of things to do in and around Kansas City. Now that we have data in hand, let’s create a map and add this data as an overlay:

    if (GBrowserIsCompatible()) {
      map = new GMap2(document.getElementById("map_canvas")); 
      map.setCenter(new GLatLng(38.960543, -95.254383), 9);
      map.addControl(new GLargeMapControl());
      map.addControl(new GMapTypeControl());  // assumed second control
      map.addOverlay(geoXml);
    }

    This should be pretty familiar if you’ve ever worked with the Google Maps API. We’re creating a new map using the div map_canvas in the document. Then we set the center of the map and add a few controls. Once we’re done with that, we’re going to add the geoXml that we loaded via KML to the map. Here’s the complete basic example:

    KC Basics

    Let Google do the hard stuff

    You’ll notice that the basic map is centered manually and doesn’t really fill the whole viewport. Once GGeoXml has finished running, we can query it for useful information. For example, I can ask what its default center is from Firebug:

    >>> geoXml.getDefaultCenter();
    (39.076937, -94.601867) Uk=39.076937 La=-94.601867 x=-94.601867 y=39.076937

    Knowing that, I could then set the center of the map as follows:

    map.setCenter(geoXml.getDefaultCenter());

    While this handles setting the center correctly, it doesn’t help me figure out what zoom level to set. Luckily there is another method on GGeoXml objects: gotoDefaultViewport(map). I can then call the following and have Google do all the hard work of figuring out what center and zoom to use to make sure all of the KML/GeoRSS content fits within the viewport:

    geoXml.gotoDefaultViewport(map);

    This lets Google handle creating bounding boxes and finding centers and zoom levels, which, while interesting exercises, aren’t fun to do on a daily basis. There’s one gotcha here though: you can’t get information about a GGeoXml instance until it’s done loading.

    Are we there yet?

    When we call GGeoXml, we can optionally provide a callback function that will be called as soon as GGeoXml is done doing its thing and we can access it. First let’s create a function that centers and zooms so that the data is contained nicely within the viewport:

    var geoCallback = function() {
      geoXml.gotoDefaultViewport(map);
    };

    Next we have to modify our call to GGeoXml to include the callback:

    geoXml = new GGeoXml("", geoCallback);

    Here’s the final code that includes the callback:

    KC Final

    I hope that illustrates how much can be done with very little code. Definitely take advantage of the rich map annotating capabilities that Google’s My Maps offers and don’t be shy about including them in maps on your own site.

  • Google Maps adds clickability to GPolyline and GPolygon

    Google Maps: clickable poly!

    I’ve been waiting for this announcement ever since Google introduced GGeoXml to its mapping API:

    In our latest release (2.88) of the API, we’ve added “click” events to GPolyline and GPolygon, much to the enthusiasm of developers in the forum.

    I knew it was just a matter of time since their internal apps have been supporting clickable GPolylines and GPolygons for some time now. Read the whole post for some fascinating information on how click detection works.

    What this boils down to (for me anyway) is that you can display information generated with Google’s MyMaps interface on your own site with the same fidelity as the original via KML and GGeoXml. Up until now you could load KML from MyMaps via GGeoXml, but the GPolylines and GPolygons only displayed and were not clickable. This removes a huge roadblock and should allow for even more interesting mapping applications.

  • Reader Submitted Content: Everybody Wins

    I was enthralled to see a reader-submitted photo on the front page of The Lawrence Journal-World this morning. It was located just below the fold and was part of a followup story about hail damage from the storm on Sunday.

    Reader-submitted photos rounded out the coverage done by our awesome staff of photographers. In fact, photos taken by our photographers and those submitted by our readers appeared side by side in the same online gallery.

    While this might make some people a little uneasy, I think it’s perfect. There’s no way that we could produce a paper with the quality that our readers expect without our photographers. At the same time our photographers can’t be everywhere at once and it’s great to be able to expand our coverage with the help of our readers.

    My co-worker David reminds me that we’re also changing the photo credit that runs in print from “Special to the Journal-World” to “Submitted online by,” which should help spread the word and generate more content online that might see its way to print.

    It’s a two-way street and everybody wins. We’re better because our readers submit content to us and we’re able to provide better coverage to them because of it.

  • Don’t Be Complacent

    News Designer:

    The new free Baltimore Examiner tab dropped Wednesday, with a bigger circulation than the Baltimore Sun.

    How awful must it be to wake up one morning and have your paper suddenly and abruptly be #2? It could happen to anyone, any time, and with little or no warning.

    We do our best to be painfully aware of that at the Journal-World. Dolph Simons Jr., chairman of The World Company, is quick to remind us, “No one can afford to be complacent as there always is someone who can come into town and beat you at your own business if you do not remain alert and strong.” That quote is on our about us page, though he echoes similar statements in a recent interview with The Kansan, KU’s student newspaper.

    Still, this gutsy move by The Examiner should remind the entire industry to keep on its toes.

  • Clickable Bylines

    Clickable bylines are the new black in online journalism, according to this post and related comments at Poynter. I have to admit I thought this was the norm rather than the exception, since it had been the case at the Journal-World long before I arrived in Lawrence.

    A few days ago Dan asked me how long it would take to whip up per-writer RSS feeds. Thanks to Django’s syndication framework, the answer was no time at all. Over the next couple of days, and with direction from Dan and David, we tweaked the feeds to include both per-writer and per-photographer feeds. David made it easy to set up search alerts that fire every time a staff member posts a story. We also updated the staff bio pages to make all of this information easier to get to.
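
    For the curious, a per-writer feed with the syndication framework is about this much code. This sketch is written against today’s Django with hypothetical models, not our actual Ellington code:

    # Sketch of a per-writer feed. Staffer and Story are hypothetical models.
    from django.contrib.syndication.views import Feed

    from newsroom.models import Staffer, Story  # hypothetical app/models

    class WriterFeed(Feed):
        def get_object(self, request, username):
            return Staffer.objects.get(username=username)

        def title(self, obj):
            return "Latest stories by %s" % obj.full_name

        def link(self, obj):
            return obj.get_absolute_url()

        def items(self, obj):
            return Story.objects.filter(byline=obj).order_by("-pub_date")[:20]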

    Here is an example from a recent story by Joel Mathis:

    in-story byline

    If you click on Joel’s name, you’ll be taken to his bio page. If you click on Contact, you’ll be taken directly to his contact form. There’s nothing new there (for us anyway). The new stuff happens on the bio page:

    Joel's bio page

    The very top of every bio page contains more metainformation than you can shake a stick at. First and foremost are Joel’s phone number and contact form. After that we have an RSS feed of his latest stories, followed by the search alert form that lets you be notified every time Joel posts a story. Since Joel is such a converged guy and takes pictures too, you can check out the latest photos he has taken or subscribe to an RSS feed of those photos. You can also subscribe to that feed as a photocast in the latest version of iPhoto.

    Joel’s bio page also contains a short biography that makes me want to head up the street to Rudy’s every time I read it. Below the bio and mugshot are a list of recent stories by Joel and a form that lets you quickly search his stories.

    I think these tools go well beyond what other news organizations are just beginning to do. At the same time, there is always room for improvement, so don’t be surprised if more information is added to these pages.