Category: Web Services

  • Getting to know Scala

    Over the past couple of weeks I’ve been spending some quality time with Scala. I haven’t really been outside of my Python shell (pun only slightly intended) since getting to know node.js several months back. I’m kicking myself for not picking Scala up sooner; it has a ton of useful properties:

    • The power and speed of the JVM and access to the Java ecosystem without the verbosity
    • An interesting mix of Object-Oriented and Functional programming (which sounds weird but works)
    • Static typing without type pain through inferencing in common scenarios
    • A REPL for when you just want to check how something works
    • An implementation of the Actor model for message passing and Erlang-style concurrency.

    Getting started

    The first thing I did was try to get a feel for Scala’s syntax. I started by skimming documentation and tutorials, and I quickly learned that Programming Scala was available on the web, so I started skimming that on a plane ride. It’s an excellent book and I need to snag a copy for my bookshelf.

    After getting to know the relatively concise and definitely expressive syntax of the language, I wanted to do something interesting with it. I had heard of a lot of folks using Netty for highly concurrent network services, so I thought I would try to do something with that. I started off tinkering with (and submitting a dependency patch to) naggati2, a toolkit for building protocols using Netty.

    After an hour or so I decided to shelve Naggati and get a better handle on the language and Netty itself. I browsed through several Scala projects using Netty and ended up doing a mechanistic (and probably not very idiomatic) port of a Java echo server. I put this up on github as scala-echo-server.
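    Conceptually, an echo server just writes back whatever it reads until the client hangs up. Here’s a rough sketch of that pattern in Python with plain blocking sockets — not a port of the Scala/Netty version above, just the same idea in a few lines:

```python
import socket
import threading

def echo_connection(conn):
    """Echo every chunk a client sends until it disconnects."""
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:  # an empty read means the client hung up
                break
            conn.sendall(data)

# Listen on a random free port on localhost.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

# Handle a single client in a background thread.
threading.Thread(target=lambda: echo_connection(server.accept()[0]),
                 daemon=True).start()

# Exercise the server: whatever we send should come straight back.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello, echo")
reply = client.recv(1024)
client.close()
server.close()
```

    The Netty version is callback-driven rather than thread-per-connection, but the protocol logic is the same read-then-write loop.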

    Automation is key

    Because my little app has an external dependency, I really wanted to automate downloading that dependency and adding it to my libraries. At quick glance, it looked like it was possible to use Maven with Scala, and there was even a Scala plugin and archetype for it. I found the right archetype by typing mvn archetype:generate | less, found the number for scala-archetype-simple, and re-ran mvn archetype:generate, entering the correct code and answering a couple of questions. Once that was done, I could put code in src/main/scala/com/postneo and run mvn compile to compile my code.

    It was about this time that I realized that most of the Scala projects I saw were using simple-build-tool instead of Maven to handle dependencies and build automation. I quickly installed it and easily configured my echo server to use it. From there my project was a quick sbt clean update compile run away from being completely automated. While I’m sure that Maven is good, this feels like a great way to configure Scala projects.

    Something a little more complex

    After wrapping my head around the basics (though I did find myself back at the Scala syntax primer quite often), I decided to tackle something real but still relatively small in scope. I had implemented several archaic protocols while getting to know node.js, and I thought I’d pick one to learn Scala and Netty with. I settled on the Finger protocol as it existed in 1977 in RFC 742.
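    The protocol itself is charmingly simple: open a TCP connection (traditionally to port 79), send a query — often just a user name — terminated by CRLF, and read until the server closes the connection. A minimal client sketch in Python, purely to illustrate the wire format:

```python
import socket

def finger(user, host, port=79, timeout=5):
    """Send a Finger query (per RFC 742) and return the server's raw reply."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        # The entire request is just the query string followed by CRLF.
        sock.sendall(user.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:  # the server signals "done" by closing the connection
                break
            chunks.append(data)
    return b"".join(chunks)
```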

    The result of my work is an open source project called phalanges. I decided to use it as an opportunity to make use of several libraries including Configgy for configuration and logging and Ostrich for statistics collection. I also wrote tests using Specs and found that mocking behavior with mockito was a lot easier than I expected. Basic behavior coverage was particularly useful when I refactored the storage backend, laying the groundwork for pluggable backends and changing the underlying storage mechanism from a List to a HashMap.

    Wrapping up

    Scala’s type checking saved me from doing stupid things several times, and I really appreciate the effort put into the compiler. The error messages and context that I get back from the compiler when I’ve done something wrong are better than those of any other statically typed language I can remember.

    I’m glad that I took a closer look at Scala. I still have a lot to learn but it’s been a fun journey so far and it’s been great to get out of my comfort zone. I’m always looking to expand my toolbox and Scala looks like a solid contender for highly concurrent systems.

  • Natalie Anne Croydon

    Passed out

    Last weekend, our first child, Natalie Anne Croydon, was born. I’ve been trying to keep up with Flickr photos and updated my Twitter feed a lot during the labor and delivery process (what a geek!). Thanks to everyone for their kind words and congratulations.

    For more pictures, check my Flickr archive starting on May 24 or my photos tagged “Natalie”.

  • Kansas covered by OpenStreetMap!

    I was checking up on the TIGER/Line import to OpenStreetMap earlier today and was pleased to see that Kansas is already partially processed! I had emailed Dave the other day and he had queued Kansas up, but I was pleasantly surprised to see it partially processed already. Douglas County is already completely imported and Tiles@Home is currently rendering it. Parts of Lawrence have already been rendered and can be seen using the osmarender layer. Here’s 23rd and Iowa:

    Lawrence in OpenStreetMap!

    Andrew Turner had turned me on to OpenStreetMap over beers at Free State and at GIS Day @ KU, even though I’ve been reading about it for some time now. So far it seems like an amazing community and I’ve been enjoying digging into the API, the XML format, and various open source tools like osmarender, JOSM, and OpenLayers.

    After getting psyched up at GIS Day I’ve been playing with some other geo tools, but more on that later.

  • Goodbye Business 2.0

    Goodbye, friend.

    I got my last issue of Business 2.0 in the mail today.

    As predicted they’ve offered to send me one month of Fortune for every two months of my remaining subscription. I’ve been offered “equivalent” subscriptions for cancelled publications in the past, but never at a 1:2 ratio.

    I might grab a copy of Fortune from the newsstand to see if it’s worth it, but I feel like I get enough standard business news from Marketplace and various sources in my feed aggregator. Unless Fortune knocks my socks off I’ll be asking for a refund. It doesn’t help that I’m already unhappy with Fortune/CNN Money for killing my beloved publication.

    Goodbye b2. You will be missed.

  • Google Maps GeoXml crash course

    Over the weekend I added KML/GeoRSS data loading via GGeoXml to some mapping software for work. I ran into a couple of gotchas and a couple of things that I thought were really interesting, so I thought I’d share.

    Getting started

    GGeoXml and particularly clickable polylines are relatively new features, so we need to specify that we want to use beta features when we grab our google maps code:

    <script src="http://maps.google.com/maps?file=api&v=2.x&key=YOUR_API_KEY" type="text/javascript"></script>

    The key here is v=2.x which specifies that we want some 2.x or somewhat bleeding edge features. Now that we’ve got that loaded up, we’ll want to define a couple of variables:

    var map;
    var geoXml;

    This sets up two global variables that we’ll be using. While I’m not a big fan of polluting global namespaces, this will allow me to play with these variables via Firebug once the page has loaded. For non-sample production code you’ll want to properly namespace all of your work. Next we’ll grab some data using GGeoXml:

    geoXml = new GGeoXml("");

    This will grab data from the XML file. In my case it’s a list of things to do in and around Kansas City. Now that we have data in hand, let’s create a map and add this data as an overlay:

    if (GBrowserIsCompatible()) {
      map = new GMap2(document.getElementById("map_canvas"));
      map.setCenter(new GLatLng(38.960543, -95.254383), 9);
      map.addControl(new GLargeMapControl());
      map.addControl(new GMapTypeControl());
      map.addOverlay(geoXml);
    }

    This should be pretty familiar if you’ve ever worked with the Google Maps API. We’re creating a new map using the div map_canvas in the document. Then we set the center of the map and add a few controls. Once we’re done with that, we’re going to add the geoXml that we loaded via KML to the map. Here’s the complete basic example:

    KC Basics

    Let Google do the hard stuff

    You’ll notice that the basic map is centered manually and doesn’t really fill the whole viewport. Once GGeoXml has finished running, we can query it for useful information. For example, I can ask what its default center is from Firebug:

    >>> geoXml.getDefaultCenter();
    (39.076937, -94.601867) Uk=39.076937 La=-94.601867 x=-94.601867 y=39.076937

    Knowing that, I could then set the center of the map as follows:

    map.setCenter(geoXml.getDefaultCenter());

    While this handles setting the center correctly, it doesn’t help me figure out what zoom level to set. Luckily there is another method on GGeoXml objects: gotoDefaultViewport(map). I can then call the following and have Google do all the hard work of figuring out what center and zoom to use to make sure all of the KML/GeoRSS content fits within the viewport:

    geoXml.gotoDefaultViewport(map);

    This lets Google handle creating bounding boxes and finding centers and zoom levels, which are interesting exercises but aren’t fun to do on a daily basis. There’s one gotcha here though: you can’t get information about a GGeoXml instance until it’s done loading.

    Are we there yet?

    When we call GGeoXml, we can optionally provide a callback function that will be called as soon as GGeoXml is done doing its thing and we can access it. First let’s create a function that centers and zooms so that the data is contained nicely within the viewport:

    var geoCallback = function() {
      geoXml.gotoDefaultViewport(map);
    }

    Next we have to modify our call to GGeoXml to include the callback:

    geoXml = new GGeoXml("", geoCallback);

    Here’s the final code that includes the callback:

    KC Final

    I hope that illustrates how much can be done with very little code. Definitely take advantage of the rich map annotating capabilities that Google’s My Maps offers and don’t be shy about including them in maps on your own site.

  • Accidental APIs: NFL edition

    NFL flash app powered by JSON

    These days whenever I find an interesting interactive/updating flash app I tend to fire up Firebug and see where the data is coming from. Quite often there’s an XML data feed somewhere that the flash app is being fed with. For example, you can get an XML feed of parking space availability or a list of airlines with gate information at Kansas City International Airport. As with many XML feeds designed for consumption with Flash, these feeds aren’t always well-formed but the data is there for the taking.

    I went on a similar quest after checking in on my Redskins. I was pleasantly surprised to find JSON driving the live game updates instead of XML. There are three JSON feeds exposed on the live game update page.

    The first is the game-specific updates. This feed includes a score breakdown by quarter, who has possession, time left, and the last few recent plays, along with a few other things that weren’t obvious at first glance.

    The second drives the league-wide scoreboard at the top of the page. This includes game date/time, the teams involved, what quarter they’re in, who has the ball, and what the score is. From time to time this feed will also include extra information such as a recent score change.

    The third feed includes information about weekly leaders in the NFL. This includes the top five passing, rushing, receiving, and scoring players this week. That sounds like great information to have programmatic access to if you’re in to fantasy football.

    It makes me happy to see big companies use XML and JSON feeds for their flash apps rather than a proprietary alternative. It’s also fun to see things like Prototype and Scriptaculous on a site like this. These feeds are rarely documented, but for the most part are self-documenting. Some of the subtleties in the NFL feeds can most likely be determined by watching the feed and the flash display over time.
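    To be clear about how undocumented these feeds are: the field names below are pure invention on my part — a hypothetical shape for a live-game feed — but they show how little code it takes to consume one of these JSON feeds once you’ve watched it in Firebug for a while:

```python
import json

# Hypothetical sample resembling a live game feed; the real key names would
# have to be reverse-engineered by watching the actual feed over time.
raw = """
{"quarter": 3,
 "time_left": "07:42",
 "possession": "WAS",
 "score": {"WAS": 17, "DAL": 10},
 "recent_plays": ["12-yd pass", "punt"]}
"""

game = json.loads(raw)
summary = "Q{0} {1}, {2} ball, WAS {3} DAL {4}".format(
    game["quarter"], game["time_left"], game["possession"],
    game["score"]["WAS"], game["score"]["DAL"])
```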

  • Google Maps adds clickability to GPolyline and GPolygon

    Google Maps: clickable poly!

    I’ve been waiting for this announcement ever since Google introduced GGeoXml to its mapping API:

    In our latest release (2.88) of the API, we’ve added “click” events to GPolyline and GPolygon, much to the enthusiasm of developers in the forum.

    I knew it was just a matter of time since their internal apps have been supporting clickable GPolylines and GPolygons for some time now. Read the whole post for some fascinating information on how click detection works.

    What this boils down to (for me anyway) is that you can display information generated with Google’s MyMaps interface on your own site with the same fidelity as the original via KML and GGeoXml. Up until now you could load KML from MyMaps via GGeoXml but the GPolylines and GPolygons only displayed and were not clickable. This removes a huge roadblock and should allow for even more interesting mapping applications.

  • Crosswalk usability


    Today the City of Lawrence announced the operational status of a new crosswalk on 11th between New York and New Jersey. I’m a big fan of pedestrian safety, especially when it benefits kids on the way to school, but I was a little confused about the description of the new crosswalk and related signaling:

    The unique design consists of three lights (two reds and one yellow) configured like an inverted triangle. The signal remains dark until activated by a pedestrian pushing a button. The signal then flashes yellow for approximately six seconds followed by a steady yellow for approximately four seconds and followed by a double, steady red during the time pedestrians are receiving a “walk” signal. Once the pedestrian display changes to “Don’t Walk,” the motorist’s signal changes to alternating flashing red. During the flashing red, motorists may proceed through the crosswalk after stopping if the pedestrian has completed crossing.

    The operation of this crosswalk appears straightforward from the pedestrian point of view. Press button; wait for walk signal; walk across. Things are a bit more vague for motorists. Here are several questions I can think of with answers inferred from the press release and tri-fold pamphlet, along with some common sense:

    What do I do if someone is crossing in the crosswalk?
    Stop and wait for the person to finish crossing.
    What do I do if the bottom yellow light is flashing?
    This means that a pedestrian has just activated the signal. It’s probably best to slow to a stop and allow the person to cross.
    What do I do if the bottom yellow light is solid?
    A pedestrian activated the signal a few seconds ago and is waiting to cross. It’s probably best to slow to a stop and allow the person to cross.
    What do I do if the top two lights are red?
    The two red lights are equivalent to a stop sign or red light, and you are required to stop at or before the line to let the pedestrian cross.
    What do I do if the two red lights are alternating flashing red?

    If there is a pedestrian in the crosswalk you must either stop or continue to stop. If the pedestrian has cleared the crosswalk you may proceed.

    While I think that a crosswalk with lights to indicate it is in use is a great thing, some simplification could be done here. With all of the various light states above, the following always applies:

    What do I do if someone is crossing in the crosswalk?
    Stop and wait for the person to finish crossing.

    Perhaps it would be best to reduce the crosswalk to a binary state: either someone has pushed the button and the crosswalk is lit, or it is not in use and therefore not lit. This would reduce the four light states into one single state and greatly simplify what a driver needs to keep track of. If you would like to give the driver warning, use a solid red preceded by a solid yellow for a few seconds. This would more closely emulate the traffic signal that drivers are used to obeying. I have seen other crosswalks that flash yellow and transition to solid yellow. Either of these solutions seems simpler and more effective than the High Intensity Activated crossWalK described above.
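    If it helps to see the redundancy, the driver-facing rules collapse into a small lookup table (the state names and wording are mine, summarizing the press release):

```python
# Signal state -> what a driver should do, paraphrasing the press release.
driver_actions = {
    "dark": "proceed normally, yielding to anyone already in the crosswalk",
    "flashing yellow": "slow to a stop; a pedestrian just activated the signal",
    "steady yellow": "slow to a stop; a pedestrian is waiting to cross",
    "double steady red": "stop at or before the line, as at a red light",
    "alternating flashing red": "stop; proceed once the pedestrian has cleared",
}

# Four of the five states reduce to the same instruction: stop for the pedestrian.
states_that_mean_stop = [s for s, action in driver_actions.items()
                         if "stop" in action]
```

    When four of five states demand the same driver behavior, that is a strong hint the signal has more states than it needs.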

  • Those annoying AT&T “free reminder” SMSes

    ATT SMS

    I’ve received an AT&T “free reminder” SMS twice now in two days. I know I’m not paying for it, but these messages amount to SMS spam to me. It may not be a big deal to most people, but most of the time when I receive an SMS it means a server is down and needs my attention. There’s a certain stress level associated with my incoming SMS sound, so it really irks me when I check the message and it’s spam from my carrier.

    The first one I ignored, but I called up customer support as soon as the second one came in. I let them know that these messages, even though I’m not paying for them, amount to spam in my inbox, that I never received any such messages while I was a Cingular customer, and that these messages made me very unhappy. I was as polite about it as I could be, because the person on the other end of the phone deals with a lot of angry customers during the day.

    She asked me if there was a long code as the sender of the message, and after checking my phone I confirmed that yes indeed it was (in this case it was 1 111 301 000). After looking that up in her knowledge base she let me know that the easiest way to unsubscribe from these messages is to reply to the message with the word “stop” or “unsubscribe”. There were a few others, but these are the two that I remember.

    My phone UI (S60 2nd edition) didn’t allow me to directly reply to this message, but I composed a new text message to “1 111 301 000” with the word “stop” as the message body. We’ll see if this works. If it does I’ll be happy. If it doesn’t, I’ll be quite unhappy.

  • On the internet, everyone can hear you scream

    When I first read this story on techcrunch about Grand Central snubbing customers and changing their numbers with very little notice, it seemed like the kind of thing that was affecting thousands of users. Perhaps it was a problem on the same scale as the recent Skype outage. That’s a big deal, right?


    The problem affected exactly 434 users.

    As explained by founder Craig Walker in the comments of the techcrunch post, the problem occurred when one of their partners stopped providing service in a particular part of the country. They were able to port the majority of customers to a new provider but were unable to do so for 434 people.

    Unfortunately two of those 434 people had a blog. Then it got picked up by Techcrunch and all of a sudden it’s not a problem that affected 434 people, it’s a problem that affected the entire internet.

    It’s unfortunate (for Grand Central and Google) that some of those 434 people were in Northern Virginia. The blog per capita there is off the charts. If those 434 people had been in rural Iowa, the internet would have never known.

    That brings me to the other takeaway from this incident: when designing a product or service (especially for alpha geeks) you have one and only one chance to get it right. You’re never more than one power outage, one service outage, one information breach, bad decision, misstep, misquote, or mess-up away from losing your customers or potential customers forever.

    Are people going to remember that this issue affected 434 people a few months down the road? Nope. The conversation will go something like this: “Grand Central? I dunno about them. I remember hearing that they changed phone numbers on a ton of people after promising them ‘one number for life.’”

  • Google Analytics for project hosting

    Google analytics for project hosting

    This must be a relatively new feature because I remember thinking to myself “if only Google project hosting supported Google Analytics.” I kept meaning to send someone an email about it, mostly because I was curious how many people had been taking a look at my two little Erlang projects. I’m guessing that the answer is “not many,” but still.

    Thanks, Google, for knocking out a feature before I could even ask for it. As a happy Google project hosting user, I’d highly suggest using it for open source projects big and small. The administration UI is minimal but intuitive; it includes a Subversion repository, issue tracker, and download manager, and it’s easy to link out to other resources that your project might have.

  • 2007: The year of “The Real Internet”

    I know that 2006 and every year that came before it was supposed to be “the year of the mobile web”. Maybe it’s time to change that a little. 2007 will be the year of “The Real Internet.”

    As with most things mobile, “The Real Internet” has been available on Nokia devices since 2006 in the form of the Nokia Mobile Browser, based on the same open-source technology as the iPhone’s Safari.

    Later this month, Apple will unleash “The Real Internet” on its iPhone to much fanfare.

    Today Opera released a beta version of its new mobile browser that aims to bring “The Real Internet” to all devices that can run J2ME. This means that “The Real Internet” can come to a wide range of devices, be they computing monsters or low-end free-on-contract phones. The new version of Opera Mini allows users to zoom in and out much like Safari on the iPhone (albeit in a much less sexy way). There is a video demo and an online emulator if you’d like to check it out.

    So there it is, 2007 (or 2006 in Finland) seems to be shaping up to be the year of “The Real Internet.”

  • Forum Nokia Remote Device Access

    I’m really excited about Nokia’s new Remote Device Access program for Forum Nokia members, including free members.

    A similar service has been available from Device Anywhere for some time now, but that service isn’t free (though it’s definitely a lot cheaper than purchasing half a dozen test devices). I’m excited that Nokia has opened up a device testing service with a wide array of devices, from the 5500 to the N95, to all developers, including shareware and open source developers. I have 40 credits and a half hour with a device costs 2 credits, so I have the potential to test for up to 20 hours with my free Forum Nokia membership.

    Here are some screenshots from the Java-based interface:

    RDA List
    Nokia Remote Device Access device list

    RDA N95
    Nokia Remote Device Access N95 information

    RDA standby
    Nokia Remote Device Access N95 standby screen

    RDA home screen
    Nokia Remote Device Access N95 home screen

    RDA S60 browser
    Nokia Remote Device Access S60 browser

    RDA maps
    Nokia Remote Device Access maps

    By default the bit-depth isn’t quite the same as the device (see Heikki’s comment below), so there’s a bit of dithering and as expected there’s a slight delay, but it’s definitely the next best thing to having a device in your hands. I was a bit disoriented when I put the S60 browser in horizontal mode and continued to use it with a vertical keypad, but that’s to be expected.

    I think it’s a great testing tool and can’t wait to make use of it in the future.

  • A mobile take on SXSW 2007

    With SXSW Interactive 2007 winding down I’ve started reflecting on SXSW from a mobile perspective. First of all, I found myself using the mobile site from my phone quite a bit at the beginning of the conference, as it allowed me the same overview that the pocket schedule did but also allowed me to drill down into panel details. As I had more time to research my panel selection later in the week, I found myself using my annotated (analog) pocket guide more and the mobile site less.

    One of the most invigorating sessions was Brian Fling’s presentation entitled Everything you wanted to know about the mobile web (but were afraid to ask). His slide deck is chock-full of information with only as much jargon as absolutely necessary. There wasn’t a lot of new information in it for me, but I think he’s doing exactly the right thing by firing up this group of alpha designers about mobile design and showing them that it’s not that hard. In fact, XHTML-MP is still just XHTML, and while there are lots of limitations, CSS is still CSS. He also mentioned several great resources in his presentation, including the brand new .mobi mobile web developer’s guide. Also worth reading is the Mobile Web Initiative’s Mobile Best Practices document.

    I also caught a mobile panel on Monday called “Mobile Application Design Challenges and Tips.” Dan Saffer took some great notes on the panel, which focused on the trials and tribulations experienced when developing a mobile app. It was nice to hear that several apps were quite successful operating “off-deck,” or outside of carrier portals. It was great to hear Matt Jones talk about lower-level UI bits and revel in the success of ZoneTag, but my biggest takeaway from the panel came from John Poisson. His advice was to not worry too much about details: get it out there as quickly as you can, even if it’s simpler than you plan for it to be, then gather feedback from your users and continue to improve the product. This also jibed with my takeaway from the turning-projects-into-revenue panel: fail early, fail often.

    The other mobile panel that stood out was called There’s No Such Thing as the Mobile Web (Or Is There?). I found a great set of notes on this panel at Eran’s blog. The panel started off discussing whether there was a separate “mobile web” or not, and in the end it was hard to come up with a solid answer. It is significant to note that what a user does or expects to do on a mobile device is somewhat different from what a person needs when sitting in front of a computer. Context, location, creating content, and retrieving information quickly are essential.

    It was interesting to get several different viewpoints on the issue: Dan Applequist from the standards and carrier viewpoint, Carlo from the industry analyst side, Michael Sippey from the blogging/content creation point of view, and Dwipal Desai, who is focusing on mobile for a little company called YouTube. I was fascinated by how well Six Apart knows its users: they focus on richer apps on higher-end devices for a service like Vox but emphasize text messaging for LiveJournal, because those users tend to be younger with cheaper phones and limited data capability but usually have unlimited messaging plans. Vodafone is also in a unique position to offer a rich environment with asynchronous communication built around SVGT. Looking forward, it’s obvious that there’s a ton of potential for the mobile web (or connected rich applications, or whatever you’d like to call it), but it’s unclear exactly which path we’ll take and what it will be.

    I truly hope that the topics discussed at SXSW this year will encourage these alpha designers, UI experts, and coders to take a closer look at mobile apps and push the limits of the mobile web.

  • Mapping Every airport and helipad in America

    All the airports

    After stumbling upon Transtats again today I took my semi-annual visit to the FAA data and statistics page to see if there was anything new to play with. The unruly passenger count still looks like it’s down for 2006, but I was really interested in playing with the airport data that I’ve seen before.

    After a little help from Python’s CSV module and some helper functions from geopy, I whipped up a 4 meg KML file for use with Google Earth or anything else that can import KML. Be warned though that the file contains some 20,000 airports, helipads, and patches of dirt, which can lead to some rendering bugs. If you’re interested, here’s the code that generated the KML.
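    The transformation boils down to one loop: read rows with Python’s csv module and emit a KML Placemark per row. Here’s a stripped-down sketch with made-up column names and a couple of made-up rows (the real FAA file is laid out differently and has some 20,000 rows):

```python
import csv
import io

# Made-up input resembling the airport data; the real FAA file differs.
raw = """name,lat,lon
Lawrence Municipal,39.009,-95.217
Kansas City Intl,39.297,-94.714
"""

placemarks = []
for row in csv.DictReader(io.StringIO(raw)):
    # Note that KML coordinates are longitude first, then latitude.
    placemarks.append(
        "<Placemark><name>{name}</name>"
        "<Point><coordinates>{lon},{lat},0</coordinates></Point>"
        "</Placemark>".format(**row)
    )

kml = ('<?xml version="1.0" encoding="UTF-8"?>\n'
       '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n'
       + "\n".join(placemarks)
       + "\n</Document></kml>")
```

    The real version also leaned on geopy helpers for cleanup; this sketch skips that entirely.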

  • All I want to do is convert my schema!

    I’m working on a Django app in which I want to store GPS track information in GPX format. The best way to store that in Django is with an XMLField. An XMLField is basically just a TextField with validation via a RELAX NG Compact schema.

    There is a schema for GPX. Great! The schema is an XSD, though, but that’s okay: it’s a schema for XML, so it should be pretty easy to just convert it to RELAX NG Compact, right?


    I pulled out my handy-dandy schema Swiss Army knife, Trang, but was shocked to find out that while it can handle RELAX NG (both verbose and compact), DTD, and even a plain XML file as input, and can emit XSD as output, there was just no way I was going to be able to coax it into reading an XSD. Trang is one of those tools (much like Jing) that I rely on pretty heavily but that hasn’t been updated in years. That scares me a bit, but I keep on using ’em.

    With Trang out of the picture, I struck out with various Google searches (which doesn’t happen very often) before landing on the conversion section of the RELAX NG website. The first thing that caught my eye was the Sun RELAX NG Converter. Hey, Sun’s got it all figured out. I clicked the link and was somewhat confused when I ended up at their main XML page. I scanned around and even searched the site but was unable to find any useful mention of their converter. A quick Google search for sun “relax ng converter” yielded nothing but people talking about how cool it was and a bunch of confused people (just like me) wondering where they could get it.

    At this point I was grasping at straws, so I pulled up the Internet Archive version of the extinct Sun RELAX NG Converter page. That tipped me off to the fact that I really needed to start tracking down rngconv.jar. A Google search turned up several XDoclet and Maven CVS repositories. I grabbed a copy of the jar, but it wouldn’t work without something called the Sun Multi-Schema XML Validator.

    That’s the phrase that pays, folks.

    A search for Sun “Multi-Schema XML Validator” brought me to the project page and included a prominent link to nightly builds of the multi-schema validator as well as nightly builds of rngconv. These nightly builds are a few months old, but I’m not going to pick nits at this point.

    After downloading everything and making sure all the jars were in the same directory, I had the tools I needed to convert the XSD in hand to RELAX NG Compact. First I converted the XSD to RELAX NG verbose with the following command: java -jar rngconv.jar gpx.xsd > gpxverbose.rng. That yielded a RELAX NG (very) verbose schema. Once I had that I could fall back to trusty Trang to do the rest: trang -I rng -O rnc gpxverbose.rng gpx.rnc. It errored out on any(lax:##other), so I removed that bit and tried again. After a lot more work than should have been required, I had my RELAX NG Compact schema for GPX.

    My experience in finding the right tools to convert XSD to RELAX NG was so absurd that I had to write it up, if only to remind myself where to look when I need to do this again in two years.

  • My mail setup: Postfix, Dovecot, PostgreSQL, SASL, TLS, Amavis, SpamAssassin, and Postgrey on Ubuntu

    Part of moving from several various hosting locations to one was figuring out my mail setup. I had originally planned to manage mail with a control panel provided by my VPS provider. The 1U server that I had co-located in Maryland for several years is still sitting in the middle of the den, so I figured that the easier mail was to manage, the more likely I would be to get off my butt and manage it. It turns out that there were some bugs with Ubuntu and the version of the control panel I was using, so I asked the VPS provider to reinstall a fresh clean copy of Ubuntu Server and I’d take it from there.

    I have to say that I’ve been doing this Linux thing for some time now (remember downloading A, AP, D, K, etc. disk sets?) and it seems like setting up a good mail server is still one of the most tedious things to do, but boy does it feel good when you’re done. After quite a bit of research I settled on a virtual mailbox stack built on Postfix and Dovecot.

    I’ve found that setting up a mail server is best done in pieces. Configure something, test to make sure that it works, add another piece, break it, fix it, test it again. I started out my setup with a basic Postfix installation as described on the Ubuntu wiki. Once that was working I moved to a virtual mailbox setup with flatfiles but eventually ditched that for a PostgreSQL setup as described at L’Xtreme after trying to get PAM and SASL authentication working with my flatfile setup. If you’re looking to start from scratch with a Postfix setup using virtual mailboxes I would highly recommend the L’Xtreme setup.
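    For reference, a PostgreSQL-backed virtual mailbox setup like the L’Xtreme one wires Postfix to the database through pgsql: lookup tables. A hedged sketch of the relevant main.cf lines (the .cf file names and mailbox base path here are placeholders, not the exact ones from the guide):

    ```
    # /etc/postfix/main.cf (sketch -- lookup table file names are illustrative)
    virtual_mailbox_domains = pgsql:/etc/postfix/pgsql-virtual-domains.cf
    virtual_mailbox_maps = pgsql:/etc/postfix/pgsql-virtual-mailboxes.cf
    virtual_alias_maps = pgsql:/etc/postfix/pgsql-virtual-aliases.cf
    virtual_mailbox_base = /var/mail/virtual
    ```

    Each pgsql: file holds the connection info and a query that maps an address to a domain, mailbox path, or alias.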

    The only snag I ran into with the L’Xtreme instructions was generating CRYPTed passwords. I ended up using htpasswd from the apache2-utils package in Dapper Drake. Setting both auth_debug = yes and auth_debug_passwords = yes in /etc/dovecot/dovecot.conf helped me figure out the password mismatch that was going on.

    Once I had the basic setup working with TLS and SASL authentication via pam authenticating to Postgres, I set out to lock down the system against spam. The first thing I did was to set up Amavisd-new running it through SpamAssassin and several plugins. That did a pretty good job but spam dropped to near zero as soon as I installed Postgrey. I used these instructions from Debian/Ubuntu tips and tricks. I tweaked the config files to whitelist quicker and reduced the greylist to 60 seconds from the default 5 minutes (to be a little nicer to legit mail servers). I’ve also been using to keep an eye on stats.
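    On Debian/Ubuntu the greylist delay lives in /etc/default/postgrey, so shortening it from the default 5 minutes to 60 seconds looks roughly like this (exact flags can vary by postgrey version, so check the man page for yours):

    ```
    # /etc/default/postgrey -- sketch; --inet=10023 is the usual Debian listen port
    POSTGREY_OPTS="--inet=10023 --delay=60"
    ```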

    Like I said, setting up a mail server can be quite frustrating, but it sure is satisfying once it’s humming along.

  • Wii Internet Channel Drops

    [Screenshots: the Wii Internet Channel start page on the Wii Channel menu, and zooming in on Google Maps in the Wii Internet Channel]

    The Wii Internet channel dropped early this morning. I’m rounding up some screen shots on my Wii Internet Channel flickr set.

    So far it seems pretty usable. Pages are rendered in fullscreen mode (sometimes with just a little scrolling if the page is wide). You scroll by hitting the B (trigger) button and moving the Wii remote. Zooming in and out is as simple as hitting the + or – buttons. Due to the resolution of my TV it was necessary to zoom in on just about every page I went to. I found the predictive text interface used while entering text in form fields to be good, definitely on par with my experiences with T9 and the Nokia 770 onscreen keyboard.

    Adding favorites was quite easy (just click the star while you’re on a page then click the add button). After browsing around the web and snapping some pictures I went over to Wiicade to play some flash games.

    I don’t see the Wii Internet Channel becoming my primary browser any time soon but I can’t wait to see more content designed for and adapted to the Wii platform.

    Update: The demo video from Opera does a really good job at taking Opera on the Wii through its paces.

    Opera on the Wii identifies itself as such: Opera/9.00 (Nintendo Wii; U; ; 1309-9; en)

  • Web Services with JSON and Lua

    I’m still not sure why but today I wrote a web services client for the Yahoo! Traffic API using JSON and Lua.

    According to Wikipedia, Lua is a lightweight scripting language most commonly embedded as an in-game scripting language. While gaming is its biggest niche, I think it’s a simple but very powerful little scripting language in its own right.

    While exploring Lua a bit I stumbled upon the socket library and decided to couple that with a JSON parser. The short and sweet program checks and prints out traffic information for Kansas City, Missouri in about 20 decently commented lines. Here is some example output from the program.

    As with many short and sweet scripting language programs, this one relies on a few external libraries: LuaSocket and JSON4Lua. The socket library uses Lua’s C bindings to work its magic, but JSON4Lua (like many other extensions) is written in pure Lua. I’ve always been a sucker for a good library written in pure Python, and as such I love pure Lua extensions too.

    The JSON parser and HTTP library were particularly neat to work with, as was wrapping my head around Lua tables. Here’s the bit that parses the JSON response in to a Lua table:

    results = JSON.decode(r)["ResultSet"]["Result"]

    Lua tables are neat in that you can access them in dict or attribute style so the above code can be rewritten as such:

    results = JSON.decode(r).ResultSet.Result
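    For comparison, here’s the same decode-and-index step in Python. This is just a sketch: the payload is a stand-in with the ResultSet/Result shape of the Yahoo! response, not live API output, and the HTTP/socket handling from the Lua program is omitted:

    ```python
    import json

    # Stand-in for the body of a Yahoo! Traffic API response (the real program
    # fetched this over HTTP with LuaSocket; the incident data here is made up).
    r = '{"ResultSet": {"Result": [{"Title": "Incident on I-35", "Severity": "3"}]}}'

    # The Lua line  results = JSON.decode(r).ResultSet.Result  maps to plain
    # dictionary indexing in Python -- no attribute-style sugar here.
    results = json.loads(r)["ResultSet"]["Result"]
    print(results[0]["Title"])
    ```

    The attribute-style access is the Lua-specific nicety: Python dicts only support the bracketed form.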

    If you’d like to read up on Lua a bit more I would suggest checking out the following sites:

  • Oh the CalDAV Possibilities

    While checking up on the Darwin Calendar Server wiki the other day I noticed something I had missed last week: CalDAVTester. It is an exhaustive suite of tests written in Python with XML config files to verify that a CalDAV server implementation is properly implementing the spec.  This suite of tests is going to prove very useful as more servers and clients implement the CalDAV spec.

    Right now the biggest problem with CalDAV is a lack of clients and servers.  That will change over the next 6-8 months as clients and servers are refined, released and rolled out.  Hopefully the CalConnect group and an exhaustive suite of tests will help keep interop a high priority.