Posted: April 26th, 2006 | Author: Matt Croydon | Filed under: Linux, Open Source | 2 Comments »
A week or so ago I managed to get Xgl working on a box at home using these instructions with an aging Nvidia card. I previously had issues getting it to work with an ATI card, but I think that had more to do with fglrx and my rather old Radeon than anything else.
I was completely blown away when I started up Xgl and enabled compiz for the first time. It brought an extra level of polish to the already amazing Dapper Drake. By default it enables several effects and features. Some are more whizzy than useful, but the alt-tab preview pane and Exposé-like features are quite useful. Less useful but still pretty are true transparency and the waving effect that happens when you drag a window.
Unfortunately running Xgl and compiz meant that things like emacs didn’t run at all, and other apps like Evolution behaved unpredictably.
That’s to be expected though. Xgl is still very much unstable and bleeding edge. Even if it’s not usable (for me) day in and day out, I think it’s a glimpse into the future of desktop Linux. The packages are available in the Universe repository for Dapper, but there are big warnings everywhere about their experimental nature.
I really hope that over the next six months Xgl matures and that by the time Edgy Eft is born Xgl will be ready for prime time, if not in the default install.
Posted: April 26th, 2006 | Author: Matt Croydon | Filed under: *BSD, Apple, Linux, Projects | 15 Comments »
My work powerbook was out at Apple for a week or so getting a tan, a new motherboard, memory, and processor. While it was out of town I settled in to a Linux development environment focused around Ubuntu Dapper, Emacs 22 + XFT (pretty anti-aliased fonts), and whatever else I needed. Ubuntu (like other apt-based systems) is great for hitting the ground running because you just install whatever you need on the fly, only when you need it. I also got pretty into emacs and all of the stuff that’s there by default in a source build of the development snapshot. My co-worker James helped me get through some of the newbie bumps of my emacs immersion program.
When the powerbook came back I decided it was time to reboot my development environment, so I started from scratch. Here’s what I installed, in the order that I installed it:
- Updates. Oh. My. Goodness. I rebooted that thing so many times I started looking for a green start button.
- Quicksilver (freeware): I use it all the time to get at stuff I need.
- Transmit (commercial, $30): Worth every penny.
- Firefox (open source): My browser of choice, though I really dig Safari’s rendering engine.
- Textmate (commercial, 39 euro): I spend all day in this text editor and it rocks, though I do miss emacs.
- Then I disabled capslock. I never hit it on purpose; it’s always getting in the way. I should really map a modifier key to it, but I’m not sure which one and I don’t know if I can convince my pinky to hit it on purpose.
- Xcode: A man has to have a compiler.
- Subversion (open source): I used the Metissian installer since it has treated me well in the past, and I often have flashbacks of building subversion pre-1.0 from source.
- Django (open source): I checked out trunk, .91, and magic-removal from svn.
- Ellington (commercial, starting at $10-15k): I checked out ellington and other work stuff from our private repository.
- Firebug: Essential for web development.
- Python 2.4 (open source): I’m not a big fan of the Python 2.3 that ships with OSX.
- Python Imaging Library (open source): It’d be really nice if this made its way in to the standard Python distro.
- ElementTree (open source): I usually use either ElementTree or Sax for parsing XML documents.
- GNU Wget (open source): It’s what I use to download stuff from the commandline.
- PostgreSQL (open source): It probably hogs resources to always have this running in the background, but I use it often enough.
- PostgreSQL startup item from entropy.ch
- mxDateTime (open source): I’ve never really used it directly, but psycopg does.
- Psycopg 1.x (open source): Django uses this to talk to Postgres.
- Colloquy (open source): A really nice IRC client for OSX. I’m also rather fond of Irssi and screen over SSH.
- Growl (open source): It’s not work critical but I like it.
- Pearl Crescent Page Saver (freeware): I find it indispensable for taking screenshots of entire web pages.
- Session Saver for Firefox: I hate looking at 15 different forum threads to find the latest version of this, but I love what it does for me.
- Adium (open source): Best darned IM client for OSX that talks just about any protocol.
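Since ElementTree made the install list, here’s the sort of minimal parsing it buys you. A sketch using the stdlib module path (back then the standalone package was imported as `elementtree.ElementTree`; it landed in the standard library with Python 2.5), run against a made-up XML snippet:

```python
# Standalone package in 2006 was: import elementtree.ElementTree as ET
import xml.etree.ElementTree as ET

# A made-up snippet, just to show the API.
doc = """<items>
  <item id="1"><title>First</title></item>
  <item id="2"><title>Second</title></item>
</items>"""

root = ET.fromstring(doc)
# findall/findtext make simple extraction a one-liner
titles = [item.findtext("title") for item in root.findall("item")]
# titles == ["First", "Second"]
```

Compared to SAX, you trade streaming for a tree you can poke at interactively, which is usually the right trade for small documents.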
While I may have missed an app or two, I think that just about covers my OSX development and living environment. I find the Ubuntu desktop useful enough that it’s still humming under my desk at work. The work LCD has both analog and DVI inputs so I’m able to switch between my two-screened powerbook and a one-screened Linux desktop in a pseudo-KVM kind of way.
I can’t say enough how impressed I was with Dapper, and how productive it kept me. Aside from my emacs learning curve, I felt at home and had the command line and any app that I wanted to install at my disposal.
I hope that this laundry list is helpful; if nothing else it’ll be a place for me to start the next time I’m looking for a clean slate.
Posted: April 25th, 2006 | Author: Matt Croydon | Filed under: Journalism | 11 Comments »
I was enthralled to see a reader submitted photo on the front page of The Lawrence Journal-World this morning. It was located just below the fold and part of a followup story about hail damage from the storm on Sunday.
Reader submitted photos rounded out the coverage done by our awesome staff of photographers. In fact, reader submitted photos and those taken by our photographers appeared side by side in the same online gallery.
While this might make some people a little uneasy, I think it’s perfect. There’s no way that we could produce a paper with the quality that our readers expect without our photographers. At the same time our photographers can’t be everywhere at once and it’s great to be able to expand our coverage with the help of our readers.
My co-worker David reminds me that we’re also changing the photo credit that runs in print from “Special to the Journal-World” to “Submitted online by” which should help spread the word and generate more content online that might find its way into print.
It’s a two-way street and everybody wins. We’re better because our readers submit content to us and we’re able to provide better coverage to them because of it.
Posted: April 24th, 2006 | Author: Matt Croydon | Filed under: Web Services | 13 Comments »
I’ve wanted a Bluetooth GPS device for a long time now. I know it’s a totally geeky thing, but there are so many things that I’ve wanted to do that involve getting a hard lat/long reading. Don’t get me wrong, cell tower information is nice, but nothing is better than knowing *exactly* where you are.
I decided to spend some tax return money on a nice (but inexpensive) Bluetooth GPS unit. Jim Ley was kind enough to share what he knew about them with me in #mobitopia. After talking to Jim and examining my options, it sounded like I had to choose between feature sets in my price range. I could either have the SiRF III chipset, which by all accounts rocks, is small, low power, and extremely accurate, or I could do on-device logging.
My decision quickly came down to either the DeLorme Bluelogger with on-device logging but a previous generation SiRFStar IIe chipset or the Holux GPSlim 236 which has the newer chip but no on-device logging.
I ended up snagging a Holux 236 and a USB data cable for a little over a hundred bucks after shipping on ebay. I’ve since been tinkering with hooking up the GPS to both Meaning and ZoneTag to geocode my flickr photos. They’re the ones filed under geotagged. I’ve also been having fun with the NMEA info python app, Christopher Schmidt’s GPSDisplay, Microsoft Streets and Trips on my wife’s PDA, and GPSDrive on the Nokia 770.
Now that I’ve had a chance to use it a little, it would be nice to be able to do on-device logging. It’s not the end of the world, but in hindsight it would have been nice to log my path on the device and download it later rather than having to keep a bluetooth connection open on a second device. I’m still glad to have the latest and greatest chipset though.
The Holux 236 has been one of the most hassle-free bluetooth devices I’ve used. It doesn’t require explicit pairing before use and getting it to work with various platforms and applications has been a breeze. I do have some trouble getting a fix to transfer from time to time, but it’s extremely well behaved by Bluetooth device standards.
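For the curious, decoding the NMEA stream these units spit out is pretty straightforward. A minimal sketch (this is my own illustration, not the NMEA info app mentioned above) that pulls decimal lat/long out of a $GPGGA sentence:

```python
def parse_gga(sentence):
    """Parse a NMEA $GPGGA sentence into (lat, lon) decimal degrees."""
    fields = sentence.split(",")
    if not fields[0].endswith("GGA"):
        raise ValueError("not a GGA sentence")

    def to_degrees(value, hemisphere):
        # NMEA encodes coordinates as (d)ddmm.mmmm -- degrees then minutes
        dot = value.index(".")
        degrees = float(value[:dot - 2])
        minutes = float(value[dot - 2:])
        decimal = degrees + minutes / 60.0
        return -decimal if hemisphere in ("S", "W") else decimal

    lat = to_degrees(fields[2], fields[3])
    lon = to_degrees(fields[4], fields[5])
    return lat, lon

# The textbook example sentence from the NMEA 0183 docs:
lat, lon = parse_gga(
    "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47")
# lat ≈ 48.1173, lon ≈ 11.5167
```

A real reader would also verify the trailing `*hh` checksum before trusting a sentence; I’ve skipped that here for brevity.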
I hope to play around with this some more but also do some real stuff with it too. I can’t wait to poke at it from within Python for S60 and an open street map of Lawrence would rule. Speaking of Lawrence, you should definitely check out where Tim Hibbard is. He’s been geolocating himself around town with a nice Google Maps interface for some time now.
Posted: April 23rd, 2006 | Author: Matt Croydon | Filed under: Web Services | 7 Comments »
I spent part of yesterday cleaning up and organizing the den (the last holdout of the rebel packing box forces) and found quite a few things along the way. Among old papers, receipts, notes, and general crap there was some interesting stuff. Here is a sampling:
- 5/31/2000: Invoice for my AMD Athlon 750 CPU. That CPU in an Abit KA7-100 motherboard treated me quite well.
- 8/05/2000: A packing slip for several Billy Pilgrim albums from the (long gone) MP3.com. I wish that I had snagged a few more before MP3.com went under.
- 1/3/2002: Registration confirmation to watch a Steve Jobs keynote via satellite at Apple’s Northern Virginia campus.
- 7/5/2002: My receipt for Radio Userland 8.0.8. That’s also the day that I switched my tech content from LiveJournal to my Radio blog.
- 7/20/2002: A printout of my Amazon Web Services developer token.
- 8/26/2002: Windows Beta product key for ITX (.NET Server RC)
- 10/12/2002: An Airtran boarding pass from BWI to Boston for the Web Services DevCon East.
- 2/28/2003: Packing slip for my newly ebay‘d Intel ISP 1100 1U server. I also found receipts for the processor, memory, and hard drive that I stuck in it.
- 10/03: In the margin of class notes I wrote down some thoughts on mobile wikis and mobile FOAF.
- 11/03: Some notes on weathermob, a mobile webapp that I’ve thought about off and on again for years but have never done anything with.
Per usual, I spent way too much time looking at stuff and not enough time actually cleaning. A trip down geeky memory lane is quite nice every once in a while though.
Posted: April 6th, 2006 | Author: Matt Croydon | Filed under: Journalism, Web Services | 8 Comments »
The new free Baltimore Examiner tab dropped Wednesday, with a bigger circulation than the Baltimore Sun.
How awful must it be to wake up one morning and have your paper suddenly and abruptly be #2? It could happen to anyone, any time, and with little or no warning.
We do our best to be painfully aware of that at the Journal-World. Dolph Simons Jr., Chairman of The World Company is quick to remind us, “No one can afford to be complacent as there always is someone who can come into town and beat you at your own business if you do not remain alert and strong.” That quote is on our about us page, though he echoes similar statements in a recent interview with The Kansan, KU‘s newspaper.
Still, this gutsy move by The Examiner should remind the entire industry to keep on its toes.
Posted: April 5th, 2006 | Author: Matt Croydon | Filed under: Projects, Web Services | 56 Comments »
I’ve been contemplating writing a wishlist app off and on for a few months now but have never gotten around to doing so. While I have an Amazon wishlist, there’s a lot of stuff that I’d love to have that Amazon doesn’t sell. After finding myself keeping a separate list and periodically e-mailing it to my wife, I thought it would be cool to be able to put together a wishlist using any item that has a URL.
I waited too long and it looks like gifttagging has done at least 80% of what I was hoping to do. It has the web 2.0 look and feel and a tag cloud on the front page and everything. I have a feeling that I won’t actually use the service but it definitely does almost all of what I was planning to do, so if I tried to pull it off it’d be something of an also-ran.
A couple of weeks ago I brainstormed the concept (in a rather conversational tone) with the hope of motivating myself to get started on it. That obviously didn’t happen so I thought perhaps I’d share the brainstorming session in case it’s useful to someone.
So you have an amazon wishlist, and a wishlist with this other site, and you want some stuff that you can’t put on a wishlist. Wouldn’t it be nice if you could put all of this wishlist stuff in one spot? Cue wishlist 2.0 (or whatever it’s called). It gives you one URL you can send to your friends who want to know what you want. Of course it does stuff like pull in (and keep in sync) your Amazon wishlist, but it also works for so much more, like that doodad you want from whatsit.com. It lets you set priorities, keep notes about particular items, and it’s really easy to share with your friends. They can subscribe to an RSS/Atom feed of the stuff you want, you can send them an email linking to your wishlist, they can leave comments and OMG you can tag stuff too.
So let’s get down to some details. You sign up. Confirm your email address, because you have to have a valid email address (even if it’s mailinator.com). After you confirm you’re sent to your “dashboard” screen. You know, the one you get every time you log in. It lists your wishlist items in whatever order you prefer (but you can reorder them). Since it’s your first time there’s a little bit at the top asking if you’d like a tour of the place, or if you’d rather, just import your shit from amazon.
The import process is pretty painless. We’re up front about needing some information about you in order to get your wishlist from Amazon. So we get that info from you, say “hang on a sec,” and go grab your info using Amazon’s APIs. We come back with “Hey, so you’re John Whatshisname from Austin, TX, right? You want this, that, and the other thing. That’s you, right?”
After we confirm that we’re not pulling in some other dude’s wishlist, we prepopulate your wishlist with the stuff from Amazon. Your quantities and ranking come over, plus everything gets tagged with “amazon”.
If you don’t have anything to import from amazon, we take you in the other way and show you how easy it is to add items to your wishlist. All an item needs is a URL for you to add it. We’ll do our best to guess what it is, but you can always override that. It gets your default “want it” value unless you override that, plus you can tag it with whatever you want, del.icio.us-style.
From there we can point out that “hey, your wishlist has an RSS feed. Or an Atom feed, if that’s how you roll.” You can also do other stuff like tell your friends, browse stuff from other peoples’ wishlists, or access your wishlist from a mobile phone.
I guess the browsing and social aspect could be fleshed out a bit. Each wishlist item could tell you what else the people who want it want. You know, if you want a pink RAZR you might also want a fashionable bluetooth headset. Stuff like that. You can also look at the latest stuff that everyone is wishing for. If you’re on somebody else’s wishlist page and you see something that they want that you also want, you can just click “I want this too” to add it to your wishlist.
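If I ever do sit down to build it, the core data model is tiny. A hypothetical sketch (every name here is invented for illustration; none of this exists anywhere):

```python
class WishlistItem(object):
    """One thing somebody wants. Anything with a URL qualifies."""
    def __init__(self, url, title="", priority=3, tags=None, notes=""):
        self.url = url
        self.title = title
        self.priority = priority      # 1 = need it, 5 = eh, someday
        self.tags = set(tags or [])
        self.notes = notes

class Wishlist(object):
    def __init__(self, owner):
        self.owner = owner
        self.items = []

    def add(self, url, **details):
        item = WishlistItem(url, **details)
        self.items.append(item)
        return item

    def by_tag(self, tag):
        """Filter items del.icio.us-style."""
        return [i for i in self.items if tag in i.tags]

    def ordered(self):
        """Items sorted by how badly they're wanted."""
        return sorted(self.items, key=lambda i: i.priority)

# An Amazon import would just be add() calls tagged "amazon":
wl = Wishlist("matt")
wl.add("http://whatsit.com/doodad", title="Doodad", priority=1)
wl.add("http://amazon.com/some-book", title="Some book", tags=["amazon"])
```

The feeds, comments, and “people who want this also want” features are all queries over these two objects plus a join on users, which is exactly the kind of thing Django’s ORM makes boring.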
Posted: April 4th, 2006 | Author: Matt Croydon | Filed under: Web Services | 3 Comments »
I’m reminded of this every time I try to retrieve a bookmark from del.icio.us: tags are for the community, not for you.
No really. Every time I go looking for a link from a few months back I search a tag that I *know* I must have tagged it with. A tag that I always tag stuff like that with.
Nine times out of ten I forgot to use that tag, whatever it was. Tags are really useless for information retrieval. But they sure do help out the community.
Posted: April 2nd, 2006 | Author: Matt Croydon | Filed under: Web Services | 4 Comments »
Yesterday Amazon’s new S3 service served up nothing but service unavailable messages for nearly 7 hours.
I give Amazon full credit for hopping on their user forums last night, letting us know that they were working on it, and letting us know when it was fixed. At the same time I’m a little frustrated that such an outage occurred so early on in the history of the service. The whole point of S3 is to treat storage like a utility, metered in gigabyte-hours and gigabytes of data transferred, much like you would treat your water or electricity service.
How mad would you be if the power company turned off your power for several hours without warning, or if you woke up in the morning to find that you couldn’t take a shower? Pretty mad I imagine. I was just a little bit annoyed last night because my flickr backup wasn’t working. I couldn’t have retrieved anything from S3 if I had wanted to, but thankfully I didn’t need (or want) to.
What if I were building out a Carson-style startup using S3 for storage? That would have been 7 hours of downtime for my app too. Hopefully the beta testers weren’t too pissed off. Hopefully I wasn’t showing a demo of it to anyone.
Now might be a good time to read the Amazon Web Services Licensing Agreement and specifically the section on Amazon S3. You’ll note that there aren’t any guarantees about availability or uptime. You can’t count the nines in their SLA.
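Given that, anything built on S3 has to tolerate outages itself. The usual first line of defense is retrying with exponential backoff; here’s a generic sketch (my own, not anything from Amazon’s libraries):

```python
import time

def with_retries(operation, attempts=5, base_delay=1.0):
    """Call operation(), retrying with exponential backoff on failure.

    Waits base_delay seconds, then 2x, 4x, ... between attempts;
    re-raises the last error once attempts are exhausted.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

You’d wrap each S3 call in it, e.g. `with_retries(lambda: store.get("photo.jpg"))` where `store.get` stands in for whatever client you’re using. Backoff rides out a transient blip, but a seven-hour outage still means your app needs a degraded mode or a second copy of the data.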
I know that Amazon strives to keep S3 and their other web services up as much as possible, and over time they have done an excellent job at it. S3 is still very young and I’m sure that they’re tweaking and improving the service on the fly all the time.
This incident is by no means an indication of long term stability. Just remember that there are no guarantees.
Update: Amazon continues to keep communication channels open and is taking strides to make sure that this doesn’t happen again. David Barth writes:
A short note to let you know that we are taking the outage this weekend very seriously, and that once things calm down here we will post something to this thread letting you know what steps we will be taking in the future to ensure this doesn’t happen again.
Update: David Barth gives us a more detailed update:
We were taking the low-load Saturday as an opportunity to perform some maintenance on the storage system, specifically on some very large (>100 million objects) buckets in order to obtain better load-balancing characteristics. Normally this procedure is entirely transparent to users and bucket owners. In this case, the re-balancing caused an internal transit link to become flooded, this cascaded into other network problems, and the system was made unavailable.
Read the full post for more on what Amazon is doing to prevent further outages.