Nat shows off the sexy new interface to Hula. The screencast looks quite impressive. Hula has definitely come a long way in a short time, and it was already an awesome little project when it was released as open source.
Category: Web Services
When I was a kid it was a real treat to make the (seemingly) long journey from Kensington to The Enchanted Forest in Ellicott City. The Enchanted Forest was a storybook theme park with rides, storybook displays, and a lot of stuff that was just magical when I was a child. These pictures take me back. Years later I was bummed to hear that it had closed down. Today there is a shopping center and parking lot covering part of the old grounds, though some of the original signage and the entrance castle remain.
My heart was warmed when I read an article in The Baltimore Sun (registration probably required) about bits and pieces of the old amusement park being resurrected down the street at Clark’s Elioak Farm. They’re having a big party this weekend to celebrate the 50th anniversary of the park.
I can’t tell you how much all this brings me back to my childhood. I might try to head out to the farm over the weekend to relive it a bit more and take some pictures.
But what I will say is, what a great idea and a great program. At least half of the startups in the program are seriously cool and all of them made a ton of progress on very little money and in very little time.
A little bit of funding and guidance at such an early stage has got to be a huge boost to people looking for the next big thing. While Reddit doesn’t do a lot for me, I know that whatever Aaron is working on is going to kick some major butt.
I have a feeling that we’ll see more early microfunding à la Y Combinator in the near future; there’s such an obvious need for it. You can’t get everything right, but it will be interesting to look back at the class of Summer 2005 in a few years to see what works out and what tanks.
PocketPC Thoughts points out that Orb Networks has released an Add-Ons API. They don’t seem to be promoting it on their site as far as I can tell, but it’s out there. The API itself is documented and there is an example Add-On available for download. You might also want to look at their developer forum. For now it’s a C++ on Windows thing, but the documentation does make reference to Linux and Mac versions in the works.
This wiki page contains instructions for setting up Asterisk to use your Project Gizmo account for both incoming and outgoing calls. It’s a nice little hack and sounds like a perfect way for me to tinker with outbound VoIP using a spare box and a generic card that I picked up on eBay months ago. BroadVoice‘s Bring Your Own Device plans also sound worthwhile.
I’m extremely impressed with the configurability of Asterisk. You can do just about anything you can imagine with it, including routing incoming calls based on caller ID info, and extremely complex outbound routing. With the right configuration it’s no big deal to use POTS for outgoing local calls and even multiple VoIP accounts if one has cheaper rates in a particular area, all matchable by dial string.
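To make that concrete, here’s a rough sketch of what such routing can look like in Asterisk’s extensions.conf. Everything here is hypothetical: the context names, channel and trunk names (Zap/1, SIP/cheap-intl, and so on), and numbers are made up for illustration, not taken from a real config:

```ini
[outbound]
; 7-digit local numbers go out over the POTS line
exten => _NXXXXXX,1,Dial(Zap/1/${EXTEN})
; 1+ ten-digit long distance goes out over a SIP trunk
exten => _1NXXNXXXXXX,1,Dial(SIP/voip-provider/${EXTEN})
; 011 international goes out over a second, cheaper VoIP account
exten => _011.,1,Dial(SIP/cheap-intl/${EXTEN})

[incoming]
; route one specific caller ID straight to a phone
exten => s/4105551212,1,Dial(SIP/myphone)
; everyone else lands in voicemail
exten => s,1,Voicemail(u100)
```

The underscore prefix marks a pattern rather than a literal extension, which is what makes the match-by-dial-string routing possible.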
It’s definitely overkill for a simple answering machine, but it’s a truly powerful platform.
Over the weekend I’ve been working on a Python for Series 60 project that I thought up a few days ago while exchanging information with Gustaf between Google Earth instances. It really should have hit me when Google Sightseeing packed its sights into a KML file, but what can I say, I’m a little slow.
After sending a .kml file via email to Gustaf, I decided to take a look at what exactly made up a .kml file. I started to drool a little bit when I read the KML documentation. The first example is extremely simple yet there’s a lot of power behind it. A few lines of XML can tell Google Earth exactly where to look and what to look at.
Proof of Concept
With this simple example in mind, I started to prototype a proof of concept style Python app for my phone. Right now everything is handled in a popup dialog, and for the time being I’m just going to save a .kml file and let you do with it as you please, but over the next few days I plan to re-implement the app with an appuifw.Form, get latitude and longitude information from a Bluetooth GPS (if you’re so lucky), and work on smtplib integration so that the app can go from location -> write KML -> send via smtplib.
Rapid Mobile Development
When I say that I’ve been working on this app over the weekend, that’s not strictly accurate. I prototyped the proof of concept over about 20-30 minutes on Friday night using the Python for Series 60 compatibility library from the wonderful folks at PDIS. I then spent the rest of some free time over the weekend abstracting out the KML bits and reverting my lofty smtplib goals to saving to a local file on the phone. I’m not sure if the problem is due to my limited T-Mobile access or if I need to patch smtplib in order to use it on my phone.
There’s also one big downside to trying to use smtplib on the phone, and that’s the fact that smtplib (and gobs of dependent modules) aren’t distributed with the official Nokia PyS60 distribution, so if I’m going to distribute this app with smtplib functionality, I’ll have to package up a dozen or two library modules to go with it. I’m going to mull it over for a few days and see if I can get past my smtplib bug or investigate alternatives.
from kml import Placemark
I’ve started a rudimentary Python kml library designed with the Series 60 target in mind. It’s rather minimal, and so far I’ve only implemented the simplest of Placemarks, but I plan to add to it as the need arises. It should be quite usable for generating your own KML Placemarks. Here’s a quick usage example:
>>> from kml import Placemark
>>> p=Placemark(39.28419, -76.62169, \
...     "The O's Play Here!", "Oriole Park at Camden Yards")
>>> print p.to_string()
<kml xmlns="http://earth.google.com/kml/2.0">
<Placemark>
<description>The O's Play Here!</description>
<LookAt>
<longitude>-76.62169</longitude>
<latitude>39.28419</latitude>
<range>600</range>
<tilt>0</tilt>
<heading>0</heading>
</LookAt>
<Point>
<coordinates>-76.62169,39.28419</coordinates>
</Point>
</Placemark>
</kml>
Once I have my Placemark object, saving to disk is cake:
>>> f=open("camdenyards.kml", "w")
>>> f.write(p.to_string())
>>> f.close()
If you have Google Earth installed, a simple double click should bring you to Camden Yards in Baltimore. What intrigues me isn’t so much that this can be accomplished in a few dozen lines of Python as the simplicity, the “just works” factor, and the fact that KML seems so well suited for geographic data interchange.
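For the curious, the core of the trick fits in a few lines. Here’s a minimal sketch of what a Placemark-style class can look like (written in modern Python 3 for illustration; this is not the actual library code, and the 600 meter default range is just the value from the example above):

```python
class Placemark:
    """Bare-bones KML Placemark generator: one dot on the map plus a camera."""

    def __init__(self, lat, lng, description, name,
                 look_range=600, tilt=0, heading=0):
        self.lat = lat
        self.lng = lng
        self.description = description
        self.name = name
        self.look_range = look_range
        self.tilt = tilt
        self.heading = heading

    def to_string(self):
        # note that KML wants coordinates as longitude,latitude
        return (
            '<kml xmlns="http://earth.google.com/kml/2.0">\n'
            '  <Placemark>\n'
            f'    <name>{self.name}</name>\n'
            f'    <description>{self.description}</description>\n'
            '    <LookAt>\n'
            f'      <longitude>{self.lng}</longitude>\n'
            f'      <latitude>{self.lat}</latitude>\n'
            f'      <range>{self.look_range}</range>\n'
            f'      <tilt>{self.tilt}</tilt>\n'
            f'      <heading>{self.heading}</heading>\n'
            '    </LookAt>\n'
            '    <Point>\n'
            f'      <coordinates>{self.lng},{self.lat}</coordinates>\n'
            '    </Point>\n'
            '  </Placemark>\n'
            '</kml>'
        )

p = Placemark(39.28419, -76.62169,
              "The O's Play Here!", "Oriole Park at Camden Yards")
print(p.to_string())
```

A real generator should also escape XML special characters in the name and description, which is exactly the sort of thing an XML toolkit handles for free.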
It’s About Interchange
If you are really into geographic data, and I mean at an academic or scientific level, KML probably isn’t the format for you. You might be more interested in the Open Geospatial Consortium’s GML (Geography Markup Language). It looks like it does a great job at what it does, but I’m thinking that the killer format is aimed more at the casual user, and KML is just that. From a simple Placemark describing a dot on a map to complicated imagery overlays, KML has you covered. I find the documentation satisfying and straightforward, though I’m no expert on standards.
In the very near future conveying where you are or what you are talking about in a standard way is going to be extremely important. Right now there’s only one major consumer of .kml files and that’s Google Earth. Expect that to change rapidly as people realize how easy it is to produce and consume geodata using KML and .kmz files (which are compressed .kml files that may also include custom imagery).
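Producing a .kmz is no harder than producing a .kml: it’s an ordinary zip archive with the KML document inside (conventionally named doc.kml). A sketch using nothing but the standard library (the output file name and the placemark content here are made up):

```python
import zipfile

# a tiny KML document to pack; the content is just an example
kml = ('<kml xmlns="http://earth.google.com/kml/2.0">'
       '<Placemark><name>Camden Yards</name>'
       '<Point><coordinates>-76.62169,39.28419</coordinates></Point>'
       '</Placemark></kml>')

# a .kmz is a compressed zip archive containing doc.kml (plus any imagery)
with zipfile.ZipFile("camdenyards.kmz", "w", zipfile.ZIP_DEFLATED) as kmz:
    kmz.writestr("doc.kml", kml)
```

Custom overlay imagery would just be additional members in the same archive, referenced from the KML.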
I would love to see “proper” KML generators and consumers, written with XML toolkits instead of throwing numbers and strings around in Python. I would love to have a GPS-enabled phone spitting out KML using JSR-179, the Location API for J2ME. I hope to use Python for Series 60 to further prototype an application that uses a Bluetooth GPS receiver for location information and allow easy sharing of geodata using KML.
If you’d like, take a look at the current state of my kml Python library, which is extremely simple and naive, but it allows me to generate markup on either my laptop or N-Gage that Google Earth is happy to parse properly. A proof of concept wrapper around this library can be found here. I hope to expand both in the coming days, and I hope to soon have the smtplib-based code working properly on my phone with my carrier.
Update: Oops, forgot to add the <name/> tag. Fixed. The name should now replace the (ugly) filename for your Placemark.
This is yet another observation that I had a week or two ago that’s been sitting in the WeblogPostIdeas queue for far too long. It’s a rather obvious one, actually, but it seemed to “click” after Rio released a firmware update that added PlaysForSure support to a few of their more popular models. This meant that a sexy little 2.5, 5, or 6 gig player that can easily be had for less than $149 could make use of subscription audio and not just the $n per download model.
In a perfect world I could go online, pay my $.99 (or $.89, or $.79), download a song, and be able to do whatever the hell I’d like with it. Unfortunately we just don’t live in that type of world. Yes, there are a few companies out there that “Get It” and provide unencumbered plain-jane mp3s when you pony up your cash. Yes, there are ways of getting around iTunes and other types of DRM, but it’d be nice not to commit a crime in order to use the music you paid for in a manner that you see fit, like stashing a copy of it on your laptop, desktop, portable player, music server at home, and your desktop at work. I mean, that’s just something you should be able to do with something you’ve paid $.99 for.
But I digress. You pony up your buck and you don’t actually own the music and you can’t really do what you’d like to. That’s really okay. Like I said, there are ways around most of it, but that’s not something that Joe User should have to deal with.
That’s where Yahoo! Music Unlimited comes in. It fills that gap between price per downloads that you don’t own and higher priced subscription services.
What have they done right? They’ve gotten the price point down to the “no-brainer” level. Really. Five bucks a month (if paid annually, of course) for all you care to eat, and you can listen to it as long as you keep ponying up monthly or annually. It’s easy to spend more than that on a coffee run to Starbucks. Yeah, you don’t own your music and there are restrictions, but that’s not much different from the stuff you paid your buck for.
Having said that, it’s not perfect. Y! Music Unlimited only works if you’ve got Windows, which leaves Mac, Linux, and other users out of the loop. Still, for a lot of people this music service makes a lot of sense.
I may be the last person on the earth to notice this, but the beta of A9 search absolutely rocks. It’s Ajaxy, interactive, and it lets me choose what “stuff” I want to see in the search results. The Wikipedia checkbox is definitely welcome, as is the Feedster addon tab, as those are two places I often search.
It’s not perfect though (but hey, it’s “beta”): the layout could be a little smarter by default, and I would really like the ability to rearrange window (column) locations like you can with Google News. All in all though, I like it a lot. I don’t know if it’s enough to break my Google habit, though.
Last weekend I saw Star Wars: Revenge of the Sith in one of the handful of theatres (I count 4) in the DC-Virginia-Frederick-Baltimore metro area equipped with DLP (Digital Light Processing) technology. Just like Mike Washlesky at The Mac Observer, I was blown away. I noticed the crispness and clarity as soon as the first preview splash screen came up, and the effects looked fantastic in digital projection throughout the movie. The movie won’t top my greats list, but it was a lot of fun and great to see in digital.
- Episode III Digital Theater List: Make sure you find the one digital theater, buy your tickets online, and show up early for a good seat.
- DLPMovies: An excellent place to find your local DLP theater (if there is one). DLPMovies found more theaters in my area than the ones showing Star Wars, including one that is currently showing Madagascar in DLP.
- DLP.com: A lot of marketing, but it boils down to amazing picture quality and an insane contrast ratio.
- DLP Wikipedia entry: Excellent information as always.
A few weeks ago I saw a spot on TV about the Baltimore Emerging Technology Center, an early stage incubator for local high tech startups. They appear to house a wide range of high tech startups at 3 different locations in Baltimore. The current list of participants shows quite a bit of promise from biotech to IT services.
Looking around these sites definitely gave me a feel for the state of the art (so to speak) in high tech startups in Baltimore. It didn’t get the press that Northern Virginia did back in the dot com days, but things are definitely happening up there.
Dave has released Agile Web Development with Rails for beta consumption. The demand within the first hour has already been huge, so if your download takes a little longer than five minutes, that’s why. I’m really proud to see this happen. Many thanks to Dave and the people who’ve contributed. End of brief transmission from Brazil.
My beta book/dead tree combo pack has been ordered and little elves are preparing my PDF as we speak. Beta books rock! I have a copy of Thinking in C# that I picked up when it was being offered as a $5 PDF download. Good thing too, since it never made it to print.
I have no problem paying cover price for a book if I can get near-instant access to it in beta form. Sure I could have waited till July and snagged it for $35 from Amazon, but what’s the fun in that? Kudos to the PragProg folk and the authors of the book for getting this Beta Book thing together.
That’s exactly what I’ve been thinking for a day or two since I read Rael’s radar post. I’ve been screaming to myself “there has to be a better way!” While the end product is pretty and polished, the public data isn’t easily human readable and isn’t easily machine readable. It’s sort of nothing at all really.
At the same time, I’m not sure what the best solution is. Would this be a good use of RDF? Would one construct a DOAB (Description of a Book) file or possibly add DOAB information to a FOAF file? Would the information best be stored in an XML document with a custom schema? RSS? Atom? Something completely different? What’s the best way to attack this? My spidey sense tells me that the answer is not a Backpack list with a particular syntax, but I’m clueless as to what the right answer is.
While looking for an archived version of a website at The Internet Archive Wayback Machine, I noticed that they had a bookmarklet. I hadn’t noticed it before. I don’t know how long it’s been there, but Phil Gyford wrote it back in 2001.
It comes in handy every once in a while and now sits next to the Experimental post to del.icio.us bookmarklet in my bookmarklet bar.
Speaking of pure python crypto, it looks like pyDes works perfectly too. This one will probably require bits of the Python 2.2.2 source in order to run though; specifically, it’s looking for the time module. All in all it’s quite lightweight and seems more responsive in both import time and encrypt/decrypt time as compared to blowfish.py. It’s still very slow compared to a native implementation, but should be fast enough for inclusion in Python for Series 60 apps.
DES and 3DES are available from this module. I can’t seem to find a reference to what license it is released under, so you might want to track down the author before writing an application around it.
Here’s the code for the demo above (taken from an example that ships with pyDes):
import pyDes

k = pyDes.des("DESCRYPT", pyDes.CBC, "")
print "Encrypting/Decrypting DES"
d = k.encrypt("Please encrypt my string")
print "Decrypted string: " + k.decrypt(d)

k = pyDes.triple_des("MySecretTripleDesKeyData")
print "Encrypting/Decrypting 3DES"
d = k.encrypt("Encrypt this sensitive data", "*")
print "Decrypted string: " + k.decrypt(d, "*")
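If you want to compare import and encrypt/decrypt times yourself, a crude wall-clock harness is enough. Since pyDes may not be installed everywhere, the XOR “cipher” below is a stand-in so the snippet runs anywhere; swap in k.encrypt and k.decrypt from the demo above to time the real thing:

```python
import time

def timed(label, fn, *args):
    """Run fn once and print a rough wall-clock elapsed time."""
    start = time.time()
    result = fn(*args)
    print("%s took %.4f seconds" % (label, time.time() - start))
    return result

# stand-in "cipher" (XOR with a constant byte) so this runs without pyDes;
# XOR is its own inverse, so the same function encrypts and decrypts
def toy_cipher(data):
    return bytes(b ^ 0x2A for b in data)

ciphertext = timed("encrypt", toy_cipher, b"Please encrypt my string")
plaintext = timed("decrypt", toy_cipher, ciphertext)
```

On the phone, timing a bare import the same way (wrap `__import__("pyDes")` in the harness) gives a feel for the startup cost too.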
I haven’t had enough time to work up a proper hack for this, but I thought I would pass along an interesting discovery that I made the other day before heading out to PyCon. After hearing about how great BeautifulSoup is at scraping HTML and making it easy to get the little bits from it that you need, I thought I’d have a go at running it on my taco. You know what? It worked. I was expecting it to barf on import, but no, it chugged along just fine.
Now unfortunately BeautifulSoup won’t work out of the box with the standard .SIS install of Python for Series 60. It relies only on types and a couple of other standard modules, but those three libraries have some dependencies themselves. Here is what BeautifulSoup requires according to modulefinder.py running on my Debian box:
These dependencies can be easily taken care of by dropping the python modules from the source distro in the appropriate libs directory on the drive you installed Python on.
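You can reproduce the dependency check on any box with the standard library’s modulefinder. This sketch writes a throwaway script (importing re just as an example; point run_script at BeautifulSoup.py to check the real thing) and lists everything the import pulls in:

```python
import tempfile
from modulefinder import ModuleFinder

# write a throwaway script whose imports we want to analyze
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("import re\n")
    script = f.name

finder = ModuleFinder()
finder.run_script(script)

# finder.modules maps every module name pulled in to a Module object
print(sorted(finder.modules))
```

Anything in that list that isn’t in the Nokia distribution is a module you’d need to ship alongside your app.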
One reason that BeautifulSoup “just works” on Series 60 is that the author keeps imports to a minimum and strives to keep BeautifulSoup backwards compatible all the way back to Python 1.5.2. There are probably many modules like BeautifulSoup out there, designed to be backwards compatible and platform independent, that should work just fine on Series 60. As I find them, I will definitely point them out. I also hope to do some hacking on a few screen scraping apps that use BeautifulSoup and appuifw to present web data using native widgets.
I’ve got a crappy little inkjet printer. It does a darn decent job at printing a page or two in black and white, but anything beyond that is just too much for it. Stuff usually prints, but not always. Sometimes it’s a bit smudged. Other times the paper jams up. Sometimes the cats run off with or otherwise mangle the printed page before I can get to it. The darn thing seems to take a lot of union breaks.
A few weeks ago I was doing research on a paper for my computer organization class. I did much of my research at the University of Maryland’s awesome Engineering and Physical Sciences library, taking notes, xeroxing some pages, and checking out a book. There was also a wealth of information available to me at the ACM Digital Library. I ended up downloading 150 or so pages of papers and articles from the vast ACM library.
As much as I love technology, I’m just not able to skim text and read for long periods of time on a computer screen. I decided to print out the 150 or so pages, but had absolutely no desire to do that on my crappy little inkjet printer.
I’ve wanted to give Mimeo a try ever since I stumbled upon them a few years ago. They allow you to upload documents to their servers, preview them online, and then they print and ship your order to you. It’s a very cool idea, but it was Saturday and my paper was due Wednesday.
Enter Fedex Kinkos. They have a similar service called File, Print Fedex Kinkos. After you download their software (win32 only, sorry), it creates a printer driver and integrates itself with Office. They offer the option of shipping your order to you, but more importantly they allow you to pick it up at your local Fedex Kinkos location.
I spent a bit of time separating the various articles I wanted to print with bibliographic information and eventually combined them all into one big PDF file using Adobe Acrobat. I sent the job off to the File, Print Fedex Kinkos printer and chose options for my order. Since I was going to be flipping through all of the pages, I decided to go with double sided printing on the cheapest possible paper with three holes already punched. The el-cheapo paper worked out to somewhere around 6-8 cents or so (I forget) per printed page. I flipped through the preview and entered my billing information. It took a few minutes to upload the document to their servers, but after that was done I got a receipt to print and an email in my inbox. About an hour and a half later I got another email saying that the order was ready to be picked up.
FedEx Kinkos is doing a very smart thing with this service. They’re taking advantage of the fact that they’ve got locations all over the US for pickup, and they can call on the FedEx infrastructure for shipped documents. By making it easier for users to send in orders, they’re reducing employee time spent taking orders, and they’re probably keeping printers busy that might otherwise sit idle.