May 2009 Archives

Not Your Mother's Star Trek


TV's Wil Wheaton has posted his review of the latest reincarnation of Star Trek in six simple words: "It was awesome. I loved it." He notes that his review comes from "someone who was part of the first effort to make Star Trek relevant to the, uh, next generation of fans."

Perhaps I'm just getting old and cranky, though Wil has a couple more years on me, but I can't say I agree. Consider this clip from the very first pilot episode of Star Trek, featuring Jeffrey Hunter as Captain Christopher Pike.

The plot is built on a strong, classic science fiction device: an investigation of what is real. Yet this version of Star Trek famously didn't do it for television network executives looking for action and adventure. In Star Trek circles those executives are always viewed as simple-minded morons. Yet those same "morons" green-lit a then-unprecedented second, reworked pilot in which a recast crew and writers not only go on an existential exploration but also get into a fist-fight or two. Somehow, when done right, it works.

Doubt me? Exhibit A: The Matrix, a successful action movie that just so happens to be built around science fiction's exploration of human consequences in a fictional world similar to our own. In the case of the first Matrix movie, that very same question: what is real?

Maybe not so simple-minded, those Hollywood execs? The most famous movie example of this form in the Star Trek universe would of course be the second movie, The Wrath of Khan:

The problem here is that when this formula works, it works. When it doesn't, well, we get movies like The Final Frontier or Nemesis. It also means that without Gene Roddenberry we won't see anything like The Voyage Home, which eschews the action pacing for something a little more "down to earth" and still works.

And herein lies my criticism: this time-travel, alternate-universe action adventure is not just tired, it is threadbare. Once again the Enterprise goes into battle, once again the odds are against the ship and crew, and yet, once again, somehow they pull it off. The promise of this movie, how the crew is first brought together, how they learn to trust each other in life-or-death situations, is never fully developed. Chris Pine's Kirk comes off as a jerk with a death wish instead of Shatner's calculating risk-taker. Zachary Quinto just can't quite pull off Nimoy's self-searching Spock.

And don't even get me started on the Nokia and Budweiser product placements...

Trekkies Bash New Star Trek Film As 'Fun, Watchable'

Ghosts in the Machine


Over the past couple of years I have become quite a fan of virtualization. While full virtualization has been around for quite a while, it has only come into its own on the Intel x86 architecture with the addition of virtualization extensions to x86 CPUs from Intel and AMD in the last four years or so. Since then the options available for running virtual machines have expanded and matured. Even Apple and its line of Macs have gotten into the mix since the migration from PowerPC to Intel x86 in Apple's computing products.

In fact my first successful use of virtualization came on a Mac running Parallels. Back at Zoomshare, my main workstation was a Mac running OS X, which in and of itself was fine for me, as I could do just about anything I needed development-wise: write code using TextWrangler, manage code using Subversion, even do some quick prototyping by running Apache, Perl and Postgres all on one system.

Great, except that well over half of the visitors to Zoomshare-hosted sites were using some version of Internet Explorer, and IE has quite a few well-known issues. Plus, part of my tenure at Zoomshare covered the release of IE 7, which required the ability to test against two different versions despite the fact that one machine could only run IE 6 or IE 7, not both.

Enter Parallels for the Mac and virtualization. With virtualization, I was able to run several instances of Windows XP, concurrently if needed, to test against both versions of IE. This despite the fact that I was using a Mac Mini with a meager Intel Core Solo CPU [1].

But wait, you say, Macs have been able to run versions of Windows for years. What about Virtual PC or even OrangePC? Well, both options provided the ability to run Windows, but in the non-Intel Mac world of PowerPC CPUs. Moreover, the two provided technically different solutions: Virtual PC was a software emulation of Intel-based hardware, while OrangePC was a hardware option incorporated into the Mac as a daughterboard.

Virtualization, on the other hand, simulates the current underlying hardware; in the case of the "modern" Mac mini at Zoomshare, this means simulating the underlying Intel hardware for use by Windows. In the case of my current laptop, a Lenovo ThinkPad R61i, this means a Linux (Fedora Core 9) "host" running VMware Workstation and a "guest" Windows XP, Mac OS X or just about any other OS built to run on the underlying Intel Core Duo processor.

VM Screen Shot

In my most recent, and current, incarnation as an independent computer consultant, I've taken on the task of updating a retail bridal shop's online store. Step one: set up a development environment. At previous places I've worked, in order to keep control of costs, the development setup was built using repurposed and scavenged production equipment. With VMware's Server product I can create multiple development setups for testing on one midsize Quad Core server with ample memory.

Not that this is without its own issues, some based on software limitations, some on hardware. For example, one issue I have with VMware's Server software is the lack of cloning options. See, one advantage of virtual machines is the ability to replicate virtual "appliances", software images containing a software stack designed to run inside a virtual machine. That is, I can create a virtual machine, install the necessary base components and then replicate it as many times as needed [2].

The problem with VMware Server is that while I can copy the files that represent the virtual machine on the host's file system, I then need to modify configuration options in the vmx and vmxf files. Even then I found I needed to remove and re-add the "hard drive" within VMware's web access tool after notifying VMware that the cloned virtual machine is a "copied" virtual machine and needs a new unique identifier.

The other catch to all of this is that each of the instances where I have grown to accept virtualization as a viable tool has been in a closed, controllable environment. Where virtualization is supposed to shine, from a cost-of-business-ownership perspective, is in the data center - in messy, live production environments. That to me seems risky, at least in terms of the production environments I tend to work in.

Full virtualization, the type of virtualization discussed here, is successful at isolating users and computing environments from each other, e.g. multiple differentiated development servers for testing various coding or deployment approaches. In a live production environment with a mission-critical business application, such as the online store for a retailer, I'm not quite sure how virtualization would be effective from a cost perspective. One would need some beefy iron to run a popular online store, and if the underlying hardware had any sort of failure, the whole business would come to a halt. On the other hand, for non-critical but business-necessary operations - development and testing environments, email, file sharing, print servers, digital telephony and even employee workstations - virtualization is definitely worth a look.



[1] Ok, I did have to up the amount of RAM on the machine in order to run XP in a responsive manner, but that's a minor upgrade, even with the Mac Mini's less-than-straightforward enclosure.

[2] Well not quite. One issue here is software licensing. For example, Apple's End User License Agreement limits even virtual implementations of OS X Server to Apple designed hardware. Other software packages have other sorts of limitations.

Monster Mash


The concept of a "mashup", a web application that combines data or functionality from one or more sources into a single integrated page, is nothing new. In fact, since Facebook has integrated non-Facebook data sources into its web application, the ability to casually bring different social actions, such as Digging a news article, from different sources onto one's Facebook Wall is quite straightforward. This casual integration works in a sharing/microblogging environment, where one wants to maintain a single point, or even a handful of points, for sharing various actions: "Hey, I just uploaded a cool video" or "check out this song mix I made...."

Yet this isn't really what comes to my mind when talking about mashups. Yes, these Facebook examples use open web application programming interfaces (APIs) to access non-Facebook data sources, producing an integrated result. But they fail to create something greater than the sum of their parts. Not that, by the accepted consensus, a mashup needs to be defined as something greater than its parts. But I think a good argument can be made, nonetheless.

Flickrvision is one of my favorite mashup examples, for it shows, in realtime, geolocated Flickr photos on Google Maps. One can easily sit back and lose oneself watching what photos are being uploaded to Flickr from around the world, something that cannot be done passively on Flickr as is.

At Zoomshare, I hacked together something similar to show "the location" of user-created websites. The mashup displayed a thumbnail of the website and the site's name at the location of the user, if it was known. The web app never made it past the development stage, in part because of the intense resources needed to capture and serve up website snapshots.

I still like the idea, and in order to bring something to show, I present my own variation on Flickrvision, using my own Flickr photostream, Photo Travels:

Shot of Personal Mashup


The Guts - Server Side
The trick, if there is one, isn't with Google Maps' API or Flickr's. Both are well documented with numerous examples. No, the real trick, if you ask me, is the geotagging of location information for the photos themselves. Digital cameras with GPS functionality are still few and far between, the notable exception being the iPhone, which really doesn't count as a digital camera. Flickr provides a decent interface for users to tag their photos, including the ability to add location information. So does the current version of iPhoto, iPhoto '09.

Once tagged, the next step is to pull the photo data from Flickr. Flickr supports a number of different request architectures and API methods. To keep things as straightforward and as portable as possible, I've elected to use the REST request format with two API calls, flickr.photos.search and flickr.photos.getInfo.

With REST one simply needs to request the desired information with an HTTP GET or POST action. Building our data request is straightforward: building a URL with a method and method arguments. Technically, our first method, flickr.photos.search, requires only an API key, which is easy enough to obtain. However, in this specific case we're looking to get geotagged images from my account, so our request includes a number of "optional" arguments:

http://api.flickr.com/services/rest/?method=flickr.photos.search&bbox=-180,-90,180,90&user_id=37182874@N04&extras=geo&api_key=cd6f9dbede6ddd3e4ce2290ea0f11ec6


As noted in the Flickr documentation, our arguments are:

  • bbox: A comma-delimited list of 4 values defining the "Bounding Box" of an area that will be searched. The 4 values represent the bottom-left and the top-right corner of a "box" defined by a minimum_longitude, minimum_latitude, maximum_longitude, maximum_latitude. Longitude has a range of -180 to 180, latitude of -90 to 90. Defaults to -180, -90, 180, 90 if not specified.
  • user_id: The NSID of the user whose photos to search.
  • extras: A comma-delimited list of extra information to fetch for each returned record. Currently supported fields are: license, date_upload, date_taken, owner_name, icon_server, original_format, last_update, geo, tags, machine_tags, o_dims, views, media.

Obviously the geo information is desired in the result set, so we add that request in the extras argument. Note that a geo or bounding box request will only return 250 results "per page".
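Since this is just string assembly, the request-building step can be sketched in a few lines of JavaScript (the same language the client side uses later on). buildSearchUrl is a hypothetical helper, and the endpoint is Flickr's standard REST endpoint:

```javascript
// Build the flickr.photos.search REST URL from a table of arguments.
function buildSearchUrl(args) {
  var base = "http://api.flickr.com/services/rest/?method=flickr.photos.search";
  var parts = [];
  for (var name in args) {
    // encodeURIComponent handles characters like "@" in the NSID
    parts.push(name + "=" + encodeURIComponent(args[name]));
  }
  return base + "&" + parts.join("&");
}

var url = buildSearchUrl({
  bbox: "-180,-90,180,90",
  user_id: "37182874@N04",
  extras: "geo",
  api_key: "cd6f9dbede6ddd3e4ce2290ea0f11ec6"
});
```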

Our REST result set is basic XML-formatted data that looks something along the lines of this:

<?xml version="1.0" encoding="utf-8" ?>
<rsp stat="ok">
<photos page="1" pages="1" perpage="250" total="249">
<photo id="3462202831" owner="37182874@N04" secret="56251be50e" server="3085" farm="4" title="2002081102020" ispublic="1" isfriend="0" isfamily="0" latitude="38.888092" longitude="-121.049591" accuracy="16" place_id="hTVV1XibApQLdJJ7" woeid="2384516" />
<photo id="3463016716" owner="37182874@N04" secret="06c8fde13f" server="3655" farm="4" title="2002081102023" ispublic="1" isfriend="0" isfamily="0" latitude="38.888092" longitude="-121.049591" accuracy="16" place_id="hTVV1XibApQLdJJ7" woeid="2384516" />
...
</photos>
</rsp>

In Perl our REST request looks like this:

# Setup our working Perl environment
use LWP::Simple;
use XML::Simple;

my $xml = new XML::Simple;
my $url = 'http://api.flickr.com/services/rest/?method=flickr.photos.search&bbox=-180,-90,180,90&user_id=37182874@N04&extras=geo&api_key=cd6f9dbede6ddd3e4ce2290ea0f11ec6';

# Make our REST Request
my $content = get $url;

# Did we get something back?
die "Couldn't get $url" unless defined $content;


LWP::Simple provides our Perl script with the ability to make requests against URL resources such as the Flickr API. This part of the script simply defines the URL given the Flickr method and arguments previously mentioned, makes the actual request, and then performs a simple test to check whether something, anything, was returned back given the request made.

Ideally, the next step is to perform some additional testing on the data contained in $content, part of which would be wrapped around the parsing of the XML using the XML::Simple module. XML::Simple makes dealing with XML files, reading or writing, in Perl a piece of cake. In this case, it imports the XML into a reference to a hash of hashes from which needed values can be found using various key indexes. XML::Simple folds the list of photo elements into a hash keyed by photo id, so an attribute such as secret, nested within the photos and photo elements:

<photos>
  <photo id="3462202831" secret="56251be50e" ... />
</photos>

can simply be referred to in Perl as:

$data->{photos}->{photo}->{'3462202831'}->{secret};

For placing a photo on Google Map based on location the basic pieces of information needed are:

  • latitude: self explanatory
  • longitude: self explanatory
  • photo id: self explanatory
  • farm: needed for building the image source URL of where the image resides. No doubt farm represents which collection of servers (server farm) the image actually resides in.
  • server: needed for building the image source URL of where the image resides. No doubt server represents which server, within the given server farm, the image actually resides on.
  • secret: a unique value given by Flickr to a photo which, in theory, can't be guessed and can only be obtained via proper requests based on given permissions.
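For reference, those pieces combine into the image's source URL. Here is a quick JavaScript sketch using the farm, server, id and secret values from the sample XML above, assuming Flickr's standard static-image URL pattern of the era (photoSrc is a hypothetical helper):

```javascript
// Assemble a Flickr static-image URL from the farm, server, photo id
// and secret returned by the search call; "_m" selects the medium size.
function photoSrc(farm, server, id, secret) {
  return "http://farm" + farm + ".static.flickr.com/" + server +
         "/" + id + "_" + secret + "_m.jpg";
}

var src = photoSrc("4", "3085", "3462202831", "56251be50e");
// "http://farm4.static.flickr.com/3085/3462202831_56251be50e_m.jpg"
```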

Interestingly, while flickr.photos.search will return an image's title, it does not return the image's description. For that a second method call, flickr.photos.getInfo, is required, which takes the api_key and the photo_id. An optional secret argument, assigned to each photo, can be included to skip permissions checking.

Bringing this final list of desired information together:

  • latitude
  • longitude
  • photo id
  • farm
  • server
  • secret
  • taken
  • description

the Perl code looks like this:

my ( $lat, $long, $farm, $server, $photo_id, $secret, $taken, $desc );
my $data = $xml->XMLin($content);
my $photo = $data->{photos}->{photo};

# Parse out required data for each photo returned from search request
while ( my ($id, $values) = each(%$photo) ) {

        $desc = "";
        $photo_id = $id;

        $secret = $photo->{$photo_id}->{secret};
        $server = $photo->{$photo_id}->{server};
        $farm = $photo->{$photo_id}->{farm};
        $lat = $photo->{$photo_id}->{latitude};
        $long = $photo->{$photo_id}->{longitude};

        # Build and make the second request for photo specific information,
        # description and date taken
        $url = "http://api.flickr.com/services/rest/?method=flickr.photos.getInfo"
             . "&api_key=cd6f9dbede6ddd3e4ce2290ea0f11ec6"
             . "&photo_id=" . $photo_id . "&secret=" . $secret;
        $content = get $url;
        die "Couldn't get $url" unless defined $content;

        my $info = $xml->XMLin($content);
        my $photo_info = $info->{photo};

        # Parse photo specific results
        $taken = $photo_info->{dates}->{taken};

        # An empty <description/> parses to a hash reference; only keep
        # the description when it came back as a plain string
        if ( ref( $photo_info->{description} ) ne "HASH" ) {
                $desc = $photo_info->{description};
        }
}


The last task for our Perl parser is to print the collected data to standard out. While there are a number of different formats to choose from, with delimited text, XML and JSON ranking as the top three, sticking with a keep-it-simple mentality, JSON is the way to go.

JSON is a lightweight data-interchange format that is not only easy for individuals to read and write but is also easy for machines to parse and generate. In fact, while a JSON module does exist for encoding data in Perl, all that is needed in this instance is the following print statement:

print "{\"lat\":\"" .$lat. "\",\"long\":\"" .$long. "\",\"url\":\"http://www.flickr.com/photos/37182874\@N04/" .$photo_id. "\",\"src\":\"http://farm" .$farm. ".static.flickr.com/" .$server. "/" .$photo_id. "_" .$secret. "_m.jpg\",\"desc\":\"" .$desc. "\",\"taken\":\"" .$taken. "\"},\n";


Ok, while that single line of Perl, with escaped quotes and all, doesn't seem "human readable", the resulting output is:

"lat":"38.916489","long":"-77.045494","url":"","src":"","desc":"Youth Ball","taken":"2009-01-20 21:15:28"},


Once the script executes the result is a collection of name/value pairs in which each line represents information about a specific photo.

Moreover, to the point of choosing JSON, it provides quite a bit of programming flexibility. The JSON format requires only a single line of Javascript code for the browser to parse, while at the same time providing data in a format that can be easily processed in other programming languages, should this specific data be needed by another resource in the future.
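As a sketch of that single-line parse, here is one way the browser could turn the cached feed, one "{...}," record per line, into a Javascript array. parsePhotoFeed is a hypothetical helper name, and JSON.parse assumes native JSON support (or Douglas Crockford's json2.js shim on older browsers):

```javascript
// The feed is one "{...}," record per line. Wrapping the text in
// brackets, with a trailing null to absorb the final comma, turns it
// into a JSON array that parses in one call.
function parsePhotoFeed(text) {
  var photos = JSON.parse("[" + text + "null]");
  photos.pop(); // drop the placeholder null
  return photos;
}

var feed = '{"lat":"38.916489","long":"-77.045494","desc":"Youth Ball",' +
           '"taken":"2009-01-20 21:15:28"},\n';
var photos = parsePhotoFeed(feed);
```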

While a developer could live or die with a totally dynamic setup, pulling data from Flickr the moment a browser request comes in for the data, from a usability perspective two issues quickly arise:

  1. Load Time
  2. Script Failure

If everything was set up dynamically, with the Perl script being invoked the moment a request for data came in, an additional wait time would be added for the user requesting the Flickr/Google Map mashup. That wait time could vary wildly, depending on server and network loads.

Error handling is an important element when discussing usability. What would happen if the parsing script failed? Most likely the user would leave, never to return, even if the client-side code caught the failed data load properly and asked the user to try again.

As a hedge against both of these issues, scheduling the script to run at regular intervals and caching successful results for the client is the most straightforward method. A simple and common implementation on a Unix-based system is to use cron to schedule script execution and fire off an email if the script fails.

# /etc/crontab entry: run the parsing script every half-hour as user www,
# caching its output for the client (script path illustrative)
0,30 * * * * www /usr/local/bin/flickr-photos.pl > /srv/www/htdocs/photos/flickr.json


But one might ask, why involve the local server at all? Why not have the requesting client simply contact Flickr directly?

One issue has already been mentioned: Flickr presents the required data in two different requests, both of which need parsing - which requires time and effort. Executing this step ahead of time and caching the result speeds up the overall application and requires less work from the user's client system - which these days could be anything from a cell phone to a multi-core workstation.

The second issue is security related. The client-side code will be running within an AJAX framework, and while a Google Maps provided function, GDownloadUrl, will be handling the data request, the XmlHttpRequest Javascript object is used to execute the actual request. XmlHttpRequest is subject to a same-origin restriction to prevent cross-site scripting attacks. That is, the URL for the data request must refer to the same server as the URL of the current document that executes the code. Bottom line: the data must reside on the local server since the local server is the source of the resource as a whole.

The Guts - Client Side
As mentioned briefly, the client, most likely a web browser running on a user's laptop or desktop computer, will be executing the mashup in the AJAX programming framework. This framework combines various interrelated web programming techniques to create, in this case, an interactive world map displaying photos taken from around the world.

At the core, the Javascript object XmlHttpRequest is used to asynchronously request data from the web server in the background, without interfering with the display and behavior of the existing web page. While both the object and the framework name (Asynchronous JavaScript and XML) suggest the use of XML-formatted data only, the data interchange doesn't require or limit the requested data to an XML format. Thus other formats such as preformatted HTML, plain text or our preferred JSON can also be used.
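Putting the client side together, here is a minimal sketch of the request-and-display loop, assuming the Google Maps v2 API of the day (GMap2's addOverlay, GLatLng, GMarker and GDownloadUrl) and the cached feed at /photos/flickr.json; showPhotos is a hypothetical function name:

```javascript
// Runs in the browser once the Maps v2 API has loaded. GDownloadUrl
// fetches from the same origin, satisfying the XmlHttpRequest
// same-origin restriction discussed above.
function showPhotos(map) {
  GDownloadUrl("/photos/flickr.json", function (data) {
    // One-line parse of the JSON-lines feed: wrap in brackets, with a
    // trailing null to absorb the final comma, then drop the null.
    var photos = JSON.parse("[" + data + "null]");
    photos.pop();
    for (var i = 0; i < photos.length; i++) {
      var p = photos[i];
      // Drop a marker at each photo's geotagged location
      var point = new GLatLng(parseFloat(p.lat), parseFloat(p.long));
      map.addOverlay(new GMarker(point));
    }
  });
}
```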

About the Author

Paul is a technologist and all around nice guy for technology-oriented organizations and parties. Besides maintaining this blog and website, you can follow Paul's particular pontifications on Life, the Universe and Everything on Twitter.


