Monster Mash


Introduction
The concept of a "mashup", a web application that combines data or functionality from multiple sources into a single integrated page, is nothing new. In fact, now that Facebook has integrated non-Facebook data sources into its web application, casually bringing social actions from other services, such as Digging a news article, onto one's Facebook Wall is quite straightforward. This casual integration works in a sharing/microblogging environment, where one wants to maintain a single point, or even a handful of points, for sharing various actions: "Hey, I just uploaded a cool video" or "check out this song mix I made...."

Yet this isn't really what comes to my mind when talking about mashups. Yes, these Facebook examples use open web application programming interfaces (APIs) to access non-Facebook data sources, producing an integrated result. But they fail to create something greater than the sum of their parts. Granted, by the accepted consensus, a mashup needn't be defined as something greater than its parts. But I think a good argument can be made for that, nonetheless.

Flickrvision is one of my favorite mashup examples, for it shows, in real time, geolocated Flickr photos on a Google Map. One can easily sit back and lose oneself watching what photos are being uploaded to Flickr from around the world, something that cannot be done passively on Flickr as it stands.

At Zoomshare, I hacked together something similar to show "the location" of user-created websites. The mashup displayed a thumbnail of the website and the site's name at the location of the user, if it was known. The web app never made it past the development stage, in part because of the intense resources needed to capture and serve up website snapshots.

I still like the idea, and in order to have something to show, I present my own variation on Flickrvision, using my own Flickr photostream, Photo Travels:

[Screenshot of the personal mashup]

 

The Guts - Server Side
The trick, if there is one, isn't with Google Maps' API or Flickr's. Both are well documented, with numerous examples. No, the real trick, if you ask me, is the geotagging of location information for the photos themselves. Digital cameras with GPS functionality are still few and far between, the notable exception being the iPhone, which really doesn't count as a digital camera. Flickr provides a decent interface for users to tag their photos, including the ability to add location information. So does the current version of iPhoto, iPhoto '09.

Once tagged, the next step is to pull the photo data from Flickr. Flickr supports a number of different request architectures and API methods. To keep things as straightforward and portable as possible, I've elected to use the REST request format with two API calls, flickr.photos.search and flickr.photos.getInfo.

With REST one simply requests the desired information with an HTTP GET or POST action. Building our data request is straightforward: a URL with a method and method arguments. Technically our first method, flickr.photos.search, only requires an API key, which is easy enough to obtain. However, in this specific case we're looking to get geotagged images from my account, so our request includes a number of "optional" arguments:

http://api.flickr.com/services/rest/?method=flickr.photos.search&bbox=-180,-90,180,90&user_id=37182874@N04&extras=geo&api_key=cd6f9dbede6ddd3e4ce2290ea0f11ec6

 

As noted in the Flickr documentation our arguments are:

  • bbox: A comma-delimited list of 4 values defining the "Bounding Box" of the area to be searched. The 4 values represent the bottom-left and top-right corners of the "box", given as minimum_longitude, minimum_latitude, maximum_longitude, maximum_latitude. Longitude has a range of -180 to 180, latitude of -90 to 90. Defaults to -180, -90, 180, 90 if not specified.
  • user_id: The NSID of the user whose photos to search.
  • extras: A comma-delimited list of extra information to fetch for each returned record. Currently supported fields are: license, date_upload, date_taken, owner_name, icon_server, original_format, last_update, geo, tags, machine_tags, o_dims, views, media.

Obviously the geo information is desired in the result set, so we add that request via the extras argument. Note that a geo or bounding-box query will only return 250 results "per page".
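For accounts with more than 250 geotagged photos, the search has to be repeated with Flickr's page argument, using the pages attribute of the first response to know when to stop. A minimal sketch of reading that attribute (the XML here is a hand-trimmed sample, not a live response):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use XML::Simple;

# A trimmed search response for an account with more than 250 geotagged photos
my $first = '<rsp stat="ok"><photos page="1" pages="3" perpage="250" total="612"></photos></rsp>';

# XML::Simple strips the root <rsp> element; the paging attributes become plain hash keys
my $pages = XMLin($first)->{photos}->{pages};
print $pages, "\n";   # 3

# Each additional page is then just the same search URL with a page argument:
# for my $page (2 .. $pages) { get($url . "&page=" . $page); ... }
```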

Our REST result set is basic XML-formatted data that looks something along the lines of this:

<?xml version="1.0" encoding="utf-8" ?>
<rsp stat="ok">
<photos page="1" pages="1" perpage="250" total="249">
<photo id="3462202831" owner="37182874@N04" secret="56251be50e" server="3085" farm="4" title="2002081102020" ispublic="1" isfriend="0" isfamily="0" latitude="38.888092" longitude="-121.049591" accuracy="16" place_id="hTVV1XibApQLdJJ7" woeid="2384516" />
<photo id="3463016716" owner="37182874@N04" secret="06c8fde13f" server="3655" farm="4" title="2002081102023" ispublic="1" isfriend="0" isfamily="0" latitude="38.888092" longitude="-121.049591" accuracy="16" place_id="hTVV1XibApQLdJJ7" woeid="2384516" />
...
</photos>
</rsp>

 

In Perl our REST request looks like this:

#!/usr/bin/perl
# Setup our working Perl environment
use LWP::Simple;
use XML::Simple;

my $xml = new XML::Simple;
my $url = 'http://api.flickr.com/services/rest/?method=flickr.photos.search&bbox=-180,-90,180,90&user_id=37182874@N04&extras=geo&api_key=cd6f9dbede6ddd3e4ce2290ea0f11ec6';

# Make our REST Request
my $content = get $url;

# Did we get something back?
die "Couldn't get $url" unless defined $content;

 

LWP::Simple provides our Perl script with the ability to make requests against URL resources such as the Flickr API. This part of the script simply defines the URL given the Flickr method and arguments previously mentioned, makes the actual request and then performs a simple test to check whether anything at all was returned for the request made.

Ideally, the next step is to perform some additional testing on the data contained in $content, part of which would be wrapped around the parsing of the XML using the XML::Simple module. XML::Simple makes dealing with XML, reading or writing, in Perl a piece of cake. In this case, it imports the XML into a reference to a hash of hashes, from which needed values can be found using various key indexes. That is, a Flickr photo id, for example, is a value within the photo element, which is in turn a nested element of photos:

<rsp>
<photos>
<photo><id>1234</id>
</photo>
</photos>
</rsp>

 

can simply be referenced in Perl as (XML::Simple strips the root element):

$ref->{photos}->{photo}->{id}
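With the attribute-style XML Flickr actually returns, XML::Simple adds one more twist: because "id" is one of its default KeyAttr values, a list of <photo> elements gets folded into a hash keyed by photo id, which is what the parsing loop further down iterates over. A runnable sketch against a trimmed copy of the REST response shown above (note the folding only happens when there is more than one photo):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use XML::Simple;

# A trimmed version of the REST response shown earlier
my $sample = '<rsp stat="ok"><photos page="1" pages="1" perpage="250" total="2">'
    . '<photo id="3462202831" secret="56251be50e" latitude="38.888092" longitude="-121.049591" />'
    . '<photo id="3463016716" secret="06c8fde13f" latitude="38.888092" longitude="-121.049591" />'
    . '</photos></rsp>';

my $ref = XMLin($sample);

# The root <rsp> is stripped and the two <photo> elements are folded into
# a hash keyed by their id attribute:
print $ref->{photos}->{photo}->{3462202831}->{secret}, "\n";   # 56251be50e
```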

 

For placing a photo on Google Map based on location the basic pieces of information needed are:

  • latitude: self-explanatory
  • longitude: self-explanatory
  • photo id: self-explanatory
  • farm: needed for building the image source URL of where the image resides. No doubt farm represents which collection of servers, i.e. server farm, the image actually resides in.
  • server: needed for building the image source URL of where the image resides. No doubt server represents which server, within the given server farm, the image actually resides on.
  • secret: a unique value Flickr assigns to a photo which, in theory, can't be guessed and can only be obtained via proper requests based on given permissions.
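Put together, those pieces compose the photo's static image URL; the pattern is the same one used in the print statement further down (the sample values are taken from the search response above):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Values from one <photo> element of the search response
my ( $farm, $server, $photo_id, $secret ) = ( 4, 3085, "3462202831", "56251be50e" );

# The "_m" suffix selects Flickr's medium-sized thumbnail
my $src = "http://farm" . $farm . ".static.flickr.com/" . $server . "/"
        . $photo_id . "_" . $secret . "_m.jpg";
print $src, "\n";
# http://farm4.static.flickr.com/3085/3462202831_56251be50e_m.jpg
```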

Interestingly, while Flickr's flickr.photos.search will return an image's title, it does not return the image's description. For that a second method call is required, flickr.photos.getInfo, which requires the api_key and the photo_id. An optional secret argument, assigned to each photo, can be included to skip permissions checking.

Bringing this final list of desired information together:

  • latitude
  • longitude
  • photo id
  • farm
  • server
  • secret
  • taken
  • description

the Perl code looks like this:

my ( $lat, $long, $farm, $server, $photo_id, $secret, $taken, $desc );
my $data = $xml->XMLin($content);
my $photo = $data->{photos}->{photo};

# Parse out required data for each photo returned from search request
while ( my ($id, $values) = each(%$photo) ) {

        $desc = "";
        $photo_id = $id;

        $secret = $values->{secret};
        $server = $values->{server};
        $farm = $values->{farm};
        $lat = $values->{latitude};
        $long = $values->{longitude};

        # Build and make the second request for photo specific information,
        # description and date taken
        $url = "http://api.flickr.com/services/rest/?method=flickr.photos.getInfo&api_key=cd6f9dbede6ddd3e4ce2290ea0f11ec6&photo_id=" .$photo_id. "&secret=" .$secret;
        $content = get $url;
        die "Couldn't get $url" unless defined $content;

        my $info = $xml->XMLin($content);
        my $photo_info = $info->{photo};

        # Parse photo specific results
        $taken = $photo_info->{dates}->{taken};

        if ( ref( $photo_info->{description} ) ne "HASH" ) {
                # An empty description comes back as an empty hash reference
                $desc = $photo_info->{description};
        }

        # ... the print statement shown below emits this photo's data ...
}

 

The last task for our Perl parser is to print the collected data to standard out. While there are a number of different formats to choose from, delimited text, XML and JSON ranking as the top three, sticking with a keep-it-simple mentality, JSON is the way to go.

JSON is a lightweight data-interchange format that is not only easy for individuals to read and write but is also easy for machines to parse and generate. In fact, while a JSON module does exist for encoding data in Perl, all that is needed in this instance is the following print statement:

print "{\"lat\":\"" .$lat. "\",\"long\":\"" .$long. "\",\"url\":\"http://www.flickr.com/photos/37182874\@N04/" .$photo_id. "\",\"src\":\"http://farm" .$farm. ".static.flickr.com/" .$server. "/" .$photo_id. "_" .$secret. "_m.jpg\",\"desc\":\"" .$desc. "\",\"taken\":\"" .$taken. "\"},\n";

 

OK, while that single line of Perl, escaped quotes and all, doesn't seem "human readable", the resulting output is:

{"lat":"38.916489","long":"-77.045494","url":"http://www.flickr.com/photos/37182874@N04/3426080512","src":"http://farm4.static.flickr.com/3602/3426080512_584945a853_m.jpg","desc":"Youth Ball","taken":"2009-01-20 21:15:28"},

 

Once the script executes, the result is a collection of name/value pairs in which each line represents information about a specific photo.
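One caveat with the hand-assembled print statement: a description containing a double quote or backslash would produce invalid JSON. The JSON module alluded to above (JSON::PP ships with modern Perls) handles the escaping for us; a minimal sketch, assuming the same field names:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use JSON::PP;

# Values as collected in the parsing loop; the description here deliberately
# contains double quotes that would break the hand-built print statement
my %record = (
    lat   => "38.916489",
    long  => "-77.045494",
    desc  => 'The "Youth Ball"',
    taken => "2009-01-20 21:15:28",
);

# encode_json escapes quotes, backslashes and the like for us
# (key order in the output varies, since Perl hashes are unordered)
print encode_json(\%record), "\n";
```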

More to the point of choosing JSON, it provides quite a bit of programming flexibility. The JSON format requires only a single line of Javascript for the browser to parse, while at the same time providing data in a format that can be easily processed in other programming languages, should this specific data be needed by another resource in the future.
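To illustrate that flexibility without leaving Perl: each line of the cached flickr.json file (minus its trailing comma) can be turned back into a data structure with one function call:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use JSON::PP;

# One record from flickr.json, as printed by the parser above
my $line = '{"lat":"38.916489","long":"-77.045494","desc":"Youth Ball","taken":"2009-01-20 21:15:28"}';

# decode_json turns the JSON text back into a Perl hash reference
my $photo = decode_json($line);
print $photo->{desc}, "\n";   # Youth Ball
```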

While a developer could live or die with a totally dynamic setup, pulling data from Flickr the moment a browser requests it, from a usability perspective two issues quickly arise:

  1. Load Time
  2. Script Failure

If everything were set up dynamically, with the Perl script being invoked the moment a request for data came in, additional wait time would be added for the user requesting the Flickr/Google Maps mashup. That wait time could vary wildly, depending on server and network loads.

Error handling is an important element when discussing usability. What would happen if the parsing script failed? Most likely the user would leave, never to return, even if the client-side code caught the failed data load properly and asked the user to try again.

As a hedge against both of these issues, the most straightforward method is to schedule the script to run at regular intervals and cache successful results for the client. A simple and common implementation on a Unix-based system is to use cron to schedule script execution and fire off an email if the script fails.

# Run every half-hour
0,30 * * * * www flickr.pl > /srv/www/htdocs/photos/flickr.json
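The email-on-failure half comes for free with cron: anything the script writes to standard error, such as the die message above, is mailed to the MAILTO address. A sketch of the same system crontab entry with MAILTO set (the address here is, of course, a placeholder):

```
# Mail anything written to stderr (e.g. the script's die message) to the admin;
# stdout is already captured in the JSON file, so only failures generate mail
MAILTO=admin@example.com
0,30 * * * * www flickr.pl > /srv/www/htdocs/photos/flickr.json
```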

 

But one might ask, why involve the local server at all? Why not have the requesting client simply contact Flickr directly?

One issue has already been mentioned: Flickr presents the required data in two different requests, both of which need parsing, which requires time and effort. Executing this step ahead of time and caching the result speeds up the overall application and requires less work from the user's client system, which these days could be anything from a cell phone to a multi-core workstation.

The second issue is security related. The client-side code will be running within an AJAX framework, and while a Google Maps-provided function, GDownloadUrl, will handle the data request, the XmlHttpRequest Javascript object is used to execute the actual request. XmlHttpRequest is subject to a same-origin restriction to prevent cross-site scripting attacks. That is, the URL for the data request must refer to the same server as the URL of the document that executes the code. Bottom line: the data must reside on the local server, since the local server is the source of the page as a whole.

The Guts - Client Side
As mentioned briefly, the client, most likely a web browser running on a user's laptop or desktop computer, will be executing the mashup within the AJAX programming framework. This framework combines various interrelated web programming techniques to create, in this case, an interactive world map displaying photos taken from around the world.

At its core, the Javascript object XmlHttpRequest is used to asynchronously request data from the web server in the background, without interfering with the display and behavior of the existing web page. While both the object name and the framework name (Asynchronous JavaScript and XML) suggest the use of XML-formatted data, the data interchange doesn't require or limit the requested data to an XML format. Thus other formats, such as preformatted HTML, plain text or our preferred JSON, can also be used.


About the Author

Paul is a technologist and all around nice guy for technology oriented organizations and parties. Besides maintaining this blog and website you can follow Paul's particular pontifications on the Life Universe and Everything on Twitter.

   
   

