Recently in Services Category

Accessing the CTA's API with PHP

| 0 Comments

Overview
Last month the City of Chicago arranged for an Open Data Hackathon, in which a collection of programmers gathered to develop and write programs that make use of a new resource: open access to city information.

For my part, I spent the day writing a PHP class file that wraps around the Chicago Transit Authority's web-based application programming interface, enabling access to CTA bus, rail and service information for PHP-driven applications. As I've noted in the README file, "this class brings all three APIs together into one object with related methods."

The following is a quick rundown of how to incorporate this new class file into a working PHP application.


Installation
The first step is to download the class.cta.php file from GitHub and save it in a location that the PHP application has read access to.

The next step is to include the file, using the include (or, similarly, require) statement in the PHP application itself:

// Load the class file in our current directory
include_once( 'class.cta.php' );

Once the class file has been loaded, the next step is to instantiate the class:

$transit = new CTA ( 
	'YOUR-TRAIN-API_KEY_HERE', 
	'YOUR-BUS-API-KEY-HERE', false 
);

Notice that initialization of the $transit object includes providing two API keys. API keys can be requested from the CTA: for a Train Tracker API key, use the Train Tracker API Application form; for Bus Tracker, first sign into Bus Tracker, then request a Developer Key under "My Account".1

If no valid API keys are provided, the only methods that will return valid information are the Customer Alert functions for system status information, specifically the two functions statusRoutes and statusAlerts. This is because the Customer Alert API does not require an API key for access.
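For example, a minimal sketch (assuming the constructor accepts empty strings in place of the keys and that both status methods can be called without arguments; check the README for the exact signatures) might look like this:

// No API keys on hand: only the Customer Alert methods will return data
include_once( 'class.cta.php' );

$transit = new CTA( '', '', false );

// System status information from the Customer Alert API (no key required)
echo '<pre>';
print_r( $transit->statusRoutes() );
print_r( $transit->statusAlerts() );
echo '</pre>';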


Execution
To invoke a method, simply call it on the object, providing any additional information as parameters if required. For example, to get information about all of the stops the east-bound route 81 bus makes:

// Get an array result of all stops for an east-bound 81 bus.
$EastBoundStops = $transit->busGetStops( '81', 'East Bound' );

All methods return an array, which can be traversed to retrieve the desired information. PHP's print_r or var_dump functions provide insight into all of the information returned by a specific function:

echo '<pre>';
print_r( $transit->busGetStops( '81', 'East Bound' ));
echo '</pre>';

The output will look something akin to this:

SimpleXMLElement Object
(
    [stop] => Array
        (
            [0] => SimpleXMLElement Object
                (
                    [stpid] => 3751
                    [stpnm] => 2900 W Lawrence
                    [lat] => 41.968500785328
                    [lon] => -87.701137661934
                )
...
            [49] => SimpleXMLElement Object
                (
                    [stpid] => 3725
                    [stpnm] => Milwaukee & Higgins
                    [lat] => 41.969027266773
                    [lon] => -87.761798501015
                )

        )

)

In order to generate the following output, listing the location of the Lawrence & Kimball stop:

Lawrence & Kimball (Brown Line)
At 41.968405060961 North and -87.713229060173 West

The following PHP code will provide the latitude and longitude of the Kimball stop, which is also a transfer point to the El's Brown Line:

$EastBoundStops = $transit->busGetStops( '81', 'East Bound' );
foreach( $EastBoundStops as $stop ) {

     if ( preg_match( '/kimball/i', $stop->stpnm )) {
		
          echo $stop->stpnm . "\n";
          echo 'At ' .$stop->lat. ' North and ' .$stop->lon. " West\n";
		
     }
	
}

Notice that while the list of stops is provided in an array, each element in the array is a SimpleXMLElement object, thus the use of the object syntax for accessing each element.
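Keep in mind that each property (stpnm, lat, lon and so on) is itself a SimpleXMLElement; if a plain PHP value is needed, the usual approach is to cast it, as in this small illustrative fragment (using the $stop variable from the loop above):

     // Cast SimpleXMLElement properties to native PHP types when needed
     $stopName = (string) $stop->stpnm;
     $latitude = (float) $stop->lat;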

The train method allows for the retrieval of rail information, for example when the next Brown Line train will be leaving the Kimball stop. However, while the previous example included a stop id for the route 81 bus at Kimball, that stop id is unique to the route 81 bus and does not translate to the stop id of the Brown Line El at Kimball. Therefore, the first step is to locate the relevant GTFS2 data for the Kimball station:

/*	Per the CTA's website, El routes are identified as follows:

	Red = Red Line
	Blue = Blue Line
	Brn = Brown Line
	G = Green Line
	Org = Orange Line
	P = Purple Line
	Pink = Pink Line
	Y = Yellow Line
		
	Which means our Brown line is 'brn' 
*/
$brownStops = $transit->train( '', '', '', 'brn' );
foreach( $brownStops as $stop ) {

	if ( preg_match( '/kimball/i', $stop->staNm )) {
		
		echo "$stop->staNm train is destined for $stop->stpDe ";
		echo "Scheduled to arrive/depart at $stop->arrT";
		
	}
	
}

Which provides output similar to the following:

Kimball train is destined for Service toward Loop
Scheduled to arrive/depart at 20110821 17:17:01

One should note that the Brown Line stop at Kimball is the northern terminus of the Brown Line, which means any trains leaving the station will only be bound in one direction: south, toward the Loop. If the string comparison is changed to 'irving' for the Irving Park station, the output instead shows trains running in both directions:

Irving Park train is destined for Service toward Kimball
Scheduled to arrive/depart at 20110821 17:19:34

Irving Park train is destined for Service toward Kimball
Scheduled to arrive/depart at 20110821 17:23:13

Irving Park train is destined for Service toward Loop 
Scheduled to arrive/depart at 20110821 17:19:44

Irving Park train is destined for Service toward Kimball 
Scheduled to arrive/depart at 20110821 17:32:19 

In Review
class.cta.php is a single PHP class file that provides access to all three CTA APIs for Bus, Train and Service information. The class implements methods for all of the API calls and returns arrays of SimpleXMLElement objects that a PHP developer can use to incorporate real-time information about Chicago's public transit system into an application.

Additional information about the CTA's APIs, including terms of use and how to request API Keys, can be found on the CTA's Developer page.




1 Why two different API keys, one for train and one for bus information? Due to the evolution of the CTA's API interfaces, there are three distinct APIs, one each for Bus, Train and Customer Alerts information. As a result there are three distinct URI endpoints and two distinct API keys.

2 The CTA provides its data based on the Google Transit Feed Specification (GTFS), which is becoming a common format for public transportation agencies to publish schedules and associated geographic information. The CTA generates and distributes, about once a week, an up-to-date zip-compressed collection of files that includes basic agency information, route, transfer and stop locations, and other related service information. Note that ids 0-29999 are bus stops, ids 30000-39999 are train stops and ids 40000-49999 are train stations (parent stops).
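As a quick illustration of those ranges, a small hypothetical helper (not part of class.cta.php) could classify a GTFS stop id like so:

// Hypothetical helper illustrating the GTFS id ranges noted above
function gtfsStopType( $id ) {
    if ( $id < 0 || $id > 49999 ) {
        return 'unknown';
    }
    if ( $id <= 29999 ) {
        return 'bus stop';
    }
    if ( $id <= 39999 ) {
        return 'train stop';
    }
    return 'train station (parent stop)';
}

echo gtfsStopType( 3751 ) . "\n";   // bus stop (the 2900 W Lawrence stop from the earlier output)
echo gtfsStopType( 41000 ) . "\n";  // train station (parent stop)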

How Google Search Works: A Detailed Flowchart

| 0 Comments

This morning technabob pointed out the following flowchart by PPCBlog on how Google's world-famous Search feature works.

How Does Google Work?
Infographic by PPC Blog

The chart itself doesn't dive into exactly how PageRank works so much as cover all the steps and guidelines used to discover, rank and report on a web page relative to a user's query.

As such, the text at the end that proclaims "and all of this is done in less than a second, 300 million times a day" is a bit of a misstatement. That statement is valid for everything after the point of "User queries Google". Everything before that is done in advance, in order to process, index and prepare for answering the user query in a quick, meaningful manner.

Also, that sidebar seems to leave out that those data centers are filled with an estimated 100,000+ servers built using "off-the-shelf" PC hardware and running highly customized software built on open source software such as Apache (quite possibly) and Linux (definitely).

Of course Google didn't exactly start out as the 800-pound gorilla.

Perl Module WebService::Viddler on CPAN

| 0 Comments

Back in March I outlined an idea for a Perl module for accessing the video service Viddler. As I noted at the time, while there was plenty of support for the Viddler Application Programming Interface via PHP, the support for Perl was quite anemic.

To help rectify the situation, in my "copious" free time I've been working on a Perl module that wraps around Viddler's API, the goal being to provide a quick method for integrating Perl-based applications with Viddler.

Today, I'm happy to announce the first working release of my effort, WebService::Viddler, which can be found at CPAN (Comprehensive Perl Archive Network).

As I mentioned, at the heart of things, WebService::Viddler is an object-oriented encapsulation of the Viddler video platform providing a Perl specific focus for access via their public API, which itself is documented at: http://developers.viddler.com/documentation/api/

Currently this module is, at best, beta quality code and only supports version 1 of the Viddler API. Moreover, while it handles most of the v1 API methods, it currently lacks support for the two commenting-related methods, videos-comments-add and videos-comments-remove (a more complete To Do list can be found in the provided README file).

Of course the advantage of the module is that it makes including Viddler in a Perl-based application dead simple:

#!/usr/bin/perl -T

use strict;
use warnings;

use WebService::Viddler;
use Data::Dumper;

# Viddler account credentials (placeholder values; use your own)
my $username = 'your_username';
my $password = 'your_password';

# Create our object and establish a session
my $videos = new WebService::Viddler( 
                         apiKey => '123456ABCDEF',
                         username => $username,
                         password => $password,
                       );

# Get and print the API version in use
print "API Version: " .$videos->api_getInfo(). "\n";

# Upload a video providing required information such as the 
# title, tag and description
$videos->videos_upload( 
                         "Moon", 
                         "Moon", 
                         "A little video clip of...", 
                         "0", 
                         "/home/pdw/temp/Moon.mp4", 
                         "" 
                         );

# Get the details of the given video and
# use Data::Dumper to help print out the values in the list results
print Dumper( $videos->videos_getDetailsByUrl( 
              "http://www.viddler.com/explore/pdweinstein/videos/3/" ));

# Get a list of videos by the given tag and
# use Dumper to help print out the values in the list results
print Dumper( $videos->videos_getByTag( "moon" ));


Questions, bugs and code suggestions are of course welcomed!

The Network is the Computer

| 0 Comments | 1 TrackBack

If only 1% of 13 million (130,000) of your users are willing to incur a $10 surcharge within the first week after the release of a significant software upgrade, one has to wonder: how does one make money in the software business?

In this specific case, the 13 million users of Apple's iPod Touch, Apple makes the bulk of its money not on the device's software or even on iTunes Store sales (music, videos or apps), but on the hardware itself. Apple has been, and still is, a hardware company, despite its reputation for cutting-edge software.

But if the adoption rate of Apple's iPhone OS 3.0 is any indication, even a token charge for a software upgrade significantly impairs software adoption. So the question remains, how does one make money in the software business these days?

For some companies, the case for users to upgrade can go beyond "new computing features". For example, an anti-virus company that charges for upgrades that deal with new threats to computer security, or a financial software company that releases new software for changes in accounting and tax codes.

Yet even these examples have limits. An anti-virus company can charge for an update to deal with a new type of threat, say an anti-virus product that gets an upgrade to also deal with spyware. But the software security company would probably be out of business if it charged a fee for every new iteration of a specific threat type, for every new virus that might come along.

Enter Software as a Service (SaaS). Unlike traditionally boxed PC software, SaaS is a model of software retail whereby a provider licenses an application to customers for use as a service on demand. Instead of distributing the software for purchase, the developer/vendor hosts the application in a location where it can be reached by users when needed.

With this model users don't have to support or update the software themselves. Instead users are charged an access fee (per usage, or monthly/yearly subscriptions) for the features they wish to use. Their fee covers the cost of maintaining and updating those specific features. User access (and in turn development and maintenance) can also be subsidized by some third party, such as an advertiser.

The key is effective, reliable, ubiquitous access to where the application actually resides. In this day and age of computing that means the Internet and, specifically, the World Wide Web. Without this key infrastructure, online services such as Salesforce.com or Facebook would be significantly impaired.

None of this is really new; the concepts behind Software as a Service have been around for a while. But understanding the concept helps to illuminate today's news from Google, the Google Chrome Operating System.

Google's public announcement of their own operating system is indeed a direct challenge to Microsoft's bread-and-butter family of Windows operating systems, as reported. But the Google OS isn't a better-than-Windows product in the way Apple's Mac OS X is, nor does Google's OS even focus on the traditional tasks of managing the interface between the local hardware and the user.

Instead Google's operating system, like their own web browser of the same name (Chrome) and mobile operating system (Android), is about simplifying and enhancing access to applications online. In Google's own words the operating system's goal is "to start up and get you onto the web in a few seconds" and will be built around the existing "web platform" so that "web-based applications will automatically work" and will work "not only on Google Chrome OS, but on any standards-based browser on Windows, Mac and Linux."

Google's new operating system is designed to leverage the growing collection of service oriented software that can be found online, including, of course, Google's own suite of applications such as Gmail, Docs and YouTube.

The trick for Google now is not just in implementation, but also adoption. Focusing first on the growing trend of netbooks helps, but thin computing itself is hardly a new concept.

A Twitter Conversation

| 0 Comments

A couple of pieces of news from the last few days have me thinking that Twitter might have reached its apogee. Last week I dugg an article about San Francisco's information center using Twitter to connect with residents, allowing them an alternative method for requesting government information and non-emergency services. At first glance the move sounds intriguing: it required no special setup or additional city funds, yet gives San Francisco, and its mayor Gavin Newsom, additional tech creds.

Checking out the city's Twitter feed, my second thought was how interesting the information might be to aggregate, in a mashup or some other form, providing at a glance an easy-to-read indicator of trends within various neighborhoods: what people are worrying about or taking issue with.

Then I thought about using it, and here I realized a larger issue (besides the small fact that I no longer reside in San Francisco). Twitter is about conversations, but it is about many-to-many conversations. In the real world you can think of it as a group conversation at a party: people move in and out of the social group, and the conversation ebbs and flows with that dynamic.

Well, that's the theory at least. A recent Harvard Business School-based study indicates 90% of Twitter's content is generated by only 10% of its users. The research team notes that "This implies that Twitter resembles more of a one-way, one-to-many publishing service more than a two-way, peer-to-peer communication network".


From Harvard Business Publishing's Conversation Starter Blog, New Twitter Research: Men Follow Men and Nobody Tweets

So Twitter isn't like a group conversation after all. It is more like a lecture: one person speaking to a collection of individuals, with a few participating in an ongoing question and answer session.

What does this have to do with our city information desk? Well, if you have something specific to ask someone you'd probably take that person aside to have a direct conversation; calling on a city representative about a specific issue is a one-to-one conversation.

Unless I'm a community organizer, I don't really care to follow the city's Twitter feed. I have a question, I want an answer. Twitter might be my first place to gather information from other people, but it isn't going to be my first choice when directly engaging the question in search of a specific solution.

Overall this means Twitter and microblogging are useful, but only to a point. Which brings us to the crux of Twitter's problem. Unlike Facebook, where writing status updates is one aspect of the overall experience, microblogging is all Twitter is about.

Which might explain why Twitter's online traffic has seemingly reached a plateau. According to Compete, Twitter's monthly traffic numbers increased only 1.47% from April to May of 2009. While one month's worth of data hardly indicates an overall static growth trend, from March to April Twitter experienced a 32.72% increase in traffic, which itself was down from a 76.83% increase between February and March. That sure looks like the beginning of a plateau...


Twitter's Unique Visitors as Calculated by Compete

Monster Mash

| 0 Comments | 1 TrackBack

Introduction
The concept of a "mashup", a web application that combines data or functionality from one or more sources into a single integrated page, is nothing new. In fact, since Facebook has integrated non-Facebook data sources into their web application, the ability to casually bring different social actions, such as Digging a news article, from different sources onto one's Facebook Wall is quite straightforward. This casual integration works in a sharing/microblogging environment, where one wants to maintain a single, or even handful, point for sharing various actions; "Hey I just uploaded a cool video or checkout this song mix I made...."

Yet this isn't really what comes to my mind when talking about mashups. Yes, these Facebook examples use open web application programming interfaces (APIs) to access non-Facebook data sources, producing an integrated result. But they fail to create something greater than the sum of their parts. Not that, by the accepted consensus, a mashup needs to be defined as something greater than its parts. But I think a good argument can be made, nonetheless.

Flickrvision is one of my favorite mashup examples, for it shows, in realtime, geolocated Flickr photos using Google Maps. One can easily sit back and lose oneself watching what photos are being uploaded to Flickr from around the world. Something that cannot be done passively on Flickr as is.

At Zoomshare, I hacked together something similar to show "the location" of user-created websites. The mashup displayed a thumbnail of the website and the site's name at the location of the user, if it was known. The web app never made it past the development stage, in part because of the intense resources needed to capture and serve up website snapshots.

I still like the idea and in order to bring something about for show, I present my own variation on Flickrvision, using my own Flickr photostream, Photo Travels:

Shot of Personal Mashup

 

The Guts - Server Side
The trick, if there is one, isn't with Google Maps' API or Flickr's. Both are well documented with numerous examples. No, the real trick, if you ask me, is the geotagging of location information for the photos themselves. Digital cameras with GPS functionality are still few and far between; the notable exception being the iPhone, which really doesn't count as a digital camera. Flickr provides a decent interface for users to tag their photos, including the ability to add location information. So does the current version of iPhoto, iPhoto '09.

Once tagged, the next step is to pull the photo data from Flickr. Flickr supports a number of different request architectures and API methods. To keep things as straightforward and as portable as possible, I've elected to use the REST request format with two API calls, flickr.photos.search and flickr.photos.getInfo.

With REST one simply needs to request the desired information with an HTTP GET or POST action. Building our data request is straightforward: building a URL with a method and method arguments. Technically our first method, flickr.photos.search, only requires an API key, which is easy enough to obtain. However, in this specific case we're looking to get geotagged images from my account, so our request includes a number of "optional" arguments:

http://api.flickr.com/services/rest/?method=flickr.photos.search&bbox=-180,-90,180,90&user_id=37182874@N04&extras=geo&api_key=cd6f9dbede6ddd3e4ce2290ea0f11ec6

 

As noted in the Flickr documentation our arguments are:

  • bbox: A comma-delimited list of 4 values defining the "Bounding Box" of an area that will be searched. The 4 values represent the bottom-left and the top-right corner of a "box" defined with a minimum_longitude, minimum_latitude, maximum_longitude, maximum_latitude. Longitude has a range of -180 to 180, latitude of -90 to 90. Defaults to -180, -90, 180, 90 if not specified.
  • user_id: The NSID of the user whose photos to search.
  • extras: A comma-delimited list of extra information to fetch for each returned record. Currently supported fields are: license, date_upload, date_taken, owner_name, icon_server, original_format, last_update, geo, tags, machine_tags, o_dims, views, media.

Obviously the geo information is desired in the result set, so we add that request in the extras argument. Note that a geo or bounding box request will only return 250 results "per page".

Our REST result set is basic XML-formatted data that looks something along the lines of this:

<?xml version="1.0" encoding="utf-8" ?>
<rsp stat="ok">
<photos page="1" pages="1" perpage="250" total="249">
<photo id="3462202831" owner="37182874@N04" secret="56251be50e" server="3085" farm="4" title="2002081102020" ispublic="1" isfriend="0" isfamily="0" latitude="38.888092" longitude="-121.049591" accuracy="16" place_id="hTVV1XibApQLdJJ7" woeid="2384516" />
<photo id="3463016716" owner="37182874@N04" secret="06c8fde13f" server="3655" farm="4" title="2002081102023" ispublic="1" isfriend="0" isfamily="0" latitude="38.888092" longitude="-121.049591" accuracy="16" place_id="hTVV1XibApQLdJJ7" woeid="2384516" />
...
</photos>
</rsp>

 

In Perl our REST request looks like this:

#!/usr/bin/perl
# Setup our working Perl environment
use LWP::Simple;
use XML::Simple;

my $xml = new XML::Simple;
my $url = 'http://api.flickr.com/services/rest/?method=flickr.photos.search&bbox=-180,-90,180,90&user_id=37182874@N04&extras=geo&api_key=cd6f9dbede6ddd3e4ce2290ea0f11ec6';

# Make our REST Request
my $content = get $url;

# Did we get something back?
die "Couldn't get $url" unless defined $content;

 

LWP::Simple provides our Perl script with the ability to make requests against URL resources such as the Flickr API. This part of the script simply defines the URL given the Flickr method and arguments previously mentioned, makes the actual request and then performs a simple test to check whether something, anything, was returned for the request made.

Ideally, the next step is to perform some additional testing on the data contained in $content, part of which would be wrapped around the parsing of the XML file using the XML::Simple module. XML::Simple makes dealing with XML files, reading or writing, in Perl a piece of cake. In this case, it imports an XML file into a reference to a hash of hashes from which needed values can be found using various key indexes. That is, a Flickr photo id, for example, is a value within the photo element, which is in turn a nested element of photos:

<photos>
<photo><id>1234</id>
</photo>
</photos>

 

can simply be referred to in Perl as:

$ref->{photos}->{photo}->{id}

 

For placing a photo on Google Map based on location the basic pieces of information needed are:

  • latitude: self explanatory
  • longitude: self explanatory
  • photo id: self explanatory
  • farm: needed for building the image source URL of where the image resides. No doubt farm represents which collection of servers, or server farm, the image actually resides in.
  • server: needed for building the image source URL of where the image resides. No doubt server represents which server, within the given server farm, the image actually resides on.
  • secret: a unique value given by Flickr to a photo which, in theory, can't be guessed and can only be obtained via proper requests based on given permissions.

Interestingly, while Flickr's flickr.photos.search will return an image's title, it does not return the image's description. For that a second method call is required, flickr.photos.getInfo. flickr.photos.getInfo requires the api_key and the photo_id. An optional secret argument, assigned to each photo, can be included to skip permissions checking.

Bringing this final list of desired information together:

  • latitude
  • longitude
  • photo id
  • farm
  • server
  • secret
  • taken
  • description

  the Perl code looks like this:

my ( $lat, $long, $farm, $server, $photo_id, $secret, $taken, $desc );
my $data = $xml->XMLin($content);
my $photo = $data->{photos}->{photo};

# Parse out required data for each photo returned from search request
while ( my ($id, $values) = each(%$photo) ) {

        $desc = "";
        $photo_id = $id;

        $secret = $photo->{$photo_id}->{secret};
        $server = $photo->{$photo_id}->{server};
        $farm = $photo->{$photo_id}->{farm};
        $lat = $photo->{$photo_id}->{latitude};
        $long = $photo->{$photo_id}->{longitude};

       # Build and make the second request for photo specific information,
       # description and date taken
        $url = "http://api.flickr.com/services/rest/?method=flickr.photos.getInfo&api_key=cd6f9dbede6ddd3e4ce2290ea0f11ec6&photo_id=" .$photo_id. "&secret=".$secret;
        $content = get $url;
        die "Couldn't get $url" unless defined $content;

        my $info = $xml->XMLin($content);
        my $photo_info = $info->{photo};

        # Parse photo specific results: date taken and description
        $taken = $photo_info->{dates}->{taken};

        if ( ref( $photo_info->{description} ) ne "HASH" ) {
                # An empty description comes back as a HASH reference,
                # in which case $desc stays the empty string set above
                $desc = $photo_info->{description};
        }

 

The last task for our Perl parser is to print out the collected data for each photo via standard out. While there are a number of different formats to choose from, text-delimited, XML or JSON ranking in as the top three, sticking with a keep-it-simple mentality, JSON is the way to go.

JSON is a lightweight data-interchange format that is not only easy for individuals to read and write but is also easy for machines to parse and generate. In fact, while a JSON module does exist for encoding data in Perl, all that is needed in this instance is the following print statement (placed inside the while loop above, which is then closed):

print "{\"lat\":\"" .$lat. "\", \"long\":\"" .$long. "\"\"url\":\"http://www.flickr.com/photos/37182874\@N04/" .$photo_id. "\",\"src\":\"http://farm" .$farm. ".static.flickr.com/" .$server. "/" .$photo_id. "_" .$secret. "_m.jpg\",\"desc\":" .$desc. "\"taken\":\"" .$taken. "\"},\n";

 

Ok, while that single line of Perl, with escaped quotes and all, doesn't seem "human readable", the resulting output is:

"lat":"38.916489","long":"-77.045494","url":"http://www.flickr.com/photos/37182874@N04/3426080512","src":"http://farm4.static.flickr.com/3602/3426080512_584945a853_m.jpg","desc":"Youth Ball","taken":"2009-01-20 21:15:28"},

 

Once the script executes, the result is a collection of name/value pairs in which each line represents information about a specific photo.

Moreover, to the point of choosing JSON, it provides quite a bit of programming flexibility. The JSON format requires only a single line of Javascript code for the browser to parse while at the same time providing data in a format that can be easily processed in other programming languages, should this specific data be needed by another resource in the future.
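For example, a minimal PHP sketch (hypothetical, and assuming each line of the parser's output is a single JSON object followed by a trailing comma, as shown above) could decode one of those lines with PHP's json_decode:

// Decode one line of the parser's output; the sample line is taken
// from the output shown above, minus its trailing comma and newline
$line = '{"lat":"38.916489","long":"-77.045494",'
      . '"url":"http://www.flickr.com/photos/37182874@N04/3426080512",'
      . '"src":"http://farm4.static.flickr.com/3602/3426080512_584945a853_m.jpg",'
      . '"desc":"Youth Ball","taken":"2009-01-20 21:15:28"},';

$photo = json_decode( rtrim( $line, ",\n" ), true );

echo $photo['desc'] . ' was taken at ' . $photo['lat'] . ', ' . $photo['long'] . "\n";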

While a developer could live or die with a totally dynamic setup, pulling data from Flickr the moment a browser request comes in for the data, from a usability perspective two issues quickly arise:

  1. Load Time
  2. Script Failure

If everything were set up dynamically, with the Perl script being invoked the moment a request for data came in, an additional wait time would be added for the user requesting the Flickr/Google Map mashup. That wait time could vary wildly, depending on server and network loads.

Error handling is an important element when discussing usability. What would happen if the parsing script failed? Most likely the user would leave, never to return, even if the client-side code caught the failed data load properly and asked the user to try again.

As a hedge against both of these issues, scheduling the script to run at regular intervals and caching successful results for the client is the most straightforward method. A simple and common implementation, on a Unix-based system, is to use cron to schedule script execution and fire off an email if the script fails.

# Mail any error output (e.g. the script's die message) to a placeholder address
MAILTO=webmaster@example.com
# Run every half-hour, caching the JSON output for the client
0,30 * * * * www flickr.pl > /srv/www/htdocs/photos/flickr.json

 

But one might ask, why involve the local server at all? Why not have the requesting client simply contact Flickr directly?

One issue has already been mentioned: Flickr presents the required data in two different requests, both of which need parsing, which requires time and effort. Executing this step ahead of time and caching the result will speed up the overall application and requires less work from the user's client system, which these days could be anything from a cell phone to a multi-core workstation.

The second issue is security related. The client-side code will be running within an AJAX framework and, while a Google Maps-provided function, GDownloadUrl, will be handling the data request, the XmlHttpRequest Javascript object is used to execute the actual request. XmlHttpRequest is subject to a same-origin restriction to prevent cross-site scripting attacks. That is, the URL for the data request must refer to the same server as the URL of the current document that executes the code. Bottom line: the data must reside on the local server, since the local server is the source of the resource as a whole.

The Guts - Client Side
As mentioned briefly, the client, most likely a web browser running on a user's laptop or desktop computer, will be executing the mashup in the AJAX programming framework. This framework combines various interrelated web programming techniques to create, in this case, an interactive world map displaying photos taken from around the world.

At the core, the Javascript object XmlHttpRequest is used to asynchronously request data from the web server in the background, without interfering with the display and behavior of the existing web page. While both the object and the framework name (Asynchronous JavaScript and XML) suggest the use of XML-formatted data only, the data interchange doesn't require or limit the requested data to an XML format. Thus other formats such as preformatted HTML, plain text or our preferred JSON can also be used.

Share This!

| 3 Comments

Not a fan of Digg? Wish you could give visitors more options to share your thoughts without registering with each community-run news service? Looking for a simple tool that lets you know what your site visitors are recommending to their friends? Well then I've got the tool for you, ShareThis!

ShareThis is a web widget from Nextumi, Inc. that allows one's content to be instantly 'shareable' with users of various web services with the minimal amount of work by the site owner. As a bonus the ShareThis widget can provide tracking and reporting information such that one can see what site content is being shared.

For zoomshare users this means being able to let users view, vote and/or share your work with other potential visitors without the need to be a user of each individual web service. So if someone thinks your recent blog posting is Digg worthy, they can submit your posting to Digg, right from within your posting, without you having to provide all the necessary Digg links.

Getting Started
The first step to using ShareThis is to register as a publisher. Once registered, the next step is to customize your widget, choosing how visitors can share your content and with whom.

Share This Config
Configuring ShareThis

For example, you can allow visitors to share your content only by email. Or you can limit them to just Facebook and MySpace. One can also choose the basic color scheme for the widget in order to better match one's site template.

Once configured, one copies the resulting widget code and pastes it into a free-form web page or blog post as desired.

<script type="text/javascript" src=http://w.sharethis.com/widget/?tabs=web%2Cemail&charset=utf-8 &style=rotate&publisher=23441421-9d3a-4d4c-8746-a097a0f4b702 &headerbg=%235c5c5c&inactivebg=%237a7a7a&inactivefg=%23FFFFFF &linkfg=%230000FF></script>

ShareThis code for pdw @ zoomshare

Which results in the following button that visitors can click on to reveal the ShareThis widget:



Nice, right? Well it gets even better. As an assist the good folks at Nextumi have also added some basic reporting features. As such you not only get an idea of who's visiting your site, but also what they are sharing with their friends and what service their friends are using.

ShareThis Config
Reviewing Share This Traffic

Check it out and be sure to share this with your friends; I think you'll all enjoy this little tool as much as I do.

Managing Update Notifications

| 2 Comments

Ok, so you enjoy knowing when a zoomshare friend updates their site via Message Notifications, but, well, let's be honest, you have a lot of friends and if you spend one more day clearing out yet one more inbox you're going to scream!

If only there was a way to switch off the default setting and select which friends you wish to receive update notifications from ...

Well now you can! Zoomshare users can now control which friends they receive update notifications from within their Friend List. For each friend a new option titled 'Edit Preferences' has been added to the right-hand side of the friend's screen name. To toggle the setting off or on, simply click on 'Edit Preferences' to reveal the 'Receive Update Notifications' checkbox.

Zoomshare Edit Friend Update Notification Preferences
Editing Update Notification Preferences

By default this setting is 'on' so the checkbox will be 'checked'. To toggle the setting 'off' click on the checkbox to uncheck it, then click on 'Save Preference'.

Experienced zoomshare users may notice that the 'Edit Preferences' feature expands the previous 'Add Description' feature in which a user could leave a personal description or note to themselves about each friend. This option still exists under the 'Edit Preferences' feature and behaves in a similar manner as the 'Add Description' feature.

To add a personal note or description about a friend, simply replace the "Add Description" text with one's personal comment and select 'Save Preferences' after first clicking on 'Edit Preferences'.

Enjoy!

Heads Up

| 0 Comments


sare notes in a recent Forums posting that we have updated the look of the console landing page. The new Dashboard provides a simplified, heads up view of activity on zoomshare.

With the new Dashboard users can better track their friends list, send and receive invites and update their profile and directory information. Of course users can still edit their website or upload photos to their photo album by using the navigation tabs at the very top of the console screen.

How does the new Dashboard help users better track what's going on? Well, when a user has a new Message or Invite, the Dashboard lets the user know of the new item by highlighting the console, as shown in this screen shot:

Moreover, we now send out notifications of certain updates to you when your friends have made changes to their zoomshare sites, which also makes it easier for users to keep track of what's happening on zoomshare:

What kind of update triggers a notification? If a friend edits a web page, adds a blog post, adds an image to a photo album or adds an item to their shopping cart, then a notification will be on its way to you.

When does the notification get sent? Well, currently we process our update logs every 24 hours at 3 am Central Time. That means most users will have a notification of a friend's update the following morning. Over the course of the next few weeks we will be adjusting the timing of this process to find the right balance between timely notification and information overload.

In the meantime, enjoy the latest set of updates and let me know what you think.

...

Facebook's Broken Beacon of Light?

| 3 Comments

Yesterday Mark Zuckerberg of Facebook apologized to Facebook users after the uproar that has resulted from Facebook's latest feature, Beacon. The idea behind Beacon is to "help people share information with their friends about things they do on the web." That is, Beacon allows Facebook members to share information about their online activity, such as purchasing a book or posting a product review on a Facebook partner site, with others in their social network. Zuckerberg relates that this "simple idea" missed the "right balance" between not "get in people's way ... but also clear enough so people would be able to easily control what they shared." As a result his apology notes that changes have been made, including the default behavior of Beacon, which was switched from 'opt-out' to 'opt-in'.

While reading his post, in part because of a prompt from Sare, I realized I've heard this discussion before; it's a common view of 'security' vs 'ease-of-use' that a lot of programmers have. Well, the similarity makes sense; after all, personal privacy, the what/where/when/how of sharing, is at its root an issue of security. Hence Zuckerberg's framing of the good/bad/ugly of Beacon version 1: that the two are diametric opposites, that adding security complicates the user experience whereas removing security eases the user experience, and that one needs to 'balance' the two at any given time in software development.

The thing is, I don't really buy that. I mean, yes, that might always seem to be the case, but I think that has more to do with the fact that we programmers have painted ourselves into that corner by thinking of the two issues as polar opposites for some time now. Moreover, I think it becomes an issue of lazy programming, since we can say, "it's an either/or proposition, pick one and that's what we live with, since I can't/don't want to develop something different."


(For a more conventional, not to mention cynical, spin on Facebook's Beacon check out Steven Levy's Do Real Friends Share Ads? article for Newsweek, in which Levy suggests that in the rush to maximize Microsoft's $240 million investment Facebook didn't have its users' best interest in mind at all.)

An illustration of my point: a few weeks ago Bruce Schneier posted about a video showing how to circumvent a soda machine. The posting got me thinking about how 'back in the day' it was common knowledge in my high school that one could exploit the dollar bill readers by fooling them with one-sided, black and white photocopies. If I had to guess, based on the observed behavior, the readers simply cared about being given a piece of paper of a specific length and width that at some point matched the pattern for a One Dollar Bill (some pattern that, I assume, was dissimilar enough from, say, a Five Dollar Bill). No color matching, no matching backside, etc. Today those same readers are more sophisticated and that old 'trick' won't work. Yet the reader's 'user interface' is still the same: you orient the bill as pictured, slide the bill in, and at some point the reader grabs hold of it, either accepting or rejecting your offering. You, the user, don't have to do anything new, different or complicated, yet the 'security' of the system is greatly enhanced. Sure, some readers can seem overly fussy and frustrating, but I've also seen readers that care little about the orientation of the bill, easing use, without, I'm sure, exposing the machine to past vulnerabilities.


The soda machine issue also demonstrates another point about computer security, since the 'cracking' of the vending machine is an excellent example of how, ultimately, it is not having the programming code in front of you so much as the behaviors, expected and unexpected, that the code details, that can cause a security issue. In the case of the soda machine video, someone discovered how to get the machine to 'fail' and then exploited that to their advantage. In the case of my high school reminiscing, it was about literally giving the machine what it expected. As Schneier notes, this is a simple enough exploit: no source code needed, just a little patience by the observer, who determines what behaviors the machine expects by how it reacts.

A few months ago I tried to make the same observation about, ironically enough, a 'security breach' at Facebook when some PHP source code got 'leaked' onto the Internet. It would seem the same can be said for Beacon: it's not the code itself that is the issue but, in this case, the expected behavior that actually becomes a possible security/privacy issue for the user.

The point, if there really is any here, is that on the surface computer security and personal privacy can look cut and dried: good or bad, usability or security, black or white. But, as with those old dollar bill readers, if you ignore the other side, read only in black and white and look only for what you're expecting, you can get fooled fairly easily. Oh, and that teenagers have a logic all their own, since the thought of breaking Federal counterfeiting law is worth the price of a 'free' soda.


About the Author

Paul is a technologist and all-around nice guy for technology-oriented organizations and parties. Besides maintaining this blog and website, you can follow Paul's particular pontifications on the Life, Universe and Everything on Twitter.

   
   

