Recently in Web Category

A couple of weeks ago I started following Dave Winer on Twitter, and the discussion about the relevancy of RSS and "real-time" RSS updates has led me to ask a simple question that I have yet to see asked: Is all this "real-time" communication even necessary?


What are you talking about, I hear some of you asking? Ok, here's the deal: Dave Winer is one of the developers of RSS. RSS is a file format that allows web content, usually blogs, to be disseminated, discovered and read by others online. That is, not everyone visits my website every day to see if I've posted a new article. Many people use an aggregator that "subscribes" to a "news feed" provided on my site. When a new article posts, it appears in their aggregator, at which point it can be read. All of this depends on RSS.


Supposedly, however, RSS is dead. Or at least RSS is dying. Why? Well, because it takes time for new posts to propagate to one's aggregator/reader. Of course time is relative, and one has to reconcile the illusion of faster with the actuality of faster, but for some it seems RSS takes too much time compared to status updates. Why should my readers wait for their aggregator when I can tell them right away on Twitter or Facebook?


But wait, RSS isn't dying; rssCloud will save it by speeding up the notification process for RSS feeds.


But, wait. Wait, I ask. Why do we need really fast (or the appearance thereof) in the first place? I mean, think about it: phone calls, emails, status updates, news feeds. All of this is running really fast, probably as close to instantaneous as we may ever be able to get.


And what do we all end up doing? We all end up developing personal tricks and time management decisions about how best to process all this information. We allot Monday mornings to catching up on Facebook. We flag emails for levels of priority and we filter phone calls based on caller ID.


Why? Well, because my time, schedule and level of interest are different from yours. That doesn't mean I'm ignoring you, it just means, well, I've got something else on my mind....


Which brings me back to my initial question for us developers and users: Do all of these different types of communication have to be in real-time? Is it necessary?

If only 1% of 13 million users (130,000) are willing to incur a $10 surcharge within the first week after the release of a significant software upgrade, one has to wonder: how does one make money in the software business?

In this specific case, that of the 13 million users of Apple's iPod Touch, Apple makes the bulk of its money not on the device's software, or even on iTunes Store sales (music, videos or apps), but on the hardware itself. Apple has been, and still is, a hardware company, despite its reputation for cutting-edge software.

But if the adoption rate of Apple's iPhone OS 3.0 is any indication, even a token charge for a software upgrade significantly impairs software adoption. So the question remains, how does one make money in the software business these days?

For some companies, the case for users to upgrade can go beyond "new computing features". For example, an anti-virus company can charge for upgrades that deal with new threats to computer security, or a financial software company can release new software for changes in accounting and tax codes.

Yet even these examples have limits. An anti-virus company can charge for an update to deal with a new type of threat, say an anti-virus product that gets an upgrade to also deal with spyware. But the software security company would probably be out of business if it charged a fee for every new iteration of a specific threat type, for every new virus that might come along.

Enter Software as a Service (SaaS). Unlike traditional boxed PC software, SaaS is a model of software retail whereby a provider licenses an application to customers for use as a service on demand. Instead of distributing the software for purchase, the developer/vendor hosts the application in a location where it can be reached by users when needed.

With this model users don't have to support or update the software themselves. Instead users are charged an access fee (per usage, or via monthly/yearly subscriptions) for the features they wish to use. Their fee covers the cost of maintaining and updating those specific features. User access (and in turn development and maintenance) can also be subsidized by a third party, such as an advertiser.

The key is effective, reliable, ubiquitous access to where the application actually resides. In this day and age of computing that means the Internet, and specifically the World Wide Web. Without this key infrastructure, online services such as Salesforce.com or Facebook would be significantly impaired.

None of this is really new; the concepts behind Software as a Service have been around for a while. But understanding the concept helps to illuminate today's news from Google: the Google Chrome Operating System.

Google's public announcement of its own operating system is indeed a direct challenge to Microsoft's bread-and-butter family of Windows operating systems, as reported. But the Google OS isn't a better-than-Windows product in the way Apple's Mac OS X is, nor is Google's OS even focused on the traditional tasks of managing the interface between the local hardware and the user.

Instead Google's operating system, like their web browser of the same name (Chrome) and mobile operating system (Android), is about simplifying and enhancing access to applications online. In Google's own words, the operating system's goal is "to start up and get you onto the web in a few seconds" and it will be built around the existing "web platform" so that "web-based applications will automatically work" and will work "not only on Google Chrome OS, but on any standards-based browser on Windows, Mac and Linux."

Google's new operating system is designed to leverage the growing collection of service oriented software that can be found online, including, of course, Google's own suite of applications such as Gmail, Docs and YouTube.

The trick for Google now is not just in implementation, but also adoption. Focusing first on the growing trend of netbooks helps, but thin computing itself is hardly a new concept.

A Twitter Conversation


A couple of pieces of news from the last few days have me thinking that Twitter might have reached its apogee. Last week I dugg an article about San Francisco's information center using Twitter to connect with residents, allowing them an alternative method for requesting government information and non-emergency services. At first glance the move sounds intriguing: it required no special setup or additional city funds, yet gives San Francisco, and its mayor Gavin Newsom, additional tech cred.

Checking out the city's Twitter feed, my second thought was how interesting the information might be to aggregate, in a mashup or some other form, providing at a quick glance an easy-to-read indicator of trends within various neighborhoods: what people are worrying about or have issues with.

Then I thought about using it, and here I realized a larger issue (besides the small fact that I no longer reside in San Francisco). Twitter is about conversations, but many-to-many conversations. In the real world you can think of it as a group conversation at a party: people move in and out of the social group and the conversation ebbs and flows on that dynamic.

Well, that's the theory at least. A recent Harvard Business School-based study indicates 90% of Twitter's content is generated by only 10% of its users. The research team notes that "This implies that Twitter resembles more of a one-way, one-to-many publishing service more than a two-way, peer-to-peer communication network".


From Harvard Business Publishing's Conversation Starter Blog, New Twitter Research: Men Follow Men and Nobody Tweets

So Twitter isn't like a group conversation after all. It is more like a lecture: one person speaking to a collection of individuals, with a few participating in an ongoing question and answer session.

What does this have to do with our city information desk? Well, if you have something specific to ask someone you'd probably take that person aside to have a direct conversation; calling on a city representative about a specific issue is a one-to-one conversation.

Unless I'm a community organizer, I don't really care to follow the city's Twitter feed. I have a question, I want an answer. Twitter might be my first place to gather information from other people, but it isn't going to be my first choice when directly engaging the question in search of a specific solution.

Overall this means Twitter and microblogging are useful, but only to a point. Which brings us to the crux of Twitter's problem. Unlike Facebook, where writing status updates is one aspect of the overall experience, microblogging is all Twitter is about.

Which might explain why Twitter's online traffic appears to have reached a plateau. According to Compete, Twitter's monthly traffic numbers increased only 1.47% from April to May of 2009. While one month's worth of data hardly indicates an overall static growth trend, from March to April Twitter experienced a 32.72% increase in traffic, which itself was down from a 76.83% increase between February and March. That sure looks like the beginning of a plateau...


Twitter's Unique Visitors as Calculated by Compete

Monster Mash


Introduction
The concept of a "mashup", a web application that combines data or functionality from one or more sources into a single integrated page, is nothing new. In fact, since Facebook has integrated non-Facebook data sources into their web application, the ability to casually bring different social actions, such as Digging a news article, from different sources onto one's Facebook Wall is quite straightforward. This casual integration works in a sharing/microblogging environment, where one wants to maintain a single point, or even a handful of points, for sharing various actions; "Hey, I just uploaded a cool video" or "check out this song mix I made...."

Yet this isn't really what comes to my mind when talking about mashups. Yes, these Facebook examples use open web application programming interfaces (APIs) to access non-Facebook data sources, producing an integrated result. But they fail to create something greater than the sum of their parts. Not that, by the accepted consensus, a mashup needs to be defined as something greater than its parts. But I think a good argument can be made, nonetheless.

Flickrvision is one of my favorite mashup examples, for it shows, in realtime, geolocated Flickr photos on a Google Map. One can easily sit back and lose oneself watching what photos are being uploaded to Flickr from around the world. Something that cannot be done passively on Flickr as is.

At Zoomshare, I hacked together something similar to show "the location" of user-created websites. The mashup displayed a thumbnail of the website and the site's name at the location of the user, if it was known. The web app never made it past the development stage, in part because of the intense resources needed to capture and serve up website snapshots.

I still like the idea and in order to bring something about for show, I present my own variation on Flickrvision, using my own Flickr photostream, Photo Travels:

Shot of Personal Mashup

 

The Guts - Server Side
The trick, if there is one, isn't with Google Maps' API or Flickr's. Both are well documented with numerous examples. No, the real trick, if you ask me, is the geotagging of location information for the photos themselves. Digital cameras with GPS functionality are still few and far between, the notable exception being the iPhone, which really doesn't count as a digital camera. Flickr provides a decent interface for users to tag their photos, including the ability to add location information. So does the current version of iPhoto, iPhoto '09.

Once tagged, the next step is to pull the photo data from Flickr. Flickr supports a number of different request architectures and API methods. To keep things as straightforward and as portable as possible, I've elected to use the REST request format with two API calls, flickr.photos.search and flickr.photos.getInfo.

With REST one simply requests the desired information with an HTTP GET or POST action. Building our data request is straightforward: build a URL with a method and method arguments. Technically our first method, flickr.photos.search, only requires an API key, which is easy enough to obtain. However, in this specific case we're looking to get geotagged images from my account, so our request includes a number of "optional" arguments:

http://api.flickr.com/services/rest/?method=flickr.photos.search&bbox=-180,-90,180,90&user_id=37182874@N04&extras=geo&api_key=cd6f9dbede6ddd3e4ce2290ea0f11ec6

 

As noted in the Flickr documentation our arguments are:

  • bbox: A comma-delimited list of 4 values defining the "Bounding Box" of the area that will be searched. The 4 values represent the bottom-left and the top-right corners of a "box" defined by minimum_longitude, minimum_latitude, maximum_longitude, maximum_latitude. Longitude has a range of -180 to 180, latitude of -90 to 90. Defaults to -180, -90, 180, 90 if not specified.
  • user_id: The NSID of the user whose photos to search.
  • extras: A comma-delimited list of extra information to fetch for each returned record. Currently supported fields are: license, date_upload, date_taken, owner_name, icon_server, original_format, last_update, geo, tags, machine_tags, o_dims, views, media.

Obviously the geo information is desired in the result set, so we add that request in the extras argument. Note that a geo or bounding box request will only return 250 results "per page".

Our REST result set is basic XML-formatted data that looks something along the lines of this:

<?xml version="1.0" encoding="utf-8" ?>
<rsp stat="ok">
<photos page="1" pages="1" perpage="250" total="249">
<photo id="3462202831" owner="37182874@N04" secret="56251be50e" server="3085" farm="4" title="2002081102020" ispublic="1" isfriend="0" isfamily="0" latitude="38.888092" longitude="-121.049591" accuracy="16" place_id="hTVV1XibApQLdJJ7" woeid="2384516" />
<photo id="3463016716" owner="37182874@N04" secret="06c8fde13f" server="3655" farm="4" title="2002081102023" ispublic="1" isfriend="0" isfamily="0" latitude="38.888092" longitude="-121.049591" accuracy="16" place_id="hTVV1XibApQLdJJ7" woeid="2384516" />
...
</photos>
</rsp>

 

In Perl our REST request looks like this:

#!/usr/bin/perl
# Setup our working Perl environment
use strict;
use warnings;
use LWP::Simple;
use XML::Simple;

my $xml = new XML::Simple;
my $url = 'http://api.flickr.com/services/rest/?method=flickr.photos.search&bbox=-180,-90,180,90&user_id=37182874@N04&extras=geo&api_key=cd6f9dbede6ddd3e4ce2290ea0f11ec6';

# Make our REST Request
my $content = get $url;

# Did we get something back?
die "Couldn't get $url" unless defined $content;

 

LWP::Simple provides our Perl script with the ability to make requests against URL resources such as the Flickr API. This part of the script simply defines the URL given the Flickr method and arguments previously mentioned, makes the actual request and then performs a simple test to check if something, anything, was returned given the request made.

Ideally, the next step is to perform some additional testing on the data contained in $content, part of which would be wrapped around the parsing of the XML file using the XML::Simple module. XML::Simple makes dealing with XML files, reading or writing, in Perl a piece of cake. In this case, it imports an XML file into a reference to a hash of hashes from which needed values can be found using various key indexes. That is, a Flickr photo id, for example, is a value within the photo element, which is in turn a nested element of photos:

<photos>
<photo><id>1234</id>
</photo>
</photos>

 

can simply be referred to in Perl as:

$ref->{photos}->{photo}->{id}
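To see exactly what structure XML::Simple builds from the Flickr search response before writing any parsing logic, Data::Dumper is a handy aid. Here is a minimal sketch (the dumped output in the comments is abbreviated and illustrative, not a verbatim capture):

use XML::Simple;
use Data::Dumper;

# $content holds the raw XML returned by the LWP::Simple request above.
# XML::Simple's default KeyAttr handling folds the repeated <photo>
# elements into a hash keyed by their id attribute, which is why the
# parsing loop later on iterates with each() instead of walking an array.
my $data = XML::Simple->new->XMLin($content);
print Dumper( $data->{photos}->{photo} );

# Roughly:
# {
#   '3462202831' => { 'latitude' => '38.888092', 'longitude' => '-121.049591', ... },
#   '3463016716' => { 'latitude' => '38.888092', 'longitude' => '-121.049591', ... }
# }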

 

For placing a photo on Google Map based on location the basic pieces of information needed are:

  • latitude: self-explanatory
  • longitude: self-explanatory
  • photo id: self-explanatory
  • farm: needed for building the image source URL of where the image resides. No doubt farm represents which collection of servers (server farm) the image actually resides in.
  • server: needed for building the image source URL of where the image resides. No doubt server represents which server, within the given server farm, the image actually resides on.
  • secret: a unique value given by Flickr to a photo which, in theory, can't be guessed and can only be obtained via proper requests based on given permissions.

Interestingly, while Flickr's flickr.photos.search will return an image's title, it does not return the image's description. For that a second method call is required, flickr.photos.getInfo. flickr.photos.getInfo requires the api_key and the photo_id. An optional secret argument, assigned to each photo, can be included to skip permissions checking.

Bringing this final list of desired information together:

  • latitude
  • longitude
  • photo id
  • farm
  • server
  • secret
  • taken
  • description

  the Perl code looks like this:

my ( $lat, $long, $farm, $server, $photo_id, $secret, $taken, $desc );
my $data = $xml->XMLin($content);
my $photo = $data->{photos}->{photo};

# Parse out required data for each photo returned from search request
while ( my ($id, $values) = each(%$photo) ) {

        $desc = "";
        $photo_id = $id;

        $secret = $photo->{$photo_id}->{secret};
        $server = $photo->{$photo_id}->{server};
        $farm = $photo->{$photo_id}->{farm};
        $lat = $photo->{$photo_id}->{latitude};
        $long = $photo->{$photo_id}->{longitude};

       # Build and make the second request for photo specific information,
       # description and date taken
        $url = "http://api.flickr.com/services/rest/?method=flickr.photos.getInfo&api_key=cd6f9dbede6ddd3e4ce2290ea0f11ec6&photo_id=" .$photo_id. "&secret=".$secret;
        $content = get $url;
        die "Couldn't get $url" unless defined $content;

        my $info = $xml->XMLin($content);
        my $photo_info = $info->{photo};

        # Parse photo specific results
        while ( my ($key, $value) = each(%$photo_info) ) {

                $taken = $photo_info->{dates}->{taken};

                if ( ref( $photo_info->{description} ) ne "HASH" ) {
                        # If we get a HASH then description is empty
                        $desc = $photo_info->{description};

                }

        }

        # The print statement described below is executed here,
        # once per photo, before the loop moves on to the next one
}

 

The last task for our Perl parser is to print out the collected data via standard out. While there are a number of different formats to choose from, with delimited text, XML and JSON ranking as the top three, sticking with a keep-it-simple mentality, JSON is the way to go.

JSON is a lightweight data-interchange format that is not only easy for individuals to read and write but is also easy for machines to parse and generate. In fact, while a JSON module does exist for encoding data in Perl, all that is needed in this instance is the following print statement:

print "{\"lat\":\"" .$lat. "\", \"long\":\"" .$long. "\"\"url\":\"http://www.flickr.com/photos/37182874\@N04/" .$photo_id. "\",\"src\":\"http://farm" .$farm. ".static.flickr.com/" .$server. "/" .$photo_id. "_" .$secret. "_m.jpg\",\"desc\":" .$desc. "\"taken\":\"" .$taken. "\"},\n";

 

Ok, while that single line of Perl, with escaped quotes and all, doesn't seem "human readable", the resulting output is:

"lat":"38.916489","long":"-77.045494","url":"http://www.flickr.com/photos/37182874@N04/3426080512","src":"http://farm4.static.flickr.com/3602/3426080512_584945a853_m.jpg","desc":"Youth Ball","taken":"2009-01-20 21:15:28"},

 

Once the script executes the result is a collection of name/value pairs in which each line represents information about a specific photo.

More to the point of choosing JSON, it provides quite a bit of programming flexibility. The JSON format requires only a single line of Javascript code for the browser to parse, while at the same time providing data in a format that can be easily processed in other programming languages, should this specific data be needed by another resource in the future.
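As an aside, should one prefer the JSON module route mentioned above over hand-rolling the string, a minimal sketch might look like the following (assuming the CPAN JSON module is installed; the hash keys simply mirror the fields used in the print statement):

use JSON;

# Encode one photo's worth of data; encode_json handles all of the
# quoting and escaping that the hand-built print statement does manually
my $record = {
        lat   => $lat,
        long  => $long,
        url   => "http://www.flickr.com/photos/37182874\@N04/$photo_id",
        src   => "http://farm$farm.static.flickr.com/$server/${photo_id}_${secret}_m.jpg",
        desc  => $desc,
        taken => $taken,
};
print encode_json($record), ",\n";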

While a developer could live or die with a totally dynamic setup, pulling data from Flickr the moment a browser request comes in for the data, from a usability perspective two issues quickly arise:

  1. Load Time
  2. Script Failure

If everything was set up dynamically, with the Perl script being invoked the moment a request for data came in, an additional wait time would be added for the user requesting the Flickr/Google Map mashup. That wait time could vary wildly, depending on server and network loads.

Error handling is an important element when discussing usability. What would happen if the parsing script failed? Most likely the user would leave, never to return, even if the client-side code caught the failed data load properly and asked the user to try again.

As a hedge against both of these issues, scheduling the script to run at regular intervals and caching successful results for the client is the most straightforward method. A simple and common implementation, on a Unix-based system, is to use cron to schedule script execution and fire off an email if the script fails.

# Run every half-hour
0,30 * * * * www flickr.pl > /srv/www/htdocs/photos/flickr.json
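As for the "fire off an email" part, one hedged way to get it (assuming a cron daemon that honors the common MAILTO setting, and using a placeholder address) is to let the script's stderr reach cron's mailer while stdout is redirected into the cached file:

# Hypothetical address; cron mails any un-redirected output (here only
# stderr, i.e. the die message on failure) to it
MAILTO=webmaster@example.com
# Run every half-hour; stdout becomes the cached JSON for the client
0,30 * * * * www flickr.pl > /srv/www/htdocs/photos/flickr.json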

 

But one might ask, why involve the local server at all? Why not have the requesting client simply contact Flickr directly?

One issue has already been mentioned: Flickr presents the required data in two different requests, both of which need parsing, which requires time and effort. Executing this step ahead of time and caching the result speeds up the overall application and requires less work from the user's client system, which these days could be anything from a cell phone to a multi-core workstation.

The second issue is security related. The client-side code will be running within an AJAX framework, and while a Google Maps-provided function, GDownloadUrl, will be handling the data request, the XmlHttpRequest Javascript object is used to execute the actual request. XmlHttpRequest is subject to a same-origin restriction to prevent cross-site scripting attacks. That is, the URL for the data request must refer to the same server as the URL of the current document that executes the code. Bottom line: the data must reside on the local server, since the local server is the source of the resource as a whole.

The Guts - Client Side
As mentioned briefly, the client, most likely a web browser running on a user's laptop or desktop computer, will be executing the mashup in the AJAX programming framework. This framework combines various interrelated web programming techniques to create, in this case, an interactive world map displaying photos taken from around the world.

At the core, the Javascript object XmlHttpRequest is used to asynchronously request data from the web server in the background, without interfering with the display and behavior of the existing web page. While both the object and the framework name (Asynchronous JavaScript and XML) suggest the use of XML-formatted data only, the data interchange doesn't require or limit the requested data to an XML format. Thus other formats such as preformatted HTML, plain text or our preferred JSON can also be used.

The Old Switcheroo


Back in May I outlined a number of changes we here at Zoomshare had planned for everybody's photo albums including a new mid-size view of an image, embed codes for sharing the image on the web and more. One feature we've been developing, that I left out of that initial list, was the capturing of photo uploads for promotion on Zoomshare.com.

Well, now after a few weeks of development, testing, customer feedback and reprioritizing, we've pulled a little switcheroo. The latest feature for Zoomies is the promotion of their smiling faces, pets, group pictures, favorite art and more (naturally, to be considered, the images must be family friendly) while we tweak and adjust, based in part on user feedback, the master list of new album features.

How do you get an image considered for Zoomshare.com? It is as simple as simple can be, just upload one or more photos to a photo album on your website and tune into Zoomshare.com for daily updates.

Why should you care? Well getting an image with a link to your site on Zoomshare.com, which is currently averaging just over 3,000 visitors per day, will help get your site noticed and bring new visitors to you. It also helps spruce up our home page, allowing us to showcase our terrific online community to potential new users.

It's Win - Win, so Join In!

Deep within the Zoomshare office resides a collection of servers running a duplicate copy of our application for development and testing. Within this damp lab the mad hacker/programmers, including yours truly, slave over every line of code, looking to improve performance and reliability.

About once a week a service bell rings, summoning us from our dungeon to "The Meeting Place" where that week's directive is read to all within earshot. For the past few weeks the directive has repeated, exactly the same, like a drum beat, over and over: "Improve Photo Albums, Improve Photo Albums". Then with little warning we are pushed back down the stairs to our lab, deprived of sunlight and all but bread and water, until one day at the Sacred Council of Fogg the Stone of Wine declares "It is done!"

Eh, ok, maybe not quite like that. Yet we do indeed work every day on improving Zoomshare, and for the past few weeks within our lab environment we have been focusing on enchantments to photo albums; soon those changes will be seeing the light of day. That of course brings us to the big question: What in Photo Albums will be changing?

Album Changes: Navigation
First and foremost, the biggest change to Zoomshare sites for visitors will be a new navigation scheme, designed to allow a visitor to easily move between individual images, along with the removal of the awkward "pop-up window".

At first, navigating images within an album looks the same: first a list of albums in alphabetical order, followed by a collection of thumbnail images representing the various sorted images within an album.

Thumbnail Album View
Thumbnail View

However, everything changes after the next mouse click. Previously, when a visitor clicked on a thumbnail image, a pop-up window would open with a full-size view of the image. With the upcoming change, upon mouse click the thumbnail view will disappear and be replaced with a new "mid-size" view of the image.

The new mid-size view has the following enchantments:

  • Mid-size view of image
  • Embed Codes for the image <-- More on this in a bit
  • Previous & Next navigation to view album images at the mid-size "level"
  • If a free site, a new ad space

Mid-size Album View
Mid-size View

To view the full or originally uploaded size, a simple mouse click on the mid-size image will do the trick. However, instead of a pop-up window, which most web browsers restrict these days, a "lightbox" effect is in place, in which the image opens in a "modal dialog box". Simply put, everything within the browser window is "grayed out" to draw focus to the image "on top of" everything else.

Full-size Album View
Full-size View

Album Changes: Embed Codes
With the addition of the mid-size view to photo albums, another element, the "Share" or "embed codes", has been added alongside each image to enable sharing of your images among visitors to your site and the web in general. Depending on the size of the image, each image will have either a collection of Small & Original or Small, Medium and Large codes for embedding the image into a web page or bulletin board.

In addition, each image will include a URL to use in instant messages or emails, as well as the embed code for the Zoomshare Flipbook Widget containing the whole album of which the individual image is a part.

Use of these embed codes is similar to other codes found on the web today: the visitor simply uses their ever-handy copy and paste abilities to select and copy the embed code they wish to use, then paste that code into the desired location as needed.

Selecting Embed Codes
Selecting Share Code for Embedding

Album Changes: Console
Don't want your images to be "shareable" by your site visitors? No problem: within console a new album setting allows a Zoomshare user to control which, if any, albums are "shareable". By default all albums are, so if you wish to limit which albums provide share information, one simply needs to disable the feature as desired.

Enabling Embed Codes
Console Disable/Enable Share Information

Even if a Zoomshare user wishes to disable Share information for the outside world, within console the embed codes for any image can still be found for use within Zoomshare pages or elsewhere.

To access this new Share information within console, two new features have been added to the Add & Sort Photos section of the photo album console. A "Share" link above each image provides that image's embed codes when desired. Simply click on the Share link to reveal the necessary information.

In addition to the Share link, the embed codes for each image are also provided in the Info section of each image, just in case.

Selecting Embed Codes within Console
Share Information within Console

Back to the Pit of Despair
As with all change, it will take a little bit of time for the dust to settle. If anything seems amiss or if you have a suggestion, comment or feedback, please do send them our way at: customerservice at zoomshare dot com. But with any luck the Shared Wizards of Zoom will declare "It is Good."

Share This!


Not a fan of Digg? Wish you could give visitors more options to share your thoughts without registering with each community-run news service? Looking for a simple tool that lets you know what your site visitors are recommending to their friends? Well then I've got the tool for you: ShareThis!

ShareThis is a web widget from Nextumi, Inc. that allows one's content to be instantly 'shareable' with users of various web services with the minimal amount of work by the site owner. As a bonus the ShareThis widget can provide tracking and reporting information such that one can see what site content is being shared.

For zoomshare users this means being able to let visitors view, vote on and/or share your work with other potential visitors without the need to be a user of each individual web service. So if someone thinks your recent blog posting is Digg-worthy, they can submit your posting to Digg, right from within your posting, without you having to provide all the necessary Digg links.

Getting Started
The first step to using ShareThis is to register as a publisher. Once registered the next step is to customize your widget, choosing how visitors can share your content and with whom.

Share This Config
Configuring ShareThis

For example, you can allow visitors to share your content only by email. Or you can limit them to just Facebook and MySpace. One can also choose the basic color scheme for the widget in order to better match one's site template.

Once configured one copies the resulting widget code and pastes it into a free form web page or blog post as desired.

<script type="text/javascript" src=http://w.sharethis.com/widget/?tabs=web%2Cemail&charset=utf-8 &style=rotate&publisher=23441421-9d3a-4d4c-8746-a097a0f4b702 &headerbg=%235c5c5c&inactivebg=%237a7a7a&inactivefg=%23FFFFFF &linkfg=%230000FF></script>

ShareThis code for pdw @ zoomshare

Which results in the following button which visitors can click on to reveal the ShareThis Widget:



Nice, right? Well it gets even better. As an assist, the good folks at Nextumi have also added some basic reporting features. As such you not only get an idea of who's visiting your site, but also what they are sharing with their friends and what service their friends are using.

ShareThis Config
Reviewing Share This Traffic

Check it out and be sure to share this with your friends, I think you'll all enjoy this little tool as much as I do.

Managing Update Notifications


Ok, so you enjoy knowing when a zoomshare friend updates their site via Message Notifications, but, well, let's be honest, you have a lot of friends, and if you spend one more day clearing out yet one more inbox you're going to scream!

If only there was a way to switch off the default setting and select which friends you wish to receive update notifications from ...

Well now you can! Zoomshare users can now control which friends they receive update notifications from within their Friend List. For each friend a new option titled 'Edit Preferences' has been added to the right-hand side of the friend's screen name. To toggle the setting off or on, simply click on 'Edit Preferences' to reveal the 'Receive Update Notifications' checkbox.

Zoomshare Edit Friend Update Notification Preferences
Editing Update Notification Preferences

By default this setting is 'on' so the checkbox will be 'checked'. To toggle the setting 'off' click on the checkbox to uncheck it, then click on 'Save Preference'.

Experienced zoomshare users may notice that the 'Edit Preferences' feature expands the previous 'Add Description' feature in which a user could leave a personal description or note to themselves about each friend. This option still exists under the 'Edit Preferences' feature and behaves in a similar manner as the 'Add Description' feature.

To add a personal note or description about a friend, simply replace the "Add Description" text with one's personal comment and select 'Save Preferences' after first clicking on 'Edit Preferences'.

Enjoy!

Heads Up



sare notes in a recent Forums posting that we have updated the look of the console landing page. The new Dashboard provides a simplified, heads up view of activity on zoomshare.

With the new Dashboard users can better track their friends list, send and receive invites and update their profile and directory information. Of course users can still edit their website or upload photos to their photo album by using the navigation tabs at the very top of the console screen.

How does the new Dashboard help users better track what's going on? Well when a user has a new Message or Invite the Dashboard lets the user know of the new item by highlighting the console as shown in this screen shot:

Moreover, we now send out notifications of certain updates when your friends have made changes to their zoomshare sites, which also makes it easier for users to keep track of what's happening on zoomshare:

What kind of update triggers a notification? If a friend edits a web page, adds a blog post, adds an image to a photo album or adds an item to their shopping cart, then a notification will be on its way to you.

When does the notification get sent? Well, currently we process our update logs every 24 hours at 3 am Central Time. That means most users will have a notification of a friend's update the following morning. Over the course of the next few weeks we will be adjusting the timing of this process to find the right balance between timely notification and information overload.

In the meantime, enjoy the latest set of updates and let me know what you think.

...

Impressive, Very Impressive.


This afternoon I was checking out zoomshare's new digs, helping prepare for an upcoming move of our servers from one colocation (colo) facility to another. I have to say that while this is far from my first trip to a colo, they still impress me to no end.

What exactly is a colo facility? Well, it is a data center where multiple customers (usually companies) locate network, server and storage equipment and interconnect, via a telecommunication service provider, to the virtual world at large. In other words, it's the high-tech center in the physical world where the virtual world of zoomshare resides and is accessed.

How high tech? Well, that can vary from colo facility to colo facility, but to give you an idea of our new space (and why I'm always impressed) here's a quick rundown:

  • 10.4 Megawatts (MW) provided by two different electrical sources. For reference, household incandescent light bulbs rate between 40 and 100 watts, and a modern diesel train engine tops out at 3 to 5 MW. In fact, as a backup power source, this new facility will have 12 (it currently has 8) 2.5 MW diesel engines that will be able to provide 30 MW if needed in a pinch.
  • A cooling plant that not only cools 135,000 square feet overall but can provide 13,652 British Thermal Units per hour to a space about the size of 7' x 1' x 2'. That equates to delivering the cooling power of a window-mounted AC unit to a space the size of a utility closet.
  • Multilayered physical security setup that includes security guards, "man trap" entry, closed circuit TV, infrared motion detection and "hand geometry" scanners.
  • The ability to deal with fire, wind, flooding or earthquake.

Ok, but I bet that laundry list doesn't quite impress (or maybe it does, I don't know). They say a picture is worth a thousand words, so what does this facility actually look like? Well, I'm pretty sure photos are a no-no for privacy reasons. As I mentioned, colos provide space, power and bandwidth to multiple companies, companies that can be competitors in any given market space. Hence my assumption that pulling out the camera phone is probably a bad idea. In fact, in the facility we are moving into, most of the actual colo floor space is lights-out; of course the whole building isn't lights-out, just the more sensitive spaces. So with the exception of a few well-placed guiding lights, such that one can find one's assigned space and not trip over one's own two feet, there is not much one can see, let alone picture.

Having said that, here's a little marketing vid; take of it what you will, I'm still impressed.
