November 2009 Archives

Atlas V Launch of Intelsat 14


I captured the following video of a nighttime Atlas V rocket launch I witnessed, alongside my father, while in Central Florida on Thanksgiving vacation with my family.

Note that I've modified the audio a bit to try and enhance the interesting bits while muting distracting background noises. That is, I've boosted the countdown from a nearby radio scanner while muting the sound of cars passing by on the highway. Given the distance between the pad and the viewing location, the sound of the rocket boosters is not noticeable until about the 1:35 mark.



The launch occurred at 1:55 am EST on November 23rd, 2009 from pad SLC-41 at Cape Canaveral. The video was shot just north of Florida Highway 528, about 13 miles south of the launch complex.



Red Marker denotes Launchpad 41, Green Marker denotes Viewing Location


The Atlas V launch was performed by United Launch Alliance and carried an international telecommunications satellite, Intelsat 14, which is slated to replace the Intelsat 1R satellite currently in operation. Intelsat 14 was successfully launched into a transfer orbit that will eventually place it into a circular geostationary orbit over the equator at 45 degrees west longitude.

In addition to bridging commercial intercontinental telecommunications, Intelsat 14 carries experimental equipment on behalf of the Department of Defense designed by Cisco, known as Internet Routing in Space (IRIS). The "space router" is designed to test routing IP traffic in orbit, eliminating the need to send data to and from additional ground stations for network routing. In turn, it promises to allow the U.S. military, allied forces and possibly future commercial operators to quickly communicate using the Internet protocol for voice, video and data relay to remote locations all over the world.


Vacation Message

Thanks for stopping by.

I am out on vacation in Florida until November 28th. During this time I will have little access to the Internet and will not be adding new entries until my return.

If you require immediate satisfaction, relax and enjoy this little slideshow from my last trip to Central Florida.

(Slideshow: Rocket Garden, STS-108, Astronaut Memorial, Epcot Center, Everyone Loves a Toy Story Quote, Katie Koi)


Thank you for your understanding. This is an automated posting.

Google's Chrome OS in 2010


First published: 20th of November 2009 for Technorati

Yesterday Google hosted a small technical introduction to its new Chrome Operating System (OS), which is scheduled for release on new netbooks by the end of 2010.

Google's vision for Chrome OS is to build on the concept of the Web as a ubiquitous computing platform. As outlined by Sundar Pichai, Google's Vice President of Product Management, "in Chrome OS, every application is a web application." That means at the heart of everything is Google's Chrome web browser, modified to run all on its own: "It's just a browser with a few modifications. And all data in Chrome OS is in the cloud."

That in turn allows Google to provide a quick, nimble system that can "be blazingly fast, basically instant-on." As demonstrated, the test system, built on a modified Linux kernel, went from power-on to surfing the Web in 10 seconds.

In essence, Chrome OS is a cross between Google's cellphone software Android, which is also hosted on the Linux kernel, and the Chrome browser. However, unlike Android, which provides a modified Java platform for third-party applications to be built and run on, Chrome OS is built to run today's rich web applications built on AJAX as well as tomorrow's web applications built around the draft HTML5 standard.

But what does this mean for Microsoft and Apple? While Google's development of its own operating system is indeed a direct challenge to Microsoft's bread-and-butter family of Windows products, Chrome OS isn't a better Microsoft Windows or Apple Mac OS X. Nor is Google's OS even focused on the traditional tasks of managing the interface between the local hardware and the user.

Instead, Google's operating system is about simplifying and enhancing access to applications online. It is not so much a replacement for current personal computers as an alternative way of getting online and accessing applications such as Google Docs or Twitter.

Anything that can be done in any standard Web browser on Windows, Mac or Linux can be done on Chrome OS, which means Google's soon-to-be operating system is designed to leverage the growing collection of service-oriented software found online, including, of course, Google's own suite of applications.

The trick for Google now is not just in implementation, but also adoption. Building on the growing trend of netbooks helps, but network computing itself is hardly a new concept.

A Stepped Up Remote Apache Monitor in Perl


Back in September I outlined a simple Perl script to remotely monitor the status of various web servers I manage and report on any failures. One shortcoming of that script is that it has no memory of the previous state of the websites listed for polling. Thus, once a site fails, the script will continuously report on the failure until it is resolved.

For some, this might be just fine: a simple, repetitive reminder until corrected. For others, however, this might not be ideal. If, for example, the problem is non-trivial to solve, the last thing one needs is a nag every few minutes that the issue has yet to be resolved.

I for one am all for notification without excessive nagging.

The obvious answer to this dilemma is to store the previous state of each server so that it can be tested against the current state; if the state of the server has changed, a notification gets sent. The result is one straightforward notification that something has changed.

As a bonus, by reporting on the change of state, the script will now report when the server has come back online as well as when it has failed. This simple change eliminates what would previously have been a manual process: notifying stakeholders that the issue has been resolved.

Since the Perl script is invoked by cron on a regular basis and terminates once polling is complete, the "current" state of a site will need to be stored in secondary memory, i.e. on disk, for future comparison. This is pretty straightforward in Perl:


sub logState ($$$) {

	my ( $host, $state, $time ) = @_;

	# Create a filehandle on our log file,
	# opened for appending
	my $fh = FileHandle->new(">> $fileLoc");
	if (defined $fh) {

		# Print to file the necessary information,
		# delimited with a colon
		print $fh "$host:$state:" . $time->datetime . "\n";
		$fh->close;

	}
}

With a new FileHandle object the script opens the file previously assigned to the $fileLoc variable for appending (the '>>' immediately prior to the variable denotes write by appending).

If a Filehandle object has been successfully created, the next step is to write a line to the file with the information necessary for the next iteration of the monitor script, specifically the host information and its current state.

Note that each line (\n) in the file holds the information for a specific site, with the related fields separated by a colon (:). This becomes pertinent later, when the log file is read back at the next scheduled execution of the monitor script:


# Our array of polling sites' previous states
my @hostStates = ();

# Populate said array with information from the log file,
# if one exists from a previous run
my $fh = FileHandle->new("< $fileLoc");
if (defined $fh) {

	while ( <$fh> ) {

		my $line = $_;
		chomp( $line );
		push ( @hostStates, $line );

	}
	$fh->close;

}

In this bit of code the goal is to get the previously logged state of each site and populate an array with that information. At this point how each record is delimited isn't of concern; what matters is that each line holds the information for a specific site and gets its own element in the array.

Note, since the objective here is simply to read the log file, the "<" is used by the filehandle to denote that the file is opened read-only rather than for appending.

Once the polling of a specific site occurs, the first item of concern is determining the site's previous state. For that the following bit of code is put to use:


sub getPreviousState ($) {

	my ( $host ) = @_;

	# For each node in the array do the following
	foreach ( @hostStates ) {

		my $line = $_;
		# Break up the information
		# using our delimiter, the colon
		my ($domain, $state, $time) = split(/:/, $line, 3);

		# If we find our site return the previous state
		if ( $domain eq $host ) {
			return $state;
		}

	}

	# Site not previously logged; return the empty string
	# so callers never see undef
	return '';

}

In this function each element in the array is broken into its relevant fields using the split function, which divides the record at a given delimiter, the colon. From there it is a simple matter of comparing the two states, previous and current, before rolling into the notification process.
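Worth noting is the limit of 3 passed to split: the timestamp written by logState is itself full of colons, and the limit stops the splitting after the first two delimiters so the timestamp survives intact. A minimal sketch (the sample record is illustrative):

```perl
use strict;
use warnings;

# A sample record as logState would write it: host, HTTP status
# code and an ISO 8601 timestamp (which itself contains colons).
my $line = "www.weinstein.org:200:2009-11-23T01:55:00";

# The limit of 3 stops splitting after two delimiters,
# leaving the timestamp's own colons untouched.
my ($domain, $state, $time) = split(/:/, $line, 3);

print "$domain\n";   # www.weinstein.org
print "$state\n";    # 200
print "$time\n";     # 2009-11-23T01:55:00
```

Without the limit, split would shatter the timestamp at its own colons and $time would hold only "2009-11-23T01".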

The complete, improved remote monitor:


#!/usr/bin/perl
use strict;
use warnings;
use FileHandle;

use Time::Piece;
use LWP::UserAgent;
use Net::Ping;
use Net::Twitter::Lite;

### Start Global Settings ###

my $fileLoc = "/var/log/state.txt";
my @hosts = ( "pdw.weinstein.org", "www.weinstein.org" );

# Twitter credentials used for notifications
my $username = "username";
my $password = "password";

### End Global Settings ###

# Our array of polling sites' previous states
my @hostStates = ();

# Populate said array with information from the log file,
# if one exists from a previous run
my $fh = FileHandle->new("< $fileLoc");
if (defined $fh) {

	while ( <$fh> ) {

		my $line = $_;
		chomp( $line );
		push ( @hostStates, $line );

	}
	$fh->close;

}

# Clear out the file by writing anew
$fh = FileHandle->new("> $fileLoc");
$fh->close if defined $fh;

foreach my $host ( @hosts ) {

	my $previousState = getPreviousState( $host );

	my $url = "http://$host";
	my $ua = LWP::UserAgent->new;
	my $response = $ua->get( $url );

	my $currentState = $response->code;
	my $time = localtime;	# a Time::Piece object

	# If states are not equal we need to notify someone
	if ( $previousState ne $currentState ) {

		# Did Apache itself respond? LWP flags the synthetic
		# response it fabricates on a connection failure
		# with a Client-Warning header.
		if ( !defined $response->header('Client-Warning') ) {

			reportError( "$host reports " . $response->message . ".\n" );

		} else {

			# HTTP is not responding;
			# is the network connection down?
			# (icmp pings require root privileges)
			my $p = Net::Ping->new("icmp");
			if ( $p->ping( $host, 2 ) ) {

				reportError( "$host is responding, but Apache is not.\n" );

			} else {

				reportError( "$host is unreachable.\n" );

			}

		}

	}

	# Not done yet; we need to log
	# the current state for future use
	logState( $host, $currentState, $time );

}

sub reportError ($) {

	my ( $msg ) = @_;
	my $nt = Net::Twitter::Lite->new(
		username => $username,
		password => $password );

	my $result = eval { $nt->update( $msg ) };

	if ( !$result ) {

		# Twitter has failed us;
		# we still need to get the word out...
		smsEmail( $msg );

	}

}

sub smsEmail ($) {

	my ( $msg ) = @_;
	my $to = "7735551234\@txt.example.org";
	my $from = "pdw\@weinstein.org";
	my $subject = "Service Notification";

	my $sendmail = '/usr/lib/sendmail';
	open(MAIL, "|$sendmail -oi -t");
	print MAIL "From: $from\n";
	print MAIL "To: $to\n";
	print MAIL "Subject: $subject\n\n";
	print MAIL $msg;
	close( MAIL );

}

sub logState ($$$) {

	my ( $host, $state, $time ) = @_;

	# Create a filehandle on our log file,
	# opened for appending
	my $fh = FileHandle->new(">> $fileLoc");
	if (defined $fh) {

		# Print to file the necessary information,
		# delimited with a colon
		print $fh "$host:$state:" . $time->datetime . "\n";
		$fh->close;

	}
}

sub getPreviousState ($) {

	my ( $host ) = @_;

	# For each node in the array do the following
	foreach ( @hostStates ) {

		my $line = $_;
		# Break up the information
		# using our delimiter, the colon
		my ($domain, $state, $time) = split(/:/, $line, 3);

		# If we find our site return the previous state
		if ( $domain eq $host ) {

			return $state;

		}

	}

	# Site not previously logged; return the empty string
	# so the state comparison never sees undef
	return '';

}
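Finally, the script needs to be scheduled. Assuming it is saved somewhere like /usr/local/bin/monitor.pl (a hypothetical path) and marked executable, a crontab entry along these lines would poll every five minutes; note that writing to /var/log and sending icmp pings both typically require root, so the job belongs in root's crontab:

```shell
# Edit the schedule with: crontab -e
# Run the remote monitor every five minutes
*/5 * * * * /usr/local/bin/monitor.pl
```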

Happy Birthday Apache (Software Foundation)


First published: 3rd of November 2009 for Technorati

 

 

(Photo from Flickr user jaaronfarr)


This week the Apache Software Foundation (ASF) is holding its annual US conference in Northern California for all things Apache. As part of the get-together, conference attendees, as well as those elsewhere this week, are invited to join in celebrating the 10th anniversary of the Apache Software Foundation.

Ah, I hear confusion in your voice, didn't Apache celebrate its 10th anniversary a couple of years ago?

Indeed the Apache Web Server has already celebrated its tenth birthday. But just as the Apache Web Server evolved from an ad hoc collection of software patches for the NCSA's HTTPd web server, the Apache Software Foundation is a registered not-for-profit organization that evolved from the loose affiliation of web developers and administrators who submitted and organized those patches in the first place.

Big deal? Well, yes - it is a big deal. See, the Apache Software Foundation is a decentralized community of developers that oversees the development of the Apache HTTP Server along with some 65 additional leading Open Source projects.

In essence the ASF provides the necessary framework for those projects to exist: guidelines on how to organize project resources and contributions, a maturation process for new projects known as the Apache Incubator, legal protection for volunteers, defense against misuse of the Apache brand name and stewardship of the Apache License.

In other words, the ASF is about learning from and building on the success of the world's most popular web server. Projects such as Tomcat, Lucene, SpamAssassin and CouchDB all owe a bit of their success to the ASF's dedication to providing transparent team-focused development projects the necessary computing and project resources needed for successful collaboration.

Along with sharing the same open source license and resources, the projects managed by the ASF - and to a larger extent the collection of project volunteers - share the ideal that project participation defines not just the roles of individual contributors, but their responsibilities as well. Roles are assigned based on demonstrated talent and ability - a meritocracy. And while anyone can contribute to a project outright, membership in the foundation as a whole is granted only to nominated and elected individuals who have actively contributed to the ASF and its projects.

Oh, and the ASF also organizes several ApacheCon conferences each year, including annual conferences in the United States and Europe.

And that is why the ASF's 10th anniversary is important. That is why you should take some time this week to celebrate.

(Ed. note: this author also reflects on his first time with Apache on his personal blog.)

My First Exposure to Apache, A Personal Reflection


Technorati just published an article of mine on this week's 10th Anniversary celebration of the Apache Software Foundation. Alas, given current commitments - consulting gigs and an upcoming family getaway - I couldn't bring myself to justify a trip out to the Bay Area this week to participate. So instead, I present this personal reflection of my first real exposure to Apache in celebration of the foundation's 10-year milestone.

C2Net Software
In 1998, with a freshly minted Computer Science degree in hand, I received my first real exposure to the Apache community with my first full-time job offer, from C2Net Software in Oakland, CA. I had an offer sheet, a signing bonus and an opportunity to move to the technological epicenter that is the San Francisco Bay Area. I had no idea what I was in for.

By 1998 the Apache Group - forerunner to the Apache Software Foundation - had already coalesced around a patched-up HTTPd web server from the University of Illinois' National Center for Supercomputing Applications, which had come into its own as the most popular software for running websites. Companies such as C2Net and Covalent built businesses on packaging the Apache Web Server with pre-compiled modules such as mod_php and mod_ssl for nearly any computing platform imaginable, even Windows. But by far the most popular systems of the day were Sun - "We put the dot in dot-com" - Solaris and FreeBSD.


The Internet boom was in full swing.


Being a recent college graduate I had all of the theory and knowledge and none of the "real-world" experience. I was hired by C2Net as a Database Engineer. I had recent exposure to various Unix-based systems, including one variant while working for a small business in a Chicago suburb, writing Perl scripts for text processing of information bound for a database and later computation. I had experience with HTML layout and programming for the Common Gateway Interface from working part-time at a small computer bookstore in another suburb. I had even tried to organize an online resume matching service as a whole-class project in a Software Engineering course.


However, I was missing two important pieces: knowledge of web server software and of how to use the server to bring everything together.


That would soon change. C2Net had been growing. What had started, in part, in a Berkeley dorm as a Bay Area ISP that adopted the open Apache Web Server to combat security flaws discovered in Netscape's web server had evolved into a booming business selling a security-minded version of Apache, packaged as the Stronghold Web Server, worldwide. Alas, the one-table incident tracking system that had been hacked together one evening was in serious need of replacement.


That is where I came in. Working with three other individuals, I helped develop what nowadays would be referred to as a Customer Relationship Management (CRM) system, though at the time we just called it the "All-Singing-All-Dancing Sales and Support Database" - complete with Michigan J. Frog as mascot - since it would integrate sales and support contacts and interactions into a single database with web-based work queues for pending sales and support email inquiries.


ASAD: The All-Singing, All-Dancing Database

Our in-house email- and web-based CRM system started by replicating the basic functions of the existing incident tracking system: an inbound email would be parsed and processed based on basic information. If an incident id was located in the subject, the email body was appended to the corresponding incident and the status of the incident was updated for attention. If the email had no incident number, a new incident was created, the email was appended and the incident was assigned to the level-one support tech with the fewest open incidents awaiting an answer.
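That subject-line dispatch can be sketched in a few lines of Perl. The [Incident #NNNN] tag format and the route_email helper are illustrative assumptions here, not C2Net's actual code:

```perl
use strict;
use warnings;

# Decide what to do with an inbound message based on its subject.
# Returns an action plus the incident id, if one was found.
sub route_email {
    my ($subject) = @_;

    # A reply carries a tag such as "[Incident #1234]"
    if ( $subject =~ /\[Incident\s+#(\d+)\]/ ) {
        # Existing incident: append the body and flag for attention
        return ( "append", $1 );
    }

    # No incident number: create a new incident and assign it to
    # the level-one tech with the fewest open incidents
    return ( "create", undef );
}

my ($action, $id) = route_email("Re: [Incident #1234] SSL handshake error");
print "$action $id\n";   # append 1234
```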


Staff members logged into the system using a digital client certificate generated by an internal, private certificate authority. Stronghold would verify the certificate against the root certificate of our certificate authority and then provide the certificate information to the web application. The application would then use the email address presented in the certificate to query the database and generate the user's work queue. And since using digital certificates begets encryption, all information transmitted between the server and the client was protected from the very beginning to the very end.
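Stronghold, being Apache-based, exposed the verified certificate's fields to CGI programs through mod_ssl's standard environment variables (given SSLVerifyClient require and SSLOptions +StdEnvVars in the server configuration). A sketch of that lookup follows; beyond the SSL_CLIENT_S_DN_Email variable name, everything here is illustrative rather than C2Net's actual code:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# By the time this CGI script runs, mod_ssl has already verified
# the client certificate against the private CA's root certificate.
# SSLOptions +StdEnvVars exposes the certificate's fields to CGI.
my $email = $ENV{SSL_CLIENT_S_DN_Email};

if ( defined $email ) {

    # The certificate's email address keys the database query
    # that builds this staff member's work queue.
    print "Content-type: text/plain\n\n";
    print "Work queue for $email\n";

} else {

    # No verified certificate: refuse the request outright
    print "Status: 403 Forbidden\n";
    print "Content-type: text/plain\n\n";
    print "A verified client certificate is required.\n";

}
```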


Granted, the system had its flaws too. Today there are any number of robust templating systems for separating application logic from display logic; back then, many of the program files became filled with dead weight, print statements repeating the same HTML formatting code over and over.


But it worked. It was something more than a collection of CGI scripts and static HTML pages on some remote system. It was an application. An application capable of complex user interactions. An application on a system that I had direct access to, where I could review error logs in real time and tweak performance - and before long, a system that would be implemented to get important business done.


All of which came about in great deal because of the Apache Web Server and its growing community.

2010: The Year of the Mobile Gadget


First published: 28th of October 2009 for Technorati

In the movie "2010" there is a sequence of opening scenes explaining Dr. Heywood Floyd's (played by Roy Scheider) decision to personally investigate the Monolith and his preparation for the trip to Jupiter, including a scene of Dr. Floyd working on the beach with a portable computer.


Of course in 1984, when the scene was filmed, the closest functional equivalent to today's laptop was Compaq's "luggable" Portable. So it's not too hard to understand that the computer used for the scene was in fact an Apple //c with an optional LCD panel, despite the fact that Apple didn't offer a computer with a battery until the Macintosh Portable in 1989.

But the scene isn't really about predicting what kind of computers we might be working on in 2010 so much as it is about showing the promise personal computer makers started making in the early 80s: the ability to break out of the office while at the same time becoming more productive.

On that note, alas, there has always been an issue. Who in their right mind would bring a $999-or-more computer to the beach, what with its three deathly hallows: water, sand and sun?

As it turns out, the answer is starting to come to light as the holiday shopping season gets underway on the eve of 2010.

And that answer is the netbook. Or is it the smartphone? Or perhaps the e-reader? Or something else?

Ok, maybe the answer isn't quite here yet. But consider what we have thus far: netbooks from Asus, Lenovo and others; smartphones from Google, Microsoft and their partners, Palm, Apple, RIM and Nokia; e-readers from Sony, Amazon and now even Barnes & Noble. And of course, who can ignore Apple's rumored tablet?

And while it is not certain that one, or any, of these gadget types will still be with us in five years' time - know anyone still talking up PDAs? - it is certain that these devices bring us a step closer to that scene of sitting on a beach with a truly portable computer, one with readability in bright sunlight and long battery life without the personal computer price tag.

That in turn will make 2010 the Year of the Mobile Gadget.

About the Author

Paul is a technologist and all-around nice guy for technology-oriented organizations and parties. Beyond this blog and website, you can follow Paul's particular pontifications on Life, the Universe and Everything on Twitter.

   
   


