Recently in Apache Category

How to Secure Your Website Part I: Communication


First published: 16th of Dec 2013 for Orbit Media Studios

Security is about risk management. Online, security is about reducing the risk of exposing information to the general Internet.

Consider the two basic actions occurring on any device connected to the Internet:

  • Communication
  • Storage

Communication

Communication is the heart of the Internet. The standard Internet protocol suite, known as TCP/IP (Transmission Control Protocol and Internet Protocol), is the basis for a collection of additional protocols designed to interconnect computer systems across the world in different ways. For example:

  • Domain Name - DNS (Domain Name System)
  • Email - SMTP (Simple Mail Transfer Protocol)
  • Web - HTTP (Hypertext Transfer Protocol)

Unfortunately, in the initial designs of the Internet, preventing unauthorized access to data while in transit and the verification of the communicating parties were not primary concerns. As a result, many of the protocols that use TCP/IP do not incorporate encryption or other security mechanisms by default.

The consequence is that anyone can "listen in" (not just the NSA) as data is transmitted across the Internet. That is, none of the protocols in the sample list employ any kind of encoding that restricts access to the data as it travels from one system to another.

HTTP - the protocol of the web - does, however, have a solution to this problem. SSL (Secure Sockets Layer) defines a process for incorporating cryptographic methods that identify the parties in communication and establish a secure method of data transmission over the web (HTTPS).

Note: Today SSL's successor is TLS (Transport Layer Security), but it is still commonly referred to as SSL (or more accurately SSL/TLS).

Since the initial phase of establishing an SSL/TLS connection involves computationally intensive cryptographic calculations, implementation in the past was often limited to specific webpages (an e-commerce site's checkout page, for example). However, today the trend is to implement it as broadly as possible.

  • Popular sites, such as Google or Facebook, will conduct all communication over HTTPS by default by redirecting the initial HTTP request to HTTPS (a configuration sketch follows this list).
  • Popular web browsers will attempt to connect to a website via HTTPS first by rewriting the initial HTTP request to HTTPS before attempting a connection.
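On a site served by the Apache HTTP Server, such a blanket redirect can be just a few configuration lines. A minimal sketch, assuming mod_ssl and a certificate are already configured for the HTTPS side, with a placeholder domain name:


# Plain-HTTP virtual host: send every request to the HTTPS version of the site
<VirtualHost *:80>
    ServerName www.example.com
    Redirect permanent / https://www.example.com/
</VirtualHost>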

Does your website need SSL/TLS? That's a risk assessment you need to make with your web developer and hosting provider. But consider:

  • The trend is to secure more data in transit, not less.
  • Your website's visitors are concerned not just about sensitive information they actively provide (credit card numbers, for example), but also about information they provide passively, such as which webpages they view.

Our next security post will cover the second topic: data storage. In the meantime, have a question about security and the web? Post your question in the comments section below.

The Power of mod_proxy


The Power of mod_proxy: An Introduction to Proxy Servers and Load Balancers with Apache HTTP Server
Presentation slides from my mod_proxy talk at ApacheCon NA 2011 earlier this month. In addition to this slideshow, the presentation can be downloaded as a PPT file and as an MP3 recording.



Adding SQL Server Support in PHP on Linux


Back in July I outlined a method for establishing an SSH tunnel between Linux and Windows machines. The goal of the connection was to give a PHP script on a front-end Linux web server access to information stored on the back-end private Windows server running SQL Server.

What I didn't mention at the time was how I enabled PHP support for Microsoft's SQL Server.

The most common deployments of PHP on Linux include support for MySQL or Postgres, depending largely on factors such as the organization's preference, experience and requirements. Since PHP can be deployed on Windows, support for Microsoft's SQL Server exists as well. Such support is nontrivial to enable in PHP on Linux. It is, however, possible:

To enable SQL Server support in PHP on Linux, the PHP extension that provides said support requires the FreeTDS library to build against. FreeTDS is an open source implementation of the C libraries originally marketed by Sybase and Microsoft to enable access to their database servers.

Downloading the source code, building and installing FreeTDS is straightforward:


$ wget \
ftp://ftp.ibiblio.org/pub/Linux/ALPHA/freetds/stable/freetds-stable.tgz
$ gunzip freetds-stable.tgz
$ tar xf freetds-stable.tar
$ cd freetds-*
$ ./configure --prefix=/usr/local/freetds
$ make
$ make install

The next step is to build the PHP source code against the FreeTDS libraries to include SQL Server support. This can be done in one of two ways: build PHP from scratch or build just the specific PHP extension. Since I was working on a server with a preexisting install of PHP, I opted for door number two:

Locate or download the source code for the preexisting version of PHP. Next, copy the mssql extension source code from the PHP source code into a separate php_mssql directory:


$ mkdir -p ~/src/php_mssql
$ cp ext/mssql/config.m4 ~/src/php_mssql
$ cp ext/mssql/php_mssql.c ~/src/php_mssql
$ cp ext/mssql/php_mssql.h ~/src/php_mssql

Now build the source code, pointing it to where FreeTDS has been installed:


$ cd ~/src/php_mssql
$ phpize
$ ./configure --with-mssql=/usr/local/freetds
$ make

There should now be an mssql.so file in ~/src/php_mssql/modules/ that can be copied into the existing PHP install. Once copied, the last remaining steps are to enable the extension by modifying the php.ini file and restarting the Apache HTTP Server.
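On a typical Red Hat-style system those last steps might look like the following; the extension directory and php.ini location are assumptions that vary by distribution and PHP build:


$ cp ~/src/php_mssql/modules/mssql.so /usr/lib/php/modules/
$ echo "extension=mssql.so" >> /etc/php.ini
$ service httpd restart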

Additional Information can be found here: Connecting PHP on Linux to MSQL on Windows

Establish and Maintain an SSH Tunnel between Linux and Windows


The Situation
Over the years, I've worked in numerous computing environments and have come to appreciate heterogeneous systems. In my mind, all system administrators should experience how different platforms solve similar problems, just as all programmers should be exposed to different programming languages.

Of course this means being able to play well with others. Sometimes, that's easier said than done.

A recent project requirement stipulated being able to connect a public web server with a private database system. Not an uncommon requirement, but it did place a hurdle immediately in the way. The web application, developed with the Linux, Apache, MySQL and PHP (LAMP) stack, needed a method to connect to the private database system securely, which, for fun, was not MySQL but instead Microsoft's SQL Server.

The Problem
The initial requirement called for connecting to the SQL Server using Microsoft's virtual private network (VPN) solution, Microsoft Point-to-Point Encryption (MPPE). Not impossible, since support for MPPE on any Linux distribution simply requires modifying the Linux kernel, and recompiling the kernel in Linux is usually a non-issue.

However, in this case the web application would be running on a basic virtual private server (VPS), and a Linux VPS doesn't run its own kernel. Instead, Linux VPSes run on a shared kernel used by all the different virtualized servers running on the same hardware.

The net result: no modification of the Linux kernel would be possible on the VPS.

One alternative to this hurdle would have been to switch from a Linux VPS to a Windows VPS. This would have been technically possible since Apache, MySQL and PHP have viable Windows ports. Alas, the hosting provider in question didn't yet offer Windows VPSes. They would shortly, but couldn't guarantee that their Windows VPS solution would be available in time for this particular project's deadline.

A second alternative could have been to upgrade from a virtualized server to a dedicated server. But that would have added more computing resources than what was required. From a business perspective, the added monthly cost wasn't justifiable. Not when a third alternative existed.

A Workable Solution
VPN is one of those terms that can refer to something generic as well as something very specific[1]. This distinction sets up alternative number three. The secure network connection requirement would remain; the implementation could simply change[2].

Specifically, the secure connection would be implemented via SSH instead of via MPPE.

With SSH an encrypted tunnel through an open port in the private network's firewall can be established. This tunnel forwards network traffic from a specified local port to a port on the remote machine, securely.

Most Linux distributions these days install OpenSSH as part of their base system install. OpenSSH is a free and open implementation of the SSH protocol and includes client and server software. For those distributions that don't install it by default, installing OpenSSH is usually a trivial matter via the distribution's package manager.

Windows, on the other hand, has no such base installation of an SSH implementation. There are, however, a number of free software versions for Windows. For the case at hand, freeSSHD was selected to provide a free, open source version of the SSH server software.

Configuring freeSSHD to enable tunneling requires the following steps:

  1. Click on the "Tunneling" tab
  2. Check to enable port forwarding and apply the change
  3. Click on the "Users" tab
  4. Create or edit a user and enable tunnel access

Once the firewall has been configured to allow SSH traffic on port 22, establishing the tunnel from the Linux client to the Windows server is as simple as typing the following at the Linux command-line:


ssh -f -N -L 127.0.0.1:1433:192.168.1.2:1433 username@example.org

Here ssh creates and sends to the background (-f option) a tunnel that executes no remote commands (-N option), begins at localhost port 1433 (127.0.0.1:1433), terminates at the remote address and port (192.168.1.2:1433), and authenticates as the given user at the remote location (the public IP address or domain name for the private network).
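Once the tunnel is up, a PHP script on the web server simply treats the local end of the tunnel as the database server. A minimal sketch using PHP's mssql extension; the database name, credentials and query are placeholders:


<?php
// Connect to the local end of the SSH tunnel; with FreeTDS the
// port is given using the host:port syntax. Placeholder credentials.
$link = mssql_connect( '127.0.0.1:1433', 'dbuser', 'dbpassword' );
if ( !$link ) {
    die( 'Unable to reach SQL Server through the tunnel' );
}

mssql_select_db( 'mydatabase', $link );
$result = mssql_query( 'SELECT COUNT(*) FROM customers', $link );
$row = mssql_fetch_row( $result );
echo "Customer count: $row[0]\n";
mssql_close( $link );
?>


As far as the script is concerned, SQL Server is running locally on port 1433.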

But Wait There's More
There is, however, a minor problem with this SSH tunnel. As described, the establishment of the SSH tunnel is an interactive process. The command needs to be executed and the password for the user provided for authentication. In most cases a simple shell script, executed by cron, would solve this minor issue. However, for the sake of security, OpenSSH doesn't provide a command-line option for providing passwords.

This authentication step can be managed in one of two ways. One is the use of a key management program such as ssh-agent. The second, more common option is to create a passphrase-less key.

The first step in creating a passphrase-less key is to generate a private/public key pair[3]. In Linux this is done by issuing the command:


ssh-keygen -t rsa

This generates a private/public key pair based on either the RSA or DSA encryption algorithm, depending on which is specified with the -t command-line option.

When prompted to enter a passphrase to secure the private key, simply press enter. To confirm the empty passphrase, press enter again.
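The public key itself is a single line of text that can be moved to the Windows server by any convenient means. To display it for copying (the file name assumes the RSA key generated above):


$ cat ~/.ssh/id_rsa.pub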

The next step, after copying the public key onto the Windows server, is to enable the use of the public key for authentication. In freeSSHD the steps are:

  1. Click on the "Users" tab
  2. Select a user and click on "Change"
  3. Select "Public Key" from the "Authorization" drop-down
  4. Click on "OK" to save changes to users
  5. Next click on the "Authentication" tab
  6. Using the browse button, select the directory where the users' public keys are kept
  7. Enable public-key authentication by choosing the "Allowed" button under "Public-Key Authentication"
  8. Click on "OK" to save the changes to authentication

With the passphrase-less keys in place, the last step is to automate the tunnel itself. In this case, instead of a shell script, I opted to use a program called autossh.

autossh is a program that can start a copy of ssh and monitor the connection, restarting it when necessary. Beyond the ssh arguments, all autossh needs is a free local port it can use for its monitoring traffic, designated with the -M option (the given port, and the one above it, must be otherwise unused). So our one-time initial startup of the ssh tunnel looks similar to the previous example:


autossh -M 20000 -f -N -L 127.0.0.1:1433:192.168.1.2:1433 \
username@example.org




[1] This also means, alas, that it is one of those terms that can cause confusion, especially between technical and non-technical people, if not defined at the outset.

[2] This is one of those places where knowledge of different solutions solving a similar problem comes in handy.

[3] For user authentication SSH can either be password-based or key-based. In key-based authentication, SSH uses public-key cryptography where the public key is distributed to identify the owner of the matching private key. The passphrase in this case is used to authenticate access to the private key.

Speaking at ApacheCon NA 2011


At the moment November, and another hockey season, seems a long way off. But I'm already making plans to be in Vancouver from the 9th to the 11th for the North American edition of ApacheCon.

That is because, at present, I am scheduled to present at ApacheCon on The Power of the mod_proxy Modules:

This presentation reviews the concepts of web proxies and load balancing, covers the creation and maintenance of proxies (forward and reverse) for HTTP, HTTPS and FTP using Apache and mod_proxy, and shows how mod_proxy_balancer can be used to provide a basic load balancing solution. Configuration examples of implementing proxies and load balancers will be discussed, including: how and when the mod_proxy modules can help, configuring mod_proxy as a forward or reverse proxy, and configuring mod_proxy_balancer for one or more backend web servers.
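For a flavor of what those configuration examples look like, here is a minimal reverse proxy with load balancing sketch; the backend hostnames, port and paths are placeholders:


# Requires mod_proxy, mod_proxy_http and mod_proxy_balancer to be loaded
<Proxy balancer://mycluster>
    BalancerMember http://backend1.example.com:8080
    BalancerMember http://backend2.example.com:8080
</Proxy>

ProxyPass        / balancer://mycluster/
ProxyPassReverse / balancer://mycluster/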

A preview of this upcoming presentation can be found in a previous talk from ApacheCon EU 2006, entitled Apache 2.2 and mod_proxy_balancer.

For those who might be interested, this year's ApacheCon is set to cover:
  • Enterprise Solutions
  • Cloud Computing
  • Emerging Technologies & Innovation
  • Community Leadership
  • Data Handling, Search & Analytics
  • Pervasive Computing
  • Servers, Infrastructure & Tools

More information, including full schedule and registration, can be found at: http://na11.apachecon.com/

A Step Beyond a Red Hat Enterprise Linux, Apache, MySQL and PHP How To

(Image: Server Room, by pdweinstein via Flickr)

It seems quite a bit of my time has been taken up with system administration related tasks these days. If I'm not contemplating system monitoring solutions for one client, I must be busy migrating a mail service from one server location to another.


Naturally, I was hardly surprised when I got an email from an old high school friend recently asking for assistance in setting up a Linux, Apache, MySQL and PHP (LAMP) server on Red Hat Enterprise Linux (RHEL).


Now said friend is hardly a technical novice, and Google is happy to provide plenty of step-by-step how-tos on the subject. So instead of running down the usual list I figured, since he understands the technical side but is probably more at home with Microsoft-based solutions, that I would give him some tips to orient himself.


Thus here is a quick and dirty "what's happening" (instead of the more common how-to) on installing LAMP on a RHEL-based system.



Helpful Tools

Since most, if not all, of these commands deal with system-wide alterations, first and foremost one will need "superuser" access. That can be accomplished in one of two ways:


  • root user: root is the conventional name of the user who has rights to all files and programs on a Unix-based system such as Red Hat Enterprise Linux. In other words, root is the main administrator for the system and can do anything at any time.


  • sudo: sudo is a command that allows users to run programs with the security privileges of another user (normally the superuser, root). sudo usage needs to be granted to a user before it can be used and can be limited to a subset of privileges if needed.
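For example, a user granted sudo rights can run a single administrative command by prefixing it with sudo:


$ sudo yum install httpd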


There are plenty of commands that one can execute at the command line. Just remembering them all can be hard, let alone remembering exactly how each command works, in what context and with what arguments. For that reason alone one should remember that help is always just one command away:


  • man: man is a command (and format) for displaying the on-line manual pages. The man command followed by the name of the command whose documentation you wish to view will display that documentation.

For example, to view the manual page for the man command itself:


$ man man


A key characteristic of Unix-centric systems is the use of plain text for data storage. This means that the focus of data manipulation is based on the fundamentals of text processing. It also means that data can be "piped" from one command-line tool to another for performing a task that might be more complicated for just one tool to complete.


While text processing isn't necessary for setting up a LAMP server, what is necessary is to know, and have ready to use, at least one general text-based tool: a text editor.


  • vi: vi, vim, emacs and pico are each command-line text editors, and while I try to be agnostic in the eternal debate about which is better, I will mention that for as long as I've been a Unix user/admin (some 15 years now), no matter what system I am on, be it a Red Hat distribution of Linux or a proprietary system, chances are vi or vim has been installed and is available.[1]


Installing Software

Red Hat Enterprise Linux uses a software package format called rpm. Each software package consists of an archive of files along with information about the package like its version and a description of the software within.


As a general practice, software is broken up into two separate packages: one contains the compiled executable, while a second contains the source files for building the software from scratch[2]. The source code packages will usually have "-devel" appended to their name field.[3]
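For example, querying the rpm database for both packages of the Apache HTTP Server shows the convention in action (the version numbers here are illustrative):


$ rpm -q httpd httpd-devel
httpd-2.2.3-31.el5.centos.2
httpd-devel-2.2.3-31.el5.centos.2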


However, back in the early days of rpm it was discovered that managing individual rpm packages is a pain and time consuming[4]. As such, a number of rpm-based Linux distributions have adopted tools such as yum. yum assists in identifying the dependencies of a given software package, as well as installing said packages, tracking updates for a given package from a specified repository and removing installed software from the system as desired.


Identification

Using yum with the list argument will provide information about a given package's availability. For example, to check on the Apache Web Server, which is officially known as the Apache HTTP Server:


$ yum list httpd
Loaded plugins: fastestmirror
Installed Packages
httpd.i386     2.2.3-31.el5.centos.2     installed
Available Packages
httpd.i386     2.2.3-31.el5.centos.4     updates  


As you can see, httpd is not only available, but is also already installed.[5]


Installing

If, however, the Apache HTTP Server wasn't installed, the install argument for yum would do the trick.


$ yum install httpd
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * addons: mirror.cisp.com
 * base: yum.singlehop.com
 * extras: mirror.sanctuaryhost.com
 * updates: mirrors.liquidweb.com
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package httpd.i386 0:2.2.3-31.el5.centos.4 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

==================================================================
 Package     Arch     Version               Repository       Size
==================================================================
Installing:
 httpd       i386     2.2.3-31.el5.centos.4 updates         1.2 M

Transaction Summary
==================================================================
Install      1 Package(s)        
Update       0 Package(s)        
Remove       0 Package(s)        

Total download size: 1.2 M
Is this ok [y/N]:


Now it so happens that installing the Apache HTTP Server on the example system, at the given time, didn't require any additional software packages. This might not always be the case, and as such, yum will identify and list any packages it determines are missing from the system and need to be installed at the same time for the Apache Software Foundation's web server to be useful.


Updating

One of the key advantages of a package management system such as yum is the ability to easily track updates. The update feature can be used in two distinct manners. If used without explicitly naming a given package, update will check for and then update every currently installed package that has a newer package available from the repository.


If, on the other hand, one or more packages are specified, yum will only update the listed packages. As with installing the initial package, while updating, yum will ensure that all dependencies are installed.
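At the command line the two forms look like this; the first updates everything with a newer version available, the second only the Apache HTTP Server package:


$ yum update
$ yum update httpd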


Removing

Of course if yum can install and update software, it only makes sense that it can also remove installed software from a given system. The remove argument is used to remove a specified package from the system, as well as any packages which depend on the package being removed.


Note that this is basically the exact opposite of the install option. Install will add any software the package requires to run and the requested software itself. Remove, however, removes the requested software and any software that depends on it.
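Thus, to remove the Apache HTTP Server along with anything that depends on it:


$ yum remove httpd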


Configuring Software

Once each component is installed, it's time to go about configuring each one. The general convention for Unix-based systems such as Red Hat's Enterprise Linux is to place system-wide configuration files in the /etc directory. In fact, while the specifics can vary, the directory structure from Unix system to Unix system will generally follow the same overall pattern.


Again staying with our web server example, the main Apache HTTP Server configuration file can be found at /etc/httpd/conf/httpd.conf on a Red Hat Enterprise Linux system. To open the configuration file for review and possible modification using the vi text editor is as simple as:


$ vi /etc/httpd/conf/httpd.conf


Another common directory convention is /var, for variable files. Here reside files whose content is expected to change continually during normal operation of the system. This is of note in regards to Apache and MySQL, since /var is where database (/var/lib/mysql) and website (/var/www) file hierarchies reside.


Executing a Service

Last, but not least, is starting these various services. For that there is the service command, which runs a standardized script created for controlling the operation of the software in question. Thus to start the httpd web service the command would be:


$ service httpd start
Starting httpd:                                            [  OK  ]


Of course starting a service manually and having it automatically start when the system boots are two different realities. The chkconfig command provides a simple command-line tool for denoting if a service is to start at boot time or not. To turn on the Apache httpd service at boot:


$ chkconfig --add httpd
$ chkconfig httpd on


As noted in its man page, chkconfig has five distinct functions: adding new services for management, removing services from management, listing the current startup information for services, changing the startup information for services, and checking the startup state of a particular service.


Thus if service is akin to rpm, for manually controlling a specific service via its control script, then chkconfig is akin to yum, for managing a group of services and controlling the when and how of their availability.


The --add argument for chkconfig adds a new service for management. Likewise, --del will remove a service from management, and --list will list all of the services which chkconfig knows about or, if a name is specified, information only about the named service.


One of the key concepts in regards to services, such as a web or database service, on a Unix-centric system is the concept of a runlevel. In standard practice, a runlevel relates to the state the system is in and what is enabled for execution. Runlevel zero, for example, halts all execution of any running service and denotes a halted, off, system.


Runlevels 1 to 5 differ in terms of which drives are mounted and which services, such as a web server, are started. Default runlevels are typically 3, 4, or 5. Lower runlevels (1) are useful for maintenance or emergency repairs, since they usually don't offer any network services and can help in eliminating sources of trouble.


This is important to note since, in managing a service, chkconfig needs to know at which runlevels a service should run. Moreover, it is important to note what runlevels other dependent services are enabled for. That is, if runlevel 1, which is a single-user state akin to a "safe mode", is configured to not run any networking services, it makes little sense to enable the Apache HTTP Server to execute at this runlevel.
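For example, to review which runlevels httpd is currently enabled for, and then to enable it explicitly for runlevels 3 through 5 (the output shown is illustrative):


$ chkconfig --list httpd
httpd          0:off  1:off  2:off  3:on   4:on   5:on   6:off
$ chkconfig --level 345 httpd on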


More Resources

Just as other how-tos are entry points into getting going, this little what's-happening is a quick step beyond the how-to's last step, moving one toward understanding what is going on "behind the scenes." Hopefully this next step provides a helpful starting point for moving into a greater understanding of the grand world of system administration.


To that end, here is a quick list of books on the subject of system administration for Unix systems such as Red Hat Enterprise Linux:




[2] Remember, one of the major tenets at the heart of GNU/Linux is the ability to have access to view and modify the source materials.

[3] More about the rpm naming convention can be found here: http://en.wikipedia.org/wiki/RPM_Package_Manager#Package_label

[4] Often referred to as RPM Hell

[5] A trained eye will note that this command was actually executed on a CentOS system. CentOS, as noted on their site, is "an Enterprise-class Linux Distribution derived from sources freely provided to the public by a prominent North American Enterprise Linux vendor." Can you guess which one?

A Stepped Up Remote Apache Monitor in Perl


Back in September I outlined a simple Perl script to remotely monitor the status of various web servers I manage and report on any failures. One shortcoming of the script is that it has no memory of the previous state of the websites listed for polling. Thus, once a site fails, the script will continuously report on the failure until it is resolved.

For some, this might be just fine, a simple repetitive reminder until corrected. For others however, this might not be ideal. If, for example, the problem is non-trivial to solve, the last thing one needs is a nagging every few minutes that the issue has yet to be resolved.

I for one am all for notification without excessive nagging.

The obvious answer to this dilemma is to store the previous state of the server so that it can be tested against the current state; if the state of the server has changed, a notification gets sent. Thus, one straightforward notification that something has changed.

As a bonus, by reporting on the change of state, the script will now report on when the server has come back online as well as when it has failed. This simple change eliminates what would have been a manual process previously; notifying stakeholders that the issue has been resolved.

Since the Perl script is invoked by cron on a regular basis and terminates once polling is complete, the "current" state of a site will need to be stored in secondary memory, i.e. on disk, for future comparison. This is pretty straightforward in Perl:


sub logState ($$$) {

	my ( $host, $state, $time ) = @_;

	# Create a filehandle on our log file
	my $fh = FileHandle->new(">> $fileLoc");
	if (defined $fh) {

	        # Print to file the necessary information 
                # delimited with a colon
		print $fh "$host:$state:" .$time->datetime. "\n";
		$fh->close;

 	}
}

With a new Filehandle object the script opens the file previously assigned to the $fileLoc variable for appending (the '>>' immediately prior to the variable denotes write by appending).

If a Filehandle object has been successfully created, the next step is to write a line to the file with the information necessary for the next iteration of the monitor script, specifically the host information and its current state.

Note that each line (\n) in the file will denote information about a specific site and that the related information is separated by a colon (:). This will be pertinent later in the code, when reading the log file at the next scheduled execution of the monitor script:


# Our array of polling sites' previous state
my @hostStates = ();

# Populate said array with information from log file
# (the file may not exist yet on the very first run)
my $fh = FileHandle->new("< $fileLoc");
if (defined $fh) {

	while ( <$fh> ) {

		my( $line ) = $_;
		chomp( $line );
		push ( @hostStates, $line );

	}
	$fh->close;

}

In this bit of code the goal is to get the previously logged state of each site and populate an array with the information. At the moment how each record is delimited isn't of concern, but simply that each line is information relating to a specific site and gets its own node in the array.

Note, since the objective here is simply to read the log file, the "<" is used by the filehandle to denote that the file is "read-only" and not "append".

Once the polling of a specific site occurs, the first item of concern is determining the site's previous state. For that the following bit of code is put to use:


sub getPreviousState ($) {

	my ( $host ) = @_;

	# For each node in the array do the following
	foreach ( @hostStates ) {

		my( $line ) = $_;
		# Break up the information 
                # using our delimiter, the colon
		my ($domain, $state, $time) = split(/:/, $line, 3);

		# If we find our site return the previous state 
		if ( $domain eq $host ) {
			return $state;
		}

 	}

	# No previous record found for this host
	return "";

}

In this function each element in the array is broken down into its relevant pieces using the split function, which splits the record on a given character, the colon. From here it is a simple matter of testing the two states, the previous and the current, before rolling into the notification process.

The complete, improved remote monitor:


#!/usr/bin/perl
use strict;
use FileHandle;

use Time::Piece;
use LWP::UserAgent;
use Net::Ping;
use Net::Twitter::Lite;

### Start Global Settings ###

my $fileLoc = "/var/log/state.txt";
my @hosts = ( "pdw.weinstein.org", "www.weinstein.org" );

# Twitter credentials used by reportError (placeholder values)
my $username = "twitterusername";
my $password = "twitterpassword";

### End Global Settings ###

# Our array of polling sites' previous state
my @hostStates = ();

# Populate said array with information from log file
# (the file may not exist yet on the very first run)
my $fh = FileHandle->new("< $fileLoc");
if (defined $fh) {

	while ( <$fh> ) {

		my( $line ) = $_;
		chomp( $line );
		push ( @hostStates, $line );

	}
	$fh->close;

}

# Clear out the file by writing anew
$fh = FileHandle->new("> $fileLoc");
$fh->close if defined $fh;

foreach my $host ( @hosts ) {

	my $previousState = getPreviousState( $host );

	my $url = "http://$host";
	my $ua = LWP::UserAgent->new;
	my $response = $ua->get( $url );

	my $currentState = $response->code;
	my $time = localtime;

	# If states are not equal we need to notify someone
	if ( $previousState ne $currentState ) {

		# Do we have a status code?
		if ( $response->code ) {

			reportError( $host . " reports " . $response->message . "\n" );

		} else {

			# HTTP is not responding,
			# is the network connection down?
			my $p = Net::Ping->new("icmp");
			if ( $p->ping( $host, 2 )) {

				reportError( "$host is responding, but Apache is not.\n" );

			} else {

				reportError( "$host is unreachable.\n" );

			}

		}

	}

	# Not done yet, we need to log
	# the current state for future use
	logState( $host, $currentState, $time );

}

sub reportError ($) {

	my ( $msg ) = @_;
	my $nt = Net::Twitter::Lite->new(
		username => $username,
		password => $password );

	my $result = eval { $nt->update( $msg ) };

	if ( !$result ) {

		# Twitter has failed us,
		# need to get the word out still...
		smsEmail ( $msg );

	}

}

sub smsEmail ($) {

	my ( $msg ) = @_;
	my $to = "7735551234\@txt.example.org";
	my $from = "pdw\@weinstein.org";
	my $subject = "Service Notification";

	my $sendmail = '/usr/lib/sendmail';
	open(MAIL, "|$sendmail -oi -t");
		print MAIL "From: $from\n";
		print MAIL "To: $to\n";
		print MAIL "Subject: $subject\n\n";
 		print MAIL $msg;
	close( MAIL );

}

sub logState ($$$) {

	my ( $host, $state, $time ) = @_;

	# Create a filehandle on our log file
	my $fh = FileHandle->new(">> $fileLoc");
	if (defined $fh) {

	        # Print to file the necessary information,
                # delimited with a colon
		print $fh "$host:$state:" .$time->datetime. "\n";
		$fh->close;
 	}
}

sub getPreviousState ($) {

	my ( $host ) = @_;

	# For each node in the array do the following
	foreach ( @hostStates ) {

		my( $line ) = $_;
		# Break up the information using our delimiter, 
                # the colon
		my ($domain, $state, $time) = split(/:/, $line, 3);

		# If we find our site return the previous state
		if ( $domain eq $host ) {

			return $state;

		}

 	}

	# No previous record found for this host
	return "";

}

Happy Birthday Apache (Software Foundation)


First published: 3rd of November 2009 for Technorati

 

 

(Photo from Flickr user jaaronfarr)


This week the Apache Software Foundation (ASF) is holding its annual US conference in Northern California for all things Apache. As part of the get-together, conference attendees, as well as those elsewhere this week, are invited to join in celebrating the 10th anniversary of The Apache Software Foundation.

Ah, I hear confusion in your voice: didn't Apache celebrate its 10th anniversary a couple of years ago?

Indeed the Apache Web Server has already celebrated its tenth birthday, but just as the Apache Web Server evolved from an ad hoc collection of software patches for the NCSA's web server HTTPd, the Apache Software Foundation is a registered not-for-profit organization that evolved from the loose affiliation of web developers and administrators who submitted and organized those patches in the first place.

Big deal? Well, yes, it is a big deal. See, the Apache Software Foundation is a decentralized community of developers that oversees the development of the Apache HTTP Server along with some 65 additional leading Open Source projects.

In essence the ASF provides the necessary framework for those projects to exist, from guidelines on how to organize project resources and contributions to a maturation process for new projects, known as the Apache Incubator. That framework includes providing legal protection to volunteers, defending against misuse of the Apache brand name and stewarding adoption of the Apache License.

In other words, the ASF is about learning from and building on the success of the world's most popular web server. Projects such as Tomcat, Lucene, SpamAssassin and CouchDB all owe a bit of their success to the ASF's dedication to providing transparent, team-focused development projects with the computing and project resources needed for successful collaboration.

Along with sharing the same open source license and resources, the projects managed by the ASF - and to a larger extent the collection of project volunteers - share the ideal that project participation defines not just the roles of individual contributors, but their responsibilities as well. Roles are assigned based upon demonstrated talent and ability, a meritocracy. And while anyone can help contribute to a project outright, membership in the foundation as a whole is granted only to nominated and elected individuals who have actively contributed to the ASF and its projects.

Oh, and the ASF also organizes several ApacheCon conferences each year, including annual conferences in the United States and Europe.

And that is why the ASF's 10th anniversary is important. That is why you should take some time this week to celebrate.

(Ed. note: this author also reflects on his first time with Apache on his personal blog.)

My First Exposure to Apache, A Personal Reflection


Technorati just published an article of mine on this week's 10th anniversary celebration of the Apache Software Foundation. Alas, given current commitments - consulting gigs and an upcoming family getaway - I couldn't bring myself to justify a trip out to the Bay Area this week to participate. So instead, in celebration of the foundation's 10-year milestone, I present this personal reflection on my first real exposure to Apache.

C2Net Software
In 1998, with a freshly minted Computer Science degree in hand, I received my first real exposure to the Apache community with my first full-time job offer, from C2Net Software in Oakland, CA. I had an offer sheet, a signing bonus and an opportunity to move to the technological epicenter that is the San Francisco Bay Area. I had no idea what I was in for.

By 1998 the Apache Group - forerunner to the Apache Software Foundation - had already coalesced around a patched-up HTTPd Web Server from the University of Illinois' National Center for Supercomputing Applications, which had come into its own as the most popular software for running websites. Companies such as C2Net and Covalent started building businesses on packaging the Apache Web Server with pre-compiled modules such as mod_php and mod_ssl for any computing platform imaginable, even Windows. But by far the most popular systems of the day were Sun - "We put the dot in dot-com" - Solaris and FreeBSD.


The Internet boom was in full swing.


Being a recent college graduate I had all of the theory and knowledge and none of the "real-world" experience. I was hired by C2Net as a Database Engineer. I had recent exposure to various Unix-based systems, including one variation from working for a small business in a Chicago suburb writing Perl scripts for text processing of information bound for a database and later computation. I had experience with HTML layout and programming for the Common Gateway Interface from working part-time at a small computer bookstore in another suburb. I had even tried to organize an online resume-matching service as a whole-class project in a Software Engineering course.


However, I was missing two important pieces: knowledge of web server software and how to use the server to bring everything together.


That would soon change. C2Net had been growing. What had started, in part, in a Berkeley dorm as a Bay Area ISP that adopted the open Apache Web Server to combat security flaws discovered within Netscape's web server had evolved into a booming business selling a security-minded version of Apache, packaged as the Stronghold Web Server, worldwide. Alas, the company's one-table incident tracking system, which had been hacked together one evening, was in serious need of replacement.


That is where I came in. Working with three other individuals, I helped develop what is nowadays referred to as a Customer Relationship Management (CRM) system, though at the time we just called it the "All-Singing-All-Dancing Sales and Support Database" - complete with Michigan J. Frog as mascot - since it would integrate sales and support contacts and interactions into a single database with web-based work queues for pending sales and support email inquiries.


ASAD: The All-Singing, All-Dancing Database

Our in-house email and web-based CRM system started by replicating the basic functions of the existing incident tracking system: an inbound email would be parsed and processed based on basic information. If an incident id was located in the subject, the email body was "appended" to the corresponding incident and the status of the incident was updated for attention. If the email had no incident number, a new incident was created, the email was appended and the incident was assigned to a level-one support tech based on the number of open incidents then awaiting any one tech to answer.


Staff members logged into the system using a digital client certificate generated by an internal, private certificate authority. Stronghold would verify the certificate against the root certificate of our certificate authority and then provide the certificate information to the web application. The application would then use the email address as presented in the certificate to query the database and generate the user's work queue. And since using digital certificates begets encryption, all information transmitted between the server and the client was protected from the very beginning to the very end.


Granted, the system had its flaws too. Today there are any number of robust templating systems for abstracting the application logic from the display logic. Many of our program files became filled with dead weight: print statements repeating over and over the same HTML code for formatting and display.


But it worked. It was something more than a collection of CGI scripts and static HTML pages on some remote system. It was an application. An application capable of complex user interactions. An application on a system that I had direct access to, where I could review error logs in real time and tweak the performance, and before long a system that would be implemented to get important business done.


All of which came about in great deal because of the Apache Web Server and its growing community.

Simple Remote Monitor for Apache with Perl

Sometime last week, Apache on one of the servers that manages some websites I host stopped responding to requests, resulting in said websites being unavailable for a day and a half (or so). Unfortunately I didn't know about the problem until someone else notified me of the issue. Oops. Not good system administration, that.


There are plenty of solutions for monitoring network services, but given that I'm just running Apache for a half dozen non-critical websites on the side, there's no reason to go overboard. Any sysadmin worth their weight in salt should be able to whip something up with Perl in no time.[1] So, what's needed? Well, something that will remotely monitor the pool of websites and notify me if something is amiss:

[Flow diagram: poll each website, test the HTTP response, then notify via Twitter or SMS on failure]

First up, making a simple HTTP GET request:

my $url = "http://$host";
my $ua = LWP::UserAgent->new;
my $response = $ua->get( $url );

Now we test the response: was it successful? If so, Apache returned an appropriate status code and the requested resource. If Apache is available but unable to properly process the request, it will respond with some relevant status code, which we wish to pass on for further troubleshooting:

if ( !$response->is_success ) {

	# Do we have an error code?
	if ( $response->code ) {

	     reportError( $host . " reports " . $response->message . "\n" );

	} else {
	} else {

However, a completely failed GET request will result in no status code, since Apache probably failed to respond at all. In this case it would be helpful to determine whether the issue is with Apache or something else. For that test, a network ping is issued:

		# HTTP is down, is the network connection down too?
		my $p = Net::Ping->new("icmp");
		if ( $p->ping( $host, 2 )) {

		     reportError( "$host is responding, but Apache is not.\n" );

		} else {

		     reportError( "$host is unreachable.\n" );

		}
	}

Not too difficult that. Now, how do we go about communicating the issue at hand? SMS has always been my preferred method since my phone is usually close at hand and, iPhone or not, SMS is widely implemented and easy to use.

# Send SMS via cellular Email to SMS gateway
my ( $msg ) = @_;
my $to = "7735551234\@txt.att.net";
my $from = "pdw\@weinstein.org";
my $subject = "Service Notification";

my $sendmail = '/usr/lib/sendmail';
open( MAIL, "|$sendmail -oi -t" );
	print MAIL "From: $from\n";
	print MAIL "To: $to\n";
	print MAIL "Subject: $subject\n\n";
	print MAIL $msg;
close( MAIL ); 

All's done? Not quite. For the fun of it, I figured broadcasting a message that the server was unavailable might be of use to regular visitors. What better way to broadcast a short message than via Twitter?

my ( $msg ) = @_;
my $nt = Net::Twitter::Lite->new(
	username => $username,
	password => $password );

my $result = eval { $nt->update( $msg ) };

Bringing this all together with cron gives us:

# monitor remote httpd servers every 30 minutes
*/30 * * * *   pdw  /home/pdw/bin/monitor.pl >/dev/null 2>&1



#!/usr/bin/perl
use strict;

use LWP::UserAgent;
use Net::Ping;
use Net::Twitter::Lite;

my @hosts = ( "pdw.weinstein.org", "www.weinstein.org" );

# Twitter credentials used by reportError (placeholder values)
my $username = "twitterusername";
my $password = "twitterpassword";

foreach my $host ( @hosts ) {

	my $url = "http://$host";
	my $ua = LWP::UserAgent->new;
	my $response = $ua->get( $url );

	if ( !$response->is_success ) {

		# Do we have an error code?
		if ( $response->code ) {

			reportError( $host . " reports " . $response->message . "\n" );

		} else {

			# HTTP is down, is the network connection down too?
			my $p = Net::Ping->new("icmp");
			if ( $p->ping( $host, 2 )) {

				reportError( "$host is responding, but Apache is not.\n" );

			} else {

				reportError( "$host is unreachable.\n" );

			}

		}

	}

}

sub reportError ($) {

	my ( $msg ) = @_;
	my $nt = Net::Twitter::Lite->new(
		username => $username,
		password => $password );

	my $result = eval { $nt->update( $msg ) };

	if ( !$result ) {

		# Twitter has failed us,
		# need to get the word out still...
		smsEmail( $msg );

	}

}

sub smsEmail ($) {

	my ( $msg ) = @_;
	my $to = "7735551234\@txt.att.net";
	my $from = "pdw\@weinstein.org";
	my $subject = "Service Notification";

	my $sendmail = '/usr/lib/sendmail';
	open( MAIL, "|$sendmail -oi -t" );
		print MAIL "From: $from\n";
		print MAIL "To: $to\n";
		print MAIL "Subject: $subject\n\n";
		print MAIL $msg;
	close( MAIL );

}



[1] Of course, if I were worth my weight, I would have had something in place long before it was called for....

About the Author

Paul is a technologist and all-around nice guy for technology-oriented organizations and parties. Besides maintaining this blog and website, you can follow Paul's particular pontifications on Life, the Universe and Everything on Twitter.

   
   

