Recently in Apache Category

How to Secure Your Website Part III: Keeping You and Your Website Safe


First published: 30th of May 2014 for Orbit Media Studios

History
In the late 1960s the mathematician Whitfield Diffie, now a well-known cryptographer, started his graduate work at Stanford. There he was introduced to the growing prominence of "time-sharing" computing, computers powerful enough to allow more than one user or task to execute at the same time. Contemplating the security implications of these new systems, Diffie and his colleagues realized that our everyday concepts of privacy and security would have to be enforceable in the new digital age.

Unfortunately, in the 1980s, the developments of multitasking and computer security were pushed aside for a new vision: computers became independent and personal. They sat on a desk, not in some closed-off room. They had all the required resources right there and didn't require connecting to another system. They simply went about doing one thing, in real time, with just one user.

Evolution
As the personal computer evolved, features from the days of mainframes and minicomputers were introduced. Multitasking and networking made their way into our everyday lives. Soon everyone had an email address and was finding their way onto the "Information Superhighway." Unfortunately, the vision of an independent personal computer led us to develop some bad habits and a false sense of security.

Consider what has been mentioned in the previous two posts about data in transit and in storage:

  • Encrypting and decrypting data requires intense mathematical computation, which can impact processing time and the perception of an application's responsiveness. In the world of 80s-era personal computing, the computer was not regularly connected to any remote device, was not executing multiple applications at the same time, was not interacting with various users and was not easily portable. At the time encryption was not popular because of the performance hit and limited security benefit.

Unfortunately, this habit of speed over security has continued. Platform and application developers still routinely shortcut security concerns in the name of performance.

  • The Internet provides a previously unknown sense of immediacy and intimacy despite great physical distances. Email and social networks allow us to view and share thoughts throughout the world as they occur. Ecommerce sites can organize lists of items personalized to one's tastes and fashions.

This intimacy creates a false sense of security, that one is safe, among friends and trusted institutions. Yet the wildly successful networking protocol TCP/IP, the foundation of today's Internet, was originally developed as a research initiative. It sacrificed some concerns, such as security, for others, such as simplicity of implementation, as the research drove toward an initial, small-scale (by today's standards) implementation.

Safety Tips
There are, of course, steps that system architects and developers can take to rectify this situation. But there are also steps that users of these systems, be they end users of a website or its proprietor, can take:

  • Be aware of what data is being collected and how it is communicated

    • What information is being requested, and can it be considered "sensitive"?

    • Review how data is being transmitted between systems

    • If it is "sensitive," is it being transmitted securely?

  • Be aware of how information is being stored

    • Review what data is being stored

    • If the data is "sensitive," is it being stored securely?

    • Review "roles" assigned to different users who access the data and create unique accounts for each user

  • Overall, be proactive, not reactive

    • Create strong passwords

    • Use secured network protocols such as SSL and SFTP (see the quick example after this list)

    • Keep all applications and devices up-to-date

    • Undertake a risk assessment with your web developer and hosting provider.

  • Know that no system is unbreakable

    • Like a chain, a complex system is only as strong as its weakest link

    • Compliance with PCI, HIPAA or other security policies is a starting point

    • Threats evolve and new vulnerabilities are routinely discovered; don't get discouraged
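As a quick illustration of the secured-protocols tip above, here are two commands available on most systems; www.example.org is simply a stand-in for your own site, and the exact output will vary:

# Inspect the certificate and TLS configuration a site presents to visitors
$ openssl s_client -connect www.example.org:443 -servername www.example.org < /dev/null

# Move files to and from the web server over SFTP rather than plain FTP
$ sftp user@www.example.org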

Think something is missing from the list? Post it in the comments section below.

How to Secure Your Website Part II: Storage


First published: 25th of Feb 2014 for Orbit Media Studios

Security is about reducing risk. All devices connected to the Internet have to deal with reducing the risk of data being compromised while in transit or in storage. Part I of How to Secure Your Website introduced the basics of securing website data while in transit. This post will cover storage.

Computer storage is often organized into a hierarchy based on accessibility to and volatility of data. The focus of this article is on secondary storage, a hard drive or flash memory.

Just about all devices these days incorporate some form of authorization and access control. Access control is simply the process of restricting access to a resource. Authentication is the verification of identity through some sort of credential, such as a username and password. Authorization is the granting of access to an identity that has been authenticated.

Due to poor risk assessment or implementation, access control processes are routinely compromised. Worse, the data stored on these compromised devices is rarely encrypted properly, if at all.

As mentioned in Part I, there are cryptographic methods that not only encode data, but also provide additional methods of authorization and access control to data. So, why isn't all data encrypted in storage?

Similar to that of data in transit, encrypting data in storage has not always been considered a high priority. Speed is usually the focus for storage because the access time impacts the overall speed of an application. The act of encrypting data on write and decrypting the data on read requires more time and can cause a perception that the application or website is slow. Hence encryption is rarely enabled for all data in storage.

How does Orbit handle data storage?

  • If a business case requires the storage of personally identifiable information, Orbit's policy is to enhance the CMS to encrypt the data for storage, make it decryptable and viewable only through a secured process, and destroy the data after 30 days.

  • User passwords are hashed. Similar to a cipher, a hash is a method for encoding data. However, unlike a cipher, a hash is one way. A strong password, properly hashed, is difficult to guess or reverse.
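As a rough illustration of that one-way property, the htpasswd utility that ships with the Apache HTTP Server (version 2.4 or later for bcrypt support) can hash a password at the command line; the -n flag prints the result instead of writing a file and -B selects the deliberately slow bcrypt algorithm:

# Prompts for a password, then prints "demo:<bcrypt hash>" to the terminal
$ htpasswd -nB demo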

Does your website's data need to be secured? That's a risk assessment you need to make with your web developer and hosting provider. But consider what information is collected and stored on your website:

  • Name, Phone Number, Email, Street Addresses

    • Some people are very cautious about sharing even this basic level of information with others. However, those people will opt out of forms that ask for this information on principle

    • Most people share this level of information openly and, taken by itself, it is optional to secure

  • Date of Birth, City of Birth, Mother's Maiden Name, Alma mater, Year of Graduation, Past Residences, Gender, Ethnicity, Account/Username

    • On their own, these pieces of information might be considered benign. When combined with other information, they form the basis of an identity

    • Need to secure

  • Social Security Number, Driver's License ID, Bank Account Number, Credit Card Number, Account Password

    • This is information that is used for authentication of an identity

    • These pieces of information must be secured. Moreover, the securing of this information might need to pass some sort of industry compliance, such as PCI or HIPAA

Of course, this list is incomplete. Perhaps you can think of something to add to it? Post it in the comments section below.

How to Secure Your Website Part I: Communication


First published: 16th of Dec 2013 for Orbit Media Studios

Security is about risk management. Online, security is about reducing the risk of exposing information to the general Internet.

Consider the two actions occurring on any device connected to the Internet:

  • Communication
  • Storage

Communication

Communication is the heart of the Internet. The standard Internet protocol suite, known as TCP/IP (Transmission Control Protocol and Internet Protocol), is the basis for a collection of additional protocols designed to interconnect computer systems across the world in different ways. For example:

  • Domain Name - DNS (Domain Name System)
  • Email - SMTP (Simple Mail Transfer Protocol)
  • Web - HTTP (Hypertext Transfer Protocol)

Unfortunately, in the initial designs of the Internet, preventing unauthorized access to data while in transit and the verification of the communicating parties were not primary concerns. As a result, many of the protocols that use TCP/IP do not incorporate encryption or other security mechanisms by default.

The consequence is that anyone can "listen in" (not just the NSA) as data is transmitted across the Internet. That is, none of the protocols in the sample list employ any kind of encoding that restricts access to the data as it travels from one system to another.

HTTP - the protocol of the web - does, however, have a solution to this problem. SSL (Secure Sockets Layer) establishes a process to incorporate cryptographic methods that identify the parties in communication and establish a secure method of data transmission over the web (HTTPS).

Note: Today SSL's successor is TLS (Transport Layer Security), but it is still commonly referred to as SSL (or more accurately SSL/TLS).

Since the initial phase of establishing an SSL/TLS connection incorporates intense mathematical calculations, implementation in the past had been limited to specific webpages (an e-commerce site's checkout page, for example). However, today the trend is to implement as broadly as possible.

  • Popular sites, such as Google or Facebook, will conduct all communication over HTTPS by default by redirecting the initial HTTP request to HTTPS (a quick way to see this from the command line follows this list).
  • Popular web browsers will attempt to connect to a website via HTTPS first by rewriting the initial HTTP request to HTTPS before attempting a connection.
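One rough way to watch that first redirect happen is with curl, requesting the plain-HTTP address of such a site and looking only at the response headers; the exact status code and headers will vary from site to site:

# Fetch only the response headers for the HTTP address;
# an HTTPS-everywhere site answers with a redirect to an https:// URL
$ curl -sI http://www.google.com/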

Does your website need SSL/TLS? That's a risk assessment you need to make with your web developer and hosting provider. But consider:

  • The trend is to secure more data in transit, not less.
  • Your website's visitors are not just concerned about sensitive information that they are actively providing (credit card information, for example), but other information they are actively and passively providing, such as what webpage they are viewing.

Our next security post will cover the second topic: data storage. In the meantime, have a question about security and the web? Post your question in the comments section below.

The Power of mod_proxy


The Power of mod_proxy: An Introduction to Proxy Servers and Load Balancers with Apache HTTP Server
Presentation slides from my mod_proxy talk at ApacheCon NA 2011 earlier this month. In addition to this slideshow, the presentation can be downloaded as PPT and MP3.



Adding SQL Server Support in PHP on Linux


Back in July I outlined a method for establishing a SSH tunnel between Linux and Windows machines. The goal of the connection was to enable a PHP script on a front-end Linux web server access to information stored on the back-end private Windows server running SQL Server.

What I didn't mention at the time was how I enabled PHP support for Microsoft's SQL Server.

The most common deployments of PHP on Linux include support for MySQL or Postgres, depending largely on other factors such as the organization's preference, experience and requirements. Since PHP can be deployed on Windows, there is support for Microsoft's SQL Server. Such support is nontrivial to enable in PHP on Linux. It is, however, possible:

To enable SQL Server support in PHP on Linux, the PHP extension that provides said support requires the FreeTDS library to build against. FreeTDS is an open source implementation of the C libraries originally marketed by Sybase and Microsoft to enable access to their database servers.

Downloading the source code, building and installing FreeTDS is straightforward:


$ wget \
ftp://ftp.ibiblio.org/pub/Linux/ALPHA/freetds/stable/freetds-stable.tgz
$ gunzip freetds-stable.tgz
$ tar xf freetds-stable.tar
$ cd freetds-*
$ ./configure --prefix=/usr/local/freetds
$ make
$ make install

The next step is to build the PHP source code against the FreeTDS libraries to include SQL Server support. This can be done one of two ways: build PHP from scratch or build just the specific PHP extension. Since I was working on a server with a preexisting install of PHP, I opted for door number two:

Locate or download the source code for the preexisting version of PHP. Next, copy the mssql extension source code from the PHP source code into a separate php_mssql directory:


$ cp ext/mssql/config.m4 ~/src/php_mssql
$ cp ext/mssql/php_mssql.c ~/src/php_mssql
$ cp ext/mssql/php_mssql.h ~/src/php_mssql

Now build the source code, pointing it to where FreeTDS has been installed:


$ cd ~/src/php_mssql
$ phpize
$ ./configure --with-mssql=/usr/local/freetds
$ make

There should now be an mssql.so file in ~/src/php_mssql/modules/ that can be copied into the existing PHP install. Once copied, the last remaining steps are to enable the extension by modifying the php.ini file and restarting the Apache HTTP Server.
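On a Red Hat-style system those last steps look roughly like the following; the extension directory and the php.ini location are assumptions that vary with the distribution and with how PHP was built:

$ cp ~/src/php_mssql/modules/mssql.so /usr/lib/php/modules/
$ echo "extension=mssql.so" >> /etc/php.ini
$ service httpd restart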

Additional Information can be found here: Connecting PHP on Linux to MSSQL on Windows

Establish and Maintain an SSH Tunnel between Linux and Windows


The Situation
Over the years, I've worked in numerous computing environments and have come to appreciate heterogeneous systems. In my mind, all system administrators should experience how different platforms solve similar problems, just as all programmers should be exposed to different programming languages.

Of course this means being able to play well with others. Sometimes, that's easier said than done.

A recent project requirement stipulated being able to connect a public web server with a private database system. Not an uncommon requirement, but it did place a hurdle immediately in the way. The web application, developed with the Linux, Apache, MySQL and PHP (LAMP) stack, needed a method to connect to the private database system securely, which, for fun, was not MySQL but instead Microsoft's SQL Server.

The Problem
The initial requirement called for connecting to the SQL Server using Microsoft's virtual private network (VPN) solution, Microsoft Point-to-Point Encryption (MPPE). Not impossible, since support for MPPE on any Linux distribution simply requires modifying the Linux kernel, and recompiling the kernel in Linux is usually a non-issue.

However, in this case the web application would be running on a basic virtual private server (VPS) and a Linux VPS doesn't run its own kernel. Instead Linux VPSes run on a shared kernel used by all the different virtualized servers running on the same hardware.

Net result, no modification of the Linux kernel would be possible on the VPS.

One alternative to this hurdle would have been to switch from a Linux VPS to a Windows VPS. This would have been technically possible since Apache, MySQL and PHP have viable Windows ports. Alas, the hosting provider in question didn't yet offer Windows VPSes. They would shortly, but couldn't guarantee that their Windows VPS solution would be available in time for this particular project's deadline.

A second alternative could have been to upgrade from a virtualized server to a dedicated server. But that would have added more computing resources than what was required. From a business perspective, the added monthly cost wasn't justifiable. Not when a third alternative existed.

A Workable Solution
VPN is one of those terms that can refer to something generic as well as something very specific[1]. This distinction sets up alternative number three. The secure network connection requirement would remain; the implementation could simply change[2].

Specifically the secure connection would be implemented via SSH instead of via MPPE.

With SSH an encrypted tunnel through an open port in the private network's firewall can be established. This tunnel forwards network traffic from a specified local port to a port on the remote machine, securely.

Most Linux distributions these days install OpenSSH as part of their base system install. OpenSSH is a free and open version of the SSH protocol and includes client and server software. For those distributions that don't install it by default installing OpenSSH is usually a trivial matter via the distribution's package manager.

Windows, on the other hand, has no such base installation of an SSH implementation. There are a number of free software versions for Windows. For the case at hand, freeSSHD was selected to provide a free, open source version of the SSH server software.

Configuring freeSSHD to enable tunneling requires the following steps:

  1. Click on the "Tunneling" tab
  2. Check to enable port forwarding and apply the change
  3. Click on the "Users" tab
  4. Create or edit a user and enable tunnel access

Once the firewall has been configured to allow SSH traffic on port 22, establishing the tunnel from the Linux client to the Windows server is as simple as typing the following at the Linux command-line:


ssh -f -N -L 127.0.0.1:1433:192.168.1.2:1433 username@example.org

Here ssh creates and sends to the background an SSH tunnel (the -f option) without executing any remote commands (the -N option). The tunnel begins at the localhost port 1433 (127.0.0.1:1433), terminates at the remote address and port (192.168.1.2:1433) and authenticates using the remote username at the remote location (the public IP address or domain name for the private network).

But Wait There's More
There is however a minor problem with this SSH tunnel. As described, the establishment of the SSH tunnel is an interactive process. The command needs to be executed and the password for the user provided for authentication. In most cases a simple shell script, executed by cron would solve this minor issue. However, for the sake of security OpenSSH doesn't provide a command-line option for providing passwords.

This authentication step can be managed in one of two ways. One is the use of a key management program such as ssh-agent. The second, more common option is to create a passphrase-less key.

The first step in creating a passphrase-less key is to generate a private/public key pair[3]. In Linux this is done by issuing the command:


ssh-keygen -t rsa

This generates a private/public key pair based on either the RSA or DSA encryption algorithm, depending on what is provided with the -t command-line option.

When prompted to enter a passphrase for the securing of the private key simply press enter. To confirm the empty passphrase simply press enter again.

The next step, after copying the public key onto the Windows server, is to enable the use of the public key for authentication. In freeSSHD the steps are:

  1. Click on the "Users" tab
  2. Select a user and click on "Change"
  3. Select "Public Key" from the "Authorization" drop-down
  4. Click on "OK" to save changes to users
  5. Next click on the "Authentication" tab
  6. Using the browse button, select the directory where the users' public keys are kept
  7. Enable public-key authentication by choosing the "Allowed" button under "Public-Key Authentication"
  8. Click on "OK" to save the changes to authentication

With the passphrase-less keys in place, the last step is to automate the tunnel itself. In this case, instead of a shell script, I opted to use a program called autossh.

autossh is a program that can start a copy of ssh and monitor the connection, restarting it when necessary. All autossh needs beyond the usual ssh arguments is a free local port it can use for its monitoring traffic, given with the -M option (a port not otherwise in use, so not the forwarded port itself). Our one-time initial startup of the ssh tunnel thus looks similar to the previous example, but with autossh and the addition of the -M option:


autossh -M 20000 -f -N -L 127.0.0.1:1433:192.168.1.2:1433 \
username@example.org




[1] This means, alas, it is also one of those terms that can cause confusion, especially between technical and non-technical people, if not defined at the outset.

[2] This is one of those places where knowledge of different solutions solving a similar problem becomes handy.

[3] For user authentication SSH can be either password-based or key-based. In key-based authentication, SSH uses public-key cryptography, where the public key is distributed to identify the owner of the matching private key. The passphrase in this case is used to authenticate access to the private key.

Speaking at ApacheCon NA 2011


At the moment November, and another hockey season, seems a long way off. But I'm already making plans to be in Vancouver from the 9th to the 11th for the North American edition of ApacheCon.

That is because, at present, I am scheduled to present at ApacheCon on The Power of the mod_proxy Modules:

This presentation reviews the concepts of web proxies and load balancing, covers the creation and maintenance of proxies (forward and reverse) for HTTP, HTTPS and FTP using Apache and mod_proxy, and shows how mod_proxy_balancer can be used to provide a basic load balancing solution. Configuration examples of implementing proxies and load balancers will be discussed, including: how and when the mod_proxy modules can help, configuring mod_proxy for a forward or reverse proxy, and configuring mod_proxy_balancer for one or more backend web servers.

A preview of this upcoming presentation can be found in a previous talk from ApacheCon EU 2006, entitled Apache 2.2 and mod_proxy_balancer.

For those who might be interested, this year's ApacheCon is set to cover:
  • Enterprise Solutions
  • Cloud Computing
  • Emerging Technologies & Innovation
  • Community Leadership
  • Data Handling, Search & Analytics
  • Pervasive Computing
  • Servers, Infrastructure & Tools

More information, including full schedule and registration, can be found at: http://na11.apachecon.com/

A Step Beyond a Red Hat Enterprise Linux, Apache, MySQL and PHP How To

Server Room (image by pdweinstein via Flickr)

It seems quite a bit of my time has been taken up with system administration related tasks these days. If I'm not contemplating system monitoring solutions for one client, I must be busy migrating a mail service from one server location to another.


Naturally, I was hardly surprised when I got an email from an old high school friend recently asking for assistance in setting up a Linux, Apache, MySQL and PHP (LAMP) server on Red Hat Enterprise Linux (RHEL).


Now said friend is hardly a technical novice, and Google is happy to provide one with plenty of step-by-step how-tos on the subject. So instead of running down the usual steps, I figured that, since he understands the technical side but is probably more at home with Microsoft-based solutions, I would give him some tips to orient himself.


Thus here is a quick and dirty "what's happening" (instead of the more common how to) on installing LAMP on a RHEL based system.



Helpful Tools

Since most, if not all, of these commands deal with system-wide alterations, first and foremost one will need to have "superuser" access. That can be accomplished in one of two ways:


  • root user: root is the conventional name of the user who has rights to all files and programs on a Unix-based system such as Red Hat Enterprise Linux. In other words root is the main administrator for the system and can do anything at anytime.


  • sudo: sudo is a command that allows users to run programs with the security privileges of another user (normally the superuser, root). sudo usage needs to be granted to a user before it can be used and can be limited to a subset of privileges if needed.


There are plenty of commands that one can execute at the command line. Just remembering them all can be hard, let alone remembering how exactly each command works, in what context and with what arguments. For that alone one should always remember that help is always one command away:


  • man: man is a command (and format) for displaying the on-line manual pages. The man command, followed by the name of the command whose documentation you wish to view, will display that documentation.

For example to view the manual pages in regards to the man command itself:


$ man man


A key characteristic of Unix-centric systems is the use of plain text for data storage. This means that the focus of data manipulation is based on the fundamentals of text processing. It also means that data can be "piped" from one command-line tool to another for performing a task that might be more complicated for just one tool to complete.
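A quick example of piping: counting how many times the word "Listen" appears in the web server's configuration file by handing grep's output to wc (the file path is the Red Hat default discussed later in this post):

$ grep -i listen /etc/httpd/conf/httpd.conf | wc -l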


While text processing isn't necessary for setting up a LAMP server, what is necessary to know and have ready to use is at least one general, text-based tool: a text editor.


  • vi: vi, vim, emacs or pico each is a command line text editor and while I try to be agnostic in the eternal debate about which is better, I will mention that for as long as I've been a Unix user/admin (some 15 years now) no matter what system I am on, be it a Red Hat distribution of Linux or a proprietary system chances are vi or vim has been installed and is available.1


Installing Software

Red Hat Enterprise Linux uses a software package format called rpm. Each software package consists of an archive of files along with information about the package like its version and a description of the software within.


As a general practice packages are broken up into two separate packages: one contains the compiled executable, while a second one contains the source files for building the software from scratch2. The source code packages will usually have "-devel" appended to their name field.3


However, back in the early days of rpm it was discovered that managing individual rpm packages is a pain and time consuming4. As such, a number of rpm-based Linux distributions have adopted tools such as yum. yum assists in managing the identification of dependencies for a given software package as well as installing said packages, tracking updates for a given package from a specified repository and removing installed software from the system as desired.


Identification

Using yum with the list argument will provide information about a given package's availability. For example to check on the Apache Web Server, which is officially known as the Apache HTTP Server:


$ yum list httpd
Loaded plugins: fastestmirror
Installed Packages
httpd.i386     2.2.3-31.el5.centos.2     installed
Available Packages
httpd.i386     2.2.3-31.el5.centos.4     updates  


As you can see httpd is not only available, but is also already installed5.


Installing

If, however, the Apache HTTP Server wasn't installed, the install argument for yum would do the trick.


$ yum install httpd
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * addons: mirror.cisp.com
 * base: yum.singlehop.com
 * extras: mirror.sanctuaryhost.com
 * updates: mirrors.liquidweb.com
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package httpd.i386 0:2.2.3-31.el5.centos.4 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

==================================================================
 Package     Arch     Version               Repository       Size
==================================================================
Installing:
 httpd       i386     2.2.3-31.el5.centos.4 updates         1.2 M

Transaction Summary
==================================================================
Install      1 Package(s)        
Update       0 Package(s)        
Remove       0 Package(s)        

Total download size: 1.2 M
Is this ok [y/N]:


Now it so happens that installing the Apache HTTP Server on the example system, at that given time, didn't require any additional software packages. This might not always be the case; when it is not, yum will identify and list any packages it determines are missing from the system so they can be installed at the same time, allowing the Apache Software Foundation's web server to be useful.


Updating

One of the key advantages of a package management system such as yum is the ability to easily track updates. The update feature can be used in two distinct manners. If used without explicitly naming a given package, update will check for and then update every currently installed package that has a newer package available from the repository.


If, on the other hand, one or more packages are specified, yum will only update the listed package. As with installing the initial package, while updating, yum will ensure that all dependencies are installed.
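In practice the distinction looks something like this, using the web server package again as the example:

$ yum update            # check for and apply updates to every installed package
$ yum update httpd      # update only the Apache HTTP Server package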


Removing

Of course if yum can install and update software, it only makes sense that it can also remove the installed software from a given system. The remove argument is used to remove a specified package from the system as well as removing any packages which depend on the package being removed.


Note that this is basically the exact opposite of the install option. Install will add any software the package requires to run and the requested software itself. Remove, however, removes the requested software and any software that depends on it.
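For example, removing the web server installed earlier (yum will list anything that depends on it and ask for confirmation first):

$ yum remove httpd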


Configuring Software

Once each component is installed, it's time to go about configuring each one. The general convention for Unix-based systems such as Red Hat's Enterprise Linux is to place system-wide configuration files in the /etc directory. In fact, while the specifics can vary, the directory structure from Unix system to Unix system will generally follow the same overall pattern.


Again staying with our web server example, the main Apache HTTP Server configuration file can be found at /etc/httpd/conf/httpd.conf on a Red Hat Enterprise Linux system. To open the configuration file for review and possible modification using the vi text editor is as simple as:


$ vi /etc/httpd/conf/httpd.conf


Another common directory convention is /var for variable files. These are files whose content is expected to change continually during normal operation of the system. This is of note in regards to Apache and MySQL, since /var is where database (/var/db) or website file hierarchies (/var/www) reside.


Executing a Service

Last, but not least, is starting these various services. For that there is the service command, which runs a standardized script created for controlling the operation of the software in question. Thus to start the httpd web service the command would be:


$ service httpd start
Starting httpd:                                            [  OK  ]


Of course starting a service manually and having it automatically start when the system boots are two different realities. The chkconfig command provides a simple command-line tool for denoting if a service is to start at boot time or not. To turn on the Apache httpd service at boot:


$ chkconfig --add httpd
$ chkconfig httpd on


As noted in its man page, chkconfig has five distinct functions: adding new services for management, removing services from management, listing the current startup information for services, changing the startup information for services, and checking the startup state of a particular service.


Thus if service is akin to rpm for manually controlling a specific service via its control script, then chkconfig is akin to yum for managing a group of services, controlling the when and how of availability.


The --add argument for chkconfig adds a new service for management. Equally, the --del will remove a service from management and the --list will list all of the services which chkconfig knows about or, if a name is specified, information only about the named service.
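For example, to see what chkconfig currently knows about the web server enabled above:

$ chkconfig --list httpd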


One of the key concepts in regards to services, such as a web or database service on a Unix-centric system, is the concept of a runlevel. In standard practice, a runlevel relates to the state the system is in and what is enabled for execution. Runlevel zero, for example, halts all execution of any running service and denotes a halted, off, system.


Runlevels 1 to 5 differ in terms of which drives are mounted and which services, such as a web server are started. Default runlevels are typically 3, 4, or 5. Lower run levels (1) are useful for maintenance or emergency repairs, since they usually don't offer any network services and can help in eliminating sources of trouble.


This is important to note since, in managing a service, chkconfig needs to know at which runlevels a service should run. Moreover, it is important to note what runlevels other depended-upon services are enabled for. That is, if runlevel 1, a single-user state akin to a "safe mode," is configured not to run any networking services, it makes little sense to enable the Apache HTTP Server to execute at that runlevel.
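The --level argument handles exactly that. For example, to enable httpd only for the multi-user runlevels:

$ chkconfig --level 345 httpd on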


More Resources

Just as other how-tos are entry points into getting going, this little "what's happening" is a quick step beyond the how-to's last step, moving one towards understanding what is going on "behind the scenes." Hopefully this next step provides a helpful push towards a greater understanding and into the grand world of system administration.


To that end here is a quick list of books on the subject of system administration for Unix systems such as Red Hat Enterprise Linux:




2 Remember one of the major tenets at the heart of GNU/Linux is the ability to have access to view and modify the source materials.

3 More about the rpm naming convention can be found here: http://en.wikipedia.org/wiki/RPM_Package_Manager#Package_label

4 Often referred to as RPM Hell

5 A trained eye will note that this command was actually executed on a CentOS system. CentOS, as noted on their site, is "an Enterprise-class Linux Distribution derived from sources freely provided to the public by a prominent North American Enterprise Linux vendor." Can you guess which one?

A Stepped Up Remote Apache Monitor in Perl


Back in September I outlined a simple Perl script to remotely monitor the status of various web servers I manage and report on any failures. One shortcoming of the script is that it has no memory of the previous state of the websites listed for polling. Thus, once a site fails, the script will continuously report on the failure until resolved.

For some, this might be just fine, a simple repetitive reminder until corrected. For others however, this might not be ideal. If, for example, the problem is non-trivial to solve, the last thing one needs is a nagging every few minutes that the issue has yet to be resolved.

I for one am all for notification without excessive nagging.

The obvious answer to this dilemma is to store the previous state of the server so that it can be compared against the current state; if the state of the server has changed, a notification gets sent. Thus, one straightforward notification that something has changed.

As a bonus, by reporting on the change of state, the script will now report on when the server has come back online as well as when it has failed. This simple change eliminates what would have been a manual process previously; notifying stakeholders that the issue has been resolved.

Since the Perl script is invoked by cron on a regular basis and terminates once polling is complete, the "current" state of a site will need to be stored in secondary memory, i.e. on disk, for future comparison. This is pretty straightforward in Perl:


sub logState ($$$) {

	my ( $host, $state, $time ) = @_;

	# Create a filehandle on our log file
	my $fh = FileHandle->new(">> $fileLoc");
	if (defined $fh) {

	        # Print to file the necessary information 
                # delimited with a colon
		print $fh "$host:$state:" .$time->datetime. "\n";
		$fh->close;

 	}
}

With a new FileHandle object the script opens the file previously assigned to the $fileLoc variable for appending (the '>>' immediately prior to the variable denotes write by appending).

If a Filehandle object has been successfully created, the next step is to write a line to the file with the information necessary for the next iteration of the monitor script, specifically the host information and its current state.

Note that each line (\n) in the file will denote information about a specific site and that the related information is separated by a colon (:). This will be pertinent later in the code, when the log file is read at the next scheduled execution of the monitor script:


# Our array of polling sites' previous state
my @hostStates = ();

# Populate said array with information from log file
my $fh = FileHandle->new("< $fileLoc");
while ( <$fh> ) {

	my( $line ) = $_;
	chomp( $line );
	push ( @hostStates, $line );	

}
$fh->close;

In this bit of code the goal is to get the previously logged state of each site and populate an array with the information. At the moment how each record is delimited isn't of concern, but simply that each line is information relating to a specific site and gets its own node in the array.

Note, since the objective here is to simply read the log file the "<" is used by the filehandle to denote that the file is "read-only" and not "append".

Once the polling of a specific site occurs, the first item of concern is determining the site's previous state. For that the following bit of code is put to use:


sub getPreviousState ($) {

	my ( $host ) = @_;

	# For each node in the array do the following
	foreach ( @hostStates ) {

		my( $line ) = $_;
		# Break up the information 
                # using our delimiter, the colon
		my ($domain, $state, $time) = split(/:/, $line, 3);

		# If we find our site return the previous state 
		if ( $domain eq $host ) {
			return $state;
		}

 	}

}

In this function each element in the array is broken down to relevant information using the split function, which delimits the record by a given character, the colon. From here it is a simple matter of testing the two states, the previous and current state before rolling into the notification process.

The complete, improved remote monitor:


#!/usr/bin/perl
use strict;
use FileHandle;

use Time::Piece;
use LWP::UserAgent;
use Net::Ping;
use Net::Twitter::Lite;

### Start Global Settings ###

my $fileLoc = "/var/log/state.txt";
my @hosts = ( "pdw.weinstein.org", "www.weinstein.org" );

# Twitter account used by reportError() for notifications
# (placeholder values; substitute real credentials)
my $username = "twitter_username";
my $password = "twitter_password";

### End Global Settings ###

# Our array of polling sites' previous state
my @hostStates = ();

# Populate said array with information from log file
my $fh = FileHandle->new("< $fileLoc");
while ( <$fh> ) {

	my( $line ) = $_;
	chomp( $line );
	push ( @hostStates, $line );

}
$fh->close;

# Clear out the file by writing anew
my $fh = FileHandle->new("> $fileLoc");
$fh->close;

foreach my $host ( @hosts ) {

	my $previousState = getPreviousState( $host );

	my $url = "http://$host";
	my $ua = LWP::UserAgent->new;
	my $response = $ua->get( $url );

	my $currentState = $response->code;
	my $time = localtime;

	# If the states are not equal we need to notify someone
	if ( $previousState ne $currentState ) {

		# Do we have a status code to report?
		if ( $response->code ) {

			reportError( "$host reports " . $response->message . ".\n" );

		} else {

			# HTTP is not responding,
			# is the network connection down?
			my $p = Net::Ping->new("icmp");
			if ( $p->ping( $host, 2 )) {

				reportError( "$host is responding, but Apache is not.\n" );

			} else {

				reportError( "$host is unreachable.\n" );

			}

		}

	}

	# Not done yet, we need to log
	# the current state for future use
	logState( $host, $currentState, $time );

}

sub reportError ($) {

	my ( $msg ) = @_;
	my $nt = Net::Twitter::Lite->new(
		username => $username, 
		password => $password );

	my $result = eval { $nt->update( $msg ) };

	if ( !$result ) {

		# Twitter has failed us,
		# need to get the word out still...
		smsEmail ( $msg );

	}

}

sub smsEmail ($) {

	my ( $msg ) = @_;
	my $to = "7735551234\@txt.exaple.org";
	my $from = "pdw\@weinstein.org";
	my $subject = "Service Notification";

	my $sendmail = '/usr/lib/sendmail';
	open(MAIL, "|$sendmail -oi -t");
		print MAIL "From: $from\n";
		print MAIL "To: $to\n";
		print MAIL "Subject: $subject\n\n";
 		print MAIL $msg;
	close( MAIL );

}

sub logState ($$$) {

	my ( $host, $state, $time ) = @_;

	# Create a filehandle on our log file
	my $fh = FileHandle->new(">> $fileLoc");
	if (defined $fh) {

	        # Print to file the necessary information,
                # delimited with a colon
		print $fh "$host:$state:" .$time->datetime. "\n";
		$fh->close;
 	}
}

sub getPreviousState ($) {

	my ( $host ) = @_;

	# For each node in the array do the following
	foreach ( @hostStates ) {

		my( $line ) = $_;
		# Break up the information using our delimiter, 
                # the colon
		my ($domain, $state, $time) = split(/:/, $line, 3);

		# If we find our site return the previous state
		if ( $domain eq $host ) {

			return $state;

		}

 	}

}
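With the script saved somewhere sensible and made executable, the final piece is the cron entry mentioned earlier. A crontab line along these lines, with the path adjusted to wherever the script actually lives, polls every five minutes:

*/5 * * * * /usr/local/bin/monitor.pl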

Happy Birthday Apache (Software Foundation)


First published: 3rd of November 2009 for Technorati

 

 

(Photo from Flickr user jaaronfarr)


This week the Apache Software Foundation (ASF) is holding its annual US conference in Northern California for all things Apache. As part of the get-together, conference attendees, as well as those elsewhere this week, are invited to join in celebrating the 10th anniversary of The Apache Software Foundation.

Ah, I hear confusion in your voice: didn't Apache celebrate its 10th anniversary a couple of years ago?

Indeed the Apache Web Server has already celebrated its tenth birthday, but just as the Apache Web Server evolved from an ad hoc collection of software patches for the NCSA's web server HTTPd, the Apache Software Foundation is a registered not-for-profit organization that evolved from the loose affiliation of web developers and administrators who submitted and organized those patches in the first place.

Big deal? Well, yes - it is a big deal. See, the Apache Software Foundation is a decentralized community of developers that oversees the development of the Apache HTTP Server along with some 65 additional leading Open Source projects.

In essence the ASF provides the necessary framework for those projects to exist, from guidelines on how to organize project resources and contributions to a maturation process for new projects known as the Apache Incubator. That framework also includes providing legal protection to volunteers, defending against misuse of the Apache brand name and adoption of the Apache License.

In other words, the ASF is about learning from and building on the success of the world's most popular web server. Projects such as Tomcat, Lucene, SpamAssassin and CouchDB all owe a bit of their success to the ASF's dedication to providing transparent team-focused development projects the necessary computing and project resources needed for successful collaboration.

Along with sharing the same open source license and resources, the projects managed by the ASF - and, to a larger extent, the collection of project volunteers - share the ideal that project participation helps define not just the roles of individual contributors, but their responsibilities as well. Roles are assigned based upon demonstrated talent and ability, a meritocracy. And while anyone can contribute to a project outright, membership in the foundation as a whole is granted only to nominated and elected individuals who have actively contributed to the ASF and its projects.

Oh, and the ASF also organizes several ApacheCon conferences each year, including annual conferences in the United States and Europe.

And that is why the ASF's 10th anniversary is important. That is why you should take some time this week to celebrate.

(Ed. note: this author also reflects on his first time with Apache on his personal blog.)

About the Author

Paul is a technologist and all-around nice guy for technology-oriented organizations and parties. Besides maintaining this blog and website you can follow Paul's particular pontifications on Life, the Universe and Everything on Twitter.

   
   

