Recently in Web Category

How to Secure Your Website Part III: Keeping You and Your Website Safe


First published: 30th of May 2014 for Orbit Media Studios

History
In the late 1960s the mathematician Whitfield Diffie, now a well-known cryptographer, started his graduate work at Stanford. There he was introduced to the growing prominence of "time-sharing" computing: computers powerful enough to allow more than one user or task to execute at the same time. Contemplating the security implications of these new systems, Diffie and his colleagues realized that our everyday concepts of privacy and security would have to be enforceable in the new digital age.

Unfortunately, in the 1980s, the developments of multitasking and computer security were pushed aside for a new vision: computers became independent and personal. They sat on a desk, not in some closed-off room. They had all the required resources right there and didn't require connecting to another system. They went about doing one thing, in real time, with just one user.

Evolution
As the personal computer evolved, features from the days of mainframes and minicomputers were introduced. Multitasking and networking made their way into our everyday lives. Soon everyone had an email address and was finding their way onto the "Information Superhighway." Unfortunately, the vision of an independent personal computer led us to develop some bad habits and a false sense of security.

Consider what has been mentioned in the previous two posts about data in transit and in storage:

  • Encrypting and decrypting data requires intense mathematical computation, which can impact processing time and the perception of an application's responsiveness. In the world of 80s-era personal computing, the computer was not regularly connected to any remote device, was not executing multiple applications at the same time, was not interacting with various users and was not easily portable. At the time encryption was not popular because of the performance hit and limited security benefit.

Unfortunately, this habit of speed over security has continued. Platform and application developers still routinely shortcut security concerns in the name of performance.

  • The Internet provides a previously unknown sense of immediacy and intimacy despite great physical distances. Email and social networks allow us to view and share thoughts throughout the world as they occur. Ecommerce sites can organize lists of items personalized to one's tastes and fashions.

This intimacy creates a false sense of security: that one is safe, among friends and trusted institutions. Yet the wildly successful networking protocol TCP/IP, the foundation of today's Internet, was originally developed as a research initiative. It forsook some concerns, such as security, in favor of others, such as simplicity of implementation, as the research drove toward an initial, small-scale (by today's standards) implementation.

Safety Tips
There are, of course, steps that system architects and developers can take to rectify this situation. But there are also steps that users of these systems, be they end users of a website or its proprietors, can take:

  • Be aware of what data is being collected and how it is communicated

    • What information is being requested? Can it be considered "sensitive"?

    • Review how data is being transmitted between systems

    • If it is "sensitive," is it being transmitted securely?

  • Be aware of how information is being stored

    • Review what data is being stored

    • If the data is "sensitive," is it being stored securely?

    • Review "roles" assigned to different users who access the data and create unique accounts for each user

  • Overall, be proactive, not reactive

    • Create strong passwords

    • Use secured network protocols such as SSL and SFTP

    • Keep all applications and devices up-to-date

    • Undertake a risk assessment with your web developer and hosting provider

  • Know that no system is unbreakable

    • Like a chain, a complex system is only as strong as its weakest link

    • Compliance with PCI, HIPAA or other security policies is a starting point

    • Threats evolve as new vulnerabilities are routinely discovered; don't get discouraged

Think something is missing from the list? Post it in the comments section below.

How to Secure Your Website Part II: Storage


First published: 25th of Feb 2014 for Orbit Media Studios

Security is about reducing risk. All devices connected to the Internet have to deal with reducing the risk of data being compromised while in transit or in storage. Part I of How to Secure Your Website introduced the basics of securing website data while in transit. This post will cover storage.

Computer storage is often organized into a hierarchy based on the accessibility and volatility of data. The focus of this article is on secondary storage: a hard drive or flash memory.

Just about all devices these days incorporate some form of authentication and access control. Access control is simply the process of restricting access to a resource. Authentication is the verification of identity using some sort of credential, such as a username and password. Authorization is the granting of access based on that authenticated identity.

Due to poor risk assessment or implementation, access control processes are routinely compromised. Worse, the data stored on these compromised devices is rarely encrypted properly, if at all.

As mentioned in Part I, there are cryptographic methods that not only encode data, but provide additional methods of authorization and access control. So, why isn't all data encrypted in storage?

As with data in transit, encrypting data in storage has not always been considered a high priority. Speed is usually the focus for storage because access time impacts the overall speed of an application. Encrypting data on write and decrypting it on read requires more time and can create the perception that the application or website is slow. Hence encryption is rarely enabled for all data in storage.

How does Orbit handle data storage?

  • If a business case requires the storage of personally identifiable information, Orbit's policy is to enhance the CMS to encrypt the data for storage, decrypt it for viewing through a secured process, and destroy it after 30 days.

  • User passwords are hashed. Similar to a cipher, a hash is a method for encoding data. However, unlike a cipher, a hash is one-way. A strong password, properly hashed, is difficult to guess or reverse.
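
As an illustration (a minimal sketch, not Orbit's actual CMS code), recent versions of PHP (5.5 and later) ship a built-in password hashing API that generates a random salt and applies the bcrypt algorithm by default:

        // Hash a password for storage. A random salt is generated
        // automatically and embedded in the resulting string
        $hash = password_hash( 'correct horse battery staple', PASSWORD_DEFAULT );

        // Later, at login, compare the submitted password against the
        // stored hash. The hash itself is never decoded
        $submittedPassword = $_POST['password'];

        if ( password_verify( $submittedPassword, $hash ) ) {
                // Credentials check out
        }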

Does your website's data need to be secured? That's a risk assessment you need to make with your web developer and hosting provider. But consider what information is collected and stored on your website:

  • Name, Phone Number, Email, Street Addresses

    • Some people are very cautious about sharing even this basic level of information with others. However, those people will opt out of forms that ask for this information on principle

    • Most people share this level of information openly; taken by itself, it is optional to secure

  • Date of Birth, City of Birth, Mother's Maiden Name, Alma mater, Year of Graduation, Past Residences, Gender, Ethnicity, Account/Username

    • On their own, these pieces of information might be considered benign. When combined with other information, they form the basis of an identity

    • This information needs to be secured

  • Social Security Number, Driver's License ID, Bank Account Number, Credit Card Number, Account Password

    • This is information that is used for authentication of an identity

    • These pieces of information must be secured. Moreover, the securing of this information might need to pass some sort of industry compliance, such as PCI or HIPAA

Of course, this list is incomplete. Perhaps you can think of something to add to it? Post it in the comments section below.

How to Secure Your Website Part I: Communication


First published: 16th of Dec 2013 for Orbit Media Studios

Security is about risk management. Online, security is about reducing the risk of exposing information to the general Internet.

Consider the two actions occurring on any device connected to the Internet:

  • Communication
  • Storage

Communication

Communication is the heart of the Internet. The standard Internet protocol suite, known as TCP/IP (Transmission Control Protocol and Internet Protocol), is the basis for a collection of additional protocols designed to interconnect computer systems across the world in different ways. For example:

  • Domain Name - DNS (Domain Name System)
  • Email - SMTP (Simple Mail Transfer Protocol)
  • Web - HTTP (Hypertext Transfer Protocol)

Unfortunately, in the initial designs of the Internet, preventing unauthorized access to data while in transit and the verification of the communicating parties were not primary concerns. As a result, many of the protocols that use TCP/IP do not incorporate encryption or other security mechanisms by default.

The consequence is that anyone can "listen in" (not just the NSA) as data is transmitted across the Internet. That is, none of the protocols in the sample list employ any kind of encoding that restricts access to the data as it travels from one system to another.

HTTP - the protocol of the web - does, however, have a solution to this problem. SSL (Secure Sockets Layer) establishes a process to incorporate cryptographic methods that identify the parties in communication and establish a secure method of data transmission over the web (HTTPS).

Note: Today SSL's successor is TLS (Transport Layer Security), but it is still commonly referred to as SSL (or more accurately SSL/TLS).

Since the initial phase of establishing an SSL/TLS connection involves intense mathematical calculations, implementation in the past was limited to specific webpages (an e-commerce site's checkout page, for example). However, today the trend is to implement it as broadly as possible.

  • Popular sites, such as Google or Facebook, will conduct all communication over HTTPS by default by redirecting the initial HTTP request to HTTPS (a minimal sketch of such a redirect follows this list).
  • Popular web browsers will attempt to connect to a website via HTTPS first by rewriting the initial HTTP request to HTTPS before attempting a connection.
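
As a sketch of the first pattern (illustrative PHP; in practice this redirect is just as often handled in the web server's configuration), an application can bounce any plain-HTTP request over to HTTPS:

        // If the request did not arrive over TLS, redirect it;
        // $_SERVER['HTTPS'] is empty or 'off' for plain HTTP
        if ( empty( $_SERVER['HTTPS'] ) || $_SERVER['HTTPS'] == 'off' ) {

                $url = 'https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'];

                // A 301 response marks the redirect as permanent
                header( 'Location: ' . $url, true, 301 );
                exit;

        }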

Does your website need SSL/TLS? That's a risk assessment you need to make with your web developer and hosting provider. But consider:

  • The trend is to secure more data in transit, not less.
  • Your website's visitors are concerned not just about sensitive information they are actively providing (credit card information, for example), but about other information they are actively and passively providing, such as which webpage they are viewing.

Our next security post will cover the second topic: data storage. In the meantime, have a question about security and the web? Post your question in the comments section below.

Cleaning House


Over on Facebook a few days ago I commented about a personal "new year" project of reorganizing (first the home office, next this website):

"Phase one of reorganizing home office desk completed. Most useless item: note to self to clean desk (near the bottom no less) Single largest source of paper: Health Insurance"
Now I think I can add the most interesting item from the excavation:
Business card collection (Photo credit: pdweinstein)

A collection of business cards from contacts and interests from a few years ago, hiding among a stash of old passwords. The Thawte, O'Reilly and daemonnews cards are from contacts I had when I did more technical writing, which started out on the topics of SSL and Apache. The Google card is from a recruiter I had contact with at the time (still waiting on that job, Google ;-)

I had the pleasure of working with Eddie Codel and Scott Beale on Webzine events and even "server sat" Laughing Squid's hosting setup one Labor Day weekend while Scott and crew went to Burning Man.

Ah memories...

PHP, Nagios and MySQL Replication


Overview

MySQL replication is a handy way to distribute database processes across several servers. For example, a simple "master-slave" setup allows for a continuous backup of data from a primary database server, the master, to a secondary backup server, the slave. But what if the slave server stops replicating for some reason? It's not much of a backup if it fails to copy data for some undetermined length of time.

The good news is that MySQL provides a simple, detailed query for checking whether replication is taking place, one that will also report errors should they occur. The trick, of course, is getting notified quickly when an issue does occur. Given an existing Nagios setup for service monitoring at a PHP shop, the only missing piece is some code.

The Details
First off, Nagios has the ability to supply arguments to a script, just as if the script were being invoked at the command line. One common set of arguments for Nagios scripts is warning and critical thresholds. For example, a disk allocation script might take arguments to send a warning notification if the amount of free disk space falls to 20% and a critical notification if free space drops to 10% or less.

With MySQL replication, one area of concern is the network. Any latency between the two servers can induce lag in synchronizing the slave server with the master server. Given this, why not pass our script thresholds for how many seconds the secondary server may fall behind the primary?

For processing short-form and long-form command-line options in PHP, there is the getopt function:

        $shortopts  = "";
        $shortopts .= "w:"; // Required value for warning
        $shortopts .= "c:"; // Required value for critical

        $longopts  = array(
                // No long form options
        );

        // Parse our options with getopt
        $options = getopt( $shortopts, $longopts );

        // If slave is x second behind for warning state
        $delayWarn = $options['w'];

        // If slave is x second behind for a critical state
        $delayCritical = $options['c'];
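
Invoked by hand, the thresholds are then passed like so (the script's file name here is just an illustration):

$ ./check_mysql_replication.php -w 60 -c 300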

Besides being in a critical or warning state, Nagios also has conditions for normal (OK) and unknown. Each state is associated with a status code, following the Nagios plugin convention, that will be returned when the script exits, hence the following associative array:

        // Nagios plugin exit codes: 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN
        $statuses = array( 'UNKNOWN' => '3', 'OK' => '0', 'WARNING' => '1', 'CRITICAL' => '2' );

For the moment, we don't know what condition our replication setup is in. Nor do we have any additional information about the current state, so let's go ahead and define that as such:

        $state = 'UNKNOWN';
        $info = '';

The next step is to connect to our slave MySQL instance and query its status using "SHOW SLAVE STATUS":

		$db = new mysqli( $dbHost, $dbUser, $dbPasswd );

		// Prepare query statement & execute
		$query = $db->prepare( "SHOW SLAVE STATUS" );
		$query->execute();

The MySQL query is going to return a number of columns in a single result row. Of immediate concern is whether the slave is in an error state. For that we take a look at the columns labeled Slave_IO_Running, Slave_SQL_Running and Last_Errno.

        // If Slave_IO_Running OR Slave_SQL_Running are not Yes
        // OR Last_Errno is not 0 we have a problem
        if (( $SlaveIORunning != 'Yes' ) OR ( $SlaveSQLRunning != 'Yes' )
        	OR ( $LastErrno != '0' )) {

            	$state = 'CRITICAL';

If the slave server is not in error, then we'll go ahead and check how far behind it is, and set a warning or critical state given the earlier parameters from the beginning of the script:

        } else if (( $SlaveIORunning == 'Yes' ) AND ( $SlaveSQLRunning == 'Yes' ) AND ( $LastErrno == '0' )) {

        	// So far so good, but how far behind is the slave database?
			if ( $SecondsBehindMaster >= $delayCritical ) {

            	$state = 'CRITICAL';

            } else if ( $SecondsBehindMaster >= $delayWarn ) {

            	$state = 'WARNING';

            } else {

            	$state = 'OK';

            }

		}

Now that we have determined the state of the secondary database server, we can pass along some information for Nagios to process.

        // What to output?
        switch ( $state ) {

                case "UNKNOWN":
                        $info = 'Replication State: UNKNOWN';
                        break;

                case "OK":
                        $info = 'Replication State: OK Master Log File: ' .$MasterLogFile. ' Read Master Log Position: ' .$ReadMasterLogPos. ' Replication Delay (Seconds Behind Master): ' .$SecondsBehindMaster;
                        break;

                case "WARNING":
                        $info = 'Replication State: WARNING Master Log File: ' .$MasterLogFile. ' Read Master Log Position: ' .$ReadMasterLogPos. ' Replication Delay (Seconds Behind Master): ' .$SecondsBehindMaster;
                        break;

                case "CRITICAL":
                        $info = 'Replication State: CRITICAL Error: ' .$LastErrno. ': ' .$Last_Error. ' Replication Delay (Seconds Behind Master): ' .$SecondsBehindMaster;
                        break;

        }

All that is left is to transfer our information to Nagios via standard out and an exit code:

        // Need to set type to integer for exit() to handle the code properly
        $status = $statuses[$state];
        settype( $status, "integer" );

        fwrite( STDOUT, $info );
        exit( $status );

Putting it all together we get something like this:

#!/usr/bin/php
<?php

	$shortopts  = "";
	$shortopts .= "w:"; // Required value for warning
	$shortopts .= "c:"; // Required value for critical

	$longopts  = array( 
		// No long form options 
	);

	$options = getopt( $shortopts, $longopts );

	// If slave is x second behind, set state as warn
	$delayWarn = $options['w'];

	// If slave is x second behind, set state as critical
	$delayCritical = $options['c'];

	// Nagios plugin exit codes: 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN
	$statuses = array( 'UNKNOWN' => '3', 'OK' => '0', 'WARNING' => '1', 'CRITICAL' => '2' );
	$state = 'UNKNOWN';
	$info = '';
	
	$dbUser = 'user';
	$dbPasswd = 'password';
	$dbHost = 'localhost';

	$db = new mysqli( $dbHost, $dbUser, $dbPasswd );

	if ( mysqli_connect_errno() ) {
	
		// Well this isn't good
		$state = 'CRITICAL';
		$info = 'Cannot connect to db server';

	} else {

		// Prepare query statement & execute
		if ( $query = $db->prepare( "SHOW SLAVE STATUS" )) {

			$query->execute();

			// Bind our result columns to variables
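			// Note: the exact set of columns returned by SHOW SLAVE STATUS
			// varies with the MySQL version, so the list of variables below
			// must match the server's column count exactly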
			$query->bind_result( $SlaveIOState, $MasterHost, $MasterUser, $MasterPort, $ConnectRetry, $MasterLogFile, $ReadMasterLogPos, $RelayLogFile, $RelayLogPos, $RelayMasterLogFile, $SlaveIORunning, $SlaveSQLRunning, $ReplicateDoDB, $ReplicateIgnoreDB, $ReplicateDoTable, $ReplicateIgnoreTable, $ReplicateWildDoTable, $ReplicateWildIgnoreTable, $LastErrno, $Last_Error, $SkipCounter, $ExecMasterLogPos, $RelayLogSpace, $UntilCondition, $UntilLogFile, $UntilLogPos, $MasterSSLAllowed, $MasterSSLCAFile, $MasterSSLCAPath, $MasterSSLCert, $MasterSSLCipher, $MasterSSLKey, $SecondsBehindMaster, $MasterSSLVerifyServerCert, $LastIOErrno, $LastIOError, $LastSQLErrno, $LastSQLError );

			// Go fetch
			$query->fetch();

			// Done
			$query->close();

			// and done
			$db->close();
	
			// If Slave_IO_Running OR Slave_SQL_Running are not Yes OR Last_Errno is not 0 we have a problem
			if (( $SlaveIORunning != 'Yes' ) OR ( $SlaveSQLRunning != 'Yes' ) OR ( $LastErrno != '0' )) {
		
				$state = 'CRITICAL';	
		
		} else if (( $SlaveIORunning == 'Yes' ) AND ( $SlaveSQLRunning == 'Yes' ) AND ( $LastErrno == '0' )) {
	
			// So far so good, but how far behind is the slave database?
	
				if ( $SecondsBehindMaster >= $delayCritical ) {
				
					$state = 'CRITICAL';
				
				} else if ( $SecondsBehindMaster >= $delayWarn ) {
				
				$state = 'WARNING';
				
				} else {
	
					$state = 'OK';
		
				}
			
			}
	
	
		} else {
			
			// Well this isn't good
			$state = 'CRITICAL';
			$info = 'Cannot query db server';			
			
		}
	
		// What to output?
		switch ( $state ) {

			case "UNKNOWN":
				$info = 'Replication State: UNKNOWN';
				break;

			case "OK":
				$info = 'Replication State: OK Master Log File: ' .$MasterLogFile. ' Read Master Log Position: ' .$ReadMasterLogPos. ' Replication Delay (Seconds Behind Master): ' .$SecondsBehindMaster;
				break;

			case "WARNING":
				$info = 'Replication State: WARNING Master Log File: ' .$MasterLogFile. ' Read Master Log Position: ' .$ReadMasterLogPos. ' Replication Delay (Seconds Behind Master): ' .$SecondsBehindMaster;
				break;

			case "CRITICAL":
				if ( $info == '' ) {
					
					$info = 'Replication State: CRITICAL Error: ' .$LastErrno. ': ' .$Last_Error. ' Replication Delay (Seconds Behind Master): ' .$SecondsBehindMaster;
			
				}
			break;
			
		}
	
	}

	// Need to set type to integer for exit to handle the exit code properly
	$status = $statuses[$state];
	settype( $status, "integer" );

	fwrite( STDOUT, $info );
	exit( $status );


?>
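
On the Nagios side, the script gets wired in with a command and a service definition along these lines (names, paths and thresholds here are illustrative; generic-service is the template from Nagios' sample configuration):

# Command definition: forward the warning and critical thresholds
define command {
    command_name    check_mysql_replication
    command_line    $USER1$/check_mysql_replication.php -w $ARG1$ -c $ARG2$
}

# Service check: warn at 60 seconds behind, go critical at 300
define service {
    use                     generic-service
    host_name               db-slave
    service_description     MySQL Replication
    check_command           check_mysql_replication!60!300
}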

So weird, Connecting HavenCo and Red Hat


It's a bit weird to be reading about Red Hat posting $1 billion in revenue in a year for the first time, or this Ars article by James Grimmelmann about HavenCo, since, to me personally, that's part of my past.

See, as Grimmelmann notes, HavenCo's chairman of the board was Sameer Parekh, whom I worked with/for at a different internet security company, C2Net Software. Almost everything Grimmelmann writes about I remember first-hand. I even remember reading the Wired articles he references (and how could I forget Neal Stephenson's Cryptonomicon; it's still one of my favorite novels).

Around the same time, Steven Levy wrote the non-fiction book Crypto, which tells part of the history of securing communications and modern computing networks; from Whitfield Diffie and the initial concerns of privacy to Netscape and the creation of SSL.

Alas, Levy's book is already 10 years old. While it covers the basis for the cryptography that powers today's Internet, it doesn't necessarily tell the whole story. Missing are parts such as the shortcomings of SSL and its open-standard successor, TLS; the adoption of "virtual private networks," which allow the use of primarily public networks, such as the Internet, to connect remote points securely, as if they were part of a central private network; and the fact that much of today's email remains in "plaintext," despite the availability of encryption methods such as PGP.

Most of what happens on today's Internet at any given moment took root around the same time as Levy's work, 1999-2001, when I was right there working for C2Net, with its own vision of how to secure everyday communications on the "Information Superhighway".

And what happened to C2Net? Well, it was sold to... Red Hat, of which I became an employee (and then an ex-employee).

So yeah, I have this odd mix of "I remember that" (HavenCo) and "oh, good for them" (Red Hat). Then I think, wow, I wasn't just a part of some pioneering companies "back in the day", but also witnessed some completely cutting-edge stuff that's only now being understood by the world at large.

So weird.

Adding SQL Server Support in PHP on Linux


Back in July I outlined a method for establishing an SSH tunnel between Linux and Windows machines. The goal of the connection was to give a PHP script on a front-end Linux web server access to information stored on a back-end private Windows server running SQL Server.

What I didn't mention at the time was how I enabled PHP support for Microsoft's SQL Server.

The most common deployments of PHP on Linux include support for MySQL or Postgres, depending largely on other factors such as the organization's preference, experience and requirements. Since PHP can be deployed on Windows, support for Microsoft's SQL Server exists. Such support is nontrivial to enable in PHP on Linux. It is, however, possible:

To enable SQL Server support in PHP on Linux, the PHP extension that provides said support requires the FreeTDS library to build against. FreeTDS is an open source implementation of the C libraries originally marketed by Sybase and Microsoft to enable access to their database servers.

Downloading the source code, building and installing FreeTDS is straightforward:


$ wget \
ftp://ftp.ibiblio.org/pub/Linux/ALPHA/freetds/stable/freetds-stable.tgz
$ gunzip freetds-stable.tgz
$ tar xf freetds-stable.tar
$ cd freetds-*
$ ./configure
$ make
$ make install

The next step is to build the PHP source code against the FreeTDS libraries to include SQL Server support. This can be done in one of two ways: build PHP from scratch or build just the specific PHP extension. Since I was working on a server with a preexisting install of PHP, I opted for door number two:

Locate or download the source code for the preexisting version of PHP. Next, copy the mssql extension source code from the PHP source code into a separate php_mssql directory:


$ cp ext/mssql/config.m4 ~/src/php_mssql
$ cp ext/mssql/php_mssql.c ~/src/php_mssql
$ cp ext/mssql/php_mssql.h ~/src/php_mssql

Now build the source code, pointing it to where FreeTDS has been installed:


$ phpize
$ ./configure --with-mssql=/usr/local/freetds
$ make

There should now be an mssql.so file in ~/src/php_mssql/modules/ that can be copied into the existing PHP install. Once copied, the last remaining steps are to enable the extension by modifying the php.ini file and restarting the Apache HTTP Server.
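
For example (the paths here are illustrative and differ between distributions; php-config reports the actual extension directory):

$ cp ~/src/php_mssql/modules/mssql.so `php-config --extension-dir`
$ echo "extension=mssql.so" >> /etc/php.ini
$ apachectl restart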

Additional Information can be found here: Connecting PHP on Linux to MSQL on Windows

Establish and Maintain an SSH Tunnel between Linux and Windows


The Situation
Over the years, I've worked in numerous computing environments and have come to appreciate heterogeneous systems. In my mind, all system administrators should experience how different platforms solve similar problems, just as all programmers should be exposed to different programming languages.

Of course this means being able to play well with others. Sometimes, that's easier said than done.

A recent project requirement stipulated being able to connect a public web server with a private database system. Not an uncommon requirement, but it did place a hurdle immediately in the way. The web application, developed with the Linux, Apache, MySQL and PHP (LAMP) stack, needed a method to connect securely to the private database system, which, for fun, was not MySQL but instead Microsoft's SQL Server.

The Problem
The initial requirement called on connecting to the SQL Server using Microsoft's virtual private network (VPN) solution, Microsoft Point-to-Point Encryption (MPPE). Not impossible, since support for MPPE on any Linux distribution simply requires modifying the Linux kernel and recompiling the kernel in Linux is usually a non-issue.

However, in this case the web application would be running on a basic virtual private server (VPS) and a Linux VPS doesn't run its own kernel. Instead Linux VPSes run on a shared kernel used by all the different virtualized servers running on the same hardware.

Net result, no modification of the Linux kernel would be possible on the VPS.

One alternative to this hurdle would have been to switch from a Linux VPS to a Windows VPS. This would have been technically possible since Apache, MySQL and PHP have viable Windows ports. Alas, the hosting provider in question didn't yet offer Windows VPSes. They would shortly, but couldn't guarantee that their Windows VPS solution would be available in time for this particular project's deadline.

A second alternative could have been to upgrade from a virtualized server to a dedicated server. But that would have added more computing resources than what was required. From a business perspective, the added monthly cost wasn't justifiable. Not when a third alternative existed.

A Workable Solution
VPN is one of those terms that can refer to something generic as well as something very specific[1]. This distinction sets up alternative number three. The secure network connection requirement would remain; the implementation could simply change[2].

Specifically the secure connection would be implemented via SSH instead of via MPPE.

With SSH, an encrypted tunnel can be established through an open port in the private network's firewall. This tunnel securely forwards network traffic from a specified local port to a port on the remote machine.

Most Linux distributions these days install OpenSSH as part of their base system install. OpenSSH is a free and open implementation of the SSH protocol and includes client and server software. For those distributions that don't install it by default, installing OpenSSH is usually a trivial matter via the distribution's package manager.

Windows, on the other hand, has no such base installation of an SSH implementation. There are, however, a number of free software versions for Windows. For the case at hand, freeSSHD was selected to provide a free, open source version of the SSH server software.

Configuring freeSSHD to enable tunneling requires the following steps:

  1. Click on the "Tunneling" tab
  2. Check to enable port forwarding and apply the change
  3. Click on the "Users" tab
  4. Create or edit a user and enable tunnel access

Once the firewall has been configured to allow SSH traffic on port 22, establishing the tunnel from the Linux client to the Windows server is as simple as typing the following at the Linux command-line:


ssh -f -N -L 127.0.0.1:1433:192.168.1.2:1433 username@example.org

Here ssh creates and sends to the background (-f option) a tunnel that executes no remote commands (-N option), begins at the localhost port 1433 (127.0.0.1:1433), terminates at the remote address and port (192.168.1.2:1433) and authenticates using the given username at the remote location (the public IP address or domain name for the private network).

But Wait There's More
There is, however, a minor problem with this SSH tunnel. As described, establishing the tunnel is an interactive process: the command needs to be executed and the user's password provided for authentication. In most cases a simple shell script executed by cron would solve this minor issue. However, for the sake of security, OpenSSH doesn't provide a command-line option for supplying passwords.

This authentication step can be managed in one of two ways. One is the use of a key management program such as ssh-agent. The second, more common option is to create a passphrase-less key.

The first step in creating a passphrase-less key is to generate a private/public key pair[3]. In Linux this is done by issuing the command:


ssh-keygen -t rsa

This generates a private/public key pair based on either the RSA or DSA algorithm, depending on what is provided with the -t option.

When prompted to enter a passphrase for securing the private key, simply press enter. To confirm the empty passphrase, press enter again.

The next step, after copying the public key onto the Windows server, is to enable the use of the public key for authentication. In freeSSHD the steps are:

  1. Click on the "Users" tab
  2. Select a user and click on "Change"
  3. Select "Public Key" from the "Authorization" drop-down
  4. Click on "OK" to save changes to users
  5. Next click on the "Authentication" tab
  6. Using the browse button, select the directory where the users' public keys are kept
  7. Enable public-key authentication by choosing the "Allowed" button under "Public-Key Authentication"
  8. Click on "OK" to save the changes to authentication

With the passphrase-less keys in place, the last step is to automate the tunnel itself. In this case, instead of a shell script, I opted to use a program called autossh.

autossh is a program that can start a copy of ssh and monitor the connection, restarting it when necessary. All autossh needs is an otherwise unused local port through which to pass its own monitoring traffic, specified with the -M option. So our one-time initial startup of the ssh tunnel looks similar to the previous example, wrapped in autossh:


autossh -M 20000 -f -N -L 127.0.0.1:1433:192.168.1.2:1433 username@example.org
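
And since the tunnel should come back after a server reboot, one simple approach (a sketch; a proper init script would also work) is a cron @reboot entry for the tunnel's user:

# crontab entry: re-establish the tunnel at boot
@reboot autossh -M 20000 -f -N -L 127.0.0.1:1433:192.168.1.2:1433 username@example.org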




[1] This means, alas, it is also one of those terms that can cause confusion, especially between technical and non-technical people, if not defined at the outset.

[2] This is one of those places where knowledge of different solutions solving a similar problem becomes handy.

[3] For user authentication, SSH can be either password-based or key-based. In key-based authentication, SSH uses public-key cryptography, where the public key is distributed to identify the owner of the matching private key. The passphrase, in this case, is used to authenticate access to the private key.

Let's Play Two (or Web Analytics for Fun and Profit)


Back in October it occurred to me that it had been 5 years since the White Sox beat the Astros to win the World Series. As a result of that realization, I dug into my video collection and quickly put together and posted this video:



To say that this video is the most popular video I've posted on YouTube thus far is an understatement. What's more interesting, for those of us who work in the medium of the web, are the traffic statistics of those who have viewed the video in the past 5 months:


[Image: konerko_vid_stats.jpeg, the video's YouTube audience statistics]

So what do these stats tell us? Well, to some extent they tell us a few things we might have already "known," such as that most baseball fans (or at least White Sox fans) are mature males residing in the United States.

What I find interesting is when people were viewing this little video. Obviously some people viewed it right when I posted it last October, during the 2010 World Series. Then, as expected, things go quiet for the most part. Then, as Spring Training builds to today's Opening Day, so does the traffic.

But wait, you might be wondering, what about the spike of traffic in December?[1] What could possibly have driven the largest one-time surge in traffic for a handful of days? Perhaps I engaged in a little social marketing? Or maybe the video got popular on a sports site?

Well, as it happened, it did get posted on a local sports site, but that doesn't completely explain the surge, or why said site was posting a baseball video in December.

Why did it get popular so quickly (and fade so quickly) in December? Because on December 8, 2010, Paul Konerko, the hero of the video, re-signed with the Chicago White Sox for 3 more years.

Interesting, No?




[1] Well, if you really are a White Sox fan, you might not be wondering, but don't spoil the ending, please?

Web Development: Before and After the Client


First published: 17th of Dec 2010 for Orbit Media Studios

For someone looking for a web design firm, how a website is developed might seem meaningless. Who cares, so long as it works?

Yet how well a website works can be measured in part by the costs associated with it. The direct cost is the total price for the initial project. The indirect costs consist of secondary expenses related to ongoing marketing and support during the lifetime of the website.

At Orbit we have two development processes. Both are designed to reduce costs and improve quality. The first is an internal process that starts before the client ever arrives. The second process begins at the first client meeting as we discover the project's specific requirements.

Internal Development
First, what do we mean by develop? Development usually refers to the programming of the website, whereas design refers to the look and feel.

With development, we need to consider a few basic questions. What features are required to make an ecommerce website work, for example? Regardless of the item being sold or the company selling the item, the basic logic can be described in a few steps:

  1. A customer selects an item to purchase
  2. The selected item is placed into a shopping cart
  3. The customer decides to checkout, continue shopping or abandon their cart
  4. To checkout, the customer initiates the process of purchasing what is in their cart
  5. The store presents a total bill for the item(s) desired by the customer
  6. The customer presents a method of payment
  7. The payment is verified and the transaction is completed

To be sure, this purchasing feature isn't complete and plenty of questions can remain. However, this generalized logic provides a starting point.

This is where web development at Orbit begins, identifying basic features of a potential website.

Developer Day
Roughly once a month, all of Orbit's developers spend the day working on such questions, analyzing and programming within various sandboxes.

A sandbox is simply a generic website in which the development team can create, test and improve different features and find the best approach for virtually any type of website. It's a play area for programmers.

The focus is on breaking down the feature into workable steps and rapidly building them. In doing so we consider what has worked for clients in the past along with growing trends such as social media integration.

Each Developer Day represents the repeating of a cycle of planning, analyzing, coding and acceptance testing in order to get the feature built right.

But, as we mentioned, plenty of questions can remain. Not all features will work perfectly "out of the box" for all clients.

Developing with the Client
This brings us to the second process of web development at Orbit: developing with the client. Now the concern is on completion of a particular website. Thus the focus for the developer changes from generalized concepts to specific implementation.

But before a developer can customize the code for a client, a new process of discovery and planning must begin. The phases of this process break down into the following, with direct client involvement at each step:

  1. A Kick-Off Meeting where initial questions about goals and scope are answered
  2. Discovery of the layout and flow for the proposed website
  3. Designing the look of the website and expressing the client's brand
  4. Development, implementing and testing
  5. Deploying the website for public use

In this sequential development process each step follows from the last. There is a specific beginning and ending. One step cannot be started until the previous step is completed and approved.

The Big Payoff
Understanding the development process for a custom website is important. How many hours a developer works on a client's website and the dependability of the underlying code affect its ultimate cost.

Both direct and indirect costs impact the client's ability to market their website and can limit the overall return of the website.

Rather than starting from scratch, Orbit takes the pieces we have built and improved earlier and applies them to the client's project, customizing the features to the needs of the website. In doing so we execute different development processes in order to keep our client's costs manageable while adding value to their business.

About the Author

Paul is a technologist and all-around nice guy for technology-oriented organizations and parties. Besides maintaining this blog and website, you can follow Paul's particular pontifications on Life, the Universe and Everything on Twitter.

   
   

