Recently in Web Category

How to Secure Your Website Part I: Communication


First published: 16th of Dec 2013 for Orbit Media Studios

Security is about risk management. Online, security is about reducing the risk of exposing information to the general Internet.

Consider the two actions occurring on any device connected to the Internet:

  • Communication
  • Storage

Communication

Communication is the heart of the Internet. The standard Internet protocol suite, known as TCP/IP (Transmission Control Protocol and Internet Protocol), is the basis for a collection of additional protocols designed to interconnect computer systems across the world in different ways. For example:

  • Domain Name - DNS (Domain Name System)
  • Email - SMTP (Simple Mail Transfer Protocol)
  • Web - HTTP (Hypertext Transfer Protocol)

Unfortunately, in the initial designs of the Internet, preventing unauthorized access to data while in transit and the verification of the communicating parties were not primary concerns. As a result, many of the protocols that use TCP/IP do not incorporate encryption or other security mechanisms by default.

The consequence is that anyone can "listen in" (not just the NSA) as data is transmitted across the Internet. That is, none of the protocols in the sample list employ any kind of encoding that restricts access to the data as it travels from one system to another.

HTTP - the protocol of the web - does, however, have a solution to this problem. SSL (Secure Sockets Layer) incorporates cryptographic methods that identify the parties in communication and provide a secure method of data transmission over the web (HTTPS).

Note: Today SSL's successor is TLS (Transport Layer Security), but it is still commonly referred to as SSL (or more accurately SSL/TLS).

Since the initial phase of establishing an SSL/TLS connection involves computationally intensive cryptographic calculations, implementation in the past had been limited to specific webpages (an e-commerce site's checkout page, for example). However, today the trend is to implement as broadly as possible.

  • Popular sites, such as Google or Facebook, will conduct all communication over HTTPS by default by redirecting the initial HTTP request to HTTPS (a minimal sketch of such a redirect follows this list).
  • Popular web browsers will attempt to connect to a website via HTTPS first by rewriting the initial HTTP request to HTTPS before attempting a connection.
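For illustration, here is one way such a redirect might be handled at the application level. This is a minimal sketch in PHP, assuming a standard Apache/PHP setup where the server populates $_SERVER['HTTPS'] when TLS is in use; most production sites would instead configure the redirect in the web server itself.

<?php
// Minimal sketch: redirect any plain-HTTP request to its HTTPS equivalent.
// Assumes the web server sets $_SERVER['HTTPS'] when TLS is in use.
if ( empty( $_SERVER['HTTPS'] ) || $_SERVER['HTTPS'] == 'off' ) {
        $url = 'https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'];
        header( 'Location: ' . $url, true, 301 ); // permanent redirect
        exit;
}
?>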

Does your website need SSL/TLS? That's a risk assessment you need to make with your web developer and hosting provider. But consider:

  • The trend is to secure more data in transit, not less.
  • Your website's visitors are concerned not just about sensitive information they are actively providing (credit card information, for example), but also about other information they are actively and passively providing, such as which webpages they are viewing.

Our next security post will cover the second topic: data storage. In the meantime, have a question about security and the web? Post your question in the comments section below.

Cleaning House


Over on Facebook a few days ago I commented about a personal "new year" project of reorganizing (first the home office, next this website):

"Phase one of reorganizing home office desk completed. Most useless item: note to self to clean desk (near the bottom no less) Single largest source of paper: Health Insurance"
Now I think I can add the most interesting item from the excavation:

Business card collection (Photo credit: pdweinstein)

A collection of business cards from contacts and interests from a few years ago, hiding among a stash of old passwords. The Thawte, O'Reilly and daemonnews cards are from contacts I had when I did more technical writing, which started out on topics of SSL and Apache. The Google card is from a recruiter I had contact with at the time (still waiting on that job, Google ;-)

I had the pleasure of working with Eddie Codel and Scott Beale on Webzine events and even "server sat" Laughing Squid's hosting setup one Labor Day weekend while Scott and crew went to Burning Man.

Ah memories...

PHP, Nagios and MySQL Replication


Overview

MySQL replication is a handy way to distribute database processes across several servers. For example, a simple "master-slave" setup allows for a continuous backup of data from a primary database server, the master, to a secondary backup server, the slave. But what if the slave server stops replicating for some reason? It's not much of a backup if it fails to copy data for some undetermined length of time.

The good news is that MySQL provides a simple, detailed query for checking whether replication is taking place, and it will report errors should they occur. The trick, of course, is getting notified quickly when an issue does occur. Given an existing Nagios setup for service monitoring at a PHP shop, the only missing piece is some code.

The Details
First off, Nagios has the ability to supply arguments to a script, just as if the script were being invoked at the command line. One common set of arguments for Nagios scripts are warning and critical thresholds. For example, a disk allocation script might take arguments to send a warning notification if the amount of free disk space reaches 20% and a critical notification if free space is 10% or less.

With MySQL replication, one area of concern is the network. Any latency between the two servers can induce lag in synchronizing the slave server with the master server. Given this, why not pass along thresholds to our script specifying how many seconds the secondary server may fall behind the primary?

For processing command line short form and long form options in PHP there is the getopt function:

        $shortopts  = "";
        $shortopts .= "w:"; // Required value for warning
        $shortopts .= "c:"; // Required value for critical

        $longopts  = array(
                // No long form options
        );

	// Parse our options with getopt
        $options = getopt( $shortopts, $longopts );

        // If slave is x seconds behind, set a warning state
        $delayWarn = $options['w'];

        // If slave is x seconds behind, set a critical state
        $delayCritical = $options['c'];

Besides being in a critical or warning state, Nagios also has conditions for normal (OK) and unknown. Each state is associated with a return code, per the Nagios plugin convention (0 for OK, 1 for warning, 2 for critical and 3 for unknown), that will be set upon completion of the script, hence the following associative array:

        // Nagios conditions we can be in, mapped to plugin return codes
        $statuses = array( 'UNKNOWN' => '3', 'OK' => '0', 'WARNING' => '1', 'CRITICAL' => '2' );

For the moment, we don't know what condition our replication setup is in. Nor do we have any additional information about the current state, so let's go ahead and define that as such:

        $state = 'UNKNOWN';
        $info = '';

The next step is to go ahead and connect to our slave MySQL instance and query its status using "SHOW SLAVE STATUS;"

		$db = new mysqli( $dbHost, $dbUser, $dbPasswd );

		// Prepare query statement & execute
		$query = $db->prepare( "SHOW SLAVE STATUS" );
		$query->execute();

The MySQL query is going to return a number of columns in a single result row. Of immediate concern is whether or not the slave is in an error state. For that we take a look at the columns labeled Slave_IO_Running, Slave_SQL_Running and Last_Errno.
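(In the full listing at the end of this post, every result column is bound to a local variable with bind_result(). As a simpler sketch of the same idea, and an alternative to the prepared statement, the single status row can also be fetched as an associative array; the variable and column names below match those used throughout the post:)

        // Pull out the columns of interest from the single status row
        $result = $db->query( "SHOW SLAVE STATUS" );
        $row = $result->fetch_assoc();

        $SlaveIORunning      = $row['Slave_IO_Running'];
        $SlaveSQLRunning     = $row['Slave_SQL_Running'];
        $LastErrno           = $row['Last_Errno'];
        $Last_Error          = $row['Last_Error'];
        $SecondsBehindMaster = $row['Seconds_Behind_Master'];
        $MasterLogFile       = $row['Master_Log_File'];
        $ReadMasterLogPos    = $row['Read_Master_Log_Pos'];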

        // If Slave_IO_Running OR Slave_SQL_Running are not Yes 
        // OR Last_Errno is not 0 we have a problem
        if (( $SlaveIORunning != 'Yes' ) OR ( $SlaveSQLRunning != 'Yes' ) 
        	OR ( $LastErrno != '0' )) {

            	$state = 'CRITICAL';

If the slave server is not in error, then we'll go ahead and check how far behind it is, setting a warning or critical state based on the thresholds gathered at the beginning of the script:

        } else {

                // So far so good, but what about time delay: how far behind is the slave database?
                if ( $SecondsBehindMaster >= $delayCritical ) {

                        $state = 'CRITICAL';

                } else if ( $SecondsBehindMaster >= $delayWarn ) {

                        $state = 'WARNING';

                } else {

                        $state = 'OK';

                }

        }

Now that we have determined the state of the secondary database server, we can pass along some information for Nagios to process.

        // What to output?
        switch ( $state ) {

                case "UNKNOWN":
                        $info = 'Replication State: UNKNOWN';
                        break;

                case "OK":
                        $info = 'Replication State: OK Master Log File: ' .$MasterLogFile. ' Read Master Log Position: ' .$ReadMasterLogPos. ' Replication Delay (Seconds Behind Master): ' .$SecondsBehindMaster;
                        break;

                case "WARNING":
                        $info = 'Replication State: WARNING Master Log File: ' .$MasterLogFile. ' Read Master Log Position: ' .$ReadMasterLogPos. ' Replication Delay (Seconds Behind Master): ' .$SecondsBehindMaster;
                        break;

                case "CRITICAL":
                        $info = 'Replication State: CRITICAL Error: ' .$LastErrno. ': ' .$Last_Error. ' Replication Delay (Seconds Behind Master): ' .$SecondsBehindMaster;
                        break;

        }

All that is left is to transfer our information to Nagios via standard out and an exit code:

        // Need to set type to integer for exit() to handle the code properly
        $status = $statuses[$state];
        settype( $status, "integer" );

        fwrite( STDOUT, $info );
        exit( $status );

Putting it all together we get something like this:

#!/usr/bin/php
<?php

	$shortopts  = "";
	$shortopts .= "w:"; // Required value for warning
	$shortopts .= "c:"; // Required value for critical

	$longopts  = array( 
		// No long form options 
	);

	$options = getopt( $shortopts, $longopts );

	// If slave is x seconds behind, set state as warning
	$delayWarn = $options['w'];

	// If slave is x seconds behind, set state as critical
	$delayCritical = $options['c'];

	// Nagios conditions we can be in, mapped to plugin return codes
	$statuses = array( 'UNKNOWN' => '3', 'OK' => '0', 'WARNING' => '1', 'CRITICAL' => '2' );
	$state = 'UNKNOWN';
	$info = '';
	
	$dbUser = 'user';
	$dbPasswd = 'password';
	$dbHost = 'localhost';

	$db = new mysqli( $dbHost, $dbUser, $dbPasswd );

	if ( mysqli_connect_errno() ) {
	
		// Well this isn't good
		$state = 'CRITICAL';
		$info = 'Cannot connect to db server';

	} else {

		// Prepare query statement & execute
		if ( $query = $db->prepare( "SHOW SLAVE STATUS" )) {

			$query->execute();

			// Bind our result columns to variables
			$query->bind_result( $SlaveIOState, $MasterHost, $MasterUser, $MasterPort, $ConnectRetry, $MasterLogFile, $ReadMasterLogPos, $RelayLogFile, $RelayLogPos, $RelayMasterLogFile, $SlaveIORunning, $SlaveSQLRunning, $ReplicateDoDB, $ReplicateIgnoreDB, $ReplicateDoTable, $ReplicateIgnoreTable, $ReplicateWildDoTable, $ReplicateWildIgnoreTable, $LastErrno, $Last_Error, $SkipCounter, $ExecMasterLogPos, $RelayLogSpace, $UntilCondition, $UntilLogFile, $UntilLogPos, $MasterSSLAllowed, $MasterSSLCAFile, $MasterSSLCAPath, $MasterSSLCert, $MasterSSLCipher, $MasterSSLKey, $SecondsBehindMaster, $MasterSSLVerifyServerCert, $LastIOErrno, $LastIOError, $LastSQLErrno, $LastSQLError );

			// Go fetch
			$query->fetch();

			// Done
			$query->close();

			// and done
			$db->close();
	
			// If Slave_IO_Running OR Slave_SQL_Running are not Yes OR Last_Errno is not 0 we have a problem
			if (( $SlaveIORunning != 'Yes' ) OR ( $SlaveSQLRunning != 'Yes' ) OR ( $LastErrno != '0' )) {
		
				$state = 'CRITICAL';	
		
			} else {
	
				// So far so good, but what about time delay: how far behind is the slave database?
	
				if ( $SecondsBehindMaster >= $delayCritical ) {
				
					$state = 'CRITICAL';
				
				} else if ( $SecondsBehindMaster >= $delayWarn ) {
				
					$state = 'WARNING';
				
				} else {
	
					$state = 'OK';
		
				}
			
			}
	
	
		} else {
			
			// Well this isn't good
			$state = 'CRITICAL';
			$info = 'Cannot query db server';			
			
		}
	
		// What to output?
		switch ( $state ) {

			case "UNKNOWN":
				$info = 'Replication State: UNKNOWN';
				break;

			case "OK":
				$info = 'Replication State: OK Master Log File: ' .$MasterLogFile. ' Read Master Log Position: ' .$ReadMasterLogPos. ' Replication Delay (Seconds Behind Master): ' .$SecondsBehindMaster;
				break;

			case "WARNING":
				$info = 'Replication State: WARNING Master Log File: ' .$MasterLogFile. ' Read Master Log Position: ' .$ReadMasterLogPos. ' Replication Delay (Seconds Behind Master): ' .$SecondsBehindMaster;
				break;

			case "CRITICAL":
				if ( $info == '' ) {
					
					$info = 'Replication State: CRITICAL Error: ' .$LastErrno. ': ' .$Last_Error. ' Replication Delay (Seconds Behind Master): ' .$SecondsBehindMaster;
			
				}
			break;
			
		}
	
	}

	// Need to set type to integer for exit to handle the exit code properly
	$status = $statuses[$state];
	settype( $status, "integer" );

	fwrite( STDOUT, $info );
	exit( $status );


?>

So weird, Connecting HavenCo and Red Hat


It's a bit weird to be reading about Red Hat posting $1 billion in revenue in a year for the first time, or this Ars article by James Grimmelmann about HavenCo, since, to me, that's personally part of my past.

See, as Grimmelmann notes, HavenCo's chairman of the board was Sameer Parekh, whom I worked with/for at a different internet security company, C2Net Software. Almost everything Grimmelmann writes about I remember first-hand. I even remember reading the Wired articles he references (and how could I forget Neal Stephenson's Cryptonomicon; it's still one of my favorite novels).

Around the same time, Steven Levy wrote the non-fiction book Crypto, which tells part of the history of securing communications and modern computing networks, from Whitfield Diffie and the initial concerns about privacy to Netscape and the creation of SSL.

Alas, Levy's book is already 10 years old. While it covers the basis for the cryptography that powers today's Internet, it doesn't necessarily tell the whole story. Missing are the shortcomings of SSL and its open-standard successor, TLS; the adoption of "virtual private networks," which allow the use of primarily public networks, such as the Internet, to connect remote points securely, as if they were part of a central private network; and the fact that much of today's email remains in "plaintext," despite the availability of encryption methods such as PGP.

Most of what happens on today's Internet every moment took root around the time of Levy's work, 1999-2001, when I was right there working for C2Net, with its own vision of how to secure everyday communications on the "Information Superhighway".

And what happened to C2Net? Well, it was sold... to Red Hat, of which I became an employee (and then ex-employee).

So yeah, I have this odd mix of "I remember that" (HavenCo) and "oh, good for them" (Red Hat). Then I think: wow, I wasn't just a part of some pioneering companies "back in the day," but I also witnessed some completely cutting-edge stuff that's only now being understood by the world at large.

So weird.

Adding SQL Server Support in PHP on Linux


Back in July I outlined a method for establishing an SSH tunnel between Linux and Windows machines. The goal of the connection was to give a PHP script on a front-end Linux web server access to information stored on the back-end private Windows server running SQL Server.

What I didn't mention at the time was how I enabled PHP support for Microsoft's SQL Server.

The most common deployments of PHP on Linux include support for MySQL or Postgres, depending largely on other factors such as the organization's preference, experience and requirements. Since PHP can be deployed on Windows, there is support for Microsoft's SQL Server. Such support is nontrivial to enable in PHP on Linux. It is, however, possible:

To enable SQL Server support in PHP on Linux, the PHP extension that provides said support must be built against the FreeTDS library. FreeTDS is an open source implementation of the C libraries originally marketed by Sybase and Microsoft to enable access to their database servers.

Downloading the source code, building and installing FreeTDS is straightforward:


$ wget \
ftp://ftp.ibiblio.org/pub/Linux/ALPHA/freetds/stable/freetds-stable.tgz
$ tar zxf freetds-stable.tgz
$ cd freetds-*
$ ./configure --prefix=/usr/local/freetds
$ make
$ make install

The next step is to build the PHP source code against the FreeTDS libraries to include SQL Server support. This can be done in one of two ways: build PHP from scratch or build just the specific PHP extension. Since I was working on a server with a preexisting install of PHP, I opted for door number two.

Locate or download the source code for the preexisting version of PHP. Next, copy the mssql extension source code from the PHP source code into a separate php_mssql directory:


$ mkdir -p ~/src/php_mssql
$ cp ext/mssql/config.m4 ~/src/php_mssql
$ cp ext/mssql/php_mssql.c ~/src/php_mssql
$ cp ext/mssql/php_mssql.h ~/src/php_mssql

Now build the source code, pointing it to where FreeTDS has been installed:


$ cd ~/src/php_mssql
$ phpize
$ ./configure --with-mssql=/usr/local/freetds
$ make

There should now be an mssql.so file in ~/src/php_mssql/modules/ that can be copied into the existing PHP install's extension directory. Once copied, the last remaining steps are to enable the extension by modifying the php.ini file (adding the line extension=mssql.so) and restarting the Apache HTTP Server.
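To verify the new extension works, a quick test script might look something like the following. This is a hypothetical sketch: 'sqlserver' stands in for a server entry defined in FreeTDS's freetds.conf, and the credentials are placeholders.

<?php
// Hypothetical smoke test for the newly built mssql extension.
// 'sqlserver' would be a server entry in freetds.conf; the
// username and password are placeholders.
if ( !extension_loaded( 'mssql' ) ) {
        die( "mssql extension not loaded\n" );
}

if ( $link = mssql_connect( 'sqlserver', 'username', 'password' ) ) {
        echo "Connected to SQL Server via FreeTDS\n";
        mssql_close( $link );
} else {
        echo "Connection failed\n";
}
?>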

Additional Information can be found here: Connecting PHP on Linux to MSQL on Windows

Establish and Maintain an SSH Tunnel between Linux and Windows


The Situation
Over the years, I've worked in numerous computing environments and have come to appreciate heterogeneous systems. In my mind, all system administrators should experience how different platforms solve similar problems, just as all programmers should be exposed to different programming languages.

Of course this means being able to play well with others. Sometimes, that's easier said than done.

A recent project requirement stipulated being able to connect a public web server with a private database system. Not an uncommon requirement, but it did place a hurdle immediately in the way. The web application, developed with the Linux, Apache, MySQL and PHP (LAMP) stack, needed a method to connect securely to the private database system, which, for fun, was not MySQL but instead Microsoft's SQL Server.

The Problem
The initial requirement called for connecting to the SQL Server using Microsoft's virtual private network (VPN) solution, Microsoft Point-to-Point Encryption (MPPE). Not impossible, since support for MPPE on any Linux distribution simply requires modifying the Linux kernel, and recompiling the kernel on Linux is usually a non-issue.

However, in this case the web application would be running on a basic virtual private server (VPS) and a Linux VPS doesn't run its own kernel. Instead Linux VPSes run on a shared kernel used by all the different virtualized servers running on the same hardware.

Net result: no modification of the Linux kernel would be possible on the VPS.

One alternative to this hurdle would have been to switch from a Linux VPS to a Windows VPS. This would have been technically possible since Apache, MySQL and PHP have viable Windows ports. Alas, the hosting provider in question didn't yet offer Windows VPSes. They would shortly, but couldn't guarantee that their Windows VPS solution would be available in time for this particular project's deadline.

A second alternative could have been to upgrade from a virtualized server to a dedicated server. But that would have added more computing resources than what was required. From a business perspective, the added monthly cost wasn't justifiable. Not when a third alternative existed.

A Workable Solution
VPN is one of those terms that can refer to something generic as well as something very specific [1]. This distinction sets up alternative number three. The secure network connection requirement would remain; the implementation could simply change [2].

Specifically the secure connection would be implemented via SSH instead of via MPPE.

With SSH, an encrypted tunnel can be established through an open port in the private network's firewall. This tunnel securely forwards network traffic from a specified local port to a port on the remote machine.

Most Linux distributions these days install OpenSSH as part of their base system install. OpenSSH is a free and open implementation of the SSH protocol and includes client and server software. For those distributions that don't install it by default, installing OpenSSH is usually a trivial matter via the distribution's package manager.

Windows, on the other hand, has no such base installation of an SSH implementation. There are a number of free software versions for Windows. For the case at hand, freeSSHD was selected to provide a free, open source version of the SSH server software.

Configuring freeSSHD to enable tunneling requires the following steps:

  1. Click on the "Tunneling" tab
  2. Check to enable port forwarding and apply the change
  3. Click on the "Users" tab
  4. Create or edit a user and enable tunnel access

Once the firewall has been configured to allow SSH traffic on port 22, establishing the tunnel from the Linux client to the Windows server is as simple as typing the following at the Linux command-line:


ssh -f -N -L 127.0.0.1:1433:192.168.1.2:1433 username@example.org

Here ssh creates, and sends to the background, an SSH tunnel (-f option) without executing any remote commands (-N option). The tunnel begins at the localhost port 1433 (127.0.0.1:1433), terminates at the remote address and port (192.168.1.2:1433) and authenticates using the remote username at the remote location (the public IP address or domain name for the private network).
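With the tunnel up, the web application can treat the remote SQL Server as if it were listening locally. As a hypothetical sketch, assuming PHP has SQL Server (mssql) support available and using placeholder credentials:

<?php
// Hypothetical: connect to the remote SQL Server through the local
// end of the tunnel; on Linux the mssql extension accepts host:port.
$link = mssql_connect( '127.0.0.1:1433', 'dbuser', 'dbpassword' );

if ( $link ) {
        echo "Tunneled connection established\n";
        mssql_close( $link );
}
?>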

But Wait There's More
There is, however, a minor problem with this SSH tunnel. As described, establishing the SSH tunnel is an interactive process: the command needs to be executed and the user's password provided for authentication. In most cases a simple shell script executed by cron would solve this minor issue. However, for the sake of security, OpenSSH doesn't provide a command-line option for supplying passwords.

This authentication step can be managed in one of two ways. One is the use of a key management program such as ssh-agent. The second, more common option is to create a passphrase-less key.

The first step in creating a passphrase-less key is to generate a private/public key pair [3]. In Linux this is done by issuing the command:


ssh-keygen -t rsa

This generates a private/public key pair based on either the RSA or DSA encryption algorithm, depending on what is provided with the -t command-line option.

When prompted to enter a passphrase for securing the private key, simply press enter. To confirm the empty passphrase, simply press enter again.

The next step, after copying the public key onto the Windows server, is to enable the use of the public key for authentication. In freeSSHD the steps are:

  1. Click on the "Users" tab
  2. Select a user and click on "Change"
  3. Select "Public Key" from the "Authorization" drop-down
  4. Click on "OK" to save changes to users
  5. Next click on the "Authentication" tab
  6. Using the browse button, select the directory where the users' public keys are kept
  7. Enable public-key authentication by choosing the "Allowed" button under "Public-Key Authentication"
  8. Click on "OK" to save the changes to authentication

With the passphrase-less keys in place, the last step is to automate the tunnel itself. In this case, instead of a shell script, I opted to use a program called autossh.

autossh is a program that can start a copy of ssh and monitor the connection, restarting it when necessary. All autossh needs beyond the original ssh options is a monitoring port, given with the -M option; autossh passes keepalive traffic over that port and the one directly above it, so the pair should be otherwise unused. The one-time initial startup of the ssh tunnel thus looks similar to the previous example:


autossh -M 20000 -f -N -L 127.0.0.1:1433:192.168.1.2:1433 \
username@example.org




[1] This means, alas, it is also one of those terms that can cause confusion, especially between technical and non-technical people, if not defined at the outset.

[2] This is one of those places where knowledge of different solutions solving a similar problem becomes handy.

[3] For user authentication, SSH can be either password-based or key-based. In key-based authentication, SSH uses public-key cryptography, where the public key is distributed to identify the owner of the matching private key. The passphrase in this case is used to authenticate access to the private key.

Let's Play Two (or Web Analytics for Fun and Profit)


Back in October it occurred to me that it had been 5 years since the White Sox beat the Astros to win the World Series. As a result of that realization, I dug into my video collection and quickly put together and posted this video:



To say that this video is the most popular video I've posted on YouTube thus far is an understatement. What's more interesting, for those of us who work the medium of the web, is the traffic statistics of those who have viewed the video in the past 5 months:


[Screenshot: YouTube viewer statistics for the video]

So what do these stats tell us? Well, to some extent they tell us a few things we might have already "known," such as that most baseball fans (or at least White Sox fans) are mature males residing in the United States.

What I find interesting is when people were viewing this little video. Obviously some people viewed it right when I posted it last October, during the 2010 World Series. Then, as expected, things go quiet for the most part. Then, as Spring Training builds to today's Opening Day, so does the traffic.

But wait, you might be wondering, what about the spike of traffic in December?[1] What could possibly have driven the largest one-time surge in traffic for a handful of days? Perhaps I engaged in a little social marketing? Or maybe the video got popular on a sports site?

Well, as it happened, it did get posted on a local sports site, but that doesn't completely explain the surge, or why said site was posting a baseball video in December.

Why did it get popular so quickly (and fade so quickly) in December? Because on December 8, 2010, Paul Konerko, the hero of the video, re-signed with the Chicago White Sox for 3 more years.

Interesting, No?




[1] Well, if you really are a White Sox fan, you might not be wondering, but don't spoil the ending, please?

Web Development: Before and After the Client


First published: 17th of Dec 2010 for Orbit Media Studios

For someone looking for a web design firm, how a website is developed might seem meaningless. Who cares, so long as it works?

Yet how well a website works can be measured in part by the costs associated with it. The direct cost is the total price for the initial project. The indirect costs consist of secondary expenses related to ongoing marketing and support during the lifetime of the website.

At Orbit we have two development processes. Both are designed to reduce costs and improve quality. The first is an internal process that starts before the client ever arrives. The second process begins at the first client meeting as we discover the project's specific requirements.

Internal Development
First, what do we mean by develop? Development usually refers to the programming of the website, whereas design refers to the look and feel.

With development, we need to consider a few basic questions. What features are required to make an ecommerce website work, for example? Regardless of the item being sold or the company selling the item, the basic logic can be described in a few steps:

  1. A customer selects an item to purchase
  2. The selected item is placed into a shopping cart
  3. The customer decides to checkout, continue shopping or abandon their cart
  4. To checkout, the customer initiates the process of purchasing what is in their cart
  5. The store presents a total bill for the item(s) desired by the customer
  6. The customer presents a method of payment
  7. The payment is verified and the transaction is completed

To be sure, this purchasing feature isn't complete and plenty of questions can remain. However, this generalized logic provides a starting point.
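As an illustration only (this is a generic sketch, not Orbit's actual implementation), the first two steps might map onto a session-backed cart in PHP, with a hypothetical item_id parameter:

<?php
// Illustrative sketch of steps 1 and 2: a customer selects an item,
// and the selected item is placed into a session-based shopping cart.
session_start();

if ( !isset( $_SESSION['cart'] ) ) {
        $_SESSION['cart'] = array();
}

if ( isset( $_GET['item_id'] ) ) {

        $itemId = (int) $_GET['item_id'];

        // Add the item to the cart, or bump its quantity if already there
        if ( isset( $_SESSION['cart'][$itemId] ) ) {
                $_SESSION['cart'][$itemId]++;
        } else {
                $_SESSION['cart'][$itemId] = 1;
        }
}
?>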

This is where web development at Orbit begins, identifying basic features of a potential website.

Developer Day
Roughly once a month, all of Orbit's developers spend the day working on such questions, analyzing and programming in various sandboxes.

A sandbox is simply a generic website in which the development team can create, test and improve different features and find the best approach for virtually any type of website. It's a play area for programmers.

The focus is on breaking down the feature into workable steps and rapidly building them. In doing so we consider what has worked for clients in the past along with growing trends such as social media integration.

Each Developer Day represents the repeating of a cycle of planning, analyzing, coding and acceptance testing in order to get the feature built right.

But, as we mentioned, plenty of questions can remain. Not all features will work perfectly "out of the box" for all clients.

Developing with the Client
This brings us to the second process of web development at Orbit: developing with the client. Now the concern is on completion of a particular website. Thus the focus for the developer changes from generalized concepts to specific implementation.

But before a developer can customize the code for a client, a new process of discovery and planning must begin. The phases of this process break down into the following, with direct client involvement at each step:

  1. A Kick-Off Meeting where initial questions about goals and scope are answered
  2. Discovery of the layout and flow for the proposed website
  3. Designing the look of the website and expressing the client's brand
  4. Development, implementing and testing
  5. Deploying the website for public use

In this sequential development process each step follows from the last. There is a specific beginning and ending. One step cannot be started until the previous step is completed and approved.

The Big Payoff
Understanding the development process for a custom website is important. How many hours a developer works on a client's website and the dependability of the underlying code affect its ultimate cost.

Both direct and indirect costs impact the client's ability to market their website and can limit the overall return of the website.

Rather than starting from scratch, Orbit takes the pieces we have built and improved earlier and applies them to the client's project, customizing the features to the needs of the website. In doing so we execute different development processes in order to keep our client's costs manageable while adding value to their business.

Google and Net Neutrality: Business First


The Story Thus Far
Earlier this month the New York Times published an article entitled "Google and Verizon Near Deal on Web Pay Tiers" in which the paper outlined a pending agreement between Google and Verizon that would allow for prioritizing certain traffic across the Internet. Google's initial public reaction to the story was to say that the Times didn't have its facts straight and that if they had just called, Google would have been happy to have answered any questions the paper had.

Many, I think, saw Google's initial reaction to the article as an outright rejection that any sort of "deal" was in the works. However, just a few days later Google announced "A Joint Policy Proposal for an Open Internet," in which they outlined a "principled compromise our companies have developed over the last year concerning the thorny issue of 'network neutrality.'"

Wait, what? Web Pay Tiers? Network Neutrality?

Ok, for the uninitiated: the term network neutrality (net neutrality) refers to the formalization of what has been, more or less, standard practice: that very limited restrictions, if any, should be allowed to be put in place by service providers or governments on the content or equipment that can connect to the Internet as a whole.

Conversely, many service providers argue that bandwidth is limited in scope and that without the ability to prioritize traffic and access they will ultimately fail in providing what has become a necessary service to many individuals and businesses.

In short, Google has in the past advocated open, unrestricted access, even encapsulating the phrase "Don't be evil" as a corporate motto of sorts, while companies such as Verizon have argued that the health and livelihood of their business models depend on tiered data networks, regardless of the level of service offered.

The proposed agreement between Google and Verizon has been analyzed by a number of groups and individuals since its publication. The basic outline reads: Google and Verizon agree that all "wireline" networks shall be open and unencumbered. However, unlike a "wireline," "wireless" (cellular) broadband is still an emerging marketplace and, as such, requires a different set of governing principles (or at least should not be handicapped by United States regulation, via the FCC, requiring the same open and unencumbered behavior as a "wireline").

Google argues that in order to reach an agreement with Verizon, like any workable agreement, they needed to make compromises in order to move forward. Critics of the agreement argue that Google has "sold out" on the idea of net neutrality, noting that the "wireless" compromise between the two companies just so happens to cover the same business segment where Google and Verizon are business partners via Google's Android platform for smartphones.

Personal Take: Business First
While I personally believe that businesses should be held to a higher standard than simply generating profit, I realize that, no matter what I feel, a business must make sound business decisions, or else that business will cease to be in business.

Google's mature, core business, search, requires open access for users and open standards for access to the third-party data that it aggregates, warehouses and mines. However, Google's potential growth business, wireless, is in competition with a number of other closed or semi-closed computing platforms (iPhone, BlackBerry, et al.).

Consider the following philosophy a variation of Maslow's Hierarchy of Needs, which I hold in high regard as outlined for individuals: before a business can operate in a socially conscious way, the business must first be profitable. If a business, or a business unit of a larger corporation, can align its strategy with a greater social cause (that is, be profitable while being highly attuned to and influential in a current social concern), then the business has reached a higher level of "growth."

Thus, to me, it seems unsurprising that Google is willing to build a policy around a fundamental division between wired and wireless networks. Nor do I feel there has been any "betrayal" of whatever higher "don't be evil" value Google wishes to place on its businesses as a whole.

But that doesn't mean I like the agreement any more than others do. In fact, for the most part, I feel the EFF's review of the proposed agreement between Google and Verizon is in line with my own initial read of the policy and my viewpoint with regard to net neutrality.

Unfortunately for Google, its internal division of business between "wireline" and "wireless" is hardly an ideal division for the rest of us. This means that while there are potentially good suggestions within the agreement, there are also some potentially bad and ugly ramifications of a division between "wireline" and "wireless."

The Take Away: Hardly the End of the Internet 
Obviously, this whole post is a quick summary of a very complex and evolving issue.

But that's the final point to make here. This is not the end of the Internet as we know it in which Google, Verizon and a handful of others carve it up between themselves. Instead this is an interesting proposal, an experiment in suggested policy, that may or may not point to a balancing point between different social, business and individual interests. 

As Carl Sagan once wrote about the American form of democracy, "In almost all of these cases, adequate control experiments are not performed, or variables are insufficiently separated. Nevertheless, to a certain and often useful degree, policy ideas can be tested. The great waste would be to ignore the results of social experiments because they seem to be ideologically unpalatable."

What is Reliable Web Hosting?


[Editor's Note: This guest post is written by Kirsten Ramsburg of WebHostingSearch. Enjoy!]

When a business decides to start a new website, it tends not to be particularly concerned with the type of hosting it will use. Instead, the focus is on getting the site online quickly and generating new sales through information dissemination or e-commerce. However, it is important for anyone planning on creating a website to consider the available web hosting options and how they might affect the site.

There are many companies that offer cheap web hosting services for small sites. Almost all of them use either Windows or Linux as a hosting platform (there are a few alternatives like Unix or Mac hosting, but these are exceedingly rare). Contrary to popular belief, it is not necessary to use a Windows hosting package if you run Windows on your desktop computer. The hosting server is completely separate from your desktop and is typically accessed through third-party software, so you can use a Linux server even if your desktop is running Windows, OSX, or even Solaris.

Regardless of the type of hosting, the most important concern for any site is to find a reliable web hosting solution. For example, it is vital that the company you select has a guaranteed uptime of 98 percent or higher. Two percent downtime means the server can be offline roughly fourteen and a half hours a month (2% of a 30-day month is about 14.4 hours) for maintenance or other reasons. Most web hosting companies today guarantee 99% uptime, and if they fail to deliver this you won't have to pay for that month. Be wary of any company that does not guarantee this minimum amount of uptime.

Another important thing to consider is the possibility of infiltration by a malicious third party. Though both platforms continue to improve their security, servers based on Windows are slightly more vulnerable, since the majority of viruses and other malicious software is designed specifically to attack Windows. However, both types of servers depend primarily on the administrator to maintain security. In other words, it's better to have a competent administrator running a Windows server than a careless administrator running a Linux server.

The third important thing to consider when selecting a reliable web hosting company is its age. Many newer services offer amazingly inexpensive rates, but the services and support are not always as consistent as with established hosts. There are even instances of companies selling hosting packages only to close down and disappear with their clients' money. Generally speaking, a company younger than two years is risky.

Finding reliable web hosting is a challenge that most new companies and many private individuals will have to face at some point. However, keeping these three points in mind will greatly improve the odds of finding a hosting company that will allow you to focus on actually running your business instead of worrying about your site.

About the Author

Paul is a technologist and all around nice guy for technology oriented organizations and parties. Besides maintaining this blog and website, you can follow Paul's particular pontifications on Life, the Universe and Everything on Twitter.

   
   

