Recently in Red Hat Category

So weird, Connecting HavenCo and Red Hat

| 0 Comments

It's a bit weird to be reading about Red Hat posting $1 billion in revenue in a year for the first time, or this Ars article by James Grimmelmann about HavenCo, since, to me personally, both are part of my past.

See, as Grimmelmann notes, HavenCo's chairman of the board was Sameer Parekh, whom I worked with/for at a different Internet security company, C2Net Software. Almost everything Grimmelmann writes about I remember first-hand. I even remember reading the Wired articles he references (and how could I forget Neal Stephenson's Cryptonomicon; it's still one of my favorite novels).

Around the same time, Steven Levy wrote the non-fiction book Crypto, which tells part of the history of securing communications and modern computing networks, from Whitfield Diffie and the initial concerns about privacy to Netscape and the creation of SSL.

Alas, Levy's book is already 10 years old. While it covers the basis for the cryptography that powers today's Internet, it doesn't tell the whole story. Missing are the shortcomings of SSL and its open-standard successor, TLS; the adoption of "virtual private networks", which allow the use of primarily public networks, such as the Internet, to connect remote points securely, as if they were part of a central private network; and the fact that much of today's email remains in "plaintext", despite the availability of encryption methods such as PGP.

Most of what happens on today's Internet took root around the time of Levy's work, 1999-2001, when I was right there, working for C2Net with its own vision of how to secure everyday communications on the "Information Superhighway".

And what happened to C2Net? Well, it was sold... to Red Hat, of which I became an employee (and then an ex-employee).

So yeah, I have this odd mix of "I remember that" (HavenCo) and "oh, good for them" (Red Hat). Then I think: wow, I wasn't just a part of some pioneering companies "back in the day", but also witnessed some completely cutting-edge stuff that's only now being understood by the world at large.

So weird.

16 of 20: Personal Artifacts from the Early Years of Linux

| 4 Comments

As the story goes, in 1991 Linus Torvalds, then a student at the University of Helsinki, began work on the core component, the kernel, of his own operating system, modeled on Andrew S. Tanenbaum's teaching operating system MINIX.

Since Torvalds released the source code 20 years ago, and under his guidance ever since, the Linux kernel has become central to the development of devices from smartphones to supercomputers.

But for most people, when they think of Linux, they think of Linux distributions, software that includes the Linux kernel and supporting resources that complete the basic requirements of an operating system.

A few weeks ago the Linux.com Editorial Staff posted an article on one of the earliest distributions of Linux, Slackware. That in turn got me thinking about some of my early exposure to Linux in the late 90s, and what media I might still have from that time.


My personal copy of Andrew S. Tanenbaum's Modern Operating Systems from my Computer Science days, along with Daniel A. Tauber's The Complete Linux Kit and Randy Hootman's Linux - Installation and Beyond

A quick scan of my bookshelf revealed a couple of interesting artifacts. One is a copy of a book, "The Complete Linux Kit", a 1995 title compiled by Daniel A. Tauber and published by Sybex that included a CD-ROM with the Slackware Linux distribution.[1]

The second is a video, "Linux - Installation and Beyond", described as a "three hour seminar showing installation of Red Hat Linux, Slackware Linux, Yggdrasil Plug-and-Play Linux", featuring Randy Hootman and distributed by Yggdrasil Computing. Alas, while Red Hat and Slackware still exist, in one form or another, the Yggdrasil distribution is no longer maintained and the company no longer exists.

In a small effort to share what Slackware was like "back in the day", I offer this copy of Randy Hootman's tutorial on installing and using Slackware Linux version 2.3:

As a small bonus, and because I personally worked for Red Hat, once upon a time, here is Randy on Red Hat Linux v2.0:


Speaking of Red Hat, a screenshot of my Linux workstation in 2000, running GNOME v1




[1] Where the CD has since gone, I'm not sure. Probably into a book of CD-ROMs in storage.


A Step Beyond a Red Hat Enterprise Linux, Apache, MySQL and PHP How To

| 1 Comment
Server Room (image by pdweinstein via Flickr)

It seems quite a bit of my time has been taken up with system administration-related tasks these days. If I'm not contemplating system monitoring solutions for one client, I must be busy migrating a mail service from one server location to another.


Naturally, I was hardly surprised when I got an email from an old high school friend recently asking for assistance in setting up a Linux, Apache, MySQL and PHP (LAMP) server on Red Hat Enterprise Linux (RHEL).


Now said friend is hardly a technical novice, and Google is happy to provide one with plenty of step-by-step how-tos on the subject. So instead of running down the usual steps, I figured that, since he understands the technical side but is probably more at home with Microsoft-based solutions, I would give him some tips to orient himself.


Thus, here is a quick and dirty "what's happening" (instead of the more common how-to) on installing LAMP on a RHEL-based system.



Helpful Tools

Since most, if not all, of these commands deal with system-wide alterations, first and foremost one will need "superuser" access. That can be accomplished in one of two ways:


  • root user: root is the conventional name of the user who has rights to all files and programs on a Unix-based system such as Red Hat Enterprise Linux. In other words, root is the main administrator for the system and can do anything at any time.


  • sudo: sudo is a command that allows users to run programs with the security privileges of another user (normally the superuser, root). sudo usage needs to be granted to a user before it can be used, and can be limited to a subset of privileges if needed (see the example just after this list).
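For example, one can either switch to a full root shell with su, or prefix a single privileged command with sudo; assuming sudo rights have already been granted to the account in question, a session might look like:

$ su -
Password:
# whoami
root

$ sudo whoami
root

Note the change in prompt from $ to # in the first case, the conventional indicator that one is now operating as root.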


There are plenty of commands that one can execute at the command line. Just remembering them all can be hard, let alone remembering how exactly each command works, in what context and with what arguments. For that reason alone, one should remember that help is always one command away:


  • man: man is a command (and format) for displaying the on-line manual pages. The man command, followed by the name of the command whose documentation you wish to view, will display that documentation.

For example to view the manual pages in regards to the man command itself:


$ man man


A key characteristic of Unix-centric systems is the use of plain text for data storage. This means that the focus of data manipulation is based on the fundamentals of text processing. It also means that data can be "piped" from one command-line tool to another to perform a task that might be too complicated for just one tool to complete.
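For example, to count how many unique client addresses appear in the Apache HTTP Server's access log (assuming the default Red Hat log location), one might pipe together a few of the classic text tools:

$ cut -d' ' -f1 /var/log/httpd/access_log | sort | uniq | wc -l

Here cut extracts the first field of each log entry (the client address), sort and uniq reduce the list to unique entries, and wc -l counts the resulting lines.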


While text processing isn't necessary for setting up a LAMP server, what is necessary to know, and have ready to use, is at least one general text-based tool: a text editor.


  • vi: vi, vim, emacs or pico: each is a command-line text editor, and while I try to be agnostic in the eternal debate about which is better, I will mention that for as long as I've been a Unix user/admin (some 15 years now), no matter what system I am on, be it a Red Hat distribution of Linux or a proprietary system, chances are vi or vim has been installed and is available.[1] For the uninitiated, a few survival commands follow this list.
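As a quick sketch for anyone who has never touched vi, these few commands are enough to make an edit and get back out:

i       switch to insert mode (to begin typing text)
Esc     return to command mode
:wq     write (save) the file and quit
:q!     quit without saving changes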


Installing Software

Red Hat Enterprise Linux uses a software package format called rpm. Each software package consists of an archive of files, along with information about the package, such as its version and a description of the software within.
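For example, to query the rpm database directly for an installed package's details:

$ rpm -qi httpd

The -q argument queries for the named package, while -i tells rpm to display the package's information, including its name, version, release and description.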


As a general practice, software is broken up into two separate packages: one contains the compiled executables, while a second contains the source files for building the software from scratch.[2] The source code packages will usually have "-devel" appended to their name field.[3]


However, back in the early days of rpm it was discovered that managing individual rpm packages is a pain and time-consuming.[4] As such, a number of rpm-based Linux distributions have adopted tools such as yum. yum assists in identifying the dependencies of a given software package, as well as installing said packages, tracking updates for a given package from a specified repository and removing installed software from the system as desired.


Identification

Using yum with the list argument will provide information about a given package's availability. For example to check on the Apache Web Server, which is officially known as the Apache HTTP Server:


$ yum list httpd
Loaded plugins: fastestmirror
Installed Packages
httpd.i386     2.2.3-31.el5.centos.2     installed
Available Packages
httpd.i386     2.2.3-31.el5.centos.4     updates  


As you can see, httpd is not only available, but is also already installed.[5]


Installing

If, however, the Apache HTTP Server wasn't installed, the install argument for yum would do the trick.


$ yum install httpd
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * addons: mirror.cisp.com
 * base: yum.singlehop.com
 * extras: mirror.sanctuaryhost.com
 * updates: mirrors.liquidweb.com
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package httpd.i386 0:2.2.3-31.el5.centos.4 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

==================================================================
 Package     Arch     Version               Repository       Size
==================================================================
Installing:
 httpd       i386     2.2.3-31.el5.centos.4 updates         1.2 M

Transaction Summary
==================================================================
Install      1 Package(s)        
Update       0 Package(s)        
Remove       0 Package(s)        

Total download size: 1.2 M
Is this ok [y/N]:


Now it so happens that installing the Apache HTTP Server on the example system, at the given time, didn't require any additional software packages. This might not always be the case; as such, yum will identify any packages it determines are missing from the system and list them to be installed at the same time, so that the Apache Software Foundation's web server will actually be usable.


Updating

One of the key advantages of a package management system such as yum is the ability to easily track updates. The update feature can be used in two distinct manners. If used without explicitly naming a given package, update will check for, and then update, every currently installed package that has a newer package available from the repository.


If, on the other hand, one or more packages are specified, yum will only update the listed packages. As with installing the initial package, while updating, yum will ensure that all dependencies are installed.
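For example, to update every installed package that has a newer version available, or to update just the web server by itself:

$ yum update
$ yum update httpd

In either case, yum will list what it intends to change and ask for confirmation before proceeding.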


Removing

Of course, if yum can install and update software, it only makes sense that it can also remove installed software from a given system. The remove argument removes a specified package from the system, as well as any packages which depend on the package being removed.


Note that this is basically the exact opposite of the install option: install will add any software the requested package requires to run, along with the requested software itself; remove, however, removes the requested software and any software that depends on it.
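Thus, continuing with our example, removing the Apache HTTP Server (and anything that depends on it) is as simple as:

$ yum remove httpd

As with install, yum will list what it intends to remove and ask for confirmation before proceeding.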


Configuring Software

Once each component is installed, it's time to go about configuring each one. The general convention for Unix-based systems such as Red Hat's Enterprise Linux is to place system-wide configuration files in the /etc directory. In fact, while the specifics can vary, the directory structure from one Unix system to another will generally follow the same overall pattern.


Again, staying with our web server example, the main Apache HTTP Server configuration file can be found at /etc/httpd/conf/httpd.conf on a Red Hat Enterprise Linux system. To open the configuration file for review and possible modification using the vi text editor is as simple as:


$ vi /etc/httpd/conf/httpd.conf


Another common directory convention is /var, for variable files. Here reside files whose content is expected to change continually during normal operation of the system. This is of note in regards to Apache and MySQL, since /var is where database (/var/db) and website (/var/www) file hierarchies reside.
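For example, to confirm where the Apache HTTP Server expects to find website files, one can search its configuration for the DocumentRoot directive; on a default Red Hat Enterprise Linux install the result should look something like:

$ grep '^DocumentRoot' /etc/httpd/conf/httpd.conf
DocumentRoot "/var/www/html"

(The ^ anchors the match to the start of a line, skipping the commented-out examples elsewhere in the file.)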


Executing a Service

Last, but not least, is starting these various services. For that there is the service command, which runs a standardized script created for controlling the operation of the software in question. Thus, to start the httpd web service the command would be:


$ service httpd start
Starting httpd:                                            [  OK  ]


Of course, starting a service manually and having it automatically start when the system boots are two different realities. The chkconfig command provides a simple command-line tool for denoting whether or not a service is to start at boot time. To turn on the Apache httpd service at boot:


$ chkconfig --add httpd
$ chkconfig httpd on


As noted in its man page, chkconfig has five distinct functions: adding new services for management, removing services from management, listing the current startup information for services, changing the startup information for services, and checking the startup state of a particular service.


Thus, if service is akin to rpm for manually controlling a specific service via its control script, then chkconfig is akin to yum for managing a group of services, controlling the when and how of their availability.


The --add argument for chkconfig adds a new service for management. Equally, --del will remove a service from management, and --list will list all of the services which chkconfig knows about or, if a name is specified, information only about the named service.
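For example, having turned httpd on as above, checking which runlevels the service is configured to start in should produce output along these lines:

$ chkconfig --list httpd
httpd           0:off   1:off   2:on    3:on    4:on    5:on    6:off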


One of the key concepts in regards to services, such as a web or database service, on a Unix-centric system is the concept of a runlevel. In standard practice, a runlevel relates to the state the system is in and what is enabled for execution. Runlevel zero, for example, halts all execution of any running service and denotes a halted, off, system.


Runlevels 1 to 5 differ in terms of which drives are mounted and which services, such as a web server, are started. Default runlevels are typically 3, 4, or 5. Lower runlevels (1) are useful for maintenance or emergency repairs, since they usually don't offer any network services and can help in eliminating sources of trouble.
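As an aside, to check which runlevel a given Red Hat Enterprise Linux system boots into by default, one can look for the initdefault entry in /etc/inittab; on the example system it reads:

$ grep initdefault /etc/inittab
id:3:initdefault: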


This is important to note since, in managing a service, chkconfig needs to know at which runlevels a service should run. Moreover, it is important to note which runlevels other, dependent services are enabled for. That is, if runlevel 1, a single-user state akin to a "safe mode", is configured not to run any networking service, it makes little sense to enable the Apache HTTP Server to execute at that runlevel.
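Thus, rather than simply turning a service on for the default set of runlevels, one can be explicit. For example, to enable the httpd service only for the common multi-user runlevels 3 and 5:

$ chkconfig --level 35 httpd on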


More Resources

Just as other how-tos are entry points into getting going, this little "what's happening" is a quick step beyond the how-to's last step, moving one toward understanding what is going on "behind the scenes." Hopefully this next step provides a helpful push toward a greater understanding, and into the grand world of system administration.


To that end, here is a quick list of books on the subject of system administration for Unix systems such as Red Hat Enterprise Linux:




[2] Remember, one of the major tenets at the heart of GNU/Linux is the ability to have access to view and modify the source materials.

[3] More about the rpm naming convention can be found here: http://en.wikipedia.org/wiki/RPM_Package_Manager#Package_label

[4] Often referred to as "RPM Hell".

[5] A trained eye will note that this command was actually executed on a CentOS system. CentOS, as noted on their site, is "an Enterprise-class Linux Distribution derived from sources freely provided to the public by a prominent North American Enterprise Linux vendor." Can you guess which one?

Y2K, C2Net, HKS and Red Hat

| 1 Comment

In November of 1999 I had over a year of work as a professional programmer under my belt, working for C2Net Software. It had been a bit of a bumpy ride; by the end of 1999 I had already been hired, laid off, hired as a consultant and rehired as an employee.

Given the personal struggle, and the fact that I worked for an "Internet company", I had a bit of remote detachment about Y2K. But by November 1999, even C2Net was in the midst of Y2K preparations, internally and externally.

Externally, plenty of customers had, as part of their own Y2K compliance efforts, started seeking us out long before November to verify that our main product, the Stronghold Web Server, was Y2K safe. The concern being that plenty of elements of managing web traffic require proper handling of date and time information, from the underlying network protocols to the creation of unique session identifiers. The good news was that Stronghold was a packaged and commercially distributed version of the Apache Web Server, which was indeed Y2K compliant, thanks to its many developers.

While most customers went away content with a signed letter of compliance, I remember our Sales and Marketing VP asking me if I wanted to go to NYC on behalf of a client and be available if anything went wrong. Basically, since Stronghold was in the clear and any web application the system was running would have been outside of our domain, I was being asked if I wanted an all-expenses-paid trip from San Francisco to New York City to witness the ball drop in Times Square for the new millennium.[1] Naturally, I declined.[2]

Internally, the biggest worry I had was dealing with our online credit card processing system. First off, the virtual terminal program we had was running on Windows 98, which itself was not Y2K compliant.

However, the bigger problem was the credit card processing software, ICVerify. Today there are plenty of solutions for processing credit card purchases, in real time, online, from do-it-yourself solutions such as MainStreet Softworks' Monetra to all-in-one solutions such as Google's Checkout. But in 1999, while a number of virtual terminal solutions such as ICVerify existed for processing credit cards "by hand", few solutions existed for processing credit cards automatically, online.

In fact, the only reason C2Net's system did real-time transactions[3] was because ICVerify had been hacked in such a way as to process transactions via a secured network connection. The best part of the situation, like many other Y2K issues, was that the person(s) who had created the "solution" no longer worked for the company, as everyone at the company had been adversely affected by the previous year's corporate turmoil, as I had.

Patching Windows 98 would hardly solve the problem, and since no one with the company understood the ICVerify hack completely, it was unknown if patching it would adversely affect our main method for selling Stronghold within the United States.

Given that Stronghold was a commercially distributed version of Apache, and Apache at that time was built for Unix (POSIX) based systems, the main requirement, besides real-time processing, was the ability to run alongside our custom ecommerce system built using FreeBSD, Stronghold and PHP. That left us with one viable solution: Hell's Kitchen Systems'[4] (HKS) CCVS.

By December of 1999 I had already identified CCVS as our would-be solution, to the point that I had actually purchased it on behalf of C2Net and had started developing the replacement credit card processing solution. Doing so put me in contact with a couple of primary individuals at HKS, including the founder, Todd Masco, who must have had his hands full with a few other people, such as myself, rushing to replace their credit card processing systems before the new year. Despite that, I don't recall ever being unable to reach Todd or Doug DeJulio when needed.


Last Remaining Share


That in and of itself endeared me to HKS, but little did I know at the time that that wasn't going to be the half of it. While I recall missing the end-of-year deadline for getting our new payment system operational by a week (or so), I was hardly in the thick of it. In fact, if anything, Todd, Doug and HKS most certainly had it much worse. For besides the presumed end-of-the-millennium rush to update transaction systems across the nation, in the first week of 2000 the public announcement was made: Red Hat had acquired HKS.

The folks at HKS really had played it cool.

Now here is where things get a bit interesting. The main selling point of Stronghold was that it was a full distribution of the Apache Web Server that included the commercial right to use the encryption technology that allows for secure web transactions, thus allowing one to build solutions such as our custom ecommerce system. As mentioned, Stronghold, by way of Apache, was a POSIX-based application that ran on systems such as FreeBSD, Sun Solaris and the new, up-and-coming operating system Linux, which was (and still is) favored by Red Hat. CCVS was a POSIX-based credit card processing system.

All of which meant that by the summer of 2000 I received a friendly phone call from Todd, now of course at Red Hat, looking to build contacts at C2Net with regard to a possible partnership.

Now it was my turn to play it cool.

Given, in part, the issues at C2Net over the previous year, the majority owner of the company was looking to sell, and by late spring/early summer the whole of C2Net had been informed that negotiations had started in regards to Red Hat purchasing C2Net,[5] and then sworn to secrecy.

So when Todd's call came, I had to politely tell him I would pass on his information to our VP of Sales and Marketing, and then of course made a beeline to said office after hanging up with Todd. Sadly, with hindsight and all, I should have realized that if Todd was in the dark about the potential purchase of C2Net by Red Hat, given the obvious fit between the three products, the acquisition of HKS might have been poorly executed, which in turn was not a good sign for C2Net. But at the time, I recall hearing shortly after that Todd did eventually get filled in. And by August of 2000 C2Net and Red Hat issued a joint press release announcing the agreed-upon acquisition.




[1] That is, of course, if, like most Americans, you can't count and/or don't care that our Gregorian calendar had no year 0.

[2] Call it Bloomberg's Law or whatever, but in the US one's loathing of all things New York (or Los Angeles) is inversely proportional to one's distance from New York City. Thus Boston, which is closer, hates NYC more than Chicago does. San Francisco doesn't so much hate NYC as it does LA. New York and Los Angeles can of course claim that no one loathes them more than they loathe themselves, which means they care little about anyone else, given their immediate proximity to their own location. Having grown up in and around Chicago, I naturally care as little about New York as I do about LA.

[3] And, for that matter, the only reason why we had a machine running Windows 98 in our environment.

[4] Yes, despite my previous ragging on New York City, I do know that not only is Hell's Kitchen the name of a neighborhood in New York, but, if I recall correctly, the company Hell's Kitchen Systems was in fact named after said neighborhood.

[5] A bit fuzzy here on the timeline and details, but I recall hearing about a deal between Caldera and C2Net that never materialized, and that Red Hat then got cold feet about purchasing C2Net when "the bubble burst", until various revenue commitments renewed discussions between C2Net and Red Hat.

jul 24 01

| 0 Comments

In pluggin' the "Great Debate" on open source, Red Hat released this Media Alert that, in passing, also names yours truly.

About the Author

Paul is a technologist and all-around nice guy for technology-oriented organizations and parties. Besides maintaining this blog and website, you can follow Paul's particular pontifications on Life, the Universe and Everything on Twitter.

   
   



Powered by CentOS, Apache, MySQL, Perl and Movable Type Pro.