Recently in Management Category

Where Software Methodology and Business Strategy Meet?

| 0 Comments | 0 TrackBacks
Two Keyboards and a Mouse

Image by pdweinstein via Flickr

When I first envisioned the article Web Development: Before and After the Client (local copy), my idea was to draw a line connecting Orbit Media Studios' overall business strategy, through the implementation of specific development methodologies, to the end result: the type of service Orbit offers.

Consider the description of Orbit's method for enhancing their content management system: "the focus is on breaking down the feature into workable steps and rapidly building them." This incremental process allows Orbit to keep coding solutions simple, quickly incorporate lessons learned from previous projects and sustain development of their codebase for the long term:

In doing so we consider what has worked for clients in the past along with growing trends such as social media integration.

But once a client enters the picture the goals change. A client has specific objectives, chief among them the desire for a stable website that is delivered on time and on budget. The change in goals requires a change in methodology:

In this sequential development process each step follows from the last. There is a specific beginning and ending. One step cannot be started until the previous step is completed and approved.

Thus, while these two methods have distinctly different goals unto themselves, Orbit uses both to bring about a specific end result: the quality of work it desires.

The point that I was trying to make got me wondering. Given a set of goals, would most developers, or project managers, choose the development method(s) best suited to realizing those goals? Or would they choose the one they are most comfortable with?

Sure, most project managers or software engineers would recognize the different methodologies described above and that they focus on different objectives. But what's the most common way a development process is chosen?

Unfortunately, my experience over the years leads me to believe that most choose the one they know best, the one in fashion or the one used "in-house." In fact, many places I've interviewed at treat development methodologies like software platforms; they may be an "Agile shop" just as much as they are a Mac or PHP shop.

This lack of connection between desired outcome and actual process haunts both business and technology managers in my opinion.

Then again, maybe I've had an odd experience with my career thus far?




Postscript: Perhaps in answer to my own question, while I was drafting this blog post I came across this article, Design Driven Development. The post, from a Boston-based web design firm, outlines a similar organic evolution of their development process.

Old Programmers Don't Die, They Just Fade Away

| 0 Comments | 0 TrackBacks

A few days ago I came across an InfoWorld article entitled "The painful truth about age discrimination in tech" via Slashdot and have been wanting to comment on it ever since. While I have had no reason to cry foul at any company I've ever interviewed with, I have to say most of the issues certainly ring true to me.

One of the frustrations I feel a lot of tech workers have is how to communicate experience. Way too often I feel I've talked to recruiters or HR personnel who are either looking for an exact word-for-word match between the resume and the job opening or, on the opposite end of the spectrum, are looking for just one keyword to hit.

Alas, it seems all too true that "hiring managers are unable to map how 10 years of experience in one programming language can inform or enhance a programmer's months of experience with a newer technology."

Which of course doesn't help when the field of technology is evolving at such a rapid pace, with a huge focus on "The Next Big Thing."

True, writing CGI scripts in Perl yesterday doesn't automatically translate to writing custom modules in Joomla. But there is a road that takes a developer from first writing a CGI script in Perl, to learning Object-Oriented programming, to understanding design patterns such as Model-View-Controller, and that road provides one with the basis for working with Joomla.
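To make that road concrete, here is a minimal sketch of the Model-View-Controller split in plain old Perl (the names and markup are hypothetical, and a framework like Joomla does far more, but the division of responsibilities is the same):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Model: owns the data and the business rules, nothing else.
    package Article;
    sub new   { my ($class, %args) = @_; return bless {%args}, $class }
    sub title { return $_[0]->{title} }
    sub body  { return $_[0]->{body} }

    # View: knows how to render a model, but not where it came from.
    package ArticleView;
    sub render {
        my ($class, $article) = @_;
        return sprintf "<h1>%s</h1>\n<p>%s</p>\n",
            $article->title, $article->body;
    }

    # Controller: maps a request to a model and hands it to a view.
    package main;
    my $article = Article->new(
        title => 'Hello, World',
        body  => 'Old skills, reorganized into separate responsibilities.',
    );
    print ArticleView->render($article);

The syntax changes between Perl and PHP; the habit of separating data, presentation and control flow does not.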

Luckily this is an issue that can be taken care of with a little education.

More troubling for an experienced developer is that "only 19 percent of computer science graduates are still working in programming once they're in their early 40s."

Granted, the source of that statistic is a government study that's at least a decade old. But still, given the high turnover I've experienced working in the tech industry - my average is about two years at any given company - I can see how many individuals would take a break between jobs as motivation to look for something "better." Heck, I've even felt it myself, having gone back to school for a Masters in Business Administration at one point.

Invariably when talking about business, a sports analogy tends to make an entrance. Sure enough, InfoWorld's article compares the IT industry to professional sports: "at some point in those career arcs, the assets that made workers such hot properties -- youth, the ability to devote lots of time to their vocation, comparative inexperience -- diminish. And the marginal utility of what's left -- experience -- is not as strongly valued."

Yet that of course is not true, at least not in professional sports. All you have to do is think of all those managers, coaches and scouts, most of whom at one time or another played the sport themselves. Perhaps they never made it to the "bigs," or they did but found out that their talent wasn't above average. Yet they found a way to contribute, to use their experience as a way to give back to the sport that gave them a job, as the saying goes.

Which raises the question: where are those jobs, the managing, coaching or scouting positions, in IT?

Y2K, C2Net, HKS and Red Hat

| 1 Comment | 0 TrackBacks

In November of 1999 I had over a year of work as a professional programmer under my belt at C2Net Software. It had been a bit of a bumpy ride; by the end of 1999 I had already been hired, laid off, brought back as a consultant and rehired as an employee.

Given the personal struggle, and the fact that I worked for an "Internet company," I had a bit of remote detachment about Y2K. But by November 1999 even C2Net was in the midst of Y2K preparations, internally and externally.

Externally, plenty of customers had, as part of their own Y2K compliance efforts, started seeking us out long before November to verify that our main product, the Stronghold Web Server, was Y2K safe. The concern was that plenty of elements of managing web traffic require proper handling of date and time information, from the underlying network protocols to the creation of unique session identifiers. The good news was that Stronghold was a packaged and commercially distributed version of the Apache Web Server, which was indeed Y2K compliant, thanks to its many developers.
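To see what that kind of verification guards against, consider the classic two-digit-year mistake in Perl (a hypothetical snippet, not Stronghold or Apache code): localtime() reports the year as years elapsed since 1900, and plenty of programs prepended "19" to it instead of adding 1900.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # localtime() returns the year as years elapsed since 1900,
    # so in the year 2000 it returned 100, not 00 and not 2000.
    my $year = (localtime)[5];

    print "Broken: 19$year\n";            # the classic bug: "19100" in 2000
    print "Fixed:  ", 1900 + $year, "\n"; # correct: add 1900, don't concatenate

Code that stored or compared years as two digits had the subtler version of the same problem: "00" sorting before "99."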

While most customers went away content with a signed letter of compliance, I remember our Sales and Marketing VP asking me if I wanted to go to NYC on behalf of a client and be available if anything went wrong. Basically, since Stronghold was in the clear and any web application the system was running would have been outside of our domain, I was being asked if I wanted an all-expenses-paid trip from San Francisco to New York City to witness the ball drop in Times Square for the new millennium1. Naturally, I declined2.

Internally, the biggest worry I had was dealing with our online credit card processing system. First off, the virtual terminal program we had was running on Windows 98, which itself was not Y2K compliant.

However, the bigger problem was the credit card processing software, ICVerify. Today there are plenty of solutions for processing credit card purchases, in real time, online, from do-it-yourself solutions such as MainStreet Softworks' Monetra to all-in-one solutions such as Google's Checkout. But in 1999, while a number of virtual terminal solutions such as ICVerify existed for processing credit cards "by hand," few solutions existed for processing credit cards automatically, online.

In fact, the only reason C2Net's system did real-time transactions3 was that ICVerify had been hacked in such a way as to process transactions via a secured network connection. The best part of the situation, like many other Y2K issues, was that the person(s) who had created the "solution" no longer worked for the company; everyone there had been adversely affected by the previous year's corporate turmoil, as I had.

Patching Windows 98 would hardly solve the problem, and since no one at the company understood the ICVerify hack completely, it was unknown whether patching it would adversely affect our main method for selling Stronghold within the United States.

Given that Stronghold was a commercially distributed version of Apache, and Apache at that time was built for Unix (POSIX) based systems, the main requirement, besides real-time processing, was the ability to run alongside our custom ecommerce system built using FreeBSD, Stronghold and PHP. That left us with one viable solution, Hell's Kitchen Systems'4 (HKS) CCVS.

By December of 1999 I had already identified CCVS as our would-be solution, to the point that I had actually purchased it on behalf of C2Net and had started developing the replacement credit card processing solution. Doing so put me in contact with a couple of key individuals at HKS, including the founder, Todd Masco, who must have had his hands full with more than a few people like me rushing to replace their credit card processing systems before the new year. Despite that, I don't recall ever being unable to reach Todd or Doug DeJulio when needed.


Last Remaining Share


That in and of itself endeared HKS to me, but little did I know at the time that that wasn't going to be the half of it. While I recall missing the end-of-year deadline for getting our new payment system operational by a week (or so), I was hardly in the thick of it. If anything, Todd, Doug and HKS had it much worse. For besides the presumed end-of-the-millennium rush to update transaction systems across the nation, in the first week of 2000 the public announcement was made: Red Hat had acquired HKS.

The folks at HKS really had played it cool.

Now here is where things get a bit interesting. The main selling point of Stronghold was that it was a full distribution of the Apache Web Server that included the commercial right to use the encryption technology that allows for secure web transactions, thus allowing one to build solutions such as our custom ecommerce system. As mentioned, Stronghold, by way of Apache, was a POSIX-based application that ran on systems such as FreeBSD, Sun Solaris and the new, up-and-coming operating system Linux, which was (and still is) favored by Red Hat. CCVS was a POSIX-based credit card processing system.

All of which meant that in the summer of 2000 I received a friendly phone call from Todd, now of course at Red Hat, looking to build contacts at C2Net with regard to a possible partnership.

Now it was my turn to play it cool.

Given, in part, the issues at C2Net over the previous year, the majority owner of the company was looking to sell, and by late spring/early summer the whole of C2Net had been informed that negotiations had started regarding Red Hat purchasing C2Net5 - and then sworn to secrecy.

So when Todd's call came, I had to politely tell him I would pass on his information to our VP of Sales and Marketing, and then of course I made a beeline for said office after hanging up. Sadly, with hindsight and all, I should have realized that if Todd was in the dark about the potential purchase of C2Net by Red Hat, given the obvious fit among the three products, then the acquisition of HKS might have been poorly executed, which in turn was not a good sign for C2Net. But at the time, I recall hearing shortly after that Todd did eventually get filled in. And by August of 2000 C2Net and Red Hat issued a joint press release announcing the agreed-upon acquisition.




1 That is of course if, like most Americans, you can't count and/or don't care that our Gregorian calendar had no year 0.

2 Call it Bloomberg's Law or whatever, but in the US one's loathing of all things New York (or Los Angeles) is inversely proportional to one's distance from New York City. Thus Boston, which is closer, hates NYC more than Chicago does. San Francisco doesn't so much hate NYC as it does LA. New York and Los Angeles can of course claim that no one loathes them more than they loathe themselves, which in doing so means they care little about anyone else, given their immediate proximity to their own location. Having grown up in and around Chicago, I naturally care equally little about New York as I do LA.

3 And, for that matter, the only reason why we had a machine running Windows 98 in our environment.

4 Yes, despite my previous ragging on New York City, I do know that not only is Hell's Kitchen the name of a neighborhood in New York but, if I recall correctly, the company was in fact named after said neighborhood.

5 I'm a bit fuzzy here on the timeline and details, but I recall hearing about a deal between Caldera and C2Net that never materialized, and then Red Hat getting cold feet about purchasing C2Net when "the bubble burst," until various revenue commitments renewed discussions between C2Net and Red Hat.

The Misunderstanding of Information Technologists

| 0 Comments | 0 TrackBacks

In many ways an employee in a business with any significant headcount has to deal with the same social constructs as any student in high school. Social groups, pressures and mores impact decisions and actions just as much as the organizational chart. Alas this also means that stereotypes and group labels can quickly impact how various teams and business organizations perceive themselves and others.


For IT Professionals this of course means retaining the labels "geek, nerd, dork and dweeb" along with an equivalent spot in the high school social hierarchy: low man on the totem pole. Which means IT Professionals can end up in a lose-lose situation, where an executive or manager might perceive an IT geek as antisocial, bullheaded and business-challenged.

But in an opinion piece last week for Computerworld, Jeff Ello, an IT manager at the Krannert School of Management at Purdue University, argues that at the heart of the matter IT Professionals are simply misunderstood.


That is, IT Professionals are analytical individuals who can empower those around them, and their behaviors and intentions are simply misread. What might look to a manager like an individual who can't accept the manager's decision on how something is to be done is really an individual fighting for something to be done in a logical and effective manner. "It's not about being right for the sake of being right but being right for the sake of saving a lot of time, effort, money and credibility."


His opinion puts one in mind of Mike Judge's movie Office Space, in which programmers Peter, Michael and Samir1 are terrorized by Initech's demanding and perplexing management team, personified by the company's Vice President, Bill Lumbergh. But the appeal of a movie such as Office Space is that one doesn't have to be a programmer to have felt terrorized by an impersonal business executive. A customer service representative can feel equally marginalized.


Yes, of course Executives should look at the IT department as they would any revenue-generating organization and not as some group of misbehaving malcontents. Each organization and individual, taken at face value, is an important asset to the business, with specific skills that can benefit a company. For IT this means bringing strong creative and analytical abilities to the table, skills that can be brought to bear on just about any business problem.


In noting that "at the most fundamental level" IT's job is "to build, maintain and improve frameworks," Jeff Ello reminds us of what IT can do best: bring about significant strategic advantage for the business.


However, for whatever reason, Jeff Ello seems more interested in trying to justify the specific social group and mores of IT Professionals than in communicating how those misunderstood stereotypes can be overcome. For whatever grievance or special treatment he might wish to argue for on behalf of IT Professionals to the world's collection of Executives, it should be noted that in the end we all want the same thing: for whatever enterprise we find ourselves engaged in to succeed. For that is what differentiates business from high school.


In any case, it works both ways. If IT Professionals are going to gain the respect of those Executives and Managers in endeavors great and small, technical and non-technical alike, it also means that IT Professionals need to understand the rules of the game governing Executives and business. It is time for both groups to shed past stereotypes and move on to bigger and better things.



1 And of course Milton, can't forget about him. He can set fire to this place, you know?

Open Software versus Closed (Proprietary) Software

| 0 Comments | 1 TrackBack

The project is a go. The decision has been made. Because of the strategic nature, a significant investment will be made to optimize and support a critical business function with the development of new software.


For many, a key question requires answering: does the organization develop this new software in-house or outsource it to a specialized development firm? Quite an array of factors can go into deciding this question. In other cases, the answer might be simple.


Increasingly, however, an additional question comes into play when deciding how to move forward on software development: go with an open source solution or a closed, proprietary one? Open and closed systems, like anything else, have advantages and disadvantages.


For example, if software development is done in-house, proprietary software can provide legal protections for any intellectual property built into the functionality of the software that the organization considers critical to its business success.


On the other hand, an in-house system that is either developed internally and then opened, or built initially on an open source codebase, can reduce development costs and overhead.


These days, for many, the virtues of openness have a strong appeal. Consider the example of a little project I undertook a few years ago to connect an Apple //c to a Mac mini. At the heart of the hardware connection is the bridge between the old-school RS-232 standard and the currently ubiquitous USB standard. To many, the success of the project lies in the very open nature of the two standards. While I was able to purchase all the parts I needed "off-the-shelf," many, no doubt, would note that if the key product didn't exist I could have taken to building my own cabling by looking up the published documentation on the two standards.


But a standard, just like software, doesn't have to be open to be well documented. Nowadays the use of Microsoft Word or Excel goes without much forethought. If considerations are made, it is with an eye toward wide support, compatibility and availability. Thus, while every organization, software package or individual might not be highly compatible with the latest version of Microsoft's productivity software, the high adoption rate means at least a high rate of basic support and compatibility for the software and its document formats.


In fact, opened or closed, standard or software, the important thing for many is not the technical or legal risk. From a business perspective, more than anything else, the question ends up being about support and compatibility. If one invests a large amount of money into software to manage a critical business function, the concerns and ideals of open or closed take on a lot less importance. What does take on importance are questions about return on investment and the ongoing cost to support, manage or improve upon the solution, in the short and/or long term.


This is why people talk about communities, development networks, ecosystems and adoption rates: these pieces of information present a larger picture of market conditions. For if software, and the standards the software adopts, are widely adopted, then chances are good for the long-term viability of the software.


Market position and investment costs also explain why, more often than not, market leaders such as Twitter will favor closed systems over open ones, whereas disruptive challengers, such as Oracle back in the day, will champion openness over closed systems.1 For if the "micro-blogging" format becomes an open standard, then Twitter loses out on its investment in scaling up its infrastructure. Whereas Oracle, with the open SQL standard, provided a challenge to the proprietary hardware/software lock-in of IBM, the success of its challenge directly dependent upon its investment in getting SQL standardized with high adoption.


In other words, the division between open and closed software is hardly as cut and dried as many try to make it out to be.



1 It also explains why some companies, such as the Oracle example, might change their position over time, while others, like Microsoft, might have conflicting positions depending on the goals of a specific department/product.

Strategic Software Development

| 0 Comments | 2 TrackBacks

I've been thinking a lot about software development and business strategy recently, in part because of some business work I've been doing and in part because the reason why a software program exists has a strong relation to how the system is designed.


'We Are All Software Companies Now' is a post I read last week suggesting that all businesses are software businesses these days, even HP, Tesla, Rackspace and Apple. Well, that's not quite right. As I've noted before about Apple, all one has to do is look at its financial statements to know that Apple is a hardware company, selling iPhones, iPods and Macs. Apple is no more a software company than it is a technology retail chain.


But of course Apple does both, because it has determined that having retail stores and software engineers are critical functions that allow it to perform effectively in bringing the best personal computing experience to consumers around the world.1


That is, in many cases a business might determine that developing its own software for a key business activity will enable the organization to achieve its long-term objectives.


To determine if a business function might be of strategic importance, let's consider two different types of businesses and how they relate to a specific type of business software.


Spacely Sprockets is a specialized widget maker but, unlike Apple, only sells its sprockets wholesale to retailers, who then sell a range of individualized widgets from various manufacturers to the public. ACME is just such a retailer.


Both Spacely Sprockets and ACME have warehouses. Both companies have shipping requirements. Both companies use a software solution to manage and track products from origin to destination, from creation to final sale. Spacely uses a "boxed" enterprise resource planning (ERP) solution from a highly regarded business software vendor. ACME, on the other hand, uses a customized, self-developed and self-managed ERP solution.


What exactly are these two ERP systems doing for these companies? In general, ERP systems are about removing barriers to data that can become trapped in department-specific solutions. That is, ERP systems are about eliminating standalone computer systems in finance, marketing and manufacturing, and integrating them into a single unified system.


For Spacely Sprockets this means a solution that tracks wholesale sales and overall market penetration, production volume, warehouse inventory and shipping costs. The software vendor Spacely has contracted sells a solution built on software modules for each department: marketing and sales, production and shipping. Each department can choose from a set of predefined workflows optimized for its needs. At the same time, the modules share a common communication infrastructure, enabling access to any required piece of data for the company's Operations department.


Thus Spacely can focus on its business, building the best widget for its market, while at the same time reducing the overall cost of its operation and leveraging, via its software vendor's solution, the expertise of other, similar businesses. The reduction in operating costs can be passed on either to the consumer in lower product cost or to Spacely's shareholders in higher profit margins.


For ACME, business also involves sourcing products, procurement and logistics management. They too are in need of an integrated resource management solution within the company.


However, their decision to build their own ERP stems from the fact that their continued success is built on their ability to properly manage the procurement and logistics of numerous products, including Spacely's widgets. ACME has years of in-house knowledge and expertise that just cannot be replicated in any "off-the-shelf" ERP solution.


ACME's strategic business objective is to continue applying and refining their hard-won supply-chain knowledge to their operation. In doing so they too can reduce their operating costs relative to the competition and can pass the savings on either to their customers or to their shareholders in higher profit margins.


Of course, in the abstract this seems extremely straightforward. In reality, business conditions change all the time. One day the hard-won knowledge your business has fought for is gold; the next it might be a liability.


And just because a business elects one option over the other doesn't mean the software concerns end there. Nope; besides deciding whether or not to build a custom solution, a business has to consider: do they go with a closed-source solution or an open source solution? And if they do decide to develop a custom solution, do they develop the software knowledge and solution in-house or outsource it?




1 I couldn't find an up-to-date reference for Apple's current "mission statement." The best reference Google could come up with is a few years old. Nonetheless, even if Apple doesn't have a stated mission, this statement isn't that far off the mark of how the company is currently operating.

Mission Critical

| 0 Comments | 1 TrackBack

Over the past week plenty of commentary has been made, including my own, in relation to the 40th anniversary of the historic flight of Apollo 11. One comment on Twitter by alexr, however, caught my attention: "SW engineers should take this moment to consider if they'd trust their code to have gotten the LM onto the lunar surface safely."

Indeed a sobering thought at first glance. In this day and age it is quite common to run into a program, written by one or more software engineers, that seems unstable or error-prone. What if all those engineers had to deal with the rigors of getting their code "flight ready," with billions of dollars and at least two men's lives at risk?

But on second thought I think this comment does more harm than good. It seems to imply either that the challenge of writing critically important, life-and-death software comes along only once in a blue moon1, or that the good old days of elite superman programmers who wrote error-free programs are long since gone, replaced by thousands of mediocre programmers writing millions of lines of bug-infested code.

Instances of life-or-death situations being directed by computers (hardware and software) might not be an everyday occurrence for a programmer, or even a once-in-a-career occurrence. But they do still occur. I recall the professor who taught my Assembly Language class in college mentioning his work on a project for Motorola on a car fuel-injection system. The engine had a habit of shutting down when entering the presence of electrical interference.

Just imagine driving down a highway at 55 mph only to have your car shut down while passing some high-tension power lines... Now consider the added complexity of today's hybrid engines.

And while not every programming challenge is "life-and-death", plenty of software code in today's world is "mission critical" with millions, if not billions, of dollars at stake.2

In any case, the coding of the Lunar Module's software was hardly error-free. In fact, during the Apollo 11 moon landing two specific incidents occurred with the Eagle's Guidance Computer during the critical descent to the moon's surface.

At 102:38:30 Neil Armstrong calls out a program alarm, "1202". Ten seconds later, Armstrong is asking for feedback from Houston on the error code.3 Houston gives the astronauts a "go" to continue their descent. But less than 5 minutes later, with 2,000 feet separating the LM from the surface, the ship's computer issues a "1201".



102:42:13 Armstrong: (on-board): Okay. 3000 at 70. 


102:42:17 Aldrin: Roger. Understand. Go for landing. 3000 feet.


102:42:19 Duke: Copy.


102:42:19 Aldrin: Program Alarm. (Pause) 1201


102:42:24 Armstrong: 1201. (Pause) (On-board) Okay, 2000 at 50.


102:42:25 Duke: Roger. 1201 alarm. (Pause) We're Go. Same type. We're Go.


Second round of system issues.


What were the 1201 and 1202 type errors? Only the Apollo Guidance Computer indicating that it was overloaded with data inputs, couldn't keep up and was resetting itself.

Yep, that's right: the guidance computer for the LM rebooted, at least twice, during one of the most critical phases of the mission, because it ran out of memory.

The problem? An error in one of the crew's checklists had them turn on the rendezvous radar during the landing phase. Of course the LM crew was hardly trying to rendezvous with the Command Module during their descent, but the repeated calls to the computer to process imaginary rendezvous radar data filled up the limited writable memory4 the on-board system had, causing the system to repeatedly restart.

Now I suppose somebody will argue that the computer was hardly to blame. It was a user-generated error, turning on the rendezvous radar (or a documentation error), not a programming error. Moreover, the program was designed on purpose to reset itself if it got overloaded.5

But that's just it. No programmer, no matter how good, can take into account every possible error or misuse, whether created by the programmer or the user.6 Would you have considered, at first, that an overhead power line might scramble your car's fuel-injection system?

This is where the concept of fault-tolerant programming comes into play. The idea is pretty basic: enable the system to continue operating properly in the event of an error. Just as the Apollo spacecraft (and Saturn V launcher) had mechanical backups to keep the physical system running in case of failure, the guidance program (like properly designed programs today) managed the error and kept it from causing catastrophic results.
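As a minimal sketch of the idea in Perl (the job names are hypothetical, and this is loosely inspired by, not a model of, the AGC's priority-scheduled restart): trap each job's errors, shed the non-critical work, and keep the critical loop running.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Two recurring jobs: one critical, one optional. The optional job
    # here always fails, standing in for the flood of radar data.
    my @jobs = (
        { name => 'guidance',  critical => 1, run => sub { return 'steering solution' } },
        { name => 'telemetry', critical => 0, run => sub { die "input overload\n" } },
    );

    for my $cycle (1 .. 3) {
        for my $job (@jobs) {
            # eval {} traps the error instead of letting it kill the program.
            my $result = eval { $job->{run}->() };
            if ( my $error = $@ ) {
                # A failed critical job would warrant a full reset;
                # a failed optional job is simply shed for this cycle.
                die "cycle $cycle: critical job '$job->{name}' failed: $error"
                    if $job->{critical};
                warn "cycle $cycle: shedding '$job->{name}': $error";
                next;
            }
            print "cycle $cycle: $job->{name} -> $result\n";
        }
    }

The AGC did something analogous at a lower level: on a 1201 or 1202 it flushed its queues and reestablished only the jobs that mattered for landing, as footnote 5 below describes.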

Thus the statement should not be: engineers, consider whether you'd trust your code to get the LM to and from the moon safely. Instead it is: do you consider your software fault-tolerant enough to get one to the moon and back safely?



Interesting side note: a community programming effort has created a software emulator of the Apollo hardware and software, Virtual AGC and AGS.
  


1 Pardon the pun.

2 And by indirect implication the lives of the employees, customers, stockholders, et al.


3 Sounds eerily familiar for any modern day computer user; the computer reports some cryptic error code and the next step is to go searching for additional information on what's gone wrong.

4 Nowadays we classify memory as Read-Only Memory (ROM) or Random Access Memory (RAM) and talk about gigabytes (10^9 bytes) of RAM for a laptop (or even a smartphone). The Apollo Guidance Computer? About 64 kilobytes (10^3 bytes) of ROM and only 2 kilobytes of writable RAM.

5 The idea was to clear the fault and reestablish important tasks, i.e., clear out the waiting calls for calculating unnecessary rendezvous telemetry and reestablish jobs for processing landing telemetry.

6 And just in case you wish to insist that the programmers of yesteryear were supermen, well, it turns out one uncorrected bug could have crashed the LM by trying to fly the craft first "under" the surface, then back "over" the surface and then "onto" the surface for a safe landing.

The Software Supply Chain Problem

| 0 Comments | 0 TrackBacks

Last week a dust-up occurred in part of the software industry relating to a security issue in a key software toolkit. Apparently two years ago someone ran an analysis tool on the source code of the security toolkit OpenSSL as packaged in the Debian Linux distribution. The tool reported an issue within the OpenSSL package included by Debian, so the Debian team decided they needed to fix this "security bug." Alas, the fix broke a critical element of OpenSSL: its random number generator. (Long story short, a truly random number generator is critical to software encryption tools such as OpenSSL.) The end result is that for the past two years security applications on Debian and Debian-related distributions have been "hackable" and need to be rebuilt.
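To get a feel for how bad that is: reportedly, about the only "random" input the patched code had left was the process ID, and Linux PIDs of that era topped out at 32,768. A toy illustration in Perl (not the OpenSSL code, just the shape of the flaw):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # A deliberately weak key generator whose only entropy is a PID.
    sub weak_key {
        my ($pid) = @_;
        srand($pid);    # seed the generator with nothing but the PID
        return join '', map { sprintf '%02x', int rand 256 } 1 .. 8;
    }

    # A "victim" key generated on this machine from its process ID.
    my $victim = weak_key($$);

    # An attacker simply walks the entire PID space until a key matches.
    for my $pid (1 .. 32_768) {
        if ( weak_key($pid) eq $victim ) {
            print "recovered the key by guessing PID $pid\n";
            last;
        }
    }

With so few possible seeds, every key the generator can ever produce can be enumerated ahead of time, which is essentially why the affected Debian keys have to be thrown away and regenerated.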

Each side in the matter is blaming the other. A member of the OpenSSL team suggested that "had Debian [submitted its code changes], we (the OpenSSL Team) would have fallen about laughing, and once we had got our breath back, told them what a terrible idea this was." Debian developers, on the other hand, have noted that the email address provided by the OpenSSL team is incorrect and that overall documentation on the part of the OpenSSL team is lacking.

As with our own service issue from a few months back, pointing fingers isn't as helpful as discovering where the chain broke and why. In both cases the issues are eerily similar: a breakdown in customer/vendor communication.

In Boston, Ben Hyde deftly draws a connection between his local butcher's meat-packing industry and his own, and in the process wonders what the fallout of interdependent web applications, circa 2008, might be. Here in Chicago, the former hog butcher for the world, I think we are just starting to see questions and concerns about "quality control" percolate into the public consciousness as the software supply chain between "suppliers," "vendors" and "customers" grows in sophistication.

Last Labor Day the Chicago Park District unveiled a statue at the corner of Pulaski and Foster, just a short walk from my home here in the Albany Park neighborhood, in honor of the local park's namesake, Samuel Gompers. Gompers was an American labor organizer, union leader and founder of the American Federation of Labor (AFL). Unlike some of his contemporaries, Gompers doesn't seem to have considered himself a Socialist, Anarchist or Communist, which in today's political world would probably place him and his beliefs somewhere near the center of America's political spectrum, although at the time his ideals clearly fell progressively left of center.

Upton Sinclair, a junior contemporary of Gompers, was, no doubt about it, a Socialist actively advocating socialist views. He gained particular fame for his 1906 novel The Jungle, which dealt with conditions in the U.S. meat packing industry and caused a public uproar that partly contributed to the passage of the Pure Food and Drug Act and the Meat Inspection Act that same year. Yet Sinclair himself felt the meaning of his work had been lost on the general public: his outcry wasn't about the condition of the meat so much as the human tragedy lived by the workers in the plants handling it.

And yet The Jungle did ultimately bring about change. Perhaps not the change originally intended by its author, but change did come to the growing complexity of the American food supply chain of the early 20th century, a supply chain whose quality control problems started to get dealt with as regulation and greater customer awareness took hold.

A Zoomshare service outage, while problematic, is correctable. A security breach from improperly patched software dating back two years is a little harder to correct...

Recently TJX Cos., a discount retailer that operates T.J. Maxx, Marshalls, HomeGoods and A.J. Wright stores, started mailing notifications to customers about a recently arrived-at settlement of a class action suit relating to a January 2007 report that computers handling customer transactions at a number of its chains had been broken into.

What if - and this is just a hypothetical here - what if the TJX issue was related to the Debian/OpenSSL fiasco? Who would legally be on the hook? TJX? Debian? OpenSSL? All three?

What are the implications? We are already seeing customers and regulators react. Services such as Zoomshare post Privacy Policies and Terms of Service. States such as California have passed laws requiring immediate notification if customer data is compromised.

It seems easy to wonder if the computer industry is one Upton Sinclair exposé away from greater public and governmental outcry. Even without a "man-of-the-people" individual looking to correct some of the inequities in the IT industry, one can see changes brewing as the overall complexity of our systems grows - along with our dependence on them.

About the Author

Paul is a technologist and all-around nice guy for technology-oriented organizations and parties. Besides maintaining this blog and website, you can follow Paul's particular pontifications on Life, the Universe and Everything on Twitter.

   
   

