By: Lynn H. Maxson lmaxson@ibm.net
Now I personally have no doubts about the Warpicity methodology or about its
working as advertised. I don't have them because I have subjected it to the same
level of scrutiny that I have given many such proposals in the past to clients,
whether as an IBMer (36 years) or since (7 years). Clients present me with a problem
set. After a thorough analysis I present them in turn with a solution set. Whether
the client accepts it or not is a client decision.
When IBM withdrew from actively marketing OS/2 many in the OS/2 community reacted
with anger and despair, both of which in my opinion amounted to overreacting. IBM
made a business decision. It was losing money on OS/2, losing marketshare with
OS/2, and losing money in its PC business. Regardless of the chicanery or business
ethics of Microsoft, which accounted for part of this, the financial losses IBM sustained
over several years were significant by any measure. At one time the estimated loss
in one year for the IBM PC Company was over a billion dollars.
Many in our OS/2 community felt IBM had a commitment to continue OS/2 regardless
of losses sustained, that it owed it to us due to our loyalty. The truth is that
IBM instituted a form of damage control to reduce the amount of loss, but it nevertheless
sustains some financial loss every year in OS/2 support. Without arguing how much
of this was due to the influence of some of its large accounts, which had a significant
investment in OS/2 and which IBM did not want to anger, the fact is that such support
has continued. The OS/2 community for the most part has benefited.
Now mention of the OS/2 community has occurred frequently thus far, but the truth
is that there has not been, nor does there now exist, a recognizable entity that speaks
for the OS/2 community or is legally capable of representing it as a business entity.
IBM's large accounts do have the capability of representing themselves and have done
so effectively. Because they have, the rest of us have enjoyed the benefits of continued
IBM OS/2 support in terms of fixpacks, premier JAVA support, continuing improvements
to Netscape, and more.
When I made the initial Warpicity Proposal at WarpStock98 the key component
was establishing an organization capable of legally and financially representing
some participating segment of the OS/2 community. I had no idea then, nor do I now,
of the size of the participating segment. For the purposes of the proposal I picked a
number, 50,000, and a subscription amount, $20 annually.
I felt the 50,000 was conservative due to the rumored sales of 13 million copies
of OS/2, including those made to large accounts. Even believing that large accounts
accounted for three-quarters of total sales, that left over 3 million essentially SOHO
or individual accounts. As public radio and television seldom exceed more than 6%
membership of their listening audience, I used that as a guideline for an estimate.
That estimate comes to 180,000. Thus selecting 50,000 seemed very conservative.
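A back-of-the-envelope check of that arithmetic, using only the rumored figures
above (a sketch, not audited numbers):

    soho_accounts = 3_000_000   # the "over 3 million" left after large accounts
    membership    = 0.06        # the public radio/TV membership-rate guideline
    print(int(soho_accounts * membership))   # 180000, well above the 50,000 chosen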
I did much the same thing in setting the proposed annual subscription amount
of $20. I knew that IBM's Software Choice subscription had not sat well with the
OS/2 community: its prepaid two-year subscription amounted to $120/year/client machine,
significantly higher for its Advanced Server subscription. On the other hand I have
paid Spitfire Software $20 semi-annually ($40 annually) for support for its InCharge
product since it began its paid subscription support. Thus again I picked on the
conservative side.
Whether my conservative estimates turned out to be optimistic or realistic, 50,000
subscribers at $20 each annually provided an annual budget of $1,000,000. Prior to
presenting at WarpStock98 I floated these numbers in the OS/2 forums on CompuServe
which I frequent. The reactions at times were quite vociferous. They ranged from
claims that 50,000 was way too high (5,000 if we were lucky) to claims that $1,000,000
was far too low to cover the cost of providing a replacement for OS/2. After all,
both IBM and MS were expending hundreds of millions of dollars annually in maintenance
and new development for already written operating systems.
Now if their responses were accurate with respect to the number of possible subscribers,
then that meant that better than 99% of OS/2 sales had gone to large accounts, that
the SOHO market accounted for only a fraction of a percent of total OS/2 sales
to date. That meant that IBM was wasting its money in marketing to SOHO users and
was justified in ceasing to waste it further. IBM said MS had won the SOHO desktop,
and it had. We had substantial testimony from our own community to verify that.
I was faced then with a moving target that kept getting smaller, going from
50,000 to 5,000 to 500 and dwindling. More importantly I was faced with the fact
of life of the high cost of software development and maintenance which both IBM
and MS (and the rest of the industry) endured. From the number of potential subscribers
it did not appear that $1,000,000 was achievable, and even if it were, the high cost
of producing an operating system meant it was too little.
So I couldn't determine whose subscriber numbers were correct. That could
only be determined by trial and error. One thing seemed certain: the number was
constantly dwindling, and thus whatever budget the subscribers could afford had to
be sufficient. At its minimum that number was either zero, which ended any concerns
about budget, or one, which set a lower limit on affordability. The upshot was that
I had to attack the cost of developing and maintaining software. Moreover, the challenge
was that I had to attack it in a manner that made it individually affordable, given
the dwindling population and the negative estimates of its actual size.
The high cost of software development and maintenance has led to a profusion
of packaging, of what manufacturing calls "build to stock", a means of
using volume sales to amortize the total "cost of sales" by dividing that
cost across each unit sold. To succeed, the volume sold and the income received
must be such as to cover the cost of sales plus, hopefully, a profit. Otherwise
the venture is unprofitable, which is what happened to IBM with OS/2.
In manufacturing the alternative to "build to stock" is "build
to order", or what we commonly refer to as a "custom" product. Normally
custom products on a per-unit basis cost more than packaged ones. I say "normally"
because I faced the challenge of not simply bringing a custom cost as close as possible
to a packaged one (to meet the need to have it individually affordable), but, if possible,
of making it cost less than a packaged one.
Now a difference between a custom and a packaged product in manufacturing terms
lies in the fact that the custom product does not sustain "packaging costs". Now you
can reasonably protest that a custom product is a package of one. Thus it must
absorb all of its packaging costs as opposed to distributing them over multiple
units. That is certainly true. Thus we must enter somewhat murky waters, but I
hope in a logical and clear manner that leaves us in agreement upon exiting them.
Though as a business we may sell either a product or a service, for the moment
let us consider a product only. If we are a new business and we have selected the
most "practical" business manuals to guide us in our pricing, chances
are they will instruct us to set our price for a product at three (3) times its
manufacturing cost. They say this because technically (and hopefully) this is a
cost which a priori we can reasonably know or estimate. Now a manufacturing cost
is only part of the "cost of sales", the remainder being the marketing
and distribution costs. Those, for a variety of reasons, we can only know a posteriori,
after the fact. Thus experience recommends the three times manufacturing cost.
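As a worked example of that rule of thumb (the dollar figures are purely illustrative):

    manufacturing_cost = 10.00            # knowable a priori, per unit
    list_price = 3 * manufacturing_cost   # the "three times" rule of thumb
    # The remaining two-thirds is a provision for marketing and
    # distribution costs, knowable only a posteriori.
    provision = list_price - manufacturing_cost
    print(list_price, provision)          # 30.0 20.0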
We have an example of this within IBM manufacturing. When the IBM PC Company
began losing market share as well as money, IBM disk manufacturing found itself
in a bind as well: its captive market was declining. Its marketing and distribution
channels at the component level were not doing well enough to compensate for the
expected losses. Thus the disk drive manufacturing unit went to the open market,
where potential customers like Dell and others were open to competitive bids. Here
the disk drive manufacturing unit, no longer burdened by marketing and distribution
costs, could bring its selling price (in direct sales to end customers) down to
competitive levels and yet maintain the same or greater profitability.
Probably more to the point, IBM manufacturing knows its customer set or marketplace
exactly and is capable of dealing with them directly in many instances not involving
added-value services (those provided by IBM marketing). Such direct deals are probably
more to their liking than the vagaries often associated with the over- and under-estimating
of sales by the marketing unit, either of which shifts manufacturing costs upward.
In truth IBM manufacturing views bypassing the need for IBM marketing as a positive.
This lowers its cost of sales, making it more competitive.
So you see the packaging costs are not simply what goes into the packaging material or the production of its documentation, but also into the marketing and distribution of the packages. A custom package is a direct sale. When built it goes directly to the customer. It absorbs a one-time marketing cost, but is otherwise free of those costs which accrue due to a delay between production and sale.
We need to understand the sources of the costs associated with that delay.
Always in business, time is money: the more time, the more money. A packaged product
accumulates cost just sitting in inventory, just occupying space, just being unsold.
If you have set your price on selling a given volume in a given interval of time
and you fail to do so, then you must either amortize your cost over a lesser
volume, which may or may not be profitable (given IBM's OS/2 experience), or you
must increase the interval, thereby increasing your cost of sales, which again may
or may not be profitable.
The point is that a custom (build to order) product does not suffer these costs.
All things being equal it always meets its volume (and thus its profit) level in
the interval allowed. In this manner if we throw in marketing and distribution
costs as well as the time to meet volume goals, a custom product may very well cost
less than a package product.
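A minimal sketch of that comparison, with purely illustrative numbers and an assumed
per-unit carrying cost for unsold inventory:

    def unit_cost_build_to_stock(production, packaging, units_built,
                                 units_sold, carrying_per_unit_period, periods):
        """Cost of sales per unit sold when product sits in inventory."""
        total = (production + packaging) * units_built
        total += carrying_per_unit_period * units_built * periods
        return total / units_sold          # amortized over what actually sold

    def unit_cost_build_to_order(production, one_time_marketing, units_ordered):
        """Custom product: built only when ordered, shipped immediately."""
        return production + one_time_marketing / units_ordered

    # Illustrative only: a stock build that misses its volume target can
    # easily cost more per unit than the equivalent custom build.
    print(unit_cost_build_to_stock(10, 2, 1000, 600, 0.5, 4))   # ~23.33
    print(unit_cost_build_to_order(10, 2000, 600))              # ~13.33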
The real point is to ensure that it does not cost significantly more, that the
difference amounts to a "noise level", something so small as not to enter
negatively into any purchasing decision. Well, we do have the pre-purchase or guaranteed
purchase plan. This occurs in any organization that produces for its own consumption
and has the financial resources to cover its production costs. All that remains is
to guarantee the lowest possible purchase price to stave off competitors.
That gets somewhat complicated when a competitor offers an equivalent product
for free or for media costs plus shipping. That's what Microsoft did against Netscape,
and what Sun is currently doing against Microsoft with its StarOffice offering. These are
anomalies, actually subsidies whose costs get absorbed out of the profits of other
products. It is much the same with the voodoo economic model of Linux, whose
volunteers must subsidize their work from other income.
If you have a self-financed infrastructure and you decide to develop and maintain
your own operating system, do you manufacture it as a general package (build to stock)
or as a custom package (build to order)? The answer is cost related, based on what
you can produce with the funds you have available. The general solution which IBM,
MS, and others have adopted is to produce a general package from which the user
can select a custom set of features. So they must absorb a greater package cost
to allow multiple custom choices.
They do this using pre-packaged (compiled and executable) components. An alternative
to this is to use a "selection process" on a base of source code and to
"generate" (compile and link) a custom system. This is the alternative
IBM used in the early years of its OS/360 operating systems as well as of
its transaction processing system CICS. This alternative, this generation of a
custom system from source code, is what the Warpicity methodology does.
One of the advantages of generating from source instead of assembling from pre-packaged
components lies in eliminating the need for emulation, which runs the APIs of a guest
operating system by translating them into the APIs of a host operating system. You eliminate
it by supporting the APIs of both natively within the same operating system structure.
Nominally this means having a micro-kernel above which all API sets exist separately
in a layer. Each set of APIs then, including multiple instances of the same set, runs
its applications as a separate virtual machine.
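A minimal structural sketch of that layering, with hypothetical names (nothing here
is actual Warpicity code; it only shows the shape of the idea):

    class MicroKernel:
        """Common base layer (scheduling, memory, IPC) shared by all API sets."""
        def service(self, request):
            return f"kernel handled: {request}"   # stand-in for native services

    class ApiPersonality:
        """One API set (OS/2, Win32, a JAVA engine, ...) layered directly
        above the kernel. Each instance runs its applications as a separate
        virtual machine; no personality translates into another, so there
        is no emulation and no translation penalty."""
        def __init__(self, kernel, name):
            self.kernel, self.name = kernel, name
        def api_call(self, request):
            return self.kernel.service(f"{self.name}: {request}")

    kernel = MicroKernel()
    layers = [ApiPersonality(kernel, n) for n in ("OS/2", "Win32", "JAVA")]
    print(layers[0].api_call("DosOpen"))   # native path, no guest/host translation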
I mention this because it is important to understand the thinking that went
into defining the Warpicity methodology. Warpicity avoids using emulation as well
as the translation that it requires. It has no concerns then that a host set of
APIs may or may not contain matching features for a guest API, and that if not, they
have to be added in some ad hoc manner to the host API. This is what Lotus has done
in using the same source for OS/2 as it did for Windows with its SmartSuite. This
is what VMWare has done with its virtual system software for Linux to run Windows
applications. It remains the task of those working on Wine on Linux and on ODIN, as
well as of those absorbing the incomplete effort of the Win32OS/2 project.
The Warpicity methodology does not engage in translation from one set of APIs
into another, but builds all of them from a logical hierarchy of APIs, which permits
melding all API sets (application interfaces) into a layer above a common set which is
independent of and separate from the higher level. Another advantage of this is that no
performance hit due to translation occurs when an API is native. This extends to JAVA as
well, permitting a native JAVA engine to co-reside natively with the OS/2, Windows,
or Linux operating systems. Unlike other implementations, no performance hit occurs
with this JAVA engine either.
The difficulty in all this lies in presenting something with claims that seem
almost too good to be true and therefore must not be. Therein lies the problem:
the Missourian's rule, that they must see it in order to believe it. Until then it
remains vaporware.
The claims, however, have little or nothing to do with the methodology. They
come about from automating the non-writing clerical tasks that occur through a software
product's life-cycle from initial development until the last version is laid to
rest. That deals with the automated storage, retrieval, and maintenance of all
source for all development (and maintenance) activities from input of requirements
through production of each version of a software product. You don't have to be
a software techie to understand how this occurs. You can understand the process
without knowing how to write the software.
All vendor products used in the production of software use source files. All.
No exception. Therein, you see, lies the problem. It is not simply compatibility
or the translation of one file format into another. It is also the lack of reusability,
meaning that the same text used in two different source files constitutes two different
source texts. There is no automated means then of changing the content of a source in
one instance and having that change reflected in all other instances. We have not
only the expenditure of extra time and energy, but more importantly an issue of
synchronization, an issue of keeping source documents in sync.
The most egregious example of this lies in commented programming source code.
The comments are embedded in the programming source file. Moreover they are normally
written by the programmer who may or may not be following installation standards
or who may or may not have the proper written communication skills. Nonetheless
if his comments are to appear in another source document they must occur as duplicates.
Again then a change in the programming source code necessitating a change in the
comments will not be naturally reflected in the user documentation.
Now the Warpicity methodology assumes a pure manufacturing environment. If a
version of a software product is a final assembly, then it consists of sub-assemblies
and raw material, with each sub-assembly containing other sub-assemblies and raw
material until reaching a sub-assembly level on every path that contains raw material
only. If we look at any programming source file or user documentation, we will
determine that they are all assemblies, that none involve the storage, retrieval,
and maintenance of raw materials per se, only the instance of their use in an assembly.
Thus reuse of raw material is impossible; only duplication is possible.
In a pure manufacturing environment everything ultimately reduces into a set
of raw materials. Yet no vendor's tool deals with raw material except as an instance
in an assembly, a source file. For this reason alone the Warpicity methodology rejects
the use of any existing vendor tool in the production of software. Instead, as a pure
manufacturing environment, the Warpicity methodology stores, retrieves, and maintains
separately each raw material used. Separately it stores all assemblies of these raw
materials. Nowhere in any of this does it involve a source file. It can produce a
source file as an output, but it has no need of one as input to a process.
Now the raw material of any user document is the sentence, the question, or
the exclamation. The raw material of any programming (or specification) language
is the program (specification) statement. The Warpicity methodology stores each
of these separately as named entities, in a relational implementation as rows in
a single table. The single table suffices for the storage of raw material for
user documentation as well as program or specification source code. The name given
each entity derives directly from its content plus an index value to avoid concerns
with homonyms (other raw material with the same name).
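A sketch of one way such content-derived naming might work; the truncation length
and storage here are my assumptions, since the article specifies only a content-derived
name plus an index to separate homonyms:

    names_seen = {}   # base name -> distinct contents registered under it

    def raw_material_name(statement: str) -> str:
        """Derive a name directly from the statement's content, appending
        an index value to distinguish homonyms (same base name, different
        raw material)."""
        base = statement.strip()[:32]              # content-derived base name
        contents = names_seen.setdefault(base, [])
        if statement not in contents:
            contents.append(statement)             # new homonym gets a new index
        return f"{base}.{contents.index(statement)}"

    # An identical statement always maps back to its existing name (reuse);
    # a different statement sharing the base name gets the next index.
    print(raw_material_name("DCL I FIXED BIN(31) INIT(0);"))
    print(raw_material_name("DCL I FIXED BIN(31) INIT(0);"))  # same name: reuse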
So a single table suffices for the storage of all source raw material. Once
again we have a reason for rejecting all vendor tools that only work on assemblies
and provide no means to create an assembly from raw material source.
Contained within the source raw material are object references. If we consider
the source as talk, then the objects are what we talk about. The Warpicity methodology
stores the source for these objects under their name in a separate table. Thus
the Warpicity methodology uses only two tables for the storage of all source as
well as the descriptions of the objects referred to.
Both the source table and the object table are maintained in the data repository.
In addition to these, the Warpicity methodology uses the Universal Name Directory
(UND), which adds four additional tables. Two of these are used to account for all
possible names of raw materials and assemblies, including all possible homonyms
(same name, different referent) and synonyms (different names, same referent).
The remaining two account for all possible assemblies, whether hierarchical (unordered)
or network (ordered). Thus with only six tables the Warpicity methodology offers
fully compatible source access with automatic source reuse, something not available
with tools using source files only.
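A relational sketch of those six tables as I read the description; the column choices
are my assumptions (the actual definitions are in the earlier document at
http://www.scoug.com):

    # Data repository: the two source tables
    SOURCE_TABLE = ("name", "content")        # one statement/sentence per row
    OBJECT_TABLE = ("name", "description")    # the objects the source talks about

    # Universal Name Directory (UND): four more tables
    HOMONYMS  = ("name", "referent_id")             # one name, many referents
    SYNONYMS  = ("referent_id", "name")             # many names, one referent
    HIERARCHY = ("assembly", "member")              # hierarchical (unordered) assemblies
    NETWORK   = ("assembly", "member", "position")  # network (ordered) assemblies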
What we need to understand is how this simplifies documentation, whether user
or source (program or specification), as well as the process of automating their
storage, retrieval, and maintenance. While the actual writing speed remains the
same (people based), once written, all further processes occur at machine speeds,
thousands and tens of thousands of times faster and cheaper. It is here, and not
in the other aspects of the Warpicity methodology, that the claimed gains of "50
times time reduction, 200 times cost reduction" occur.
Basically existing vendor systems could adopt the same scheme of a common data
repository, incorporate it within their tools, and provide the same level of gains.
The mistake IBM made with the Data Repository in its ill-fated AD/Cycle was retaining
source files instead of treating them as assemblies to be broken down into separate
raw material parts. As a result their concept of a data repository, a place where
data reposes, was transformed into a data directory, a place which said where the
data in source files resided.
The Warpicity methodology thus supports three levels of documentation: raw material,
assembly, and final assembly (presentation). Other vendor tools support at most
two: assembly and final assembly. Thus they cannot provide a seamless fit among
a sequence of vendor tools used in the software development/maintenance process.
Because its source is always raw material, the Warpicity methodology does provide
a fit as seamless as the process itself.
The point is that any analysis will show that the major cost in software occurs
due to this lack of a seamless fit throughout the activities involved. The major
expenditure in time occurs trying to compensate for the lack of a seamless fit.
The Warpicity methodology provides an automated seamless fit from one activity
to the next, leaving as the only non-automated activity that which people must
perform: the actual writing of source.
There is nothing technical in this to understand. The definitions of the six
tables exist in another, earlier document (available at http://www.scoug.com),
as do all the naming conventions. The secret lies in guaranteeing unique names
for all source. It is not much of a secret in that it is a commonly employed mechanism
of appending an index value to every proper name, used in such software as the
nickname list in the Gammatech IRC software product.
That so little can mean so much is the story of the history of the IT profession,
as is evidenced in the gains achieved through the simplification of processes inherent
in all the structured methodologies over what they replaced. This is just another
simplification that leads to increased productivity. It all stems from the elimination
of the use of source files.
As mentioned earlier, the actual writing is not automated. Some person or persons
must do the writing. None of this exists in source form or in any existing tool.
Thus all of it must proceed from scratch. It all occurs from writing specifications
in a specification language, SL/I. No compiler exists for SL/I. Thus one must be
written. To write it means writing the specifications in SL/I for the parser, the
syntax checker, the semantics checker, and the two-stage proof engine. First it
must be written using a non-SL/I language.
Once you have written the specifications in SL/I for an SL/I compiler and written
the initial version of the compiler of those specifications in a non-SL/I language,
you have a working SL/I compiler. Now all that remains is to write the specifications
for the tool, the Developer's Assistant, along with those for the data repository,
and you will have the only tool necessary from that point on. Then all you have
to do is write the specifications for the operating system as well as those for
all the APIs of the operating systems you want to include. At that point you will
have your replacement for OS/2 and whatever other OSes you have managed to include.
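That is the classic compiler-bootstrap (self-hosting) sequence. A toy sketch of
the staging, with stand-in objects since no SL/I tooling yet exists:

    class Compiler:
        """Stand-in for a compiler; 'compiling' a specification here just
        yields another tool object, to show the staging only."""
        def __init__(self, built_from):
            self.built_from = built_from
        def compile(self, specification):
            return Compiler(specification)

    # Stage 0: hand-write a bootstrap SL/I compiler in a non-SL/I language.
    bootstrap = Compiler("hand-written, non-SL/I language")

    # Stage 1: compile the SL/I specifications of the SL/I compiler itself;
    # the result is a self-hosting SL/I compiler.
    sli_compiler = bootstrap.compile("sli-compiler.sli")

    # Stage 2: everything after this point is just more SL/I specifications.
    developers_assistant = sli_compiler.compile("developers-assistant.sli")
    data_repository      = sli_compiler.compile("data-repository.sli")
    os_replacement       = sli_compiler.compile("os-plus-api-layers.sli")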
Now where along this scale of writing do we have a point which constitutes proof?
At what point do you buy into the methodology? More to the point, at what point
do we, as a user community considering a tool which can produce any software and
match any problem set exactly with a solution set, decide that such a means of production
belongs in the public domain and is not subject to proprietary and closed source control?
In short, we can guarantee that we own this tool and all its source and thus can
guarantee its use by any vendor.
At what point do we act as a community together to achieve a useful end or to
prove a concept, instead of relying on one or two members? What you have before
you is a well-thought-out proposal based on many years of experience in software
development and maintenance. It rejects object orientation per se due to the proprietary
nature of that methodology, which even JAVA cannot overcome. It has considered (and
eliminated) all the deficiencies of all other programming methodologies while retaining
their useful functions and features. Even so it introduces no new features with
the exception of the automation of clerical activities surrounding source maintenance.
Such automation, the justification for almost all computer systems, cannot be regarded
as either new or revolutionary.
Thus I offered it to the OS/2 community as a proposal, a means for it to achieve
a desired end, knowing full well that I would not and could not afford to take it
on as an undertaking completely on my own. While my forte may be analysis it does
not transfer over into implementation. Given some of the known skills out there
in the community I would defer to them when it comes to implementation. Thus I
cannot and will not promise any implementation without a corresponding investment
by the community in doing something which was offered in their interest in the first
place.
If we have a standoff here, it will remain as such until the community decides either to accept or reject it and follows either decision with an action plan of its own. I can only hope to be one vote in such a decision. I will abide by any majority vote.