The brief boom in expert systems in the 1980s and, more recently, the fad for "business rules" in various forms are both attempts to capture business policy in software. This is not an easy chore, for a number of reasons:
These are merely the exciting parts of policy dynamics. Much of the time "policy" is what really happens when the business of the organization is transacted, and it all too often has nothing to do with books, or even experts. Sometimes headquarters learns what policy has been made in its name, but not always, and by no means consistently.
Meanwhile analysts in cubes near the boardroom, as they have since green eye shades were more than metaphor, crank out risk and predictive models which eventually become the articulation of new policy or changes to existing policy. These result in pricing changes, new products, new transactions, new procedures, and always epic adventures in data processing.
Such is policy: big, lumbering, many-headed, inefficient, dangerous - now and forever. The purpose of this web site is to explain a family of techniques which give some hope of getting a grip on one or more tentacles of the beast. Policy components (or, to use a more techie grad-school term, policy 'bots) are able to get a handle on some of these issues, which have one way or another defeated most previous techniques.
The goal of policy components is to deliver the results of data mining as business intelligence at the point of sale, providing the dream of expert systems and artificial intelligence with the precision offered by full analytic techniques -- and to do this with full transactional performance.
Policy components (a.k.a. policy 'bots) are portable, lightweight software modules, each of which implements a single facet of enterprise policy wherever it is required. And (this is crucial) they don't do anything else.
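As a concrete sketch of that shape, consider what a payee-code component might look like: one object, one facet of policy, one narrow entry point that any client (from an XML gateway to a COBOL bridge) could call. Everything here - the class name, the codes, the scores - is invented for illustration, not taken from the actual demo.

```python
# Hypothetical sketch of a single-facet policy component.
# It suggests payee codes; it does nothing else.

class PayeeCodePolicy:
    """Suggests payee codes from a mined scoring table."""

    def __init__(self, scores):
        # scores: mapping of clue -> {code: score}, produced by data mining
        self._scores = scores

    def suggest(self, clue, top_n=3):
        """Return the top-scoring candidate codes for a clue."""
        ranked = sorted(self._scores.get(clue, {}).items(),
                        key=lambda kv: kv[1], reverse=True)
        return [code for code, _ in ranked[:top_n]]

bot = PayeeCodePolicy({"utility_bill": {"PC-104": 0.62, "PC-221": 0.27}})
print(bot.suggest("utility_bill"))
```

The point of the narrow interface is exactly the "don't do anything else" clause: the component carries the mined policy and nothing of the surrounding transaction.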
The first demo on this site is a perfect example of how inclusive policy is and of how trivial it can seem. The demo is designed to help an operator find a code. It isn't world-shaking stuff, and yet a mote can seem awfully big when it's in your eye.
The insurance company for which the original of this excerpt was developed definitely obsesses over payee codes, has perhaps an average of fifteen minutes devoted to it in training, has some expert operators who have absorbed certain patterns into their bones, can't quite explain how they know what they know, and can't wait to finish night school as rocket scientists in order to depart to greener and less hectic pastures.
This problem could, but never would, be addressed as an expert system or business rule solution of perhaps twenty or thirty rules. One would expect a long development cycle eliciting expertise from analysts, trainers and operators. Once operational, one would also expect a response time of approximately 50 or 60 milliseconds - no big deal, certainly, but indicative of bigger and lengthier things ahead.
The policy component which is illustrated here required a single technician about 80 hours to develop, with the aid not of experts but of light data mining (included in the development time). Responses average less than a single millisecond.
The types of problems represented by the other examples could be, and frequently are, implemented as expert systems or business rule systems. The auto insurance example in fact is a classic expert system problem, which might be solved in well over a hundred rules. It considers several variables, weighs their influence and calculates a price for automobile insurance. Auto insurance pricing is far more complex than what this demo program presents, but the example typifies the core of the problem and illustrates the capabilities nicely in a techno morality play every driver can easily understand.
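The weighing-and-calculating at the heart of such a problem can be sketched in a few lines. The factors and weights below are invented for illustration; they are not drawn from the demo or from any real rate filing.

```python
# Hypothetical sketch of the auto-insurance pricing shape: several risk
# variables, each carrying a weight, combined into a quoted premium.
# All names and numbers are fictional.

BASE_PREMIUM = 500.0  # assumed annual base premium in dollars

# Multiplicative risk factors per trait (illustrative only).
RISK_FACTORS = {
    "age_under_25": 1.40,
    "age_25_to_65": 1.00,
    "urban_garaging": 1.15,
    "rural_garaging": 0.95,
    "clean_record": 0.85,
    "one_at_fault_accident": 1.30,
}

def quote(traits):
    """Multiply the base premium by the factor for each applicable trait."""
    price = BASE_PREMIUM
    for trait in traits:
        price *= RISK_FACTORS[trait]
    return round(price, 2)

print(quote(["age_under_25", "urban_garaging", "one_at_fault_accident"]))
```

A rule system would express each factor as a rule; the component form carries the same policy as a table plus a two-line calculation.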
The price list demo also calculates prices, but from a very different angle. It deliberately emulates a published price sheet. Such an application has become de rigueur for the automated underwriting of mortgage loans in business rule environments. The intent of the demo is to show three distinct ways in which a policy component can improve upon the now-standard sort of solution, by improving:
The fourth and final demonstration is far and away the largest and most robust. It tackles the analysis of a full credit report and other credit history to assess risk for mortgage lending. This represents the core of the mortgage lending business, but it also typifies the policy of nearly any complex financial transaction which must balance risk against the likelihood of reward.
The intent with this example is to illustrate a fairly large policy structure which must include complex probability calculations, legal constraints, and management fiats; to demonstrate, in short, that policy components are capable of efficiently dealing with extremely adult business challenges; and, beyond that, to show that they are far and away the most practical way to proceed in software for these problems.
If that is true, the key to it all lies in statistics, as directly opposed to "expertise".
The basis of expert systems and business rule systems is obviously the rules themselves, which come from documented policies, experts, delegated committees, etc. The core of policy components, on the other hand, is statistics, as they are developed by the organization's analysts or by ad hoc observation and data mining, whichever of these, or whichever combination of them, the project dictates.
Behind each policy component is a formal or implicit statistical model. The same in fact is true of nearly any expert or business rule system. This may seem a quirky statement, but it is true; and it is a crucial distinction, one that needs to be made for the proper analysis of any policy, and therefore of any policy software solution. The demos themselves serve nicely to illustrate this law of policy implementation.
The payee code demo has merely a few confirming percentages with which to compute its scores, which seems innocent enough of mathematical complexity, and it is. But the alternative to these few numbers was the nearly subconscious reflexes of expert operators. This is ever the way if expertise is to be dragged kicking and screaming from the medulla to the cortex, where software can deal with it.
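What "a few confirming percentages" might mean in practice can be sketched as follows: for each clue an operator sees, the fraction of past transactions that resolved to each payee code. The clue names, codes, and counts here are invented; the actual mining behind the demo is not published on this site.

```python
# Hypothetical sketch of mining confirming percentages from history.
from collections import Counter

# (clue, code) pairs as they might come out of the transaction log.
history = ([("utility_bill", "PC-104")] * 62 +
           [("utility_bill", "PC-221")] * 27 +
           [("utility_bill", "PC-350")] * 11)

def confirming_percentages(records, clue):
    """Fraction of transactions with this clue that went to each code."""
    codes = Counter(code for c, code in records if c == clue)
    total = sum(codes.values())
    return {code: n / total for code, n in codes.items()}

print(confirming_percentages(history, "utility_bill"))
```

A table of this kind is the whole "model" such a component needs; the expert operator's reflexes reduce to a handful of observed frequencies.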
The auto insurance demo has no formal statistical model behind it because it is an act of creative fiction anyway. But clearly no insurance company offers, much less prices, its products without a team of actuaries grinding out numbers. The statistics behind the auto insurance demo may be fantasy, but any real implementation would involve quite sophisticated modeling.
The mortgage price sheet is an excellent illustration of a statistical model used as policy. The intricate patterns of prices depend upon several variables involving the loan, the property, and the borrower's credit. It is obvious that the subtleties of the model have been simplified into the published price sheet, in order to accommodate the time and pressure of the underwriter's routine. The usual practice until lately has been to "automate" the price sheet by taking it verbatim and ignoring the implicit mathematical logic behind it. Business rules are able to do this successfully, if a bit awkwardly. A policy component can do the same job more accurately than the price sheet itself or a business rule solution, and more efficiently, without losing a beat in the formidable problem of keeping prices current.
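The contrast between the verbatim sheet and its implicit model can be sketched briefly. The rate bands, adjustments, and the linear relationship below are invented; they stand in for whatever model a real lender's analysts would have simplified into the published grid.

```python
# Hypothetical contrast: pricing from the published sheet verbatim
# (the business-rule way) versus pricing from the continuous model
# the sheet's bands approximate. All numbers are fictional.

BASE_RATE = 6.00  # assumed base note rate in percent

# The published sheet: (minimum credit score, rate adjustment).
PRICE_SHEET = [(740, 0.000), (700, 0.250), (660, 0.625), (620, 1.125)]

def sheet_rate(score):
    """Look the score up in the band grid, exactly as printed."""
    for floor, adj in PRICE_SHEET:
        if score >= floor:
            return BASE_RATE + adj
    return None  # below the sheet minimum; refer to an underwriter

def model_rate(score, slope=0.009, anchor=740):
    """Apply the (assumed) linear relationship the bands approximate."""
    return BASE_RATE + max(0.0, (anchor - score) * slope)

print(sheet_rate(705), model_rate(705))
```

A borrower at 705 pays the same as one at 739 under the sheet; the model form prices each score on its own, which is the "more accurately than the price sheet itself" claim in miniature.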
The credit demo illustrates a complete statistical model for assessing credit. It is accompanied by the formal attire of mathematics. Statistics of this sort have up to now been confined pretty much to the boardroom. The results of sophisticated statistical reporting packages such as SAS or SPSS do not lend themselves to transaction software.
This component, however, does put eigenvectors, beta coefficients, t-tests and all the rest into profitable action at the point of sale. It would make too long a tale to recount in any detail how this is accomplished. Besides, that would be telling.
Code development for policy components is rarely so extensive as to require a lot of project management paraphernalia. It is also true that the iterative and labor-intensive cycle of "knowledge acquisition" used for rule systems is inappropriate. Equally so is the cumbersome waterfall (now answering to "SDLC") currently making such a comeback. The versatile methodologies for strict object orientation could certainly be made to serve but would usually amount to serious overkill. Unless the component is to be made part of a large general campaign, it is frequently possible to "manage" it pretty much by watching it grow.
So it is just as well to say a few words about the usual steps involved in getting a policy component into service.
First of all, of course, it is well to state the problem, or to get management to do so.
Usually, though not inevitably, there will be statistical analysis. The analysts on the project may conduct data mining, build models, get sign-off, etc. Or it may be that these steps have been done already, and this analysis constitutes the project's starting point. It is worth noting as well that policy components may be asked to include policy of the old-fashioned kind, such as: "Never give a discount to my brother-in-law."
Policy components, when the analysis appears complete, are prototyped, usually by one or two people working by themselves in techno-heaven. The prototype is complete not when it merely seems optimal; it must match the model within stated tolerances.
At the conclusion of the prototype comes the most distinctive and important phase of a policy component project. The behavior of the prototype and the context of the planned deployment yield figures upon which reliably to estimate (the El Dorado of information technology for half a century) return on investment.
Even fairly complex policy components require relatively little management effort for analysis and prototyping. Ahead lie formal development, interfaces to existing systems, training, etc. But the behavior of the component is well known and certified after prototyping. For example, the component might have shown itself to save (or gain) 33 basis points on the price of a loan, or it might predict an elusive code 22% of the time. Such figures are very nearly statistical facts, and they have precise value.
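The arithmetic that turns such figures into a return-on-investment estimate is short. The 33-basis-point figure comes from the example above; the $200M origination volume is an invented assumption for illustration.

```python
# Back-of-the-envelope ROI arithmetic of the kind the prototype's
# certified behavior supports. The volume is an assumed figure.

def annual_benefit_bp(loan_volume, basis_points):
    """Dollar value per year of a pricing improvement in basis points."""
    return loan_volume * basis_points / 10_000

# Assumed $200M annual origination volume, 33 bp improvement (from the text).
benefit = annual_benefit_bp(200_000_000, 33)
print(f"${benefit:,.0f} per year")
```

Against a development effort measured in weeks, a figure of this size is what makes the ROI estimate, for once, reliable rather than aspirational.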
Meanwhile it should also be straightforward to estimate the cost of the component. The technical risk has already passed, but a list of technical chores remains: interfaces, queries, integration. Their estimation should be straightforward and quite accurate.
The development and integration phases are known technology, and they should generally lend themselves to a waterfall approach. The conversion of the prototype from script to the target environment (e.g., Java, DotNet, VB, C++, mainframe) requires care but is essentially free of risk. Policy components are intrinsically object-oriented and converse politely with nearly any client from XML to COBOL.
It is not always clear whether maintenance should be part of any given methodology. In this case, however, it deserves and gets its very own section, for reasons that will become clear.
First of all, it is necessary to bear firmly in mind that the idea of software maintenance itself is changing before our very eyes. This is true generally, but is exemplified perfectly by policy components. These little systems not only model the real-world environment, a veritable Petri dish of change; they depend for their effectiveness on the ability to do so accurately. That is, the software is no longer in charge. The more it attempts to reflect conditions in the world outside of mathematics and bookkeeping, the more it is obliged to follow the shifting currents of the real world. Maintenance is no longer a matter of deciding when to 'upgrade' or fix nagging bugs. It now has to do with skipping to the world's tune, or risk becoming worse than useless.
When the business of insurance transactions, mortgage underwriting, or federal reporting changes, so must the policy software, whether it is a 'component' or a large general system.
Any policy component model has two sorts of maintenance plateau: the first involves mid-course corrections; the second means changing vehicles altogether. This is illustrated perfectly with the payee code demo. The prototype software for running random selections of transactions for analysis is also ‘production’ software of a sort. It was originally used to tune the scoring. It is used in an ongoing way to verify that the scoring remains optimal.
One thing is certain, however: the scoring will not remain optimal. The business environment is bound at best to 'drift' as some clients do more business and others less. Different payee codes are liable to surge to the foreground, others to recede. The dynamics of this process are likely to be glacial, however, at least by the standards of mortgage pricing. The point, however, is that any policy component needs to be watched by an analyst who is responsible for maintaining the integrity of the underlying model on a regular basis.
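The routine watching described above can itself be mechanized. One simple sketch, with invented codes, frequencies, and an assumed tolerance, is to compare the current period's code distribution against the baseline the scores were tuned on:

```python
# Hypothetical drift check for the payee-code scoring: total-variation
# distance between the baseline code frequencies and the current ones.
# Distributions and the alert threshold are invented.

def tv_distance(baseline, current):
    """Half the sum of absolute differences between two distributions."""
    codes = set(baseline) | set(current)
    return 0.5 * sum(abs(baseline.get(c, 0.0) - current.get(c, 0.0))
                     for c in codes)

baseline = {"PC-104": 0.50, "PC-221": 0.30, "PC-350": 0.20}
current  = {"PC-104": 0.35, "PC-221": 0.30, "PC-350": 0.35}

drift = tv_distance(baseline, current)
if drift > 0.10:  # assumed tolerance, set by the responsible analyst
    print(f"re-tune scores: drift {drift:.2f} exceeds tolerance")
```

A check of this kind is exactly the sort of thing the prototype's sampling software can keep running in production, so the analyst is told when to re-tune rather than guessing.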
The other flavor of maintenance cannot be subjected to that sort of routine. It might be, for example, that a major new client alters the entire transaction population in a single week or so. Or it may be that Hurricane Katrina suddenly alters the dynamics of homeowner insurance transactions. Such 'fractal' events in the real world mean that the modeling process must be undertaken anew, and rather quickly. Even in such cases, though, things are hardly likely to be dire. Most policy components go from no policy known whatsoever to completed prototype in at most a few weeks. Even the most radical maintenance does not involve rethinking the entire problem, merely performing similar analysis over again.
The moral of maintenance for policy components is that because they by definition offer appreciable revenue enhancement or expense relief, their maintenance is not ‘optional’ and is also not likely to submit to a rigid schedule.