Copyright 2004 by Richard Tabor Greene, All Rights Reserved, US Government Registered
The Fractal Computing Strategy: Including Fractal Micro-Applications, Fractal Interfaces, Fractal Human Capability Module Libraries, and Fractal Program Construction Events

Research Question 1. BUSHY-NESS INCREASING TECHNOLOGIES: What are the long-term effects and costs of technologies that invite bushy unformed inputs and generate bushy un-ordered outputs?

Research Question 2. WHAT ARE THE MAIN TECHNO SOURCES OF INCREASING BUSHY-NESS IN SYSTEMS: What are the various types of system that are inviting more bushy inputs and producing more bushy outputs?

This paper describes a new, very general strategy for doing all parts of computing and software development. It presents the strategy in 16 diverse areas of software practice and argues for researching the costs and benefits of applying that strategy. When people create bushy networks of connections and then, suffering from the chaos of path and locale that produces, create tools for searching that mess and finding things in it, they contradict themselves. They generate the chaos they later must solve in order to be efficient and effective. Garbage in, garbage out; bushyness in, bushyness out, we might say. Instead of investing in tools for finding one's way amid bushyness one oneself created, one might invest in tools for inputting something less bushy.

Method 1. MAP TECHNO SOURCES OF INCREASES IN BUSHY-NESS OF SYSTEM INPUTS AND OUTPUTS: fifteen such sources are found.

Method 2. FIND KINDS OF SYSTEM BUSHY-NESS IMPROVED BY FRACTAL REGULARIZATIONS: apply fractal regularizations to now-bushy system inputs or outputs and measure for improvements in cognitive and social processing and in outcomes of use.
Some, perhaps most, of the most fundamental and important media that humans use suffer from bushy networks of connections, "solved" later (and poorly) by complex search processes or engines. (Prose is the major example examined herein, with internet page links and software application menu lists as important sub-examples; this paper suggests 15 such domains where fractal regularization solves problems of bushy connections or choices.)

Result 1. THE PROSE-AS-INSPIRING-BAD-EXAMPLE HYPOTHESIS: prose is a very old interface that hides the count, names, and ordering of points, and it inspires us to leave the inputs and outputs of all other communication and symbolic systems with similar unclarity of count, names, and ordering.

Result 2. TOOLS FOR REDUCING SYSTEM INPUT BUSHY-NESS: investing in tools that reduce the bushy-ness of system inputs may pay more in benefits than tools for handling bushy-nesses already input to systems.

We define "bushyness" in this paper precisely, in terms of: proliferation of different branch factors among peer nodes on the same level of a hierarchy of links and among levels of that same hierarchy; proliferation of randomly assigned node name formats among peer nodes and levels; and proliferation of diverse principles of ordering nodes among peers and levels. An alternative linking geometry is suggested, fractal in nature in that the same patterns (of branch factor, name format, and ordering principle) are repeated on different network size scales. An argument is made that if the costs of structuring prose, internet page links, and software application menu lists (and 13 other aspects of software production and use) fractally are compared with the benefits of doing so, a significant net benefit will be found overall. The costs of investing in tools for inputting fractal formats instead of bushy ones are compared with the costs of investing in tools for handling and searching among already bushy formats.
For both end-users and software developers, improved speed and accuracy of recall, recognition, reproduction, lostlessness, errorlessness, and easy collaboration are signal benefits of the regularized, fractally ordered software aspects described here. The strategic importance of bushyness in general (and of fractal-computing-strategy solutions for it) for future software, application, and network architectures, search procedures, and the overall effectiveness of various media is presented.
Point One: Regularized Fractality and the End of Prose This paper starts with a suggestive metaphor, presenting characteristics of prose that demonstrate its poor qualities as an interface for communication of any sort. That we accept and fail to replace such a platform (one that invites, and pretty much achieves, horrid performance levels on basic cognitive functions like recall, recognition, orientation, and error of operation) suggests something about why and how we fail to replace irregular internet page links, irregular software application menu listings, and the like. We are reproducing in electronic form the abysmal interfaces, network bushyness, and performance levels we learned and absorbed from prose all our lives. Once the interface failings of prose are made evident and precisely measurable in the passage below, we can turn to network links, application menu lists, and other software traits and see the same lessons and possibilities for improvement.
The Bush versus the Regularized Fractal Examine the paragraph below and tell me how many main points it has, the name of each of its main points, and the principle by which they are ordered. As soon as this is asked, people notice that none of those are visually evident in prose as an interface. There is more.
There are as many versions of complexity theory as there are books and articles on it. Yet the same words are found repeated in article after article and chapter after chapter. We find state space-trajectory-attractor in nearly every article and chapter. We find the butterfly effect-system avalanches-tipping points-catastrophes mentioned in most articles and chapters. We find genetic algorithms-neural nets-populations of agents mentioned in quite a few. We find self-organization-self emergence-end of design mentioned in not a few. We also, however, find major errors repeated by even the most famous writers from the most famous companies and universities. Some of these errors are so frequent that their corrections may seem to be erroneous. First among these omnipresent errors are attractors referred to as "attracting, drawing hither, influencing" paths, trajectories, system evolution. Attractors are ultimate destinations that certain system initial configurations end up at, but attractors certainly exert no gravity-like draw, force, pull or the like. The name "attractor" is unfortunate in that it encourages this error. Second is the error of seeing "emergence" as good and "design" as bad. This is a corollary of seeing the "new" as good and anything "old" as bad. Whether emergence is good or bad depends entirely on who is doing what with what sort of system. If I am trying to prevent a disease in my loved one, by deliberate actions, emergence of the disease is unwanted and bad. If I am testing environmental toxins and find disease emerging in subjects exposed to some toxins not others, emergence is good. In truth, the tendency to see emergence as good and design as bad comes, probably, from a free market philosophy, recently dominant in economic policy making.
Design gets associated with central planning in bureaucracies, and emergence gets associated with interacting populations of free-market firms and players, from whose unplannable interactions better-than-expected things emerge. I have (Greene, 1993) asked several hundred people in each of dozens of institutions over the years to do this for the paragraph above and obtained the following results. (Each group was given 20 minutes to "read" this single paragraph, with no other instructions provided; after the 20 minutes, people were asked for the number of main points (at any and all hierarchy levels of points in the paragraph), the names of points, and the principles by which they were ordered (at any and all hierarchy levels of points in the paragraph); groups were given 20 minutes to answer these 3 questions.) 2 of 2000 got the count right, 0 of 2000 got the names of the main points right (any near-miss synonym was accepted), and 1 of 2000 got the principles by which they were ordered right. "Right" here was obtained from the same research basis as judgments made on GRE, SAT, and TOEFL Writing Test verbal exams for college entrance, namely, research by Kintsch (recently head of the American Psychological Association), van Dijk (editor of the journal Discourse), and Meyer (Kintsch, 1998; van Dijk, 1980; Meyer, 1984). The Structural Reading Diagram (see Greene, 1993) below displays in one two-dimensional drawing format the number of points, their names, and their principles of ordering. Getting the count, names, and ordering principles directly from "reading" the above text paragraph is tremendous work, a measure of how far the medium of prose is from being good at conveying its "meaning" (according to the text research basis for the SAT, GRE, and TOEFL Writing Test standardized exams for college entrance).
In fact, if you try it without producing something like the Structural Reading Diagram below, it is nearly impossible to keep in mind all the counts, point names, and principles of ordering in a simple small paragraph like the one above. A paragraph of German theology would dwarf the above in complexity, number of things, and diversity of things. There are two things to note: the cognitive work load involved in extracting count, names, and ordering from prose coding, and the superiority of the Structural Reading Diagram given below for getting count, names, and ordering. Later in this article a format vastly superior to this diagram will be provided and explained.
Cognitive Work Load.
If we review the right column contents we find seven count numbers required to reproduce the point structure (if 18 is included as one, the number of individual paragraph points). We find 16 names required to reproduce the point structure. We find 5 principles of ordering required. In other words 7 counts, 16 names, and 5 principles of ordering are the cognitive load for reproducing the number of main points, their names, and how they were ordered in this small paragraph. Nearly no adult human (certainly very few of those I have tested, from populations far above average in cognitive ability and schooling) can, in 40 minutes, detect and reproduce (while having the text before them) this information. This is the cognitive cost of prose being a bushy network of connected ideas. The number of numbers that must be recalled, the number of point names, and the principles by which they are put in order exceed human mental capacity at present.
Superiority of Diagrams to Prose.
Consider the following test, which I have conducted. Instead of giving people prose paragraphs, as in the test above, I have given them Structural Reading Diagrams, such as the one below of the above test paragraph, and asked them to “read” it. After five minutes I then asked them for the number of main points, their names, and the principles by which they are ordered. Surprise, surprise, out of 2000 people thusly tested, only 8% failed to provide count correctly, only 11% failed to provide names of points correctly, and only another 11% failed to provide principles of ordering correctly. Given that the number of points, their names, and their principles of ordering are the same for both the prose paragraph and the diagram, this difference in recall performance is stark. It is a direct measure of the poverty of prose as a writing medium and perhaps, as the rest of this article will argue, a measure of the likelihood of prose disappearing in a few decades as a primary medium of communication among people.
[Structural Reading Diagram of the Test Paragraph.
Top level: count 1; point name: unity ("Unity among Versions of Complexity Theory"); order: none.
2nd level: count 2; names: repeated X ("Repeated Terms: 4 Sets" and "Repeated Errors"); order: good-bad.
3rd level, left: count 4; names: rep/event/effect/cause; order: specific-general.
3rd level, right: count 2 ("Attractors Don't Attract", "Emergence Not Better Than Design", each with its correction and cause: goodness depends on purpose; cause of error: free-market idea; good/bad examples); names: errors; order: phenomena-use.]
Count, Names, Ordering Principle. The demonstration above shows that to “understand” a simple paragraph you usually need something on the order of 7 numbers, 16 names of points, and 5 different principles of ordering. It is very difficult even for very smart people to read such a paragraph and come away from that reading with any sense of how many points were there, what their names were, and how they were ordered. The truth is, ordinary reading is conducted at abysmally sloppy levels of cognitive performance.
Regularization Needed. Take an "interface" view of this matter and things change. The prose interface is truly awful if the ultimate purpose of prose is to convey number of points, names of points, and principles of ordering. [Some people think my emphasis on "number" of points is strained, but anyone having a hallway conversation with their vice president in business, who comes away from that with a different "number" in mind of things the vice president wants done by tomorrow than the vice president has, is soon persuaded of the importance of "number".] A glance at any prose paragraph reveals nearly nothing, visually, about any of these. We have to labor with de-coding anaphora, disambiguating pronominal reference, and the like. The truth is, ordinary prose is a terrible medium of communication. We continue it mostly out of blind unthinking momentum from our past and the low technical level of our systems of education. Continuing an "interface" view of the matter, we can imagine media other than linear sequential prose that allow instant simultaneous perception of number of points, names, and principles of their ordering. The simple two-dimensional Structural Reading Diagram above suffices to vastly outperform prose as a writing vehicle. However, we can do better. By regularizing what and how we write we can reduce 7 count numbers to 1, 16 point names to 4 formats plus 8 names, and 5 principles of ordering to 1.

[Diagram continued: 4th level, right-left: count 2; names: correction, cause; order: effect-cause. 4th level, right-right: count 2; names: causes (of goodness, of error); order: effect-cause. Ordering axes: specific-general, phenomenon-use, effect-cause, good-bad. In the diagram, "Structure" means the same as the group title above and including it; such sentences perform purely structure-indicating roles.]
Regularize Branch Factors. Suppose we force the same number of branches from each node at all hierarchy levels.
We could impose a 2 by 2 by 2 by 2 by 2 by 2 format of two points each having 2 subpoints, each in turn of which has 2 subsubpoints, and so on. In this case there would be 1 number needed to recall the entire hierarchical arrangement of points.
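The economy of a single branch factor can be made concrete in a few lines of Python (a minimal sketch; the `shape` helper is invented for illustration): one number plus a depth reconstructs the entire shape of the point hierarchy.

```python
# With a regularized branch factor, one number plus a depth
# reconstructs the whole shape of the hierarchy of points.

def shape(branch, depth):
    """Points per level of a regular hierarchy: branch**level."""
    return [branch ** level for level in range(1, depth + 1)]

print(shape(2, 3))   # [2, 4, 8]: 2 points, each with 2 subpoints, ...
```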
Regularize Principles of Order. Suppose we force the same principle of ordering at all levels and across all branches at any level. We could impose item-to-feature-of-item ordering, for example, at all nodes having branches. In this case there would be 1 principle of ordering at all branches and levels.
Regularize Point Naming. Suppose we designate one particular grammatical combination of words for names at each level and within those chosen forms, one particular subform to another subform as the order of names within each 2 by 2 branch. In this case there will be six formats of names, one for each hierarchy level, and 2 contrasting formats within any 2 by 2 branch.
Result of Regularizations Our three above regularizations now produce this: 1 number to recall the structure of points, 1 principle to order all points, and 2 formats to order all 16 names. Because each point now occurs in the same ordering, branch factors everywhere are the same number, and name formats are ordered throughout, we need one anchor point's name, plus the one ordering principle, to reproduce the other two or three points of any regularized set. We have reduced 28 things to be recalled (to recall all the points and their arrangement in the paragraph) to 13 things. The Fractal Concept Model for the test paragraph above is given below. Compare the cognitive load of recalling, recognizing, reproducing, and applying it to the cognitive load of doing that with the original test paragraph. Such experimental comparisons, from my previous work, reveal stark differences, reported elsewhere (Greene, 1993). Flake (Flake, 1998) shows many fractal forms in nature, some regularized, without showing why they appear (an argument made well by Wolfram (Wolfram, 2002)) or comparing results to other previous and possible arrangements.
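The bookkeeping behind this reduction can be sketched in Python. The bushy figures (7 branch counts, 16 names, 5 ordering principles) come from the worked paragraph; the regularized tally below is one plausible reading, assuming the 4 name formats plus 8 anchor names stand in for the 16 names.

```python
# Cognitive load here = count of distinct items a reader must hold
# to reproduce a point hierarchy: branch-count numbers, point names
# (or name formats plus anchor names), and ordering principles.

def cognitive_load(n_counts, n_names, n_orderings):
    """Distinct items to recall to reproduce a point hierarchy."""
    return n_counts + n_names + n_orderings

# Bushy test paragraph: 7 branch counts, 16 names, 5 orderings.
bushy = cognitive_load(7, 16, 5)        # 28 items

# Regularized: 1 branch factor, 4 name formats plus 8 anchor names
# standing in for the 16 names, 1 ordering principle.
regular = cognitive_load(1, 4 + 8, 1)

print(bushy, regular)
```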
Implications In this paper the following implications are important:
- prose is an extremely inefficient interface to point count, names, and order
- quality of human everyday reading is abysmal when measured by what research on reading defines it as
- irregularity of branching per point, names of points that do not reflect their structural positioning, and variation in principle of ordering points across items at one level and across different vertical levels in point hierarchies--these 3 make recall of count, names, order nearly impossible
- regularization of branch factor per node, name format per level and position in ordering principle, and ordering principle across hierarchy typically reduces info to be recalled for complete understanding by a factor of ten or more
- display interface of a two-dimensional diagram instead of prose sentences in paragraphs allows near simultaneous perception of count, names, and ordering principles
- display interface of a two-dimensional diagram of points as what people write, instead of prose sentence paragraphs, allows near simultaneous presentation of count, names, and ordering principles
- display interface of a regularized (branch factor, name format, ordering principle) two-dimensional diagram of points as what people write, instead of prose sentence paragraphs, allows near simultaneous presentation of count, names, and ordering principles and long-term accurate recall of the entire hierarchy of points.
Which Structure is the Key to High Cognitive Performance and Quality--"Natural" Brain Neural Nets or Consciously Invented Other Mental Disciplines? An ambiguity in the role of structure in cognitive high performance comes from the habit of researching how things are rather than how things might be--the bias in research towards passive study of how things now are, even when things now are not so great. There is abundant evidence that human cognition--reading, writing, communicating, and like functions--as done by "the average person", and even as done by "the average 'above-average' person", is performed at low levels of both performance and quality compared to peak performances achieved and sustained by the top 1% of individuals in any field of human endeavor. That is, how the human brain "naturally" reads, writes, and the like produces poor results. If, however, those people produce structural reading diagrams or fractal concept models (as this paper explains below), their habitual every-day reading, writing, and other cognitive abilities improve significantly.

[Figure: Fractal Concept Model of the Above Test Paragraph. Throughout: branch factor = 3; order = superficial to causative anomalies; name formats not regularized. Its nodes group the paragraph's points--Unity Among Apparent Differences in Versions of Complexity Theory; Repeated Terms (state space/trajectory/attractor; butterfly effect/system avalanches/tipping points/catastrophes; genetic algorithms/neural nets/populations of agents; self-organization/emergence/end of design); Repeated Errors (Attractors Don't Attract, Emergence Not Better Than Design, each with correction and cause)--and attribute apparent differences to authors such as Wolfram, Gould, Mandelbrot, Kauffman, Bak, Goodwin, Weinberger, Axtell, and Gladwell.]

The role of structure in high performance can split into two meanings--it can refer to the neural-net, parallel-associative "natural" way the brain reads, writes, and so forth, or it can refer to consciously invented new habits of mind that produce much better reading, writing, and other cognitive performances. An example is the elimination of all sorts of text comprehension programs research in the 1990s, when PDP (parallel distributed processing) models of neural-net-based minds showed that all the symbol-processing models then competing to be "the most right" (constituting most of cognitive science, which was based on seeing minds as like computers) could simultaneously be done by neural nets of the Rumelhart/McClelland sort. That is, causal logic, theory-hypothesis, co-variation, and other cognitive models all were right (and wrong) because all simultaneously got executed in typical neural net models, with none of them having priority over the others. It may be that the role of structure in cognitive high performance in natural people, depending only on their born-with parallel distributed neural net brain system, differs from the role of structure in the cognitive higher performances of un-natural people entooled
or trained beyond that born-with basis of functioning. Where built-in brain structure supports poor cognitive performance and quality, and consciously invented and learned other structures result in better performance and quality, the "role of structure" in cognitive high performance becomes the role of specially, consciously invented structures that go beyond structures natural to brains as born. This reproduces a similar ambiguity and response from known flaws in human decision-making (the Kahneman and Tversky work undermining rational utility theory). Humans who perform best must consciously counter their own inner brain's tendencies to reason in a flawed manner in certain cases. This article explores both of these, neural-net natural ways and consciously-imposed improved ways, to some extent, but ends up more interested in the latter than the former.
The Import of the Above Example If 2000 people, given 20 minutes to read a 350-word paragraph and 20 minutes to state the number of points, their names, and the principles by which they were ordered, produce not a single person getting all three correct and only three people getting any one of the three right, then:
- most people operate at abysmal levels of cognitive performance when reading and writing
- prose is a very poor interface for communication
- much of the poverty of prose as an interface for communication comes from its irregularity--of branch factor, of ordering principle, of naming formats
- the "bushyness" of branch factors alone in a single prose paragraph taxes human memory abilities
- the "bushyness" of principles of ordering alone in a single prose paragraph taxes human memory abilities
- the "diversity" of name formats in a single prose paragraph taxes human memory abilities
- prose, in reality, is the coding of non-linear configurations into a linear stream of symbols, hence the loss of visual evidency of number of points, names, and principles of ordering
- the dominance of prose reading, listening, speaking, and writing in contemporary work and life, in reality, trains us all to abysmal levels of cognitive performance.
• Proposal 1: Death from Prose. Abysmal habits of cognitive performance, learned from using prose, will cause software interfaces to aim for abysmal levels of cognitive performance too.
• Proposal 2: Death of Prose. Software tools for writing, speaking, reading, and listening will emerge that entice us all beyond prose as a medium, to the direct speaking, hearing, writing, and reading of non-linear configurations of inter-related elements.
The Cost-Benefits of Improvement Regularizing branch factor requires real thought. It requires some imagination work before setting pen to paper (finger to keyboard). It requires sensing relative heft, in terms of detailed number of points for major points, in order to choose highest-level point names and ordering. In short, it requires what GREs, SATs, and TOEFL Writing Tests already require of high-scoring writers--that one writes to express beautiful mental structures already ordered well before being expressed in any format, prose or diagram. This repeats the well-known result in composition research (Myers et al, 1986) that great writers write things twice--once to discover what they themselves actually think about something and a second time to express beautifully a more ordered version of what they think (write, discover, order into model, choose dramatic path across model, write again). Regularizing naming requires real thought too. It requires astute naming of points so names reflect vertical level in hierarchy, role in ordering principles, and position relative to lateral peer points in hierarchies of points. Regularizing principle of ordering requires real thought. It requires putting lower-level points into orders analogous to the ordering of higher-level points. The metaphors this requires demand real mental work. The benefits of regularizing branch, naming, and ordering principles are faster, more accurate recall, recognition, reproduction, and application of points. As tools emerge in software form that make regularizing branch factor, naming, and ordering easier to perform, prose will be replaced by direct expression in non-linear configurations of inter-related concepts.
Example of a Fractal Software Interface Readers may lose the practical import of all this in the theory being presented, I fear, so here I offer one case example (over 20 years old though it be) to motivate further discussion in the rest of this paper. A major defense contractor contacted me over 20 years ago about an interface problem they had. They made jet fighters, and pilots needed to select any of slightly over 7000 evasive maneuvers within 1.2 seconds of sensing a situation. They had been unable to create an interface that worked, in terms of being learnable and allowing fast enough choice of action. Some of their attempts had generated so many buttons that too much cockpit space was used up. I regularized branch factor, naming of maneuvers, and principle of ordering, producing a telephone number for each maneuver, accessed using a normal calculator keypad. A nine-by-nine-by-nine-by-nine, 4-level hierarchy ordered all evasive maneuvers, with names at each level regularized to indicate what level one was on, and the same branch factor and principle of ordering of points at each level. As a result, after 3 weeks of training, pilots could select the correct maneuver with 99.993% accuracy within 0.89 seconds. In this case, fractal repetition on all size scales of the same branching, ordering, and distinct name formattings saved lives. The benefit of faster, more accurate, more complete, higher-quality recall and reproduction of points at times does save lives--in general it is vitally important and at the core of wanted productivity and quality improvements.
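The keypad scheme amounts to fixed-branch-factor addressing, which can be sketched in Python. Only the nine-by-nine-by-nine-by-nine, 4-level structure comes from the case; the maneuver labels and digit convention below are invented for illustration.

```python
# Each maneuver gets a 4-digit "telephone number": one digit (1-9)
# per hierarchy level, so every node branches exactly 9 ways and a
# keypad address fully determines a path through the hierarchy.
# Maneuver labels and the digit convention are invented; only the
# 9x9x9x9, 4-level structure comes from the case described above.

from itertools import product

BRANCH = 9    # same branch factor at every level
LEVELS = 4    # 9**4 = 6561 addressable leaf maneuvers

def address(path):
    """Keypad digits for a path like (2, 7, 1, 4) -> '2714'."""
    return "".join(str(d) for d in path)

# Hypothetical catalog: names encode level-by-level position,
# i.e. regularized naming reflecting the hierarchy.
catalog = {
    address(p): "maneuver-" + "-".join(str(d) for d in p)
    for p in product(range(1, BRANCH + 1), repeat=LEVELS)
}

print(len(catalog), catalog["2714"])   # 6561 maneuver-2-7-1-4
```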
Point Two: The Fundamental Argument of This Paper: By Analogy with the Prose Example Above, 15 Other Areas of Software Media Will Reduce Our Effectiveness with Excess "Bushyness" till We Reduce "Bushyness" Input into Them. The rest of this paper works out, in 15 different areas of important software and media technology, how bushyness, reduced by regularizing inputs, vastly improves the cognitive worth (reducing needed workload) of doing work. Instead of inputting bushyness and then developing tools to find our way through it, we can develop tools for inputting things much less bushy, obviating tools for handling bushyness now no longer there. The particular approach to reducing bushyness explored below is "fractal-ization" of a great variety of inputs to software systems. Fractally expressed systems and fractal interfaces replace bushyness and greatly improve cognitive worth and reduce cognitive workloads of systems, not just theoretically, but in actual fielded practical systems. The inescapable implication of this paper's contents, if they are eventually judged correct, is that investment in tools for reducing bushyness input to systems pays back more, and faster, than investment in searching through, organizing, and wending one's way through bushyness already input.
Point Three: Repeating the Prose Example by Analogy in Other Realms of Software Theory and Practice Internet Homepage Links: Regular Fractal Replacing Bushy Internet Homepage Links (Dertouzos, 2001; Leebaert, 1999; Tapscott, 1999; Fogg, 2003; Rheingold, 2003; Stefik, 1996; Sudweeks, et al, 1998)
Internet sites are evolving:
- from every page being individually laid out, with links to other pages that are unique to it, to standard types of pages, each type having standard patterns of link types from it to other types of pages
- from standard types of pages, each type having standard patterns of link types from it to other types of pages, to standard types of pages, each type having a regularized fractal pattern of link types from it to other types of pages
- from standard types of pages, each type having a regularized fractal pattern of link types from it to other types of pages, to entire sites structured as regularized fractals having the same branch factor, ordering principle, and name formats throughout.

Experiments to test the access ease, errorlessness, speed of use, and orientation traits of each of these are underway (in our lab). Some of these experiments involve the same site materials organized in each of the 3 above ways, comparing use and result characteristics among them.
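The middle stage of this evolution, one link pattern per page type, is mechanically checkable. A minimal Python sketch, with invented page names, page types, and link types:

```python
# A site's links are regularized when every page of a given type
# carries the same ordered pattern of link types. The page names,
# types, and link types below are invented for illustration.

site = {
    "home":     {"type": "hub",  "links": ["overview", "detail", "contact"]},
    "products": {"type": "hub",  "links": ["overview", "detail", "contact"]},
    "faq":      {"type": "leaf", "links": ["overview", "search"]},
}

def is_regular(pages):
    """True if all pages of one type share one ordered link pattern."""
    pattern_by_type = {}
    for page in pages.values():
        expected = pattern_by_type.setdefault(page["type"], page["links"])
        if page["links"] != expected:
            return False
    return True

print(is_regular(site))   # True: each page type has one link pattern
```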
Software Application Menus: Regular Fractal Replacing Bushy Software Application Menu Hierarchies (Dertouzos, 2001; Gelernter, 1991; Jacobson, 1999; Anders, 1998; Sellen and Harper, 2002; Leebaert, 1995; Leebaert, 1999; Hoch, 2000; Langer, 1989; Marchesi, 2003)
Present interfaces to applications suffer from the following problems:
- bushy branching within and across hierarchy levels blocks concept memory and muscle memory of how to access a function
- partial and bushy operator choices are applied to partial and bushy operand choices in each menu-choice function
- users access tiny fractions of application function-spaces due to costs of learning, unpredictable outcomes of function choice, and difficulty remembering how to access functions new to them.
Regular fractal menu spaces would have the same branch factor, principle of ordering, and name formatting throughout. In addition, regular fractals of functions would be applied to regular fractals of operands. For example we use bolding to emphasize words by putting letters in those words into bold font. However the “emphasize visually” function includes at least 15 other ways to emphasize besides bolding, and what needs to be thusly emphasized visually includes at least 15 other operands besides words--titles, charts, pages, chapters, and so forth. By creating regular fractals of entire menu hierarchies, operators within such hierarchies, and operands each such operator set applies to, three sorts of bushyness that hinder using all the functions in an application, finding functions in them, and remembering how to access such functions would be greatly reduced or eliminated outright.
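As an illustration of the menu idea, here is a small Python sketch in which the identical, identically ordered operator list recurs under every operand. The operator and operand lists are invented placeholders; the real sets would include the fifteen-plus of each mentioned above:

```python
# A regular fractal menu: one branch set, one ordering, one name format.
OPERATORS = ["bold", "color", "italic", "underline"]          # emphasize-visually operators
OPERANDS  = ["chapter", "chart", "page", "title", "word"]     # things to emphasize

def fractal_menu():
    """Every operand node offers the identical, identically ordered operators."""
    return {operand: list(OPERATORS) for operand in OPERANDS}

menu = fractal_menu()
# the access path is always predictable: operand -> operator
assert menu["chart"] == menu["word"]
```

Because the same pattern repeats under every node, learning the path to one function teaches the path to all of them, which is exactly the concept-memory and muscle-memory benefit claimed above.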
Indexes: Regular Fractal “Meaning Indexes” Replacing Bushy “Mention Indexes” (Kovacic, 1994; Dixon, 1996; John-Steiner, 2000; Harnad, 1997; Lappin, 1996; Vaina and Hintikka, 1985; Langer, 1989; Buchanan, 2002; Albert-Laszlo, 2002; Batten et al, 1995; Brynjolfsson and Kahin, 2000; Cilliers, 1999; Clark, 1990; Guttenplan, 2002; Dennett, 1996; De Rosnay, 2000) We cannot automatically find the number of points, the names of them, and the principles ordering them in current computer contents, because irregularity of branch factor, name formatting, and ordering principle prevents automatic finding of points, hence, of meanings. We must build indexes of prose because prose itself is irregular in these three ways. If, however, prose were structured from the beginning as regular fractals of points, it would be self-indexing--the sequence of points encountered would tell you the number, names, and ordering of the points that follow. Today we use mention indexes--complicated statistics about words that occur together and net pages linked or accessed together--because prose is not self-indexing. Meaning indexes are more powerful but not possible due to the three irregularities in prose and in homepage random linking patterns. Fractal regularization--composing prose in fractal concept model form, then choosing a dramatic path across such regularized forms of points for presenting to audiences, in actual writing or speaking--would allow prose to be self-indexing, not with mention but with meaning of points: the count, names, and order of points made being visually self-evident. Note that both prose and the internet, technically speaking, are autistic media at present--they are pure streams of percept inputs not revealing any underlying structure or meaning, just as the so-called “idiot savants” of autism get access to pure percept info in the brain unmediated by cerebral cortex indexing functions.
Much of what our brains do is simply index what we experience for us--that indexing is much of what makes us intelligent. Prose and the internet are un-intelligent because un-indexed.
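A minimal Python sketch of self-indexing (the nested-tuple representation of points is an assumption made for illustration): walking a regular fractal of points yields the count, names, and ordering of points directly, with no co-occurrence statistics of any kind:

```python
def self_index(point, path=()):
    """Yield (position, name) for every point in a nested outline of points.

    Because the structure itself carries count, names, and order,
    the index is read off the prose rather than computed about it.
    """
    name, subpoints = point
    yield ".".join(map(str, path)) or "root", name
    for i, sub in enumerate(subpoints, 1):
        yield from self_index(sub, path + (i,))

# an illustrative fractal of points
doc = ("Strategy", [("Costs", []), ("Benefits", [("Speed", [])])])
index = list(self_index(doc))
assert index[0] == ("root", "Strategy")
assert ("2.1", "Speed") in index
```

A mention index would need statistics over word co-occurrence to approximate this; here the meaning index falls out of the regular structure for free.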
Software Modularity: Regular Fractal Human Capability Modules Replacing Bushy Object Libraries of Modules (Sternberg, 2002; Dertouzos, 2001; Stanovich, 1999; Marchesi, 2003; Hoch et al, 2000; Riel, 1996; Cilliers, 1999; Plotkin, 1993; Clark, 1990; Guttenplan, 2003; Anders, 1998; plus human incapabilities that require us to develop certain compensating capabilities: Piattelli-Palmarini, 1994; Plotkin, 1993; Myers, 2001; Myers, 1992; Myers, 1986; Stanovich, 2001; Tannen, 1990; Sternberg, 2002; Langer, 1989; Kahneman and Tversky, 2000; Nisbett and Ross, 1980; Plous, 1993; Postrel, 1998; Shapiro and Varian, 1999) Programmers used to have to memorize the 70 or so keywords of programming languages; now, they must memorize the 1100 or more objects (each with half a dozen messages it sends, others it receives, and actions it takes when sending or receiving particular messages). Not only this, but there are several different competing such libraries of objects to thusly memorize. Ontology work in artificial intelligence did nothing to simplify all this, despite much ambition. People new to programming in Java or C++ discover that nothing about an application spec tells them what to make an object, a message, or an action. Between the functions that people want applications to perform and the functions that items in object libraries can be combined to perform there is now great bushyness, due to too much freedom and not enough constraint. If we insert, between functions wanted in applications and functions achievable by combining objects in current object libraries, regular fractals of human capabilities, from the combinations of which application-wanted functions appear, and in order to achieve which objects get combined, then that bushyness is reduced or removed entirely.
In truth applications are a moving frontier that regularly spots and defines what effective, educated, creative, leaderly, artful, quality human beings can do and gets combinations of objects from object libraries to do that. All over society everyday software applications transform more and more of particular human capabilities, needed to do something in the world, into software that does that something. Humans, after software frees them from some work in this way, expand their ambitions and invent newer human capabilities for software later to discover and automate parts or all of. Regular fractals of human capabilities are the missing layer between functions wanted in applications and functions achievable by combinations of objects from libraries (for examples of such regular fractals of human capabilities see Greene, 2004).
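One possible Python sketch of this missing layer (every module and object name below is invented for illustration): functions wanted in applications resolve through human-capability modules, which in turn resolve to combinations of library objects:

```python
# Hypothetical capability layer between wanted functions and object libraries.
CAPABILITY_MODULES = {
    "summarize findings": ["outline", "rank_points", "compress_text"],
    "persuade audience":  ["outline", "order_dramatically", "emphasize"],
}
OBJECT_IMPLS = {
    "outline":            ["TreeBuilder", "NodeLabeler"],
    "rank_points":        ["Scorer", "Sorter"],
    "compress_text":      ["Summarizer"],
    "order_dramatically": ["Sequencer"],
    "emphasize":          ["StyleApplier"],
}

def objects_for(wanted_function):
    """Resolve a wanted function through the capability layer to objects."""
    return [obj
            for capability in CAPABILITY_MODULES[wanted_function]
            for obj in OBJECT_IMPLS[capability]]

assert objects_for("summarize findings") == [
    "TreeBuilder", "NodeLabeler", "Scorer", "Sorter", "Summarizer"]
```

The constraint the layer adds is the point: a spec names capabilities, capabilities name objects, so the spec no longer leaves the object/message/action choice unconstrained.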
Software Solutions: Regular Fractals of Process Social and Technical Components Replacing Bushy Customizations of Standard Software Packages like SAP (Greene, 1990; Greene, 1993; Cole and Scott, 2000; Olson, Malone et al, 2001; Akao, 1990; Cole et al, 1993; Ishida, 1998; Ichikawa, 1986; Ishikawa, 1985; Juran, 1988; Kano, 1993)
Software solutions tend to be standard application cores customized for particular customers and implementations (general consulting companies doing the customization work for software suppliers like SAP, for example). However, such standard cores can petrify competitiveness efforts and such customizations can overcome the advantages of standard cores. The dual bushynesses of missed competitiveness chances due to frozen standard cores and missed standardization chances due to massive long customizations can be reduced by fractally specifying all software and socialware systems as addressing standard total quality process model root causes of why particular process output traits dissatisfy customers. This looks like any or all of the following:
- every software application feature specified as something in both socialware and software that addresses a root cause of why a particular output of a work process dissatisfies customers of that process’ output
- every software component of an application feature specified as something in software that addresses a root cause of why output of that component might dissatisfy “next process step” customers of that step’s output
- instead of a menu interface, a process model interface of the steps in the process of doing the work the application is for is used.

What is standard is the model of process components and the use of work process models as application interfaces. All features of an application wanted are specified as needed software and needed socialware accompanying software. All features, social and software, address particular root causes of why process outputs do or may dissatisfy customers.
Software Specification: Regular Fractals of Process Transparency to Five Voices Replacing Bushy Under-Constrained Rapid Prototyping (Greene, 1993; DeCastro, 2003; Olson, Malone et al, 2001; Shiba, 1993; Kawase, 1988; Kurogane, 1987)
Developers all over the world, in all sorts of systems, tend over time to substitute their preferences for customer preferences in what they produce. Developers tend to become the customers of what they produce, supplanting the real customers. Rapid prototyping in software practice was an attempt to forestall this tendency in software development. However, the community among developers and users that it developed tended to drift in opposite harmful directions simultaneously--towards incorporating “wish for” features instead of “must have” ones, and towards incorporating “developer convenient” features instead of “user required” features. Total quality theory invented the five voices model: making work processes fully transparent to transmission of the contents of those five voices, and making work process steps fully acknowledge and handle the contradictions and trade-offs among the sometimes competing demands of those five voices:
- the voice of technology--what sequence of new technologies is likely in future years, and how to embody them at low costs in reliability, cost, error, learning, and performance
- the voice of suppliers--what adjustments in our process’ work and arrangements would enable what suppliers need from us in order to succeed in helping make us world best
- the voice of the process--what about how we now work or might work is a constraint being ignored by higher-ups, and what about how we now work or might work is an opportunity being ignored by higher-ups
- the voice of the president--what about the whole nature of our industry and business, pointed out by our CEO, can be powerfully, even though partially, enabled by how we work in this particular process
- the voice of the customer--in what ways can our process better respond to what customers want and better anticipate what future events will change customer criteria of excellence.

When all software applications are specified so as to make work processes fully transparent and responsive to these five voices, and when all components of software applications are specified by transparency to these five voices on the smaller scale of how they appear within the application, then you have fractal repetition of the five voice model of quality processes of work.
Software-ization of Software Development: Regular Fractals of Using Software Meeting Mediation and Work Coordination to Automate the Conversations that Specify, Build, and Deploy Software Replacing the Bushyness of Software Development as Art (Greene, 2003; Xerox, HPWC 1992; Olson, Malone et al, 2001; Carmet and George, 1992; Documentum, 1992; Galegher et al, 1990; Grohowski and McGoff, 1993; Hewitt, 1986; Malone et al, 2003; Mullen et al, 1987; Neal, 1992; Olson et al, 2001; Rousseau, 1989; Sakamoto, 1989; Sellen and Harper, 2002; Smith, 1992; Gery, 1991)
If we take the best of what software is now capable of doing and apply it to the specifying, building, and deploying of software itself, on a continual basis, we automate the software development process overall. All software is built by having conversations, in which people specify software to do functions where we have an adequate data basis for doing something, and in which people specify socialware to get data not yet available to allow software doing of things. Software applications uncover, more and more, managing and designing work not based on valid data, and put such work on a valid data basis. They also, more and more, bring people together to create valid data for functions now not assisted by software and done instead by human hunch, opinion, or kowtowing to authority figures as a way to decide, there being no valid data for deciding. If we can structure conversations so they are more productive, and structure the between-meeting individual flows of work specified by action-items in meetings, then get software to automatically take human responses to such structured meeting activities and structured between-meeting activities and turn them all into working new software code--we have made working software the new “minutes” of meetings. For example, Joe says “right now Bill decides based on his experience in X and Y before, but he sometimes makes mistakes where our current technology is changing rapidly--we need a way to quickly specify new technology capabilities in detail so we can automate Bill’s decisions with less error than Bill now makes”. This is a conversation fragment where work now not based on valid data is identified and the beginnings of social changes that could develop valid data for it are discussed.
Software development consists of spotting functions now done by happenstance or managerially for which we have valid data that would allow doing such functions better, and of spotting functions where we lack such valid data and designing social arrangements that would produce it.
Email: Addressing Email to Regular Fractals of Social Index Categories Replacing the Bushyness of Addressing Email to People Whose Names We Know (Schrage, 1995; Norman, 2004; Fogg, 2003; Morecroft et al, 2002; Albert-Lazlo, 2002; Gilovich, 1991; Lessig, 2001; Rheingold, 2003; Schiller, 1999; Sproull and Kiesler, 1991; Vaina and Hintikka, 1985; Weinberger, 2002; White, 2002)
“Anybody trying to do this function in this step of this process in the future would benefit from knowing what I just learned--that X definitely does not work though it appeals, and that Z works pretty well if you get your supply of it from A”--these types of insights occur all the time, about people doing their own work and about people interacting with the work of others--“if he would just take the time to phone around first, before acting, all this confusion could be avoided”. The trouble is that current email systems have no way to address such messages to the right person in the right process step at the right future time. Email today is crippled by having to know the name of someone in order to send them a message. As a result spam dwarfs valid email, because we can address only one hundredth or less of the important messages we think of sending to people and places every day. Consider being able to address email as follows:
- to anyone interested in this topic
- to anyone having this capability
- to anyone having this need
- to anyone in the future doing this step of this process
- to anyone assigned to this role in this part of this organization
- to anyone doing this role in any organization
- to anyone who in the past did this step of this process
- to anyone in the past assigned to this role in this part of this organization
- to anyone in the past doing this role in any organization.

Consider all those people in your large corporation interested in scuba diving. That set of people could be socially indexed using the above categories--those sharing particular interests, capabilities, needs, experience with doing a step of a particular process, and so on. In other words, I can index all the people in a large organization into the above categories, then I can recursively index the people in each of those categories using the same categories, and so on till there are too few people to make recursive application of the categories worthwhile.
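A small Python sketch of such social-index addressing (the people, tags, and field names are invented for illustration): the same category query applies at the whole-organization scale and then, recursively, within any group it selects:

```python
# Hypothetical social index: people carry tag sets instead of being
# reachable only by personal name.
PEOPLE = [
    {"name": "Ana", "interests": {"scuba"},          "roles": {"tester"}},
    {"name": "Ben", "interests": {"scuba", "chess"}, "roles": {"designer"}},
    {"name": "Cho", "interests": {"chess"},          "roles": {"tester"}},
]

def address(people, field, value):
    """Everyone whose `field` tag set contains `value`."""
    return [p for p in people if value in p[field]]

scuba = address(PEOPLE, "interests", "scuba")          # first-level index
testers_in_scuba = address(scuba, "roles", "tester")   # same index, recursively
assert [p["name"] for p in testers_in_scuba] == ["Ana"]
```

The fractal property is that `address` never changes between scales: the organization, the scuba group, and any sub-group inside it are all indexed by the identical category scheme.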
Just-in-Time: Regular Fractal Applications of Just-In-Time Principles to Replace Bushyness of Inventories Hiding True Process State, Regular Fractal Application of Transformation Process Steps to Replace Bushyness of Events Dissipating Impact (Greene, 2004; Greene, 2003, Greene, 1993; Akao, 1990; Malone, Crowston, et al, 2003; Champy, 1994; Davenport, 1993; Dimancescu, 1990; Axtell and Eckholdt, 1990; Becker and Steele, 1995; Horgen et al, 1999; Kelly, 1994; Schrage, 1995; Pava, 1983, 1986)
First we had departments assigned to do functions. But the vertical male hierarchy of that constantly caused everyone to gaze upward for promotion, not horizontally towards what customers wanted. Then we had processes assigned to outputs. But the horizontal linkages of that caused everyone to gaze laterally, missing wholly unexpected new capabilities appearing in the world for doing process functions. Then we had events, assigned to transform departments and processes in certain ways. This transformation from department management to process management to managing by events was achieved by repeated application of Just-in-Time management principles from total quality, which asserted that inventories were places where lack of management or excesses of managing hid. By drastically reducing inventories the true state of work could be spotted and addressed. Just-in-Time got applied, over time, first to physical inventories, then to less tangible inventories, and eventually, in managing by events, to managers as a fixed inventory--a specially designated social class with a monopoly on the right to deliver managing functions. Using a fixed high cost social class inventory of managers to deliver managing functions resulted, paradoxically, in too much managing always being delivered (as managers tried to look managerial to justify their perks) and too little managing being delivered (as excess managing hid important situations needing important sorts of managing functions not being delivered by such fixed inventories of managers). Such over-shoot and under-shoot phenomena are common anywhere that fixed inventories are used to support work functions, Just-in-Time theory says. Alternatives to a fixed social class inventory of managers to deliver leadership functions were invented--rescue squad delivery and event delivery being the primary ones (managing by events).
In managing by events mass workshop events deliver leadership functions instead of specially designated people. World best procedures for doing a leadership function are embedded in workshop procedures. In particular, the transformation needed at the core of any leadership function gets expressed as four or more main steps, and that sequence of steps is applied on all size scales of the event--one step per day of a 4-day event, one step per 4-hour period of any one day, one step per hour of any 4-hour period, one step per 15-minute segment of each hour. Regular fractal repetition of the same transformation process steps on all size scales of an event means at least the following:
- the doing of each step takes place in the context of all the other steps
- errors from implementing only on one scale, ignoring other scales, are avoided
- social rank, class, and hierarchy systems--the monkey element in human organization--that drive people to shun certain scales and glorify other scales are blunted and prevented from preventing lowest scale desiderata from linking to global scale desiderata.
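The scale-repeating schedule described above can be sketched in a few lines of Python (the four step names are placeholders, not the paper's):

```python
# Same four transformation steps repeated on every time scale of an event:
# days -> 4-hour blocks -> hours -> 15-minute segments.
STEPS = ["diverge", "cluster", "decide", "commit"]

def fractal_schedule(depth):
    """Nest the identical step sequence `depth` scales deep."""
    if depth == 0:
        return STEPS
    return {step: fractal_schedule(depth - 1) for step in STEPS}

event = fractal_schedule(2)  # e.g. days containing 4-hour blocks containing hours
# day 1 ("diverge") contains its own diverge/cluster/decide/commit blocks
assert list(event["diverge"].keys()) == STEPS
```

Because every scale carries the whole sequence, no step is ever done out of the context of the others, on any scale, which is the error-avoidance property listed above.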
Outsourcing: Regular Fractal Outsourcing Replacing the Bushyness of Opportunistic Imbalanced Outsourcing Trees (Brown and Duguid, 2000; Macho-Stadler et al, 1997; Malone, et al, 2003; Carley and Prietula, 1994; Morecroft et al, 2002)
Outsourcing is a purely fractal phenomenon of functions within large organizations deployed to smaller distant organizations specializing in doing those functions, and functions in those smaller distant organizations similarly deployed to still smaller more distant ones specializing in doing those functions, and so on. If this fractal chain of the same outsourcing moves applied on smaller and smaller size scales of firms is envisioned, entooled (over the internet), and managed with quality--vastly more outsourcing can be done because vastly more capability and subtlety of requirements get deployed. Horror stories you read about people going out of business because they out-sourced too little too late and oppositely about people who outsourced functions and got terrible quality of results back come from bushy opportunistic outsource choosing and entooling. Regular fractal outsource network development avoids both these problems.
Venture Clusters: Regular Fractal Investing in Evolutionarily Stable Constellations of Venture Businesses Replacing the Bushyness of Investing in Individual Ventures or Intrapreneur Ventures (Brown and Duguid, 2000; Lee et al, 2000; Sahlmann, 1999) New technologies are fast growing ecosystems characterized by:
- each venture business is primarily a viewpoint onto the ecosystem, a place to spot opportunities and emerging technology constellations
- it is not individual ventures that succeed but sets of them that together constitute evolutionarily stable constellations of related technical and market capabilities
- hence, individual ventures nearly always fail save where they belong to such constellations, and individual venture members of such constellations that fail are readily and rapidly replaced by new ventures to fill the stable role supported in that constellation.

Corporate spin-off ventures nearly always fail because the resources and security they get from their big parent firm (and personnel with attendant styles and backgrounds) slow them in spotting and joining emerging evolutionarily stable venture constellations. Why is this so? Individual new technologies mean nothing to customers, just about always. It takes a panorama of related technologies to transform a function enough to impact things customers care about. There is a critical mass in customer work and life impact that must be reached by a set of related technologies before they can be noticed, bought, and appreciated. Individual ventures and corporate spin-offs nearly never achieve this critical mass on their own, hence, nearly always fail. Each emerging evolutionarily stable constellation of related technologies-ventures fractally, as it grows in scale, requires a new generation of emerging evolutionarily stable constellations of new related technologies-ventures. In other words, the ability of an emerging constellation of ventures to themselves spot and organize still newer ventures to support their own emerging needs is critical to their success. It works like this--idea sex (ecstatic intersection of ideas) gives rise to career sex (ecstatic intersection of careers), which gives rise to technology and venture sex (ecstatic intersection of businesses).
Each component product and function within such ventures becomes an ecstatic intersection of ideas, careers, and ventures too.
Social Computation: Regular Fractal Arrangement of People and Roles in Social Automations Replacing Bushyness of Role Specializations or Role Sequencing (Greene, 2004; Greene, 2003; Cilliers, 1999; White, 2002; Batten, et al, 1995; Albert-Laszlo et al, 2002; Wolfram, 2002; Alexander and Pal, 1998; Bradshaw, 1997; Cilliers, 1999; Cowan et al, 1994; Crutchfield and Hanson, 1998; Epstein and Axtell, 1996; Ilgen and Hulins, 2000; Muhns and Singh, 1997; Resnick, 1996)
Machine computers, social computers, and biologic computers mutually inspire each other giving rise to new forms of machine computing inspired by social and biologic computers, new forms of social computers inspired by machine and biologic ones, and new forms of biologic computation inspired by machine and social computers. Computational sociality is new social arrangements of human beings inspired by arrays of processors in connectionist, neural, genetic forms of machine computation (which, it is obvious, were inspired by biologic forms of computation). You can imagine arrays of people, each person a CPU, central processing unit like Intel Pentium chips, and algorithms passed across such arrays that get work done by allocating information and roles to regions of those human processors, so-called “neighborhoods”. Such arrays of people organized into abstract neighborhoods with algorithms passed within and among neighborhoods, and the connectedness, diversity, and patchings among which are regularly tuned till “better than expected” results emerge from myriad interactions are called social automatons. Among the infinite ways of arranging people as social automatons are a subset of ways that organize such arrays of people fractally with, for example, a large group assigned to each of the four main steps in some transformation process, each of which is subdivided into smaller groups assigned to the same steps of the same transformation process (or of a different one). This can be continued on to a third or fourth level for really large groups. What this does is:
- it gets each step done in the context of each other step in the transformation process
- it gets all transformation process steps attended to at all size scales of the problem being dealt with.
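A Python sketch of such a fractal assignment (group sizes and step names are illustrative; any remainder when a population does not divide evenly is simply left unassigned in this sketch):

```python
# Arrange a population as a fractal social automaton: split into four
# groups, one per transformation step, and split each group the same way.
STEPS = ["diverge", "cluster", "decide", "commit"]

def assign(people, levels):
    """Recursively assign people to the same steps on successive size scales."""
    if levels == 0 or len(people) < len(STEPS):
        return people                         # too few left to subdivide
    quarter = len(people) // len(STEPS)       # remainder is dropped in this sketch
    return {step: assign(people[i * quarter:(i + 1) * quarter], levels - 1)
            for i, step in enumerate(STEPS)}

groups = assign(list(range(16)), levels=2)
# person 1 sits in the "cluster" sub-group of the "diverge" group
assert groups["diverge"]["cluster"] == [1]
```

Every group, at every scale, performs all four steps, rather than one elite layer monopolizing the "lofty" steps and a lower layer the mundane ones.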
When people argue politically--say, a liberal versus a conservative--and one wins, say the conservative, we find a year later that the winner embodies many of the ideas of the position he or she vanquished. Thus a dominant conservative party will have within it both a liberal and a conservative faction. This tendency for victors to incorporate the “best” parts of the positions of those they defeat causes fractal recurrence of themes on all size scales (Abbott, 2001). The tendency, when building social automatons, is to fall back on a different role per group (bureaucracy) or different groups doing the same roles (market competitions). Also the monkey hierarchy dynamics of human status seeking (people wanting to be “more important” than each other) tend to cause lofty functions to be dealt with by a lofty level of elites and mundane functions to be dealt with by a grungy “lower” layer of peons. If these forces are resisted and regularized fractal assignments of functions to groups across size scales are made, then many types of errors endemic to usual human bureaucracies, markets, and status hierarchies are eliminated.
Entertaining Work: Regular Fractal Degrees of Reality in Software Applications Replacing the Bushyness of Wholly Separate Game, Simulation, and Work Coordination Applications (Vogler, 1992; Campbell, 1949; Myers, 1992; Bates, 2002; Rouse, 2002; Dunniway, 2003; Rasmusen, 2001; Ilgen and Hulin, 2000; Prietula, et al, 1998; Eigen and Winkler, 1981; Laramee, 2002; Smith, 1982; Senge, 1990; Schrage, 1995; Gaylord and D’Andria, 1998; Gilbert and Conte, 1995; Hannon and Ruth, 1994; Picard, 1997; Norman, 2004; Hogan, 2003)
If you take a game and add a bit of reality to it, you get a simulation. If you take a simulation and add a bit of reality to it, you get work coordination, actually doing the function rather than simulating it. There is a continuum from game to simulation to work coordination and back, from work coordination to simulation to game. Indeed, fractally, you find inside games, embedded simulation and work coordination, and inside simulations embedded games and work coordination, and inside work coordination embedded games and simulations. Instead of these being separate types of application unrelated to each other, they are one application family, within which all members of that family fractally reappear.
Knowledge Management: Regular Fractal Compilations of Knowledge Between Diverse Formats Replacing Bushyness of Knowledge Flowing Only Within Practices Not Across Them (Dierkes, et al, 2001; Huczynski, 1993; Lesser et al, 2000; Easterby-Smith, 2002; Cohen and Sproull, 1996) We all know that knowledge transfers from one explicit format to another (one book to another, for example), from one implicit format (work routine) to another (a guy copying the guy next to him), from implicit formats to explicit formats (artificial intelligence people eliciting tacit knowledge to build expert systems that distribute it to others, for example), and from explicit formats to implicit formats (people mastering new ideas to the point that they become automatic routines not needing thinking about). These are the four translation functions between two different formats of knowledge. There are, however, more than 20 such different formats of knowledge, among each pair of which are four such translations. Any particular knowledge transferred across practice or discipline or department or functional boundaries will entail a particular configuration of translations among particular knowledge formats favored by parties on both sides of those boundaries. Knowledge moves via such configurations of translations among knowledge formats. Within the same practice, that is, not across boundaries, knowledge moves without such translations, hence, more easily and faster. There is a bottom up fractality to knowledge transfer, as small scale transfer of knowledge from Joe, say, to Susan within firm X becomes medium scale transfer from firm X to firm Z, and large scale transfer from industry A to industry B. On all these and other size scales the same configuration of translations repeats for the same knowledge type being transferred. This is fractal recurrence of configurations of knowledge format translations.
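One reading of the counts above can be checked arithmetically (assuming, as an illustration, four translation functions per unordered pair of formats):

```python
# With F knowledge formats and four translations per unordered pair of
# formats, the number of distinct translation channels grows quadratically.
from math import comb

def translation_count(formats):
    return comb(formats, 2) * 4

# the two-format (book vs. work-routine) case above has four translations
assert translation_count(2) == 4
# with the "more than 20 formats" mentioned, there are hundreds of channels
assert translation_count(20) == 760
```

This is why cross-boundary transfer is so much costlier than within-practice transfer: each boundary crossing must pick a workable configuration out of this large space of possible translations.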
Note the omnipresent error of thinking that “if we knew what we know” things would be better. Putting people and their knowledge in touch with each other (usually merely an excuse for computer vendors to sell equipment) by increasing connectedness in social automatons (that is, organizations) erodes a primary basis of all creativity, ensuring a later homogeneity of knowledge and outlook guaranteed to turn the destiny of the firm over to less uniform (cheaper IT budgets) competitors. One of life’s little ironies--more connection equals less worth of stuff to connect (see Galegher et al, 1990 for how over-connection ruins product development process concept development work).
Experimentrics: Regular Fractal Recurrence of Technology Standards and Associated Venture Businesses Replacing the Bushyness of Myriad Competing Interfaces (Greene, 2003; De Rosnay, 2000; Macho-Stadler, 1997; Brown and Duguid, 2000; Lee et al, 2000; Sahlman et al, 1999) Organization structures (departments, processes, and events) were traditionally there in order to do repeated functions, to “get work done”. Today, the domains in which steady streams of uniform outputs work well are diminishing yet still quite large. Alongside them are emerging domains where steady streaming of outputs is suicidal (Intel is the example--new products that cannibalize the company’s own older products as a cost of surviving: “only the paranoid survive”). Here, organization structures are being re-imagined and re-designed as experiments, not fixed commitments of form. A sales force, for example, re-designed as an experiment in what sells and what way of selling sells, sees salespersons as data gatherers producing data that will, when analyzed, show what products sell and what ways to approach and present to customers sell. Product lines get seen as experiments too--pushing one point in a whole range of product attribute spaces in order to see which direction of movement within that abstract feature space will sell. Initial products are experimental probes. To do this effectively you have to structure management and processes to value, get, and use the data needed to decide things. This experimentization of form is fractal, repeating on different size scales--venture business as experiment, sales approach as experiment, product release as experiment, technology included as experiment. In high tech markets the issue of standards complicates this story a little, dealt with below. Customers will not tolerate lots of interfaces. They want new things to easily talk to established present things (that is, old ones).
New technologies emerge along with interface standards so that sets of them coalesce that reach critical mass to impact customer functions enough to get attention and bought. Venture businesses tend to form constellations with each other based on such emerging standard interfaces so that sets of firms, representing sets of technologies, compete with each other. Within a standard or interface are combinations of capabilities that repeat on smaller size scales of each design. You end up with three or more layers in devices each handling the same diversity on its own size scale.
Quality Globalization: Regular Fractal Recurrence of the Primary Values of Competing Global Quality-Related Movements Replacing the Bushyness of Each Movement Promoting Itself Against the Others, Not With Them (Greene, 2003; Greene, 2004; Socolow, 1994)

There are at least ten global movements, each concerned about some sort of quality to maintain, that compete for media attention, funding, and staffing across the globe. Each presents its own primary values as best, denigrating the work of the other movements in spite of their overall similarity of intent and approach. This derationalizes both the actions of the movements themselves and policy making by cities, states, firms, and others. Quality was the responsibility of a profession called "quality assurance" for decades before Japanese competition introduced the world to "totalized" quality, with entire workforces responsible for it. Quality "globalization" extends all this from the quality of production--the total quality movement--to the quality of the earth--the environment movement--to the quality of conflict--the human rights movement--to the quality of mind--intellectual movements like complexity theory--to several others. Central to such globalization of quality is simultaneous combination of the primary values of these competing movements in one local practice, called a "value-meshing" practice. Such practices have to be invented; they do not just lie there waiting. When such practices get invented, they combine the primary values of all participating movements, and each part of them embodies those primary values, fractally repeated, on successively smaller size scales. Such practices are an increasingly central target for software specification and development work.
Summary

Below is a table summarizing the discussion thus far.

The Fractal Computing Strategy

Bushy Media

| Domain | Current practice | Improved practice | Fractal regularization | Bushyness that gets reduced |
|---|---|---|---|---|
| Prose | linear coding | 2-d diagrams | fractal concept model | expanse of text with invisible numbers, names, and ordering of points |
| Net Links | salt and pepper locale | visual bush | fractal link model | different link layouts and patterns on each net page |
| Menus | hanging mosses | cascading trees | fractal operation model | irregular operator sublists of lists and operand sublists of lists |
| Indexing | mention indexes | meaning indexes | fractal meaning indexes | searches that hit on mentions not meanings, irrespective of contexts |
| Modularity | software modules | human capability modules | fractal human capability models | diverse unstandard libraries of objects or patterns that are huge and irregular |

Bushy Processes and Bushy Organization Forms

| Domain | Current practice | Improved practice | Fractal regularization | Bushyness that gets reduced |
|---|---|---|---|---|
| Software Solutions | ad hoc local reinvention | customized standard applications | fractal socio-tech processes as interfaces and specs | happenstance mix of wish-fors, must-haves, and diverse nonstandard application forms |
| Software Specification | user-based rapid prototype design | TQ specification | fractal voice meshing | happenstance, non-explicit trade-offs among key voices |
| Specification Process | user-producer dialog | work-coordinated meetings and processes | fractal conversation compilation | non-structured conversing freedom that gets nowhere and produces little |
| Email | addressable person names | addressable process roles | addressable fractal social indexes | powerful ways to send messages to a tiny named portion of the world |
| JIT | manage by department | manage by process enleanment and signal | manage by fractal events | bias for certain size scales in results and in methods of reaching results |
| Outsourcing | ERP integration | macro: sets-of-firms ERP; micro: ERP | market-bid supply fractal events | unseen bush of happenstance layers of successive outsourcers |

Bushy Diversity

| Domain | Current practice | Improved practice | Fractal regularization | Bushyness that gets reduced |
|---|---|---|---|---|
| Venture Clusters | intrapreneurship | venture cluster establishment | fractal venturization of internal business units | masses of failed initiatives within big firms, and of ventures outside them, because not well allied |
| Game-Simulation-Work Continuum | separate realms of game, simulation, and work coordination | duo compilations: game to simulation, work to game, etc. | fractal compilation across game, simulation, and work coordination | missing or poorly done game, simulation, or work coordination components of games, simulations, and work coordination systems |
| Knowledge Management | knowledge redistribution | knowledge creation automation: data mining, agent simulations | fractal knowledge compilation infrastructure | different knowledge formats and compilations per horizontal or vertical organization unit |
| Experimentics | policy as experiment | structure as experiment | fractal firms as experiments--the agile economy | fixed structures or processes with clumsy add-on "coral reefs" to handle interactions |
| Quality Globalization | totalization | complexification | globalization: fractal movements meshing | fights among global quality-related movements, with choosing of favorites banishing the values of losers from system designs |
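The contrast between the "hanging mosses" of bushy menus and the fractal operation model in the table above can be sketched in code. This is a minimal illustration with hypothetical choice names, not a prescribed design: every menu level repeats one branch factor, one name format, and one ordering principle, the three regularities the paper's definition of bushyness turns on.

```python
# A minimal sketch (hypothetical names) of a "fractal operation model" menu:
# every level repeats the same branch factor, name format, and ordering
# principle, so a user learns one pattern that holds at every scale.

ORDERING = ["create", "view", "change", "remove"]  # one ordering principle, reused

def fractal_menu(path="", depth=2):
    """Build a nested-dict menu tree whose every level repeats ORDERING."""
    if depth == 0:
        return {}
    return {
        f"{path}{verb}": fractal_menu(f"{path}{verb}/", depth - 1)
        for verb in ORDERING
    }

menu = fractal_menu(depth=2)
# The top level and every sublevel present the same four choices in the same
# order, so the branch factor (4) and the ordering are constant across scales.
```

A bushy menu, by contrast, would give each node its own happenstance count, naming, and ordering of children, which is exactly what the fractal version removes.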
The Fractal Computing Strategy: What is Fractally Repeated

Bushy Media

| Domain | What is fractally repeated |
|---|---|
| Prose | order of higher level points repeated at lower levels |
| Net Links | order and layout of top level link types repeated at lower levels |
| Menus | order of top level choices analogically repeated at lower levels |
| Indexing | order of higher level points repeated at lower levels |
| Modularity | order of higher level points repeated at lower levels, for human and software capability fractals |

Bushy Processes and Bushy Organization Forms

| Domain | What is fractally repeated |
|---|---|
| Software Solutions | basic process parts repeated for each step in processes |
| Software Specification | all voices repeated in lower level components of each voice |
| Specification Process | refining ideas into designs, fractally repeated for each step of such designs |
| Email | social index category set repeated within each lower level subcategory |
| JIT | steps of the overall intended transformation process repeated within each smaller time scale period and activity of the event |
| Outsourcing | set of outsourced functions repeated for each outsource supplier |

Bushy Diversity

| Domain | What is fractally repeated |
|---|---|
| Venture Clusters | particular clumping of technologies repeated within each venture and product of cluster members |
| Game-Simulation-Work Continuum | game, simulation, and work coordination functions repeated within each such product |
| Knowledge Management | configuration of knowledge compilations for a particular body of knowledge, fractally repeated within each lower level knowledge subunit |
| Experimentics | experiment form and functions on the whole industry level repeated at technology, venture, department, and product levels |
| Quality Globalization | value-meshing practice inventions joining primary values of certain global movements, repeated at industry, standards, firm, venture, and product levels |
The Fractal Computing Strategy Agenda

The question, then, at this paper's end is this--just what is the "Fractal Computing Strategy"? It is the following questions and actions, applied to each of the software realms in the summary above.
• what is the pattern of fractal repetition there in the world, or that we can impose on the world
• what recursive process gives rise to that fractal pattern
• what regularization of systems, laws, behaviors, data, companies, processes, and the like best takes advantage of the fractal patterns there (technically "bushes" are fractal, so the interest here is in regularizing fractal patterns found in the world or designed by us)
• what are the costs of skipping such regularization
• what are the benefits of not skipping it
• what similarities exist among all the fractal patterns at all levels of software technology, as outlined above in this paper
• what distinctions exist among all those fractal patterns
• how do all aspects of software need to be revised so as to maximize the cognitive, cost, performance, and quality benefits of regularized fractality
• are there non-regularized, non-fractal alternative forms of any of the above aspects of software theory and practice that perform better than the regularized fractal forms outlined above

How does this differ from current practices? Currently we do not seek regularization up front. We do not invest in tools for regularizing inputs to systems. Instead we allow bushyness to be input, then invest in very expensive and partial tools for finding our way among the mess that input bushyness produces. This is the familiar mistake of fixing quality late in a production process at ten times what it would cost to fix it early, in the concept phases of design. The Fractal Computing Strategy overall amounts to this--a call to invest in tools for regularizing inputs to systems more than in tools for finding our way among the bushy outputs of systems that allow any sort of mess to be input.
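The cost and benefit questions above presuppose that bushyness can be measured. Below is a minimal sketch of one measurable facet, following the paper's definition (proliferation of different branch factors among peer nodes and levels): score a tree by the statistical spread of its branch factors, so a fractally regularized tree scores zero. The function names and example trees are illustrative only.

```python
# Sketch of one facet of "bushyness" as defined in this paper: proliferation
# of different branch factors among peer nodes and levels of a hierarchy.
# A fractally regularized tree (uniform branch factor) scores 0.

def branch_factors(tree):
    """Collect the branch factors of all internal nodes in a nested-dict tree."""
    out = [len(tree)]
    for child in tree.values():
        if child:  # recurse only into internal nodes
            out.extend(branch_factors(child))
    return out

def bushyness(tree):
    """Variance of branch factors: zero for a fractally regular tree."""
    f = branch_factors(tree)
    mean = sum(f) / len(f)
    return sum((x - mean) ** 2 for x in f) / len(f)

# Illustrative trees: the first repeats one branch factor at every scale,
# the second mixes branch factors 5, 4, and 1 across peers and levels.
regular = {"a": {"x": {}, "y": {}}, "b": {"x": {}, "y": {}}}
bushy = {"a": {"x": {}, "y": {}, "z": {}, "w": {}}, "b": {"x": {}}, "c": {}, "d": {}, "e": {}}
```

The paper's other two facets of bushyness, proliferation of node name formats and of ordering principles, could be scored analogously, by counting distinct formats and orderings across peers and levels.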
Thinking about Software and Computation Fractally

An example of how to think about software overall in a regularized fractal way is presented immediately below. It seeks the reappearance of the same functions and/or structures at each size scale of doing software, from whole-industry standards to functions enabled in particular modules of code. It pursues this regularization because of the transparency and clarity to human minds it produces. But much more is at stake than that. For, as mentioned above, fractal regularization of functions repeats application of them at all size scales. This catches errors missed otherwise. Also as mentioned above, fractal regularization implements, at each level, each function in the context of its preceding and succeeding ones. This lubricates cooperation and coordination among separate realms of code or design. In the model below, the same sequence of cycles appears on each level of software practice, from learning a programming language to eventizing virtual spaces and firms. By practicing these cycles on each level, a powerful simplicity crosses the levels. The cycles combine competing processes. Too much investment in specification makes prototypes fail, and vice versa. Too much investment in abstracting makes modeling fail, and vice versa. Balancing requires judgement, a human factor par excellence. As you progress from learning a programming language (not at all the same thing as learning to program computers) to eventizing virtual firms, each cycle takes on more content and importance. The columns of the model below suggest seeing yourself as mastering any one cycle on all size scales of the software industry. The rows of the model below suggest mastering all the cycles for each level you are on. As you progress downward on the model below, you encounter more and more human elements to handle.
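The model's core claim, that one cycle set recurs unchanged at every size scale of software practice, can be stated as data. A sketch with illustrative names only, following the model's rows (levels) and columns (cycles):

```python
# Sketch of the regularized fractal model's structure: the same four cycles
# apply unchanged at each of the nine levels of software practice, giving
# one pattern repeated across all size scales.

CYCLES = [
    "specify-prototype",
    "abstracting-modeling",
    "finding-developing",
    "build-test",
]

LEVELS = [
    "learning a programming language",
    "learning to program",
    "programming an application",
    "managing software application development",
    "engineering software products",
    "managing information system evolution",
    "re-engineering business processes",
    "virtualizing functions and organizations",
    "eventizing virtual and real organizations",
]

def practice(level):
    """Every level runs the identical cycle set: one pattern, nine scales."""
    return [(level, cycle) for cycle in CYCLES]

plan = [step for level in LEVELS for step in practice(level)]
```

The regularity is the point: a practitioner who has mastered the four cycles at one level already knows the shape of every other level; only the content of each cycle grows.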
Regularized Fractal Model of All of Software Practice (Suggestive, not Complete)

Software Process: learning a programming language
Description: doing features of other programming languages in your chosen language
Specify-Prototype Cycle: elaborate the consequences of other-language keywords by trying to execute them using new-language keywords
Abstracting-Modeling Cycle: articulate other-language keyword consequences on the abstract dimensions that new-language keywords act on
Finding/Developing Cycle: new-language keywords and keyword combinations as a menu to select from for matching other-language keyword functions
Build/Test Cycle: execute the other language using your new-language functions and compare consequences till an exact match is obtained

Software Process: learning to program
Description: data-procedure abstraction and control and non-determinism trade-offs
Specify-Prototype Cycle: break down the root cause to be handled by software and socialware functions into operators to apply to operands and determinate action sequences to be selected as data and application situations dictate
Abstracting-Modeling Cycle: separate operator from operand, and controlled sequences of actions from non-deterministic selection of such sequences
Finding/Developing Cycle: seek operators and operands, fixed sequences of actions, and non-deterministic selections of such sequences on the net as already-done software modules, and compose new code only for what cannot be found
Build/Test Cycle: combine operators and operands, fixed action sequences, and non-deterministic selectors of such sequences, then test the results against the spec of which root causes make process outputs displease customers

Software Process: programming an application
Description: applications as sentences of minimal grammars you invent
Specify-Prototype Cycle: all features of the application address, with software and socialware functions, root causes of why steps in work processes produce traits in process outputs that displease customers
Abstracting-Modeling Cycle: find the smallest set of functions that combine to execute all the functions needed to address the root causes in the software specification
Finding/Developing Cycle: find on the net already-composed software units that execute as many of the functions needed in your application as possible, and only then compose functions you are unable to find
Build/Test Cycle: combine separate parts of the application built by different people, pairs, or groups, then test the results and revise

Software Process: managing software application development
Description: all features address root causes of process output quality failure; socialware features accompany software features
Specify-Prototype Cycle: model the process environment the application is to impact, specifying each software feature as addressing root causes of why steps in the process cause outputs of the process to displease customers
Abstracting-Modeling Cycle: specify the process environment the application is to impact, then specify the application itself as a process, then specify the interface of the application as a process, then specify the data of the application as process data
Finding/Developing Cycle: find best-practice versions of the process environment of the application, the process that the application is, the process interface of the application, and the process database of the application, then create only what cannot be found
Build/Test Cycle: combine process environment links, the process of the application itself, the process interface, and process databases into the entire application, and test against root causes of the work process to be impacted

Software Process: engineering software products
Description: co-evolving new ventures/products/standards as competitive sets, not competitive on their own
Specify-Prototype Cycle: ventures as windows on the flux of clusters of firms, technologies, careers, and people, and as prototypes quickly revised when found not to link centrally with emerging constellations of viable sets of firms, technologies, careers, and people
Abstracting-Modeling Cycle: abstract from your own product ideas to find new contexts for new instances of such products; abstract from your own set of related firms/technologies to find new contexts for new such sets to link up with, via changing fundamentally what product you are out to create
Finding/Developing Cycle: find technologies, people, careers, and firms that combine to make what your firm makes, and make only what you cannot find or buy
Build/Test Cycle: assemble all members of your particular constellation of related firms/technologies and test whether they are an evolutionarily stable set supported by emerging forces in the industry

Software Process: managing information system evolution
Description: technology succession management; technology pace management; reliable performance management; Taguchi optimization
Specify-Prototype Cycle: specify critical-mass sets of simultaneous related upgrades in related systems that suffice to transport business functions into new competitive capability and accomplishment, then build prototypes of each combination to be tested
Abstracting-Modeling Cycle: separate development of reliable component technologies from development of products and working systems, and optimize both separately before optimizing them together
Finding/Developing Cycle: find combinations of key technologies, systems, and upgrades that transport business functions into new competitive capability and accomplishment, and invent the key components that cannot be found
Build/Test Cycle: assemble all members of key technology platform sets with related social supports and transformations, and test implementability and impact on competitive capabilities

Software Process: re-engineering business processes
Description: functional model with assumption changes and social-technical materials-for-doing-functions changes
Specify-Prototype Cycle: specify old assumptions about doing work functions that need changing and old technical and social tools for doing such functions that can be replaced with newer and better capabilities
Abstracting-Modeling Cycle: separate departmental from process from event support systems, and specify evolution from departmental to process to event over time as competitive needs dictate
Finding/Developing Cycle: find best-practice evolutions of departments to processes and of processes to events, and copy or buy them, making what you cannot thus obtain
Build/Test Cycle: combine departmental, process, and event systems based on new assumptions and social-technical materials for doing work functions, and test against old systems, competitor capabilities, and emerging needs

Software Process: virtualizing functions and organizations
Description: game-simulation-work-coordination continuum; people multi-tasking among firms
Specify-Prototype Cycle: require working games of new functionality as the form of proposal you accept, and build simulations as first prototypes, adding reality till simulations become actual work-support and work-coordination systems
Abstracting-Modeling Cycle: encourage people to work for the cluster of ventures spun off of your firm--as many of them as they can make good contributions to--rewarding people, without upward limit, for all their contributing; people become employees of multiple different firms, working for some 4 hours a week, for others 8 hours or more
Finding/Developing Cycle: find ventures to obtain or ally with, and build ventures only where you cannot find them
Build/Test Cycle: assemble your cluster of ventures and tune interactions till better-than-wanted results emerge among them

Software Process: eventizing virtual and real organizations
Description: fractal transform process embedding in all event levels and scales; expert-protocol-based mass workshops for delivering management and leadership functions
Specify-Prototype Cycle: transfer more and more functions from departments to processes to events, using mass workshop events based on world-best protocols for doing particular managing or production functions
Abstracting-Modeling Cycle: set up various social automatons of people and manage a population of such automatons interacting as a larger-scale fractal social automaton composed of smaller social automatons
Finding/Developing Cycle: seek external social automatons and make ones only where you cannot find them
Build/Test Cycle: assemble social automatons in events and manage their interactions till better-than-wanted results emerge
The social contents of what you manage and the social outputs those contents produce both increase. In effect you are more and more exploring the interface between the social and the mechanical world of devices.
Conclusion: Fractal Cultures of Technologies and Their Producers, Nerds

The anthropology of technologies and of the technical communities of their producers and users is a new field, pretty small in numbers thus far. Its insights are powerful, however, enhancing other perspectives on technology like Fogg's social-psych applications to the internet (Fogg, 2003). Re-engineering, seen by anthropologists of technology, was a way to masculinely do (that is, do with technical systems) the feminization of work systems, replacing monkey-male hierarchy behaviors and systems with horizontal, shared-intimacy female systems. In nation after nation, company after company, re-engineering was embraced where no one could mention, propose, or talk about an over-male-ness of system hurting productivity or an overall monkey-ness of behavior distorting technical and product capabilities. All could talk about "re-engineering", however, as it was comfortingly rational, male, emotionless, and vertical-status oriented (in, paradoxically, how it achieved non-male, non-monkey, non-hierarchical systems of work). This was the anthropology of technology viewpoint at work (see Greene, 2003). If we examine, in an anthropology of technology perspective, what sustains bushyness in system inputs and outputs, we find the maleness of technology users and producers a possible answer. The argument made by some anthropology of technology perspectives goes something like this:
The Gender Argument, in Brief

Male behaviors are hormonal, at least as much as female behaviors are (one-up-man-ships replacing helping people, for example, in family and business settings; hacking as self-validation, for another example). Add high technology, and that maleness gets intensified. The way to solve anything becomes: find or invent the correct machine. Social, relationship, and emotive causes and solutions are avoided in favor of technical machine ones (for example, Don Norman's recent, at age 60, "discovery" of emotions in interface issues). For another example: if people are not cooperating, give them networks of devices for communicating better. The idea that communication problems come from something other than physical speed limits on message transmission channels seldom occurs. Bushy systems come from technically "solving" things caused by technical solutions, rather than socially, relationally, or emotively solving such things. Bushyness, as presented here, is one such problem being made worse by male technical systems for handling it (too late, after the fact; see Tenner, 1996 for elaborations on this theme). More feminine technologies for inputting to systems promise far cheaper, more powerful solutions. Fractal regularization, whether emergent on its own in the world or imposed deliberately by people on the systems they design and produce, is one such feminization of system, sorely needed. What is feminine about fractal regularization? It has to do with female preferences for clarity of environment and for empathetic, transparent transmission of messages in an uncluttered environment (socks, for example). It starts with stopping the "technical tools solve technical problems" mindset. It handles directly the despair of nerds expressed in statements like "you can never get people to input anything but bushy things".
It handles the unrealistic claims for, promises for, and expectations of software applications supposed, alone, without commensurate (usually completely unspecified and unenforced) social changes, to "impact" myriad work aspects. Despair at social tactics and changes is the start of all technical tool discussions when conducted by males, researchers are finding (Norman, 2004). Males simply despair of doing things or solving things socially (Tannen, 1990). They prefer technical tool solutions even where the problems come from too much of a technical-tool-solution approach. At the very least it is worth serious investigation to quantify the degree of gender-factor causation of our technical and system problems.
The Internet is Autistic

An example of the gender basis of ineffective technical systems is the internet. Autistic people, recent research shows, seem to fall along a continuum of increasing maleness, genetically speaking. All males differ from females in being less immediate to their own and others' emotions (perhaps because sensitive warriors are dead ones). Autistic kids can vary from a little less sensitive than usual males to being brain-hardware blind to other minds ("mind blindness", some autism has been called). Idiot savants, a small subset of autistic people, have access to the raw input streams into the mind's perceptual machinery, unmediated by cerebral cortex with its deep indexing of inputs. The internet, as ordinary people experience it now, is such an unmediated, unindexed, raw stream of more or less interesting junk. It lacks the indexing, the cerebral cortex mediation, that turns inputs into meaning. If we examine insecurity, junk mail, spam, fraud, and the rest on the internet, it becomes clear that something very like maleness itself is killing this technical system and driving people and organizations from use of it (repeating exactly the same phenomenon seen in electronic democracy experiments; see Tsagarousianou et al, 1998). Males (especially young ones, sociology finds) are perhaps the problem, with their unrecognized and uncontrolled hormones. Fractal regularization appeared in the history of human designs in the form of the mandalas of high-road Buddhism some 1700 years ago. Buddhist meditation deliberately works on compassion and other virtues of females, achieved by males via balancing the imbalances genetically put into their minds. Davidson's research at the University of Wisconsin, Madison, is beginning to get these ideas into the scientific mainstream via valid research methods (Davidson et al, 2002).
This paper suggests that the Fractal Computing Strategy corresponds to the "fixing it in design is ten times as economic as fixing it in production" insight of total quality, and to the feminizing-technical-systems insight of the anthropology of technology field.
References

1. Abbott, Chaos of Disciplines, Chicago, 2001
2. Akao, ed., Quality Function Deployment, Productivity, 1990
3. Alexander & Pal, Digital Democracy: Policy and Politics in the Wired World, Oxford, 1998
4. Barabasi, Albert-Laszlo, Linked: The New Science of Networks, Perseus, 2002
5. Anders, Envisioning Cyberspace: Designing 3D Electronic Spaces, McGraw-Hill, 1998
6. Axtell, R. and Eckholdt, S., Organizational Systems Design: Designing High Performance Organizations, Organizational Systems Design Alliance, Winston-Salem, NC, 1990
7. Bates, Game Design: The Art and Business of Creating Games
8. Batten, D., Casti, J., and Thord, R., Networks in Action: Communication, Economics, and Human Knowledge, Springer Verlag, Berlin, 1995
9. Becker & Steele, Workplace by Design: Mapping the High Performance Workscape, Jossey-Bass, 1995
10. Bortolussi & Dixon, Psychonarratology: Foundations for the Empirical Study of Literary Response, Cambridge, 2003
11. Bradshaw, J., ed., Software Agents, MIT Press, 1997
12. Brown and Duguid, The Social Life of Information, Harvard Business School Press, 2000
13. Brynjolfsson and Kahin, Understanding the Digital Economy, MIT, 2000
14. Buchanan, Nexus: Small Worlds and the Groundbreaking Science of Networks, Norton, 2002
15. Carley, K. M. and Prietula, M. J., Computational Organization Theory, LEA, Hillsdale, NJ, 1994
16. Carmel, E. and George, J., "Joint Application Development and Electronic Meeting Systems: Opportunities for the Future", Center for the Management of Information, University of Arizona, Tucson, AZ, 1992
17. Champy, J., Reengineering Management, CSC Index, Boston, Mass., 1994
18. Cilliers, Complexity and Postmodernism, Routledge, 1999
19. Clark, Microcognition: Philosophy, Cognitive Science, and Parallel Distributed Processing, MIT, 1990
20. Cohen and Sproull, eds., Organizational Learning, Sage, 1996
21. Cole and Scott, eds., The Quality Movement and Organization Theory, SAGE, 2000
22. Cole, R., Bacdayan, P., and White, B., "Quality, Participation, and Competitiveness", California Management Review, Vol. 35, No. 3, Spring 1993
23. Cowan, G., Pines, D., and Meltzer, D., Complexity: Metaphors, Models, and Reality, Addison-Wesley Studies in the Sciences of Complexity, Reading, Mass., 1994
24. Crutchfield and Hanson, Computational Mechanics of Cellular Processes, Princeton Univ. Press, 1998
25. Davenport, T., Process Innovation, Harvard Business School Press, Boston, Mass., 1993
26. Davidson and Harrington, eds., Visions of Compassion, Oxford, 2002
27. de Castro & Timmis, Artificial Immune Systems: A New Computational Intelligence Approach
28. Dennett, D., Kinds of Minds, Basic Books, 1996
29. De Rosnay, The Symbiotic Man, McGraw-Hill, 2000
30. Dertouzos, The Unfinished Revolution, Harper Collins, 2001
31. Dierkes et al., Handbook of Organizational Learning and Knowledge, Oxford, 2001
32. Dimancescu, D., The Seamless Enterprise, Omneq Press, Manchester, N.H., 1990
33. Documentum Inc. Staff, Documentum: Document Information Management Solutions, Xerox, Stamford, Conn., 1992
34. Dunniway, Professional Game Design, 2003
35. Easterby-Smith, ed., Blackwell Handbook on Organizational Learning and Knowledge Management
36. Edelman, Bright Air, Brilliant Fire, Basic Books, 1992
37. Eigen & Winkler, Laws of the Game: How the Principles of Nature Govern Chance, Princeton, 1981
38. Epstein and Axtell, Growing Artificial Societies: Social Science from the Bottom Up, MIT Press, 1996
39. Flake, The Computational Beauty of Nature: Computer Explorations of Fractals, Chaos, Complex Systems, and Adaptation, MIT Press, 1998
40. Fogg, Persuasive Technology: Using Computers to Change What We Think and Do, Morgan Kaufmann, 2003
41. Galegher, J., Kraut, R., and Egido, C., eds., Intellectual Teamwork, Lawrence Erlbaum, Hillsdale, NJ, 1990
42. Gaylord and D'Andria, Simulating Society: A Mathematica Toolkit for Modeling Socioeconomic Behavior, Springer, 1998
43. Gelernter, D., Mirror Worlds, Oxford University Press, New York, 1990
44. Gery, G., Electronic Performance Support Systems, Weingarten Publications, Boston, MA, 1991
45. Gilbert & Troitzsch, Simulation for the Social Scientist, Open Univ. Press, 1990
46. Gilbert & Conte, eds., Artificial Societies: Computer Simulation of Social Life, UCL, 1995
47. Gilovich, How We Know What Isn't So, Free Press, 1991
48. Greene, R., Implementing Japanese AI Techniques, McGraw-Hill, New York, 1990
49. Greene, R., Global Quality, American Society for Quality Control, Milwaukee, WI, with Irwin Professional Publishing, Homewood, IL, 1993
50. Greene, Managing Complex Adaptive Systems, Bestest Mostest, 2003
51. Greene, Defining 21st Century Human Capabilities, Bestest Mostest, 2004
52. Greene, R., "The Social Cellular Automata Process: Applying Complexity Theory to Improve the Movement Building Aspects of Management", Journal of Policy Studies, March 1997
53. Grohowski, R. and McGoff, C., "Implementing Electronic Meeting Systems at IBM: Lessons Learned and Success Factors", Systems Integration Division, IBM, Bethesda, MD, 1993
54. Guttenplan, ed., The Blackwell Companion to the Philosophy of Mind
55. Hannon and Ruth, Dynamic Modeling, Springer, 1994
56. Hewitt, C., "Offices are Open Systems", ACM Transactions on Office Information Systems, Vol. 4, No. 3, July 1986
57. Hoch, Roeding, et al., Secrets of Software Success, Harvard Business School, 2000
58. Hogan, The Mind and Its Stories: Narrative Universals and Human Emotion, Cambridge, 2003
59. Horgen, Joroff, Porter, and Schon, Excellence by Design, Wiley, 1999
60. Huczynski, A., Management Gurus, Routledge, NYC, 1993
61. Ichikawa, A., Practical Strategic TQM for Middle Management, Diamond Company, Tokyo, 1986
62. Ilgen & Hulin, eds., Computational Modeling of Behavior in Organizations, American Psychological Association, 2000
63. Ishida, CommunityWare, Wiley, 1998
64. Ishikawa, K., What is Total Quality Control? The Japanese Way, translated by David J. Lu, Prentice Hall, Englewood Cliffs, NJ, 1985
65. Jacobson, ed., Information Design, MIT Press, 1999
66. Juran, J., Juran on Planning for Quality, The Free Press, New York, 1988
67. Kahneman and Tversky, eds., Choices, Values, and Frames, Cambridge, 2000
68. Kano, N., "A Perspective on Quality Activities in American Firms", California Management Review, Vol. 35, No. 3, Spring 1993
69. Kawase, T., Improving Productivity with a Line-Centered Organization, Keio Univ. Press, Tokyo, 1988
70. Kegan, In Over Our Heads, Harvard University Press, 1994
71. Kelly, Out of Control, Addison-Wesley, 1994
72. Kintsch and van Dijk, Macrostructures, LEA, 1980
73. Kovacic, G., New Approaches to Organizational Communication, State Univ. of New York Press, 1994
74. Kurogane, K., "Practical Control Points in TQC Promotion", presentation at the International Conference on Quality Control, Tokyo, October 1987
75. Lappin, Blackwell Handbook of Contemporary Semantic Theory, 1996
76. Langer, Mindfulness, Addison-Wesley, 1989
77. Laramee, ed., Game Design Perspectives, Charles River Media, 2002
78. Lee, Miller, Hancock, and Rowen, eds., The Silicon Valley Edge, Stanford, 2000
79. Leebaert, D., The Future of Software, MIT Press, Cambridge, Mass., 1995
80. Leebaert, ed., The Future of the Electronic Marketplace, MIT Press, 1999
81. Lessig, The Future of Ideas: The Fate of the Commons in a Connected World, Random House, 2001
82. Macho-Stadler and Perez-Castrillo, An Introduction to the Economics of Information, 2nd edn., Oxford, 1997
83. Malone, Crowston, and Herman, eds., Organizing Business Knowledge: The MIT Process Handbook, MIT, 2003
84. Marchesi et al., Extreme Programming Perspectives, Addison-Wesley, 2003
85. Meyer, B., "Macrostructure", Journal of Verbal Learning and Behavior, No. 4, 1984
86. Minsky, M., Society of Mind, MIT Press, Cambridge, MA, 1988
87. Morecroft, Sanchez, and Heene, Systems Perspectives on Resources, Capabilities, and Management Processes, Pergamon, 2002
88. Mullen, B. and Goethals, G., eds., Theories of Group Behavior, Springer Verlag, Berlin, 1987
89. Huhns and Singh, Readings in Agents, Morgan Kaufmann, 1997
90. Myers, Brown, and McGonigle, eds., Reasoning and Discourse Processes, Academic, 1986
91. Myers, Intuition: Its Powers and Perils, Yale, 2001
92. Myers, The Pursuit of Happiness: Who is Happy and Why, Morrow, 1992
93. Neal, L., "Computer Supported Meeting Rooms", EDS, Cambridge, Mass., 1992
94. Nisbett & Ross, Human Inference: Strategies and Shortcomings of Social Judgement, Prentice Hall, 1980
95. Norman, The Psychology of Everyday Things, Basic, 1988
96. Norman, Emotional Design, Basic, 2004
97. OECD, Microfinance for the Poor?, OECD, 1997
98. Olson, Malone, et al., eds., Coordination Theory and Collaboration Technology, LEA, 2001
99. Pasmore, W., Designing Effective Organizations, John Wiley and Sons, NYC, 1988
100. Pava, C., Managing New Office Technology, The Free Press, NYC, 1983
101. Pava, C., "Redesigning Sociotechnical Systems Design: Concepts and Methods for 1990s", The Journal of Applied Behavioral Science, Vol. 22, No. 3, 1986
102. Picard, R., Affective Computing, MIT Press, 1997
103. Piattelli-Palmarini, Inevitable Illusions: How Mistakes of Reason Rule Our Minds, Wiley, 1994
104. Plotkin, Darwin Machines and the Nature of Knowledge, Harvard, 1993
105. 106. 107. 108. 109.
Plous, The Psychology of Judgement and Decision Making, McGraw Hill, 1993 Postrel, The Future and Its Enemies, Free Press 1998 Prietula, Michael, Carley, Kathleen and Gasser, les editors; Simulating Organizations: Computational Models of Institutions and Groups, MIT Press, 1998 Rasmusen, Editor, Readings in Games and Information, Blackwell, 2001 Resnick, Turtles, Termites, and Traffic Jams, MIT Press, 1996.
110. Rheingold, Smart Mobs, the next social revolution, Perseus, 2003 111. Riel, Object-Oriented Heuristics, Addison-Wesley, 1996 112. Rouse, Game Design: Theory and Practice
113. Rousseau, D.; “Managing the Change to an Automated Office: Lessons from Five Case Studies”; Office: Technology and People; Elsevier; 1989 114. Sahlman, et al, The Entrepreneurial Venture (readings), Harvard, 1999
115. 116. 117. 118. 119. 120. 121.
Sakamoto, S.; Process Design Concept: A New Approach To IE, Industrial Engineering, March, 1989 Schiller, Digital Capitalism, Networking the Global Market System MIT 99 Schrage, No More Teams, Mastering the Dynamics of Creative Collaboration, Currency Doubleday, 1995 Sellen and Harper, The Myth of the Paperless Office, MIT, 2002 Senge, P. The Fifth Discipline; Doubleday Currency, NYC, 1990 Shapiro and Varian, Information Rules, a Strategic guide to the Network Economy, Harvard Business 1999@ Sigmund, Games of Life, Explorations in Ecology, Evolutn&Behavior, Oxford93@
122. Shiba, et al, A New American TQM, Productivity, 1993
123. Smith, Evolution and the Theory of Games, Cambridge, 1982 124. Smith, S.; Case Studies in Modeling Office Procedures; Xerox: Palo Alto Research Center Report, 1992 125. Socolow, Andrews, et al, Industrial Ecology and Global Change, Cambridge, 1994
126. Sproull, L. and Kiesler, S., Connections, MIT Press, Cambridge, Mass., 1991 127. Stanovich, How to Think Straight about Psych, Allyn and Bacon, 6th edn, 2001@
128. Stefik, Internet Dreams, MIT Press, 1996 129. Sternberg, editor, Why Smart People Can Be So Stupid, Yale, 2002@
130. Sudweeks, Fay, McLaughlin, Margaret and Rafaeli, editors; Network and Netplay: Virtual Groups on the Internet, MIT Press, 1998) 131. Tannen, Deborah; You Just Don’t Understand; Morrow, NYC, 1990
132. Tapscott, Creating Value in the Network Economy, Harvard Business, 1999 133. Tenner, Why Things Bite Back, technology and the revenge of unintended consequences, Knopf, 1996@ 134. Tsagarousianou, Tambini and Bryan, editors, Cyberdemocracy, Routledge, 1998 135. Vaina and Hintikka, eds, Cognitive Constraints on Communication, Reidel, 85 136. Van Dijk, Macrostructures, an interdisciplinary study of global structures in discourse, interaction, and cognition, LEA, 1980
137. Vogler The Writer’s Journey, Michael Wiese Productions Book, 1992@ 138. Weinberger, Small Pieces Loosely Joined, Perseus, 22002 139. White, Markets from Networks, Socioecon Models of Production, Princeton, 2002 140. Wolfram, A New Kind of Science, Wolfram, 2002@ 141. Xerox, High Performance Work Center, Proposal to Management, 1992
Appendix: Expanded Explanations of Each of the 16 Analogical Applications of Regular Fractalization of Prose

Fractalization of Prose, Analogical Application 1: to Net Links on Homepages

Freedom Self Contradicting into Chaos. Early hypertext systems suffered from bushyness of links.
As links proliferated, interesting paths, though present, became more and more unfindable. Tools were developed for wending one's way amid a plethora of links, instead of tools for structuring links in interesting ways. It is the prose story all over again: freedom to branch (irregular branch factors), freedom to name (irregular naming formats), and freedom to order (irregular principles of order) resulted in bushes of connections that guaranteed interesting choices were lost, or findable only with excessive time and effort. Since eons of prose had habituated humans to low levels of cognitive performance, this situation among net pages was tolerated.
The Fractal Alternative.
Fractal regularization of branch factor, naming formats, and principles of order among net links would amount to the same pattern of link locations and link types on each homepage (or on each of a limited set of homepage types). All sorts of design alternatives need experimental exploration and validation: a floating menu of link types per page, a border around each page with areas assigned to specific link types, a menu of link types produced by a right mouse-button click, and others.
Fractal Repetition of What Link Types.
There are a number of abstract theoretical dimensions that order possible link types:
• part-whole hierarchies of links: click on an item to get its components, or what it is a component of
• levels-of-understanding hierarchies: click on an item to get an easier or more basic explanation or illustration of it, or what it is an easier explanation of
• levels-of-dynamism hierarchies: click on an item to get a simulation, game, or work-coordination version of the item at work, or what it is a game version of
• border-crossing hierarchies: click on an item to get the Chinese version, the male version, the 18th-century version, the People Magazine version, or what it is a particular species of
• data-type hierarchies: click on an item to get the data analysis supporting it, the source data thus analyzed, graphic representations of it, or video clips illustrating it
• spatially laid out links: click to get parent nodes, child nodes, peer nodes, or non-acquaintance (distant) peer nodes
• functionally laid out links: click to get sources, examples, tests, peer topics, counter topics, component topics, or more general topics.
In addition, at present, we have pop-up menus at link words offering operations choices such as saving, renaming, and re-linking. All sorts of experiments are needed: having each of these compete, then testing blends of them against each other for impact, interest, and effectiveness. A highly abstract dimension comes into view: the degree to which aspects of a work are accessed by links versus by sequential reading or scanning. We get the image of a dozen different paths through the points of a passage, competing within each user for interest, relevance, and power. But there is a different overall issue: regularization of links, by type and layout.
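The regularization described above can be sketched in code. The following minimal Python sketch enforces one branch factor and one ordered set of link types on every page; the specific type names and the branch factor of 3 are illustrative assumptions, not a proposal from the paper.

```python
# Hypothetical sketch: every page carries the same ordered set of link types
# and the same branch factor, so no page's outgoing links can grow bushy.
# FIXED_LINK_TYPES and BRANCH_FACTOR are assumed values for illustration.
FIXED_LINK_TYPES = ("part-whole", "level-of-understanding", "data-type")
BRANCH_FACTOR = 3  # identical on every page and every level

class FractalPage:
    def __init__(self, name):
        self.name = name
        # the same link slots, in the same order, on every page
        self.links = {t: [] for t in FIXED_LINK_TYPES}

    def add_link(self, link_type, target):
        if link_type not in FIXED_LINK_TYPES:
            raise ValueError(f"unknown link type: {link_type!r}")
        if len(self.links[link_type]) >= BRANCH_FACTOR:
            raise ValueError("branch factor exceeded: bushyness refused")
        self.links[link_type].append(target)
        return target

home = FractalPage("home")
home.add_link("part-whole", FractalPage("chapter-1"))
```

Because every page rejects a fourth link of any type, the link geometry stays identical at every scale of the site, which is exactly the fractal property argued for.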
The Center for Fractal Computing homepage demonstrates the fractally regularized page links discussed here.
Example System. Structural Cognition is an "orthogonal discipline", a body of knowledge that cuts across all traditional disciplines and that can be researched and taught on its own as well as applied to any of thirty other fields. It involves finding mental operators and applying them not to habitual conventional operands in the mind, but to structured patterns of many more ideas than they usually get applied to. Fractal concept models, within Structural Cognition, are categorical models regularized into fractal format with one branch factor, one sequence of name formats, and one principle of name ordering throughout. One such fractal concept model was presented at the start of this paper as an alternative organization of content to the bushy prose test paragraph there. Fractal organization of homepage links, and fractal organization of the homepages themselves, turning entire sites into fractal concept models, has been done at the Center for Fractal Computing, in order to collect experimental data on its effect on all sorts of software use, information gathering, and retention parameters. This allowed telephone-number-style access to any of 256 items fractally ordered (in groups of 4) in the site. Muscle memory and concept memory both supported fast access and eventual memorization of the entire 256-item site substructure.
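The "telephone number" access to 256 items in groups of 4 can be sketched concretely: with a uniform branch factor of 4 and depth 4, every leaf item has a fixed-length address of four keypad digits. This Python sketch is an assumed reading of that scheme, not the Center's actual implementation; the digit convention (1-4 per level) is an assumption.

```python
# Hypothetical sketch of telephone-number-style access: a uniformly branching
# tree of branch factor 4 and depth 4 holds 4**4 = 256 leaf items, each
# addressed by a four-digit sequence of keypad digits 1-4.
BRANCH = 4
DEPTH = 4  # 4**4 = 256 leaf items

def build_tree(depth, path=""):
    """Build the uniform tree; leaves are named by their digit path."""
    if depth == 0:
        return f"item-{path}"
    return [build_tree(depth - 1, path + str(i + 1)) for i in range(BRANCH)]

def item_at(digits, tree):
    """Follow a digit sequence like '1324' down the tree to a leaf."""
    node = tree
    for d in digits:
        node = node[int(d) - 1]  # digits 1..4 select among 4 children
    return node

site = build_tree(DEPTH)
assert item_at("1111", site) == "item-1111"
assert item_at("4321", site) == "item-4321"
```

Because every level branches identically, the same finger pattern works at every depth, which is what lets muscle memory (like dialing a phone number) run in parallel with concept memory.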
Benefits of Fractal Regularizations. Finally, look at the fractal concept model of the test paragraph at the beginning of this paper.
If such a fractal were the text, then clicking a box would reveal its 3 components, and clicking the center of those 3 components would bring you one level up, to the more general. Such a fractal arrangement of points naturally lends itself to hypertext-like linking. Having the same branch factor and principle of ordering throughout reduces disorientation, error, and time-to-access. It also lets users easily remember the entire hierarchy after repeated uses. Such a fractal form for text itself, as well as for pop-up menus of operation choices, promises strong cognitive benefits for recognition, recall, reproduction, and application. All of this can be applied not only to formatting text but also to formatting pop-up menus, so that they too are fractal in form, with regular branch factor, principles of order, and name formats.
Fractalization of Prose, Analogical Application 2: to Software Application Menus

Freedom Self Contradicting into Chaos.
This brings us to irregular branch factors, name formats, and principles of ordering for menus in software applications. Research has long shown that most users ignore most functions in modern software applications, preferring to learn minimal workable subsets that they avoid straying from, to minimize learning costs and unreliable operation consequences. Part of the reason they ignore functions is they cannot remember menu choices of what to apply a particular operation to and menu choices about what operation to apply to a particular text content (letter, word, spacing, margins, borders, fonts, anchored drawings, floating drawings, embedded spreadsheets, etc.). They cannot remember choices because of irregular branch factors, name formats, and ordering principles. A more profound cause of user minimal-subsetting of operation spaces in applications is the irregular way sets of operations map onto sets of operands. Some operators will apply to 3 operands and others to 12 with no rhyme or reason for the difference except “tradition”.
The Fractal Alternative.
There are two domains of bushyness in application menus to be dealt with. One is bushy networks of operations to choose from. The other is bushy networks of things to apply those operations to. In both domains, wording, imagination, habits, and document traditions from the world of handwriting and typing have carried over unnecessarily to the world of digital media. For example, one operation, commonly used for emphasis, is putting words into bold fonts. Entire sentences, or the titles of sections or chapters, can similarly be put into bold form. However, the operation "emphasize visually" applies naturally to things beyond just words, phrases, and titles. We can apply it to drawings, paragraphs, entire chapters, themes cropping up across subsequent chapters, and so on. Putting words into bold font has obvious analogs for putting drawings, or entire chapters, into "bold". If you examine the menus of existing software applications you constantly find irregular numbers of operations, and irregular numbers of operands to apply those operations to. It is because regular fractal models of operations, and of the operands they apply to, are lacking that we fall back on tradition, which covers only irregular fractions of what regular fractal models of operations and operands would hold. Tradition did not model things in regular fractal form; hence emphasis by "bolding", for example, got applied to words and sentences, but not to paragraphs, pages, drawings, chapters, graphs, and a host of other thinkable, useful operands for the "bolding" operation.
If, instead, we had a common branch factor, common naming formats, and a common principle of ordering throughout entire operator-choice hierarchies and operand-choice hierarchies (for each operator), we might reasonably expect people to remember cognitively the sequence of choices leading to any particular choice several layers down in the hierarchy, and those people might, in muscle memory (which differs from concept memory and runs parallel with it), remember the sequence of clicks that takes them instantly to a deeply embedded choice in the hierarchy. At present, we all fail to remember, conceptually and muscularly, the myriad vastly repeated choices in the various menu systems we use.
Fractal Repetition of What Type of Links.
Imagine a four by four (equals 16) fractal model of operands for particular operators from menus to apply to. There will be several such models, one for each set of operands. Imagine, now, a four by four fractal model of operators to apply to each such four by four fractal model of operands. Now we have 16 operators applying to 16 operands. Even if some operators or operands in these formats are grayed out as unselectable at present, if the naming, ordering, and branch factors in all models are analogous or identical, there is much less variation and irregularity to memorize, find one's way in, and handle. The cognitive load on concept memory and muscle memory gets reduced. More specifically, consider operations one could categorize as "emphasis". This includes bold, italic, and underlined fonts; centered, left-justified, etc. texts and drawings; borders around sentences or objects; 3D stylings; color application schemes; and so on. There are a great many ways to emphasize a great many publishing objects found in word processing and other software applications, but in any particular application they are organized differently, scattered among separate menus, and applied to limited types of operands. Ordering "emphasis" operations fractally, regularizing branch factor, name formats, and principles of ordering, then doing the same for the operands such emphasis operators apply to, results in a vastly more coherent, learnable, errorless environment of work. Experiments to verify just how much benefit of each such sort obtains are underway. I am not suggesting that any particular fractal regularization is best (that needs to be researched and is being researched); I am suggesting here that fractal regularization of nearly any sort is better than the lack of such regularization.
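The four-by-four models just described can be sketched as data. In this Python sketch, both the operator model and the operand model hold 4 groups of 4 members, and any operator can in principle address any operand; all group and member names are illustrative assumptions, not taken from any shipping application.

```python
# Hypothetical sketch: a 4x4 fractal model of operators applied to a 4x4
# fractal model of operands (16 x 16). Names are assumed for illustration.
OPERATORS = {                    # 4 groups x 4 operators = 16
    "font":   ["bold", "italic", "underline", "small-caps"],
    "layout": ["center", "left-justify", "right-justify", "indent"],
    "frame":  ["border", "shadow", "3d-style", "highlight"],
    "color":  ["tint", "invert", "saturate", "scheme"],
}
OPERANDS = {                     # 4 groups x 4 operands = 16
    "text":    ["word", "sentence", "paragraph", "chapter"],
    "graphic": ["drawing", "photo", "graph", "table"],
    "embed":   ["spreadsheet", "equation", "video", "audio"],
    "meta":    ["title", "theme", "citation", "index-entry"],
}

def apply_op(op_group, op, target_group, target):
    """Every operator applies uniformly to every operand; unsupported pairs
    would be grayed out in a menu rather than structurally absent."""
    assert op in OPERATORS[op_group] and target in OPERANDS[target_group]
    return f"{op}({target})"

print(apply_op("font", "bold", "graphic", "drawing"))  # -> bold(drawing)
```

The point of the sketch is the regularity: one memorized 4x4 layout serves every operator group and every operand group, instead of a different irregular menu per feature.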
The Center for Fractal Computing is currently conducting experiments to verify the type and amount of performance benefits obtained when current menu interfaces are replaced by fractally regularized ones.
Example System. Students at the Center for Fractal Computing have taken Microsoft Word's menu structure and revised it into fractal form, then presented Word's original interface and our new version of it to students new to computing, measuring over a 2-year period the effects on error rates, speed of learning, amount of functionality in the application explored and regularly used, amount of interface access memorized by concept memory, muscle memory, or both, and like attributes. Though data collection is as yet incomplete, initial results are stark: in nearly every measured dimension, the fractal version of the interface exceeds Microsoft's interface in performance quality by at least 100%, and often considerably more.
Benefits of Fractal Regularizations. If all ways to visually emphasize all types of document object are fractally regularized (both the emphasis operators and the emphasized operands), then a number of benefits appear. If this is extended from visual-emphasis operands to every other type of software application operand, those benefits are multiplied. What are the benefits? Current software application menu structures, typified by Microsoft Office products, have functions not unified in one place but scattered among different sub-menus under different menus. Also, operators and operands are both scattered and largely incomplete, such that we can emphasize words or sentences in one menu but have to move to other menus to similarly emphasize drawings, tables, or embedded spreadsheets. Moreover, we find muscle memory outperforming concept memory in access actions, except for the interference of mouse motions, which block muscle memory. Current software application menu structures, then, unnecessarily scatter similar-function operators and similar operands. They also do not organize operators and operands with regular branch factors, name formatting, and principles of order.
We can expect that fractal regularization of operators and operands within menu structures of any application will do the following:
• greatly expand the portion of all application functions actually selected and used regularly by end-users of applications
• greatly expand the portion of all application operands actually selected and having operators applied to them by end-users of applications
• greatly expand the operators and operands supported by typical applications of any specific type
• improve rates of recognition, recall, reproduction, and application use of all operators and operands in an application
• reduce error rates in terms of seeking operators and operands in wrong places and wrong menus
• greatly reduce time needed to access items by muscle memory, by enabling alternatives to mouse motions for selecting menu items (fractal ordering lends itself readily to keypad input, allowing the mouse to be bypassed efficiently).
Fractalization of Prose, Analogical Application 3: to Indexing

Freedom Self Contradicting into Chaos.
We currently use mention indexes in Google searching, book indexes, and the like. A few applications automatically translate search terms or index items into synonyms, but even that is rather rare at present. Mention indexes irritate us all mightily by producing passages where the same words are mentioned, but in totally different contexts with totally different meanings. In other words, they produce a lot of false hits for any one query. Mentions need to be understood in a particular context: when something is mentioned, what is it being mentioned in relation to? At each point in a text or discourse, this question must be answered. Answering it is a large part of all forms of "understanding" a message.
"My mother's weight" is mentioned somewhere in some text: in relation to a joke? a flow of nuclear fuel in stars? a grammatical generality in Indo-European languages? The meaning of something mentioned depends heavily on the context in which it is mentioned. Part of this is a function relation: X is being mentioned as an example of something, as a counter-example of something, as a part of something, as an introduction of something. Part of this is a topic relation: X is being mentioned as something that elaborates something else, or as something being elaborated by something else. Part of this is a logic relation: X is being mentioned as a premise of some conclusion, as a conclusion of some premises, as the implication of a truth, as the contradiction of a truth, and so on. Part of this is a research relation: X is being mentioned as an unsupported assertion, as a form of evidence supporting something, as a form of evidence countering something. Part of this is a poetic or associational relation: X is being mentioned as something that reminds us of something else, as something that something else reminds us of, as an abstract association. At a minimum, according to linguistic and semantic theory, we have function, topic, logic, research, and association contexts that words erect for other words or have themselves. We index immensely. In a way, nearly all that our intelligence is, is indexing. Neuroscience has recently noted that some autistic people, "idiot savants" they have been called, have direct "unmediated" access to sense inputs to the mind, unmediated by neocortex "indexing" of what is being input. So some idiot savants can draw everything in a scene they have seen, but without realizing what they saw in the scene. We are indexers of what we hear, see, do, feel, and so on. Maybe most of what the cerebral cortex does is index things. Indexing is nearly everything that we are, that makes us us.
So the indexing done automatically for us by our neuro-cortex works powerfully and well, making us who and what we are, largely, but the indexing we do of books and the internet is by mention, not meaning, so we constantly retrieve items that mention the same phrases, ideas, or words but in completely different contexts, with completely different meaning. Fixing this requires something like automating, then distributing mechanically, intelligence. We know this because colleges test people wanting to enter them in part by asking them to read passages of text and answer questions like “which of the following is the best title for the second paragraph?” and “which of the following expresses the primary relation between the first paragraph and the last paragraph of the passage?”. Indexing text by meaning, not mention, requires intelligence and colleges distinguish who to accept and who to reject, in part, by distinguishing who can name the main point of a passage more accurately than others. We use people’s ability to index things by meaning to distinguish who should get into the best colleges. If we were to succeed in getting texts or internet contents indexed by meaning instead of mention, it would mean we succeeded in getting a major portion of human intelligence done by machine.
The Fractal Alternative.
Fractally regularized texts, drawings, and the like, however, can readily be indexed by meaning, not mere mention, because we can recover the structure among their topics, functions, logical connections, associations, and research basis automatically, where doing this for ordinary text is impossible at present. It is a matter of how bushy the thing being searched is. In the case of ordinary text, dozens of branch factors, dozens of name formattings, and dozens of principles of order are involved in even small passages, single paragraphs, or single pages. In the case of fractally regularized texts, on the other hand, a single branch factor, an ordered regime of name formats, and a single principle of order suffice throughout. What is being searched is vastly simpler, and vastly easier to find one's way in, automatically or otherwise.
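Why regularity makes meaning recoverable can be shown mechanically. In the Python sketch below, a text regularized to a uniform branch factor and one numbering scheme lets a trivial parser recover which point each point serves, something no parser can do for free-form prose. The numbering convention and the sample outline are assumptions for illustration.

```python
# Hypothetical sketch: because a fractally regularized text has one branch
# factor and one naming scheme throughout, its topic structure (a meaning
# index, not a mention index) can be recovered mechanically.
import re

regular_text = """\
1 Causes of bushyness
1.1 Freedom to branch
1.2 Freedom to name
1.3 Freedom to order
2 Costs of bushyness
2.1 Disorientation
2.2 Error
2.3 Access time
"""

def meaning_index(text):
    """Map each point's title to the title of the point it elaborates."""
    nodes, index = {}, {}
    for line in text.splitlines():
        num, title = re.match(r"([\d.]+) (.+)", line).groups()
        nodes[num] = title
        parent = num.rsplit(".", 1)[0] if "." in num else None
        index[title] = nodes.get(parent)  # None for top-level points
    return index

idx = meaning_index(regular_text)
assert idx["Freedom to name"] == "Causes of bushyness"
assert idx["Error"] == "Costs of bushyness"
```

A query against `idx` returns what a point is *for*, not merely where its words appear, which is the difference between a meaning index and a mention index.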
Fractal Repetition of What Type of Links.
Here is where the matter of investment in tools comes to the fore. If we invest in tools for developing meaning indexes of usual text, with its bushy connections of points, irregular branch factors, name formatting, and principles of order, much investment is likely to produce little of use for quite some time. If, on the contrary, we invest in tools for structuring what we write fractally, so it has a regular branch factor, sequence of name formats, and principle of order throughout, we may make the development of working meaning indexes for much of what we write easy and automatic. The matter is a bit more complicated than this. If the concepts to be communicated are ordered fractally, we may yet choose to communicate them via various dramatic paths across their pattern. In this case, our actual writing or speaking will be a somewhat regular path across a very regular fractal pattern of ideas. There are two orders to be discovered here: the path and the fractal. This model comes close to the results of composition research, which finds that people write twice: once to find out what they think, and again to express that thinking well to others. Imagine now writing everything three times: once to discover what we think, once to turn that into a regularized fractal form, and a third time to express the fractal dramatically via a chosen path across it. We need tools for all three of these if the contents of books and the internet are, in the future, to be readily and automatically meaning-indexed.
Example System. Lest readers think all this is theory: a government agency, twenty years ago, used meaning indexes as defined above to identify which of several possible speech writers composed speech components for famous world leaders. Each writer unconsciously used typical geometries of topic, function, logic, association, research evidence, and the like, which were searched for by software applications and identified by geometry, then by writer. Shakespeare's Hamlet, meaning-indexed in diagram form, was published a decade ago (Greene, 1993).
Benefits of Fractal Regularizations.
In this case, the benefits are clear--fractal regularization of texts allows machines to recover the structure among the functions, topics, logical connections, associations, and research basis of any text. That means machines could recover the meaning index of a passage of text. Imagine doing an internet search or searching the topics of a book by meaning not mention, and getting not a few good hits among much junk but nearly all good hits.
Fractalization of Prose, Analogical Application 4: to Modularity

Freedom Self Contradicting into Chaos.
I remember when Modula-2 replaced Pascal, when Smalltalk’s libraries of objects replaced Modula-2, when C++ libraries replaced Smalltalk ones, when patterns organized C++ object libraries. We used to have to memorize 70 keywords of a programming language. Then, as progress progressed, we had to memorize 1100 different objects, each with its own messages and actions it took when each message was sent or received. In memorization load terms, that hardly seemed like progress. We had the real world that we had to segment and represent, then we had to map the results of that onto huge object libraries, then we had to de-bug errant objects, messages, and actions coming from interactions among objects. Programming became vastly more complex not vastly simpler. This was progress?
Modularity did not keep its promise. It promised to simplify our world of software development and make it safer; instead it complexified it. Why did this happen? First, clever people noticed that there were many different ways to model situations in object or module form. There was no guidance within object-oriented programming regimes on what to make into an object, what to make into a message, what to make into a message parameter, what to make into an action. We felt that object libraries somehow made it clear how to represent real-world situations in program object form, but that was never really true. Secondly, what was made modular was abstract programming-language objects and their functions. It has gradually become questionable whether that was the only or proper level where modularity was needed. Some suggest that we modularized abstract programming objects because technical people were not patient enough to wait for the modularity of human situations and social functions to become clear. It was as if there were no solid basis for deciding what to make into a module and what not; the modules that appeared were arbitrary. The issue runs somewhat like this. We cannot make components of real-world situations into modules, because they change, new ones appear from time to time, and there are too many of them (because there are many different size scales at which humans and the systems they build wish to operate). We can easily make components of software languages into modules because nothing much constrains us; however, because nothing much constrains us, what to make into a module and what not differs from case to case and is baseless. Between these two, is there a missing layer that, if made modular, would link world to software object in a usefully constrained way?
The Fractal Alternative.
Human capabilities--of people highly educated, highly effective, highly creative, high in quality achieved, highly artful, high in leadership exerted, and so on--might be such a missing level. If we had modules of leadership, creativity, educatedness, quality, artfulness, and so on we might map the world’s situations onto them. Why? Because we always do map the world’s situations onto human capabilities anyway, whenever and wherever we act. The question is--why did we in the first place try to map real world directly to software modules? In practical situations, people always map real world situations first onto human capabilities, then collections of ordered applications of such capabilities are mapped onto social or technical system tools in support of those human capabilities.
If we build fractally regularized libraries of human capabilities, then we can build software and socialware tools to support each capability. Two combinations are involved: combining software/socialware tools to support particular human capabilities, and combining particular human capabilities to affect situations in the world. This is not merely a good idea; it is simply how people already act. Software people, by slighting the human capability layer and not building fractally regularized module hierarchies of it, have left themselves with inadequate constraints on what to make into a software object. Ultimately we wish to translate a fractally regularized hierarchy of human capabilities into the appropriate members of a fractally regularized hierarchy of software capabilities.
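The two-step mapping argued for above (situation to capability modules, capability modules to supporting tools) can be sketched as a data structure. Every capability name and tool name in this Python sketch is an illustrative assumption; the point is only the indirection itself.

```python
# Hypothetical sketch: situations are never mapped directly onto software
# modules; they are mapped onto human capability modules, and each capability
# module names the software/socialware tools that support it. All names here
# are assumed for illustration.
CAPABILITY_TOOLS = {
    "creating":     ["idea-sketchpad", "variant-generator"],
    "leading":      ["goal-dashboard", "commitment-tracker"],
    "educating":    ["tutorial-builder", "question-bank"],
    "coordinating": ["workflow-engine", "market-bidder"],
}

def tools_for_situation(required_capabilities):
    """Situation -> capabilities -> supporting tools, via the capability layer."""
    tools = []
    for cap in required_capabilities:
        tools.extend(CAPABILITY_TOOLS[cap])
    return tools

# a hypothetical situation calling on two capabilities
print(tools_for_situation(["creating", "coordinating"]))
```

The capability layer constrains module choice: a tool earns its existence only by supporting a named capability, rather than by programming-language tradition.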
Fractal Repetition of What Type of Links.
Malone's group at MIT worked to invent Coordination Science because it was felt that global internetting of the economy was lowering the costs of coordination, allowing markets of many bidding actors to do work that formerly was done by large, bulky bureaucracies (an adaptation to the high coordination costs of the past). "I thought leaders and managers coordinated things," many people say, so "how can mere software coordinate?" It turns out that much of leading and managing needs some evidential data basis, often missing in the past, so human heuristics and best-guess arguments substituted for missing data. As data gaps get eliminated by better and better computer systems, involving more and more parts of a business process, the amount of leading and managing beyond what is made obvious by data collected from the process shrinks. In truth, most of leading and managing in the past was argument and experience substituting for missing data, data that is no longer missing. Hence, coordination can be largely automated to the extent that we have adequate data to feed into large-scale process-based software systems. If we define comprehensively just what all the most important functions are that leaders and managers perform, we can scientifically, thoroughly, and accurately define what part of them software and social systems can automate. CAD designers continually request new features of their software; on what basis, we might ask? Instead of blindly "prioritizing" each year's new bunch of features requested by designers, we might look for what generates designer requests for new features in CAD software. A major such source is the plurality of ways to be creative when designing things using CAD or other software. CAD software supports well certain lowest-common-denominator functions of creating, while slighting more important but diffusely defined functions like general idea-sketchpad work.
As designers work with last year’s delivered CAD software, they find their ambitions growing as old routine tasks get automated by last year’s software. They wish to try things they did not have the time, energy, or hope to try in the past. Where do these new ambitions come from? They come from aspects of “being creative” hindered by routine work content not automated in past CAD software systems delivered. Research by our group has found 60 models of creating used by a broad sample of 150 creators in 63 very diverse parts of US society. As systems support one or two of those 60, designers imagine supports for some of the as-yet-unsupported 58 remaining models of creating. If we define comprehensively just what all the most important models of creating are that various types of creators use, we can scientifically, thoroughly, and accurately define what part of them software and social systems can automate. In a similar way, we can argue that nearly all areas of human capability (not just creativity) need mapping into hierarchies of capability modules. There are subtleties involved in this: human capabilities, for example, are mixtures of functions that capable people recognize and do, which later become specific ways of doing such functions as people get experience with them. So capabilities are a constant frontier of new functions compiled later into specific “better” ways of doing those functions. “Creating” software is then a matter of translating certain of various fractally regularized human capabilities in a hierarchy, specified as needed to impact situations in certain ways, into certain of various fractally regularized software and socialware capabilities in respective hierarchies.
Current research in the Center for Fractal Computing has produced fractally regularized models of human capabilities, being tested both as elements combined to constitute functioning wanted in the real world to address root causes (see section below) and as modules that software and socialware components are to support.
Example System. We wanted a way to entice a huge workforce to automate, voluntarily, their work processes with work coordination software combined with meeting mediation software. However, risky new technology was anathema to nearly all who wanted reliable performance (and a good night’s sleep). Therefore, the total quality work that all workgroups suffered under was automated using the combined work coordination and meeting mediation software the High Performance Work Center created. The system was sold to them as a database for total quality team work results. Phone calls from government agencies outside Xerox, ordering hundreds of copies, were regularly fielded before the system was even fully deployed to initial parts of Xerox. As groups automated their irritating total quality data with the system, they became pleased and imagined automating other burdensome work. In this way, within two years, groups all over the company were discovering on their own that they had a system capable of quickly being tailored to automate nearly any work process dependent on message, document, and data exchange. In the course of all this, Richard Bruce at PARC invented the Liveboard, a whiteboard display of computer screens whose handwritten screen objects were automatically converted into digital figures and were manipulable by wands held by meeting participants. It was great technology but without software for meeting mediation and work coordination to demonstrate its power. The High Performance Work Center software was moved onto the Liveboard and both together were demonstrated at Xerox’s first Annual Document Symposium for the corporation’s top 400 managers. The software thus put onto the Liveboard was developed from a fractal of customer process-support needs, a fractal of root causes of those customer needs, a fractal of human capabilities needing deployment to handle those root causes, and a fractal of software capabilities needed to implement each of those human capabilities.
This succession of fractal models, each mapped fully onto the next, specified the overall system. This kept the software realistic, each feature addressing human capabilities needed to address root causes of customer needs for meeting management and coordinating work processes.
Benefits of Fractal Regularizations. Fractally regularizing hierarchies of human capabilities, software capabilities, and socialware capabilities allows homogeneity of branch factor, naming format, and principle of ordering to replace the chaos of unrememberable objects, messages, and actions now found in object-oriented programming libraries. Two transformations are involved--fractally ordering hierarchies of human capabilities, software capabilities, and socialware capabilities, and distinguishing a middle layer--human capabilities--between impacts wanted on real situations and capabilities of software entities and socialware configurations. Ad hoc, unregularized versions of this go on now, when software developers/vendors search out best-practice versions of particular steps in particular business processes and automate parts of them with software. “Best practices” are merely currently admired versions of particular human capabilities. The irregularity, bushyness, and chaos of best-practice lists and routines now found in world-wide software applications repeat the chaos of prose bushyness, internet homepage link bushyness, and software application menu choice list bushyness. It is the same problem showing up in different contexts. There is, therefore, every reason to suppose it will succumb to the same solution, applied everywhere.
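The paper's own definition of bushyness (proliferation of branch factors, name formats, and ordering principles) suggests a mechanical check. The sketch below is a minimal, assumed formalization: the verb-noun name format, the example trees, and the branch factor of three are invented for illustration; only the three regularity criteria come from the text.

```python
# Minimal sketch of checking "fractal regularity" as defined in this paper:
# peer nodes share one branch factor, one name format, and one ordering
# principle, and the same pattern repeats on every level. The "verb-noun"
# name format and the example trees are illustrative assumptions.
import re

NAME_FORMAT = re.compile(r"^[a-z]+-[a-z]+$")  # one shared naming format

def is_fractally_regular(tree: dict, branch_factor: int) -> bool:
    """True if every non-leaf node has the same branch factor, every name
    matches the shared format, and children follow one ordering (sorted)."""
    for name, children in tree.items():
        if not NAME_FORMAT.match(name):
            return False
        if children:  # non-leaf node
            kids = list(children)
            if len(kids) != branch_factor or kids != sorted(kids):
                return False
            if not is_fractally_regular(children, branch_factor):
                return False
    return True

regular = {"manage-work": {"assign-tasks": {}, "review-output": {}, "track-status": {}}}
bushy   = {"manage-work": {"Tasks": {}, "misc": {}}}

print(is_fractally_regular(regular, 3), is_fractally_regular(bushy, 3))
```

A library rejected by such a check is, in the paper's terms, bushy: a user cannot predict the shape, names, or order of what they will find at the next level down.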
Fractalization of Prose, Analogical Application 5: to Software Solutions Freedom Self Contradicting into Chaos.
Software applications used to be custom in design. Each customer insisted he had a “unique” billing system, for example, and so hired contractors to build custom billing software for him. This evolved, and now we find standard software applications, like SAP, surrounded by hosts of consulting and systems firms that “customize” them for particular customers. The current solution is under two pressures, however. First, firms are finding the standard parts of such applications severely limiting their competitive potential, as competitors pay for newer configurations of integration. Second, the customizing involved has increased until advantages from the standard part are too little to make it all worthwhile. In truth, customers are customizing away many if not most of the advantages from standard cores in such applications. In the limit, they get the worst of both worlds--immense complicated customization portions severely crippled by standard portions they cannot tinker with or optimize.
The Fractal Alternative.
An emerging alternative involves two things--not depending on software alone to transform processes, and total quality specification of what all applications do. The first is achieved by specifying both software and the accompanying socialware changes needed to make the software’s capabilities actually change things. The second is achieved by specifying what dissatisfier, of some process output, displeases customers, and how software with accompanying socialware changes can eliminate whatever, in the process, causes that dissatisfier in the process’ outputs. Both of these are done fractally, repeating formats on different size scales. Instead of a customization portion and a standard core portion of applications, we get a total quality specification inside of which is a software part and a socialware part. Total quality specification ensures that every feature of a software application addresses one or more root causes of why something in the output of the process dissatisfies customers. This prevents “features of convenience” and “specification drift” from ruining focus and impact during the long cycle times of software production. Splitting that specification over a software portion and a socialware portion counters the neurotic male tendencies in software technologies and builder communities that attempt to avoid social and emotive parts of worklife by substituting technical machine-like systems. The overall application is specified as follows, with each of the process’ “mediate steps” a fractal repetition of the diagram below. In other words, the overall application is a process support application (all software applications are process-support in architecture, this asserts), with steps in the process considered individual processes, specified the same way (though individual process step processes will have smaller scale “suppliers,” “process outputs,” “traits of outputs,” and “customers”).
Software and socialware tools that address root causes of what in each process causes outputs to displease customers of those outputs constitute the components of the application and the overall application.
[Diagram: the total quality process specification, repeated fractally at each mediate step]
- software and socialware features that handle each root cause so output traits please customers
- root causes of why particular key process step traits cause output traits that displease customers (causes of output traits that dissatisfy customers)
- process flow: suppliers --> process inputs --> mediate steps & key step traits --> process outputs --> traits of outputs --> trait impacts on customer satisfaction
- evolving technical capabilities in the world at large that impact what process steps can be capable of
Fractal Repetition of What Type of Links.
The overall application is software and socialware that impacts what causes process outputs to displease customers. Each component of the software is software and socialware that impacts what causes process outputs of parts of the overall application to displease customers of those outputs in other parts of the overall application. By making each component of an application address root causes of why outputs of that part displease customers of that part, and by making the overall application address root causes of why outputs of it displease customers of the entire process in the real world, each feature of the software and each feature of accompanying socialware are guaranteed to have impact. This disciplines the application, focuses it, and gets rid of “nice to have” stuff that is without impact in the real world the application operates in. Numerous faults that proliferate into bushyness when product development processes are long and specification processes for them are vague are eliminated by this fractal of total quality process specification of both software and the socialware needed to make each software feature really impact things.
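The discipline described above can be sketched as a tiny admission rule for a feature list. Everything concrete here (the root causes, feature names, and socialware descriptions) is an invented assumption; what the sketch shows is only the rule from the text: a feature enters the specification only if it cites a recorded root cause of customer dissatisfaction, and it carries both a software and a socialware part.

```python
# Sketch, under assumed names, of the root-cause discipline: a feature is
# admitted to the spec only if it addresses at least one recorded root cause
# of customer dissatisfaction, and carries software and socialware parts.

root_causes = {
    "RC1": "meeting decisions are lost between meetings",
    "RC2": "action items have no owner",
}

def add_feature(spec: list, name: str, software: str, socialware: str,
                addresses: list[str]) -> bool:
    """Admit the feature only if it cites known root causes."""
    if not addresses or any(rc not in root_causes for rc in addresses):
        return False  # a "nice to have" feature with no root-cause grounding
    spec.append({"name": name, "software": software,
                 "socialware": socialware, "addresses": addresses})
    return True

spec = []
add_feature(spec, "decision log", "shared minutes database",
            "scribe role rotated each meeting", ["RC1"])
add_feature(spec, "animated splash screen", "graphics module", "none", [])
print(len(spec))  # only the grounded feature is admitted
```

The same rule applies recursively: each component's spec would carry its own, smaller-scale root-cause table.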
There is another dimension of fractal repetition in this area: process interface, process support, process data levels within any software application. The process fractal regularization occurs not only as process models of steps in process models, but also as changes of interfaces of applications, all applications, till they become process interfaces (in some cases a process flow model replacing menus and sub-menus as interface), changes of application contents into being support for process steps, and basing all that on process databases, that capture and manipulate process data. Process repeats from interface to operation to data, these being vertical levels within any application.
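The three vertical levels named above can be made concrete with a small sketch. The process steps and actions are invented assumptions; the point illustrated is from the text: one process model drives all three levels, with the interface presenting process steps, operations taking process steps as operands, and data keyed by process step.

```python
# Illustrative sketch (all step names assumed) of one process model repeated
# on three vertical levels: interface, operations, and data.

process_steps = ["define problem", "design experiment", "run experiment",
                 "analyze results"]

def interface() -> list[str]:
    # interface level: a flow of process steps replaces menus and sub-menus
    return [f"[{i+1}] {step}" for i, step in enumerate(process_steps)]

def operate(step: str, action: str) -> str:
    # operation level: every operation is applied to a process step as operand
    assert step in process_steps
    return f"{action} applied to '{step}'"

# data level: all data is formatted and stored as process step data
database: dict[str, list] = {step: [] for step in process_steps}

database["run experiment"].append({"trial": 1, "result": 0.82})
print(interface()[0], "|", operate("define problem", "edit"))
```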
Example System. An example may help readers visualize what all this implies about application architectures. Xerox in 1992 created a Taguchi support software application that assisted people in doing Taguchi-style experiments for optimizing product attributes. First, the Taguchi process was defined by identifying the root causes of poor outcomes from existing optimization experiment designs that it fixed. Second, the root causes of why real people trying to apply the Taguchi process failed to implement it well were identified. Third, every feature of the application was specified as addressing one or more root causes in both these sets. One such root cause was users of Taguchi processes assimilating away the early steps in the process and emphasizing the math in later steps, which resembled ordinary engineering practice more and required less practical and mental assumption change. To address this root cause, the Taguchi application got rid of menus as an application interface and presented a process flowchart of the process of doing Taguchi optimization as the interface. All operations in the application were applied to process steps or substeps as operands. All data in the application was formatted as process step data. The whole architecture in the drawing above, of suppliers, technical bases outside but relevant to the process, customers, and so forth, appeared in the interface, application operations, and databases. The same architecture was used to specify each step in the Taguchi application. As a result, the Xerox application omitted the math parts covered by all other commercial Taguchi applications and sold itself as a wrapping around such math-centered systems, one that ensured proper setup and use of those math parts. This meant that customers bought other commercial competitor software for Taguchi, then bought the Xerox software to ease understanding and proper use of the competitor software.
Xerox sold approximately one application per competitor application sold, in the end, a commercially attractive proposition to say the least. Data from first users of the application showed that repeatedly encountering the entire process-of-using-Taguchi-method as the interface gradually taught users, more and more, the early process changes of design assumptions key to proper Taguchi process use that were missing from users of competing commercial applications for Taguchi method.
Benefits of Fractal Regularizations.
Using a total quality process specification fractally, to specify both software features and the accompanying socialware that makes those features actually have impact in the real world, eliminates bushyness from several sources: one, from non-service non-customer sources of features; two, from neurotic depending on technical systems alone to impact things; three, from developers substituting features convenient for or interesting to themselves for features serving customer needs. Where the advantage of SAP-style customized standard applications is the sharing among firms they allow, the advantage of fractally expressed total quality specifications of applications is the shared process between firms they enable, and the shared process architecture that becomes software application architecture shared among such firms as well. Their real advantage is that they share something dynamic that evolves, while SAP-style applications share a rather static core standard portion that limits the speed of incorporation of new technical capabilities into processes.
Fractalization of Prose, Analogical Application 6: to Software Specification Contents Freedom Self Contradicting into Chaos.
There is a general trend in software specification away from developers who, via some formal representation or means, “represent” user requirements and based on them come up with applications. That arrangement has been tried and found too remote, with too much distance, and hence error, between developer and user. Rapid prototyping, developed in artificial intelligence communities in the 1970s, moved into general software practice, with users on teams with developers, both playing with prototype applications so that “actual feel from use” informed both parties’ decisions about refining prototypes towards final application form. However, this led to bushyness of software application features, from the undisciplined nature of what both developers and users do during rapid prototyping. Operating based on shared feel was not enough to prevent hosts of “nice to have” features crowding out essential ones, again and again, to the point that all sense of necessary feature got lost.
The Fractal Alternative.
Total quality specification, at first, came in the form, presented above in this paper, of a process model of what all software applications were to support. This provided the discipline missing from rapid prototyping teams uniting developers with users. There was a profound basis of this that deserves mention here. Total quality invented an ideology for all of business that replaced the ideology of Frederick Winslow Taylor taught in the first business schools. The total quality ideology for business saw business horizontally, as processes stretched from new technical bases, through suppliers, to producers, to customers. The Taylor one had seen business as sets of professions, assembled in single firms, applying well-honed skills executed at professional standards to optimize performance of individual work roles. This was a vertical view of businesses as hierarchies of roles, studied scientifically till optimal conditions of execution for each role were found. What happened eventually was a coincidence: just as the internet united across companies and entire economies, total quality process ideology made management of such extended processes the core model of what all of business was, in terms of which all other systems and goals of business got expressed. Total quality furnished the ideology that made a process view, across firm lines of business, central just when technical capabilities made software realization of processes shared by many firms possible. The first step was a process revolution: a process ideology fostered by total quality wed to a process support capability fostered by the global internet’s invention and deployment. However, total quality ideology rapidly evolved into something else beyond this.
It became a general concern with transparency of processes: to technical developments outside sets of related firms, to actual supplier abilities to change and incorporate new such technical capabilities, to actual process capabilities within firms, to directions of needed evolution set by leaders of firms, and to forces evolving how customers viewed process outputs and hence how their satisfaction with them was changing. These became transparency to the voice of technology, to the voice of the suppliers, to the voice of the process, to the voice of the president, and to the voice of the customers. People labored to eradicate anything that blocked or distorted transmission of these voices through organizations to key processes. The goal was to make every process fully transparent to its relevant voices. This was fractally done in that each step of any process was to be transparent to its own versions of such voices, at the scale of the process (the suppliers of the step, the customers of the step, the “president” of the process the step was in, and so forth).
Fractal Repetition of What Type of Links.
Here a software application is specified as more transparently delivering requirements of the five voices and implementing them in a particular process locale in a set of firms. What is more, the operator contents of the application are defined as functions that detect such requirements, represent them, transmit them, implement them, and assess the impact of implementation on process customers. Here each software application is specified as a total quality process working simultaneously in five directions, one for each voice. Similarly, each component of the application is specified as more transparently delivering requirements of the five voices, scaled down from the whole-application level to the component level. This is the fractal repetition aspect of this specification process.
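The five-voice convergence can be sketched as a small specification check. The voice names come from the text; the component name and the requirement strings are invented assumptions (loosely echoing the Xerox example that follows). The rule illustrated is only this: a spec at any scale is valid when every voice has contributed a requirement.

```python
# Hedged sketch: a component spec assembled as the convergence of the five
# voices named in the text. Requirement strings are illustrative assumptions.

VOICES = ["technology", "supplier", "process", "president", "customer"]

def specify(component: str, requirements: dict[str, list[str]]) -> dict:
    """A spec is valid, at any scale, only when every voice is heard."""
    missing = [v for v in VOICES if not requirements.get(v)]
    if missing:
        raise ValueError(f"{component}: silent voices {missing}")
    return {"component": component, "requirements": requirements}

spec = specify("circle training module", {
    "technology": ["use advanced software technology, not statistics alone"],
    "supplier":   ["deploy knowledge from other parts of the firm"],
    "process":    ["move implementation to operational units"],
    "president":  ["demonstrable to quality-award examiners"],
    "customer":   ["address root causes of output dissatisfiers"],
})
print(sorted(spec["requirements"]))
```

The same `specify` call would be applied recursively to each component, with smaller-scale versions of the same five voices.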
Example System. An example of this at work is the Knowledge Based Systems Circles Program produced at Xerox in 1990 to 1992. The voice of the company president required something in IT (information technology) to show Baldrige examiners, examining Xerox for the Baldrige Award for quality. The voice of technology required replacing quality circles based on statistics technology with software circles based on advanced software technology. The voice of the supplier required deploying knowledge from some parts of Xerox to others to better link and respond to supplier requisites of good supply. The voice of the process required removing high technology implementation from elite, central, complex Ph.D. research groups and giving it instead to operational units, capable of less complicated but higher-payback application of technology. The voice of the customer required defining applications that directly addressed root causes of why particular process output traits dissatisfied customers. The combination of all these voices, in one application, defined a Knowledge Based Systems Circles Program of 40 or more volunteer operational workgroups all across the firm, recruited each year into a four-hours-a-week plus four-days-per-quarter artificial intelligence application training and implementation program. This produced 40-plus simple AI applications per year, each specified to address root causes of why specific process output traits dissatisfied customers. This was a revolutionarily new way to deploy new high technologies across an organization. The software tools to run this program and the social tools for it were alike specified by combining the five voices, fractally. When each voice’s requirements of this application were added to the results of combining the other voices, an interesting strategy emerged. Finite element analysis, many decades ago, emerged in engineering as a way to break designs into myriad finite elements simple enough to calculate force interactions, which later were combined into results for large-scale design components. Renormalization groups in physics followed a very similar strategy, breaking interactions down into small elements, results from which were later assembled. Simulated annealing followed a similar strategy, only processes of loosening and tightening, alternating through time, replaced breaking problems apart in physical-space terms. Wavelets followed the same strategy, expressing things with wave functions on all size scales. Populations of interacting agents are one of the more recent embodiments of this strategy. Here entire populations of agents are defined and allowed to interact till better-than-planned results emerge by tuning overall interaction parameters. In this context, the intersection of the five voices suggested a population of social “circles” exploring new technology application frontiers via their myriad individual application attempts, and interactions among them from which “best” experiments and learnings from them emerged for propagation throughout the entire population of circles.
This was repeated in form inside each application built by each circle, as individual populations of rules or objects interacted in pluriform ways till better-than-planned results emerged. In other words, a distributed, non-directive technical aspect of artificial intelligence programming (data-driven programming, or non-determinism, as it has been called) was reflected in a distributed, non-directive social form of technology application.
Benefits of Fractal Regularizations.
Specifying systems as locales where requirements of five voices converge, and specifying components of applications as locales where requirements of five smaller-scale voices converge, unified specification and application functioning in a unique way. The same process that specifies is what the specified application actually does in its operation. This turns programming into a simple, though highly abstract and analogical, translation process. Program construction becomes a sequence of translations across size scales of voices. Program operation becomes a similar sequence of translations across size scales of voices. This makes the evolution of software tools itself a highly transparent process of making other processes more transparent. It forces a clarity-of-transmission norm throughout software processes and applications. This works to eliminate all sorts of bushyness--bushes of wanted but unneeded software features, bushes of wildly different frameworks and architectures for software applications, bushes of intermediate conveniences of application developers substituting themselves for requirements of users and buyers of applications.
Fractalization of Prose, Analogical Application 7: to the Software Specification & Building Process Freedom Self Contradicting into Chaos.
For many years software applications were built by happenstance, artful, idiosyncratic processes or chaotically hacked-together processes. The bushyness of application development processes was praised as “artfulness” by some and criticized as “non-quality” by others. Eventually problems with errors in the products produced by such processes combined with total quality success in other production contexts to force adherence to formally specified software development processes. Bushyness was no longer tolerated. This threw out the baby with the bath water, it appears. From excess bushyness we moved to excess rigidity. Gradually an alternative has emerged. Bit by bit, software application tools have become recursively used to assist software application development work. In particular, software supports for processes typically used in human meetings and software supports for coordinating work actions between meetings in a process flow have emerged to support all sorts of processes, now gradually including software specification and application development processes.
The Fractal Alternative.
If you think about it, specifying and building a software application is done nearly entirely by conversation. It is done by meetings and between-meeting action items. The software industry has created, year by year, more and more tools for assisting or wholly automating in-meeting process components and between-meeting process components. Applying these to the work of specifying and developing software applications is both natural and, probably, inevitable. As soon as this step is done, another recursive aspect is found--specifying and building applications is conversations within which are smaller conversations within which are still smaller conversations. As more and more of the world gets mediated by software, more and more conversation gets replaced by automatic analysis applied to appropriate data. We talk to each other precisely where data fails us: because we lack it, because it is unreliable, because new data is needed, and the like. Where adequate data is found, conversation is short and sweet--apply X analysis to Y data and put the result Z as input to some other process needing data. Where adequate data is not found, human heuristics, agreements, opinions, or the like are needed as a poor substitute. So we have a fractal hierarchy of conversations, supported by in-meeting tools and between-meeting tools. This results in a fractal hierarchy of such tools. This is an alternative both to bushy anything-goes artistic software development processes and to dictatorial one-right-way “totalitarian quality” development processes. The fractal repetition of conversations, and of tools for enabling their in-meeting and between-meeting components, allows a great simplifying regularity to pervade both traditional and entirely new invented processes of specifying or building. It achieves an evolving, dynamic form of regularity rather than a fixed, rigid, ungrowing form.
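The fractal conversation hierarchy just described can be sketched as a tiny recursive data structure. The topics and tool names below are invented assumptions; what the sketch preserves from the text is only the shape: every conversation has an in-meeting component and a between-meeting component, each supported by a tool, and may contain smaller conversations of exactly the same shape.

```python
# A minimal sketch of the fractal conversation hierarchy: the same shape
# (in-meeting tool + between-meeting tool + sub-conversations) repeats on
# every size scale. Topic and tool names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Conversation:
    topic: str
    in_meeting_tool: str = "shared agenda board"
    between_meeting_tool: str = "action-item tracker"
    sub: list["Conversation"] = field(default_factory=list)

    def depth(self) -> int:
        """How many size scales the same conversational shape repeats on."""
        return 1 + max((c.depth() for c in self.sub), default=0)

spec = Conversation("specify billing application", sub=[
    Conversation("what data exists and where", sub=[
        Conversation("cost of fixing the data gap"),
    ]),
    Conversation("which functions are core vs frontier"),
])
print(spec.depth())
```

Because every node carries the same two tool slots, a single pair of in-meeting and between-meeting tools can support the whole hierarchy, rather than a bushy assortment of one-off tools per conversation.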
Fractal Repetition of What Type of Links.
Here fractal repetition on different size scales of conversations, and of tools for supporting their in-meeting and between-meeting components, is achieved. This replaces the bushyness of anything-goes, process-less work and its dictatorial opposite--fixed, one-right-way work.
Example System. Xerox produced a High Performance Work System in 1992 that formed the basis of a venture software spin-off. It assembled tools for supporting in-meeting processes and coordinating between-meeting processes. The result was working software applications as the “minutes” output from sequences of meetings. The processes of specifying and building applications were automated via automating components of the myriad conversations involved. When actual program specification and building conversations were observed, lots of rather obvious standard topics and treatments of topics were found. People talked about some function they wanted done, what data was needed to do it well, where and whether such data existed, and how to get hold of it. As usual, people found the data was slightly incomplete and in formats or on machines that did not integrate with or transfer to others. They talked about what part of such situations was least expensive in time, effort, error, or money to fix. They made judgments and commitments. Total quality ideology helped. It defined a core, essential part to processes and, indirectly, a periphery. It helped define a core of valid data and analysis already obtainable and a frontier of next data and analysis needed but not quite available. Specifying and building software applications became a rather rote process of handling one part of that frontier after another, adding to a core of capability defined rather well by total quality ideology. Management and leadership were found to consist of excess managing, done for social bragging or showing off, and missing managing, due to imprecision about what needed what sort of managing when. As application after application got defined and built, the lack of valid data allowing excess and under-management shrank, leaving smaller and smaller room for “do what I say because I say it” managing by rank and intimidation, done for social rather than organizational purposes.
A huge amount of “managing” was found to exist merely for social bragging purposes, unrelated to or highly detrimental to the mission of the organization. It was not that extending software applications automated managing functions so much as that it revealed the phoniness and illusory character of such functions--leaving a small fraction that were found valid and useful. Three components appeared: invalid managing functions; a valid minority, which split into handling places where data was inadequate or missing and making best use of adequate data that existed; plus expansion of ambitions and purposes as old ones got automated or made easy.
Benefits of Fractal Regularizations.
Changing the making of software applications from a happenstance, individual thing into a frontier of an on-going process--the transformation of data-less operating in the world into data-based operation, a frontier that also includes the conversations people have at the edge of that transformation--simplifies not just one application in that process but all applications. The products of individual software application creation efforts are tools, steps, in this transformation process itself. This orders nicely the entire process of choosing which applications are needed, and of specifying and building them.
Fractalization of Prose, Analogical Application 8: to Email Freedom Self Contradicting into Chaos.
Email is omnipresent, something that everyone in the world will be using fairly soon. No one notices that we email messages to people. In fact, we can only email messages to people. We have to know their email address or their name in order to address email to them. We cannot email messages to roles in society, roles in processes, processes in organizations, processes between organizations, events, or any of the other functionally central components of modern life. We are restricted to addressing email to people whose names or email addresses we know. We accept this, in part, because for eons physical mail systems had similar restrictions. Newsgroups on the net, each group being people sharing a specific interest, extend us a tiny bit beyond this in that we can “post” messages rather than send them to specific people, and get anyone knowing an answer or relevant comment to reply. As soon as we become free to mail anyone at all, for free, via email, we discover that the problem is we do not already know all the people we want and need to contact. The gap between whom we need and want to communicate with and whom we already know enough about to send a specifically addressed message to is huge. We know there are literally tens of thousands if not millions out there sharing answers, questions, needs, interests, capabilities, and activities we would benefit from. It is just that we do not already know them all personally. Living, in large part, is a process of exploring those millions, finding them, and building relationships with them. There is in sociology a concept called “social indexing” that can be measured for individuals, groups, organizations, and technical systems. It measures how many of the primary interests, needs, capabilities, and so forth of the people around us anyone knows and takes actions based on.
Though complete surveys are yet to be done, initial research indicates that most of us operate below the 4% level: we know less than 4% of the interests, needs, capabilities, and so forth of those we work with, live with, and are well acquainted with. That leaves an enormous frontier for improvement. Design of networks, housing layouts, communities, and sequences of events that improve people's degree of social indexing is just beginning, since existing designs, not based on measuring impact on social indexing degree, largely leave it unchanged.
The Fractal Alternative.
Consider emailing freely to roles in institutions, or steps in processes, or whosoever knows something, can answer something, or shares an interest in something. If the net were well indexed socially, we would not be limited to addressing email to people we already know. Though there is progress at the edges of this, these edges are at the 4% level of social indexing, not at robust levels. Related to this is much ballyhoo about “knowledge” “management” (mostly by consultants wanting fees). Software vendors continually search for any rationale at all, however useless or unrealistic, that helps them sell equipment to clients--knowledge management is just a recent addition to a long line. The idea is that there are piles of knowledge, unknown by all, that if known could be communicated to points of use, and that there are distributions of knowledge, unnoticed by all, that if concentrated in the right place would be powerfully applied to do good. Knowledge is as unindexed as human needs, interests, and capabilities are, it is said. Knowledge markets are set up, part social events and part expensive computer network equipment, with the goal of getting people “rewarded” for sharing knowledge (which, as soon as it is shared, results in their being fired, their function outsourced to someone cheaper in India using the knowledge they just shared). Such enticing markets are, not surprisingly, ineffective. The fractal alternative is indexing all roles in processes, roles in institutional departments, human interest types, human capability types, and human need types, whether in software or socialware form. Email is then enabled directly to each thusly indexed category, allowing contact with relevance before a personal relationship is built or a relevant person is known by name.
Within any one category the entire panoply repeats on a small scale, in this way: the interest group of people interested in “hunting” is elaborated by the entire index system into those members interested in any of 150 categories, capable of any of 150 categories, in need of anything in any of 150 categories, playing roles of any of 150 sorts in processes, and playing roles of any of 150 sorts in departments in organizations. At some point fractal repetition peters out, there being too few members to distribute across so many categories in the overall index. This is a matter, overall, of creating socially indexing address systems in email, added to addressing messages to people whose names we already know.
Fractal Repetition of What Type of Links.
The index of category types--interests, needs, capabilities, process roles, organization roles, and so forth--is what is fractally repeated on several size scales. Everyone on the net in one of those categories is subdivided by all the categories, so those sharing an interest in “stereo photography” are subdivided into all categories, one of those being people needing “urgent short term family counseling”, all members of which are subdivided by all index categories, until too few members exist to cover a useful portion of categories. An entire government agency or town or corporation or global movement of related NGO organizations would similarly be subdivided fractally by the index categories.
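The category-addressed, fractally subdividing index described above can be sketched in code. What follows is a minimal illustration, not any actual system: the Member fields, the three example people, and the SocialIndex class with its address and subindex methods are all names invented for this sketch.

```python
# Sketch: addressing messages to social-index categories rather than to
# known individuals, with each matched group itself re-subdividable by
# every category dimension (the fractal repetition the text describes).
from dataclasses import dataclass, field

@dataclass
class Member:
    name: str
    interests: set = field(default_factory=set)
    needs: set = field(default_factory=set)
    roles: set = field(default_factory=set)

class SocialIndex:
    def __init__(self, members):
        self.members = members

    def address(self, dimension, category):
        """Everyone matching a category, reachable without knowing names."""
        return [m for m in self.members if category in getattr(m, dimension)]

    def subindex(self, dimension, category):
        """Fractal step: the matching group becomes a smaller index,
        itself subdividable by every category dimension again."""
        return SocialIndex(self.address(dimension, category))

people = [
    Member("a", interests={"hunting"}, roles={"buyer"}),
    Member("b", interests={"hunting", "stereo photography"}, roles={"planner"}),
    Member("c", interests={"stereo photography"}, needs={"counseling"}),
]
net = SocialIndex(people)
hunters = net.subindex("interests", "hunting")   # one category of the net...
targets = hunters.address("roles", "planner")    # ...subdivided again within
print([m.name for m in targets])
```

The subdivision could repeat until, as the text notes, too few members remain to populate the categories usefully.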
Example System. The High Performance Work System at Xerox, referred to above, included this social indexing expansion of email addressability. Each person logged onto the system had to define themselves relative to a dozen social indexing dimensions (the ones above plus some that remain proprietary). Messages could be addressed to particular roles in processes, roles in departments, interest types, need types, capability types, common event participants, and the like. User experience with early releases of the system was automatically captured by focus group events on the system itself, regular quality-of-use questionnaires, and newsgroup discussions about improving the system. Tens of thousands of feedback messages were gathered in these ways, organized fractally, and turned into root cause specs for future improvements for the system to address with new features. The fractal nature of social indexing appeared in full in this feedback. As organizations came on line with the system, month by month they repeated, within themselves, all the social index categories for the entire firm. As time elapsed even further, subunits so elaborated themselves using all social index categories for the entire firm and their department. The trend was clear: over time, the index pattern repeated itself on smaller and smaller size scales of the organization. The effect this had on performance was stark. People moving to new roles were astonished to find a history of messages to their new role, regardless of what person was there. This log of messages was extremely useful for reading the political, social, psychological, and budgetary realities of doing their new role. Instead of hanging around for months, making beginner mistakes, they could immediately act as if experienced in the role, using the log of messages sent not to particular persons but to roles. They constantly got news updates on technologies, techniques, people, and events relevant to their role, posted by people all over the organization whom they did not know.
Benefits of Fractal Regularizations.
Sending email messages to social index categories, rather than to individual people or groups whose names we know, would be a revolution in global effectiveness. The issue it would raise is the quality of persons and categories contacted. That could be solved by stratifying every social index category into new social classes, differentiated by various measures of person or interaction quality, perhaps scores given by everyone contacting them, each party evaluating the quality of the same interaction from their own perspective. Reviews of contacts, by reviewers named or not, might be published.
Fractalization of Prose, Analogical Application 9: to Just-in-Time Systems Freedom Self Contradicting into Chaos.
Just-in-Time is a core ideology-ette within the total quality ideology that defined the core of business for later internet connectivity to soak up and assist. It replaced a vertical view of business, a hierarchy of monkey-like ranks of men and the departments they “head”, with a horizontal view of customers leading processes across firm and department boundaries back to suppliers, and to the new technologies that such processes absorb, or dangerously ignore, from time to time. Just-in-Time en-lean-ed processes by finding obvious and not-so-obvious wastes in them and by eradicating all sorts of inventories, used because one step in a process did not trust its adjacent steps and so buffered itself. In that stripped-down state, bare of non-essentials, Just-in-Time then erected management-by-signal systems: key traits of key process steps, traits that meant outputs of the process were in danger of displeasing customers, were marked by highly visible signals that called process employees together into instant study groups to eliminate those step traits before process output traits irritated customers. The chaos of inventories, and of the irregular process management hidden by such inventories, that Just-in-Time eliminated was one form of bushyness. The chaos of re-seeing all organizations as bushes of processes within bigger processes, and before and after other processes, that Just-in-Time produced was a second bushyness. What happened to solve this was profound.
Just-in-Time--stripping out inventories, exposing sloppy process performance, distinguishing common causes of variation in process outputs (caused by design of the process, equals, management) from special causes (one time crises or interventions), getting processes in control statistically, en-leaning them to be inventory-less, erecting signals for key step parameters that affected customers--this whole process showed that most manager actions made variation worse not better because managers, ignorant of statistics, could not distinguish common from special causes of variation. A term “statistical tampering” was invented to express this harmful role of traditional unscientific management-by-personal-experience, management-by-monkey-like-authority, management-by-opinion. Managing by “fact” was installed by Just-in-Time systems in those few companies (in the US) where managers were found capable mentally of mastering simple statistics. Outside the US, Japan, Singapore, and a host of Asian nations were able to get Ph.D. level statistical data analysis projects done yearly by major portions of their managements and workforces, where US firms could at best muster one or two “best practice” examples at that level of mathematical sophistication. You found 1200 workgroups, in Fuji Film, for example, presenting Structural Equation Modeling and Multiple Regression studies yearly, compared to Kodak’s 2 workgroups yearly, and MIT’s grad schools producing, at best, 600 yearly such theses. Since normal management mostly tampered statistically in processes making variation in their outcomes worse (often by blaming faults on workers without statistical basis), Just-in-Time defined most of management as waste. This led to two things. First, it led to defining more exactly just what the fundamental functions managers and leaders performed were. Second, it led to invention of alternatives to a fixed inventory of expensive managers--a social class--as a way to deliver such functions. 
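The common-cause versus special-cause distinction the paragraph turns on can be made concrete with a small sketch of an individuals control chart, a standard statistical process control method: sigma is estimated from the average moving range divided by the d2 constant 1.128, and only points beyond the three-sigma limits are treated as special causes. Reacting point-by-point to values inside the limits is the “statistical tampering” described above. The data values here are illustrative, not drawn from any study cited in this paper.

```python
# Individuals control chart sketch: flag special causes, leave common-cause
# variation alone. Sigma is estimated from the mean moving range / 1.128
# (the d2 constant for subgroups of size 2), the standard SPC estimator.

def special_causes(samples):
    center = sum(samples) / len(samples)
    moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
    sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128  # d2 for n=2
    lo, hi = center - 3 * sigma, center + 3 * sigma
    return [(i, x) for i, x in enumerate(samples) if x < lo or x > hi]

# a stable process with one one-time spike (illustrative numbers)
process = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 14.5, 10.0, 9.7, 10.3]
print(special_causes(process))  # only the spike is a special cause
```

A manager “correcting” the in-limits points would be tampering; only the flagged point warrants a one-time-cause investigation.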
In particular, event-delivery of management functions was toyed with then broadly adopted, by leading quality pioneer firms in Asia. Fixed inventories of a special social class of people called “managers” had delivered too much management (tampering) while delivering too little management (missing where to intervene on a valid statistical basis), the typical under-shooting and over-shooting found wherever inventories were used instead of Just-in-Time lean systems. If criteria of when and how much of particular managing functions were needed could be devised, then mass workshop events could be designed, where workshop groups in parallel applied the steps of how to deliver needed such functions to particular cases. Events could deliver managing functions when and where and in the exact amounts needed instead of a fixed inventory of people over-managing and under-managing all the time. This was termed “Just-in-Time Managing”.
The Fractal Alternative.
These mass workshop events were not like usual business meetings and conferences, workouts and assemblies. Usual business events were monkey-like status display events, mainly functioning for display and showing off, and only incidentally getting work done. Events were largely ceremonial (called “coordination”) in businesses. The Just-in-Time Managing events were, on the contrary, places where great amounts of actual work and designing and implementation got done. They were fractal in form and function. The primary turn of mind, or change in patterns of acting, that needed to be done was embedded in each size scale of the event, so that every day of a multi-day event, every 4-hour block of each day, every one of those hours, and every 15-minute period in each hour reflected one step of a 4-step core needed-transformation process. It is the fractal repetition of transformation process themes and methods that makes such events actually succeed in transforming work and the people doing work in them. How? The multi-scale nature of human aspirations, dreams, goals, plans, and actions needs reflecting in anything that seeks to transform those aspirations, dreams, goals, plans, and actions in a coordinated, consistent way. Each management function can be broken down into the steps of transformation of a person, goal, situation, relationship, or problem that it involves. These steps get fractally embedded in all size scales of Just-in-Time Management events. Moreover, each event has rational, emotive, interactional, and network-extension modalities, each of which gets such fractal embedding of the transformation themes of the management function to be done.
Fractal Repetition of What Type of Links.
A unique event type is invented for each fundamental managing function to be done. The steps of transformation implied by that managing function are what is embedded in all size scales of the event. This embedding is done in plural simultaneous modalities--rationally, emotionally, interactively, and by network extension.
Example System. An example will help here. Associates and I recruited 800 people working together for ten consecutive days (30% stayed for the whole ten days while 3 shifts of about 23% each stayed for four overlapping days each) in 80 parallel workshop groups to design, staff, legally enact, and fund 16 venture businesses in Yubari, Hokkaido. We went from zero venture businesses to 16 working businesses in exactly 10 days in this event. No one believed it possible but, by combining government, audit, tax, hiring, lawyers, real estate, funders, venture capital, chambers of commerce, customer surveys, competitor surveys, product surveys, experienced managers, and consultants in a complex, well timed weave of information and partial products between workshop groups, all got done within ten days. The identification of ventures needed and the building of them were the management processes this event embodied. Instead of one man or a small committee dragging out the process because they lacked both the data and the work ethic needed to assemble on their own the data needed (so substituting their own “judgement” for valid data they were too lazy or statistically incompetent to collect), all the right skills, contacts, agencies, expertises, permissions, approvals, and enablers were assembled with each other at just the right points in decision and design processes. It took 9 months to design the workshop procedures for the 80 parallel workshop groups and another 2 months to simulate their simultaneous execution, working out bugs and omissions. However, the result was 800 people producing 16 ventures in 10 days, something considered impossible by all till we actually did it. 
In this event, expanding technologies and expanding demographic customer segments and expanding lifestyle needs were examined for points of intersection, nearest recent similar ventures by others were searched for and studied in detail where found, weaknesses in them were identified and solutions for our versions invented, packages to hide our efforts legally and in markets till big enough to win against such new competitors were devised, quality of resources and inputs needed for great plan execution was specified for each resource type. This became the 5 step fractally repeated pattern, within the 10 days (2.5 days each), within each day (4 hours each), within each 4 hour period (1 hour each), within each hour (15 minutes each) given here: trend and invention overlaps, competitive superiorities, disguises, resource quality, assembly. Rational, emotive, interaction, and network extension forms of each step were embedded fractally in each part of the event on all size scales. People ended up living and breathing, thinking and feeling, relating and extending each step in this process, in everything they did during the 10 day workshop event. All 80 parallel workshops followed the pattern of these steps.
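The fractal embedding of a step cycle across an event's time scales can be sketched as a recursion. The step names below follow the five-step Yubari list just given; the scale sizes are simplified (so that each scale holds a whole number of step units) and do not reproduce the actual event's schedule.

```python
# Sketch: embed the same ordered step cycle in every unit of every time
# scale of an event, so each day, each block, and each hour carries a step.
STEPS = ["trend/invention overlaps", "competitive superiorities",
         "disguises", "resource quality", "assembly"]

def embed(scales, depth=0, path=()):
    """Recursively assign the step cycle to every unit on every scale."""
    if depth == len(scales):
        return []
    name, units = scales[depth]
    schedule = []
    for i in range(units):
        slot = path + (f"{name} {i + 1}: {STEPS[i % len(STEPS)]}",)
        schedule.append(" / ".join(slot))           # this unit's step
        schedule += embed(scales, depth + 1, slot)  # same cycle, one scale down
    return schedule

# simplified scales: 5 days, 5 four-hour blocks per day, 4 hours per block
scales = [("day", 5), ("4-hour block", 5), ("hour", 4)]
plan = embed(scales)
print(plan[0])  # the event's first day carries step 1 of the cycle
```

Note that the smallest scale (4 hours) holds only four of the five steps, mirroring the observation elsewhere in this paper that fractal repetition eventually peters out at small scales.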
Benefits of Fractal Regularizations.
It is not a matter of benefits so much as of essentiality. Just-in-Time Managing events without fractal embedding of transformation themes from the managing function they are to execute simply do not work. The exposure to the transformation process is too thin, one-dimensional, usually too rational and cognitive (equals male) to actually change anyone or anything.
Fractalization of Prose, Analogical Application 10: to Outsourcing Freedom Self Contradicting into Chaos.
Just-in-Time Managing appears threatening (unless you are a board of directors, a CEO, or customers). However, it is but a step on the way to something more radical (and potentially threatening), Just-in-Time Firms. Outsourcing is the name we give to our first steps in this direction, launched by Just-in-Time inventory and Just-in-Time managing systems. SAP and its competitor software are the lubricant of this. They abstract coordination and information away from organizations and “managers coordinating things”, allowing alternative social forms of organizing work. Globalization is a different process, but it has within it an essential component of this Just-in-Time Firms process, called “globalizing” work, in particular, globalizing outsourcing itself. Outsourcing, in fact, is a disguise for globalizing work. A fractal hierarchy of outsourcing appears, with big functions outsourced to big firms abroad, who in turn outsource components to medium firms “more abroad”, who in turn outsource small components to the “most abroad” areas. The bushyness this replaces is the vertically integrated, Japanese-like behemoth firm. It replaces the pre-Welch conglomerate firm, doing everything mediocrely. Instead of lots of hiding places, lots of “overhead” departments, people, and functions, you get nearly no hiding places and little if any overhead. Headquarters buildings occupying half of the second floor of bowling alleys typify this new form of business. Giant “integrated” firms are a type of inventory--organization inventory, doing things by organizational unit proliferation. Organization units, and the freedom to beget new ones, are hiding places, bushes, hiding actual determinants of performance and actual performances achieved. Anyone who has worked in an organization of any size has experienced this in spades--it is inescapably obvious, to both professional and amateur observers.
The Fractal Alternative.
Consider a US company outsourcing (to some Indian firm “off” shore) many of its IT (information technology) functions. That Indian firm has human relations “personnel” functions it needs but does not pretend to expertise in doing. It outsources many of those personnel functions to a human resources consultancy in the least expensive state in India. That human resources consultancy has tax and accounting work it does not pretend to be expert at. It outsources that work to a local lone free-lance accountant who comes to work twice a month to do accounts and taxes. We have four layers in this new way of organizing work, each poorer and less expensive than the one before. We have differences in what functions are outsourced at each layer (though some of these may be happenstance). All this works because the links--different nations, different cultures, different management teams, different firms, different laws--are lubricated away by the internet and by software standards for process management like SAP and other so-called Enterprise Resource Planning software. There is a difficult judgement call involved in this. Most firms over-out-source, as top managers ignorant of the requisites and importance of quality of work seek easy-to-quantify financial savings. A few years of bad outputs, mis-spec-ed results, and un-responsive supply quickly dis-illusions firms (the managers who initiated such outsourcing links long since promoted away beyond accountability for results). They take back most of what they outsourced and leave outsourced only what foreign suppliers and long communication chains can manage to do well. Anything fast, responsive, tailored, subtle, culturally embedded, or tacit has to be un-out-sourced eventually. 
Outsourcing is inherently fractal in form--a big firm outsources its IT (info tech), HR (personnel), accounting, and global supply--and each firm it outsources to somewhat later outsources its own IT, HR, accounting, and global supply, and each of those outsourcees eventually does the same. Outsourcing is a recursive process that produces a fractal form of inter-firm network.
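The recursive outsourcing pattern just described can be sketched as a toy recursion: each layer keeps its focus function and pushes the rest down to a smaller, cheaper supplier, which repeats the decision. The firm names, the keep-one-function rule, and the depth limit are invented simplifications for illustration, not a model of any particular firm.

```python
# Toy sketch of recursive outsourcing: each firm keeps the function it
# aspires to greatness at and outsources the remainder one layer down,
# producing the fractal inter-firm hierarchy the text describes.
FUNCTIONS = ["IT", "HR", "accounting", "global supply"]

def outsource(firm, functions, depth=0, max_depth=3):
    """Each layer keeps one function and pushes the rest to a supplier."""
    if depth == max_depth or len(functions) <= 1:
        return {firm: functions}        # smallest firm absorbs what's left
    kept, rest = functions[0], functions[1:]
    tree = {firm: [kept]}
    # the same pattern repeats, one size scale smaller, at the supplier
    tree.update(outsource(f"supplier-of-{firm}", rest, depth + 1, max_depth))
    return tree

network = outsource("BigCo", FUNCTIONS)
for firm, kept in network.items():
    print(firm, "keeps", kept)
```

Running the same decision rule at every layer is what makes the resulting network fractal rather than merely hierarchical.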
Fractal Repetition of What Type of Links.
What is going on is that each firm is judging what it is now good at and what it must in the future be good at, wants to focus on that, and hires outsourcers for all functions it does not aspire to greatness at doing. This seems like giving up on greatness until you actually work in firms and realize how silly it is to aspire to be world-best at all functions. Firms who aspire to greatness at all functions achieve greatness at none, in nearly all cases. Think about Jack Welch, a chemistry Ph.D., working at GE. His early “neutron Jack” phase consisted of forcing the heads of his conglomerate’s diverse businesses to apply more stringent measures of success, and to make decisions to fix, to sell, or to drop their businesses. For a conglomerate the decision is focussing on some firms, dropping others (Welch saw his conglomerate as a bank taxed at lower levels than true banks, so he ran GE as a bank, fixing up and selling firms as its product, under favorable tax treatment). For a firm the decision is focussing on some functions, dropping or outsourcing others. Outsourcing judgements recur at later times--the big firms outsource functions to medium ones, who later get big enough to outsource their functions to smaller firms, who later get big enough to outsource their own functions. The functions may by happenstance be different, but ultimately they end up the same. The result of recursive processes is an overall accumulation of fractal structure. Now imagine, some years hence, a huge population of small outsource firms in China and India, handling every somewhat rote business function. Right now each such firm is doing contracts for particular larger firms. However, ERP software, like SAP, can lubricate quick assembly of such firms into different combinations. There might be a bottom-up way, lubricated by such software, to coordinate populations of such firms combining to form their own new, invented, middle-size firms, for particular products for particular markets. This would result in Just-in-Time Firms, composed, when data indicated new product or industry invention opportunities, out of populations of small outsource firms.
Example System. A subspecies under outsourcing is virtual firms, firms that exist only part-time each day or week or month. Some companies, for example, have organized their workforces into two entirely different businesses, one existing Monday through Thursday, and the other existing Friday and Saturday only. Managers in one firm are workers in the other and vice versa. The “weekend” firm is a risk, a venture, spun off of the regular firm, but run by everyone rather than just a few, and leveraging the entire resources of the parent firm. In this way risk, adventure, and intensity alternate weekly with steady performance in a safe parent firm. In other cases, a group of people organize themselves into virtual firms that exist only four hours a week. These firms tend to be based on business ideas that require clear focus and execution of a pure idea. People can be recruited to a minimum and maximum of 4 hours a week of work on such firms. Indeed, at that level of risk, people can often be recruited to three or more such firms while working 40 hours a week or more at usual firms or while going to grad school. These virtual firms can themselves spin off further virtual firms, doing parts of their work. It is internet software that lubricates all this, lowering coordination costs to the point that huge efforts, composed of large numbers of such virtual businesses, get done. The Center for Fractal Computing runs one such network.
Benefits of Fractal Regularizations.
One profit is time--the time to find and combine mature function capability in outsource firms is much less than the time to find and hire all the people, tools, resources, and facilities needed to do the function yourself. This profit means economies can invent new firms faster than in the past. Another profit is cost--the factors of production for different products and services are not uniformly distributed across the planet. 
You can source a function where the best cost for quality is found for it, regardless of tradition or geography. This profit means economies can do more work for the same resources used, equals greater productivity. The fractal form such recursive outsourcing processes generate has benefits, too. Layers of different sized firms doing the same functions, lubricated by software standard ways of doing processes, means that gradually, the prerequisites for Just-in-Time assembly of new firms, as information technology or social systems indicate new technology, market, customer needs opportunity, are set in place. As the cost in time and money of assembling capabilities for new firms lowers, the number and quality of new firms increases.
Fractalization of Prose, Analogical Application 11: to Venture Clusters Freedom Self Contradicting into Chaos.
Why bother with creating lots of venture businesses, each betting on risky untried technology, led by inexperienced managers, supported by iffy personal angels risking money they can ill-afford to lose? Most fail quickly. Wouldn’t it work better to have huge big firms support such ventures spun-off by their own employees, having access to experienced managers, deep funding pockets beyond what angels can provide? Where does advantage lie--in the small risky inexperienced ventures or in the protected well-heeled corporate spin-off ventures? In the US, the advantage clearly lies with the small risky inexperienced ventures. They outproduce in nearly all dimensions intrapreneurs and corporate spin-offs. In Japan, the same advantage seems to be with the small risky inexperienced ventures, in spite of tax and other laws that greatly hinder venture establishment in myriad ways. In Europe, the advantage also seems to lie here, though Europe lacks incentives for such risky behavior by entrepreneurs and angels and would-be first customers of ventures. Why? What is it about matching utterly new ideas with utterly new people and utterly new money sources? Why cannot old people and sources of money compete? It all comes down to sex--idea sex, organization sex, career sex. “Clusters” is the name given to copies of Silicon Valley, California--districts where hundreds of venture firms emerge, interact, spawning new ones that emerge. Most fail, fuse, or flee, but some grow tremendously, forming the basis of new industries. Profound thoughts underlie most of this interacting. Consider nano-info-bio-technology--the intersection of DNA, genome, information systems, and nano-technology. The nano-tech techniques allow computers to fully intersect and handle genes and DNA fragments, automating on a tiny fast inexpensive scale, what took lots of time, money, and material on pre-nano-tech platforms. 
The miniaturization of silicon circuits, matching the miniaturization of understanding of life via genes, matching the miniaturization of living material manipulation devices in nanotech results in a new industry of the future, driven by aging societies and their health costs and needs. Clusters are where ideas have sex with other ideas, careers have sex with other careers, organizations have sex with other organizations. Ideas are the genetic material exchanged in such intercourse.
The Fractal Alternative.
The question in the section immediately above--why risky inexperienced ventures outperform supported experienced ones--went unanswered, it appears. Actually an answer was provided, but only indirectly. It is not individual high tech venture businesses that outperform, but clusters of boiling, disappearing, self-emerging ones. Individual high tech venture businesses nearly always fail--what lasts and grows is particular constellations of them that form viable “stable evolutionary strategies” in the genetic competitions among firms. If your particular venture is part of such a constellation, you find yourself taken along for a wild successful ride, not of your own firm’s individual making. An entire ecosystem of inter-related effects buoys you along on a wave your firm did not imagine or create. When one venture in such a constellation fails, it is immediately replaced by rapid invention of an equivalent one (equivalent from the standpoint of the other firms in the constellation). Supported experienced ventures nearly always fail because their closeness to the mother firm that supports them, while lessening their risk and giving access to experience, prevents joining fast-moving constellations of firms sharing one stable evolutionary strategy in the overall technology cluster ecosystem. Indeed, at first, high tech ventures are not really firms at all; they are memberships in cluster dynamics, loci from the vantage of which firm members experience full cluster dynamics. Hence, high tech ventures commonly completely reconfigure themselves four or more times, completely changing market, technology, leadership, and customer, as they orient themselves opportunistically in the ecstasy of idea sex, organization sex, financing sex, and career sex going on around them. 
This is another major reason risky ventures outperform protected supported ones--supported ones tend to tenaciously try to make their initial product and idea commitments work, instead of viewing their firm as a viewing platform of cluster dynamics, till a surf-able wave comes by. The bushyness here, reduced by fractal regularization is all those individual venture businesses lacking coalition relations within a constellation of ventures within an overall cluster. They stick out like a bush every which way, making no coherent directional sense with other firms. MBAs tend to populate their leadership--freshly minted in top ten colleges where greed is taught as the foundation of venturing (though research shows nearly no successful entrepreneurs enter it for the money). You can predict which ventures appear and what each of them ventures within itself as service or product by modeling fractally new technology ideas and new technology execution needs on all size scales from global trade to regional markets to customer firms. Take the example above of information technology made applicable to gene biology via nano-tech. Each customer and product in this emerging industry, within it has new intersections of info tech, gene tech, and nano-tech, each of which, in turn has new smaller scale intersections. Hence, fractal repetition of technology intersections. Each constellation of technology inventions similarly takes on fractal repetition form. Firms that commercialize this idea, follow this fractal form.
Fractal Repetition of What Type of Links.
Intersections of particular ideas become intersections of corresponding new technologies which become, in turn, intersections of corresponding venture businesses. Each component product or service within such firms itself becomes a new intersection of those same ideas, technologies, and eventually further spin-off ventures of the future.
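This repetition of the same intersections on successive size scales can be sketched as a recursion over technology pairings. The three base technologies come from the nano-info-bio example above; the three named levels and the simple pairing rule are illustrative assumptions for this sketch, not a claim about how real clusters enumerate themselves.

```python
# Sketch: the same pairwise technology intersections recur at industry,
# venture, and product scales, giving the fractal form described above.
from itertools import combinations

BASES = ["info-tech", "gene-tech", "nano-tech"]
LEVELS = ["industry", "venture", "product"]

def intersections(level=0, context=""):
    """Generate the same intersections again inside each larger-scale node."""
    if level == len(LEVELS):
        return []
    out = []
    for a, b in combinations(BASES, 2):
        node = f"{context}{LEVELS[level]} intersection of {a} x {b}"
        out.append(node)
        out += intersections(level + 1, node + " > ")  # repeat, one scale down
    return out

tree = intersections()
print(tree[0])   # a top-scale intersection
print(len(tree)) # the same pairings recur on every size scale below it
```

Spotting a small-scale node in such a tree today is, in the text's terms, a prediction of a venture locale of the future.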
Example System. The Yubari case, mentioned above in the context of fractal patterning of mass workshop events, pertains here as well. Instead of suggesting isolated single new ventures, independent of each other, hence, not likely to chain each other down, the workshop procedures, borrowed from people world best at venture founding, led participants to identify evolutionarily stable sets of related technologies, ventures, careers, ideas, and persons. Several competitive such sets were designed and evaluated, then put into idea-competition with each other before a winner set was selected for immediate hiring, budgeting, legal form establishment and other implementation work. Within each venture were repeated the primary themes and techniques of all the outside ventures in its set, as subdivisions, processes, or organizational units with the venture. This fractal repetition of the diversity in the entire set of ventures within each venture of the set, allowed the venture members of the set to operate smoothly with each other from the beginning. Benefits of Fractal Regularizations.
If you spot this fractal repetition of intersecting ideas, technologies, and ventures, you get to predict the future: small scale, lower level such intersections now, within single products or technical projects, will, when things grow greatly, become new venture locales in their own right.
Fractalization of Prose, Analogical Application 12: to Social Computation Freedom Self Contradicting into Chaos.
People in conversation and in meetings with each other revel in their freedom to say and do anything they think appropriate in response to what others have just said and done. We express ourselves and challenge ourselves and others freely, sometimes acting “in character” and sometimes stretching our limits or trying on new roles. Others do the same. The bushyness of all this freedom interacting with other freedom, as fun as it is, leads to extremely low levels of accomplishment and enlightenment in usual human encounters and in the majority of scenes in large organization life. Except where extremely disciplined and strategic people indirectly set up and impose patterning on such situations, they resemble nothing so much as wet fish flopping around in the back of a boat that newly caught them.
The Fractal Alternative. As computers permeate society, especially personal computers, abstract ideas from them permeate minds.
Humans labored to order ideas in silicon form so they could interact with logic to calculate, order, perceive, select, and control things. Living among such devices, humans begin to imagine new social arrangements of human beings resembling arrays of cooperating computer processors. Information systems of nature, of society, of mind, of machine blend, interact, and mutually inspire each other. Here and there people actually try such imagined new arrangements out--organizing people as arrays of processors with particular algorithms--defined mental processes--distributed across them in particular patterns over time. Which array layouts, which mental processes, which distribution types produce better results than usual human gatherings and communications is observed, preserved, and studied for improvement. What is fractal in this? In social computation arrangements, people are organized in physical layouts, subdivided into neighborhoods of two to six people, each sharing one assigned tool, viewpoint, process, or function. Across this layout run one or more defined flows, parts of which may depend on intermediate results of individual neighborhoods. The flows are defined as sorts of outputs flowing between specific neighborhoods in the overall layout of people. Each neighborhood applies its assigned tool, viewpoint, process, or function to the inputs that flow into it. Some neighborhoods have one input overall flowing into them, others have many similar inputs, others have many diverse inputs, and so on. Similarly, neighborhoods generate outputs: some none, some one, some many similar ones, some many diverse ones. Within neighborhoods, roles are sometimes assigned to individual people; at other times neighborhoods are free to allocate their work among their members as they see fit from time to time.
In some social computation arrangements, neighborhoods assign roles to their people; in others they are free not to do so, or to freely change any such assignments they make. Given the generality of computation types that this assortment of protocols, distributions, neighborhoods, and layouts can support, it is usual, however, to find a fractal format emerging as “best” for quite a few outcomes wanted from such arrangements. Why is this? Elaborating work processes on several different size scales simultaneously, accumulating partial work results in groupings of people of several different size scales simultaneously, and the like uncover error, push ideas to the details of implementation, lift cases into high abstraction, and otherwise impose quality on ideas and plans. Fractal-less arrangements tend toward a variety of errors that arise when ideas are not elaborated or accumulated across many size scales. What happens too often in society and work organizations is that human social classes,
or status rank substitutes for them, stratify people so that high ranks work on large scales and low ranks work on small scales. The competition, ambition, envy, autocracy, pride, and arrogance between ranks distort messages between scales, so that ideas from small scales go uninfluenced by large scale concerns and, vice versa, ideas on large scales go uninfluenced by small scale concerns and hence fail in implementation. The non-fractality of human ranking and status systems undermines their effectiveness as scales of work get monopolized by scales of “authority”, “rank”, and “status”.
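The neighborhood-and-flow layout described above can be sketched in code. This is a minimal sketch only; the neighborhood names, assigned functions, and flow pattern below are hypothetical illustrations, not taken from any system in this paper:

```python
from typing import Callable, Dict, List

class Neighborhood:
    """A group of two to six people sharing one assigned function."""
    def __init__(self, name: str, func: Callable[[List[int]], int]):
        self.name = name
        self.func = func

def run_layout(neighborhoods: Dict[str, Neighborhood],
               flows: Dict[str, List[str]],
               inputs: Dict[str, List[int]]) -> Dict[str, int]:
    """Evaluate each neighborhood on its own inputs plus the outputs
    flowing in from its upstream neighborhoods."""
    results: Dict[str, int] = {}
    def evaluate(name: str) -> int:
        if name not in results:
            upstream = [evaluate(u) for u in flows.get(name, [])]
            results[name] = neighborhoods[name].func(inputs.get(name, []) + upstream)
        return results[name]
    for name in neighborhoods:
        evaluate(name)
    return results

# Hypothetical layout: two "gatherer" neighborhoods feed one "synthesizer".
layout = {
    "gather_a": Neighborhood("gather_a", sum),
    "gather_b": Neighborhood("gather_b", max),
    "synthesize": Neighborhood("synthesize", sum),
}
flows = {"synthesize": ["gather_a", "gather_b"]}
inputs = {"gather_a": [1, 2, 3], "gather_b": [5, 4]}
print(run_layout(layout, flows, inputs))
```

The point of the sketch is only that neighborhoods differ in how many inputs flow into them and that some depend on intermediate results of others, exactly as the text describes.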
Fractal Repetition of What Type of Links.
It depends on what the social computation “automaton” is trying to do and how it is configured. I like to form matrices of 100 people arranged as ten rows of ten people each. Then along each row a different input is passed, and along each column a different process-to-be-applied to that input is passed. Timing of new inputs to the rows and new inputs to the columns is carefully designed. Experience with this format, however, showed that people wanted to modify it. When such modifications were tried, further modification proposals came. When these too were applied, eventually, after repetitions of these improvement efforts, a fractal arrangement had replaced my initial matrix. The fractal arrangement was four large groups of sixteen people each, each large group subdivided into four smaller groups of four people each, each smaller group eventually subdividing into individual people. The four large groups were assigned steps like liberation, invention, attracting pioneers, and conserving novelty; the same steps were then assigned to the four smaller groups within each large group. That way the liberation group was divided into a liberation, invention, attracting-pioneers, and conserving-novelty set of subgroups, which were composed, in turn, of individual people assigned to each function: liberation, invention, attracting pioneers, and conserving novelty. The emergence of this fractal format after repeated user-suggested improvements in a very different initial configuration was surprising. That something similarly fractal emerged from many different groups originally organized in matrix form was even more impressive. We have yet to fully research why this is occurring.
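The nested arrangement just described (the same four steps repeated in groups of 16, 4, and 1) can be sketched as a small recursive construction. The step names come from the text above; the function itself is an illustrative sketch, not software used in the events:

```python
STEPS = ["liberation", "invention", "attracting pioneers", "conserving novelty"]

def fractal_groups(size: int):
    """Split a group of `size` people into four subgroups, one per step,
    repeating the same four-step pattern until individuals remain."""
    if size <= 1:
        return "individual"
    return {step: fractal_groups(size // len(STEPS)) for step in STEPS}

# 64 people: four large groups of 16, each split into four groups of 4,
# each of those into individuals, every level carrying all four steps.
org = fractal_groups(64)
print(sorted(org["liberation"].keys()))
```

Note how the large "liberation" group itself contains all four step subgroups, and each of those again contains all four roles, which is exactly the fractal repetition the matrix of people evolved into.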
Example System. An interesting example in society, not software, comes to mind. We have liberals and conservatives in the political cultures of many nations. The liberal group itself has a liberal wing and a conservative wing. When a victory over the conservatives is won for some issue, we find, a few months or years later, the victorious liberal faction has incorporated much of what the defeated conservative faction believed and proposed. The victor ends up copying his conquest. In software development teams, people differ on preferred approaches. Arguments ensue and one approach wins. A few months later, however, we can examine the code written and find, in many places and ways, the defeated approach has become embedded, on a smaller (or rarely larger) scale, within the victorious approach. I believe both these examples illustrate the same forces for recursive process evolution at work. Something in the human psyche causes victors to learn from their conquests and incorporate bits from what they conquer. This may be the force that causes my matrices of people to end up fractals of people.
Benefits of Fractal Regularizations.
We are discussing here fractal organizations of people, with steps of a fundamental process repeated in groups of 16, 4, and 1 person. What is the benefit, within a large group assigned step 1, of having subgroups assigned steps 1, 2, 3, and 4? Why is that better than having, say, all of group one doing only group one’s step or unique components of it? Why should later parts of a process get reflected as sub-steps in doing the first step of the process? What happens when a four step process is fractally repeated in subgroups is, for example, that step one becomes the context in which steps one, two, three, and four are conducted (and step two becomes that sort of context as well, as also is true for steps three and four). Doing all four steps within each step, with that step as overarching goal and context, has what benefits? It results in understanding each step from the viewpoint of each other step. The steps are not individually done but are done in full relation with and cognizance of each other. Of course, it also pushes each step through several different levels of detail, so that the smallest scale items, largest scale items, and items in-between are elaborated for any one step. This, as mentioned in sections above, catches dreaminess, perfectionism, and other sorts of scale-generated error, error that comes from emphasizing one size scale and slighting others. These two benefits, taken together, are huge in practical impact in not a few cases.
Fractalization of Prose, Analogical Application 13: to Game-Simulation-Work Continuum Freedom Self Contradicting into Chaos.
One of the largest markets for software is games. An increasing market is simulation, with CAD/CAM simulations allowing digital versions of new car designs, complete with physics, to be tested, and simulated manufacturing systems for them invented on screen before real money and facilities are consumed. Work coordination systems that support inter- and intra-firm processes, some based on standard cores like enterprise resource planning systems and others custom made, are another major portion of the software being bought. Here the bushyness comes from separation of these realms. Work coordination applications lacking gaming and simulation features haphazardly end up including them in later upgrades. Game applications lacking simulation and work coordination features end up including them in later upgrades or competitor products. Simulations lacking game or work coordination features end up including them in upgrades or competitor products later on. The bushyness here is partial branching that lacks the regularity of including all three--game, simulation, and work coordination.
The Fractal Alternative.
Games are what simulations evolve into, and simulations are what work coordination systems evolve into. Also, the other way round: work coordination systems are what simulations evolve into, and simulations are what games evolve into. If you add a bit of reality to a game, it becomes a simulation of a real situation. If you add a bit of reality to a simulation, it becomes work coordination software supporting a real work process. What is more, the best games have embedded in them sub-games, simulations, and work coordination features among gamers and enemies. The best simulations have embedded in them games and work coordination among simulators. The best work coordination systems have embedded in them games and simulations. Any of these three--games, simulations, and work coordination systems--can be the start or destination of development work. All three share exactly the same functions, but with different levels of fantasy-reality. As fantasy gives way to reality you move from game system to work coordination system; as reality gives way to fantasy you move from work coordination to game. In both movements the intermediate positions are simulations.
Fractal Repetition of What Type of Links.
There is a temporal repetition of game evolving through simulation to work system and vice versa, and there is a spatial repetition of all three embedded in each instance of each (games with sub-game parts, simulation parts, and work coordination--between players--parts).
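The temporal repetition can be pictured as a single system classified by how much reality it admits. A minimal sketch, in which the threshold values are illustrative assumptions rather than anything specified in the paper:

```python
def classify(reality: float) -> str:
    """Map a reality level in [0.0, 1.0] (pure fantasy .. actual work)
    to the type of system it has become. Thresholds are illustrative."""
    if reality < 0.33:
        return "game"
    if reality < 0.67:
        return "simulation"
    return "work coordination"

# Adding bits of reality walks a game through simulation to work coordination;
# removing them walks the same system back the other way.
forward = [classify(r) for r in (0.1, 0.5, 0.9)]
print(forward)
```

The one-parameter continuum is the design point: the same software, at different settings of admitted reality, is a game, a simulation, or a work coordination system.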
Example System. Our group at the Center for Fractal Computing has been asked by an internet courseware deliverer to develop courseware based on this section’s principle of a game-simulation-work-coordination continuum. As a result, we are delivering to them software that allows students first to play the game of allocating traffic flows, then add specific bits of reality to that game till it becomes a simulation of real traffic flows, then add still more bits of reality till the same software becomes a work coordination system that actually controls traffic flows. Similar courseware for setting up and managing events as games, simulations, and work coordination, and for handling inter-cultural business negotiations and work assignments as games, simulations, and work coordination, is also underway. Within the game portions of all the above are sub-games, simulations, and work coordination of the work of gaming among the players involved. The same fractal repetition is embedded in the simulation portions (with game and work coordination components) and the work coordination ones (with game and simulation components).
Benefits of Fractal Regularizations.
Games that know that within them are simulations and coordinated work systems differ from games that do not. Games that know that playing them evolves into simulation, and that doing that evolves into doing actual work with work coordination systems, differ from games that do not. The three functions, aware of each other, make use of each other and accommodate each other. The same applies for simulations and work coordination systems. Games, simulations, and work coordination systems all three play with reality. They all admit some reality and allow messing around with it. Each has its own limits as a type of system: games emphasizing fantasy, simulations emphasizing what-if-ing with parameters, and work coordination emphasizing alternative work flow inventions.
Fractalization of Prose, Analogical Application 14: to Knowledge Management Freedom Self Contradicting into Chaos.
Knowledge management was invented as a later phase of artificial intelligence work. What happened is that software people traipsed all over corporations in the late 1980s looking for overly concentrated knowledge that expert systems could distribute and overly distributed knowledge that expert systems could concentrate. They found, indeed, such opportunities, but most of what they found did not lend itself to software treatment. There were lots of over-concentrations and over-distributions of knowledge that expert system software could not fix effectively. This became a consulting opportunity for parts of general consulting firms other than their information technology practices. A reaction by information technology people to this invasion by non-information-technology consultants appeared in the form of agent simulations (SWARM, Santa Fe Institute work, among others), data mining (originally merely expert systems built of statistics expertise but much accelerated by the need to spot terrorists in masses of data), and automation of knowledge creation via creativity assist systems (originally done in biotech by using nano-technology to bring protein signalling systems and gene switching systems under information system influence). The bushyness here comes from chasing all sorts of forms of all sorts of knowledge across culture, organization, and technology boundaries. The boundaries within which knowledge flows well are called “practices”, and flowing knowledge across different functions or departments is much harder (Brown and Duguid, 2001). Flowing knowledge across culture and practice boundaries requires re-inventing it in each new domain--hard work. The bushyness is in the formats in which knowledge appears in diverse practices and in the plurality of borders across which knowledge does not flow. These are becoming strict limits on further progress in knowledge identification, development, and deployment in organizations and economies.
The Fractal Alternative.
Within practices knowledge is always moving: from routines in which it is embedded as mute motion to manuals and advice so others can match an expert’s routines, and from manuals and expert advice to routines in which the knowledge is fully deployed, applied, and used. Of course, there is motion from routines of a master to routines of a nearby observing disciple, where the knowledge stays embedded in routines and is not made explicit. However, such mute transmission is inefficient, as it takes years for even modest levels of progress in the observing one to occur. Of course, also, there is motion from one set of explicit materials (manuals or the like) to another as things get summarized, compared, published, reported, archived. These four compilations--from routine to explicit form, from explicit form to routines, from routines to routines, from explicit form to explicit form--are basic, going on in all practices all the time. Across practices knowledge is always moving as well. Complexity theory, for example, based on non-linear dynamics maths, spread to 30 different fields and hundreds of different practices in the late 1990s and early 2000s. Consulting companies, for example, exist in part to translate knowledge invented in one field into terms understood by other fields. The same four basic compilations take place across fields, so knowledge flows from explicit formats in one field to different explicit formats in another, from routines observed in one field to routines done in another, from explicit formats in one field to routines in another, and from routines observed in one field to explicit formats in another. The same four compilations occur across practices. The ease, speed, and errorlessness with which knowledge moves, however, differs. It moves within practices easier, faster, and with less error than between practices.
It moves between firms, from a practice in one firm to the same practice in another, faster than it moves within firms from one department or practice to a different one. Clusters, like Silicon Valley, California, take advantage of this by enforcing norms that encourage people (each having their own practice knowledge) to transfer readily between new ventures, taking knowledge with them. This speeds knowledge spread, and though firms “losing” the knowledge a departing person carries may regret it, they also hire people filled with knowledge they could not themselves have developed. The overall amount of learning in the cluster as a whole vastly increases when norms encouraging such transfers of people and their knowledge are enforced. MIT’s Route 128 was soundly defeated by Stanford’s Silicon Valley because MIT applied an East Coast norm that feared people transfer, and the knowledge transfer it implied, and slowed it down, while Stanford saw the greater good in fast movement of knowledge throughout cluster firms. The four basic compilations mentioned above, however, are not a complete list. There are at least 20 formats, so-called knowledge format types, some of which any particular practice prefers. The translating of knowledge between practices, re-inventing it some say, is slow because the knowledge in formats preferred by one practice has to be translated into the formats another practice prefers. These formats operate in tacit, mute routines and in explicit documents and manuals. So all four basic compilations involve 20 other formats and translations among subsets of them. Beyond all this, there is a surprise element that underlies motion of knowledge within and between firms and practices. Till a set of ideas has enough critical mass of human interest directed at it, it is not noticed and paid attention to. Hence, creativity requires isolation just as much as it requires communication.
Studies of product development teams showed that in concept development phases of their work, they needed isolation so they could achieve a critical mass of new ideas that, when released to the world as a group, would catch attention, respect, and funding. Leaking each partial idea before the whole set was invented resulted in ho-hum bored responses by all, and no funding was
achieved. The ideas died. Hence, firms have to isolate people developing new ideas enough that what they develop achieves critical mass for surprise and attention collecting. Later phases of product development and invention require selling the resulting surprising mass of ideas to others, so great communication and presentation presence is needed, the opposite of isolation. There are numerous dimensions to knowledge compilation beyond these as well, but too voluminous to discuss here (see Greene, 2002, Are You Creative? 60 Models). For each trait of knowledge--explicitness (versus routine), format type (versus other format types), critical mass (versus boredom), and others--there are compilations among all the possible forms of knowledge. As knowledge passes across different practices, firms, and other borders, different subsets of these compilations are required and done. On a small scale each border is unique in the compilations it requires. On a larger scale, knowledge moving from major corporation to government agency, or from executives to implementors, or from national economy to national economy, repeats that entire micro-level of compilations, fractally. So knowledge passing across all the basic disciplines and other boundaries in our world takes on a particular configuration of compilations that is repeated when it passes among large scale units like economies, nations, large enterprises, and sectors of the economy. Over time this recurs at mediate and smaller scales, developing a fractal form of distribution.
Fractal Repetition of What Type of Links.
There is an overall architecture of compilations among diverse knowledge formats and traits that takes place for any particular knowledge as it spreads across the world. That architecture repeats on different size-of-organization scales as the knowledge spreads. So the entire configuration of compilations occurs between economy one and economy two as, say, knowledge flows from the US to China, and the entire configuration of compilations occurs between Procter & Gamble and Lever Brothers as the knowledge flows from one firm to another in their industry, and the entire configuration of compilations occurs between marketing at one firm and product design at that firm, for example.
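The repetition of one configuration of compilations across size scales can be sketched as follows. The four basic compilations are the routine/explicit pairs named earlier in this section; the unit names are taken from the examples just above and stand in for any pair of units at any scale:

```python
from itertools import product

FORMS = ("routine", "explicit")
# The four basic compilations: every (source form, target form) pair.
COMPILATIONS = list(product(FORMS, FORMS))

def compilations_at_scale(source_unit: str, target_unit: str):
    """The same configuration of compilations applies whether the units
    are economies, firms, or departments -- the fractal repetition."""
    return [(source_unit, src, target_unit, dst) for src, dst in COMPILATIONS]

economy_scale = compilations_at_scale("US economy", "Chinese economy")
firm_scale = compilations_at_scale("Procter & Gamble", "Lever Brothers")
print(firm_scale[1])
```

The sketch makes the fractal claim concrete: the structure of the compilation set is identical at every scale; only the names of the units change.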
Example System. There are special people good at interviewing people who do some rare function extremely well and eliciting how they manage to do that. Not everyone is good at such interview construction and delivery. Getting at inside-the-mind expertise and making it explicit, when it is not inside your own mind, is difficult work, as the first expert system builders found decades ago. The opposite of this is having explicit knowledge and getting it embedded into automatic work routines so it is effortlessly and errorlessly implemented without conscious thought. This pair--explicitization of knowledge and routinization of knowledge--is seen as perhaps the most basic of the compilations of knowledge formats. For average levels of intended performance, it may be more trouble to find a person with the right expertise, explicitize their knowledge, and then routinize it later, than it is worth. You can, to a certain extent, re-invent for yourself all sorts of average knowledge about average domains fast and easily enough to make elaborate knowledge markets set up by firms a waste of time. One client of the Center for Fractal Computing came to us with a failing knowledge market installed by a big name global consulting firm for tens of millions of dollars. Employees knew that their job security depended on having special knowledge and so offered only the most token participation in the elaborate system, with its tidbit rewards (trips, bonuses, “recognition”, all ultimately destined to end in loss of job). In truth, the interface was so cumbersome, and there was so little effective social indexing of people, processes, roles in processes, and roles in departments, that finding a relevant problem you had solution knowledge for, or finding a relevant solution to a problem you had, was not worth the time of messing with the interface.
We proposed a dual system of events, enabled by specialized index software per event, that punctuated all structures and processes of work at regular intervals, with public performance spaces where employees could show off knowledge and problems they had come up with. By driving the system with human desire for limelight and packaging it in social formats with machine enablers (rather than in entirely machine forms) we got participation at many times the level of the more expensive consultant system. We built a periphery of social-index subevents, interest groups each enabled by a specific software tool, around the walls of a large auditorium where the events were held. This architecture presented the same knowledge compilation events and tools in the same relative locations at each event. People mastered the overall pattern of these compilations and began to repeat them in reports, local work organization designs, and event streams.
Benefits of Fractal Regularizations.
The pattern (constellation, configuration) of compilations differs per type of knowledge involved and its point of origin, from which it spreads. The pattern of compilations repeats on different size scales. This means system builders serious about doing some good via knowledge management work and systems building, have a stable pattern to support with systems. When, say, complexity theory passes as a worldwide wave across diverse nations, industries, functions, and persons, its unique pattern of compilations from one format to another can be studied and supported then repeatedly deployed at different points and size scales throughout the global economy. This reduces the work from systems to support a bunch of wildly proliferating knowledge forms to a fractally repeated pattern of such forms.
Fractalization of Prose, Analogical Application 15: to Experimentrics Freedom Self Contradicting into Chaos.
The costs of coordination are coming down because of new technologies like cellular communications, personal computing, and the internet. As a result, big firms are becoming cores surrounded by outsourcers of what formerly were functions done by departments within them. As a result, ventures can form alliances and constellations in instants, offering sets of new technologies that fit together to accomplish larger purposes for larger markets.
People lurch from emphasizing small ventures to emphasizing big lasting behemoth firms, as fortunes and stock prices vary in economies. If you have just lost lots of money in the bust of a technology bubble, you favor big firms and their ability to fund long term efforts, immune to the vicissitudes of the markets. If you have just watched a friend buy a dream vacation home in some resort, with money they made from some venture cashing out in an initial public offering, you favor ventures and their ability to ride a wave of ideas quickly. These stark alternatives--venture and big firm--have been affecting each other, of late. The big firms have seriously tried to structure initiatives inside them as ventures, often by setting up markets where resource holders bid for proposals offered by venture proposers. The small ventures have come to recognize that as individual firms they do not make much sense or have much importance, but as players in particular configurations of technologies emerging together they make a lot of sense and have much power. They are structuring themselves more and more as role players in configurations of emerging technologies/firms. An interesting idea and associated method has emerged from the way big firms inspire small ones into association and small ones inspire big ones into venturization of initiatives. If you see particular departments in big firms as experiments, for which you collect data on impact and effectiveness, and if you see particular ventures as experiments for which you do the same, then you can call both together things like “experimentization” or application of “experimentrics”. The idea is structuring organization forms as if they were not permanent proven commitments but provisional experiments. This involves quite a few actual changes from current practice, however.
For forms thus treated, sales changes into a data gathering function, collecting the data that confirms whether the particulars of the tested form work or not. Management becomes enacting the experimental conditions and controls. First products released are data points, as are customer and competitor reactions to them.
The Fractal Alternative.
Whether in big firms watching coalitions of initiatives emerge within their bowels or in small ventures emerging into alliances with particular other ventures, you find sets of new functions, enabled by sets of related initiatives or ventures, with different competing sets allocating the functions differently across initiatives or ventures. It is not product or service competition that is involved in this, in real economies. The real competition precedes products and services. The real competition is for standards of interface, either among the new functions or between sets of them and existing technologies they interact with, depend on, or enhance. Standards favor some clumpings of functions and disfavor others. Once a particular clumping of functions is thus favored by an established or emerging standard, it collects, because it is viewed as the favored one, disproportionate support that self-fulfills, turning it into the winner by expectation. Its clumpings then fractally repeat in new sets of functions, and even in particular new functions that emerge to enhance its original product and service forms. Standards are function clumpings that fractally repeat on several size scales.
Fractal Repetition of What Type of Links.
Once a standard freezes a particular partition of function spaces in a new technical area, all sorts of social organizations, department and process structures in organizations, repeat that partition. The partition in the standard becomes sets of firms, sets of processes, sets of departments, sets of proposals, on a cascade downward of several different size scales. A particularly interesting fractal pattern that develops in these cases is a fractal cascade of venture firms. You find firms for each partition of functions and within such firms (using the word “within” loosely) you find sub-firms (departments or outsourcers) for relations between one partition’s product and products of other function partitions. In some cases two or more layers of venture firms repeat this fractal structure before outsourcer firms are involved.
Example System. Zeneca Pharmaceuticals’ highest sales district was Chicago some years ago. Rather than rest on laurels, the leadership of that district approached us about improving their already leading sales results. We suggested a questionnaire asking their physician and health maintenance organization customers how satisfied they were both with the products of Zeneca and with the process of being sold to by Zeneca. The results were shocking--customers hated the Zeneca sales process (which was about the same as every other pharmaceutical firm’s: multiply face exposure to physicians by visiting often, carrying tiny free samples for physicians to give to patients). The Zeneca leadership, introduced to the experimentrics idea, structured their Chicago district as three different sales approach experiment areas, changing management into experiment data collectors and analyzers and changing salespersons into customer data gatherers. Taguchi orthogonal array experiment designs were used and a winning sales approach found, thereafter replicated district wide and later globally. Instead of assuming they already knew how to sell, this company’s leadership experimented, got data proving they did not know how to sell, and invented new approaches, also tested experimentally, one of which proved powerful, vastly outselling their already leading sales performance before the experimenting began. The legacy of this work was a new function in sales, called the sales approach invention division, that yearly gathers from customers and employees suggested better sales approaches, forms them into experimental sales forces, and tries them out in a statistically valid portion of the Chicago district. This is fractal embedding of the experimentrics idea within the sales function of the company. Zeneca now leads its competition in both satisfaction with products and satisfaction with how customers are being sold to by Zeneca.
Benefits of Fractal Regularizations.
The fractal repetition of function space partitions in social and technical size scales creates a common configuration of boundaries from left to right and top to bottom within sectors of the economy and within entire industries. That familiar set of boundaries simplifies life and work, inviting repetition of system and solution in diverse circumstances, as the same boundary set reappears in other contexts.
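The Taguchi orthogonal-array experimenting described in the Zeneca example can be sketched in miniature. This is a minimal illustration, not the actual study: the three sales-approach factors and the response numbers below are hypothetical, and a real Taguchi analysis would also weigh factor interactions and signal-to-noise ratios. The point of the array is economy: four runs suffice to estimate the main effects of three two-level factors.

```python
# Sketch of a Taguchi-style orthogonal-array experiment, of the kind the
# Zeneca example describes. Factor names and response figures below are
# hypothetical illustrations, not data from the actual study.

# L4 orthogonal array: 4 runs cover the main effects of 3 two-level factors.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

factors = ["visit_frequency", "free_samples", "message_style"]

# Hypothetical sales responses, one per experimental run.
responses = [102.0, 110.0, 98.0, 125.0]

def main_effects(array, resp):
    """Average response at level 1 minus average at level 0, per factor."""
    n_factors = len(array[0])
    effects = []
    for f in range(n_factors):
        hi = [r for row, r in zip(array, resp) if row[f] == 1]
        lo = [r for row, r in zip(array, resp) if row[f] == 0]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

effects = main_effects(L4, responses)
for name, e in zip(factors, effects):
    print(f"{name}: {e:+.1f}")
```

The orthogonality of the array means each factor's two levels are balanced against every other factor's levels, so each main effect can be read off independently; a full factorial over the same three factors would need eight runs instead of four.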
Fractalization of Prose, Analogical Application 16: to Quality Globalization Freedom Self Contradicting into Chaos.
The total quality movement deals with quality of production, applying statistical tools and new social arrangements like quality circles. The environment movement deals with quality of the earth, applying social-movement and demonstration tools and new ways of living and of producing in industry. The human rights movement deals with quality of conflict, applying legal and media tools. Intellectual movements, like the spread of complexity theory, deal with quality of cognition, spreading methods from field to field over a period of a decade or two. There are at least ten such types of perpetual social movement going on globally at present. Each one deals with some type of quality--of production, of conflict, of mind. Each one is selfish, promoting its primary values as more important than those of other such movements. These movements compete, globally and locally, for media attention, for funds, for legal changes and frameworks. The bushyness here needing curing is the conflict, on all size scales, of these quality-related movements. Their competition with each other derationalizes budgeting and discussion, plans and institutional arrangements.
The Fractal Alternative.
Value-meshing practices are newly invented practices in particular locales that take the primary values of two or more of the above movements and get those diverse values to assist each other. The example most commonly cited meshes the quality-of-the-earth need to save elephants with the quality-of-life need of poor farmers to keep their families from starving. By inventing an elephant tourist park, the poor people guide tours, earning money for quality-of-life concerns while protecting the elephants instead of illegally killing them and selling off their parts. These value-meshing practice inventions are fractal embodiments of the primary values of several diverse movements. In such practice inventions you find the primary values of several different movements embedded somewhat harmoniously and always synergistically. Here local competition and choice possibilities are eschewed in favor of a fusing of values that produces a fractal cascade of ten movements embodied locally in one invented practice.
Fractal Repetition of What Type of Links. It is the primary values of each movement, locally embedded in invented new practices, that fractally repeat here. Example System. A politician in the US, concerned about “quality of leadership” in representative democracies and wanting to get practical vote power from
that theme, approached us about quality globalization’s uses in politics. We devised a form of campaign “quality circle” that embodied the primary values of ten global quality-related movements, executed in 600 local election district circles. Mass workshop events, in which portions or all of these circles gathered to execute expert procedures for inventing or managing one campaign function or another, were designed and held, with each such event fractally embodying the primary values of all ten quality-related movements. As a result, the national organizations and leaders of all these movements got involved in the local campaign on the side of this candidate, pouring energy from local members of each movement into his campaign. Here a political goal, supported by event-based management tactics, fractally took advantage of the local forces of ten global movements usually not cooperative with each other. A local value-meshing event practice poured these disparate forces into one candidate’s election efforts.
Benefits of Fractal Regularizations.
The invention of local practices that fuse, that is, simultaneously implement, the primary values of competing global quality-related movements creates fractal patterning on several size scales. You end up with single individuals who “belong” to plural competing global movements working to implement all those movements’ values locally by inventing new practices capable of combining all their primary values. This takes terrific dedication and clever inventiveness to pull off. It is made harder by the rigid top leaders of each movement jealously fighting the other movements instead of working for higher-level meshings and synergies. ISO 9000 and ISO 14000, the international standards for production quality and quality of the earth, symbolize the sequential, separate embodiment of such primary values, where quality globalization deals with simultaneous embodiment.