Knowledge Management Tools (Resources for the Knowledge-Based Economy)

What Next?

With the advent of more advanced technology, knowledge tools will become increasingly sophisticated. Two entertaining selections were chosen to represent some ideas for what knowledge tools might be able to do in the future.

These groups all contribute interesting notions about how the workings of the human mind might be duplicated and utilized. They can certainly facilitate the implementation of knowledge processes (the generation, transfer, and codification of knowledge), and in some cases they may be able to automate some kinds of knowledge work in these areas. Still, they must be taken in context and implemented as a part of the overall effort to leverage organizational knowledge through integration with the business strategy, the culture, the current processes, and the existing technologies.

This book represents a dialog about how tools can facilitate the knowledge processes of an organization. These selections should help make discussible the many issues involved in choosing and using knowledge tools. Once aware of the pros and cons, strengths and weaknesses of knowledge tools, each person should be ready to begin to make informed decisions about how to incorporate such tools into his or her organization. Not an easy task, but worth the hard work if done knowledgeably. Thus, all past and present digital computers perform basically the same kinds of symbol manipulations.

In programming a computer it is substantially irrelevant what physical processes and devices (electromagnetic, electronic, or what not) accomplish the manipulations. A program written in one of the symbolic programming languages, like ALGOL or FORTRAN, will produce the same symbolic output on a machine that uses electron tubes for processing and storing symbols, one that incorporates magnetic drums, one with a magnetic core memory, or one with completely transistorized circuitry. The program, the organization of symbol-manipulating processes, is what determines the transformation of input into output.

In fact, provided with only the program output, and without information about the processing speed, one cannot determine what kinds of physical devices accomplished the transformations: whether the program was executed by a solid-state computer, an electron-tube device, an electrical relay machine, or a room full of statistical clerks! Only the organization of the processes is determinate. Out of this observation arises the possibility of an independent science of information processing. The output of the processes, the behaviour of Homo cogitans, should reveal how the information processing is organized, without necessarily providing much information about the protoplasmic structures or biochemical processes that implement it.

From this observation follows the possibility of constructing and testing psychological theories to explain human thinking in terms of information processes. Finally, there is a growing body of evidence that the elementary information processes used by the human brain in thinking are highly similar to a sub-set of the elementary information processes that are incorporated in the instruction codes of present-day computers.

As a consequence it has been found possible to test information-processing theories of human thinking by formulating these theories as computer programs (organizations of the elementary information processes) and examining the outputs of computers so programmed. These, then, are the three propositions on which this discussion rests:

  1. A science of information processing can be constructed that is substantially independent of the specific properties of particular information-processing mechanisms.
  2. Human thinking can be explained in information-processing terms without waiting for a theory of the underlying neurological mechanisms.
  3. Information-processing theories of human thinking can be formulated in computer programming languages, and can be tested by simulating the predicted behaviour with computers.

The other sciences provide numerous precedents, perhaps the most relevant being nineteenth-century chemistry. The atomic theory and the theory of chemical combination were invented and developed rapidly and fruitfully during the first three-quarters of the nineteenth century (from Dalton, through Kekulé, to Mendeleev) without any direct physical evidence for or description of atoms, molecules, or valences.

To quote Pauling: Most of the general principles of molecular structure and the nature of the chemical bond were formulated long ago by chemists by induction from the great body of chemical facts. The study of the structure of molecules was originally carried on by chemists using methods of investigation that were essentially chemical in nature, relating to the chemical composition of substances, the existence of isomers, the nature of the chemical reactions in which a substance takes part, and so on.

From the consideration of facts of this kind Frankland, Kekulé, and others were led to formulate the theory of valence and to write the first structural formulas for molecules; van't Hoff and le Bel were led to bring classical organic stereochemistry into its final form by their brilliant postulate of the tetrahedral orientation of the four valence bonds of the carbon atom, and Werner was led to his development of the theory of the stereochemistry of complex inorganic substances. Psychologists who rejected the empty-head viewpoint, but who were sensitive to the demand for operational constructs, tended to counter the behaviourist objections by couching their theories in physiological language.

On the one hand, hypothetical entities, postulated because they were powerful and fruitful for organizing experimental evidence, proved exceedingly valuable in that science, and did not produce objectionable metaphysics. On the other hand, the hypothetical entities of atomic theory initially had no physical properties other than weight that could explain why they behaved as they did. While an electrical theory of atomic attraction predated valence theory, the former hypothesis actually impeded the development of the latter and had to be discredited before the experimental facts could fall into place.

So it remained for more than half a century until the electron-shell theory was developed by Lewis and others to explain it. Paralleling this example from chemistry, information-processing theories of human thinking employ unobserved entities (symbols) and unobserved processes (elementary information processes). The theories provide explanations of behaviour that are mechanistic without being physiological. That they are mechanistic (that they postulate only processes capable of being effected by mechanism) is guaranteed by simulating the behaviour predicted on ordinary digital computers.

Simulation provides a basis for testing the predictions of the theories, but does not imply that the protoplasm in the brain resembles the electronic components of the computer. In actual game positions where a checkmating possibility exists, a strong player may spend a quarter of an hour or more discovering it, and verifying the correctness of his strategy. In doing so, he may have to look ahead four or five moves, or even more. How do good players solve such problems?

How do they find combinations? A theory now exists that answers these questions in some detail. First, I shall describe what it asserts about the processes going on in the mind of the chess player as he studies the position before him, and what it predicts about his progress in discovering an effective strategy. Then we can see to what extent it accounts for the observed facts. Our account of the theory will be an English-language translation of the main features of the program. The first two of these specify the way in which the chess player stores in memory his representation of the chess position, and his representation of the moves he is considering, respectively.

The remaining parts of the theory specify the processes he has available for extracting information from these representations and using that information: processes for discovering relations among the pieces and squares of the chess position, for synthesizing chess moves for consideration, and for organizing his search among alternative move sequences. We shall describe briefly each of these five parts of the theory.

The theory asserts, first of all, that the human chess player has means for storing internally, in his brain, encoded representations of the stimuli presented to him. In the case of a highly schematized stimulus like a chess position, the internal symbolic structure representing it can be visualized as similar to the printed diagram used to represent it in a chess book. The internal representation employs symbols that name the squares and the pieces, and symbolizes the relations among squares, among pieces, and between squares and pieces.

On the other hand, the representation does not symbolize directly that two pieces stand on the same diagonal. Relations like this must be discovered or inferred from the representation by the processes to be discussed below. Asserting that a position is symbolized internally in this way does not mean that the internal representations are verbal, any more than the diagrams in a chess book are verbal. The player has symbol-manipulating processes that enable him, from his representations of a position and of a move, to use the latter to modify the former (the symbolic structure that describes the position) into a new structure that represents what the position would be after the move.
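As a rough illustration (a hypothetical Python sketch, not the notation of the actual program), a position can be held as a structure that associates piece names with square names, and a move can be applied to that structure to yield the successor position:

    # Hypothetical sketch: a position as a symbol structure mapping square
    # names to piece names; applying a move rewrites it into a new structure.
    def apply_move(position, move):
        """Return the position structure produced by making `move`."""
        successor = dict(position)                 # copy the old structure
        del successor[move["from"]]                # the piece leaves its square
        successor[move["to"]] = move["piece"]      # and occupies (or captures on) the new one
        return successor

    position = {"e1": "white king", "d1": "white queen", "e8": "black king"}
    move = {"from": "d1", "to": "d8", "piece": "white queen"}
    print(apply_move(position, move))
    # {'e1': 'white king', 'e8': 'black king', 'd8': 'white queen'}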

The chess player has processes that enable him to discover new relations in a position, to symbolize these, and to store the information in memory. For example, in a position he is studying (whether the actual one on the board, or one he has produced by considering moves), he can discover whether his King is in check (attacked by an enemy man); or whether a specified piece can move to a specified square; or whether a specified man is defended.

The processes for detecting such relations are usually called perceptual processes. They are characterized by the fact that they are relatively direct: they obtain the desired information from the representation with a relatively small amount of manipulation. The chess player has processes, making use of the perceptual processes, that permit him to generate or synthesize for his consideration moves with specified properties, for example, to generate all moves that will check the enemy King.

To generate moves having desired characteristics may require a considerable amount of processing. An example of these more complex, indirect processes is a procedure that would discover certain forking moves (moves that attack two pieces simultaneously) somewhat as follows: find the square of the opposing Queen; find the squares from which the Queen and another piece could be attacked simultaneously; and determine for each of these squares whether it is defended (whether an opposing piece can move to it).
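A minimal sketch of such a procedure (hypothetical Python, specialised here to knight forks against the opposing King and Queen; the coordinates and helper names are illustrative, not from the text):

    KNIGHT_JUMPS = [(1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

    def sq(name):
        """Convert algebraic notation such as 'e4' into (file, rank) numbers."""
        return (ord(name[0]) - ord("a"), int(name[1]) - 1)

    def knight_targets(square):
        """Squares a knight standing on `square` could move to."""
        f, r = square
        return {(f + df, r + dr) for df, dr in KNIGHT_JUMPS
                if 0 <= f + df < 8 and 0 <= r + dr < 8}

    def knight_fork_squares(knight_sq, king_sq, queen_sq):
        """Squares reachable in one move from which King and Queen are both attacked."""
        # a fuller procedure would also test whether each such square is defended
        return {s for s in knight_targets(knight_sq)
                if king_sq in knight_targets(s) and queen_sq in knight_targets(s)}

    print(knight_fork_squares(sq("e4"), king_sq=sq("g8"), queen_sq=sq("e8")))
    # {(5, 5)}  -> the square f6 forks the King and Queen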

This search makes use of the processes already enumerated, and proceeds as follows. If there are no checking moves, he concludes that no checkmating combination can be discovered in the position, and stops his search. If, for one of the checking moves, he discovers there are no legal replies, he concludes that the checking move in question is a checkmate. If, for one of the checking moves, he discovers that the opponent has more than four replies, he concludes that this checking move is unpromising, and does not explore it further. That is, he considers each of its replies in turn, generates the checking moves available after those replies, and the replies to those checking moves.

The most promising checking move for further exploration is selected by these criteria: the checking move to which there are the fewest replies receives first priority. If two or more checking moves are tied on this criterion, a double check (a check with two pieces) is given priority over a single check. If there is still a tie, a check that does not permit a recapture by the opponent is given priority over one that does. Any remaining ties are resolved by selecting the check generated most recently.
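These tie-breaking criteria translate directly into an ordering rule. A hypothetical Python sketch (the field names are illustrative, not those of the original program):

    def pick_most_promising(checks):
        """checks: candidate checking moves in order of generation (earliest first)."""
        def priority(indexed):
            order, move = indexed
            return (move["replies"],               # fewest replies first
                    not move["double_check"],      # then prefer double checks
                    move["allows_recapture"],      # then checks that permit no recapture
                    -order)                        # finally, the most recently generated
        return min(enumerate(checks), key=priority)[1]

    candidates = [
        {"name": "Qh5+", "replies": 2, "double_check": False, "allows_recapture": True},
        {"name": "Nf6+", "replies": 2, "double_check": True,  "allows_recapture": False},
        {"name": "Rd8+", "replies": 3, "double_check": False, "allows_recapture": False},
    ]
    print(pick_most_promising(candidates)["name"])    # Nf6+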

The theory predicts, for any chess position that is presented to it, whether a chess player will discover a mating combination in that position, what moves he will consider and explore in his search for the combination, and which combination (if there are several alternatives, as there often are) he will discover. These predictions can be compared directly with published analyses of historical chess positions, or tape recordings of the thinking-aloud behaviour of human chess players to whom the same position is presented.

Now it is unlikely that, if a chess position were presented to a large number of players, all of them would explore it in exactly the same way. Certainly strong players would behave differently from weak players. Hence, the information-processing theory, if it is a correct theory at all, must be a theory only for players of a certain strength. On the other hand, we would not regard its explanation of chess playing as very satisfactory if we had to construct an entirely new theory for each player we studied. Matters are not so bad, however. First, the interpersonal variations in search for chess moves in middle-game positions appear to be quite small for players at a common level of strength, as we shall see in a moment.

Second, some of the differences that are unrelated to playing strength appear to correspond to quite simple variants of the program (altering, for example, the criteria that are used to select the most promising checking move for exploration). Other differences, on the other hand, have major effects on the efficacy of the search, and some of these, also, can be represented quite simply by variants of the program organization. Thus, the basic structure of the program, and the assumptions it incorporates about human information-processing capacities, provide a general explanation for the behaviour, while particular variants of this basic program allow specific predictions to be made of the behavioural consequences of individual differences in program organization and content.

The kinds of information the theory provides, and the ways in which it has been tested, can be illustrated by a pair of examples. Adrian de Groot has gathered and analysed a substantial number of thinking-aloud protocols, some of them from grand masters. The thinking-aloud technique probably underestimates the size of the search tree somewhat, for a player may fail to mention some variations he has seen, but the whole tree is probably not an order of magnitude greater than that reported. In 40 positions from a standard published work on mating combinations where the information-processing theory predicted that a player would find mating strategies, the median size of its search tree ranged from 13 positions for two-move mates to 53 for five-move mates.

A six-move mate was found with a tree of 95 positions, and an eight-move mate was found as well. The second example tests a much more detailed feature of the theory. In the eight-move mate mentioned above, it had been known that by following a different strategy the mate could have been achieved in seven moves. Both the human grand master (Edward Lasker, in the game Lasker-Thomas) and the program found the eight-move mate. Examination of the exploration shows that the shorter sequence could only have been discovered by exploring a branch of the tree that permitted the defender two replies before exploring a branch that permitted a single reply.

The evidence was discovered after the theory was constructed. A second piece of evidence of the same sort has been found in a game between experts reported in Chess Life (December). The winner discovered a seven-move mate, but overlooked the fact that he could have mated in three moves.

The annotator of the game, a master, also overlooked the shorter sequence. Again, it could only have been found by exploring a check with two replies before exploring one with a single reply. Since the tree branches geometrically, solving a problem of any difficulty would call for a search of completely unmanageable scope (very large numbers arise frequently in estimating the magnitude of such searches), if there were not at hand powerful heuristics, or rules of thumb, for selecting the promising branches for exploration.

Such heuristics permit the discovery of proofs for theorems and mating combinations with the limited explorations reported here. Hence its appearance in the chess-playing theory, and in the behaviour of the human players, is not fortuitous. It somehow fails to conform to our usual notions of generality and parsimony in theory.

First, it is highly specific-the checkmating theory purports to provide an explanation only of how good chess players behave when they are confronted with a position on the board that calls for a vigorous mating attack. If we were to try to explain the whole range of human behaviour, over all the enormous variety of tasks that particular human beings perform, we should have to compound the explanations from thousands of specific theories like the checkmate program. We used about a thousand words above to provide an approximate description of the checkmate program.

Before we recoil from this unwieldy compendium as too unpleasant and unaesthetic to contemplate, let us see how it compares in bulk with theories in the other sciences, with the simplicity of Newtonian mechanics, say (why is this always the first example to which we turn?). If classical mechanics is the model, then a theory should consist of three sentences, or a couple of differential equations.

But chemistry, and particularly organic chemistry, presents a different picture. The figure referred to represents more than 40 distinct chemical reactions and a corresponding number of compounds. This diagram, of course, is far from representing the whole theory. Not only does it omit much of the detail, but it contains none of the quantitative considerations for predicting reaction rates, energy balances, and so on. The verbal description accompanying the figure, which also has little to say about the quantitative aspects, or the energetics, is over two pages in length, almost as long as our description of the chess-playing program.

Here we have a clearcut example of a theory of fundamental importance that has none of the parsimony we commonly associate with scientific theorizing. The answer to the question of how photosynthesis proceeds is decidedly long-winded-as is the answer to the question of how chess players find mating combinations. We are often satisfied with such long-winded answers because we believe that the phenomena are intrinsically complex, and that no brief theory will explain them in detail.

We must adjust our expectations about the character of information-processing theories of human thinking to a similar level. Such theories, to the extent that they account for the details of the phenomena, will be highly specific and highly complex. Part of our knowledge in chemistry (and a very important part for the experimental chemist) consists of vast catalogues of substances and reactions, not dissimilar in bulk to the compendium of information processes we are proposing.

The substances, at this more basic level, become geometrical arrangements of particles from a small set of more fundamental substances-atoms and sub-molecules-held together by a variety of known forces whose effects can be estimated qualitatively and, in simple cases, quantitatively. If we examine an information-processing theory like the checkmating program more closely, we find that it, too, is organized from a limited number of building blocks-a set of elementary information processes-and some composite processes that are compounded from the more elementary ones in a few characteristic ways.

Let us try to describe these building blocks in general terms. First, we shall characterize the way in which symbols and structures of symbols are represented internally and held in memory. Then, we shall mention some of the principal elementary processes that alter these symbol structures. It is postulated that tokens can be compared, and that comparison determines that the tokens are occurrences of the same symbol (symbol type), or that they are different.

Symbol tokens are arranged in larger structures, called lists. A list is an ordered set, a sequence, of tokens. Hence, with every token on a list, except the last, there is associated a unique next token. Associated with the list as a whole is a symbol, its name. Thus, a list may be a sequence of symbols that are themselves names of lists-a list of lists.

A familiar example of a list of symbols that all of us carry in memory is the alphabet. Associations also exist between symbol types. An association is a two-termed relation, involving three symbols, one of which names the relation, the other two its arguments.

Some Elementary Processes

A symbol, a list and an association are abstract objects. Their properties are defined by the elementary information processes that operate on them. One important class of such processes are the discrimination processes. The basic discrimination process, which compares symbols to determine whether or not they are identical, has already been mentioned.

Pairs of compound structures (lists and sets of associations) are discriminated from each other by matching processes that apply the basic tests for symbol identity to symbols in corresponding positions in the two structures. For example, two chess positions can be discriminated by a matching process that compares the pieces standing on corresponding squares in the two positions. These processes are involved, for example, in fixating or memorizing symbolic materials presented to the sense organs, such as learning a tune. Still another class of elementary information processes finds information that is in structures stored in memory.

Thus, there must be processes for finding named objects, for finding symbols on a list, for finding the next symbol on a list, and for finding the value of an attribute of an object. This list of elementary information processes is modest, yet provides an adequate collection of building blocks to implement the chess-playing theory as well as the other information-processing theories of thinking that have been constructed to date: including a general problem-solving theory, a theory of rote verbal learning, and several theories of concept formation and pattern recognition, among others.
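A minimal, hypothetical rendering of these building blocks in Python (the names of the structures and helper functions are illustrative, not taken from the text):

    memory = {
        "alphabet": list("ABCDEFGHIJKLMNOPQRSTUVWXYZ"),      # a named list held in memory
    }
    associations = {
        ("type", "white knight"): "knight",                  # relation(argument) = value
        ("colour", "white knight"): "white",
    }

    def find_list(name):
        """Elementary process: find a named object in memory."""
        return memory[name]

    def next_symbol(list_name, symbol):
        """Elementary process: find the symbol that follows `symbol` on a list."""
        items = find_list(list_name)
        return items[items.index(symbol) + 1]

    def attribute_value(relation, obj):
        """Elementary process: find the value of an attribute of an object."""
        return associations[(relation, obj)]

    print(next_symbol("alphabet", "R"))                      # S
    print(attribute_value("type", "white knight"))           # knight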

Moves are similarly represented as symbols with which are associated the names of the squares from which and to which the move was made, the name of the piece moved, the name of the piece captured, if any, and so on. Similarly, the processes for manipulating these representations are compounded from the elementary processes already described. Another example: testing whether the King is in check involves finding the square associated with the King, finding adjoining squares along ranks, files and diagonals, and testing these squares for the presence of enemy men who are able to attack in the appropriate direction.

The latter is determined by associating with each man his type, and associating with each type of man the directions in which such men can legally be moved. We see that, although the chess-playing theory contains several thousand program instructions, these are composed of only a small number of elementary processes, far fewer than the number of elements in the periodic table. The elementary processes combine in a few simple ways into compound processes and operate on structures (lists and descriptions) that are constructed, combinatorially, from a single kind of building block: the symbol.
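A hypothetical Python sketch of the check test just described (only sliding pieces are handled, and all names are illustrative): scan outward from the King's square along ranks, files and diagonals, and ask of the first man met whether his type is associated with a direction of attack back along that line.

    DIRECTIONS = {"rook":   [(1, 0), (-1, 0), (0, 1), (0, -1)],
                  "bishop": [(1, 1), (1, -1), (-1, 1), (-1, -1)]}
    DIRECTIONS["queen"] = DIRECTIONS["rook"] + DIRECTIONS["bishop"]

    def king_in_check(position, king_sq, enemy_colour):
        """position maps (file, rank) -> (colour, type); sliding pieces only."""
        for step in DIRECTIONS["queen"]:                # every rank, file and diagonal
            f, r = king_sq
            while 0 <= f + step[0] < 8 and 0 <= r + step[1] < 8:
                f, r = f + step[0], r + step[1]
                man = position.get((f, r))
                if man is None:
                    continue                            # empty square: keep scanning this ray
                colour, kind = man
                if colour == enemy_colour and (-step[0], -step[1]) in DIRECTIONS.get(kind, []):
                    return True                         # the first man met attacks back along the ray
                break                                   # the ray is blocked by this man
        return False

    board = {(4, 0): ("white", "king"), (4, 7): ("black", "rook")}
    print(king_in_check(board, (4, 0), "black"))        # True: the rook on e8 checks e1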

Processes in Serial Pattern Recognition

A second example of how programs compounded from the elementary processes explain behaviour is provided by an information-processing theory of serial pattern recognition. An experimental subject in the laboratory, asked to extrapolate the series, will, after a little thought, continue: GHM, etc.

To see how he achieves this result, we examine the original sequence. First, it makes use of letters of the Roman alphabet. We can assume that the subject holds this alphabet in memory stored as a list, so that the elementary list process for finding the NEXT item on a list can find B, given A, or find S, given R, and so on. The first letter in each period is NEXT in the alphabet to the second letter in the previous period.

The second letter in each period is NEXT in the alphabet to the first letter in that period. The third letter in each period is the SAME as the corresponding letter in the previous period. Several closely related information-processing theories of human pattern recognition have been constructed using elementary processes for finding and generating the NEXT item in a list (see Feldman, Tonge and Kanter; Laughery and Gregg; and Simon and Kotovsky). These theories have succeeded in explaining some of the main features of human behaviour in a number of standard laboratory tasks, including so-called binary choice tasks, and series-completion and symbol-analogy tasks from intelligence tests.
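A hypothetical Python sketch of the NEXT/SAME rules just described, applied to a three-letter-period series consistent with those rules and with the stated continuation GHM (the letters of the original series are not reproduced in this excerpt):

    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

    def next_letter(letter):
        return ALPHABET[ALPHABET.index(letter) + 1]    # the NEXT relation on the alphabet list

    def extend(periods, how_many=1):
        """periods: list of 3-letter groups, e.g. ['ABM', 'CDM', 'EFM']."""
        out = list(periods)
        for _ in range(how_many):
            prev = out[-1]
            first = next_letter(prev[1])               # NEXT to the previous period's second letter
            second = next_letter(first)                # NEXT to this period's first letter
            third = prev[2]                            # SAME as the previous period's third letter
            out.append(first + second + third)
        return out

    print(extend(["ABM", "CDM", "EFM"]))               # [..., 'GHM']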

The nature of the series-completion task has already been illustrated. As each one is presented to him, he is asked what the next one will be. The actual sequence is, by construction, random. The evidence shows that, even when the subjects are told this, they rarely treat it as random. Instead, they behave as though they were trying to detect a serial pattern in the sequence and extrapolate it. They behave essentially like subjects faced by the series-completion task, and basically similar information-processing theories using the same elementary processes can explain both behaviours.

It is clear that there is no prospect of eliminating all idiosyncratic elements from the individual theories. A theory to explain chess-playing performances must postulate memory structures and processes that are completely irrelevant to proving theorems in geometry, and vice versa. This, in fact, appears to be the case. The first information-processing theory that isolated some of these common components was called the General Problem Solver (Newell and Simon).

Means-End Analysis

The General Problem Solver is a program organized to keep separate (1) problem-solving processes that, according to the theory, are possessed and used by most human beings of average intelligence when they are confronted with any relatively unfamiliar task environment, from (2) specific information about each particular task environment.

The core of the General Problem Solver is an organization of processes for means-end analysis. The problem is defined by specifying a given situation, A, and a desired situation, B. A discrimination process incorporated in the system of means-end analysis compares A with B, and detects one or more differences, D, between them, if there are any.

With each difference, there is associated in memory a set of operators, O, or processes, that are possibly relevant to removing differences of that kind. The means-end analysis program proceeds to try to remove the difference by applying, in turn, the relevant operators. The operator that replaces tan by sin/cos will eliminate one of these. The left-hand side still contains an extraneous function, cosine. The algebraic cancellation operator, applied to the two cosines, might remove this difference.
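A minimal, hypothetical Python sketch of this cycle (detect a difference, look up the operators associated with that kind of difference, apply one that makes progress). The tiny example uses sets of "extraneous functions" to stand in for the trigonometric identity discussed above; all names are illustrative.

    def means_end_step(current, goal, detect_differences, operators_for):
        """One cycle of means-end analysis; returns (new state, finished?)."""
        differences = detect_differences(current, goal)
        if not differences:
            return current, True                      # no differences: the goal is reached
        for diff in differences:
            for operator in operators_for(diff):      # operators associated with this difference
                candidate = operator(current)
                if detect_differences(candidate, goal) != differences:
                    return candidate, False           # some difference removed; continue from here
        return current, False                         # no relevant operator helped

    def detect(current, goal):
        return sorted(current - goal)                 # functions present but not wanted

    operators = {"tan": [lambda s: (s - {"tan"}) | {"sin", "cos"}],   # tan -> sin/cos
                 "cos": [lambda s: s - {"cos"}]}                      # cancel the cosines

    state, done = {"tan"}, False
    while not done:
        state, done = means_end_step(state, {"sin"}, detect, lambda d: operators.get(d, []))
    print(state)                                      # {'sin'}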

Planning Process

Another class of general processes discovered in human problem-solving performances and incorporated in the General Problem Solver consists of planning processes. The essential idea in planning is that the representation of the problem situation is simplified by deleting some of the detail.

A solution is now sought for the new, simplified, problem, and if one is found, it is used as a plan to guide the solution of the original problem, with the detail reinserted. Consider a simple problem in logic. Common principles for the organization of the executive processes have begun to appear in several of the theories. The general idea has already been outlined above for the chess-playing program.

In this program the executive routine cycles between an exploration (search) phase and an evaluation (scan) phase. During the exploration phase, the available problem-solving processes are used to investigate sub-goals. The information obtained through this investigation is stored in such a way as to be accessible to the executive.

During the evaluation phase, the executive uses this information to determine which of the existing sub-goals is the most promising and should be explored next. An executive program organized in this way may be called a search-scan scheme, for it searches an expanding tree of possibilities, which provides a common pool of information for scanning by its evaluative processes. If search takes place in long sequences, interrupted only infrequently to scan for possible alternative directions of exploration, the problem solver suffers from stereotypy.

Having initiated search in one direction, it tends to persist in that direction as long as the sub-routines conducting the search determine, locally, that the possibilities for exploration have not been exhausted. These determinations are made in a very decentralized way, and without benefit of the more global information that has been generated.

On the other hand, if search is too frequently interrupted to consider alternative goals to the one being pursued currently, the exploration takes on an uncoordinated appearance, wandering indecisively among a wide range of possibilities. In both theorem-proving and chess-playing programs, extremes of decentralized and centralized control of search have shown themselves ineffective in comparison with a balanced search-scan organization.
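A hypothetical Python sketch of such a search-scan executive (all names and the toy task are illustrative, and a real program would also record which sub-goals have already been explored). The `steps_per_scan` parameter is the balance just discussed: a very large value gives stereotyped search, a very small one gives indecisive wandering.

    import heapq, itertools

    def search_scan(start, expand, promise, is_goal, steps_per_scan=3):
        tie = itertools.count()                        # breaks ties in the heap
        pool = [(-promise(start), next(tie), start)]   # the common pool of sub-goals
        while pool:
            _, _, current = heapq.heappop(pool)        # scan: pick the most promising sub-goal
            for _ in range(steps_per_scan):            # search: explore it locally for a while
                if is_goal(current):
                    return current
                children = expand(current)
                if not children:
                    break                              # dead end: go back to scanning
                for child in children:
                    heapq.heappush(pool, (-promise(child), next(tie), child))
                current = max(children, key=promise)   # keep pushing in the same direction
            else:
                # local budget exhausted: return the frontier node to the pool
                heapq.heappush(pool, (-promise(current), next(tie), current))
        return None

    # Toy usage: reach 24 from 3, where the problem-solving processes are "+1" and "x2".
    print(search_scan(3, lambda n: [n + 1, n * 2],
                      promise=lambda n: -abs(24 - n),
                      is_goal=lambda n: n == 24))      # 24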

Discrimination Trees

Common organizational principles are also emerging for the rote memory processes involved in almost all human performance. As a person tries to prove a theorem, say, certain expressions that he encounters along the way gradually become familiar to him and his ability to discriminate among them gradually improves.

This theory is able to explain, for instance, how familiarity and similarity of materials affect rates of learning. The result of discrimination is to find a memory location where information is stored about objects that are similar to the one sorted. Familiarization processes create new compound objects out of previously familiar elements. Similarly, the English alphabet, used by the serial pattern-recognizing processes, is a familiar object compounded from the letters arranged in a particular sequence. Because discrimination trees play a central role in EPAM, the program may also be viewed as a theory of pattern detection, and EPAM-like trees have been incorporated in certain information-processing theories of concept formation.
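A hypothetical Python sketch of an EPAM-like discrimination tree: each interior node tests one feature (here, the letter at one position of a nonsense syllable), sorting sends an item down the tests to a leaf, and familiarization grows the tree just enough to keep newly learned items apart. The class and function names are illustrative, not EPAM's own.

    class Node:
        def __init__(self):
            self.image = None            # the familiarized item stored at this leaf, if any
            self.test_position = None    # which letter an interior node tests
            self.branches = {}           # letter -> sub-tree

    def sort_item(node, item):
        """Follow the tests down the tree to the leaf where `item` belongs."""
        while node.test_position is not None:
            letter = item[node.test_position]
            node = node.branches.setdefault(letter, Node())
        return node

    def familiarize(root, item):
        """Store `item`, splitting a leaf whenever two different items collide."""
        leaf = sort_item(root, item)
        if leaf.image is None or leaf.image == item:
            leaf.image = item
            return
        old, leaf.image = leaf.image, None
        # grow the tree: test the first position at which the two items differ
        leaf.test_position = next(i for i, (a, b) in enumerate(zip(old, item)) if a != b)
        for thing in (old, item):
            branch = Node()
            branch.image = thing
            leaf.branches[thing[leaf.test_position]] = branch

    net = Node()
    for syllable in ["DAX", "DOK", "GUB"]:
        familiarize(net, syllable)
    print(sort_item(net, "DAX").image)   # DAX, retrieved via its distinguishing second letter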

We now know, for example, some of the central processes that are employed in solving problems, in detecting and extrapolating patterns, and in memorizing verbal materials. Information-processing theories explain behaviour at various levels of detail. In the theories now extant, at least three levels can be distinguished. At the most aggregative level are theories of complex behaviour in specific problem domains: proving theorems in logic or geometry, discovering checkmating combinations in chess. These theories tend to contain very extensive assumptions about the knowledge and skills possessed by the human beings who perform these activities, and about the way in which this knowledge and these skills are organized and represented internally.

Hence each of these theories incorporates a rather extensive set of assumptions, and predicts behaviour only in a narrow domain. At a second level, similar or identical information-processing mechanisms are common to many of the aggregative theories. Means-end analysis, planning, the search-scan scheme, and discrimination trees are general-purpose organizations for processing that are usable over a wide range of tasks.

As the nature of these mechanisms becomes better understood, they, in turn, begin to serve as basic building blocks for the aggregative theories, allowing the latter to be stated in more parsimonious form, and exhibiting the large fraction of machinery that is common to all, rather than idiosyncratic to individual tasks. The construction and successful testing of large-scale programs that simulate complex human behaviours provide evidence that a small set of elements, similar to those now postulated in information-processing languages, is sufficient for the construction of a theory of human thinking.

Although none of the advances that have been described constitute explanations of human thought at the still more microscopic, physiological level, they open opportunities for new research strategies in physiological psychology. As the information-processing theories become more powerful and better validated, they disclose to the physiological psychologist the fundamental mechanisms and processes that he needs to explain. He need no longer face the task of building the whole long bridge from microscopic neurological and molecular structures to gross human behaviour, but can instead concentrate on bridging the much shorter gap from physiology to elementary information processes.

NOTES

1. The best-known exponent of this radical behaviourist position is Professor B. See also Hebb, Chap. Hence to look ahead four or five moves is to consider sequences of eight or ten successive positions. A general account of this program, with the results of some hand simulations, can be found in Simon and Simon, pp. The theory described there has subsequently been programmed and the hand-simulated findings confirmed on a computer. This is perhaps the most important element in the strategy.

It will be discussed further later. The beginnings of such a compendium have already appeared. A convenient source for descriptions of a number of the information-processing theories is the collection by Feigenbaum and Feldman. Of course, even Newtonian mechanics is not at all this simple in structure. See Simon, pp. Only a few of the characteristics of list-processing systems can be mentioned here.

For a fuller account, see Newell and Simon , especially pp. Evidence as to how information is symbolized in the brain is almost non-existent. If the reader is assisted by thinking of different symbols as different macromolecules, this metaphor is as good as any. A few physiologists think it may even be the correct explanation. See Hyden , pp.

Differing patterns of neural activity will do as well. See Adey, Kador, Didio and Schindler, pp. For examples, see Feigenbaum and Feldman, Part 2. Perhaps the earliest use of the search-scan scheme appeared in the Logic Theorist, the first heuristic theorem-proving program. See Newell and Simon.

Calvin, M. New York: W.
De Groot, A. The Hague: Mouton.
Feigenbaum, E. New York: McGraw-Hill.
Feldman, J. Cincinnati: South-Western Publishing.
Hebb, D. New York: Wiley.
Philadelphia: Saunders.
Hubel, D.
Laughery, K.
Lettvin, J.
Newell, A.
Pauling, L. The Nature of the Chemical Bond. Ithaca: Cornell University Press, 3rd ed.
Simon, H.
Wann, T. Chicago: University of Chicago Press.


    And now they believe we are not far from realizing that dream. The Department of Defense (DOD) is sinking millions of dollars into developing fully autonomous war machines that will respond to a crisis without human intervention. But no matter how many billions of dollars the Defense Department or any other agency invests in AI, there is almost no likelihood that scientists can develop machines capable of making intelligent decisions.

    After 25 years of research, AI has failed to live up to its promise, and there is no evidence that it ever will. The dangers of turning over the battlefield completely to machines are obvious. But it would also be a mistake to replace skilled air-traffic controllers, seasoned business managers, and master teachers with computers that cannot come close to their level of expertise.

    We wish to stress that we are not Luddites. There are obvious tasks for which computers are appropriate and even indispensable. Computers are more deliberate, more precise, and less prone to exhaustion and error than the most conscientious human being. They can also store, modify, and tap vast files of data more quickly and accurately than humans can. Hence, they can be used as valuable tools in many areas.

    As word processors and telecommunication devices, for instance, computers are already changing our methods of writing and our notions of collaboration. However, we believe that trying to capture more sophisticated skills within the realm of electronic circuits-skills involving not only calculation but also judgment-is a dangerously misguided effort and ultimately doomed to failure. Does that mean we can formulate specific rules to teach someone else how to do it? How would we explain the difference between the feeling of falling over and the sense of being slightly off-balance when turning?

    And do we really know, until the situation occurs, just what we would do in response to a certain wobbly feeling? That know-how is not accessible to us in the form of facts and rules. We know how to walk. Yet the mechanics of walking on two legs are so complex that the best engineers cannot come close to reproducing them in artificial devices. We have to learn it.

    Small children learn through trial and error, often by imitating those who are proficient. Instead, people usually pass through five levels of skill: novice, advanced beginner, competent, proficient, and expert. Only when we understand this dynamic process can we ask how far the computer could reasonably progress. During the novice stage, people learn facts relevant to a particular skill and rules for action that are based on those facts. For instance, car drivers learning to operate a stick shift are told at what speed to shift gears and at what distance (given a particular speed) to follow other cars.

    These rules ignore context, such as the density of traffic or the number of stops a driver has to make. Similarly, novice chess players learn a formula for assigning pieces point values independent of their position. After much experience in real situations, novices reach the advanced-beginner stage. Advanced-beginner drivers pay attention to situational elements, which cannot be defined objectively. For instance, they listen to engine sounds when shifting gears. They can also distinguish between the behavior of a distracted or drunken driver and that of the impatient but alert driver.

    Advanced-beginner chess players recognize and avoid overextended positions. In all these cases, experience is immeasurably more important than any form of verbal description. But soon they must put the rules aside to proceed. For example, at the competent stage, drivers no longer merely follow rules; they drive with a goal in mind.

    If they wish to get from point A to point B very quickly, they choose their route with an eye to traffic but not much attention to passenger comfort. Removing pieces that defend the enemy king becomes their overriding objective, and to reach it these players will ignore the lessons they learned as beginners and accept some personal losses.

    A crucial difference between beginners and more competent performers is their level of involvement. Novices and advanced beginners feel little responsibility for what they do because they are only applying learned rules; if they foul up, they blame the rules instead of themselves. But competent performers, who choose a goal and a plan for achieving it, feel responsible for the result of their choices. A successful outcome is deeply satisfying and leaves a vivid memory.

    Likewise, disasters are not easily forgotten. Yet in our everyday behavior, this model of decision making (the detached, deliberate, and sometimes agonizing selection among alternatives) is the exception rather than the rule. Proficient performers do not rely on detached deliberation in going about their tasks. Proficient performers recall whole situations from the past and apply them to the present without breaking them down into components or rules.

    Rather, the whole visual scene triggers the memory of similar earlier situations in which an attack was successful. The boxer is using his intuition, or know-how. Intuition should not be confused with the reenactment of childhood patterns or any of the other unconscious means by which human beings come to decisions. Nor is guessing what we mean by intuition. To guess is to reach a conclusion when one does not have enough knowledge or experience to do so. Intuition or know-how is the sort of ability that we use all the time as we go about our everyday tasks.

    Ironically, it is an ability that our tradition has acknowledged only in women and judged inferior to masculine rationality. While using their intuition, proficient performers still find themselves thinking analytically about what to do. For instance, when proficient drivers approach a curve on a rainy day, they may intuitively realize they are going too fast. They then consciously decide whether to apply the brakes, remove their foot from the accelerator, or merely reduce pressure on the accelerator. Proficient marketing managers may intuitively realize that they should reposition a product.

    They may then begin to study the situation, taking great pride in the sophistication of their scientific analysis while overlooking their much more impressive talent-that of recognizing, without conscious thought, the simple existence of the problem. The final skill level is that of expert. Experts generally know what to do because they have a mature and practiced understanding. When deeply involved in coping with their environment, they do not see problems in some detached way and consciously work at solving them.

    The skills of experts have become so much a part of them that they need be no more aware of them than they are of their own bodies. Airplane pilots report that as novices they felt they were flying their planes, but as experienced pilots they simply experience flying itself. Grand masters of chess, engrossed in a game, are often oblivious to the fact that they are manipulating pieces on a board. Instead, they see themselves as participants in a world of opportunities, threats, strengths, weaknesses, hopes, and fears. When playing rapidly, they sidestep dangers as automatically as teenagers avoid missiles in a familiar video game.

    One of us, Stuart, knows all too well the difference between expert and merely competent chess players; he is stuck at the competent level. He took up chess as an outlet for his analytic talent in mathematics, and most of the other players on his college team were also mathematicians.

    At some point, a few of his teammates who were not mathematicians began to play fast five- or ten-minute games of chess, and also began eagerly to replay the great games of the grand masters. Stuart and his mathematical friends, however, felt that they could learn nothing from the grandmaster games, since the record of those games seldom if ever provided specific rules and principles. Some of his teammates who played fast chess and studied grand-master games absorbed a great deal of concrete experience and went on to become chess masters.

    Yet Stuart and his mathematical friends never got beyond the competent level. Stuart says he is glad that his analytic approach to chess stymied his progress because it helped him to see that there is more to skill than reasoning. When things are proceeding normally, experts do not solve problems by reasoning; they do what normally works. Expert air-traffic controllers do not watch blips on a screen and deduce what must be going on in the sky.

    Skilled outfielders do not take the time to figure out where a ball is going. Unlike novices, they simply run to the right spot. Even with his analytical mind apparently jammed by adding numbers, Kaplan more than held his own against the master in a series of games. Deprived of the time necessary to see problems or construct plans, Kaplan still produced fluid and coordinated play. As adults acquire skills, what stands out is their progression from the analytic behavior of consciously following abstract rules to skilled behavior based on unconsciously recognizing new situations as similar to remembered ones.

    Conversely, small children initially understand only concrete examples and gradually learn abstract reasoning. Perhaps it is because this pattern in children is so well known that adult intelligence is so often misunderstood. By now it is evident that there is more to intelligence than calculative rationality.

    In fact, experts who consciously reason things out tend to regress to the level of a novice or, at best, a competent performer. One expert pilot described an embarrassing incident that illustrates this point. Once he became an instructor, his only opportunity to fly the four-jet KCs at which he had once been expert was during the return flights he made after evaluating trainees.

    He was approaching the landing strip on one such flight when an engine failed. This is technically an emergency, but an experienced pilot will effortlessly compensate for the pull to one side. Being out of practice, our pilot thought about what to do and then overcompensated. He then consciously corrected himself, and the plane shuddered violently as he landed. Consciously using rules, he had regressed to flying like a beginner.

    This is not to say that deliberative rationality has no role in intelligence. Tunnel vision can sometimes be avoided by a type of detached deliberation. Focusing on aspects of a situation that seem relatively unimportant allows another perspective to spring to mind. Having just vanquished an expert opponent, he found himself taking on another member of the enemy squadron who seemed to be brilliantly eluding one masterful ploy after another.

    Things were looking bad until he stopped following his intuition and deliberated. This insight enabled him to vanquish the pilot. Digital computers, which are basically complicated structures of simple on-off switches, were first used for scientific calculation. Researchers such as Allen Newell and Herbert Simon saw that one could use symbols to represent elementary facts about the world and rules to represent relationships between the facts. Computers could apply these rules and make logical inferences about the facts.

    For instance, a programmer might give a computer rules about how cannibals like to eat missionaries, and facts about how many cannibals and missionaries must be ferried across a river in one boat that carries only so many people. The computer could then figure out how many trips it would take to get both the cannibals and the missionaries safely across the river.
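    A hypothetical Python sketch of that kind of facts-plus-rules computation: a breadth-first search over states (missionaries on the starting bank, cannibals on the starting bank, boat side), with the "cannibals must never outnumber missionaries on either bank" rule supplied as a constraint. The function and parameter names are illustrative.

        from collections import deque

        def min_trips(m=3, c=3, boat=2):
            def safe(mi, ca):          # cannibals never outnumber missionaries on either bank
                return (mi == 0 or mi >= ca) and (m - mi == 0 or m - mi >= c - ca)
            start, goal = (m, c, 1), (0, 0, 0)         # boat side 1 = starting bank
            frontier, seen = deque([(start, 0)]), {start}
            while frontier:
                (mi, ca, side), trips = frontier.popleft()
                if (mi, ca, side) == goal:
                    return trips
                sign = -1 if side == 1 else 1          # crossing over or rowing back
                for dm in range(boat + 1):
                    for dc in range(boat + 1 - dm):
                        if dm + dc == 0:
                            continue                   # the boat cannot cross empty
                        nxt = (mi + sign * dm, ca + sign * dc, 1 - side)
                        if (0 <= nxt[0] <= m and 0 <= nxt[1] <= c
                                and safe(nxt[0], nxt[1]) and nxt not in seen):
                            seen.add(nxt)
                            frontier.append((nxt, trips + 1))
            return None

        print(min_trips())    # 11 crossings for three missionaries, three cannibals, a two-person boat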

    Newell and Simon believed that computers programmed with such facts and rules could, in principle, solve problems, recognize patterns, understand stories, and indeed do anything that an intelligent person could do. But they soon found that their programs were missing crucial aspects of problem solving, such as the ability to separate relevant from irrelevant operations. As a result, the programs worked in only a very limited set of cases, such as in solving puzzles and proving theorems of logic.

    I therefore feel that a machine will quite critically need to acquire on the order of a hundred thousand elements of knowledge in order to behave with reasonable sensibility in ordinary situations. A million, if properly organized, should be enough for a very great intelligence.

    Nor did the programs have any semantics-that is, any understanding of what their symbols meant. Weizenbaum set out to show just how much apparent intelligence one could get a computer to exhibit without giving it any real understanding at all. Weizenbaum was appalled when some people divulged their deepest feelings to the computer and asked others to leave the room while they were using it. One of us, Hubert, was eager to see a demonstration of the notorious program, and he was delighted when Weizenbaum invited him to sit at the console and interact with ELIZA.

    Hubert spoiled the fun, however. If one could not deal systematically with common-sense knowledge all at once, they asked, then why not develop methods for dealing systematically with knowledge in isolated sub-worlds and build gradually from that? The program allowed a person to engage in a dialogue with the computer, asking questions, making statements, and issuing commands within this simple world of movable blocks.

    The program relied on grammatical rules, semantics, and facts about blocks. Minsky and Papert believed that by combining a large number of these microworlds, programmers could eventually give computers real life understanding. Rather, they are specific elaborations of a whole, without which they could not exist.

    But since microworlds are only isolated, meaningless domains, they cannot be combined and extended to reflect everyday life. This problem has kept A1 from even beginning to fulfill the predictions Minsky and Simon made in the mid-sixties: that within 20 years computers would be able to do everything humans can. If a machine is to interact intelligently with people, it has to been endowed with an understanding of human life. What we understand simply by virtue of being human-that insults make us angry, that moving physically forward is easier than moving backward-all this and much more would have to be programmed into the computer as facts and rules.

    As AI workers put it, they must give the computer our belief system. This, of course, presumes that human understanding is made up of beliefs that can be readily collected and stored as facts. Even if we assume that this is possible, an immediate snag appears: we cannot program computers for context. However, you should not apply that rule if the opposing king is much more centrally located than yours, or when you are attacking the enemy king.

    And there are exceptions to each of these exceptions. It is virtually impossible to include all the possible exceptions in a program and do so in such a way that the computer knows which exception to use in which case. In the real world, any system of rules has to be incomplete.

    The law, for instance, always strives for completeness but never achieves it. But the sheer number of lawyers in business tells us that it is impossible to develop a code of law so complete that all situations are unambiguously covered. We can never explicitly formulate this in clear-cut rules and facts; therefore, we cannot program computers to possess that kind of know-how. Nor can we program them to cope with changes in everyday situations. AI researchers have tried to develop computer programs that describe a normal sequence of events as they unfold.

    One such script, for instance, details what happens when someone goes to a restaurant. It all depends on what else is going on and what their specific purpose is. Are these people there to eat, to hobnob with friends, to answer phone calls, or to give the waiters a hard time? To make sense of behavior in restaurants, one has to understand not only what people typically do in eating establishments but why they do it.

    Thus, even if programmers could manage to list all that is possibly relevant in typical restaurant dining, computers could not use the information because they would have no understanding of what is actually relevant to specific customers. Humans often think by forming images and comparing them holistically.

    For instance, human beings use images to predict how certain events will turn out. If people know that a small box is resting on a large box, they can imagine what would happen if the large box were moved. If they see that the small box is tied to a door, they can also imagine what would result if someone were to open the door. A computer, however, must be given a list of facts about boxes, such as their size, weight, and frictional coefficients, as well as information about how each is affected by various kinds of movements. Given enough precise information about boxes and strings, the computer can deduce whether the small box will move with the large one under certain conditions.

    People also reason things out in this explicit, step-by-step way-but only if they must think about relationships they have never seen and therefore cannot imagine. At present, computers have difficulty recognizing images. True, they can store an image as a set of dots and then rotate the set of dots so that a human designer can see the object from any perspective.

    But to know what a scene depicts, a computer must be able to analyze it and recognize every object. Programming a computer to analyze a scene has turned out to be very difficult.

    Such programs require a great deal of computation, and they work only in special cases with objects whose characteristics the computer has been programmed to recognize in advance.