Acknowledgements
Chapter 1. General Introduction
1.1. The vision: to enhance cognitive processes
1.2. A transdisciplinary intellectual adventure
1.3. The result: toward hypercortical cognition
1.4. General plan of this book
PART 1 The Philosophy of Information
Chapter 2. The Nature of Information
2.1. Orientation
2.2. The information paradigm
2.3. Layers of encoding
2.4. Evolution in information nature
2.5. The unity of nature
Chapter 3. Symbolic Cognition
3.1. Delimitation of the field of symbolic cognition
3.2. The secondary reflexivity of symbolic cognition
3.3. Symbolic power and its manifestations
3.4. The reciprocal enveloping of the phenomenal world and semantic world
3.5. The open intelligence of culture
3.6. Differences between animal and human collective intelligence
Chapter 4. Creative Conversation
4.1. Beyond “collective stupidity”
4.2. Reflexive explication and sharing of knowledge
4.3. The symbolic medium of creative conversation
Chapter 5. Toward an Epistemological Transformation of the Human Sciences
5.1. The stakes of human development
5.2. Critique of the human sciences
5.3. The threefold renewal of the human sciences
5.4. The Ouroboros
Chapter 6. The Information Economy
6.1. The symbiosis of knowledge capital and cognitive labor
6.2. Toward scientific self-management of collective intelligence
6.3. Flows of symbolic energy
6.4. Ecosystems of ideas and the semantic information economy
6.5. The semantic information economy in the digital medium
PART 2 Modeling Cognition
Chapter 7. Introduction to the Scientific Knowledge of the Mind
7.1. Research program
7.2. The mind in nature
7.3. The three symbolic functions of the cortex
7.4. The IEML model of symbolic cognition
7.5. The architecture of the Hypercortex
7.6. Overview: toward a reflexive collective intelligence
Chapter 8. The Computer Science Perspective: Toward a Reflexive Intelligence
8.1. Augmented collective intelligence
8.2. The purpose of automatic manipulation of symbols: cognitive modeling and self-knowledge
8.3. The means of automatic manipulation of symbols: beyond probabilities and logic
Chapter 9. General Presentation of the IEML Semantic Sphere
9.1. Ideas
9.2. Concepts
9.3. Unity and calculability
9.4. Symmetry
9.5. Internal coherence
9.6. Inexhaustible complexity
Chapter 10. The IEML Metalanguage
10.1. The problem of encoding concepts
10.2. Text units
10.3. Circuits of meaning
10.4. Between text and circuits
Chapter 11. The IEML Semantic Machine
11.1. Overview of the functions involved in symbolic cognition
11.2. Requirements for the construction of the IEML semantic machine
11.3. The IEML textual machine (S)
11.4. The STAR (Semantic Tool for Augmented Reasoning) linguistic engine (B)
11.5. The conceptual machine (T)
11.6. Conclusion
Chapter 12. The Hypercortex
12.1. The role of media and symbolic systems in cognition
12.2. The digital medium
12.3. The evolution of the layers of addressing in the digital medium
12.4. Between the Cortex and the Hypercortex
12.5. Toward an observatory of collective intelligence
12.6. Conclusion: the computability and interoperability of semantic and hermeneutic functions
Chapter 13. Hermeneutic Memory
13.1. Toward a semantic organization of memory
13.2. The layers of complexity of memory
13.3. Radical hermeneutics
13.4. The hermeneutics of information
13.5. The hermeneutics of knowledge
13.6. Wisdom
13.7. Collective interpretation games
Chapter 14. The Perspective of the Humanities: Toward Explicit Knowledge
14.1. Context
14.2. Methodology: the digital humanities
14.3. Epistemology: explicating symbolic cognition
Chapter 15. Observing Collective Intelligence
15.1. The semantic sphere as a mirror of concepts
15.2. The structure of the cognitive image
15.3. The two eyes of reflexive observation
Bibliography
Index
First published 2011 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:
ISTE Ltd, 27-37 St George’s Road, London SW19 4EU, UK (www.iste.co.uk)
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA (www.wiley.com)
© ISTE Ltd 2011
The rights of Pierre Lévy to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.
Library of Congress Cataloging-in-Publication Data
Lévy, Pierre, 1956-
The semantic sphere 1 : computation, cognition, and information economy / Pierre Levy.
p. cm.
Includes bibliographical references and index.
ISBN 978-1-84821-251-0 (hardback)
1. Semantic Web. 2. Information society. 3. Human information processing. 4. Metalanguage. I. Title.
TK5105.88815.L485 2011
025.04'27--dc23
2011029149
British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN 978-1-84821-251-0
The work presented here has been subsidized since 2002 mainly by the Canadian Government through the Canada Research Chairs Program. I also received two research grants from the Social Sciences and Humanities Research Council (SSHRC) of Canada. I would like to thank Michel Biezunski and Steve Newcomb (who programmed the first version of the IEML1 dictionary and parser), Andrew Roczniak (who helped me formalize the mathematical theory of IEML), Christian Desjardins (who programmed an IEML-oriented database) and Samuel Szoniecky for their contributions.
My wife, Darcia Labrosse, has supported me in every possible way over the many years I have been working on the creation of IEML. She assisted and advised me in creating the diagrams and was an attentive, perceptive and tireless editor of this book. Without her, this book and even the IEML metalanguage would not have seen the light of day.
1 Information Economy Meta Language.
A participatory digital memory common to all humanity is on its way. But at the beginning of the 21st Century, the use of this memory is limited by problems of semantic opacity, incompatibility of classification systems, and linguistic and cultural fragmentation. Lacking computable models, we are unable to automate most cognitive operations of analyzing, filtering, synthesizing and interconnecting information so as to take full advantage of the huge mass of data available. We do not yet know how to systematically turn this ocean of data into knowledge, and still less how to turn the digital medium into an observatory that reflects our collective intelligence. The primary goal of this book is to present to the scientific community and the informed public a new system for encoding meanings that will allow operations on meaning in the new digital memory to become transparent, interoperable and computable. This system of semantic coding is called IEML (Information Economy Meta Language). Its use could help eliminate the obstacles that now impede the optimal exploitation of the digital medium to serve human development in its social and personal dimensions. If a dynamic community of semanticists and linguists were to enrich and develop this language, a group of engineers were to program and maintain a collection of software tools exploiting its computational potential, and a critical mass of users and social media were to take possession of these tools, I believe we would have embarked on a new scientific, technical and cultural path leading in the long term to a significant enhancement of human cognitive processes.
In this book I will show that there is no scientific, technical or ethical reason preventing us from using a calculable symbolic system such as IEML on a broad scale. Just as there are impossibility theorems in mathematics (the most famous of which is probably that of Gödel), I will provide what I believe to be mathematical proof – accompanied by solid technical and philosophical arguments – that a new possibility, unsuspected by previous generations, is now opening up for the human mind.
IEML is one of many formal languages that exist today. Its originality and value lie in the fact that all of its valid expressions model semantic circuits for channeling information flows. The IEML semantic sphere is a huge, coherent, calculable graph that connects all these circuits and can therefore be used as a system of coordinates for the common digital memory that is being created.
This general introduction is organized in three main sections. Section 1.1 presents the coherent vision that has gradually crystallized over the many years I have devoted to constructing IEML. Section 1.2 recounts, in the first person, my journey of discovery, the intellectual adventure that led me to develop the metalanguage. Finally, section 1.3 summarizes the result of that adventure, a result that I believe meets the challenges of my vision.
In conceiving the IEML semantic sphere, I was responding to three closely interdependent challenges: a strictly semantic imperative, an ethical imperative and a technical imperative.
The immediate goal of IEML is to solve the problem of semantic interoperability – the “digital chaos” resulting from the multitude of natural languages, classification systems and ontologies. IEML functions as a “bridge language”, an addressing system for concepts that is capable of linking different systems for classifying and organizing data that would otherwise be incompatible. I am well aware that the very idea of a universal system for encoding meaning can conjure up the worst images of totalitarianism, or at least the potential impoverishment of the diversity of meanings. I would therefore like to remind the reader that digital sound encoding and the use of universal file formats for recording music has in no way standardized musical messages, but rather has increased the diversity of productions, variations, mixes, exchanges and explorations in the world of music. In the same way, far from standardizing the world of icons, digital encoding of images by means of pixels1 has stimulated computer-assisted production, automated processing and open creation and distribution of images of all kinds. Finally, digital encoding of the letters of the alphabet is the basis of all word-processing programs, and no one has ever claimed that these programs limit the freedom to write. Using an open, collaborative dictionary, a set of basic recombinable operations and a practically infinite transformation groupoid, the IEML encoding should present any determinate meaning as a moment in a whole range of cycles of transformation, a node within a multitude of networks or a figure that only appears as such against a background that can be explored infinitely. That is to say, the inscription of a concept in the semantic sphere will have the effect of opening up its horizons of meaning rather than closing them.
The IEML semantic sphere is an intellectual protocol for expanding the possibilities for interpretive dialog around a common digital memory. This dialog should be understood as translinguistic, transcultural, transreligious, transpartisan, transdisciplinary and transinstitutional. This is why the semantic topology opened up by IEML welcomes all practical, ontological or philosophical points of view and considers them equally legitimate. The only attitude that is disallowed by this generalized perspectivism is denial of the legitimacy of another person’s point of view, refusal of dialog, hermeneutic closure2.
Because the aim is to establish a space that accommodates, in a single system of coordinates, a capacity to make meaning that is virtually infinite in its diversity, the semantic imperative essentially necessitates maximum multidirectional openness, or “equanimity”. It is therefore not necessary to believe in the philosophical principles that inspired the invention of IEML in order to use it for your own purposes, or to benefit from the enhanced individual and collective possibilities for creating and managing knowledge offered by the semantic sphere. But there is a caveat! I am not claiming that all semantic architectures that can be built in IEML are equally valid, or that everyone has to accept the perspectives of others. The semantic imperative assumes only two elementary dialectical principles: first, that all interpretations are in principle equally legitimate; and second, that everyone must accept the right of others to hold points of view different from his or her own. Indeed, individuals and communities that decide to use IEML will be able to choose goals, objectives, sizes and degrees of transdisciplinarity or transculturalism that are as varied as they like. Only specialists in semantic engineering will have to be united by a common mission: to maintain and expand the hermeneutic equanimity of the semantic sphere.
The best use we could make of the contemporary infrastructure of memory, communication and digital processing would be to serve human development. The goal of human development is a reason of the heart, in the sense that “the heart has its own reasons, of which reason knows nothing”3. Rather than deal with each distinct aspect of human development separately (e.g. economic growth, education, public health, human rights, scientific and technical innovation), I propose that we focus our efforts on what a growing community of researchers considers its critical point: knowledge management through a free creative conversation. Knowledge management can be envisaged from two complementary perspectives: first, personal control of information flows with autonomous development of learning strategies; and second, collaborative use of data and sharing of knowledge. A multitude of creative conversations collaborating on indexing the digital data available in IEML and the subsequent use of the information thus produced would make it possible to initiate an autocatalytic virtuous circle between the two aspects – personal and social – of knowledge management. I invented the IEML semantic sphere in the hope of bringing about a socio-technical environment conducive to this creative dialectic.
I am certainly not able at this stage to rigorously demonstrate that a better technology for extracting and refining knowledge based on common digital data will have positive effects on human development. I do, however, sense that human collective intelligence will mature through the scientific observation of its own functioning in the mirror of a digital Hypercortex. I anticipate that new opportunities for collaborative learning and the expansion of individual intelligence will result from this new situation.
As humanity is a social species with a highly developed ability to manipulate symbols, the availability of automata capable of increasing our capacity to process symbols, coupled with telecommunications and the large-scale storage of information, presages a huge transformation. The inevitable global cultural metamorphosis, of which we have only seen the timid beginnings as we enter the 21st Century, will necessarily extend over many generations. A philosophy concerned with fostering cultural creativity in this new technocultural environment thus has an interest in not looking at the digital transformation through the wrong end of the telescope (sector by sector), or in the rear-view mirror of institutions and concepts suited to the era (now past) of static writing systems and one-way communication.
The technical imperative of my philosophy may be formulated as follows: let us automate the symbolic operations that increase cognitive capacities as much as possible and thus in the end enhance the power and autonomy of individuals and communities. I would like to point out that the automation I am speaking of here is not limited to logical reasoning and statistical analysis. Ideally, it encompasses other cognitive processes, particularly those involving huge quantities of data: management and filtering of information flows, simulations of complex processes, perception of analogies, creative synthesis, discovery of blind spots, questioning of established models, etc. This technical imperative induced me to seek as much benefit as possible from the growing power of the automation of symbolic operations, even if this meant to some extent anticipating the calculation, memory and transmission capacities that will be available to future generations. In any case, the transparency of thought processes to calculation – in other words, the emphasis on computational models of cognition – is a cognitive scientist and programmer’s ideal that users of IEML are obviously not obliged to share with me in order to take advantage of the practical benefits of the research program proposed here4.
The IEML semantic sphere is the result of a long quest, the main stages of which I would now like to recount. I have decided to present this brief intellectual autobiography only because I think it may help my readers to better understand my purpose.
At a very young age, I was interested in the natural sciences, in particular cosmology. I was also fascinated by what was then called cybernetics and “electronic brains”. I have maintained these two interests. I went into the human sciences, however, and after a short time in economics I took a university course in history. In the 1970s, Paris offered students a rich intellectual landscape. The French school of history, known as the Annales school, initiated by Marc Bloch and Lucien Febvre and so admirably exemplified by Fernand Braudel and Georges Duby, was at the height of its productivity. Structuralism in anthropology, championed by Claude Lévi-Strauss, was still a powerful intellectual current, and it was used by Roland Barthes to analyze the present. At that time the works of Michel Foucault, Gilles Deleuze and Jacques Derrida were already providing a stimulating counterpoint to structuralism. In the excitement following May 1968, all kinds of Marxist, Freudo-Marxist and Sartrian schools, as well as the Frankfurt school, were putting forward their points of view. To understand communications and the media, I read Marshall McLuhan, Guy Debord and Jean Baudrillard. Through Edgar Morin, I discovered systems theory, theories of self-organization and constructivist epistemologies. In the exact sciences, I had immense intellectual respect for the mathematics of Bourbaki. The young field of molecular biology convincingly explained the mechanisms of evolution and the functioning of organisms; I was particularly impressed by the “cybernetic” form that Jacques Monod gave to biology by bringing information theory into the heart of the living cell5. Debating with Jacques Monod, Ilya Prigogine and Isabelle Stengers led me to discover, in Order out of Chaos (1984)6, an evolving, complex, indeterminate and self-organizing nature, a thousand miles from a dead mechanism swinging between chance and necessity.
It was with Michel Serres, who was then teaching the history of science at the Sorbonne, that I really discovered the beauties of philosophy – and the freedom to think. During the many years I attended his seminars, Michel Serres made me understand the complexities and multiple resonances of theories of information and communication as well as the subtle – but profound – connections between the human sciences and the natural sciences. The author of a monumental thesis7 on Leibniz’s Monadology, he transmitted the living spirit of philosophy and Leibnizian encyclopedism to me.
In a course on practical methodology devoted to the use of databases for historical research (taught by Jean-Philippe Genet), I was struck by the transformation of work methods and the increased intellectual rigor that using computers required8. I discovered that the computer was not “just a tool”: it was above all an intellectual technology whose use transformed cognitive processes. Moreover, The Computerization of Society (1981), by Simon Nora and Alain Minc9, which was launched at the same time as Minitel, opened my eyes to what seemed to me at that time one of the main cultural changes my generation – and the generations following! – would experience. This double shock made me decide to do my Master’s thesis with Michel Serres on the subject of communication, teaching and knowledge in a computerized society (quite surprising for an apprentice historian in the late 1970s).
After my studies in history at the Sorbonne, I enrolled in a doctoral program in sociology at the École des Hautes Études en Sciences Sociales (EHESS), with Cornelius Castoriadis, whose book The Imaginary Institution of Society I had just read10. Castoriadis was a philosopher, economist and psychoanalyst. When I joined his seminar, he was doing a complete rereading of the Greek sources of Western thought. The first paper I did with him, which was published in part in the Esprit11 journal, was a meditation on the cultural dimension of computers. When I think back to it today, two important ideas remain:
– first, that the automatic manipulation of symbols was the result of an ancient philosophical and scientific quest going back at least to Aristotle; and
– second, that the computerization of society and the global interconnectedness of computers – which were already becoming apparent in the late 1970s and early 1980s – showed that the movement of conquest of nature and exploration of the planet that had marked the modern era was turning back on itself and the new frontier was now the cognitive inner life of our species.
I knew then that these questions would occupy me for many years to come. But I did not feel ready to take them on without a solid philosophical education. That is why I decided to do my doctoral thesis (again with Castoriadis) on the idea of freedom in antiquity, which gave me the opportunity to do a close reading of the great Greek and Roman texts and the commentaries on them. Philosophically, that thesis, which was subtitled “L’un et le multiple” [The one and the many], centered on the problem of open unity. Was freedom essentially openness to multiplicity, or was it a unity forged in independence and autonomy? Or was it, rather, something like a dialectical balance between these two moments? And could openness to multiplicity be conceived outside a universality capable of containing it without constraining it?
At the beginning of the 1980s, shortly after I defended my thesis, I participated, with Jean-Pierre Dupuy, Pierre Livet, Francisco Varela and Isabelle Stengers, in a collective research project organized by the CREA (Centre de recherches sur l’épistémologie appliquée) of the École Polytechnique on the origins of the idea of self-organization. In the cybernetic area, I was specifically responsible for studying the work of Warren McCulloch12, the first researcher to present a mathematical formalization of neural networks, and Heinz von Foerster13, a pioneer of artificial life14 and proponent of a radical constructivist epistemology. This was the beginning of my immersion in the cognitive sciences, connectionist models and artificial intelligence. Neuronal Man, by Jean-Pierre Changeux, came out in 198315 and the relationship between the mind, the nervous system and automata that manipulate symbols was being passionately discussed by a broad international community of researchers. Although I recognized the general relevance of the research program in cognitive sciences and the huge impact of the invention of the computer16 on intellectual technologies, I was not able to convince myself that mechanisms operating step-by-step on the physical states of electronic circuits could reproduce, in the strong sense of the word, the inner experience of phenomenal consciousness, memory and linguistic meaning characteristic of human experience. My first book, La Machine Univers (1987)17, looked at a tension between language and calculation that in many respects corresponded to the opposition between the hermeneutic tradition in the human sciences and the pan-computational approach of the most extreme currents in cognitive sciences. The question of the calculability of human language was from then on present in the background of all my work and would not leave me until I found – in IEML – a satisfactory solution to it.
Shortly after the publication of La Machine Univers in the late 1980s, I spent two years in Montreal as a visiting professor in the communications department of the Université du Québec à Montréal (UQAM). It was there, thanks to the laboratory established by Gilles Zénon Maheu18, that I discovered the nascent world of hypertext and interactive multimedia. While I was making a practical exploration of software for creating hypertext, I was rereading A Thousand Plateaus, by Deleuze and Guattari19, and I was struck by the analogies between the philosophical concept of the rhizome and the new forms of network writing (of which Deleuze and Guattari were not then aware, as they later told me). I saw hypertext as a textual machine that could profoundly change writing, and therefore thought. In 1990, I began to dream of a hypertextual philosophical system illustrating the concept of open unity. In this ideal system, there was a graph of interdependent concepts in which any continuous path between nodes was accepted as legitimate. There was no longer any absolute basis, foundation or beginning. Nor were there any final concepts or concepts converging toward an end point. Dictionaries, encyclopedias, indexes, systems of pointers and open works20 of all kinds clearly had not waited for digital hypertext to present free circuits of reading in documentary networks. I imagined a more systematic form, however, making maximum use of computational technology: a machine that generated hypertext. I also envisaged the hypertext universe generated by such a machine as an all-encompassing environment that would present every exclusive philosophy, every specific ontology, as a partial point of view that complements other viewpoints. The conceptual matrix for that machine remained to be found.
Two books were born of my first stay in Quebec. The first one, Les Technologies de l’Intelligence (published in 1990, before the Web!), predicted the merger of computer networks and hypertext networks. It also explored the concept of cognitive ecology, which I conceived as a self-organized emergence based on a combination of biological possibilities, cultural forms, social networks and intellectual technologies. This concept was very close to what, in 1994, I would call collective intelligence. The second book, De la Programmation Considérée Comme un des Beaux-Arts (1992), was rooted in my own practice of knowledge engineering for the production of expert systems. My colleague at UQAM, management professor Jacques Ajenstat, had given me the opportunity to work with people in youth protection to develop an automated system for sharing their knowledge with novices. I had also worked with the Geneva entrepreneur and cultural activist Xavier Comtesse on a methodology of knowledge engineering based on several concrete cases of incorporating informal knowledge into software. At this time, there were still very few people talking about knowledge management21. I was thus able to experiment firsthand, and without too many theoretical prejudices, with the major reorganization of cognitive ecologies resulting from the partial automation and media encapsulation of tacit knowledge. Rather than the pair of opposites implicit/explicit, I used procedural/declarative, which was supplied by cognitive psychology and was also suggested by the declarative rules called for by the technology of expert systems. I mainly focused on the creative epistemological, cultural and social restructuring of knowledge architectures resulting from computerization.
When I returned to Europe at the beginning of the 1990s, Xavier Comtesse, Antonio Figueras and Eric Barchechat (who had a grant from the European Union) gave me the assignment of thinking about what a writing system designed especially for computer media could be. Alphabets, which represent the sounds of speech, were invented at the turn of the first millennium before the Common Era in a media environment in which audio recording did not exist. But in contemporary culture, which is dominated by interactive multimedia representations, instantaneous telecommunications and automatic manipulation of symbols, could we imagine something beyond the alphabet, a form of animated writing that would help us to share and collectively organize complex mental models? To draft the plan for L’Idéographie Dynamique22, I had to learn about linguistics, the relationship between linguistic and cognitive sciences, and the complex connections between visual representations (iconic and animated) and language representations of mental models. It goes without saying that, at least in terms of my theoretical education, the invention of IEML owes a great deal to the work I did on dynamic ideography.
At the end of 1991, Michel Serres called on me to assist him with an investigation of open distance learning for the French Government. It was within this framework that, with Michel Authier, we imagined the system of knowledge trees23. One of our mandates was to validate the informal competencies acquired by individuals outside the education system and official curricula. We designed a software program that visually organized the competencies and knowledge of communities on the basis of people’s real learning paths rather than predetermined patterns structured in terms of prerequisites and disciplines (again an example of “open unity”). Our proposition was not adopted by the Government and we decided to develop it in a private company, which was probably France’s first start-up in network communications software specializing in knowledge management (KM). In 1992, the Web did not exist and KM was not yet a very established discipline. One of the most interesting results of our approach was the creation of a different knowledge tree for each community, showing the changes in the tree when people left or joined the community. The system could be used for exchanges of knowledge between people and for organizing knowledge management in schools, businesses and associations of all kinds. My experience in the conception and development of knowledge trees brought me closer to the dream of formalizing the world of ideas and knowledge in a computer model without locking that world into a closed, unchanging structure. The knowledge trees dynamically mapped the learning paths and current knowledge of a community, calculated contextual distances between areas of knowledge, and evaluated the knowledge according to various criteria. This calculable model was simply a reflection of the movements of a collective intelligence, allowing for the emergence of new knowledge or changes in the relationships among areas of knowledge.
Even better, by giving all members of the community a common image of the knowledge space they created together, the trees allowed all of them to become aware of the collective intelligence in which they participated and their role in its evolution.
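For readers who think in code, the core idea of the knowledge trees can be sketched as follows. This is purely my illustrative reconstruction, not the actual system: each member is represented by an ordered learning path, and the community’s “tree” is recomputed from whichever paths are actually present, so it changes whenever someone joins or leaves. All names and the aggregation rule are invented for this sketch.

```python
from collections import Counter

def community_tree(paths):
    """Aggregate individual learning paths into a community profile:
    for each competency, how many members hold it and its average
    position along their paths (earlier = closer to the 'trunk')."""
    counts = Counter()
    positions = Counter()
    for path in paths:
        for depth, skill in enumerate(path):
            counts[skill] += 1
            positions[skill] += depth
    return {
        skill: {"holders": counts[skill],
                "mean_depth": positions[skill] / counts[skill]}
        for skill in counts
    }

# Two hypothetical members and their real learning paths.
alice = ["reading", "algebra", "statistics"]
bob = ["reading", "carpentry"]

before = community_tree([alice, bob])
after = community_tree([alice])  # Bob leaves: the community's tree changes
```

The point of the sketch is only the dynamic: the “tree” is not a predetermined curriculum but a function of the learning paths currently in the community.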
It was during the “Serres mission”, when thinking about how to represent and organize the elementary units of knowledge or competency, that I had my first intuition of what would become the conceptual matrix of IEML. I was teaching in the education department at Paris-X Nanterre at the time. Exploring the foundations of education theory, I came across the trivium (grammar, dialectic, rhetoric) of Greek and Roman antiquity and the European Middle Ages, which I had already encountered during my classical studies. The trivium was for many centuries the basis of liberal education24. Grammar covered the basic abilities of reading and writing (mainly in Greek and Latin) and some familiarity with the corpus of authors traditionally defined as the “classics”. Dialectic corresponded roughly to logic, the rules of reasoning and the ability to carry out a well-argued dialog. As for rhetoric, it consisted essentially of the art of composing, memorizing and delivering elaborate, convincing speeches suited to the circumstances and the audience’s expectations. It seemed to me that this basic education, which was intended for the ruling classes of ancient societies and the clerics of medieval societies, excluded everything related to technology, the material world and what, in the Middle Ages, were called the “mechanical arts”. In addition, the whole area of ethics and relationships among people was only dealt with indirectly, to be left (depending on the period) to philosophy, theology or law. The trivium was essentially only concerned with signs and their manipulation. After reading François Rastier’s La Triade Sémiotique, le Trivium et la Sémantique Linguistique (1990)25, it occurred to me that the semiotic triad could be used to design an expanded, or generalized, trivium.
The semiotic triad corresponds to the distinction made in modern linguistics between signifier, signified (for an interpreter) and referent. This division goes back at least to Aristotle [26] and it has been discussed and refined through the history of philosophy [27]. For my purposes, I renamed it sign (signifier), being (interpreter) and thing (referent). It should be noted that there can only be a signified or concept in the mind of an interpreter (being) or, from a Platonic perspective, in an intelligible world. The abstract concept is very different from the perceptible sign, since there are many signs (in different languages, for example: apple, pomme) that designate the same concept. It is clear, moreover, that a distinction also has to be made between the concept (a class or general category that can only exist for intelligence) and the referent: you can eat an apple (the referent, the thing) but not the concept of an apple.
In parallel with the classical trivium, which was a preparation for mastering the manipulation of signs, a trivium of beings and a trivium of things still had to be conceived. I thus developed a matrix of competencies with nine cells (with grammar/dialectic/rhetoric on one axis and being/sign/thing on the other axis). In Figure 1.1, the stars represent signs, the little figures represent beings and the cubes represent things, while single icons indicate grammar, double icons indicate dialectic and triple icons indicate rhetoric.
At the level of grammar we find fundamental capacities for action, “basic” competencies. But “basic” does not necessarily mean elementary: there can obviously be very high degrees of linguistic competency, self-mastery or sensory-motor refinement. Grammatical competencies center on the self: discursive or symbolic power with regard to signs, emotional or affective energies with regard to beings, and physical skills with regard to things.
At the level of dialectic we find interactional competencies. In the signs column, the grammatical mastery of codes serves knowledge of a wide variety of subjects, reasoning and dialog. In the beings column, self-esteem and self-mastery serve egalitarian, mutually respectful relationships with others. Conflicts and divergent interests are settled through negotiation, while agreements and promises are managed contractually. In the things column, sensory-motor competencies serve technical know-how involving the manipulation of tools and machines, and the ability to create and maintain concrete environments for life and work. Once again, dialectical competencies are not “medium” competencies between grammar and rhetoric. Each dialectical competency can be distributed on a scale of excellence from minimal to exceptional.
At the level of rhetoric we find the capacity to get things done. Communication strategies organize signs and messages so as to accomplish the work of persuasion, reframing (or even deception) as effectively as possible. Leadership, the ability to inspire or direct a group, acts on beings, in particular on their social cohesion. Finally, engineering involves having actions carried out on things, combining mechanisms for a particular purpose. Once again, rhetoric is in no way the “summit” of the competencies since there are obviously many degrees of strategic abilities, from weakness to maximum effectiveness.
My innovation was to take the three complementary functions of signification (in its objective aspect) or interpretation (in its subjective aspect) and use them for classification. The advantage of this approach is that it recalls the interdependence on which it is based: being, sign and thing cannot be cleanly separated, since each of the three dimensions of signification necessarily refers to the other two. Grammar, dialectic and rhetoric are just as closely linked and complementary, especially in terms of the balance of competencies within a group. Thus, whenever an economic, social or technical change has a direct effect on one of the nine cells of the matrix, we can predict a reorganization of the eight others. In the knowledge trees, each special competency could be characterized by a certain distribution of intensity (which could be illustrated by degrees of grey) over the nine-cell matrix. This indexation using a generalized trivium made it possible to identify unexpected similarities, complementarities that cut across categories and systemic gaps – which a labeling system limited to the usual classifications of disciplines and occupations would not have brought out.
In addition to the purely empirical and local mapping of the knowledge trees, the generalized trivium made it possible to situate competencies, people and groups against a shared background that permitted comparative analyses. On the basis of an individual or collective diagnosis, it became possible to design learning or development strategies that were better founded because they took into account the absence or emptiness of certain areas of competency, whereas the trees showed only what existed. I had constructed a systematic conceptual structure, in the form of a matrix, that could be used for any field of knowledge or practice.
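The indexation described above lends itself to a simple computational reading. The following is a minimal sketch (mine, not the author’s implementation, which predates it and was embodied in the knowledge-tree software): a competency is a distribution of intensity over the nine cells of the generalized trivium, and unexpected similarities between competencies can be surfaced by comparing these nine-dimensional profiles, here with cosine similarity.

```python
from itertools import product
from math import sqrt

# The two axes of the generalized trivium, as given in the text.
LEVELS = ("grammar", "dialectic", "rhetoric")
POLES = ("sign", "being", "thing")
CELLS = list(product(LEVELS, POLES))  # the nine cells of the matrix

def profile(intensities):
    """A competency profile: an intensity in [0, 1] for each of the nine cells."""
    assert set(intensities) == set(CELLS), "a profile must cover all nine cells"
    return intensities

def similarity(p, q):
    """Cosine similarity between two nine-cell intensity profiles."""
    dot = sum(p[c] * q[c] for c in CELLS)
    norm_p = sqrt(sum(v * v for v in p.values()))
    norm_q = sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q) if norm_p and norm_q else 0.0
```

For example, a translator and a diplomat might both show high intensity in the (dialectic, sign) and (dialectic, being) cells, so their profiles would score as similar even though the usual classification of occupations would file them far apart; this is the kind of cross-category complementarity the matrix was meant to reveal.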
For the sake of regularity, this structure did not impose an a priori hierarchy or ultimate foundation. It did not dogmatically distribute the substantial and the accessory, or the infrastructure and the reflection. On the contrary, it permitted mapping of concrete situations while highlighting multipolar interdependencies. This was already the germ of the IEML semantic sphere.
Emboldened by these first discoveries, I wondered about the matrix that would result from placing the being/sign/thing triad on both the X-axis and Y-axis. The idea I had in mind was to start from the structure of signification itself in order to create a conceptual matrix that would produce an open, non-excluding hypertextual semantic space. Since all meaning is the product of an interpretation, the general form of the interpretation could not exclude any particular meaning. I then arrived at a new matrix of nine cells (see Figure 1.2).
The signification attributed to these ideograms is the result of real work of “deciphering”. I first constructed my matrix, and only afterwards asked myself the question of the meaning of its nine cells. It was thus not a matter of illustrating concepts already conceived in natural language, but of interpreting in natural language an ideography generated by a combinatory algorithm (however “small” that algorithm was at the time). To interpret the meaning of the ideograms, I first had to allow myself to be guided by the form and nature of the symbols. I then had to keep in view the need to exhaustively map the most varied dimensions of meaning, but in the mode of reciprocal implication or interdependence rather than that of separation. Finally, no concept could be “superior” to or “more fundamental” than another.
The work of deciphering led me to think at length about the precise nature of the relationship between primitives that was presented by an ideogram. In Figure 1.2, there is an arrow connecting two primitives from right to left. The primitives read being, sign and thing. But how should the arrow be read? What is the relationship between the symbols? Figure 1.2 shows only one of the many representations I have used over the years. However, through the changes in representation, I have always read my ideograms as representing “implications”, enfoldings or envelopments of one symbol by another.
In Figure 1.2, World must be read as an interpretation of the ideogram “the thing implies or envelops the being”. This ideogram represents a small stage on which a universe of purely material things is infused with “human” qualities through naming, evaluation and work. It is this implication in the thing of qualities characteristic of being that constructs a world.
In the following ideogram, “the thing envelops the sign”, we see the movement of inscription or recording that “makes” Memory.
Space corresponds to a reciprocal envelopment of things in things, i.e. to the construction of a topology or a material space in which every thing is situated in a universe of things.
In the case of the ideogram of Society, which shows the sign enveloping the being, we have to imagine a multiplicity of beings, such as a concert or a group of people playing music, with the musical sign playing the role of unifying envelope of the collectivity. This role of the envelope creating society can be played by many other types of signs: totems, flags, languages, laws, contracts, etc.
In Thought, the signs envelop each other in deductions, inductions, interpretations, narratives and associations dictated by the imagination.
Truth represents a small stage where the sign implies the thing, i.e. the proposition envelops the fact or the reference.
Affect represents the reciprocal implication of beings, each containing the other in its “heart”, whether in love or hatred.
Language represents the sign enveloped, or understood, by the being: the transformation of sign into message.
Finally, Life represents the assimilation of material qualities (the thing) by the being, suggesting incarnation, which cannot be separated from sensation, nourishment and breathing.
It is clear that someone else faced with the same problem of deciphering under constraint would have found a different solution, which would be expressed through other names given to the ideograms. But my interpretation of this matrix had the advantage that nine distinct philosophical points of view could be arranged on it without hierarchy or separation. Space could represent the materialist, physicalist or atomist point of view. Thought was obviously a good representative of the idealist point of view. Truth represented the positivist or logicist inspiration of analytic philosophy. Language was the place for the philosophy of language, communication and media. Society represented the sociological point of view in general and the interpretation of phenomena in terms of social relationships. Life could be the place for a biologistic philosophy and for empiricism (which is based on sensory experience). Memory could accommodate evolutionist approaches, but also anything based on writing and tradition. Finally, World would represent an anthropological approach, in which human culture infuses the cosmos with its order and values. The ideographic matrix I conceived had the advantage of interweaving all these points of view symmetrically.
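The nine readings just given form an exhaustive table: every ordered pair (enveloping primitive, enveloped primitive) receives exactly one archetype name. A small sketch, using only the pairings stated in the text (the tuple encoding is mine, for illustration):

```python
from itertools import product

PRIMITIVES = ("sign", "being", "thing")

# (enveloping primitive, enveloped primitive) -> archetype, per the
# decipherments in the text: e.g. World reads "the thing envelops the being".
ARCHETYPES = {
    ("thing", "being"): "World",
    ("thing", "sign"): "Memory",
    ("thing", "thing"): "Space",
    ("sign", "being"): "Society",
    ("sign", "sign"): "Thought",
    ("sign", "thing"): "Truth",
    ("being", "being"): "Affect",
    ("being", "sign"): "Language",
    ("being", "thing"): "Life",
}

# The matrix is exhaustive: every ordered pair of primitives is named,
# so no point of view is excluded from the space.
assert set(ARCHETYPES) == set(product(PRIMITIVES, PRIMITIVES))
```

The final assertion makes the non-exclusion claim checkable: the nine cells are precisely the nine ordered pairs, with no hierarchy imposed by the encoding itself.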
I had got into the habit of calling these ideograms “folds” and I called the language they made up the “language of folds”, since, as we have seen, the operation of composing the symbols was precisely one of envelopment. Since each of the three primitives could envelop the other two, the primitives could also be seen as envelopes, or at least “balls” of stretchy matter capable of enveloping other “balls”. I then started to refine my model in two directions: first I began to construct envelopments of three terms, and second I tried out envelopments of envelopments, or recursive folds.
The following are three examples of three-term envelopment:
– the thing envelops the sign in the mode of the sign, which gives the semiotic function Mark;
– the thing envelops the sign in the mode of the thing, which gives the technical function Container;
– the thing envelops the sign in the mode of the being, which gives the social role Scribe.
As shown in these examples, the Mark, Container and Scribe each project into their realm (semiotic, technical or social) the original intention expressed by the Memory archetype, which indicates conservation and duration. This is how I constructed the operation of triplication, or triple envelopment. The term on the right in Figures 1.3, 1.4 and 1.5 would be named substance at the end of my research. The substance corresponds to the core or the innermost membrane of the envelopment. The term on the left was later called the attribute. The attribute corresponds to the intermediate layer of the envelope. Finally, the term above the arrow was called the mode. It corresponds to the outside skin of the envelope or the semantic fold. The nine initial archetypes in Figure 1.2 simply have an empty or “transparent” mode.
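Triplication, as described above, can be sketched as a three-field structure. This is an illustrative encoding, not the actual IEML notation, and it assumes one plausible reading of the layers: the substance (core) is the enveloped primitive, the attribute (intermediate layer) is the enveloping one, and the mode (outer skin) selects the realm of projection.

```python
from collections import namedtuple

# A triplication: substance = innermost core, attribute = intermediate
# layer of the envelope, mode = outside skin (the book's terminology;
# the encoding itself is a hypothetical sketch).
Fold = namedtuple("Fold", ["substance", "attribute", "mode"])

# "The thing envelops the sign" (the Memory archetype), projected into
# three realms by varying the mode, per the three examples in the text:
MEMORY_FAMILY = {
    Fold(substance="sign", attribute="thing", mode="sign"): "Mark",       # semiotic
    Fold(substance="sign", attribute="thing", mode="thing"): "Container", # technical
    Fold(substance="sign", attribute="thing", mode="being"): "Scribe",    # social
}
```

All three folds share the same substance and attribute, which is exactly the sense in which Mark, Container and Scribe each carry the Memory archetype’s intention of conservation and duration into a different realm; the nine archetypes of Figure 1.2 are then the special case where the mode is empty or “transparent”.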
In examining Figures 1.2, 1.3, 1.4 and 1.5, the reader can observe that there are symmetries not only between the nine folds of each matrix, but also between the folds that occupy the same positions in different matrices, and between the matrices themselves. The key point is that these symmetries are not solely formal (in terms of the arrangement of the elementary symbols) but are also semantic, because of the mode of interpretation or deciphering of the symbols I had adopted. As in any good scientific ideography, there is thus an analogy between the formal symmetries and the semantic symmetries. I will not go into a complete explanation of the deciphering of all these ideograms here, since this will be found – in its final form – in Volume 2 of this book. I will just comment on one last example in order to show the reader the logic governing the construction of IEML.
As a last illustration of the deciphering of the ideograms in this introduction, the general archetype World is projected in the realm of signs as Name, because humans cannot produce a cosmos without naming its elements. It is projected in the realm of social roles as Judge, which refers to the need to evaluate so as to construct an ordered world. It is projected, finally, in the technical realm as Fire, which here designates the mastery of a technique unique to humans, the hearth of warmth and light, the center of the home and the origin of all kinds of transformations and industries (cooking, pottery, metallurgy, etc.).
At the same time as I discovered triplication and the semantic symmetries it allowed me to explore, I began to construct matrices of reciprocal envelopment with the ideograms obtained through triplication of the primitives. For example, Society enveloping Memory gave History, and Memory enveloping Society gave Tradition. While the primitives represented degree zero of envelopment and the archetypes degree one, I could construct envelopments of degree two (the types), three, four, etc. The only constraint I set for myself was that the three operands of a triplication must always be of the same degree or the same layer. These successive layers of envelopment opened up two particularly promising perspectives. First, it became possible to construct ideograms representing concepts as precise and complex as I wished. Indeed, the lower the layer of triplication, the more general the concepts were. Conversely, successive triplications made the ideas increasingly precise (or complex). Second, I was beginning to glimpse a language whose expressions were in the form of envelopes containing envelopes, and so on recursively or “fractally”. From the point of view of the fractal enfolding of the envelopes within each other, this language could be seen as a regular, symmetrical addressing system – necessarily decodable by an automaton – since it was ultimately the recursive application of a well-defined operation to a small number of primitive symbols. From the point of view of the meaning of these fractal folds in successive layers, they were real messages. I thus had in my hands the core of a communication system in which the addresses were messages and the messages were addresses. The readable code on the external envelope summarized the internal folds of its content, and this numerical diagram of a fractal pleat was none other than the topological figure of a concept translatable into natural language.
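Two of the claims in this passage are mechanical enough to be sketched in code: the single constraint that the three operands of a triplication belong to the same layer, and the dual reading of a recursive fold as both an address (automatically decodable) and a message. The encoding below is entirely hypothetical (single letters for the primitives, parenthesized strings for addresses); it illustrates the recursion, not the real IEML syntax.

```python
PRIMITIVES = ("S", "B", "T")  # hypothetical letters for sign, being, thing: layer 0

def layer(expr):
    """Layer of an expression: 0 for a primitive, 1 + operand layer otherwise."""
    if isinstance(expr, str):
        assert expr in PRIMITIVES
        return 0
    substance, attribute, mode = expr
    layers = {layer(substance), layer(attribute), layer(mode)}
    # The one constraint stated in the text: all three operands of a
    # triplication must be of the same layer.
    assert len(layers) == 1, "operands of a triplication must share a layer"
    return 1 + layers.pop()

def address(expr):
    """Serialize a fold as a fully parenthesized string: a regular address
    decodable by an automaton which, read semantically, is also a message."""
    if isinstance(expr, str):
        return expr
    return "(" + " ".join(address(part) for part in expr) + ")"

# Layer-1 archetypes (illustrative encodings, transparent mode filled in
# arbitrarily here) combined into a layer-2 type:
memory = ("S", "T", "S")
society = ("B", "S", "S")
history = (memory, society, memory)  # an envelopment of archetypes (illustrative)
```

Here `layer(history)` is 2, one more than the layer of its operands, and `address(history)` nests the addresses of its parts: the external code summarizes the internal folds, which is the address-as-message property the passage describes. Mixing operands of different layers raises an error, enforcing the author’s single constraint.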