Contents
Preface
Chapter 1 Ambient Intelligence: Science or Fad?
1.1. Ambient intelligence: still young at 20 years
1.2. A step forward in the evolution of informatics
1.3. Extreme challenges
1.4. Conclusion
1.5. Bibliography
Chapter 2 Thinking about Ethics
2.1. Ethics and fundamental rights
2.2. Ethics and values
2.3. Ethics and future perspectives
2.4. Bibliography
Chapter 3 Sensor Networks
3.1. MAC layers for wireless sensor networks
3.2. Topology control
3.3. Routing
3.4. Deployment of sensor networks
3.5. Bibliography
Chapter 4 Smart Systems, Ambient Intelligence and Energy Sources: Current Developments and Future Applications
4.1. Introduction
4.2. Did you say “smart systems”?
4.3. Energy harvesting
4.4. Wearable computers and smart fibers
4.5. Other applications
4.6. Conclusion
4.7. Bibliography
Chapter 5 Middleware in Ubiquitous Computing
5.1. Middleware
5.2. Development of middleware with new computer environments
5.3. Main properties of middleware in ubiquitous computing
5.4. Bibliography
Chapter 6 WComp, Middleware for Ubiquitous Computing and System Focused Adaptation
6.1. Service infrastructure in devices
6.2. Dynamic service composition
6.3. Dynamic adaptation of applications to variations in their infrastructure
6.4. Bibliography
Chapter 7 Data Access and Ambient Computing
7.1. Introduction
7.2. General context
7.3. Types of queries
7.4. Data access models
7.5. Query optimization
7.6. Sensitivity to context
7.7. Conclusion
7.8. Bibliography
Chapter 8 Security and Ambient Systems: A Study on the Evolution of Access Management in Pervasive Information Systems
8.1. Introduction
8.2. Managing access in pervasive information systems
8.3. The evolution of context-aware RBAC models
8.4. Conclusion
8.5. Bibliography
Chapter 9 Interactive Systems and User-Centered Adaptation: The Plasticity of User Interfaces
9.1. Introduction
9.2. The problem space of UI plasticity
9.3. The CAMELEON reference framework for rational development of plastic UIs
9.4. The CAMELEON-RT run time infrastructure
9.5. Our principles for implementing plasticity
9.6. Conclusion: lessons learned and open challenges
9.7. Appendices
9.8. Bibliography
Chapter 10 Composition of User Interfaces
10.1. Problem
10.2. Case study
10.3. Issues
10.4. State of the art in UI composition
10.5. Two examples of approaches
10.6. Key statements and propositions
10.7. Bibliography
Chapter 11 Smart Homes for People Suffering from Cognitive Disorders
11.1. Introduction
11.2. The impact of cognitive disorders on society
11.3. Cognitive disorders, relevant clients and research at DOMUS
11.4. The objectives of the research program conducted at DOMUS
11.5. Pervasive computing and ambient intelligence
11.6. An integrated and interdisciplinary approach to research
11.7. Transforming a residence into an intelligent habitat
11.8. Research activities
11.9. Conclusion
11.10. Bibliography
Chapter 12 Pervasive Games and Critical Applications
12.1. Introduction
12.2. Pervasive games
12.3. Critical ubiquitous applications
12.4. Conclusion
12.5. Bibliography
Chapter 13 Intelligent Transportation Systems
13.1. Introduction
13.2. Software architecture
13.3. Dedicated transportation services and mode of communication
13.4. Public transportation services
13.5. Conclusion
13.6. Bibliography
Chapter 14 Sociotechnical Ambient Systems: From Test Scenario to Scientific Obstacles
14.1. Introduction
14.2. Definitions and characteristics
14.3. Real-life scenario: Ambient Campus
14.4. Intuitive architectures
14.5. Scientific challenges
14.6. Conclusion
14.7. Acknowledgments
14.8. Bibliography
List of Authors
Index
First published 2013 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:
ISTE Ltd
27-37 St George’s Road
London SW19 4EU
UK
www.iste.co.uk
John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA
www.wiley.com
© ISTE Ltd 2013
The rights of Gaëlle Calvary, Thierry Delot, Florence Sèdes and Jean-Yves Tigli to be identified as the authors of this work have been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.
Library of Congress Control Number: 2012950237
British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN: 978-1-84821-437-8
Preface
In recent years, information and communication science and technology has witnessed spectacular advances owing to the groundbreaking nature of new materials, calculation processes and data sources. “Gray box” computers now only represent a small proportion of calculation resources and data sources. Indeed, more than 80% of processors are today integrated into various sophisticated devices. The number of sensors integrated into components with processing and signal transmission units has significantly increased. Each sensor is an active node in a system whose local processing capabilities make it possible to aggregate, sort and filter data or carry out more sophisticated processing.
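To make this local processing concrete, here is a minimal sketch, in Python, of a hypothetical sensor node that filters out-of-range readings and transmits only aggregated summaries rather than raw samples. The class, the window size and the plausibility bounds are illustrative assumptions of ours, not taken from any system described in this book.

```python
from statistics import mean

class SensorNode:
    """Hypothetical sensor node that filters and aggregates readings
    locally, transmitting only periodic summaries instead of raw samples."""

    def __init__(self, window_size=10, valid_range=(-40.0, 85.0)):
        self.window_size = window_size   # samples per transmitted summary
        self.valid_range = valid_range   # plausible bounds for this sensor
        self.buffer = []

    def read(self, value):
        lo, hi = self.valid_range
        if lo <= value <= hi:            # filter out-of-range noise locally
            self.buffer.append(value)
        if len(self.buffer) >= self.window_size:
            return self.flush()
        return None                      # nothing to transmit yet

    def flush(self):
        summary = {"min": min(self.buffer),
                   "max": max(self.buffer),
                   "mean": round(mean(self.buffer), 2),
                   "n": len(self.buffer)}
        self.buffer.clear()
        return summary                   # the only data sent over the radio

# Example: temperature samples reduced to one summary per window;
# the 999.0 reading is a glitch that never leaves the node.
node = SensorNode(window_size=5)
for v in [21.3, 21.4, 999.0, 21.5, 21.2, 21.6]:
    msg = node.read(v)
    if msg:
        print(msg)   # {'min': 21.2, 'max': 21.6, 'mean': 21.4, 'n': 5}
```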
Human–computer interaction has also significantly evolved. It is no longer confined to the traditional “screen, keyboard, mouse” setting, but permeates our everyday lives and activities. User interfaces (UIs) are no longer limited to graphics, nor to static contexts of use. Rather, they have become multimodal and are capable of adapting to dynamic contexts of use. As users move, UIs migrate with them from one interactive space to another. While this vision is exciting in terms of usage, it raises significant engineering challenges. These challenges grow as computing devices become ever more powerful and make it possible to exploit and mine huge databases. Anticipating and countering the risk of system hijacking is a further issue that must be addressed.
Transparency has become a highly valuable quality for ensuring better access to resources at all levels of the system or organization (“virtual”, “in-network”, and so on), yet it also leaves those resources vulnerable to threats and attacks. Owing to the wide range of risks, from economic intelligence to the protection of personal data, the right balance must be found between data protection and transparent access to new autonomous resources in open environments. New challenges have emerged, such as those relating to managing data access, ethics, the well-known “precautionary principle”, “Big Data”, “Big Brother”; the list is endless.
With such challenges, the growing diversity of dynamic services and smart objects raises new issues in the design, development and execution of software applications. These applications must be able to adapt to a software and hardware infrastructure that changes continuously and unpredictably. Prime examples include applications that follow users as they move between their phones, cars, houses, etc., providing them with permanent access to services over a prolonged period of time. Such an application must respond to variations in the context of use while ensuring quality of service. Having simplified the distribution of software applications, middleware now facilitates their development by providing them with the ability to adapt: it must supply run-time mechanisms that guarantee the permanent adaptation of the application to a changing context of use. These challenges will only increase with new uses in ever more diverse, variable and unpredictable contexts of use.
This book is the result of two CNRS (French National Center for Scientific Research) “Ambient intelligence” summer schools, organized in July 2009 and July 2011. In line with the ethos of these schools, the present work aims to inform the lay reader of the challenges posed by this new field of research. Taking a holistic view, it covers several levels of abstraction, from fundamental to advanced concepts, and brings together contributions from specialists in the field, most of whom presented their research at the schools.
This book covers the main areas of computer science concerned with ambient intelligence (human–computer interaction, middleware, networks, information systems, etc.). It takes a multi-disciplinary approach, with contributions ranging from intelligent materials to ethics, and aims to demonstrate the importance of integrated research that draws on both the social sciences and technological advances: research that mobilizes and combines expertise from each field to develop new theories. The book also reflects the field's wide spectrum of applications, with chapters focusing on health, transport and even tourism. Teaching ambient intelligence is not addressed per se; rather, the book offers a stimulating perspective on the challenge of teaching this paradigm within current frameworks, given its interdisciplinary and contemporary nature and the lack of dedicated structures, platforms and generic materials. Recent initiatives such as “FabLabs” are surely part of a response to this.
We would like to warmly thank all the authors who have contributed to the publication of this book. We also sincerely hope that you will have as much enjoyment reading their contributions as we have had in listening to their presentations during the two editions of the “ambient intelligence” summer school.
Gaëlle CALVARY, Thierry DELOT, Florence SÈDES and Jean-Yves TIGLI
November 2012
Chapter 1

Ambient Intelligence: Science or Fad?

Ambient intelligence concerns the use of emerging technologies for computing, sensing, displaying, communicating and interacting to provide services in ordinary human environments. Different facets of this problem have been addressed under a variety of names, including ubiquitous computing, pervasive computing, disappearing computing and the Internet of Things. Whatever the name, the field is defined by its core aim: to provide services and devices that can adapt to individuals' needs and the social context. This covers applications as diverse as helping people adopt more energy-efficient lifestyles, improving the quality of life of the disabled, helping senior citizens remain independent, and providing families with services for security and entertainment and with tools for managing the cost of living.
1.1. Ambient intelligence: still young at 20 years

Ambient intelligence is not a new concept. In 1988, only a few years after the introduction of the Macintosh computer and the French Minitel, Mark Weiser [WEI 91] identified the principal challenges under the name “ubiquitous computing”. Weiser argued that technologies centered on daily activities would inevitably fade from view, becoming an imperceptible but ubiquitous component of ordinary life. He contended that personal computing, although increasingly widespread, was only the first of many steps in this process.
During the 1990s, researchers at IBM proposed the term pervasive computing, placing the emphasis on technical challenges: developing the hardware and software techniques needed to bring computing into ordinary human environments. At around the same time, the European IST Advisory Group (ISTAG) put forth its vision of ambient intelligence [STA 03], leading to the creation of the Disappearing Computer program within the European Union's Fifth Framework Programme for Research and Technological Development. This period also saw the emergence of Philips Research's “Vision of the Future” program [PHI 96] and the creation of the Philips HomeLab, designed to stimulate creativity through experimentation, to explore new opportunities for combining technologies, to identify the socio-cultural significance of these innovations, and to make these concepts tangible, useful and accessible to all.
During this period, a number of conferences, workshops and journals were created. The Ubicomp1 conference, which grew out of the mobile computing and smart environment communities, focuses primarily on user experience. The IEEE Pervasive and Percom2 conferences arose from the distributed computing community and focus on the challenges and technical solutions of distributed systems and networks. Within Europe, the EUSAI (European Symposium on Ambient Intelligence) conference, later renamed AmI – Ambient Intelligence3, was launched in 2002 with support from Philips Research. A scientific community has also emerged around context-aware computing: while the concept of context is not new in computer science (nor in other fields), bringing computing into ordinary human environments raises a rich new set of problems. Other aspects of the problem have also been explored, including collectives of artificial agents for ambient intelligence4, the Internet of Things5 and Machine-to-Machine (M2M) communication, communicating objects, mobile computing, wearable computing6, social computing, intelligent habitats and environments (towns, housing, roads, transport, architecture, etc.), tangible and embedded interaction7, affective computing, human–robot interaction8, and embedded systems.
In summary, Weiser's vision has been used to justify and define new research across an extremely diverse collection of fields. It is increasingly evident that such research cannot be carried out in isolation: it is fundamentally multi-disciplinary, requiring the assimilation of problems and concepts from a variety of specializations. From our perspective, ambient intelligence is simply the latest stage in the evolution of informatics as a scientific discipline. The following chapters provide an overview of the field in its current state.
1.2. A step forward in the evolution of informatics

Waldner [WAL 07] has summarized the evolution of computing by charting the continual miniaturization of electronic components, the spectacular increase in processing power and memory capacity, the omnipresence of networks, and the falling cost of hardware production. In this view, the development of resources drives changes in the nature of computing. We carry out a parallel analysis, using changes in resources to predict developments in research. We focus on three areas in particular: the availability of computing power as a critical resource, the individual as the focus of attention, and the physical and social worlds in relation to the digital world.
Fifty years ago, computing machines were far too expensive and cumbersome for anyone to imagine them in everyday homes, as shown in Figure 1.1. Access to computing was restricted to specialist operators, and programs were carefully encoded on perforated “punch cards”. Results were printed on reams of fan-folded paper, with punch cards, magnetic tapes and removable disks used for long-term storage. At best, computing machines had around a megabyte of central memory. Computer networks and packet-switching technologies for communications were an avant-garde area of research9.
During this period, the user was a programmer specializing in scientific computing, statistics or management applications (such as payroll). Programs were entered by a dedicated operator, who monitored the use of resources using a specific control language (Job Control Language, JCL). The programmer was responsible for declaring the required computing resources, and any program that consumed more memory or printed more pages than anticipated was automatically terminated by the operating system. The skill lay in producing a correct program “from the start”, using techniques such as memory overlays so that the program ran within the available central memory. The concept of virtual memory thus became a subject of research. With the emergence of time-sharing systems, punched cards gradually disappeared in favor of personal terminals, initially built from TELEX terminals or “teletypes” and eventually replaced by alphanumeric screens. Bit-mapped displays, however, were judged far too expensive because of memory costs.
The optimization of resources and “virtualization” (found today in cloud computing) remained the driving force in computing until researchers (North American, for the most part) turned their attention to the human component of the human–machine system. Indeed, as the cost of computing machines decreased, labor costs increasingly dominated the cost of computing.
The first CHI (Computer–Human Interaction) conference of the ACM10 was organized in 1983. This conference, coinciding with the first appearance of personal computers, marked the start of the pursuit of useful and usable applications. The user was no longer an experienced programmer, but a non-specialist using the computer as a tool for professional activities. An application was considered useful if it provided the functions expected by its user; it was then said to ensure “functional conformity”. It was considered usable if the user interface (UI), which gives access to the applicative functions, conformed to the cognitive, motor and sensory capabilities of the target user; this is known as “interactional conformity”.
Computer scientists, whether academic or industrial, have for too long underestimated the cognitive dimension of the human user. Not only should a program provide the expected functions, but it should also give access to these functions in a manner that respects the user's working procedures and abilities to perceive and reason, and this correspondence should be made explicit in the user interface. It was only in 2010 that computing professionals recognized that the design of the human–computer interface was not simply a question of aesthetics, but an issue of user–computer conformity. By contributing concepts, theories and methods, cognitive psychology and ergonomics have played an important role in addressing this problem.
Methods used for user-interface design include participative and contextual design [BEY 98], iterative design (well suited to “agile” programming practices), and scenario-based design [ROS 02]. These user-centered methods have given rise to a number of formalisms, such as CLG [CAR 83], TAG [PAY 86], ETAG [TAU 90], UAN [HAR 92] and CTT [PAT 97], for modeling the thought processes of target users in the form of task models: tree structures of aims and sub-aims linked by composition operators or temporal relations. Such models go beyond a simple UML use case to specify functional requirements and task sequences from the user's perspective. A toy illustration is sketched below.
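The following sketch is a deliberately simplified rendering, in the spirit of CTT [PAT 97] but not its actual notation or tooling, of a task model as a tree of aims and sub-aims linked by temporal composition operators. The task names and the reduced operator set are our own illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List

# A tiny subset of temporal composition operators, CTT-style.
ENABLING = ">>"       # left subtask must finish before the right one starts
CHOICE = "[]"         # exactly one of the subtasks is performed
INTERLEAVING = "|||"  # subtasks may be performed in any order

@dataclass
class Task:
    """A node in a task tree: an aim decomposed into sub-aims."""
    name: str
    operator: str = ENABLING               # how the subtasks compose
    subtasks: List["Task"] = field(default_factory=list)

    def show(self, depth=0):
        label = f"{self.name} ({self.operator})" if self.subtasks else self.name
        print("  " * depth + label)
        for t in self.subtasks:
            t.show(depth + 1)

# "Withdraw cash" decomposed into ordered and alternative subtasks.
withdraw = Task("Withdraw cash", ENABLING, [
    Task("Authenticate", ENABLING, [Task("Insert card"), Task("Enter PIN")]),
    Task("Choose amount", CHOICE, [Task("Pick preset"), Task("Type amount")]),
    Task("Take card and cash"),
])
withdraw.show()
```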
Example theories include the Model Human Processor [CAR 83] and Norman's direct correspondence principle [NOR 86], which states that there should be a clear correspondence between the psychological variables the user manipulates mentally and the computing objects presented, as well as a direct correspondence between the internal state of the system and its representation in the user interface. These theories are, or at least should be, part of the toolkit of any competent computer scientist.
Two complementary representations coexist in modern user interface technologies: linguistic representations (including natural and artificial languages, as in the Unix shell) and metaphors of the real world, such as the desktop environment of modern personal computers. The WIMP (Window, Icon, Menu, Pointing) interaction paradigm, made possible by modern user interface toolkits, is a contemporary manifestation of the direct correspondence principle and of its theoretical foundation, the Model Human Processor.
Ergonomics and cognitive psychology have also had a major impact on evaluation methods, suggesting protocols and metrics to assess human performance, such as task duration and error rates. Although these methods are well elaborated and documented, software developers remain reluctant in practice to integrate them into the development process; when they do, evaluation is often performed too late to have a real impact on system usability. The sketch below illustrates the two metrics.
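As a minimal illustration, and assuming a simple per-session log format of our own invention, the following sketch computes the two metrics mentioned above (task duration and error rate), along with a completion rate often reported with them.

```python
# Hypothetical usability-test logs: one record per participant session.
sessions = [
    {"task": "book ticket", "seconds": 74.0, "errors": 1, "completed": True},
    {"task": "book ticket", "seconds": 92.5, "errors": 3, "completed": True},
    {"task": "book ticket", "seconds": 60.0, "errors": 0, "completed": False},
]

# Mean duration is computed over completed sessions only; abandoned
# sessions still count toward errors and completion rate.
completed = [s for s in sessions if s["completed"]]
mean_duration = sum(s["seconds"] for s in completed) / len(completed)
error_rate = sum(s["errors"] for s in sessions) / len(sessions)
completion_rate = len(completed) / len(sessions)

print(f"mean task duration: {mean_duration:.1f} s")   # 83.2 s
print(f"errors per session: {error_rate:.2f}")        # 1.33
print(f"completion rate:    {completion_rate:.0%}")   # 67%
```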
At the same time, the Internet, wireless networks, the web (which celebrated its 20th anniversary in 2010) and web browsers have come to be used by nearly everyone. From the isolated computer, we have moved to an era of “instantly connected” computing.
In contrast to the previous era of computing, in which the desktop computer was the archetype, new technologies increasingly enable mobility and the integration of digital systems into ordinary physical objects. Such objects are increasingly fitted with technologies for computing, communications, sensing, actuation and interaction, and are increasingly networked, forming a complex infrastructure that gives rise to a plethora of new services. Figure 1.3 illustrates this trend in four images.
The examples in Figure 1.3 lead to three immediate observations. First, the polymorphism of the computer, which weaves, both literally and figuratively, the digital into our everyday activities, from the useful to the pointless. Second, the physical world has become a resource that can be shaped and (re)constructed by the individual, not only to be more efficient but also to improve quality of life, pleasure and experience; this has given rise to “funology” (i.e. the science of having fun) [BLY 03]. Third, the user is no longer a subject limited to “consuming” applications imposed by the market, but can take on the role of actor, like the “DIYers” who construct and improve their living spaces using off-the-shelf components. Even the individual's capacity to create has itself been surpassed by a new phenomenon, social networking [KRA 10], made possible by the universality of the Internet.
The social dimension of computing has in fact been an area of interest since the end of the 1980s11. The initial aim was to develop models, theories and digital systems, called groupware, designed to improve group activities in terms of production, coordination and communication. With the web, the change in scale has led to new uses. Every individual, collective and community can now collect information, relate it, produce new information, and in turn share it with the rest of the world. Shneiderman [SHN 98] refers to this phenomenon with the mantra collect–relate–create–donate. Wikipedia is the most obvious example of the collective construction of encyclopedic knowledge. Other examples include the Google Image Labeler, which indexes images, and TopCoder, for the social production of programs. Digital software stores, inspired by the Apple App Store, have changed the software development process and triggered new economic models and opportunities.
Despite this constant avalanche of information, human capacities remain unchanged. The user is a genuine bottleneck, and new interaction techniques must be invented to accommodate the growing flood of information. Gestural interaction with the inertial measurement units of mobile telephones, physical interaction with motion-sensing devices based on real-time 3D reconstruction such as Microsoft's Kinect, muscular interaction, multipoint screens, and bendable objects are all noteworthy examples, as illustrated in Figure 1.4. They show that innovation requires unprecedented cooperation within ICT (information and communication technologies) and between ICT and HSS (human and social sciences), from nanotechnologies to software engineering, and from the individual to all levels of society.
This brief overview indicates that we are entering into an era of radical change, which, in turn, raises a number of new challenges.
1.3. Extreme challenges

The scientific, technical and ethical challenges posed by ambient intelligence have been examined in a number of reports [COU 08, STA 03, PUN 05, WAL 07], specialized journals, conference sessions and workshops. Research problems are generally organized as the stack of sub-domains shown in Figure 1.5. Three key challenges cut across all of these domains: scalability, heterogeneity and dynamic adaptation. All three arise from the fact that ambient intelligence pushes computing to its limits.
Changes in scale can lead to unexpected phenomena. For ambient intelligence, the challenge of scale results from the massive interconnection of a very large number of ordinary devices augmented with computing, sensing, actuation and interaction. The challenge lies in managing the co-existence of services and systems made possible by the interconnection of devices over a wide range of scales, from personal body-area networks based on wearable computing to city-wide and planetary scale systems. This challenge is greatly complicated by the heterogeneity resulting in part from technical challenges at each scale.
At any scale, a variety of possible solutions may be used to address competing technical challenges, and each scale also raises its own unique problems. Integrating devices built on different programming frameworks can prove extremely complex; integrating across scales is more challenging still. While the field has seen concerted movements toward uniform standards, too often such efforts have been carried out in a vacuum, resulting in isolated silos of interoperability across which dynamic adaptation is impossible.
Dynamic adaptation, with its multiple facets, approaches and solutions, has been examined for a number of years in a variety of fields and research specialties (see Figure 1.5). For some researchers, the ultimate aim is an autonomous, safe and secure system that requires no human intervention. For others, the user should remain involved, if desired: “autonomous software compositions” must therefore provide users with interaction points at every level of abstraction, so that the adaptation process can be controlled when needed. A minimal sketch of such an interaction point follows.
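By way of illustration, here is a minimal sketch of one such interaction point: a context-triggered adaptation loop in which the user may approve or veto each proposed reconfiguration. The class, rule and function names are hypothetical and are not drawn from any middleware described in this book.

```python
class AdaptationManager:
    """Applies context-triggered reconfigurations, optionally keeping
    the user in the loop through an approval callback."""

    def __init__(self, ask_user=None):
        self.rules = []             # (predicate, reconfiguration) pairs
        self.ask_user = ask_user    # interaction point; None = fully autonomous

    def add_rule(self, predicate, reconfiguration):
        self.rules.append((predicate, reconfiguration))

    def on_context_change(self, context):
        for predicate, reconfigure in self.rules:
            if predicate(context):
                # Interaction point: adapt autonomously unless the user objects.
                if self.ask_user is None or self.ask_user(reconfigure.__name__):
                    reconfigure(context)

def switch_to_voice_ui(ctx):
    print("Reconfiguring: voice interface (user is driving)")

# Auto-approve here for a deterministic demo; a real UI would prompt the user.
manager = AdaptationManager(ask_user=lambda action: True)
manager.add_rule(lambda ctx: ctx.get("activity") == "driving", switch_to_voice_ui)
manager.on_context_change({"activity": "driving"})
```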
To date, these problems have been addressed only in a piecemeal manner, constrained by the restricted view of a single specialty or application area. It is therefore necessary to develop new technologies that are generic, enabling and malleable: generic so that they can be applied in all contexts; enabling so that they allow the rapid development of services by professionals; and malleable so that they can be organized and changed by the end user, as required, in a non-uniform, constrained, dynamic and multi-scale world12. This is not a question of creating a uniform, standardized world, but of respecting diversity and the unexpected. In our view, the “malleable” constitutes a major challenge for the coming years, because we are placing in the hands of end users the means to program (without being aware of it), to develop programs (without endangering life or property), and to share them with others (as with app stores and social networks).
1.4. Conclusion

In view of the above, is ambient intelligence a fad or an emerging scientific discipline? Following Thomas Kuhn's definition, our analysis suggests that ambient intelligence does not yet have the status of a discipline13. A scientific community has organized itself around conferences and specialized journals, but this does not necessarily entail a shared, standard set of concepts and methods. Ambient intelligence is still “application driven”, for its socio-economic benefits.
For the foreseeable future, we believe the answer lies in a progressive evolution of research practices and a collaborative approach toward a concrete and lasting integrated strategy. Each discipline and specialty progresses either by sharing information with other disciplines (the multi-disciplinary alliance that drives “collaborative” research projects) or through new shared knowledge arising from the integration of several disciplines and specialties, a pluri-disciplinary convergence that is a challenge in itself. Human–machine interaction is a perfect example of the convergence of psychology, sociology and computing, yet it took more than 20 years for it to be recognized as a discipline in its own right. It is therefore a question of time. Nevertheless, let us remember Alan Kay's well-known quote: “the best way to predict the future is to invent it!”.
1.5. Bibliography

[BEL 06] BELL G., DOURISH P., “Yesterday’s tomorrows: notes on ubiquitous computing’s dominant vision”, Personal and Ubiquitous Computing, 2006, www.ics.uci.edu/~jpd/ubicomp/BellDourish-YesterdaysTomorrows.pdf.
[BEY 98] BEYER H., HOLTZBLATT K., Contextual Design, Morgan Kaufman, San Francisco, 1998.
[BLY 03] BLYTHE M.A., OVERBEEKE K., MONK A.F., WRIGHT P.C. (eds), Funology: From Usability to Enjoyment, Human-Computer Interaction Series, vol. 3, Springer, New York, 2003.
[BUE 08] BUECHLEY L., EISENBERG M., CATCHEN J., CROCKETT A., “The LilyPad Arduino: using computational textiles to investigate engagement, aesthetics, and diversity in computer science education”, Proceedings of the SIGCHI Conference (CHI 2008), Florence, Italy, pp. 423–432, April 2008.
[CAR 83] CARD S.K., MORAN T.P., NEWELL A., The Psychology of Human-Computer Interaction, Lawrence Erlbaum, Hillsdale, 1983.
[COU 08] COUTAZ J., CROWLEY J., Intelligence Ambiante: défis et opportunités, document de réflexion conjoint du comité d’experts “Informatique Ambiante” du département ST2I du CNRS et du groupe de travail “Intelligence Ambiante” du Groupe de concertation sectoriel (GCS3) du ministère de l’Enseignement supérieur et de la Recherche, DGRI A3, 2008, http://iihm.imag.fr/publs/2008/RapportIntellAmbiante.V1.2finale.pdf.
[GAV 06] GAVER W., BOWERS J., BOUCHER A., LAW A., PENNINGTON S., VILLAR N., “The history tablecloth: illuminating domestic activity”, DIS’06 Proceedings of the 6th Conference on Designing Interactive Systems, ACM, New York, USA, pp. 199–208, 2006.
[HAR 10] HARRISON C., TAN D., MORRIS D., “Skinput: appropriating the body as an input surface”, Proceedings of CHI’10, ACM, pp. 453–462, 2010.
[HAR 92] HARTSON R., GRAY P., “Temporal aspects of tasks in the user action notation”, Human Computer Interaction, vol. 7, pp. 1–45, 1992.
[KRA 10] KRAUT R., MAHER M.L., OLSON J., MALONE T., PIROLLI P., THOMAS J.C., “Scientific foundations: a case for technology-mediated social participation theory”, IEEE Computer, vol. 43, pp. 22–28, November 2010.
[MER 07] MERRILL D., KALANITHI J., MAES P., “Siftables: towards sensor network user interfaces”, Proceedings of the First International Conference on Tangible and Embedded Interaction (TEI’07), Baton Rouge, USA, pp. 15–17, February 2007.
[MOG 06] MOGGRIDGE B., Designing Interactions, The MIT Press, Cambridge, MA, 2006.
[NAR 95] NARDI B., A Small Matter of Programming, Perspectives on End User Computing. The MIT Press, Cambridge, MA, 1995.
[NOR 86] NORMAN D., DRAPER S.W., User Centered System Design: New Perspectives on Human-Computer Interaction, Lawrence Erlbaum, Hillsdale, 1986.
[PAR 09] PARVIZ B., “Augmented reality in a contact lens”, IEEE Spectrum, September 2009.
[PAT 97] PATERNÒ F., MANCINI C., MENICONI S., “ConcurTaskTrees: a diagrammatic notation for specifying task models”, Proceedings of INTERACT 1997, Sydney, Australia, pp. 362–369, 1997.
[PAY 86] PAYNE S., GREEN T., “Task-action grammars: a model of the mental representation of task languages”, Human-Computer Interaction, vol. 2, pp. 93–133, 1986.
[PHI 96] PHILIPS, Vision of the future, Philips Corporate Design, Eindhoven, V+K Publ., Bussum, Netherlands, 1996.
[PUN 05] PUNIE Y., “The future of ambient intelligence in Europe: the need for more everyday life”, Communications & Stratégies, no. 57, 2005, www.idate.fr/fic/revue_telech/418/CS57_PUNIE.pdf.
[ROS 02] ROSSON M.B., CARROLL J.M., Usability Engineering: Scenario-Based Development of Human-Computer Interaction, Morgan Kaufmann, San Francisco, 2002.
[SCH 04] SCHWESIG C., POUPYREV I., MORI E., “Gummi: a bendable computer”, Proceedings of CHI’2004, ACM, pp. 263–270, 2004.
[STA 03] IST ADVISORY GROUP, Ambient Intelligence: from Vision to Reality, European Commission, 2003.
[SHN 98] SHNEIDERMAN B., “Relate-create-donate: a teaching/learning philosophy for the cyber-generation”, Computers and Education, vol. 31, no. 1, pp. 25–39, 1998.
[TAU 90] TAUBER M., “ETAG: extended task action grammar – a language for the description of the user’s task language”, Proceedings INTERACT’90, Elsevier, pp. 163–174, 1990.
[WAL 07] WALDNER J.B., Nanocomputers and Swarm Intelligence, ISTE Ltd, London and John Wiley & Sons, New York, 2007.
[WEI 91] WEISER M., “The computer for the twenty-first century”, Scientific American, vol. 265, no. 3, pp. 66–75, 1991.
1 www.ubicomp.org/. Ubicomp was created in 2001, following HUC 99 and HUC2k (Handheld and Ubiquitous Computing).
2 http://pervasive2008.org/, www.percom.org/.
3 www.ami-07.org/.
4 Workshop Artificial Societies for Ambient Intelligence, http://asami07.cs.rhul.ac.uk/.
5 www.internet-of-things-2008.org.
6 www.iswc.net/.
7 www.tei-conf.org/.
8 http://hri2007.org/.
9 Louis Pouzin, then a researcher at IRIA (later INRIA), and his team pioneered packet-switched communications. Their datagram technique was used in the Cyclades project, whose first network was composed of nodes at IRIA, CII and IMAG in France. The first demonstration took place in 1973.
10 CHI’83 followed the first workshop on the subject in Gaithersburg in March 1982, entitled Human Factors in Computer Systems.
11 The first ACM conference on the subject Computer Supported Collaborative Work (CSCW) took place in 1987.
12 What Bell and Dourish call, in less technical terms, a messy fragmented world [BEL 06].
13 www.electroniques.biz/pdf/EIH200312110541038.pdf (in French).
Chapter 2

Thinking about Ethics

Information and communication technologies, in all their diversity and complexity, raise ethical questions about safety and environmental hazards, about the risks of personal surveillance and threats to privacy, and about the possibility of “improving” mankind; they thus pose the question of evolution far beyond the scope of any single technology. The consequences, intended or not, of miniaturization, invisibility and the interaction of technologies with humans have led to a surge in ethical reflection that reaches far beyond the scientific community and in which the citizen is often a participant. Reflection on the ethical and social consequences of these technologies should be restricted neither to an evaluation of risks and costs, nor to research into the social acceptance of new inventions. The interest, hopes and beliefs that their development engenders in civil society and the scientific community extend far beyond rational analysis. Developed in a context of uncertainty about their future consequences, and echoing ancient fears, technology and its uses provoke collective emotional reactions that are often excessive or irrational. If ethical enlightenment is to favor the emergence of a widely applicable way of weighing “reasonable doubt” against “proportionate and acceptable risk”, it must do so from a perspective that recognizes social value systems.
Strongly anchored in industry, these technologies sit at the frontier of science and its applications, which are “at one with it” [KLE 08]. They are characterized by their considerable complexity and by their interactions with other disciplines such as the human and social sciences, biology and medicine. Driven by action and immediacy, they leave little room for reflection: the ethical dimension is often incompatible with economic demands. For all that, they carry social and political changes that upset, sometimes unintentionally, the status quo. In their potential applications, they sometimes reach beyond the limits of the law, foreshadowing future changes. It is in this muddle of contradictory imperatives that an ethical approach must “give sense to an ongoing experiment whose results we do not yet know, and even highlight uncertainty” [KLE 08].
Technology increasingly touches human identity, in terms of fundamental rights, by enabling us, under various restrictions, to process personal, sensitive1 and biometric data that identify an individual on the basis of his/her physical, biological or psychological characteristics (DNA, retina, iris, fingerprints, hand contours, veins) and medical or behavioral (emotions) information. Can an ethical approach “appropriate” such data while respecting our fundamental rights? Do the values set out in and by a society change? Do technologies change the consensus on what is and what is not acceptable?
The increased capacity to collect data on a single person, to track individuals in both public (stores, airports, etc.) and private spaces, and to “profile” them from their individual behavior has led to laws and guidelines that restrict data collection. A new frontier has been crossed with the spread and miniaturization of technology: chips, “cloud computing”, the “Internet of Things” and “pervasive computing” have markedly changed the human environment. The apparent insignificance of the data, the priority given to objects over people, the logic of globalization (technological standardization based on an American concept of privacy that does not account for European principles protecting private life), and the risk of reduced individual vigilance due to the invisible presence and activation of these systems all complicate the situation2. The protection of private life, a fundamental component of human rights, is becoming a challenge in the development of these technologies. Indeed, if the right to private life involves protecting aspects of personal life, it is also a highly vague right, whose recognition is left to judges who base their decisions, in part, on European legislation such as the Convention for the Protection of Human Rights and Fundamental Freedoms (Article 8), the Amsterdam Treaty (November 10, 1997), Directive 95/46/EC3 on the protection of personal data in Europe, and the Charter of Fundamental Rights of the European Union, Article 8 of which states that everyone has the right to the protection of their personal data. It is therefore the responsibility of judges not only to respect private life, but also to protect it.
These technologies, embedded by an industrial, economic and competitive world in daily life, arouse fears in civil society that are not entirely unfounded. The invisibility of data collection, which can occur without the knowledge or consent of those involved and without notification of purpose; interoperability, which means that the data can be read by many parties; and the profiling of individuals, with its attendant segmentation, discrimination and exclusion, are some of the repercussions that must be weighed when evaluating risk. These risks are all the harder to appreciate given the uncertainty surrounding these new technologies: assistive technology designed to improve daily life for those with disabilities can also serve as surveillance technology. Fears have been raised and fed by the seemingly limitless reach of these technologies in matters of security, for example web profiling or the use of bio-identification tools such as chips placed in the human body. These technologies, and their convergence with other fields (biology, cognitive sciences, etc.), are becoming embedded in the entire social fabric. Authors have described the interaction paradigm of ubiquitous computing as “the processing of information dissolving into behavior”. Human relationships shaped by software, chips and nano-objects lead individuals, more or less consciously, to abdicate their right to privacy, the integrity of their individual liberty, and their dignity.
Is this a catastrophic scenario for mankind or a “seductive promise”? The uncertainty of these technologies and the “improved human” paradigm have regularly fed fears, often voiced by the scientific community or the media. Even if some of these fears appear unfounded today, such as the imminent arrival of a world controlled by robots or of a fusion of the human central nervous system with machines, others should not be disregarded. Computer implants placed in the human body to compensate for a loss of physical capability by restoring damaged functions “contribute to the promotion of human dignity whilst representing a risk which must be considered”4. From this perspective, respect for human dignity is a basic human right and, along with liberty, one of the principal frames of reference governing research progress. Given the uncertainty of technology, it is necessary to seek, as far as possible, a balance between “the reasonable and the relative”.
As E. Klein, A. Grinbaum and V. Bontems have noted in relation to the nano-sciences, “the reference to ethics has become a sort of pervasive code against which every new question must be measured” [KLE 08]. Who can claim to set out society's values: which organizations, which ethics committees? The “incomplete” quantification of risks carried out by scientists and policy makers, and the cost–benefit projections made by financiers and industry, are not sufficient to measure the potential future consequences of technology.
On the policy side, the creation of regulations that allow better practical adaptation brings citizens into public debate via opinion polls, user associations, and professional practices (codes of good practice, codes of conduct, recommendations, codes of ethics). Regulation, as a model of technological “ethical governance”, is based on the observation of proven social phenomena and on reasonable transactions over the values and beliefs those phenomena carry. The reality is more disappointing, because regulation, grounded only in the present, emerges from a tangle of origins and sources (citizens, professionals, policy makers, etc.), ethics committees, and ethical bodies that issue piecemeal, segmented and hard-to-implement recommendations.
Ethics cannot be reduced to the mere regulation and management of risks. It raises questions and provokes reflection on the human condition in a society whose uncertainties and risks are immeasurable, since the future is inherently difficult to predict. Since Max Weber, numerous theories have distinguished between the ethics of responsibility and the ethics of discussion developed by Habermas. In the ethics of responsibility, since the consequences of an action are attributable to its author, man should, as far as he can predict, put himself in a position to anticipate potential problems. In the ethics of discussion, what is essential is that people can exchange rational arguments about their interests in a public space of free discussion, from which new norms and common interests will arise.
Neither of these approaches to ethics is intellectually satisfying. Both tend to impose the precautionary principle as a basic element of the “ethics of the future”5. The boundaries of the precautionary principle were progressively set out over the course of the 1990s, from a fluid concept developed in the 1970s around nuclear energy. It is out of concern for preserving the future that this principle recognizes the need for, and the legitimacy of, preventive action that does not wait for scientific certainty. In France, for example, the Barnier Law on environmental protection calls for, “in the absence of certainty, considering the scientific and technical information available, [...] the adoption of effective and proportionate measures to prevent the risk of serious and irreversible damage to the environment at an economically acceptable cost”6. Recognition of this principle, which has since been extended to health, has shifted the presumption of guilt from the victim to the alleged perpetrator of any harm caused.
In this sense, the precautionary principle allows a scientific approach to be justified and technologies to be restricted only where risks can be objectively measured. It does not apply, however, in situations of uncertainty or ignorance about potential, or even probable, harm. Beyond the principle and its definition, the question of the researcher's own responsibility arises. Should researchers be treated differently from lay observers in ethical terms? Does the scientific community have an obligation to inform policy makers and citizens of risks for which no evaluation is foreseeable or even possible? Is it in a position to decide what is desirable and what is not?
Is there, at the margins of technological ethics, an ethics specific to researchers that relieves them of their liberty and, as a consequence, of their responsibility? The practice of “whistle blowing”7, a new factor in risk management, allows scientists who discover elements they consider potentially harmful to humans, society or the environment to bring them to the attention of officials, organizations or the media, sometimes against the wishes of their superiors. Do these technologies present a genuinely new perspective for ethics? Should ethics be rethought in relation to their specific properties? In the section entitled “Challenges”, the 30th CNIL report (2010) [COM 10] responds directly to these issues by proposing to strengthen data protection within a framework of shared, common ethics. However, when individuals waive some of their privacy in return for “real or imagined benefits”, for example by allowing their biometric data to be collected in the name of security, the response must be negative: the consent of those concerned does not remove the need for ethical reflection on information and communication technology. The requirement to respect human dignity is an absolute principle; consent neither removes the need to uphold it nor justifies its infringement.
Paradoxically, one may also question the gravity of the “judicial crisis” left in the wake of these unprecedented technologies. Indeed, examination of the preparatory work for freedom-of-information laws, or of the annual reports of data protection commissions, reveals a striking admission of powerlessness in the face of the upheaval caused. If legal doctrine has shown that the fantasy of a fixed law lives on, there is no more important challenge than conforming the law to norms and values that evolve at a different rate.