Foundations of Trusted Autonomy

Bibliographic Details
Superior document: Studies in Systems, Decision and Control Series; v.117
Contributors:
Place / Publishing House: Cham: Springer International Publishing AG, 2018.
©2018.
Year of Publication: 2018
Edition: 1st ed.
Language: English
Series: Studies in Systems, Decision and Control Series
Physical Description: 1 online resource (399 pages)
Table of Contents:
  • Intro
  • Foreword
  • Preface
  • Acknowledgements
  • Contents
  • Contributors
  • 1 Foundations of Trusted Autonomy: An Introduction
  • 1.1 Autonomy
  • 1.2 Trust
  • 1.3 Trusted Autonomy
  • Autonomy
  • 2 Universal Artificial Intelligence
  • 2.1 Introduction
  • 2.2 Background and History of AI
  • 2.3 Universal Artificial Intelligence
  • 2.3.1 Framework
  • 2.3.2 Learning
  • 2.3.3 Goal
  • 2.3.4 Planning
  • 2.3.5 AIXI
  • Putting It All Together
  • 2.4 Approximations
  • 2.4.1 MC-AIXI-CTW
  • 2.4.2 Feature Reinforcement Learning
  • 2.4.3 Model-Free AIXI
  • 2.4.4 Deep Learning
  • 2.5 Fundamental Challenges
  • 2.5.1 Optimality and Exploration
  • 2.5.2 Asymptotically Optimal Agents
  • 2.6 Predicting and Controlling Behaviour
  • 2.6.1 Self-Modification
  • 2.6.2 Counterfeiting Reward
  • 2.6.3 Death and Self-Preservation
  • 2.7 Conclusions
  • References
  • 3 Goal Reasoning and Trusted Autonomy
  • 3.1 Introduction
  • 3.2 Goal-Driven Autonomy Models
  • 3.2.1 Goal-Driven Autonomy
  • 3.2.2 Goal Selection
  • 3.2.3 An Application for Human-Robot Teaming
  • 3.3 Goal Refinement
  • 3.3.1 Goal Lifecycle
  • 3.3.2 Guaranteeing the Execution of Specified Behaviors
  • 3.3.3 A Distributed Robotics Application
  • 3.4 Future Topics
  • 3.4.1 Adaptive Autonomy and Inverse Trust
  • 3.4.2 Rebel Agents
  • 3.5 Conclusion
  • References
  • 4 Social Planning for Trusted Autonomy
  • 4.1 Introduction
  • 4.2 Motivation and Background
  • 4.2.1 Automated Planning
  • 4.2.2 From Autistic Planning to Social Planning
  • 4.3 Social Planning
  • 4.3.1 A Formal Model for Multi-agent Epistemic Planning
  • 4.3.2 Solving Multi-agent Epistemic Planning Problems
  • 4.4 Social Planning for Human Robot Interaction
  • 4.4.1 Search and Rescue
  • 4.4.2 Collaborative Manufacturing
  • 4.5 Discussion
  • References
  • 5 A Neuroevolutionary Approach to Adaptive Multi-agent Teams
  • 5.1 Introduction
  • 5.2 The Legion II Game
  • 5.2.1 The Map
  • 5.2.2 Units
  • 5.2.3 Game Play
  • 5.2.4 Scoring the Game
  • 5.3 Agent Control Architectures
  • 5.3.1 Barbarian Sensors and Controllers
  • 5.3.2 Legion Sensors and Controllers
  • 5.4 Neuroevolution With Enforced Sub-Populations (ESP)
  • 5.5 Experimental Methodology
  • 5.5.1 Repeatable Gameplay
  • 5.5.2 Training
  • 5.5.3 Testing
  • 5.6 Experiments
  • 5.6.1 Learning the Division of Labor
  • 5.6.2 Run-Time Readaptation
  • 5.7 Discussion
  • 5.8 Conclusions
  • References
  • 6 The Blessing and Curse of Emergence in Swarm Intelligence Systems
  • 6.1 Introduction
  • 6.2 Emergence in Swarm Intelligence
  • 6.3 The 'Blessing' of Emergence
  • 6.4 The 'Curse' of Emergence
  • 6.5 Taking Advantage of the Good While Avoiding the Bad
  • 6.6 Conclusion
  • References
  • 7 Trusted Autonomous Game Play
  • 7.1 Introduction
  • 7.2 TA Game AI
  • 7.3 TA Game
  • 7.4 TA Game Communities
  • 7.5 TA Mixed Reality Games
  • 7.6 Discussion: TA Games
  • References
  • Trust
  • 8 The Role of Trust in Human-Robot Interaction
  • 8.1 Introduction
  • 8.2 Conceptualization of Trust
  • 8.3 Modeling Trust
  • 8.4 Factors Affecting Trust
  • 8.4.1 System Properties
  • 8.4.2 Properties of the Operator
  • 8.4.3 Environmental Factors
  • 8.5 Instruments for Measuring Trust
  • 8.6 Trust in Human Robot Interaction
  • 8.6.1 Performance-Based Interaction: Humans Influencing Robots
  • 8.6.2 Social-Based Interactions: Robots Influencing Humans
  • 8.7 Conclusions and Recommendations
  • References
  • 9 Trustworthiness of Autonomous Systems
  • 9.1 Introduction
  • 9.1.1 Autonomous Systems
  • 9.1.2 Trustworthiness
  • 9.2 Background
  • 9.3 Who or What Is Trustworthy?
  • 9.4 How Do We Know Who or What Is Trustworthy?
  • 9.4.1 Implicit Justifications of Trust
  • 9.4.2 Explicit Justifications of Trust
  • 9.4.3 A Cognitive Model of Trust and Competence
  • 9.4.4 Trustworthiness and Risk
  • 9.4.5 Summary
  • 9.5 What or Who Should We Trust?
  • 9.6 The Value of Trustworthy Autonomous Systems
  • 9.7 Conclusion
  • References
  • 10 Trusted Autonomy Under Uncertainty
  • 10.1 Trust and Uncertainty
  • 10.1.1 What Is Trust?
  • 10.1.2 Trust and Distrust in HRI
  • 10.2 Trust and Uncertainty
  • 10.2.1 Trust and Distrust Entail Unknowns
  • 10.2.2 What Is Being Trusted, What Is Uncertain?
  • 10.2.3 Trust and Dilemmas
  • 10.3 Factors Affecting Human Reactivity to Risk and Uncertainty, and Trust
  • 10.3.1 Kinds of Uncertainty, Risks, Standards, and Dispositions
  • 10.3.2 Presumptive and Organizational-Level Trust
  • 10.3.3 Trust Repair
  • 10.4 Concluding Remarks
  • References
  • 11 The Need for Trusted Autonomy in Military Cyber Security
  • 11.1 Introduction
  • 11.2 Cyber Security
  • 11.3 Challenges and the Potential Application of Trusted Autonomy
  • 11.4 Conclusion
  • References
  • 12 Reinforcing Trust in Autonomous Systems: A Quantum Cognitive Approach
  • 12.1 Introduction
  • 12.2 Compatible and Incompatible States
  • 12.3 A Quantum Cognition Model for the Emergence of Trust
  • 12.4 Conclusion
  • References
  • 13 Learning to Shape Errors with a Confusion Objective
  • 13.1 Introduction
  • 13.2 Foundations
  • 13.2.1 Binomial Logistic Regression
  • 13.2.2 Multinomial Logistic Regression
  • 13.2.3 Multinomial Softmax Regression for Gaussian Case
  • 13.3 Multinomial Softmax Regression on Confusion
  • 13.4 Implementation and Results
  • 13.4.1 Error Trading
  • 13.4.2 Performance Using a Deep Network and Independent Data Sources
  • 13.4.3 Adversarial Errors
  • 13.5 Discussion
  • 13.6 Conclusion
  • References
  • 14 Developing Robot Assistants with Communicative Cues for Safe, Fluent HRI
  • 14.1 Introduction
  • 14.2 CHARM - Collaborative Human-Focused Assistive Robotics for Manufacturing
  • 14.2.1 The Robot Assistant, Its Task, and Its Components
  • 14.2.2 CHARM Streams and Thrusts
  • 14.2.3 Plugfest
  • 14.3 Identifying, Modeling, and Implementing Naturalistic Communicative Cues
  • 14.3.1 Phase 1: Human-Human Studies
  • 14.3.2 Phase 2: Behavioral Description
  • 14.3.3 Phase 3: Human-Robot Interaction Studies
  • 14.4 Communicative Cue Studies
  • 14.4.1 Human-Robot Handovers
  • 14.4.2 Hesitation
  • 14.4.3 Tap and Push
  • 14.5 Current and Future Work
  • References
  • Trusted Autonomy
  • 15 Intrinsic Motivation for Truly Autonomous Agents
  • 15.1 Introduction
  • 15.2 Background
  • 15.2.1 Previous Work on Intrinsic Human Motivation
  • 15.2.2 Previous Work on Cognitive Architectures
  • 15.3 A Cognitive Architecture with Intrinsic Motivation
  • 15.3.1 Overview of Clarion
  • 15.3.2 The Action-Centered Subsystem
  • 15.3.3 The Non-Action-Centered Subsystem
  • 15.3.4 The Motivational Subsystem
  • 15.3.5 The Metacognitive Subsystem
  • 15.4 Some Examples of Simulations
  • 15.5 Concluding Remarks
  • References
  • 16 Computational Motivation, Autonomy and Trustworthiness: Can We Have It All?
  • 16.1 Autonomous Systems
  • 16.2 Intrinsically Motivated Swarms
  • 16.2.1 Crowds of Motivated Agents
  • 16.2.2 Motivated Particle Swarm Optimization for Adaptive Task Allocation
  • 16.2.3 Motivated Guaranteed Convergence Particle Swarm Optimization for Exploration and Task Allocation Under Communication Constraints
  • 16.3 Functional Implications of Intrinsically Motivated Swarms
  • 16.3.1 Motivation and Diversity
  • 16.3.2 Motivation and Adaptation
  • 16.3.3 Motivation and Exploration
  • 16.4 Implications of Motivation on Trust
  • 16.4.1 Implications for Reliability
  • 16.5 Implications for Privacy and Security
  • 16.5.1 Implications for Safety
  • 16.6 Implications of Complexity
  • 16.7 Implications for Risk
  • 16.7.1 Implications for Free Will
  • 16.8 Conclusion
  • References
  • 17 Are Autonomous-and-Creative Machines Intrinsically Untrustworthy?
  • 17.1 Introduction
  • 17.2 The Distressing Principle, Intuitively Put
  • 17.3 The Distressing Principle, More Formally Put
  • 17.3.1 The Ideal-Observer Point of View
  • 17.3.2 Theory-of-Mind-Creativity
  • 17.3.3 Autonomy
  • 17.3.4 The Deontic Cognitive Event Calculus ($\mathcal{De}\mathcal{CEC}$)
  • 17.3.5 Collaborative Situations
  • Untrustworthiness
  • 17.3.6 Theorem ACU
  • 17.4 Computational Simulations
  • 17.4.1 ShadowProver
  • 17.4.2 The Simulation Proper
  • 17.5 Toward the Needed Engineering
  • References
  • 18 Trusted Autonomous Command and Control
  • 18.1 Scenario
  • References
  • 19 Trusted Autonomy in Training: A Future Scenario
  • 19.1 Introduction
  • 19.2 Scan of Changes
  • 19.3 Trusted Autonomy Training System Map
  • 19.4 Theory of Change
  • 19.5 Narratives
  • 19.5.1 The Failed Promise
  • 19.5.2 Fake It Until You Break It
  • 19.5.3 To Infinity, and Beyond!
  • References
  • 20 Future Trusted Autonomous Space Scenarios
  • 20.1 Introduction
  • 20.2 The Space Environment
  • 20.3 Space Activity - Missions and Autonomy
  • 20.4 Current State-of-the-Art of Trusted Autonomous Space Systems
  • 20.5 Some Future Trusted Autonomous Space Scenarios
  • 20.5.1 Autonomous Space Operations
  • 20.5.2 Autonomous Space Traffic Management Systems
  • 20.5.3 Autonomous Disaggregated Space Systems
  • References
  • 21 An Autonomy Interrogative
  • 21.1 Introduction
  • 21.2 Fundamental Uncertainty in Economics
  • 21.2.1 Economic Agency and Autonomy
  • 21.3 The Inadequacy of Bayesianism
  • 21.4 Epistemic and Ontological Uncertainty
  • 21.5 Black Swans and Universal Causality
  • 21.6 Ontological Uncertainty and Incompleteness
  • 21.6.1 Uncertainty as Non-ergodicity
  • 21.7 Uncertainty and Incompleteness
  • 21.8 Decision-Making Under Uncertainty
  • 21.9 Barbell Strategies
  • 21.10 Theory of Self