Foundations of Trusted Autonomy.
Part of: | Studies in Systems, Decision and Control Series ; v.117 |
---|---|
Contributors: | Scholz, Jason ; Reid, Darryn J. |
Place / Publisher: | Cham : Springer International Publishing AG, 2018. ©2018. |
Year of Publication: | 2018 |
Edition: | 1st ed. |
Language: | English |
Series: | Studies in Systems, Decision and Control Series ; v.117 |
Online Access: | https://ebookcentral.proquest.com/lib/oeawat/detail.action?docID=5579657 |
Physical Description: | 1 online resource (399 pages) |
id |
5005579657 |
---|---|
ctrlnum |
(MiAaPQ)5005579657 (Au-PeEL)EBL5579657 (OCoLC)1020319127 |
collection |
bib_alma |
record_format |
marc |
spelling |
Abbass, Hussein A. Foundations of Trusted Autonomy. 1st ed. Cham : Springer International Publishing AG, 2018. ©2018. 1 online resource (399 pages) text txt rdacontent computer c rdamedia online resource cr rdacarrier Studies in Systems, Decision and Control Series ; v.117 Intro -- Foreword -- Preface -- Acknowledgements -- Contents -- Contributors -- 1 Foundations of Trusted Autonomy: An Introduction -- 1.1 Autonomy -- 1.2 Trust -- 1.3 Trusted Autonomy -- Autonomy -- 2 Universal Artificial Intelligence -- 2.1 Introduction -- 2.2 Background and History of AI -- 2.3 Universal Artificial Intelligence -- 2.3.1 Framework -- 2.3.2 Learning -- 2.3.3 Goal -- 2.3.4 Planning -- 2.3.5 AIXI -- Putting It All Together -- 2.4 Approximations -- 2.4.1 MC-AIXI-CTW -- 2.4.2 Feature Reinforcement Learning -- 2.4.3 Model-Free AIXI -- 2.4.4 Deep Learning -- 2.5 Fundamental Challenges -- 2.5.1 Optimality and Exploration -- 2.5.2 Asymptotically Optimal Agents -- 2.6 Predicting and Controlling Behaviour -- 2.6.1 Self-Modification -- 2.6.2 Counterfeiting Reward -- 2.6.3 Death and Self-Preservation -- 2.7 Conclusions -- References -- 3 Goal Reasoning and Trusted Autonomy -- 3.1 Introduction -- 3.2 Goal-Driven Autonomy Models -- 3.2.1 Goal-Driven Autonomy -- 3.2.2 Goal Selection -- 3.2.3 An Application for Human-Robot Teaming -- 3.3 Goal Refinement -- 3.3.1 Goal Lifecycle -- 3.3.2 Guaranteeing the Execution of Specified Behaviors -- 3.3.3 A Distributed Robotics Application -- 3.4 Future Topics -- 3.4.1 Adaptive Autonomy and Inverse Trust -- 3.4.2 Rebel Agents -- 3.5 Conclusion -- References -- 4 Social Planning for Trusted Autonomy -- 4.1 Introduction -- 4.2 Motivation and Background -- 4.2.1 Automated Planning -- 4.2.2 From Autistic Planning to Social Planning -- 4.3 Social Planning -- 4.3.1 A Formal Model for Multi-agent Epistemic Planning -- 4.3.2 Solving Multi-agent Epistemic Planning Problems -- 4.4 Social Planning for Human Robot Interaction -- 4.4.1 Search and Rescue -- 4.4.2 
Collaborative Manufacturing -- 4.5 Discussion -- References -- 5 A Neuroevolutionary Approach to Adaptive Multi-agent Teams -- 5.1 Introduction. 5.2 The Legion II Game -- 5.2.1 The Map -- 5.2.2 Units -- 5.2.3 Game Play -- 5.2.4 Scoring the Game -- 5.3 Agent Control Architectures -- 5.3.1 Barbarian Sensors and Controllers -- 5.3.2 Legion Sensors and Controllers -- 5.4 Neuroevolution With Enforced Sub-Populations (ESP) -- 5.5 Experimental Methodology -- 5.5.1 Repeatable Gameplay -- 5.5.2 Training -- 5.5.3 Testing -- 5.6 Experiments -- 5.6.1 Learning the Division of Labor -- 5.6.2 Run-Time Readaptation -- 5.7 Discussion -- 5.8 Conclusions -- References -- 6 The Blessing and Curse of Emergence in Swarm Intelligence Systems -- 6.1 Introduction -- 6.2 Emergence in Swarm Intelligence -- 6.3 The `Blessing' of Emergence -- 6.4 The `Curse' of Emergence -- 6.5 Taking Advantage of the Good While Avoiding the Bad -- 6.6 Conclusion -- References -- 7 Trusted Autonomous Game Play -- 7.1 Introduction -- 7.2 TA Game AI -- 7.3 TA Game -- 7.4 TA Game Communities -- 7.5 TA Mixed Reality Games -- 7.6 Discussion: TA Games -- References -- Trust -- 8 The Role of Trust in Human-Robot Interaction -- 8.1 Introduction -- 8.2 Conceptualization of Trust -- 8.3 Modeling Trust -- 8.4 Factors Affecting Trust -- 8.4.1 System Properties -- 8.4.2 Properties of the Operator -- 8.4.3 Environmental Factors -- 8.5 Instruments for Measuring Trust -- 8.6 Trust in Human Robot Interaction -- 8.6.1 Performance-Based Interaction: Humans Influencing Robots -- 8.6.2 Social-Based Interactions: Robots Influencing Humans -- 8.7 Conclusions and Recommendations -- References -- 9 Trustworthiness of Autonomous Systems -- 9.1 Introduction -- 9.1.1 Autonomous Systems -- 9.1.2 Trustworthiness -- 9.2 Background -- 9.3 Who or What Is Trustworthy? 
-- 9.4 How do We Know Who or What Is Trustworthy -- 9.4.1 Implicit Justifications of Trust -- 9.4.2 Explicit Justifications of Trust -- 9.4.3 A Cognitive Model of Trust and Competence. 9.4.4 Trustworthiness and Risk -- 9.4.5 Summary -- 9.5 What or Who Should We Trust? -- 9.6 The Value of Trustworthy Autonomous Systems -- 9.7 Conclusion -- References -- 10 Trusted Autonomy Under Uncertainty -- 10.1 Trust and Uncertainty -- 10.1.1 What Is Trust? -- 10.1.2 Trust and Distrust in HRI -- 10.2 Trust and Uncertainty -- 10.2.1 Trust and Distrust Entail Unknowns -- 10.2.2 What Is Being Trusted -- What Is Uncertain? -- 10.2.3 Trust and Dilemmas -- 10.3 Factors Affecting Human Reactivity to Risk and Uncertainty, and Trust -- 10.3.1 Kinds of Uncertainty, Risks, Standards, and Dispositions -- 10.3.2 Presumptive and Organizational-Level Trust -- 10.3.3 Trust Repair -- 10.4 Concluding Remarks -- References -- 11 The Need for Trusted Autonomy in Military Cyber Security -- 11.1 Introduction -- 11.2 Cyber Security -- 11.3 Challenges and the Potential Application of Trusted Autonomy -- 11.4 Conclusion -- References -- 12 Reinforcing Trust in Autonomous Systems: A Quantum Cognitive Approach -- 12.1 Introduction -- 12.2 Compatible and Incompatible States -- 12.3 A Quantum Cognition Model for the Emergence of Trust -- 12.4 Conclusion -- References -- 13 Learning to Shape Errors with a Confusion Objective -- 13.1 Introduction -- 13.2 Foundations -- 13.2.1 Binomial Logistic Regression -- 13.2.2 Multinomial Logistic Regression -- 13.2.3 Multinomial Softmax Regression for Gaussian Case -- 13.3 Multinomial Softmax Regression on Confusion -- 13.4 Implementation and Results -- 13.4.1 Error Trading -- 13.4.2 Performance Using a Deep Network and Independent Data Sources -- 13.4.3 Adversarial Errors -- 13.5 Discussion -- 13.6 Conclusion -- References -- 14 Developing Robot Assistants with Communicative Cues for Safe, Fluent HRI -- 14.1 Introduction -- 14.2 CHARM - Collaborative Human-Focused 
Assistive Robotics for Manufacturing. 14.2.1 The Robot Assistant, Its Task, and Its Components -- 14.2.2 CHARM Streams and Thrusts -- 14.2.3 Plugfest -- 14.3 Identifying, Modeling, and Implementing Naturalistic Communicative Cues -- 14.3.1 Phase 1: Human-Human Studies -- 14.3.2 Phase 2: Behavioral Description -- 14.3.3 Phase 3: Human-Robot Interaction Studies -- 14.4 Communicative Cue Studies -- 14.4.1 Human-Robot Handovers -- 14.4.2 Hesitation -- 14.4.3 Tap and Push -- 14.5 Current and Future Work -- References -- Trusted Autonomy -- 15 Intrinsic Motivation for Truly Autonomous Agents -- 15.1 Introduction -- 15.2 Background -- 15.2.1 Previous Work on Intrinsic Human Motivation -- 15.2.2 Previous Work on Cognitive Architectures -- 15.3 A Cognitive Architecture with Intrinsic Motivation -- 15.3.1 Overview of Clarion -- 15.3.2 The Action-Centered Subsystem -- 15.3.3 The Non-Action-Centered Subsystem -- 15.3.4 The Motivational Subsystem -- 15.3.5 The Metacognitive Subsystem -- 15.4 Some Examples of Simulations -- 15.5 Concluding Remarks -- References -- 16 Computational Motivation, Autonomy and Trustworthiness: Can We Have It All? -- 16.1 Autonomous Systems -- 16.2 Intrinsically Motivated Swarms -- 16.2.1 Crowds of Motivated Agents -- 16.2.2 Motivated Particle Swarm Optimization for Adaptive Task Allocation -- 16.2.3 Motivated Guaranteed Convergence Particle Swarm Optimization for Exploration and Task Allocation Under Communication Constraints -- 16.3 Functional Implications of Intrinsically Motivated Swarms -- 16.3.1 Motivation and Diversity -- 16.3.2 Motivation and Adaptation -- 16.3.3 Motivation and Exploration -- 16.4 Implications of Motivation on Trust -- 16.4.1 Implications for Reliability -- 16.5 Implications for Privacy and Security -- 16.5.1 Implications for Safety -- 16.6 Implications of Complexity -- 16.7 Implications for Risk -- 16.7.1 Implications for Free Will. 
16.8 Conclusion -- References -- 17 Are Autonomous-and-Creative Machines Intrinsically Untrustworthy? -- 17.1 Introduction -- 17.2 The Distressing Principle, Intuitively Put -- 17.3 The Distressing Principle, More Formally Put -- 17.3.1 The Ideal-Observer Point of View -- 17.3.2 Theory-of-Mind-Creativity -- 17.3.3 Autonomy -- 17.3.4 The Deontic Cognitive Event Calculus (DCEC) -- 17.3.5 Collaborative Situations -- Untrustworthiness -- 17.3.6 Theorem ACU -- 17.4 Computational Simulations -- 17.4.1 ShadowProver -- 17.4.2 The Simulation Proper -- 17.5 Toward the Needed Engineering -- References -- 18 Trusted Autonomous Command and Control -- 18.1 Scenario -- References -- 19 Trusted Autonomy in Training: A Future Scenario -- 19.1 Introduction -- 19.2 Scan of Changes -- 19.3 Trusted Autonomy Training System Map -- 19.4 Theory of Change -- 19.5 Narratives -- 19.5.1 The Failed Promise -- 19.5.2 Fake It Until You Break It -- 19.5.3 To Infinity, and Beyond! -- References -- 20 Future Trusted Autonomous Space Scenarios -- 20.1 Introduction -- 20.2 The Space Environment -- 20.3 Space Activity - Missions and Autonomy -- 20.4 Current State-of-the-Art of Trusted Autonomous Space Systems -- 20.5 Some Future Trusted Autonomous Space Scenarios -- 20.5.1 Autonomous Space Operations -- 20.5.2 Autonomous Space Traffic Management Systems -- 20.5.3 Autonomous Disaggregated Space Systems -- References -- 21 An Autonomy Interrogative -- 21.1 Introduction -- 21.2 Fundamental Uncertainty in Economics -- 21.2.1 Economic Agency and Autonomy -- 21.3 The Inadequacy of Bayesianism -- 21.4 Epistemic and Ontological Uncertainty -- 21.5 Black Swans and Universal Causality -- 21.6 Ontological Uncertainty and Incompleteness -- 21.6.1 Uncertainty as Non-ergodicity -- 21.7 Uncertainty and Incompleteness -- 21.8 Decision-Making Under Uncertainty -- 21.9 Barbell Strategies. 21.10 Theory of Self. Description based on publisher supplied metadata and other sources. Electronic reproduction. 
Ann Arbor, Michigan : ProQuest Ebook Central, 2024. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries. Electronic books. Scholz, Jason. Reid, Darryn J. Print version: Abbass, Hussein A. Foundations of Trusted Autonomy Cham : Springer International Publishing AG,c2018 9783319648156 ProQuest (Firm) Studies in Systems, Decision and Control Series https://ebookcentral.proquest.com/lib/oeawat/detail.action?docID=5579657 Click to View |
language |
English |
format |
eBook |
author |
Abbass, Hussein A. |
author_facet |
Abbass, Hussein A. Scholz, Jason. Reid, Darryn J. |
author2 |
Scholz, Jason. Reid, Darryn J. |
author2_role |
Contributor Contributor |
author_sort |
Abbass, Hussein A. |
title |
Foundations of Trusted Autonomy. |
series |
Studies in Systems, Decision and Control Series ; |
publisher |
Springer International Publishing AG, |
publishDate |
2018 |
physical |
1 online resource (399 pages) |
edition |
1st ed. |
isbn |
9783319648163 9783319648156 |
callnumber-first |
T - Technology |
callnumber-subject |
TJ - Mechanical Engineering and Machinery |
callnumber-label |
TJ212-225 |
callnumber-sort |
TJ 3212 3225 |
genre |
Electronic books. |
genre_facet |
Electronic books. |
url |
https://ebookcentral.proquest.com/lib/oeawat/detail.action?docID=5579657 |
illustrated |
Not Illustrated |
oclc_num |
1020319127 |
status_str |
n |
carrierType_str_mv |
cr |
hierarchy_parent_title |
Studies in Systems, Decision and Control Series ; v.117 |
is_hierarchy_title |
Foundations of Trusted Autonomy. |
marc_error |
Info : Unimarc and ISO-8859-1 translations identical, choosing ISO-8859-1. --- [ 856 : z ] |
_version_ |
1792331054868070400 |
-- 3.4.2 Rebel Agents -- 3.5 Conclusion -- References -- 4 Social Planning for Trusted Autonomy -- 4.1 Introduction -- 4.2 Motivation and Background -- 4.2.1 Automated Planning -- 4.2.2 From Autistic Planning to Social Planning -- 4.3 Social Planning -- 4.3.1 A Formal Model for Multi-agent Epistemic Planning -- 4.3.2 Solving Multi-agent Epistemic Planning Problems -- 4.4 Social Planning for Human Robot Interaction -- 4.4.1 Search and Rescue -- 4.4.2 Collaborative Manufacturing -- 4.5 Discussion -- References -- 5 A Neuroevolutionary Approach to Adaptive Multi-agent Teams -- 5.1 Introduction.</subfield></datafield><datafield tag="505" ind1="8" ind2=" "><subfield code="a">5.2 The Legion II Game -- 5.2.1 The Map -- 5.2.2 Units -- 5.2.3 Game Play -- 5.2.4 Scoring the Game -- 5.3 Agent Control Architectures -- 5.3.1 Barbarian Sensors and Controllers -- 5.3.2 Legion Sensors and Controllers -- 5.4 Neuroevolution With Enforced Sub-Populations (ESP) -- 5.5 Experimental Methodology -- 5.5.1 Repeatable Gameplay -- 5.5.2 Training -- 5.5.3 Testing -- 5.6 Experiments -- 5.6.1 Learning the Division of Labor -- 5.6.2 Run-Time Readaptation -- 5.7 Discussion -- 5.8 Conclusions -- References -- 6 The Blessing and Curse of Emergence in Swarm Intelligence Systems -- 6.1 Introduction -- 6.2 Emergence in Swarm Intelligence -- 6.3 The `Blessing' of Emergence -- 6.4 The `Curse' of Emergence -- 6.5 Taking Advantage of the Good While Avoiding the Bad -- 6.6 Conclusion -- References -- 7 Trusted Autonomous Game Play -- 7.1 Introduction -- 7.2 TA Game AI -- 7.3 TA Game -- 7.4 TA Game Communities -- 7.5 TA Mixed Reality Games -- 7.6 Discussion: TA Games -- References -- Trust -- 8 The Role of Trust in Human-Robot Interaction -- 8.1 Introduction -- 8.2 Conceptualization of Trust -- 8.3 Modeling Trust -- 8.4 Factors Affecting Trust -- 8.4.1 System Properties -- 8.4.2 Properties of the Operator -- 8.4.3 Environmental Factors -- 8.5 Instruments for Measuring Trust -- 8.6 Trust in Human Robot 
Interaction -- 8.6.1 Performance-Based Interaction: Humans Influencing Robots -- 8.6.2 Social-Based Interactions: Robots Influencing Humans -- 8.7 Conclusions and Recommendations -- References -- 9 Trustworthiness of Autonomous Systems -- 9.1 Introduction -- 9.1.1 Autonomous Systems -- 9.1.2 Trustworthiness -- 9.2 Background -- 9.3 Who or What Is Trustworthy? -- 9.4 How do We Know Who or What Is Trustworthy -- 9.4.1 Implicit Justifications of Trust -- 9.4.2 Explicit Justifications of Trust -- 9.4.3 A Cognitive Model of Trust and Competence.</subfield></datafield><datafield tag="505" ind1="8" ind2=" "><subfield code="a">9.4.4 Trustworthiness and Risk -- 9.4.5 Summary -- 9.5 What or Who Should We Trust? -- 9.6 The Value of Trustworthy Autonomous Systems -- 9.7 Conclusion -- References -- 10 Trusted Autonomy Under Uncertainty -- 10.1 Trust and Uncertainty -- 10.1.1 What Is Trust? -- 10.1.2 Trust and Distrust in HRI -- 10.2 Trust and Uncertainty -- 10.2.1 Trust and Distrust Entail Unknowns -- 10.2.2 What Is Being Trusted -- What Is Uncertain? 
-- 10.2.3 Trust and Dilemmas -- 10.3 Factors Affecting Human Reactivity to Risk and Uncertainty, and Trust -- 10.3.1 Kinds of Uncertainty, Risks, Standards, and Dispositions -- 10.3.2 Presumptive and Organizational-Level Trust -- 10.3.3 Trust Repair -- 10.4 Concluding Remarks -- References -- 11 The Need for Trusted Autonomy in Military Cyber Security -- 11.1 Introduction -- 11.2 Cyber Security -- 11.3 Challenges and the Potential Application of Trusted Autonomy -- 11.4 Conclusion -- References -- 12 Reinforcing Trust in Autonomous Systems: A Quantum Cognitive Approach -- 12.1 Introduction -- 12.2 Compatible and Incompatible States -- 12.3 A Quantum Cognition Model for the Emergence of Trust -- 12.4 Conclusion -- References -- 13 Learning to Shape Errors with a Confusion Objective -- 13.1 Introduction -- 13.2 Foundations -- 13.2.1 Binomial Logistic Regression -- 13.2.2 Multinomial Logistic Regression -- 13.2.3 Multinomial Softmax Regression for Gaussian Case -- 13.3 Multinomial Softmax Regression on Confusion -- 13.4 Implementation and Results -- 13.4.1 Error Trading -- 13.4.2 Performance Using a Deep Network and Independent Data Sources -- 13.4.3 Adversarial Errors -- 13.5 Discussion -- 13.6 Conclusion -- References -- 14 Developing Robot Assistants with Communicative Cues for Safe, Fluent HRI -- 14.1 Introduction -- 14.2 CHARM - Collaborative Human-Focused Assistive Robotics for Manufacturing.</subfield></datafield><datafield tag="505" ind1="8" ind2=" "><subfield code="a">14.2.1 The Robot Assistant, Its Task, and Its Components -- 14.2.2 CHARM Streams and Thrusts -- 14.2.3 Plugfest -- 14.3 Identifying, Modeling, and Implementing Naturalistic Communicative Cues -- 14.3.1 Phase 1: Human-Human Studies -- 14.3.2 Phase 2: Behavioral Description -- 14.3.3 Phase 3: Human-Robot Interaction Studies -- 14.4 Communicative Cue Studies -- 14.4.1 Human-Robot Handovers -- 14.4.2 Hesitation -- 14.4.3 Tap and Push -- 14.5 Current and Future Work -- References -- Trusted Autonomy 
-- 15 Intrinsic Motivation for Truly Autonomous Agents -- 15.1 Introduction -- 15.2 Background -- 15.2.1 Previous Work on Intrinsic Human Motivation -- 15.2.2 Previous Work on Cognitive Architectures -- 15.3 A Cognitive Architecture with Intrinsic Motivation -- 15.3.1 Overview of Clarion -- 15.3.2 The Action-Centered Subsystem -- 15.3.3 The Non-Action-Centered Subsystem -- 15.3.4 The Motivational Subsystem -- 15.3.5 The Metacognitive Subsystem -- 15.4 Some Examples of Simulations -- 15.5 Concluding Remarks -- References -- 16 Computational Motivation, Autonomy and Trustworthiness: Can We Have It All? -- 16.1 Autonomous Systems -- 16.2 Intrinsically Motivated Swarms -- 16.2.1 Crowds of Motivated Agents -- 16.2.2 Motivated Particle Swarm Optimization for Adaptive Task Allocation -- 16.2.3 Motivated Guaranteed Convergence Particle Swarm Optimization for Exploration and Task Allocation Under Communication Constraints -- 16.3 Functional Implications of Intrinsically Motivated Swarms -- 16.3.1 Motivation and Diversity -- 16.3.2 Motivation and Adaptation -- 16.3.3 Motivation and Exploration -- 16.4 Implications of Motivation on Trust -- 16.4.1 Implications for Reliability -- 16.5 Implications for Privacy and Security -- 16.5.1 Implications for Safety -- 16.6 Implications of Complexity -- 16.7 Implications for Risk -- 16.7.1 Implications for Free Will.</subfield></datafield><datafield tag="505" ind1="8" ind2=" "><subfield code="a">16.8 Conclusion -- References -- 17 Are Autonomous-and-Creative Machines Intrinsically Untrustworthy? 
-- 17.1 Introduction -- 17.2 The Distressing Principle, Intuitively Put -- 17.3 The Distressing Principle, More Formally Put -- 17.3.1 The Ideal-Observer Point of View -- 17.3.2 Theory-of-Mind-Creativity -- 17.3.3 Autonomy -- 17.3.4 The Deontic Cognitive Event Calculus (DCEC) -- 17.3.5 Collaborative Situations -- Untrustworthiness -- 17.3.6 Theorem ACU -- 17.4 Computational Simulations -- 17.4.1 ShadowProver -- 17.4.2 The Simulation Proper -- 17.5 Toward the Needed Engineering -- References -- 18 Trusted Autonomous Command and Control -- 18.1 Scenario -- References -- 19 Trusted Autonomy in Training: A Future Scenario -- 19.1 Introduction -- 19.2 Scan of Changes -- 19.3 Trusted Autonomy Training System Map -- 19.4 Theory of Change -- 19.5 Narratives -- 19.5.1 The Failed Promise -- 19.5.2 Fake It Until You Break It -- 19.5.3 To Infinity, and Beyond! -- References -- 20 Future Trusted Autonomous Space Scenarios -- 20.1 Introduction -- 20.2 The Space Environment -- 20.3 Space Activity - Missions and Autonomy -- 20.4 Current State-of-the-Art of Trusted Autonomous Space Systems -- 20.5 Some Future Trusted Autonomous Space Scenarios -- 20.5.1 Autonomous Space Operations -- 20.5.2 Autonomous Space Traffic Management Systems -- 20.5.3 Autonomous Disaggregated Space Systems -- References -- 21 An Autonomy Interrogative -- 21.1 Introduction -- 21.2 Fundamental Uncertainty in Economics -- 21.2.1 Economic Agency and Autonomy -- 21.3 The Inadequacy of Bayesianism -- 21.4 Epistemic and Ontological Uncertainty -- 21.5 Black Swans and Universal Causality -- 21.6 Ontological Uncertainty and Incompleteness -- 21.6.1 Uncertainty as Non-ergodicity -- 21.7 Uncertainty and Incompleteness -- 21.8 Decision-Making Under Uncertainty -- 21.9 Barbell Strategies.</subfield></datafield><datafield tag="505" ind1="8" ind2=" "><subfield code="a">21.10 Theory of Self.</subfield></datafield><datafield tag="588" ind1=" " ind2=" "><subfield code="a">Description based on publisher 
supplied metadata and other sources.</subfield></datafield><datafield tag="590" ind1=" " ind2=" "><subfield code="a">Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2024. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries. </subfield></datafield><datafield tag="655" ind1=" " ind2="4"><subfield code="a">Electronic books.</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Scholz, Jason.</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Reid, Darryn J.</subfield></datafield><datafield tag="776" ind1="0" ind2="8"><subfield code="i">Print version:</subfield><subfield code="a">Abbass, Hussein A.</subfield><subfield code="t">Foundations of Trusted Autonomy</subfield><subfield code="d">Cham : Springer International Publishing AG,c2018</subfield><subfield code="z">9783319648156</subfield></datafield><datafield tag="797" ind1="2" ind2=" "><subfield code="a">ProQuest (Firm)</subfield></datafield><datafield tag="830" ind1=" " ind2="0"><subfield code="a">Studies in Systems, Decision and Control Series</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://ebookcentral.proquest.com/lib/oeawat/detail.action?docID=5579657</subfield><subfield code="z">Click to View</subfield></datafield></record></collection> |