Unlocking Artificial Intelligence: From Theory to Applications

Bibliographic Details
Contributor:
Place / Publishing House: Cham: Springer International Publishing AG, 2024.
©2024.
Year of Publication: 2024
Edition: 1st ed.
Language: English
Physical Description: 1 online resource (382 pages)
Table of Contents:
  • Intro
  • Preface
  • Acknowledgements
  • Contents
  • Part I Theory
  • Chapter 1 Automated Machine Learning
  • 1.1 Introduction
  • 1.2 Components of AutoML Systems
  • 1.2.1 Search Space
  • 1.2.2 Optimization
  • 1.2.3 Ensembling
  • 1.2.4 Feature Selection and Engineering
  • 1.2.5 Meta-Learning
  • 1.2.6 A Brief Note on AutoML in the Wild
  • 1.3 Selected Topics in AutoML
  • 1.3.1 AutoML for Time Series Data
  • 1.3.2 Unsupervised AutoML
  • 1.3.3 AutoML Beyond a Single Objective
  • 1.3.4 Human-In-The-Loop AutoML
  • 1.4 Neural Architecture Search
  • 1.4.1 A Brief Overview of the Current State of NAS
  • 1.4.2 Hardware-aware NAS
  • 1.5 Conclusion and Outlook
  • References
  • Chapter 2 Sequence-based Learning
  • 2.1 Introduction
  • 2.2 Time Series Processing
  • 2.2.1 Time Series Data Streams
  • 2.2.2 Pre-Processing
  • 2.2.3 Predictive Modelling
  • 2.2.4 Post-Processing
  • 2.3 Methods
  • 2.3.1 Temporal Convolutional Networks
  • 2.3.2 Recurrent Neural Networks
  • 2.3.3 Transformer
  • 2.4 Perspectives
  • 2.4.1 Time Series Similarity
  • 2.4.1.1 Deep Metric Learning
  • 2.4.2 Transfer Learning & Domain Adaptation
  • 2.4.3 Model Interpretability
  • 2.4.3.1 Interpretability for Time Series
  • 2.4.3.2 Trusting Interpretations
  • 2.5 Conclusion and Outlook
  • Acknowledgments
  • References
  • Chapter 3 Learning from Experience
  • 3.1 Introduction
  • 3.2 Concepts of Reinforcement Learning
  • 3.2.1 Markov Decision Processes (MDPs)
  • 3.2.2 Dynamic Programming
  • 3.2.3 Model-free Reinforcement Learning
  • 3.2.4 General Remarks
  • 3.3 Learning purely through Interaction
  • 3.3.1 Exploration-Exploitation
  • 3.3.1.1 Exploration Strategies
  • 3.3.1.2 Exploration in Deep RL
  • 3.4 Learning with Data or Knowledge
  • 3.4.1 Model-based RL with Continuous Actions
  • 3.4.2 MBRL with Discrete Actions: Monte Carlo Tree Search
  • 3.4.3 Offline Reinforcement Learning
  • 3.4.4 Hierarchical RL
  • 3.5 Challenges for Agent Deployment
  • 3.5.1 Safety through Policy Constraints
  • 3.5.2 Generalizability of Policies
  • 3.5.3 Lack of a Reward Function
  • 3.6 Conclusion and Outlook
  • References
  • Chapter 4 Learning with Limited Labelled Data
  • 4.1 Introduction
  • 4.2 Semi-Supervised Learning
  • 4.2.1 Classical Semi-Supervised Learning
  • 4.2.2 Deep Semi-Supervised Learning
  • 4.2.2.1 Self-training
  • 4.2.2.2 Unsupervised Regularization
  • 4.2.3 Self-Training and Consistency Regularization
  • 4.3 Active Learning
  • 4.3.1 Deep Active Learning (DAL)
  • 4.3.2 Uncertainty Sampling
  • 4.3.3 Diversity Sampling
  • 4.3.4 Balanced Criteria
  • 4.4 Active Semi-Supervised Learning
  • 4.4.1 How Can SSL and AL Work Together?
  • 4.4.2 Are SSL and AL Always Mutually Beneficial?
  • 4.5 Conclusion and Outlook
  • References
  • Chapter 5 The Role of Uncertainty Quantification for Trustworthy AI
  • 5.1 Introduction
  • 5.2 Towards Trustworthy AI
  • 5.2.1 The EU AI Act
  • 5.2.2 From Uncertainty to Trustworthy AI
  • 5.3 Uncertainty Quantification
  • 5.3.1 Sources of Uncertainty
  • 5.3.1.1 Aleatoric Uncertainty
  • 5.3.1.2 Epistemic Uncertainty
  • 5.3.2 Methods for Quantification of Uncertainty and Calibration
  • 5.3.2.1 Data-based Methods
  • 5.3.2.2 Architecture-Modifying Methods
  • 5.3.2.3 Post-Hoc Methods
  • 5.3.3 Evaluation Metrics for Uncertainty Estimation
  • 5.3.3.1 Negative Log-Likelihood
  • 5.3.3.2 Expected Calibration Error
  • 5.3.3.3 Rejection-based Measures
  • 5.4 Conclusion and Outlook
  • References
  • Chapter 6 Process-aware Learning
  • 6.1 Introduction
  • 6.2 Overview of Process Mining
  • 6.2.1 Process Mining Basic Concept
  • 6.2.2 Process Mining Types
  • 6.2.2.1 Process Discovery
  • 6.2.2.2 Conformance Checking
  • 6.2.2.3 Model Enhancement
  • 6.2.3 Event Log
  • 6.2.4 Four Quality Criteria
  • 6.2.5 Types of Processes
  • 6.2.5.1 Lasagna Processes
  • 6.2.5.2 Spaghetti Processes
  • 6.3 Process-Awareness from Theory to Practice
  • 6.3.1 Predictive Analysis in Process Mining
  • 6.3.2 Predictive Process Mining with Bayesian Statistics
  • 6.3.2.1 Preliminaries for Bayesian Modeling
  • 6.3.2.2 Quality Criteria for Bayesian Modeling
  • 6.3.2.3 Context-Aware Structure Learning for Probabilistic Process Prediction
  • 6.3.3 Process AI
  • 6.4 Conclusion and Outlook
  • References
  • Chapter 7 Combinatorial Optimization
  • 7.1 Introduction
  • 7.2 Solving Methods
  • 7.2.1 Heuristics
  • 7.2.2 Exact Methods
  • 7.3 Modeling Techniques
  • 7.3.1 Graph Theory
  • 7.3.1.1 Clique Problems
  • 7.3.1.2 Flow Models
  • 7.3.2 Mixed Integer Programs and Connections to Machine Learning
  • 7.3.2.1 Modeling Logic
  • 7.3.2.2 Binary Decision Trees
  • 7.3.3 Pooling
  • 7.4 Conclusion and Outlook
  • References
  • Chapter 8 Acquisition of Semantics for Machine-Learning and Deep-Learning based Applications
  • 8.1 Introduction
  • 8.2 Approaches to Acquire Semantics
  • 8.2.1 Manual Annotation and Labeling
  • 8.2.2 Data Augmentation Techniques
  • 8.2.3 Simulation and Generation
  • 8.2.3.1 Physical Modeling
  • 8.2.3.2 Generative Adversarial Networks
  • 8.2.4 High-End Reference Sensors
  • 8.2.5 Active Learning
  • 8.2.6 Knowledge Modeling Using Semantic Networks
  • 8.2.7 Discussion
  • 8.3 Conclusion and Outlook
  • References
  • Part II Applications
  • Chapter 9 Assured Resilience in Autonomous Systems - Machine Learning Methods for Reliable Perception
  • 9.1 Introduction
  • 9.1.1 The Perception Challenge
  • 9.2 Approaches to Reliable Perception
  • 9.2.1 Choice of Dataset
  • 9.2.2 Unexpected Behavior of ML Methods
  • 9.2.3 Reliable Object Detection for Autonomous Driving
  • 9.2.4 Uncertainty Quantification for Image Classification
  • 9.2.5 Ensemble Distribution Distillation for 2D Object Detection
  • 9.2.6 Robust Object Detection in Simulated Driving Environments
  • 9.2.6.1 Scenarios Setup
  • 9.2.6.2 Methods and Metrics
  • 9.2.6.3 Results
  • 9.2.7 Out-of-Distribution Detection
  • 9.3 Conclusion and Outlook
  • References
  • Chapter 10 Data-driven Wireless Positioning
  • 10.1 Introduction
  • 10.2 AI-Assisted Localization
  • 10.3 Direct Positioning
  • 10.3.1 Model
  • 10.3.2 Experimental Setup
  • 10.3.2.1 Measurement Campaign
  • 10.3.2.2 Environments
  • 10.3.3 Evaluation
  • 10.3.4 Hybrid Localization
  • 10.3.5 Zone Identification
  • 10.3.6 Experimental Setup
  • 10.3.7 Environments
  • 10.3.8 Evaluation
  • 10.4 Conclusion and Outlook
  • Acknowledgements
  • References
  • Chapter 11 Comprehensible AI for Multimodal State Detection
  • 11.1 Introduction
  • 11.1.1 Cognitive Load Estimation
  • 11.1.2 Challenges in Affective Computing
  • 11.2 Data Collection
  • 11.2.1 Annotation
  • 11.2.2 Data Preprocessing
  • 11.3 Modeling
  • 11.3.1 In-Domain Evaluation
  • 11.3.2 Cross-Domain Evaluation
  • 11.3.3 Interpretability
  • 11.3.4 Improving ECG Representation Learning
  • 11.3.5 Deployment and Application
  • 11.4 Conclusion and Outlook
  • References
  • Chapter 12 Robust and Adaptive AI for Digital Pathology
  • 12.1 Introduction
  • 12.2 Applications: Tumor Detection and Tumor-Stroma Assessment
  • 12.2.1 Generation of Labeled Data Sets
  • 12.2.2 Data Sets for Tumor Detection
  • 12.2.2.1 Primary Data Set
  • 12.2.2.2 Multi-Scanner Dataset
  • 12.2.2.3 Multi-Center Dataset
  • 12.2.2.4 Out-of-Distribution Data Set
  • 12.2.2.5 Urothelial Data Sets
  • 12.2.3 Data Set for Tumor-Stroma Assessment
  • 12.3 Prototypical Few-Shot Classification
  • 12.3.1 Robustness through Data Augmentation
  • 12.3.1.1 Evaluation on the Multi-Scanner Data Set
  • 12.3.1.2 Evaluation on the Multi-Center Data Set
  • 12.3.2 Out-of-Distribution Detection
  • 12.3.3 Adaptation to Urothelial Tumor Detection
  • 12.3.4 Interactive AI Authoring with MIKAIA®
  • 12.4 Prototypical Few-Shot Segmentation
  • 12.4.1 Tumor-Stroma Assessment
  • 12.5 Conclusion and Outlook
  • Acknowledgements
  • References
  • Chapter 13 Safe and Reliable AI for Autonomous Systems
  • 13.1 Introduction
  • 13.1.1 Reinforcement Learning
  • 13.1.2 Reinforcement Learning for Autonomous Driving
  • 13.2 Generating Environments with Driver Dojo
  • 13.2.1 Method
  • 13.3 Training Safe Policies with SafeDQN
  • 13.3.1 Method
  • 13.3.2 Evaluation
  • 13.4 Extracting Tree Policies with SafeVIPER
  • 13.4.1 Training the Policy
  • 13.4.2 Verification of Decision Trees
  • 13.4.3 Evaluation
  • 13.5 Conclusion and Outlook
  • References
  • Chapter 14 AI for Stability Optimization in Low Voltage Direct Current Microgrids
  • 14.1 Introduction
  • 14.2 Low Voltage DC Microgrids
  • 14.2.1 Control of Low Voltage DC Microgrids
  • 14.2.2 Stability of Low Voltage DC Microgrids
  • 14.3 AI-based Stability Optimization for Low Voltage DC Microgrids
  • 14.3.1 Overview
  • 14.3.2 Digital Network Twin and Generation of Labels to Describe the Stability State
  • 14.3.3 LVDC Microgrid Surrogate Model Applying Random Forests
  • 14.3.4 Stability Optimization Applying Decision Trees
  • 14.4 Implementation and Assessment
  • 14.4.1 Measurement of Grid Stability
  • 14.4.2 Experimental Validation
  • 14.5 Conclusion and Outlook
  • References
  • Chapter 15 Self-Optimization in Adaptive Logistics Networks
  • 15.1 Introduction
  • 15.2 A Brief Overview of Relevant Literature on Predicting the All-Time Buy Quantity
  • 15.3 Predicting the All-Time Buy
  • 15.4 A Probabilistic Hierarchical Growth Curve model
  • 15.5 Determining the Optimal Order Policy
  • 15.5.1 Modeling Non-Linear Costs
  • 15.5.2 Robust Optimization
  • 15.6 Pooling
  • 15.7 Conclusion and Outlook
  • References
  • Chapter 16 Optimization of Underground Train Systems