XxAI - Beyond Explainable AI : International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers.

Bibliographic Details
Superior document: Lecture Notes in Computer Science Series ; v.13200
Contributors: Goebel, Randy; Fong, Ruth; Moon, Taesup; Müller, Klaus-Robert; Samek, Wojciech
Place / Publishing House: Cham : Springer International Publishing AG, 2022.
©2022.
Year of Publication: 2022
Edition: 1st ed.
Language: English
Series: Lecture Notes in Computer Science Series ; v.13200
Online Access: https://ebookcentral.proquest.com/lib/oeawat/detail.action?docID=6954332
Physical Description: 1 online resource (397 pages)
id 5006954332
ctrlnum (MiAaPQ)5006954332
(Au-PeEL)EBL6954332
(OCoLC)1311285955
collection bib_alma
record_format marc
spelling Holzinger, Andreas.
XxAI - Beyond Explainable AI : International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers.
1st ed.
Cham : Springer International Publishing AG, 2022.
©2022.
1 online resource (397 pages)
text txt rdacontent
computer c rdamedia
online resource cr rdacarrier
Lecture Notes in Computer Science Series ; v.13200
Intro -- Preface -- Organization -- Contents -- Editorial -- xxAI - Beyond Explainable Artificial Intelligence -- 1 Introduction and Motivation for Explainable AI -- 2 Explainable AI: Past and Present -- 3 Book Structure -- References -- Current Methods and Challenges -- Explainable AI Methods - A Brief Overview -- 1 Introduction -- 2 Explainable AI Methods - Overview -- 2.1 LIME (Local Interpretable Model Agnostic Explanations) -- 2.2 Anchors -- 2.3 GraphLIME -- 2.4 Method: LRP (Layer-wise Relevance Propagation) -- 2.5 Deep Taylor Decomposition (DTD) -- 2.6 Prediction Difference Analysis (PDA) -- 2.7 TCAV (Testing with Concept Activation Vectors) -- 2.8 XGNN (Explainable Graph Neural Networks) -- 2.9 SHAP (Shapley Values) -- 2.10 Asymmetric Shapley Values (ASV) -- 2.11 Break-Down -- 2.12 Shapley Flow -- 2.13 Textual Explanations of Visual Models -- 2.14 Integrated Gradients -- 2.15 Causal Models -- 2.16 Meaningful Perturbations -- 2.17 EXplainable Neural-Symbolic Learning (X-NeSyL) -- 3 Conclusion and Future Outlook -- References -- General Pitfalls of Model-Agnostic Interpretation Methods for Machine Learning Models -- 1 Introduction -- 2 Assuming One-Fits-All Interpretability -- 3 Bad Model Generalization -- 4 Unnecessary Use of Complex Models -- 5 Ignoring Feature Dependence -- 5.1 Interpretation with Extrapolation -- 5.2 Confusing Linear Correlation with General Dependence -- 5.3 Misunderstanding Conditional Interpretation -- 6 Misleading Interpretations Due to Feature Interactions -- 6.1 Misleading Feature Effects Due to Aggregation -- 6.2 Failing to Separate Main from Interaction Effects -- 7 Ignoring Model and Approximation Uncertainty -- 8 Ignoring the Rashomon Effect -- 9 Failure to Scale to High-Dimensional Settings -- 9.1 Human-Intelligibility of High-Dimensional IML Output -- 9.2 Computational Effort.
9.3 Ignoring Multiple Comparison Problem -- 10 Unjustified Causal Interpretation -- 11 Discussion -- References -- CLEVR-X: A Visual Reasoning Dataset for Natural Language Explanations -- 1 Introduction -- 2 Related Work -- 3 The CLEVR-X Dataset -- 3.1 The CLEVR Dataset -- 3.2 Dataset Generation -- 3.3 Dataset Analysis -- 3.4 User Study on Explanation Completeness and Relevance -- 4 Experiments -- 4.1 Experimental Setup -- 4.2 Evaluating Explanations Generated by State-of-the-Art Methods -- 4.3 Analyzing Results on CLEVR-X by Question and Answer Types -- 4.4 Influence of Using Different Numbers of Ground-Truth Explanations -- 4.5 Qualitative Explanation Generation Results -- 5 Conclusion -- References -- New Developments in Explainable AI -- A Rate-Distortion Framework for Explaining Black-Box Model Decisions -- 1 Introduction -- 2 Related Works -- 3 Rate-Distortion Explanation Framework -- 3.1 General Formulation -- 3.2 Implementation -- 4 Experiments -- 4.1 Images -- 4.2 Audio -- 4.3 Radio Maps -- 5 Conclusion -- References -- Explaining the Predictions of Unsupervised Learning Models -- 1 Introduction -- 2 A Brief Review of Explainable AI -- 2.1 Approaches to Attribution -- 2.2 Neuralization-Propagation -- 3 Kernel Density Estimation -- 3.1 Explaining Outlierness -- 3.2 Explaining Inlierness: Direct Approach -- 3.3 Explaining Inlierness: Random Features Approach -- 4 K-Means Clustering -- 4.1 Explaining Cluster Assignments -- 5 Experiments -- 5.1 Wholesale Customer Analysis -- 5.2 Image Analysis -- 6 Conclusion and Outlook -- A Attribution on CNN Activations -- A.1 Attributing Outlierness -- A.2 Attributing Inlierness -- A.3 Attributing Cluster Membership -- References -- Towards Causal Algorithmic Recourse -- 1 Introduction -- 1.1 Motivating Examples -- 1.2 Summary of Contributions and Structure of This Chapter -- 2 Preliminaries.
2.1 XAI: Counterfactual Explanations and Algorithmic Recourse -- 2.2 Causality: Structural Causal Models, Interventions, and Counterfactuals -- 3 Causal Recourse Formulation -- 3.1 Limitations of CFE-Based Recourse -- 3.2 Recourse Through Minimal Interventions -- 3.3 Negative Result: No Recourse Guarantees for Unknown Structural Equations -- 4 Recourse Under Imperfect Causal Knowledge -- 4.1 Probabilistic Individualised Recourse -- 4.2 Probabilistic Subpopulation-Based Recourse -- 4.3 Solving the Probabilistic Recourse Optimization Problem -- 5 Experiments -- 5.1 Compared Methods -- 5.2 Metrics -- 5.3 Synthetic 3-Variable SCMs Under Different Assumptions -- 5.4 Semi-synthetic 7-Variable SCM for Loan-Approval -- 6 Discussion -- 7 Conclusion -- References -- Interpreting Generative Adversarial Networks for Interactive Image Generation -- 1 Introduction -- 2 Supervised Approach -- 3 Unsupervised Approach -- 4 Embedding-Guided Approach -- 5 Concluding Remarks -- References -- XAI and Strategy Extraction via Reward Redistribution -- 1 Introduction -- 2 Background -- 2.1 Explainability Methods -- 2.2 Reinforcement Learning -- 2.3 Credit Assignment in Reinforcement Learning -- 2.4 Methods for Credit Assignment -- 2.5 Explainability Methods for Credit Assignment -- 2.6 Credit Assignment via Reward Redistribution -- 3 Strategy Extraction via Reward Redistribution -- 3.1 Strategy Extraction with Profile Models -- 3.2 Explainable Agent Behavior via Strategy Extraction -- 4 Experiments -- 4.1 Gridworld -- 4.2 Minecraft -- 5 Limitations -- 6 Conclusion -- References -- Interpretable, Verifiable, and Robust Reinforcement Learning via Program Synthesis -- 1 Introduction -- 2 Background on Reinforcement Learning -- 3 Programmatic Policies -- 3.1 Traditional Interpretable Models -- 3.2 State Machine Policies -- 3.3 List Processing Programs.
3.4 Neurosymbolic Policies -- 4 Synthesizing Programmatic Policies -- 4.1 Imitation Learning -- 4.2 Q-Guided Imitation Learning -- 4.3 Updating the DNN Policy -- 4.4 Program Synthesis for Supervised Learning -- 5 Case Studies -- 5.1 Interpretability -- 5.2 Verification -- 5.3 Robustness -- 6 Conclusions and Future Work -- References -- Interpreting and Improving Deep-Learning Models with Reality Checks -- 1 Interpretability: For What and For Whom? -- 2 Computing Interpretations for Feature Interactions and Transformations -- 2.1 Contextual Decomposition (CD) Importance Scores for General DNNs -- 2.2 Agglomerative Contextual Decomposition (ACD) -- 2.3 Transformation Importance with Applications to Cosmology (TRIM) -- 3 Using Attributions to Improve Models -- 3.1 Penalizing Explanations to Align Neural Networks with Prior Knowledge (CDEP) -- 3.2 Distilling Adaptive Wavelets from Neural Networks with Interpretations -- 4 Real-Data Problems Showcasing Interpretations -- 4.1 Molecular Partner Prediction -- 4.2 Cosmological Parameter Prediction -- 4.3 Improving Skin Cancer Classification via CDEP -- 5 Discussion -- 5.1 Building/Distilling Accurate and Interpretable Models -- 5.2 Making Interpretations Useful -- References -- Beyond the Visual Analysis of Deep Model Saliency -- 1 Introduction -- 2 Saliency-Based XAI in Vision -- 2.1 White-Box Models -- 2.2 Black-Box Models -- 3 XAI for Improved Models: Excitation Dropout -- 4 XAI for Improved Models: Domain Generalization -- 5 XAI for Improved Models: Guided Zoom -- 6 Conclusion -- References -- ECQx: Explainability-Driven Quantization for Low-Bit and Sparse DNNs -- 1 Introduction -- 2 Related Work -- 3 Neural Network Quantization -- 3.1 Entropy-Constrained Quantization -- 4 Explainability-Driven Quantization -- 4.1 Layer-Wise Relevance Propagation.
4.2 eXplainability-Driven Entropy-Constrained Quantization -- 5 Experiments -- 5.1 Experimental Setup -- 5.2 ECQx Results -- 6 Conclusion -- References -- A Whale's Tail - Finding the Right Whale in an Uncertain World -- 1 Introduction -- 2 Related Work -- 3 Humpback Whale Data -- 3.1 Image Data -- 3.2 Expert Annotations -- 4 Methods -- 4.1 Landmark-Based Identification Framework -- 4.2 Uncertainty and Sensitivity Analysis -- 5 Experiments and Results -- 5.1 Experimental Setup -- 5.2 Uncertainty and Sensitivity Analysis of the Landmarks -- 5.3 Heatmapping Results and Comparison with Whale Expert Knowledge -- 5.4 Spatial Uncertainty of Individual Landmarks -- 6 Conclusion and Outlook -- References -- Explainable Artificial Intelligence in Meteorology and Climate Science: Model Fine-Tuning, Calibrating Trust and Learning New Science -- 1 Introduction -- 2 XAI Applications -- 2.1 XAI in Remote Sensing and Weather Forecasting -- 2.2 XAI in Climate Prediction -- 2.3 XAI to Extract Forced Climate Change Signals and Anthropogenic Footprint -- 3 Development of Attribution Benchmarks for Geosciences -- 3.1 Synthetic Framework -- 3.2 Assessment of XAI Methods -- 4 Conclusions -- References -- An Interdisciplinary Approach to Explainable AI -- Varieties of AI Explanations Under the Law. From the GDPR to the AIA, and Beyond -- 1 Introduction -- 1.1 Functional Varieties of AI Explanations -- 1.2 Technical Varieties of AI Explanations -- 1.3 Roadmap of the Paper -- 2 Explainable AI Under Current Law -- 2.1 The GDPR: Rights-Enabling Transparency -- 2.2 Contract and Tort Law: Technical and Protective Transparency -- 2.3 Banking Law: More Technical and Protective Transparency -- 3 Regulatory Proposals at the EU Level: The AIA -- 3.1 AI with Limited Risk: Decision-Enabling Transparency (Art. 52 AIA)? -- 3.2 AI with High Risk: Encompassing Transparency (Art. 13 AIA)?.
3.3 Limitations.
Description based on publisher supplied metadata and other sources.
Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2024. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries.
Electronic books.
Goebel, Randy.
Fong, Ruth.
Moon, Taesup.
Müller, Klaus-Robert.
Samek, Wojciech.
Print version: Holzinger, Andreas. XxAI - Beyond Explainable AI. Cham : Springer International Publishing AG, c2022. ISBN 9783031040825
ProQuest (Firm)
Lecture Notes in Computer Science Series
https://ebookcentral.proquest.com/lib/oeawat/detail.action?docID=6954332
language English
format eBook
author Holzinger, Andreas.
author_facet Holzinger, Andreas.
Goebel, Randy.
Fong, Ruth.
Moon, Taesup.
Müller, Klaus-Robert.
Samek, Wojciech.
author2 Goebel, Randy.
Fong, Ruth.
Moon, Taesup.
Müller, Klaus-Robert.
Samek, Wojciech.
author2_role Contributor
Contributor
Contributor
Contributor
Contributor
author_sort Holzinger, Andreas.
title XxAI - Beyond Explainable AI : International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers.
series Lecture Notes in Computer Science Series ;
series2 Lecture Notes in Computer Science Series ;
publisher Springer International Publishing AG,
publishDate 2022
physical 1 online resource (397 pages)
edition 1st ed.
isbn 9783031040832
9783031040825
callnumber-first Q - Science
callnumber-subject Q - General Science
callnumber-label Q334-342
callnumber-sort Q 3334 3342
genre Electronic books.
genre_facet Electronic books.
url https://ebookcentral.proquest.com/lib/oeawat/detail.action?docID=6954332
illustrated Not Illustrated
dewey-hundreds 000 - Computer science, information & general works
dewey-tens 000 - Computer science, knowledge & systems
dewey-ones 006 - Special computer methods
dewey-full 006.31
dewey-sort 16.31
dewey-raw 006.31
dewey-search 006.31
oclc_num 1311285955
status_str n
carrierType_str_mv cr