Deep Neural Networks and Data for Automated Driving : Robustness, Uncertainty Quantification, and Insights Towards Safety / edited by Tim Fingscheidt, Hanno Gottschalk, Sebastian Houben.
"This open access book brings together the latest developments from industry and research on automated driving and artificial intelligence. Environment perception for highly automated driving heavily employs deep neural networks, facing many challenges. How much data do we need for training and...
Contributor: | |
---|---|
Place / Publishing House: | Cham : Springer Nature, 2022. |
Year of Publication: | 2022 |
Language: | English |
Physical Description: | 1 online resource (xviii, 427 pages) : illustrations |
id |
993603656104498 |
---|---|
ctrlnum |
(CKB)5860000000058569 (NjHacI)995860000000058569 (EXLCZ)995860000000058569 |
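The ctrlnum field packs several identifiers into one string, each tagged with a parenthesized source code (CKB, NjHacI, EXLCZ). A minimal sketch of splitting such a string into (source, number) pairs — the sample value is taken from this record, but the parsing approach is an assumption about how a consumer might read the field, not a documented API:

```python
import re

# Control-number string as it appears in this record.
ctrlnum = "(CKB)5860000000058569 (NjHacI)995860000000058569 (EXLCZ)995860000000058569"

# Each identifier is a parenthesized source code followed by the number itself.
pairs = re.findall(r"\((\w+)\)(\d+)", ctrlnum)
print(pairs)
# → [('CKB', '5860000000058569'), ('NjHacI', '995860000000058569'), ('EXLCZ', '995860000000058569')]
```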
collection |
bib_alma |
record_format |
marc |
spelling |
Deep Neural Networks and Data for Automated Driving : Robustness, Uncertainty Quantification, and Insights Towards Safety / edited by Tim Fingscheidt, Hanno Gottschalk, Sebastian Houben. Deep Neural Networks and Data for Automated Driving Cham : Springer Nature, 2022. 1 online resource (xviii, 427 pages) : illustrations text txt rdacontent computer c rdamedia online resource cr rdacarrier Description based on publisher supplied metadata and other sources. "This open access book brings together the latest developments from industry and research on automated driving and artificial intelligence. Environment perception for highly automated driving heavily employs deep neural networks, facing many challenges. How much data do we need for training and testing? How to use synthetic data to save labeling costs for training? How do we increase robustness and decrease memory usage? For inevitably poor conditions: How do we know that the network is uncertain about its decisions? Can we understand a bit more about what actually happens inside neural networks? This leads to a very practical problem particularly for DNNs employed in automated driving: What are useful validation techniques and how about safety? This book unites the views from both academia and industry, where computer vision and machine learning meet environment perception for highly automated driving. Naturally, aspects of data, robustness, uncertainty quantification, and, last but not least, safety are at the core of it. This book is unique: In its first part, an extended survey of all the relevant aspects is provided. The second part contains the detailed technical elaboration of the various questions mentioned above." Chapter 1. Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety Chapter 2. Does Redundancy in AI Perception Systems Help to Test for Super-Human Automated Driving Performance? Chapter 3. Analysis and Comparison of Datasets by Leveraging Data Distributions in Latent Spaces Chapter 4. Optimized Data Synthesis for DNN Training and Validation by Sensor Artifact Simulation Chapter 5. Improved DNN Robustness by Multi-Task Training With an Auxiliary Self-Supervised Task Chapter 6. Improving Transferability of Generated Universal Adversarial Perturbations for Image Classification and Segmentation Chapter 7. Invertible Neural Networks for Understanding Semantics of Invariances of CNN Representations Chapter 8. Confidence Calibration for Object Detection and Segmentation Chapter 9. Uncertainty Quantification for Object Detection: Output- and Gradient-based Approaches Chapter 10. Detecting and Learning the Unknown in Semantic Segmentation Chapter 11. Evaluating Mixture-of-Expert Architectures for Network Aggregation Chapter 12. Safety Assurance of Machine Learning for Perception Functions Chapter 13. A Variational Deep Synthesis Approach for Perception Validation Chapter 14. The Good and the Bad: Using Neuron Coverage as a DNN Validation Technique Chapter 15. Joint Optimization for DNN Model Compression and Corruption Robustness. Automobiles Automatic control. 3-031-03489-9 Fingscheidt, Tim, editor. Gottschalk, Hanno, editor. Houben, Sebastian, editor. |
language |
English |
format |
eBook |
author2 |
Fingscheidt, Tim, Gottschalk, Hanno, Houben, Sebastian, |
author_facet |
Fingscheidt, Tim, Gottschalk, Hanno, Houben, Sebastian, |
author2_variant |
t f tf h g hg s h sh |
author2_role |
Contributor Contributor Contributor
title |
Deep Neural Networks and Data for Automated Driving : Robustness, Uncertainty Quantification, and Insights Towards Safety / |
spellingShingle |
Deep Neural Networks and Data for Automated Driving : Robustness, Uncertainty Quantification, and Insights Towards Safety / Chapter 1. Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety Chapter 2. Does Redundancy in AI Perception Systems Help to Test for Super-Human Automated Driving Performance? Chapter 3. Analysis and Comparison of Datasets by Leveraging Data Distributions in Latent Spaces Chapter 4. Optimized Data Synthesis for DNN Training and Validation by Sensor Artifact Simulation Chapter 5. Improved DNN Robustness by Multi-Task Training With an Auxiliary Self-Supervised Task Chapter 6. Improving Transferability of Generated Universal Adversarial Perturbations for Image Classification and Segmentation Chapter 7. Invertible Neural Networks for Understanding Semantics of Invariances of CNN Representations Chapter 8. Confidence Calibration for Object Detection and Segmentation Chapter 9. Uncertainty Quantification for Object Detection: Output- and Gradient-based Approaches Chapter 10. Detecting and Learning the Unknown in Semantic Segmentation Chapter 11. Evaluating Mixture-of-Expert Architectures for Network Aggregation Chapter 12. Safety Assurance of Machine Learning for Perception Functions Chapter 13. A Variational Deep Synthesis Approach for Perception Validation Chapter 14. The Good and the Bad: Using Neuron Coverage as a DNN Validation Technique Chapter 15. Joint Optimization for DNN Model Compression and Corruption Robustness. |
title_sub |
Robustness, Uncertainty Quantification, and Insights Towards Safety / |
title_full |
Deep Neural Networks and Data for Automated Driving : Robustness, Uncertainty Quantification, and Insights Towards Safety / edited by Tim Fingscheidt, Hanno Gottschalk, Sebastian Houben. |
title_fullStr |
Deep Neural Networks and Data for Automated Driving : Robustness, Uncertainty Quantification, and Insights Towards Safety / edited by Tim Fingscheidt, Hanno Gottschalk, Sebastian Houben. |
title_full_unstemmed |
Deep Neural Networks and Data for Automated Driving : Robustness, Uncertainty Quantification, and Insights Towards Safety / edited by Tim Fingscheidt, Hanno Gottschalk, Sebastian Houben. |
title_auth |
Deep Neural Networks and Data for Automated Driving : Robustness, Uncertainty Quantification, and Insights Towards Safety / |
title_alt |
Deep Neural Networks and Data for Automated Driving |
title_new |
Deep Neural Networks and Data for Automated Driving : |
title_sort |
deep neural networks and data for automated driving : robustness, uncertainty quantification, and insights towards safety / |
publisher |
Springer Nature, |
publishDate |
2022 |
physical |
1 online resource (xviii, 427 pages) : illustrations |
contents |
Chapter 1. Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety Chapter 2. Does Redundancy in AI Perception Systems Help to Test for Super-Human Automated Driving Performance? Chapter 3. Analysis and Comparison of Datasets by Leveraging Data Distributions in Latent Spaces Chapter 4. Optimized Data Synthesis for DNN Training and Validation by Sensor Artifact Simulation Chapter 5. Improved DNN Robustness by Multi-Task Training With an Auxiliary Self-Supervised Task Chapter 6. Improving Transferability of Generated Universal Adversarial Perturbations for Image Classification and Segmentation Chapter 7. Invertible Neural Networks for Understanding Semantics of Invariances of CNN Representations Chapter 8. Confidence Calibration for Object Detection and Segmentation Chapter 9. Uncertainty Quantification for Object Detection: Output- and Gradient-based Approaches Chapter 10. Detecting and Learning the Unknown in Semantic Segmentation Chapter 11. Evaluating Mixture-of-Expert Architectures for Network Aggregation Chapter 12. Safety Assurance of Machine Learning for Perception Functions Chapter 13. A Variational Deep Synthesis Approach for Perception Validation Chapter 14. The Good and the Bad: Using Neuron Coverage as a DNN Validation Technique Chapter 15. Joint Optimization for DNN Model Compression and Corruption Robustness. |
isbn |
3-031-03489-9 |
callnumber-first |
T - Technology |
callnumber-subject |
TL - Motor Vehicles and Aeronautics |
callnumber-label |
TL152 |
callnumber-sort |
TL 3152.8 D447 42022 |
illustrated |
Illustrated |
dewey-hundreds |
600 - Technology |
dewey-tens |
620 - Engineering |
dewey-ones |
629 - Other branches of engineering |
dewey-full |
629.2 |
dewey-sort |
3629.2 |
dewey-raw |
629.2 |
dewey-search |
629.2 |
work_keys_str_mv |
AT fingscheidttim deepneuralnetworksanddataforautomateddrivingrobustnessuncertaintyquantificationandinsightstowardssafety AT gottschalkhanno deepneuralnetworksanddataforautomateddrivingrobustnessuncertaintyquantificationandinsightstowardssafety AT houbensebastian deepneuralnetworksanddataforautomateddrivingrobustnessuncertaintyquantificationandinsightstowardssafety AT fingscheidttim deepneuralnetworksanddataforautomateddriving AT gottschalkhanno deepneuralnetworksanddataforautomateddriving AT houbensebastian deepneuralnetworksanddataforautomateddriving |
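The work_keys_str_mv values above appear to be built by joining an "AT" (author/title) marker with the author name and the title, each lowercased and stripped of every non-alphanumeric character. A minimal sketch under that assumption — the normalization rule is inferred from this record, and a real indexer would likely also fold accented characters, which this toy version does not:

```python
import re

def work_key(author: str, title: str) -> str:
    """Approximate a work key: 'AT' marker plus the author and title
    lowercased with all non-alphanumeric characters removed."""
    def norm(s: str) -> str:
        return re.sub(r"[^a-z0-9]", "", s.lower())
    return f"AT {norm(author)} {norm(title)}"

key = work_key(
    "Fingscheidt, Tim,",
    "Deep Neural Networks and Data for Automated Driving : "
    "Robustness, Uncertainty Quantification, and Insights Towards Safety /",
)
print(key)
# → AT fingscheidttim deepneuralnetworksanddataforautomateddrivingrobustnessuncertaintyquantificationandinsightstowardssafety
```

Applied to the three editors and the two title forms (full and short), this reproduces the six keys listed in the field.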
status_str |
n |
ids_txt_mv |
(CKB)5860000000058569 (NjHacI)995860000000058569 (EXLCZ)995860000000058569 |
carrierType_str_mv |
cr |
is_hierarchy_title |
Deep Neural Networks and Data for Automated Driving : Robustness, Uncertainty Quantification, and Insights Towards Safety / |
author2_original_writing_str_mv |
noLinkedField noLinkedField noLinkedField |
_version_ |
1796653241984876544 |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>05134nam a2200313 i 4500</leader><controlfield tag="001">993603656104498</controlfield><controlfield tag="005">20230515134029.0</controlfield><controlfield tag="006">m o d </controlfield><controlfield tag="007">cr |||||||||||</controlfield><controlfield tag="008">230515s2022 sz a o 000 0 eng d</controlfield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(CKB)5860000000058569</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(NjHacI)995860000000058569</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(EXLCZ)995860000000058569</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">NjHacI</subfield><subfield code="b">eng</subfield><subfield code="e">rda</subfield><subfield code="c">NjHacl</subfield></datafield><datafield tag="050" ind1=" " ind2="4"><subfield code="a">TL152.8</subfield><subfield code="b">.D447 2022</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">629.2</subfield><subfield code="2">23</subfield></datafield><datafield tag="245" ind1="0" ind2="0"><subfield code="a">Deep Neural Networks and Data for Automated Driving :</subfield><subfield code="b">Robustness, Uncertainty Quantification, and Insights Towards Safety /</subfield><subfield code="c">edited by Tim Fingscheidt, Hanno Gottschalk, Sebastian Houben.</subfield></datafield><datafield tag="246" ind1=" " ind2=" "><subfield code="a">Deep Neural Networks and Data for Automated Driving </subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="a">Cham :</subfield><subfield code="b">Springer Nature,</subfield><subfield code="c">2022.</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">1 online resource (xviii, 427 pages) :</subfield><subfield code="b">illustrations</subfield></datafield><datafield tag="336" ind1=" " ind2=" 
"><subfield code="a">text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">computer</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">online resource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="588" ind1=" " ind2=" "><subfield code="a">Description based on publisher supplied metadata and other sources.</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">"This open access book brings together the latest developments from industry and research on automated driving and artificial intelligence. Environment perception for highly automated driving heavily employs deep neural networks, facing many challenges. How much data do we need for training and testing? How to use synthetic data to save labeling costs for training? How do we increase robustness and decrease memory usage? For inevitably poor conditions: How do we know that the network is uncertain about its decisions? Can we understand a bit more about what actually happens inside neural networks? This leads to a very practical problem particularly for DNNs employed in automated driving: What are useful validation techniques and how about safety? This book unites the views from both academia and industry, where computer vision and machine learning meet environment perception for highly automated driving. Naturally, aspects of data, robustness, uncertainty quantification, and, last but not least, safety are at the core of it. This book is unique: In its first part, an extended survey of all the relevant aspects is provided. The second part contains the detailed technical elaboration of the various questions mentioned above."</subfield></datafield><datafield tag="505" ind1="0" ind2=" "><subfield code="a">Chapter 1. 
Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety Chapter 2. Does Redundancy in AI Perception Systems Help to Test for Super-Human Automated Driving Performance? Chapter 3. Analysis and Comparison of Datasets by Leveraging Data Distributions in Latent Spaces Chapter 4. Optimized Data Synthesis for DNN Training and Validation by Sensor Artifact Simulation Chapter 5. Improved DNN Robustness by Multi-Task Training With an Auxiliary Self-Supervised Task Chapter 6. Improving Transferability of Generated Universal Adversarial Perturbations for Image Classification and Segmentation Chapter 7. Invertible Neural Networks for Understanding Semantics of Invariances of CNN Representations Chapter 8. Confidence Calibration for Object Detection and Segmentation Chapter 9. Uncertainty Quantification for Object Detection: Output- and Gradient-based Approaches Chapter 10. Detecting and Learning the Unknown in Semantic Segmentation Chapter 11. Evaluating Mixture-of-Expert Architectures for Network Aggregation Chapter 12. Safety Assurance of Machine Learning for Perception Functions Chapter 13. A Variational Deep Synthesis Approach for Perception Validation Chapter 14. The Good and the Bad: Using Neuron Coverage as a DNN Validation Technique Chapter 15. Joint Optimization for DNN Model Compression and Corruption Robustness.</subfield></datafield><datafield tag="650" ind1=" " ind2="0"><subfield code="a">Automobiles</subfield><subfield code="x">Automatic control.</subfield></datafield><datafield tag="776" ind1=" " ind2=" "><subfield code="z">3-031-03489-9</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Fingscheidt, Tim,</subfield><subfield code="e">editor.</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Gottschalk, Hanno,</subfield><subfield code="e">editor.</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Houben, Sebastian,</subfield><subfield code="e">editor.</subfield></datafield><datafield tag="906" ind1=" " ind2=" "><subfield code="a">BOOK</subfield></datafield><datafield tag="ADM" ind1=" " ind2=" "><subfield code="b">2023-06-09 11:12:32 Europe/Vienna</subfield><subfield code="f">System</subfield><subfield code="c">marc21</subfield><subfield code="a">2022-07-14 08:50:39 Europe/Vienna</subfield><subfield code="g">false</subfield></datafield><datafield tag="AVE" ind1=" " ind2=" "><subfield code="i">DOAB Directory of Open Access Books</subfield><subfield code="P">DOAB Directory of Open Access Books</subfield><subfield
code="x">https://eu02.alma.exlibrisgroup.com/view/uresolver/43ACC_OEAW/openurl?u.ignore_date_coverage=true&portfolio_pid=5338970690004498&Force_direct=true</subfield><subfield code="Z">5338970690004498</subfield><subfield code="b">Available</subfield><subfield code="8">5338970690004498</subfield></datafield></record></collection> |
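The fullrecord field above holds the raw MARCXML (namespace http://www.loc.gov/MARC21/slim), in which each datafield tag and subfield code carries one piece of the record. A minimal sketch of pulling the title (field 245) and an added entry (field 700) out of such a record with Python's standard xml.etree — the embedded fragment is a trimmed-down copy of the record above, not the full document:

```python
import xml.etree.ElementTree as ET

NS = {"m": "http://www.loc.gov/MARC21/slim"}

# Minimal MARCXML fragment modeled on the fullrecord field above.
MARCXML = """<collection xmlns="http://www.loc.gov/MARC21/slim"><record>
<datafield tag="245" ind1="0" ind2="0">
<subfield code="a">Deep Neural Networks and Data for Automated Driving :</subfield>
<subfield code="b">Robustness, Uncertainty Quantification, and Insights Towards Safety /</subfield>
</datafield>
<datafield tag="700" ind1="1" ind2=" ">
<subfield code="a">Fingscheidt, Tim,</subfield>
<subfield code="e">editor.</subfield>
</datafield>
</record></collection>"""

def subfields(record, tag, code):
    """Return all subfield values for a given datafield tag / subfield code."""
    path = f'm:datafield[@tag="{tag}"]/m:subfield[@code="{code}"]'
    return [sf.text for sf in record.findall(path, NS)]

root = ET.fromstring(MARCXML)
record = root.find("m:record", NS)
title = " ".join(subfields(record, "245", "a") + subfields(record, "245", "b"))
editors = subfields(record, "700", "a")
print(title)
print(editors)
```

The same `subfields` helper generalizes to the contents note (505) or the subject heading (650) by changing the tag argument.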