High-performance and time-predictable embedded computing / editors, Luis Miguel Pinho [and six others].

This open-access book addresses the convergence of the high-performance and embedded computing domains. Drawing on the results of the P-SOCRATES project, it covers manycore platforms, predictable parallel programming with OpenMP, mapping, scheduling and schedulability analysis, a measurement-based timing analysis methodology, an optimized OpenMP tasking runtime, and embedded real-time operating systems.

Bibliographic Details
Part of: River Publishers series in information science and technology
Contributor: Pinho, Luis Miguel, editor.
Place / Publisher: Gistrup, Denmark : River Publishers, 2018.
©2018
Year of Publication:2018
Edition:1st ed.
Language:English
Series:River Publishers series in information science and technology.
Physical Description:1 online resource (236 pages).
id 993570975704498
ctrlnum (CKB)4100000005879328
(MiAaPQ)EBC5704151
(MiAaPQ)EBC30251894
(Au-PeEL)EBL30251894
(oapen)https://directory.doabooks.org/handle/20.500.12854/94310
(MiAaPQ)EBC7245310
(Au-PeEL)EBL7245310
(EXLCZ)994100000005879328
collection bib_alma
record_format marc
spelling High-performance and time-predictable embedded computing / editors, Luis Miguel Pinho [and six others].
1st ed.
Gistrup, Denmark : River Publishers, 2018.
©2018
1 online resource (236 pages).
text txt rdacontent
computer c rdamedia
online resource cr rdacarrier
River Publishers series in information science and technology
This open-access book addresses the convergence of the high-performance and embedded computing domains. Drawing on the results of the P-SOCRATES project, it covers manycore platforms, predictable parallel programming with OpenMP, mapping, scheduling and schedulability analysis, a measurement-based timing analysis methodology, an optimized OpenMP tasking runtime, and embedded real-time operating systems.
English
European Commission
Description based on print version record.
Front Cover -- Half Title page -- RIVER PUBLISHERS SERIES IN INFORMATION SCIENCE AND TECHNOLOGY -- Title page -- Copyright page -- Contents -- Preface -- List of Contributors -- List of Figures -- List of Tables -- List of Abbreviations -- Chapter 1 - Introduction -- 1.1 Introduction -- 1.1.1 The Convergence of High-performance and Embedded Computing Domains -- 1.1.2 Parallelization Challenge -- 1.2 The P-SOCRATES Project -- 1.3 Challenges Addressed in This Book -- 1.3.1 Compiler Analysis of Parallel Programs -- 1.3.2 Predictable Scheduling of Parallel Tasks on Many-core Systems -- 1.3.3 Methodology for Measurement-based Timing Analysis -- 1.3.4 Optimized OpenMP Tasking Runtime System -- 1.3.5 Real-time Operating Systems -- 1.4 The UpScale SDK -- 1.5 Summary -- References -- Chapter 2 - Manycore Platforms -- 2.1 Introduction -- 2.2 Manycore Architectures -- 2.2.1 Xeon Phi -- 2.2.2 Pezy SC -- 2.2.3 NVIDIA Tegra X1 -- 2.2.4 Tilera Tile -- 2.2.5 STMicroelectronics STHORM -- 2.2.6 Epiphany-V -- 2.2.7 TI Keystone II -- 2.2.8 Kalray MPPA-256 -- 2.2.8.1 The I/O subsystem -- 2.2.8.2 The Network-on-Chip (NoC) -- 2.2.8.3 The Host-to-IOS communication protocol -- 2.2.8.4 Internal architecture of the compute clusters -- 2.2.8.5 The shared memory -- 2.3 Summary -- References -- Chapter 3 - Predictable Parallel Programming with OpenMP -- 3.1 Introduction -- 3.1.1 Introduction to Parallel Programming Models -- 3.1.1.1 POSIX threads -- 3.1.1.2 OpenCL™ -- 3.1.1.3 NVIDIA® CUDA -- 3.1.1.4 Intel® Cilk™ Plus -- 3.1.1.5 Intel® TBB -- 3.1.1.6 OpenMP -- 3.2 The OpenMP Parallel Programming Model -- 3.2.1 Introduction and Evolution of OpenMP -- 3.2.2 Parallel Model of OpenMP -- 3.2.2.1 Execution model -- 3.2.2.2 Acceleration model -- 3.2.2.3 Memory model -- 3.2.3 An OpenMP Example -- 3.3 Timing Properties of the OpenMP Tasking Model.
3.3.1 Sporadic DAG Scheduling Model of Parallel Applications -- 3.3.2 Understanding the OpenMP Tasking Model -- 3.3.3 OpenMP and Timing Predictability -- 3.3.3.1 Extracting the DAG of an OpenMP program -- 3.3.3.2 WCET analysis is applied to tasks and task parts -- 3.3.3.3 DAG-based scheduling must not violate the TSCs -- 3.4 Extracting the Timing Information of an OpenMP Program -- 3.4.1 Parallel Structure Stage -- 3.4.1.1 Parallel control flow analysis -- 3.4.1.2 Induction variables analysis -- 3.4.1.3 Reaching definitions and range analysis -- 3.4.1.4 Putting all together: The wave-front example -- 3.4.2 Task Expansion Stage -- 3.4.2.1 Control flow expansion and synchronization predicate resolution -- 3.4.2.2 tid: A unique task instance identifier -- 3.4.2.3 Missing information when deriving the DAG -- 3.4.3 Compiler Complexity -- 3.5 Summary -- References -- Chapter 4 - Mapping, Scheduling, and Schedulability Analysis -- 4.1 Introduction -- 4.2 System Model -- 4.3 Partitioned Scheduler -- 4.3.1 The Optimality of EDF on Preemptive Uniprocessors -- 4.3.2 FP-scheduling Algorithms -- 4.3.3 Limited Preemption Scheduling -- 4.3.4 Limited Preemption Schedulability Analysis -- 4.4 Global Scheduler with Migration Support -- 4.4.1 Migration-based Scheduler -- 4.4.2 Putting All Together -- 4.4.3 Implementation of a Limited Preemption Scheduler -- 4.5 Overall Schedulability Analysis -- 4.5.1 Model Formalization -- 4.5.2 Critical Interference of cp-tasks -- 4.5.3 Response Time Analysis -- 4.5.3.1 Inter-task interference -- 4.5.3.2 Intra-task interference -- 4.5.3.3 Computation of cp-task parameters -- 4.5.4 Non-conditional DAG Tasks -- 4.5.5 Series-Parallel Conditional DAG Tasks -- 4.5.6 Schedulability Condition -- 4.6 Specializing Analysis for Limited Pre-emption Global/Dynamic Approach -- 4.6.1 Blocking Impact of the Largest NPRs (LP-max).
4.6.2 Blocking Impact of the Largest Parallel NPRs (LP-ILP) -- 4.6.2.1 LP worst-case workload of a task executing on c cores -- 4.6.2.2 Overall LP worst-case workload -- 4.6.2.3 Lower-priority interference -- 4.6.3 Computation of Response Time Factors of LP-ILP -- 4.6.3.1 Worst-case workload of τi executing on c cores -- 4.6.3.2 Overall LP worst-case workload of lp(k) per execution scenario sl -- 4.6.4 Complexity -- 4.7 Specializing Analysis for the Partitioned/Static Approach -- 4.7.1 ILP Formulation -- 4.7.1.1 Tied tasks -- 4.7.1.2 Untied tasks -- 4.7.1.3 Complexity -- 4.7.2 Heuristic Approaches -- 4.7.2.1 Tied tasks -- 4.7.2.2 Untied tasks -- 4.7.3 Integrating Interference from Additional RT Tasks -- 4.7.4 Critical Instant -- 4.7.5 Response-time Upper Bound -- 4.8 Scheduling for I/O Cores -- 4.9 Summary -- References -- Chapter 5 - Timing Analysis Methodology -- 5.1 Introduction -- 5.1.1 Static WCET Analysis Techniques -- 5.1.2 Measurement-based WCET Analysis Techniques -- 5.1.3 Hybrid WCET Techniques -- 5.1.4 Measurement-based Probabilistic Techniques -- 5.2 Our Choice of Methodology for WCET Estimation -- 5.2.1 Why Not Use Static Approaches? -- 5.2.2 Why Use Measurement-based Techniques? -- 5.3 Description of Our Timing Analysis Methodology -- 5.3.1 Intrinsic vs. Extrinsic Execution Times -- 5.3.2 The Concept of Safety Margins -- 5.3.3 Our Proposed Timing Methodology at a Glance -- 5.3.4 Overview of the Application Structure -- 5.3.5 Automatic Insertion and Removal of the Trace-points -- 5.3.5.1 How to insert the trace-points -- 5.3.5.2 How to remove the trace-points -- 5.3.6 Extract the Intrinsic Execution Time: The Isolation Mode -- 5.3.7 Extract the Extrinsic Execution Time: The Contention Mode -- 5.3.8 Extract the Execution Time in Real Situation: The Deployment Mode -- 5.3.9 Derive WCET Estimates -- 5.4 Summary -- References.
Chapter 6 - OpenMP Runtime -- 6.1 Introduction -- 6.2 Offloading Library Design -- 6.3 Tasking Runtime -- 6.3.1 Task Dependency Management -- 6.4 Experimental Results -- 6.4.1 Offloading Library -- 6.4.2 Tasking Runtime -- 6.4.2.1 Applications with a linear generation pattern -- 6.4.2.2 Applications with a recursive generation pattern -- 6.4.2.3 Applications with mixed patterns -- 6.4.2.4 Impact of cutoff on LINEAR and RECURSIVE applications -- 6.4.2.5 Real applications -- 6.4.3 Evaluation of the Task Dependency Mechanism -- 6.4.3.1 Performance speedup and memory usage -- 6.4.3.2 The task dependency mechanism on the MPPA -- 6.5 Summary -- References -- Chapter 7 - Embedded Operating Systems -- 7.1 Introduction -- 7.2 State of The Art -- 7.2.1 Real-time Support in Linux -- 7.2.1.1 Hard real-time support -- 7.2.1.2 Latency reduction -- 7.2.1.3 Real-time CPU scheduling -- 7.2.2 Survey of Existing Embedded RTOSs -- 7.2.3 Classification of Embedded RTOSs -- 7.3 Requirements for The Choice of The Run Time System -- 7.3.1 Programming Model -- 7.3.2 Preemption Support -- 7.3.3 Migration Support -- 7.3.4 Scheduling Characteristics -- 7.3.5 Timing Analysis -- 7.4 RTOS Selection -- 7.4.1 Host Processor -- 7.4.2 Manycore Processor -- 7.5 Operating System Support -- 7.5.1 Linux -- 7.5.2 ERIKA Enterprise Support -- 7.5.2.1 Exokernel support -- 7.5.2.2 Single-ELF multicore ERIKA Enterprise -- 7.5.2.3 Support for limited preemption, job, and global scheduling -- 7.5.2.4 New ERIKA Enterprise primitives -- 7.5.2.5 New data structures -- 7.5.2.6 Dynamic task creation -- 7.5.2.7 IRQ handlers as tasks -- 7.5.2.8 File hierarchy -- 7.5.2.9 Early performance estimation -- 7.6 Summary -- References -- Index -- About the Editors -- Back Cover.
Includes bibliographical references and index.
Embedded computer systems.
High performance computing.
87-93609-69-8
Pinho, Luis Miguel, editor.
River Publishers series in information science and technology.
language English
format eBook
author2 Pinho, Luis Miguel,
author_facet Pinho, Luis Miguel,
author2_variant l m p lm lmp
author2_role Contributor
title High-performance and time-predictable embedded computing /
title_full High-performance and time-predictable embedded computing / editors, Luis Miguel Pinho [and six others].
title_auth High-performance and time-predictable embedded computing /
title_new High-performance and time-predictable embedded computing /
title_sort high-performance and time-predictable embedded computing /
series River Publishers series in information science and technology
series2 River Publishers series in information science and technology
publisher River Publishers,
publishDate 2018
physical 1 online resource (236 pages).
edition 1st ed.
isbn 1-00-333841-0
1-000-79156-4
1-003-33841-0
1-000-79468-7
87-93609-62-0
87-93609-69-8
callnumber-first T - Technology
callnumber-subject TK - Electrical and Nuclear Engineering
callnumber-label TK7895
callnumber-sort TK 47895 E42 H544 42018
illustrated Illustrated
dewey-hundreds 000 - Computer science, information & general works
dewey-tens 000 - Computer science, knowledge & systems
dewey-ones 004 - Data processing & computer science
dewey-full 004.16
dewey-sort 14.16
dewey-raw 004.16
dewey-search 004.16
work_keys_str_mv AT pinholuismiguel highperformanceandtimepredictableembeddedcomputing
status_str n
carrierType_str_mv cr
hierarchy_parent_title River Publishers series in information science and technology
is_hierarchy_title High-performance and time-predictable embedded computing /
container_title River Publishers series in information science and technology
author2_original_writing_str_mv noLinkedField
_version_ 1803511308319981568
fullrecord <?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01697nam a22003733i 4500</leader><controlfield tag="001">993570975704498</controlfield><controlfield tag="005">20231110172225.0</controlfield><controlfield tag="006">m o d | </controlfield><controlfield tag="007">cr cnu||||||||</controlfield><controlfield tag="008">231110s2018 dk ao ob 001 0 eng d</controlfield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">1-00-333841-0</subfield></datafield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">1-000-79156-4</subfield></datafield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">1-003-33841-0</subfield></datafield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">1-000-79468-7</subfield></datafield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">87-93609-62-0</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(CKB)4100000005879328</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(MiAaPQ)EBC5704151</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(MiAaPQ)EBC30251894</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(Au-PeEL)EBL30251894</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(oapen)https://directory.doabooks.org/handle/20.500.12854/94310</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(MiAaPQ)EBC7245310</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(Au-PeEL)EBL7245310</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(EXLCZ)994100000005879328</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">MiAaPQ</subfield><subfield code="b">eng</subfield><subfield code="e">rda</subfield><subfield code="e">pn</subfield><subfield code="c">MiAaPQ</subfield><subfield 
code="d">MiAaPQ</subfield></datafield><datafield tag="041" ind1="0" ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="050" ind1=" " ind2="4"><subfield code="a">TK7895.E42</subfield><subfield code="b">.H544 2018</subfield></datafield><datafield tag="082" ind1="0" ind2=" "><subfield code="a">004.16</subfield><subfield code="2">23</subfield></datafield><datafield tag="245" ind1="0" ind2="0"><subfield code="a">High-performance and time-predictable embedded computing /</subfield><subfield code="c">editors, Luis Miguel Pinho [and six others].</subfield></datafield><datafield tag="250" ind1=" " ind2=" "><subfield code="a">1st ed.</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="a">Gistrup, Denmark :</subfield><subfield code="b">River Publishers,</subfield><subfield code="c">2018.</subfield></datafield><datafield tag="264" ind1=" " ind2="4"><subfield code="c">©2018</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">1 online resource (236 pages).</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">computer</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">online resource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="490" ind1="1" ind2=" "><subfield code="a">River Publishers series in information science and technology</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Journal of Cyber Security and Mobility provides an in-depth and holistic view of security and solutions from practical to theoretical aspects. 
It covers topics that are equally valuable for practitioners as well as those new in the field.</subfield></datafield><datafield tag="546" ind1=" " ind2=" "><subfield code="a">English</subfield></datafield><datafield tag="536" ind1=" " ind2=" "><subfield code="a">European Commission</subfield></datafield><datafield tag="588" ind1=" " ind2=" "><subfield code="a">Description based on print version record.</subfield></datafield><datafield tag="505" ind1="0" ind2=" "><subfield code="a">Front Cover -- Half Title page -- RIVER PUBLISHERS SERIES IN INFORMATIONSCIENCE AND TECHNOLOGY -- Title page -- Copyright page -- Contents -- Preface -- List of Contributors -- List of Figures -- List of Tables -- List of Abbreviations -- Chapter 1 - Introduction -- 1.1 Introduction -- 1.1.1 The Convergence of High-performance and Embedded Computing Domains -- 1.1.2 Parallelization Challenge -- 1.2 The P-SOCRATES Project -- 1.3 Challenges Addressed in This Book -- 1.3.1 Compiler Analysis of Parallel Programs -- 1.3.2 Predictable Scheduling of Parallel Tasks on Many-core Systems -- 1.3.3 Methodology for Measurement-based Timing Analysis -- 1.3.4 Optimized OpenMP Tasking Runtime System -- 1.3.5 Real-time Operating Systems -- 1.4 The UpScale SDK -- 1.5 Summary -- References -- Chapter 2 - Manycore Platforms -- 2.1 Introduction -- 2.2 Manycore Architectures -- 2.2.1 Xeon Phi -- 2.2.2 Pezy SC -- 2.2.3 NVIDIA Tegra X1 -- 2.2.4 Tilera Tile -- 2.2.5 STMicroelectronics STHORM -- 2.2.6 Epiphany-V -- 2.2.7 TI Keystone II -- 2.2.8 Kalray MPPA-256 -- 2.2.8.1 The I/O subsystem -- 2.2.8.2 The Network-on-Chip (NoC) -- 2.2.8.3 The Host-to-IOS communication protocol -- 2.2.8.4 Internal architecture of the compute clusters -- 2.2.8.5 The shared memory -- 2.3 Summary -- References -- Chapter 3 - Predictable Parallel Programming with OpenMP -- 3.1 Introduction -- 3.1.1 Introduction to Parallel Programming Models -- 3.1.1.1 POSIX threads -- 3.1.1.2 OpenCLTM -- 3.1.1.3 NVIDIA R CUDA -- 3.1.1.4 Intel R CilkTM 
Plus -- 3.1.1.5 Intel® TBB -- 3.1.1.6 OpenMP -- 3.2 The OpenMP Parallel Programming Model -- 3.2.1 Introduction and Evolution of OpenMP -- 3.2.2 Parallel Model of OpenMP -- 3.2.2.1 Execution model -- 3.2.2.2 Acceleration model -- 3.2.2.3 Memory model -- 3.2.3 An OpenMP Example -- 3.3 Timing Properties of the OpenMP Tasking Model.</subfield></datafield><datafield tag="505" ind1="8" ind2=" "><subfield code="a">3.3.1 Sporadic DAG Scheduling Model of Parallel Applications -- 3.3.2 Understanding the OpenMP Tasking Model -- 3.3.3 OpenMP and Timing Predictability -- 3.3.3.1 Extracting the DAG of an OpenMP program -- 3.3.3.2 WCET analysis is applied to tasks and task parts -- 3.3.3.3 DAG-based scheduling must not violate the TSCs -- 3.4 Extracting the Timing Information of an OpenMP Program -- 3.4.1 Parallel Structure Stage -- 3.4.1.1 Parallel control flow analysis -- 3.4.1.2 Induction variables analysis -- 3.4.1.3 Reaching definitions and range analysis -- 3.4.1.4 Putting all together: The wave-front example -- 3.4.2 Task Expansion Stage -- 3.4.2.1 Control flow expansion and synchronization predicate resolution -- 3.4.2.2 tid: A unique task instance identifier -- 3.4.2.3 Missing information when deriving the DAG -- 3.4.3 Compiler Complexity -- 3.5 Summary -- References -- Chapter 4 - Mapping, Scheduling, and Schedulability Analysis -- 4.1 Introduction -- 4.2 System Model -- 4.3 Partitioned Scheduler -- 4.3.1 The Optimality of EDF on Preemptive Uniprocessors -- 4.3.2 FP-scheduling Algorithms -- 4.3.3 Limited Preemption Scheduling -- 4.3.4 Limited Preemption Schedulability Analysis -- 4.4 Global Scheduler with Migration Support -- 4.4.1 Migration-based Scheduler -- 4.4.2 Putting All Together -- 4.4.3 Implementation of a Limited Preemption Scheduler -- 4.5 Overall Schedulability Analysis -- 4.5.1 Model Formalization -- 4.5.2 Critical Interference of cp-tasks -- 4.5.3 Response Time Analysis -- 4.5.3.1 Inter-task interference -- 4.5.3.2 Intra-task interference -- 4.5.3.3 
Computation of cp-task parameters -- 4.5.4 Non-conditional DAG Tasks -- 4.5.5 Series-Parallel Conditional DAG Tasks -- 4.5.6 Schedulability Condition -- 4.6 Specializing Analysis for Limited Pre-emption Global/Dynamic Approach -- 4.6.1 Blocking Impact of the Largest NPRs (LP-max).</subfield></datafield><datafield tag="505" ind1="8" ind2=" "><subfield code="a">4.6.2 Blocking Impact of the Largest Parallel NPRs (LP-ILP) -- 4.6.2.1 LP worst-case workload of a task executing on c cores -- 4.6.2.2 Overall LP worst-case workload -- 4.6.2.3 Lower-priority interference -- 4.6.3 Computation of Response Time Factors of LP-ILP -- 4.6.3.1 Worst-case workload of τi executing on c cores: i[c] -- 4.6.3.2 Overall LP worst-case workload of lp(k) per execution scenario sl: ˆk[sl] -- 4.6.4 Complexity -- 4.7 Specializing Analysis for the Partitioned/Static Approach -- 4.7.1 ILP Formulation -- 4.7.1.1 Tied tasks -- 4.7.1.2 Untied tasks -- 4.7.1.3 Complexity -- 4.7.2 Heuristic Approaches -- 4.7.2.1 Tied tasks -- 4.7.2.2 Untied tasks -- 4.7.3 Integrating Interference from Additional RT Tasks -- 4.7.4 Critical Instant -- 4.7.5 Response-time Upper Bound -- 4.8 Scheduling for I/O Cores -- 4.9 Summary -- References -- Chapter 5 - Timing Analysis Methodology -- 5.1 Introduction -- 5.1.1 Static WCET Analysis Techniques -- 5.1.2 Measurement-based WCET Analysis Techniques -- 5.1.3 Hybrid WCET Techniques -- 5.1.4 Measurement-based Probabilistic Techniques -- 5.2 Our Choice of Methodology for WCET Estimation -- 5.2.1 Why Not Use Static Approaches? -- 5.2.2 Why Use Measurement-based Techniques? -- 5.3 Description of Our Timing Analysis Methodology -- 5.3.1 Intrinsic vs. 
Extrinsic Execution Times -- 5.3.2 The Concept of Safety Margins -- 5.3.3 Our Proposed Timing Methodology at a Glance -- 5.3.4 Overview of the Application Structure -- 5.3.5 Automatic Insertion and Removal of the Trace-points -- 5.3.5.1 How to insert the trace-points -- 5.3.5.2 How to remove the trace-points -- 5.3.6 Extract the Intrinsic Execution Time: The Isolation Mode -- 5.3.7 Extract the Extrinsic Execution Time: The Contention Mode -- 5.3.8 Extract the Execution Time in Real Situation: The Deployment Mode -- 5.3.9 Derive WCET Estimates -- 5.4 Summary -- References.</subfield></datafield><datafield tag="505" ind1="8" ind2=" "><subfield code="a">Chapter 6 - OpenMP Runtime -- 6.1 Introduction -- 6.2 Offloading Library Design -- 6.3 Tasking Runtime -- 6.3.1 Task Dependency Management -- 6.4 Experimental Results -- 6.4.1 Offloading Library -- 6.4.2 Tasking Runtime -- 6.4.2.1 Applications with a linear generation pattern -- 6.4.2.2 Applications with a recursive generation pattern -- 6.4.2.3 Applications with mixed patterns -- 6.4.2.4 Impact of cutoff on LINEAR and RECURSIVE applications -- 6.4.2.5 Real applications -- 6.4.3 Evaluation of the Task Dependency Mechanism -- 6.4.3.1 Performance speedup and memory usage -- 6.4.3.2 The task dependency mechanism on the MPPA -- 6.5 Summary -- References -- Chapter 7 - Embedded Operating Systems -- 7.1 Introduction -- 7.2 State of The Art -- 7.2.1 Real-time Support in Linux -- 7.2.1.1 Hard real-time support -- 7.2.1.2 Latency reduction -- 7.2.1.3 Real-time CPU scheduling -- 7.2.2 Survey of Existing Embedded RTOSs -- 7.2.3 Classification of Embedded RTOSs -- 7.3 Requirements for The Choice of The Run Time System -- 7.3.1 Programming Model -- 7.3.2 Preemption Support -- 7.3.3 Migration Support -- 7.3.4 Scheduling Characteristics -- 7.3.5 Timing Analysis -- 7.4 RTOS Selection -- 7.4.1 Host Processor -- 7.4.2 Manycore Processor -- 7.5 Operating System Support -- 7.5.1 Linux -- 7.5.2 ERIKA Enterprise Support -- 7.5.2.1 Exokernel 
support -- 7.5.2.2 Single-ELF multicore ERIKA Enterprise -- 7.5.2.3 Support for limited preemption, job, and global scheduling -- 7.5.2.4 New ERIKA Enterprise primitives -- 7.5.2.5 New data structures -- 7.5.2.6 Dynamic task creation -- 7.5.2.7 IRQ handlers as tasks -- 7.5.2.8 File hierarchy -- 7.5.2.9 Early performance estimation -- 7.6 Summary -- References -- Index -- About the Editors -- Back Cover.</subfield></datafield><datafield tag="504" ind1=" " ind2=" "><subfield code="a">Includes bibliographical references and index.</subfield></datafield><datafield tag="650" ind1=" " ind2="0"><subfield code="a">Embedded computer systems.</subfield></datafield><datafield tag="650" ind1=" " ind2="0"><subfield code="a">High performance computing.</subfield></datafield><datafield tag="776" ind1=" " ind2=" "><subfield code="z">87-93609-69-8</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Pinho, Luis Miguel,</subfield><subfield code="e">editor.</subfield></datafield><datafield tag="830" ind1=" " ind2="0"><subfield code="a">River Publishers series in information science and technology.</subfield></datafield><datafield tag="906" ind1=" " ind2=" "><subfield code="a">BOOK</subfield></datafield><datafield tag="ADM" ind1=" " ind2=" "><subfield code="b">2024-07-03 00:37:30 Europe/Vienna</subfield><subfield code="f">system</subfield><subfield code="c">marc21</subfield><subfield code="a">2018-09-01 19:45:54 Europe/Vienna</subfield><subfield code="g">false</subfield></datafield><datafield tag="AVE" ind1=" " ind2=" "><subfield code="i">DOAB Directory of Open Access Books</subfield><subfield code="P">DOAB Directory of Open Access Books</subfield><subfield code="x">https://eu02.alma.exlibrisgroup.com/view/uresolver/43ACC_OEAW/openurl?u.ignore_date_coverage=true&amp;portfolio_pid=5341442520004498&amp;Force_direct=true</subfield><subfield code="Z">5341442520004498</subfield><subfield code="b">Available</subfield><subfield 
code="8">5341442520004498</subfield></datafield></record></collection>