Optimizing HPC Applications with Intel Cluster Tools : Hunting Petaflops.

Bibliographic Details
Contributor:
Place / Publishing House: Berkeley, CA : Apress L. P., 2014.
©2014.
Year of Publication:2014
Edition:1st ed.
Language:English
Online Access:
Physical Description:1 online resource (291 pages)
id 5006422827
ctrlnum (MiAaPQ)5006422827
(Au-PeEL)EBL6422827
(OCoLC)1231606970
collection bib_alma
record_format marc
spelling Supalov, Alexander.
Optimizing HPC Applications with Intel Cluster Tools : Hunting Petaflops.
1st ed.
Berkeley, CA : Apress L. P., 2014.
©2014.
1 online resource (291 pages)
text txt rdacontent
computer c rdamedia
online resource cr rdacarrier
Intro -- Contents at a Glance -- Contents -- About the Authors -- About the Technical Reviewers -- Acknowledgments -- Foreword -- Introduction -- Chapter 1: No Time to Read This Book? -- Using Intel MPI Library -- Using Intel Composer XE -- Tuning Intel MPI Library -- Gather Built-in Statistics -- Optimize Process Placement -- Optimize Thread Placement -- Tuning Intel Composer XE -- Analyze Optimization and Vectorization Reports -- Use Interprocedural Optimization -- Summary -- References -- Chapter 2: Overview of Platform Architectures -- Performance Metrics and Targets -- Latency, Throughput, Energy, and Power -- Peak Performance as the Ultimate Limit -- Scalability and Maximum Parallel Speedup -- Bottlenecks and a Bit of Queuing Theory -- Roofline Model -- Performance Features of Computer Architectures -- Increasing Single-Threaded Performance: Where You Can and Cannot Help -- Process More Data with SIMD Parallelism -- Distributed and Shared Memory Systems -- Use More Independent Threads on the Same Node -- Don't Limit Yourself to a Single Server -- HPC Hardware Architecture Overview -- A Multicore Workstation or a Server Compute Node -- Coprocessor for Highly Parallel Applications -- Group of Similar Nodes Form an HPC Cluster -- Other Important Components of HPC Systems -- Summary -- References -- Chapter 3: Top-Down Software Optimization -- The Three Levels and Their Impact on Performance -- System Level -- Application Level -- Working Against the Memory Wall -- The Magic of Vectors -- Distributed Memory Parallelization -- Shared Memory Parallelization -- Other Existing Approaches and Methods -- Microarchitecture Level -- Addressing Pipelines and Execution -- Closed-Loop Methodology -- Workload, Application, and Baseline -- Iterating the Optimization Process -- Summary -- References -- Chapter 4: Addressing System Bottlenecks.
Classifying System-Level Bottlenecks -- Identifying Issues Related to System Condition -- Characterizing Problems Caused by System Configuration -- Understanding System-Level Performance Limits -- Checking General Compute Subsystem Performance -- Testing Memory Subsystem Performance -- Testing I/O Subsystem Performance -- Characterizing Application System-Level Issues -- Selecting Performance Characterization Tools -- Monitoring the I/O Utilization -- Analyzing Memory Bandwidth -- Summary -- References -- Chapter 5: Addressing Application Bottlenecks: Distributed Memory -- Algorithm for Optimizing MPI Performance -- Comprehending the Underlying MPI Performance -- Recalling Some Benchmarking Basics -- Gauging Default Intranode Communication Performance -- Gauging Default Internode Communication Performance -- Discovering Default Process Layout and Pinning Details -- Gauging Physical Core Performance -- Doing Initial Performance Analysis -- Is It Worth the Trouble? -- Example 1: Initial HPL Performance Investigation -- Getting an Overview of Scalability and Performance -- Learning Application Behavior -- Example 2: MiniFE Performance Investigation -- Choosing Representative Workload(s) -- Example 2 (cont.): MiniFE Performance Investigation -- Balancing Process and Thread Parallelism -- Example 2 (cont.): MiniFE Performance Investigation -- Doing a Scalability Review -- Example 2 (cont.): MiniFE Performance Investigation -- Analyzing the Details of the Application Behavior -- Example 2 (cont.): MiniFE Performance Investigation -- Choosing the Optimization Objective -- Detecting Load Imbalance -- Example 2 (cont.): MiniFE Performance Investigation -- Dealing with Load Imbalance -- Classifying Load Imbalance -- Addressing Load Imbalance -- Example 2 (cont.): MiniFE Performance Investigation -- Example 3: MiniMD Performance Investigation.
Optimizing MPI Performance -- Classifying the MPI Performance Issues -- Addressing MPI Performance Issues -- Mapping Application onto the Platform -- Understanding Communication Paths -- Selecting Proper Communication Fabrics -- Using Scalable Datagrams -- Specifying a Network Provider -- Using IP over IB -- Controlling the Fabric Fallback Mechanism -- Using Multirail Capabilities -- Detecting and Classifying Improper Process Layout and Pinning Issues -- Controlling Process Layout -- Controlling the Global Process Layout -- Controlling the Detailed Process Layout -- Setting the Environment Variables at All Levels -- Controlling the Process Pinning -- Controlling Memory and Network Affinity -- Example 4: MiniMD Performance Investigation on Xeon Phi -- Example 5: MiniGhost Performance Investigation -- Tuning the Intel MPI Library -- Tuning Intel MPI for the Platform -- Tuning Point-to-Point Settings -- Adjusting the Eager and Rendezvous Protocol Thresholds -- Changing DAPL and DAPL UD Eager Protocol Threshold -- Bypassing Shared Memory for Intranode Communication -- Bypassing the Cache for Intranode Communication -- Choosing the Best Collective Algorithms -- Tuning Intel MPI Library for the Application -- Using Magical Tips and Tricks -- Disabling the Dynamic Connection Mode -- Applying the Wait Mode to Oversubscribed Jobs -- Fine-Tuning the Message-Passing Progress Engine -- Reducing the Pre-reserved DAPL Memory Size -- What Else? -- Example 5 (cont.): MiniGhost Performance Investigation -- Optimizing Application for Intel MPI -- Avoiding MPI_ANY_SOURCE -- Avoiding Superfluous Synchronization -- Using Derived Datatypes -- Using Collective Operations -- Betting on the Computation/Communication Overlap -- Replacing Blocking Collective Operations by MPI-3 Nonblocking Ones -- Using Accelerated MPI File I/O.
Example 5 (cont.): MiniGhost Performance Investigation -- Using Advanced Analysis Techniques -- Automatically Checking MPI Program Correctness -- Comparing Application Traces -- Instrumenting Application Code -- Correlating MPI and Hardware Events -- Collecting and Analyzing Hardware Counter Information in ITAC -- Collecting and Analyzing Hardware Counter Information in VTune -- Summary -- References -- Chapter 6: Addressing Application Bottlenecks: Shared Memory -- Profiling Your Application -- Using VTune Amplifier XE for Hotspots Profiling -- Hotspots for the HPCG Benchmark -- Compiler-Assisted Loop/Function Profiling -- Sequential Code and Detecting Load Imbalances -- Thread Synchronization and Locking -- Dealing with Memory Locality and NUMA Effects -- Thread and Process Pinning -- Controlling OpenMP Thread Placement -- Thread Placement in Hybrid Applications -- Summary -- References -- Chapter 7: Addressing Application Bottlenecks: Microarchitecture -- Overview of a Modern Processor Pipeline -- Pipelined Execution -- Data Conflicts -- Control Conflicts -- Structural Conflicts -- Out-of-order vs. In-order Execution -- Superscalar Pipelines -- SIMD Execution -- Speculative Execution: Branch Prediction -- Memory Subsystem -- Putting It All Together: A Final Look at the Sandy Bridge Pipeline -- A Top-down Method for Categorizing the Pipeline Performance -- Intel Composer XE Usage for Microarchitecture Optimizations -- Basic Compiler Usage and Optimization -- Using Optimization and Vectorization Reports to Read the Compiler's Mind -- Optimizing for Vectorization -- The AVX Instruction Set -- Why Doesn't My Code Vectorize in the First Place? -- Data Dependences -- Data Aliasing -- Array Notations -- Vectorization Directives -- ivdep -- vector -- simd -- Understanding AVX: Intrinsic Programming -- What Are Intrinsics?.
First Steps: Loading and Storing -- Arithmetic -- Data Rearrangement -- Dealing with Disambiguation -- Dealing with Branches -- __builtin_expect -- Profile-Guided Optimization -- Pragmas for Unrolling Loops and Inlining -- unroll/nounroll -- unroll_and_jam/nounroll_and_jam -- inline, noinline, forceinline -- Specialized Routines: How to Exploit the Branch Prediction for Maximal Performance -- When Optimization Leads to Wrong Results -- Using a Standard Library Method -- Using a Manual Implementation in C -- Vectorization with Directives -- Analyzing Pipeline Performance with Intel VTune Amplifier XE -- Summary -- References -- Chapter 8: Application Design Considerations -- Abstraction and Generalization of the Platform Architecture -- Types of Abstractions -- Levels of Abstraction and Complexities -- Raw Hardware vs. Virtualized Hardware in the Cloud -- Questions about Application Design -- Designing for Performance and Scaling -- Designing for Flexibility and Performance Portability -- Data Layout -- Structured Approach to Express Parallelism -- Understanding Bounds and Projecting Bottlenecks -- Data Storage or Transfer vs. Recalculation -- Total Productivity Assessment -- Summary -- References -- Index.
Description based on publisher supplied metadata and other sources.
Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2024. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries.
Electronic books.
Semin, Andrey.
Dahnken, Christopher.
Klemm, Michael.
Print version: Supalov, Alexander Optimizing HPC Applications with Intel Cluster Tools Berkeley, CA : Apress L. P., c2014 9781430264965
ProQuest (Firm)
https://ebookcentral.proquest.com/lib/oeawat/detail.action?docID=6422827 Click to View
language English
format eBook
author Supalov, Alexander.
spellingShingle Supalov, Alexander.
Optimizing HPC Applications with Intel Cluster Tools : Hunting Petaflops.
Intro -- Contents at a Glance -- Contents -- About the Authors -- About the Technical Reviewers -- Acknowledgments -- Foreword -- Introduction -- Chapter 1: No Time to Read This Book? -- Using Intel MPI Library -- Using Intel Composer XE -- Tuning Intel MPI Library -- Gather Built-in Statistics -- Optimize Process Placement -- Optimize Thread Placement -- Tuning Intel Composer XE -- Analyze Optimization and Vectorization Reports -- Use Interprocedural Optimization -- Summary -- References -- Chapter 2: Overview of Platform Architectures -- Performance Metrics and Targets -- Latency, Throughput, Energy, and Power -- Peak Performance as the Ultimate Limit -- Scalability and Maximum Parallel Speedup -- Bottlenecks and a Bit of Queuing Theory -- Roofline Model -- Performance Features of Computer Architectures -- Increasing Single-Threaded Performance: Where You Can and Cannot Help -- Process More Data with SIMD Parallelism -- Distributed and Shared Memory Systems -- Use More Independent Threads on the Same Node -- Don't Limit Yourself to a Single Server -- HPC Hardware Architecture Overview -- A Multicore Workstation or a Server Compute Node -- Coprocessor for Highly Parallel Applications -- Group of Similar Nodes Form an HPC Cluster -- Other Important Components of HPC Systems -- Summary -- References -- Chapter 3: Top-Down Software Optimization -- The Three Levels and Their Impact on Performance -- System Level -- Application Level -- Working Against the Memory Wall -- The Magic of Vectors -- Distributed Memory Parallelization -- Shared Memory Parallelization -- Other Existing Approaches and Methods -- Microarchitecture Level -- Addressing Pipelines and Execution -- Closed-Loop Methodology -- Workload, Application, and Baseline -- Iterating the Optimization Process -- Summary -- References -- Chapter 4: Addressing System Bottlenecks.
Classifying System-Level Bottlenecks -- Identifying Issues Related to System Condition -- Characterizing Problems Caused by System Configuration -- Understanding System-Level Performance Limits -- Checking General Compute Subsystem Performance -- Testing Memory Subsystem Performance -- Testing I/O Subsystem Performance -- Characterizing Application System-Level Issues -- Selecting Performance Characterization Tools -- Monitoring the I/O Utilization -- Analyzing Memory Bandwidth -- Summary -- References -- Chapter 5: Addressing Application Bottlenecks: Distributed Memory -- Algorithm for Optimizing MPI Performance -- Comprehending the Underlying MPI Performance -- Recalling Some Benchmarking Basics -- Gauging Default Intranode Communication Performance -- Gauging Default Internode Communication Performance -- Discovering Default Process Layout and Pinning Details -- Gauging Physical Core Performance -- Doing Initial Performance Analysis -- Is It Worth the Trouble? -- Example 1: Initial HPL Performance Investigation -- Getting an Overview of Scalability and Performance -- Learning Application Behavior -- Example 2: MiniFE Performance Investigation -- Choosing Representative Workload(s) -- Example 2 (cont.): MiniFE Performance Investigation -- Balancing Process and Thread Parallelism -- Example 2 (cont.): MiniFE Performance Investigation -- Doing a Scalability Review -- Example 2 (cont.): MiniFE Performance Investigation -- Analyzing the Details of the Application Behavior -- Example 2 (cont.): MiniFE Performance Investigation -- Choosing the Optimization Objective -- Detecting Load Imbalance -- Example 2 (cont.): MiniFE Performance Investigation -- Dealing with Load Imbalance -- Classifying Load Imbalance -- Addressing Load Imbalance -- Example 2 (cont.): MiniFE Performance Investigation -- Example 3: MiniMD Performance Investigation.
Optimizing MPI Performance -- Classifying the MPI Performance Issues -- Addressing MPI Performance Issues -- Mapping Application onto the Platform -- Understanding Communication Paths -- Selecting Proper Communication Fabrics -- Using Scalable Datagrams -- Specifying a Network Provider -- Using IP over IB -- Controlling the Fabric Fallback Mechanism -- Using Multirail Capabilities -- Detecting and Classifying Improper Process Layout and Pinning Issues -- Controlling Process Layout -- Controlling the Global Process Layout -- Controlling the Detailed Process Layout -- Setting the Environment Variables at All Levels -- Controlling the Process Pinning -- Controlling Memory and Network Affinity -- Example 4: MiniMD Performance Investigation on Xeon Phi -- Example 5: MiniGhost Performance Investigation -- Tuning the Intel MPI Library -- Tuning Intel MPI for the Platform -- Tuning Point-to-Point Settings -- Adjusting the Eager and Rendezvous Protocol Thresholds -- Changing DAPL and DAPL UD Eager Protocol Threshold -- Bypassing Shared Memory for Intranode Communication -- Bypassing the Cache for Intranode Communication -- Choosing the Best Collective Algorithms -- Tuning Intel MPI Library for the Application -- Using Magical Tips and Tricks -- Disabling the Dynamic Connection Mode -- Applying the Wait Mode to Oversubscribed Jobs -- Fine-Tuning the Message-Passing Progress Engine -- Reducing the Pre-reserved DAPL Memory Size -- What Else? -- Example 5 (cont.): MiniGhost Performance Investigation -- Optimizing Application for Intel MPI -- Avoiding MPI_ANY_SOURCE -- Avoiding Superfluous Synchronization -- Using Derived Datatypes -- Using Collective Operations -- Betting on the Computation/Communication Overlap -- Replacing Blocking Collective Operations by MPI-3 Nonblocking Ones -- Using Accelerated MPI File I/O.
Example 5 (cont.): MiniGhost Performance Investigation -- Using Advanced Analysis Techniques -- Automatically Checking MPI Program Correctness -- Comparing Application Traces -- Instrumenting Application Code -- Correlating MPI and Hardware Events -- Collecting and Analyzing Hardware Counter Information in ITAC -- Collecting and Analyzing Hardware Counter Information in VTune -- Summary -- References -- Chapter 6: Addressing Application Bottlenecks: Shared Memory -- Profiling Your Application -- Using VTune Amplifier XE for Hotspots Profiling -- Hotspots for the HPCG Benchmark -- Compiler-Assisted Loop/Function Profiling -- Sequential Code and Detecting Load Imbalances -- Thread Synchronization and Locking -- Dealing with Memory Locality and NUMA Effects -- Thread and Process Pinning -- Controlling OpenMP Thread Placement -- Thread Placement in Hybrid Applications -- Summary -- References -- Chapter 7: Addressing Application Bottlenecks: Microarchitecture -- Overview of a Modern Processor Pipeline -- Pipelined Execution -- Data Conflicts -- Control Conflicts -- Structural Conflicts -- Out-of-order vs. In-order Execution -- Superscalar Pipelines -- SIMD Execution -- Speculative Execution: Branch Prediction -- Memory Subsystem -- Putting It All Together: A Final Look at the Sandy Bridge Pipeline -- A Top-down Method for Categorizing the Pipeline Performance -- Intel Composer XE Usage for Microarchitecture Optimizations -- Basic Compiler Usage and Optimization -- Using Optimization and Vectorization Reports to Read the Compiler's Mind -- Optimizing for Vectorization -- The AVX Instruction Set -- Why Doesn't My Code Vectorize in the First Place? -- Data Dependences -- Data Aliasing -- Array Notations -- Vectorization Directives -- ivdep -- vector -- simd -- Understanding AVX: Intrinsic Programming -- What Are Intrinsics?.
First Steps: Loading and Storing -- Arithmetic -- Data Rearrangement -- Dealing with Disambiguation -- Dealing with Branches -- __builtin_expect -- Profile-Guided Optimization -- Pragmas for Unrolling Loops and Inlining -- unroll/nounroll -- unroll_and_jam/nounroll_and_jam -- inline, noinline, forceinline -- Specialized Routines: How to Exploit the Branch Prediction for Maximal Performance -- When Optimization Leads to Wrong Results -- Using a Standard Library Method -- Using a Manual Implementation in C -- Vectorization with Directives -- Analyzing Pipeline Performance with Intel VTune Amplifier XE -- Summary -- References -- Chapter 8: Application Design Considerations -- Abstraction and Generalization of the Platform Architecture -- Types of Abstractions -- Levels of Abstraction and Complexities -- Raw Hardware vs. Virtualized Hardware in the Cloud -- Questions about Application Design -- Designing for Performance and Scaling -- Designing for Flexibility and Performance Portability -- Data Layout -- Structured Approach to Express Parallelism -- Understanding Bounds and Projecting Bottlenecks -- Data Storage or Transfer vs. Recalculation -- Total Productivity Assessment -- Summary -- References -- Index.
author_facet Supalov, Alexander.
Semin, Andrey.
Dahnken, Christopher.
Klemm, Michael.
author_variant a s as
author2 Semin, Andrey.
Dahnken, Christopher.
Klemm, Michael.
author2_variant a s as
c d cd
m k mk
author2_role TeilnehmendeR
TeilnehmendeR
TeilnehmendeR
author_sort Supalov, Alexander.
title Optimizing HPC Applications with Intel Cluster Tools : Hunting Petaflops.
title_sub Hunting Petaflops.
title_full Optimizing HPC Applications with Intel Cluster Tools : Hunting Petaflops.
title_fullStr Optimizing HPC Applications with Intel Cluster Tools : Hunting Petaflops.
title_full_unstemmed Optimizing HPC Applications with Intel Cluster Tools : Hunting Petaflops.
title_auth Optimizing HPC Applications with Intel Cluster Tools : Hunting Petaflops.
title_new Optimizing HPC Applications with Intel Cluster Tools :
title_sort optimizing hpc applications with intel cluster tools : hunting petaflops.
publisher Apress L. P.,
publishDate 2014
physical 1 online resource (291 pages)
edition 1st ed.
contents Intro -- Contents at a Glance -- Contents -- About the Authors -- About the Technical Reviewers -- Acknowledgments -- Foreword -- Introduction -- Chapter 1: No Time to Read This Book? -- Using Intel MPI Library -- Using Intel Composer XE -- Tuning Intel MPI Library -- Gather Built-in Statistics -- Optimize Process Placement -- Optimize Thread Placement -- Tuning Intel Composer XE -- Analyze Optimization and Vectorization Reports -- Use Interprocedural Optimization -- Summary -- References -- Chapter 2: Overview of Platform Architectures -- Performance Metrics and Targets -- Latency, Throughput, Energy, and Power -- Peak Performance as the Ultimate Limit -- Scalability and Maximum Parallel Speedup -- Bottlenecks and a Bit of Queuing Theory -- Roofline Model -- Performance Features of Computer Architectures -- Increasing Single-Threaded Performance: Where You Can and Cannot Help -- Process More Data with SIMD Parallelism -- Distributed and Shared Memory Systems -- Use More Independent Threads on the Same Node -- Don't Limit Yourself to a Single Server -- HPC Hardware Architecture Overview -- A Multicore Workstation or a Server Compute Node -- Coprocessor for Highly Parallel Applications -- Group of Similar Nodes Form an HPC Cluster -- Other Important Components of HPC Systems -- Summary -- References -- Chapter 3: Top-Down Software Optimization -- The Three Levels and Their Impact on Performance -- System Level -- Application Level -- Working Against the Memory Wall -- The Magic of Vectors -- Distributed Memory Parallelization -- Shared Memory Parallelization -- Other Existing Approaches and Methods -- Microarchitecture Level -- Addressing Pipelines and Execution -- Closed-Loop Methodology -- Workload, Application, and Baseline -- Iterating the Optimization Process -- Summary -- References -- Chapter 4: Addressing System Bottlenecks.
Classifying System-Level Bottlenecks -- Identifying Issues Related to System Condition -- Characterizing Problems Caused by System Configuration -- Understanding System-Level Performance Limits -- Checking General Compute Subsystem Performance -- Testing Memory Subsystem Performance -- Testing I/O Subsystem Performance -- Characterizing Application System-Level Issues -- Selecting Performance Characterization Tools -- Monitoring the I/O Utilization -- Analyzing Memory Bandwidth -- Summary -- References -- Chapter 5: Addressing Application Bottlenecks: Distributed Memory -- Algorithm for Optimizing MPI Performance -- Comprehending the Underlying MPI Performance -- Recalling Some Benchmarking Basics -- Gauging Default Intranode Communication Performance -- Gauging Default Internode Communication Performance -- Discovering Default Process Layout and Pinning Details -- Gauging Physical Core Performance -- Doing Initial Performance Analysis -- Is It Worth the Trouble? -- Example 1: Initial HPL Performance Investigation -- Getting an Overview of Scalability and Performance -- Learning Application Behavior -- Example 2: MiniFE Performance Investigation -- Choosing Representative Workload(s) -- Example 2 (cont.): MiniFE Performance Investigation -- Balancing Process and Thread Parallelism -- Example 2 (cont.): MiniFE Performance Investigation -- Doing a Scalability Review -- Example 2 (cont.): MiniFE Performance Investigation -- Analyzing the Details of the Application Behavior -- Example 2 (cont.): MiniFE Performance Investigation -- Choosing the Optimization Objective -- Detecting Load Imbalance -- Example 2 (cont.): MiniFE Performance Investigation -- Dealing with Load Imbalance -- Classifying Load Imbalance -- Addressing Load Imbalance -- Example 2 (cont.): MiniFE Performance Investigation -- Example 3: MiniMD Performance Investigation.
Optimizing MPI Performance -- Classifying the MPI Performance Issues -- Addressing MPI Performance Issues -- Mapping Application onto the Platform -- Understanding Communication Paths -- Selecting Proper Communication Fabrics -- Using Scalable Datagrams -- Specifying a Network Provider -- Using IP over IB -- Controlling the Fabric Fallback Mechanism -- Using Multirail Capabilities -- Detecting and Classifying Improper Process Layout and Pinning Issues -- Controlling Process Layout -- Controlling the Global Process Layout -- Controlling the Detailed Process Layout -- Setting the Environment Variables at All Levels -- Controlling the Process Pinning -- Controlling Memory and Network Affinity -- Example 4: MiniMD Performance Investigation on Xeon Phi -- Example 5: MiniGhost Performance Investigation -- Tuning the Intel MPI Library -- Tuning Intel MPI for the Platform -- Tuning Point-to-Point Settings -- Adjusting the Eager and Rendezvous Protocol Thresholds -- Changing DAPL and DAPL UD Eager Protocol Threshold -- Bypassing Shared Memory for Intranode Communication -- Bypassing the Cache for Intranode Communication -- Choosing the Best Collective Algorithms -- Tuning Intel MPI Library for the Application -- Using Magical Tips and Tricks -- Disabling the Dynamic Connection Mode -- Applying the Wait Mode to Oversubscribed Jobs -- Fine-Tuning the Message-Passing Progress Engine -- Reducing the Pre-reserved DAPL Memory Size -- What Else? -- Example 5 (cont.): MiniGhost Performance Investigation -- Optimizing Application for Intel MPI -- Avoiding MPI_ANY_SOURCE -- Avoiding Superfluous Synchronization -- Using Derived Datatypes -- Using Collective Operations -- Betting on the Computation/Communication Overlap -- Replacing Blocking Collective Operations by MPI-3 Nonblocking Ones -- Using Accelerated MPI File I/O.
Example 5 (cont.): MiniGhost Performance Investigation -- Using Advanced Analysis Techniques -- Automatically Checking MPI Program Correctness -- Comparing Application Traces -- Instrumenting Application Code -- Correlating MPI and Hardware Events -- Collecting and Analyzing Hardware Counter Information in ITAC -- Collecting and Analyzing Hardware Counter Information in VTune -- Summary -- References -- Chapter 6: Addressing Application Bottlenecks: Shared Memory -- Profiling Your Application -- Using VTune Amplifier XE for Hotspots Profiling -- Hotspots for the HPCG Benchmark -- Compiler-Assisted Loop/Function Profiling -- Sequential Code and Detecting Load Imbalances -- Thread Synchronization and Locking -- Dealing with Memory Locality and NUMA Effects -- Thread and Process Pinning -- Controlling OpenMP Thread Placement -- Thread Placement in Hybrid Applications -- Summary -- References -- Chapter 7: Addressing Application Bottlenecks: Microarchitecture -- Overview of a Modern Processor Pipeline -- Pipelined Execution -- Data Conflicts -- Control Conflicts -- Structural Conflicts -- Out-of-order vs. In-order Execution -- Superscalar Pipelines -- SIMD Execution -- Speculative Execution: Branch Prediction -- Memory Subsystem -- Putting It All Together: A Final Look at the Sandy Bridge Pipeline -- A Top-down Method for Categorizing the Pipeline Performance -- Intel Composer XE Usage for Microarchitecture Optimizations -- Basic Compiler Usage and Optimization -- Using Optimization and Vectorization Reports to Read the Compiler's Mind -- Optimizing for Vectorization -- The AVX Instruction Set -- Why Doesn't My Code Vectorize in the First Place? -- Data Dependences -- Data Aliasing -- Array Notations -- Vectorization Directives -- ivdep -- vector -- simd -- Understanding AVX: Intrinsic Programming -- What Are Intrinsics?.
First Steps: Loading and Storing -- Arithmetic -- Data Rearrangement -- Dealing with Disambiguation -- Dealing with Branches -- __builtin_expect -- Profile-Guided Optimization -- Pragmas for Unrolling Loops and Inlining -- unroll/nounroll -- unroll_and_jam/nounroll_and_jam -- inline, noinline, forceinline -- Specialized Routines: How to Exploit the Branch Prediction for Maximal Performance -- When Optimization Leads to Wrong Results -- Using a Standard Library Method -- Using a Manual Implementation in C -- Vectorization with Directives -- Analyzing Pipeline Performance with Intel VTune Amplifier XE -- Summary -- References -- Chapter 8: Application Design Considerations -- Abstraction and Generalization of the Platform Architecture -- Types of Abstractions -- Levels of Abstraction and Complexities -- Raw Hardware vs. Virtualized Hardware in the Cloud -- Questions about Application Design -- Designing for Performance and Scaling -- Designing for Flexibility and Performance Portability -- Data Layout -- Structured Approach to Express Parallelism -- Understanding Bounds and Projecting Bottlenecks -- Data Storage or Transfer vs. Recalculation -- Total Productivity Assessment -- Summary -- References -- Index.
isbn 9781430264972
9781430264965
callnumber-first Q - Science
callnumber-subject QA - Mathematics
callnumber-label QA76
callnumber-sort QA 276.76 C65
genre Electronic books.
genre_facet Electronic books.
url https://ebookcentral.proquest.com/lib/oeawat/detail.action?docID=6422827
illustrated Not Illustrated
oclc_num 1231606970
work_keys_str_mv AT supalovalexander optimizinghpcapplicationswithintelclustertoolshuntingpetaflops
AT seminandrey optimizinghpcapplicationswithintelclustertoolshuntingpetaflops
AT dahnkenchristopher optimizinghpcapplicationswithintelclustertoolshuntingpetaflops
AT klemmmichael optimizinghpcapplicationswithintelclustertoolshuntingpetaflops
status_str n
ids_txt_mv (MiAaPQ)5006422827
(Au-PeEL)EBL6422827
(OCoLC)1231606970
carrierType_str_mv cr
is_hierarchy_title Optimizing HPC Applications with Intel Cluster Tools : Hunting Petaflops.
author2_original_writing_str_mv noLinkedField
noLinkedField
noLinkedField
marc_error Info : Unimarc and ISO-8859-1 translations identical, choosing ISO-8859-1. --- [ 856 : z ]
_version_ 1792331059490193409
fullrecord <?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>10292nam a22004573i 4500</leader><controlfield tag="001">5006422827</controlfield><controlfield tag="003">MiAaPQ</controlfield><controlfield tag="005">20240229073838.0</controlfield><controlfield tag="006">m o d | </controlfield><controlfield tag="007">cr cnu||||||||</controlfield><controlfield tag="008">240229s2014 xx o ||||0 eng d</controlfield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">9781430264972</subfield><subfield code="q">(electronic bk.)</subfield></datafield><datafield tag="020" ind1=" " ind2=" "><subfield code="z">9781430264965</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(MiAaPQ)5006422827</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(Au-PeEL)EBL6422827</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(OCoLC)1231606970</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">MiAaPQ</subfield><subfield code="b">eng</subfield><subfield code="e">rda</subfield><subfield code="e">pn</subfield><subfield code="c">MiAaPQ</subfield><subfield code="d">MiAaPQ</subfield></datafield><datafield tag="050" ind1=" " ind2="4"><subfield code="a">QA76.76.C65</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Supalov, Alexander.</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Optimizing HPC Applications with Intel Cluster Tools :</subfield><subfield code="b">Hunting Petaflops.</subfield></datafield><datafield tag="250" ind1=" " ind2=" "><subfield code="a">1st ed.</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="a">Berkeley, CA :</subfield><subfield code="b">Apress L. P.,</subfield><subfield code="c">2014.</subfield></datafield><datafield tag="264" ind1=" " ind2="4"><subfield code="c">Ã2014.</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">1 online resource (291 pages)</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">computer</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">online resource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="505" ind1="0" ind2=" "><subfield code="a">Intro -- Contents at a Glance -- Contents -- About the Authors -- About the Technical Reviewers -- Acknowledgments -- Foreword -- Introduction -- Chapter 1: No Time to Read This Book? 
-- Using Intel MPI Library -- Using Intel Composer XE -- Tuning Intel MPI Library -- Gather Built-in Statistics -- Optimize Process Placement -- Optimize Thread Placement -- Tuning Intel Composer XE -- Analyze Optimization and Vectorization Reports -- Use Interprocedural Optimization -- Summary -- References -- Chapter 2: Overview of Platform Architectures -- Performance Metrics and Targets -- Latency, Throughput, Energy, and Power -- Peak Performance as the Ultimate Limit -- Scalability and Maximum Parallel Speedup -- Bottlenecks and a Bit of Queuing Theory -- Roofline Model -- Performance Features of Computer Architectures -- Increasing Single-Threaded Performance: Where You Can and Cannot Help -- Process More Data with SIMD Parallelism -- Distributed and Shared Memory Systems -- Use More Independent Threads on the Same Node -- Don't Limit Yourself to a Single Server -- HPC Hardware Architecture Overview -- A Multicore Workstation or a Server Compute Node -- Coprocessor for Highly Parallel Applications -- Group of Similar Nodes Form an HPC Cluster -- Other Important Components of HPC Systems -- Summary -- References -- Chapter 3: Top-Down Software Optimization -- The Three Levels and Their Impact on Performance -- System Level -- Application Level -- Working Against the Memory Wall -- The Magic of Vectors -- Distributed Memory Parallelization -- Shared Memory Parallelization -- Other Existing Approaches and Methods -- Microarchitecture Level -- Addressing Pipelines and Execution -- Closed-Loop Methodology -- Workload, Application, and Baseline -- Iterating the Optimization Process -- Summary -- References -- Chapter 4: Addressing System Bottlenecks.</subfield></datafield><datafield tag="505" ind1="8" ind2=" "><subfield code="a">Classifying System-Level Bottlenecks -- Identifying Issues Related to System Condition -- Characterizing Problems Caused by System Configuration -- Understanding System-Level Performance Limits -- Checking General Compute Subsystem Performance -- Testing Memory Subsystem Performance -- Testing I/O Subsystem Performance -- Characterizing Application System-Level Issues -- Selecting Performance Characterization Tools -- Monitoring the I/O Utilization -- Analyzing Memory Bandwidth -- Summary -- References -- Chapter 5: Addressing Application Bottlenecks: Distributed Memory -- Algorithm for Optimizing MPI Performance -- Comprehending the Underlying MPI Performance -- Recalling Some Benchmarking Basics -- Gauging Default Intranode Communication Performance -- Gauging Default Internode Communication Performance -- Discovering Default Process Layout and Pinning Details -- Gauging Physical Core Performance -- Doing Initial Performance Analysis -- Is It Worth the Trouble? 
-- Example 1: Initial HPL Performance Investigation -- Getting an Overview of Scalability and Performance -- Learning Application Behavior -- Example 2: MiniFE Performance Investigation -- Choosing Representative Workload(s) -- Example 2 (cont.): MiniFE Performance Investigation -- Balancing Process and Thread Parallelism -- Example 2 (cont.): MiniFE Performance Investigation -- Doing a Scalability Review -- Example 2 (cont.): MiniFE Performance Investigation -- Analyzing the Details of the Application Behavior -- Example 2 (cont.): MiniFE Performance Investigation -- Choosing the Optimization Objective -- Detecting Load Imbalance -- Example 2 (cont.): MiniFE Performance Investigation -- Dealing with Load Imbalance -- Classifying Load Imbalance -- Addressing Load Imbalance -- Example 2 (cont.): MiniFE Performance Investigation -- Example 3: MiniMD Performance Investigation.</subfield></datafield><datafield tag="505" ind1="8" ind2=" "><subfield code="a">Optimizing MPI Performance -- Classifying the MPI Performance Issues -- Addressing MPI Performance Issues -- Mapping Application onto the Platform -- Understanding Communication Paths -- Selecting Proper Communication Fabrics -- Using Scalable Datagrams -- Specifying a Network Provider -- Using IP over IB -- Controlling the Fabric Fallback Mechanism -- Using Multirail Capabilities -- Detecting and Classifying Improper Process Layout and Pinning Issues -- Controlling Process Layout -- Controlling the Global Process Layout -- Controlling the Detailed Process Layout -- Setting the Environment Variables at All Levels -- Controlling the Process Pinning -- Controlling Memory and Network Affinity -- Example 4: MiniMD Performance Investigation on Xeon Phi -- Example 5: MiniGhost Performance Investigation -- Tuning the Intel MPI Library -- Tuning Intel MPI for the Platform -- Tuning Point-to-Point Settings -- Adjusting the Eager and Rendezvous Protocol Thresholds -- Changing DAPL and DAPL UD Eager Protocol Threshold -- Bypassing Shared Memory for Intranode Communication -- Bypassing the Cache for Intranode Communication -- Choosing the Best Collective Algorithms -- Tuning Intel MPI Library for the Application -- Using Magical Tips and Tricks -- Disabling the Dynamic Connection Mode -- Applying the Wait Mode to Oversubscribed Jobs -- Fine-Tuning the Message-Passing Progress Engine -- Reducing the Pre-reserved DAPL Memory Size -- What Else? 
-- Example 5 (cont.): MiniGhost Performance Investigation -- Optimizing Application for Intel MPI -- Avoiding MPI_ANY_SOURCE -- Avoiding Superfluous Synchronization -- Using Derived Datatypes -- Using Collective Operations -- Betting on the Computation/Communication Overlap -- Replacing Blocking Collective Operations by MPI-3 Nonblocking Ones -- Using Accelerated MPI File I/O.</subfield></datafield><datafield tag="505" ind1="8" ind2=" "><subfield code="a">Example 5 (cont.): MiniGhost Performance Investigation -- Using Advanced Analysis Techniques -- Automatically Checking MPI Program Correctness -- Comparing Application Traces -- Instrumenting Application Code -- Correlating MPI and Hardware Events -- Collecting and Analyzing Hardware Counter Information in ITAC -- Collecting and Analyzing Hardware Counter Information in VTune -- Summary -- References -- Chapter 6: Addressing Application Bottlenecks: Shared Memory -- Profiling Your Application -- Using VTune Amplifier XE for Hotspots Profiling -- Hotspots for the HPCG Benchmark -- Compiler-Assisted Loop/Function Profiling -- Sequential Code and Detecting Load Imbalances -- Thread Synchronization and Locking -- Dealing with Memory Locality and NUMA Effects -- Thread and Process Pinning -- Controlling OpenMP Thread Placement -- Thread Placement in Hybrid Applications -- Summary -- References -- Chapter 7: Addressing Application Bottlenecks: Microarchitecture -- Overview of a Modern Processor Pipeline -- Pipelined Execution -- Data Conflicts -- Control Conflicts -- Structural Conflicts -- Out-of-order vs. In-order Execution -- Superscalar Pipelines -- SIMD Execution -- Speculative Execution: Branch Prediction -- Memory Subsystem -- Putting It All Together: A Final Look at the Sandy Bridge Pipeline -- A Top-down Method for Categorizing the Pipeline Performance -- Intel Composer XE Usage for Microarchitecture Optimizations -- Basic Compiler Usage and Optimization -- Using Optimization and Vectorization Reports to Read the Compiler's Mind -- Optimizing for Vectorization -- The AVX Instruction Set -- Why Doesn't My Code Vectorize in the First Place? -- Data Dependences -- Data Aliasing -- Array Notations -- Vectorization Directives -- ivdep -- vector -- simd -- Understanding AVX: Intrinsic Programming -- What Are Intrinsics?.</subfield></datafield><datafield tag="505" ind1="8" ind2=" "><subfield code="a">First Steps: Loading and Storing -- Arithmetic -- Data Rearrangement -- Dealing with Disambiguation -- Dealing with Branches -- __builtin_expect -- Profile-Guided Optimization -- Pragmas for Unrolling Loops and Inlining -- unroll/nounroll -- unroll_and_jam/nounroll_and_jam -- inline, noinline, forceinline -- Specialized Routines: How to Exploit the Branch Prediction for Maximal Performance -- When Optimization Leads to Wrong Results -- Using a Standard Library Method -- Using a Manual Implementation in C -- Vectorization with Directives -- Analyzing Pipeline Performance with Intel VTune Amplifier XE -- Summary -- References -- Chapter 8: Application Design Considerations -- Abstraction and Generalization of the Platform Architecture -- Types of Abstractions -- Levels of Abstraction and Complexities -- Raw Hardware vs. Virtualized Hardware in the Cloud -- Questions about Application Design -- Designing for Performance and Scaling -- Designing for Flexibility and Performance Portability -- Data Layout -- Structured Approach to Express Parallelism -- Understanding Bounds and Projecting Bottlenecks -- Data Storage or Transfer vs. 
Recalculation -- Total Productivity Assessment -- Summary -- References -- Index.</subfield></datafield><datafield tag="588" ind1=" " ind2=" "><subfield code="a">Description based on publisher supplied metadata and other sources.</subfield></datafield><datafield tag="590" ind1=" " ind2=" "><subfield code="a">Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2024. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries. </subfield></datafield><datafield tag="655" ind1=" " ind2="4"><subfield code="a">Electronic books.</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Semin, Andrey.</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Dahnken, Christopher.</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Klemm, Michael.</subfield></datafield><datafield tag="776" ind1="0" ind2="8"><subfield code="i">Print version:</subfield><subfield code="a">Supalov, Alexander</subfield><subfield code="t">Optimizing HPC Applications with Intel Cluster Tools</subfield><subfield code="d">Berkeley, CA : Apress L. P.,c2014</subfield><subfield code="z">9781430264965</subfield></datafield><datafield tag="797" ind1="2" ind2=" "><subfield code="a">ProQuest (Firm)</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://ebookcentral.proquest.com/lib/oeawat/detail.action?docID=6422827</subfield><subfield code="z">Click to View</subfield></datafield></record></collection>