Computer Aided Verification: 34th International Conference, CAV 2022, Haifa, Israel, August 7-10, 2022, Proceedings, Part I.

Bibliographic Details
Superior document: Lecture Notes in Computer Science Series; v. 13371
Contributor:
Place / Publishing House: Cham: Springer International Publishing AG, 2022.
©2022.
Year of Publication: 2022
Edition: 1st ed.
Language: English
Series: Lecture Notes in Computer Science Series
Online Access:
Physical Description: 1 online resource (563 pages)
Table of Contents:
  • Intro
  • Preface
  • Organization
  • Contents - Part I
  • Contents - Part II
  • Invited Papers
  • A Billion SMT Queries a Day (Invited Paper)
  • 1 Introduction
  • 2 Eliminate Writing Specifications
  • 2.1 Generating Possible Specifications (Findings)
  • 3 Domain-Specific Abstractions
  • 4 SMT Solving at Cloud Scale
  • 4.1 Monotonicity in Runtimes Across Solver Versions
  • 4.2 Stability of the Solvers
  • 4.3 Concluding Remarks
  • References
  • Program Verification with Constrained Horn Clauses (Invited Paper)
  • 1 Introduction
  • 2 Logic of Constrained Horn Clauses
  • 3 Solving CHC Modulo Theories
  • References
  • Formal Methods for Probabilistic Programs
  • Data-Driven Invariant Learning for Probabilistic Programs
  • 1 Introduction
  • 2 Preliminaries
  • 3 Algorithm Overview
  • 4 Learning Exact Invariants
  • 4.1 Generate Features (getFeatures)
  • 4.2 Sample Initial States (sampleStates)
  • 4.3 Sample Training Data (sampleTraces)
  • 4.4 Learning a Model Tree (learnInv)
  • 4.5 Extracting Expectations from Models (extractInv)
  • 4.6 Verify Extracted Expectations (verifyInv)
  • 5 Learning Sub-invariants
  • 5.1 Sample Training Data (sampleTraces)
  • 5.2 Learning a Neural Model Tree (learnInv)
  • 5.3 Verify Extracted Expectations (verifyInv)
  • 6 Evaluations
  • 6.1 R1: Evaluation of the Exact Invariant Method
  • 6.2 R2: Evaluation of the Sub-invariant Method
  • 7 Related Work
  • References
  • Sound and Complete Certificates for Quantitative Termination Analysis of Probabilistic Programs
  • 1 Introduction
  • 2 Overview
  • 3 Preliminaries
  • 4 A Sound and Complete Characterization of SIs
  • 5 Stochastic Invariants for LBPT
  • 6 Automated Template-Based Synthesis Algorithm
  • 7 Experimental Results
  • 8 Related Works
  • 9 Conclusion
  • References
  • Does a Program Yield the Right Distribution?
  • 1 Introduction
  • 2 Generating Functions
  • 2.1 The Ring of Formal Power Series
  • 2.2 Probability Generating Functions
  • 3 ReDiP: A Probabilistic Programming Language
  • 3.1 Program States and Variables
  • 3.2 Syntax of ReDiP
  • 3.3 The Statement x += iid(D, y)
  • 4 Interpreting ReDiP with PGF
  • 4.1 A Domain for Distribution Transformation
  • 4.2 From Programs to PGF Transformers
  • 4.3 Probabilistic Termination
  • 5 Reasoning About Loops
  • 5.1 Fixed Point Induction
  • 5.2 Deciding Equivalence of Loop-free Programs
  • 6 Case Studies
  • 7 Related Work
  • 8 Conclusion and Future Work
  • References
  • Abstraction-Refinement for Hierarchical Probabilistic Models
  • 1 Introduction
  • 2 Overview
  • 3 Formal Problem Statement
  • 3.1 Background
  • 3.2 Hierarchical MDPs
  • 3.3 Optimal Local Subpolicies and Beyond
  • 4 Solving hMDPs with Abstraction-Refinement
  • 4.1 The Macro-MDP Formulation
  • 4.2 The Uncertain Macro-MDP Formulation
  • 4.3 Set-Based SubMDP Analysis
  • 4.4 Templates for Set-Based subMDP Analysis
  • 5 Implementing the Abstraction-Refinement Loop
  • 6 Experiments
  • 7 Related Work
  • 8 Conclusion
  • References
  • Formal Methods for Neural Networks
  • Shared Certificates for Neural Network Verification
  • 1 Introduction
  • 2 Background
  • 3 Proof Sharing with Templates
  • 3.1 Motivation: Proof Subsumption
  • 3.2 Proof Sharing with Templates
  • 4 Efficient Verification via Proof Sharing
  • 4.1 Choice of Abstract Domain
  • 4.2 Template Generation
  • 4.3 Robustness to Adversarial Patches
  • 4.4 Geometric Robustness
  • 4.5 Requirements for Proof Sharing
  • 5 Experimental Evaluation
  • 5.1 Experimental Setup
  • 5.2 Robustness Against Adversarial Patches
  • 5.3 Robustness Against Geometric Perturbations
  • 5.4 Discussion
  • 6 Related Work
  • 7 Conclusion
  • References
  • Example Guided Synthesis of Linear Approximations for Neural Network Verification
  • 1 Introduction
  • 2 Preliminaries
  • 2.1 Neural Networks
  • 2.2 Existing Neural Network Verification Techniques and Limitations
  • 3 Problem Statement and Challenges
  • 3.1 The Synthesis Problem
  • 3.2 Challenges and Our Solution
  • 4 Our Approach
  • 4.1 Partitioning the Input Interval Space (Ix)
  • 4.2 Learning the Function g(l, u)
  • 4.3 Ensuring Soundness of the Linear Approximations
  • 4.4 Efficient Lookup of the Linear Bounds
  • 5 Evaluation
  • 5.1 Benchmarks
  • 5.2 Experimental Results
  • 6 Related Work
  • 7 Conclusions
  • References
  • Verifying Neural Networks Against Backdoor Attacks
  • 1 Introduction
  • 2 Problem Definition
  • 3 Verifying Backdoor Absence
  • 3.1 Overall Algorithm
  • 3.2 Verifying Backdoor Absence Against a Set of Images
  • 3.3 Abstract Interpretation
  • 3.4 Generating Backdoor Triggers
  • 3.5 Correctness and Complexity
  • 3.6 Discussion
  • 4 Implementation and Evaluation
  • 5 Related Work
  • 6 Conclusion
  • References
  • Trainify: A CEGAR-Driven Training and Verification Framework for Safe Deep Reinforcement Learning
  • 1 Introduction
  • 2 Deep Reinforcement Learning (DRL)
  • 3 Model Checking of DRL Systems
  • 3.1 The Model Checking Problem
  • 3.2 Challenges in Model Checking DRL Systems
  • 4 The CEGAR-Driven DRL Approach
  • 4.1 The Framework
  • 4.2 Training on Abstract States
  • 4.3 Model Checking Trained DRL Systems
  • 4.4 Counterexample-Guided Refinement
  • 5 Implementation and Evaluation
  • 5.1 Implementation
  • 5.2 Benchmarks and Experimental Settings
  • 5.3 Reliability and Verifiability Comparison
  • 5.4 Performance Comparison
  • 6 Related Work
  • 7 Discussion and Conclusion
  • References
  • Neural Network Robustness as a Verification Property: A Principled Case Study
  • 1 Introduction
  • 2 Existing Training Techniques and Definitions of Robustness
  • 3 Robustness in Evaluation, Attack and Verification
  • 4 Relative Comparison of Definitions of Robustness
  • 4.1 Standard and Lipschitz Robustness
  • 4.2 (Strong) Classification Robustness
  • 4.3 Standard vs Classification Robustness
  • 5 Other Properties of Robustness Definitions
  • 6 Conclusions
  • References
  • Software Verification and Model Checking
  • The Lattice-Theoretic Essence of Property Directed Reachability Analysis
  • 1 Introduction
  • 2 Fixed-points in Complete Lattices
  • 3 Lattice-Theoretic Reconstruction of PDR
  • 3.1 Positive LT-PDR: Sequential Positive Witnesses
  • 3.2 Negative PDR: Sequential Negative Witnesses
  • 3.3 LT-PDR: Integrating Positive and Negative
  • 4 Structural Theory of PDR by Category Theory
  • 4.1 Categorical Modeling of Dynamics and Predicate Transformers
  • 4.2 Structural Theory of PDR from Transition Systems
  • 5 Known and New PDR Algorithms as Instances
  • 5.1 LT-PDRs for Kripke Structures: PDRF-Kr and PDRIB-Kr
  • 5.2 LT-PDR for MDPs: PDRIB-MDP
  • 5.3 LT-PDR for Markov Reward Models: PDRMRM
  • 6 Implementation and Evaluation
  • 7 Conclusions and Future Work
  • References
  • Affine Loop Invariant Generation via Matrix Algebra
  • 1 Introduction
  • 1.1 Related Work
  • 2 Preliminaries
  • 3 Affine Invariants via Farkas' Lemma
  • 4 Single-Branch Affine Loops with Deterministic Updates
  • 4.1 Derived Constraints from the Farkas Tables
  • 4.2 Loops with Tautological Guard
  • 4.3 Loops with Guard: Single-Constraint Case
  • 4.4 Loops with Guard: Multi-constraint Case
  • 5 Generalizations
  • 5.1 Affine Loops with Non-deterministic Updates
  • 5.2 An Extension to Bidirectional Affine Invariants
  • 5.3 Other Possible Generalizations
  • 6 Approximation of Eigenvectors through Continuity
  • 7 Experimental Results
  • References
  • Data-driven Numerical Invariant Synthesis with Automatic Generation of Attributes
  • 1 Introduction
  • 2 Safety Verification Using Learning of Invariants
  • 2.1 Linear Constraints and Safety Verification
  • 2.2 The ICE Learning Framework
  • 2.3 ICE-DT: Invariant Learning Using Decision Trees
  • 3 Linear Formulas as Abstract Objects
  • 4 Generating Attributes from Sample Separators
  • 4.1 Abstract Sample Separators
  • 4.2 Computing a Join-Maximal Abstract Separator
  • 4.3 Integrating Separator Computation in ICE-DT
  • 4.4 Computing Separators Incrementally
  • 5 Experiments
  • 6 Conclusion
  • References
  • Proof-Guided Underapproximation Widening for Bounded Model Checking
  • 1 Introduction
  • 2 Background
  • 2.1 Language Model
  • 2.2 VC Generation for a Procedure
  • 2.3 Static Versus Dynamic Inlining
  • 2.4 Verification with Stratified Inlining
  • 2.5 Overapproximation Refinement Guided Stratified Inlining
  • 3 Overview
  • 3.1 Underapproximation Widening
  • 4 Algorithms
  • 4.1 Underapproximation Widening Guided Stratified Inlining (UnderWidenSI)
  • 4.2 Portfolio Technique
  • 5 Experimental Results
  • 5.1 Corral Versus Legion
  • 5.2 Performance of Legion+
  • 6 Related Work
  • 7 Conclusion
  • References
  • SolCMC: Solidity Compiler's Model Checker
  • 1 Introduction
  • 2 Solidity Model Checking
  • 2.1 The CHC Verification Engine
  • 2.2 Horn Encoding
  • 3 User Features
  • 4 Real World Experiments
  • 4.1 CHC Solver Options
  • 4.2 Deposit Contract
  • 4.3 ERC777
  • 4.4 Discussion
  • 5 Conclusions and Future Work
  • References
  • Hyperproperties and Security
  • Software Verification of Hyperproperties Beyond k-Safety
  • 1 Introduction
  • 1.1 Verification Beyond k-Safety
  • 1.2 Contributions and Structure
  • 2 Overview: Reductions and Quantification as a Game
  • 2.1 Reductions as a Game
  • 2.2 Beyond k-Safety: Quantification as a Game
  • 3 Preliminaries
  • 4 Observation-Based HyperLTL
  • 5 Reductions as a Game
  • 6 Verification Beyond k-Safety
  • 6.1 Existential Trace Quantification as a Game