Perception error model
In autonomous vehicle testing, a perception error model (PEM) is an approach to simulating the behaviour of sensing and perception systems by modeling the errors they produce rather than simulating the underlying sensor physics.[1][2] This differs from high-fidelity sensor simulation, which generates synthetic sensor signals for the actual perception algorithms to process.[3] Perception error models are sometimes referred to as surrogate models for perception systems, because they approximate the output of computationally expensive object detectors at a fraction of the computational cost.[4]
The primary motivation for PEMs is to enable efficient virtual testing of autonomous vehicle driving policies.[1][2] Since the failure modes of perception systems have significant impact on downstream planning and control, testing these systems requires capturing the dependencies between perception errors and vehicle behaviour.[2] PEMs provide a computationally efficient alternative to running full physics-based sensor simulations with actual perception algorithms.[4]
Definition and design
Formal definition
A formal definition of perception error models was proposed as an approximation of the combined function of the sensing subsystem S and the perception subsystem P:[5][1]

Ŵ = PEM(W) ≈ P(S(W))

where W denotes the ground truth world state consisting of surrounding objects, Ŵ denotes the perceived world, and E = Ŵ − W represents the perception error. The PEM receives the ground truth world W and returns the perceived world Ŵ.
The driving policy DP then generates a response R by analyzing the perceived world:

R = DP(Ŵ)
This definition does not depend on specific sensor types and provides a standard interface for integration in simulation pipelines.
Design considerations
Four key factors have been identified that affect how perception errors manifest in autonomous vehicles and that should be considered when designing a PEM.[1]
Positional aspects: The relative position of objects with respect to the ego-vehicle affects the quality of perception. Sensors have limited field of view (FoV), including limited range and blind spots. Signal strength and resolution degrade with distance, and perception systems typically perform better in areas where multiple sensor fields of view overlap.
Parameter inter-dependencies: Object parameters such as class and size affect error manifestation. Larger objects are more likely to be detected, slow-moving vehicles are easier to track, and classification errors may affect size estimation. Parameters not directly relevant to driving policy may also be important, such as object colour or material affecting LiDAR or radar performance.
Occlusion: Objects in the scene may influence detection of other objects. Large vehicles such as trucks may occlude smaller objects such as cars or cyclists. Occlusion significantly impacts perception reliability, and cumulative statistics should be reported separately for different occlusion levels.
Temporal aspects: Objects move through the scene, causing previously occluded objects to become visible and vice versa. Uncertainties in tracking and filtering algorithms depend on the previous state, so errors evolve over time and should be modeled as time series using dynamical models.
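As an illustration of how the first three factors might enter a PEM, the following sketch combines range falloff, a limited field of view, and an occlusion penalty into a single detection probability. The functional forms and constants are invented for illustration, not taken from the cited work:

```python
def detection_probability(distance_m: float, bearing_deg: float,
                          occlusion: float,
                          max_range_m: float = 80.0,
                          fov_half_angle_deg: float = 60.0) -> float:
    """Illustrative detection probability combining positional aspects
    (range and field of view) with occlusion, where `occlusion` is the
    hidden fraction of the object (0 = fully visible, 1 = fully hidden)."""
    if abs(bearing_deg) > fov_half_angle_deg or distance_m > max_range_m:
        return 0.0  # outside the sensor's field of view or range
    range_factor = 1.0 - (distance_m / max_range_m) ** 2  # degrades with distance
    occlusion_factor = (1.0 - occlusion) ** 2             # degrades with occlusion
    return range_factor * occlusion_factor

# A nearby unoccluded object versus a distant, half-occluded one.
p_near = detection_probability(10.0, 5.0, 0.0)
p_far = detection_probability(70.0, 5.0, 0.5)
```

In a full PEM the exponents and ranges would be fitted to recorded detector behaviour rather than chosen by hand, and parameter inter-dependencies (object class, size) would add further conditioning variables.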
Applications
Data-driven perception error models
A generalized data-driven approach to PEM construction was proposed using the Apollo autonomous driving software and the nuScenes public dataset.[1] This approach models the sensing and perception system as a whole using hidden Markov models, partitioned by spatial location around the ego-vehicle and occlusion level. The key contribution was demonstrating PEM integration into a simulation pipeline, enabling analysis of how perception errors affect autonomous vehicle safety. Similarly, neural networks have been tested as efficient surrogates in the CARLA simulator, demonstrating a reduction in computation time while maintaining similar downstream behaviour to actual detectors.[4]
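The hidden Markov models in this approach can be illustrated, in greatly simplified form, by a two-state Markov chain over error regimes. The states and transition probabilities below are invented for illustration, not fitted to data:

```python
import random

# Two error regimes: in state "tracked" the object is reported normally;
# in state "lost" it is dropped. A lost track tends to stay lost for a
# while, which is the temporal correlation the model captures.
TRANSITIONS = {
    "tracked": {"tracked": 0.95, "lost": 0.05},
    "lost":    {"tracked": 0.30, "lost": 0.70},
}

def simulate_error_states(steps: int, rng: random.Random) -> list[str]:
    """Sample a time series of error states for one object, so that
    perception errors are correlated across simulation steps rather
    than drawn independently at each step."""
    state, trace = "tracked", []
    for _ in range(steps):
        trace.append(state)
        p_tracked = TRANSITIONS[state]["tracked"]
        state = "tracked" if rng.random() < p_tracked else "lost"
    return trace

trace = simulate_error_states(100, random.Random(42))
```

A data-driven PEM would estimate such transition probabilities (and per-state error distributions) from matched ground-truth and detector-output logs, with separate models per spatial partition and occlusion level.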
In addition, PEMs have been extended to cooperative perception scenarios, studying how vehicle-to-everything communication can improve safety by compensating for perception limitations through information sharing between vehicles and infrastructure.[6]
Adaptive simulation using perception error models
PEMs have been deployed in emergency braking scenarios using efficient importance sampling strategies to estimate rare collision probabilities.[7] This enables likely and safety-critical perception errors to be identified.
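The importance sampling idea can be illustrated with a one-dimensional toy: a collision occurs only when a Gaussian perception error exceeds a threshold, so naive Monte Carlo rarely observes one, while sampling from a proposal shifted toward the failure region and reweighting recovers the probability efficiently. All quantities below are illustrative, not from the cited method:

```python
import math
import random

def collision(error: float, threshold: float = 4.0) -> bool:
    """Toy downstream check: a positional error beyond `threshold`
    metres makes the emergency braking manoeuvre fail."""
    return error > threshold

def is_estimate(n: int, rng: random.Random, shift: float = 4.0,
                sigma: float = 1.0) -> float:
    """Importance-sampling estimate of P(collision) when errors are
    N(0, sigma): draw from a proposal N(shift, sigma) centred on the
    failure region and reweight each sample by the likelihood ratio."""
    total = 0.0
    for _ in range(n):
        e = rng.gauss(shift, sigma)  # proposal sample
        # likelihood ratio p(e)/q(e) for two normals with equal sigma
        w = math.exp((shift**2 - 2.0 * shift * e) / (2.0 * sigma**2))
        total += w * collision(e)
    return total / n

p_hat = is_estimate(20_000, random.Random(0))
```

For these numbers the true probability is about 3.2 × 10⁻⁵, so a naive estimator would need millions of samples to see even a handful of collisions; the shifted proposal concentrates samples where failures occur and the weights correct the bias.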
In a similar fashion, a method has been described to identify perception errors that score highly on standard quality metrics but cause planning failures, termed adversarial perception errors.[8] Using a boundary-attack algorithm on black-box planners in the CARLA simulator, such errors were shown to be systematically constructable, highlighting limitations of standard perception metrics for predicting downstream safety. This adversarial approach was extended with EMPERROR, a transformer-based generative PEM using the conditional variational autoencoder framework.[9]
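The boundary-attack idea can be illustrated by bisecting the magnitude of a perception error to find the smallest one that flips a black-box planner's decision. The toy planner and thresholds below are assumptions for illustration, not the cited algorithm:

```python
def planner_safe(perceived_gap_m: float) -> bool:
    """Black-box toy planner: the plan is safe iff the perceived gap
    to the lead vehicle stays above 5 m."""
    return perceived_gap_m > 5.0

def boundary_search(true_gap_m: float, max_error_m: float,
                    tol: float = 1e-3) -> float:
    """Bisect the error magnitude to locate the decision boundary:
    the smallest underestimate of the gap that makes the planner's
    output unsafe. Queries the planner only as a black box."""
    lo, hi = 0.0, max_error_m
    if planner_safe(true_gap_m - hi):
        return float("inf")  # even the largest allowed error is harmless
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if planner_safe(true_gap_m - mid):
            lo = mid  # still safe: a larger error is needed
        else:
            hi = mid  # unsafe: the boundary lies at or below mid
    return hi

minimal_error = boundary_search(true_gap_m=7.0, max_error_m=5.0)
```

An adversarial perception error in the cited sense is one that sits near such a boundary while still scoring well on standard perception metrics, which is why those metrics alone do not guarantee downstream safety.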
References
1. Piazzoni, A.; Cherian, J.; Dauwels, J.; Chau, L.-P. (2023). "PEM: Perception Error Model for Virtual Testing of Autonomous Vehicles". IEEE Transactions on Intelligent Transportation Systems 25: 670–681. doi:10.1109/TITS.2023.3311633.
2. Hoss, M.; Scholtes, M.; Eckstein, L. (2022). "A Review of Testing Object-Based Environment Perception for Safe Automated Driving". Automotive Innovation 5: 223–250. doi:10.1007/s42154-021-00172-y.
3. Pandharipande, A.; Cheng, C.-H.; Dauwels, J.; Gurbuz, S. Z.; Ibanez-Guzman, J.; Li, G.; Piazzoni, A.; Wang, P. et al. (2023). "Sensing and Machine Learning for Automotive Perception: A Review". IEEE Sensors Journal 23 (11): 11097–11115. doi:10.1109/JSEN.2023.3262134.
4. Sadeghi, J.; Rogers, B.; Gunn, J.; Saunders, T.; Samangooei, S.; Dokania, P. K.; Redford, J. (2021). "A Step Towards Efficient Evaluation of Complex Perception Tasks in Simulation". Proceedings of the Conference on Neural Information Processing Systems (NeurIPS) Workshops.
5. Piazzoni, A.; Cherian, J.; Slavik, M.; Dauwels, J. (2020). "Modeling perception errors towards robust decision making in autonomous vehicles". Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI).
6. Piazzoni, A.; Cherian, J.; Dauwels, J. (2022). "Cooperative Perception Error Models for Autonomous Vehicles". IEEE International Conference on Intelligent Transportation Systems (ITSC).
7. Innes, C.; Ramamoorthy, S. (2023). "Testing Rare Downstream Safety Violations via Upstream Adaptive Sampling of Perception Error Models". Proceedings of the IEEE International Conference on Robotics and Automation (ICRA).
8. Sadeghi, J.; Lord, N. A.; Redford, J.; Mueller, R. (2023). "Attacking Motion Planners Using Adversarial Perception Errors". arXiv:2311.12722 [cs.RO].
9. Hanselmann, N.; Doll, S.; Cordts, M.; Lensch, H. P. A.; Geiger, A. (2025). "EMPERROR: A Flexible Generative Perception Error Model for Probing Self-Driving Planners". IEEE Robotics and Automation Letters 10 (6): 5807–5814. doi:10.1109/LRA.2025.3562789. Bibcode: 2025IRAL...10.5807H.
