Anomaly detection in autonomous vehicle sensor data

Authors:  Simon Bouget, David Eklund, Abdulrahman Kanaa

The nIoVe project aims to develop a holistic framework for the cybersecurity of the Internet of Vehicles (IoV), and in particular of self-driving autonomous vehicles. As part of this framework, one of the components, the Network and Connected Devices Behaviour Anomaly Detection toolkit, processes all the available data in order to detect deviations from normal patterns that could be a sign of a cyberattack. The toolkit is composed of several specialized algorithms, each dedicated to analysing a specific kind of data, and in this blog post we will present the progress of the GNSS sensor analyser.

GNSS data is location data used by autonomous vehicles to track their position over time. GNSS, or Global Navigation Satellite System, relies on radio-frequency communications with distant satellites and is relatively vulnerable to spoofing or jamming attacks mounted with nearby dedicated devices. Detecting when the communications are perturbed in such a manner is a critical task of the sensor analyser.

The sensor analyser is also useful when the GNSS data itself is correct but the vehicle exhibits abnormal behaviour. Such behaviour is visible in the GNSS data, and the sensor analyser can trigger an alert to inform other parts of the nIoVe framework that the vehicle is having an issue.

Currently, the GNSS sensor analyser has been tested in the nIoVe co-simulation tool under various scenarios. Below, we describe the general principles of anomaly detection and present the results of the analyser in several scenarios.

Detection of abnormal behaviour

In GNSS spoofing, the GNSS signal received by the autonomous vehicle does not correspond to the expected signal. This can result from a cyberattack that generates a fake GNSS signal in the driving area, which may misorient the vehicle and cause unexpected behaviour. In the simulation environment, spoofing is emulated by redefining the planned route of the autonomous vehicle. This makes the vehicle change its route in an abnormal way and produces unexpected vehicle behaviour.

Various methods can detect anomalies in GNSS data, whether due to spoofing or to abnormal behaviour, and we selected different methods depending on the context and the available data. Below, we present several situations and the method selected for each.

When the planned itinerary of the vehicle is available, we detect anomalies using the following 6-step process:

1. The waypoints of the given map are transformed into geolocation points and projected onto a relative coordinate system.

2. The angle of rotation between every two consecutive waypoints is calculated.

3. The vehicle data are received, and the projections of the vehicle positions are calculated.

4. We check for waypoints within a certain distance of the vehicle.

5. If a waypoint is within that distance, the waypoint direction is compared to the vehicle direction.

6. If the difference in directions exceeds a certain limit, the vehicle is considered to have abnormal behaviour.
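The core of steps 4 to 6 can be sketched as follows. This is a minimal illustration rather than the project's actual implementation: the distance and angle thresholds, and the assumption that waypoints and positions are already projected into a local x/y frame (step 1), are ours.

```python
import math

def heading(p, q):
    """Bearing in degrees from point p to point q in a local x/y frame."""
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))

def is_abnormal(vehicle_pos, vehicle_heading, waypoints,
                max_dist=5.0, max_angle=45.0):
    """Flag the vehicle as abnormal if a nearby waypoint's direction
    disagrees with the vehicle's own heading (steps 4-6).
    Thresholds are illustrative, not the project's tuned values."""
    for i in range(len(waypoints) - 1):
        wp = waypoints[i]
        dist = math.hypot(wp[0] - vehicle_pos[0], wp[1] - vehicle_pos[1])
        if dist <= max_dist:
            # step 2: direction between consecutive waypoints
            wp_dir = heading(wp, waypoints[i + 1])
            # wrap the angular difference into [0, 180] degrees
            diff = abs((vehicle_heading - wp_dir + 180.0) % 360.0 - 180.0)
            if diff > max_angle:
                return True
    return False
```

A vehicle driving along the planned waypoints keeps a small angular difference and is left alone; one heading perpendicular to the route near a waypoint is flagged.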

In other cases, when only historical data is available, we build a probability density of the vehicle position based on its usual routes. The model is kernel-based and can be evaluated very quickly, enabling real-time anomaly detection once the model has been trained. Yet another alternative to the six steps above is to include the velocity direction in the probability density, so that the probability is evaluated over position and orientation simultaneously. We tested this with similar results to the procedure above, but decided to keep the straightforward implementation since it is transparent and explainable; this is less true of the kernel-based method, as well as of its counterpart that uses a neural network to estimate the probability density.
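To make the kernel-based idea concrete, here is a minimal Gaussian kernel density estimate over 2-D positions: positions seen on the vehicle's usual routes get a high density, and positions far from any training data fall below a log-density threshold. The class name, bandwidth, and threshold are illustrative assumptions, not the project's implementation.

```python
import math

class PositionKDE:
    """Minimal Gaussian kernel density estimate over 2-D positions.
    Training positions would come from the vehicle's historical routes."""

    def __init__(self, positions, bandwidth=2.0):
        self.positions = positions  # list of (x, y) training points
        self.h = bandwidth          # kernel bandwidth (illustrative)

    def log_density(self, x, y):
        """Log of the averaged Gaussian kernels centred on the training points."""
        h2 = self.h ** 2
        norm = 1.0 / (len(self.positions) * 2.0 * math.pi * h2)
        s = sum(math.exp(-((x - px) ** 2 + (y - py) ** 2) / (2.0 * h2))
                for px, py in self.positions)
        return math.log(norm * s + 1e-300)  # guard against log(0)

    def is_anomalous(self, x, y, log_threshold=-10.0):
        """Flag positions whose density falls below an illustrative threshold."""
        return self.log_density(x, y) < log_threshold
```

Because evaluation is a single pass over the kernels, the model is cheap enough to query at every GNSS fix once trained; for large histories, a tree-backed estimator would serve the same purpose.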

Some results and outcomes

In the following three examples, the vehicle exhibits abnormal behaviour at the end of its journey. The anomaly detector picks this up, and the results are presented below: red points represent abnormal behaviour, whereas green points represent normal behaviour.

Unexpected anomalies in simulations

In the course of testing the sensor analyser in the nIoVe co-simulation tool, we discovered some interesting unintended behaviour. For example, when the vehicle's initial orientation does not quite match the planned route, the vehicle engages in rather reckless driving at the start. In the example below, the vehicle starts by making a U-turn across the solid yellow line. This dangerous behaviour is flagged by the anomaly detector.

Similarly, other parts of the anomaly detection algorithm pick up anomalous events that differ slightly from the anomalies they were designed to detect. For example, if a vehicle crashes into a house or a wall, the graphics engine has a hard time keeping up, which triggers the algorithms designed to detect manipulation of the video stream. Another case of interest is the sanity check that verifies that the odometry speed estimate is consistent with the GNSS data: if a vehicle crashes and experiences extreme deceleration, the odometry is delayed in its response, and this is detected as a mismatch between GNSS and odometry.
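The GNSS/odometry sanity check can be sketched as a per-timestep comparison between the speed implied by consecutive GNSS fixes and the reported odometry speed. The function name, the tolerance, and the assumption of positions already projected into a local metric frame are ours for illustration.

```python
import math

def speed_mismatch(gnss_positions, timestamps, odo_speeds, tol=3.0):
    """For each timestep, compare the speed derived from consecutive GNSS
    fixes (in a local metric frame) against the odometry reading, and flag
    steps where they disagree by more than `tol` m/s (illustrative value)."""
    flags = []
    for i in range(1, len(gnss_positions)):
        (x0, y0), (x1, y1) = gnss_positions[i - 1], gnss_positions[i]
        dt = timestamps[i] - timestamps[i - 1]
        gnss_speed = math.hypot(x1 - x0, y1 - y0) / dt
        flags.append(abs(gnss_speed - odo_speeds[i]) > tol)
    return flags
```

In the crash scenario described above, the GNSS-derived speed drops immediately while the delayed odometry still reports the pre-crash speed, so the corresponding timestep is flagged.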

Wrapping up

Above, we tried to give some insight into the inner workings of the sensor analyser and anomaly detection in the nIoVe project. Other topics of interest in this context are the Visual Analytics Detector, which detects anomalies in the CAN bus data, and the TCP/IP analyser, which detects cyberattacks using network traffic data.
