In this demo, we show a camera-based object detection system that we have prototyped. Such systems are typically part of Advanced Driver Assistance Systems (ADAS).
ADAS increases road and vehicle safety by using sensors to detect nearby objects and vehicles and respond accordingly, for example by slowing the vehicle down or changing its direction. In addition to camera sensors, ADAS typically also uses RADAR, LIDAR and GPS sensors. For object detection with a camera sensor, the system first processes the images provided by the camera to detect and identify the objects close to the vehicle; this is called the perception phase. Next, based on the detected objects, the current state of the vehicle and external factors, the system determines the action to be taken. In the final phase, the system issues the required commands to other vehicle systems so that the action(s) can be executed.
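The perception → decision → actuation flow described above can be sketched as a simple control loop. The sketch below is a deliberately toy decision phase, not our production logic: the class labels, the distance thresholds, and the action names are all illustrative assumptions for the example.

```python
# Toy sketch of the ADAS decision phase. All labels, thresholds and
# action names here are illustrative assumptions, not production values.

def decide_action(detections, speed_kmh):
    """Map perception output (a list of detected objects) to one action."""
    for obj in detections:
        # A pedestrian close to the vehicle takes priority: brake.
        if obj["label"] == "pedestrian" and obj["distance_m"] < 10:
            return "brake"
        # A vehicle close ahead while we are moving fast: slow down.
        if obj["label"] == "vehicle" and obj["distance_m"] < 20 and speed_kmh > 30:
            return "slow_down"
    # Nothing relevant detected: keep the current speed and direction.
    return "maintain"

# The perception phase would fill `detections` from the camera pipeline;
# here we hand-craft one frame's worth of output to exercise the loop.
detections = [{"label": "vehicle", "distance_m": 15.0}]
print(decide_action(detections, speed_kmh=50))  # slow_down
```

The returned action string stands in for the command that the final phase would forward to the braking or steering subsystem.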
In this demo, we focus on the perception phase, i.e. we show that our system can accurately identify objects in front of our vehicle. The hardware used for this demo consists of a camera sensor mounted on the roof of the demo vehicle. The camera sensor is connected via a MIPI CSI interface to the computer, an Nvidia Jetson Nano board with a 128-core CUDA-capable GPU. On the Jetson Nano we run AI-based software: a deep-neural-network object detector called YOLO, or "You Only Look Once". It is open-source software provided by the Nvidia community, built on a pre-trained object detection model and optimized for the Jetson Nano board, and it can detect up to 9000 classes of objects. The detector runs on the GPU of the Jetson Nano and processes the input from the camera sensor to detect and identify the objects, vehicles and pedestrians in front of our vehicle. The camera image is then rendered on the computer display with bounding boxes around the pertinent objects the software has detected. The entire process happens in real time, as can be seen in the video.
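To give a flavour of the capture side, the sketch below builds the kind of GStreamer pipeline string commonly used to read a MIPI CSI camera into OpenCV on a Jetson Nano. The element names (`nvarguscamerasrc`, `nvvidconv`) are the standard Jetson ones; the resolution and frame rate are assumptions for the example, and the exact pipeline for our demo may differ depending on the sensor.

```python
# Sketch of a GStreamer capture pipeline for a MIPI CSI camera on a
# Jetson Nano. Resolution and frame rate below are example assumptions.

def csi_pipeline(width=1280, height=720, fps=30):
    """Build a GStreamer string that delivers BGR frames to OpenCV."""
    return (
        "nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"framerate={fps}/1 ! "
        "nvvidconv ! video/x-raw, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink"
    )

# On the device this string would be passed to OpenCV, e.g.:
#   cap = cv2.VideoCapture(csi_pipeline(), cv2.CAP_GSTREAMER)
# Each captured frame is then fed to the detector, and the returned
# bounding boxes are drawn on the frame before it is displayed.
print(csi_pipeline())
```

The detector itself consumes each frame on the GPU and returns class labels with box coordinates, which the display step overlays on the live image.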
As the video shows, all the vehicles and pedestrians visible through the windshield of our vehicle are also indicated on the computer display. The computer completes its processing and renders the output at 25 frames per second.
Crevavi Engineering Solutions Pvt. Ltd.
123/107, Gokharam Rathnam Complex,
1st Floor, 2nd Main Rd, Yediyur Jayanagar, 7th Block, Bengaluru, Karnataka 560082.
Crevavi Technologies Pvt. Ltd.
A V Complex (2nd Floor),
Plot No. 436-E, Ring Road, Hebbal Indl. Area,
Mysore, India 570016
Crevavi Inc.
325 N Saint Paul Street
Suite 3100
Dallas, TX 75201-0161
Crevavi Technologies UG (haftungsbeschränkt)
Registered office: München, HRB 276611
Business address: Leopoldstraße 180, 80804 München, Germany