Volume 5, Issue 2, PP: 21-30, 2023
Mostafa Abotaleb 1*, Ehsaneh Khodadadi 2, Nadjem Bailek 3
DOI: https://doi.org/10.54216/JAIM.050202
Progress in autonomous driving technology has accelerated dramatically over the past decade, driven largely by advances in deep learning and artificial intelligence. This work summarizes recent progress in applying deep learning techniques to the problem of autonomous driving. First, we review the deep reinforcement learning paradigm and other AI-based approaches to autonomous driving, such as convolutional and recurrent neural networks. Algorithms for driving-scene perception, path planning, behavior arbitration, and motion control have been built on these techniques. Our study focuses on two architectures: the End2End system, which maps sensory input directly to steering commands, and the modular perception-planning-action pipeline, in which each module is built with deep learning techniques. We also discuss the open challenges of building AI systems for autonomous driving, including guaranteeing functional safety, obtaining suitable training data and simulation environments, and designing efficient computing hardware. The comparison offered by this survey highlights the strengths and weaknesses of AI and deep learning approaches to autonomous driving, and thereby supports informed design decisions.
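To make the architectural contrast concrete, the sketch below compares the two pipeline styles discussed in the abstract: an End2End policy that maps a raw camera frame directly to a steering command, and a modular perception-planning-control pipeline. This is a minimal illustrative sketch, not code from any surveyed system; every function here is a hypothetical stand-in (the "network" in the End2End policy is a stub where a trained deep model would sit), and the 1-D "image" is a toy input chosen only to keep the example self-contained.

```python
# Contrast of the two architectures: End2End mapping vs. a modular
# perception-planning-action pipeline. All components are hypothetical
# stubs; a real system would replace each with a trained deep network
# or a classical algorithm.

def end2end_policy(camera_frame):
    """End2End style: a single learned function maps raw pixels
    directly to a steering command. Here the 'network' is a stub
    that steers toward the brighter half of the frame."""
    mid = len(camera_frame) // 2
    left, right = sum(camera_frame[:mid]), sum(camera_frame[mid:])
    return 0.1 * (right - left)  # positive value = steer right

def perceive(camera_frame):
    """Perception module: flag 'obstacle' pixels (saturated values)."""
    return [i for i, px in enumerate(camera_frame) if px >= 250]

def plan(obstacles, width):
    """Planning module: aim at the centre of the free space,
    returned as a normalized lateral offset in [-0.5, 0.5]."""
    free = [i for i in range(width) if i not in set(obstacles)]
    if not free:
        return 0.0
    target = sum(free) / len(free)
    return (target - (width - 1) / 2) / width

def control(lateral_offset, gain=2.0):
    """Control module: proportional steering toward the offset."""
    return gain * lateral_offset

def modular_policy(camera_frame):
    """Modular style: perception -> planning -> control, with each
    stage replaceable by a learned component."""
    obstacles = perceive(camera_frame)
    offset = plan(obstacles, len(camera_frame))
    return control(offset)

frame = [10, 20, 30, 255, 40, 50, 60, 70]  # toy 1-D "camera image"
print(end2end_policy(frame))
print(modular_policy(frame))
```

The design trade-off the survey examines shows up even in this toy: the modular pipeline exposes interpretable intermediate outputs (detected obstacles, a planned offset) that can be validated per stage, while the End2End policy is a single opaque mapping that must be assessed end to end.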
Artificial Intelligence, Deep learning, Self-driving, End2End system.