Volume 18, Issue 2, PP: 447-464, 2026 | Full Length Article
Mostafa Borhani 1 *
DOI: https://doi.org/10.54216/JISIoT.180231
Automated vehicle monitoring in intelligent transportation systems must operate reliably around the clock, including under conditions that routinely cripple conventional visible-light cameras: night, glare, shadows, and adverse weather. This paper proposes a modular Internet of Things (IoT) architecture for thermal-based vehicle detection, classification, and trajectory analysis, together with a four-phase deployment roadmap that connects public-dataset evaluation to live-traffic field validation. The system integrates long-wave infrared (LWIR) imaging (8–14 μm) with YOLO-family deep learning detectors (YOLOv8/v11/v12) and multi-object tracking algorithms (ByteTrack, BoT-SORT, StrongSORT), deployed across NVIDIA Jetson edge devices and cloud infrastructure through formalized JSON/MQTT data contracts. The primary contribution is a system-level integration framework that bridges the gap between component-level algorithmic research and operational deployment. Concretely, this work: (i) defines five functionally independent modules with explicit interface specifications and latency budgets not previously formalized in the thermal-ITS literature; (ii) introduces quantified decision gates that link progression criteria directly to published benchmark values; (iii) provides region-specific operational availability estimates derived from empirical weather-degradation data; and (iv) integrates domain adaptation, GDPR compliance, edge hardware budgets, and regulatory weigh-in-motion (WIM) frameworks within a single coherent system blueprint. Domain adaptation strategies reported in the peer-reviewed literature recover 20–50% of the cross-dataset mAP degradation (typically 10–30%) caused by sensor and scene variability; these figures are literature benchmarks, not results obtained in this work. An optional weight-estimation module (Module 4), based on recent vision-based and bridge WIM validation studies, is treated as an exploratory extension requiring site-specific validation.
Thermal imaging, Vehicle detection, Multi-object tracking, IoT architecture, Intelligent transportation systems, Edge computing, YOLO, Domain adaptation, Weigh-in-motion, Smart cities
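The JSON/MQTT data contracts between edge and cloud modules can be illustrated with a minimal sketch. The field names, topic layout, and node identifiers below are illustrative assumptions, not the schema actually specified by the paper's modules:

```python
import json

# Hypothetical detection message that a thermal edge node (e.g., a Jetson
# running a YOLO detector plus ByteTrack) might publish over MQTT. Every
# field name is an assumption for illustration only.
def build_detection_message(node_id, track_id, vehicle_class,
                            confidence, bbox, timestamp):
    """Serialize one tracked-vehicle observation as a JSON payload."""
    return json.dumps({
        "node_id": node_id,            # edge-device identifier
        "track_id": track_id,          # persistent ID assigned by the tracker
        "class": vehicle_class,        # e.g., "car", "truck", "bus"
        "confidence": round(confidence, 3),
        "bbox_xyxy": bbox,             # pixel coordinates in the thermal frame
        "timestamp_utc": timestamp,    # ISO-8601, for trajectory reconstruction
    })

payload = build_detection_message(
    "jetson-07", 42, "truck", 0.912,
    [118, 64, 301, 210], "2026-01-15T03:21:09Z")

# An MQTT client (e.g., paho-mqtt) would then publish along the lines of:
#   client.publish(f"its/{node_id}/detections", payload, qos=1)
```

Pinning the contract to plain JSON keeps the edge and cloud sides independently replaceable, which is the point of the modular interface specifications described above.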