Big data represents a new era in data exploration and utilization. With the growing quantity and variety of data being collected from intelligent transportation systems and other sensors, data-driven transportation research will rely on a new generation of tools to analyze and visualize those data. To address this need, the Digital Roadway Interactive Visualization and Evaluation Network (DRIVE Net) was developed to enable large-scale online data sharing, visualization, modeling, and analysis. By incorporating an increasing variety of data sets from different sensing and acquisition technologies, the new DRIVE Net system provides a more stable, powerful, and interactive platform, and is now able to handle more complex computational tasks, visualize large-scale spatial data, and support data sharing services. Related research has also examined transit fare smart card data and its use in understanding origin-destination travel patterns.
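As a toy illustration of the origin-destination idea mentioned above (the record layout here is hypothetical, not the actual smart card or DRIVE Net schema), an OD matrix can be built by counting trips between each pair of stops:

```python
from collections import Counter

# Hypothetical smart card trip records: (card_id, boarding_stop, alighting_stop).
# Field names and values are illustrative only.
taps = [
    ("card1", "A", "C"),
    ("card2", "A", "B"),
    ("card3", "A", "C"),
    ("card4", "B", "C"),
]

def od_matrix(records):
    """Count trips for each (origin, destination) stop pair."""
    return Counter((o, d) for _, o, d in records)

print(od_matrix(taps))  # e.g. Counter({('A', 'C'): 2, ('A', 'B'): 1, ('B', 'C'): 1})
```

Real smart card systems often record only tap-ins, so the alighting stop must first be inferred (e.g., from the next boarding location), which is where much of the research effort lies.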
With the growth in the amount of traffic on the road, there is a need for better and more advanced traffic video detection techniques. Research on traffic video detection in STAR Lab focuses on traffic scene understanding and traffic parameter extraction using video analytics and computer vision. One of the main studies being carried out at STAR Lab focuses on the development of a framework to automatically detect vehicle-pedestrian near-misses through onboard monocular vision. The proposed framework can estimate depth and real-world motion information through monocular vision with a moving video background. Experimental results based on processing over 30 hours of video data demonstrate the system's ability to capture near-misses, validated by comparison with events logged by the Rosco/MobilEye Shield+ system, which includes four cameras working cooperatively. Another important study focuses on understanding and predicting traffic flow parameters by processing videos obtained from Unmanned Aerial Vehicles (UAVs). The lab also focuses on deep learning-based computer vision techniques to develop robust traffic scene understanding for both traffic surveillance and autonomous driving.
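One classical building block for monocular near-miss detection, shown here as a minimal sketch rather than the lab's actual framework, is estimating time-to-collision from how fast an object's bounding box expands between frames:

```python
def time_to_collision(h_prev, h_curr, dt):
    """Estimate time-to-collision (seconds) from the growth of an object's
    image height between two frames dt seconds apart.

    Uses the classic scale-expansion relation TTC ~ h * dt / dh, which holds
    for an object approaching the camera roughly along its optical axis.
    Returns None when the object is not getting closer.
    """
    dh = h_curr - h_prev
    if dh <= 0:
        return None  # bounding box not expanding -> not approaching
    return h_curr * dt / dh

# A pedestrian's bounding box grows from 100 px to 104 px over 0.1 s:
print(time_to_collision(100, 104, 0.1))  # ~ 2.6 s
```

In practice a detector/tracker supplies the bounding boxes, and a low TTC below some threshold would flag a candidate near-miss for further verification.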
Current problems in Transportation Engineering have numerous characteristics that make them amenable to Artificial Intelligence-based solutions. These problems usually involve both quantitative and qualitative data. The transportation systems they deal with contain many components that are not fully understood, largely because of the uncertainty introduced by the human element within the system. One effective way to deal with this uncertainty is to build empirical, data-driven solutions; for example, deep neural networks can model complex transportation-related behavior. One of the major studies at STAR Lab focuses on understanding transportation network-scale behavior and traffic speed prediction using some of the most advanced AI algorithms, such as Generative Adversarial Networks and Convolutional Neural Networks.
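The core operation inside the convolutional models mentioned above can be illustrated with a toy example (this is a plain sketch of a 1-D convolution, not the lab's actual architecture): a kernel slides over a road segment's speed history to extract local temporal features, and in a trained network the kernel weights are learned rather than fixed.

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation, as used in CNN layers)."""
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

# Speeds (mph) on one segment over six 5-minute intervals.
speeds = [60, 58, 40, 35, 37, 55]

# A simple averaging kernel; a CNN would learn these weights from data.
kernel = [1 / 3, 1 / 3, 1 / 3]
print(conv1d(speeds, kernel))
```

Stacking many such filters over both the spatial (road network) and temporal dimensions is what lets CNN-based models capture congestion patterns that propagate through the network.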
In the era of big data, traffic analysis should be conducted on high-quality, large-scale, real-time data sets. Traditional traffic sensors, such as loop detectors and video cameras, have long been widely used for data collection. However, they remain weak at certain kinds of traffic data collection, such as travel time measurement and pedestrian detection. Mobile sensing is a technology that captures the Media Access Control (MAC) addresses of mobile devices in the area surrounding a sensor, and it has been used for traffic data analysis in recent years. In contrast to traditional traffic sensors, a device's MAC address can serve as a unique identifier for each traveler, allowing the traveler's spatial-temporal movement to be recorded and traffic parameters to be extracted, including travel time, traffic speed, and traffic volume. Building on this understanding of traffic analysis and on an investigation of current products in the market, and with the support of the relevant original project, we developed a novel traffic sensor called the Mobile Unit for Sensing Traffic (MUST) version 2. At roughly 200 dollars, MUST 2 costs far less than similar products on the market. With its GPS module, MUST 2 can be used not only for data collection at a fixed location but also aboard moving platforms, such as transit and probe vehicles. A video camera and an environmental sensing module are also integrated into MUST 2, greatly extending its sensing capability and effectiveness. MUST 2 can send data to a remote server in real time over several wireless communication protocols. In addition, a data visualization platform was developed on DRIVE Net to present the collected data more directly.
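The travel time extraction described above boils down to re-identifying the same MAC address at two sensor locations and differencing the timestamps. A minimal sketch (the detection logs and the 1.5-mile spacing here are invented for illustration):

```python
# Hypothetical detection logs from two MUST-style sensors on the same corridor:
# {mac_address: detection_time_in_seconds}. Values are illustrative only.
upstream = {"aa:01": 0.0, "bb:02": 10.0, "cc:03": 20.0}
downstream = {"aa:01": 120.0, "cc:03": 200.0, "dd:04": 50.0}

def travel_times(up, down):
    """Match MAC addresses seen at both sensors and difference the timestamps."""
    return {mac: down[mac] - up[mac] for mac in up.keys() & down.keys()}

tt = travel_times(upstream, downstream)          # e.g. {'aa:01': 120.0, 'cc:03': 180.0}
segment_miles = 1.5                              # assumed sensor spacing
speeds = {mac: segment_miles / (t / 3600) for mac, t in tt.items()}  # mph
```

A production pipeline additionally has to handle MAC address randomization on modern phones, filter out devices that stop between sensors, and aggregate the matched samples into robust corridor-level estimates.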
The MUST 2 sensor can be applied to many sensing tasks, including pedestrian detection, transit passenger origin-destination (OD) estimation, and fog detection. Through these detection functions, MUST 2 can be used to improve the mobility and safety of transportation systems. Beyond motorized traffic data collection, this technology can also capture non-motorized traffic information by monitoring the mobile devices carried by pedestrians and bicyclists.
Traffic safety has been a fundamental area of research within the field since its beginnings. Continual advances, ranging from improvements in vehicle technologies to enhancements in roadside safety hardware, have helped make our roads safer by reducing crash frequencies and/or severities. That said, the number of fatalities due to motor vehicle crashes in the United States has been increasing since 2015, following several years of decline. Thus, there is still much work to do if we want to achieve a future where goals like zero deaths from traffic crashes become a reality. STAR Lab researchers have actively studied traffic safety since the lab's inception. At a high level, research projects have focused on topics including hotspot identification, crash frequency modeling, and crash severity modeling. Most of the research centers on developing new modeling methodologies for these problems and applying real-world data (e.g., crash reports, roadway geometrics, and weather) so that model results can support forecasting and decision-making. Additionally, there is strong overlap between our computer vision and safety work, as several projects have studied traffic safety in a real-time, computer-vision-based context (e.g., transit collision avoidance and near-miss detection based on video data collected from in-vehicle cameras).
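One widely used building block in hotspot identification is the empirical Bayes estimate, which blends a site's observed crash count with the prediction of a safety performance function (SPF). The sketch below uses the standard negative-binomial weighting; the numbers are invented for illustration and are not from a STAR Lab model:

```python
def empirical_bayes(observed, predicted, dispersion):
    """Empirical Bayes expected crash frequency for one site.

    Combines the observed crash count with an SPF prediction, weighted
    using the negative binomial overdispersion parameter, as in standard
    hotspot screening. A larger dispersion trusts the SPF more.
    """
    w = 1.0 / (1.0 + predicted / dispersion)
    return w * predicted + (1.0 - w) * observed

# Site with 9 observed crashes/yr but an SPF prediction of only 3/yr:
# w = 1/(1 + 3/2) = 0.4, so EB = 0.4*3 + 0.6*9 = 6.6 crashes/yr.
print(empirical_bayes(observed=9, predicted=3, dispersion=2.0))  # ~ 6.6
```

Shrinking the raw count toward the SPF prediction this way corrects for regression-to-the-mean, so sites are ranked by their long-run expected frequency rather than by one unlucky year.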