Semi-trucks weigh up to 80,000 pounds and operate mainly on highways at speeds of up to 75 mph. Due to their size and limited maneuverability, semi-trucks need much more time than passenger vehicles to brake, change lanes, and make left-hand turns. The braking distance at top speed can exceed 200 meters (roughly the length of two football fields), a complete lane-change maneuver on the highway takes about 10 seconds, and a safe left-hand turn on local roads can take up to 16 seconds. Because kinetic energy scales with the square of speed, a fully loaded semi-truck at 75 mph carries nearly twice the kinetic energy it would at 55 mph. To safely navigate these real-world situations, Level 4 autonomous trucking requires a planning horizon of up to 1,000 meters (roughly 35 seconds).
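The figures above can be sanity-checked with a few lines of arithmetic. The sketch below uses illustrative values only (standard unit conversions, not TuSimple's actual planning parameters) to confirm the kinetic-energy comparison and to convert a 1,000-meter horizon into seconds at top speed:

```python
# Back-of-the-envelope check of the figures above.
MPH_TO_MS = 0.44704  # conversion factor: miles per hour to meters per second

def kinetic_energy(mass_kg: float, speed_mph: float) -> float:
    """KE = 1/2 * m * v^2, with speed converted to m/s."""
    v = speed_mph * MPH_TO_MS
    return 0.5 * mass_kg * v ** 2

TRUCK_MASS_KG = 80_000 * 0.4536  # 80,000 lb gross weight, ~36,300 kg

ke_75 = kinetic_energy(TRUCK_MASS_KG, 75)
ke_55 = kinetic_energy(TRUCK_MASS_KG, 55)
print(f"KE ratio, 75 vs 55 mph: {ke_75 / ke_55:.2f}")  # (75/55)^2 ≈ 1.86

# Seconds of lookahead that a 1,000 m horizon buys at top highway speed:
horizon_s = 1_000 / (75 * MPH_TO_MS)
print(f"1,000 m at 75 mph ≈ {horizon_s:.0f} s")  # ≈ 30 s
```

At the 75 mph top speed the 1,000-meter horizon works out to about 30 seconds; the 35-second figure quoted in the text corresponds to a somewhat lower cruising speed.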
Our 1,000-meter perception capability helps our vehicles handle challenging scenarios on the road. In the image below, a white car is parked in the emergency lane far ahead. Our truck should change to the left lane to safely avoid the emergency lane vehicle (ELV). Our autonomous driving system determines whether a full lane change is safe and, if it is not, what other preventive measures the truck should take, such as slowing down. Changing lanes is the preferred option for several reasons; for example, the ELV could accelerate and merge into our truck's lane, so the truck should change lanes as early as possible to avoid any interaction with the ELV. A complete lane change takes approximately 10 seconds, or about 450 meters, but for added safety our autonomous driving system doubles the time and distance budgeted for the maneuver to account for potentially challenging conditions, such as insufficient space in the target lane. Our perception system recognizes the ELV approximately 1,000 meters away, giving the autonomous driving system ample time to plan and execute the lane change smoothly and safely.
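The doubled lane-change budget described above still fits comfortably inside the detection range. The sketch below checks this with assumed values (75 mph top speed and the 10-second nominal lane change from the text; the exact margins are illustrative, not TuSimple's planning parameters):

```python
MPH_TO_MS = 0.44704  # conversion factor: mph to m/s

speed_ms = 75 * MPH_TO_MS   # ~33.5 m/s at highway top speed
lane_change_s = 10          # nominal lane-change duration from the text
safety_factor = 2           # the system doubles the time/distance budget

budget_s = lane_change_s * safety_factor
budget_m = speed_ms * budget_s  # distance covered during the doubled budget
print(f"doubled budget: {budget_s} s, ~{budget_m:.0f} m")  # ~671 m

# Even the doubled budget sits well inside a 1,000 m detection range,
# leaving roughly 330 m (about 10 s) to detect, classify, and decide.
assert budget_m < 1_000
```

In other words, detecting the ELV at 1,000 meters leaves about ten seconds of slack before the doubled lane-change budget even begins.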
To solve ELV scenarios at night, we integrated Sony's latest CMOS camera technology into our perception system. These cameras work with our other sensor arrays and proprietary algorithms to deliver superior night vision. Many law enforcement and construction vehicles operate at night in emergency lanes, and to drive safely our truck should move two lanes over from an ELV when a police officer or construction worker is present. Nighttime driving is more challenging because low illumination makes object shapes ambiguous. Despite the dark road and unclear vehicle silhouettes in the pictures below, our perception system successfully detects the emergency vehicle up to 1,000 meters ahead. This planning horizon provides sufficient time for our truck to make a safe and smooth maneuver.
Our 1,000-meter perception system for autonomous driving is more than a single algorithm running on a single sensor. It combines multiple state-of-the-art learning-based and geometry-based algorithms across various sensors (cameras, LiDARs, and radars), processing 600 trillion operations per second. This integrated technology helps our truck not only see but also interpret what is happening on the road. Using this perception system, our autonomous driving system infers other vehicles' distance, speed, and type; attempts to predict their intentions and future maneuvers the way a human driver would; and performs extensive scenario classification to understand the full context of the road.
While our partner’s hardware is not exclusive to TuSimple, the way our computer vision technology interprets the images and information provided by the hardware sensors is proprietary. Our long-range perception system enables our L4 autonomous truck to perceive and understand road situations up to 1,000 meters ahead and 300 meters in all directions, providing up to a 35-second planning horizon. Due to the physical constraints of LiDAR-oriented systems, such as limited range and sparse returns, it is not safe to run LiDAR-only autonomous trucks on the road. Cameras, by contrast, work like human eyes, providing rich, high-resolution, and dense environmental information at long range, making them ideal for the 1,000-meter perception task. Our computer vision technology has broken ten world records and ranked first on KITTI, Cityscapes, and Waymo’s 2020 Open Dataset for 2D detection.
As a result, our 1,000-meter vehicle perception system has made us the only autonomous trucking company to demonstrate autonomous operations on both surface streets and highways at speeds up to 75 mph, positioning us as the industry leader. By combining this perception system with our other hardware and software and with our go-to-market strategy, we are on a path to be the first to bring L4 freight capacity to market, making transportation safer, more efficient, and more affordable.