- Texas A&M University: Peng Jiang, Kasi Viswanath, Akhil Nagariya, George Chustz, Srikanth Saripalli
- DEVCOM Army Research Laboratory: Maggie Wigness, Philip Osteen, Tim Overbye, Christian Ellis, Long Quang
The Great Outdoors Dataset is a comprehensive multi-modal resource aimed at advancing autonomous navigation research in challenging off-road environments. Collected with an unmanned ground vehicle (UGV) designed for unstructured terrain, the dataset offers a rich combination of sensor data to support robust and safe navigation. The sensor setup includes a 64-channel LiDAR for detailed 3D point cloud generation, multiple RGB cameras for high-resolution visual capture, and a thermal camera for infrared imaging in low-visibility or night-time conditions. The dataset also includes data from an inertial navigation system (INS) that provides accurate motion and orientation measurements, a 2D mmWave radar for enhanced perception in adverse weather, and an RTK GPS system for precise geolocation.

The dataset places a strong emphasis on semantic scene understanding, addressing a gap in off-road autonomy research by offering multimodal data with annotated labels for 3D semantic segmentation. Unlike many existing datasets that focus on urban environments, it is tailored specifically to off-road applications, providing a crucial resource for developing advanced machine learning models and sensor fusion techniques. Building on the foundation of RELLIS-3D, it is designed to push the boundaries of autonomous navigation in unstructured environments, enabling algorithms that can effectively perceive and navigate the complex dynamics of off-road settings.
- 64-channel LiDAR: Ouster OS1
- 3 RGB Cameras: Basler acA1920-50gc + Edmund Optics 16mm/F1.8 86-571
- Thermal Camera: FLIR Boson 640
- Inertial Navigation System (IMU/GPS): MicroStrain 3DM-GX5-AHRS
- 2D mmWave radar: Navtech CTS350-X
- RTK GPS: SparkFun RTK Facet
3D scan of the sensor setup (Download)
```
The Great Outdoors Dataset
├── pt_test.lst
├── pt_val.lst
├── pt_train.lst
├── 00000
│   ├── os1_cloud_node_kitti_bin/       -- ".bin" files with Ouster 64-channel point clouds
│   ├── nav_radar_node/                 -- radar polar images
│   ├── pylon_camera_node/              -- ".png" files from the color cameras
│   ├── pylon_camera_node_label_color/  -- color-format labels for the color images
│   ├── pylon_camera_node_label_id/     -- ID-format labels for the color images
│   ├── lwir_camera_node/               -- ".png" files from the thermal camera
│   ├── lwir_camera_node_label_color/   -- color-format labels for the thermal images
│   ├── lwir_camera_node_label_id/      -- ID-format labels for the thermal images
│   └── poses.txt                       -- poses of every scan
```
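The LiDAR scans are distributed in the Semantic-KITTI ".bin" format noted in the downloads below. The following is a minimal loading sketch, assuming each ".bin" file stores an N×4 float32 array (x, y, z, intensity) and each line of poses.txt is a flattened 3×4 KITTI-style transform; the file names used here are hypothetical examples.

```python
# Minimal sketch for loading one scan and the pose file.
# Assumes Semantic-KITTI-style ".bin" scans and KITTI-style poses.txt.
import numpy as np

def load_scan(bin_path):
    """Read an Ouster scan stored as an N x 4 float32 array (x, y, z, intensity)."""
    scan = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)
    return scan[:, :3], scan[:, 3]

def load_poses(pose_path):
    """Read poses.txt; each row (12 values) becomes a 4x4 homogeneous transform."""
    rows = np.loadtxt(pose_path).reshape(-1, 3, 4)
    poses = np.tile(np.eye(4), (rows.shape[0], 1, 1))
    poses[:, :3, :4] = rows
    return poses

# Example paths (hypothetical file names inside sequence 00000).
points, intensity = load_scan("00000/os1_cloud_node_kitti_bin/000000.bin")
poses = load_poses("00000/poses.txt")
print(points.shape, poses.shape)
```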
To provide multi-modal data for enhancing autonomous off-road navigation, we developed an ontology of object and terrain classes that extends the foundation of the RELLIS-3D dataset, while incorporating additional terrain and object categories specific to our dataset. Notably, our sequences introduce new classes such as gravel and mulch, which were absent in RELLIS-3D. Overall, the dataset encompasses 22 distinct classes, including trees, grass, dirt, sky, gravel, bush, mulch, water, poles, fences, persons, buildings, objects, vehicles, barriers, mud, concrete, puddles, rubble, asphalt, and a void class. This expanded ontology provides a more comprehensive understanding of off-road environments, offering enriched data for advanced semantic segmentation and improved performance in challenging, unstructured terrains.
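The per-pixel annotations are distributed as ID-format images (the *_label_id directories above). The sketch below, assuming single-channel ID images, shows how those IDs could be remapped to contiguous training indices for a segmentation model; the integer IDs used here are placeholders, and the authoritative ID/name/color assignments come from the ontology file linked below.

```python
# Sketch of remapping per-pixel class IDs in *_label_id images to contiguous
# training indices. The dataset IDs below are PLACEHOLDERS; use the ontology
# definition for the real id/name/color assignments.
import numpy as np
from PIL import Image

# Hypothetical subset of the ontology: {dataset_id: (class_name, train_id)}
ONTOLOGY = {
    0: ("void", 255),   # ignored during training
    1: ("grass", 0),
    2: ("tree", 1),
    3: ("gravel", 2),
    4: ("mulch", 3),
}

def to_train_ids(label_path):
    label = np.array(Image.open(label_path))           # H x W class IDs (assumed single-channel)
    train = np.full(label.shape, 255, dtype=np.uint8)  # default: ignore index
    for dataset_id, (_, train_id) in ONTOLOGY.items():
        train[label == dataset_id] = train_id
    return train

# Hypothetical example file name.
train_ids = to_train_ids("00000/pylon_camera_node_label_id/000000.png")
```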
Ontology Definition (Ontology)
Image with Annotation Examples (Download)
RGB Camera Full Images (Download)
RGB Camera Full Image Annotations Color Format (Download)
RGB Camera Full Image Annotations ID Format (Download)
Thermal Camera Full Images (Download)
Thermal Camera Full Image Annotations Color Format (Download)
Thermal Camera Full Image Annotations ID Format (Download)
Synced LiDAR Pointcloud Semantic-KITTI Format (Download)
Synced RADAR Polar Images (Download)
Camera Intrinsics (Download 2KB)
RGB Cameras to Ouster LiDAR Extrinsics (Download 3KB)
Boson Thermal to RGB Camera Extrinsics (Download 3KB)
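With the intrinsics and extrinsics above, LiDAR points can be projected into the RGB images with a standard pinhole model. The exact file format of the calibration downloads is not described here, so the matrices below are placeholders to be filled in from those files; the image size is the nominal resolution of the Basler cameras.

```python
# Sketch of projecting LiDAR points into an RGB image.
# K and T_cam_lidar are PLACEHOLDERS; substitute values from the calibration downloads.
import numpy as np

K = np.array([[1000.0, 0.0, 960.0],   # placeholder pinhole intrinsics
              [0.0, 1000.0, 600.0],
              [0.0, 0.0, 1.0]])
T_cam_lidar = np.eye(4)               # placeholder 4x4 transform: LiDAR frame -> camera frame

def project(points_lidar, K, T_cam_lidar, image_size=(1920, 1200)):
    """Project Nx3 LiDAR points into pixel coordinates, dropping points behind the camera."""
    homo = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    cam = (T_cam_lidar @ homo.T).T[:, :3]          # points in the camera frame
    cam = cam[cam[:, 2] > 0.1]                     # keep points in front of the camera
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                    # perspective division
    w, h = image_size
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv[valid], cam[valid, 2]                # pixel coordinates and depths

# Toy usage with random points; in practice pass a loaded scan.
uv, depth = project(np.random.uniform(-10, 10, size=(1000, 3)), K, T_cam_lidar)
```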
Data included in raw ROS bagfiles (a minimal Python example for reading them follows the table and download link below):
Topic Name | Message Type | Message Description |
---|---|---|
/Navtech/FFTData | nav_ross/HighPrecisionFFTData | Radar FFT data |
/lester/imu/data | sensor_msgs/Imu | Filtered IMU data from the Warthog's embedded IMU |
/lester/imu/data_raw | sensor_msgs/Imu | Raw IMU data from the Warthog's embedded IMU |
/img_node/intensity_image | sensor_msgs/Image | Intensity image generated by the Ouster LiDAR |
/lester/imu/mag | sensor_msgs/MagneticField | Raw magnetic field data from the Warthog's embedded IMU |
/lester/lidar_points | sensor_msgs/PointCloud2 | Point cloud data from the Ouster LiDAR |
/lester/ouster_center/imu | sensor_msgs/Imu | Raw IMU data from the Ouster LiDAR's embedded IMU |
/lester/lidar_points_center | sensor_msgs/PointCloud2 | Centered point cloud data from the Ouster LiDAR |
/lester/lwir_front/camera_info | sensor_msgs/CameraInfo | Intrinsics of thermal camera |
/lester/lwir_front/image_rect/compressed | sensor_msgs/CompressedImage | Rectified image from the thermal camera |
/lester/stereo_left/camera_info | sensor_msgs/CameraInfo | Intrinsics of the left RGB camera |
/lester/stereo_left/image_rect_color/compressed | sensor_msgs/CompressedImage | Image from left RGB camera |
/lester/stereo_right/camera_info | sensor_msgs/CameraInfo | Intrinsics of the right RGB camera |
/lester/stereo_right/image_rect_color/compressed | sensor_msgs/CompressedImage | Image from right RGB camera |
/lester/rear_center/camera_info | sensor_msgs/CameraInfo | Intrinsics of the rear RGB camera |
/lester/rear_center/image_rect_color/compressed | sensor_msgs/CompressedImage | Image from rear RGB camera |
/lester/ublox/fix | sensor_msgs/NavSatFix | GNSS fix from the u-blox RTK GPS receiver |
/lester/right_drive/status/battery_current | std_msgs/Float64 | Battery current of the right drive |
/lester/right_drive/status/battery_voltage | std_msgs/Float64 | Battery voltage of the right drive |
/lester/left_drive/status/battery_current | std_msgs/Float64 | Battery current of the left drive |
/lester/left_drive/status/battery_voltage | std_msgs/Float64 | Battery voltage of the left drive |
/lester/rc_teleop/cmd_vel | geometry_msgs/Twist | RC teleoperation velocity commands to the Warthog |
/tf | tf2_msgs/TFMessage | Dynamic transforms between coordinate frames |
/tf_static | tf2_msgs/TFMessage | Static transforms between coordinate frames |
Link to the raw ROS bagfiles (Download)
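The following is a minimal sketch for iterating over the topics above with the ROS 1 rosbag Python API; the bag file name is a placeholder for a downloaded sequence.

```python
# Sketch of reading selected topics from a downloaded bag (ROS 1 rosbag API).
import rosbag

bag = rosbag.Bag("great_outdoors_sequence_00000.bag")  # placeholder file name
topics = [
    "/lester/lidar_points",
    "/lester/stereo_left/image_rect_color/compressed",
    "/lester/ublox/fix",
]
for topic, msg, t in bag.read_messages(topics=topics):
    if topic == "/lester/ublox/fix":
        # NavSatFix messages carry latitude/longitude in degrees.
        print(t.to_sec(), msg.latitude, msg.longitude)
bag.close()
```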
All datasets and code on this page are copyrighted by us and published under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License.
A RUGD Dataset for Autonomous Navigation and Visual Perception in Unstructured Outdoor Environments