Autonomous navigation enables a mobile robot to localize and navigate itself in an unknown environment. The existing solution mounts a 2D LiDAR on top of the robot for mapping and obstacle avoidance, but the LiDAR is at times unable to detect objects because of their size and color properties.
The main challenge for these mobile robots is that they must navigate autonomously in an unknown environment, which involves localization and mapping using SLAM (Simultaneous Localization and Mapping). LiDAR (Light Detection and Ranging) is the most common sensor used to gather environmental data for SLAM.
The Navigation Stack can be applied to both differential-drive and omnidirectional robots. It requires a laser sensor mounted on the mobile base, which is used for localization and mapping. The Navigation Stack cannot be applied to arbitrarily shaped robots, as it was developed for square- or circular-shaped robots.
The vision system uses a hybrid neural network known as YOLOv3 [5]. YOLOv3 is a deep learning object detection algorithm that recognizes specific objects in images. The network is first trained on the objects to be detected by tuning its weights, and it is then deployed. YOLO is much faster than comparable networks. The network is shown in Fig. 2.
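The detector's raw output must be filtered before use. The following sketch illustrates the standard post-processing applied to YOLOv3-style detections, confidence thresholding followed by non-maximum suppression (NMS); the box format, threshold values, and helper names are illustrative assumptions, not the system's actual code.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter) if inter else 0.0

def nms(detections, conf_thresh=0.5, iou_thresh=0.4):
    """detections: list of (box, confidence); returns surviving detections.

    Boxes below conf_thresh are dropped; of heavily overlapping boxes,
    only the most confident one is kept."""
    kept = []
    candidates = sorted((d for d in detections if d[1] >= conf_thresh),
                        key=lambda d: d[1], reverse=True)
    for box, conf in candidates:
        if all(iou(box, k[0]) < iou_thresh for k in kept):
            kept.append((box, conf))
    return kept
```

Each surviving detection then corresponds to one physical object that the pseudo-laser pipeline can report.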
For this robot simulation, the Hokuyo laser sensor plugin is used to obtain laser scan values from the Gazebo environment. These values are published as LiDAR scan sensor messages within the ROS environment. The Hokuyo node (UTM-04G) can scan up to 270° with a range of up to 4 m. This sensor is used for localization and mapping, as shown in Figure 5.
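A scan message of this kind is essentially an array of ranges plus angular parameters. The sketch below shows how such a scan can be converted to Cartesian points in the sensor frame; the field names mirror the ROS sensor_msgs/LaserScan message, while the sample values are illustrative.

```python
import math

def scan_to_points(ranges, angle_min, angle_increment, range_max=4.0):
    """Convert a LaserScan-style range array into (x, y) points in the
    sensor frame, discarding invalid and out-of-range readings."""
    points = []
    for i, r in enumerate(ranges):
        if 0.0 < r <= range_max:  # drop no-return / beyond-range beams
            theta = angle_min + i * angle_increment
            points.append((r * math.cos(theta), r * math.sin(theta)))
    return points
```

This is the same geometric interpretation the mapping and obstacle-avoidance layers apply to every incoming scan.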
The Mecanum wheel, also known as the Swedish wheel or Ilon wheel, uses wheels of a complex design that enable the robot, as shown in Fig. 6, to move in any direction without changing its orientation. A mobile robot with ordinary wheels must turn or steer to change its direction of motion, but a Mecanum robot can move in any direction directly.
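This omnidirectional motion follows from the standard inverse kinematics of an X-configuration Mecanum base: each wheel speed is a linear combination of the commanded body velocities. A minimal sketch, with illustrative geometry values rather than this robot's actual dimensions:

```python
def mecanum_wheel_speeds(vx, vy, wz, r=0.05, l=0.15, w=0.15):
    """Inverse kinematics for an X-configuration Mecanum base.

    vx, vy: body linear velocity (m/s); wz: body angular velocity (rad/s);
    r: wheel radius, l/w: half wheelbase/track (m, illustrative values).
    Returns (front_left, front_right, rear_left, rear_right) in rad/s."""
    k = l + w
    fl = (vx - vy - k * wz) / r
    fr = (vx + vy + k * wz) / r
    rl = (vx + vy - k * wz) / r
    rr = (vx - vy + k * wz) / r
    return fl, fr, rl, rr
```

Setting vx = wz = 0 and vy > 0 yields counter-rotating wheel pairs, which is how the base translates sideways without changing heading.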
The mapping algorithm uses a Rao-Blackwellized particle filter to estimate the pose of the robot on the map. In this approach, the algorithm combines laser range data with odometry values for a more accurate position estimate. For a feature-rich indoor environment, the particle-filter-based approach can estimate the pose of the robot reliably.
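The core predict-weight-resample loop behind this filter can be illustrated in one dimension. A real Rao-Blackwellized filter additionally carries a map hypothesis per particle; the sketch below, with illustrative noise values, shows only the pose-estimation part: particles are moved by noisy odometry, weighted by how well their predicted range to a known wall matches the measured range, and then resampled.

```python
import math
import random

def pf_step(particles, odom, measured_range, wall_x, noise=0.05):
    """One particle-filter update for a 1-D robot facing a wall at wall_x.

    particles: list of candidate robot positions; odom: odometry increment;
    measured_range: laser reading toward the wall. Returns resampled particles."""
    # Predict: apply the odometry increment plus motion noise to each particle.
    moved = [p + odom + random.gauss(0.0, noise) for p in particles]
    # Weight: Gaussian likelihood of the laser reading given each particle.
    weights = [math.exp(-((wall_x - p) - measured_range) ** 2 / (2 * noise ** 2))
               for p in moved]
    total = sum(weights) or 1.0
    weights = [w_ / total for w_ in weights]
    # Resample: draw particles in proportion to their weights.
    return random.choices(moved, weights=weights, k=len(moved))
```

After a few such updates the particle cloud concentrates around the true pose, which is the behavior the laser-plus-odometry fusion in the mapping algorithm relies on.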
The Navigation Stack acquires odometry values and laser range values from the encoders and laser sensors, respectively. After computation, it publishes velocity values on the command velocity topic through the move_base node, as shown in Fig. 9.
The ROS Navigation Stack receives the laser scan data through sensor messages. These messages are produced by the LiDAR device drivers, and they can also be produced manually by simple Python programs. Objects that cannot be detected by the LiDAR are detected by the image sensor, so whenever the vision system detects an object, a corresponding pseudo laser scan is generated.
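The pseudo-laser idea can be sketched as follows: given an object reported by the vision system at a known bearing and distance, the corresponding sector of an otherwise empty LaserScan-style range array is filled with that distance. The beam count, angular limits, and sector width below are illustrative assumptions, not the actual node's parameters.

```python
import math

def pseudo_scan(bearing, distance, half_width=math.radians(3.5),
                n_beams=270, angle_min=-math.radians(135),
                angle_increment=math.radians(1), no_return=float('inf')):
    """Build a pseudo laser scan containing a single detected object.

    bearing: object direction in the sensor frame (rad); distance: range (m).
    Beams within half_width of the bearing report the object; all other
    beams report no_return, mimicking an empty LaserScan."""
    ranges = [no_return] * n_beams
    for i in range(n_beams):
        theta = angle_min + i * angle_increment
        if abs(theta - bearing) <= half_width:
            ranges[i] = distance  # this beam hits the detected object
    return ranges
```

In a real node this array would be wrapped in a sensor_msgs/LaserScan message and published for the merger described next.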
To merge the pseudo laser scan and the actual laser scan, the ira laser scan merger is used. The ira laser merger is a ROS package that merges multiple 2D laser scans into a single one; this is very useful when a robot is equipped with multiple single-plane laser scanners. The output scan appears as if it were generated by a single scanner.
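For scans that are already aligned beam-for-beam, the merge step reduces to keeping the nearest return on each beam. The real package also transforms both scans into a common frame first; that step is omitted in this minimal sketch.

```python
def merge_scans(scan_a, scan_b):
    """Merge two aligned LaserScan-style range arrays: each output beam
    carries the nearest return (inf means no return on that beam)."""
    return [min(a, b) for a, b in zip(scan_a, scan_b)]
```

Taking the minimum per beam means an obstacle seen by either the LiDAR or the vision-derived pseudo scan appears in the merged scan, which is exactly the coverage the vision system is meant to add.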
In this simulation, an object, a fire hydrant, is placed in front of the robot. Because such objects are small, the LiDAR is unable to detect them, but the vision system detects them, as shown in Fig. 12, and then sends a message to the pseudo laser node, which produces a laser scan at the distance where the object is located.