Topological Map Generation Using Hardware Robot
ENG4701/FIT4701: Final Year Project - Progress Report
Author(s): Trisha Lim Ruo Jing (32797389)
Student ID(s): 32797389
Supervisor(s): Dr. Patrick Ho
Date of Submission: 18/10/2024
Project type: Research
Contents
- Introduction
- Aims and Objectives
- Literature Review
  - Introduction
  - Overview of Robot Navigation
    - Localisation and Positioning of Robot
    - Real Time Data Processing and Path Adjustment
    - Obstacle Avoidance in Dynamic Environments
  - Robot Sensing Technologies
  - Overview of Robot Operating System (ROS)
    - ROS Framework Overview
    - ROS-Enabled Hardware Integration
    - ROS-Powered Software Architecture
    - ROS-Based Simulation and Visualisation
  - Conclusion
- Methodology and Methods
  - Robot Design and Construction
  - Software Development and Integration
  - Experimental Setup and Testing
- Project Progress
  - Preliminary Results and Discussion
    - Topological Map Generation
    - Path Planning
    - Robot Construction
    - Inertial Measurement Unit (IMU) Calculation and Results
  - Limitations and Future Work
- Scope, Project Plan & Timeline
- Risk Management Plan
- Sustainability Plan
  - Alignment with UN Sustainable Development Goals (SDGs)
  - Triple Bottom Line
  - Likely Consequences and Proactive Measures
  - Whole System-Based Design and Adaptability
  - Resource Origin and Efficiency
  - Pollution and Recycling
  - Behavioural Impact and Externalities
  - Stakeholders
  - Conclusion
- References
- Appendices
1. Introduction
In the field of robotics and autonomous navigation, creating accurate and efficient maps remains a key challenge. While Simultaneous Localisation and Mapping (SLAM) techniques have made tremendous progress, there is still work to be done in producing topological maps that represent higher-level spatial linkages. Topological maps provide a unique perspective on the environment, emphasising links and connectivity between spaces rather than exact geometric details. Unlike traditional maps that highlight precise distances and directions, a topological map prioritises the connections between places and features. It simplifies details to emphasise how points are linked, making such maps valuable for tasks like navigation, path planning, and scene interpretation.
Though several algorithms exist for topological map generation, their implementation often relies on simulated environments or post-processed sensor data. However, several challenges arise when implementing these theoretical developments in practical settings. One major issue is sensor noise, which can cause data distortion and make accurate map generation difficult. Additionally, real-world environments often contain dynamic obstacles and changing conditions that challenge the static assumptions of many algorithms. Varying lighting conditions, especially in indoor-outdoor transitions, can further complicate sensor readings. Moreover, topological mapping algorithms frequently struggle with scaling to larger environments and maintaining accuracy in real-time navigation, as they require significant computational resources to process complex data streams from multiple sensors. This project aims to close the gap between theory and practice by deploying a physical robot to generate topological maps in real-world scenarios.
This research draws on the topological map generation methods developed by past final-year students to provide a useful abstraction of the world. The project will involve building a robot with the necessary sensors, integrating the topological mapping algorithm, and validating the algorithm's performance in a variety of test scenarios in order to demonstrate the viability of the topological map generation process. This proposal outlines the research, design, and implementation steps, addressing potential risks and management strategies to ensure successful project completion. The project will focus on implementing the existing algorithm through real-world experimentation, taking into account the unique challenges posed by sensor noise and dynamic environments. The outcomes of this project will help to advance the field of robotics, ultimately leading to more robust and reliable topological mapping solutions.
2. Aims and Objectives
2.1 Hypothesis
The hardware robot, equipped with LiDAR and IMU sensors, will be able to generate a topological map with the map generation algorithm and autonomously navigate according to the nodes on the map with a path planning algorithm. The hardware robot should also generate a map of its travelled path; this map should closely match the fed-in topological map when tested with map overlapping.
2.2 Aims
To design, develop, and evaluate a hardware robot capable of autonomously generating and navigating topological maps of real-world environments derived from conventional maps, demonstrating the practical application of the topological mapping algorithm to enhance the efficiency and accuracy of robotic navigation.
2.3 Objectives
- To design and construct a mechanical robot equipped with LiDAR, IMU, and wheel encoders that can accurately collect and process data for mapping and localisation, achieving an accuracy rate of 85% or higher in sensor data acquisition. The robot should be capable of navigating within a predefined indoor environment following a generated topological map along the best route found by the path planning algorithm.
- To process the sensor data and leverage computational capabilities of the robot for real-time performance to generate a map of the robot's actual path, ensuring that map generation and updates occur within set intervals, with a 90% match between the expected and real-time generated path.
- To compare the actual path map generated by the robot with the fed-in topological map generated by the existing algorithm, achieving at least an 85% overlap in key features such as node positions and connectivity, using specific metrics like node matching accuracy and edge consistency.
3. Literature Review
3.1 Introduction
Robotics has come a long way in the last few years, especially in autonomous navigation and mapping. Enabling robots to explore complicated environments accurately and effectively remains a major challenge in this field [1]. Traditional approaches, like metric mapping, frequently face issues of scalability and adaptability, particularly in dynamic or large-scale environments. To address these challenges, researchers have increasingly turned to topological mapping as a workable alternative [2]. Topological maps describe important locations and the paths that connect them, abstracting the surrounding landscape into a graph of nodes and edges [3]. This abstraction improves the robot's performance in a variety of environments and streamlines the navigation challenge.
This literature review aims to provide a thorough summary of the current state of topological mapping and its application in robotic navigation. It starts by going over the fundamental ideas and several topological mapping algorithms. Subsequently, it delves into the sensory technologies that make precise perception and mapping of the environment possible. The review then examines the architectural and design factors involved in creating a robot capable of implementing these algorithms. It also discusses the difficulties in integrating software and the need for real-time processing necessary for efficient navigation. The review concludes with a discussion on testing methodologies and the potential challenges and limitations encountered in deploying such systems.
3.2 Overview of Robot Navigation
Effective robot navigation is essential for enabling robots to move autonomously and interact with their environment safely. In order to do this, robots must become proficient in a number of critical navigational skills, which are discussed in this section. These elements consist of obstacle avoidance in dynamic scenarios, real-time data processing and path adjustment, localisation and positioning [4]. Gaining a comprehensive understanding of these elements is essential to create reliable autonomous systems that can navigate complex interior environments.
3.2.1 Localisation and Positioning of Robot
One of the main challenges in indoor robot navigation is finding the exact location of the robot. Indoor environments do not offer solutions as straightforward as those available outdoors, where GPS devices provide accurate positional data. This constraint necessitates the development of complex sensor fusion techniques and visual perception tools [5]. High-level sensor fusion is a technique used to track the position of the robot in the environment and generate precise maps using data from several sources, such as light detection and ranging (LiDAR), cameras, and inertial measurement units (IMUs) [6]. An example flowchart of a LiDAR-IMU and wheel-odometry-based autonomous vehicle localisation system is shown in Figure 3.1.
LiDAR is particularly advantageous for indoor environments due to its ability to measure distances to surrounding objects with high precision, creating detailed maps of the environment. However, while LiDAR excels at capturing spatial information, it is limited in detecting dynamic changes or tracking the robot's orientation over time. For this reason, combining LiDAR with an IMU is essential. The IMU provides data on the robot's orientation and movement dynamics, which helps to compensate for LiDAR's inability to track rotational motion. Together, these sensors offer a more comprehensive understanding of the robot's position by combining spatial mapping with real-time movement tracking. In addition, odometry calculation uses data from the DC motor encoders and the IMU to estimate the robot's position over time. While odometry can provide useful information, it may accumulate errors, which is why it is often combined with other methods to improve accuracy [5].
Although visual perception systems, such as cameras, can be used to enhance localisation by recognising landmarks and features, they are not utilised in this project. The fed-in topological map does not contain information about any landmarks or features. Moreover, the complexity of extracting reliable positional data from cameras, particularly in varying lighting conditions and when dealing with sensor noise, makes them less suitable for this specific application. Instead, the focus will be on fusing data from LiDAR and the IMU, which are more reliable and efficient for the intended real-time localisation tasks in this project.
3.2.2 Real Time Data Processing and Path Adjustment
Extending the concept of precise localisation, navigating in a dynamic environment requires a robot that constantly adjusts its path in response to the surrounding conditions. Finding a collision-free route that also keeps the journey as fast as possible is a challenging task. The robot must process data instantaneously and respond quickly to any new obstacles or changes in order to travel safely. Processing data in real time and adjusting the path accordingly presents substantial computational challenges, as the robot must balance the accuracy of its navigational decisions against the need for rapid processing. Simultaneous localisation and mapping (SLAM) comes in handy in this case [8]. This technique generates a map of an unknown environment while simultaneously keeping track of the robot's location within it [6][9]. Crafting effective solutions means creating smart algorithms and using powerful computing resources to ensure the robot can navigate swiftly and accurately in real time.
To address real-time path adjustment, the A* algorithm is widely utilised for its effectiveness in finding optimal paths; an example of a route planned by the A* algorithm is illustrated in Figure 3.2 [10]. A* is a best-first search method that determines the shortest path by taking into account both the estimated distance to the goal (the h-cost, or heuristic) and the distance already travelled (the g-cost). Its heuristic-based methodology guarantees effective navigation, even in intricate settings. This capability is further enhanced by A* variants like D*-Lite and Dynamic A*, which allow for constant path changes as the robot encounters new obstacles [11]. These variants enable the robot to replan its route without having to recalculate the entire course from scratch, which is especially helpful in dynamic surroundings. To manage the computational demands of real-time systems, algorithms like Hybrid A* and Theta* have been developed to reduce complexity while maintaining accurate path planning. These methods, combined with SLAM, allow the robot to map unknown terrains and localise itself, further enhancing the accuracy and speed of real-time navigation [12]. The A* algorithm and its variants, supported by powerful computing resources and smart algorithms, enable the robot to navigate safely and efficiently in real time.
3.2.3 Obstacle Avoidance in Dynamic Environments
Beyond knowing the robot's location and adjusting its path, indoor spaces are often chaotic and dynamic, with moving obstacles like people, making navigation and obstacle avoidance a tough job for autonomous robots. Traditional obstacle detection methods in indoor environments face significant challenges: complicated shapes that 2D laser rangefinders find difficult to identify consistently, difficulty representing obstacles at different heights, and inefficient detection of and navigation around dynamic obstacles. To solve these issues, the robot needs advanced path planning and obstacle avoidance algorithms, integrating multiple sensors to manoeuvre through such complex environments [14]. Proposed solutions include using path optimisation techniques to reduce journey times and improve navigation efficacy, and fusing data from depth cameras and laser rangefinders (LRFs) to improve obstacle awareness [15]. These improvements have been validated through simulations and real-world experiments, greatly increasing the safety and dependability of mobile robot navigation indoors.
3.3 Robot Sensing Technologies
Various sensing technologies are used to provide all of the aforementioned navigation capabilities. By utilising these sensors, robots can accurately perceive and interact with their environment, overcoming the inherent challenges of indoor navigation. Commonly mentioned sensing technologies include laser range finders, encoders, IMUs, and 2D LiDAR, each of which contributes differently to the overall navigation approach [16][17]. 2D LiDAR provides high-resolution distance measurements to surrounding objects by scanning a single plane with laser light. 3D LiDAR improves on 2D LiDAR by employing several laser beams simultaneously to create a three-dimensional image, enabling more accurate mapping and localisation [18][19].
To measure the robot's orientation and acceleration, inertial measurement units are used [16]. Accelerometers and gyroscopes are the standard components of IMUs; some also include magnetometers. These components allow an IMU to measure the robot's linear acceleration and angular velocity, as well as the strength and direction of the surrounding magnetic field [20]. To produce a more precise and comprehensive picture of the motion and orientation of the device, the data from the separate sensors is frequently merged using methods like Kalman filtering [20][21]. IMUs do not rely on external signals like GPS, which makes them suitable for indoor environments. Additionally, they offer high-frequency reading and data-gathering capabilities, enabling the capture of rapid movements and changes in orientation [21]. Motor encoders can be used to measure the robot's wheel rotations. This data, combined with IMU readings, is used for odometry-based motion estimation, allowing the robot to track its position and movement accurately [16].
3.4 Overview of Robot Operating System (ROS)
With the sensing technologies that provide the raw data for robot navigation explored, attention now turns to the software framework that orchestrates these diverse components: the Robot Operating System (ROS).
3.4.1 ROS Framework Overview
ROS is a widely adopted, open-source framework that provides a flexible and modular architecture for building complex robotic systems, as shown in Figure 3.3. It offers a collection of tools, libraries, and conventions that simplify the task of building and programming robots. The framework allows developers to select the most appropriate programming language for their projects, supporting a number of languages, mainly Python and C++. Due to the modularity of ROS, it is possible to integrate various components and divide functions into reusable packages, which encourages code reuse and makes it simpler to maintain and upgrade the system's separate components. The framework's communication mechanisms, such as topics and services, enable seamless data exchange and interaction between different components of the robot system. Even though ROS has several benefits, it is vital to take into account its drawbacks; its high resource requirements may prevent it from being used in low-power devices.
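To make the topic-based communication concrete, below is a minimal sketch of a ROS 1 publisher node using the rospy Python client library; the /chatter topic name and 10 Hz rate are illustrative choices, not part of this project's design.

```python
#!/usr/bin/env python
# Minimal ROS 1 publisher: one node sending String messages on a topic.
# The '/chatter' topic name and 10 Hz rate are illustrative choices.
import rospy
from std_msgs.msg import String

def talker():
    rospy.init_node('talker')                      # register the node with the ROS master
    pub = rospy.Publisher('/chatter', String, queue_size=10)
    rate = rospy.Rate(10)                          # publish at 10 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data='hello'))          # any node subscribing to /chatter receives this
        rate.sleep()

if __name__ == '__main__':
    talker()
```

A matching subscriber node simply registers a callback on the same topic with rospy.Subscriber, which is how the sensor-processing nodes later in this report exchange data.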
3.4.2 ROS-Enabled Hardware Integration
One of the key strengths of ROS lies in its ability to facilitate hardware integration. Through standardised interfaces, ROS excels at easing the integration of diverse hardware components, including actuators, controllers, and sensors. This integration allows for seamless communication between hardware and software, enabling real-time data processing and control [23]. However, due to the variety of hardware and the requirement for compatible drivers, integration can occasionally be difficult. To overcome these limitations, the ROS community regularly creates and updates a vast array of hardware interfaces and drivers, making it possible to support new devices quickly. Local processing is also implemented for lower latency and improved response times.
3.4.3 ROS-Powered Software Architecture
Beyond hardware integration, ROS also provides a robust software architecture that underpins the entire robotic system. ROS's software architecture is designed to facilitate distributed computing, which enables several nodes on various machines to communicate with one another over a network [24][25]. With this design, distinct nodes can manage diverse functions like sensing, planning, and control. If one node fails, the others can continue to operate, ensuring the system's robustness. Despite these advantages, distributed computing comes with its own set of challenges: controlling inter-node communication can add complexity and latency. To address these issues, ROS offers tools for monitoring and troubleshooting node interactions, as well as optimising data flow through techniques like message filtering and compression.
3.4.4 ROS-Based Simulation and Visualisation
Apart from its real-world capabilities, ROS also offers powerful simulation tools such as Gazebo, which is used to test and visualise robots in a virtual environment before deploying them in the real world [23]. An example of a ROS-based SLAM simulation in Gazebo for an indoor environment 3D model is provided in Figure 3.4, where the robot has been moving from right to left and back [26]. This capability is essential for guaranteeing safety and validating algorithms, and it significantly reduces the risk of hardware damage while accelerating the development process. However, it is crucial to recognise that simulations are not an exact duplicate of reality. Discrepancies between simulated and real-world performance can lead to challenges when transitioning from simulation to actual deployment. To lessen this, developers are urged to employ realistic models and scenarios in simulations and to test and improve their systems iteratively in real-world settings.
3.5 Conclusion
In summary, this literature review highlights the need to tackle the challenges involved in real-world topological map generation for autonomous robots. Traditional mapping methods often fail in dynamic or large environments, leading to the exploration of topological mapping as a more reliable option. This review covers various algorithms, sensor technologies, and the key role of the Robot Operating System (ROS), which provides software flexibility, hardware integration, and simulation features. Although challenges like sensor limitations and real-time performance persist, solutions such as sensor fusion, adaptive algorithms, and careful system design offer promising ways forward. By refining the topological mapping algorithm introduced by Chew Jing Wei and using the capabilities of ROS, this project aims to improve autonomous robot navigation using topological maps in real-world situations.
4. Methodology and Methods
There are several proposed methods for constructing the hardware robot:
- Robot hardware design, construction and sensor integration
- Algorithm integration
- ROS implementation
- Simulate and visualise the performance of the robot
- Test and validate the robot performance with the existing topological map
4.1 Robot Design and Construction
4.1.1 Mechanical Design
The mechanical design of the robot was focused on creating a stable and efficient platform, specifically optimised for navigating smooth, hard indoor surfaces. Key priorities in the design included balance, durability, and manoeuvrability. The robot features a three-tier chassis, with the lowest deck housing components such as the battery, DC motors, and motor driver. The middle deck houses the Raspberry Pi and IMU sensor, while the upper deck is reserved for the LiDAR sensor. The chassis is designed to be lightweight yet strong enough to support all components, using materials such as acrylic [27]. The layout ensures that all components, including batteries, motors, and sensors, are securely mounted with appropriate weight distribution, promoting stability. The robot is equipped with two wheels powered by bi-directional geared DC motors, and additional caster wheels are added to the front and rear for extra support without complicating the control system [28].
4.1.2 Electronic Design
The electronic design of the robot emphasizes a reliable power supply, efficient control, and accurate sensing systems. A lightweight, high-energy-density rechargeable lithium-polymer (LiPo) battery is used, with voltage regulators ensuring stable power delivery to all components [29].
In this design, a Raspberry Pi serves as the central processing unit of the robot, handling decision-making, sensor data fusion, and communication between the various components. The Raspberry Pi 4 Model B, with its quad-core 64-bit processor and up to 8GB of RAM, offers a cost-effective solution that balances performance and affordability for this application. An L298N motor driver will be used as an interface between the Raspberry Pi and the DC motors [25]. It provides a simplified interface for controlling motor speed and direction, which largely reduces the complexity of wiring and enables easy integration of the drive components. The Raspberry Pi also includes Wi-Fi for remote communication, allowing the robot to be monitored and controlled during testing and operation.
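As an illustration of how the Raspberry Pi might command the motors through the L298N, the sketch below drives one motor with the RPi.GPIO library, using PWM on the enable pin for speed control; the BCM pin numbers and PWM frequency are assumptions, not the project's actual wiring.

```python
# Sketch: driving one DC motor through an L298N from a Raspberry Pi.
# BCM pin numbers and the 1 kHz PWM frequency are assumed, not actual wiring.
import RPi.GPIO as GPIO

IN1, IN2, ENA = 23, 24, 18        # hypothetical direction pins and PWM enable pin

GPIO.setmode(GPIO.BCM)
GPIO.setup([IN1, IN2, ENA], GPIO.OUT)
pwm = GPIO.PWM(ENA, 1000)         # PWM on the enable pin sets motor speed
pwm.start(0)

def drive(speed):
    """speed in [-100, 100]: sign sets direction via IN1/IN2, magnitude sets duty cycle."""
    GPIO.output(IN1, speed > 0)
    GPIO.output(IN2, speed < 0)
    pwm.ChangeDutyCycle(min(abs(speed), 100))

drive(60)                         # forward at 60% duty cycle
# GPIO.cleanup() should be called on shutdown to release the pins.
```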
The hardware of the robot includes a sensor system, a critical part of any mobile robot. LiDAR, an acronym for Light Detection and Ranging, is one of the sensing methods used. The LiDAR sensor, mounted at a strategic height, measures the distance between the robot and obstacles by illuminating targets with laser light [29]. Odometry is essential for continuously estimating the robot's position and orientation, helping to track its path and guide it to the desired destination. To perform odometry estimation, the robot is equipped with encoders and an IMU (Inertial Measurement Unit). Encoders measure the rotational movement of the wheels, providing speed and distance information [30]. The IMU, which includes sensors like an accelerometer and gyroscope, measures acceleration and angular velocity [31]. Acceleration values from the accelerometer and angular velocity values from the gyroscope are kept separately; since orientation angles can be determined from both sensors, the two estimates can be calibrated against each other, as shown in Figure 4.2 [31].
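One common way to calibrate the two angle estimates against each other is a complementary filter, which trusts the gyroscope over short intervals and the accelerometer over longer ones. A minimal sketch for the roll angle follows; the blend factor of 0.98 and the 10 ms sample interval are assumed values.

```python
import math

ALPHA = 0.98   # assumed blend factor: weight given to the gyroscope estimate
DT = 0.01      # assumed 10 ms sample interval

def complementary_filter(prev_angle, gyro_rate, ay, az):
    """Fuse a gyroscope rate (rad/s) with an accelerometer tilt estimate (roll shown)."""
    accel_angle = math.atan2(ay, az)          # long-term reference from the gravity vector
    gyro_angle = prev_angle + gyro_rate * DT  # short-term estimate by integrating the rate
    return ALPHA * gyro_angle + (1 - ALPHA) * accel_angle
```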
4.1.3 Assembly and Prototyping
Assembly starts with the chassis: attaching the motors, wheels, and caster balls, and fitting all parts together as designed. Next, all electrical components, including the battery, Raspberry Pi, motor driver, and sensors, are carefully routed and connected. It is important to organise the wiring to avoid interference and ensure easy maintenance. To test and troubleshoot the initial design, a power-on test is performed to check that all components receive power and communicate with each other correctly. Any issues that arise during assembly are diagnosed and fixed, ensuring that all components are securely attached and functioning. Once the prototype is complete, the robot is prepared for the experimental setup and testing phase.
4.2 Software Development and Integration
4.2.1 Topological Mapping Algorithm Adaptation
This project will build upon the topological mapping algorithm developed by Chew Jing Wei, as detailed in their final year project report. The algorithm generates topological maps from noisy occupancy grid maps; an example result is shown in Figure 4.3 [32]. The algorithm excels at managing noisy occupancy grid maps and precisely extracting important topological features. It uses obstacle expansion techniques to reduce noise, a thinning algorithm to derive topological lines, and a transition-based method for precise node extraction, even when the lines are thicker than one pixel. Additionally, it includes pruning for semantic node identification and a distance-transform-based method for identifying nodes that represent distinct spaces [32]. The project will adapt this algorithm to incorporate data from the robot's LiDAR and IMU sensors, possibly using sensor fusion [33]. Furthermore, the algorithm will be optimised for real-time performance on the Raspberry Pi, with improvements made through code profiling and tuning of parameters such as noise reduction, obstacle expansion, and node extraction thresholds to ensure its effectiveness in real-world applications.
4.2.2 ROS Implementation
As the original algorithm has not been integrated into ROS, it will be encapsulated in a new ROS package, following the standard ROS package structure, including directories for source code, launch files, configuration, and documentation [34]. The algorithm's code will be refactored to follow ROS conventions, using ROS message types for data exchange and employing ROS services to interact with other nodes. The topological map generation algorithm will be implemented as a ROS node within the package, subscribing to sensor data topics to process input and publishing the resulting topological map. Launch files will then be created to conveniently start the necessary nodes and configure their parameters.
Interfacing pre-existing or custom-developed ROS nodes with the LiDAR and IMU sensors, gathering the raw sensor data, and publishing it to designated ROS topics ensures a standardised and accessible data stream for subsequent processing. Additionally, leveraging packages like Adaptive Monte Carlo Localisation (amcl) can help to estimate the robot's position within the generated map and continuously track the robot's location. ROS's visualisation tool, RViz, will provide a real-time view of the topological map, robot position, sensor data, and planned paths, helping to monitor performance, understand the robot's environment perception, and troubleshoot issues [34].
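A rough sketch of how such a mapping node might be structured is shown below; the /scan and /imu topic names follow common ROS conventions, the map is published as an OccupancyGrid for simplicity (a real implementation would likely define a custom topological-map message), and build_map() is a hypothetical placeholder for the mapping algorithm.

```python
#!/usr/bin/env python
# Skeleton of the topological-map node: subscribe to sensor topics, run the
# mapping algorithm, publish the result. build_map() is a hypothetical placeholder.
import rospy
from sensor_msgs.msg import LaserScan, Imu
from nav_msgs.msg import OccupancyGrid

class TopoMapNode:
    def __init__(self):
        rospy.init_node('topological_mapper')
        rospy.Subscriber('/scan', LaserScan, self.scan_cb)
        rospy.Subscriber('/imu', Imu, self.imu_cb)
        self.map_pub = rospy.Publisher('/topological_map', OccupancyGrid, queue_size=1)
        self.latest_scan = None

    def scan_cb(self, msg):
        self.latest_scan = msg            # cache the newest LiDAR scan

    def imu_cb(self, msg):
        pass                              # orientation data would be fused here

    def build_map(self, scan):
        return OccupancyGrid()            # placeholder: run the mapping algorithm here

    def spin(self):
        rate = rospy.Rate(1)              # regenerate and publish the map at 1 Hz
        while not rospy.is_shutdown():
            if self.latest_scan is not None:
                self.map_pub.publish(self.build_map(self.latest_scan))
            rate.sleep()

if __name__ == '__main__':
    TopoMapNode().spin()
```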
4.2.3 Path Planning Algorithm
The path planning methodology for this robot project involves multiple steps, combining map processing, path generation, and real-time goal point navigation with the topological mapping algorithm adapted beforehand. Initially, the environment is represented as a binary occupancy grid map, where obstacles and traversable areas are identified through image processing techniques like binarization and connected component labeling. This map is then further processed using a thinning algorithm to extract a topological skeleton of the traversable space, which ensures efficient navigation through the environment.
Each grid point on the map is represented as a Node object that holds essential attributes like its coordinates, movement cost, and heuristic distance to the goal. These nodes are linked to their neighbouring nodes, allowing the robot to traverse through them in the four cardinal directions [35]. Using the A* algorithm, the robot computes the shortest path by evaluating the total cost of each node [36]. This cost is a combination of the movement score and heuristic distance, ensuring the robot selects the most optimal path from the starting point to the goal. A function is designed to generate a 2D grid from the processed map, marking traversable areas and assigning start and goal points. The robot's navigation is managed through a ROS-based system that publishes goal points periodically. The robot moves from one goal point to the next, with its movements adjusted in real time based on feedback from a subscriber node that monitors its progress. This process ensures efficient, dynamic path planning while validating the robot's actual path against the precomputed optimal route.
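The following is a condensed, illustrative sketch of the Node representation and A* search loop described above, assuming uniform step costs and a Manhattan-distance heuristic; the attribute names are illustrative rather than the project's exact code.

```python
import heapq

class Node:
    """One grid cell: coordinates, movement cost g, and links to its neighbours."""
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.g = float('inf')        # movement cost from the start (g-cost)
        self.neighbours = []         # linked nodes in the four cardinal directions

def heuristic(a, b):
    return abs(a.x - b.x) + abs(a.y - b.y)   # Manhattan distance to the goal (h-cost)

def a_star(start, goal):
    start.g = 0
    open_set = [(heuristic(start, goal), id(start), start)]   # ordered by f = g + h
    came_from = {}
    while open_set:
        _, _, current = heapq.heappop(open_set)
        if current is goal:                    # target reached: backtrack for the path
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]                  # waypoints from start to goal
        for nb in current.neighbours:
            tentative = current.g + 1          # uniform cost per grid step
            if tentative < nb.g:               # found a cheaper route to this neighbour
                nb.g = tentative
                came_from[nb] = current
                heapq.heappush(open_set, (tentative + heuristic(nb, goal), id(nb), nb))
    return None                                # no route exists
```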
4.3 Experimental Setup and Testing
4.3.1 Data Collection
The data collection process involves gathering essential sensor data to validate the robot's navigation and map generation capabilities. The robot will operate in a controlled indoor environment with predefined obstacles, continuously recording data from the LiDAR, IMU, and encoders. The LiDAR provides distance measurements to nearby objects, while the IMU captures the robot's orientation and motion dynamics. The encoders track the robot's wheel rotations, contributing to odometry calculations. All sensor data will be time-stamped and stored in ROS bags. The raw data will undergo preprocessing, including filtering and synchronisation to minimise noise. A low-pass filter will be used to eliminate high-frequency noise from the IMU, while a median filter will remove outliers and noise from the LiDAR data.
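As a sketch of this preprocessing step, the snippet below applies a Butterworth low-pass filter to an IMU channel and a median filter to LiDAR ranges using SciPy; the sample rate, cutoff frequency, and window size are assumed tuning values.

```python
import numpy as np
from scipy.signal import butter, filtfilt, medfilt

def lowpass_imu(samples, fs=100.0, cutoff=5.0):
    """Remove high-frequency noise from one IMU channel.
    fs (sample rate, Hz) and cutoff (Hz) are assumed tuning values."""
    b, a = butter(2, cutoff / (fs / 2.0))   # 2nd-order Butterworth, normalised cutoff
    return filtfilt(b, a, samples)          # zero-phase filtering avoids time lag

def median_lidar(ranges, window=5):
    """Suppress outlier returns in a LiDAR scan with a median filter (odd window)."""
    return medfilt(np.asarray(ranges, dtype=float), kernel_size=window)
```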
4.3.2 Data Fusion
Data fusion integrates multiple sources of data to produce more accurate, reliable, and comprehensive information than could be achieved by processing each data source individually; the process is shown in Figure 4.4. The pre-processed LiDAR data is utilised for scan matching, where key features are identified across scans to estimate the robot's movement by comparing consecutive scans. The alignment of two consecutive scans finds the optimal match between corresponding features in the two scans. Algorithms such as Iterative Closest Point (ICP) and Normal Distributions Transform (NDT) can be used to achieve this alignment [37]. The scan matching output estimates the robot's displacement and rotation in terms of distance and direction.
The Kalman filter is applied to the IMU data, combining it with encoder data to estimate the robot's position and orientation. The filter predicts the robot's next state based on its previous state and control inputs, and then corrects this prediction by incorporating new IMU measurements, reducing errors in position and orientation estimation [38]. After obtaining the odometry estimation from the Kalman filter, the result is fused with the result of scan matching [33]. The Kalman filter continuously updates this fused estimate by considering uncertainties in both the scan matching and odometry data [39].
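A one-dimensional sketch of this predict-correct cycle is given below: the odometry increment drives the prediction and a scan-matching measurement drives the correction. The process and measurement noise variances are assumed tuning values, and a real implementation would track the full 2D pose rather than a single coordinate.

```python
class Kalman1D:
    """Minimal 1-D Kalman filter: predict from odometry, correct with a measurement.
    Process noise q and measurement noise r are assumed tuning values."""
    def __init__(self, x0=0.0, p0=1.0, q=0.01, r=0.1):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def predict(self, odom_delta):
        self.x += odom_delta              # propagate the state with the odometry increment
        self.p += self.q                  # uncertainty grows at every prediction

    def correct(self, z):
        k = self.p / (self.p + self.r)    # Kalman gain: weigh prediction vs measurement
        self.x += k * (z - self.x)        # pull the estimate toward the measurement
        self.p *= (1.0 - k)               # uncertainty shrinks after the correction

kf = Kalman1D()
kf.predict(odom_delta=0.05)   # encoders report ~5 cm of travel
kf.correct(z=0.048)           # scan matching reports 4.8 cm; the two are fused
```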
4.3.3 Map Validation
Map validation involves comparing the robot-generated map with a reference map of the environment, which serves as the accuracy benchmark. The reference map, created using the existing algorithm, will be used for comparison. The generated map's key features, such as node positions and connectivity, will be analysed against the reference, using metrics including node matching accuracy, edge consistency, and the overall structure of the map. Any discrepancies between the generated map and the reference will be examined to identify and rectify potential sources of error in the mapping process.
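A sketch of how node matching accuracy might be computed is shown below: each generated node is matched to the nearest reference node within a tolerance. The tolerance value and the simple nearest-neighbour matching are assumptions for illustration.

```python
import math

def node_matching_accuracy(generated, reference, tol=0.2):
    """Fraction of generated (x, y) nodes lying within tol map units (assumed)
    of some reference node, via a simple nearest-neighbour match."""
    if not generated:
        return 0.0
    matched = sum(
        1 for gx, gy in generated
        if any(math.hypot(gx - rx, gy - ry) <= tol for rx, ry in reference)
    )
    return matched / len(generated)
```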
5. Project Progress
5.1 Preliminary Results and Discussion
5.1.1 Topological Map Generation
A topological map, shown in Figure 5.2, is generated from the raw map of the floor level in Figure 5.1. To convert the raw map to a topological map, it is first converted into .pgm format and then binarized using a thresholding technique. The binarized image is then cleaned of noise using erosion and dilation. Next, the binary map is inverted, and a thinning algorithm is applied to thin the free-space regions, reducing them to their skeletal form. A pruning algorithm removes redundant parts by examining the connectivity of the lines. After that, key nodes are identified: in Figure 5.2, the orange nodes are end nodes while the red nodes are junction nodes. A distance transform is applied to the map to calculate the distance of every free-space pixel to the nearest obstacle. The centroids of free-space areas are adjusted to align with the nearest topological lines, improving the accuracy of the map's representation. As the final step, the .pgm map and the generated topological map are overlaid to visualise the final result, as in Figure 5.2. Relevant data is also exported for use by the pathfinding algorithm.
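The pipeline above can be sketched with OpenCV and scikit-image as follows; the file name, threshold, and kernel size are assumed values, and the pruning and node-extraction stages are reduced to comments.

```python
import cv2
import numpy as np
from skimage.morphology import skeletonize

# Sketch of the map-processing pipeline; file name, threshold, and kernel are assumed.
raw = cv2.imread('floor_map.pgm', cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(raw, 127, 255, cv2.THRESH_BINARY)    # binarise the raw map

kernel = np.ones((3, 3), np.uint8)
clean = cv2.dilate(cv2.erode(binary, kernel), kernel)          # erosion then dilation denoising

free = clean == 255                    # boolean mask of free space
skeleton = skeletonize(free)           # thin free space to its skeletal form

# Distance of every free-space pixel to the nearest obstacle (distance transform).
dist = cv2.distanceTransform(clean, cv2.DIST_L2, 5)

# Pruning and node extraction (end nodes vs junction nodes) would follow here,
# e.g. by counting the skeleton neighbours of each skeleton pixel.
```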
5.1.2 Path Planning
The path planning algorithm works in conjunction with the A* algorithm to compute optimal paths for the robot to navigate through the topological map. First, the map data, including dimensions, traversable paths, and end nodes, is loaded from the results of topological map generation. A 2D grid is generated by the map creation function, where each cell is represented by a Node object. These nodes are linked to their neighbouring nodes, forming a network that enables pathfinding. For each pair of consecutive end nodes, the A* algorithm is applied to calculate the shortest path. The A* algorithm works by evaluating unexplored nodes based on a score combining the distance travelled and the estimated distance to the target (the heuristic). It selects the node with the lowest total score and updates its neighbours' scores until it reaches the target node. Once the target is found, the algorithm backtracks from the target to the start node to generate the optimal path. This path is then returned as a sequence of waypoints. The combination of these processes ensures the robot efficiently navigates the map by following the shortest possible route between key points.
5.1.3 Robot Construction
The robot's design, shown in Figure 5.3, covers both mechanical and electronic aspects. The chassis has been designed to provide a stable platform for indoor navigation, housing key components like the Raspberry Pi, motors, and sensors. The sensors are mounted for optimal data collection, and initial wiring for power and sensor integration has been completed. The LiDAR sensor is yet to be installed; it is planned for the top chassis layer, above the Raspberry Pi. Currently, sensor calibration is being finalised and motor functions are being calibrated for precise performance. Challenges encountered include minor delays in component delivery, but the construction is progressing as planned, with full assembly expected soon.
5.1.4 Inertial Measurement Unit (IMU) Calculation and Results
After initialisation, the BMI160 sensor is calibrated by collecting 1000 samples while the sensor is kept still. The offsets for the accelerometer and gyroscope data are calculated by averaging the readings over this period. Gyroscope offsets are converted from degrees per second to radians per second, while accelerometer offsets are adjusted to convert raw values to gravitational acceleration (g).
The accelerometer measures acceleration along the X, Y, and Z axes, while the gyroscope measures angular velocity around the same axes. Accelerometer readings are divided by 16384.0 to obtain acceleration in g, since the sensor outputs raw 16-bit values. Gyroscope readings are converted from degrees per second into radians per second using Eq. (1):

$\omega_{\mathrm{rad}} = \omega_{\mathrm{deg}} \times \dfrac{\pi}{180}$ (1)

The roll, pitch, and yaw are estimated from the gyroscope data through numerical integration over time. Roll, pitch, and yaw represent the rotational angles of the robot in space around the X, Y, and Z axes respectively. These angles are computed using the simple integration formula in Eq. (2):

$\mathrm{angle}[t] = \mathrm{angle}[t-1] + \omega_{\mathrm{rad}} \cdot \Delta t$ (2)

Here, $\Delta t$ is the time interval, set to 10 ms; the angles accumulate over time, giving the robot's orientation in 3D space. Velocity is estimated by integrating acceleration over time, and the robot's position is in turn estimated by integrating velocity over time using Eq. (3):

$\mathrm{position}[t] = \mathrm{position}[t-1] + \mathrm{velocity}[t] \cdot \Delta t$ (3)
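The calibration and integration steps of Eqs. (1) and (2) can be sketched as follows; read_sample() is a hypothetical placeholder for the BMI160 read routine, and the variable names are illustrative.

```python
import math

DT = 0.01               # 10 ms sample interval, as stated above
ACCEL_SCALE = 16384.0   # raw 16-bit counts per g

def calibrate(read_sample, n=1000):
    """Average n stationary samples to obtain accelerometer/gyroscope offsets.
    read_sample() is a hypothetical routine returning raw (ax, ay, az, gx, gy, gz)."""
    sums = [0.0] * 6
    for _ in range(n):
        for i, v in enumerate(read_sample()):
            sums[i] += v
    return [s / n for s in sums]

def to_g(raw_accel, offset):
    return (raw_accel - offset) / ACCEL_SCALE   # raw counts -> acceleration in g

def integrate_yaw(raw_gz_dps, gz_offset_dps, yaw):
    """One step of Eq. (1) and Eq. (2) for the yaw angle."""
    omega = (raw_gz_dps - gz_offset_dps) * math.pi / 180.0   # deg/s -> rad/s, Eq. (1)
    return yaw + omega * DT                                  # accumulate the angle, Eq. (2)
```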
The robot is tested by moving from node 1 to node 2 according to the path in Figure 5.2. All data, including the robot's orientation and acceleration, is sent out in real time over serial communication for further analysis.
Figure 5.4. Acceleration on Axes X, Y and Z
On the Z-axis (the green line in Figure 5.4), the acceleration Az shows mostly steady behaviour, indicating minimal vertical motion and constant elevation. This is because the sensor is attached firmly to the robot chassis, where it does not move up and down; the acceleration on the Z-axis is due only to gravitational acceleration. Ax and Ay show some fluctuations, suggesting lateral or forward movement, with the strongest values at points along the timeline indicating acceleration changes while moving straight.
Figure 5.5. Angular velocity on (a) Axis X, (b) Axis Y, (c) Axis Z
In Figure 5.5(c), Gz, the Z-axis angular velocity, shows significant spikes with large values compared to the other two axes, indicating rotation about the Z-axis. The significant peaks represent changes in orientation corresponding to rotational manoeuvres. Gx and Gy in Figure 5.5(a) and (b) show smaller variations, suggesting only small vibrations on the X and Y axes, consistent with the minimal pitch and roll in Figure 5.6. Yaw shows the most pronounced changes, confirming the robot's rotation during this period. The sharp rise and fall in yaw corresponds with the Gz spikes in the gyroscope data. Pitch and roll remain mostly steady in Figure 5.6, showing the robot did not tilt significantly during the movement thanks to the steady base with two extra caster wheels.
Figure 5.6. Pitch, roll and yaw
Focusing on the plot of yaw (in green) in Figure 5.6, the plot is divided into five divisions. In the first division, yaw remains stable, indicating the robot is moving straight with very little rotation about the Z-axis. In division 2, the sudden increase in yaw indicates the robot is rotating clockwise, and the few stable points around 40 s show that the robot is moving straight again for a short while. Then, the robot rotates anticlockwise, causing the yaw to drop, as seen in division 3. The significant rise in division 4 indicates a sharp and substantial clockwise rotation of the robot. Finally, the robot moves straight and the yaw values stabilise in division 5 until the end of the recorded time. This analysis of the yaw angle indirectly confirms that the robot is navigating according to the path in Figure 5.2 from node 1 to node 2, ahead of the visualisation of the actual robot path.
5.2 Limitations and Future Work
Limitations
Processing sensor data in real-time while maintaining computational efficiency was a significant challenge, particularly given the constraints of the Raspberry Pi hardware. The topological mapping algorithm, though capable of handling noisy data, required optimisation to achieve real-time performance. The computational resources required for continuous mapping, localisation, and path planning resulted in occasional delays, which could impact the robot's ability to navigate efficiently in a dynamic environment [40].
Additionally, the reliance on odometry for tracking the robot's position led to cumulative errors over time. The IMU used in the robot is prone to gyroscope drift, which can lead to accumulating errors in orientation estimation over time, affecting the accuracy of the robot's navigation and the generation of the navigated path. Similar issues have been reported in other studies using IMUs for robot navigation [31].
Furthermore, the current testing has only been conducted in a controlled indoor environment, Lab 5210 at Monash University Malaysia. This limits the demonstrated robustness of the system when facing more complex or dynamic scenarios.
Regarding environmental changes, adapting the existing topological mapping algorithm to work with real-world sensor data posed challenges in handling environmental noise and dynamic obstacles. The algorithm required extensive tuning of parameters like noise thresholds and node extraction settings, and even then it was not always able to handle complex, cluttered environments.
Impact and Mitigation of Limitations
The limitations identified above carry significant implications for the research project's findings and conclusions. Real-time processing constraints and computational resource limitations may lead to suboptimal performance in dynamic environments, potentially affecting the accuracy of robot navigation and mapping results. An optimised algorithm should be used for real-time performance, preferably with parallel processing techniques.
Similarly, the cumulative error in odometry and IMU drift may decrease the accuracy of the sensor data, impacting the reliability of the generated navigated path. To mitigate this limitation, gyroscope data should be combined with magnetometer readings or visual odometry to reduce the cumulative errors.
Other than that, the controlled testing environment limits the generalisability of the findings to more complex scenarios, potentially overestimating the robot's performance in real-world applications. Challenges in handling environmental noise and dynamic obstacles may have resulted in incomplete or inaccurate topological maps, affecting the overall robustness and reliability of the navigation system. To overcome these limitations, robot testing should be conducted in varied environments to ensure system adaptability, and obstacle detection and avoidance algorithms should be incorporated.
Future Work and Likelihood to Complete
Several pieces of work remain. To meet the first objective, a LiDAR sensor has to be installed on the robot and software has to be developed to interface with the LiDAR for its data processing. LiDAR integration is a common task in robotics, which increases the likelihood of completion; the LiDAR sensor comes with ROS packages that make the implementation straightforward. The second piece of future work is to integrate data from the IMU and LiDAR by implementing sensor fusion algorithms to create a unified representation of the robot's state. However, achieving accurate fusion, especially in real time, can be complex: a great deal of experimentation with the fusion algorithm, the Kalman filter, is needed to find the best fit for the robot system. With the fused sensor data and the robot's state, a mapping function is required to draw the robot's path and fulfil the second objective of this project. The final piece of future work is to implement an algorithm to align and compare the generated path map with the original topological map, which has a moderate likelihood of completion. To fully meet the final objective, metrics like map overlap have to be developed to quantify the accuracy and differences between the two maps. Aligning potentially different map representations can be more challenging: custom algorithms are needed to convert between map types and establish correspondence between features.
6. Scope, Project Plan & Timeline
6.1 Project Scope
In-Scope
- Design and construct a mechanical robot with various sensors
- Adapt and integrate an existing topological map generation algorithm into the hardware
- Test and validate the path navigated by the robot against the topological map generated by the existing algorithm
Out-of-Scope
- Improve and optimise the existing topological map generation algorithm
6.2 Project Plan & Timeline
A Gantt chart is a visual aid for project management that shows milestones, timeframes, and activities for the project. It offers a concise and organised summary of all project tasks, along with information on their start and end dates and interdependencies. Figure 6.1 and Figure 6.2 show the Gantt chart.
The project timeline for "Topological Map Generation Using Hardware Robot" is broken down into two parts, FYP A and FYP B, and several key phases. FYP A begins with preparing the Project Proposal Report, where the introduction, aims, objectives, literature review, methodology, project scope, and risk management plan are developed in sequence. Next, the Robot Design and Construction phase involves budgeting, purchasing items, testing sensors, assembling the robot mechanically, and integrating electronics and sensors. This phase lays the foundation for the robot's physical structure and functionality.
The Algorithm Adaptation phase follows. Here, the existing algorithm is reviewed, modified to work with the robot's sensors, and implemented in ROS, ensuring that the software and hardware components are aligned. After the robot integration, Testing and Initial Evaluation follow, where the robot is tested in controlled environments, real-world data is collected, and the data is analysed and evaluated to assess the robot's performance. At the same time, the Project Progress Report is prepared. This involves drafting the various sections of the report, which include the introduction, methodology, results, discussion, conclusion, and risk analysis.
For FYP B, the first task, in the Fine-Tuning phase, involves making necessary modifications to the algorithm based on previous tests and results to optimise the accuracy of the map generated by the robot compared with the input map. Next is Final Testing and Documentation, which tests and evaluates the robot's performance in various scenarios and documents the findings and results in a test report. The last phase is the Final Report and Presentation, in which the final report summarising the entire project is written and the project's findings and achievements are presented and demonstrated.
7. Risk Management Plan
Table 1 provides an overview of the risk assessment for non-OHS (Occupational Health and Safety) risks in this project. The OHS risks for this project are attached in Appendix A.
Table 1: Risk Assessment Table for non-OHS Project Risks
Project Risk | Risk | Likelihood | Consequence | Risk Level | Mitigation | Residual Risk |
Robot Construction | Mechanical failure or instability during robot operation. | Possible | Major | H | Ensure rigorous testing of mechanical components, use high-quality materials, and conduct multiple iterations of design and testing. | Mechanical issues may still arise due to unforeseen circumstances, requiring additional adjustments or repairs. |
Sensor Malfunction | LiDAR and IMU sensors fail to provide accurate data. | Unlikely | Minor | L | Calibrate sensors thoroughly before deployment, and incorporate error-checking algorithms. | Inaccurate sensor data may still occur, requiring recalibration or sensor replacement. |
Navigation Failure | Robot fails to navigate according to the generated map. | Possible | Moderate | M | Integrate robust navigation algorithms, perform thorough testing in varied environments, and include manual override capabilities. | Navigation issues may still occur, requiring real-time adjustments or manual intervention. |
Battery Drain | Robot runs out of power during operation. | Possible | Moderate | M | Monitor battery levels continuously, use energy-efficient components, and provide easily accessible charging options. | Power issues may still disrupt operation, necessitating a contingency for rapid battery replacement or recharge. |
Data Logging Failure | Path data is not recorded accurately. | Possible | Major | H | Implement reliable data logging systems with backup storage, and test the system rigorously before deployment. | Data loss or corruption may still occur, requiring data recovery procedures. |
Software Integration | Issues with integrating hardware and software components. | Possible | Major | H | Follow systematic software development practices, perform incremental integration, and thoroughly test each integration step. | Integration issues may still arise, requiring debugging and potential redesign of interfaces. |
Communication Failure | Issues with communication between robot components or with the control system. | Unlikely | Moderate | M | Use reliable communication protocols, test for interference, and implement fallback communication methods. | Communication issues may still occur, requiring manual intervention or system resets. |
Robot Damage | Physical damage to the robot during testing or operation | Unlikely | Moderate | M | Implement protective casing, use durable materials, and conduct tests in controlled environments. | Damage may still occur, necessitating repairs or component replacement. |
Schedule Delays | Project milestones are not met on time | Likely | Moderate | H | Develop a detailed project timeline with buffers, monitor progress regularly, and adjust schedules as needed. | Delays may still happen, requiring contingency planning and potential scope reduction. |
8. Sustainability Plan
8.1 Alignment with UN Sustainable Development Goals (SDGs)
This project, which involves creating a robot capable of generating topological maps in real-world environments, closely aligns with UN SDG 9: Industry, Innovation, and Infrastructure.
8.1.1 Target and Key Indicators
Target 9.5: Enhance scientific research, upgrade the technological capabilities of industrial sectors in all countries, in particular developing countries, including, by 2030, encouraging innovation and substantially increasing the number of research and development workers per 1 million people and public and private research and development spending.
Indicator 9.5.1: Research and development expenditure as a proportion of GDP
Indicator 9.5.2: Researchers (in full-time equivalent) per million inhabitants
This project addresses several challenges in the field of autonomous navigation. By leveraging innovative technology like Lidar, IMU sensors, and a topological map generation algorithm, this project contributes to SDG 9 by fostering innovation in robotics. Sustainable engineering principles guide the design and implementation, ensuring positive impacts on the environment, society, and the economy.
8.2 Triple Bottom Line
- Environmental Sustainability: The robot uses low-power components and recyclable materials, reducing energy consumption and electronic waste. Additionally, the robot's operation helps reduce the need for energy-intensive machinery in mapping, leading to a lower carbon footprint.
- Economic Sustainability: The project emphasises affordable, commercially available components and open-source software like ROS, making robotic solutions accessible to a wider range of industries. This lessens the financial burden of conventional mapping methods while fostering industry innovation in line with SDG 9. The robot's potential to improve operational efficiency in sectors like logistics or urban planning demonstrates its long-term economic sustainability.
- Social Sustainability: The robot's applications in urban planning, disaster relief, and search and rescue improve public safety and quality of life. The project tackles societal well-being by lowering human risk and enhancing decision-making in high-stress situations. Moreover, the project promotes equitable access to advanced technology, contributing to social innovation.
8.3 Likely Consequences and Proactive Measures
8.3.1 Positive Impacts
The robot's increased efficiency in mapping and navigation across several industries has a major positive influence. It can improve industrial automation, urban planning, and disaster response, resulting in safer neighbourhoods and more environmentally friendly infrastructure. Additionally, the use of open-source software fosters collaboration and knowledge sharing, contributing to global innovation.
8.3.2 Negative Impacts and Mitigation
Potential negative impacts include electronic waste from sensor failure or component obsolescence. To mitigate this, the robot's design prioritises modularity, allowing for easy repairs and part replacements. Long-term environmental damage is also reduced by using energy-efficient components and recyclable materials. Apart from that, thorough testing and algorithm optimisation proactively address the drawbacks of sensor noise and system inefficiencies.
8.4 Whole System-Based Design and Adaptability
8.4.1 Design Elements for Resilience
By taking into account the whole life cycle of the product, from resource sourcing and manufacture to operational usage, maintenance, and eventual recycling or repurposing, the robot design adopts a whole-system approach. The project uses modular designs, which allow for easy upgrades and part replacements, thereby extending the robot's functional life and minimising waste. The robot's environmental impact is minimised throughout its existence by utilising readily available components like the Raspberry Pi and IMU sensors, as well as recyclable materials like acrylic. The open-source software ROS enables easy integration of new algorithms or sensors, facilitating adaptation to future technological advancements. This all-encompassing strategy ensures the robot is resilient, adaptable, and aligned with long-term sustainability objectives. It minimises unintended consequences by planning for future repairs, reuse, and recycling from the outset, thus contributing to a circular economy.
8.4.2 Anticipatory and Adaptive Approaches
By using a proactive approach to risk management, the project foresees possible problems such as component obsolescence, system inefficiencies, and environmental effects. The robot's modularity allows for quick adaptation to future technological advancements or changes in operational needs.
8.5 Resource Origin and Efficiency
The robot's components are sourced from suppliers that prioritise sustainable manufacturing practices. The use of widely available electronics like the Raspberry Pi ensures resource efficiency, as these components are produced in large quantities and designed to be energy efficient. The overall system is designed to use minimal energy, reducing strain on natural resources. The project also plans for the reuse and recycling of materials and components, such as the modular electronic parts, to reduce its impact on the future availability of resources.
8.6 Pollution and Recycling
The robot's environmental impact is minimised by its reliance on a long-lasting, recyclable construction and a rechargeable LiPo battery. The acrylic chassis, as well as the electronic components, can be recycled or repurposed at the end of the robot's life cycle, minimising waste. Additionally, by eliminating the need for repetitive tasks, the robot's effective navigation algorithms indirectly lessen its total environmental impact.
8.7 Behavioural Impact and Externalities
The project is likely to encourage sustainable behaviour in industries reliant on mapping and navigation by demonstrating the benefits of energy-efficient, autonomous systems. It reduces human labour, energy consumption, and the environmental footprint of traditional mapping methods. By accounting for externalities such as sensor wear and waste, the design incorporates resilience and adaptability, ensuring long-term sustainability.
8.8 Stakeholders
Key stakeholders in the project include suppliers of components, industries using the robot for navigation, academia, and end users. Through the provision of creative solutions in areas including urban planning, disaster assistance, and industrial automation, the project helps to improve people's quality of life. The robot enhances operational efficiency and safety, reducing risks to human workers in hazardous environments and improving decision-making capabilities through accurate map generation. The initiative can assist both established and developing communities by lowering the cost and increasing accessibility to advanced robotic technologies, offering broader societal benefits.
8.9 Conclusion
Sustainable engineering concepts are integrated into this project's social, economic, and environmental aspects. Through compliance with SDG 9, resource efficiency, modularity, and recyclability are encouraged, and the project reduces its environmental impact while boosting technological innovation and social progress. Its focus on long-term resilience, adaptability, and stakeholder engagement ensures that it contributes positively to sustainable industrial practices and improves the quality of life for various communities.