
An Autonomous Drone with SLAM and Object Recognition in Disaster Response

Autonomous drones are increasingly used for complex missions like search and rescue. In disaster zones, mapping changed terrain and locating survivors is a critical challenge requiring reliable perception, localization, and recognition. This project aims to develop an intelligent aerial robot that navigates unknown areas using Simultaneous Localization and Mapping (SLAM) and object recognition to detect humans, debris, and fire.

The project's main challenges are achieving accurate localization over altered terrain and improving environmental understanding through semantic perception. This integration connects low-level autonomy (navigation, mapping) with high-level reasoning (object interpretation), which is vital for disaster response.

The proposed system combines a visual–inertial SLAM framework with an object detection module based on computer vision and machine learning. The SLAM component will construct a detailed 3D map and estimate the drone’s position in real time, while the detection module identifies and classifies key objects using image processing and neural network algorithms. A decision-making layer will also be implemented to enable intelligent responses—for example, hovering when detecting a person, rerouting to avoid obstacles, or marking detected survivors’ locations on the generated map for later rescue.
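The decision-making layer described above can be sketched as a simple rule-based policy. This is a minimal illustration, not the project's actual implementation: the `Detection` type, the label names, and the confidence threshold are assumptions for the sketch.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    CONTINUE = auto()  # follow the planned path
    HOVER = auto()     # hold position over a likely survivor
    REROUTE = auto()   # plan around an obstacle or hazard

@dataclass
class Detection:
    label: str         # e.g. "person", "debris", "fire" (assumed labels)
    confidence: float  # detector score in [0, 1]

def decide(detections, obstacle_ahead, conf_threshold=0.6):
    """Rule-based policy: hover over likely survivors, reroute around
    obstacles or fire, otherwise continue the planned path. Returns the
    chosen action and the labels to mark on the map."""
    marked = []
    action = Action.CONTINUE
    for d in detections:
        if d.confidence < conf_threshold:
            continue  # ignore low-confidence detections
        if d.label == "person":
            marked.append(d.label)
            action = Action.HOVER
        elif d.label == "fire":
            action = Action.REROUTE
    if obstacle_ahead and action is not Action.HOVER:
        action = Action.REROUTE
    return action, marked
```

In a real controller this policy would be re-evaluated each control cycle with the latest detections and range-sensor readings.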

The hypothesis is that integrating object recognition with SLAM will significantly improve navigation safety, mapping accuracy, and overall mission effectiveness in dynamic environments. Experiments will be carried out in Webots, comparing pure SLAM and SLAM + object recognition under different visibility and obstacle conditions. Evaluation metrics will include localization accuracy, detection precision, and system stability during flight simulation.
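One common way to quantify the localization-accuracy metric mentioned above is the absolute trajectory error (ATE), the RMSE between time-aligned estimated and ground-truth positions. A minimal sketch, assuming the two trajectories are already synchronized and expressed in the same frame:

```python
import math

def ate_rmse(estimated, ground_truth):
    """Absolute trajectory error (RMSE, in metres) between time-aligned
    estimated and ground-truth positions (2D or 3D tuples)."""
    assert len(estimated) == len(ground_truth) and estimated
    sq_sum = 0.0
    for p, q in zip(estimated, ground_truth):
        # squared Euclidean distance between paired positions
        sq_sum += sum((a - b) ** 2 for a, b in zip(p, q))
    return math.sqrt(sq_sum / len(estimated))
```

In simulation, Webots can supply ground-truth poses directly, so this metric can be logged per run and compared between the pure-SLAM and SLAM + object-recognition configurations.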

Finally, each teammate will program a core component:

Renjie Xu - Localization: Codes the SLAM framework's state estimation and odometry to estimate the drone’s position in real time with high localization accuracy.

Olanrewaju Sajinyan - Perception: Codes, trains, and optimizes the neural network algorithms to identify and classify key objects (humans, debris, fire) with high detection precision.

Jiaqi Zhao - Decision-Making: Codes the high-level decision-making layer, interpreting sensor data to decide what intelligent responses to trigger (e.g., hover, reroute).

Jianfeng Du - Mapping: Codes the SLAM mapping component, using localization data to construct a detailed 3D map and mark detected objects on it.

Junjie Yang - Control: Codes the low-level control algorithms that execute the intelligent responses (e.g., stable hovering or rerouting to avoid obstacles).

Shared Responsibilities: System integration, including defining interfaces (APIs) and repository management, is a collective task. All members will develop and test modules in the shared Webots simulation. For the final report and video, each member will document and record their module's contribution for final team compilation.
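The interface between the mapping and perception modules can be illustrated with a toy example: projecting a detection from the drone's body frame into world coordinates using the SLAM pose, then marking it on a grid map. This is a 2D sketch only; the class name, frame conventions, and fixed map origin are assumptions, and the actual system targets a 3D map.

```python
import math

class GridMap:
    """Minimal 2D semantic grid map, assuming a fixed resolution
    (metres per cell) and a map origin at world (0, 0)."""

    def __init__(self, width, height, resolution=0.5):
        self.resolution = resolution
        self.cells = [["unknown"] * width for _ in range(height)]

    def world_to_cell(self, x, y):
        # convert world coordinates (m) to integer cell indices
        return int(x / self.resolution), int(y / self.resolution)

    def mark(self, x, y, label):
        cx, cy = self.world_to_cell(x, y)
        self.cells[cy][cx] = label  # e.g. "person" for a survivor

def detection_to_world(drone_x, drone_y, yaw, range_m, bearing):
    """Project a detection at (range, bearing) in the drone body frame
    into world coordinates, given the drone pose estimated by SLAM."""
    a = yaw + bearing
    return drone_x + range_m * math.cos(a), drone_y + range_m * math.sin(a)
```

Agreeing early on small, explicit interfaces like these (pose in, world-frame marks out) is what makes the per-member modules composable in the shared Webots simulation.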



