Formation Control for Multi-Robot Systems (Graduation Thesis)
Abstract (Chinese)
Abstract
Chapter 1 Literature Review
1.1 Mobile Robots Background
1.1.1 Classification of Mobile Robots
1.2 Multi-Agent Systems (MAS)
1.2.1 Multi-Robot Systems (MRS)
1.3 Introduction to Formation Control
1.3.1 Coordination and Control Techniques
Chapter 2 Turtlebot
2.1 Presentation of the Turtlebot 2 Hardware
2.2 Kinematic Equations of the Turtlebot
2.2.1 The Kinematic Model of the Robot
2.3 ROS
Chapter 3 Methodology and Theory Basics
3.1 Vehicle Model Conversion
3.2 Formation Control Protocol of Second-Order Multi-Agent Systems
3.3 Leader-Follower Formation Control Theory
3.3.1 Virtual Leader and Follower Formation
Chapter 4 Simulation and Experimental Results
4.1 Matlab Simulations
4.1.1 Simulation Analysis of a Second-Order Multi-Agent System with Three Followers
4.1.2 Matlab Robotics System Toolbox™
4.2 Gazebo Simulation
4.2.1 Gazebo 3D Simulation Results
Chapter 5 Conclusion and Future Work
5.1 Conclusion
5.2 Future Work
REFERENCES
LIST OF FIGURES
LIST OF ABBREVIATIONS
AUTHOR PERSONAL INFORMATION
Acknowledgement
Abstract (Chinese)
The formation control of multi-robot systems of non-holonomic mobile robots is based on the Robot Operating System (ROS) and the Gazebo simulator. To achieve the desired formation, the mobile robots need to localize themselves in the environment, communicate their positions to one another, and measure their corresponding velocities.
Creating simulation scenarios allows the formation problem to be tested more easily before it is applied to real robots.
In this thesis, we describe each simulation in detail and how it is used to solve the formation control problem in realistic simulated environments.
Prototype modeling and simulation algorithms are also verified in simulated scenarios in the Matlab computing environment. We used the Robotics System Toolbox™ to design and test mobile-robot algorithms for collision avoidance.
Keywords: multi-robot systems, mobile robots, formation control, Robot Operating System (ROS), Matlab Robotics System Toolbox™
Abstract
The formation control of multi-robot systems of non-holonomic mobile robots is based on ROS (Robot Operating System) and the Gazebo simulator. To achieve the desired formation, mobile robots need to localize themselves within the environment, communicate their positions to each other, and measure their corresponding velocities. Creating simulation scenarios makes the formation problem easier to test before applying it to a real robot.
We describe each of these simulations and how they can be used to solve the formation control problem under realistic simulation worlds.
Modeling and simulation prototype algorithms are also tested in Matlab, using the Robotics System Toolbox™ to design and test mobile-robot algorithms for collision checking.
Keywords: multi-robot system, mobile robots, formation control, Robot Operating System (ROS), Matlab Robotics System Toolbox™
Chapter 1 Literature Review
Mobile Robots background
Mobile robots are used in a wide range of applications including in factories (e.g., automated guided vehicles), for military operations (e.g., unmanned ground reconnaissance vehicles), in healthcare (e.g., pharmaceutical delivery), for search and rescue, as security guards, and in homes (e.g., floor cleaning and lawn mowing). Mobile robots have the capability to move around in their environment and are not fixed to one physical location. Mobile robots can be "autonomous" (AMR - autonomous mobile robot) which means they are capable of navigating an uncontrolled environment without the need for physical or electro-mechanical guidance devices. Alternatively, mobile robots can rely on guidance devices that allow them to travel a pre-defined navigation route in relatively controlled space (AGV - autonomous guided vehicle). By contrast, industrial robots are usually more-or-less stationary.
Automated guided vehicles or automatic guided vehicles (AGVs) were invented in 1953. Mobile robots are most often used in industrial applications to move materials around a manufacturing facility or a warehouse. Mobile robots address the demand for flexible material handling, the desire for robots to be able to operate on large structures, and the need for rapid reconfiguration of work areas. Much of the earlier work on outdoor vehicles for defense, search and rescue, and bomb disposal is relevant to the manufacturing domain, as is work that has been done on personal care robots and robots for household and hospital applications.
The components of a mobile robot are a controller, sensors, actuators and a power system. The controller is generally a microprocessor, an embedded microcontroller or a personal computer (PC). The sensors used depend on the requirements of the robot. The requirements could be dead reckoning, tactile and proximity sensing, triangulation ranging, collision avoidance, position location and other specific applications. Actuators usually refer to the motors that move the robot, whether it is wheeled or legged.
Classification of mobile robot
Mobile robots can be classified by:
- The environment in which they can travel:
- Land or home robots are usually referred to as Unmanned Ground Vehicles (UGVs).
- Delivery & transportation robots can move materials and supplies through a work environment.
- Aerial robots are usually referred to as Unmanned Aerial Vehicles (UAVs).
- Underwater robots are usually called autonomous underwater vehicles (AUVs).
- Polar robots, designed to navigate icy, crevasse filled environments.
- The devices they use to move, mainly:
- Legged robot: human-like legs.
- Wheeled robot.
- Tracks.
Regarding navigation and localization, mobile robots often operate in large facilities, and many different approaches have been taken. They range from methods in which the entire facility is first mapped and routes are planned a priori, to methods in which sensors provide information about traversable areas and the vehicles determine their own current positions and plan their paths dynamically based on features recognized in the environment. When little change is expected in the environment and cycle times are critical, a priori planning is usually preferred. When the workspace or the tasks change frequently, it is often better to plan dynamically. Manufacturing facilities often take a middle road. Other sensors on board the vehicle look for obstacles or unexpected objects in the path of the vehicle, and the vehicle may be able to plan a way around them before returning to its pre-planned route. It is also important to know the position and orientation (pose) of a mobile robot, and many methods have been developed to provide this information. A commonly used approach is to rely on odometry augmented by sensor-based measurements from lasers, radio-frequency identification (RFID) systems, two-dimensional bar codes (e.g., QR codes), and cameras.
Multi-agent system (MAS)
A multi-agent system (MAS or "self-organized system") is a computerized system composed of multiple interacting intelligent agents. Multi-agent systems can solve problems that are difficult or impossible for an individual agent or a monolithic system to solve.
Intelligence may include methodic, functional, procedural approaches, algorithmic search or reinforcement learning. Multi-agent systems consist of agents and their environment. Typically multi-agent systems research refers to software agents. However, the agents in a multi-agent system could equally well be robots.
Multi-agent systems are ubiquitous in the real world and have received increasing attention from many researchers worldwide. A multi-agent system is composed of many agents interconnected by a communication network.
Multi-agent systems can manifest self-organization as well as self-direction and other control paradigms and related complex behaviors even when the individual strategies of all their agents are simple. When agents can share knowledge using any agreed language, within the constraints of the system's communication protocol, the approach may lead to a common improvement.
Over the last decade, multi-agent systems, as a special kind of complex network, have attracted increasing attention [1][2] from many researchers worldwide; they arise from many real-world network systems such as flocks [3], groups of vehicles [4], power systems [6] and complex fractional-order dynamics [7]. Early consensus results were obtained for multi-agent systems under the strong-connectedness assumption on a directed graph. Ren and Beard further generalized these results and showed that such multi-agent systems can achieve consensus if there exists a spanning tree in the directed graph. Moreover, [7] applied the consensus algorithm to the formation control of multi-vehicle systems. Thereafter, many kinds of distributed protocols were designed for different multi-agent systems.
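The first-order consensus protocol mentioned above can be sketched minimally in Python; the directed graph (which contains a spanning tree rooted at agent 0), the unit gains and the Euler step size are illustrative assumptions, not values from any cited work:

```python
# One Euler step of the consensus protocol x_i' = sum_{j in N(i)} (x_j - x_i):
# each agent moves toward the states of its in-neighbours.
def consensus_step(x, neighbours, dt=0.05):
    return [xi + dt * sum(x[j] - xi for j in neighbours[i])
            for i, xi in enumerate(x)]

# Directed graph containing a spanning tree rooted at agent 0
# (agent 0 has no in-neighbours, so its state never changes).
neighbours = {0: [], 1: [0], 2: [0], 3: [1]}
x = [1.0, 4.0, -2.0, 3.0]
for _ in range(400):
    x = consensus_step(x, neighbours)
# All states converge to the root's state, 1.0.
```

Because the graph contains a spanning tree, every agent's state is driven to the root's state, which is the consensus condition stated above.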
Multi-Robot Systems (MRS)
Multi-robot systems (MRSs) are an important part of robotics research. In the late 1980s, a group of scientists began investigating this direction of research, and a series of projects have since been realized successfully.
MRS is a subset of multi-agent systems (MAS) (see Figure 1). Multi-robot systems typically focus on fundamental technical aspects, like coordination and communication, that need to be considered in order to coordinate a team of robots to perform a given task effectively and efficiently.
As shown in works such as Yasuda [9], Mantha [10], CEBOT [10], Molina [12] and dos Reis [13], one of the main challenges for multi-robot systems is to design suitable coordination strategies between the robots that enable them to perform operations efficiently in terms of time and working space.
Multi-robot systems have been the object of significant research effort in recent years. The basic idea is that multi-robot systems can perform tasks more competently than a single robot, or can accomplish tasks not executable by a single one. Moreover, multi-robot systems have advantages like increased tolerance to possible vehicle faults, flexibility in task execution, and the ability to take advantage of distributed sensing and actuation. The use of a platoon of vehicles is of interest in many applications, such as exploration of an unknown environment, navigation and formation control, demining, object transportation, and even team games; these may involve ground, aerial, underwater or surface vehicles.
Nowadays most robots fall into one of three primary categories, namely manipulators, mobile robots and humanoid robots.
Our research focuses on multiple mobile robot systems (MMRSs), in which robots cooperate to accomplish a given task or to follow a given path by moving around in the simulated environment. We should be careful not to confuse multi-robot systems (MRSs) with distributed artificial intelligence (DAI), because the DAI field is primarily concerned with problems involving software agents. In contrast, the area of multi-robot systems involves mobile robots that move in the physical and simulated worlds and must interact with each other physically.
To date, a number of papers have been published presenting research reviews, taxonomies and survey analyses for multi-robot systems. One such work presented a taxonomy that classifies multi-agent systems according to communication, computational capacity and certain other capabilities. The authors also presented additional results concerning MAS to illustrate the usefulness of the taxonomy and to demonstrate that a collective can be more powerful than a single unit of the collective.
Figure 1: Multi-Agents System and its subset
It has been shown in many instances that multiple robots can cooperate to perform complex tasks that would otherwise be almost impossible for one single powerful robot to accomplish. The fundamental theory behind multi-agent robotics suggests dispatching smaller sub-problems to individual robots in a group and allowing them to interact with each other to find solutions to complex problems. Simple robots can be built and made to cooperate to achieve complex behaviors. It has been observed that multi-robot systems (MRSs) are very cost-effective compared to building a single costly robot with all the capabilities. As these systems are usually decentralized, distributed and inherently redundant, they are fault-tolerant and improve the reliability and robustness of the system. The simplicity of multi-robot systems has produced a potentially wide set of applications.
Robotic Systems: Single-Robot and Multi-Robot
A single-robot system contains only one individual robot that is able to model itself, the environment and their interaction. Several individual robots are well known, such as RHINO [14], ASIMO [16], MER-A [17], BigDog [18] and NAO [19]. The robot in a single-robot system is usually integrated with multiple sensors, which themselves need a complex mechanism and an advanced intelligent control system. Even though single-robot systems give relatively strong performance, some tasks may be inherently too complex, or even impossible, for them to perform, such as spatially separate tasks. For example, [18] gave the example of a missile-launch task that requires some sort of synchronization: there are two keys, separated by a large distance in space, that need to be operated simultaneously. The restriction of the single-robot system is that it is spatially limited.
A multi-robot system (MRS) can contain more than one individual robot, whether the group is homogeneous or heterogeneous. This raises the question: why multi-robot systems? Using an MRS can have several potential advantages over a single-robot system:
- A multi robot system has a better spatial distribution
- A MRS can achieve better overall system performance.
- A MRS introduces robustness that can benefit from data fusion and information sharing among the robots, and fault-tolerance that can benefit from information redundancy. For example, multiple robots can localize themselves more efficiently if they exchange information about their position (Formation Control).
- A MRS can have a lower cost. A number of simple robots can be simpler to program and cheaper to build than a single powerful robot, which is complex and expensive, to accomplish a task.
- A MRS can exhibit better system reliability, flexibility, scalability and versatility.
Therefore the MRS advantages can be expressed as:
- Robustness: “if one robot fails, the others step in”
- Scalability: “if the problem gets bigger, just get more robots”
- Performance: “more robots will get this done faster”
- Specialization: “while some robots do this, others already do that”
Introduction to Formation Control
Formation control is one of the most challenging problems in cooperative multi-robot systems and has attracted considerable attention in the robotics research community over the past decades. As an important part of cooperative multi-robot control, it has always been among the most active research topics. Many methods have been applied in theoretical research and engineering applications, such as behavior-based, potential-field-based, leader-follower, graph-theory-based and virtual-structure approaches. In general, mobile-robot formation can be described as controlling a group of mobile robots to track a desired trajectory while maintaining a desired geometric shape, including positions and orientations.
Since the late 1980s, researchers have been motivated to design and build teams of robots with the ability to work together on a given task. This motivation stems from the fact that in many applications multi-robot systems (MRSs) bring several advantages over single-robot systems. In particular, MRSs are generally more time-efficient, less prone to single points of failure, and typically exhibit multiple capabilities, which in many cases yields a more effective solution to a given problem. In early works, researchers observed natural systems, such as swarms of bees, ants and even humans, to study how a group of individual entities can work together to perform a given task. The multidisciplinary nature of these early studies eventually led to MRSs being applied in several different application domains such as surveillance, search and rescue, foraging, exploration, and cooperative manipulation and transportation of objects, among others. In the past few years, observers have been widely applied to formation control [20][21][22]. For example, an augmented fuzzy observer was put forward in [23] to implement synchronous estimation of the system state and the disturbance term.
In [22], the integral action was incorporated into the observer-based controller to improve the formation tracking and observation performance.
Formation control represents nowadays one of the most important research areas in mobile robotics. The growing popularity of self-driving vehicles and their increasing market demand, the ubiquity of unmanned aerial vehicles (UAVs), and the need for efficient fleets of autonomous underwater vehicles (AUVs) have broadened the spectrum of formation-control scenarios.
Formation control of multi-robot systems has received significant attention in recent decades due to its many potential applications in space-based interferometers, military combat, surveillance and reconnaissance systems, hazardous-material handling, and distributed reconfigurable sensor networks. Formation control usually requires that individual vehicles share a consistent view of the objectives and the surrounding area. For example, a multi-robot rendezvous task requires that each vehicle know the rendezvous point. Robot control problems have gained much attention and developed rapidly owing to their extensive application space, and a number of theoretically valuable results have been reported. Formation control is an important issue in the coordinated control of a group of unmanned autonomous vehicles/robots: in many applications, a group of autonomous vehicles is required to follow a predefined trajectory while sustaining a desired spatial pattern.
Coordination and Control Techniques
One of the most important forms of coordination in multi-robot systems is observed when robots interact with each other to move while preserving a formation, like a flock of birds. Flocking [25][26] is a form of collective behavior of a large number of interacting agents with a common group objective. Engineering applications of flocking include parallel and simultaneous transportation of vehicles, delivery of payloads, and military missions such as battlefield surveillance.
In 1986, Reynolds [27] introduced three heuristic rules that led to the creation of the first computer animation of flocking. He suggested the following three rules for a successful flock:
- Flock centering: attempt to stay close to nearby flockmates.
- Collision avoidance: avoid collisions with nearby flockmates.
- Velocity matching: attempt to match velocity with nearby flockmates.
These rules are also known as the cohesion, separation and alignment rules in the literature. The rules had very broad interpretations, and the issue of how to interpret them correctly could only be resolved when Reynolds published more recent papers [28], which describe steering behaviors of autonomous characters in computer animation, and Mikhailyuk [29], which describes a method for constructing large groups of autonomous characters. These autonomous characters were then made to respond in real time, interacting with the user as well as with other characters and their environment.
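The three rules above can be illustrated with a small 2-D sketch; the gains, the separation radius, and the all-to-all neighbourhood used here are arbitrary assumptions for illustration, not parameters from Reynolds' papers:

```python
# Acceleration of boid i from the three rules: cohesion (steer toward
# the centroid of flockmates), separation (push away from close mates),
# and alignment (match the mates' average velocity).
def boid_accel(i, pos, vel, k_coh=1.0, k_sep=1.5, k_ali=1.0, sep_r=1.0):
    others = [j for j in range(len(pos)) if j != i]
    # Cohesion: attraction toward the centroid of the flockmates.
    cx = sum(pos[j][0] for j in others) / len(others)
    cy = sum(pos[j][1] for j in others) / len(others)
    ax = k_coh * (cx - pos[i][0])
    ay = k_coh * (cy - pos[i][1])
    # Separation: repulsion from mates closer than sep_r.
    for j in others:
        dx = pos[i][0] - pos[j][0]
        dy = pos[i][1] - pos[j][1]
        d2 = dx * dx + dy * dy
        if 0 < d2 < sep_r ** 2:
            ax += k_sep * dx / d2
            ay += k_sep * dy / d2
    # Alignment: match the average velocity of the flockmates.
    avx = sum(vel[j][0] for j in others) / len(others)
    avy = sum(vel[j][1] for j in others) / len(others)
    ax += k_ali * (avx - vel[i][0])
    ay += k_ali * (avy - vel[i][1])
    return ax, ay
```

Integrating this acceleration for every boid at each time step reproduces the qualitative flocking behavior described above.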
The formation problem has been regarded as an important problem in multi-robot systems where the objective is to make a team of vehicles move toward and maintain a desired geometric pattern, while maintaining a featured motion.
Formation control structures can be divided into three strategies: the leader-follower strategy, the behavioral approach and the virtual-structure approach. Several approaches have been proposed in the literature to solve this problem. However, most of the existing literature tackles the theoretical side of the problem: mainly the controller design is considered, where several control strategies are adopted to make the formation errors converge to zero. Nevertheless, some works have carried out real experiments to prove the effectiveness of their proposed controllers. Furthermore, multi-robot system implementation on ROS has rarely been considered, except in a few works like [29] on multi-robot coverage and [31] on multi-robot collision avoidance.
The problem of multi-vehicle formation control has been studied in many papers and articles, where the focus is on consensus-based formation control. Ren and Cao in [32] classify formation control problems into formation shaping problems, in which the control objective is to establish the formation shape, and formation tracking problems, in which the goal is to find a control algorithm so that the agents track predefined trajectories. In [33] and [34], graph-theoretic methods, consensus, and cooperation in networked multi-agent systems are the focus. In the case of road transportation or warehouse automation, the tracks or lanes are predetermined and the tasks are provided by a fleet system or a global coordinator.
Chapter 2 Turtlebot
Presentation of the Turtlebot 2 Hardware
Turtlebot is an open source hardware platform and mobile base. When powered by ROS software, Turtlebot can handle vision, localization, communication and mobility. It can autonomously move anything on top of it to wherever that item needs to go, avoiding obstacles along the way. This may not seem terribly exciting at first glance, but consider two things:
- A mobile base is the heart of a modular/interoperability model of robotics. Without a shared base, parts such as robotic arms, sensors, and other tools could not find or get to their location. Even at their desired location, each would require an independent “brain” to know what to do, which would in turn require interpretation between each of the components.
- The Kiva robot system is essentially a really strong Turtlebot: a robot that autonomously moves the objects on top of it around an environment.
The Turtlebot is an open-source robot built in collaboration with the original makers of ROS, with a focus on education and early-stage development. The Turtlebot consists of a mobile base, a 3D sensor (Kinect), a laptop computer, and the Turtlebot mounting hardware kit. The Turtlebot 2 is one of the non-holonomic robots officially proposed by Willow Garage for development under the operating system dedicated to robotics, ROS. It is equipped with a Kinect sensor, a netbook, trays for the installation of these two components, and a Kobuki base, as shown in Figure 2.
Figure 2: Basic components of the Turtlebot 2
The robot computer: The main task of the robot computer is to receive data and send it to a desktop computer or save it to a hard disk. The robot computer also interacts with the Turtlebot hardware.
The Kobuki base: The Kobuki is the Turtlebot 2 mobile base. It is a mobile wheeled robot with two differential wheels and two castor wheels; it also contains proximity sensors, encoders on the wheels, and a gyroscope for each axis. The Kobuki base can also supply power to external sensors (Kinect, ultrasonic sensors, infrared sensors, laser scanners, other cameras, etc.) or to actuators (motors, servomotors). Figure 3 shows the top and bottom views of the Kobuki.
Figure 3: The Kobuki base
- Kobuki base sensors:
- Encoders: Encoders are sensors attached to a rotating object (e.g., wheels or motors) to measure rotation. By measuring the rotation, the displacement, speed and acceleration of the robot can be determined. These encoders allow localization by odometry. Odometry is based on the individual measurement of wheel movements to reconstruct the overall movement of the robot. Starting from a known initial position and integrating the measured displacements, it is then possible to calculate the current position of the mobile robot at each instant.
- Gyroscope: The gyroscope is a sensor of angular velocities on three axes
x, y and z.
- Bumpers: There are three bumpers which are distributed between the left part, the center and the right part of the base.
- Cliff sensors: Similarly, there are three sensors distributed between the left, the center and the right part of the base.
- Wheel drop sensors: There are two of these sensors, one for each wheel (left, right).
- Figure 4 shows the Kinect sensor
Figure 4: The Kinect Sensor
The main Kinect components are:
- RGB (Red Green Blue) camera that stores data in three channels with a resolution of 1280x960 pixels. It allows the capture of a color image.
- Infrared transmitter / receiver (IR) and IR depth sensor. The transmitter emits beams of IR light and the depth sensor reads the IR beams reflected by the obstacles encountered. The reflected beams are converted into depth information thus measuring the distance between the obstacle and the sensor. This technology allows the capture of a depth image.
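The dead-reckoning odometry described for the encoders above can be sketched as follows; the tick resolution and wheel geometry are illustrative values, not the actual Kobuki parameters:

```python
import math

TICKS_PER_REV = 2578   # assumed encoder resolution (ticks per wheel revolution)
WHEEL_RADIUS = 0.035   # metres (assumed)
WHEEL_BASE = 0.23      # distance between the wheels, metres (assumed)

def update_pose(x, y, theta, d_ticks_left, d_ticks_right):
    """Advance the pose (x, y, theta) by one pair of encoder readings."""
    per_tick = 2 * math.pi * WHEEL_RADIUS / TICKS_PER_REV
    dl = d_ticks_left * per_tick    # left-wheel displacement
    dr = d_ticks_right * per_tick   # right-wheel displacement
    d = (dl + dr) / 2.0             # displacement of the robot centre
    dtheta = (dr - dl) / WHEEL_BASE # change in heading
    # Midpoint approximation of the travelled arc.
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta
```

Starting from a known initial pose and applying this update at each encoder reading integrates the measured displacements into the current position, exactly as the encoder description above states; the drawback of pure odometry is that such integration accumulates error over time.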
Kinematic Equations of the Turtlebot
A Kobuki Turtlebot is a low-cost, open-source differential-drive robot. It consists of a mobile base, an RGB-D sensor and an Arduino processor, making it a good entry-level mobile robot platform. The Kobuki was chosen because it is an open-source UGV platform, making it well suited for research and development. The Kobuki SDK is based on ROS, which is the preferred development platform for ACE researchers because of its intuitive publisher/subscriber message-passing structure that allows robust and simple communication among the multiple facets of a robotic system.
Figure 5: Representation of the TurtleBot2 UGV in the Coordinate
The Turtlebot uses a differential drive to steer, and Cook provides a clear approach for obtaining and simulating the robot's movement. The kinematic equations follow; as shown in Figure 5, R is the instantaneous radius of curvature of the robot's trajectory, and W stands for the distance between the wheels.
Where:

$v_L$: velocity of the left wheel, $v_R$: velocity of the right wheel, $\omega$: angular rate of the robot.

The wheel velocities along the circular trajectory are

$$v_L = \omega\left(R - \frac{W}{2}\right) \quad (1) \qquad\qquad v_R = \omega\left(R + \frac{W}{2}\right) \quad (2)$$

Next, the angular rate of the robot is calculated by subtracting (1) from (2):

$$\omega = \frac{v_R - v_L}{W} \quad (3)$$

Next, the instantaneous radius of curvature is calculated using (1) and (2):

$$R = \frac{W}{2}\,\frac{v_R + v_L}{v_R - v_L} \quad (4)$$

The velocity along the robot's longitudinal axis is calculated using (3) and (4):

$$v = \omega R = \frac{v_R + v_L}{2} \quad (5)$$

Representing the robot's velocity in earth coordinates we have

$$\dot{x} = v\cos\theta, \qquad \dot{y} = v\sin\theta, \qquad \dot{\theta} = \omega \quad (6)$$

For this reason the control variables will be

$$u_1 = v, \qquad u_2 = \omega$$
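The differential-drive relations above can be collected into a short sketch; the symbol names follow the text, while the default wheel separation is an assumed value:

```python
import math

def body_velocities(v_left, v_right, W=0.23):
    """Forward velocity v and angular rate omega from the wheel velocities
    (W is the distance between the wheels)."""
    omega = (v_right - v_left) / W    # angular rate
    v = (v_right + v_left) / 2.0      # velocity along the longitudinal axis
    return v, omega

def earth_frame_rates(v, omega, theta):
    """Pose rates in earth coordinates: x' = v cos(theta), y' = v sin(theta),
    theta' = omega."""
    return v * math.cos(theta), v * math.sin(theta), omega
```

Equal wheel speeds give pure translation (omega = 0), while opposite wheel speeds give rotation on the spot (v = 0), matching the derivation above.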
The Kinematic Model of the Robot
The Turtlebot 2 is moved by the following model: We consider that (X,Y) is the global coordinate system and (XR, YR) is the local coordinate system of the robot. The position of the robot is represented in Cartesian coordinates in the global coordinate system. The relationship between the robot frame and the global frame is given by the basic transformation matrix:
$$R(\theta) = \begin{bmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
Figure 6: Turtlebot 2 representation in a Cartesian coordinate system
The figure above shows the relationship between the two frames. The wheels are motorized independently: when both wheels turn with the same speed in the forward direction the robot moves forward, and otherwise it moves backwards. Turning right is done by actuating the left wheel at a higher speed than the right wheel, and vice versa to turn left. The robot can also rotate on the spot by actuating one wheel forward and the other in the opposite direction at the same speed. The third wheel is a free wheel that preserves the robot's stability.
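Applying the basic transformation matrix described above maps a global-frame vector into the robot frame; a minimal sketch (planar case, so only the upper-left 2x2 block of R(theta) acts on x and y):

```python
import math

def to_robot_frame(theta, xg, yg):
    """Apply the rotation R(theta) to a global-frame vector (xg, yg),
    returning its coordinates in the robot frame (XR, YR)."""
    xr = math.cos(theta) * xg + math.sin(theta) * yg
    yr = -math.sin(theta) * xg + math.cos(theta) * yg
    return xr, yr
```

For example, with the robot heading theta = pi/2, the global X axis maps onto the negative YR axis of the robot frame, as the matrix predicts.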
The Turtlebot 2 wheels are non-holonomic; they impose the non-holonomic (no lateral slip) constraint:

$$\dot{x}\sin\theta - \dot{y}\cos\theta = 0$$
The relationship between the speeds of the robot and the speeds of the wheels is expressed by these two equations:

$$v = \frac{r\,(\dot{\varphi}_R + \dot{\varphi}_L)}{2}, \qquad \omega = \frac{r\,(\dot{\varphi}_R - \dot{\varphi}_L)}{W}$$

where $r$ is the wheel radius, $\dot{\varphi}_R$ and $\dot{\varphi}_L$ are the angular speeds of the right and left wheels, and $W$ is the distance between the wheels.
Suppose that the robot is in an arbitrary position and that the distance between its current position and the desired position, defined with reference to the global frame, is greater than zero.
Figure 7: Turtlebot 2 position and orientation error vectors