## The Hardware Design of the Humanoid Robot RO-PE and the Self-localization Algorithm in RoboCup


**The Hardware Design of the Humanoid Robot RO-PE and the Self-localization Algorithm in RoboCup**

Tian Bo, Control and Mechatronics Lab, Mechanical Engineering
20 Feb 2009, SMC

**RoboCup and Team RO-PE**

• RoboCup™ is an international joint project to promote artificial intelligence and robotics.
• Team RO-PE: RO-PE (RObot for Personal Entertainment) is a series of small-size humanoid robots developed by the Legged Locomotion Group.

**Self-localization in RoboCup**

• Global localization problem: the robot is not told its initial pose and has to determine it from the very beginning.
• Kidnapped robot problem: a well-localized robot is teleported to some other position without being told. This problem is often used to test a robot's ability to recover autonomously from catastrophic localization failures.
• Other difficulties in the humanoid soccer scenario: the field of view is limited by the human-like sensor; perceptions and odometry are noisy; and computational resources are limited, yet the data must be processed in real time.

**What is a Particle Filter?**

• It belongs to the family of Bayesian filters (Bayes filter, Kalman filter, …).
• Bayesian filter techniques provide a powerful statistical tool for handling measurement uncertainty.
• Based on knowledge of the previous state, a Bayesian filter probabilistically estimates a dynamic system's state from noisy measurements.
• Particle filters represent beliefs by sets of samples, or particles: a probabilistic approach in which the current location of the robot is modeled as a density of particles.
• Each particle can be seen as a hypothesis that the robot is located at that pose.
• The main objective of particle filtering is to "track" a variable of interest as it evolves over time, typically with a non-Gaussian and potentially multi-modal pdf.
• The particle filter algorithm is recursive in nature and operates in two phases: prediction and update.

**Particle Filter Localization**

• Move all the particles according to the motion model of the robot's previous action (the more practical part).
• Determine the probabilities q_i based on the observation model (the real "trick").
• Resample.

**Particle Filter Algorithm** (Probabilistic Robotics, Ch. 4, p. 98)

Algorithm particle_filter(S_{t-1}, u_{t-1}, z_t):

• Initialize S_t = ∅ and the normalization factor η = 0.
• For i = 1 … n, generate new samples:
  • Sample index j(i) from the discrete distribution given by the weights w_{t-1}.
  • Sample x_t^i from p(x_t | x_{t-1}, u_{t-1}) using x_{t-1}^{j(i)} and u_{t-1}.
  • Compute the importance weight w_t^i = p(z_t | x_t^i).
  • Update the normalization factor: η = η + w_t^i.
  • Insert (x_t^i, w_t^i) into S_t.
• For i = 1 … n: normalize the weights, w_t^i = w_t^i / η.

**Particle Filter for Self-Localization**

    Loop:
        initializeParticles();   // p[i] = (x, y, theta, w)
        while (sensorReset != 1) {
            motionModel();
            sensorModel();       // calls updateWeight()
            resampling();
            output();
        }

**Motion Model**

This is the prediction part.

• The particle filter for self-localization estimates the robot's pose.
• Odometry-based method; taking x for example:

    p[m].x = p[m].x + deltaX * (1 + gaussian)

• Simplified leg model, step 1: hip_yaw = 0.
• Simplified leg model, step 2: hip_yaw = θ.
• We perform localization at the moment the left leg has just touched the ground.
• The odometry takes its data from the motion commands sent to the servos, so it is not affected by the control signal. It could be more accurate if the servos could feed back their actual positions.

**Error of the Motion Model (1)**

As the number of steps increases, the error increases.
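Before looking at its error, here is a minimal Python sketch of this odometry-based prediction step, following the slide's formula `p[m].x = p[m].x + deltaX * (1 + gaussian)`. The 10% noise level and the dict layout of a particle are assumptions, not values from the slides:

```python
import random

def motion_model(particles, delta_x, delta_y, delta_theta, noise=0.1):
    """Prediction step: shift every particle by the odometry estimate,
    each component perturbed by zero-mean Gaussian noise so the particle
    cloud spreads out to reflect odometry uncertainty."""
    for p in particles:
        p['x'] += delta_x * (1.0 + random.gauss(0.0, noise))
        p['y'] += delta_y * (1.0 + random.gauss(0.0, noise))
        p['theta'] += delta_theta * (1.0 + random.gauss(0.0, noise))
```

Each particle is a dict with keys `'x'`, `'y'`, `'theta'`, `'w'`; after one step of deltaX = 1 the x-positions scatter around x + 1 instead of all landing on the same point.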
The largest error is 25%, occurring at the 14th step.

**Error of the Motion Model (2)**

As with the walking motion, the real distance has a linear relationship with the number of steps, so we can achieve better results by improving the model or applying a correction.

**Sensor Model**

This is the update part.

• On the whole field, we use only the two goals and the two poles for self-localization; the world model is known.
• We take only the angle from the landmark to the front of the robot into consideration.
• We use only the wide-angle camera for landmark recognition, so the information we can extract from it is limited.
• Once a landmark is observed by the robot, the function sensorModel() is executed and the weight of every particle is updated accordingly.
• If several landmarks are observed at once, the weight update is applied once for each of them.
• We obtain expectedTheta from the position and orientation of the particle together with the world model:

    if (blue_goal_found) {
        /* perceivedTheta comes from the camera: the coordinates of the
           landmark in the image plus the position of the head's panning servo */
        updateWeight(blue_goal);
    }

• Update weight:

    deltaTheta = fabs(expectedTheta - perceivedTheta);
    belief = distribution(deltaTheta);
    p[i].weight = p[i].weight * belief;
    normalize(p[i].weight);

• Distribution policy: we currently use a Gaussian distribution.

**Resampling**

• The simplest method of resampling is to select each particle with a probability equal to its weight.
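A minimal Python sketch of this update phase, combining the Gaussian weight update with select-with-replacement resampling. The value of sigma and the dict layout of a particle are assumptions:

```python
import math
import random

def update_weight(p, expected_theta, perceived_theta, sigma=0.2):
    """Sensor update: scale the particle's weight by a Gaussian
    in the bearing error (sigma, in radians, is an assumed value)."""
    delta_theta = abs(expected_theta - perceived_theta)
    belief = math.exp(-delta_theta ** 2 / (2.0 * sigma ** 2))
    p['w'] *= belief

def resample(particles):
    """Select with replacement: draw n new particles, each chosen with
    probability proportional to its weight; reset weights to uniform."""
    n = len(particles)
    total = sum(p['w'] for p in particles)
    weights = [p['w'] / total for p in particles]
    chosen = random.choices(particles, weights=weights, k=n)
    return [{'x': p['x'], 'y': p['y'], 'theta': p['theta'], 'w': 1.0 / n}
            for p in chosen]
```

Multiplying the weight by the belief and normalizing afterwards mirrors the updateWeight()/normalize() steps on the slides; particles whose expected bearing disagrees with the camera quickly lose weight and die out in resampling.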
• Candidate schemes: select with replacement, linear-time resampling, and the resampling of Liu et al. (see "A Particle Filter Tutorial for Mobile Robot Localization", TR-CIM-04-02).

**Final Estimation**

• Finding the largest cluster: gives the best result but is computationally expensive.
• Calculating the average: may be affected by far-away particles.
• Best weight: the fastest way to produce a result, suitable for a real-time system.

**Future Work**

• Find the condition for triggering sensor resetting; otherwise the particles sometimes converge to a false point and cannot recover.
• Include distance information in the sensor model.
• Try new resampling and weight-update algorithms.
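As a closing illustration, two of the final-estimation strategies compared above can be sketched in Python. The particle dict layout is an assumption; the weighted average combines headings via sin/cos so that angles wrap correctly:

```python
import math

def best_weight_estimate(particles):
    """Fastest option: return the pose of the highest-weight particle."""
    best = max(particles, key=lambda p: p['w'])
    return best['x'], best['y'], best['theta']

def weighted_average_estimate(particles):
    """Weighted mean pose: cheap, but far-away stray particles pull it off.
    Headings are averaged as unit vectors (sin/cos) to handle wrap-around."""
    total = sum(p['w'] for p in particles)
    x = sum(p['w'] * p['x'] for p in particles) / total
    y = sum(p['w'] * p['y'] for p in particles) / total
    s = sum(p['w'] * math.sin(p['theta']) for p in particles)
    c = sum(p['w'] * math.cos(p['theta']) for p in particles)
    return x, y, math.atan2(s, c)
```

The best-weight variant is O(n) with no arithmetic beyond a comparison, which is why the slides recommend it for the real-time system.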