Adaptive and reinforcement learning control methods for active compliance control of a humanoid robot arm
Khan, S. G. (2012) Adaptive and reinforcement learning control methods for active compliance control of a humanoid robot arm. PhD, University of the West of England.
Safety is an important requirement for human-robot interaction (HRI), and compliance control can help to ensure it. The aim of this work is to develop a compliance control strategy for safe HRI. Compliance can be achieved through passive means (the mechanical structure or passive actuation) or through active compliance methods employing force/torque feedback. This thesis deals with the compliance control of the Bristol-Elumotion Robot Torso (BERT) II arm, which is inherently rigid and heavy. As the dynamic model of the arm is difficult to obtain and is prone to inaccuracies, parametric uncertainties and unmodelled nonlinearities, a model-free adaptive compliance controller is employed. The control scheme uses a mass-spring-damper system as a reference model to produce compliant behaviour. The adaptive control scheme may cause actuator saturation, which can lead to windup and eventually instability; hence, an anti-windup compensator is employed to address actuator saturation. The control scheme operates in Cartesian space (tracking the x, y and z coordinates) and employs four joints of the BERT II arm (shoulder flexion, shoulder abduction, humeral rotation and elbow flexion). Although the task requires only three degrees of freedom (DOF), the fourth, redundant DOF is used to generate human-like motion by minimising a gravitational cost function. The adaptive compliance control scheme works efficiently for this application and produces good tracking and compliance results. However, adaptive control schemes are not necessarily optimal in the control sense, which can complicate controller design, and it is difficult to incorporate constraints or other desired behaviour into them. Therefore, bio-inspired reinforcement learning (RL) schemes are explored. A recently formulated RL-based optimal adaptive control scheme is adopted and modified for real-time testing on the robot arm.
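The mass-spring-damper reference model mentioned above can be sketched as follows. This is a minimal single-axis illustration, not the thesis's implementation: the virtual mass, damping and stiffness values, the control period, and the semi-implicit Euler integration are all illustrative assumptions.

```python
# Hypothetical virtual impedance parameters and control period;
# the thesis does not report these numerical values.
M, B, K = 1.0, 8.0, 25.0   # virtual mass, damping, stiffness
DT = 0.001                 # control period in seconds (assumed)

def reference_model_step(x, xdot, x_des, f_ext):
    """One semi-implicit Euler step of the reference model
        M*xddot + B*xdot + K*(x - x_des) = f_ext.
    An external force f_ext deflects the reference trajectory away
    from the desired position x_des, which is what produces the
    compliant behaviour the controller then tracks."""
    xddot = (f_ext - B * xdot - K * (x - x_des)) / M
    xdot_new = xdot + xddot * DT
    x_new = x + xdot_new * DT
    return x_new, xdot_new
```

With these constants, a constant contact force f_ext settles to a steady-state deflection of f_ext / K from the desired position, so the virtual stiffness K directly sets how "soft" the arm feels.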
The RL-based scheme is implemented in joint space for both unconstrained and constrained cases. The results for the constrained case are particularly encouraging: the controller learns to deal with constraints in the form of joint limits. An RL-based Cartesian model reference compliance controller is also tested on two links of the BERT II arm, with generally very good results. However, there are limitations in representing the RL cost function and the control law using neural networks (NNs). These limitations are largely overcome through a novel, practical approach that represents both the cost function and the control via a simple neural network; nevertheless, the available computational power permitted only a two-link experimental implementation. Integrating these control approaches into a practical HRI system is important. A final achievement is an initial HRI experiment in which objects are passed between human and robot, employing the model reference adaptive compliance control scheme described above. This experimental scenario also uses a separate hand controller and a speech interface.
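The alternating critic/actor structure underlying such RL-based optimal adaptive controllers can be illustrated on a toy problem. The sketch below runs exact policy iteration for a scalar discrete-time LQR problem: the critic step solves for the quadratic value function V(x) = p*x^2 of the current policy u = -k*x, and the actor step improves k from p. This is only an analogy under assumed scalar dynamics; the thesis's scheme replaces these closed-form steps with neural-network approximations learned online on the multi-joint arm.

```python
def policy_iteration(a, b, q, r, k0, iters=50):
    """Policy iteration for the scalar plant x' = a*x + b*u with
    stage cost q*x^2 + r*u^2, starting from a stabilizing gain k0.
    Returns the converged gain k and value coefficient p."""
    k = k0
    for _ in range(iters):
        acl = a - b * k                  # closed-loop pole under u = -k*x
        assert abs(acl) < 1, "policy must be stabilizing"
        # Critic: Bellman equation p = q + r*k^2 + p*(a - b*k)^2
        p = (q + r * k * k) / (1 - acl * acl)
        # Actor: u* = argmin_u [q*x^2 + r*u^2 + p*(a*x + b*u)^2]
        k = a * b * p / (r + b * b * p)
    return k, p
```

Given a stabilizing initial policy, this iteration converges to the solution of the discrete-time algebraic Riccati equation, which is the optimality property that motivates using RL in place of a purely adaptive (but non-optimal) scheme.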