I. Introduction
Robotic manipulators are
composed of linkages that are generally modeled as perfectly rigid bodies. The
linkages are connected, or coupled, by servo-controlled joints that are either
revolute (rotational) or prismatic (translational). Manipulators most often
have six joints, and each joint has one degree of freedom. An end effector,
also referred to as a hand or gripper, is attached to the free end of the
manipulator, e.g., to the sixth joint of a six-joint manipulator.
The problem of analyzing
and predicting manipulator motion is, strictly speaking, divided into two
separate regimes. The first is the positioning problem, which may be described
as a kinematics problem. The second regime is that of manipulator dynamics and
control.
These two regimes are
described concisely by Kahn and Roth.[1]
Their statement of the kinematics problem is: "Given the desired position
and orientation of the free end of the manipulator, what are the joint
positions which will position the free end at the desired point in space with the
specified orientation?" The
question of dynamics is similarly posed, and the problem of control is discussed:
"What forces or torques must be exerted on the manipulator joints in order
to move the manipulator from its present configuration to the desired configuration?
If the manipulator is moved very slowly, no significant dynamic forces will act
on the system. A simple solution to this problem is to control each of the
joints independently. This is accomplished by using a control system whose
force output to each joint is a function of the position of the joint. However,
if rapid motions are desired, a more complex control system is required due to
the dynamic interactions which occur between the bodies in the chain."
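For the simplest case, a planar two-link arm, the kinematics question quoted above has a well-known closed-form answer. The sketch below, with illustrative link lengths, returns one of the two elbow configurations for a reachable point:

    import numpy as np

    def two_link_ik(x, y, L1=0.4, L2=0.3):
        """Joint angles placing the free end at (x, y); L1, L2 are link lengths."""
        c2 = (x**2 + y**2 - L1**2 - L2**2) / (2 * L1 * L2)
        if abs(c2) > 1.0:
            raise ValueError("point is out of reach")
        q2 = np.arccos(c2)                     # one of the two elbow solutions
        q1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(q2),
                                           L1 + L2 * np.cos(q2))
        return q1, q2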
The positioning problem,
as defined above, is not entirely equivalent to the position control problem.
The latter, being a control problem, necessarily involves system dynamics (for
rapid motion), whereas the former, being synonymous with kinematics, may
be defined as an abstract description of the possible geometric configurations
of the system.
Nevertheless, for the
purpose of manipulator modeling, position control may be considered from a
purely kinematic viewpoint. Then the torques and forces required for actual
manipulation are ignored. In designing a position control system, of course,
these torques and forces must be considered.
The distinction between real-world
design and kinematic modeling is put into perspective by Luh, Walker and Paul,[2]
who note that "the position control of the manipulator is a practical
problem in solving for the required input torques/forces to apply to the joints
for a set of desired positions, velocities, and accelerations." In their
paper, Luh, Walker and Paul develop a scheme (resolved-acceleration control)
for computing torques and forces to be applied to a manipulator, and thus they
consider the dynamics of position control.
In this report, the
kinematic control of mechanical manipulators—that is, position control without
regard for system dynamics—is considered in Section II. In Section III, position control dynamics is
considered, and in Section IV interactive position and force control and task
interaction are discussed.
II. Kinematic Control Models
The simplest form of
position control requires only that the robotic arm and end effector move from
one specified position and orientation to another. As far as control is concerned,
the manipulator's configuration and the orientation of the end effector during
the movement are ignored. Likewise, the velocities and accelerations of the
manipulator are not subject to real-time control.
The method of
teaching-by-doing, or guiding, is, in its simplest form, an example of this
sort of positional control. The end effector is moved by an operator through a
series of desired configurations, and points corresponding to end-points of the
desired motion are recorded by a positional transducer (optical encoder or
potentiometer) located at each manipulator joint. As the information is
recorded, it is stored in the computer-controller's memory. Manipulation then consists of the physical
playback of the desired positions and orientations. The motion of such a
manipulator, if not subject to further control, is referred to as
"bang-bang”. Robot manipulators
were originally bang-bang systems. Present-day robot programming languages may
allow guided programming as one method of entering position and orientation
data.
A. Coordinate Transformations
Manipulation actually takes place in a task-based coordinate system, which may be called the
world or Cartesian system, and this coordinate system can be related by
coordinate transformations to the six-dimensional configuration space used to
describe the positions of the joints of a six-degree-of-freedom manipulator.
Such coordinate transformations must be done, for example, if the Cartesian
coordinates of the end points of a positionally-controlled manipulator are
numerically entered in the computer-controller's memory.
Homogeneous
transformations are probably the most widely used robotic arm coordinate
transformations. Using this method, basic rotations of the end effector in
terms of specified base coordinates are first defined. The basic rotations may
be specified by Euler angles; roll, pitch, and yaw; quaternions; or some other
three-dimensional rotation scheme. (Taylor[3]
and others have used quaternions instead
of homogeneous transformations; Paul[4]
uses quaternions in one formulation of the homogeneous transformations.)
The fundamental form of
the homogeneous transformation is that of a four-by-four matrix describing the
translation or rotation of one manipulator link with respect to an adjacent
link. For a complete specification of joint angles or displacements in terms of
the end effector's Cartesian (base) position and orientation, the
transformation for a six-joint manipulator is of the form
T^0_6 = A^0_1 A^1_2 A^2_3 A^3_4 A^4_5 A^5_6

where the superscript on T identifies the manipulator base (link zero) and the A^(i-1)_i are 4×4 homogeneous transformation matrices describing the relation between link i and link i-1.
Homogeneous
transformations are inherently concerned with relationships between coordinate
systems. For implementation of the homogeneous transformation scheme, an
orthogonal three-dimensional coordinate system, which may be conveniently
referred to as a local Cartesian coordinate system, is imagined to be attached
to each manipulator joint. Thus the transformation given above is a succession
of transformations from one local Cartesian frame of reference to the next, starting
at the free end of the manipulator.
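To make the composition concrete, the sketch below builds each A^(i-1)_i from Denavit-Hartenberg parameters, one common convention for defining the link transformations; the joint angles and link geometry here are illustrative, not those of any particular manipulator:

    import numpy as np

    def link_transform(theta, d, a, alpha):
        """Denavit-Hartenberg transformation from frame i-1 to frame i."""
        ct, st = np.cos(theta), np.sin(theta)
        ca, sa = np.cos(alpha), np.sin(alpha)
        return np.array([[ct, -st * ca,  st * sa, a * ct],
                         [st,  ct * ca, -ct * sa, a * st],
                         [0.0,      sa,       ca,      d],
                         [0.0,     0.0,      0.0,    1.0]])

    # Hypothetical joint angles (radians) and constant link geometry (d, a, alpha).
    joint_angles = [0.1, -0.4, 0.6, 0.0, 0.3, -0.2]
    dh_rows = [(0.3, 0.0, np.pi / 2), (0.0, 0.4, 0.0), (0.0, 0.3, 0.0),
               (0.25, 0.0, np.pi / 2), (0.0, 0.0, -np.pi / 2), (0.1, 0.0, 0.0)]

    T = np.eye(4)                                 # T^0_0: the base frame
    for theta, (d, a, alpha) in zip(joint_angles, dh_rows):
        T = T @ link_transform(theta, d, a, alpha)    # accumulate A^(i-1)_i
    # T now holds T^0_6: position in T[:3, 3], orientation in T[:3, :3].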
As already mentioned, the
manipulator's end effector must have its position and orientation specified
with respect to the base or world coordinate system. In addition to the end
effector specification, objects to be manipulated also have local Cartesian
coordinates. One example of the
usefulness of homogeneous transformations is that of a video-monitored
workstation. The manipulator's computer-controller, using the relationships
among various local coordinate systems as defined with respect to a base coordinate
system visible to the camera, can direct the alignment of the gripper (end
effector) with an object. The object may then be picked up, pushed, or
otherwise reoriented. A human
operator may use this sort of setup with interactive robot programming
(possibly including interactive graphics), to perform positionally-controlled
remote manipulation via mathematical transformations. (There is more discussion
of remote manipulation in Section IV.)
B. Differential Transformations and Path Tracking
In order to actually put
into practice the preceding example of vision-based position control, fine
motions of the manipulator must be controlled. The problem of manipulator fine
motion control has received considerable attention due to its importance in mechanical
assembly. Most often, fine motion control is considered in conjunction with
force sensing.
The kinematic description
of fine motion control makes use of differential relationships. An example is
provided by Whitney's resolved rate control method (1972),[5]
wherein a differential transformation relates joint velocities (as the
independent variables) to the velocity components in an end-effector-based
coordinate system. Originally, Whitney was not concerned with fine motions but
with simply coordinating a manipulator's joint motions. As described by
Whitney, "The objective of coordinated control is to allow the operator of
a mechanical arm to command rates of the arm's hand along coordinate axes which
are convenient and visible to the operator." Later, Whitney (1977) applied
the idea of resolved rate control to the problem of controlling fine motions
using force sensing.
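A single resolved-rate cycle can be sketched as follows; it assumes a jacobian(q) routine appropriate to the particular manipulator, and the joint rates are obtained by inverting the differential relationship between joint and end-effector velocities:

    import numpy as np

    def resolved_rate_step(q, v_desired, jacobian, dt):
        """One control cycle: v_desired is the commanded 6-vector of hand
        velocities; the pseudo-inverse resolves it into joint rates."""
        q_dot = np.linalg.pinv(jacobian(q)) @ v_desired
        return q + q_dot * dt, q_dot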
Another case that can be
considered from the viewpoint of fine motion control is that of path tracking,
which, when combined with a path planning stage, fits into the general
framework of hierarchical manipulator control models. Given a positional control system that allows
a point in Cartesian workspace coordinates to be transformed into manipulator
joint positions, a path may be determined by the method of guiding or by
numerically entering the path information into computer memory. (In the latter
case, the computer-controller can interpolate between the given points to
provide better path specification.) Using the simple positional control model
discussed earlier, the manipulator will move to a specified point and
momentarily stop before moving on to the next point.
If the points are chosen
to be fairly close together, then using resolved rate control a straight-line
trajectory between each point may be implemented, along which velocities are
specified to be nonzero. Tachometers built into a manipulator’s joints can be
used to provide the necessary velocity feedback information.
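A sketch of this straight-line tracking scheme follows; the via points, speed, time step, and jacobian(q) routine are assumed inputs, and the end effector's orientation is simply held fixed:

    import numpy as np

    def track_path(via_points, speed, dt, q, jacobian):
        """Move through closely spaced Cartesian via points at nonzero speed."""
        for p0, p1 in zip(via_points[:-1], via_points[1:]):
            segment = p1 - p0
            duration = np.linalg.norm(segment) / speed
            v = np.concatenate([segment / duration, np.zeros(3)])  # hold orientation
            for _ in range(int(round(duration / dt))):
                q_dot = np.linalg.pinv(jacobian(q)) @ v   # resolved rate control
                q = q + q_dot * dt        # velocity feedback loop omitted here
        return q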
III. Dynamic Control Models
Dynamic control of a mechanical
manipulator may be examined in two contexts. First, position control may be
considered from the point of view of system dynamics, and indeed, for real-time
positional control and path tracking, manipulator dynamics must be considered.
Forces and torques must be calculated, although the system may still be
considered as an isolated system. The second dynamic control context is
realized when the manipulator interacts with its environment in such a way that
the work done by the manipulator on the environment (dW = F · dX)
is nonzero. For the most part, this
section will discuss the dynamics of the manipulator as an isolated system. The
next section (Force Control, Compliance and Task Interaction) will discuss the more general
problem of the manipulator/environment system.
A. Formulations of the Equations of Motion
The mechanical manipulator
equations of motion have been examined mainly by use of Lagrangian generalized
coordinates, recursive Newton-Euler methods, and a recursive Lagrangian
formulation. The complete Lagrange equations of motion do not lend themselves
to quick computation, and have proven to be too much of a computational burden
for use in real-time manipulator control. The recursive Lagrangian formulation given by Hollerbach[6]
makes real-time computation possible, as does the recursive Newton-Euler
formulation given by Luh, Walker and Paul.
Hollerbach makes use of a
previously developed backward (from the manipulator base) recursion formula for
generalized forces and extends this recursion to include forward recursion of
forces from the end link to the base of the manipulator. Luh et al. use backward recursion of the
velocities and accelerations, and joint torques are obtained by forward
recursion of forces using Newton-Euler methods. In Hollerbach’s scheme, the use
of 3×3 rotational matrices rather than 4×4 homogeneous transformations allows
faster computation using the recursive equations. Luh et al. make use of the relationships of moving coordinate systems to relate
adjacent link dynamics. Both the recursive Lagrangian method and the recursive
Newton-Euler method reduce the number of computations to order n, where n is the number of manipulator joints. In the full Lagrangian formulation, the number of computations, and therefore the computation time, is of order n^4.
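A planar simplification illustrates why the recursions are of order n: each link is visited once on the outward pass (velocities and accelerations) and once on the inward pass (forces and torques). The sketch below, with illustrative parameters and gravity absorbed into the base acceleration, is a two-dimensional stand-in for the full spatial formulations:

    import numpy as np

    def cross2(a, b):
        """Scalar 2-D cross product."""
        return a[0] * b[1] - a[1] * b[0]

    def newton_euler_planar(q, qd, qdd, L, r, m, I, g=9.81):
        """Joint torques for an n-link planar revolute arm.
        L: link lengths; r: joint-to-center-of-mass distances;
        m: link masses; I: link inertias about their centers of mass."""
        n = len(q)
        theta = np.cumsum(q)            # absolute link angles
        w = np.cumsum(qd)               # absolute angular velocities
        al = np.cumsum(qdd)             # absolute angular accelerations
        a_joint = np.array([0.0, g])    # fictitious base acceleration absorbs gravity
        a_com, p_vec, r_vec = [], [], []
        for i in range(n):              # outward recursion: base to tip
            u = np.array([np.cos(theta[i]), np.sin(theta[i])])
            perp = np.array([-u[1], u[0]])
            r_vec.append(r[i] * u)
            p_vec.append(L[i] * u)
            a_com.append(a_joint + al[i] * r[i] * perp - w[i]**2 * r[i] * u)
            a_joint = a_joint + al[i] * L[i] * perp - w[i]**2 * L[i] * u
        f_next, n_next = np.zeros(2), 0.0    # no load on the free end
        tau = np.zeros(n)
        for i in reversed(range(n)):    # inward recursion: tip to base
            f = f_next + m[i] * a_com[i]
            tau[i] = (n_next + I[i] * al[i] + cross2(r_vec[i], f)
                      + cross2(p_vec[i] - r_vec[i], f_next))
            f_next, n_next = f, tau[i]
        return tau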
Other formulations of the
equations of motion exist, but the recursive Newton-Euler and recursive
Lagrangian formulations remain the most efficient schemes for computing
manipulator dynamics. A recently proposed inherently discrete dynamic robot model
may lead to more efficient computational algorithms, however.
B. Implementation of Dynamics Computations
Making use of manipulator
equations of motion can be a theoretical exercise in optimal system control or
a practical implementation for the control of a particular manipulator, or a
combination of both.
Perhaps the best example
of the purely theoretical use of the equations of motion is Kahn and Roth's
paper, in which time-optimal manipulator control is studied. First, the full
Lagrangian equations of motion (neglecting friction) are given, then the
equations are solved for a particular three-link model. Next, the time-optimal
control problem is stated and solved. The solution to the time-optimal control
problem, a set of twelve Euler-Lagrange equations, shows that the optimal
control is bang-bang and has practical drawbacks: the control is purely a
function of time, so unexpected disturbances cannot be accounted for, and the
computations must be repeated for each set of initial and final conditions.
Kahn and Roth then
construct a sub-optimal control model in which the equations of motion are
linearized and uncoupled by a suitable linear transformation. A canonical form of the state equation
describing the plant (i.e., the manipulator) is found, and sub-optimal control
is shown to closely approximate the optimal control with regard to response times
and trajectories. The Kahn-Roth optimization excludes Coriolis and centrifugal
forces, however, and has not been found to be applicable to manipulators with
more than three links.
A model that considers the
first three links as a separate computational problem from that of the outer
links has been developed by Horak.[7]
For the first three manipulator links, the classical (non-matrix) Lagrange
equations are used, and the velocities are computed in link coordinates.
Succeeding links are described by the recursive Newton-Euler equations. The
main advantage of this method is its computational efficiency, which results
from the fact that the solutions to the equations of motion for the first three
links can be obtained in closed form. (The dynamics of the outer links may be
computed from the velocities and accelerations of the fourth joint, which is
considered to be attached to the end of the third link.) A slight
disadvantage is that the model assumes some specific manipulator
characteristics are known. The disadvantage, however, is partially offset
by the fact that the specific characteristics are common to manipulators
currently in use. If two microprocessors are used for the dynamics computation,
one for the Lagrangian calculations and one for the Newton-Euler calculations,
the computation can be done ten times faster than in the conventional
Newton-Euler approach.
Another method that makes
use of multiple processors is given by Luh and Lin.[8]
A CPU is assigned to each manipulator
link, and parallel dynamics computations are performed. Precedence relations
based on the dynamic coupling between the links are handled by a
"variable" branch-and-bound technique that determines the
optimum-ordered schedule for each of the CPUs. The optimum-ordered schedule is
the one resulting in a minimum-time pre-planned path.
The same sort of problem, that of
using path planning to maximize the manipulator's efficiency, is addressed by
Kim and Shin,[9]
who make use of manipulator dynamics in the path planning process. By taking
the dynamics into account, Kim and Shin are able to specify local upper bounds
on joint accelerations. Other path planning methods specify constant global
bounds on the joint accelerations, so that the manipulator will be able to
follow the pre-planned path under varying load conditions. The use of this
global least-upper-bound often causes a manipulator to move more slowly than is
necessary, thereby under-utilizing the manipulator's capabilities. Kim and Shin's path planning scheme solves the
under-utilization problem by transforming it into a set of local optimization
problems.
In addressing a similar
problem, Hollerbach has developed a method that predicts whether a given
manipulator trajectory is dynamically realizable and, more importantly, allows
the trajectory to be dynamically scaled so that it does not violate joint
torque bounds. Dynamic scaling may
result in slower motion or faster motion for a particular joint, and Hollerbach
gives examples of both, along with an example of a dynamically unrealizable
trajectory.
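The observation behind dynamic scaling can be sketched directly: when a joint-space path is replayed c times more slowly, both the acceleration terms and the quadratic velocity terms in the joint torques scale by 1/c^2, while the gravity load is unchanged (friction neglected). The helpers below, with hypothetical inputs, apply this rule and find a conservative slowdown that meets given torque bounds:

    import numpy as np

    def scaled_torques(tau, tau_gravity, c):
        """Torques for the same path replayed c times more slowly (c > 1)."""
        return tau_gravity + (tau - tau_gravity) / c**2

    def feasible_slowdown(tau, tau_gravity, tau_max):
        """Smallest c >= 1 bringing every joint within its torque bound.
        Uses the triangle inequality, so the result is conservative."""
        headroom = np.maximum(tau_max - np.abs(tau_gravity), 1e-9)
        c_sq = np.abs(tau - tau_gravity) / headroom
        return max(1.0, np.sqrt(c_sq.max()))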
Also significant is Hollerbach's
finding that velocity and acceleration terms in the manipulator equations of
motion are equally important at all speeds of movement. The assumption
generally applied in the robotics literature, Hollerbach says, is that the
velocity terms can be ignored except at higher movement speeds because they are
nonlinear in the joint rates. Hollerbach further states that if this assumption
is reasonable, then for consistency the acceleration terms should also be
ignored at slower speeds, although this is not done.
However, the paper of Kahn
and Roth apparently contradicts Hollerbach's statement, since as quoted earlier
it states that for slow movements "no significant dynamic forces will act
on the system." At any rate, high-speed movement is inherently desirable
and velocity terms may not be ignorable in designing future robotic arm control
systems.
IV. Force Control, Compliance and Task Interaction
Computer-controlled
mechanical manipulators are designed to be general purpose devices, as opposed
to so-called hard automation, which is designed to perform one repetitive
series of tasks. Manipulators therefore should be able to interact dynamically
with a changing environment. Attempts to achieve ideal general purpose
manipulation currently involve the use of force control and force-and-position
control operating together in compliance with positional constraints imposed by
the geometry of the manipulator's environment.
A. Force Control
Mason[10]
identifies two prevailing approaches to force control: the explicit feedback
approach and the hybrid controller approach. Mason
first discusses two types of explicit feedback force control, the generalized
spring and the generalized damper:
The explicit feedback
approach uses an explicit force control law which feeds sensed forces back to a
position or velocity controller. Typical of the explicit feedback approach is
the generalized spring which feeds
back force information through a stiffness matrix to a position controller.
This method can be modeled by the relation
f = K(p – p_0)

where f is the effector force, p is the effector position, and p_0 is the
nominal position, which is input from the planning system or
user program. K is the stiffness matrix,
which relates forces observed at the effector to deviations from the nominal
position. The stiffness matrix can be chosen to optimize performance of a
particular task. The generalized damper
method is similar in form but assumes a velocity controller instead of a
position controller. This method can be modeled by the relation
f = B(v – v_0)
where f is the effector
force, v is the effector velocity,
and v_0 is the nominal
velocity, which is input from the planning system or user program. B is the damping matrix, in this case relating effector force to deviations
from the nominal velocity. A generally useful choice for B is just the identity matrix times some negative damping
coefficient.
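Written out directly, the two control laws invert the relations above: the sensed force displaces the position or velocity setpoint through the inverse stiffness or damping matrix. K, B, and the nominal inputs would come from the planning system or user program; the values below are placeholders:

    import numpy as np

    def generalized_spring(f_sensed, p_nominal, K):
        """Position command consistent with f = K(p - p_0)."""
        return p_nominal + np.linalg.solve(K, f_sensed)

    def generalized_damper(f_sensed, v_nominal, B):
        """Velocity command consistent with f = B(v - v_0)."""
        return v_nominal + np.linalg.solve(B, f_sensed)

    # Mason's generally useful damper choice: the identity matrix times
    # some negative damping coefficient (the value here is arbitrary).
    B = -20.0 * np.eye(6)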
An implementation of the
generalized spring approach is the remote center compliance device developed at
the Charles Stark Draper Laboratory. The remote center compliance device is a
spring-loaded robotic arm "wrist" that is a passive generalized
spring device. The instrumented remote center compliance device makes use of
strain gauges for force measurement, and is therefore an active generalized
spring device.
An example of the
generalized damper is Whitney's "Force feedback of manipulator fine
motions," which makes use of the
resolved motion rate control method developed by Whitney. The scheme involves the use of a force-sensing
wrist (developed at the Draper Laboratory, but not equivalent to the remote
center compliance device) that provides six differential signals from which
forces and torques are computed. The signals are provided by three extensional
strain gauges and three strain gauge shear bridges cemented to three columns
that connect the two ring-shaped "wristbands" of the force-sensing
wrist.
Whitney's force feedback
scheme works as follows: An a priori motion
command is given in terms of coordinates attached to the end effector. The v command (the velocity command, in end
effector coordinates) is then transformed into joint velocities by resolved
motion rate control. When contact with the environment occurs, contact forces
and torques are set up. The force feedback gain matrix (the inverse of the
damping matrix) multiplies the force and torque quantities and creates velocity
modification commands. The force feedback gain matrix is linear, so that
Whitney's method is a linear force feedback method.
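One cycle of the scheme may be sketched as follows; the wrist-sensor and Jacobian routines are assumed rather than given, and the sign of the feedback term depends on the wrench sign convention adopted:

    import numpy as np

    def force_feedback_step(q, v_command, read_wrist, jacobian, G, dt):
        """v_command and the sensed wrench are in end-effector coordinates;
        G is the force feedback gain matrix (inverse of the damping matrix)."""
        wrench = read_wrist()                   # 6-vector: forces and torques
        v_modified = v_command - G @ wrench     # velocity modification command
        q_dot = np.linalg.pinv(jacobian(q)) @ v_modified   # resolved motion rate
        return q + q_dot * dt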
B. Compliance and Task Interaction
The hybrid controller
approach, rather than the explicit feedback approach, is used by Mason to
develop a compliance formalism. Mason's primary concern is the development of
control strategy synthesis methods, and to that end he develops formal models
of the manipulator (the ideal effector), the task geometry (the ideal surface),
and the desired manipulator behavior (goal trajectory).
Raibert and Craig[11]
use Mason's theoretical framework, which is partially based on earlier hybrid
controller work done by Raibert and Craig, to develop a "conceptually
simple approach to controlling compliant motions of a robot manipulator." The goal of Raibert and Craig, like that of
Mason, is to improve controller architecture so that force feedback and other
sensory information may be efficiently used.
Mason explains compliant
motion as that motion which “occurs when the position of a manipulator is
constrained by the task.” Others have referred to compliant motions as motions
of accommodation. In developing his formalism, Mason uses succinct descriptions
of the situations in which pure force control and pure position control may be
considered.
If the end effector is
buried in a solid block of immovable material, no positional freedom exists and
no positional control is possible. Any force
command can be accepted by the end effector, however, so that the manipulator
has complete force freedom.
On the other hand, a
manipulator end effector unconstrained and in a space free of obstacles has
complete positional freedom but no force freedom—nothing is present to accept a
force applied by the manipulator. This
is the regime of pure position control.
Intermediate between the
two regimes are configuration space "C-surfaces" as explained by
Mason:
Loosely speaking, a C-surface is a
task configuration which allows only partial
positional freedom. Freedom of motion occurs along C-surface tangents, while
freedom of force occurs along C-surface normals. Neither pure position or pure
force control is appropriate in this case, but rather a hybrid mode of control,
which gives control of effector force along the C-surface normal and control of
effector position along the C-surface tangent.
Raibert and Craig explain
more about the C-surfaces: "These two types of constraint, force and
position, partition the degrees of freedom of possible hand motions into two
orthogonal sets that must be controlled according to different criteria."
In order to implement the different criteria, artificial constraints must be specified. Artificial constraints
represent a generalization of manipulator control strategies, and are used to
specify desired motions or forces. The artificial constraints are complementary
to the natural constraints of the C-surfaces: Artificial force constraints are
specified along surface normals and artificial position constraints are
specified along surface tangents. (Although not explicitly stated above, it is
implied that natural force constraints are along C-surface tangents and natural
position constraints are along C-surface normals.)
The coordinate system in
which the constraints and trajectories are specified is a Cartesian system
defined with respect to the task geometry. If the constraint coordinate system
is properly chosen, manipulator trajectories are easily determined, and force
constraints may remain constant during the trajectory.
The case of a manipulator
turning a crank provides a good illustration of the use of natural and
artificial constraints. An XYZ
coordinate system is attached to the handle of the crank, with the crank's
angular motion confined to the XY plane and the positive Z axis pointing upward
from the upward-pointing crank handle. Forces, torques, linear velocities, and
angular velocities are given in terms of their XYZ components. The natural
constraints imply that v_x, v_z, ω_x, ω_y, f_y, and n_z are all zero. These
quantities are linear velocities, angular velocities, force, and torque in the
indicated directions. The artificial constraints are therefore ω_z = constant,
and v_y, f_x, f_z, n_x, and n_y all equal zero.
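In the hybrid controller formulation, such constraints are commonly encoded in a diagonal selection matrix that routes each constraint-frame axis to either the position law or the force law. The sketch below uses the crank example's axes; the gain matrices are placeholders:

    import numpy as np

    # Axis order: (x, y, z, rot-x, rot-y, rot-z) in the handle-fixed frame.
    # 1 selects position control (the artificial position/velocity
    # constraints, v_y and omega_z); 0 selects force control elsewhere.
    S = np.diag([0, 1, 0, 0, 0, 1])

    def hybrid_command(x_err, f_err, Kp, Kf):
        """Each axis is served by exactly one control law: position errors act
        along C-surface tangents, force errors along C-surface normals."""
        return S @ (Kp @ x_err) + (np.eye(6) - S) @ (Kf @ f_err)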
Hybrid control and active
compliance are architectural concepts that link the constraints of a task
requiring force feedback (via, for instance, a force-sensing wrist) to the
actual controller design, and are therefore useful in achieving kinesthetic feedback
for the purpose of remote manipulation.
Force feedback first found
application as an adjunct to position control of remote manipulators. Force
feedback is still used in that context, but the emphasis is now on a general
application of force feedback to both remote manipulation and computer-controlled
manipulation. The general implementation of force feedback and other sensory
sources has been referred to as data-driven automation.
The generality of data-driven
automation is achieved by using real-time mathematical transformation of
robotic arm joint variables, where joint variables are measured both at the remote
manipulator's input device and at the manipulator output. Computer control is therefore an interface between
the operator and the manipulator, and the degree of direct, autonomous computer
control over the manipulator may be
varied by the operator, or computer control may be used simply to supply force
and torque feedback to the hand controller, thus enabling the operator to
perform fine-motion remote manipulation. A system that uses the kinematics of
homogeneous transformations is described by Bejczy and Salisbury.[12] Video input using homogeneous
transformations, as described in Section II, is easily implemented with such a
system, but with force feedback supplying fine-motion information, video
monitoring would mainly supply information for large-scale manipulator
movements.