Thursday, October 1, 2015

1963-64: When We Moved to Pine Bluff



Published in the Jefferson County Historical Quarterly, Spring 2014.

 When We Moved to Pine Bluff

By David Trulock

Quite a few historically significant 50th anniversaries have come and gone recently. Among them, the anniversaries of President Kennedy’s assassination and of the Beatles’ arrival in the United States have deservedly received the most attention. One event that occurred between the JFK assassination and the Beatles’ debut in America has deservedly received no attention at all, until now: my family’s move to Pine Bluff.

We weren’t particularly special in moving to Pine Bluff at that time. Between 1960 and 1970, the city’s population grew from 44,037 to 57,389, a thirty percent increase that resulted in the highest census count Pine Bluff has yet achieved.  There were numerous jobs to be had in Pine Bluff in the 1960s, jobs with the railroad, the two paper mills, the Pine Bluff Arsenal, Arkansas Power & Light Company, the newly built Jefferson Hospital, and the huge nearby dredging operations on the Arkansas River that helped to create the McClellan-Kerr Navigation System and the Port of Pine Bluff.

In contrast to people who moved from far away to take new jobs in Pine Bluff, and severed or stretched ties with family and friends in doing so, my family moved only about ten miles.  And my father didn’t change jobs—he’d done that a few years before, when he went from farming to being a stock broker and owning his own business, Trulock & Company, in downtown Pine Bluff.

Since 1955 my family had lived on my Trulock grandparents’ farm along the Arkansas River, just across the Free Bridge five miles north of town on Highway 79 (now Highway 79B). We lived in a sort of extended-family compound, in an arrangement reminiscent of the Old South. My grandparents’ home was about a double stone’s throw from the converted brick dairy-barn house my family lived in, and my uncle, aunt and cousin (Leo, Sue and Lynne Trulock) lived within a half mile of us.

In the middle of our family compound were the “quarters,” a few basic sharecropper houses, rather old but constructed of long-lasting cypress, where some of the black families whose fathers worked on the farm lived. Two families with kids close to my age were those of Calvin Murry and Charlie Lee.  Although we weren’t supposed to go in their houses and vice-versa, the outdoor activities among the boys in the Lee, Murry, and Trulock families were a free-for-all of swimming in cattle ponds we weren’t supposed to swim in and playing in cotton trailers—at least up until a cap pistol caused a fire at the gin and my grandfather put the cotton trailers off limits.

So moving to town did change the childhood activities my brothers and I had taken for granted, but it didn’t stretch family ties or change my parents’ friendships and community activities. My maternal grandparents, Arch and Elner (King) Miller, weren’t affected by our move, either. They’d lived in Pine Bluff in the 1920s, 30s, and early 40s, but by 1964 they were living in the Pulaski Heights section of Little Rock (a frugal middle-class couple could still afford to buy a house there in the 1950s). So they were still about the same distance from us once we’d moved into town.

The address we moved to was 4006 Cherry Street, with Eden Park Country Club a few blocks to the east, and Jefferson Hospital a few blocks to the west. Although I didn’t realize it at the time, the move to town was inevitable, given the fact that my parents wanted their sons to attend school in the Pine Bluff school district, as they both had done, rather than in the Altheimer school district. Prior to buying the house on Cherry Street in the late summer of ’63, my father had been paying tuition at Trinity Episcopal Day School for three of his sons, and two more would be entering the school system in a few years.

The purchase of the house on Cherry Street meant my parents could enroll my two school-age brothers and me at Forrest Park Elementary School for the 1963-64 school year. (Other cities have their Forest Parks; Pine Bluff has its Forrest Park section of town, the result of either a misspelling of forest or of someone's desire to memorialize the Confederate general of that name, or both.) My brother Jeff was in the 5th grade, I was in the 4th grade, and Steven was in the 2nd grade. At the start of the school year, Greg was two years old and Arch was still in the womb.

When Mother asked Greg the standard question of whether he wanted the baby to be a boy or a girl, he chose a third option: “A turtle!” Being the youngest of five brothers, and not having any sisters, Arch probably sometimes wished he’d had a protective shell. He was born on October 23rd in Jefferson Hospital. We were still living on the farm at the time.

With Arch’s birth our family was complete, and the remodeling of the house on Cherry Street, under the supervision of Leslie McIntyre, was about halfway finished.  In December, we had our last Christmas on the farm, and sometime around the beginning of the New Year we moved into town.

In the world at large during 1963 and ‘64, changes were occurring that would have lasting effects.  I can think of quite a few others besides the two events mentioned at the beginning of this article, but will focus only on two more, each historically significant in its own way. I’ll also mention one other event related to my family, significant only to me.

In Pine Bluff, in February of ’63, sit-ins at Woolworth’s lunch counter by AM&N students and others signaled the beginning of the end of the city’s retail establishments’ racial discrimination in its most blatant form, the unequal segregation of "colored" from "white." In May of ’64, in New Jersey, two physicists at Bell Laboratories—trying to find the source of the constant, low-level noise picked up by their microwave antenna— accidentally discovered the “cosmic background radiation” that later provided the first incontrovertible evidence for the Big Bang cosmological model.

Also in May of 1964, a Pine Bluff Commercial photographer came to 4006 Cherry and took a photo for an article about the League of Women Voters.  The original photo is reproduced below. As is often the case in the newsprint business, something had to be cut to make the published item fit the space available.  I was cut out of the published photo, and also not mentioned along with my brothers in the caption.  After fifty years, I’m glad to be able to put myself back in the picture.
 
One of the things we Trulock boys (and our dad) had to grudgingly put up with was Mother's insistence on having "family meetings" that turned out to be lectures from her. The photographer who snapped this picture thus captured an emblematic moment in our family history. From left: Steven, Betty, Arch, Jeff, Greg, and me (the one who got cropped).  --D.T.

Wednesday, August 12, 2015

Robotic Manipulator Overview 1985

This is a study of robotics I did in 1985, when I was applying for a job with a new Little Rock research company that had a contract with the NASA Jet Propulsion Lab. It isn't original research; it's just a survey of research in the field of robotic manipulators. I did get the job, but I worked only about three months because the company was shut down after an Air Force investigation into another contract the company was not fulfilling.

Here are a couple of links about the meaning of kinematics and homogeneous transformations. Four-dimensional matrices are of interest to me because they're also used in relativity: three space dimensions and one time dimension in the same matrices. Also, one of my favorite parts of the first mechanics course I took, taught by Richard Rolleigh at Hendrix College in 1978-79, was the subject of coordinate transformations (from one type of coordinate system to another, such as Cartesian to parabolic cylindrical). I didn't much like the textbook (Symon), however. My current favorite mechanics text is Classical Dynamics: A Contemporary Approach.


Robotic Manipulator Overview
I. Introduction
II. Kinematic Control Models
      A. Coordinate Transformations
      B. Differential Transformations and Path Tracking
III. Dynamic Control Models
      A. Formulations of the Equations of Motion
      B. Implementation of Dynamics Computations
IV. Force Control, Compliance and Task Interaction
      A. Force Control
      B.  Compliance and Task Interaction




Introduction

Robotic manipulators are composed of linkages that are generally modeled as perfectly rigid bodies. The linkages are connected, or coupled, by servo-controlled joints that are either revolute (rotational) or prismatic (translational). Manipulators most often have six joints, and each joint has one degree of freedom. An end effector, also referred to as a hand or gripper, is attached to the free end of the manipulator, e.g., to the sixth joint of a six-joint manipulator.

The problem of analyzing and predicting manipulator motion is, strictly speaking, divided into two separate regimes. The first is the positioning problem and it may be described as a kinematics problem. The second regime is that of manipulator dynamics and control.

These two regimes are described concisely by Kahn and Roth.[1] Their statement of the kinematics problem is: "Given the desired position and orientation of the free end of the manipulator, what are the joint positions which will position the free end at the desired point in space with the specified orientation?"  The question of dynamics is similarly posed, and the problem of control is discussed: "What forces or torques must be exerted on the manipulator joints in order to move the manipulator from its present configuration to the desired configuration? If the manipulator is moved very slowly, no significant dynamic forces will act on the system. A simple solution to this problem is to control each of the joints independently. This is accomplished by using a control system whose force output to each joint is a function of the position of the joint. However, if rapid motions are desired, a more complex control system is required due to the dynamic interactions which occur between the bodies in the chain."

The positioning problem, as defined above, is not entirely equivalent to the position control problem. The latter, being a control problem, necessarily involves system dynamics (for rapid motion), whereas the former term, being synonymous with kinematics, may be defined as an abstract description of the possible geometric configurations of the system.
Nevertheless, for the purpose of manipulator modeling, position control may be considered from a purely kinematic viewpoint. Then the torques and forces required for actual manipulation are ignored. In designing a position control system, of course, these torques and forces must be considered.

The distinction between real-world design and kinematic modeling is put into perspective by Luh, Walker and Paul,[2] who note that "the position control of the manipulator is a practical problem in solving for the required input torques/forces to apply to the joints for a set of desired positions, velocities, and accelerations." In their paper, Luh, Walker and Paul develop a scheme (resolved-acceleration control) for computing torques and forces to be applied to a manipulator, and thus they consider the dynamics of position control.

In this report, the kinematic control of mechanical manipulators—that is, position control without regard for system dynamics—is considered in Section II.  In Section III, position control dynamics is considered, and in Section IV interactive position and force control and task interaction are discussed.

II. Kinematic Control Models


The simplest form of position control requires only that the robotic arm and end effector move from one specified position and orientation to another. As far as control is concerned, the manipulator's configuration and the orientation of the end effector during the movement are ignored. Likewise, the velocities and accelerations of the manipulator are not subject to real-time control.

The method of teaching-by-doing, or guiding, is, in its simplest form, an example of this sort of positional control. The end effector is moved by an operator through a series of desired configurations, and points corresponding to end-points of the desired motion are recorded by a positional transducer (optical encoder or potentiometer) located at each manipulator joint. As the information is recorded it is stored in a computer-controller's memory. Manipulation then consists of the physical playback of the desired positions and orientations. The motion of such a manipulator, if not subject to further control, is referred to as "bang-bang." Robot manipulators were originally bang-bang systems. Present-day robot programming languages may allow guided programming as one method of entering position and orientation data.

A. Coordinate Transformations

Manipulation actually takes place in some sort of task-based coordinate system, which may be called the world or Cartesian system, and this coordinate system can be related by coordinate transformations to the six-dimensional configuration space used to describe the positions of the joints of a six-degree-of-freedom manipulator. Such coordinate transformations must be done, for example, if the Cartesian coordinates of the end points of a positionally-controlled manipulator are numerically entered in the computer-controller's memory.

Homogeneous transformations are probably the most widely used robotic arm coordinate transformations. Using this method, basic rotations of the end effector in terms of specified base coordinates are first defined. The basic rotations may be specified by Euler angles; by roll, pitch, and yaw; by quaternions; or by some other three-dimensional rotation scheme. (Taylor[3] and others have used quaternions instead of homogeneous transformations; Paul[4] uses quaternions in one formulation of the homogeneous transformations.)
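
As an illustration, here is a minimal numpy sketch of one common way to turn a roll-pitch-yaw specification into a 3×3 rotation matrix and embed it, together with a translation, in a 4×4 homogeneous transform. The rotation order (yaw about z, then pitch about y, then roll about x) is an assumption made only for this sketch; other conventions differ in the order and axes of the three elementary rotations.

import numpy as np

def rpy_to_rotation(roll, pitch, yaw):
    """3x3 rotation matrix from roll-pitch-yaw angles (radians),
    using the assumed convention R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def homogeneous(R, p):
    """Embed a rotation R and a translation p in a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

# Example: an orientation from roll-pitch-yaw plus a translation of the origin.
print(homogeneous(rpy_to_rotation(0.1, 0.2, 0.3), np.array([0.5, 0.0, 0.2])))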

The fundamental form of the homogeneous transformation is that of a four-by-four matrix describing the translation or rotation of one manipulator link with respect to an adjacent link. For a complete specification of joint angles or displacements in terms of the end effector's Cartesian (base) position and orientation, the transformation for a six-joint manipulator is of the form

T^0_6 = A^0_1 A^1_2 A^2_3 A^3_4 A^4_5 A^5_6

where the superscript on T identifies the manipulator base (link zero) and the A^(i-1)_i are 4×4 homogeneous transformation matrices describing the relation between the i-th link and link i-1.

Homogeneous transformations are inherently concerned with relationships between coordinate systems. For implementation of the homogeneous transformation scheme, an orthogonal three-dimensional coordinate system, which may be conveniently referred to as a local Cartesian coordinate system, is imagined to be attached to each manipulator joint. Thus the transformation given above is a succession of transformations from one local Cartesian frame of reference to the next, starting at the free end of the manipulator.
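
The chained product above is straightforward to compute once the individual link transforms are defined. The sketch below builds each A^(i-1)_i from standard Denavit-Hartenberg parameters, which is one common (though not the only) way to assign the link frames; the link parameters and joint angles are made up purely for illustration and do not describe any particular manipulator.

import numpy as np

def dh_transform(theta, d, a, alpha):
    """4x4 homogeneous transform A^(i-1)_i built from standard
    Denavit-Hartenberg parameters for joint/link i."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, link_params):
    """Compose T^0_6 = A^0_1 A^1_2 ... A^5_6 for a six-joint arm.
    link_params is a list of (d, a, alpha) tuples, one per link."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, link_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Illustrative parameters for a made-up six-joint arm.
params = [(0.3, 0.0, np.pi/2), (0.0, 0.4, 0.0), (0.0, 0.1, np.pi/2),
          (0.35, 0.0, -np.pi/2), (0.0, 0.0, np.pi/2), (0.1, 0.0, 0.0)]
q = np.deg2rad([10, 30, -45, 20, 60, 0])
T06 = forward_kinematics(q, params)
print(T06[:3, 3])   # end-effector position in base (world) coordinates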

As already mentioned, the manipulator's end effector must have its position and orientation specified with respect to the base or world coordinate system. In addition to the end effector specification, objects to be manipulated also have local Cartesian coordinates. One example of the usefulness of homogeneous transformations is that of a video-monitored workstation. The manipulator's computer-controller, using the relationships among various local coordinate systems as defined with respect to a base coordinate system visible to the camera, can direct the alignment of the gripper (end effector) with an object. The object may then be picked up, pushed, or otherwise reoriented. A human operator may use this sort of setup with interactive robot programming (possibly including interactive graphics), to perform positionally-controlled remote manipulation via mathematical transformations. (There is more discussion of remote manipulation in Section IV.)

B. Differential Transformations and Path Tracking

In order to actually put into practice the preceding example of vision-based position control, fine motions of the manipulator must be controlled. The problem of manipulator fine motion control has received considerable attention due to its importance in mechanical assembly. Most often, fine motion control is considered in conjunction with force sensing.
The kinematic description of fine motion control makes use of differential relationships. An example is provided by Whitney's resolved rate control method (1972),[5] wherein a differential transformation relates joint velocities (as the independent variables) to the velocity components in an end-effector-based coordinate system. Originally, Whitney was not concerned with fine motions but with simply coordinating a manipulator's joint motions. As described by Whitney, "The objective of coordinated control is to allow the operator of a mechanical arm to command rates of the arm's hand along coordinate axes which are convenient and visible to the operator." Later, Whitney (1977) applied the idea of resolved rate control to the problem of controlling fine motions using force sensing.

Another case that can be considered from the viewpoint of fine motion control is that of path tracking, which, when combined with a path planning stage, fits into the general framework of hierarchical manipulator control models.  Given a positional control system that allows a point in Cartesian workspace coordinates to be transformed into manipulator joint positions, a path may be determined by the method of guiding or by numerically entering the path information into computer memory. (In the latter case, the computer-controller can interpolate between the given points to provide better path specification.) Using the simple positional control model discussed earlier, the manipulator will move to a specified point and momentarily stop before moving on to the next point.
If the points are chosen to be fairly close together, then using resolved rate control a straight-line trajectory between each point may be implemented, along which velocities are specified to be nonzero. Tachometers built into a manipulator’s joints can be used to provide the necessary velocity feedback information.
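
As a minimal sketch of the resolved rate idea (not Whitney's own formulation), a commanded Cartesian end-effector velocity can be mapped to joint rates through the pseudoinverse of the manipulator Jacobian. Here the Jacobian is approximated numerically, and the toy planar arm and its numbers are purely illustrative.

import numpy as np

def numerical_jacobian(fk_position, q, eps=1e-6):
    """Approximate the 3 x N positional Jacobian dp/dq by finite differences.
    fk_position(q) must return the end-effector position for joint vector q."""
    p0 = fk_position(q)
    J = np.zeros((3, len(q)))
    for i in range(len(q)):
        dq = np.zeros_like(q)
        dq[i] = eps
        J[:, i] = (fk_position(q + dq) - p0) / eps
    return J

def resolved_rate_step(fk_position, q, v_desired, dt):
    """One control cycle: joint rates producing the commanded Cartesian
    velocity, obtained through the Jacobian pseudoinverse."""
    J = numerical_jacobian(fk_position, q)
    q_dot = np.linalg.pinv(J) @ v_desired
    return q + q_dot * dt

def planar_fk(q, lengths=(0.5, 0.4, 0.3)):
    """Toy three-joint planar arm used only to exercise the controller."""
    x = y = angle = 0.0
    for qi, li in zip(q, lengths):
        angle += qi
        x += li * np.cos(angle)
        y += li * np.sin(angle)
    return np.array([x, y, 0.0])

q = np.array([0.3, 0.5, -0.2])
for _ in range(100):                      # track a straight line along +x
    q = resolved_rate_step(planar_fk, q, np.array([0.05, 0.0, 0.0]), dt=0.01)
print(planar_fk(q))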

III. Dynamic Control Models

Dynamic control of a mechanical manipulator may be examined in two contexts. First, position control may be considered from the point of view of system dynamics, and indeed, for real-time positional control and path tracking, manipulator dynamics must be considered. Forces and torques must be calculated, although the system may still be considered as an isolated system. The second dynamic control context is realized when the manipulator interacts with its environment in such a way that the work done by the manipulator on the environment (dW = F·dX) is nonzero. For the most part, this section will discuss the dynamics of the manipulator as an isolated system. Section IV (Force Control, Compliance and Task Interaction) will discuss the more general problem of the manipulator/environment system.

A. Formulations of the Equations of Motion

The mechanical manipulator equations of motion have been examined mainly by use of Lagrangian generalized coordinates, recursive Newton-Euler methods, and by a recursive Lagrangian formulation. The complete Lagrange equations of motion do not lend themselves to quick computation, and have proven to be too much of a computational burden for use in real-time manipulator control. The recursive Lagrangian formulation given by Hollerbach[6] makes real-time computation possible, as does the recursive Newton-Euler formulation given by Luh, Walker and Paul.

Hollerbach makes use of a previously developed backward (from the manipulator base) recursion formula for generalized forces and extends this recursion to include forward recursion of forces from the end link to the base of the manipulator. Luh et al. use backward recursion of the velocities and accelerations, and joint torques are obtained by forward recursion of forces using Newton-Euler methods. In Hollerbach’s scheme, the use of 3×3 rotational matrices rather than 4×4 homogeneous transformations allows faster computation using the recursive equations. Luh et al. make use of the relationships of moving coordinate systems to relate adjacent link dynamics. Both the recursive Lagrangian method and the recursive Newton-Euler method reduce the number of computations to order n, where n is the number of manipulator joints. In the full Lagrangian formulation, the number of computations, and therefore the computation time, is of order n^4.
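
To make the O(n) recursive structure concrete, here is a sketch of the Newton-Euler recursion specialized to a planar arm whose joints all rotate about z. The full spatial algorithm of Luh, Walker and Paul uses 3×3 rotation matrices and link inertia tensors, but the pattern is the same: an outward pass propagates velocities and accelerations from base to tip, and an inward pass propagates forces and moments from tip to base, yielding the joint torques. The link data below are placeholders.

import numpy as np

def rot(q):
    """2x2 rotation matrix for a revolute joint angle q."""
    c, s = np.cos(q), np.sin(q)
    return np.array([[c, -s], [s, c]])

def cross2(a, b):
    """z-component of the cross product of two planar vectors."""
    return a[0] * b[1] - a[1] * b[0]

def planar_rne(q, qd, qdd, lengths, masses, coms, inertias, g=9.81):
    """Joint torques for a planar serial arm via recursive Newton-Euler.
    coms[i] is the distance of link i's mass center from joint i; gravity is
    folded in by giving the base an upward acceleration g (the usual trick)."""
    n = len(q)
    omega, alpha = np.zeros(n), np.zeros(n)
    a_joint, a_com = np.zeros((n, 2)), np.zeros((n, 2))

    # Outward recursion (base to tip): velocities and accelerations.
    prev_omega, prev_alpha = 0.0, 0.0
    prev_a, prev_len = np.array([0.0, g]), 0.0
    for i in range(n):
        p = np.array([prev_len, 0.0])   # joint i's origin in the previous frame
        a_in = prev_a + prev_alpha * np.array([-p[1], p[0]]) - prev_omega**2 * p
        omega[i] = prev_omega + qd[i]
        alpha[i] = prev_alpha + qdd[i]
        a_joint[i] = rot(q[i]).T @ a_in     # express in link i's own frame
        rc = np.array([coms[i], 0.0])
        a_com[i] = a_joint[i] + alpha[i] * np.array([-rc[1], rc[0]]) - omega[i]**2 * rc
        prev_omega, prev_alpha, prev_a, prev_len = omega[i], alpha[i], a_joint[i], lengths[i]

    # Inward recursion (tip to base): forces, moments, joint torques.
    tau = np.zeros(n)
    f_child, n_child = np.zeros(2), 0.0
    for i in reversed(range(n)):
        F = masses[i] * a_com[i]
        N = inertias[i] * alpha[i]
        f_from_child = rot(q[i + 1]) @ f_child if i + 1 < n else np.zeros(2)
        n_i = (N + n_child + cross2(np.array([coms[i], 0.0]), F)
               + cross2(np.array([lengths[i], 0.0]), f_from_child))
        tau[i] = n_i
        f_child, n_child = F + f_from_child, n_i
    return tau

# Static check: a single horizontal link needs tau = m * g * r to hold it up.
print(planar_rne([0.0], [0.0], [0.0], lengths=[1.0], masses=[2.0],
                 coms=[0.5], inertias=[0.1]))    # approximately [9.81]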

Other formulations of the equations of motion exist, but the recursive Newton-Euler and recursive Lagrangian formulations remain the most efficient schemes for computing manipulator dynamics. A recently proposed inherently discrete dynamic robot model may lead to more efficient computational algorithms, however.

B. Implementation of Dynamics Computations

Making use of manipulator equations of motion can be a theoretical exercise in optimal system control or a practical implementation for the control of a particular manipulator, or a combination of both.

Perhaps the best example of the purely theoretical use of the equations of motion is Kahn and Roth's paper, in which time-optimal manipulator control is studied. First, the full Lagrangian equations of motion (neglecting friction) are given, then the equations are solved for a particular three-link model. Next, the time-optimal control problem is stated and solved. The solution to the time-optimal control problem, however, which is a set of twelve Euler-Lagrange equations, shows that the optimal control is bang-bang: it can be computed only as an open-loop function of time, unexpected disturbances cannot be accounted for, and the computations must be repeated for each set of initial and final conditions.

Kahn and Roth then construct a sub-optimal control model in which the equations of motion are linearized and uncoupled by a suitable linear transformation. A canonical form of the state equation describing the plant (i.e., the manipulator) is found, and sub-optimal control is shown to closely approximate the optimal control with regard to response times and trajectories. The Kahn-Roth optimization excludes Coriolis and centrifugal forces, however, and has not been found to be applicable to manipulators with more than three links.

A model that considers the first three links as a separate computational problem from that of the outer links has been developed by Horak.[7] For the first three manipulator links, the classical (non-matrix) Lagrange equations are used, and the velocities are computed in link coordinates. Succeeding links are described by the recursive Newton-Euler equations. The main advantage of this method is its computational efficiency, which results from the fact that the solutions to the equations of motion for the first three links can be obtained in closed form. (The dynamics of the outer links may be computed from the velocities and accelerations of the fourth joint. The fourth joint is considered to be attached to the end of the third link.) A slight disadvantage is that the model assumes some specific manipulator characteristics are known. The disadvantage, however, is partially overridden by the fact that the specific characteristics are common to manipulators currently in use. If two microprocessors are used for the dynamics computation, one for the Lagrangian calculations and one for the Newton-Euler calculations, the computation can be done ten times faster than in the conventional Newton-Euler approach.

Another method that makes use of multiple processors is given by Luh and Lin.[8]  A CPU is assigned to each manipulator link, and parallel dynamics computations are performed. Precedence relations based on the dynamic coupling between the links are handled by a "variable" branch-and-bound technique that determines the optimum-ordered schedule for each of the CPU's. The optimum-ordered schedule is the one resulting in a minimum-time pre-planned path.

The same sort of problem of using path planning to maximize the manipulator's efficiency is addressed by Kim and Shin,[9] who make use of manipulator dynamics in the path planning process. By taking the dynamics into account, Kim and Shin are able to specify local upper bounds on joint accelerations. Other path planning methods specify constant global bounds on the joint accelerations, so that the manipulator will be able to follow the pre-planned path under varying load conditions. The use of this global least-upper-bound often causes a manipulator to move more slowly than is necessary, thereby under-utilizing the manipulator's capabilities.  Kim and Shin's path planning scheme solves the under-utilization problem by transforming it into a set of local optimization problems.

In addressing a similar problem, Hollerbach has developed a method for predicting whether a given manipulator trajectory is dynamically realizable and, more importantly, for dynamically scaling the trajectory so that it does not violate joint torque bounds. Dynamic scaling may result in slower motion or faster motion for a particular joint, and Hollerbach gives examples of both, along with an example of a dynamically unrealizable trajectory.
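
The kinematic half of the scaling argument is easy to see in a few lines: replaying a trajectory s times more slowly divides velocities by s and accelerations by s^2, so the acceleration-dependent and velocity-product torque terms shrink by a factor of s^2 while gravity torques are unchanged. The sketch below illustrates only that kinematic scaling; choosing s from the actual joint torque bounds is the substance of Hollerbach's method.

import numpy as np

def scale_trajectory(q_of_t, duration, s):
    """Return the time-scaled trajectory q_s(t) = q(t / s) and its new duration.
    Velocities scale by 1/s and accelerations by 1/s**2."""
    return (lambda t: q_of_t(t / s)), duration * s

def deriv(f, t, h=1e-5):
    """Simple numerical derivative, adequate for this illustration."""
    return (f(t + h) - f(t - h)) / (2 * h)

# A cubic joint trajectory from 0 to 1 rad over 1 s, then slowed by a factor of 2.
q = lambda t: 3 * t**2 - 2 * t**3
q_slow, T_slow = scale_trajectory(q, 1.0, s=2.0)
print(deriv(q, 0.25), deriv(q_slow, 0.5))   # the second velocity is half the first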

Also significant is Hollerbach's finding that velocity and acceleration terms in the manipulator equations of motion are equally important at all speeds of movement. The generally applied assumption in robotics literature, Hollerbach says, is that the velocity terms can be ignored except at higher movement speeds because they contain nonlinear terms. Hollerbach further states that this assumption is reasonable, but that for consistency acceleration terms should also be ignored at slower speeds, although this is not done.

However, the paper of Kahn and Roth apparently contradicts Hollerbach's statement, since as quoted earlier it states that for slow movements "no significant dynamic forces will act on the system." At any rate, high-speed movement is inherently desirable and velocity terms may not be ignorable in designing future robotic arm control systems.


IV. Force Control, Compliance and Task Interaction


Computer-controlled mechanical manipulators are designed to be general purpose devices, as opposed to so-called hard automation, which is designed to perform one repetitive series of tasks. Manipulators therefore should be able to interact dynamically with a changing environment. Attempts to achieve ideal general purpose manipulation currently involve the use of force control and force-and-position control operating together in compliance with positional constraints imposed by the geometry of the manipulator's environment.

A. Force Control

Mason[10] identifies two prevailing approaches to force control, the explicit feedback approach and the hybrid controller approach.  Mason first discusses two types of explicit feedback force control, the generalized spring and the generalized damper:

The explicit feedback approach uses an explicit force control law which feeds sensed forces back to a position or velocity controller. Typical of the explicit feedback approach is the generalized spring which feeds back force information through a stiffness matrix to a position controller. This method can be modeled by the relation
f = K(p – p0)
where f is the effector force, p is the effector position, and p0 is the nominal position, which is input supplied from the planning system or user program. K is the stiffness matrix, which relates forces observed at the effector to deviations from the nominal position. The stiffness matrix can be chosen to optimize performance of a particular task. The generalized damper method is similar in form but assumes a velocity controller instead of a position controller. This method can be modeled by the relation
f = B(v – v0)
where f is the effector force, v is the effector velocity, and v0 is the nominal velocity, which is input from the planning system or user program. B is the damping matrix, in this case relating effector force to deviations from the nominal velocity. A generally useful choice for B is just the identity matrix times some negative damping coefficient.
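
As a sketch, the two explicit-feedback laws quoted above amount to a pair of matrix multiplications. The particular K and B values below are placeholders chosen only to make the example run, not recommendations for any task.

import numpy as np

# Illustrative gains: a diagonal stiffness matrix, and a damping matrix equal
# to the identity times a negative coefficient, as suggested in the quotation.
K = np.diag([200.0, 200.0, 50.0])
B = -5.0 * np.eye(3)

def spring_force(p, p0):
    """Generalized spring: effector force modeled as f = K (p - p0)."""
    return K @ (p - p0)

def damper_force(v, v0):
    """Generalized damper: effector force modeled as f = B (v - v0)."""
    return B @ (v - v0)

print(spring_force(np.array([0.01, 0.0, 0.0]), np.zeros(3)))   # [2. 0. 0.]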

An implementation of the generalized spring approach is the remote center compliance device developed at the Charles Stark Draper Laboratory. The remote center compliance device is a spring-loaded robotic arm "wrist" that is a passive generalized spring device. The instrumented remote center compliance device makes use of strain gauges for force measurement, and is therefore an active generalized spring device.

An example of the generalized damper is Whitney's "Force feedback control of manipulator fine motions," which makes use of the resolved motion rate control method developed by Whitney. The scheme involves the use of a force-sensing wrist (developed at the Draper Laboratory, but not equivalent to the remote center compliance device) that provides six differential signals from which forces and torques are computed. The signals are provided by three extensional strain gauges and three strain gauge shear bridges cemented to three columns that connect the two ring-shaped "wristbands" of the force-sensing wrist.

Whitney's force feedback scheme works as follows: An a priori motion command is given in terms of coordinates attached to the end effector. The v command (the velocity command, in end effector coordinates) is then transformed into joint velocities by resolved motion rate control. When contact with the environment occurs, contact forces and torques are set up. The force feedback gain matrix (the inverse of the damping matrix) multiplies the force and torque quantities and creates velocity modification commands. The force feedback gain matrix is linear, so that Whitney's method is a linear force feedback method.
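
A rough sketch of one cycle of such a scheme, with made-up numbers: the sensed wrench is multiplied by the force feedback gain matrix (taken here, as in the text, to be the inverse of the damping matrix) to modify the commanded end-effector velocity, which is then resolved into joint rates through the Jacobian pseudoinverse. The damping matrix, sensed force, and Jacobian below are purely illustrative.

import numpy as np

def force_feedback_velocity(v_command, f_sensed, B):
    """Linear force feedback: the gain matrix is the inverse of the damping
    matrix, and the sensed wrench modifies the commanded velocity."""
    G = np.linalg.inv(B)
    return v_command + G @ f_sensed

def resolved_rate(J, v_effector):
    """Map an end-effector velocity command to joint rates (resolved motion
    rate control); J is the manipulator Jacobian."""
    return np.linalg.pinv(J) @ v_effector

B = -5.0 * np.eye(3)                       # damping matrix (placeholder values)
v0 = np.array([0.05, 0.0, 0.0])            # a priori velocity command along x
f = np.array([0.1, 0.0, 0.0])              # contact force pushing back along x
v = force_feedback_velocity(v0, f, B)      # x velocity is reduced on contact
J = np.array([[0.4, 0.2, 0.0],
              [0.0, 0.3, 0.1],
              [0.0, 0.0, 0.2]])            # made-up Jacobian for illustration
print(resolved_rate(J, v))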

B. Compliance and Task Interaction

The hybrid controller approach, rather than the explicit feedback approach, is used by Mason to develop a compliance formalism. Mason's primary concern is the development of control strategy synthesis methods, and to that end he develops formal models of the manipulator (the ideal effector), the task geometry (the ideal surface), and the desired manipulator behavior (goal trajectory).

Raibert and Craig[11] use Mason's theoretical framework, which is partially based on earlier hybrid controller work done by Raibert and Craig, to develop a "conceptually simple approach to controlling compliant motions of a robot manipulator."  The goal of Raibert and Craig, like that of Mason, is to improve controller architecture so that force feedback and other sensory information may be efficiently used.

Mason explains compliant motion as that motion which “occurs when the position of a manipulator is constrained by the task.” Others have referred to compliant motions as motions of accommodation. In developing his formalism, Mason uses succinct descriptions of the situations in which pure force control and pure position control may be considered.
If the end effector is buried in a solid block of immovable material, no positional freedom exists and no positional control is possible. Any force command can be accepted by the end effector, however, so that the manipulator has complete force freedom. This is the regime of pure force control.

On the other hand, a manipulator end effector unconstrained and in a space free of obstacles has complete positional freedom but no force freedom—nothing is present to accept a force applied by the manipulator.  This is the regime of pure position control.

Intermediate between the two regimes are configuration space "C-surfaces" as explained by Mason:

Loosely speaking, a C-surface is a task configuration which allows only partial positional freedom. Freedom of motion occurs along C-surface tangents, while freedom of force occurs along C-surface normals. Neither pure position or pure force control is appropriate in this case, but rather a hybrid mode of control, which gives control of effector force along the C-surface normal and control of effector position along the C-surface tangent.

Raibert and Craig explain more about the C-surfaces: "These two types of constraint, force and position, partition the degrees of freedom of possible hand motions into two orthogonal sets that must be controlled according to different criteria." In order to implement the different criteria, artificial constraints must be specified. Artificial constraints represent a generalization of manipulator control strategies, and are used to specify desired motions or forces. The artificial constraints are complementary to the natural constraints of the C-surfaces: Artificial force constraints are specified along surface normals and artificial position constraints are specified along surface tangents. (Although not explicitly stated above, it is implied that natural force constraints are along C-surface tangents and natural position constraints are along C-surface normals.)

The coordinate system in which the constraints and trajectories are specified is a Cartesian system defined with respect to the task geometry. If the constraint coordinate system is properly chosen, manipulator trajectories are easily determined, and force constraints may remain constant during the trajectory.

The case of a manipulator turning a crank provides a good illustration of the use of natural and artificial constraints. An XYZ coordinate system is attached to the handle of the crank, with the crank's angular motion confined to the XY plane and the positive Z axis pointing upward from the upward-pointing crank handle. Forces, torques, linear velocities, and angular velocities are given in terms of their XYZ components. The natural constraints imply that v_x, v_z, ω_x, ω_y, f_y, and n_z are all zero. These quantities are linear velocities, angular velocities, force, and torque, in the indicated directions. The artificial constraints are therefore ω_z = constant, and v_y, f_x, f_z, n_x, and n_y all equal zero.
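
In code, the partition into position- and force-controlled directions can be expressed with a diagonal selection matrix, as in the sketch below. The component ordering, the simple proportional laws, and the gains are placeholders of my own, not Raibert and Craig's actual controller; only the selection-matrix idea and the crank partition come from the text.

import numpy as np

# Component order: [vx, vy, vz, wx, wy, wz] for motions, [fx, fy, fz, nx, ny, nz]
# for forces/torques.  For the crank example, y translation and z rotation are
# position-controlled; the remaining four directions are force-controlled.
S = np.diag([0, 1, 0, 0, 0, 1])     # 1 = position/velocity-controlled direction
S_force = np.eye(6) - S             # complementary force-controlled directions

def hybrid_command(motion_setpoint, force_setpoint, motion_error, force_error,
                   Kp=1.0, Kf=0.5):
    """Each Cartesian degree of freedom is served by exactly one controller:
    a motion law selected through S, a force law selected through (I - S)."""
    motion_cmd = S @ (motion_setpoint + Kp * motion_error)
    force_cmd = S_force @ (force_setpoint + Kf * force_error)
    return motion_cmd, force_cmd

# Artificial constraints from the crank example: wz held constant, vy = 0, and
# the force-controlled components fx, fz, nx, ny commanded to zero.
motion_sp = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.3])   # wz = 0.3 rad/s
force_sp = np.zeros(6)
print(hybrid_command(motion_sp, force_sp, np.zeros(6), np.zeros(6)))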

Hybrid control and active compliance are architectural concepts that link the constraints of a task requiring force feedback (via, for instance, a force-sensing wrist) to the actual controller design, and are therefore useful in achieving kinesthetic feedback for the purpose of remote manipulation.

Force feedback first found application as an adjunct to position control of remote manipulators. In that same context, force feedback is currently being used, but emphasis is now on a general application of force feedback to both remote manipulation and computer-controlled manipulation. The general implementation of force feedback and other sensory sources has been referred to as data-driven automation. 

The generality of data-driven automation is achieved by using real-time mathematical transformation of robotic arm joint variables, where joint variables are measured both at the remote manipulator's input device and at the manipulator output. Computer control is therefore an interface between the operator and the manipulator, and the degree of direct, autonomous computer control over the manipulator may be varied by the operator, or computer control may be used to simply supply force and torque feedback to the hand controller, thus enabling the operator to perform fine-motion remote manipulation. A system that uses the kinematics of homogeneous transformations is described by Bejczy and Salisbury.[12] Video input using homogeneous transformations, as described in Section II, is easily implemented with such a system, but with force feedback supplying fine-motion information the use of video monitoring would mainly supply information for large-scale manipulator movements.



References

Abbreviations:  J-DSMC = ASME Journal of Dynamic Systems, Measurement, and Control;  SMC = IEEE Transactions on Systems, Man and Cybernetics. (May 2017:  I haven’t yet completely reconstructed the end notes, so about half of them are listed alphabetically at the end of the numbered references.)



[1] Kahn/Roth: "The near-minimum-time control of open-loop kinematic chains," J-DSMC Sept. 1971, 165-172.
[2] Luh/Walker/Paul: "On-line computational scheme for mechanical manipulators," J-DSMC 1980, 69-76.
[3] Taylor: "Planning and execution of manipulator straight-line trajectories," IBM J. Res. Develop., July 1979, 424-436.
[4] Paul: Robot Manipulators, MIT Press, 1981.
[5] Whitney: "The mathematics of coordinated control of prosthetic arms," J-DSMC 1972, 303-309;  "Force feedback control of manipulator fine motions," J-DSMC 1977, 91-97.
[6] Hollerbach: "A recursive Lagrangian formulation of manipulator dynamics and a comparative study of dynamics formulation complexity," IEEE SMC-10 1980, 730-736;  "Dynamic scaling of manipulator trajectories," J-DSMC 1984, 102-106.
[7] Horak: "A simplified modeling and computational scheme for manipulator dynamics," J-DSMC 1984, 350-353.
[8] Luh/Lin: "Optimum path planning for mechanical manipulators," J-DSMC 1981, 142-151;  "Scheduling of parallel computation for computer-controlled mechanical manipulators," IEEE SMC-12 1982, 214-234;  "Resolved-acceleration control of mechanical manipulators," IEEE AC-25 1980, 469-474.

[9] Kim/Shin: "Minimum-time path planning for robot arms and their dynamics," IEEE SMC-15 1985, 213-223.
[10] Mason: "Compliance and force control for computer-controlled manipulators," IEEE SMC-11 1981, 418-432.
[11] Raibert/Craig: "Hybrid position/force control of manipulators," J-DSMC 1981, 126-133.
[12] Bejczy/Salisbury: "Controlling remote manipulators through kinesthetic coupling," Computers in Mechanical Engineering, July 1983, 48-60; "Data-driven automation in remote applications of robots," Robotics Research: 1st Int'l Symposium, MIT Press, 1984.

Alphabetical listing of other references used:

Albus: "A new approach to manipulator control: The cerebellar model articulation controller," J-DSMC 1975, 221-227.
Anex/Hubbard: "Modelling and adaptive control of a mechanical manipulator," J-DSMC 1984, 211-217.
Goertz/Bevilacqua: "A force-reflecting positional servomechanism," Nucleonics, Nov. 1952, 43-45.
Goldenberg/Lawrence: "A generalized solution to the inverse kinematics of robotic manipulators," J-DSMC 1985, 103-106.
Hogan: "Impedance control: An approach to manipulation,"(3-parts) J-DSMC 1985, 1-24.
Lee: "Resolved-motion adaptive control for mechanical manipulators," J-DSMC 1984, 134-142.
------  "Robot arm kinematics, dynamics, and control," IEEE Computer, Dec. 1982, 62-80.
Lozano-Perez:   "Robot programming," IEEE Proceedings, July 1983, 821-841.
Lozano-Perez/Mason/Taylor:  "Automatic synthesis of fine motion strategies for robots," Robotics Research: 1st Int'l Symposium, MIT Press, 1984.
Luh: "An anatomy of industrial robots and their controls," IEEE AC-28 1983, 133-153.
Luh/Fisher/Paul: "Joint torque control by a direct feedback for industrial robots," IEEE AC-28 1983, 153-161.
Neuman/Tourassis: "Discrete dynamic robot models," IEEE SMC-15 1985, 193-203.
Ogata:  System Dynamics, Prentice Hall, 1978.
Paul/Shimano/Mayer:  "Kinematic control equations for simple manipulators," IEEE SMC-11 1981, 449-455.
-----------  "Differential kinematic control equations for simple manipulators," IEEE SMC-11 1981, 456-460.
Ruoff: "Fast trig functions for robot control," Robotics Age, Nov./Dec. 1981.
Shimano/Geschke/Spalding: "VAL-II: A robot programming language and control system," Robotic Research: 1st Int'l Symposium, MIT Press, 1984.
Shin/Malin: "A structured framework for the control of industrial manipulators," IEEE SMC-15 1985, 78-89.
Takegaki/Arimoto: "A new feedback method for dynamic control of manipulators," J-DSMC 1981, 119-125.
Vukobratovic/Stokic: "One engineering concept of dynamic control of manipulators," J-DSMC 1981, 108-118.
Wu/Paul: "Resolved motion force control of robot manipulators," IEEE  SMC-12  1982, 266-275.