One of the things we robo-manipulation guys have to deal with is how to design the user control for the robot arm. I'm not talking about the whole user interface, but in particular the controls to make the robot move. If there's some autonomy in there, you have a little bit more flexibility, and you can fairly effectively just use a mouse. But when you just want to allow the operator to teleoperate or "remote control" the arm, things are a little trickier.
Now, just driving a wheeled robot around is not so challenging, because people are accustomed to driving cars, and sometimes even from a remote perspective. With a robot arm, you have to make a control that operates in full 3D, and that can even include three more axes of rotation on top of the translation. If you want to go really low level, then you need some way to control each joint of the arm individually.
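To make the joint-level option concrete, here's a minimal sketch of per-joint "jogging" control, where a stick axis drives one selected joint directly. All the names, limits, and gains here are made up for illustration:

```python
# Minimal sketch of low-level, per-joint "jogging" control.
# Joint limits and gains are hypothetical, not from a real arm.

JOINT_LIMITS = [(-3.1, 3.1), (-1.6, 1.6), (-2.5, 2.5)]  # radians, per joint

def jog_joint(joint_angles, joint_index, stick_value, dt, speed=0.5):
    """Drive one joint from a stick axis in [-1, 1]; clamp to its limits."""
    lo, hi = JOINT_LIMITS[joint_index]
    new_angle = joint_angles[joint_index] + stick_value * speed * dt
    joint_angles[joint_index] = max(lo, min(hi, new_angle))
    return joint_angles

# Operator holds the stick fully forward on joint 1 for one 20 ms tick.
angles = [0.0, 0.3, -0.2]
print(jog_joint(angles, 1, +1.0, dt=0.02))
```

The gripper just ends up wherever the joint angles put it, which is exactly why this mode is so hard to use when all you want is to reach a point in space.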
My first attempt was to use the two "thumbsticks" on a modern video game controller. The type of control I'm going for is end-effector control, where you move a virtual target point for the robot's gripper to reach. One thumbstick controls motion in one plane, and the other stick controls motion along the remaining axis. This works OK with some training, but a lot of people still seemed to struggle with it.

The next attempt was to reduce the control to one stick and change how the controls work depending on the view. From a top-down view, the end effector moves in the XY plane (where Z is vertical), and from a side view, the end effector moves in a vertical plane parallel to the Z axis. The tricky part is in views that are neither side nor top views. When the view is halfway between those, how should the end effector move? For now, the control somewhat "remembers" which mode it's in, and you have to go almost all the way to the other view to switch modes (see the sketch below). This ends up being rather confusing, even for me after practicing for a while. Another option would be to simply move the end effector in the view plane, but it's hard to say whether that would be good.
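Here's roughly what that mode hysteresis looks like in code. This is a sketch with made-up thresholds and names, not my actual implementation:

```python
import math

# Sketch of the one-stick, view-dependent mapping with mode hysteresis.
# The pitch thresholds are invented for illustration.

TOP_ENTER = math.radians(70)  # must look nearly straight down to enter top mode
TOP_EXIT = math.radians(20)   # must come most of the way back up to leave it

def update_mode(mode, camera_pitch):
    """Switch modes only near the extremes, so the control 'remembers' itself."""
    if mode == "side" and camera_pitch > TOP_ENTER:
        return "top"
    if mode == "top" and camera_pitch < TOP_EXIT:
        return "side"
    return mode

def stick_to_target_delta(mode, stick_x, stick_y, speed=0.1):
    """Map one thumbstick to a 3D delta for the end-effector target."""
    if mode == "top":    # top-down view: stick moves the target in the XY plane
        return (stick_x * speed, stick_y * speed, 0.0)
    else:                # side view: stick moves the target in a vertical plane
        return (stick_x * speed, 0.0, stick_y * speed)

mode = update_mode("side", camera_pitch=math.radians(80))  # looking almost down
print(mode, stick_to_target_delta(mode, 0.5, -0.3))
```

The gap between the two thresholds is what makes the mode "sticky," and it's also what makes the behavior surprising in the in-between views.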
Another idea would be a non-standard end-effector control scheme. For instance, left and right on the stick could rotate the base of the arm, and forward and back would extend the arm. Instead of controlling individual joints, however, this mode would still be moving the end-effector target, effectively in cylindrical coordinates around the base. There is a difference; the sketch below tries to make it concrete.
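In this sketch (hypothetical names and gains), the stick moves the target point in cylindrical coordinates, and inverse kinematics (not shown) would move the joints to follow it:

```python
import math

# Sketch of the "rotate base / extend arm" mapping, done as end-effector
# control: the stick moves the target point in cylindrical coordinates.

def cylindrical_step(target_xyz, stick_x, stick_y, dt,
                     yaw_rate=1.0, extend_rate=0.2):
    """Left/right swings the target around the base; forward/back extends it."""
    x, y, z = target_xyz
    radius = math.hypot(x, y)
    theta = math.atan2(y, x)

    theta += stick_x * yaw_rate * dt                        # rotate about base
    radius = max(0.0, radius + stick_y * extend_rate * dt)  # reach in and out

    return (radius * math.cos(theta), radius * math.sin(theta), z)

print(cylindrical_step((0.4, 0.0, 0.3), stick_x=1.0, stick_y=0.0, dt=0.02))
```

Because the output is still a target point that the arm's kinematics solver chases, this is end-effector control with unusual axes, not joint control.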
The other day, one of the people on my committee offered his Novint Falcon (a 3D force-feedback controller) for controlling the robot arm. At first I thought this was totally the way to go and would solve all of my problems, including world hunger. It would mean I wouldn't need two separate thumbsticks or view-dependent modes... with a 3D controller, up is up, left is left, and back is back.
Then I remembered the whole view-dependent thing. The trouble is that the interface uses head tracking to adjust the view, so the view is constantly and easily changing. That's probably good and important, but it also makes one wonder whether the controls should always do the same thing or should depend on the view. There's a paper by Jose Macedo called "The Effect of Automated Compensation for Incongruent Axes on Teleoperator Performance" that talks about this, and the basic finding is that people do better with the automatic compensation (or, as I'd say, view-dependent control) than without it.
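In code, the automated compensation amounts to rotating the operator's input into the current view frame before applying it. A minimal sketch, handling only camera yaw (a head-tracked view would need the full 3D rotation):

```python
import math

# Sketch of automated compensation for incongruent axes: rotate the stick
# input by the camera yaw so "left on the stick" is always "left on screen".

def compensate(input_xy, camera_yaw):
    """Rotate a 2D control input into the world frame given the view yaw."""
    ix, iy = input_xy
    c, s = math.cos(camera_yaw), math.sin(camera_yaw)
    return (c * ix - s * iy, s * ix + c * iy)

# With the camera turned 90 degrees, pushing "right" on the stick moves the
# target along world +Y instead of world +X.
print(compensate((1.0, 0.0), camera_yaw=math.radians(90)))
```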
I think my situation is a little different from theirs, though. They evaluate 2D control, and it's also static: once the control axes and the display axes are determined, they remain in their particular (mis)alignment for the duration of the experiment. In my case, the control is 3D, and the alignment between the axes changes dynamically throughout the experiment. So I think it needs to be tested. Perhaps after I get this thesis done.