Thursday, August 27, 2009

Head tracking design changes

I spent some time the past couple of days refining the head tracking manipulation interface.
The first change was to mount the Wii remote up on the wall so it's farther away from the operator.
This change was motivated by how easily the operator could move out of the camera's field of view when leaning left and right to change the view.

One side effect of this change, since the camera now points down at a steep angle, is that "leaning closer in" adjusts the declination of the virtual camera instead of the zoom distance.
My first instinct was to modify the trig that calculates the head position, but I decided to test the new behavior first.
The result is that I think I like having "lean in" adjust the declination.
My explanation for the preference is that adjusting zoom is rarely needed for this task, while adjusting declination is needed much more often.
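
To make that concrete, here is a minimal sketch of the kind of mapping involved. The post doesn't include the actual code, so the coordinate conventions, gains, and names below are my assumptions:

```python
import math

# Hypothetical mapping from tracked head position to virtual camera
# parameters. Assumes (head_x, head_y) are normalized coordinates from
# the Wii remote's IR camera, in [-1, 1], with (0, 0) at the neutral
# head pose.

AZIMUTH_GAIN = math.radians(60)      # assumed: full lean left/right = 60 degrees
DECLINATION_GAIN = math.radians(45)  # assumed: full lean in/out = 45 degrees

def head_to_camera(head_x, head_y, base_declination=math.radians(30)):
    """Map a tracked head offset to (azimuth offset, declination).

    With the camera mounted high on the wall and angled steeply
    downward, leaning in mostly shifts head_y, so the vertical axis
    ends up driving declination instead of zoom distance.
    """
    azimuth_offset = head_x * AZIMUTH_GAIN
    declination = base_declination + head_y * DECLINATION_GAIN
    # Clamp between a horizontal view and a straight top-down view.
    declination = max(0.0, min(math.pi / 2, declination))
    return azimuth_offset, declination
```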

Another change I made was to couple the virtual camera azimuth with the base joint rotation of the arm.
This means that the operator can sit still and rotate the arm, and the virtual camera rotates along with it, keeping the arm in the same on-screen orientation.
Head tracking then comes into play as an offset from the coupled view.
This essentially means only relatively small head motions are required to get the most useful viewpoints (top down and side views).
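
As a sketch of the coupling, with an orbit-style virtual camera the azimuth can simply be the sum of the base joint angle and the head-tracking offset; the function and parameter names here are hypothetical:

```python
import math

def orbit_camera_position(target, radius, arm_base_angle,
                          head_azimuth_offset, declination):
    """Place the virtual camera on a sphere around the workspace.

    azimuth = arm_base_angle + head_azimuth_offset, so rotating the
    arm's base joint rotates the view with it (the arm holds its
    on-screen orientation), and head tracking only nudges the view
    away from that coupled baseline.
    """
    azimuth = arm_base_angle + head_azimuth_offset
    x = target[0] + radius * math.cos(declination) * math.sin(azimuth)
    y = target[1] + radius * math.cos(declination) * math.cos(azimuth)
    z = target[2] + radius * math.sin(declination)
    return (x, y, z)  # the camera looks from here toward `target`
```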

Speaking of viewpoints reminded me of a configuration I should probably compare against.
Several people have asked whether I have tried displaying a side and top-down view at the same time.
I think it might be fast and usable in clean environments where you can isolate your object of interest. In cluttered environments, though, such a display would be impossible to use without an additional "3/4" dynamic view to understand which blob corresponds to which object.
It's probably something worth including for the journal paper I plan on writing.

Wednesday, August 26, 2009

Erroneous artifacts in 3D display

Having recently finished annotating all of the video for the second user study, I noticed a few issues.
None of them were as severe as those in the first user study, but they are interesting.
Two people, who in my subjective judgment were complete novices at robot control and at interpreting 3D information on a 2D display, mistook some artifacting in the 3D model for the deposit box, and so repeatedly dropped blocks on the artifacts.
The artifacts showed up because of an imperfect filter that is supposed to remove every part of the 3D model that is not relevant to the task, leaving only the blocks, pipes, and deposit box. (I sketch a filter of this sort below.)
Some of the floor was showing up in the model, and these two subjects seemed to think it looked like the box.
If I were designing an interface to cater specifically to this task, I can think of ways to support the operator that would really make the deposit box stand out.
I don't think that's really what I'm researching, though, so I'm not going to change the interface design in that way, especially since 30 out of 32 people had no trouble finding the deposit box.
Most likely nobody will want to use a mobile manipulator for this particular task, since it's really mostly a toy world.
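
The filter itself isn't described in this post, but a simple height-and-workspace cut like the sketch below shows how a slightly wrong floor estimate lets patches of floor survive and render as artifacts. All thresholds and names are assumptions:

```python
def filter_task_points(points, floor_z=0.0, floor_margin=0.02,
                       x_range=(-0.5, 0.5), y_range=(0.2, 1.2)):
    """Keep only 3D points that plausibly belong to the task objects.

    Drops points near the estimated floor plane and anything outside
    the workspace bounds. If floor_z is estimated a little low, patches
    of floor pass the cut and show up in the model as artifacts that a
    novice can mistake for the deposit box.
    """
    kept = []
    for x, y, z in points:
        if z <= floor_z + floor_margin:
            continue  # treat as floor
        if not (x_range[0] <= x <= x_range[1] and
                y_range[0] <= y <= y_range[1]):
            continue  # outside the task workspace
        kept.append((x, y, z))
    return kept
```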

Another problem is a remnant from the first user study.
Yes, we're coming back to the alignment issue.
For one target block in one particular layout, the alignment was off by enough that at least half of the people had trouble getting it.
There were a couple other blocks that were slightly off, but most people got them as long as they followed the instructions in the training.
What it amounts to is that the calibration was slightly off for those couple of regions.
It's a little disappointing, but not too much so, since I can filter out the problem blocks to look at what happens with the well-aligned blocks.
Since there are 6 layouts and 3 blocks per layout, only 1 or 2 block samples out of 18 are bad.
I think it's still plenty usable and will give some interesting insights.
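
For the analysis, excluding the problem blocks might look something like this sketch; the trial structure and identifiers are hypothetical:

```python
# Hypothetical identifiers for the misaligned (layout, block) pairs.
BAD_BLOCKS = {("layout3", "block2")}

def well_aligned_trials(trials):
    """Drop trials whose target block had bad calibration.

    With 6 layouts x 3 blocks = 18 block samples, excluding 1-2 bad
    samples still leaves 16-17 well-aligned ones for analysis.
    """
    return [t for t in trials
            if (t["layout"], t["block"]) not in BAD_BLOCKS]
```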