I've spent some time over the past couple of days refining the head-tracking manipulation interface.
The first change was to mount the Wii remote up on the wall so it's farther away from the operator.
This change was motivated by how easily the operator could move out of the camera's field of view when leaning left and right to change the view.
One side effect of this change, now that the camera points down at a steep angle, is that "leaning closer in" adjusts the declination of the virtual camera instead of the zoom distance.
At first I thought I should modify the trig that calculates the head position, but before changing anything I decided to test it as-is.
The result: I think I like using "lean in" to adjust the declination.
My explanation for the preference is that adjusting zoom is rarely needed for this task, while adjusting declination is needed much more often.
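To make the geometry concrete, here is a rough sketch of why the steep tilt turns a lean into a declination change. This isn't the actual code; the 60-degree tilt and the variable names are assumptions for illustration.

    import math

    # Illustrative only: assume the Wiimote is mounted looking down at
    # roughly 60 degrees (the real angle depends on the wall mount).
    CAMERA_TILT = math.radians(60.0)

    def lean_components(lean_distance):
        """Split a forward 'lean in' of the head into what the tilted
        camera actually sees: motion along its optical axis (reads as a
        zoom change) vs. motion across its image plane (reads as vertical
        head motion, i.e. a declination change)."""
        along_axis = lean_distance * math.cos(CAMERA_TILT)    # zoom signal
        across_image = lean_distance * math.sin(CAMERA_TILT)  # declination signal
        return along_axis, across_image

    # At a 60-degree tilt, ~87% of a 0.3 m lean shows up as image-plane
    # motion, which the view mapping reads as a declination change.
    print(lean_components(0.3))  # roughly (0.15, 0.26)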
Another change I made was to couple the virtual camera azimuth with the base joint rotation of the arm.
This means that the operator can sit still and rotate the arm, and the view keeps the arm in the same on-screen orientation by rotating the virtual camera along with it.
Head tracking then comes into play as an offset from this coupled view.
This essentially means only relatively small head motions are required to get the most useful viewpoints (top down and side views).
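In sketch form, the coupling is just an additive offset on top of the base joint angle. The names and the neutral 45-degree declination are made up for illustration, assuming angles in radians.

    import math

    BASE_DECLINATION = math.radians(45.0)  # neutral viewing angle (assumed)

    def virtual_camera_pose(base_joint_angle, head_az_offset, head_dec_offset):
        """Couple the virtual camera's azimuth to the arm's base joint,
        then apply head tracking as an offset from that coupled view.

        base_joint_angle: current rotation of the arm's base joint (rad)
        head_az_offset:   azimuth offset from leaning left/right (rad)
        head_dec_offset:  declination offset from leaning in/out (rad)
        """
        azimuth = base_joint_angle + head_az_offset       # view follows the arm
        declination = BASE_DECLINATION + head_dec_offset  # lean in to look down
        return azimuth, declination

    # Example: arm base rotated 90 degrees, operator leaning slightly left.
    print(virtual_camera_pose(math.radians(90), math.radians(-10), 0.0))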
Speaking of viewpoints reminds me of a configuration I should probably compare against.
Several people have asked whether I have tried displaying a side and top-down view at the same time.
I think it might be fast and usable in clean environments where you can isolate the object of interest. In cluttered environments, though, it would be impossible to use such a display without an additional dynamic "3/4" view to understand which blob corresponds to which object.
It's probably something worth including for the journal paper I plan on writing.