As I watched participants in the pilot study, it became clear that the dependence on cameras was still very high. The system, as it stood, was not developed enough to support camera-less operation. In particular, the gripper provided no feedback on whether it had actually closed all the way or only partway. The virtual display could only show the commanded state of the gripper, since that is all the servo controller (and indeed the servos themselves) provide.
The result of this lack of feedback is that the operator must depend on other sources to determine the state of the system. One of the experiment conditions I had set up used the 3D model by itself along with a bird's-eye-view camera; the up-close side-view camera was disabled. The only way participants could check whether they had successfully grabbed an object was to lift it until the gripper entered the camera view and see if the object was there. More often than not, they had missed, and so had to try again.
As a result, I found a relatively simple way to get position feedback from these hobbyist servos. Now when the gripper misses an object, it shows up completely closed, and when it successfully grasps an object, it shows up halfway open. I expect this will help quite a bit.
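To make the idea concrete, here is a minimal sketch of how that position feedback could be used to tell a miss from a successful grasp. This is not the actual implementation; the function name, normalized position scale, and threshold values are all hypothetical.

```python
# Hypothetical normalized gripper positions: 0.0 = jaws fully closed,
# 1.0 = fully open. Real values depend on the servo's feedback range.
FULLY_CLOSED = 0.05   # measured position when the jaws meet (assumed)
GRASP_MARGIN = 0.10   # minimum extra opening that indicates a held object

def grasp_succeeded(commanded_close: bool, measured_position: float) -> bool:
    """After a close command, an object in the jaws props them open,
    so a measured position well above fully-closed implies a grasp."""
    if not commanded_close:
        return False  # only meaningful after a close command
    return measured_position > FULLY_CLOSED + GRASP_MARGIN

# A miss: the jaws closed all the way.
print(grasp_succeeded(True, 0.03))  # False -> nothing grasped
# A grasp: the jaws stopped about halfway open.
print(grasp_succeeded(True, 0.50))  # True -> object held
```

The same comparison between commanded and measured position is what lets the virtual display render the gripper as closed after a miss and halfway open after a grasp.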