Tuesday, October 28, 2008

Business card


I finally designed a business card. Of course, I accidentally took a pencil eraser to the sensitive text, so you can't use this card to contact me. But at least you can look at the graphics.

Research Profile

I spent some time editing my research profile page on our lab wiki so that I'm not embarrassed to reveal where it is.

https://cswiki.cs.byu.edu/HCMI/index.php/Alan_Atherton

If I make any major changes to it in the future, I'll try to note them here.

Monday, October 27, 2008

It's Away!

And no, it's not a torpedo (hopefully). It's the first draft of my qualifying paper, destination: adviser. So, in a matter of a few weeks, I should be past quals. It also means within a few years it should be available to the public. And that in a few decades someone might read it. So in a few centuries maybe someone will see that it's useful. And that's the optimistic side of things!

Friday, October 24, 2008

Servo feedback hacking

One of the big problems with the first run of my user study was that the base rotation joint on the arm had some backlash, so the servo was almost never where it claimed to be.  So I decided to do a little electronics hacking to get actual (noisy) position feedback from the servo.  The quality is about like a baby monitor's... you can tell that someone's saying something, but you can't quite make out what they're saying.  (Ok fine, it's not quite that bad, but anything less extreme just doesn't sound funny.)

Anyway, the idea wasn't mine... I used some guidance posted here:
http://www.lynxmotion.net/viewtopic.php?t=3182

Here's a shot of what it looks like:

I puzzled for a few minutes over how to get a 2V reference with the materials on hand, because our lab isn't really set up for electronics hacking.  Then I remembered that months ago I fried a servo, and its pot was still intact.  So I removed the pot, wired it up to my trusty servo controller, hooked it up to a volt meter, and twisted the dial until it read 2V.  Then I followed the rest of the guide and poof!

I got it all in place, and even added a simple windowed running-average filter in software to get rid of the major jitter in the voltages.  For 180 degrees of motion I get about 170 distinct values... so almost-one-degree resolution isn't so bad.  Unfortunately, I still don't get a good enough alignment between the virtual arm display and the real world.  Something's still amiss.  I decided to wait until our stereo camera comes in (just a few weeks now) before doing more work in this area.
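For the curious, the filter really is nothing fancy. Here's the idea sketched in Python (the real code lives in our C# project; the window size and the sample voltages below are made up for illustration):

```python
from collections import deque

class RunningAverage:
    """Windowed running average to smooth jittery analog feedback."""

    def __init__(self, window_size=8):
        self.window = deque(maxlen=window_size)

    def update(self, sample):
        self.window.append(sample)  # oldest sample falls off automatically
        return sum(self.window) / len(self.window)

# Example: smooth a stream of noisy pot voltages (volts).
noisy = [1.02, 0.98, 1.05, 0.97, 1.01, 1.04, 0.99]
filt = RunningAverage(window_size=4)
print([round(filt.update(v), 3) for v in noisy])
```

A bigger window means smoother output but more lag, which matters when the arm is moving, so the window size is a tradeoff you have to tune.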

Writing progress

Every time I work on a big writing project, I find that I have to rediscover my workflow. When I start a new paper from scratch, what usually happens is that I just start writing in a linear fashion, and the flow totally stinks. So I have to kinda start over with another strategy.

First I try to write a one-sentence overview of the whole paper. Then I expand to a paragraph. Each sentence in the paragraph becomes essentially a major topic/section of the paper. Then in each section, I try to come up with key sentences that become the topic sentences for the paragraphs in the section. Finally I fill in the details for each paragraph.

It's like... orders of magnitude or something.

It's not an uncommon technique, but I tend to forget it until I'm done writing the first draft... or 0.5th draft.

It all gets worse when I take an existing paper and try to update it with new information. Starting from scratch would probably work better and faster in those cases. Maybe I'll remember that for the next paper I write.

Thursday, October 9, 2008

Slight improvement

Instead of discarding what I've already done, I decided to try getting a better calibration with linear fitting. The way it works right now is that I've mounted a target on the robot arm, and using forward kinematics I know where the target is located in the robot arm's coordinate system. From the SR-3000 I can find the target (currently done by a human), so I have corresponding points in two coordinate systems. All that's left is to solve a linear least-squares problem for the transformation between the two.
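In case that's too abstract, here's the fitting step sketched in Python/NumPy (our actual code is in C#, and the function name and point values here are made up for illustration):

```python
import numpy as np

def fit_affine_transform(src_pts, dst_pts):
    """Least-squares fit of a 3x4 affine transform mapping src -> dst.

    src_pts, dst_pts: (N, 3) arrays of corresponding 3D points,
    e.g. SR-3000 coordinates and robot-arm coordinates.
    """
    n = src_pts.shape[0]
    # Homogeneous source coordinates: each row is [x, y, z, 1].
    P = np.hstack([src_pts, np.ones((n, 1))])
    # Solve P @ A = dst in the least-squares sense; A is 4x3.
    A, _, _, _ = np.linalg.lstsq(P, dst_pts, rcond=None)
    return A.T  # 3x4 matrix [R | t]; shear and scale come along for free

# Toy check: recover a known translation from slightly noisy points.
rng = np.random.default_rng(0)
src = rng.uniform(0, 40, size=(10, 3))        # cm, roughly the arm workspace
dst = src + np.array([5.0, -2.0, 1.0])        # pure translation
dst += rng.normal(0, 0.1, dst.shape)          # about a millimeter of noise
print(np.round(fit_affine_transform(src, dst), 2))
```

Because the fit is affine rather than rigid, any shear or scale mismatch between the sensor and the arm gets absorbed into the transform automatically.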

Today I added some calibration targets on the ground plane (a wooden board) that correspond to a height of zero. The reason for this is that yesterday, targets low to the ground were very poorly aligned (2-3 cm off). I thought one reason might be because the target was mounted somewhat high on the robot arm, so any skew happening down low was missed.

The end result is that it works somewhat better. It's still off by an average of 0.5 cm I'd say, but it appears to be consistent. With the "guess and check" method I could align it for one setup where two blocks were about 0.2 cm off, but the third would be 2 cm off. When I changed the setup, everything ended up being off by 1 cm or more. I think if I sample just the right space, and then keep all items of interest in that space, I may not have to worry about nonlinear distortion so much. After all, everything's linear in the limit, right? :)

Wednesday, October 8, 2008

M(is)aligned decomposition

So I finished coding up the 3D camera calibration routines using Singular Value Decomposition and the like, and the results were better than I expected for a first attempt, but they more or less still stink. I think the problems come from three things: nonlinear distortions, inadequate sample space, and backlash in the robot arm.

I really do think the SR-3000 has some nonlinear distortion to it. It's not extreme, but it is there. So I should learn how to account for nonlinearity and calibrate accordingly. Now don't get me wrong, the SR-3000 is a pretty nice sensor, especially considering its size, power requirements, and illumination independence. It's just not quite good enough for the precision I'm looking for. If the scan is misaligned by 0.5 cm in any direction, the task is extremely difficult to do.

The inadequate sample space comes from the calibration target being mounted in a place on the arm that cannot go below about 10 cm (where the highest reach of the arm is 40 cm). The result appears to be that objects sitting at around 15 cm are aligned perfectly, while objects below 10 cm are typically off by about 1 cm. Of course, this is after playing with it for only half an hour. One solution may be to mount the target in/on the gripper instead of the "wrist". Another possibility would be to put a few targets on the ground plane, although that could introduce some discrepancies between my robot arm forward kinematics model and the real world, that is, between where the arm really is and where it thinks it is.

Arm backlash is probably more important than I would like it to be. The base joint basically doesn't rotate for about 2-5 degrees when you change directions. I tried to hack a software solution together, but it's not good enough. To fix that I think I'll have to modify the hobbyist servos to give noisy analog position feedback, then run it through a filter to get the actual position. Not so difficult, but it takes time. Probably a better solution would be to put digital encoders on there, but while my servo controller has analog inputs, it has no digital inputs. That means I would have to get into AVR programming. While I would love to... it all comes back to time.
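If you're wondering what a software workaround for backlash even looks like, here's a rough Python sketch of one standard trick (this is illustrative only, not the code I actually tried; the deadband value is a guess from the 2-5 degree range above):

```python
class BacklashCompensator:
    """Half-deadband trick for gear backlash.

    When the commanded direction flips, the gears have to cross a dead
    zone before the output moves, so we always overshoot the command by
    half the deadband in the current direction of travel.
    """

    def __init__(self, deadband_deg=3.0):  # deadband width is a guess
        self.deadband = deadband_deg
        self.last_target = None
        self.direction = 0  # +1, -1, or 0 while still unknown

    def compensate(self, target_deg):
        if self.last_target is not None and target_deg != self.last_target:
            self.direction = 1 if target_deg > self.last_target else -1
        self.last_target = target_deg
        return target_deg + 0.5 * self.deadband * self.direction

# Example: sweep up, then reverse; note the jump at the direction change.
comp = BacklashCompensator()
for angle in [10, 20, 30, 25, 20]:
    print(angle, "->", comp.compensate(angle))
```

The catch is that this only works if the deadband is constant, which on a worn hobbyist servo it usually isn't, and that's roughly why my version wasn't good enough.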

Friday, October 3, 2008

Linear Least Squares and SVD

I'm trying to improve the alignment between a 3D scan and a virtual model display. Right now I'm simply using the guess-and-check method, which doesn't work well enough. I think there is some shearing (skewing) in the transformation, and the guess-and-check method doesn't account for that. So I'm going to take a more robust approach and use SVD. It took some time to find a good explanation of how to use SVD; here's a nice tutorial with lots of detail. If someone knows of another good one, send it my way (or post a comment).

Link (pdf)

Beyond that, I'll be using the basics of this approach to set up the linear system:

http://groups.google.com/group/sci.engr.surveying/msg/45e29b51626626ec

I think his comments about how many points you need are incorrect, especially if one set of points is skewed. I've seen documents that state 6 points are needed and others that say 7. (For what it's worth, a general 3D affine transform has 12 unknowns and each point pair gives 3 equations, so 4 non-coplanar points pin it down exactly; the disagreement probably comes from different parameterizations.) At least they all agree that the more you have, the better, so I'll probably sample 10 or more points. However, the basic idea of how to set up the system seems correct. Toward the bottom of the post, there is this line:

R = inv(P'P) * (P'U)

where P' is P transpose. Well, inv(P'P) * P' is the pseudo-inverse of P, and we can calculate that using SVD, as shown in the tutorial linked above. So, given a noisy set of points and a corresponding set of points in a different coordinate system, we can find the best (least-squares) transformation between them.
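Here's a quick Python/NumPy sketch showing that the SVD route gives the same answer as the normal-equations line from the post (the matrix names follow the post; the specific numbers are made up):

```python
import numpy as np

# P: n x 4 homogeneous camera-frame points; U: n x 3 robot-frame points.
rng = np.random.default_rng(1)
P = np.hstack([rng.uniform(0, 40, (10, 3)), np.ones((10, 1))])
R_true = np.array([[1.0,  0.00, 0.0],
                   [0.05, 1.00, 0.0],   # a touch of shear
                   [0.0,  0.00, 1.0],
                   [5.0, -2.00, 1.0]])  # translation row, so U = P @ R_true
U = P @ R_true + rng.normal(0, 0.05, (10, 3))

# Pseudo-inverse via SVD: pinv(P) = V * diag(1/s) * U_svd'.
U_svd, s, Vt = np.linalg.svd(P, full_matrices=False)
P_pinv = Vt.T @ np.diag(1.0 / s) @ U_svd.T
R_svd = P_pinv @ U

# Same answer as the normal-equations form R = inv(P'P) * (P'U):
R_normal = np.linalg.inv(P.T @ P) @ (P.T @ U)
print(np.allclose(R_svd, R_normal))  # True (up to floating-point noise)
```

The payoff of the SVD version is numerical: when P'P is nearly singular (for example, when the sample points are close to coplanar), inverting it directly amplifies noise, while the SVD stays well-behaved.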

Tracking heads

I'm in the middle of writing up a paper on the results of the preliminary manipulator arm experiment. The experiment went OK, but the results aren't quite as strong as we were hoping for. Lucky for us, there are a few things we can fix/add that should make the results better next time. I'd say the paper is about 80% done (including the polishing stages). Most of the writing is still in rough-draft form, but it's all there. I still have some statistics to remember how to do, but nothing too scary.

Since I'm in the middle of writing a paper, I tend to get coding withdrawals after a few days, so yesterday I decided to implement head tracking, one of the things we wanted to add. We just used Johnny Lee's Wii remote head tracking. Since our user interface is written in C#, and so is Johnny Lee's project, the integration went pretty fast. I simply referenced the WiimoteLib project in C#, then dropped in Johnny Lee's code to connect to the remote and read values for head tracking. After that, all that was left was to use the head position to modify the virtual camera's view angle, and it works. Sometime I plan to record a video of the interface in use and post it.
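I won't post our actual C# integration here, but conceptually the camera update is tiny. Here's a rough Python sketch of the idea (the function name and millimeter values are made up for illustration, and Johnny Lee's demo does a fancier off-axis projection than this):

```python
import math

def head_to_camera(head_x_mm, head_y_mm, head_dist_mm):
    """Turn a tracked head position into a virtual camera pose.

    Head offsets are measured from the screen center; the camera sits
    where the head is and always looks at the screen center, which is
    what produces the "window into the scene" effect.
    """
    cam_pos = (head_x_mm, head_y_mm, -head_dist_mm)
    look_at = (0.0, 0.0, 0.0)
    # Equivalent yaw/pitch, if your camera API wants angles instead:
    yaw = math.degrees(math.atan2(head_x_mm, head_dist_mm))
    pitch = math.degrees(math.atan2(head_y_mm, head_dist_mm))
    return cam_pos, look_at, yaw, pitch

print(head_to_camera(100.0, 50.0, 600.0))
```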

Next up I've got the paper to finish, then the Master's thesis, so there's plenty of writing ahead. Every now and then I'll squeeze in a new feature, so by the time I'm done writing the thesis I'll have another experiment ready to run and a new paper ready to write. Write on!