Impressions and Our Project’s Goals – Quoc Pham

I’m really enjoying my first couple of weeks here at UW Stout’s Robotics REU. The program kicks off very quickly, and the research team was already able to start testing some ideas by the end of the first week. I like that it puts a lot of the responsibility on the undergraduate researchers to take charge of our project and to manage our own deadlines.

The walk from UW Stout’s campus to our dorms passes a good number of restaurants and shops, yet it’s still only about ten minutes. That makes it convenient to grab breakfast, lunch, or even dinner on the way between the lab and home. Though I haven’t fully explored the city of Menomonie, the lakeside view from the dorm makes for a scenic welcome coming back from the lab.

At the robotics research lab, my team and I are hoping to autonomously control a robotic arm using a camera. We want the arm to be able to feed a person, which means the robot must be able to determine when the person is ready for a bite, where the food is, how to grab it, and so on. Specifically, our team is tackling object detection: the software has to locate the mouth and identify whether it’s open. If successful, the concept could be applied to a number of other assistive applications, like picking up things around a wheelchair, and non-assistive ones, like picking up litter. The plan is to first get camera-servomotor control working on one of the boards, then bring in open source computer vision libraries, since our BeagleBone board runs Linux.
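To give a rough feel for the servo side of that plan, here is a minimal Python sketch for commanding a hobby servo from a BeagleBone Black. It assumes the Adafruit_BBIO library; the pin name and the angle-to-duty-cycle mapping are illustrative placeholders, not our actual wiring or control code.

```python
# Hedged sketch: drive a hobby servo from a target angle on a BeagleBone Black.
# Assumes the Adafruit_BBIO library; the pin choice and mapping are placeholders.
import Adafruit_BBIO.PWM as PWM

SERVO_PIN = "P9_14"   # hypothetical PWM pin
FREQ_HZ = 50          # standard 50 Hz hobby-servo frame rate

def angle_to_duty(angle_deg):
    """Map 0-180 degrees to a 1-2 ms pulse (5%-10% duty at 50 Hz)."""
    return 5.0 + (angle_deg / 180.0) * 5.0

PWM.start(SERVO_PIN, angle_to_duty(90), FREQ_HZ)  # start centered

def point_arm(angle_deg):
    """Command the servo toward an angle reported by the vision code."""
    PWM.set_duty_cycle(SERVO_PIN, angle_to_duty(angle_deg))

point_arm(45)  # example: move toward wherever the camera says the target is

PWM.stop(SERVO_PIN)
PWM.cleanup()
```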

There can be noticeable benefits to implementing computer vision (CV) on an FPGA as opposed to a compact computer like a BeagleBone Black. The FPGA offers parallel hardware as opposed to the CPU’s serial execution, which can dramatically speed up processing. In addition, the BeagleBone runs an operating system (Linux), which adds overhead to the computation and slows things down further. Processing speed matters for computer vision because of the real-time nature of the task: a robotic arm holding a spoon near your face, for instance, has to avoid jabbing you if you’re not perfectly still. Although the FPGA can be faster, it doesn’t have access to the extensive libraries the BeagleBone Black does, thanks to the open source computer vision community. All that work from other developers can go a long way toward accomplishing our goals.

In our first week, we developed a method in SystemVerilog that uses the color red to send a stop signal. In our second week, we expanded on that so it sends different output signals depending on where the red appears on the screen. While improving the robustness of the color detection, I learned that there is a lot of uncertainty in the real world. We eventually tuned the threshold values to a satisfying performance, but it took lots of trial and error; debugging was difficult because we couldn’t read out individual pixel RGB values to see what we were getting in real time. Currently, my team is porting the basic algorithm over to a tiny FPGA board that will power a small mobile robot to follow red signs or tape.
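Our actual implementation is in SystemVerilog on the FPGA, but the idea is easier to show in a short Python/OpenCV sketch: threshold for “red enough” pixels, then emit a different signal depending on which third of the frame the red falls in. The specific thresholds and camera index here are guesses, not the values we tuned.

```python
# Hedged Python/OpenCV sketch of the red-detection idea (the real version is
# SystemVerilog on the FPGA); thresholds and camera index are illustrative.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # assumed default camera
ret, frame = cap.read()
if ret:
    # Crude "is this pixel red?" test: strong red channel, weak green and blue.
    b, g, r = cv2.split(frame)
    red_mask = (r > 150) & (g < 100) & (b < 100)

    # Decide which third of the frame holds the most red pixels, mirroring
    # the position-dependent output signals on the FPGA.
    h, w = red_mask.shape
    thirds = [red_mask[:, :w // 3].sum(),
              red_mask[:, w // 3: 2 * w // 3].sum(),
              red_mask[:, 2 * w // 3:].sum()]
    if max(thirds) == 0:
        signal = "none"
    else:
        signal = ["left", "center", "right"][int(np.argmax(thirds))]
    print("red region:", signal)

cap.release()
```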

Since our ultimate goal is to implement computer vision, our next step will be to experiment with open source computer vision libraries and develop a detector specifically for the mouth. We hope to get our robot moving by Wednesday or Thursday so we can start tinkering on the BeagleBone on Friday.
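As a sketch of where that experimenting might start, the snippet below uses OpenCV’s bundled Haar cascade to find a face and then treats the lower third of the face box as a rough mouth region. This is only a starting point I’m assuming we’ll try, not our finished detector, and deciding whether the mouth is open would need more than this.

```python
# Hedged sketch: face detection with OpenCV's bundled Haar cascade, using the
# lower third of the face box as a stand-in for the mouth region.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # assumed default camera
ret, frame = cap.read()
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Lower third of the face bounding box as a rough mouth region.
        mouth_roi = gray[y + 2 * h // 3: y + h, x: x + w]
        print("face at", (x, y, w, h), "mouth ROI size", mouth_roi.shape)

cap.release()
```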

Hope you enjoyed reading!
