Understanding the code

Added by Nandan Dubey almost 6 years ago

Hi,
I wanted to modify the recognition code and make my own custom recognition firmware on Pixy. I am using the gcc branch.
Can you please tell me where the signature matching happens in the code? Where exactly are the input frame pixels processed to match against the set signatures?

As far as I understood from the code (this might be wrong, but it may also be helpful for the community):
When we press the button, setSignature is called, which in turn calls cc_setSigPoint with the image center as the seed point for setting the signature. The generateSignature function in colorlut.cpp then takes the seed point and grows a region around it based on some algorithm which isn't clear to me (I am assuming it is somewhat similar to flood fill, collecting pixels with similar data).

Since Pixy can't accommodate the complete image in memory, is the frame just a part of the image? Does IterPixel::next give pixel data only for a part of the frame?
So at the end you have the signature for the object, which is stored in m_runtimeSigs with some parameters like (m_uMin, m_uMax, m_vMax, m_vMin, m_rgbSat)? But where does the signature matching happen for each frame? Is generateLUT somehow related to this? What is the parent method that is called with each new frame grabbed by the sensor?
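[Editor's note: for readers following along, the flood-fill guess above can be sketched like this. It is a generic seed-based region grower, not Pixy's actual generateSignature; all names, the fixed 8x8 image size, and the tolerance test are illustrative assumptions.]

```c
#include <stdint.h>
#include <stddef.h>

#define W 8
#define H 8

/* Grow a region from a seed pixel: accept 4-connected neighbors whose
   value is within `tol` of the seed, marking them in `mask`. Returns the
   number of pixels accepted. Explicit stack, no recursion. */
static size_t grow_region(uint8_t img[H][W], int sr, int sc,
                          uint8_t tol, uint8_t mask[H][W])
{
    int stack[4 * W * H][2];               /* generous bound on live entries */
    int top = 0;
    uint8_t seed = img[sr][sc];
    size_t count = 0;

    stack[top][0] = sr; stack[top][1] = sc; top++;
    while (top > 0) {
        top--;
        int r = stack[top][0], c = stack[top][1];
        if (r < 0 || r >= H || c < 0 || c >= W || mask[r][c])
            continue;                      /* out of bounds or already visited */
        int d = (int)img[r][c] - (int)seed;
        if (d < -(int)tol || d > (int)tol)
            continue;                      /* too different from the seed */
        mask[r][c] = 1;
        count++;
        stack[top][0] = r + 1; stack[top][1] = c;     top++;
        stack[top][0] = r - 1; stack[top][1] = c;     top++;
        stack[top][0] = r;     stack[top][1] = c + 1; top++;
        stack[top][0] = r;     stack[top][1] = c - 1; top++;
    }
    return count;
}
```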

Thanks & Regards,


Replies (8)

RE: Understanding the code - Added by Nandan Dubey almost 6 years ago

I went through more posts in the forums. I am now under the impression that the M4 receives the frame only when the button is pressed (so only when signature setting is required). For processing each frame and comparing it with the signatures to locate the object, the M0 is used, and that frame data is never sent to the M4. Is this the case? Can you give me some pointers as to which methods to look at regarding this?
I may be incorrect throughout, but if this is the case, can we pass all frames to the M4 all the time? Would that be too much for the hardware? The M4 is the LPC4330 -- what is the M0? Is all the external communication code (USB, UART, serial) in the M4? Creating signatures is also done in the M4. Is setting the registers on the CMOS camera (controlling exposure etc.) also done by the M4?

Basically, can you please shed some light on how image data from the CMOS sensor flows through the code, and which functions to look into for the processing of the raw image data?

Thanks & Regards,

RE: Understanding the code - Added by Edward Getz almost 5 years ago

Hello Nandan,
When Pixy is processing normally (in default mode), it isn't putting raw frames in memory. The M0 reads pixels directly from the imager and performs a lookup operation to determine whether each pixel is potentially part of one of the 7 signatures. So the M0 creates a set of run-lengths of interesting pixels. The M4 then processes the run-lengths to form connected-component blobs.
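[Editor's note: the run-length idea above can be sketched as follows. This is an illustration of the pattern, not the M0's actual code; `Run`, `encode_runs`, and the precomputed `lut_hits` array (one signature number per pixel, 0 for no match) are assumed names.]

```c
#include <stdint.h>
#include <stddef.h>

/* A run of consecutive pixels that matched the same signature. */
typedef struct { uint16_t col; uint16_t len; uint8_t sig; } Run;

/* Scan one line of per-pixel LUT results and emit run-lengths for the
   non-zero (matching) stretches only. Returns the number of runs. */
size_t encode_runs(const uint8_t *lut_hits, size_t width,
                   Run *out, size_t max_out)
{
    size_t n = 0;
    size_t i = 0;
    while (i < width && n < max_out) {
        uint8_t sig = lut_hits[i];
        size_t start = i;
        while (i < width && lut_hits[i] == sig)
            i++;                          /* extend the current run */
        if (sig != 0) {                   /* only keep matching runs */
            out[n].col = (uint16_t)start;
            out[n].len = (uint16_t)(i - start);
            out[n].sig = sig;
            n++;
        }
    }
    return n;
}
```

Only the interesting runs cross to the M4, which is why a full raw frame never needs to be buffered in default mode.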

When learning signatures, the M4 grabs raw frames (320x200 resolution) and processes them. The code that does this is the button code in button.cpp. ButtonMachine::handleSignature is a state machine that grabs frames (cam_getFrameChirpFlags). The signature learning code is called in ButtonMachine::ledPipe and ButtonMachine::setSignature.

Hope this helps!

Edward

RE: Understanding the code - Added by Nandan Dubey almost 5 years ago

Hi Edward,

Your answer surely helped a lot. I have some more queries though.
I understand that when learning a signature the M4 reads image data using a Bayer filter (bg1g2r)? I also understand the process by which the color signature is calculated using ratios and connected components. colorlut.cpp is almost understandable now.

What is lacking is the M0 code, which is mostly assembly language. What I have got till now is:
rls_m0.c is the main processing file. getRLSFrame is called on each frame grab; this in turn calls the two functions below for each frame line read:
lineProcessedRL0A
lineProcessedRL1A

lineProcessedRL0A seems to be doing the preprocessing of the color pixels; I'm not sure what is achieved after this function call completes.
lineProcessedRL1A seems to calculate the u, v values and to load the color signatures for comparison. Is this the main method doing the matching and calculating the connected components for matched signatures? How and where is the lookup operation done in the M0 to match against the seven color signatures? I guess there is no connected-component processing in the M0, and each pixel is compared individually with the signatures to get a match? So now I just want to understand how and where the M0 matches the read pixel data against the signatures.
How does the LUT work in the M0 for all the set signatures? After some preprocessing in the M0, a pixel is passed to the M4 if it passes the initial screening. Then the final processing is done in the M4 to form the blobs (in blobs.cpp), right? How does this communication happen?
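[Editor's note: the u, v values mentioned above can be turned into a table index along these lines. This is a hedged sketch of the general chromaticity-LUT technique, not the firmware's code; the scaling factor, bit widths, and the name `lut_index` are all assumptions.]

```c
#include <stdint.h>

/* Form a small lookup-table index from an RGB pixel via normalized
   chromaticity: u ~ (r-g)/c, v ~ (b-g)/c with c = r+g+b, so the index
   is brightness-independent. The table would store a signature number
   (or 0) at each index. */
static uint32_t lut_index(int r, int g, int b)
{
    int c = r + g + b;
    if (c == 0) c = 1;                        /* avoid divide-by-zero */
    int u = ((r - g) * 127) / c;              /* in [-127, 127] */
    int v = ((b - g) * 127) / c;              /* in [-127, 127] */
    uint32_t ui = (uint32_t)(u + 128) >> 3;   /* top 5 bits of u */
    uint32_t vi = (uint32_t)(v + 128) >> 3;   /* top 5 bits of v */
    return (ui << 5) | vi;                    /* 10-bit table index */
}
```

Because the ratios cancel brightness, a bright and a dark pixel of the same hue land on the same index, which is what lets one small table cover all seven signatures at once.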

Thanks & Regards,

RE: Understanding the code - Added by Edward Getz almost 5 years ago

Hello Nandan,
It sounds like you're on the right track. Since the Pixy sensor is "raw Bayer" you need at least two lines of data to determine an RGB value. This is what's going on with lineProcessRL0A and lineProcessRL1A.
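[Editor's note: a minimal illustration of the two-line point above. In a BGGR pattern the blue and one green sample sit on one row and the red and the other green on the next, so an RGB triple for a 2x2 cell needs both rows. The function name and green-averaging choice are mine, not the firmware's.]

```c
#include <stdint.h>

/* Combine one 2x2 Bayer cell (b, g1 from the first line; g2, r from the
   second) into a single RGB value. */
static void bayer_cell_rgb(uint8_t b, uint8_t g1, uint8_t g2, uint8_t r,
                           uint8_t *R, uint8_t *G, uint8_t *B)
{
    *R = r;                                    /* red from the second row */
    *G = (uint8_t)(((unsigned)g1 + g2) / 2);   /* average the two greens */
    *B = b;                                    /* blue from the first row */
}
```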

The M0 and M4 communicate through the Qqueue -- see qqueue.h. It's a queue of Q-values (run-lengths).
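[Editor's note: the Qqueue is worth a sketch. Below is a generic single-producer/single-consumer ring buffer of the kind two cores can share over memory, with the M0 pushing and the M4 popping. It illustrates the pattern only; the real qqueue.h differs, and `QQ`, `qq_push`, `qq_pop`, and the queue size are assumptions.]

```c
#include <stdint.h>

#define QSIZE 64                  /* must be a power of two */

typedef struct {
    volatile uint32_t head;       /* written by the producer (M0) only */
    volatile uint32_t tail;       /* written by the consumer (M4) only */
    uint32_t data[QSIZE];
} QQ;

/* Producer side: returns 0 if the queue is full. */
static int qq_push(QQ *q, uint32_t v)
{
    uint32_t next = (q->head + 1) & (QSIZE - 1);
    if (next == q->tail)
        return 0;                 /* full: caller drops or stalls */
    q->data[q->head] = v;
    q->head = next;
    return 1;
}

/* Consumer side: returns 0 if the queue is empty. */
static int qq_pop(QQ *q, uint32_t *v)
{
    if (q->tail == q->head)
        return 0;                 /* empty */
    *v = q->data[q->tail];
    q->tail = (q->tail + 1) & (QSIZE - 1);
    return 1;
}
```

Because each index is written by exactly one side, no lock is needed for a single producer and single consumer (a real dual-core version would also need the appropriate memory barriers).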

Hope this helps!

Edward

RE: Understanding the code - Added by Nandan Dubey almost 5 years ago

Hi Edward,

Can you please explain a little bit more about lineProcessRL0A and lineProcessRL1A? What exactly are these functions doing?
What I got is that there is a loop in both functions. There are some calculations of blue + green + last b + g, some more for v (blue - green), some shift operations; a LUT value is also loaded and compared. It would be very helpful if you could explain exactly what happens in these functions.

I am guessing that they are not just doing the "raw Bayer" conversion to RGB values? If so, then where is the preprocessing of the pixel data in the M0 that discards most of the pixels? I was thinking that the M0 does some preprocessing on the incoming pixel data to discard most of the pixels, so that only a few pixels in a queue are sent to the M4 for more processing (matching with signatures).

Regards,
Nandan

RE: Understanding the code - Added by Edward Getz almost 5 years ago

Hello Nandan,
I'm not the expert on that code, but the RL0A routine just buffers the blue and green pixels, and the RL1A routine combines the red and green with the blue and green from the previous line (processed by the RL0A routine).

What's going on in RL1A is that an index into a lookup table is formed from red, green and blue. If two adjacent pixels have a non-zero lookup-table entry, these pixels are logged into the queue along with the r+g+b value, the column value, the signature number, and the r-g and b-g values; otherwise no data goes into the queue. This is how many of the pixels are eliminated by the M0. The M4 then reads the data in the queue to determine the connected components.
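[Editor's note: the "two adjacent non-zero entries" gate described above can be sketched as a simple filter over one line of LUT results. The function name and output format are illustrative; the real RL1A does this inline in assembly alongside the other bookkeeping.]

```c
#include <stdint.h>
#include <stddef.h>

/* Given one line of per-pixel LUT entries (signature number or 0), keep
   only the columns where this pixel AND its left neighbor both hit a
   signature. Returns how many columns survived the gate. */
size_t gate_adjacent(const uint8_t *lut, size_t width,
                     uint16_t *cols, size_t max_out)
{
    size_t n = 0;
    for (size_t i = 1; i < width && n < max_out; i++) {
        if (lut[i] != 0 && lut[i - 1] != 0)
            cols[n++] = (uint16_t)i;   /* this pixel would be queued */
    }
    return n;
}
```

Requiring two adjacent hits suppresses isolated noise pixels, which is a big part of why so little data has to cross to the M4.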

Hope this helps!

Edward

RE: Understanding the code - Added by Nandan Dubey almost 5 years ago

Hi Edward,

Thank you very much for answering all the questions. I saw the code where blobs are processed from the queue in the M4 in blobs.cpp, and I also found out how the blob position and the width and height of the bounding box are calculated.

Now I want to know whether I can program Pixy's USB interface to act as an HID device. I mean that as soon as I connect Pixy to any computer, can it be detected as a human interface device, just like a mouse, or even better, as a multi-pointer interface device? If yes, where should I start making changes? I see that the libpixy_m4 project has some USB files -- is that the place to start?
Has anybody made a similar effort already? I didn't find anything via forum search.
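[Editor's note: the bounding-box accumulation mentioned above reduces to a few min/max updates per run-length. A hedged sketch, with invented names, in the spirit of the blobs.cpp processing rather than a copy of it:]

```c
#include <stdint.h>

/* Axis-aligned bounding box of a blob. */
typedef struct { uint16_t xmin, xmax, ymin, ymax; } BBox;

static void bbox_init(BBox *b)
{
    b->xmin = 0xffff; b->xmax = 0;   /* empty box: min > max */
    b->ymin = 0xffff; b->ymax = 0;
}

/* Fold one run-length (row, start column, length) into the box. */
static void bbox_add_run(BBox *b, uint16_t row, uint16_t col, uint16_t len)
{
    uint16_t end = col + len - 1;    /* last column of the run */
    if (col < b->xmin) b->xmin = col;
    if (end > b->xmax) b->xmax = end;
    if (row < b->ymin) b->ymin = row;
    if (row > b->ymax) b->ymax = row;
}
```

The blob's reported width is then xmax - xmin + 1 and its height ymax - ymin + 1.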

Regards,
Nandan

RE: Understanding the code - Added by Edward Getz almost 5 years ago

Hello Nandan,
You might consider taking a step back and focusing on getting Pixy to act like a USB mouse, which is an independent problem. There are some example pieces of code on the NXP site:

https://www.lpcware.com/content/page/mouse-device-example

After you get this under your belt, you can then tackle the problem of integrating this with the Pixy firmware. (That's how I'd approach this.)
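[Editor's note: whatever USB stack is used, the host identifies a mouse from its HID report descriptor. The bytes below are the standard 3-button relative mouse descriptor from the HID specification, not taken from the Pixy firmware or the linked NXP example; they pair with a 3-byte input report (buttons, dX, dY).]

```c
#include <stdint.h>

/* Standard HID report descriptor: 3 buttons plus relative X/Y movement. */
static const uint8_t mouse_report_desc[] = {
    0x05, 0x01,  /* Usage Page (Generic Desktop) */
    0x09, 0x02,  /* Usage (Mouse) */
    0xA1, 0x01,  /* Collection (Application) */
    0x09, 0x01,  /*   Usage (Pointer) */
    0xA1, 0x00,  /*   Collection (Physical) */
    0x05, 0x09,  /*     Usage Page (Buttons) */
    0x19, 0x01,  /*     Usage Minimum (1) */
    0x29, 0x03,  /*     Usage Maximum (3) */
    0x15, 0x00,  /*     Logical Minimum (0) */
    0x25, 0x01,  /*     Logical Maximum (1) */
    0x95, 0x03,  /*     Report Count (3) */
    0x75, 0x01,  /*     Report Size (1) */
    0x81, 0x02,  /*     Input (Data, Variable, Absolute) */
    0x95, 0x01,  /*     Report Count (1) */
    0x75, 0x05,  /*     Report Size (5) */
    0x81, 0x01,  /*     Input (Constant) -- pad to a byte */
    0x05, 0x01,  /*     Usage Page (Generic Desktop) */
    0x09, 0x30,  /*     Usage (X) */
    0x09, 0x31,  /*     Usage (Y) */
    0x15, 0x81,  /*     Logical Minimum (-127) */
    0x25, 0x7F,  /*     Logical Maximum (127) */
    0x75, 0x08,  /*     Report Size (8) */
    0x95, 0x02,  /*     Report Count (2) */
    0x81, 0x06,  /*     Input (Data, Variable, Relative) */
    0xC0,        /*   End Collection */
    0xC0         /* End Collection */
};
```

A multi-pointer device would need a different descriptor (e.g. digitizer usages), but getting this single-mouse case enumerating first is the sensible milestone.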

Hope this helps!

Edward
