Distinguish a particular recognized object from a PixyMon-like display

Added by Sal Trupiano 7 months ago

Hello,

I am involved in a project with a Raspberry Pi + Pixy, connected via USB, mounted on a robot. The Pi has WiFi. We have prior experience with the Pixy.

On the robot, the Pixy will see one or more groups of objects, in different arrangements. Each group will have one or more objects. Each object in each group is the same shape and color.

The Pixy is trained to recognize one of the objects.

We would like to have a remote computer that has WiFi access to the Pi show the real-time Pixy camera frames along with the one or more groups of objects that Pixy recognizes.

Basically, the remote computer would have a display that looks like PixyMon. The remote computer runs Windows, but it doesn't need to. I am not sure whether another OS on the remote computer would make this easier or not; I would guess not.

This remote computer display would be viewed by an operator. We would like the operator to select a group from the display and then send the object data for that selection to the robot so that the robot can navigate to that group.

Since everything is the same color / hue (object), there would seem to be no way to distinguish between different groups.

Therefore, the first idea I have is to change the Pixy M4 firmware so that it will include a unique ID with every block, so as to distinguish a specific block.
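To make the firmware idea concrete, here is a small illustrative sketch (in Python, not the actual M4 C code) of what a block record extended with a unique ID could look like. The field names mirror the documented Pixy block fields (signature, x, y, width, height); the `uid` field is the proposed addition and is not part of the real firmware.

```python
import itertools
from dataclasses import dataclass, field

# Monotonically increasing counter standing in for the firmware's
# per-block ID generator (hypothetical).
_next_uid = itertools.count(1)

@dataclass
class Block:
    signature: int
    x: int
    y: int
    width: int
    height: int
    # Proposed addition: a unique ID assigned when the block is emitted.
    uid: int = field(default_factory=lambda: next(_next_uid))

# Two blocks of the same signature are now distinguishable by uid.
a = Block(signature=1, x=10, y=20, width=5, height=5)
b = Block(signature=1, x=40, y=20, width=5, height=5)
```

One open design question this sketch glosses over is whether the ID should be stable across frames (the same physical object keeps its ID) or merely unique within a frame; the firmware change is much simpler in the second case.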

The next issue is how to get a PixyMon-like display and incorporate selecting a particular group / block on the display. There seem to be several possible approaches.

The approach that appears to require the least work is to compile PixyMon for the Pi / Raspbian, run a VNC server on the Pi, and then connect to the Pi's VNC server over WiFi from another computer acting as a client. This should allow PixyMon to be operated via the VNC client on the remote computer. libpixyusb and PixyMon would need to change to recognize the unique block ID. In addition, PixyMon would also need to change to allow a user to select a block and then send the object info for the selected block to the robot. However, being able to select a block would seem to require a fair amount of understanding of Qt and the Qt side of PixyMon.

Instead of VNC-ing PixyMon, I have thought about breaking PixyMon into a client and a server. When it gets down to actually communicating with the Pixy, it looks like PixyMon could be split at the Link class. The Link class has a derived class for USB (USBLink). I was thinking of creating a new class derived from Link, such as Network. The implementation of the Network class on the remote computer would do some sort of socket communication to the Pi over WiFi. Everything above the Network class would run on the remote Windows computer; everything below it would run on the Pi. The Pi would need a corresponding Network class implementation that relays everything coming from the network down to the Pixy and sends everything coming up from the Pixy back to the remote computer. I don't know what is expected in general in terms of the transport protocol. Is it send one command, wait for a response, then send another command? Or is it more full-duplex? Or multiple commands in flight at the same time? I would need to get this exactly right for this to work at all. All the PixyMon, libpixyusb and M4 firmware changes noted above in the VNC solution would still be required.
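The Link-layer split described above could be sketched as follows. This is an illustration in Python, not the real PixyMon C++ class (the actual Link interface and method names may differ); `NetworkLink` is the hypothetical replacement for USBLink that tunnels the same byte stream over a socket, with each message framed by a 4-byte length prefix so request/response boundaries survive the network regardless of how the bytes are fragmented in transit.

```python
import socket
import struct
from abc import ABC, abstractmethod

class Link(ABC):
    """Stand-in for PixyMon's Link base class (names are illustrative)."""
    @abstractmethod
    def send(self, data: bytes) -> None: ...
    @abstractmethod
    def receive(self) -> bytes: ...

class NetworkLink(Link):
    """Hypothetical Link implementation over a TCP-style socket."""
    def __init__(self, sock: socket.socket):
        self.sock = sock

    def send(self, data: bytes) -> None:
        # Length-prefix framing: 4-byte big-endian length, then payload.
        self.sock.sendall(struct.pack("!I", len(data)) + data)

    def receive(self) -> bytes:
        (length,) = struct.unpack("!I", self._read_exact(4))
        return self._read_exact(length)

    def _read_exact(self, n: int) -> bytes:
        # recv() may return fewer bytes than asked; loop until done.
        buf = b""
        while len(buf) < n:
            chunk = self.sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed connection")
            buf += chunk
        return buf

# Demo with a local socket pair standing in for the WiFi connection.
a, b = socket.socketpair()
remote, pi = NetworkLink(a), NetworkLink(b)
remote.send(b"getBlocks")
request = pi.receive()
```

With framing like this, the strict one-command / one-response question becomes less critical: either side can read whole messages off the wire independently, and ordering policy can be decided above the Link layer.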

Another approach would be to not rely on PixyMon at all and instead do something with the cam_getFrame command that is discussed in this forum. This would require getting frames and then drawing the recognized objects over the video frame buffer, like PixyMon does. Some of this could happen on the Pi, or it could all be sent to the remote Windows computer and happen there. This seems like a lot of tricky graphics work, and keeping the frame rate decent might be an issue. I wouldn't need to change the M4 firmware, because I could tag every object block coming out of libpixyusb on the Pi with an ID.
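The overlay step itself is simple in principle. Here is a minimal sketch, assuming a grayscale frame held as a list of rows (the real frame from cam_getFrame is raw Bayer data that would need conversion first): for each recognized block, draw its bounding-box outline into the frame buffer, which is roughly what PixyMon renders on top of the live video.

```python
def draw_box(frame, x, y, w, h, value=255):
    """Draw a rectangle outline into a frame given as a list of rows.

    Illustrative only: assumes the box fits inside the frame and the
    frame is a single grayscale channel.
    """
    for cx in range(x, x + w):
        frame[y][cx] = value            # top edge
        frame[y + h - 1][cx] = value    # bottom edge
    for cy in range(y, y + h):
        frame[cy][x] = value            # left edge
        frame[cy][x + w - 1] = value    # right edge

# Tiny stand-in frame (48 rows x 64 columns of zeros).
frame = [[0] * 64 for _ in range(48)]
draw_box(frame, x=10, y=5, w=8, h=6)    # one recognized block
```

The per-pixel work here is trivial next to decoding and shipping the frames themselves, which supports the worry above that frame rate, not overlay drawing, is the hard part.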

I suppose the most totally basic approach, one that doesn't really implement the concept, is to just send all the object block info received by the Pi from the Pixy to the remote computer and create an application there that displays the blocks without the video frames. Maybe that is more useful than nothing.
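The blocks-only fallback mostly reduces to serializing block records on the Pi and decoding them in the remote display application. A minimal sketch, using JSON for readability (field names are illustrative, not the Pixy wire format):

```python
import json

def encode_blocks(blocks):
    """Serialize a batch of (signature, x, y, width, height) tuples."""
    return json.dumps([
        {"signature": s, "x": x, "y": y, "width": w, "height": h}
        for (s, x, y, w, h) in blocks
    ]).encode()

def decode_blocks(payload):
    """Inverse of encode_blocks, run on the remote display side."""
    return [(b["signature"], b["x"], b["y"], b["width"], b["height"])
            for b in json.loads(payload.decode())]

# Round trip: what the Pi sends is what the remote app draws.
blocks = [(1, 10, 20, 5, 5), (1, 40, 22, 6, 4)]
payload = encode_blocks(blocks)
```

A compact binary format would be preferable on a busy link, but at typical block counts JSON is unlikely to be the bottleneck.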

Sorry for my long message.

All input / feedback is very much appreciated.

Thanks,
Sal


Replies (7)

RE: Distinguish a particular recognized object from a PixyMon-like display - Added by Jesse French 7 months ago

Hey Sal,

Unfortunately, I have no idea what the feasibility / difficulty of this is. I know you've experimented with modifying Pixy firmware before.

Rich is the one to answer this question, but he is away today / this weekend and will be back on Monday. So...stay tuned! Apologies for the delay.

Thanks,
Jesse

RE: Distinguish a particular recognized object from a PixyMon-like display - Added by Sal Trupiano 7 months ago

Thanks for following up, Jesse.

I have been thinking about this problem more, and I think it makes the most sense to run PixyMon on the Pi and access it via VNC.

We will train the Pixy for the object color and all 7 signatures will be the same.

Then we will use the signature to select which recognized object the operator has chosen. This selection will be made via a separate path, outside of PixyMon. As I remember, PixyMon labels what it recognizes with the signature.

The only planned modification to PixyMon is at the point where it gets blocks from the Pixy (which it seems to do directly via libusb, not via libpixyusb): send all those blocks over the network from the Pi to the robot.

My concerns:
1. How well does PixyMon run on the Pi? We are using a Pi 3 Model B, which has 4 ARMv8 cores and 1 GB of RAM. But it seems that PixyMon does all of its image manipulation in software, which I'm guessing can eat CPU cycles pretty well. Plus, the Pi needs to run VNC to display PixyMon remotely, which, beyond using CPU cycles, may have unacceptably long latency when drawing the window. There are some VNC alternatives, but I don't know whether they are more efficient. Beyond PixyMon and VNC, the Pi won't be running any other user-level software.
2. Do signatures behave as I described?

Thanks again for any input and feedback,
Sal

RE: Distinguish a particular recognized object from a PixyMon-like display - Added by Sal Trupiano 7 months ago

Got PixyMon to run on the Pi. The next step is to access it remotely, but that is really outside the scope of Pixy and PixyMon.

My question is about Pixy and PixyMon behavior in the case of multiple recognized objects all of the same color.

Let's assume that Pixy is trained for all 7 signatures to be of the same object / color.

Does Pixy round robin through the signatures and therefore send a different signature with each object info block it recognizes?

If it does, then the signature number itself could serve as the label. That signature number would appear in PixyMon, and the object could then be uniquely selected from the other recognized objects at different locations.

If Pixy doesn't round robin through the signatures, then there is no way to distinguish / get the object info block for a particular object. In that case, modifications would be required to at least PixyMon, and possibly the M4 core firmware in the Pixy, to support a unique block ID.

I would appreciate any comments / corrections.

Thanks,
Sal

RE: Distinguish a particular recognized object from a PixyMon-like display - Added by Edward Getz 7 months ago

Hello Sal,
I'm not understanding your question.

Pixy supports 7 signatures, that's correct, but training all 7 signatures for the same color would result in poor performance. Pixy relies on each signature being a distinct color in order to reliably detect objects of that color. For example, green, yellow, orange for signatures 1, 2, 3 --- not green, green, green.

Pixy can detect hundreds of objects of a given signature.

I hope that helps explain -- my apologies if not.

Edward

RE: Distinguish a particular recognized object from a PixyMon-like display - Added by Sal Trupiano 7 months ago

Hi Edward,

Thanks for your reply.

I do now understand that all 7 signatures shouldn't have the same color.

But my basic question is: let's say that Pixy recognizes 2 objects, both of the same signature. How can I tell one object from the other? So 2 objects of the same signature are displayed by PixyMon, and there isn't any distinguishing information between the two. I just want to confirm that this is the case.

Thanks again,
Sal

RE: Distinguish a particular recognized object from a PixyMon-like display - Added by Edward Getz 7 months ago

Hello Sal,
Detected objects with the same signature can vary in size and location, but that's the extent of it for the color connected components algorithm.
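Since size and location are the only distinguishing data, one option on the host side (an illustration, not Pixy or PixyMon functionality) is to assign persistent IDs by matching each block in a new frame to the nearest block from the previous frame, i.e. greedy nearest-neighbor tracking on block centers:

```python
import math

def match_ids(prev, current, max_dist=20.0):
    """Carry block IDs across frames by nearest-neighbor matching.

    prev:    {uid: (x, y)} centers from the previous frame
    current: list of (x, y) centers from the new frame
    Returns a {uid: (x, y)} dict, reusing an old uid when a new block
    is within max_dist pixels of it, else minting a fresh uid.
    Illustrative sketch only; a real tracker would also use block size.
    """
    result, used = {}, set()
    next_uid = max(prev, default=0) + 1
    for (x, y) in current:
        best_uid, best_d = None, max_dist
        for uid, (px, py) in prev.items():
            d = math.hypot(x - px, y - py)
            if uid not in used and d < best_d:
                best_uid, best_d = uid, d
        if best_uid is None:
            best_uid = next_uid
            next_uid += 1
        used.add(best_uid)
        result[best_uid] = (x, y)
    return result

frame1 = match_ids({}, [(10, 20), (40, 20)])     # mints IDs 1 and 2
frame2 = match_ids(frame1, [(12, 21), (41, 19)]) # both IDs carry over
```

This keeps all firmware and PixyMon code unmodified: the Pi tags the blocks it reads from libpixyusb, and the operator's selection refers to the tracker's ID rather than anything Pixy itself reports.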

Edward

RE: Distinguish a particular recognized object from a PixyMon-like display - Added by Sal Trupiano 6 months ago

Hello Edward, Understood. Thanks for your help and patience. Sal
