

Overview

Processor
  • NXP LPC4330
  • dual core, ARM Cortex-M4 and Cortex-M0
  • 204 MHz (both cores)
  • 264k RAM (0 wait state)
  • USB 2.0 high speed (OTG support)
  • I2C, SPI, UART, etc.
  • A/D, D/A
  • Floating point unit (M4)
  • SIMD instructions (M4)

Sensor
  • Omnivision OV9715
  • 1/4” sensor – low light, low noise
  • 1280x800, RGB Bayer direct
  • 25 fps full resolution, 50 fps 640x400
  • end-of-life tolerant

Hardware
  • USB device and host support (10Mbyte/sec)
  • 1M bytes flash, 264K bytes RAM
  • I2C, SPI and UART interfaces
  • ESD protection on all connectors
  • RGB LED (bright!)
  • pushbutton
  • Power input: USB, 10-pin connector, or unregulated power connector
  • Voltage sensing
  • 2 RC servo outputs

Architecture
  • M0 core: front-end pixel acquisition, initial processing stages, data reduction, etc.; writes results to shared RAM
  • 6 clock cycles per pixel (1280x800 25fps)
  • 12 clock cycles per pixel (640x400, 640x200 50fps)
  • 24 clock cycles per pixel (320x400, 320x200 50fps)
  • M4 core: high-level processing, PC and device communications

Building

Currently, embedded development is done using the Keil uVision V4.20 IDE with the ARM RealView compiler. PC-side development is done in the Qt application framework.

Debug Serial

Connector J7 on the PCB is the debug serial output. With the Rev. 1.0 PCB it's recommended to solder the connector on the opposite side of the soldermask (the camera side of the PCB) to keep the power and RC-servo connectors from getting in the way of the USB-to-serial board. This USB-to-serial board is cheap and works well:

https://www.sparkfun.com/products/718

Currently the software sets the debug baud rate to 115200 (8 data bits, 1 stop bit, no parity, no flow control). I discovered CoolTerm recently; it's a no-nonsense terminal program:

Windows http://freeware.the-meiers.org/CoolTermWin.zip
Mac http://freeware.the-meiers.org/CoolTermMac.zip

Remote procedure call interface

We're using a custom RPC framework called Chirp (Call Hopefully Informative Remote Procedure) for communication between processors. Chirp is used in 3 places:
  1. Communication between PC and Pixy (M4 core) over USB in "hi-speed" mode (10M bytes/sec measured)
  2. Communication between M4 and M0 ARM cores over shared memory
  3. Communication between M4 and serial host (microcontroller) over UART, SPI or I2C

The biggest advantage of Chirp is that it's simple and easy to add procedures to. It doesn't require you to write stub code. It's a binary interface, so it's quick. It supports enumerating the available procedures (service discovery), does type checking of arguments, and supports custom datatypes. The data interface is abstracted behind a simple read/write interface, so you can chirp across various physical links. Currently, Chirp over USB and shared memory is tested and working. Support for serial, I2C and SPI will be added and will include error detection/correction.
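
As a rough illustration of that read/write abstraction, a physical link might be wrapped like this. This is a hypothetical sketch; the class and method names are illustrative, not the actual Chirp API:

#include <stdint.h>

// Hypothetical sketch of the abstracted read/write interface Chirp
// talks through. Names are illustrative, not the actual Chirp API.
class Link
{
public:
    virtual ~Link() {}
    // send len bytes; returns bytes sent or a negative error code
    virtual int send(const uint8_t *data, uint32_t len) = 0;
    // receive up to len bytes; returns bytes received or a negative error code
    virtual int receive(uint8_t *data, uint32_t len, uint16_t timeoutMs) = 0;
};

// Chirp itself would only ever talk to a Link, so the same protocol code
// can run over USB, shared memory, UART, SPI or I2C once a Link subclass
// exists for that transport.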

Here's a simple example. From the caller's end there are 2 steps:

1) Query the procedure by name. This takes place only once, upon initialization:

getFrame = getProc("getFrame", NULL);

This gets the index for the procedure called getFrame and puts it in a variable. The NULL argument is a callback procedure: if you want asynchronous operation, you can have the result returned in the callback procedure without having to block/wait.

2) Call the procedure as desired:

result = call(SYNC, getFrame,
              UINT16(x_offset), UINT16(y_offset), UINT16(width), UINT16(height), END_OUT_ARGS,
              &frameLen, &frameData, END_IN_ARGS);

Here we call getFrame with 4 arguments and get the result in 2 variables: one for the size of the frame result and one for the frame data. We specified SYNC, which means we wish to block/wait for the result. If we had specified ASYNC, we would get the result in the callback procedure we passed into getProc(). The return value tells us whether the chirp call succeeded or not (error, timeout, etc.).
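
For the ASYNC case, a purely illustrative sketch (the real callback signature and argument conventions are defined by Chirp and may differ):

// Hypothetical callback -- the actual signature is defined by Chirp.
void frameCallback(uint32_t frameLen, uint8_t *frameData)
{
    // runs later, when the result arrives; nothing blocks in the meantime
}

getFrame = getProc("getFrame", (ProcPtr)frameCallback);  // register callback

call(ASYNC, getFrame,
     UINT16(x_offset), UINT16(y_offset), UINT16(width), UINT16(height), END_OUT_ARGS);
// call() returns immediately; the result is delivered to frameCallback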

From the callee end, there are 3 steps:

1) Register the procedure. This is done once upon initialization:

setProc("getFrame", (ProcPtr)getFrame);

2) Define the procedure:

void getFrame(Chirp *chirp, UINT16_IN(x_offset), UINT16_IN(y_offset), UINT16_IN(width), UINT16_IN(height))
{
  uint32_t frameLen;   // frame result size (type shown is illustrative)
  uint8_t *frameData;  // pointer to frame pixels (type shown is illustrative)

  // code to get the frame, filling in frameLen and frameData

  // send the result back to the caller
  CRP_RETURN(chirp, frameLen, frameData);
}

3) Call the service method periodically:

chirp->service();
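
For example, a minimal sketch of the callee's main loop:

while (1)
{
    chirp->service();   // field any incoming chirp calls
    // ... other periodic work ...
}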

Projects

This isn't a complete list of the work that needs to be done, just some suggestions. We imagine that the first release will have a connected-component color blob tracking algorithm, a PC-based GUI utility, firmware upload support, Arduino support, and not much else.

Command interpreter

The PC GUI utility will have a command window for typing in commands, much like previous CMUcam GUIs. From the command window we may want to:
  • Make Chirp calls to the Pixy (query, type-check arguments, grab relevant data)
  • Perform simple scripting using the Chirp calls, e.g. while loops, ?
  • Save results to files on PC (text, binary, JPEG)
  • Load results from PC files
  • ?
  • Would it make sense to embed Lua here?

Flash filesystem
  • The Rev. 1.0 PCB has 1M bytes of flash available (more can be added if we feel it's necessary). We will store the firmware in flash, but it would be nice to use the remaining space for saving non-volatile data in an intuitive way (color model data, configuration data including presets, etc.). A flash disk is an attractive choice because it's efficient and intuitive, and there are some simple implementations we can use to help get us going, e.g.:

http://elm-chan.org/fsw/ff/00index_p.html
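
As a rough sketch of saving configuration data through FatFs (linked above). Note that f_mount()'s signature varies between FatFs versions, and the low-level diskio layer for our flash part would still need to be written:

#include "ff.h"   // FatFs

// Persist a configuration blob to the flash disk. Assumes the FatFs
// diskio layer has been implemented for our flash part.
int saveConfig(const void *config, UINT len)
{
    FATFS fs;
    FIL file;
    UINT written = 0;

    if (f_mount(&fs, "", 1) != FR_OK)   // mount the flash volume
        return -1;
    if (f_open(&file, "config.bin", FA_WRITE | FA_CREATE_ALWAYS) != FR_OK)
        return -1;
    f_write(&file, config, len, &written);
    f_close(&file);
    return written == len ? 0 : -1;
}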

DFU bootstrap support
  • The white button on the Pixy can be used to enter a USB bootstrap mode called DFU (Device Firmware Upgrade). (Hold down the white button while powering up, and it will enter this mode.) DFU is a standard. The goal is to have a fail-safe way to upload firmware into the Pixy's RAM and flash, and to have this capability integrated into the PC GUI utility (PixyMon). There are open-source DFU utilities we can use for help. The task entails getting DFU working in a "hello world" sense, just getting a simple program uploaded onto the device and running, and then bringing the code into PixyMon. It will require some digging into the internals of the NXP processor, but it appears to be well documented.
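
For example, dfu-util (one of the open-source DFU utilities) can already list and program devices that are in DFU mode; something along these lines, with the NXP bootloader's VID:PID filled in:

dfu-util -l                                  # list devices currently in DFU mode
dfu-util -d <vid>:<pid> -D pixy_firmware.bin # download an image to the device
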
Connected components code
  • The color algorithm we want to implement has 2 steps: (1) process the frame, determine whether each pixel is a member of any of the N color models, and create a run-length encoded "frame" that describes model membership, and (2) take this RLE frame and assemble the run-lengths into connected-component "blobs". We have code that does this, and it needs to be brought in and tested. Since the first step of the algorithm probably won't be implemented yet, we'll need to synthesize some RLE frames for testing; a sketch of the second step follows.
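
A minimal sketch of step (2); the data structures here are placeholders, not the existing code:

#include <stdint.h>
#include <vector>

// Placeholder types for this sketch -- not the existing code's structures.
struct Run  { uint16_t row, startCol, endCol; uint8_t model; };
struct Blob { uint16_t minCol, maxCol, minRow, maxRow; uint8_t model; };

// Merge each run (assumed sorted by row) into an overlapping blob of the
// same model from the previous row, or start a new blob. Real code would
// also merge blobs that join further down and avoid dynamic allocation.
std::vector<Blob> assembleBlobs(const std::vector<Run> &runs)
{
    std::vector<Blob> blobs;
    for (size_t i = 0; i < runs.size(); i++)
    {
        const Run &r = runs[i];
        bool merged = false;
        for (size_t j = 0; j < blobs.size(); j++)
        {
            Blob &b = blobs[j];
            if (b.model == r.model && r.row == b.maxRow + 1 &&
                r.startCol <= b.maxCol && r.endCol >= b.minCol)
            {
                if (r.startCol < b.minCol) b.minCol = r.startCol;
                if (r.endCol > b.maxCol)   b.maxCol = r.endCol;
                b.maxRow = r.row;
                merged = true;
                break;
            }
        }
        if (!merged)
        {
            Blob b = { r.startCol, r.endCol, r.row, r.row, r.model };
            blobs.push_back(b);
        }
    }
    return blobs;
}
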
Color model generation
  • We will need to generate color models for the color algorithm. The current thinking is that each model will be a look-up table, with the index being a 15-bit color (5 bits red, 5 bits green, 5 bits blue) and the output being a single bit: 1=membership, 0=non-membership. Using 32K of memory, we can have 8 models. We need a way to generate these models. We have code that converts RGB pixels into a LUT: it extracts the mean hue, allows you to specify the saturation range, disregards luminance, and then creates a LUT in 15-bit RGB space. But this problem is somewhat open-ended. Selecting pixels from a camera-generated image is a robust and convenient way to convey "use these colors" to the algorithm, but we don't know the most effective method for converting pixels into LUTs. For example, the LUTs should be luminance-invariant, so lighting conditions have minimal effect, but should a LUT include multiple hues or a range of saturations, or should the LUTs be thought of as simple histograms that describe the model we're interested in? We might run tests using real image data in likely environments, varying lighting, etc., to help determine the best approach. A sketch of the LUT itself follows.
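
A minimal sketch of the LUT under the assumptions above (8 models, one membership bit per model, 15-bit RGB index):

#include <stdint.h>

// 32K-entry table: bit m of each entry is membership in color model m.
static uint8_t g_lut[1 << 15];

// Pack 8-bit R, G, B into the 15-bit index (5 bits per channel).
static inline uint16_t lutIndex(uint8_t r, uint8_t g, uint8_t b)
{
    return ((r >> 3) << 10) | ((g >> 3) << 5) | (b >> 3);
}

// During model generation: mark a sampled pixel as a member of model m.
void lutAdd(uint8_t r, uint8_t g, uint8_t b, uint8_t m)
{
    g_lut[lutIndex(r, g, b)] |= 1 << m;
}

// During frame processing: does this pixel belong to model m?
static inline bool lutMember(uint8_t r, uint8_t g, uint8_t b, uint8_t m)
{
    return (g_lut[lutIndex(r, g, b)] >> m) & 1;
}
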
Arduino support
  • Should we use I2C, serial or both to communicate with the Arduino? What's the best way to hook up to an Arduino (what's the cable design)? We have a C version of Chirp that can be ported over to Arduino, and when we get the color algorithm working, we'll want a good hello world example.
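
A hypothetical shape for the Arduino hello world, assuming I2C and an illustrative slave address; the real framing would go through the ported C Chirp:

#include <Wire.h>

#define PIXY_I2C_ADDR 0x54   // illustrative address, not finalized

void setup()
{
    Wire.begin();        // join the I2C bus as master
    Serial.begin(9600);
}

void loop()
{
    // Request a few bytes from the Pixy and echo them; a real example
    // would parse Chirp frames carrying blob data instead.
    Wire.requestFrom(PIXY_I2C_ADDR, 8);
    while (Wire.available())
        Serial.print(Wire.read(), HEX);
    Serial.println();
    delay(100);
}
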
Device classes (RGB LED, Servo)
  • There is an RGB LED and an RC-servo port on the Pixy. The NXP processor has peripherals that generate PWM in a flexible way. (I've generated RC-servo PWM for testing purposes.) Making this a little more interesting: the RGB LED has voltage sensing to determine the current going into each LED. This was done because the forward voltage can vary quite a bit between LEDs, especially across production batches. If we want accurate colors across all Pixys (no calibration), sensing the current will be required. This task entails creating a couple of classes with simple member functions for controlling the LED brightness and hue and controlling the RC-servo output, and doing the current normalization; a rough sketch follows.
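
A rough sketch of the class shapes; the PWM and current-sense hooks are placeholders for the real NXP peripheral driver:

#include <stdint.h>

// Placeholder HAL hooks, assumed to exist in the real firmware.
extern void pwmSetDuty(uint8_t channel, uint16_t duty);   // hypothetical
extern uint16_t senseLedCurrent(uint8_t channel);         // hypothetical

class RCServo
{
public:
    RCServo(uint8_t channel) : m_channel(channel) {}
    // pos: 0-1000, mapping to roughly a 1.0-2.0 ms pulse
    void setPos(uint16_t pos) { pwmSetDuty(m_channel, pos); }
private:
    uint8_t m_channel;
};

class RGBLed
{
public:
    RGBLed() { m_scale[0] = m_scale[1] = m_scale[2] = 256; }
    // measure each LED's current and derive a per-channel scale factor
    // so the same RGB value looks the same on every Pixy
    void calibrate()
    {
        for (uint8_t i = 0; i < 3; i++)
            m_scale[i] = 256;   // placeholder: derive from senseLedCurrent(i)
    }
    // r, g, b: 0-255 requested intensity per channel
    void setColor(uint8_t r, uint8_t g, uint8_t b)
    {
        pwmSetDuty(0, (r * m_scale[0]) >> 8);
        pwmSetDuty(1, (g * m_scale[1]) >> 8);
        pwmSetDuty(2, (b * m_scale[2]) >> 8);
    }
private:
    uint16_t m_scale[3];
};
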
GCC support
  • We may want to defer this until later, perhaps after the first release. Currently we use the ARM RealView compiler. RealView is great at generating fast code, but it isn't free. We're considering using GCC to generate loadable modules: the user would generate a binary object that can be loaded and linked dynamically. The object could sit on the flash disk and be loaded as needed. The code it contains would make calls to system facilities (USB, serial, camera, memory allocation, etc.) through a fixed, ROM-resident table of function pointers (sketched below). Probably the biggest advantage of this approach is that it's simple (for us and for the user). The objects are small, can be distributed easily, and there is a layer of abstraction. The biggest disadvantage is that it may constrain the user somewhat.
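
A sketch of the function-pointer table idea; all names, signatures and the table address are hypothetical:

#include <stdint.h>

// Sketch of the ROM-resident table. Everything here is illustrative.
struct SystemTable
{
    int   (*usbWrite)(const uint8_t *data, uint32_t len);
    int   (*serialWrite)(const uint8_t *data, uint32_t len);
    void *(*alloc)(uint32_t size);
    void  (*free)(void *ptr);
};

// the firmware places the table at an address known to every module
#define SYSTEM_TABLE ((const SystemTable *)0x10000000)   // address illustrative

// a loadable module reaches system facilities only through the table,
// so it needs no link-time symbols from the firmware
extern "C" int moduleMain()
{
    static const char msg[] = "hello from module\n";
    return SYSTEM_TABLE->serialWrite((const uint8_t *)msg, sizeof(msg) - 1);
}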

Future, other work

QR code detector/decoder
  • It would be pretty cool to get a QR code detector and decoder working. You can extract a normal vector from a QR code, so it could be used for robot navigation, or by anyone else who wants to decode QR codes. The sensor has decent resolution (1280x800), so that helps. Barcodes (I understand) are a simpler subset of QR codes.

Face detector/tracker

Freespace calculation
  • Fast obstacle detection/avoidance: face the camera down at the floor at an angle and determine the floor color/texture, then use this information to calculate the free space in front of the camera.
Proximity sensor
  • Strobe the RGB LED at full brightness at the 50 Hz frame rate to determine what the scene looks like with and without the LED turned on. The difference in reflected light intensity is related to the distance; a minimal sketch follows.
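
A minimal sketch of the differencing step (names are illustrative):

#include <stdint.h>

// Sum the extra light the LED adds to the scene: subtract the LED-off
// frame from the LED-on frame, pixel by pixel. A bigger sum suggests a
// closer (or more reflective) surface.
uint32_t reflectedLight(const uint8_t *frameOn, const uint8_t *frameOff, uint32_t len)
{
    uint32_t sum = 0;
    for (uint32_t i = 0; i < len; i++)
    {
        int16_t d = (int16_t)frameOn[i] - (int16_t)frameOff[i];
        if (d > 0)
            sum += (uint32_t)d;   // count only light added by the LED
    }
    return sum;   // roughly inversely related to distance
}
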
Text to speech
  • Future versions of Pixy could have a speaker without adding much cost to the BOM. The NXP processor has enough computing power to play MP3s and do text to speech.
Simple SIFT
  • If we could extract SIFT features (may not be feasible -- too little RAM?) we could do interesting things: object recognition, navigation, etc.

???