Need advice to improve GLCapture code (and large buffer)

Hi community,
I’m quite used to Java, but I’m just starting with Processing.
My aim is to make a real-time slit-scan effect with the Pi camera (V2) on an RPi3.
The following code works, and I’m already impressed by Processing’s ability to sustain a fairly constant frame rate.
Ideally I would like to achieve the high frame rates of the camera module V2, around 120/180 fps. So my question is in two parts.
First, is there a way to change other settings of the camera, such as shutter speed or ISO?
And secondly:
This code runs at about 20 fps. It relies on the PImage.copy() method to fill a large rotating image buffer and to rebuild the time-scanned output image.
Is there a better, more efficient strategy for the buffer and the reconstruction?

Thanks a lot, and if Processing team reads this: kudos for this nice job!

cap1718

import gohai.glvideo.*;
GLCapture video;

PImage img[];
int imgIndex = 0;
PImage imgDest;

void setup() {
  size(256, 256, P2D); // Important to note the renderer

  // Get the list of cameras connected to the Pi
  String[] devices = GLCapture.list();
  println("Devices:");
  printArray(devices);

  // Get the resolutions and framerates supported by the first camera
  if (0 < devices.length) {
    String[] configs = GLCapture.configs(devices[0]);
    println("Configs:");
    printArray(configs);
  }

  // this will use the first recognized camera by default
  //video = new GLCapture(this);

  // you could be more specific also, e.g.
  //video = new GLCapture(this, devices[0]);
  video = new GLCapture(this, devices[0], 256, 256, 90);
  //video = new GLCapture(this, devices[0], configs[0]);

  img = new PImage[256];
  imgDest = createImage(256, 256, RGB);

  for (int j = 0; j < img.length; j++) {
    img[j] = createImage(256, 256, RGB);
  }

  video.start();
}

void draw() {
  background(0);
  // If the camera is sending new data, capture that data
  if (video.available()) {
    video.read();
    img[imgIndex%256].copy(video, 0,0,256,256,0,0,256,256);
    for(int i=0; i<256; i++){
      imgDest.copy(img[(imgIndex + i)%256], i, 0, 1, 256, i, 0, 1, 256);
    }
    imgIndex++;
  }
  image(imgDest, 0, 0, 256, 256);
}
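For readers following the buffer arithmetic in draw(): the effect of the (imgIndex + i) % 256 lookup can be checked in plain Java with a miniature ring buffer. This is only a sketch of the indexing, not the sketch itself; SIZE is shrunk to 4 and each "frame" is replaced by its capture time.

public class RingBufferDemo {
  public static void main(String[] args) {
    final int SIZE = 4;          // stands in for the 256-frame buffer
    int[] ring = new int[SIZE];  // each "frame" is just the draw() count when it was stored
    int imgIndex = 0;

    for (int t = 0; t < 10; t++) {           // simulate 10 draw() calls
      ring[imgIndex % SIZE] = t;             // store the newest frame, like img[imgIndex%256].copy(...)
      for (int i = 0; i < SIZE; i++) {
        int stamp = ring[(imgIndex + i) % SIZE];
        // once the buffer is full, column i shows the frame captured (SIZE - i) % SIZE draws ago
        if (t >= SIZE - 1) {
          int expected = t - (SIZE - i) % SIZE;
          if (stamp != expected) throw new IllegalStateException("column " + i);
        }
      }
      imgIndex++;
    }
    System.out.println("ring-buffer indexing verified");
  }
}

So the delay decreases from left to right across the output image, wrapping once per buffer length.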


I’m probably not the best person to answer this question because I haven’t done anything on the Raspberry Pi, but I’ll give it a shot.

Generally you have to go lower level if you want a lot of control over hardware. It looks like you can specify the fps in the library you’re using, but I don’t see shutter speed and ISO exposed as controls. If you’d like that level of control you probably need to be working in C++, and even then it might not be possible.

As for the second question: if I understand slit scanning correctly, you only need one line of pixels. I would copy just that one column and store it in an array. So instead of an array of PImages, try an ArrayList of color arrays. Something like the code below (disclaimer: I haven’t tested this, so it might have some errors, but I hope it gets across the basic idea).

ArrayList<int[]> imgColumns; // 'color' is just an alias for int in Processing

// in setup()
imgColumns = new ArrayList<int[]>();
for (int i = 0; i < width; i++) {
  imgColumns.add(new int[height]); // int arrays start out all zero, i.e. black
}

// in draw()
if (video.available()) {
  video.read();
  video.loadPixels(); // can treat video like a PImage because of inheritance
  int[] column = new int[height];
  for (int i = 0; i < column.length; i++) {
    column[i] = video.pixels[width/2 + i * width]; // copies the middle column of pixels from video
  }
  imgColumns.add(column); // add to the end so it will be displayed as the last column
  imgColumns.remove(0);   // remove the first array, shifting all columns one to the left
  imgDest.loadPixels();
  for (int x = 0; x < imgColumns.size(); x++) {
    for (int y = 0; y < height; y++) {
      imgDest.pixels[x + y * width] = imgColumns.get(x)[y];
    }
  }
  imgDest.updatePixels(); // don't forget this
  image(imgDest, 0, 0);   // display imgDest
}
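The add/remove bookkeeping of that sliding window can be sanity-checked in plain Java with tiny integer "columns" standing in for pixel columns (the width of 4 and the time stamps are made up for the demo):

import java.util.ArrayList;

public class ColumnWindowDemo {
  public static void main(String[] args) {
    int width = 4; // stand-in for the sketch's width
    ArrayList<int[]> imgColumns = new ArrayList<int[]>();
    for (int i = 0; i < width; i++) {
      imgColumns.add(new int[] {0}); // start all-black, as in setup()
    }
    // simulate 6 captured columns, each tagged with its capture time
    for (int t = 1; t <= 6; t++) {
      imgColumns.add(new int[] {t}); // newest column goes on the right
      imgColumns.remove(0);          // oldest column drops off the left
    }
    // the window stays 'width' columns wide, oldest on the left, newest on the right
    if (imgColumns.size() != width) throw new IllegalStateException("size");
    if (imgColumns.get(0)[0] != 3) throw new IllegalStateException("oldest");
    if (imgColumns.get(width - 1)[0] != 6) throw new IllegalStateException("newest");
    System.out.println("sliding window verified");
  }
}

The list always holds exactly one screen's worth of columns, so the display scrolls left by one column per captured frame.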

Thanks a lot for your fast reply, figraham!

Actually the behavior of the code with PImages is exactly what I’m looking for. You’re right, the slit-scan process literally generates one image by scanning. However, I’d like to keep the entire frame animated. I found someone who uses this kind of time delay between each line - or column. He calls this ‘space-time’.


We can see the process as follows: for a line ‘n’ the delay is ‘n’ frames. So the first image line has no delay, and the last one has 500 frames of delay - for an image height of 500 pixels.
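That per-line delay rule can be written down directly: with a ring buffer holding the last H frames, output row n at time t shows the frame captured at t - n. A minimal check in plain Java, with frame contents replaced by their capture time and H shrunk for the demo:

public class SpaceTimeDemo {
  public static void main(String[] args) {
    final int H = 5;           // image height = number of delayed lines
    int[] frames = new int[H]; // ring buffer of the last H "frames" (capture times)
    for (int t = 0; t < 20; t++) {
      frames[t % H] = t;            // store the newest frame
      if (t < H - 1) continue;      // wait until the buffer is full
      for (int n = 0; n < H; n++) {
        int shown = frames[(t - n) % H]; // row n reads the frame from n draws ago
        if (shown != t - n) throw new IllegalStateException("row " + n);
      }
    }
    System.out.println("row n lags by n frames, as described");
  }
}

Row 0 shows the live frame and the bottom row the oldest, matching the ‘first line has no delay’ description.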

You’re right as well, C code seems to be the way to go. I asked the question in case there is a way in Processing to keep all this work on the GPU. Maybe drawing ‘n’ sprites, all shifted by one pixel.

Anyway I’ll continue to test different things, and thanks a lot for the fixed-slit scan code!


That’s an interesting way to do slit scanning.

I don’t know of a way to utilize the GPU in Processing, but you may want to look into openFrameworks. I haven’t done very much with it, but from what I’ve heard it has some good libraries for hardware. It might be a better fit for your project.

You could try capturing video at a high frame rate and then post-processing it to create your slit-scan animation. That would take some of the time constraints out of the equation.

Best of luck with your project, it sounds very interesting. I would be interested to see what results you come up with.

You just made me discover openFrameworks! Thanks a lot for your advice, I’ll post something if I can make it work nicely.


Hello,
A little follow-up on this effect. Following the advice of 6by9 on the Raspberry Pi forum, I managed to get a compiled C program (derived from raspividyuv) that fits my needs. The camera acquisition is done at 90 fps, and the rendering runs at around 30 fps for 15% CPU usage. (However, I’m keeping my Java code for the project (a photobooth), which launches the camera process.) Here is a video example of the effect in real time:

Thanks again and have a nice day!