createGraphics() to capture camera frames

hello again everyone!

I’m having some trouble figuring out how to capture video in my sketch. As far as I can tell from fiddling with saveFrame() and the Capture functions, they all rely on the image being displayed in the sketch window in order to save anything. In my specific case, I already have something running in draw() and I want to capture the video feed as silently as I can.

then I stumbled upon the createGraphics() function in the saveFrame() reference, which points towards a solution that buffers the imagery but doesn’t necessarily need to display it. my question is whether it’s possible to use such a method to capture my webcam images and, if it is, whether there’s any example I could start looking at.

thanks again, I’ll be around in case I sounded too confusing


Processing.org/reference/PImage_save_.html

Do you mean that you have a webcam stream that you want to save as a series of image frames, as in your title? Or do you want to save e.g. an mpeg4 or other video file from your camera, with or without sound?


hey thanks for your answer Jeremy!

I actually want to end up with a complete video file, by capturing frames and then encoding them together. I think the fastest way would be saveFrame(), but by ‘silent’ I meant something that somehow goes unnoticed by the user. the only methods I knew of use the image displayed in the draw window, and since I’m already using that for something else, I’m searching for alternatives!

thanks again

thanks once again, Loop! do you have any examples of this function being used in video sketches? I could only find examples with still images

B/c a Capture object is constantly changing its pixels[] content, perhaps you should createImage() of the same width & height:
Processing.org/reference/createImage_.html

And then use arrayCopy() to transfer Capture::pixels[] to PImage::pixels[] after Capture::read():
Processing.org/reference/libraries/video/Capture_read_.html
Processing.org/reference/PImage_pixels.html
Processing.org/reference/arrayCopy_.html

Call loadPixels() on the Capture object before arrayCopy(), and updatePixels() on the PImage object after:
Processing.org/reference/PImage_loadPixels_.html
Processing.org/reference/PImage_updatePixels_.html

PImage::save() should be safer now:
Processing.org/reference/PImage_save_.html
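
A minimal untested sketch along those lines, assuming the default camera at 640x480 and saving 1 PNG per grabbed frame (adapt names & paths to your own code):

import processing.video.*;

Capture cam;
PImage snapshot;

void setup() {
  size(640, 480);

  cam = new Capture(this, 640, 480);
  cam.start();

  // off-screen PImage w/ the same dimensions as the Capture:
  snapshot = createImage(640, 480, RGB);
}

void draw() {
  background(0); // whatever the sketch is already drawing

  if (cam.available()) {
    cam.read();

    // copy the freshly grabbed pixels[] over to the PImage:
    cam.loadPixels();
    snapshot.loadPixels();
    arrayCopy(cam.pixels, snapshot.pixels);
    snapshot.updatePixels();

    // save the buffered frame w/o ever displaying the camera:
    snapshot.save("frames/cam-" + nf(frameCount, 5) + ".png");
  }
}

Saving a PNG on every new frame is slow, though; a real sketch would probably save less often, or keep the frames in memory and write them out at the end.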


Does the video need audio?

Does the video need to be available immediately when the sketch runs, or can you save a framebuffer to disk and then compile the frames into a video later, like with ffmpeg? (e.g. https://funprogramming.org/114-How-to-create-movies-using-Processing.html or https://stackoverflow.com/questions/12566777/how-can-i-export-a-processing-sketch-as-a-video)

Is there a reason that the webcam capture needs to be part of the sketch at all? Does the sketch do anything with the frame data, or does it just start and stop recording? The sketch could call out to a recording application on stop / start using exec or launch
https://processing.org/reference/launch_.html
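
For example, a rough untested sketch of that last idea, assuming ffmpeg is installed and on the PATH, and that numbered frames were already saved to a frames/ folder during the run (the file names, frame rate, and codec flags are placeholders to adapt):

// assumes frames were saved beforehand as frames/cam-00001.png, cam-00002.png, ...
void encodeVideo() {
  String framesPattern = sketchPath("frames/cam-%05d.png");
  String outFile = sketchPath("capture.mp4");

  // hand the frame sequence to ffmpeg for encoding
  exec("ffmpeg", "-y",
       "-framerate", "30",
       "-i", framesPattern,
       "-c:v", "libx264", "-pix_fmt", "yuv420p",
       outFile);
}

(exec runs the command directly; launch goes through the platform launcher, which is better suited to opening files, URLs, or GUI applications.)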

If you just want a pushbutton video recording that is embedded in your sketch, perhaps a 3rd party java library like jxcapture? (untested)
https://sites.google.com/a/teamdev.com/jxcapture-support/samples/video/windows/capture-video-from-web-camera


You might also be interested (depending on your requirements) in using VLCJ to capture a camera (with the external VLC video lan client).


Or even the GStreamer bindings, which have the benefit of being included in the Video library. I’m hoping the Video library GSoC will allow the full underlying capabilities to be accessed more easily - I am certainly hinting at it! :wink:

@neilcsmith – good point. who is the GSoC participant – could we ping this thread to their attention so they could think about it as a use case?


@jeremydouglass I’ve emailed them a link to this. What I’ve suggested is that the capture aspect is updated to use pipeline description strings, which can be overridden - this is the approach used in PraxisLIVE. That way it’s incredibly easy to use a pipeline that saves capture input to disk alongside display.
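
Just to give a flavour of what a pipeline description string looks like (generic gst-launch style, purely illustrative - not something the Video library exposes today), capture can be tee’d to the screen and to an encoder at the same time:

// illustrative only: display the camera and write it to an MP4 simultaneously
String pipeline =
  "autovideosrc ! tee name=t " +
  "t. ! queue ! videoconvert ! autovideosink " +
  "t. ! queue ! videoconvert ! x264enc ! h264parse ! mp4mux ! filesink location=capture.mp4";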

Even better would be to rebuild the library on top of generic support for PImage input and output - that way it could also support video export of sketches. That all this functionality is included but not accessible is a shame!


dayum, I summoned a lot of savvy people in here!

@GoToLoop this seems like a very functional solution, but it might clutter my code, and the program is already showing signs of how heavy its processing is. nevertheless, I’m saving all those references for later use, thanks!

@jeremydouglass wow, that pretty much covers all the bases! the video doesn’t need audio. I thought about using some kind of Processing -> FFmpeg routine, so I’m going to take a look at that video. the camera could also be triggered by the sketch instead of running in the very same Processing code, that’s some very nice outside-the-box thinking! thanks a bunch!

@neilcsmith I think I’ve already reached out to you in the PraxisLIVE Gitter support chat! hahah I’m developing this project so that its final stage screens the aforementioned video, so if you have any suggestions on how to generate something easily accessible and detectable by Praxis, that would be really great! I tried to study GStreamer a bit, but I think I lack the knowledge to delve deeper into it. thank you as well!


after researching what you guys posted, I feel like I’m still hitting a brick wall…

I’ll try to summarize what I’ve been doing so far, so I can point at something specific that I’m trying to achieve instead of just asking and bothering you people.

WALL OF TEXT AHEAD

having little background in coding whatsoever (I dropped out at the end of my 1st year of an electrical engineering degree), I tried to keep studying, but with no place and time to develop those skills I eventually kind of put aside my desire to bring programming and electronics into my artistic projects. I enrolled in a media studies and cinema degree at another university and tried to bring those desires and abilities into any project I could, with some victories!

then I decided that my final project would relate to that subject in any way it could. with none of my teachers having a programming background, I could only rely on forums and little self-breakthroughs to advance through the technicalities. and for that I really, really thank you guys

the whole thing is very simple: I’m playing with how much data an average user inputs into something without perceiving its consequences. so I created a kind of questionnaire in this Processing sketch, registering the user’s inputs into a txt file. at the end of the sketch, I generate a treemap-based image from what was registered. at this point it consists only of the answers provided, but I’m thinking about what else I could add to the mix. this image would then be sent out to a projection screen, preferably using PraxisLIVE in some easy setup like the ‘hello’ example. after first confronting users with their answers, the projection would later reveal the little caveat of it all: they have been filmed the whole time, and once they come within a certain threshold distance of the projection itself, this would be revealed.

as of now, my Processing sketch works up to the treemap image generation, and is missing the video capture and saving/encoding part. regarding PraxisLIVE, I still need to research how I can feed those freshly generated video and image files into the project, and whether the software can somehow notice that new data and pull it into the project for each and every interaction.

TL;DR: rookie programmer seeks help not just with solving a single sketch, but with delivering a whole project based on the great software you guys developed and/or are maintaining, through question-solving

here’s what I have so far, as far as the Processing bit goes. the video capture gives black frames, and I’m failing to figure out where I could properly integrate the whole camera-signal capture so it happens during each and every user interaction:

import controlP5.*;
import processing.video.*;

//treemap inclusions

import treemap.*;
import processing.pdf.*;
import java.util.Calendar;

Treemap map;
MapLayout layoutAlgorithm = new SquarifiedLayout();
//

ControlP5 cp5;
Capture cam;

Textfield infosInput; // naming textfield for better calling throughout the code

//Treemap declarations

boolean savePDF = false;

int maxFontSize = 1000;
int minFontSize = 1;

PFont font;
//


int state = 0;
int k = 0;
int MAXSTATE = 6; // numbers of questions to implement switch/case

String infos;
String infosOutput[];
char infosOutput2[];

void setup() {

  size (500, 500);

  //treemap setup

  font = createFont("Mad Hacker.ttf", 10);
  smooth();



  //

  String[] cameras = Capture.list();

  cp5 = new ControlP5(this);

  cp5.addTextarea("titulo")
    .setPosition(100, 100)
    .setSize(400, 40)
    .setFont(createFont("arial", 30))
    .setLineHeight(14)
    .setColor(color(128))
    .setColorBackground(color(185, 100))
    .setColorForeground(color(255, 100))
    .setText("enTro");

  cp5.addTextarea("clique")
    .setPosition(280, 295)
    .setSize(200, 25)
    .setFont(createFont("arial", 12))
    .setLineHeight(14)
    .setColor(color(135))
    .setColorBackground(color(198, 100))
    .setColorForeground(color(255, 100))
    .setText("ENTER para continuar...")
    ;

  infosInput = cp5.addTextfield("")          //declared earlier, I name the textfield to aid later calling
    .setPosition(120, 100)
    .setSize(390, 30)
    .setFont(createFont("arial", 30))
    .setColor(color(128))
    .setColorBackground(color(185, 100))
    .setColorForeground(color(255, 100))
    ;

  cp5.addTextarea("nome")
    .setPosition(20, 350)
    .setSize(320, 40)
    .setFont(createFont("arial", 26))
    .setLineHeight(14)
    .setColor(color(120))
    .setColorBackground(color(195, 100))
    .setColorForeground(color(249, 100))
    .setText("digite seu nome");

  //camera settings

  if (cameras.length == 0) {
    println("There are no cameras available for capture.");
    exit();
  } else {
    println("Available cameras:");
    for (int i = 0; i < cameras.length; i++) {
      println(cameras[i]);
    }

    // The camera can be initialized directly using an
    // element from the array returned by list().
    // NB: the hardcoded index assumes the list has at least 11 entries;
    // pick whichever index matches the camera you actually want.
    cam = new Capture(this, cameras[10]);
  }
}

void draw() {

  background(0);

  // update the camera feed; cam stays null if no camera was found in setup()
  if (cam != null && cam.available()) {
    cam.read();
  }

  switch (state) {
  case 0:  //splash screen
    cp5.get(Textarea.class, "titulo").setVisible(true);
    cp5.get(Textarea.class, "clique").setVisible(false);
    cp5.get(Textarea.class, "nome").setVisible(false);
    cp5.get(Textfield.class, "").setVisible(false);
    break;

  case 1:                                                  //enter to continue
    cp5.get(Textarea.class, "titulo").setVisible(true);
    cp5.get(Textarea.class, "clique").setVisible(true);
    break;

  case 2:                                                  //user inputs their name
    cam.start();
    cp5.get(Textarea.class, "titulo").setVisible(false);
    cp5.get(Textarea.class, "clique").setVisible(false);   //hides unused textfields
    cp5.get(Textfield.class, "").setVisible(true);
    cp5.get(Textfield.class, "").setPosition(120, 100);
    cp5.get(Textarea.class, "nome").setVisible(true);
    cp5.get(Textarea.class, "nome").setText("digite seu nome");
    cp5.get(Textarea.class, "nome").setPosition(20, 350);
    saveFrame();
    break;

  case 3:                                                  //user inputs their age
    cp5.get(Textfield.class, "").setPosition(180, 80);
    cp5.get(Textarea.class, "nome").setText("digite sua idade"); //changes the question of the box, in order to avoid creating multiple fields
    cp5.get(Textarea.class, "nome").setPosition(30, 280);        //slightly changes the position for aesthetics
    saveFrame();
    break;  

  case 4:                                                  //user inputs their nationality
    cp5.get(Textfield.class, "").setPosition(100, 90);
    cp5.get(Textarea.class, "nome").setText("digite sua nacionalidade"); 
    cp5.get(Textarea.class, "nome").setPosition(45, 200);
    saveFrame();
    break;

  case 5:                                                  //user inputs their belief
    cp5.get(Textfield.class, "").setPosition(180, 140);
    cp5.get(Textarea.class, "nome").setText("descreva sua religião"); 
    cp5.get(Textarea.class, "nome").setPosition(250, 120);
    saveFrame();
    break;

  case 6:  //user inputs their skin tone
    cam.stop();
    cp5.get(Textfield.class, "").setPosition(230, 80);
    cp5.get(Textarea.class, "nome").setText("qual sua descrição fenotípica");
    cp5.get(Textarea.class, "nome").setSize(420, 40); 
    cp5.get(Textarea.class, "nome").setPosition(95, 30);
    break;
  }
}

void keyPressed() {



  if ( key == ENTER || key == RETURN)
  {
    /*
     
     this step is reserved for saving the string into a .txt, but it raises the question: is it better to do
     after each single keypress or at the very end of the program, when a whole string is formed?
     
     that written, need to research a method for:
     
     - saving textfield into a .txt;
     - each answer from the user is stored in the same .txt;
     - preferably, in this step I already randomly multiply the answers throughout the strings, in order to generate a treemap afterwards:
     http://www.generative-gestaltung.de/2/sketches/?01_P/P_3_1_4_01;
     
     */
    if (state > 1)
    {  
      // even in state 3 we execute the following lines and save afterwards (so please no else if)
      infos = infos + "#" + infosInput.getText();
      infosInput.setText("");
      println("boink: "+infos);
    }  
    if (state == 6)
    {

      // saving 
      infos=infos.trim(); 
      if (infos.charAt(0) == '#') 
        infos=infos.substring(1); // delete leading # 

      // println("new:"+infos); 
      String[] infosOutput1 = split(infos, '#');
      String[] infosOutput2 = split(infos, '#');  //saving index

      //expanding string with the information
      infosOutput1 = expand(infosOutput1, 1000);

      for (int j = 4; j < 999; j++)
      { 
        infosOutput1[j+1] = infosOutput1[k];
        k++;

        if (k > 4) //pay attention: k must wrap around at the number of states/questions!!!!!
        {
          k = 0;
        }
      }

      //randomizing its contents
      shuffle(infosOutput1);

      // reference says: saveStrings(filename, data)
      // build the filename once, so saving here and loading below use the same timestamp
      String outName = infosOutput2[0] + "_" + timestamp() + "_.txt";
      saveStrings(outName, infosOutput1);
      // println(infos);

      //creating tree map   


      WordMap mapData = new WordMap();

      String[] lines = loadStrings(outName);
      // join all lines to a big string
      String joinedText = join(lines, " ");

      // replacements
      joinedText = joinedText.replaceAll("_", "");

      // split text into words by delimiters
      String[] words = splitTokens(joinedText, " »«–_-–().,;:?!\u2014\"");

      // add all words to the treemap
      for (int i = 0; i < words.length; i++) {
        // translate all to LOWERCASE
        String word = words[i].toLowerCase();
        mapData.addWord(word);
      }

      //  ------ treemap data is ready ------
      mapData.finishAdd();

      // create treemap with mapData
      map = new Treemap(mapData, 0, 0, width, height);



      if (savePDF) beginRecord(PDF, timestamp()+".pdf");

      background(66, 244, 69);
      map.setLayout(layoutAlgorithm);
      map.updateLayout();
      map.draw();

      saveFrame(infosOutput2[0] + "_" + timestamp() + "_##.png");

      if (savePDF) {
        savePDF = false;
        endRecord();
      }

      //end of treemap creation
    } 
    if (state==0) {
      // reset
      infos="";
    }


    state = (state+1) % (MAXSTATE+1);

    println(state);
  }
}

final String[] shuffle(final String... arr) {
  if (arr == null)  return null;

  int idx = arr.length;

  while (idx > 1) { 
    final int rnd = (int) random(idx--);
    final String tmp = arr[idx];
    arr[idx] = arr[rnd];
    arr[rnd] = tmp;
  }

  return arr;
}


// timestamp
String timestamp() {
  Calendar now = Calendar.getInstance();
  return String.format("%1$ty%1$tm%1$td_%1$tH%1$tM%1$tS", now);
}

if this whole yadda-yadda should be posted somewhere else, please tell me; I’d love not to clutter this question thread, but I’m running out of time and kinda desperate to finish it asap

most shiny and deep thank yous