Processing can't find the camera - Video Libraries don't work

It works! thanks! Now it can be opened every time!

Thanks @neilcsmith worked like a charm

@neilcsmith Thanks for the Capture solution it works like everyone else says, but I also get the same error for a basic video example “frames”

I don’t understand how I can apply the pipeline fix to the normal video src

I keep getting the error: "WARNING: no real random source present!
Processing video library using GStreamer 1.16.2 "

mov = new Movie(this, "launch2.mp4");

/**
 * Frames
 * by Andres Colubri.
 * Moves through the video one frame at a time using the
 * arrow keys. It estimates the frame count using the framerate
 * of the movie file, so it might not be exact in some cases.
 */

import processing.video.*;

Movie mov;
int newFrame = 0;

void setup() {
  size(560, 406);
  background(0);
  // Load and set the video to play. Setting the video
  // in play mode is needed so at least one frame is read
  // and we can get duration, size and other information from
  // the video stream.
  mov = new Movie(this, "launch2.mp4");

  // Pausing the video at the first frame.
  mov.play();
  mov.jump(0);
  mov.pause();
}

void movieEvent(Movie m) {
  m.read();
}

void draw() {
  background(0);
  image(mov, 0, 0, width, height);
  fill(0);
  text(getFrame() + " / " + (getLength() - 1), 10, 30);
}

void keyPressed() {
  if (key == CODED) {
    if (keyCode == LEFT) {
      if (0 < newFrame) newFrame--;
    } else if (keyCode == RIGHT) {
      if (newFrame < getLength() - 1) newFrame++;
    }
  }
  setFrame(newFrame);
}

int getFrame() {
  // Use the movie's frame rate so this stays consistent
  // with setFrame() and getLength().
  return ceil(mov.time() * mov.frameRate) - 1;
}

void setFrame(int n) {
  mov.play();

  // The duration of a single frame:
  float frameDuration = 1.0 / mov.frameRate;

  // We move to the middle of the frame by adding 0.5:
  float where = (n + 0.5) * frameDuration;

  // Taking into account border effects:
  float diff = mov.duration() - where;
  if (diff < 0) {
    where += diff - 0.25 * frameDuration;
  }

  mov.jump(where);
  mov.pause();
}

int getLength() {
  return int(mov.duration() * mov.frameRate);
}

Good morning everyone. I’ve carefully read your replies and tried them on one of Processing’s example sketches. (I am using version 4.0b3.)

My webcam is connected through EOS Webcam Utility.

When using this code

/**
 * Frame Differencing 
 * by Golan Levin. 
 *
 * Quantify the amount of movement in the video frame using frame-differencing.
 */ 


import processing.video.*;

int numPixels;
int[] previousFrame;
Capture video;

void setup() {
  size(640, 480);
  printArray(Capture.list());
  // This is the default video input; see the GettingStartedCapture
  // example if it creates an error
  //video = new Capture(this, Capture.list()[0]);
  
  video =  new Capture(this, "pipeline:autovideosrc");
  
  // Start capturing the images from the camera
  video.start(); 
  
  numPixels = video.width * video.height;
  // Create an array to store the previously captured frame
  previousFrame = new int[numPixels];
  loadPixels();
}

void draw() {
  if (video.available()) {
    // When using video to manipulate the screen, use video.available() and
    // video.read() inside the draw() method so that it's safe to draw to the screen
    video.read(); // Read the new frame from the camera
    video.loadPixels(); // Make its pixels[] array available
    
    int movementSum = 0; // Amount of movement in the frame
    for (int i = 0; i < numPixels; i++) { // For each pixel in the video frame...
      color currColor = video.pixels[i];
      color prevColor = previousFrame[i];
      // Extract the red, green, and blue components from current pixel
      int currR = (currColor >> 16) & 0xFF; // Like red(), but faster
      int currG = (currColor >> 8) & 0xFF;
      int currB = currColor & 0xFF;
      // Extract red, green, and blue components from previous pixel
      int prevR = (prevColor >> 16) & 0xFF;
      int prevG = (prevColor >> 8) & 0xFF;
      int prevB = prevColor & 0xFF;
      // Compute the difference of the red, green, and blue values
      int diffR = abs(currR - prevR);
      int diffG = abs(currG - prevG);
      int diffB = abs(currB - prevB);
      // Add these differences to the running tally
      movementSum += diffR + diffG + diffB;
      // Render the difference image to the screen
      pixels[i] = color(diffR, diffG, diffB);
      // The following line is much faster, but more confusing to read
      //pixels[i] = 0xff000000 | (diffR << 16) | (diffG << 8) | diffB;
      // Save the current color into the 'previous' buffer
      previousFrame[i] = currColor;
    }
    // To prevent flicker from frames that are all black (no movement),
    // only update the screen if the image has changed.
    if (movementSum > 0) {
      updatePixels();
      println(movementSum); // Print the total amount of movement to the console
    }
  }
}

I am getting the following console output:

Processing video library using GStreamer 1.16.2
BaseSrc: [autovideosrc0-actual-src-ksvide] : No supported formats found
BaseSrc: [autovideosrc0-actual-src-ksvide] : not negotiated
BaseSrc: [autovideosrc0-actual-src-ksvide] : Internal data stream error.
WARNING: no real random source present!

Had the same problem for months, and adding just this solved it.

Oops, this link is no longer active.

I appreciate this

see Reference / Processing.org

Quote

Use Capture.list() to show the names of any attached devices.
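For what it’s worth, a minimal sketch along those lines (assuming Capture.list() returns at least one device; devices[0] is just the first entry, substitute your camera’s name if you know it):

```processing
import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  // Print the names of any attached devices, then pass one to the constructor.
  String[] devices = Capture.list();
  printArray(devices);
  // Picking the first listed device as a placeholder.
  cam = new Capture(this, devices[0]);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
  }
  image(cam, 0, 0, width, height);
}
```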

The "pipeline:autovideosrc" solution works for me. However, when I try to export the sketch (Windows x64), the compiled version does not work even if I include Java. Any idea?

@antoncivit read this - Exporting Sketch With Movie Is Blank - #5 by neilcsmith

I have the same but different problem with regard to webcams. I have an integrated webcam and a USB camera. I downloaded Processing’s Capture sketch and it only recognizes the integrated webcam. I have tried 3 different cameras, but got the same results. I used the above-mentioned "pipeline:autovideosrc" piece of code, but saw no changes. I tried entering the name of my USB camera in cam = new Capture(this, "LifeCam HD-3000"); but now it says camera not found. However, this USB camera shows up on my computer, along with the integrated camera, in the computer’s camera app. Has anyone found a solution to this? I am using Processing version 4.
Thank You

I would try disabling the integrated webcam in Device Manager then see if autovideosrc finds the USB camera.

You could also try "pipeline: ksvideosrc device-index=1" (or device-index=0 if the integrated webcam is disabled) instead of "pipeline: autovideosrc"
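For reference, a minimal sketch using the device-index approach might look like this (device-index=1 is a guess for your setup; ksvideosrc is Windows-only, so adjust the index until the right camera appears):

```processing
import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  // ksvideosrc selects a camera by index rather than by name.
  cam = new Capture(this, "pipeline:ksvideosrc device-index=1");
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
  }
  image(cam, 0, 0, width, height);
}
```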

Hi
The "pipeline:ksvideosrc device-index = 1; was the fix.
Thank You for solving this problem for me. A job well done.

Thanks, this solved all my troubles. I used to have to restart the sketch about five times to get it running; now it works every time.

I have some students trying to get the video capture to work, and when they set their Capture object like this:

cam = new Capture(this, "pipeline:autovideosrc");

they get this error in the console, and it does not run:

GstException: no element "autovideosrc"
GstException: no element "autovideosrc"
GstException: no element "autovideosrc"

I also tried the ‘ksvideosrc’ option, and that said that ‘ksvideosrc’ does not exist. Otherwise, the library loads, and it says it is using GStreamer 1.16.2

They are running Processing 4.b7 and b8

That’s interesting. It sounds like you might have a problem with your video library installation.

First check to make sure the camera appears and is enabled in the device manager and works with other apps, but it sounds like that’s not your problem.

Try running this sketch:

import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  cam = new Capture(this, "pipeline:videotestsrc pattern=snow");
  cam.start();
}

void draw() {
  if (cam.available() == true) {
    cam.read();
  }
  image(cam, 0, 0, width, height);
}

You should see television static.

Reinstall the video library and delete any other instances of the video library or similar libraries that might be interfering. Better yet, delete all of your libraries, reinstall Processing, point your sketchbook to a clean directory, install the video library, and see if anything changes.

Does it only happen on school computers? It seems like it could easily be an issue with security software or IT administration but that’s beyond my expertise. Try running Processing as an administrator and see if that solves anything. You could also check your environment variables in case it’s trying to run a different instance of gstreamer (I’m not sure if this would actually cause a problem, but worth a shot imo).

Thanks for the suggestions, lfredericks. The students each have their own laptops, so it’s all varieties of OS types / versions / hardware… 60 different computers to support… 0_O

One question: I’m not as familiar with Windows, how can you tell if you are running Processing as an administrator?

hey I had the same problem as you. I’m on Windows btw.
In my case the default Processing sketchfolder path had something interfering with recognizing the libraries path.
So I made a new Processing sketchfolder in a different hard drive and set that folder as the default sketchfolder in the Processing preference tab. Installed the video library there and it worked.
I hope this works for you.

Since it’s affecting all of your students on different OSes, it’s probably either a problem with the way Processing was installed and set up en masse, or a problem with your IT department having tightly restricted user permissions across the whole system, in which case you may have to get them involved.

I think @bjkim0215 is on the right track. Try pointing your sketchbook to a directory where you have high user privileges and reinstall the video library.

In windows just right click on processing.exe and select ‘run as administrator,’ but it won’t work unless you have admin credentials. If you do, reinstall the video library while running Processing as an admin. If that works, exit the program and run it as a normal user to see if it was a one-time fix.

If it still won’t work and IT is no help, you could try to run it from a thumb drive but that may or may not be an ordeal.

Hi Neil!
In my case it worked, your solution allowed me to see myself, but the image is wrong. The image that gets displayed from the cam is squeezed on the ‘X’ axis, and also the image is mirrored.
Tried doing: cam = new Capture(this, 640, 480, "pipeline:autovideosrc"); thinking that the proportions were wrong, but the result was the same. What do you think I could do?

Many thanks from the Dominican Rep.

First make sure that the resolution you are requesting is one that is supported by your camera. An easy way to do this on Windows is in the camera app under video settings: there is a dropdown which will show you all of the resolutions supported by your webcam. Alternatively use something like ffmpeg which can provide much more information if you need it.

Second make sure that you are not inadvertently stretching the image when you draw it by specifying incorrect dimensions.

Once you’ve ruled out these more common issues, you can try adjusting it in your pipeline with a caps filter such as video/x-raw, width=640, height=480. Check the GStreamer documentation and look for examples online if you decide to go this route.
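As a sketch of what that caps-filter route could look like (assuming your camera actually supports 640x480 raw output; the constructor form with explicit width and height is the same one you tried earlier):

```processing
import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  // Constrain the source to 640x480 raw video with a caps filter.
  cam = new Capture(this, 640, 480,
    "pipeline:autovideosrc ! video/x-raw, width=640, height=480");
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
  }
  image(cam, 0, 0, width, height);
}
```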

The commonly prescribed solution to mirror your video is:

pushMatrix();
translate(width, 0); // shift the origin so the flipped image stays on screen
scale(-1, 1);
image(cam, 0, 0);
popMatrix();

Other people seem to have success with this, but every time I have tried it, it has either not worked or killed my framerate, so I have started doing it through gstreamer with videoflip method=horizontal-flip

Append commands to your pipeline separated by ! until you get something you can work with in Processing. Putting that together, your code will look something like: cam = new Capture(this, "pipeline: autovideosrc ! video/x-raw, width=640, height=480 ! videoflip method=horizontal-flip");
