Adaptive background subtraction with motionless objects

Hi! I'm looking for a background subtraction method for outdoor conditions: one that gradually adapts itself to changing environmental light, but can still reveal a presence even when it is not moving. The problem with the adaptive OpenCV background subtraction methods is that they can only detect a presence while it is moving. On the other hand, static background subtraction methods break down when the lighting conditions change.

To get this, I've modified Golan Levin's method from the Processing video library (current frames are compared against an initial reference frame), setting a certain low difference threshold. I assume that any change above that threshold is due to a presence (a person, an animal, etc.), while changes below it are due to gradual lighting variation. I then lerp each below-threshold pixel toward the current frame and write the result back into the background pixel array.

/* auto-updating background part*/
int diferencia = diffR + diffG + diffB;
if (diferencia < mindDif) backgroundPixels[i] = lerpColor(backgroundPixels[i], video.pixels[i], .5);

That's not working satisfactorily: the image gets dirty, far from homogeneous. Any idea of how to achieve this would be extremely welcome. I'm posting the whole code in case it helps. Thanks a lot for your time.

import processing.video.*;

int numPixels;
int[] backgroundPixels;
Capture video;
int camSel = 0;     // 0 = default camera, otherwise the second entry in Capture.list()
int mindDif = 20;   // summed RGB difference below which a pixel is treated as a lighting change
boolean subtraction, lowSubtr;

void setup() {
  size(640, 480);

  if (camSel == 0) video = new Capture(this, width, height);
  else video = new Capture(this, width, height, Capture.list()[1]);

  video.start();

  numPixels = width * height;
  backgroundPixels = new int[numPixels];
  loadPixels();
}

void draw() {
  if (video.available()) {
    video.read(); 
    video.loadPixels(); 

    int presenceSum = 0;

    for (int i = 0; i < numPixels; i++) { 
      color currColor = video.pixels[i];
      color bkgdColor = backgroundPixels[i];

      int currR = (currColor >> 16) & 0xFF;
      int currG = (currColor >> 8) & 0xFF;
      int currB = currColor & 0xFF;

      int bkgdR = (bkgdColor >> 16) & 0xFF;
      int bkgdG = (bkgdColor >> 8) & 0xFF;
      int bkgdB = bkgdColor & 0xFF;

      int diffR = abs(currR - bkgdR);
      int diffG = abs(currG - bkgdG);
      int diffB = abs(currB - bkgdB);

      presenceSum += diffR + diffG + diffB;

      pixels[i] = 0xFF000000 | (diffR << 16) | (diffG << 8) | diffB;

      /* auto-updating background part*/
      int diferencia = diffR+diffG+diffB;
      /*detect pixels that have change below a threshold,
      lerp them and substitute the original background pixels */
      if (lowSubtr && diferencia<mindDif) backgroundPixels[i]=lerpColor(backgroundPixels[i], video.pixels[i], .5);
    }

    updatePixels();
  }
}


void keyPressed() {
  // any key press captures the current frame as the new reference background
  startSubtr();
}

void startSubtr() {
  arraycopy(video.pixels, backgroundPixels);
  lowSubtr=true;
}


/* unused helper: copy _inputArr into _srcArr element by element */
void actualizacion(int[] _srcArr, int[] _inputArr) {
  for (int i = 0; i < _srcArr.length; i++) {
    _srcArr[i] = _inputArr[i];
  }
}
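For reference, the auto-updating step can be isolated as a plain Java function (Processing compiles to Java, so the bit operations carry over unchanged). One possible cause of the "dirty" image is the blend factor of .5, which lets the background absorb near-threshold pixels within a couple of frames; a much smaller adaptation rate is a common choice. The threshold and rate below are illustrative assumptions, not values from the sketch:

```java
// Sketch of the per-pixel background update, assuming a small adaptation
// rate (ALPHA) instead of the 0.5 lerp used in the code above.
public class BgUpdate {
  static final int MIN_DIFF = 60;    // assumed summed-RGB threshold
  static final float ALPHA = 0.05f;  // assumed slow adaptation rate

  // Returns the updated background pixel for one frame.
  static int update(int bg, int cur) {
    int bR = (bg >> 16) & 0xFF, bG = (bg >> 8) & 0xFF, bB = bg & 0xFF;
    int cR = (cur >> 16) & 0xFF, cG = (cur >> 8) & 0xFF, cB = cur & 0xFF;
    int diff = Math.abs(cR - bR) + Math.abs(cG - bG) + Math.abs(cB - bB);
    if (diff >= MIN_DIFF) return bg;  // likely a presence: freeze the background
    // drift each channel gently toward the current frame (lighting change)
    int nR = Math.round(bR + ALPHA * (cR - bR));
    int nG = Math.round(bG + ALPHA * (cG - bG));
    int nB = Math.round(bB + ALPHA * (cB - bB));
    return 0xFF000000 | (nR << 16) | (nG << 8) | nB;
  }
}
```

With a small ALPHA, the stored background follows slow lighting drift over many frames, while pixels that jump past MIN_DIFF are never blended in, so a motionless presence keeps standing out.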

Merging earlier conversation to combine the thread:

You may want to try the OpenCV library to get a more robust background subtraction algorithm. It uses an “adaptive” algorithm – although, depending on your goals, you might want to drop background subtraction entirely and go with learning-based object detection.

At any rate, the algorithm and its paper are described here:

https://kyamagu.github.io/mexopencv/matlab/BackgroundSubtractorMOG.html
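As a rough illustration of the idea behind that approach (not the actual MOG implementation, which keeps a mixture of several Gaussians per pixel): each pixel can maintain a running mean and variance, classify values that fall more than a few standard deviations away as foreground, and only let background-labelled values update the model. A single-Gaussian toy version in plain Java, with assumed parameter values:

```java
// Toy single-Gaussian per-pixel model, a simplified stand-in for the
// mixture model used by BackgroundSubtractorMOG.
public class RunningGaussian {
  double mean;
  double var = 225.0;                // assumed prior: std-dev of 15 intensity levels
  static final double ALPHA = 0.02;  // assumed learning rate
  static final double K = 2.5;       // match threshold in standard deviations

  RunningGaussian(double initialIntensity) { mean = initialIntensity; }

  // Classifies intensity v, updating the model only on background matches.
  boolean isForeground(double v) {
    double d = v - mean;
    boolean fg = d * d > K * K * var;
    if (!fg) {                       // background: absorb into the running model
      mean += ALPHA * d;
      var += ALPHA * (d * d - var);
    }
    return fg;
  }
}
```

Because foreground values never update this toy model, a motionless object stays foreground indefinitely; the real MOG instead spawns a new Gaussian component for persistent values, which is why motionless foreground eventually melts into the background.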

…and it is built into the OpenCV for Processing library; there is an older example of a background subtraction sketch:

I haven’t read it, but there might be a more updated discussion / example in this recent chapter:

Chung B.W. (2017) “Understanding Motion.” In: Pro Processing for Images and Computer Vision with OpenCV. Apress, Berkeley, CA

Hi @jeremydouglass, and thanks for your response. I tried to simplify the content of this post in a later one, but I couldn’t delete this one.

I was studying the OpenCV method before writing this post, and it is exactly the kind of implementation that turns a motionless foreground into background.

What I’m looking for, and am trying to implement myself, is a method that detects subtle changes and gradually updates the background, while still understanding what is a foreground element even if it has not moved for seconds. Thanks for the references, I will read them; maybe they can shed some light on the matter. Thanks a lot!