# Detect direction from video motion and apply it to Particle System

Greetings,

I'm a newcomer to this forum and currently an apprentice with Processing.
I did a lot of research in the previous Processing forums and in this one, but I haven't found a resource/answer that applies to the problem I'm currently working on.

So here’s the situation I’m facing:
I've been tinkering for some time with a sketch that detects motion from a video stream and generates particles for every pixel that has changed between the current and the previous frame (based on the RGB difference between the pixel colors of the two video frames).

The code of my sketch is a combination of the simple particle system (from The Nature of Code) and the video motion tracking program (also part of Daniel Shiffman's YouTube tutorials).
What my code currently does is create the particles with a fixed direction, always left-to-right.
What I want to achieve is to detect the direction of a moving person/object in the video (moving left-to-right or vice versa) and apply that direction to the movement of the generated particles, i.e. if a person moves from right to left, the generated particles should move in the same direction as the person.

I've tried to play with the example from the Smoking Particle System in my code, but I'm not sure how to apply a similar "wind" force whose direction comes from the tracked object/person in the video instead of from the mouse's horizontal position.

``````
// Calculate a "wind" force based on mouse horizontal position
float dx = map(mouseX, 0, width, -0.2, 0.2);
``````

I’m not sure how to proceed with this feature and also I’m not sure up to what extent is possible to achieve an effect like this. I’d appreciate any ideas/suggestions/feedback.

Here's the current code:

``````
import processing.video.*;
import java.util.Iterator;

Capture video;
ParticleSystem ps;

// Previous frame
PImage prevFrame;

// Degree of sensitivity to motion detection
float threshold = 99;

float avgX = 0;
int count = 0;

void captureEvent(Capture video) {
  // Save the previous frame for motion detection!!
  // Before we read the new frame, we always save the previous frame for comparison!
  prevFrame.copy(video, 0, 0, video.width, video.height, 0, 0, video.width, video.height);
  prevFrame.updatePixels();

  // Read the new frame from the camera
  video.read();
}

void setup() {
  size(640, 480);
  pixelDensity(displayDensity()); // pull the display's density dynamically

  video = new Capture(this, width, height); // set up video
  video.start();

  ps = new ParticleSystem(new PVector(width/2, height-60)); // create the particle system
  prevFrame = createImage(video.width, video.height, RGB);  // create the initial image for prevFrame
}

void draw() {

  image(video, 0, 0);

  // Make the pixel arrays of both frames available
  video.loadPixels();
  prevFrame.loadPixels();

  // Begin loop to walk through every pixel
  for (int x = 0; x < video.width; x = x + 4) {
    for (int y = 0; y < video.height; y = y + 4) {

      int loc = x + y*video.width;            // Step 1, what is the 1D pixel location
      color current = video.pixels[loc];      // Step 2, what is the current color
      color previous = prevFrame.pixels[loc]; // Step 3, what is the previous color

      // Step 4, compare colors (previous vs. current)
      float r1 = red(current);
      float g1 = green(current);
      float b1 = blue(current);
      float r2 = red(previous);
      float g2 = green(previous);
      float b2 = blue(previous);
      float diff = dist(r1, g1, b1, r2, g2, b2);

      // Step 5, how different are the colors?
      // If the color at that pixel has changed, then there is motion at that pixel.
      // Modulo operations used for limiting the amount of generated particles on the screen.
      if (diff > threshold && x % 8 == 0 && y % 4 == 0) {
        PVector position = new PVector(x, y); // capture pixel position
        ps.addParticle(position, current);    // spawn a particle at the motion pixel
      }
    }
  }

  // Apply gravity force to particles
  PVector gravity = new PVector(0, 0.02);
  ps.applyForce(gravity);

  ps.run(); // run the particle system

  smooth();
  noStroke();
}

// A class to describe a group of particles.
// An ArrayList is used to manage the list of particles.
class ParticleSystem {
  ArrayList<Particle> particles;
  PVector origin;

  ParticleSystem(PVector position) {
    origin = position.copy();
    particles = new ArrayList<Particle>();
  }

  void addParticle(PVector pos, color currentPartColor) {
    particles.add(new Particle(pos, currentPartColor));
  }

  void applyForce(PVector force) {
    for (Particle p : particles) {
      p.applyForce(force);
    }
  }

  void run() {
    Iterator<Particle> it = particles.iterator(); // using an Iterator object instead of counting with int i
    while (it.hasNext()) {
      Particle p = it.next();
      p.run();
      if (p.isDead()) {
        it.remove(); // only remove particles that have fully decayed
      }
    }
  }
}

// A simple Particle class

class Particle {
  PVector position;
  PVector velocity;
  PVector acceleration;
  float decay;
  color currentCol;
  float mass = 1;

  Particle(PVector currentPos, color currentPartColor) {
    acceleration = new PVector(0, 0);
    position = currentPos.copy();
    decay = 43.0;
    currentCol = currentPartColor;
    velocity = new PVector(1.7, random(-0.1, 0.1));
  }

  void run() {
    update();
    display();
  }

  void applyForce(PVector force) {
    PVector f = force.copy(); // copy so the caller's vector isn't modified
    f.div(mass);              // we take mass into account
    acceleration.add(f);
  }

  // Method to update position
  void update() {
    velocity.add(acceleration);
    position.add(velocity);
    acceleration.mult(0);
    decay -= 0.4; // rate of decay
  }

  // Method to display
  void display() {
    fill(currentCol, decay);
    ellipse(position.x, position.y, 7, 7);
  }

  // Is the particle still useful?
  boolean isDead() {
    if (decay < 0.0) {
      return true;
    } else {
      return false;
    }
  }
}
``````

I'm running Processing 3.5.3 with the Processing video library 2.0-beta1 on Ubuntu 18.04.2.

Thank you for your time and patience.

Hi!
Which Daniel Shiffman tutorial have you used?
I have taken a look at this one :

I am not sure if you have tried this already (maybe I didn't understand your problem properly, you tell me), but I think you might try using the variables avgX and avgY, which he uses to draw a pink dot in the video (or, later in the video, motionX). If you take the difference between the previous avgX and the current avgX, you can tell whether the pink dot (i.e. the average motion) is moving left or right. You can then generate a PVector force similar to the wind in the Smoking Particle System example.
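As a minimal sketch of that idea in plain Java (so it can run outside Processing): `windFromMotion` is a hypothetical helper name, and the scale factor is just a guess you'd tune. It maps the frame-to-frame change in avgX to a horizontal wind strength, clamped to the same [-0.2, 0.2] range the Smoking Particle System example uses for the mouse.

```java
public class WindFromMotion {

    // Map the frame-to-frame change in the motion centroid (avgX) to a
    // horizontal "wind" strength. Positive = rightward, negative = leftward.
    // The 4.0f scale factor is an arbitrary gain; tune it to taste.
    static float windFromMotion(float prevAvgX, float avgX, float width) {
        float dx = (avgX - prevAvgX) / width * 4.0f;
        // Clamp to the [-0.2, 0.2] range used by the mouse-based example
        return Math.max(-0.2f, Math.min(0.2f, dx));
    }

    public static void main(String[] args) {
        // Motion centroid moved right: positive (rightward) wind
        System.out.println(windFromMotion(100, 200, 640));
        // Motion centroid moved left: negative (leftward) wind
        System.out.println(windFromMotion(300, 100, 640));
    }
}
```

In the sketch you would then do something like `ps.applyForce(new PVector(dx, 0));` once per frame, and remember the current avgX as the previous one for the next frame.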

Absolute pixel difference is a measure of change – it has no direction.

For motion, you need to be able to locate an object in frame 1, then locate the same object in frame 2, then find the distance moved. This could be done with fiducials, face detection, or object detection, but what you are able to detect determines what you can then measure the directional motion of. If you use a face detector and the camera sees no faces, then no directional motion is measured.

There are ways to attempt to infer relative motion without objects – by creating bins of color, for example, and then measuring their distribution on the screen – but they are often extremely susceptible to small changes in lighting conditions. If you have a stable background, you can also use background subtraction to isolate blobs of changing pixels, find the center of mass of all changes, and track that – but this strategy will fall apart if two or more people move in front of your camera at the same time (although that might not matter if all you care about is "making wind").
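The center-of-mass idea can be sketched in a few lines of plain Java. `MotionCentroid`, `centroidX`, and `direction` are hypothetical names; in a Processing sketch you would feed `centroidX` the x positions of the pixels whose diff exceeded the threshold, and compare successive frames:

```java
import java.util.List;

public class MotionCentroid {

    // Average x position of all "changed" pixels in one frame.
    // Returns NaN when no pixels changed (no motion detected).
    static float centroidX(List<Integer> changedXs) {
        if (changedXs.isEmpty()) return Float.NaN;
        float sum = 0;
        for (int x : changedXs) sum += x;
        return sum / changedXs.size();
    }

    // Direction of motion between two frames: +1 right, -1 left, 0 none/unknown.
    static int direction(float prevCentroidX, float centroidX) {
        if (Float.isNaN(prevCentroidX) || Float.isNaN(centroidX)) return 0;
        return (int) Math.signum(centroidX - prevCentroidX);
    }
}
```

The sign of that difference is then all you need to point a wind force left or right; as noted above, the centroid becomes meaningless as soon as two people move at once.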

Compared to blob detection, a full object detection model (“that is a cup, that is a bicycle, that is a cow”) might be overkill for your application – but here is a recent example:

http://www.magicandlove.com/blog/2018/08/08/darknet-yolo-v3-testing-in-processing-with-the-opencv-dnn-module/
