Monitor live stream and log changes


Hope this post finds everybody safe and well at such a strange and worrying time globally.

I am currently trying to set up a camera trap at my bird feeder, and have so far set up an ESP32 WiFi camera module with a PIR sensor, which kind of works, but not really! The sensor keeps getting triggered by reflections from the sun, and I have a feeling it just isn’t going to be sensitive enough. Currently, once triggered, I get an email with the photo attached.

Another way (looking for advice as to the feasibility of this) could be to stream a constant live camera feed to my own website and then monitor the stream against a “resting” image, i.e. one where no bird is present.

Can anyone suggest how to go about this? I understand p5.js is the extra web-friendly version of Processing. Would it be possible to use it to monitor a live video stream, log changes, and save them as stills from the stream?

Tricky things in my mind would be:

  • Variations in light, morning vs evening. The change detection would still need to register no change against the “resting” image despite different light.
  • Variations in weather, rain vs dry vs snow (willing to forget this, as I can imagine it would be super hard). The change detection would still need to register no change despite different weather conditions.
  • Variations in feeder position, as the wind may blow it slightly. The change detection would still need to register no change despite slight movement.

Perhaps the sketch could just zoom right in/crop to the area of the stream which shows the feeder and ignore the rest of the image.
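To make the crop-and-compare idea concrete, here is a minimal sketch in plain JavaScript of comparing only a cropped region of the current frame against the “resting” image using mean absolute pixel difference. It assumes frames as flat greyscale arrays (like what you could derive from a p5.js capture after `loadPixels()`); the function names and the threshold value are illustrative, not from any library.

```javascript
// Mean absolute difference between two greyscale frames (flat arrays of
// width * height values, 0-255), restricted to a rectangular crop --
// e.g. just the area of the image that shows the feeder.
function regionDiff(frameA, frameB, width, crop) {
  let total = 0;
  let count = 0;
  for (let y = crop.y; y < crop.y + crop.h; y++) {
    for (let x = crop.x; x < crop.x + crop.w; x++) {
      const i = y * width + x;
      total += Math.abs(frameA[i] - frameB[i]);
      count++;
    }
  }
  return total / count; // average per-pixel difference, 0-255
}

// A bird "arrives" when the average difference exceeds a threshold.
// The threshold would need tuning against real footage.
function birdPresent(resting, current, width, crop, threshold = 25) {
  return regionDiff(resting, current, width, crop) > threshold;
}
```

Because only the crop is compared, the rest of the frame (sky, swaying branches) is ignored for free, which also cuts the per-frame work considerably.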

Attached is an image from the current setup (I know, the quality is laughably bad!)


Hi Henri,

It should be possible to train a neural network to detect a significant change at the feeder. And it is possible to deploy it alongside p5.js, because there is the TensorFlow.js library. It would need a couple of hundred example photos and lots of GPU power to train the network, but once the network is trained, using it doesn’t require much processing power.

One option would be to compare the current image to the previous one. Then lighting changes wouldn’t come into play: images taken a second or so apart should have more or less the same lighting. This document describes how to make and use a Grey Level Co-occurrence Matrix (GLCM). It can be used to analyse structures in an image. I would assume that movement of the feeder wouldn’t change the structural values much, but a bird in the image would create at least a significant local change. To ease the calculation, you should concentrate on a smaller area of the image.
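For a feel of what a GLCM involves, here is a toy version in plain JavaScript. It assumes greyscale pixels already quantised into a small number of levels, and uses a single offset (one pixel to the right); real implementations use several offsets and more statistics. All names are illustrative.

```javascript
// Count how often a pixel of level i sits immediately left of a pixel of
// level j. `pixels` is a flat width * height array of values 0..levels-1.
function glcm(pixels, width, height, levels) {
  const m = Array.from({ length: levels }, () => new Array(levels).fill(0));
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width - 1; x++) {
      const a = pixels[y * width + x];
      const b = pixels[y * width + x + 1];
      m[a][b]++;
    }
  }
  return m;
}

// One common texture statistic from a GLCM: contrast. A bird landing
// changes local structure, and with it values like this one, even if
// overall lighting drifts a little.
function contrast(m) {
  let total = 0;
  let sum = 0;
  for (let i = 0; i < m.length; i++) {
    for (let j = 0; j < m.length; j++) {
      total += m[i][j];
      sum += (i - j) * (i - j) * m[i][j];
    }
  }
  return total === 0 ? 0 : sum / total;
}
```

A flat patch gives a contrast of 0, while a busy, high-frequency patch gives a high value, so tracking this statistic over the feeder region is a crude but lighting-tolerant change signal.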

Hope this helps. If you find either option interesting I can help you with them.


Hi @SomeOne,

Thanks for your reply. I think comparing the current image to the previous one is perhaps more suited to my current level - walk before you can run, hey! I came across this video last night which I thought I could probably apply: Motion Detection Coding Train
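The core of that video's technique is frame differencing: compare every pixel of the current frame with the previous one and count how many changed beyond a threshold. A minimal plain-JavaScript version, assuming flat RGBA arrays like p5.js's `pixels[]` (the function name and threshold are my own, not from the video):

```javascript
// Fraction of pixels whose colour changed by more than `threshold`
// between two frames. `prev` and `curr` are flat RGBA arrays
// (4 values per pixel, as in p5.js pixels[]).
function changedPixelFraction(prev, curr, threshold = 40) {
  let changed = 0;
  const pixelCount = curr.length / 4;
  for (let i = 0; i < curr.length; i += 4) {
    const dr = curr[i] - prev[i];
    const dg = curr[i + 1] - prev[i + 1];
    const db = curr[i + 2] - prev[i + 2];
    // compare squared distances to avoid a sqrt per pixel
    if (dr * dr + dg * dg + db * db > threshold * threshold) {
      changed++;
    }
  }
  return changed / pixelCount;
}

// In a p5.js sketch you might then save a still when, say, more than
// 5% of the pixels changed -- a hypothetical hook, to be tuned:
// if (changedPixelFraction(prevPixels, capture.pixels) > 0.05) { ... }
```

Keeping the previous frame around and swapping it each tick is all the extra state this needs.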

Last night I had a play around and have now got a live stream of the camera up on my website:

It is running off a battery bank right now, so may go on and offline.

I’ll have a go at making a start this evening, thanks for the kind offer of some help :slight_smile:


That video was quite good and the presenter was funny. Anyway, that should give you a good starting point; he also gives a link to the code in the comments.

Oh man, Dan Shiffman is amazing, so much energy!