This project is still a work in progress, but I would like to share the progress I have made so far and the direction I want to take, and gather some feedback or advice about it.
I wanted to build an ambilight system for my TV, but I ran into a problem: I get my video feed over an Ethernet cable only, so I couldn’t use the solutions you can find online that tap an HDMI, USB or VGA feed.
For those who don’t know what an ambilight system is, here is a link to the corresponding Wikipedia page:
To overcome that problem, my idea is to capture the picture with a camera and use that feed as the source for computing the LED colors.
For this, I will connect a camera (RPI04V2) to a Raspberry Pi 3 B+ running a Processing sketch. The sketch will read the feed from the camera and compute the color of each LED for the current image. That information will then be sent over Bluetooth to an Arduino driving the LEDs.
Now, I am not completely sure it will work, because of three main factors:
The delay introduced between the moment the image is displayed and the moment the LEDs receive the information. I have already run some tests and I think it will be fine for normal scenes, but maybe a bit slow for action scenes.
The quality of the feed: because the source is a camera, darker and brighter zones appear on the image, and they can affect the LED colors. I have already thought of some ways to tackle this issue and ran some promising tests, but it still needs a bit more work.
The camera parameters: I’m afraid that when the image becomes brighter, the camera will lower its sensitivity and produce a darker picture, and the other way around for a dark image. I haven’t worked on that one yet.
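For that last point, it might be possible to act at the driver level rather than in the sketch. This is an untested sketch of the idea, assuming the camera is exposed through V4L2; the control names below are assumptions that vary per driver, so they need to be checked against the actual output of the first command:

```shell
# List the controls this driver actually exposes (names differ per camera)
v4l2-ctl --list-ctrls

# Hypothetical example: switch to manual exposure and fix it, and
# disable automatic white balance. On many drivers auto_exposure=1
# means "manual", but verify against the list above first.
v4l2-ctl --set-ctrl=auto_exposure=1
v4l2-ctl --set-ctrl=exposure_time_absolute=200
v4l2-ctl --set-ctrl=white_balance_automatic=0
```

If the controls exist for this camera, locking them would remove the auto-brightness behavior entirely instead of compensating for it in software.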
You can find the current version in this github repository:
What is already done:
Display the list of the available cameras and select the one of interest
Detect the TV screen position by displaying a unique color on the TV screen
Straighten the image of the TV
What needs to be done:
Apply the color correction to the image
Make it work on the Raspberry Pi
Get the LED colors
Send the information to the Arduino via Bluetooth
Build the LED panel
Write the Arduino code
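For the Bluetooth part, nothing is fixed yet, but here is a minimal sketch of the frame format I have in mind. The layout (header byte, LED count, RGB triplets, XOR checksum) is just a placeholder choice, not a decided protocol:

```java
// Hypothetical serial/Bluetooth frame for the LED colors.
// Layout: 1 header byte (0xAB), 1 byte LED count, then 3 bytes (R,G,B)
// per LED, then a 1-byte XOR checksum over the color payload.
public class LedFrame {
    static final int HEADER = 0xAB;

    public static byte[] encode(int[][] colors) {
        byte[] frame = new byte[2 + colors.length * 3 + 1];
        frame[0] = (byte) HEADER;
        frame[1] = (byte) colors.length;
        int checksum = 0;
        for (int i = 0; i < colors.length; i++) {
            for (int c = 0; c < 3; c++) {
                byte b = (byte) colors[i][c];
                frame[2 + i * 3 + c] = b;
                checksum ^= b & 0xFF;  // running XOR of the payload
            }
        }
        frame[frame.length - 1] = (byte) checksum;
        return frame;
    }
}
```

The Arduino side would then just wait for the header byte, read the announced number of triplets, and drop the frame if the checksum does not match.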
If you have any ideas on how to improve it, or if you want me to explain in a bit more detail how some of the algorithms work, feel free to post a new message.
I finally got some time and courage to continue this project.
I started developing the concept on my Windows computer using the video library provided by the Processing Foundation. Sadly, I found out the hard way that I can’t use that library on a Pi board, so I switched to the glvideo library by gohai instead. Porting the previous code gave me a lot of trouble and I ended up with big changes in the structure. All that to say that I have created a new repository on GitHub for a fresh start (the old one is still available):
Now let’s get to the fun part, some screenshots of how it works.
For testing purposes, the TV is replaced by a piece of paper stuck to the wall:
Then you need to calibrate the tool. It consists of two steps:
Display a single solid color (different enough from the background) on your TV
Adjust the threshold value until only the screen is selected
The tool assists you by displaying a red overlay on top of the picture, showing the currently selected area.
You can see the difference between a bad threshold and a good one in the two following screenshots:
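In essence, the selection is a per-pixel color-distance test against the calibration color. Here is a simplified version of the idea (it uses plain squared RGB distance, which may differ from what the actual sketch does):

```java
// Simplified sketch of the calibration mask: each pixel is compared to
// the calibration color in RGB space against the user-chosen threshold.
// Pixels are packed as 0xRRGGBB ints, as in Processing's pixels[] array.
public class ScreenMask {
    // true when the pixel is close enough to the calibration color
    public static boolean selected(int pixel, int calib, int threshold) {
        int dr = ((pixel >> 16) & 0xFF) - ((calib >> 16) & 0xFF);
        int dg = ((pixel >> 8) & 0xFF) - ((calib >> 8) & 0xFF);
        int db = (pixel & 0xFF) - (calib & 0xFF);
        return dr * dr + dg * dg + db * db <= threshold * threshold;
    }

    // build the boolean mask that the red overlay visualizes
    public static boolean[] mask(int[] pixels, int calib, int threshold) {
        boolean[] m = new boolean[pixels.length];
        for (int i = 0; i < pixels.length; i++)
            m[i] = selected(pixels[i], calib, threshold);
        return m;
    }
}
```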
Once you have found a good threshold, the tool pre-computes (“bakes”) the calculations needed at every frame, for better performance.
The first task is to find the corners of that area. The second debug mode shows that this operation was successful:
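One simple way to find those corners (a sketch of the idea, not necessarily the exact code in the repository): among the selected pixels, the top-left corner minimizes x + y, the bottom-right maximizes it, and x − y plays the same role for the other diagonal:

```java
// Corner detection by diagonal extremes over the thresholded mask.
// This assumes the screen is roughly axis-aligned in the camera view.
public class Corners {
    // mask is row-major with row width w; returns
    // {tlX, tlY, trX, trY, brX, brY, blX, blY}
    public static int[] find(boolean[] mask, int w) {
        int minS = Integer.MAX_VALUE, maxS = Integer.MIN_VALUE;
        int minD = Integer.MAX_VALUE, maxD = Integer.MIN_VALUE;
        int[] r = new int[8];
        for (int i = 0; i < mask.length; i++) {
            if (!mask[i]) continue;
            int x = i % w, y = i / w;
            int s = x + y, d = x - y;
            if (s < minS) { minS = s; r[0] = x; r[1] = y; } // top-left
            if (d > maxD) { maxD = d; r[2] = x; r[3] = y; } // top-right
            if (s > maxS) { maxS = s; r[4] = x; r[5] = y; } // bottom-right
            if (d < minD) { minD = d; r[6] = x; r[7] = y; } // bottom-left
        }
        return r;
    }
}
```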
The second task is to reshape that tetragon into a perfect rectangle. To save some processing time, the accuracy of the operation can be tuned, and only the outer edges (the ones we are really interested in) are straightened.
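The straightening can be sketched with a bilinear mapping. This is a simplification that ignores perspective, so it only approximates a true homography, but it shows the principle: each output pixel (u, v) in the unit square is pulled from the source point obtained by blending the four corners:

```java
// Quad-to-rectangle mapping by bilinear interpolation of the corners.
// For each normalized output coordinate (u, v) in [0,1]^2, this returns
// the camera-image point to sample. A full homography would be needed
// for exact perspective correction; this is the cheap approximation.
public class Unwarp {
    // corners as {tlX, tlY, trX, trY, brX, brY, blX, blY}
    public static double[] samplePoint(double[] q, double u, double v) {
        double topX = q[0] + (q[2] - q[0]) * u;  // along the top edge
        double topY = q[1] + (q[3] - q[1]) * u;
        double botX = q[6] + (q[4] - q[6]) * u;  // along the bottom edge
        double botY = q[7] + (q[5] - q[7]) * u;
        return new double[] { topX + (botX - topX) * v,
                              topY + (botY - topY) * v };
    }
}
```

Only evaluating this for the border pixels (instead of the whole rectangle) is what makes the “outer edges only” optimization possible.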
To better show the result, the background picture is changed:
After that, it is just a matter of figuring out the proper colors to display. Here are the results with the previous background, and with a new one to better show the effect:
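The simplest version of that last step, assuming a plain average over the border zone facing each LED (the real sketch may weight or filter the pixels differently), would be:

```java
// Average color of a rectangular zone of a row-major 0xRRGGBB image,
// e.g. the strip of rectified border pixels assigned to one LED.
public class ZoneColor {
    public static int average(int[] px, int w, int x0, int y0, int zw, int zh) {
        long r = 0, g = 0, b = 0;
        int n = zw * zh;
        for (int y = y0; y < y0 + zh; y++)
            for (int x = x0; x < x0 + zw; x++) {
                int p = px[y * w + x];
                r += (p >> 16) & 0xFF;
                g += (p >> 8) & 0xFF;
                b += p & 0xFF;
            }
        return (int) (r / n) << 16 | (int) (g / n) << 8 | (int) (b / n);
    }
}
```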
In addition to flicker reduction, you might also try out a smoothing / crossfade operation with an adjustable duration – although if I understand right the original aesthetic of the ambilight is to be instantaneous.
Thank you @jeremydouglass for the feedback, really appreciated!
The problem I have with flicker reduction is that I think it is hardware related. I had huge trouble making it work, and currently, for example, I can’t use the camera at more than 20 fps when, in theory, I should be able to go up to 60 fps…
Another issue I have with the camera is the auto-brightness adjustment: I have no control over it whatsoever, and it is not a simple light sensor outside the camera that I could just tape over.
And I don’t think I can do anything about that flickering effect in code.
Now for the smoothing/crossfade operation: as you said, the goal is to be as instantaneous as possible so that it feels like an extension of the screen. Smoothing would break that effect.
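That said, if someone wants to experiment with it anyway, an exponential blend gives an adjustable lag with a single tuning parameter (alpha here is hypothetical; alpha = 1 keeps the instantaneous behavior, smaller values crossfade):

```java
// Exponential smoothing of one color channel between frames.
// prev is the currently displayed value, target the freshly computed
// one; alpha in (0, 1] controls how fast the LED follows the screen.
public class Smooth {
    public static int blendChannel(int prev, int target, double alpha) {
        return (int) Math.round(prev + (target - prev) * alpha);
    }
}
```

Applied per channel per LED every frame, this would also damp some of the camera-induced flicker as a side effect, at the cost of reaction time.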
Now, pushed completely to the limit, the idea could be not to aim for a perfect match with the screen, but rather for ambient lighting around it: instead of using just the pixels on the edges, use the whole screen, sense the ambiance of the scene, and display something accordingly.
If people have ideas along those lines, I would be happy to read them.