@noel ===
Yes; I told you that in the previous post… Note that this solution is one among others, as is often the case (videoView… textureView), but that is not really important!
I don't know what you precisely want to achieve, but if you are still interested, I uploaded some code for a video player with play/pause/rewind buttons and a slider to fast-forward/backward: here
@Mesalcode ===
Yes, that is possible. In the thread that starts the SurfaceView, comment out the setZOrderOnTop() call; instead of it, write mySurface.setZOrderMediaOverlay(true);
In your setup(), or earlier (onStart()), create the view for your rect. The simplest way is to create a button from the Button class, setting its text (if needed!), its text color, its coordinates, and its layout params; then add it to the content view. If you want only the text, set its background color to transparent just before adding it. Of course you can also a) create a drawable shape and use the same code, or b) create another SurfaceView for your shape and put it on top of the first one with the video.
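A minimal sketch of that idea, assuming a Processing-for-Android sketch whose Activity is reachable via getActivity(); the text, sizes, and margins are illustrative, not tested code:

```java
import android.app.Activity;
import android.graphics.Color;
import android.widget.Button;
import android.widget.FrameLayout;

void addOverlayText() {
  final Activity act = getActivity();
  act.runOnUiThread(new Runnable() {
    public void run() {
      Button b = new Button(act);
      b.setText("Score: 0");                    // the text you want on top
      b.setTextColor(Color.WHITE);
      b.setBackgroundColor(Color.TRANSPARENT);  // only the text stays visible
      FrameLayout.LayoutParams lp = new FrameLayout.LayoutParams(
          FrameLayout.LayoutParams.WRAP_CONTENT,
          FrameLayout.LayoutParams.WRAP_CONTENT);
      lp.leftMargin = 50;  // coordinates of the view
      lp.topMargin  = 50;
      act.addContentView(b, lp);
    }
  });
}
```

Views must be created and added on the UI thread, hence the runOnUiThread() wrapper.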
That is very unfortunate for me, as I am using Processing for its built-in functions for rendering textured cubes etc. I have already spent a lot of time building this app and don't want to throw all my work away for the background.
My idea was to split the video into individual frames, store the frames in an array, and then iterate through the frames every time draw() is called. However, this causes the app to crash without any error message after a few seconds. An OutOfMemoryError would be an explanation, but I also tried loading only 20 images of 10 KB each and it would still crash.
Do you have any explanation for why this is happening?
That's far too memory-consuming. When you use the GIF library with a small file, you already have to raise Processing's memory limit to 1 GB in the preferences. But I guess you could maybe get the frames of the "underlying" video by capturing a frame at run time with the MediaCodec class, transferring it into a PGraphics, and displaying it in the sketch.
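To put numbers on that: a decoded frame lives in memory as uncompressed ARGB pixels (4 bytes each), regardless of how small the JPG file is on disk. A back-of-the-envelope calculation in plain Java (the 1080×1920 full-screen resolution and the class name are my assumptions):

```java
public class FrameMemory {
    // Uncompressed size of a stack of decoded ARGB frames.
    static long bytesFor(int frames, int width, int height) {
        return (long) frames * width * height * 4L; // 4 bytes per ARGB pixel
    }

    public static void main(String[] args) {
        long total = bytesFor(300, 1080, 1920);
        // 300 full-screen frames ≈ 2.32 GiB, far beyond a typical Android heap
        System.out.printf("300 frames = %.2f GiB%n",
                total / (1024.0 * 1024 * 1024));
    }
}
```

So the 300-frame Animation sketch needs gigabytes of heap no matter how well the source images are compressed.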
Have you tried what I explained? (I have tested it and it works.)
Sorry if I did not clarify the problem enough, but I only want to add a moving background to an existing game that consists entirely of Processing functions. So I would have to replace everything with native methods and would basically have wasted all my work just to add a background.
Can you post a code snippet that shows what you are doing when it crashes?
This is a minimal reproducible example:
Animation animation1;
float xpos;
float ypos;
float drag = 30.0;

void setup() {
  fullScreen(P3D);
  background(255, 204, 0);
  frameRate(24);
  animation1 = new Animation("", 300);
  ypos = height * 0.25;
}

void draw() {
  println(frameRate);
  float dx = mouseX - xpos;
  xpos = xpos + dx/drag;
  // Display the sprite at the position xpos, ypos
  if (mousePressed) {
    background(153, 153, 0);
    animation1.display(xpos-animation1.getWidth()/2, ypos);
  }
  /*
  else {
    background(255, 204, 0);
    animation2.display(xpos-animation1.getWidth()/2, ypos);
  }*/
}

// Class for animating a sequence of images
class Animation {
  PImage[] images;
  int imageCount;
  int frame;

  Animation(String imagePrefix, int count) {
    imageCount = count;
    images = new PImage[imageCount];
    for (int i = 0; i < imageCount; i++) {
      // Use nf() to number format 'i' into four digits
      String filename = imagePrefix + nf(i, 4) + ".jpg";
      images[i] = loadImage(filename);
    }
  }

  void display(float xpos, float ypos) {
    frame = (frame+1) % imageCount;
    image(images[frame], xpos, ypos);
  }

  int getWidth() {
    return images[0].width;
  }
}
That sounds really complicated because it is a mix of native Android and Processing, but it could work. I haven't worked with the MediaCodec class yet, and to be honest I don't have the know-how regarding codecs etc. Could you provide a code example that shows how to read a frame from a video file and how to get its pixels?
Edit: I found this code on StackOverflow, but the author calls it very inefficient, as it takes a fifth of a second to execute per bitmap:
public static void read(@NonNull final Context iC, @NonNull final String iPath)
{
    long time;
    int fileCount = 0;

    // Create a new MediaPlayer (only used to get the duration)
    MediaPlayer mp = MediaPlayer.create(iC, Uri.parse(iPath));
    time = mp.getDuration() * 1000;  // milliseconds -> microseconds
    Log.e("TAG", String.format("TIME :: %s", time));

    MediaMetadataRetriever mRetriever = new MediaMetadataRetriever();
    mRetriever.setDataSource(iPath);

    long a = System.nanoTime();
    // frame rate 10.03/sec, 1/10.03 = in microseconds 99700
    for (int i = 99700; i <= time; i = i + 99700)
    {
        Bitmap b = mRetriever.getFrameAtTime(i, MediaMetadataRetriever.OPTION_CLOSEST_SYNC);
        if (b == null)
        {
            Log.e("TAG", String.format("BITMAP STATE :: %s", "null"));
        }
        else
        {
            fileCount++;
        }
        long curTime = System.nanoTime();
        Log.e("TAG", String.format("EXECUTION TIME :: %s", curTime - a));
        a = curTime;
    }
    Log.e("TAG", String.format("COUNT :: %s", fileCount));
}
Edit 2: On Codota I found Bitmap.getPixels(), but I am unsure what it returns.
Yes, I saw that. He is talking about 5 frames per second; still better than the one I found (2 f/s). @akenaton has been my guru since I started with P4A, so I guess we will rely on him.
Bitmap.getPixels() is the same as when we use ‘loadPixels()’
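More precisely, both hand you one packed 32-bit ARGB int per pixel: Bitmap.getPixels() on an ARGB_8888 bitmap fills an int[] with the same layout the sketch sees in pixels[] after loadPixels(). A small plain-Java illustration of that layout (class and helper names are mine):

```java
public class ArgbPixels {
    // Pack four 0-255 channels into one ARGB int, the layout shared by
    // Bitmap.getPixels() (ARGB_8888) and Processing's pixels[] array.
    static int pack(int a, int r, int g, int b) {
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    static int alpha(int c) { return (c >>> 24) & 0xFF; }
    static int red(int c)   { return (c >> 16) & 0xFF; }
    static int green(int c) { return (c >> 8) & 0xFF; }
    static int blue(int c)  { return c & 0xFF; }

    public static void main(String[] args) {
        int c = pack(255, 10, 20, 30);
        System.out.println(red(c));  // prints 10
        System.out.println(blue(c)); // prints 30
    }
}
```

So an int[] filled by Bitmap.getPixels() can be copied straight into a PImage's pixels[] (followed by updatePixels()) without any per-channel conversion.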
Okay. I hope @akenaton has an idea for performance improvements; if not, I will have to live with the terrible performance of 5 fps or less, or replace the video animation with an object-oriented Processing animation that won't look nearly as good as the current background.
It also refers to this site: https://bigflake.com/mediacodec/#ExtractMpegFramesTest
It is really user-unfriendly, but it states that it is able to perform at 30 fps, which would be perfectly fine, so I will have to try to understand this method.
This is an example I found on the site, which gets the first 10 frames of a video and exports them as PNGs; I would have to change it to read the pixels instead:
That's the one I read yesterday. But you didn't complete the sentence with "but the additional steps required to save it to disk as a PNG are expensive (about half a second)." That's the 2 f/s.