Help with displaying video

@noel ===
working code (from the first one you posted); I also have the version that gets the FrameLayout:


import android.app.Activity;
import android.content.Context;
import android.content.res.AssetFileDescriptor;
import android.content.res.AssetManager;
import android.content.res.Resources;
import android.media.MediaMetadataRetriever;
import android.media.MediaPlayer;
import android.os.Looper;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.view.View;
import android.view.ViewGroup;

AssetFileDescriptor afd;
Context context;
Activity act;
SurfaceView mySurface;
SurfaceHolder mSurfaceHolder;
MediaMetadataRetriever metaRetriever;
MediaPlayer mp;
int videoH = 0;
int videoL = 0;

void settings(){
  size(displayWidth, displayHeight);
}

void setup() {
  //size(400, 400, P2D);
  background(0);
  act = this.getActivity();
  context = act.getApplicationContext();
  Looper.prepare();
  mp = new MediaPlayer();
  try {
    afd = context.getAssets().openFd("who.mp4");
    // read the video dimensions from the file's metadata
    metaRetriever = new MediaMetadataRetriever();  // assign the field, don't shadow it with a local
    metaRetriever.setDataSource(afd.getFileDescriptor(), afd.getStartOffset(), afd.getLength());
    String h = metaRetriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_HEIGHT);
    String w = metaRetriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_WIDTH);
    videoL = int(w);
    videoH = int(h);
    // set the data source and prepare once, outside the UI runnable
    mp.setDataSource(afd.getFileDescriptor(), afd.getStartOffset(), afd.getLength());
    mp.prepare();
  }
  catch (IOException e) {
    e.printStackTrace();
  }
  mySurface = new SurfaceView(act);
  mSurfaceHolder = mySurface.getHolder();
  mSurfaceHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);  // deprecated since API 11; ignored on modern Android
  mSurfaceHolder.addCallback(new SurfaceHolder.Callback() {
    @Override
    public void surfaceCreated(SurfaceHolder surfaceHolder) {
      mp.setDisplay(surfaceHolder);
      println("surface created");
    }
    @Override
    public void surfaceChanged(SurfaceHolder surfaceHolder, int i, int i2, int i3) {
      mp.setDisplay(surfaceHolder);
      println("surface changed");
    }
    @Override
    public void surfaceDestroyed(SurfaceHolder surfaceHolder) {
      println("surface destroyed");
    }
  });
    startVideo();
}

void startVideo() {
  act.runOnUiThread(new Runnable() {
    public void run() {
      println("executing the runnable");
      mSurfaceHolder = mySurface.getHolder();
      mSurfaceHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
      // setDataSource()/prepare() were moved to setup(); doing them here would need the try/catch back
      //mp.setDataSource(afd.getFileDescriptor(), afd.getStartOffset(), afd.getLength());
      //mp.prepare();
      // size the view to the video and center it on screen
      act.addContentView(mySurface, new ViewGroup.LayoutParams(videoL, videoH));
      mySurface.bringToFront();
      mySurface.setZOrderOnTop(true);  // this is what makes the video visible above the sketch surface
      mySurface.setX(displayWidth/2 - videoL/2);
      mySurface.setY(displayHeight/2 - videoH/2);
      if (!mp.isPlaying()) {
        mp.start();
      }
    }
  });
}

void draw() {
}

void onStop() {
  if (mp != null) {
    mp.release();
    mp = null;
  }
  super.onStop();
}

@akenaton
Thank you very much.
I understood the setDataSource and prepare() tip.
But the mySurface.setZOrderOnTop(true); really did the trick.
Great!

@noel ===
Yes; I told you that in the previous post… Note that this solution is one among others, as often (VideoView… TextureView), but that is not really important!

I don't know precisely what you want to achieve, but if you are still interested, I uploaded code for a video player with play/pause/rewind buttons and a slider to fast-forward/backward here.

@vtvt ===

How do I draw OVER the video?

I have responded here.

@Mesalcode ===
Yes, that is possible. In the thread that starts the SurfaceView, comment out the setZOrderOnTop; instead of it write mySurface.setZOrderMediaOverlay(true);
In your setup() or before (onStart()) you create the view for your rect. The simplest way is to create a button from the Button class, setting its text (if needed!), its textColor, its coordinates and layout params; then you add it to the content view. If you want only the text, you set its background color to transparent just before adding it. Of course you can also a) create a drawable shape and use the same code, or b) create another SurfaceView for your shape and put it on top of the first one with the video. A sketch of the Button approach is below.
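Something along these lines (a rough sketch, not tested against your sketch; the label text, size, and position are placeholders, and act/displayWidth come from the code above):

import android.widget.Button;
import android.graphics.Color;

Button myButton;

void buildOverlay() {
  act.runOnUiThread(new Runnable() {
    public void run() {
      myButton = new Button(act);
      myButton.setText("over the video");              // optional label
      myButton.setTextColor(Color.WHITE);
      myButton.setBackgroundColor(Color.TRANSPARENT);  // keep only the text visible
      // ViewGroup is already imported in the sketch above
      act.addContentView(myButton, new ViewGroup.LayoutParams(400, 150));
      myButton.setX(displayWidth/2 - 200);
      myButton.setY(displayHeight/2 - 75);
      myButton.bringToFront();
    }
  });
}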

Thank you for your help!

That is very unfortunate for me, as I am using Processing for its built-in functions for rendering textured cubes etc. I have already spent a lot of time building this app and don't want to throw all my work away for the background.

My idea was to split the video into its individual frames, store the frames in an array, and then step through the array every time draw() is called. However, this crashes the app without any error message after a few seconds. An OutOfMemoryError would be an explanation, but I also tried loading only 20 images of 10 kB each and it still crashed.

Do you have any explanation for why this is happening?

@Mesalcode ===

  • Have you tried what I explained? (I have tested it and it works)
  • Can you post a code snippet which shows what you are doing when it crashes?

That’s far too memory-consuming. When you use the Gif lib with a small file, you already have to raise Processing’s memory limit to 1 GB in the preferences. (Decoded frames are big: a single full-HD frame at 4 bytes per pixel is about 8 MB, so 300 of them would be roughly 2.5 GB.) But I guess, maybe, you can grab a frame of the “underlying” video at run time with the MediaCodec class, transfer it into a PGraphics, and display it in the sketch.
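For the Bitmap-to-PImage step, something like this should work (a rough sketch; the helper name is mine, and it assumes you already have a decoded android.graphics.Bitmap):

import android.graphics.Bitmap;

// copy a decoded Bitmap's pixels into a PImage the sketch can draw with image()
PImage bitmapToPImage(Bitmap bmp) {
  int w = bmp.getWidth();
  int h = bmp.getHeight();
  PImage img = createImage(w, h, ARGB);
  img.loadPixels();
  // getPixels() fills the int[] with packed ARGB values, the same layout PImage uses
  bmp.getPixels(img.pixels, 0, w, 0, 0, w, h);
  img.updatePixels();
  return img;
}

The copy itself is fast; the expensive part is decoding the frame in the first place.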

Have you tried what I explained? (I have tested it and it works)

Sorry if I did not clarify the problem enough, but I only want to add a moving background to an existing game that fully consists of Processing functions. So I would have to replace everything with native methods and would basically have wasted all my work just to add a background.

Can you post a code snippet which shows what you are doing when it crashes?

This is a minimal reproducible example:

Animation animation1;

float xpos;
float ypos;
float drag = 30.0;

void setup() {
  fullScreen(P3D);
  background(255, 204, 0);
  frameRate(24);
  animation1 = new Animation("", 300);
  ypos = height * 0.25;
}

void draw() { 
  println(frameRate);
  float dx = mouseX - xpos;
  xpos = xpos + dx/drag;

  // Display the sprite at the position xpos, ypos
  if (mousePressed) {
    background(153, 153, 0);
    animation1.display(xpos-animation1.getWidth()/2, ypos);
  } 
  /*
  else {
    background(255, 204, 0);
    animation2.display(xpos-animation1.getWidth()/2, ypos);
  }*/
}



// Class for animating a sequence of numbered image frames

class Animation {
  PImage[] images;
  int imageCount;
  int frame;
  
  Animation(String imagePrefix, int count) {
    imageCount = count;
    images = new PImage[imageCount];

    for (int i = 0; i < imageCount; i++) {
      // Use nf() to number format 'i' into four digits
      String filename = imagePrefix + nf(i, 4) + ".jpg";
      images[i] = loadImage(filename);
    }
  }

  void display(float xpos, float ypos) {
    frame = (frame+1) % imageCount;
    image(images[frame], xpos, ypos);
  }
  
  int getWidth() {
    return images[0].width;
  }
}
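(Side note: if the crash is indeed memory, holding all 300 decoded PImages at once is the likely culprit. A rough variation that keeps only one frame alive and streams the next with requestImage() would keep the footprint flat; the file naming is assumed to match the example above.)

PImage currentFrame;
int frameIndex = 0;
int numFrames = 300;

void loadNextFrame() {
  // requestImage() loads on a background thread, so draw() never blocks
  currentFrame = requestImage(nf(frameIndex, 4) + ".jpg");
  frameIndex = (frameIndex + 1) % numFrames;
}

void drawBackgroundFrame() {
  if (currentFrame == null) {
    loadNextFrame();
  } else if (currentFrame.width > 0) {  // width stays 0 until the async load finishes, -1 on failure
    image(currentFrame, 0, 0);
    loadNextFrame();
  }
}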

That sounds really complicated because it is a mix of native Android and Processing, but it could work. I haven't worked with the MediaCodec class yet and, to be honest, I don't have the know-how regarding codecs etc. Could you provide a code example that shows how to read a frame from a video file and how to get its pixels?

Edit: I found this code on StackOverflow, but the author calls it very inefficient, as it takes a fifth of a second to execute per bitmap:

import android.content.Context;
import android.graphics.Bitmap;
import android.media.MediaMetadataRetriever;
import android.media.MediaPlayer;
import android.net.Uri;
import android.util.Log;
import androidx.annotation.NonNull; // or android.support.annotation.NonNull on older projects

public static void read(@NonNull final Context iC, @NonNull final String iPath)
{
    long time;
    int fileCount = 0;

    // Create a MediaPlayer only to query the duration, then release it
    MediaPlayer mp = MediaPlayer.create(iC, Uri.parse(iPath));
    time = mp.getDuration() * 1000L; // milliseconds -> microseconds (1000L avoids int overflow)
    mp.release();

    Log.e("TAG", String.format("TIME :: %s", time));

    MediaMetadataRetriever mRetriever = new MediaMetadataRetriever();
    mRetriever.setDataSource(iPath);

    long a = System.nanoTime();

    // frame rate 10.03/sec, so 1/10.03 = 99700 microseconds per frame
    for (int i = 99700; i <= time; i = i + 99700)
    {
        Bitmap b = mRetriever.getFrameAtTime(i, MediaMetadataRetriever.OPTION_CLOSEST_SYNC);

        if (b == null)
        {
            Log.e("TAG", String.format("BITMAP STATE :: %s", "null"));
        }
        else
        {
            fileCount++;
        }

        long curTime = System.nanoTime();
        Log.e("TAG", String.format("EXECUTION TIME :: %s", curTime - a));
        a = curTime;
    }

    Log.e("TAG", String.format("COUNT :: %s", fileCount));
    mRetriever.release();
}

Edit 2: On Codota I found Bitmap.getPixels(), but I am unsure what it returns.

Yes, I saw that. He is talking about 5 frames per second, which is still better than the one I found (2 f/s).
@akenaton has been my guru since I started with P4A, so I guess we will rely on him.
Bitmap.getPixels() fills an int[] with packed ARGB values, the same format you work with after loadPixels().

Okay. I hope @akenaton has an idea for performance improvements; if not, I will have to live with the terrible performance of 5 fps or less :frowning_face: or replace the video animation with an object-oriented Processing animation that won't look nearly as good as the current background.

Have you read this one?

Yes, that is the thread I found.

There it also refers to this site: https://bigflake.com/mediacodec/#ExtractMpegFramesTest
It is really user-unfriendly, but it states that it can run at 30 fps, which would be perfectly fine, so I will have to try to understand this method.

This is an example I found on the site; it grabs the first 10 frames of a video and exports them to PNG, which I would have to change to read the pixels instead:

https://bigflake.com/mediacodec/ExtractMpegFramesTest.java.txt

Also, it depends on some other libraries, which is really suboptimal…

That’s the one I read yesterday. But you didn’t complete the sentence: “but the additional steps required to save it to disk as a PNG are expensive (about half a second).” That’s the 2 f/s.

But I’m not saving the images as PNGs, so I would not lose the time needed for that. Or do I understand something incorrectly?