I am using the deep-vision-processing library for Processing.

I want to play one video when a face is detected on the live webcam feed and play another video when no face is detected.

I want to use this code from the library's Examples, but I am unable to understand the syntax well enough to modify it.

```java
import ch.bildspur.vision.*;
import ch.bildspur.vision.result.*;
import processing.video.Capture;

Capture cam;

DeepVision vision;
CascadeClassifierNetwork network;
ResultList<ObjectDetectionResult> detections;

public void setup() {
  size(640, 480, FX2D);
  colorMode(HSB, 360, 100, 100);

  println("creating network...");
  vision = new DeepVision(this);
  network = vision.createCascadeFrontalFace();

  println("loading model...");
  network.setup();

  println("setup camera...");
  String[] cams = Capture.list();
  cam = new Capture(this, cams[0]);
  cam.start();
}

public void draw() {
  background(55);

  if (cam.available()) {
    cam.read();
  }

  image(cam, 0, 0);
  detections = network.run(cam);

  noFill();
  strokeWeight(2f);

  // draw a rectangle around every detected face
  stroke(200, 80, 100);
  for (ObjectDetectionResult detection : detections) {
    rect(detection.getX(), detection.getY(), detection.getWidth(), detection.getHeight());
  }

  surface.setTitle("Face Recognition Test - FPS: " + Math.round(frameRate));
}
```

In this code the detected faces are only handled inside the for loop. I want to do something like this:

```java
if (face is detected) {
  video1.play();
} else {
  video2.play();
}
```

If I play one of the videos inside the for loop, I cannot define the moment when no face is detected, because the loop body never runs when the result list is empty.
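
Roughly, this is the kind of `draw()` I imagine. The file names `face.mp4` and `idle.mp4` are just placeholders, and I am only guessing that checking whether `detections` is empty after `network.run(cam)` is the right way to know if a face was found:

```java
import processing.video.*;
import ch.bildspur.vision.*;
import ch.bildspur.vision.result.*;

Capture cam;
DeepVision vision;
CascadeClassifierNetwork network;
ResultList<ObjectDetectionResult> detections;

Movie faceVideo;   // played while a face is visible
Movie idleVideo;   // played while no face is visible

public void setup() {
  size(640, 480, FX2D);

  vision = new DeepVision(this);
  network = vision.createCascadeFrontalFace();
  network.setup();

  cam = new Capture(this, Capture.list()[0]);
  cam.start();

  // placeholder file names in the sketch's data folder
  faceVideo = new Movie(this, "face.mp4");
  idleVideo = new Movie(this, "idle.mp4");
}

public void draw() {
  background(55);

  if (cam.available()) {
    cam.read();
  }

  detections = network.run(cam);

  // my assumption: an empty result list means no face in this frame
  if (!detections.isEmpty()) {
    // at least one face in the current camera frame
    idleVideo.pause();
    faceVideo.loop();
    image(faceVideo, 0, 0);
  } else {
    // no face in the current camera frame
    faceVideo.pause();
    idleVideo.loop();
    image(idleVideo, 0, 0);
  }
}

// keep the movie frames updated
void movieEvent(Movie m) {
  m.read();
}
```

Is this the right approach, or is there a better way to react to the "no face" case with this library?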