Download videos from websites, for example "animefrenzy"

Hey, I want to write a program with which you can download and watch the series Hunter x Hunter. I just need help with the download part, as I simply can't find a solution. I know I need "more" Java for this problem, which is why I started programming it in Eclipse. Can someone tell me how I can download videos from there? An example code would be very helpful, thank you.

Here is the code I have:

public class Main {
	public static void main(String[] args) {
		String link = "https://gogo-cdn.com/download.php?url=aHR0cHM6LyAdeqwrwedffryretgsdFrsftrsvfsfsr9jZG4xMCURASDGHUSRFSJGYfdsffsderFStewthsfSFtrftesdf5jbG91ZDl4eC5jb20vdXNlcjEzNDIvZGM5MWZhY2E0ZDJkODI4ZWI4ZjRmMDQ5ZWNhODhkOWQvRVAuNjMuMzYwcC5tcDQ/dG9rZW49UDQ4OVVLWVBEM2dzdEhScnp6ckhJdyZleHBpcmVzPTE2MjU3MDI0OTEmaWQ9MTExOTA5";
		
		String path = "/Users/Flolo/Downloads/";
		
		String fileName= "test.mp4";
		Downloader.download(link, path, fileName);
	}
}

// Downloader (separate file)

import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLConnection;

class Downloader {
	public static void download(String fAddress, String destinationDir, String localFileName) {
		BufferedOutputStream outStream = null;
		InputStream is = null;
		try {
			URL url = new URL(getFinalLocation(fAddress));
			// Join the path with File so the right separator is used on every
			// platform (the original hardcoded "\\", which breaks on macOS/Linux).
			outStream = new BufferedOutputStream(
					new FileOutputStream(new File(destinationDir, localFileName)));

			URLConnection conn = url.openConnection();
			is = conn.getInputStream();
			byte[] buf = new byte[2048];
			int byteRead;
			while ((byteRead = is.read(buf)) != -1) {
				outStream.write(buf, 0, byteRead);
			}
		} catch (Exception e) {
			e.printStackTrace();
		} finally {
			// If getInputStream() threw (e.g. on the HTTP 403), "is" was never
			// assigned; closing it unconditionally caused the NullPointerException.
			try {
				if (is != null) is.close();
				if (outStream != null) outStream.close();
			} catch (IOException e) {
				e.printStackTrace();
			}
		}
	}

	public static String getFinalLocation(String address) throws IOException {
		// HttpURLConnection will not follow a redirect that switches protocols
		// (e.g. http -> https), so walk the redirect chain manually.
		HttpURLConnection conn = (HttpURLConnection) new URL(address).openConnection();
		int status = conn.getResponseCode();
		if (status == HttpURLConnection.HTTP_MOVED_TEMP || status == HttpURLConnection.HTTP_MOVED_PERM
				|| status == HttpURLConnection.HTTP_SEE_OTHER) {
			return getFinalLocation(conn.getHeaderField("Location"));
		}
		return address;
	}
}

The problem is that I always get an error:

java.io.IOException: Server returned HTTP response code: 403 for URL: https://cdn10.cloud9xx.com/user1342/dc91faca4d2d828eb8f4f049eca88d9d/EP.63.360p.mp4?token=P489UKYPD3gstHRrzzrHIw&expires=1625702491&id=111909
	at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1932)
	at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1528)
	at java.base/sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:224)
	at JavaYoutubeDownloader.download(JavaYoutubeDownloader.java:22)
	at Main.main(Main.java:9)
Exception in thread "main" java.lang.NullPointerException: Cannot invoke "java.io.InputStream.close()" because "is" is null
	at JavaYoutubeDownloader.download(JavaYoutubeDownloader.java:33)
	at Main.main(Main.java:9)

Here are the links I found:

https://www.google.com/search?client=firefox-b-d&q=error+403+

Your code works, but you likely do not have permission from the server to access the file.
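
If you want to see what the server is actually objecting to, you can print the status code and the error body before giving up. A quick diagnostic sketch (the URL is just a placeholder for your download link):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class CheckResponse {
	public static void main(String[] args) throws Exception {
		HttpURLConnection conn = (HttpURLConnection)
				new URL("https://example.com/file.mp4").openConnection(); // placeholder URL
		System.out.println("Status: " + conn.getResponseCode());
		// On 4xx/5xx responses the body is on the error stream, not the input stream.
		if (conn.getErrorStream() != null) {
			try (BufferedReader rd = new BufferedReader(new InputStreamReader(conn.getErrorStream()))) {
				String line;
				while ((line = rd.readLine()) != null) System.out.println(line);
			}
		}
	}
}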

OK, but when I try to download the video with my browser, it works. So is there a way to tell the website that the program is a normal browser user too?

Not sure, this is out of my range of understanding. Perhaps simulating browser access from Java:
https://www.google.com/search?client=firefox-b-d&q=simulate+browser+java

but even then I'm not sure how you would link the two. There must be a solution, as people have clearly made YouTube apps before to download files. It is also important to note that YouTube makes it particularly tricky to download files, since that is its main bread and butter, and it always recommends the Premium service, which allows this feature. So it might be that an API is required.
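
One common way to make a Java program look like a normal browser is to set a User-Agent header on the connection before reading from it. A minimal sketch, assuming the server only checks request headers (the URL and header values here are placeholders; servers that check cookies, referrers, or session tokens will still refuse):

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class BrowserLikeDownload {
	public static void main(String[] args) throws Exception {
		URL url = new URL("https://example.com/video.mp4"); // placeholder URL
		HttpURLConnection conn = (HttpURLConnection) url.openConnection();
		// Many CDNs reject the default "Java/..." user agent outright.
		conn.setRequestProperty("User-Agent",
				"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36");
		// Some servers also check which page the request came from.
		conn.setRequestProperty("Referer", "https://example.com/");
		try (InputStream in = conn.getInputStream()) {
			Files.copy(in, Paths.get("test.mp4"), StandardCopyOption.REPLACE_EXISTING);
		}
	}
}

If this still returns 403, the token in your link has probably expired or is tied to the browser session that generated it.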

You could create a website sweeper if the website has a download feature; otherwise you could create a program that plays all of the scenes and uses Robot (Java's screen/input automation class) to capture them and save them into a file. I don't have any experience with this and it is probably a wasteful solution, but hey : P

EDIT:
I did a quick search and found a decent website. It is called thewatchcaroononline . tv and doesn't have intrusive ads. Once the video is playing, you can use Ctrl + S to download it, and if you set a default folder in Chrome it will just silently download. This would be best for a website sweeper.

EDIT 2:
I didn't post this link to support piracy; it is purely for academic purposes, if you know what I mean.

Hey, I want to ask: what is a website sweeper? :sweat_smile:
Because the website has a download link.

A website sweeper is a program that sweeps the website : P

There are two ways a website sweeper can gather information/content from a website:

1: using the website's API - it opens things like the Google Maps API and autonomously gathers data based on the starting requirements. You can use the gathered data in a game or anywhere else.

2: using screen capture and simulated user actions - used for smaller websites that do not have an API. It is a simple program that simulates user actions. Fake user actions can be hardcoded, or calculated by the program searching for a specific thing and pressing the correct key/mouse button on the found element (like searching for text that matches "DOWNLOAD" or something similar).

If you have a website, you can just hardcode the program to constantly click in a certain pattern, e.g. click to play the video, right-click the mouse, press "download video", press the "NEXT EPISODE" button. A sketch of the link-search idea follows below.
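
To illustrate the "search for a specific thing" idea on the HTML side rather than on the screen, here is a minimal sketch. The URL is a placeholder, and the regex assumes the download control is a plain <a> link whose text contains "DOWNLOAD"; real pages are often messier:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FindDownloadLink {
	public static void main(String[] args) throws Exception {
		HttpURLConnection conn = (HttpURLConnection)
				new URL("https://example.com/episode-page").openConnection(); // placeholder URL
		conn.setRequestProperty("User-Agent", "Mozilla/5.0"); // look like a browser
		StringBuilder html = new StringBuilder();
		try (BufferedReader rd = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
			String line;
			while ((line = rd.readLine()) != null) html.append(line);
		}
		// Find an <a href="..."> whose link text contains "download" (case-insensitive).
		Pattern p = Pattern.compile("<a[^>]+href=\"([^\"]+)\"[^>]*>[^<]*download[^<]*</a>",
				Pattern.CASE_INSENSITIVE);
		Matcher m = p.matcher(html);
		if (m.find()) System.out.println("Found download link: " + m.group(1));
	}
}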

It is recommended to use a browser extension to remove all the unnecessary elements from the website, so there are fewer variables.

I hope this helps : P
Have a nice day.

EDIT:
Alternatively, you can use it in combination with a video-downloading service. The sweeper would automatically input the URL and the desired options, the service would do the work and produce the output, and the sweeper would repeat this action until you stop the program.

OK, thanks!
And is there a way to simulate user interaction in Java (in the background), so the user doesn't have to open the browser?

I am not sure, but you can easily simulate user actions through the Robot class (java.awt.Robot). Check out the documentation.
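
For reference, a minimal Robot sketch of the click-then-save pattern described above (the coordinates and the Ctrl + S shortcut are placeholders you would tune to the actual page and browser):

import java.awt.Robot;
import java.awt.event.InputEvent;
import java.awt.event.KeyEvent;

public class SweepStep {
	public static void main(String[] args) throws Exception {
		Robot robot = new Robot();
		// Click where the video player sits (placeholder coordinates).
		robot.mouseMove(640, 360);
		robot.mousePress(InputEvent.BUTTON1_DOWN_MASK);
		robot.mouseRelease(InputEvent.BUTTON1_DOWN_MASK);
		robot.delay(1000); // give the player a moment to react
		// Press Ctrl + S to open the browser's save dialog.
		robot.keyPress(KeyEvent.VK_CONTROL);
		robot.keyPress(KeyEvent.VK_S);
		robot.keyRelease(KeyEvent.VK_S);
		robot.keyRelease(KeyEvent.VK_CONTROL);
	}
}

Note that Robot drives the real mouse and keyboard, so it cannot run invisibly in the background; for that you would need direct HTTP requests instead.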

Also, not sure if this is of any help, but this sketch will browse through all the links in a webpage and return them. Perhaps there's a possibility of searching through the links for a download location or video file and retrieving a video this way. This is of course just a suggestion and not something I've tested.

class Html{
  
  String html = "", url;
  ArrayList<String> lines = new ArrayList<String>();  // page source, line by line
  ArrayList<String> links = new ArrayList<String>();  // every "http..." substring found
  boolean stop = false, textScanned;
  
  int counter;
  
  Html(String Url){
    url = Url;
    // Fetch the page once; getHTML() sets the stop flag so later calls are no-ops.
    getHTML(url);
  }
  
  void getString(){
    // Returns the cached page source; the stop flag prevents a refetch.
    String page = getHTML(url);
    //println(page);
  }

    String getHTML(String url) {
      
      String line;
      if(!stop){
        try {
          HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
          conn.setRequestMethod("GET");
          setRequestHeaders(conn);
          BufferedReader rd = new BufferedReader(new InputStreamReader(conn.getInputStream()));
          while ((line = rd.readLine()) != null) {
            lines.add(line);
            html += line;
          }
          rd.close();
          stop = true;
        }
        catch (Exception e) {
          e.printStackTrace();
        }
      }
      
      return html;
    }

    // Send browser-like headers so the server does not reject Java's default
    // user agent (the same trick discussed above for the 403 error).
    void setRequestHeaders(HttpURLConnection conn)
    {
      String ua = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.56 Safari/536.5";
      conn.setRequestProperty("User-Agent", ua);
      conn.setRequestProperty("Accept-Language", "en-US,en;q=0.8");
      conn.setRequestProperty("Accept-Charset", "ISO-8859-1,utf-8;q=0.7,*;q=0.3");
      conn.setRequestProperty("Connection", "keep-alive");
      conn.setRequestProperty("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8");
    }
    
    void readString(){
      // Draw the raw page source to the sketch window, one line per row.
      fill(255);
      for(int i=0;i<lines.size();i++){
        String s = lines.get(i);
        text(s,40,40+10*i);
        //if(i<50)println("readstring", s);
      }
    }
    
    void getLinks(){
      counter = 0;
      if(!textScanned){
        // Scan the page source character by character for "http" and collect
        // everything from there up to the end of the URL.
        for(int i=0;i<html.length() - 4;i++){
          if(html.charAt(i)=='h'&&html.charAt(i+1)=='t'&&html.charAt(i+2)=='t'&&html.charAt(i+3)=='p'){
            counter = i;
            findEnd(i);
          }
          if(i==html.length() - 5) textScanned = true;
        }
      }
      //if(textScanned)println(links.size());
    }
    
    void findEnd(int i){
      // A URL ends at a closing quote, a space, or the end of the tag (the
      // original only checked for '>', which swallowed trailing attributes).
      for(int j=i+4;j<html.length();j++){
        char end = html.charAt(j);
        if(end=='"'||end=='\''||end==' '||end=='>'){
          links.add(html.substring(counter,j));
          break;
        }
      }
    }
    
    void displayLinks(){
      // Draw every collected link, one per row.
      for(int i=0;i<links.size();i++){
        String s = links.get(i);
        fill(255);
        text(s,10,50 + 10 * i);
        //println("display links",s);
      }
    }
  
}
import java.io.*;
import java.net.*;

Html webpage;

void setup(){
  size(700,200);
  webpage = new Html("https://www.youtube.com/watch?v=L3oOldViIgY");
  webpage.getString();
  webpage.readString();
  webpage.getLinks();
  background(50);
  webpage.displayLinks();
}

void draw(){
}

You also might want to take a look at this, as perhaps it could be reverse engineered to work with (I'm assuming) desktop Processing.
