Downloading from uploaded.to/.net with premium credentials through the command line is possible using standard tools such as wget or curl. However, there is no official API, and the exact method required depends on the mechanism implemented by the uploaded.to/.net website. Finding these implementation details requires a small amount of reverse engineering.
Here I share a small shell script that should work on all POSIX-compliant platforms (e.g. Mac or Linux). The method is based on the current behavior of the uploaded.to website. There are no special tools involved, just wget, grep, sed, and mktemp.
(The solutions I found on the web either no longer worked or were suspiciously wrong.)
Usage
Copy the script content below, define username and password, and save the script as, for instance, download.sh. Then invoke the script like so:
$ /bin/sh download.sh urls.txt
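Alternatively, make the script executable once and invoke it directly:

$ chmod +x download.sh
$ ./download.sh urls.txt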
The file urls.txt should contain one uploaded.to/.net URL per line, as in this example:
http://uploaded.net/file/98389123/foo.rar
http://uploaded.net/file/bxmsdkfm/bar.rar
http://uploaded.net/file/72asjh98/not.zip
Method
This paragraph is just for the curious. The script first POSTs your credentials to http://uploaded.net/io/login and stores the resulting authentication cookie in a file. This authentication cookie is then used for retrieving the web page corresponding to an uploaded.to file. That page contains a temporarily valid download URL for the file. Using grep and sed, the HTML code is filtered for this URL. The payload data transfer is triggered by firing a POST request with an empty body against this URL (no cookie needed). Files are downloaded to the current working directory. All intermediate data is stored in a temporary directory, which is automatically deleted upon script exit (no data is leaked, unless the script is terminated with SIGKILL).
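To make the extraction step concrete, here is a minimal sketch. The form line is hypothetical (the exact markup on the file page may differ), but it illustrates what the grep/sed pipeline in the script below matches:

# Hypothetical line from the file page HTML (exact markup may differ):
LINE='<form method="post" action="http://stor123.uploaded.net/dl/abc-def" id="download_form">'
# The script isolates such a line via `grep post | grep action | grep uploaded`,
# then captures everything between `action="` and the closing quote
# followed by a space:
echo "$LINE" | sed 's/.*action="\(.\+\)" .*/\1/'
# Output: http://stor123.uploaded.net/dl/abc-def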
The script
#!/bin/sh
# Copyright 2015 Jan-Philip Gehrcke, http://gehrcke.de
# See http://gehrcke.de/2015/03/uploaded-to-download-with-wget/

USERNAME="user"
PASSWORD="password"

if [ "$#" -ne 1 ]; then
    echo "Missing argument: URLs file (containing one URL per line)." >&2
    exit 1
fi

URLSFILE="${1}"
if [ ! -r "${URLSFILE}" ]; then
    echo "Cannot read URLs file ${URLSFILE}. Exit." >&2
    exit 1
fi
if [ ! -s "${URLSFILE}" ]; then
    echo "URLs file is empty. Exit." >&2
    exit 1
fi

TMPDIR="$(mktemp -d)"
# Install trap that removes the temporary directory recursively
# upon exit (except for when this program receives SIGKILL).
trap 'rm -rf "$TMPDIR"' EXIT

LOGINRESPFILE="${TMPDIR}/login.response"
LOGINOUTPUTFILE="${TMPDIR}/login.outerr"
COOKIESFILE="${TMPDIR}/login.cookies"
LOGINURL="http://uploaded.net/io/login"

echo "Temporary directory: ${TMPDIR}"
echo "Log in via POST request to ${LOGINURL}, save cookies."
wget --save-cookies=${COOKIESFILE} --server-response \
    --output-document ${LOGINRESPFILE} \
    --post-data="id=${USERNAME}&pw=${PASSWORD}" \
    ${LOGINURL} > ${LOGINOUTPUTFILE} 2>&1

# Status code is 200 even if login failed.
# Uploaded sends a '{"err":"User and password do not match!"}'-like response
# body in case of error.
echo "Verify that login response is empty."
# Response is more than 0 bytes in case of login error.
if [ -s "${LOGINRESPFILE}" ]; then
    echo "Login response larger than 0 bytes. Print response and exit." >&2
    cat "${LOGINRESPFILE}"
    exit 1
fi

# Zero response size does not necessarily imply successful login.
# Wget adds three commented lines to the cookies file by default, so
# set cookies should result in more than three lines in this file.
COOKIESFILELINES="$(cat ${COOKIESFILE} | wc -l)"
echo "${COOKIESFILELINES} lines in cookies file found."
if [ "${COOKIESFILELINES}" -lt "4" ]; then
    echo "Expected >3 lines in cookies file. Exit." >&2
    exit 1
fi

echo "Process URLs."
# Assume that login worked. Iterate through URLs.
while read CURRENTURL; do
    if [ "x$CURRENTURL" = "x" ]; then
        # Skip empty lines.
        continue
    fi
    echo -e "\n\n"
    TMPFILE="$(mktemp --tmpdir=${TMPDIR} response.html.XXXX)"
    echo "GET ${CURRENTURL} (use auth cookie), store response."
    wget --no-verbose --load-cookies=${COOKIESFILE} \
        --output-document ${TMPFILE} ${CURRENTURL}
    if [ ! -s "${TMPFILE}" ]; then
        echo "No HTML response: ${TMPFILE} is zero size. Skip processing."
        continue
    fi
    # Extract (temporarily valid) download URL from HTML.
    LINEOFINTEREST="$(grep post ${TMPFILE} | grep action | grep uploaded)"
    # Match the entire line, including the space after action="bla";
    # replace the entire line with the first group, which is bla.
    DLURL=$(echo $LINEOFINTEREST | sed 's/.*action="\(.\+\)" .*/\1/')
    echo "Extracted download URL: ${DLURL}"
    # This file contains account details, so delete it as soon as it is
    # not required anymore.
    rm -f "${TMPFILE}"
    echo "POST to URL w/o data. Response is file. Get filename from header."
    # --content-disposition should extract the proper filename.
    wget --content-disposition --post-data='' "${DLURL}"
done < "${URLSFILE}"
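For reference, the login step can presumably also be performed with curl instead of wget. This is just a sketch, under the assumption that the endpoint and the form field names id and pw behave as described above:

# Hedged sketch: curl equivalent of the login step (assumes the same
# endpoint and form fields id/pw as in the script above).
curl --silent --show-error \
    --cookie-jar cookies.txt \
    --data "id=${USERNAME}&pw=${PASSWORD}" \
    http://uploaded.net/io/login
# Subsequent requests can replay the stored cookies:
#   curl --cookie cookies.txt <file-page-URL>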