Uploaded.to download with wget

Downloading from uploaded.to/.net with premium credentials through the command line is possible using standard tools such as wget or curl. However, there is no official API, and the exact method required depends on the mechanism implemented by the uploaded.to/.net website. Finding these implementation details requires a small amount of reverse engineering.

Here I share a small shell script that should work on all POSIX-compliant platforms (e.g. Mac or Linux). The method is based on the current behavior of the uploaded.to website. There are no special tools involved, just wget, grep, sed, and mktemp.

(The solutions I found on the web did not work (anymore) and/or were suspiciously wrong.)


Copy the script content below, define your username and password in it, and save the script as, for instance, download.sh. Then invoke the script like so:

$ /bin/sh download.sh urls.txt

The file urls.txt should contain one uploaded.to/.net URL per line, as in this example:

http://uploaded.net/file/m212whum
http://uploaded.net/file/t7hgb26j

This paragraph is just for the curious ones. The script first POSTs your credentials to http://uploaded.net/io/login and stores the resulting authentication cookie in a file. This authentication cookie is then used for retrieving the web page corresponding to an uploaded.to file. That page contains a temporarily valid download URL for the file. Using grep and sed, the HTML code is filtered for this URL. The payload data transfer is triggered by firing a POST request with an empty body against this URL (the cookie is not needed here). Files are downloaded to the current working directory. All intermediate data is stored in a temporary directory, which is automatically deleted upon script exit (no data is leaked, unless the script is terminated with SIGKILL).
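Condensed to its core (and with all error handling and cleanup left out), the flow for a single file boils down to something like the following sketch. YOUR_USER and YOUR_PASSWORD are placeholders, and the file URL is just an example; the full script below is the authoritative version.

# 1) Log in, store the authentication cookie.
wget --save-cookies=cookies.txt --output-document=login-response \
    --post-data="id=YOUR_USER&pw=YOUR_PASSWORD" http://uploaded.net/io/login
# 2) Retrieve the file's page, presenting the cookie.
wget --load-cookies=cookies.txt --output-document=page.html \
    http://uploaded.net/file/m212whum
# 3) Extract the temporarily valid download URL and POST an empty body to it.
DLURL="$(grep post page.html | grep action | grep uploaded | sed 's/.*action="\(.\+\)" .*/\1/')"
wget --content-disposition --post-data='' "${DLURL}"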

The script

# Copyright 2015 Jan-Philip Gehrcke, http://gehrcke.de
# See http://gehrcke.de/2015/03/uploaded-to-download-with-wget/
if [ "$#" -ne 1 ]; then
    echo "Missing argument: URLs file (containing one URL per line)." >&2
    exit 1
if [ ! -r "${URLSFILE}" ]; then
    echo "Cannot read URLs file ${URLSFILE}. Exit." >&2
    exit 1
if [ ! -s "${URLSFILE}" ]; then
    echo "URLs file is empty. Exit." >&2
    exit 1
TMPDIR="$(mktemp -d)"
# Install trap that removes the temporary directory recursively
# upon exit (except for when this program retrieves SIGKILL).
trap 'rm -rf "$TMPDIR"' EXIT
echo "Temporary directory: ${TMPDIR}"
echo "Log in via POST request to ${LOGINURL}, save cookies."
wget --save-cookies=${COOKIESFILE} --server-response \
    --output-document ${LOGINRESPFILE} \
    --post-data="id=${USERNAME}&pw=${PASSWORD}" \
# Status code is 200 even if login failed.
# Uploaded sends a '{"err":"User and password do not match!"}'-like response
# body in case of error.
echo "Verify that login response is empty."
# Response is more than 0 bytes in case of login error.
if [ -s "${LOGINRESPFILE}" ]; then
    echo "Login response larger than 0 bytes. Print response and exit." >&2
    cat "${LOGINRESPFILE}"
    exit 1
# Zero response size does not necessarily imply successful login.
# Wget adds three commented lines to the cookies file by default, so
# set cookies should result in more than three lines in this file.
echo "${COOKIESFILELINES} lines in cookies file found."
if [ "${COOKIESFILELINES}" -lt "4" ]; then
    echo "Expected >3 lines in cookies file. Exit.". >&2
    exit 1
echo "Process URLs."
# Assume that login worked. Iterate through URLs.
while read CURRENTURL; do
    if [ "x$CURRENTURL" = "x" ]; then
        # Skip empty lines.
    echo -e "\n\n"
    TMPFILE="$(mktemp --tmpdir=${TMPDIR} response.html.XXXX)"
    echo "GET ${CURRENTURL} (use auth cookie), store response."
    wget --no-verbose --load-cookies=${COOKIESFILE} \
        --output-document ${TMPFILE} ${CURRENTURL}
    if [ ! -s "${TMPFILE}" ]; then
        echo "No HTML response: ${TMPFILE} is zero size. Skip processing."
    # Extract (temporarily valid) download URL from HTML.
    LINEOFINTEREST="$(grep post ${TMPFILE} | grep action | grep uploaded)"
    # Match entire line, include space after action="bla" , replace
    # entire line with first group, which is bla.
    DLURL=$(echo $LINEOFINTEREST | sed 's/.*action="\(.\+\)" .*/\1/')
    echo "Extracted download URL: ${DLURL}"
    # This file contains account details, so delete as soon as not required
    # anymore.
    rm -f "${TMPFILE}"
    echo "POST to URL w/o data. Response is file. Get filename from header."
    # --content-disposition should extract the proper filename.
    wget --content-disposition --post-data='' "${DLURL}"
done < "${URLSFILE}"


  • Taher

    Thanks. It worked for me.

  • Daniel


    It worked fine some days ago; now (without changing anything) I get an error:

    It downloads the file (without progress percentages!?), then deletes the temporary file, and then:

    Extracted download URL:
    POST to URL w/o data. Response is file. Get filename from header.
    http://: Invalid Hostname.

    • Can you please try again, from scratch? If it still does not work, I need more debugging information, such as the file you try to download. You can mail me via jgehrcke@googlemail.com, if you do not want to disclose that information here.

      • Fabian

        I had the same error.

        I changed the line where LINEOFINTEREST is set to:
        LINEOFINTEREST="$(grep -a post ${TMPFILE} | grep -a action | grep -a uploaded)"

        thanks for the script.

        • Ederson

          I’m getting the same error.

          I changed the line you mentioned, but I’m still getting the same error.

          The script downloads the file successfully to the tmp dir, but then it deletes the whole dir and exits with the message:

          Extracted download URL:
          POST to URL w/o data. Response is file. Get filename from header.
          http://: Invalid Hostname.

          It happens with every URL I've tried. An example is http://uploaded.net/file/m212whum

          Please let me know if I could help with any other info/file.


          • It would have been helpful to see the entire script output. You can prevent deletion of the temp directory by commenting out the trap installed in the top section of the script. Then you can go into the dir, look into the files, and debug things. It would be helpful if you did that. You can also mail me details. Your download URL works for me with the script posted above, using bash 4.3.30, wget 1.16, sed 4.2.2, grep 2.20.

  • Jordi

    Thank you very much.

    This worked like a charm for me.

  • Dima Medvedev


    I get:
    root@DD-WRT:/tmp/mnt/sda1# sh download.sh urls.txt
    : not found: line 4:
    : not found: line 5:
    : not found: line 8:
    : not found: line 9:
    download.sh: line 98: syntax error: unexpected “done” (expecting “then”)

    This happens when running the script on the latest build of DD-WRT v3.0-r27858 std (09/28/15) on a Netgear R700 with Entware installed (http://www.dd-wrt.com/phpBB2/viewtopic.php?p=986106).

    What could be the reason?

  • NiNJA-Sp4rK

    Works perfectly for me! Thank you!!!

  • Pingback: Download Script uploaded.net – polz.in

  • David Calligaris

    Me too, I had an error:

    GET http://uploaded.net/file/8fq3mebg/from/fd9zx9 (use auth cookie), store response.
    2016-03-02 00:34:36 URL:http://am4-r1f9-stor03.uploaded.net/dl/c77345a1-83f2-4e04-823e-de45bb337202 [208666624/208666624] -> “/tmp/tmp.n28qD8C0EG/response.html.lWSP” [1]
    Extracted download URL:
    POST to URL w/o data. Response is file. Get filename from header.
    http://: Invalid host name.

  • Luis

    Hi, I get the following error:

    GET http://uploaded.net/file/t7hgb26j (use auth cookie), store response.

    2016-04-17 16:24:18 URL:http://am4-r1f7-stor08.uploaded.net/dl/d61b588c-dce6-4287-9a1e-7e20ebd549fb [786814924/786814924] -> “/tmp/tmp.AwCOmHNbkP/response.html.eP54” [1]

    Extracted download URL:

    POST to URL w/o data. Response is file. Get filename from header.

    http://: Invalid host name.