Uploaded.to download with wget

Downloading from uploaded.to/.net with premium credentials through the command line is possible using standard tools such as wget or curl. However, there is no official API, and the exact method required depends on the mechanism implemented by the uploaded.to/.net website. Finding these implementation details requires a small amount of reverse engineering.

Here I share a small shell script that should work on all POSIX-compliant platforms (e.g. Mac or Linux). The method is based on the current behavior of the uploaded.to website. There are no special tools involved: just wget, grep, sed, and mktemp.

(The solutions I found on the web no longer worked and/or were suspiciously wrong.)


Copy the script content below, define your username and password at the top, and save the script as, for instance, download.sh. Then invoke the script like so:

$ /bin/sh download.sh urls.txt

The file urls.txt should contain one uploaded.to/.net URL per line, as in this example:
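A minimal example, with made-up file IDs:

```
http://uploaded.net/file/98389123
http://uploaded.net/file/bxqcz7a1
```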



This paragraph is just for the curious ones. The script first POSTs your credentials to http://uploaded.net/io/login and stores the resulting authentication cookie in a file. This authentication cookie is then used for retrieving the website corresponding to an uploaded.to file. That website contains a temporarily valid download URL corresponding to the file. Using grep and sed, the HTML code is filtered for this URL. The payload data transfer is triggered by firing a POST request with an empty body against this URL (the cookie is not needed for this step). Files are downloaded to the current working directory. All intermediate data is stored in a temporary directory. That directory is automatically deleted upon script exit (no data is leaked, unless the script is terminated with SIGKILL).
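For reference, the same three steps can be sketched with curl instead of wget. This is a hypothetical, untested translation: the login endpoint and the form field names (id, pw) are those described above, while the file URL is a placeholder and the curl commands are shown as comments so nothing is fetched here.

```shell
# Credentials and POST body, as used by the script below (values are
# placeholders):
USERNAME="user"
PASSWORD="secret"
POSTDATA="id=${USERNAME}&pw=${PASSWORD}"

# 1) POST credentials to the login endpoint, store the auth cookie:
#      curl -c cookies.txt -d "${POSTDATA}" http://uploaded.net/io/login
# 2) GET the file page with that cookie, save the HTML for extraction:
#      curl -b cookies.txt -o page.html http://uploaded.net/file/XXXXXXXX
# 3) POST an empty body to the extracted download URL; -O -J honors the
#    server-provided filename (curl's analogue of wget's
#    --content-disposition):
#      curl -O -J -d '' "${DLURL}"

echo "${POSTDATA}"
```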

The script

#!/bin/sh
# Copyright 2015 Jan-Philip Gehrcke, http://gehrcke.de
# See http://gehrcke.de/2015/03/uploaded-to-download-with-wget/

# Your uploaded.to/.net premium credentials.
USERNAME="your-username"
PASSWORD="your-password"

LOGINURL="http://uploaded.net/io/login"

if [ "$#" -ne 1 ]; then
    echo "Missing argument: URLs file (containing one URL per line)." >&2
    exit 1
fi

URLSFILE="${1}"
if [ ! -r "${URLSFILE}" ]; then
    echo "Cannot read URLs file ${URLSFILE}. Exit." >&2
    exit 1
fi
if [ ! -s "${URLSFILE}" ]; then
    echo "URLs file is empty. Exit." >&2
    exit 1
fi

TMPDIR="$(mktemp -d)"
# Install trap that removes the temporary directory recursively
# upon exit (except for when this program retrieves SIGKILL).
trap 'rm -rf "$TMPDIR"' EXIT
echo "Temporary directory: ${TMPDIR}"

COOKIESFILE="${TMPDIR}/cookies"
LOGINRESPFILE="${TMPDIR}/loginresponse"

echo "Log in via POST request to ${LOGINURL}, save cookies."
wget --save-cookies=${COOKIESFILE} --server-response \
    --output-document ${LOGINRESPFILE} \
    --post-data="id=${USERNAME}&pw=${PASSWORD}" \
    ${LOGINURL}

# Status code is 200 even if login failed.
# Uploaded sends a '{"err":"User and password do not match!"}'-like response
# body in case of error.
echo "Verify that login response is empty."
# Response is more than 0 bytes in case of login error.
if [ -s "${LOGINRESPFILE}" ]; then
    echo "Login response larger than 0 bytes. Print response and exit." >&2
    cat "${LOGINRESPFILE}"
    exit 1
fi

# Zero response size does not necessarily imply successful login.
# Wget adds three commented lines to the cookies file by default, so
# set cookies should result in more than three lines in this file.
COOKIESFILELINES="$(wc -l < ${COOKIESFILE})"
echo "${COOKIESFILELINES} lines in cookies file found."
if [ "${COOKIESFILELINES}" -lt "4" ]; then
    echo "Expected >3 lines in cookies file. Exit." >&2
    exit 1
fi

echo "Process URLs."
# Assume that login worked. Iterate through URLs.
while read CURRENTURL; do
    if [ "x$CURRENTURL" = "x" ]; then
        # Skip empty lines.
        continue
    fi
    printf "\n\n\n"
    TMPFILE="$(mktemp --tmpdir=${TMPDIR} response.html.XXXX)"
    echo "GET ${CURRENTURL} (use auth cookie), store response."
    wget --no-verbose --load-cookies=${COOKIESFILE} \
        --output-document ${TMPFILE} ${CURRENTURL}
    if [ ! -s "${TMPFILE}" ]; then
        echo "No HTML response: ${TMPFILE} is zero size. Skip processing."
        continue
    fi
    # Extract (temporarily valid) download URL from HTML.
    LINEOFINTEREST="$(grep post ${TMPFILE} | grep action | grep uploaded)"
    # Match entire line, include space after action="bla", replace
    # entire line with first group, which is bla.
    DLURL=$(echo $LINEOFINTEREST | sed 's/.*action="\(.\+\)" .*/\1/')
    echo "Extracted download URL: ${DLURL}"
    # This file contains account details, so delete as soon as not required
    # anymore.
    rm -f "${TMPFILE}"
    echo "POST to URL w/o data. Response is file. Get filename from header."
    # --content-disposition should extract the proper filename.
    wget --content-disposition --post-data='' "${DLURL}"
done < "${URLSFILE}"
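The grep/sed extraction step can be exercised in isolation. The HTML line below is a made-up stand-in for what the file page is assumed to contain; the real markup may differ. If the page markup changes (or if "direct downloads" is enabled in the profile, as a commenter notes below), the grep pipeline finds nothing, DLURL ends up empty, and wget reports the "http://: Invalid host name." error quoted in several comments.

```shell
# A hypothetical form line, standing in for the real uploaded.net page HTML:
LINE='<form method="post" action="http://stor03.uploaded.net/dl/abc-123" id="download_form">'
# Same sed expression as in the script: capture the action="..." value,
# relying on the space that follows the closing quote.
DLURL="$(printf '%s\n' "$LINE" | sed 's/.*action="\(.\+\)" .*/\1/')"
echo "$DLURL"
```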

Comments

  1. Chaudhari Kaushik


  2. JordanFisher2000

    Can you please update the script for 2021?

  3. sm00nie

    Thanks for the script, works great!

    Just a note in case anyone else runs into it. Initially, I was getting the error that many seem to be getting:

    Extracted download URL:
    POST to URL w/o data. Response is file. Get filename from header.
    http://: Invalid host name.

    The resolution was to disable (uncheck) “direct downloads” from your Uploaded.net/to profile.

  4. Johannes Blattner


    Thank you so much for this neat little script; I love it and use it regularly! One little tweak I have made is to add the option "--continue" to the wget command in the second-to-last line. With this option, wget continues with partially downloaded files (or skips them if they're complete) instead of creating new files with the same content but an appended ".1".

    Cheers Johannes

  5. Luis

    Hi, I get the following error:

    GET http://uploaded.net/file/t7hgb26j (use auth cookie), store response.

    2016-04-17 16:24:18 URL:http://am4-r1f7-stor08.uploaded.net/dl/d61b588c-dce6-4287-9a1e-7e20ebd549fb [786814924/786814924] -> “/tmp/tmp.AwCOmHNbkP/response.html.eP54” [1]

    Extracted download URL:

    POST to URL w/o data. Response is file. Get filename from header.

    http://: Invalid host name.

  6. David Calligaris

    I had an error too:

    GET http://uploaded.net/file/8fq3mebg/from/fd9zx9 (use auth cookie), store response.
    2016-03-02 00:34:36 URL:http://am4-r1f9-stor03.uploaded.net/dl/c77345a1-83f2-4e04-823e-de45bb337202 [208666624/208666624] -> “/tmp/tmp.n28qD8C0EG/response.html.lWSP” [1]
    Extracted download URL:
    POST to URL w/o data. Response is file. Get filename from header.
    http://: Nome dell’host non valido.

  7. […] Jan Philip Gehrcke has published a small bash script that makes downloading many files from uploaded.net very convenient from the terminal. It uses only standard programs that should be available on every Linux and Mac (wget, grep, sed, mktemp). […]

  8. NiNJA-Sp4rK

    Works perfectly for me! Thank you!!!

  9. Dima Medvedev


    I get:
    root@DD-WRT:/tmp/mnt/sda1# sh download.sh urls.txt
    : not found: line 4:
    : not found: line 5:
    : not found: line 8:
    : not found: line 9:
    download.sh: line 98: syntax error: unexpected “done” (expecting “then”)

    Upon running the script on the latest build of DD-WRT v3.0-r27858 std (09/28/15) on Netgear R700 with Entware installed (http://www.dd-wrt.com/phpBB2/viewtopic.php?p=986106).

    What could be the reason?

    1. Dima Medvedev

      I found that I have a limited WGET version for some reason, but a full CURL.

      So I managed to achieve what I needed by following the same steps described in the script, just with curl commands:

      curl --data "id=userID&pw=password" http://uploaded.net/io/login -c uploaded.cookies

      curl -v -b uploaded.cookies -O http://fra-7m17-stor03.uploaded.net/dl/b0dfb380-32d2-428f-a346-a940094af498

      The problem is that I use the FINAL, REDIRECTED version of the download URL; if I use the original one (http://uploaded.net/file/6b7ayf2f), I just get (and download) an HTML file.

      curl's documentation states that it supports only a basic type of redirect:

      1. Janik Heß

        Could you please upload your finished script?

        I’ve got the same probs on that :(

        1. Jan-Philip Gehrcke

          Embedded devices usually ship busybox with severely stripped down versions of the GNU utils. It is to be expected that wget there does not have a complete feature set. I wouldn’t even know if the shell provided by your device is POSIX-compliant or if it provides the features we require here (also cf. http://stackoverflow.com/questions/11376975/is-there-a-minimally-posix-2-compliant-shell for details on the features of minimal shells) — So, this script is built for a canonical Linux system. Writing something for a crippled embedded device environment is a different problem to solve, and it cannot be solved generically.

          It is left to be said that this for sure can be solved. But you have to debug for yourself where exactly things fail, and then work around these issues.

        2. Dima Medvedev

          I still haven't had the time to complete it; I'll upload it here when finished, which should be soon.

      2. Dima Medvedev

        I've managed to install Pyload (you have to have Entware installed), and this eases life a lot; no need for complicated shell scripts with cookie caching in curl.
        After the Pyload installation, you have to manually copy all these plugins:
        to the Pyload plugins dir.
        P.S.: Installing Entware without a solid DD-WRT (and possibly Linux, to a certain extent) background & experience ISN'T a good idea.

  10. Jordi

    Thank you very much.

    This worked like a charm for me.

  11. Daniel


    It worked fine some days ago; now (without changing anything) I get an error:

    It downloads the file (without percentages.. !?) and then deletes the temporary file and then:

    Extracted download URL:
    POST to URL w/o data. Response is file. Get filename from header.
    http://: Invalid Hostname.

    1. Jan-Philip Gehrcke

      Can you please try again, from scratch? If it still does not work, I need more debugging information, such as the file you try to download. You can mail me via jgehrcke@googlemail.com, if you do not want to disclose that information here.

      1. Fabian

        I had the same error.

        I changed the line where LINEOFINTEREST is set to:
        LINEOFINTEREST="$(grep -a post ${TMPFILE} | grep -a action | grep -a uploaded)"

        thanks for the script.

        1. Ederson

          I’m getting the same error.

          I changed the line you mentioned, but I’m still getting the same error.

          The script downloads the file successfully to the tmp dir, but then it deletes the whole dir and exits with the message:

          Extracted download URL:
          POST to URL w/o data. Response is file. Get filename from header.
          http://: Invalid Hostname.

          It happens with every url I’ve tried. An example is http://uploaded.net/file/m212whum

          Please let me know if I could help with any other info/file.


          1. Jan-Philip Gehrcke

            It would have been helpful to see the entire script output. You can prevent deletion of the temp directory by commenting out the trap installed in the top section of the script. Then you can go into the dir, look into the files, and debug things. It would be helpful if you did that. You can also mail me details. Your download URL works for me with the script posted above, using bash 4.3.30, wget 1.16, sed 4.2.2, grep 2.20.

  12. Taher

    Thanks, it worked for me.