So I was wondering what the flags do too, to check whether this is any safer. My curl manual does not say that -f will suppress output of half-downloaded files, only that it will fail on HTTP response codes of 400 or greater... Did you test that it does not emit the part it already got on a network error?
At least with $() that timing attack won't work, because execution only starts once curl completes...
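To make the ordering concrete, here is a minimal sketch (the URL is a hypothetical placeholder, and the local printf stands in for a download) contrasting streaming with command substitution:

```shell
# Streaming: bash starts executing lines as they arrive over the
# network, so a dropped connection can cut a command in half.
#   curl -fsSL https://example.com/install.sh | bash    # hypothetical URL

# Command substitution: the $() must finish (i.e. curl must exit)
# before bash -c sees a single byte, so a mid-transfer failure can
# at worst yield a shorter string, never execution of a half-line
# while the rest is still in flight.
#   bash -c "$(curl -fsSL https://example.com/install.sh)"

# Local stand-in demonstrating the ordering: the full output is
# captured first, then handed to bash in one piece.
script='echo part1; echo part2'
bash -c "$(printf '%s' "$script")"
```

Note this only fixes the timing side: a truncated transfer that curl does not report as an error can still produce a cut-off script either way.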
tgt
What's that? A connection problem? Ah, it's already running the part that it did get... Oops, right on the boundary of rm -rf /thing/that/got/cut/off. I'm angry now. I expected the script maintainer to keep in mind that their script could be cut off at literally any point... (Now what is that set -e the maintainer keeps yapping about?)
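For what it's worth, there is a well-known defensive pattern a maintainer could use against exactly this truncation scenario (this is a generic sketch, not something the script in question necessarily does): put all the work inside a function and call it on the very last line.

```shell
#!/usr/bin/env bash
set -euo pipefail   # set -e: abort on the first failing command

main() {
    # All real work lives inside the function body. If the download
    # is cut off anywhere before the final line, bash hits EOF inside
    # an unterminated function definition, reports a syntax error,
    # and executes nothing at all.
    echo "installing..."
    # rm -rf "/thing/that/got/cut/off"   # now runs fully or never
}

main "$@"
```

A truncated copy of this script parses as an incomplete `main() {` block, so the dangerous half-line can never run on its own.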
Can you really expect maintainers to keep network errors in mind when writing a Bash script? I'll just download your script first, like I would your binary. Opening yourself up to more issues like this is just plain dumb.
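The download-first workflow I mean is roughly this (hypothetical URL; assumes GNU coreutils' sha256sum, and that the checksum was published through some trusted channel):

```shell
# Download to a file instead of piping; read it; verify it; then run it.
#   curl -fsSLo install.sh https://example.com/install.sh   # hypothetical URL
#   less install.sh                       # actually read what you got
#   sha256sum -c install.sh.sha256        # compare against the published hash
#   bash install.sh

# Local stand-in for the same flow:
printf 'echo hello\n' > /tmp/install_demo.sh
sha256sum /tmp/install_demo.sh > /tmp/install_demo.sh.sha256
sha256sum -c /tmp/install_demo.sh.sha256 && bash /tmp/install_demo.sh
```

The point is that the bytes you inspected are exactly the bytes you execute, which a server playing timing games can no longer influence.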
It is absolutely possible for the server serving a bash script to know whether it is being piped into bash, purely from the timing of the downloaded chunks. A server could start serving a different file halfway through if it detects that it is being run directly. This is not a theoretical situation, by the way; it has been done. At least when you download the script first, you know what you'll be running. Same for a source tarball. That's my main gripe with this piping stuff: it assumes you don't even care about security.
At the end, in redirection, <<: that's not how here-documents work. The example gives the impression it will read the given file up until "STOP", but in reality the shell expects you to keep writing your here-doc until you write "STOP", and then feeds all of it to the program as if it had arrived on stdin. I don't think wc even does anything with stdin if you give it a filename...
Note that variable expansion happens in here-docs (unless you quote the delimiter), so it's a bit different from a simple cat.
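A quick sketch of all three points — here-docs feed stdin up to the delimiter, wc ignores stdin when given a filename, and quoting the delimiter disables expansion:

```shell
# The here-doc body becomes stdin; no file named "STOP" is involved.
wc -l <<STOP
one
two
STOP
# counts 2 lines from stdin

# With a filename argument, wc reads the file and ignores stdin entirely:
printf 'a\nb\nc\n' > /tmp/heredoc_demo.txt
wc -l /tmp/heredoc_demo.txt <<STOP
this stdin is ignored
STOP

# Variable expansion happens in here-docs unless you quote the delimiter:
name=world
cat <<EOF
hello $name
EOF
cat <<'EOF'
hello $name
EOF
```

The unquoted form prints "hello world"; the quoted form prints the literal "hello $name".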
Also look into here-strings, and process substitution, which I find quite handy.
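For anyone unfamiliar, a minimal illustration of both bash features (the inputs here are arbitrary examples):

```shell
# Here-string: a one-line here-doc; the string becomes stdin.
wc -c <<< "hello"     # 6 bytes: "hello" plus the newline bash appends

# Process substitution: expose a command's output under a file name,
# handy for tools that insist on file arguments. comm needs sorted
# input files; both inputs here are already sorted.
comm -12 <(printf 'a\nb\n') <(printf 'b\nc\n')   # prints the common line: b
```

Both are bash-isms, so they won't work under a plain POSIX sh.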
See the proof of concept for the pipe detection mentioned elsewhere in the thread: https://github.com/Stijn-K/curlbash_detect . For that to work, curl has to write to stdout before it has all the data. Most reasonable scripts won't be large enough for that, though, and will probably be buffered in full, I guess.
Thanks for the laugh on the package installer, haha.