So I have a CTFd instance that runs and stores its files on Amazon S3 buckets. When I upload a large file (around 600 MB), it uploads to about 10% and then stops and does nothing at all. Small files upload just fine. What could be the issue?
My first guess would be that there’s a timeout on the HTTP request uploading the file; CTFd needs time to push the file up to S3. I would investigate your local setup and connection and see whether there’s a pattern that can be deduced.
Does this same behavior happen with the `aws s3` command?
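To expand on that: a quick way to rule CTFd out is to push a similarly large file straight to the bucket with the AWS CLI. The bucket name and file path below are placeholders; substitute your own.

```shell
# Hypothetical bucket name and file path; substitute your own.
# If this large upload succeeds, S3 and your credentials are fine,
# and the problem is likely in front of CTFd (e.g. a reverse proxy).
aws s3 cp ./large-challenge.zip s3://my-ctfd-bucket/test/large-challenge.zip
```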
I guess I should mention that I have an nginx reverse proxy set up so that connections to the site go through HTTPS. It seems that it might be interfering with the upload process, but I don’t know whether it’s a timeout issue or whether nginx is blocking the upload because the file is large. Any ideas?
I would imagine that nginx is the problem. CTFd uploads are tested and I’ve seen people upload large files just fine.
I would investigate the POST timeout configuration for nginx.
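For reference, the relevant timeout directives would sit in the `location` block that proxies to CTFd. The upstream address and timeout values below are illustrative, not recommendations:

```nginx
location / {
    proxy_pass http://127.0.0.1:8000;  # wherever CTFd is listening (example address)
    # Give slow, large uploads time to complete instead of nginx
    # cutting the connection partway through.
    proxy_read_timeout 300s;
    proxy_send_timeout 300s;
    client_body_timeout 300s;
}
```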
Yup, that did the trick. I increased the timeout to a very high value so it would work properly. I also had to add an option to allow nginx to accept file sizes over 1 MB (the default limit), since it kept rejecting the file for being too large. Thanks for the help!
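For anyone finding this later: the body-size option mentioned here is `client_max_body_size`, which defaults to `1m` in nginx, so larger uploads are rejected with a 413 error. The value below is just an example cap:

```nginx
# Goes in the http, server, or location block of the site config.
# Raise the 1 MB default; 0 disables the size check entirely.
client_max_body_size 1024m;
```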