Timeout during local upload of gcode

Hi,
I have an issue related to my Plugin "DisplayLayerProgress".
A user wants to upload a >50 MB GCode file via browser (OctoPrint 1.13.12), but receives a "504 Gateway Timeout".
[screenshot of the "504 Gateway Timeout" error popup]
The plugin uses the preprocessing hook during upload.
I was able to optimize the preprocessor so that 50 MB can now be handled, but 100+ MB is still not possible.
(https://github.com/OllisGit/OctoPrint-DisplayLayerProgress/issues/101)
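
For context, the hook wiring looks roughly like this (a minimal sketch along the lines of the preprocessor example in the OctoPrint plugin docs; the class and function names here are placeholders and the actual plugin code differs):

    # Sketch of an octoprint.filemanager.preprocessor hook; names are placeholders.
    import octoprint.filemanager
    import octoprint.filemanager.util


    class LayerScanningStream(octoprint.filemanager.util.LineProcessorStream):
        def process_line(self, line):
            # ... run the layer/height expressions against the line here ...
            return line  # pass the line through unchanged


    def gcode_preprocessor(path, file_object, links=None, printer_profile=None,
                           allow_overwrite=True, *args, **kwargs):
        # only touch gcode uploads, hand everything else back untouched
        if not octoprint.filemanager.valid_file_type(path, type="gcode"):
            return file_object
        return octoprint.filemanager.util.StreamWrapper(
            file_object.filename, LayerScanningStream(file_object.stream())
        )


    __plugin_hooks__ = {
        "octoprint.filemanager.preprocessor": gcode_preprocessor
    }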

  • Is there a server configuration option to increase the upload timeout?

  • Do you have any experience with large gcode files (>100 MB)?

Thanks in advance,
Olli

I haven't ever had a file that large, but with some of the larger printers and HD 3D scans around now, I could see that it wouldn't be too difficult to come across one. The largest I've ever printed was a plague mask for Halloween that ended up being 57.5 MB.

The only thing I see related to upload limits is in config.yaml, documented here, but it only seems to be size-based, not timeout-based:

  # Settings for file uploads to OctoPrint, such as maximum allowed file size and
  # header suffixes to use for streaming uploads. OctoPrint does some nifty things internally in
  # order to allow streaming of large file uploads to the application rather than just storing
  # them in memory. For that it needs to do some rewriting of the incoming upload HTTP requests,
  # storing the uploaded file to a temporary location on disk and then sending an internal request
  # to the application containing the original filename and the location of the temporary file.
  uploads:

    # Maximum size of uploaded files in bytes, defaults to 1GB.
    maxSize: 1073741824

How long is the preprocessing taking? Could it be optimized? I struggle to imagine what kind of computational work is being done here that would trigger any kind of timeout between client and server. The fact that it's a gateway timeout sounds like the reverse proxy is simply being unhappy here, in which case you'd need to change the timeout in the proxy and not anything in OctoPrint (which wouldn't make much sense anyhow, since that timeout would be something defined in the client, not the server).
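
Just to illustrate where such a timeout would live: if, say, nginx were sitting in front of OctoPrint, raising it would happen in the proxy configuration, along these lines (a hypothetical vhost excerpt, not taken from anyone's actual setup):

    # Hypothetical nginx vhost excerpt proxying to OctoPrint; proxy_read_timeout
    # defaults to 60s, which would explain a 504 while the upload is processed.
    location / {
        proxy_pass http://127.0.0.1:5000/;
        proxy_read_timeout 600s;    # allow long post-upload processing
        proxy_send_timeout 600s;
        client_max_body_size 0;     # don't cap the upload size at the proxy
    }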

The above screenshot shows a popup created by OctoPrint's JavaScript (see files.js, self._handleUploadFail), so the upload is done via the browser and not from a slicer or other client.

I asked the user if there is a reverse proxy between his browser and the OctoPrint server, but my assumption is that it is a direct connection. Waiting for feedback.

The file upload completes (blue progress bar increasing, with the text "Uploading"); after that the text switches to "Save" (or "Saving") with a color animation. During that phase the preprocessing hook scans every single line with several expressions:

    pattern.match(line)

I already improved the processing performance by about 40-50% by evaluating only comment lines and skipping all other lines. That's the reason why 50 MB files now work for him, but 100 MB still does not.
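
Simplified, the optimization boils down to something like this (a rough sketch; the patterns shown are placeholders, not the plugin's real expressions):

    import re

    # Placeholder patterns standing in for the plugin's real expressions
    # (Cura writes layer info as ";LAYER:n" comments, other slicers differ).
    LAYER_PATTERN = re.compile(r";\s*LAYER:(?P<layer>\d+)")
    LAYER_COUNT_PATTERN = re.compile(r";\s*LAYER_COUNT:(?P<count>\d+)")

    def scan_line(line):
        stripped = line.lstrip()
        # Skip everything that is not a comment line, so the expensive
        # regex matching only runs on the small fraction of ';' lines.
        if not stripped.startswith(";"):
            return None
        for pattern in (LAYER_PATTERN, LAYER_COUNT_PATTERN):
            match = pattern.match(stripped)
            if match:
                return match
        return None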

In my environment I didn't receive a gateway timeout; instead, after 15 minutes a popup appeared with the message "Server is offline. Automatic reload not possible". A manual reload (Ctrl+F5) is not possible either ("spinning wheel of death"), because the file is still being processed.

The preprocessing took roughly 28 minutes (with per-line logging enabled, Raspberry Pi 3B, 50 MB Cura g-code).

Any other ideas? My latest idea is to create a separate thread for the analysis if the file size exceeds a defined threshold.

If preprocessing indeed takes THAT long, then yeah, that's probably the best solution. The thing is, OctoPrint's API runs as a WSGI context in the single-threaded Tornado framework. So if you block the upload procedure for several minutes, you are also blocking the whole web server for several minutes. That will cause all kinds of issues with the web socket and similar.

Long term I'm thinking about switching from Flask-with-WSGI-on-Tornado to just Tornado and thus becoming able to handle such scenarios better, but for now, if things are taking this long, it's best to fire up a separate thread.
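
Something along these lines should do it (a rough sketch only; analyze_file, handle_uploaded_file and the threshold value are made up for illustration, the real plugin would wire this into its existing hook):

    import threading

    SIZE_THRESHOLD = 20 * 1024 * 1024  # arbitrary 20 MB cut-off for this sketch

    def analyze_file(path):
        # placeholder for the actual line-by-line analysis; running it here
        # means it no longer blocks the upload request (and with it Tornado)
        pass

    def handle_uploaded_file(path, size):
        if size > SIZE_THRESHOLD:
            # big file: analyze in the background and report the result later,
            # e.g. via a plugin message to the frontend once it is done
            worker = threading.Thread(target=analyze_file, args=(path,))
            worker.daemon = True
            worker.start()
        else:
            # small file: the quick inline analysis stays as it is today
            analyze_file(path)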