Rendering Timelapse on Pi Zero 2W - Alternatives?

I have several printers being controlled by OctoPrint. I ran into issues running multiple instances on one computer, so I'm using one Pi Zero 2W per printer. Overall, it works nicely, but the biggest issue I have is dealing with rendering timelapse videos. I've considered disabling them, but I find them extremely useful for debugging issues, so I want to keep them active.

The issue is that, on a Pi Zero, it can take a LONG time to render a timelapse, which delays me starting my next print. I've tried to find alternatives and haven't found clear answers about this.

I can think of a few possible solutions for this, but I have no experience working with this kind of thing in coding. I don't mind doing the "how to code in Python" research on an idea, but first I'd like to find out if any of these ideas are workable.

  1. Cancel rendering: Once a print finishes, is there a way to prevent the timelapse from rendering, or to cancel rendering once it has started? That way, if I find a print is good, I can just skip rendering the timelapse.

  2. Capture images but don't render: I can think of several different ways I could handle this if I could capture the stills but not render them. Is there some way to have OctoPrint capture images but not render the video? And if I do that, what about accessing the stills from another system? I take it I could set up a Samba share to do that, but what directory are the stills stored in?

  3. Totally offload the process: I see a few issues with this. First, I would need a way to detect when OctoPrint has started a print. Is there a way to detect that from another computer? Second, if the webcam is on the Pi Zero that OctoPrint is on, will another computer capturing the stills add much of a load to the CPU on the OctoPrint system? And once I detect that a print has started, is there a way to read the name of the print job?

I'm sure there are also other possibilities that I haven't thought of, and I'm open to other methods. The main issue I'm trying to solve is not having to wait for timelapse rendering on a slow Pi Zero before I start a new print.

Currently, I don't think there is. Is it true that you can't start the next print while the timelapse is rendering? I thought it would pause the rendering and start it over, but I could be wrong since I don't use timelapses.

You could use the Event Manager to copy/send the files as they are created, using the CaptureDone event and a command that copies the file based on the payload (see Events - OctoPrint master documentation). You could also symlink the folder where the images are created and not allow deletes, so they can't be removed when timelapse rendering is complete, though that could cause errors. In either case, the path on the OctoPi image is ~/.octoprint/timelapse/tmp.
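For what it's worth, a minimal sketch of that subscription in config.yaml. The share path is just an example, and I'm going from memory that the CaptureDone payload's file field is available as a placeholder, so double-check against the Events docs:

events:
  subscriptions:
    # copy each snapshot to a mounted share right after it is captured
    - event: CaptureDone
      command: cp "{file}" /mnt/nas/timelapse_stills/
      type: system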

Similar to the previous answer, you could use the Event Manager to send commands to other machines/systems to notify them of print start/complete, etc. Not sure how difficult it would be to tie those things together, though.
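For example, something like this could ping a listener on another machine. The hostname, port, and endpoint are all made up, and I'm going from memory that the payload's name field works as a placeholder:

events:
  subscriptions:
    # hypothetical notifications to a listener running elsewhere on the LAN
    - event: PrintStarted
      command: curl -s "http://my-server.local:8080/printer1/started?job={name}"
      type: system
    - event: PrintDone
      command: curl -s "http://my-server.local:8080/printer1/done?job={name}"
      type: system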

@MMcLure might have some ideas too, I know he's done some interesting tweaks to the timelapse rendering process.

Whether you are running OctoLapse or using the built-in timelapse function may affect where the stills are stored (I think). Both delete the stills after conversion. You could always make a button that runs "killall ffmpeg" to abort the conversion, but I don't know if that will leave those stills lying around.

Good point. You could kill it using the Event Manager and the MovieRendering event, which fires at the start of rendering.
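Something like this, roughly. One caveat: since the event fires right as rendering starts, ffmpeg may not be running yet when the command executes, so a tiny wrapper with a short sleep is probably safer than calling killall directly (the script path is just an example):

events:
  subscriptions:
    - event: MovieRendering
      command: /home/pi/kill_render.sh
      type: system

and /home/pi/kill_render.sh:

#!/bin/bash
# give ffmpeg a moment to actually start, then kill it
sleep 2
killall -q ffmpeg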

I don't remember if it's in the notification that comes up while the timelapse is being rendered or somewhere else, but I remember reading that you shouldn't start a new print while the timelapse is rendering, since it takes up CPU time and resources and could interfere with the timing of print commands. I believe I saw some discussion about lowering the priority of ffmpeg so the print process would get much higher priority, but apparently it's not just a matter of priority; other resources are involved too.

Okay, really good info there. I haven't had time to read anything on the API and didn't know about the Event Manager, so I'm sure that has a lot of info for me, and it might give me other ideas. It looks like what you're suggesting is to have the Event Manager run a command on the OctoPrint system that copies the files to a NAS share on my LAN. This would work well, but I have two questions about it: 1) Would that use enough CPU time or resources to interfere with an ongoing print? 2) If I do that, is there a way to capture the images and NOT render them? I'm thinking two scripts would do the job: one would copy the captured images, and the other would delete all the images at the end of the job.

Could I use the Event Manager to just write a lock file on a Samba share (either on my OctoPrint system or on a NAS share) when a job starts and delete the file when the job finishes? If I could do that, I could write a Python script to run on a server that monitors for lock files like that: when one appears, it starts saving frames from the webcam, and when the job is done, it does the rendering. Since that would run on another system, the only load on the OctoPrint system would be the Event Manager writing and deleting the lock file, plus whatever overhead there is when /webcam/?action=snapshot is accessed by another system.
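To think it through, here's a rough shell sketch of that watcher (a Python version would follow the same flow). The lock file path, snapshot URL, and capture interval are all assumptions on my part:

#!/bin/bash
# Runs on the server, not the Pi. Waits for the lock file to appear on the
# mounted share, grabs webcam frames while it exists, then renders the video.
LOCK=/mnt/printer1/print.lock
SNAP_URL="http://printer1.local/webcam/?action=snapshot"
WORK=/srv/timelapse/printer1

while true; do
    if [ -f "$LOCK" ]; then
        job="$(date +%Y%m%d%H%M%S)"
        mkdir -p "$WORK/$job"
        i=0
        # grab a frame every 10 seconds for as long as the lock file exists
        while [ -f "$LOCK" ]; do
            curl -s -o "$WORK/$job/$(printf '%06d' "$i").jpg" "$SNAP_URL"
            i=$((i + 1))
            sleep 10
        done
        # lock file gone: the print is finished, render on the server
        ffmpeg -framerate 25 -i "$WORK/$job/%06d.jpg" -vcodec libx264 -y "$WORK/$job.mp4"
    fi
    sleep 5
done

One difference from OctoPrint's own timelapse is that this grabs frames on a timer rather than per layer/Z change.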

I'm assuming the commands would be sent through a socket? If I can use it to write a lockfile and delete it, that'd give me 2 choices and both would use minimal resources on the OctoPrint Pi Zero.

One of those comments that's a big help because it tells me about something I didn't know about (adding buttons). I take it that means I can easily add buttons to the OctoPrint UI?

Oh - that would be useful! (Your post on this appeared after I started writing this response - just saw it.) So the timelapse could capture all the stills; when the job is done, the Event Manager can kill the rendering process and trigger rendering on a server. When the rendering is done, the files could be deleted. (I'm thinking I can set the directory in OctoPrint, with the images, as a Samba share so my script running on a server can access the images and delete them when done.)

Looking at the OctoPrint source code, it looks like it uses a cutoff value to delete snapshots from unfinished renders, so as long as they aren't too old they should stick around. The setting isn't exposed in the UI, but appears to be 7 days...

https://docs.octoprint.org/en/master/configuration/config_yaml.html#webcam

webcam:
  cleanTmpAfterDays: 7

https://github.com/OctoPrint/OctoPrint/blob/a8fff3930e3c3901bd560ca77656c281959134b3/src/octoprint/timelapse.py#L270C43-L270C72

  1. Probably not a lot of resources; it's basically just a copy command for each snapshot after it's generated.
  2. The killall idea mentioned earlier might work, or you could mess with the ffmpeg rendering command so it doesn't actually process anything, maybe? That would take some experimenting/research.

yes, should be possible.

Yeah, you could just wait until the end of the process and not copy every image as it's generated, by using the MovieRendering event instead of the CaptureDone event.
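So the subscription would look more like this. The sh -c wrapper is there so the wildcard actually expands, the share path is made up, and I haven't tested whether the command parsing is happy with the quoting, so treat it as a starting point:

events:
  subscriptions:
    # copy everything from the tmp folder in one go when rendering would start
    - event: MovieRendering
      command: sh -c 'cp /home/pi/.octoprint/timelapse/tmp/*.jpg /mnt/nas/timelapse_stills/'
      type: system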

This is the default ffmpeg command for rendering the timelapse. I wonder if just removing the command completely and leaving it blank would basically resolve the issue. Otherwise, you could probably use this to run your copy command on all the {input} files.

{ffmpeg} -framerate {fps} -i "{input}" -vcodec {videocodec} -threads {threads} -b:v {bitrate} -f {containerformat} -y {filters} "{output}"

something like

cp "{input}" /path/to/share/

EDIT:

@CmdrCody would know this better than me, but I think you can chain commands together too, like

/usr/bin/mkdir "/path/to/share/{output}" && /usr/bin/cp "{input}" "/path/to/share/{output}/"

and I think that would create a subfolder based on what the output filename of the timelapse would be, and then copy the files to that folder.

I've been using Timelapse Purger to delete my timelapse videos after 7 days, since I've been worried about the videos taking up too much space. It sounds like it's not just the videos that have been an issue (in terms of storage), but that the stills are taking up space and not being deleted for 7 days.

(A big reason I've been worried about space is because, after a month or so of use, I found uploading gcode to this OctoPrint system started going very slowly - I've wondered if there could be a storage or resource issue.)

I was just thinking that as I read your previous post! I could replace the ffmpeg command path with a path to a script that could copy the files to my NAS, or do whatever I wanted, such as creating a lockfile with information in it like the job name, date, and time, so a script on my server would see it and do the rest of the work.

Just off the top of my head, I'm thinking I'd replace the ffmpeg command with one that creates a directory on the NAS, named with a combination of the job name, date, and time. Then it'd copy all the image files to that directory on the NAS. Once that's done and verified, it'd delete the image files on the Pi and exit. My script on my server would see the new directory and do the rendering work.

That should take a minimum of resources on the Pi and offload most of the work to my server, but still let the Pi take the snapshots and delete them. (I think that'd be easier than doing everything on the server, including taking snapshots.)
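For the server side, something like this is what I have in mind. It's just a sketch: it assumes inotify-tools is installed and that the NAS drop directory is /srv/timelapse/incoming, both of which are my own choices:

#!/bin/bash
# watch the drop directory for new job folders coming from the Pi and render each one
INCOMING=/srv/timelapse/incoming
RENDERED=/srv/timelapse/rendered

inotifywait -m -e create --format '%f' "$INCOMING" | while read -r job; do
    [ -d "$INCOMING/$job" ] || continue
    sleep 60   # crude: give the Pi time to finish copying all the stills over
    ffmpeg -framerate 25 -pattern_type glob -i "$INCOMING/$job/*.jpg" \
        -vcodec libx264 -y "$RENDERED/$job.mp4"
done

The sleep is a hack; a better check would be waiting until the folder stops changing size, or having the Pi drop a "done" marker file last.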

Yeah, I think you can do it with my edited post and chaining commands together. It really depends on what {input} translates to in reality. I don't know if it's a full list of files or a wildcard pattern; that would require testing or digging through the source code more. And to avoid the need to "clean up," you could use mv instead of cp.

/usr/bin/mkdir "/path/to/share/{output}" && /usr/bin/mv "{input}" "/path/to/share/{output}/"

Ah, this trick won't work, it seems, unless you adjust the path to ffmpeg:

Invalid webcam.ffmpegCommandline setting, lacks mandatory {ffmpeg}, {input} or {output}

So changing the path to ffmpeg to /usr/bin/mkdir and then setting the advanced command line to

{ffmpeg} "{output}" && /usr/bin/mv "{input}" "{output}/" && /usr/bin/mv "{output}" "/path/to/share/"

and it might work.

EDIT: doing a shell script might be the way to go. I figured out that {output} is a full path to a filename under the timelapse folder, and {input} uses a tokenized filename that isn't recognized by the mv command:

/usr/bin/mv: cannot stat '/home/pi/.octoprint/timelapse/tmp/Shape-Box_PLA+_205_20240908014031-%d.jpg': No such file or directory

You have to morph the '%d' into a '*'; that's an ffmpeg doohickey that means "a bunch of numbers."
But you really only want to do that if you want to keep the timelapse. Also, I'm pretty sure the temp files are cleared after the conversion. The timelapses may be cleared after 7 days.
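In a wrapper script, that would be a one-liner, something like this (using the path from the error above; the destination is made up):

# turn ffmpeg's %d frame token into a shell glob, then move the real files
INPUT='/home/pi/.octoprint/timelapse/tmp/Shape-Box_PLA+_205_20240908014031-%d.jpg'
GLOB="$(printf '%s' "$INPUT" | sed 's/%d/*/')"
mv $GLOB /mnt/nas/timelapse_stills/    # unquoted on purpose so the glob expands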

I'm unclear. Are you saying that the path to ffmpeg must end with "ffmpeg"? If so, I can always change the path to something like ~/bin/ffmpeg and have the ffmpeg there be my script. It might take some manipulating, but I would guess it wouldn't be too hard for a script to alter {input} into something it could use.

I was going to write a simple script that would write {input} to a file so I could see exactly what it is. If it's not an exact filename, then it's probably possible to expand it in my script.
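Something as simple as this should do it; point the ffmpeg path at it and it just logs whatever OctoPrint passes (the log path is arbitrary):

#!/bin/bash
# stand-in for ffmpeg that just records the arguments it was called with
echo "$(date): $*" >> /home/pi/ffmpeg_args.log
exit 0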

I was thinking of cp rather than mv because I prefer extra checks when writing to a share on another system: copy, verify the files were copied, then delete. I know mv won't delete a file until it has finished the copy, but I've never checked whether it verifies the file actually arrived. If it does, then that would work better.

There are a number of things I want to test on this, but right now I'm dealing with the need to get a vent hood over my CNC (so using a laser on it doesn't keep triggering my smoke alarm), and I have ONE more part to print. So many things are going wrong with my printer right now that it's taking me multiple print attempts (on a 20-hour print job!) and a lot of troubleshooting to get this thing printed. So I'm a bit overwhelmed fixing something that, two days ago, I thought was a no-problem job.

I remember checking on timelapse retention, since mine were originally not being deleted, or were being deleted only after a long, long time. I use the Timelapse Purger plugin because, at one point, I had so many timelapse files on my system I was worried about storage issues.

(Oh, I like having a Sky Marshall like Commander Cody around here!)

No, just that the command set in the advanced options must include the tokens {ffmpeg}, {input}, and {output}. So if you set the path to ffmpeg to your Python or bash script, the advanced command can pass parameters to that file. For example, set the path to ffmpeg to /home/pi/my_script.sh and the advanced command to {ffmpeg} {input} {output}, and you can use the passed arguments to do what you want.
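To make that concrete, here's a rough sketch of what /home/pi/my_script.sh could look like, assuming the advanced command is {ffmpeg} {input} {output} and the NAS is already mounted at /mnt/nas. The mount point, folder naming, and exit codes are all my own choices, not anything OctoPrint requires:

#!/bin/bash
# called by OctoPrint in place of ffmpeg as: my_script.sh {input} {output}
INPUT="$1"    # e.g. /home/pi/.octoprint/timelapse/tmp/JobName_20240908014031-%d.jpg
OUTPUT="$2"   # e.g. /home/pi/.octoprint/timelapse/JobName_20240908014031.mp4

# name the NAS folder after the timelapse output file
JOB="$(basename "$OUTPUT")"
DEST="/mnt/nas/timelapse/${JOB%.*}"
mkdir -p "$DEST" || exit 1

# turn ffmpeg's %d frame token into a shell glob that matches the real stills
GLOB="$(printf '%s' "$INPUT" | sed 's/%d/*/')"

# copy, verify the counts match, then delete the originals
# ($GLOB is left unquoted on purpose so it expands; assumes no spaces in filenames)
cp $GLOB "$DEST/" || exit 2
[ "$(ls $GLOB | wc -l)" -eq "$(ls "$DEST" | wc -l)" ] || exit 3
rm -f $GLOB

exit 0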

That's the reason I made the plugin. They don't get purged on their own natively in OctoPrint; only the images saved in tmp from incomplete timelapses appear to get flushed out.

Got it. Seems the easiest thing to do, then, is to let OctoPrint do the snapshots and save them, then have my script run instead of ffmpeg. It'll make a directory on the NAS and move the files over. Then everything on the OctoPrint Pi is done and gone.

From there, I can have my program on my server watch for a lockfile or a new directory on the NAS. Once one shows up, it'll use ffmpeg to render the video and, if needed, copy it to a new directory. The ONLY issue I see is that I don't have the convenience of having the timelapse available in the OctoPrint UI. (But I suspect that if I copied it back to the Pi, into the right directory, it'd be seen as a timelapse video and added to the list when I check on timelapses.)

The one issue I see is error reporting. Since my script runs in a shell, if it hits an error, like the NAS volume being offline, it could report that in a log file, but then I'd need a way to surface that error in the OctoPrint UI so I know there was a problem.

End the script with an "exit 0" on a good run and a nonzero code like "exit 4" if it errors. That should be reported back to OctoPrint, and it might flag it as a failed conversion.

You are correct: just copying the rendered video back to /home/pi/.octoprint/timelapse should make it available within the OctoPrint UI/Timelapse tab.
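So the last step of the server script could be something like this, assuming the Pi's ~/.octoprint/timelapse folder is itself exported over Samba and mounted on the server at /mnt/printer1-timelapse (that mount is my assumption):

# push the finished render back so it shows up in OctoPrint's Timelapse tab
cp "$RENDERED/$job.mp4" /mnt/printer1-timelapse/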

You could, in theory, actually run the command/render script remotely without dealing with lock files and watching: https://www.cyberciti.biz/tips/linux-running-commands-on-a-remote-host.html
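e.g. from the Pi's wrapper script or an event command, something along these lines, where the user, hostname, render script, and arguments are all hypothetical and it assumes an SSH key is set up for a dedicated account on the server:

# kick off rendering on the server directly instead of watching for files
ssh render@my-server.local /srv/timelapse/render.sh printer1 JobName_20240908014031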

But then what does OctoPrint do with it? Does it send a notification to the user that there was an issue, or is it just logged somewhere?

True. Not sure I want to go with that. It would mean giving the Pi access to a server, including storing a password or a private key. In general, the Pi is a less secure system than my server, so I'd rather it have very limited access to the server, like only being able to write to the Samba share.