Hi, I need some help with plugin development, please. I am trying to create a plugin that takes a snapshot after the printer finishes printing each layer. I have not developed an OctoPrint plugin before and I am taking this up as a challenge, the first of many. Help please & thank you!
This already happens with the built-in timelapse functionality, so I guess the question is what the end goal is. You could possibly do something with event subscriptions to hook into the default timelapse snapshot functionality. You could also look into OctoLapse, as it does fancy stuff with snapshots, etc.
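If you do want to ride on top of the built-in timelapse, a minimal sketch of the event subscription route could look like this (assuming an EventHandlerPlugin mixin; "CaptureDone" is the event OctoPrint fires after the timelapse has written a snapshot):

import octoprint.plugin

class LayerSnapshotPlugin(octoprint.plugin.EventHandlerPlugin):

    def on_event(self, event, payload):
        # "CaptureDone" fires whenever the built-in timelapse has saved a snapshot image
        if event == "CaptureDone":
            snapshot_path = payload.get("file")
            self._logger.info("Timelapse snapshot available at %s", snapshot_path)
            # run your layer comparison on snapshot_path here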
Ah! I see. What I am trying to do is compare the sliced layer data to a camera snapshot of the same layer and have the printer decide whether it should continue printing or not. How do you think I could go about that? Thank you
I would probably pre-process the file, or track the individual gcode commands as they go to the printer. You would have to render what you think the layer should look like based on the gcode. There may be a Python-based gcode viewer out there you could look at. Comparing that to a snapshot of the layer might be difficult, but I hope you know something about this, otherwise you wouldn't have started this project.
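Just to illustrate the "render what you think the layer should look like" part: a very naive sketch that groups the XY positions of extruding moves by Z height (no arcs, no relative moves, and the function name is made up):

import re

def extrusion_paths_by_layer(gcode_path):
    """Very naive: group XY positions of extruding G0/G1 moves by their Z height."""
    layers = {}                      # z height -> list of (x, y) points
    x = y = z = 0.0
    move = re.compile(r"^G[01]\b")
    with open(gcode_path) as gcode_file:
        for line in gcode_file:
            line = line.split(";", 1)[0].strip()    # strip comments
            if not move.match(line):
                continue
            words = {w[0]: float(w[1:]) for w in line.split()[1:] if len(w) > 1}
            x = words.get("X", x)
            y = words.get("Y", y)
            z = words.get("Z", z)
            if "E" in words:                        # only extruding moves lay down material
                layers.setdefault(z, []).append((x, y))
    return layers

From those per-layer point lists you could plot or rasterize the expected layer outline and compare it against the snapshot.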
Keep performance in mind: you don't want to overload the server it is running on, usually a Raspberry Pi. The Spaghetti Detective offloads its processing to separate servers, because its AI failure detection is too much for the Pi to handle.
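If the comparison does get too heavy for the Pi, one way to offload it is to simply POST the snapshot to another machine and let that machine answer; a bare-bones sketch (the URL and the response format are placeholders for whatever service you build):

import requests

def offload_comparison(snapshot_path, layer_number):
    # "http://my-beefy-server.local:5000/compare" is a placeholder for your own service
    with open(snapshot_path, "rb") as snapshot:
        response = requests.post(
            "http://my-beefy-server.local:5000/compare",
            files={"snapshot": snapshot},
            data={"layer": layer_number},
            timeout=30,
        )
    response.raise_for_status()
    return response.json().get("keep_printing", True)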
There are lots of hooks & mixins available for plugins. If you haven't already, I would complete the plugin tutorial and build something simple to get used to how OctoPrint plugins work, then you can jump head first into AI-based failure detection.
Hi @AYODEJI_FAWOLE,
From my perspective you have three "challenges":
- receiving a "layer changed" event
- taking a snapshot from the camera
- comparing the images
- Unfortunately, OctoPrint doesn't support "layer change events" out of the box.
One solution could be to use the additional plugin "DisplayLayerProgress". This plugin sends an event through the OctoPrint event bus after each layer.
Then you can listen for this event (see GitHub - OllisGit/OctoPrint-DisplayLayerProgress: OctoPrint-Plugin):
def on_event(self, event, payload):
    if event == "DisplayLayerProgress_layerChanged":
        # do something useful
        pass
Or you can add special gcode or comments to your gcode file and listen for those to identify a layer change, for example:
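A rough sketch of that marker approach using OctoPrint's gcode queuing hook, assuming your slicer is set up to insert something like M117 LAYER_CHANGE at every layer change (the marker text and the class name are just examples):

import octoprint.plugin

class LayerWatcherPlugin(octoprint.plugin.OctoPrintPlugin):

    def on_gcode_queuing(self, comm_instance, phase, cmd, cmd_type, gcode, *args, **kwargs):
        # called for every command OctoPrint is about to queue for the printer
        if gcode == "M117" and "LAYER_CHANGE" in cmd:
            self._logger.info("Layer change marker seen: %s", cmd)
            # trigger snapshot + comparison here

__plugin_implementation__ = LayerWatcherPlugin()
__plugin_hooks__ = {
    "octoprint.comm.protocol.gcode.queuing": __plugin_implementation__.on_gcode_queuing
}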
- You can look into the OctoPrint source (timelapse.py) or into CameraManager.py from PrintJobHistory: OctoPrint-PrintJobHistory/CameraManager.py at 410ca8562cef903977860d1fe6b1fc783a97d9f0 · OllisGit/OctoPrint-PrintJobHistory · GitHub
Take a snapshot and save the image to the filesystem (needs the Pillow module):
import requests
from PIL import Image, ImageFile

# snapshotFilename: path where the snapshot should be stored, defined elsewhere in the plugin
snapshotUrl = self._globalSettings.global_get(["webcam", "snapshot"])
rotate = self._globalSettings.global_get(["webcam", "rotate90"])
flipH = self._globalSettings.global_get(["webcam", "flipH"])
flipV = self._globalSettings.global_get(["webcam", "flipV"])

# make snapshot url call to receive the image
response = requests.get(snapshotUrl, verify=False, timeout=10.0)
if response.status_code == requests.codes.ok:
    self._logger.info("Process snapshot image")
    with open(snapshotFilename, 'wb') as snapshot_file:
        for chunk in response.iter_content(1024):
            if chunk:
                snapshot_file.write(chunk)
    # adjust orientation according to the webcam settings
    if flipH or flipV or rotate:
        image = Image.open(snapshotFilename)
        if flipH:
            image = image.transpose(Image.FLIP_LEFT_RIGHT)
        if flipV:
            image = image.transpose(Image.FLIP_TOP_BOTTOM)
        if rotate:
            image = image.transpose(Image.ROTATE_90)
        image.save(snapshotFilename, format="JPEG")
    self._logger.info("Image stored to '" + snapshotFilename + "'")
    # without this I get errors during load (happens in resize, where the image is actually loaded)
    ImageFile.LOAD_TRUNCATED_IMAGES = True
else:
    self._logger.error("Invalid response code from snapshot-url. Code: " + str(response.status_code))
- You can do the image processing on your own (grayscale, bit-compare) or use the already mentioned Pillow module:
from PIL import Image
from PIL import ImageChops

image_one = Image.open(path_one)
image_two = Image.open(path_two)

diff = ImageChops.difference(image_one, image_two)
if diff.getbbox():
    print("images are different")
else:
    print("images are the same")
I've talked about this with both @Kenneth_Jiang (Spaghetti Detective) and @leigh-johnson (OctoPrint Nanny), and both have said that comparing gcode to actual print snapshots is next to impossible, although it would be the perfect world of detection. The issue seems to be calibrating the camera to the right angle so the snapshot can be compared against the angle of the 3D rendering of the gcode file, etc. There have also been a couple of lengthy discussions about this "AI/Machine Learning" approach on the issue tracker, and it does seem to be something you would not want to do directly on the Pi that is printing.
Thank you very much! I'm going to try this out and give you some feedback
Ideally, AI/machine learning would do an excellent job with this, but I'm not there yet. Thank you