Computer Vision applied to 3D printing

Hi everyone,

I entered this amazing world of 3D printing several months ago, and as a master's student in engineering I was thinking about computer vision (CV) control of the print. Why not stop the print if the first layer detaches, isn't perfect, or any other issue appears? All controlled by the Raspberry Pi camera.
Obviously this is only a first idea, but other checks could be added to prevent problems or stop the print as soon as one appears.

Mentioned in the most recent OctoPrint On Air update: https://www.youtube.com/watch?v=fHN5Ze5k67o&t=2832s

If you could make this happen, that would be absolutely awesome. I once looked into doing something with computer vision, spent a little time searching, then had a good laugh and gave up.


Thank you a lot for the answer; I didn't know about these "On Air" videos.
I've worked with machine learning a lot over the last year, and I will probably try (time permitting) to implement this idea: classify the first layer and stop the print if it isn't good enough.
To be honest, the task isn't that difficult; there are enough features (in terms of extracted image properties, e.g. edges, texture, and so on) to discriminate a good first layer from a bad one.
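For example, here is a minimal sketch of the kind of features I mean, using OpenCV (the threshold values are placeholders, nothing is tuned):

```python
# Two cheap image statistics that might help separate a clean first
# layer from a messy one. The Canny thresholds and the Laplacian
# measure are illustrative choices, not tested values.
import cv2
import numpy as np

def first_layer_features(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 100, 200)
    edge_density = np.count_nonzero(edges) / edges.size  # fraction of edge pixels
    texture = cv2.Laplacian(gray, cv2.CV_64F).var()      # local-contrast / texture measure
    return edge_density, texture
```

A classifier could then be trained on vectors like these from labeled "good" and "bad" first-layer photos.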
And obviously, any suggestions are welcome :smiley:


I think writing it for a specific printer, build plate, filament color, and background might not be hard, but making it generic... if someone has an open printer like a Prusa (so there's a lot of background in the image) and is printing black filament on a black BuildTak surface, it could get really tough.

Printing in an enclosed printer with a bright yellow filament on a black build plate would probably be easy - but then again, I know jack about machine vision, so I could very well be imagining problems that wouldn't exist for someone who knows what they're doing.

Either way I sure wish you luck and hope it goes well for all of our sakes!


Hi. I wonder if an easier approach would be to use the normal camera view (front corner angle) and look for the unmistakable spaghetti mess that appears once a print starts to fail. If members of this group collected lots of images of this spaghetti, it should be possible to train a neural network to spot it. OctoPrint could then pause the print when spaghetti is detected or suspected.
For example, see https://medium.com/@sangho/how-to-build-a-image-recogniser-using-your-own-dataset-22bb9f806e1d
Another scenario that could be detected is a gap appearing between the top of the print and the nozzle, caused by a filament jam or nozzle blockage. That would be harder to train for, so a phase 2 perhaps!
So how about getting a bunch of filament spaghetti images together to make an open source training set?
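In case it helps anyone get started, fine-tuning a small pretrained network on such a dataset might look roughly like this in PyTorch (the dataset/ folder with ok/ and spaghetti/ subfolders is an assumption for the example, and none of the hyperparameters are tuned):

```python
# Sketch: fine-tune a pretrained ResNet-18 to classify "ok" vs "spaghetti".
# Expects images laid out for ImageFolder: dataset/ok/*.jpg, dataset/spaghetti/*.jpg
import torch
from torch import nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("dataset", transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=16, shuffle=True)

model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: ok / spaghetti

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # train only the new head
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```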
Richard


Hi cris,

is there any update on the development of this CV-aided printer? I am very interested in this idea and would like to collaborate on making it happen.
Actually, some people I know would be very interested in this too, and we are ready to work on it.


Hi oiud!
Sorry for the delay in answering.
Unfortunately I've had no time to implement the idea. The only trial done so far has been a Telegram bot that commands a Raspberry Pi interface to shut the printer down automatically when the print is finished - without any G-code or USB commands, using only a camera to detect movement.
So for now it's just a simple motion detector (i.e., change detection between two frames captured at two different times).
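For reference, the change detection is roughly this kind of thing with OpenCV (the camera index, interval, and thresholds here are illustrative guesses, not the exact values I used):

```python
# Grab two frames a couple of seconds apart and measure how much changed.
import time
import cv2

cap = cv2.VideoCapture(0)   # camera index is a placeholder

def moved(interval=2.0, pixel_thresh=25, min_changed_fraction=0.01):
    ok1, frame1 = cap.read()
    time.sleep(interval)
    ok2, frame2 = cap.read()
    if not (ok1 and ok2):
        return False
    g1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(g1, g2)
    return (diff > pixel_thresh).mean() > min_changed_fraction
```

The idea is that once this stays False for a while, the print is finished and the printer can be shut down.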
Now, a neural network (possibly a deep one) could probably help generalize across different printers, but unfortunately there are no datasets, no labeled data, nothing useful.
Another problem is where to train the model... Unfortunately I don't have a powerful GPU (only at university, but I don't want to risk it :D), so training a NN or, better, a CNN is a little tricky and slow on a CPU.
So I was thinking about a fully connected classifier (a kind of NN) or an SVM trained on hand-crafted features (HOG and edges to begin with).
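As a sketch, the HOG + SVM route might look like this with scikit-image and scikit-learn (the image paths and labels are placeholders for a labeled photo set that doesn't exist yet):

```python
# Hand-crafted features (HOG) feeding a linear SVM. Nothing here is tuned.
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.resize(gray, (128, 128))
    return hog(gray, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2))

image_paths = ["layer_ok_01.jpg", "layer_bad_01.jpg"]   # placeholder file names
labels = [1, 0]                                         # 1 = good first layer, 0 = bad

X = np.array([hog_features(p) for p in image_paths])
clf = LinearSVC()
clf.fit(X, np.array(labels))
```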
These are the ideas I've got; with the limited free time my studies leave me, I haven't implemented them yet.
How were you thinking of approaching the task?
Thank you in advance!

A friend of mine wrote a plugin for OctoPrint for the EyeToy + Kinect that does exactly that.
But he was sued in the past because he wrote open source software. He won, but now he doesn't release anything anymore.
Sadly, the plugins were written after he was sued.

With a normal cam you have no chance; the Sony and MS cams (EyeToy and Kinect) work fine on an Odroid C2.

His cam checks the layers. Over/under-extrusion detection doesn't work so well; color change detection on filament changes works great.

I do this with an RGB color sensor on an Arduino, and it's not as nice as on the Odroid :-/

I've thought about this. It's a neural network game where you compare bad prints from the past with good ones; the computer would compare the current print against them and try to make a determination.

That said, somebody is already way ahead of us on this with Project Kronos.

With little experience in computer vision and none in neural networks, what I was thinking about was a "spot the difference" algorithm that compares pictures of the printed part in real time with the G-code preview rotated to the same angle.
My naive view is that with digital image processing on the photo and on the G-code preview, a point could be reached where the two can be compared while excluding background, shadows, etc., given an initial printer setup (reference images without the printed part, and so on).
From this comparison it should be fairly easy to detect differences between expected and actual features, starting with first-layer correctness (comparing the 2D shape), missed steps (detecting a non-straight edge that should be straight), and the like.
I think a CV approach like this should be easier than a deep neural network one, which would require tons of prior data.
I also imagine a simpler CV pipeline like this would be easier on the RPi's resources, given that OctoPrint is running too.
My first line of work would be to understand and then test the DIP and CV part on a PC, at least to see whether the gap between photo and G-code preview can be bridged, and leave the RPi for last. Your RPi work could be very valuable there; at the very least you managed to build the setup for the image analysis and printer control.
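To make that concrete, here is a very rough version of the comparison with OpenCV (all file names, the pre-rendered preview mask, and the thresholds are assumptions):

```python
# Isolate the printed part by differencing against a reference photo of
# the empty bed, then compare its silhouette to a binary render of the
# G-code preview (assumed same resolution) using intersection-over-union.
import cv2
import numpy as np

def silhouette(image, reference, thresh=30):
    diff = cv2.absdiff(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY))
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask

photo = cv2.imread("current_frame.jpg")
empty_bed = cv2.imread("empty_bed_reference.jpg")
expected = cv2.imread("gcode_preview_mask.png", cv2.IMREAD_GRAYSCALE)

actual = silhouette(photo, empty_bed)
inter = np.logical_and(actual > 0, expected > 0).sum()
union = np.logical_or(actual > 0, expected > 0).sum()
iou = inter / union if union else 0.0   # low IoU = print deviates from expectation
```

Getting the preview rendered at the camera's exact angle and scale is of course the hard part.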

I'm sure it's an interesting problem. I do think that it would be problematic to try to extend the data set beyond your own printer.

My personal list of things which I would like to catch, in order of occurrence:

  • The hotend gets clogged or the filament spool was cross-loaded or it was a sticky filament like carbon fiber or it's brittle and breaks... and it begins "air printing" for the remainder of the print
  • Very tall columns sometimes break loose from the base due to bad adhesion or lack of a raft, perhaps, resulting in air printing
  • Parts which occupy nearly the entire footprint of the print volume will often start curling at the edges (non-heated bed), which sometimes results in the hotend crashing into that zone
  • When there are many small parts without a raft, one or more of those might not adhere correctly and never get a good start

I would say that most of my print job failures happen within the first 8 layers, even the first 3 if I'm honest.

I almost never get things that look like the Flying Spaghetti Monster.

In my own case, I'm thinking a better solution would be to detect that the spool itself hasn't moved in, say, 60 seconds. That would likely cover half of my problems. Curling and part separation might be good candidates for photographic analysis. In theory, the Cancel button behavior could be modified so that this plugin marks a particular job as failed, takes a snapshot at that moment, and stores it for your data stack along with the layer information. But I'd think you'd also need to store similar photos from successful prints at that same layer: if my failures mostly happen at layer 3, then successful prints at layer 3 also need to be sampled.
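The spool check in particular seems simple; a hypothetical watchdog might look like this (the camera ROI, polling interval, and thresholds are all made-up values):

```python
# Difference a cropped region around the spool between frames and track
# the time since the last visible movement.
import time
import cv2

cap = cv2.VideoCapture(0)
ROI = (slice(100, 300), slice(200, 400))   # y/x crop around the spool, placeholder

_, prev = cap.read()
last_motion = time.time()

while True:
    time.sleep(2)
    ok, frame = cap.read()
    if not ok:
        continue
    if cv2.absdiff(frame[ROI], prev[ROI]).mean() > 3:
        last_motion = time.time()
    prev = frame
    if time.time() - last_motion > 60:
        print("Spool idle for 60 s, flag the job / pause the print")
        break
```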

I don't think so - remember that we have the incoming G-code, so we know what the print SHOULD look like. It would be straightforward to have the user train the model with a test print, so that it gets used to their camera, lighting, and what the bed and extruder look like. Then just watch for spaghetti, or for the first layer starting to slide around in sync with the mechanism...
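For what it's worth, pulling the expected first-layer toolpath out of the G-code doesn't take much code. A bare-bones sketch (a real parser would also need to handle relative moves, Z-hops, arcs, priming lines, and so on):

```python
# Collect the XY points of all G0/G1 moves until the Z height first increases,
# i.e. a crude approximation of the first layer's outline.
def first_layer_points(gcode_path):
    points, layer_z = [], None
    for line in open(gcode_path):
        line = line.split(";")[0].strip()           # drop comments
        words = line.split()
        if not words or words[0] not in ("G0", "G1"):
            continue
        coords = {w[0]: float(w[1:]) for w in words[1:] if w[0] in "XYZ"}
        if "Z" in coords:
            if layer_z is not None and coords["Z"] > layer_z:
                break                                # moved above the first layer
            layer_z = coords["Z"]
        if "X" in coords and "Y" in coords:
            points.append((coords["X"], coords["Y"]))
    return points
```

Rendering those points to a binary image gives the "what it SHOULD look like" reference to compare against.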

I know a guy who's into ML and will run this by him, but I'm also going to go check out Kronos.

Hi Daniel,
May I know why your friend was sued? Would building the above-mentioned feature infringe anyone's intellectual property?

He was sued because he didn't release the shell scripts used to build the distribution, and also didn't hand over the shell history of his development machine.
Which he didn't have to do.
All the software he used was/is open source; he only provided a generalized image of his installation.

Will building the above mentioned feature infringe anyone's intellectual property?

I don't think anyone has patented it.

Hi. It's been a while since I wrote the original post, and I've printed much more with my Taz 6 since. Most of my failures have been due to filament grinding in the extruder. This can be caused by a hot environment - my shed gets very hot in the summer, which seems to soften the filament and make grinding more likely. Sometimes the hotend temperature also needs to be increased to support frantic retraction and repriming for certain designs.
I agree that some sort of "filament still moving" sensor would catch most of my failures. This could be done by:

  a) A physical rub wheel on the filament input, perhaps incorporated into a filament runout sensor assembly. A slotted disc and IR pickup, or a magnet and Hall effect switch, could be used here.
  b) A camera watching the spool rotation and software that recognises when it stops.
  c) If the spool sits in a spool holder with 4 bearings / 4 wheels (like the "fat tracks" thing on Thingiverse), it would be possible to add a sensor to one of the 4 roller wheels. A magnet would probably be easiest to fit to the roller wheel, with a Hall effect switch on the base picking it up. An Arduino would then condition the pulses and connect to OctoPrint (or a custom sensor type could be added to the Enclosure plugin, removing the need for the Arduino).

You'd need to be careful if ironing mode is enabled, because for longish periods there'd be no extrusion / filament use but the print would be OK.

My favorite option is (b) because it's all done in software, but (c) is probably more practical to build.
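Just to illustrate the timeout logic for (c): if the Hall effect switch were wired straight to a spare Raspberry Pi GPIO pin instead of going through an Arduino, it could be as simple as this (the pin number and timeout are assumptions):

```python
# Wait for pulses from the Hall sensor; if none arrives within the
# timeout, assume the filament has stopped moving.
import RPi.GPIO as GPIO

SENSOR_PIN = 17        # BCM numbering, placeholder
TIMEOUT_MS = 60000     # no pulse for 60 s -> filament probably stopped

GPIO.setmode(GPIO.BCM)
GPIO.setup(SENSOR_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

try:
    while True:
        # Blocks until the magnet passes, or returns None on timeout
        if GPIO.wait_for_edge(SENSOR_PIN, GPIO.FALLING, timeout=TIMEOUT_MS) is None:
            print("No spool movement for 60 s, pause the print via the OctoPrint API")
            break
finally:
    GPIO.cleanup()
```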

Richard


Have you seen the plugin The Spaghetti Detective? It runs either in the cloud or on a machine more powerful than a Raspberry Pi. I haven't tried it, but it has a lot of installs and a large development effort behind it. It might be what you're looking for for camera-based failure detection.