Yes I'm very aware of that (I helped write a chunk of the USB stack used on the LPC1768 Marlin implementation), which was why I said that the "UARTs (on the Pi) don't come into it". My comment was in reply to OutsourcedGuru who had said that you may need to use a "good UART" on the Pi to get good performance. My understanding is that this thread is about printers connected to the Pi by USB (even if that USB connection then uses a UART), not (the relatively small number of) printers connected directly to the Pi via a UART on the Pi board.
But the thing is that even with a reasonable USB stack running directly on a 32-bit board you may still not get very high communication speeds. I've just been running some tests to transfer a 20 Mbyte file using the "Upload to SD" button in Octoprint. This uses exactly the same code to send the gcode to the printer as is used during printing. It takes 61 minutes to upload the file. The same file uploaded from Repetier Host takes just under 7 minutes. Which one of these do you think is less likely to have issues feeding large numbers of small-move gcode operations like the ones you described into a printer?
The above tests generated no errors, no resends, no extra logging, so they are reasonably comparable. They were run on different host-side hardware: Octoprint was running on a Pi 2 and Repetier on a relatively low-end PC. The Pi had an RPi Cam attached but was not streaming (activating streaming slows things down to about 72 mins, not surprising given that the network connection runs via the same USB controller). But that same file can be transferred to the printer's SD card over USB (using the feature that shares the printer's SD card as a "USB drive") in just over 30 seconds from the PC and around 56 seconds from the same Pi used to run Octoprint. So although the Pi/USB/printer transfer is slower than the PC/USB/printer one, it is not that much slower. All of these tests are basically transferring roughly the same volume of data (and using the same USB bulk endpoints).
So why is an Octoprint transfer so much slower? To be honest I'm not really sure. The protocol used by Octoprint (so-called ping-pong mode) is not as efficient as the buffer state tracking used by Repetier, but even switching Repetier Host to ping-pong mode only extends the time to around 14 minutes. Even if we combine that with the above two-fold increase for the PC vs Pi USB speed, we only get to around 28 minutes. So it looks like there may be something else going on here. But whatever the reason, it would seem that there is some room for improving the Octoprint-to-printer transfer rate.
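For a sense of scale, the times above work out roughly like this (a sketch; I'm assuming the test file is exactly 20 Mbyte, and the figures are simple averages over the whole transfer):

```python
# Back-of-the-envelope throughput for the transfer times quoted above.
FILE_BYTES = 20 * 1000 * 1000  # ~20 Mbyte test file (assumed exact)

def rate_kb_per_s(seconds):
    """Average transfer rate in kbytes/second for the whole file."""
    return FILE_BYTES / seconds / 1000

octoprint = rate_kb_per_s(61 * 60)  # "Upload to SD" via Octoprint
repetier  = rate_kb_per_s(7 * 60)   # same file via Repetier Host
usb_drive = rate_kb_per_s(56)       # printer's SD shared as a USB drive (from the Pi)

print(f"Octoprint:  {octoprint:6.1f} kB/s")
print(f"Repetier:   {repetier:6.1f} kB/s")
print(f"USB drive:  {usb_drive:6.1f} kB/s")
```

So the same physical link that moves ~357 kB/s as a mass-storage device is being driven at ~5.5 kB/s by the send-and-wait gcode protocol, which is a factor of 60, not a factor of 2.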
I worked on a project that was basically a man-in-the-middle computer between the OctoPrint Raspi and the printer controller board (Mega 2560 as I recall). The computer in the middle was a Raspberry Pi Zero W. I wrote a simple driver for passing the serial communications through the Raspi Zero.
Unfortunately, I was bitten by the Zero's lack of two good UARTs in this case. The maximum I could reliably push on the "bad" (mini-UART) side was either 9600 baud or perhaps twice that. I could be wrong, but this is what I ran into.
It seems to jibe with an issue seen on the Prusa forum, where the designer spec'd a Zero to be soldered to their board without understanding the UART situation. (By disabling UART use by Bluetooth and the console, their setup then works, making the good UART available for OctoPrint.)
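For reference, freeing up the good (PL011) UART on a Pi 3 or Zero W looks roughly like this; this is a sketch from memory, so check the Raspberry Pi UART documentation for the exact overlay names on your firmware version:

```shell
# /boot/config.txt: hand the PL011 back to the GPIO header by moving
# (or disabling) Bluetooth, which claims it by default on these boards:
#   dtoverlay=miniuart-bt    # Bluetooth on the mini-UART, PL011 freed
#   dtoverlay=disable-bt     # or disable Bluetooth entirely

# Stop the Bluetooth modem service that holds the UART open:
sudo systemctl disable hciuart

# /boot/cmdline.txt: remove the serial console so it doesn't talk over
# the printer, i.e. delete the "console=serial0,115200" entry.
```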
I did something similar to your example earlier yesterday:
Starting with 30 segments (which looks like what OpenSCAD uses when no $fn is defined), I doubled the value for each run.
Above 2400 (which is - IMHO - already a "nearly insane" value) I simply appended a 0 (zero) to the value: 24000 - 240000 - 2400000
The only things I noticed:
with each of these really crazy iterations OpenSCAD takes longer to render
the file size of the STL gets bigger
Slic3r PE (didn't try it with another slicer yet) takes longer to load and process the resulting STL file (which is logical)
I didn't notice any problems while printing the resulting gcode files, but!!!
the size of the resulting gcode files didn't get significantly bigger above $fn=2400
so I assume that at least Slic3r PE strips this "unnecessary" data from the STL file and reduces it to something 8-bit boards can handle?
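One plausible explanation for the plateau: the maximum deviation between an N-segment polygon and a true circle shrinks roughly quadratically with N, so beyond a few hundred segments it drops below the coordinate resolution of the gcode (typically 0.001 mm), and the slicer has nothing extra to emit. A quick sketch of the geometry (my own illustration, not Slic3r's actual algorithm):

```python
import math

def chord_error_mm(radius_mm, segments):
    """Sagitta: max deviation between an N-gon and the true circle."""
    return radius_mm * (1 - math.cos(math.pi / segments))

# A 30 mm radius circle, a typical printable size:
for n in (30, 240, 2400, 24000):
    print(f"$fn={n:6d}: error = {chord_error_mm(30, n):.9f} mm")
```

At $fn=2400 the deviation is already around 25 nanometres, so going to 24000 changes nothing a slicer (let alone a printer) can represent.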
Maybe... like one of those sample gauge parts which change values with Z so that you can dial in the right... temperature or whatever.
What about a cylinder which starts with 16 segments and doubles with each additional mm in height? At the point where it's beginning to stutter you can then abort the job, measure the height and calculate the segment threshold which breaks your printer.
start = 16;  // start segments
end   = 356; // end segments
step  = 16;  // increase each cylinder by this much

union() {
    for (fn = [start : step : end]) {
        translate([0, 0, fn - start])
            cylinder(d = 60, h = step, $fn = fn);
    }
}
Obviously, the more end segments / the lower the step, the taller the thing will be, so you'd need to make sure (manually) that it fits in your print volume. You'd start at like 32 and end at 50 with a step of 5 or something. Each cylinder is as tall as the step value. It could be better but meh, it's 1am.
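To save a render before finding out it doesn't fit, the tower's total height can be worked out from the loop bounds; a quick Python sketch of the arithmetic (my own helper, not part of the snippet):

```python
def tower_height(start, end, step):
    """Total height (mm) of the stepped test cylinder above.

    OpenSCAD's [start : step : end] range yields start, start+step, ...
    up to the last value <= end, and each cylinder is `step` mm tall.
    """
    count = (end - start) // step + 1
    return count * step

print(tower_height(16, 356, 16))  # the defaults above: 352 mm tall!
print(tower_height(32, 50, 5))    # the suggested milder test: 20 mm
```

So the default values produce a 352 mm tower, which won't fit most printers; the milder 32-to-50 version is only 20 mm.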
Most slicers do. For this test, use e.g. an older version of Simplify3D - 3.0 is perfect, as it will generate larger and larger files with a lot of super short moves...
Yup.
Also, editor type matters. Parametric mesh editors can be problematic unless the export resolution is reduced.
IIRC from my own experiments, a 20mm diameter vase mode cylinder with:
360 segments will stutter slightly.
144 segments will not
Actually it might have been higher than that, I forget the tests I did, my usual technique was ok so I just left it at that. I habitually use multiples of 36 as curve segment counts, with 36, 72, and 144 being the common ones I use. I have a feeling 360 was ok, and 720 was not.
I always print at a max speed of 60mm/s on an i3 with chinesium RAMPS/2560 and a 30cm USB cable. But of course most toolpath types print somewhat slower than this depending on type and how my slicer is set.
I come from a ye-olde game development background, so polygon efficiency was always at the front of my mind back then, and it sticks in anything I design now too. Mostly because editing a mesh that is unnecessarily dense is seriously annoying. But also partly due to my tools, i.e. Blender is not a parametric modeller; people using parametric mesh tools like Fusion 360 will have this kind of problem unless they tone down the export resolution, or cap it in their slicer.
In the last year of printing I can honestly say I have never been able to see a difference in print speed/quality between printing from SD and the stable release of Octoprint.
Both work perfectly, and when they don't it's generally me doing a spotty job of slicing. Using a Prusa i3 MK3 and an RPi 3B running 2 cameras and the Enclosure plugin on a 32GB SanDisk Ultra microSD. The printer's SD card is a 16GB SanDisk Ultra.
I remember back when I started out with an old Ultimaker and ancient slicing software I had trouble with buffer underruns and motion stalling because there was too much detail in curves and the XY on the Ultimaker could easily reach 150mm/s and above. As far as I can tell modern slicers are better at optimizing gcode for excessively detailed stl files.
It would be cool if you could send data between the Raspberry Pi and the Arduino over SPI instead of USB/UART; then you could reach much higher transfer speeds. It would require a ton of new code, though, and is not really realistic to implement given all the different controller hardware out there. And the SPI bus on the Arduino is occupied by the SD card, so it would probably require nasty hardware mods.
I would think if the interface were truly a USB interface, rather than serial masquerading as USB, we'd have plenty of bandwidth. But I'll be the first to admit, I have not put a lot of time into understanding the hardware side of this.
@John_Mc See my earlier post, which details the actual transfer rates that can be obtained by Octoprint with a "full USB" interface (not one that goes via a UART). Even with this type of interface, the send-and-wait scheme Octoprint uses limits the overall transfer speed. It is not just the interface but how it is used.
I think it's not the interface. It's the transfer overhead. Checksums have to be recalculated by the printer board and acknowledged to the host. The buffer also has to be managed. All of that takes calculation time away from the print job.
115 kbit/s serial speed, that is more than 9,500 characters per second. Even if a line has 25 characters, that is 380 command lines per second. I think that is fast enough.
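The arithmetic behind that, for reference: with standard 8N1 framing each character costs 10 bits on the wire (8 data bits plus start and stop), so the theoretical ceiling is actually a bit higher than the conservative figure above - before any per-line handshaking:

```python
BAUD = 115200        # bits per second on the wire
BITS_PER_CHAR = 10   # 8 data bits + start bit + stop bit (8N1)
LINE_LEN = 25        # assumed characters per gcode line, incl. newline

chars_per_sec = BAUD / BITS_PER_CHAR      # raw character throughput
lines_per_sec = chars_per_sec / LINE_LEN  # lines/s, ignoring acks

print(f"{chars_per_sec:.0f} chars/s, {lines_per_sec:.1f} lines/s")
```

Note this is the ceiling with the link saturated in one direction; the line-by-line acknowledgement round trips discussed in the replies eat into it substantially.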
As has been said in this thread, the baud rate is not the only limiting factor; the protocol and the processing power of the controller board are also major factors. Gcode gets sent line by line. For each line a checksum needs to be calculated, and then successful reception is communicated back to the host, before the next line can be sent.
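Concretely, in the classic RepRap/Marlin host protocol each framed line looks like `N<line number> <gcode>*<checksum>`, where the checksum is an XOR over every character before the `*`, and the host then waits for an `ok` (or `Resend: N`) before continuing:

```python
def reprap_checksum(line: str) -> int:
    """XOR of all character bytes before the '*', per the RepRap protocol."""
    cs = 0
    for ch in line:
        cs ^= ord(ch)
    return cs

def frame(line_number: int, gcode: str) -> str:
    """Build a checksummed line ready to send to the printer."""
    body = f"N{line_number} {gcode}"
    return f"{body}*{reprap_checksum(body)}"

print(frame(3, "T0"))  # the classic example: N3 T0*57
```

The checksum itself is cheap; it is the mandatory ack round trip after every single line that caps throughput.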
Here's an analogy: say you are performing a complex task that needs your attention. Juggling chainsaws or something like that. Now imagine someone telling you a story, and it is important for you to get and remember every last detail, so you say "ok" after every sentence you understood, or "come again?" after every sentence you didn't quite get. Do you think this process becomes much faster if the person telling the story starts speaking ridiculously fast?
There are a couple of ways around this. One is to have a different chip (or core) handle the communication. Another would be to change the protocol, for example to switch to something binary (less data) and not sending line-by-line but block by block (less handshaking overhead).
It is solved by using a "better" motherboard (a 32-bit one).
Many years ago (can't say for sure, but more than 6 and less than 10) Netfabb took it on themselves to try to fix the broken firmware for the BitsFromBytes RapMan 3.0 (and 3.1), a PIC32MX-based board that had a bunch of issues, one major one being unsafe reading from the SD card. It was one of the first boards (if not the first) out there that was standalone with an OLED and SD card reader, and they were using the FatFs library with its default configuration, which does not check the checksum when it reads data from the SD card, so a messed-up read would throw the print process to hell (weird movements, for example). They didn't figure out that the problem was in the FatFs implementation (I found that out after the fact, when I was working on improving the new firmware by the Netfabb team), and they solved the problematic reads by implementing a "binary g-code file", a.k.a. .bgc. That was a very simple structure of fixed-size "commands", each with a "command type" (a G or M code number translates directly into it) and command parameters (G-code and M-code parameters translate directly), with a CRC at the end of it all. It made reading from the SD card plus parsing faster by an order of magnitude, even with the CRC calculation, compared to string reading and parsing. I created the same process for USB communication and it was a lot faster than regular text; unfortunately I got some other priorities and never finished that firmware. I don't remember if Eric from Ultimaker used that one to make his version for the 32MX or based his on the original one from BitsFromBytes; it was all too long ago to remember. No clue what firmware they use now on RapMans, since I stopped contact with them after they sold themselves to the dark side (3D Systems).
Anyhow, using a simple binary protocol that easily maps to g-code would be cool, but it would require some serious support by the firmware and a redesign of the interface both in host apps like Octoprint, Pronterface and others, and in the firmwares... And since 8-bit boards should really become part of history, I doubt anyone cares to work on that, as with 32-bit boards you don't have such problems... And if you are using Octoprint, one can move to a 32-bit board for no money by flashing the 8-bit board with the Klipper client and running Klipper on the same board Octoprint is running on: no serial port madness, no speed problem, no resource issues...
But don't do it in such a way that the ASCII line is recalculated. Then nothing is gained.
The printer firmware should work completely in binary. So, no g-code any more.
A fixed data format would be appreciated, so the parser can work faster.
E.g. 2 bytes for the command, one byte for the number of parameters, then [one byte for the parameter letter (X, Y, Z, E, F etc.) and a sufficiently precise floating point value for each].
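A sketch of what packing that layout might look like; the field sizes here are my own reading of the suggestion (little-endian, 2-byte command code, 1-byte parameter count, then 1 letter byte plus a 32-bit float per parameter), not any existing firmware's format:

```python
import struct

def pack_command(code: int, params: dict) -> bytes:
    """Pack e.g. G1 X10.0 F3000 into the fixed binary layout above."""
    out = struct.pack("<HB", code, len(params))
    for letter, value in params.items():
        out += struct.pack("<cf", letter.encode("ascii"), value)
    return out

def unpack_command(data: bytes):
    """Inverse of pack_command: recover (code, params) from the bytes."""
    code, nparams = struct.unpack_from("<HB", data, 0)
    params, offset = {}, 3
    for _ in range(nparams):
        letter, value = struct.unpack_from("<cf", data, offset)
        params[letter.decode("ascii")] = value
        offset += 5
    return code, params

packed = pack_command(1, {"X": 10.0, "F": 3000.0})  # "G1 X10 F3000"
print(len(packed), unpack_command(packed))
```

A G1 move with two parameters fits in 13 bytes here, versus ~20 ASCII characters plus checksum, and the firmware-side "parser" becomes a couple of fixed-offset reads instead of string scanning.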
If you tried to suggest that, the hardcore "I do everything by hand!!1!" tinkerer crowd would immediately retaliate.
People love their ASCII plain text (even though they hate it when it is pointed out that this also means no UTF-8 content in printer responses) and their verbose line-by-line protocol, and it will be quite the feat to ever establish something different.
Sorry... had to vent a bit after some bad experiences in the past.