Running into some 'weird' websocket problem


Hey all,

I'm the developer of OctoPod (an open source iOS client), and I'm troubleshooting a 'random' error I get in the client when reading from the websocket. FYI, I'm using Starscream (also open source) for the websocket connection.

I'm testing with OctoPrint latest from the repo, and this also happens with 1.3.8. I do not remember seeing this with 1.3.6 or earlier, but maybe it was happening there as well; IIRC, I also saw this error with 1.3.7. I haven't had time to spin up an old version to verify. Anyway, the client connects to the raw websocket.

The error I see on the client is that it randomly reads "corrupted" data from the websocket. Here is an example of what it received:

"tTimeLeft": null, "printTimeOriRecv: ok", "1 M19"Send:11N8 M83*16",rigin": null}}}{"current": {"logs": ["Recv: w452"]820offsets": {}, "serverTime": .......

At that same time, OctoPrint printed this WARNING message:

octoprint.server.util.sockjs - WARNING - Could not send message to client Existing exports of data: object cannot be re-sized
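For what it's worth, that exact message text looks like a Python `BufferError`: CPython raises it when a `bytearray` is resized while a `memoryview` export of it is still alive, which would fit two threads touching the same output buffer at once. A minimal reproduction of just the message (this only shows where the text can come from, not that this is literally what happens inside OctoPrint):

```python
# Reproduce "Existing exports of data: object cannot be re-sized" with a
# plain bytearray. A live memoryview "export" pins the buffer, so any
# attempt to resize it fails with BufferError.
buf = bytearray(b"payload")
view = memoryview(buf)  # creates an export of the buffer

err = None
try:
    buf.extend(b" more data")  # resizing while exported is illegal
except BufferError as e:
    err = str(e)
print(err)  # Existing exports of data: object cannot be re-sized

view.release()             # drop the export...
buf.extend(b" more data")  # ...and now the resize succeeds
```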

I'm using PyCharm to run the server, with some breakpoints that do not suspend execution but log the data being sent over the websocket. These are the two messages OctoPrint was trying to send right before the warning was printed to the console; everything sent before them was received correctly by the client.

{'event': {'type': 'Home', 'payload': None}}

{'current': {'logs': [u'Recv: ok', u'Send: N11 M190 S60*110', u'Recv: wait'], 'offsets': <frozendict {}>, 'serverTime': 1539227642.694162, 'busyFiles': [{'origin': 'local', 'path': u'PLA_20x20x15_calibration-shape.gcode'}], 'messages': [], 'job': <frozendict {'averagePrintTime': 23.049983152976402, 'lastPrintTime': 20.89697003364563, 'user': '_api', 'file': <frozendict {'origin': 'local', 'name': u'PLA_20x20x15_calibration-shape.gcode', 'date': 1539058651, 'path': u'PLA_20x20x15_calibration-shape.gcode', 'display': u'PLA 20x20x15_calibration-shape.gcode', 'size': 29007}>, 'estimatedPrintTime': 122.75602841248207, 'filament': {u'tool0': {u'volume': 0.18941659711422423, u'length': 78.75026999999955}}}>, 'temps': [<frozendict {'tool0': <frozendict {'actual': 160.0, 'target': 160.0}>, 'bed': <frozendict {'actual': 21.3, 'target': 60.0}>, 'time': 1539227642}>], 'state': <frozendict {'text': 'Printing', 'flags': <frozendict {'cancelling': False, 'paused': False, 'operational': True, 'pausing': False, 'printing': True, 'resuming': False, 'sdReady': True, 'error': False, 'ready': False, 'finishing': False, 'closedOrError': False}>}>, 'currentZ': None, 'progress': <frozendict {'completion': 3.0165132554211054, 'printTimeLeftOrigin': 'average', 'printTime': 15, 'printTimeLeft': 22, 'filepos': 875}>}}

OctoPrint was regularly sending {'current' ..} notifications just fine. And then an {'event'} notification appeared and ruined the party. :slight_smile:

I did a quick Google search on that WARNING and found a thread describing the same error. I wonder if OctoPrint has another thread, other than the original ioloop thread, writing to the websocket as mentioned in that issue, and whether that is the root cause of what I'm receiving on the client side.
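If two threads really do write to the same socket buffer, the usual cure is to funnel all writes through a single owner (in Tornado's case, the ioloop thread). A stdlib-only sketch of that single-writer pattern; the names `send_frame`, `writer_loop`, and the `frames` list are mine, not OctoPrint's, and `frames.append` stands in for the real socket write:

```python
import queue
import threading

outbox: "queue.Queue" = queue.Queue()
frames = []  # stands in for the real websocket

def writer_loop():
    """Single thread that owns the socket: drains the queue and writes
    one complete frame at a time, so frames can never interleave."""
    while True:
        msg = outbox.get()
        if msg is None:  # sentinel: shut down
            break
        frames.append(msg)  # real code would call sock.write_message(msg)

def send_frame(msg: str) -> None:
    """Callable from any thread; only enqueues, never touches the socket."""
    outbox.put(msg)

writer = threading.Thread(target=writer_loop)
writer.start()

# Several producer threads can now send concurrently without corruption.
producers = [
    threading.Thread(target=send_frame, args=('{"current": {}}',)),
    threading.Thread(target=send_frame, args=('{"event": {}}',)),
]
for t in producers:
    t.start()
for t in producers:
    t.join()

outbox.put(None)  # tell the writer to stop
writer.join()
print(frames)
```

Tornado itself offers `IOLoop.add_callback` for the same purpose: it is the documented thread-safe way to hand work to the ioloop thread instead of writing from elsewhere.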



In my previous post I talked about receiving "corrupted" data (possibly due to a threading issue). I sometimes receive the following text in a single frame on the client (remember that the client is connected to the raw websocket):

{"timelapse": null}{"event": {"type": "ClientOpened", "payload": {"remoteAddress": ""}}}

I was looking into Starscream's code in case the problem was there, when I noticed that each time this happens I get this WARNING in OctoPrint's log:

octoprint.server.util.sockjs - WARNING - Could not send message to client

In this case there is no detail about the error, so I have no idea what caused it. There are no ERRORs or other WARNINGs before that one.

2018-10-11 03:56:47,346 - octoprint.server.util.sockjs - INFO - New connection from client:
2018-10-11 03:56:47,609 - octoprint.server.util.sockjs - WARNING - Could not send message to client

I wonder if this is the same problem manifesting itself in this other way.
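In the meantime, a client-side workaround for this second symptom is to split frames that contain several concatenated JSON documents. The real client is Swift/Starscream, but the idea can be sketched in Python with `json.JSONDecoder.raw_decode`, assuming frames are valid JSON documents glued back to back:

```python
import json

def split_concatenated_json(frame: str):
    """Yield each JSON document found back to back in a single frame."""
    decoder = json.JSONDecoder()
    idx = 0
    while idx < len(frame):
        # raw_decode parses one document and reports where it ended
        obj, end = decoder.raw_decode(frame, idx)
        yield obj
        idx = end
        # skip any whitespace between documents
        while idx < len(frame) and frame[idx].isspace():
            idx += 1

# The exact frame received above: two documents in one frame.
frame = (
    '{"timelapse": null}'
    '{"event": {"type": "ClientOpened", "payload": {"remoteAddress": ""}}}'
)
msgs = list(split_concatenated_json(frame))
for msg in msgs:
    print(msg)
```

This only papers over the symptom, of course; interleaved (rather than merely concatenated) frames like the one in my first post cannot be recovered this way, so the server-side fix is still needed.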


I opened an issue, since I think that would be a better place for this.


Opening an issue on this was a good choice! I'm currently working through a rather large mail backlog, but once I'm through that I'll look into the ticket.


Great, thanks Gina. In the opened issue I continued the analysis and proposed a tentative fix that I tested over the weekend and that solved the problem. If you think it is on the right track (disclaimer: I do not know Python), I can create a PR.