I've discovered a slow memory leak in my plugin, having isolated things enough to determine that it is definitely the cause. Running htop shows me no fewer than 39 instances of python running OctoPrint, of which 21 are from my plugin. It looks like htop probably isn't the tool for troubleshooting this, though, because the memory-related values lump all 39 process IDs together. Likewise, mem just gives totals. I've attempted to use the technique below within each of my classes...
from resource import getrusage, RUSAGE_SELF
...
def DisplayMemoryUsed(self):
    try:
        # ru_maxrss is the peak resident set size for the whole process
        op.obj._logger.info('ClassName() usage: ' + str(getrusage(RUSAGE_SELF).ru_maxrss))
    except Exception as e:
        op.obj._logger.info('Failed to track memory... ' + str(e))
...unfortunately it's not giving me the individual object's memory; it's just showing a growing aggregate for the plugin itself. It looks like the Python v2.7.13 that I'm using doesn't support the RUSAGE_THREAD value for the who argument.
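For reference, resource.RUSAGE_THREAD only arrived in Python 3.2 and is Linux-specific, so a per-thread reading on a newer interpreter would look something like the sketch below; not something this Python 2.7 install can run.
# Python 3.2+ on Linux only; RUSAGE_THREAD does not exist on Python 2.7
from resource import getrusage, RUSAGE_THREAD

def thread_usage():
    ru = getrusage(RUSAGE_THREAD)
    # the CPU times are genuinely per-thread; ru_maxrss may still
    # reflect the process as a whole, so treat it with suspicion
    return ru.ru_utime, ru.ru_stime, ru.ru_maxrss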
I've tried commenting out any calls to Clock.schedule_interval() which might have run code on a recurring basis, but this hasn't really slowed the steady drain of RAM.
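If anyone is trying the same elimination step, Kivy returns a ClockEvent from schedule_interval() that can be cancelled at runtime instead of commenting calls out; a minimal sketch, with a hypothetical callback name:
from kivy.clock import Clock

def my_callback(dt):              # hypothetical recurring handler
    pass

# keep a handle on the ClockEvent so it can be stopped at runtime
event = Clock.schedule_interval(my_callback, 120)

# ...later, while hunting the leak:
event.cancel()                    # stop the recurring call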
I guess next I'll try to force garbage collection at some python level but I'm open to suggestions.
You could use sys.getsizeof() to see the memory used by particular objects, maybe? This would be particularly useful if your data is self-contained in class objects.
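One caveat I'd add: sys.getsizeof() reports only the shallow size of an object, not whatever it references, so for containers you have to walk the contents yourself. A minimal sketch:
import sys

data = {'frames': [bytearray(1024) for _ in range(10)]}
print(sys.getsizeof(data))                            # size of the dict structure only
print(sum(sys.getsizeof(b) for b in data['frames']))  # the buffers it references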
What kind of imports are you using? Lots of C extensions have memory leak issues due to the complicated and inconsistent refcount mechanism. I learned this the hard way (I still have a small memory leak somewhere in my CPython code, but it's hard to even tell). Anyway, if you find a way that works, please let me know. I for one am anxiously awaiting our transition to Python 3.
I'll give that a try. I'm right in the middle of cloning the microSD and updating Kivy (1.10.1 -> 1.11.1) in place on that one to see if it's any better.
I suspect that Kivy is the problem here, to be honest. My plugin is huge (at least 189 files in the display portion of it) and the .kv file itself is 11K lines long. I do have a fair number of classes which I've created to carve things up. I'll try your suggestion.
Wow, what are you working on @OutsourcedGuru, that's a lot of files.
It's a plugin to support a new 3D printer, actually.
Unfortunately, it looks like this method only returns sizes for well-known classes. If you've written anything yourself, or you're pulling in code from Kivy as in this case, you'd have to write your own MyClass.get_elements() iterator for everything there. For what it's worth, here is a nifty total-size bit of Python code, but it still needs those iterators to be written.
And yet... I've rewritten it a bit and it seems to report each object's size (even for types outside of that handler list).
from sys import getsizeof
from itertools import chain
from collections import deque

def total_size(o):
    dict_handler = lambda d: chain.from_iterable(d.items())
    all_handlers = {tuple: iter,
                    list: iter,
                    deque: iter,
                    dict: dict_handler,
                    set: iter,
                    frozenset: iter,
                    }
    seen = set()                 # track which object ids have already been seen
    default_size = getsizeof(0)  # estimate sizeof object without __sizeof__

    def sizeof(o):
        if id(o) in seen:        # do not count the same object multiple times
            return 0
        seen.add(id(o))
        s = getsizeof(o, default_size)
        for typ, handler in all_handlers.items():
            if isinstance(o, typ):
                s += sum(map(sizeof, handler(o)))
                break
        else:
            # fall back to walking instance attributes for custom classes
            if not hasattr(o.__class__, '__slots__'):
                if hasattr(o, '__dict__'):
                    s += sizeof(o.__dict__)
            else:
                s += sum(sizeof(getattr(o, x)) for x in o.__class__.__slots__ if hasattr(o, x))
        return s

    return sizeof(o)
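Called against one of the plugin's own objects, usage would look something like this (the class name below is just a hypothetical stand-in):
class Sample(object):            # hypothetical stand-in for one of my classes
    def __init__(self):
        self.frames = [bytearray(1024) for _ in range(100)]

s = Sample()
print(total_size(s))             # deep size, including the __dict__ contents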
For what it's worth, I've tried adding a two-minute recurring garbage collection cycle, which doesn't seem to be helping, honestly. I think I can rule out python-based allocations... perhaps.
from gc import collect
# As invoked every two minutes...
collect()
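If anyone wants to go a step further: gc.collect() returns the number of unreachable objects it found, and gc.garbage accumulates anything the collector couldn't free (under Python 2, usually reference cycles involving __del__), which helps separate a pure-Python leak from a C-extension one. A minimal sketch:
import gc

unreachable = gc.collect()
print('gc found %d unreachable objects' % unreachable)
if gc.garbage:
    # objects the collector could not free; under Python 2 these are
    # typically reference cycles involving __del__ methods
    print('uncollectable: %r' % gc.garbage[:10])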
Can you let me know approximately how much memory is leaking? I've been having a similar leak, and have thrown everything at it (including the kitchen sink) with no success. My best guess would be that some CPython module is leaking somewhere along the way, or there is some kind of caching process that is constantly growing.
Memory leaks with a GC... ugh.
Both with and without my own bumps to GC, it's approximately 1 MB/minute. By the end of Friday I had finally traced this down to Kivy (1.10.1)'s Video class.
From my own perspective, as backed up by a lot of research lately: if it's within the realm of Python then it probably falls under the rules of garbage collection. If you've compiled something, or you're calling something written in C/C++ or similar that does standard malloc-style allocation, then that's the likeliest area to look.
Random links which could help illuminate the problem:
Thanks for that, I'm sure it will come in handy.
1 MB/min! Eeek.
My own problems aren't due to malloc (confirmed via memory profiling), but are either due to some other CPython code, or to me not handling refcounts properly when I create Python objects in C++. I will read through the links you sent, and will hopefully find something illuminating.
And finally, I kicked its butt (I think). I'm not sure exactly what the cause was, but I've split my own derived WebcamImage class into two (one per screen it appears in), plus a recurring handler which notes when each finally loads and then toggles its state to stop. Upon entering/exiting each page I then toggle the play/stop state.
I'll keep an eye on it. I may have to unload()/reload() it every hour perhaps and see if that's cleaner.
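For anyone fighting the same thing, the enter/exit toggling looks roughly like the sketch below; this assumes a plain Kivy Video widget on a Screen rather than my actual derived class, and the names and source URL are hypothetical:
from kivy.uix.screenmanager import Screen
from kivy.uix.video import Video

class WebcamScreen(Screen):      # hypothetical screen hosting the stream
    def __init__(self, **kwargs):
        super(WebcamScreen, self).__init__(**kwargs)
        self.video = Video(source='webcam.m3u8', state='stop')
        self.add_widget(self.video)

    def on_enter(self, *args):
        self.video.state = 'play'    # only stream while the page is visible

    def on_leave(self, *args):
        self.video.state = 'stop'    # release the player when hidden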
Update: Nope. It's still leaking at the same rate at idle.
Update 2: Kivy support is trying to tell me that the underlying gstplayer in Raspbian Stretch is to blame, suggesting that Buster's is much cleaner.