Python, Celery, Kubernetes, and Memory

I’ve been thinking about Python, Kubernetes, and long-lived pods (such as Celery workers), and I have some questions and thoughts.

When you use memory-intensive modules like pandas, would it make more sense for the “worker” to simply be a listener that forks a process (passing the environment, of course) to do the actual processing? The thought process here is that, by launching a subprocess, memory utilization should go back down once that subprocess exits: the memory is returned to the operating system, and any memory leaks in the processing code are contained within the short-lived child rather than accumulating in the long-lived worker.
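A minimal sketch of that pattern with the standard library's `multiprocessing` module (the big list allocation is just a stand-in for loading a large pandas DataFrame; the `"fork"` start method is Unix-only):

```python
import multiprocessing as mp


def heavy_task(n, result_queue):
    # Stand-in for memory-intensive work (e.g. pandas.read_csv on a
    # large file). The allocation lives only in this child process.
    data = list(range(n))
    result_queue.put(sum(data))


def run_in_subprocess(n):
    """Run heavy_task in a child process. When the child exits, all of
    its memory is returned to the OS, so the long-lived parent's
    footprint stays small even if the task leaks or fragments memory."""
    ctx = mp.get_context("fork")  # child inherits the parent's environment
    q = ctx.Queue()
    p = ctx.Process(target=heavy_task, args=(n, q))
    p.start()
    result = q.get()  # read the result before join() to avoid a queue deadlock
    p.join()
    return result


if __name__ == "__main__":
    print(run_in_subprocess(1_000_000))
```

For what it's worth, Celery's prefork pool can approximate this without a hand-rolled listener: setting `worker_max_tasks_per_child` (or the `--max-tasks-per-child` CLI flag) recycles each pool process after a fixed number of tasks, releasing its memory back to the OS.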

Secondly, with Kafka and Faust available, is Celery even relevant anymore for high-availability microservice applications?

I would really like to hear some real-world experience with any of these.
