As containerized environments like Kubernetes become prevalent, knowing how much physical memory the JVM will use becomes ever more important. Typical thread-per-request web frameworks can easily use thousands of threads, which can contribute significantly to the memory footprint. In this article I explore the base amount of physical memory a JVM thread stack uses on Linux, which can help guide decisions on sizing thread pools.
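As a rough starting point, the reserved (not resident) stack footprint scales linearly with thread count and `-Xss`. The sketch below assumes 1000 threads and the usual 64-bit Linux default of `-Xss1m`; actual resident memory is normally much lower, since Linux only commits stack pages as they are touched:

```shell
# Upper bound on reserved stack memory: threads × -Xss.
# 1024 KiB is the common 64-bit Linux default for -Xss; resident usage
# is typically far smaller because pages are committed lazily.
threads=1000
xss_kib=1024
echo "$(( threads * xss_kib / 1024 )) MiB reserved"   # → 1000 MiB reserved
```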
I’m a bit of a home networking nerd. After many iterations, I’ve settled on a custom built Linux home router. My goals are:
- As secure as possible.
- Supports fq_codel and the ability to disable kernel offloads to favor latency and eliminate bufferbloat.
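For reference, that kind of tuning might look like the following sketch (eth0 is a placeholder WAN interface, and the commands need root):

```shell
# Disable offloads (TSO/GSO/GRO) so the qdisc sees real packet sizes,
# then install fq_codel as the root qdisc to fight bufferbloat.
ethtool -K eth0 tso off gso off gro off
tc qdisc replace dev eth0 root fq_codel
```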
Updated 2018-06-05: replaced gorb with merlin.
In addition to the Kubernetes stack on AWS, I’m also helping to build an on-premises Kubernetes platform. We want to continue to leverage feed, the ingress controller we built. Ingress generally requires an external IP load balancer to front requests from the internet and elsewhere. In AWS we use ELBs; on-premises, we need to build our own.
The solution we’ve settled on for now is:
- IPVS with consistent hashing (using built-in source hash module) and direct-return.
- merlin to provide an API for IPVS so our ingress controller can attach and detach itself.
- VIPs registered to a DNS entry with active/passive failover, handled by keepalived.
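A minimal sketch of the IPVS part with ipvsadm (the VIP 10.0.0.10 and real server 10.0.1.21 are placeholders): `-s sh` selects the built-in source-hash scheduler, and `-g` selects gatewaying, i.e. direct return:

```shell
# Create a virtual service on the VIP with source-hash scheduling,
# then attach an ingress node as a real server using direct routing.
ipvsadm -A -t 10.0.0.10:443 -s sh
ipvsadm -a -t 10.0.0.10:443 -r 10.0.1.21:443 -g
```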
Over the last two years I’ve been building an in-house PaaS system based on Kubernetes at Sky. We started on Kubernetes 1.0, in its early days. It’s been a challenging and fun experience.
Continue reading “A Kubernetes Stack from Scratch”
I never did XP style pairing until I arrived in London. My experience had been mostly solo work, with plenty of team collaboration, and some rare pairing on tough problems. I was pretty excited about trying something new. And it’s part of why I picked my first role in London.
Every now and then a bit of networking knowledge comes in handy.
We’re using Cassandra for some fallback behaviour in my current project. Whenever a downstream system is successfully hit, we store a copy of the data locally that we can fall back to in case of downstream failure.
During load tests of the fallback behaviour, we started getting crazy long timeouts on reads.
I’ve been working on a CI trigger that runs particular jobs depending on which project changed. The tricky aspect is we have a single git repository. So given a commit hash, we want to determine which projects to trigger builds for.
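The core of the mapping can be done with `git diff-tree`, assuming one directory per project at the repo root (the demo below builds a throwaway repo with placeholder projects app-a and app-b to show the idea):

```shell
# Demo in a throwaway repo: two projects, one commit touching only app-a.
tmp=$(mktemp -d); cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
mkdir app-a app-b
echo x > app-a/main.go; echo y > app-b/main.go
git add .; git commit -qm init
echo z >> app-a/main.go
git add .; git commit -qm change
# List the top-level project dirs changed by the latest commit:
git diff-tree --no-commit-id --name-only -r HEAD | cut -d/ -f1 | sort -u   # → app-a
```

In the real trigger, HEAD would be replaced by the commit hash delivered by the CI webhook.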
VisualVM and jconsole are two useful tools for debugging JVM issues. However, they both rely on a JMX port being open on the remote instance. You can work around this on the fly by running jstatd on the remote host, but you’ll find certain things disabled. So ideally, the JMX port will be enabled at startup.
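A sketch of the startup flags that enable remote JMX (the port 9010, disabled auth/SSL, and app.jar are placeholder choices, only reasonable on a trusted network):

```shell
# Enable a fixed JMX port at startup so VisualVM/jconsole can attach remotely.
# Pinning the RMI port to the same value simplifies firewall rules.
java \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9010 \
  -Dcom.sun.management.jmxremote.rmi.port=9010 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -jar app.jar
```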