I feel like etcd is one of the few use cases where Intel Optane would actually make sense. I build and run several bare metal clusters with over 10k nodes and etcd is by and large the biggest pain for us. Sometimes an etcd node just randomly stops accepting any proposals which halts the entire cluster until you can remove the bad etcd node.
From what I remember, GKE has implemented an etcd shim on top of spanner as a way to get around the scalability issues, but unfortunately for the rest of us who do not have spanner there aren’t any great options.
I feel that, at a fundamental level, pod affinity, anti-affinity, and topology spread constraints are not compatible with very large clusters due to the complexity explosion they cause.
Another thing to consider is that the larger a cluster becomes, the larger the blast radius is. I have had clusters of 10k nodes spectacularly fail due to code bugs within k8s. Sharding total compute capacity into multiple isolated k8s clusters reduces the likelihood that a software bug is going to take down everything, as you can carefully upgrade only a single cell at a time with bake periods between each cell.
Yep, every cluster approaching 10k nodes that I know of has either pared back etcd's durability guarantees or rewritten and replaced it in some manner. Actually, the post goes into detail about doing exactly this, and the Alibaba paper they reference says about the same.
> Sharding total compute capacity into multiple isolated k8s clusters reduces the likelihood that a software bug is going to take down everything, as you can carefully upgrade only a single cell at a time with bake periods between each cell.
Yeah, I've been meaning to try out something like Armada to simplify things on the cluster-user side. Cluster-providers have lots of tools to make managing multiple clusters easier but if it means having to rewrite every batch job..
It's nice to know that the upper bound of the resiliency of a k8s cluster is the amount of redundancy etcd has - which is in essence a horizontally scaled monolith.
AFAIK all the hyperscalers have replaced etcd for their managed Kubernetes services [1], [2], [3] - though Azure is the least clear about what they actually use currently.
[1]: https://aws.amazon.com/blogs/containers/under-the-hood-amazo...
[2]: https://cloud.google.com/blog/products/containers-kubernetes...
[3]: https://azure.microsoft.com/en-us/blog/a-cosmonaut-s-guide-t...
Interestingly, Azure's etcd-compatible service was withdrawn before it ever exited public preview.
[1] https://learn.microsoft.com/en-us/answers/questions/154061/a...
That's really impressive and an interesting experiment.
I was about to say that Nomad did something similar, but that was 2 million Docker containers across 6100 nodes, https://www.hashicorp.com/en/c2m
This is an awesome experiment and write-up. I really appreciate the reproducibility.
I would like to see how moving to a database that scales write throughput with replicas would behave, namely FoundationDB. I think this would require more than an intermediary like kine to be efficient, as the author illustrates that the apiserver does a fair bit of its own watching and state keeping. I also think there's benefit, at least for blast radius, in sharding the server by API group or namespace.
I think years ago this would have been a non-starter with the community, but given that AWS has replaced etcd (or at least aspects of it) with their internal log service for their large-cluster offering, I bet there's some appetite for making this interchangeable and bringing an open source solution to market.
I share the author's viewpoint that for modern cloud-based deployments you're probably best off avoiding it and relying on VMs being stable and recoverable. I think reliability does matter if you want to actually realize the "borg" value and run it on bare metal across a serious fleet. I haven't found the business justification to work on that, though!
I read this as napkin math[1] for Kube and thoroughly enjoyed it. You can only find the numbers that matter for performance and scaling by trying to accomplish some kind of goal. Benchmarks are mostly bikeshedding.
[1]: https://sirupsen.com/napkin
Love this. There's no reason k8s shouldn't scale much further.
If you don't need the isolation of k8s then don't forget about Erlang, which is another option for scaling up to 1 million functions. Obviously k8s containers (which are fundamentally just isolated processes) and Erlang processes are not interchangeable things, but when you're thinking about needing on the order of millions of processes, Erlang is pretty good prior art.
This is 1m nodes, you typically run tens or hundreds of pods per node, each with one or more containers. So more like 100m+ functions if I follow the Erlang analogy correctly?
This is not analogous. It’s just someone beating the Erlang drum. You can’t PyTorch in Erlang.
There are similar libraries in Elixir. Is the ecosystem for ML as developed as for Python? Nope, but not every ML project needs the most obscure libraries etc.
(For the record I don't really see Erlang clusters as a replacement for k8s)
Kubernetes is way heavier than Erlang’s lightweight processes, so for millions of tasks at scale, a middle-ground solution could blend Erlang’s concurrency efficiency with k8s’ orchestration power, dodging containers’ overhead while keeping flexibility for diverse workloads. That's if you don't actually need the strict isolation of pods/containers and you're just trying to run something at massive scale. I don't get why so many people want to run everything as heavy container processes or pods vs coming up with a better solution. The point is we don't have to fit every problem into the shoe called kubernetes if it doesn't seem to fit, and we should look at other ways to spin up millions of processes
I don't get the point of benchmarking k8s without the guarantees of etcd. At some point, you are just competing with clusterssh.
How often do you have sudden host failures? Especially if you use a half-decent server with redundant components for the DB node?
Once in maybe 10 years?
The node failure rate is much higher than that. On a 1M node cluster of cloud-managed instances (AWS, GCP, Azure, etc.) you'd likely see failures a few times a month, if not more.
Instead of giving up the good guarantees of etcd, a better approach may be grouping some nodes together to create a tree-like structure with sub-clusters.
That was the whole concept behind KCP iirc. It was designed to provide tenancy atop 1 or more clusters.
This is an absolutely incredible technical deep-dive. The section on replacing etcd with mem_etcd resonates with challenges we've been tackling at a much smaller scale building an AI agent system.
A few thoughts:
*On watch streams and caching*: Your observation about the B-Tree vs hashmap cache tradeoff is fascinating. We hit similar contention issues with our agent's context manager - switched from a simple dict to a more complex indexed structure for faster "list all relevant context" queries, but update performance suffered. The lesson that trading away O(1) writes for faster ordered reads is the wrong call in high-write workloads seems universal.
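To make that tradeoff concrete, here is a minimal Go sketch (illustrative only, not the post's mem_etcd code) contrasting a plain map with the github.com/google/btree package: the map gives O(1) inserts but a prefix listing has to scan every key, while the B-Tree pays O(log n) per insert and gets cheap ordered prefix scans in return.

```go
package main

import (
	"fmt"
	"strings"

	"github.com/google/btree"
)

// kv is a key/value pair ordered by key, so the B-Tree keeps keys sorted
// and can answer prefix ("list") queries with a short range scan.
type kv struct{ key, val string }

func (a kv) Less(b btree.Item) bool { return a.key < b.(kv).key }

func main() {
	// Hashmap: O(1) writes, but listing a prefix means walking every entry.
	m := map[string]string{"/registry/pods/default/web-1": "{}"}
	_ = m

	// B-Tree: O(log n) writes, but a prefix listing stops as soon as the
	// iteration walks past the prefix.
	t := btree.New(32)
	t.ReplaceOrInsert(kv{"/registry/pods/default/web-1", "{}"})
	t.ReplaceOrInsert(kv{"/registry/pods/kube-system/dns-1", "{}"})

	prefix := "/registry/pods/default/"
	t.AscendGreaterOrEqual(kv{key: prefix}, func(i btree.Item) bool {
		item := i.(kv)
		if !strings.HasPrefix(item.key, prefix) {
			return false // past the prefix, stop iterating
		}
		fmt.Println(item.key)
		return true
	})
}
```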
*On optimistic concurrency for scheduling*: The scatter-gather scheduler design is elegant. We use a similar pattern for our dual-agent system (TARS planner + CASE executor) where both agents operate semi-independently but need coordination. Your point about "presuming no conflicts, but handling them when they occur" is exactly what we learned - pessimistic locking kills throughput far worse than occasional retries.
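For readers who haven't seen it, the standard Kubernetes expression of that idea is resourceVersion-based optimistic concurrency: apply the write assuming nobody else has touched the object, and retry only when the apiserver reports a conflict. A hedged client-go sketch (the namespace, pod name, and in-cluster config are placeholders):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := rest.InClusterConfig() // placeholder: any rest.Config works
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// RetryOnConflict re-reads and re-applies the mutation only when the
	// apiserver rejects the update because our resourceVersion went stale.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, err := client.CoreV1().Pods("default").Get(context.TODO(), "web-1", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels["touched"] = "true"
		_, err = client.CoreV1().Pods("default").Update(context.TODO(), pod, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
}
```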
*The spicy take on durability*: "Most clusters don't need etcd's reliability" is provocative but I suspect correct for many use cases. For our Django development agent, we keep execution history in SQLite with WAL mode (no fsync), betting that if the host crashes, we'd rather rebuild from Git than wait on every write. Similar philosophy.
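The whole tradeoff there fits in two pragmas. A minimal sketch of that SQLite configuration, shown in Go for consistency with the other snippets (the driver and file name are placeholder assumptions):

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3" // assumed driver; any SQLite driver works
)

func main() {
	db, err := sql.Open("sqlite3", "history.db") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// WAL mode turns writes into appends to a log; synchronous=OFF skips
	// fsync, trading the last few moments of history on a crash for much
	// higher write throughput.
	if _, err := db.Exec("PRAGMA journal_mode=WAL"); err != nil {
		log.Fatal(err)
	}
	if _, err := db.Exec("PRAGMA synchronous=OFF"); err != nil {
		log.Fatal(err)
	}
}
```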
The mem_etcd implementation in Rust is particularly interesting - curious if you considered using FoundationDB's storage engine or something similar vs rolling your own? The per-prefix file approach is clever for reducing write amplification.
Fantastic work - this kind of empirical systems research is exactly what the community needs more of. The "what are the REAL limits" approach vs "conventional wisdom says X" is refreshing.
People don’t realize how crucial Ben was in the forming of OpenAI as it is known today. This is an extremely underrated post.
Typical large-scale high-performance computing clusters are at a size of 10k nodes (for instance Jupiter and SuperMUC in Germany) [1]. These centers are remarkably big buildings. I wonder how many single 1M-node k8s clusters there are in the world right now. Most likely at the hyperscalers.
[1] what is a node? Typically it is a synonym for "server". In some configurations HPC schedulers allow node sharing. Then we are talking about on the order of 100k cores to be scheduled.
I doubt any hyperscalers are running 1M-node clusters either. They probably just have groups of clusters at each datacenter and some overall scheduler that determines which cluster is best suited for a workload at deployment time, then connects to that cluster and schedules the workload.
Some hyperscalers even have services for that, which make it possible to have cross-cluster ingress, among other things, and to have ingress across multiple clusters in different regions that somewhat work together.
This post is just about a reference architecture, but last I knew OpenAI ran VMs on Azure.
https://openai.com/index/scaling-kubernetes-to-7500-nodes/
>> [1] what is a node? Typically it is a synonym for "server". In some configurations HPC schedulers allow node sharing
I'm sure they mean actual servers, not just cores. Even in traditional HPC it usually isn't abstracted down to the level of individual cores, since most HPC jobs care about memory bandwidth - even with InfiniBand or other techniques, throughput/latency is much worse than on a single machine. Of course, multiple machines are connected (usually using MPI / InfiniBand), but it's important to minimize communication between nodes where possible.
For AI workloads they are running GPUs - 10K+ cores on a single device - so it's even less likely they're talking about cores here.
Without publishing the mem_etcd code, and without telling us what happens when one of the etcd/mem_etcd nodes dies so we can compare, this write-up doesn't provide much information.
I believe the code is here:
https://github.com/bchess/k8s-1m/tree/main/mem_etcd
https://github.com/bchess/k8s-1m/blob/main/RUNNING.adoc#mem_...
“Perhaps my spiciest take from this entire project: most clusters don’t actually need the level of reliability and durability that etcd provides.”
This assumption is completely out of touch, and is especially funny when the goal is to build an extra large cluster.
etcd is also the entire point of k8s: it's a single self-contained framework and doesn't require an external backing service. There is no Kubernetes without etcd. Much of the "secret sauce" of Kubernetes is the "watch etcd" logic that watches desired state and runs the cybernetic loop to make the observed state adhere to the desired state.
The API and controller loops are the point of k8s. etcd is an implementation detail and lots of clusters swap it out for something else like sqlite. I'm pretty sure that GCP and Azure are using Spanner or Cosmos instead of etcd for their managed offerings.
Yep. K3s can use SQLite or Postgres.
Not exactly a fair assessment, since neither of those was out and/or available to the Kubernetes team at the time. Sure, other things may now or eventually become better suited for the Kubernetes data plane, but at the time, if etcd hadn't been used there would be no Kubernetes today.
The Kubernetes team chose etcd specifically because they were trying to replace Borg's master/slave database at Google. Nothing about Kubernetes requires etcd; the team was trying to solve a Google-internal problem with it (and in the end, it didn't gain traction within Google). k3s uses SQLite by default, which was an option at the time; other clusters today use PostgreSQL.
Have you looked at the etcd keys and values in a Kubernetes cluster? It's a _remarkably_ simple schema you could do in pretty much any database with fast prefix or path scans.
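For anyone who hasn't poked at it: the apiserver keeps everything under a flat /registry/<resource>/<namespace>/<name> keyspace, so "fast prefix scans" really is the whole requirement. A quick sketch with the official etcd Go client (the endpoint is a placeholder; values are protobuf-encoded by default, so this only lists keys):

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"}, // placeholder endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Kubernetes stores objects under keys like /registry/pods/<ns>/<name>,
	// so listing a resource type is just a prefix scan.
	resp, err := cli.Get(context.TODO(), "/registry/pods/",
		clientv3.WithPrefix(), clientv3.WithKeysOnly(), clientv3.WithLimit(20))
	if err != nil {
		panic(err)
	}
	for _, kv := range resp.Kvs {
		fmt.Println(string(kv.Key))
	}
}
```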
Is it? I honestly kinda believe that etcd is probably the weakest point in vanilla k8s. It is simply unsuitable for write-heavy environments and causes lots of consistency problems under heavy write load; it's generally slow, it has value-size constraints, it offers very primitive querying, etc. Why not replace etcd altogether with something like Postgres + Redis/NATS?
That touches on what I consider the dichotomy of k8s: it's a really scalable system that also makes it easy to spin up a cluster locally on your laptop and interact with the full API just like in prod. So it's a super scalable system with a dense array of features. But paradoxically, most shops won't ever need the vast majority of k8s features, and by the time they scale to where they do need a ton of distributed init features, they're extremely close to the point where they'd be better served by a bespoke system conceived from scratch in house that solves problems very specific to the business in question. If you have many thousands of k8s nodes, you're probably in the gray area of whether using k8s is worth it, because the k8s pull/watch control plane will never be as fast as a centralized push control plane. And naturally, at scale that problem will only compound.
but it's also standard, you can hire for it, outsource it, etc.
and it's pretty modular too, so it can even serve as the host for the bespoke whatever that's needed
Though I remember reading the fly.io blog post about their custom scheduler/allocator, which illustrates nicely how much of a difference a custom in-house solution makes if it works well.
The other draw: Because k8s is open, you can easily hire employees, contractors, consultants and vendors and have them immediately solve problems within the k8s ecosystem. If you run a bespoke system, you have to train engineers on the system before they can make large contributions.
> Why not replace etcd altogether with something like Postgres + Redis/NATS?
Holy Raft protocol is the blockchain of cloud.
You can do leader election without etcd. The thing etcd buys you is you can have clusters of 3, 5, 7 or 9 DB nodes and lose up to 1, 2, 3, or 4 nodes respectively. But honestly, the vast majority of k8s users would be fine with a single SQL instance backing each k8s cluster and just running two or more k8s clusters for HA.
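To spell out the arithmetic behind those numbers: a Raft cluster of n members needs a majority quorum of n/2+1, so it tolerates (n-1)/2 failures. A trivial sketch:

```go
package main

import "fmt"

// faultTolerance returns how many members an n-member Raft/etcd cluster
// can lose while a majority quorum of n/2+1 members is still available.
func faultTolerance(n int) int { return (n - 1) / 2 }

func main() {
	for _, n := range []int{3, 5, 7, 9} {
		fmt.Printf("%d members: quorum %d, tolerates %d failures\n", n, n/2+1, faultTolerance(n))
	}
}
```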
k3s doesn't require etcd, I'm pretty sure GKE uses Spanner and Azure uses Cosmos under the hood.
The API server is the thing. It so happens that the API server can mostly be a thin shell over etcd. But etcd itself while so common is not sacrosanct.
https://github.com/k3s-io/kine is a reasonably adequate substitute for etcd. SQLite, MySQL, and PostgreSQL can also be substituted in. Etcd is built from the ground up to be more scale-out reliable, and that rocks to have baked in. But given how easy it is to substitute etcd out, I feel like we are at least a little off if we're trying to say "etcd is also the entire point of k8s" (the API server is).
It's been a while since I've checked this but a few years ago we tried to limit test kine on a large-ish cluster and it performed pretty poorly. It's fine for small clusters but the way they have to implement the watch semantics makes it perform poorly (at least this was the case a few years ago).
That's fair, but the fact that 99% of all apiserver deployments in the world have the same standard boilerplate footprint is a large part of why it became so ubiquitous. That people running it locally don't have to make any decisions about which database to deploy or why to use this one over that one - and that the situation is the same in production, so people doing stuff in dev aren't punched in the face by an exponentially more complex system - is huge.
> etcd is also the entire point of k8s: it's a single self-contained framework and doesn't require an external backing service. There is no Kubernetes without etcd.
Sorry, this is just BS. etcd is a fifth wheel in most k8s installations. Even the largest clusters are better off with something like a large-ish instance running a regular DB for the control plane state storage.
Yes, etcd theoretically protects against any kind of node failures and network partitions. But in practice, well, nobody really cares about the control plane being resilient against meteorite strikes and Cthulhu rising from the deeps.
I'm with you, I think most people might think they don't need this reliability, until they do. I'm sure there is some subset of clusters where the claim is correct.
But take the article's suggestion of turning off fsync and expecting to lose only a few ms of updates: I've tried to recover etcd on volumes that lied about fsync and then experienced a power outage, and I don't think we ever managed to recover it. There might be more options now to recover and ignore corrupted WAL entries, but at the time it was very difficult, and I think we ended up just reinstalling from scratch. For clusters where this doesn't matter, or where the SLOs for recovery account for it, I'm totally onboard - but only if you know what you're doing.
Similarly, the article's point that "full control plane data loss isn't catastrophic in some environments" is correct, in the sense of what the author means by some environments. I just don't think it's limited to those managed by gitops, as suggested, but rather to those where there is enough resiliency and time to redeploy and do all the cleanup.
Anyways, like much advice on the internet, it's not good or bad, just highly situational, and some of the suggestions should only be applied if the implications are fully understood.