Of course I could just drop $15 grand on a dual-socket EPYC Rome motherboard and a pair of 64-core CPUs, for 128 cores total. https://www.newegg.com/supermicro-mbd-h11dsi-n702-ma015-o-dual-amd-epyc-7000-series/p/N82E16813183691?Description=epyc%20motherboard&cm_re=epyc_motherboard-_-13-183-691-_-Product
Imagine a 128 core machine running a hypervisor.
On four cores you have Linux set up as a dedicated I/O processor; it's the only process with access to the peripherals. On another four cores you have Linux set up as a memory-pipe processor and container manager.
On the remaining 120 cores you have containers with one or two cores each. In the containers you run discrete services built as unikernels, communicating only via memory pipes set up either as pure serial streams or as network sockets.
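As a toy sketch of that service-to-service model: here a "memory pipe" is stood in for by a plain `socket.socketpair()` (the real thing would be a hypervisor-provided channel or vsock; this is just my illustration of the serial-stream idea), with one discrete service on each end.

```python
# Two "services" talking over a socket pair, a stand-in for a
# hypervisor-provided memory pipe between containers.
import socket
import threading

def echo_service(conn):
    # A discrete service: read one request off the pipe, acknowledge it.
    data = conn.recv(1024)
    conn.sendall(b"ack:" + data)
    conn.close()

# socketpair() gives two connected endpoints in one process; in the
# imagined machine each end would live in its own unikernel container.
a, b = socket.socketpair()
t = threading.Thread(target=echo_service, args=(b,))
t.start()

a.sendall(b"hello")
reply = a.recv(1024)   # b'ack:hello'
t.join()
a.close()
```

The same code shape works whether the endpoints are a socket pair, a Unix domain socket, or a TCP connection, which is part of the appeal of exposing the pipes as sockets.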
Basically we are talking about running a data center in a single computer. Most likely with similar use cases, software architectures, and management tools predominating.
And, sure, most of us don't really want to be running big data services or Oracle at home. But there are other use cases. Render farms, for example.
Personally I'd like to try building a massive actor system optimized for running simulations using millions of actors, with a hierarchical communications metaphor.
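To make the "hierarchical communications metaphor" concrete, here is a minimal sketch of actors addressed by path, where a message climbs toward the root until a path segment matches, then descends to the target. The path scheme (`sim/region-0/cell-42`) and class names are my invention, not an existing library.

```python
# Minimal hierarchical actor routing: messages travel up toward the
# root until the address prefix matches, then down to the named child.
import queue

class Actor:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = {}
        self.inbox = queue.Queue()

    def spawn(self, name):
        # Create a child actor one level down the hierarchy.
        child = Actor(name, parent=self)
        self.children[name] = child
        return child

    def route(self, path, msg):
        head, _, rest = path.partition("/")
        if head == self.name and not rest:
            # The message is for this actor.
            self.inbox.put(msg)
        elif head == self.name:
            # Descend: hand off to the child named by the next segment.
            nxt, _, _ = rest.partition("/")
            self.children[nxt].route(rest, msg)
        else:
            # Ascend: this subtree doesn't match, let the parent route it.
            self.parent.route(path, msg)

root = Actor("sim")
region = root.spawn("region-0")
cell = region.spawn("cell-42")

# A sibling can send without knowing the topology: the message climbs
# to the root, then descends to the addressed cell.
region.route("sim/region-0/cell-42", "tick")
received = cell.inbox.get_nowait()   # "tick"
```

With millions of actors you'd shard the tree across cores and replace the direct method calls with the memory pipes described above, but the addressing idea is the same.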
Rusted Neuron is a Mastodon Instance operated by Jack William Bell