Stuff The Internet Says On Scalability For March 13th, 2020

http://highscalability.com/blog/2020/3/13/stuff-the-internet-says-on-scalability-for-march-13th-2020.html

2 exaflops : El Capitan supercomputer will use AMD CPUs & GPUs. 

zie : tldr; If you can’t afford 6+ full-time people to babysit k8s, you shouldn’t be using it.

@mims : Natural gas power plant is using 14 megawatts a day to mine bitcoin in upstate New York. That’s enough power for 11,000 homes. Is this really the best use of fossil fuels that were probably fracked out of the ground?

Daniel Lemire : What is clear, however, is that creating a thread may cost thousands of CPU cycles. If you have a cheap function that requires only hundreds of cycles, it is almost surely wasteful to create a thread to execute it. The overhead alone is going to set you back.
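Lemire's point is easy to demonstrate: spawning a thread costs orders of magnitude more than a cheap function call. A minimal sketch (Python here for brevity; the exact numbers are illustrative and machine-dependent, not Lemire's benchmark):

```python
import threading
import time

def cheap(x):
    # A trivially cheap function: a few hundred CPU cycles at most.
    return x * x

# Time the direct call.
t0 = time.perf_counter_ns()
cheap(21)
direct_ns = time.perf_counter_ns() - t0

# Time creating, starting, and joining a thread just to run the same call.
t0 = time.perf_counter_ns()
t = threading.Thread(target=cheap, args=(21,))
t.start()
t.join()
threaded_ns = time.perf_counter_ns() - t0

print(f"direct:   {direct_ns} ns")
print(f"threaded: {threaded_ns} ns")
```

On a typical machine the threaded version is thousands of times slower than the direct call: the thread-creation overhead alone dwarfs the work being done.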

Brian Martin : In summary, there are many sources of telemetry for understanding our systems' performance. Sampling resolution is very important; otherwise you're going to miss the small things that actually do matter. In-process summarization can reduce your cost of aggregation and storage: instead of storing 60 times the amount of data for a per-second time series, you can export maybe five percentiles. The savings become even more apparent when you sample at higher rates. You might want to sample every hundred milliseconds, or 10 times per second, or even faster than that for certain things. Really, it all goes back to what is the smallest thing I want to be able to capture. As you multiply how much data you need within a second, the savings only grow.
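The in-process summarization Martin describes can be sketched as a nearest-rank percentile export: buffer raw samples locally, then ship only a handful of percentiles per interval. This is an illustrative sketch, not his actual pipeline; the function name and percentile set are assumptions.

```python
def summarize(samples, percentiles=(50, 90, 99, 99.9, 100)):
    """Reduce a buffer of raw samples to a few percentiles (nearest-rank method)."""
    s = sorted(samples)
    out = {}
    for p in percentiles:
        # Nearest-rank: the value at ceil(p/100 * n), clamped to valid indices.
        idx = min(len(s) - 1, max(0, int(round(p / 100 * len(s))) - 1))
        out[p] = s[idx]
    return out

# Example: 100 latency samples of 1..100 microseconds collected in one interval.
latencies_us = list(range(1, 101))
print(summarize(latencies_us))
```

Exporting five numbers per interval instead of every raw sample is what keeps aggregation and storage costs flat as the sampling rate climbs.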
