Elixir (really the BEAM VM) uses processes (not OS processes) as its unit of work.
A process is completely isolated (no shared memory), and can only communicate with other processes via message-passing.
Processes are fairly lightweight, so a single instance can support millions of them.
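A quick sketch of what “lightweight” means in practice (mine, not from the talk): spawning a hundred thousand do-nothing processes from iex is nearly instant.

```elixir
# Each process just parks in a receive until told to stop.
pids =
  for _ <- 1..100_000 do
    spawn(fn ->
      receive do
        :stop -> :ok
      end
    end)
  end

IO.puts("spawned #{length(pids)} processes")
Enum.each(pids, &send(&1, :stop))
```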
Schedulers are mapped 1:1 to OS threads and are responsible for scheduling processes onto them.
Fairness (at least in the default configuration) is prioritized, so a single errant process can’t stop others from progressing, even on a single scheduler.
I assume there’s a way to disable/customize this for “hot path”-style code if necessary.
Splitting logic into processes helps with both fault tolerance and latency tolerance.
The demo sets up a Connection process for each open WebSocket; this process spawns separate Computation processes for the actual units of work (rough sketch after this list). Benefits of this style:
You can maintain a long-running connection to the server and still have it process multiple units of work in parallel.
A single computation that’s taking too long (or even spinning infinitely) doesn’t stop you from running other units of computation.
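A rough sketch of that shape (my own reconstruction, not the demo’s code; `Demo.Connection` and `Demo.TaskSupervisor` are made-up names, and it assumes a `Task.Supervisor` with that name is running in the app’s supervision tree):

```elixir
defmodule Demo.Connection do
  use GenServer

  # One GenServer per WebSocket connection; each unit of work runs in its
  # own unlinked Task so it can fail or hang independently.

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

  @impl true
  def init(_opts), do: {:ok, %{}}

  @impl true
  def handle_cast({:compute, request}, state) do
    # async_nolink: a crashing computation sends us a :DOWN message instead
    # of taking the connection process down with it.
    task =
      Task.Supervisor.async_nolink(Demo.TaskSupervisor, fn ->
        do_work(request)
      end)

    {:noreply, Map.put(state, task.ref, request)}
  end

  @impl true
  def handle_info({ref, _result}, state) when is_map_key(state, ref) do
    Process.demonitor(ref, [:flush])
    # push the result back down the WebSocket here
    {:noreply, Map.delete(state, ref)}
  end

  def handle_info({:DOWN, ref, :process, _pid, _reason}, state) do
    # one computation crashed; the connection and its other tasks keep going
    {:noreply, Map.delete(state, ref)}
  end

  defp do_work(request), do: request
end
```

Because the tasks are unlinked, a crashed or hung computation costs the connection at most one `:DOWN` message; everything else keeps running.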
The live demos were great, and used Phoenix LiveView to render fairly rich UI without writing any JS.
I don’t think that LiveView is doing anything that can’t be replicated elsewhere; definitely want to give this a shot when building up readysetlog.
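For reference, a minimal LiveView looks roughly like this (my sketch, not the talk’s demo): the state lives in a server-side process, events arrive over the WebSocket, and the JS that ships with LiveView patches the DOM from the rendered diff, with no hand-written JS.

```elixir
defmodule DemoWeb.CounterLive do
  use Phoenix.LiveView

  # Server-side state; the template is re-rendered (and diffed) whenever
  # an assign changes.
  def mount(_params, _session, socket) do
    {:ok, assign(socket, count: 0)}
  end

  # phx-click events come in over the LiveView socket.
  def handle_event("inc", _params, socket) do
    {:noreply, update(socket, :count, &(&1 + 1))}
  end

  def render(assigns) do
    ~H"""
    <button phx-click="inc">Clicked <%= @count %> times</button>
    """
  end
end
```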
BEAM is very operable. You can shell into a running BEAM instance (the shell becomes one more of many processes) and perform diagnostics.
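For example (standard BEAM/iex tooling; the node name and cookie below are hypothetical):

```elixir
# From another terminal, attach a remote shell to the running node:
#   iex --sname debug --cookie secret --remsh app@myhost
#
# The remote shell is itself just another process on that node.
Process.list() |> length()      # number of live processes
:erlang.memory(:total)          # total bytes used by the VM
:erlang.statistics(:run_queue)  # work currently waiting on the schedulers
```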
The main argument was that Erlang/Elixir+BEAM is a much more solid choice to start off with than Rails.
I’m inclined to agree; I don’t see these benefits being particularly useful to a mature system, but this sounds a lot better than a Rails MVP, for example.
Making a system fault tolerant (a long-running/errant computation can’t stop other users of your system from making progress) is a lot more involved in other stacks.
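In Elixir this is mostly a library-level concern. A small sketch (standard Task APIs, not from the talk): give a possibly-errant computation a hard time budget, and if it hangs, kill just that process.

```elixir
task =
  Task.async(fn ->
    Process.sleep(:infinity)   # stand-in for a computation that spins forever
  end)

# Wait up to one second, then shut down just that task.
case Task.yield(task, 1_000) || Task.shutdown(task, :brutal_kill) do
  {:ok, result} -> {:done, result}
  nil -> :timed_out
end
```

For crash isolation on top of timeouts, the `async_nolink` pattern from the earlier Connection sketch keeps a dying task from taking its caller down with it.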
This slide explains it a bit more succinctly:
I’m not entirely sure how Erlang is a replacement for an actual database (under “Persistable Data”), though 🤔
BEAM can operate in cluster mode, where message-passing can transparently occur between physical machines.
This is problematic in practice for many reasons, but is an option.
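The mechanics, for my own reference (standard distributed Erlang; node names and the cookie are hypothetical): once nodes share a cookie and connect, a remote pid is messaged exactly like a local one.

```elixir
# Start each node with a name and a shared cookie, e.g.:
#   iex --sname a --cookie secret
#   iex --sname b --cookie secret

# On node :"a@myhost":
Node.connect(:"b@myhost")   # => true once the nodes can reach each other
Node.list()                 # => [:"b@myhost"]

# Spawn a process on the remote node; the pid it returns can be messaged
# transparently, same as a local pid.
pid =
  Node.spawn(:"b@myhost", fn ->
    receive do
      {:ping, from} -> send(from, {:pong, node()})
    end
  end)

send(pid, {:ping, self()})

receive do
  {:pong, remote} -> IO.puts("pong from #{remote}")
end
```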
Much of this talk was a live demo (which was great!)