Thinking about Concurrency
I was just sittin’ around the other day, thinkin’ about concurrency, when I noticed something kinda neat.
The thing that makes concurrency tricky is shared mutable state. If you don’t have shared mutable state then you don’t have race conditions, and if you don’t have race conditions then you don’t really have a problem with concurrency.
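To make that concrete, here's a minimal Python sketch of the classic race (the names and counts are my own): two threads both do a read-modify-write on one shared counter, and an unlucky interleaving loses updates.

```python
import threading

counter = 0  # shared mutable state

def bump(n):
    global counter
    for _ in range(n):
        counter += 1  # read-modify-write: not atomic, so threads can interleave

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # may come out less than 200000: some increments get overwritten
```

Remove either the sharing (give each thread its own counter) or the mutation (have each thread return a count to sum afterward) and the race is gone.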
Modern languages handle shared mutable state by removing one of those properties. That is, a language will either (1) not share state between threads or processes, (2) make its data immutable (at least under certain conditions), or (3) not manage state to begin with.
It’s a bit like the CAP theorem. Sharing, mutability, or state: pick two!
Languages using actor-based concurrency (like Erlang) use message-passing instead of shared state, so they’re in the “no sharing” group. I really like this approach – reasoning about it feels natural to me, and the “passing messages between things” model fits my OOP-shaped head.
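A toy version of that mailbox model, sketched in Python with `queue.Queue` standing in for an Erlang-style mailbox (the actor and message names here are my own invention):

```python
import queue
import threading

inbox = queue.Queue()    # the actor's mailbox: the only way to reach it
results = queue.Queue()  # how the actor reports back

def counter_actor():
    total = 0  # private state: nothing else can touch it, so no locks needed
    while True:
        msg = inbox.get()
        if msg == "stop":
            results.put(total)
            break
        total += msg

t = threading.Thread(target=counter_actor)
t.start()
for i in range(1, 101):
    inbox.put(i)   # messages, not shared memory
inbox.put("stop")
t.join()

final = results.get()
print(final)  # → 5050
```

The state (`total`) still mutates, but it's never shared, so there's nothing to race on.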
Quite a few languages (like Haskell and Clojure) have immutable data structures, so they’re in the “no mutation” group. This seems to be the most popular approach.
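In Python terms — a rough stand-in for Haskell's or Clojure's persistent structures — a frozen dataclass shows the idea: an "update" builds a new value, so anyone holding the old one can never observe a half-finished mutation.

```python
from dataclasses import FrozenInstanceError, dataclass, replace

@dataclass(frozen=True)
class Point:
    x: int
    y: int

p1 = Point(1, 2)
p2 = replace(p1, x=10)  # a brand-new value; p1 is untouched

print(p1, p2)  # → Point(x=1, y=2) Point(x=10, y=2)

try:
    p1.x = 99  # in-place mutation is simply forbidden
except FrozenInstanceError:
    print("immutable")
```

Here the state is shared freely between threads, but since nothing can mutate it, there's still nothing to race on.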
I can’t think of any languages that address the concurrency issue by eschewing state altogether. It seems a bit extreme, but it would certainly solve the problem. Certain languages seem less comfortable with state than others (when introducing Scheme, for example, SICP doesn’t discuss state for the first couple hundred pages, and functional languages usually try to avoid it as a matter of best practice). Declarative languages seem like they’d fit a stateless paradigm nicely, but I don’t know of any concrete examples. I’d love to see some, though!
Anyway, I’m sure this isn’t an especially novel thought, but it was novel to me. Fun times!