The author has completely neglected the most common cases where either "tasks" require multiple resources (e.g. when copying between two files) or where the ordering of tasks is important. It is these two conditions that lead to deadlocks and livelocks.
As for the neglected cases, you can build those out of these primitives: canonical ordering of tasks for multiple resource acquisition; work stealing and/or a DAG executor where ordering is important.
My only criticism is that it's a little redundant to implement both mutexes and semaphores - a mutex is just a semaphore with a count of one. But the rest is wonderful.
As long as you trust the code running on your machine (and if you don't, you have bigger issues to worry about), you'll be fine - nobody signals a semaphore before waiting on it if they're hoping to achieve mutual exclusion.
Yea, a bit confusing. He surely knew what he meant but I didn’t.
Given the prevalence of mutexes and semaphores in embedded, I was hoping to take something useful away, but the issue is that I often have no other work to do. I can't do anything until this read or write or access is done; I can't write here until this read is finished, not because of access contention but because I logically need the input to determine the output, etc. A bit different from your comment, but along similar lines, I think.
There’s a talk about this approach in the game Destiny: https://www.gdcvault.com/play/1022164/Multithreading-the-Ent...
And also in the Uncharted engine: http://twvideo01.ubm-us.net/o1/vault/gdc2015/presentations/G...
They go a step further by allowing threads to pause and resume their state by changing the stack pointer.
These sorts of systems are useful in games, where you basically want to be using all of the CPU all of the time and there's no reason to ever give control back to the operating system's scheduler (as blocking on mutexes would).
However, I do quite like this way of thinking about synchronisation. I wonder how many other kinds of synchronisation primitives (I'm thinking about RCU and seqlocks) can be re-thought using this method.
A few high-performance C++ systems use variants of the dispatcher approach - for example ScyllaDB, and I know of a few HFT systems like that. Boost.asio is similar (the serializer is, or used to be, called a strand), and so is the dispatcher proposal in C++.
A few systems I have seen strictly map each dispatcher to a thread (which is in turn exclusively pinned to a hardware core), so the serialization is implicit. "Active" objects expose their home dispatcher, and to interact with an object one posts a lambda to that object's dispatcher. In practice this is a form of async message passing. To get the answer back, either a response callback is posted to the original thread or some future/promise abstraction is used.
This only works if the second thread's yellow task is independent of its blue task's results. Which is, unfortunately, only 1% of the cases.
Otherwise, a pretty nice overview. The bar charts are helpful.
For whom does the bell toll? It tolls for him (not 'he').
Who is there? He (not 'him') is there.
You just have things do stuff and then toss pointers to the threads on the other side of the work queue. In most things I do this with, I basically keep each thread in a different part of the pipeline, and they pass the task along to the next one. Sometimes this means each thread has a warm cache for the part of the task it's working on, assuming the working set of each task is more correlated with the stage of the pipeline than with the specific data moving through it.
OP - i.e. @ingve - it might be interesting to take a look at some strategies - like the pull-based one - in actor systems like Akka.
I'm not sure that's true in general, but it's true that it's easier than when you don't know those things.
Getting a degree is something I very much want to do, but it's something I haven't been able to do (due to the time and money required to do it). I ended up jumping straight into the workforce in the hopes that at some point I'll have saved up enough to actually enroll (since my high school grades left much to be desired; it's a miracle I even got my HS diploma).
Really, though, 99.995% of IT (in my experience at least) doesn't actually require a degree (obviously, since I'm doing it right this very moment without a degree and have been doing so for more than half a decade). It does require a lot of background knowledge that, sure, might've come to me faster had I received a formal education in IT, but I've been able to acquire it just fine through real-world work experience (and have been paid for it, to boot).
My interest in getting a degree has more to do with my interest in a lot of the more theoretical concepts. I'm able to stumble through a lot of things via self teaching, but I do feel like I'm missing various critical pieces, so I'd like to fix that. None of those pieces are getting in the way of me doing my current job, but I do feel like my career is prone to stagnation if I don't have those pieces.
Unfortunately, that is not true: you can get by tinkering for a while, but the software you'll write without a formal education in algorithms and data structures will always be inferior and often unmaintainable. This is from personal experience debugging lots of code over the years written by people without a formal computer science education.
You could attempt to make the argument that not everyone needs to be a programmer, but system administration skills of such people are also subpar: sooner or later one needs to automate, and the shell scripts written by such people are always either hacked together or overcomplicated because the knowledge of algorithms, data structures and common solutions (like this enqueuing scheduler) simply isn't there. It's exceedingly difficult to educate oneself on something one isn't even aware exists.
If you really love IT and love what you do, get a computer science degree for the quality education it provides (and not just for the paper, so you can say you have one).