> ...you’d be crazy to think hardware was ever intended to be used for isolating multiple users safely.
But of course it was. Protection rings were invented for Multics, the first operating system specifically intended to isolate multiple users safely. And side-channel timing attacks were called out as a concern at the time.
We may have lost our way somewhere, but let’s not rewrite history.
you're thinking at a different level than she is
We need more knowledge, more education, more auditing, more communication, and this at all levels of the stack. If we use abstraction, perhaps we need to recognize that abstraction does not absolve us of the responsibility for auditing the implementation, and that abstraction has not only value but also cost.
To make this easier across so many dimensions, we probably want to simplify our stacks and minimize our dependencies and supply chains far more drastically than we do.
There's also a cost to all this understanding, and it comes down to what the business values: security or velocity. But perhaps you can get both, especially if you design for simpler systems.
Does anyone know whether dedicated hosting companies such as Hetzner or xneelo reflash the firmware on server boards (and disks and everything with firmware in it) before they rent it out again?
And whether they use more forceful hardware techniques to do the reflashing, rather than software that politely asks the firmware to reflash itself?
I would hope they would but I can't find any security policy showing they do, or how.
I disagree with this. In fact, my conclusion is the opposite. It's a good pattern to limit the complexity of communication between different layers and have strong encapsulation of state.
The simpler an interface is, the easier it is to integrate with other components and the less likely it is to run into integration issues and expose vulnerabilities.
The simpler the interface (with fewer, simpler endpoints), the fewer e2e scenarios you need to account for; this allows you to build more stable, more secure software which is easier to test.
Modules which expose a large number of endpoints tend to encourage micromanagement of that module's internal state and this leads to issues.
You need to have clear separation of concerns between layers and components in order to be able to come up with the simplest interface possible which allows different components to interact.
Bits of information leaking through faulty abstractions are exactly what make conduits for "side-channel" attacks.
It takes a lot of effort and thinking to figure out an abstraction which fulfils all of its requirements while exposing a simple communication interface.
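To make the contrast concrete, here's a minimal sketch (the class names are invented for illustration, not from any real codebase) of a wide interface that invites micromanagement of internal state versus a narrow, encapsulated one:

```python
class WideCounter:
    """Wide interface: internal state is exposed, so callers can
    micromanage it and couple themselves to the representation."""
    def __init__(self):
        self.value = 0  # mutable from outside; every caller is a potential writer


class Counter:
    """Narrow interface: one way in, one way out. The representation
    can change without breaking or auditing any caller."""
    def __init__(self):
        self._value = 0  # encapsulated state

    def increment(self):
        self._value += 1

    def read(self):
        return self._value


c = Counter()
c.increment()
c.increment()
print(c.read())  # 2
```

With the narrow interface there are only two behaviors to test end to end; with the wide one, every caller that touches `value` is another scenario to account for.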
Leaky abstractions means you have the wrong mental model.
Abstractions (attempt to) hide complexity. Worst cases are black boxes.
Mental models attempt to clarify how something works, to make it easier to reason about how a thing works.
Modularity is just a strategy to mitigate bad design, not the cause.
Modularity enables divide & conquer, specialization, etc. Conway's Law is a good thing.
Architecture is the visible design choices, defined thru interfaces.
It'd be amazing if we divined some new strategies. Like maybe GA or the newer stuff.
But having people all the same then introduces non-communication problems. Lack of genetic diversity is a potential catastrophe. Lack of outside perspective makes novel thinking difficult. Lack of change breeds complacency.
So you need your software to work pretty much all the same way, but you need diversity in how it works to survive environmental challenges.
Nature has a solution for this (as in most things): evolution. You mutate a bit every so often and eventually something works better. Then you stick with that and repeat. It's a continuous improvement cycle driven by measurably better outcomes from experiments.
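The mutate-measure-keep loop described above can be sketched in a few lines (the fitness function and mutation step here are toy stand-ins, not a claim about any real system):

```python
import random

def fitness(x):
    # Toy objective: how close x is to 10 (higher is better).
    return -abs(x - 10)

random.seed(0)
best = 0.0
for _ in range(200):
    candidate = best + random.uniform(-1, 1)  # mutate a bit
    if fitness(candidate) >= fitness(best):   # measurably better outcome?
        best = candidate                      # stick with it and repeat

print(best)
```

Each iteration either keeps the status quo or adopts a measurably-no-worse mutation, so the result only ever drifts toward the objective; that's the continuous improvement cycle in miniature.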
This can work for hardware too but it's definitely not as straightforward. But no matter what you use it on, you can't have some clunky, hierarchical, reactive, conservative, single-minded power structure holding the process back. The stack will always suffer as long as there's a giant anchor at the top keeping it from being agile and mutating. It's not so much a cultural thing as an organism; if the organism can't agree on how to grow, it's not gonna fare well against competition.
We need different people in software. We need more different people too, and we need to communicate better. The real problem is the way we isolate ourselves. I think the solution is less code and more understanding of what the limitations of existing code are. This requires us to prioritize communication and understanding.
Maybe if everyone does this, the parties to each conversation that isn't happening will be aware of each other. That's a first step.
I haven't used NixOS but this sounds intriguing. Could you explain in a little more detail how it helps with this?
2. Because all our builds are so sandboxed, there's this uncanny feeling that we sometimes know the dependencies / prereqs of a package better than the upstream developers!
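For context, a rough sketch of what this looks like in practice (assuming a standard Nix install; `hello` is just an example attribute from nixpkgs):

```shell
# /etc/nix/nix.conf — the build sandbox is controlled by this option
# (enabled by default on NixOS):
#   sandbox = true

# Build a package; the builder runs without network access and sees
# only the inputs declared in the derivation, so any undeclared
# dependency makes the build fail instead of silently leaking in.
nix-build '<nixpkgs>' -A hello
```

That failure-on-undeclared-input behavior is why the declared dependency list ends up being a complete, verified one.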
Because there is not enough communication between the engineers writing this code lol. I.e.: Conway's law ;-)