Source: I've worked at a hedge fund and I've seen how the sausage is truly made. Many friends have told me similar stories about other firms.
Took me about 3 years to realize I was being paid half of what coworkers doing the same or lesser jobs than mine were making. It took me resigning, with an offer in hand for twice what I was making, before they upped their offer. I stayed for another 3 years before I got so fed up with the bullshit that I actually left for a lower-paying (and less stressful) job.
tbf, academia is its own racket.
(Edit: I work at Jane Street.)
I.e. should I look for a job there only because of the money, or also because I can make a difference somehow in the bigger societal picture?
Elite developers often make things mundane. No fires. No explosions. etc.
Source: former Jane Street employee; I also worked at two FAANGs before that, so I think I have some basis for comparison.
But, not an enjoyable ride if you actually love quality tech.
(But, money! So much money!!)
I am not sure if the software they are writing is "elite" but OCaml is raising the bar a bit higher.
I've never heard anyone say that Google pays below market. Perhaps there are those who pay more, but it's definitely above "the market".
That said, Jane Street is probably in a lot of different products, some more latency-sensitive than others, but speed isn't what they are known for.
The company is >20 years old.
OCaml is a general-purpose language that strikes a balance between avoiding the bugs introduced by mutable state (like all functional languages), speed, and polymorphic type inference. At the time of adoption, the other choices were Haskell (too academic, not practical), Erlang (no type inference, not suited to large code bases with complex business logic), and Lisp (too slow, with a loose/optional type system). The last time I checked, OCaml was third only to C and C++ in terms of speed. It is also important to consider how intellectually stimulating it is to write OCaml. If you can achieve the three things mentioned at the top of this paragraph while also creating a brand of gravitas and intellect that attracts top-tier talent, of course you would choose OCaml.
Would a new, uninitiated market maker write something in OCaml? Unlikely, as they would probably use C++, Rust, or Scala with a 1TB heap and GC disabled. Ignoring the learning curve and time/dollar constraints of starting a hedge fund, I would choose OCaml over the three mentioned.
 https://downloads.haskell.org/~ghc/6.6.1/docs/html/libraries... (Control.Concurrent contains forkIO - thus, green threads!)
Guessing from what information is available to me, it was a matter of personal preference, not a technical decision.
From  in some other comment, it really was a matter of personal choice.
At  it can be found that GHC had green threads in 2002.
You're right that it wasn't OCaml from the beginning, but I believe it was quite a bit earlier than that. The firm dates to 1999-2000, and OCaml came into the picture sometime around 2004.
The reason they avoided Haskell, supposedly, is the lack of predictability in its performance, largely due to its laziness. OCaml, meanwhile, has a fairly straightforward compilation that allows moderately experienced developers to have a very good idea of what the corresponding machine code will be.
Why C++ now? It's still the fastest, and tons of quants and highly skilled programmers know it. When you consider the correlation between C++ developers’ technical acumen and quantitative skills, coupled with the maturity and increasing convenience of the ecosystem, it makes sense.
Which is no different from being locked into, e.g., the JVM family, or even being locked into OCaml itself.
VCL is only available to corporate shops and those that aren't into FOSS religion.
MFC is in maintenance mode, and so far Windows developers are more keen on moving to one of the .NET UI stacks, while keeping some C++ code as COM/DLLs or even C++/CLI, than on jumping into UWP/WinUI. It remains to be seen if WinUI 3.0 will change the migration trend.
Then on mobile OSes it isn't even an option, unless you want to write your own GUI from scratch using OpenGL/Metal/Vulkan.
- Trading based entirely on current prices. If you're doing pure arbitrage, e.g. looking for the ask to go under the bid (elsewhere), you are gonna have to be really, really fast, because that kind of thing is so obvious that everyone has thought of it.
- One step up from that is passively leaving an order in and hoping for the same thing. E.g. you leave a bid in a less liquid market, below the bid in the main market, and hope someone hits you. You then immediately throw that onto the main market. Gotta be super fast to do that, because of course everyone else can see someone traded.
- Market making based on some form of ML on the orderbook, basically imbalance. Here you need to do more than just comparing two prices. You also aren't entirely leaning on the current price to offload your position, you might hold onto your position a bit. So now there's risk involved, and you might need to decide how big a position you want. And not everyone will have the same position, so not everyone is after the same trade.
- Market making based on multiple orderbooks. Say you have an ETF basket, and you want to be able to make bids and offers around it, as well as the underlying shares. So then you have a different position to everyone else, there's a fair bit more information to digest, and there's a fair bit more modelling which will mean different participants have different decisions. This means your decisions will not be contested in the same way that obvious arbitrage would be.
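The condition in the first bullet (the ask dipping below the bid elsewhere) can be sketched as a simple check. This is an illustrative toy only, assuming a made-up `Quote` type and venue names, not how any real trading system is structured:

```java
// Toy sketch of the "ask goes under the bid elsewhere" check.
// The Quote record and venue names are invented for illustration.
public class ArbCheck {
    record Quote(String venue, double bestBid, double bestAsk) {}

    // Per-share profit if buying one venue's ask and selling into the
    // other venue's bid is profitable in either direction, else 0.
    static double crossedProfit(Quote a, Quote b) {
        double buyAsellB = b.bestBid() - a.bestAsk(); // buy on a, sell on b
        double buyBsellA = a.bestBid() - b.bestAsk(); // buy on b, sell on a
        return Math.max(0.0, Math.max(buyAsellB, buyBsellA));
    }

    public static void main(String[] args) {
        Quote main = new Quote("MAIN", 100.02, 100.04);
        Quote alt  = new Quote("ALT",  100.06, 100.08); // ALT's bid is above MAIN's ask
        System.out.printf("%.2f%n", crossedProfit(main, alt)); // prints 0.02
    }
}
```

The point of the bullets above is that only this first, trivially obvious check forces a pure speed race; the later strategies add modelling, so different participants chase different trades.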
I have personally worked on systems that would run an entire week between deployments and never GCed.
Object pools typically can't match the GC on latency and throughput; however, they do provide much more predictable and stable performance.
Nowadays Rust seems to cover all those features with zero-cost abstractions.
(In fact, GCs can often give you better latency than malloc/free unless you go into custom allocators with object pools... which is part of writing non-consing code in GCed languages)
This is talked about with an example in this talk around 18m30s:
Also, as someone else mentioned, Jane Street doesn't regularly compete (to my knowledge as someone in the industry) at trading horizons that demand the lowest possible latency.
If you welcome general corporate blogs, I think Google AI's technical blog is quite good. Their frequent publications on distributed computing strike a good balance of academic and practical ideas:
Cloudflare also has one of my favorite, more engineering-focused blogs.
Unfortunately I don't have a citation for that. I think I've seen it talked about here on HN before.
I suspect at least one use case has gone away since Fowler wrote that: wanting to saturate the write channel into a database in general. Consistent-hashing document DBs like DynamoDB, Cassandra, etc. are basically infinitely scalable for writes. It's not clear to me if LMAX still makes sense if you want to assemble the DB writes into a strongly ordered stream.
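For context, the consistent hashing those DBs rely on can be sketched in a few lines: a key is owned by the first node clockwise from its hash on a ring, so adding or removing a node only remaps that node's arc of keys. This is my own minimal illustration, assuming placeholder node names; real systems (DynamoDB, Cassandra) add virtual nodes and stronger hash functions:

```java
import java.util.*;

// Minimal consistent-hash ring sketch. A key maps to the first node at
// or clockwise after its hash; removing one node remaps only the keys
// that node owned, which is what makes write scaling incremental.
public class HashRing {
    private final TreeMap<Integer, String> ring = new TreeMap<>();

    public void addNode(String node)    { ring.put(hash(node), node); }
    public void removeNode(String node) { ring.remove(hash(node)); }

    public String nodeFor(String key) {
        Map.Entry<Integer, String> e = ring.ceilingEntry(hash(key));
        return (e != null ? e : ring.firstEntry()).getValue(); // wrap around the ring
    }

    // String.hashCode is a stand-in; production code would use murmur3 or similar.
    private static int hash(String s) { return s.hashCode() & 0x7fffffff; }
}
```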
I don't really understand what you are saying there?
But we use disruptors for processing millions of messages/events as quickly as possible, with a variety of different consumers.
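The core of the disruptor pattern referred to here is a pre-allocated ring buffer coordinated with sequence counters instead of locks. A toy single-producer/single-consumer version, my own simplification rather than the actual LMAX code (the real Disruptor adds cache-line padding, batching, wait strategies, and multi-consumer coordination):

```java
// Toy SPSC ring buffer in the spirit of the LMAX Disruptor: slots are
// pre-allocated and reused forever, and a volatile sequence publish
// replaces locking. One producer thread, one consumer thread only.
public class SpscRing {
    private final long[] slots;
    private final int mask;
    private volatile long head = 0; // next slot to read (consumer-owned)
    private volatile long tail = 0; // next slot to write (producer-owned)

    public SpscRing(int sizePow2) { slots = new long[sizePow2]; mask = sizePow2 - 1; }

    public boolean offer(long v) {
        if (tail - head == slots.length) return false; // full
        slots[(int) (tail & mask)] = v;
        tail = tail + 1; // volatile write publishes the slot
        return true;
    }

    // Returns -1 if empty (assumes values are non-negative).
    public long poll() {
        if (head == tail) return -1;
        long v = slots[(int) (head & mask)];
        head = head + 1;
        return v;
    }
}
```

The volatile write to `tail` happens after the slot store, so the consumer that observes the new `tail` also observes the value, which is the lock-free handoff the pattern is built on.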
So their edge isn’t based on speed. They gain more from correctness and the ability to express these ideas.
Edit: my point is that it is likely avoidable through certain coding practices in garbage collected environments.
I didn't ask, but I am curious how that compares to the type guarantees of Rust. Would moving to Rust cause them to lose that advantage of compile-time error catching? I've never written a line of Rust (hopefully that changes soon), so I don't know, but I am certainly interested.
Don't garbage collect, or do it at the end of the trading day. There are plenty of VMs designed for this, at least in the JVM space (which is where my personal experience has been).
RAM is cheap. Latency is expensive.
There's an entire library of tricks for ensuring that GC pauses don't affect trading, but the biggest one is to use userspace allocated memory pools.
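The pool trick can be sketched in a few lines. This is a bare-bones illustration, assuming a generic factory; real trading systems also reset object state on release and handle pool exhaustion, both of which are omitted here:

```java
import java.util.ArrayDeque;
import java.util.function.Supplier;

// Sketch of the "userspace memory pool" idea: allocate every object up
// front and recycle it, so the steady-state hot path allocates nothing
// and gives the GC nothing new to collect.
public class ObjectPool<T> {
    private final ArrayDeque<T> free = new ArrayDeque<>();

    public ObjectPool(int size, Supplier<T> factory) {
        for (int i = 0; i < size; i++) free.push(factory.get());
    }

    public T acquire()        { return free.pop(); }  // throws if exhausted
    public void release(T t)  { free.push(t); }       // caller must reset state first
}
```

With all message objects drawn from pools like this, the allocation rate in the hot path drops to zero, which is what lets a system run all day (or all week) without a collection.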
I've written a Java compiler that does GC and watched that part work: no pauses even for data structures with loops, and generally excellent performance. It's not released yet, because I've also watched other parts not work.
But GC without pauses: that worked. And I was relieved when I saw that my ivory tower worked, after many people had told me it couldn't possibly ;)