Naturally others will say you don't need IDE-like support and that's fine. It's just the way I enjoy learning. Autocomplete is an amazing learning tool.
I find this is the worst part right now. Latency is everything if it is tied to actual typing in of code.
Excited to see this get fixed, or at least improved!
While rust-analyzer doesn't always resolve types (unlike the JetBrains Rust plugin, which always does, but sometimes incorrectly), it still makes writing code faster and doesn't have RLS quirks (which come from the fact that RLS has to compile the code before offering suggestions, so the code has to be correct at some point, which is not good if you are prototyping).
1. Only one side panel, so I can't see the outline, test results and project files at the same time, as I am used to seeing on widescreen monitors in other IDEs.
2. Language servers are still not comparable to the solutions offered by Java IDEs, and in the case of C++ they are worse than anything else I've used.
3. The Python extension constantly forgets and re-finds unit tests. There is little support for unit tests in other languages.
4. The official C++ extension, despite being completely useless, consumes several gigabytes of space for "indexed" files (I wonder if it is deliberately this bad so as not to hurt sales of Visual Studio). I also tried clangd, which is better, but there is still a lot of work to be done before it is useful.
I like Sublime's Rust support for really small projects and Eclipse's support for larger projects (which is not ideal), but I have not coded anything serious in Rust yet, so I do not know if there is a good IDE.
I chuckled at this, because the implementation is actually shared between the official C++ extension and Visual Studio. That extra space usage is likely the recent addition of automatic PCH generation (to VS Code, VS has done that for a long time).
On a more serious note: I have a long-term goal to learn either Vim or Emacs for various reasons, the above mentioned being one of them.
It's hard to feel so spectacularly unproductive as I do when I try them out though, so it's easy to give up.
What's the best way to go about learning Vim or Emacs?
What you need is a motivating factor. Vim is almost guaranteed to be there on any Linux/BSD/Mac shell environment, which motivates people in those environments to learn it even if they learn no other editor.
Beyond such environments, like on any desktop GUI, what'd motivate you to try GUI Vim? For Emacs, I can say the integration between any set of tools you can imagine could be a motivation.
The reason Emacs users don't particularly appreciate IDEs is that Emacs' integration of the code editor, compiler output, and debugger control makes the development cycle extremely fast. Add any one more use to that. File explorer? Emacs dired. Process monitor? Emacs proced. Interactive shells? Emacs integrates the Unix shell, Python sessions, anything interactive actually.
You will get all your work done in Emacs, just using editor operations! Go to the source code for a compiler error? Hit "Enter" or click on the error in the compile window. Go to the source code being debugged? Stepping in the debugger automatically opens the file at the right line number. Found the bug? The code is already there ready to be fixed; switch to the compile window & compile. Find a file in a directory? Do a text search in its dired window. Rename all files beginning with foo to begin with bar instead? Do a search-replace.
Emacs also scales way better than any desktop editor or Vim, whether it is opening GB-sized log files, or binary files viewed/edited in hexadecimal. Before long, you start managing your to-do's using org-mode, and doing all your Git operations in the insanely featureful Magit interface.
The list gets longer and longer every release. And many Linux shell environments support the same basic keyboard short-cuts (Ctrl-A to go to beginning of line, Ctrl-E to go to the end, Ctrl-R to search in the shell history), and also provide light-weight Emacs clones like 'zile', 'mg' for editing-only use in the terminal.
vi is the predecessor of, and a subset of, vim, and is probably available on even more platforms than vim is. So knowledge of that single editor (vi) enables you to work on any of those platforms. And you can always progress to (and through - it is big) vim later, at your own pace.
I wrote it at the request of two Windows system administrator friends who were given additional charge of some Unix systems. They later told me that it helped them to quickly start using vi to edit text files on Unix.
Vim is great, but after 20 years of using it, I'm moving mostly to the JetBrains stack, since flying around the code is faster and syntastic (a Vim plugin) is slow as shit on large projects. Thankfully the Vim bindings in IntelliJ are pretty darn faithful.
Check out this old HN post
Sitting at a Unix terminal at the university and being forced - you can't install anything - to learn one of the two to get anything done. ;)
IDEA + Rust is free, minus debugging.
This may be the most interesting use case of them all. WebAssembly is fast, but it's also not fun to write. There are languages like AssemblyScript and Lys that will let you write WebAssembly in what appears to be a higher-level language, but you're still schlepping bytes around and must build the entire universe yourself.
Rust offers an alternative: a high-level language with readily accessible tooling to easily switch compile targets from native to wasm.
Not only that, but any general-purpose library code you write to support your web application can be redistributed as native binaries for use in C/C++/Python/Ruby/etc. projects.
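As a concrete sketch of that portability (the `checksum` function and its logic are made up for illustration): the same Rust source compiles natively or with `--target wasm32-unknown-unknown`, exposing a C ABI either way.

```rust
// A tiny library function with a C ABI. The same source builds as a
// native cdylib/staticlib for C/C++/Python/Ruby consumers, or as a
// .wasm module via `cargo build --target wasm32-unknown-unknown`.
#[no_mangle]
pub extern "C" fn checksum(data: *const u8, len: usize) -> u32 {
    // Safety: the caller must pass a valid pointer/length pair.
    let bytes = unsafe { std::slice::from_raw_parts(data, len) };
    bytes.iter().fold(0u32, |acc, &b| acc.wrapping_add(u32::from(b)))
}

fn main() {
    let v = [1u8, 2, 3, 250];
    assert_eq!(checksum(v.as_ptr(), v.len()), 256);
}
```

On the web side, crates like wasm-bindgen layer nicer JS interop on top of this same compilation model.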
Although the original idea for such a universal web/native target, Java, died from self-inflicted wounds, the underlying need for a way to deploy code across all platforms, native and web, never went away. Rust+wasm is a solution that works today.
There's something else as well. WebAssembly is a sandboxed runtime, meaning it runs in its own little environment. Anything it knows about the outside is an opt-in add-on.
This makes wasm an interesting target for systems in which the JVM is currently used. For example, database extension environments and web servers.
Obviously only once webassembly can do the full job
There's still hope for ad-blocking, then.
Personally I would prefer a well-designed GC'd language with a strong type system and native compilation over Rust, unless I was doing something with specific demands on parallelism or embedded software.
In the no-GC no-runtime niche for a very long time there was nothing viable besides C and C++. For programmers who want C-like level of control, performance and low-level compatibility there are very few alternatives, and if you want non-crash-prone parallelism on top of that, there's nothing but Rust.
I have (re-)implemented a couple of real-life projects, and each time Rust was a much better high-level language than e.g. Java, Python or JS. Borrow checker fights are a non-issue in practice: just avoid references, pass messages as copies, and when you have to share data, use `Arc` and copy-on-write (or just add a `Mutex`).
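A minimal sketch of that advice, using nothing beyond std: share data between threads by cloning an `Arc<Mutex<_>>` per thread rather than holding long-lived references across them.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared state lives behind Arc<Mutex<_>>; each thread gets its own
    // cloned handle, so no references outlive the borrow checker's patience.
    let counter = Arc::new(Mutex::new(0u32));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter); // cheap refcount bump
            thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    assert_eq!(*counter.lock().unwrap(), 4);
}
```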
There's a reason why Rust has been the most loved PL for 4 years in a row. Once you try it, and get past the initial adjustments (mostly: the what-owns-what relationships), everything clicks, and I guarantee that you won't miss GC at all.
Many GC enabled system languages do offer both mechanisms.
It is a matter of enjoying productivity it offers, while having the tools to fine tune performance when it actually matters.
Can you give me an example? Java's `finalize` is not deterministic destruction. D's scope guards (or Go's `defer`) are also not like destructors, because the calling code has to take care of them.
> It is a matter of enjoying productivity it offers,
There's hardly any productivity gain, and it is offset by the productivity gained from reliable and hassle-free resource management.
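For readers wondering what deterministic resource management buys, a minimal sketch (the `Connection` type is invented for illustration): cleanup runs at the end of scope, in reverse declaration order, with no finalizer queue and no explicit `defer` at each call site.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A value whose cleanup runs at a statically known point: the end of
// its scope. Here Drop just records its name into a shared log.
struct Connection {
    name: &'static str,
    log: Rc<RefCell<Vec<&'static str>>>,
}

impl Drop for Connection {
    fn drop(&mut self) {
        self.log.borrow_mut().push(self.name);
    }
}

fn run() -> Vec<&'static str> {
    let log = Rc::new(RefCell::new(Vec::new()));
    {
        let _a = Connection { name: "a", log: Rc::clone(&log) };
        let _b = Connection { name: "b", log: Rc::clone(&log) };
    } // both dropped right here, in reverse declaration order: b, then a
    let order = log.borrow().clone();
    order
}

fn main() {
    assert_eq!(run(), vec!["b", "a"]);
}
```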
Edit: Oh, and if you don’t think that’s enough, note that Rust doesn’t guarantee destruction to ever occur in that case (“considered safe”):
Dealing with the borrow checker in cases that are still being worked on (NLL 2, GUI callbacks), using unsafe for graphs or dealing with use-after-free array indexes in the alternative workaround, and unsafe Drop implementations - none of that looks hassle-free to me.
Wherever the memory heap is involved, if the system isn't memory-starved, it can be argued that deterministic free-ing is a step too far; maybe you don't want time to be spent putting that long-ish list of heap-allocated values on the free list when exiting a function on the critical path. There is something to be said for lazy operations in such contexts, which the GC can provide (off the critical path - using GC.disable() in D) - a lot easier than a "region-based" memory management solution in C/C++/Rust. I don't know of a synthetic benchmark result to tilt the argument either way.
At a system level, it is slightly dismaying that the "back-pressure" required to trigger lazy operations is only present for memory, and only within a Unix process; when it comes to limits on open file handles or sockets or overall OS memory usage, there is no back-pressure to reclaim them - indeed no mechanism to lazily schedule files/sockets for closure.
It's not only about files and sockets. It's about your core abstractions: thread pools, channels, flushing streams, events and other stuff, unlocking mutexes, auto-validating guards, etc.
I have production or semi-production experience with code written in C, D, C++, Python, Go, Java, Node, Scala, Rust and others (please excuse argument from authority, but that's all I have right now) so I can tell the difference and I think ...
... people who haven't worked with Rust long enough greatly under-appreciate the number of things that benefit from deterministic resource-like management - not only on the system resource usage / performance side, but simply day-to-day reliability and writing simple bug-free code with ease.
I meant lazy operations for avoiding unnecessarily tying things like memory deallocation to fixed points in time. Unless you have a strict memory budget (and you might additionally need mitigation for memory fragmentation), freeing whenever the GC deems fit, or streams flushing whenever the OS/library deems fit, can't be worse - but could potentially be better - for performance than being done only at points decided arbitrarily by the language and the program structure. It is simply easier to disable GC on critical paths.
But as you say, you are not (only) talking about performance. You are talking about determinism for predictability. I hope you are not implying predictability across various threads/processes that make up the system, only within 1 thread/process where that predictability leads to reliability.
Reliability across threads/processes needs strategy because failures are inevitable (yes, I have drunk the Erlang kool-aid too, among other ones). While I have no doubt that Rust's design principles would support developing such strategies, I do wonder as to what scale this would work up to ...
... as you say, I don't know the size of projects Rust has scaled to, I just know enough about Rust itself to balk at its complexity (and I am a C++ person!). Maybe we are just talking from different viewpoints. You, having done a variety of non-trivial production code in a plethora of languages, plump for Rust. But I honestly wonder about the size & longevity of your C/C++ semi-production code - semi, because it is C/C++ ;-) - I have worked on large C++ codebases for long-lived products, and I somehow can't see another complex language solving more problems net-net.
If it is a typical program, i.e. not a device driver or OS kernel running on low-memory hardware, having a GC + runtime doesn't preclude performance, for a long-running program. Of course, for short-running programs, just malloc, don't free, is the fastest.
To me, Rust is a cleaner and stricter but also more narrow systems programming language than C++, and can be used as a sensible replacement for many C++ code bases, particularly as Rust adds features. Rust still has some challenges. For example, the DMA-driven, schedule-based memory safety models that have become fashionable in high-performance database kernels are not compatible with Rust's memory model. You'd have to make most of the code "unsafe" (in a Rust sense, it is actually an alternative memory safety model designed around a different set of assumptions).
Some infrastructure, e.g. database engines, is ubiquitous, but all instances are of the same few software products written in C/C++/Java. Those ubiquitous instances have banged most bugs out of those codebases, so it is an uphill battle there to convince incumbents and upstarts alike, of the value of a new implementation. But Rust & Pony are marvelously positioned to march up that hill, especially if you throw in multi-threading.
My impression is that only a minority of widely-programmed back-end infrastructure that suits Rust is written in C/C++ (say, map-reduce kernels). The code-base size & the data flow complexity inside those components is pretty limited, and Rust should be tractable at that scale.
Most widely-programmed infrastructure software - in the back-end and on cell phone OS'es - have been merrily using Java/Python/Ruby/Erlang. OCaml is quoted as being used in the management component of VMware. These are much larger applications; is there any evidence or hint that Rust isn't onerous to develop larger systems in? Without that evidence, I feel (not think) that a disciplined GC'ed language (D? Pony?) has a better chance there.
For speeding up other-language libraries, D has a -betterC mode, which prevents you from using the subset of the language and the libraries (standard & user-defined) that relies on GC. The remaining language is a very clean C that simply works on the other language's GC'd memory (using the other language's C interface), and can use stack allocation or any heap allocation strategy of your choice for its working set (reference counting ala C++ shared_ptr could be the obvious choice, but it is your party).
For other language applications, it is a valid option to speed up the entire application by writing it in D, as it has "all" the features of those other languages + all the convenience that is afforded by a GC + threads if you don't want a multi-process design. I quote "all" because I mean useful things like blocks/closures, generic data structures, etc. - of course, neither Rust nor D give you runtime devices like monkey-patching/meta-class hackery/prototype changes.
Our plan was:
- Take one part of the system to rewrite over JNA
- Run a massive amount of Scala tests against the new code
- When green, take another part
- Meanwhile the other part of the team writes the user-facing code base, connecting it to the new Rust crates
- The test suite also works over an integration layer, so the other team can test their code and the backend with the same test suite
We're quite far already. Originally I was the only person with Rust experience. Now there's five of us.
A custom malloc that knew free would never be called would be faster still. I wonder if any short-running utilities do this?
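It can at least be sketched by hand in Rust, which lets you swap the global allocator: a toy bump allocator whose `dealloc` is a no-op, so process exit is the only "free". This is an illustrative sketch only, and it assumes no allocation needs alignment above 16.

```rust
use std::alloc::{GlobalAlloc, Layout};
use std::sync::atomic::{AtomicUsize, Ordering};

const ARENA_SIZE: usize = 1 << 20; // 1 MiB, arbitrary for the sketch

#[repr(align(16))] // handles alignments up to 16 relative to the base
struct Arena([u8; ARENA_SIZE]);

static mut ARENA: Arena = Arena([0; ARENA_SIZE]);
static OFFSET: AtomicUsize = AtomicUsize::new(0);

// Bump allocator: `alloc` just advances an offset; `dealloc` does nothing.
struct NeverFree;

unsafe impl GlobalAlloc for NeverFree {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        let mut old = OFFSET.load(Ordering::Relaxed);
        loop {
            // Round the current offset up to the requested alignment.
            let start = (old + layout.align() - 1) & !(layout.align() - 1);
            let end = start + layout.size();
            if end > ARENA_SIZE {
                return std::ptr::null_mut(); // out of arena space
            }
            match OFFSET.compare_exchange(old, end, Ordering::Relaxed, Ordering::Relaxed) {
                Ok(_) => return std::ptr::addr_of_mut!(ARENA).cast::<u8>().add(start),
                Err(current) => old = current, // raced with another thread; retry
            }
        }
    }

    unsafe fn dealloc(&self, _ptr: *mut u8, _layout: Layout) {
        // Never free: process exit reclaims everything at once.
    }
}

#[global_allocator]
static ALLOCATOR: NeverFree = NeverFree;

fn main() {
    // These heap allocations come from the arena and are never freed.
    let v: Vec<u32> = (0..100).collect();
    assert_eq!(v.iter().sum::<u32>(), 4950);
}
```

std's `GlobalAlloc` documentation shows a similar const-arena example; real short-lived utilities more often just lean on the OS reclaiming everything at exit, as the parent comments note.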
I just want to point out that the Swift package management situation is currently bad. SPM (https://github.com/apple/swift-package-manager) can't be used within Xcode, and is basically not used for anything outside of server-side swift / Linux.
It also doesn't look like SPM will be integrated with Xcode this WWDC, so you're stuck with CocoaPods or Carthage.
I'm really jealous of cargo. And RLS (Swift LSP doesn't really work well because it doesn't know about your dependencies - back to the package manager issue).
Unreal and UWP/COM developers are perfectly fine with writing C++ code with a GC around.
COM and other refcounted ones get a pass. But I'm surprised that Unreal gets away with a mark-and-sweep GC. Perhaps because it's only required for UObjects, and the rest of the codebase can still easily avoid using the GC? You can even cause use-after-free bugs on UObjects, if you want to.
> Its (thread-local) memory heap is GC'd by default.
What happens to objects that are allocated in one thread, and then have their reference passed to another thread?
It depends on whether shared-memory is a design requirement or an implementation artifact (Erlang does just fine with a largely shared-nothing model).
It also depends on whether a program runs for a short time, or for a long time. If you are running for a short time, why not just avoid the GC entirely; "manage" memory manually by malloc-ing and never free-ing (free-ing too takes time, and doesn't make memory available to other processes anyway).
Standard library has... issues
No community consensus on a common base setup, questions about which library to use for a task often get answers like "well, do you want to use functors or monads? because that will change the library we recommend" which is really not what you want to hear when you're just trying to set up your first server.
Documentation is, in general, bad. Most packages just give a list of function signatures.
The community is small, academic, and often French.
If OCaml were already popular I don't think the lack of parallelism would be a huge issue, but it's niche, which is a huge downside for any language in a business context. The upsides the language provides in terms of performance, ergonomics, and maintainability need to overcome that downside. All the issues I listed mean that OCaml can't generally pass that bar.
OCaml: multicore, standard library situation (which one?) is a mess, adoption.
F#: Have to deal with null and lack of ADTs when you interact with .NET (or JS for Fable) standard lib. The tooling situation in F# has been a mess since .NET Core, especially in Linux. Treated as a 2nd class citizen by MS.
The other issue with the package ecosystem is cross-platform support. While OCaml itself works on windows, opam doesn't (or at least didn't) without a lot of extra work, and it seemed like most packages were designed only for unixish OS's.
There are projects where I've used Rust instead of OCaml, even though I'd have preferred to use OCaml, simply because the infrastructure is so much better and easier to use for Rust.
Isn't that d-lang?
I think there are probably more users that would love this but they are not users of the language to begin with. The community that actually forms around Rust is the type of community that does not want GCs, wants the borrow checker and the constraints that form the language.
If you like GC, I feel like there are plenty of other good options, like D, Ocaml, CLR languages (C#/F#).
In contrast, a language very much like Rust, but with GC, would be the right tool for their problems.
Even the compiler itself would not fit your description.
Just like if you wanted heavy number crunching and data manipulation, I'd push someone towards R and Scala, as they have the libraries to handle it (and I'd also say Python, but I'm starting to show my biases here). Go is great for back-end systems, as that's what it was designed for, and it's now getting libraries built out to fill in gaps in other areas, just as Rust is filling in other spaces, but there's only so far it can go from its original design doc. The GC makes it, inherently, not a systems language. That doesn't mean it isn't great at other things. Rust is great as a systems language; that doesn't mean it isn't great at other things, but it's terrible as a rapid prototyping language. It lacks that capability. It wasn't designed for that =/
w.r.t. Arc/Rc, I'd say that it might feel verbose at first, but it makes explicit what a GC does implicitly, and you get all the benefits of Rust's ownership model at the same time. I can pick and choose when to rely on reference counting and when to explicitly manage lifetimes myself. That's really cool! I may be a bit weird in this, but I like complexity to get surfaced so I can't pretend it's not there.
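A tiny sketch of that explicitness: every shared owner is a visible `Rc::clone`, and the count goes down at deterministic points.

```rust
use std::rc::Rc;

fn main() {
    // Each new shared owner is an explicit, visible clone of the handle;
    // a GC'd language would do this bookkeeping invisibly.
    let config = Rc::new(String::from("verbose=true"));
    let reader = Rc::clone(&config);
    let writer = Rc::clone(&config);
    assert_eq!(Rc::strong_count(&config), 3);

    // Giving up ownership is just as explicit, and happens deterministically.
    drop(reader);
    drop(writer);
    assert_eq!(Rc::strong_count(&config), 1);
}
```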
My personal itches are usually web projects. Once Swift gets async/await and Vapor/Kitura switch to it, I'll probably jump to Swift. I really dislike dealing with futures/promises. The Swift web frameworks also don't generate standalone binaries like Go/Rust do... they tend to push you toward cloud/Docker solutions. Swift is still pretty Apple-platform-centric, but that feels like it's changing even now.
It has value types, Span, PInvoke etc. that make low level interop simple, GC and higher level semantics and better ecosystem/tooling than most alternatives.
Runtime size and GC limit some use cases
Rust, for me, took what was, prior to learning Rust, a vague undefined notion and really helped me see it formalized into what I can now really call a "mental model" of ownership/lifetimes. It has made my code better in every language I work in.
- Building an explicitly and observably welcoming open source community
- Separating corporate money from technical direction, even at the cost of faster execution
- Upholding pragmatism over all else (e.g. keeping around a C/C++ API)
I'm not sure whether these principles carry over to all languages, or if there's anything you want to add or subtract. Would be cool to have a broadly applied philosophy endorsed by many language stakeholders.
A quick response to your points: I'm not sure "upholding pragmatism over all else" applies to Rust. It really depends on the details. Open source community is absolutely important, and while you can argue that money is separate from technical direction, that's a very complex topic.
I might be a strange guy liking Haskell and C++. ;)
Just the presence of well-integrated Algebraic Data Types (ADTs) makes an incredible amount of difference. They are used to represent errors in a meaningful and easy-to-understand way (`Result<T, E>`), to show that a function may or may not return a meaningful value without needing a garbage value (`Option<T>`), and the optional case can even wrap a null-pointer scenario in a safe way (`Option<&T>` being the closest to a literal translation, I think).
That's just one small feature that permeates the language. Whatever the opposite of a death-of-a-thousand-cuts is, Rust has it.
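For readers who haven't seen those types in action, a small sketch (`find_user` and `LookupError` are made-up names for illustration):

```rust
// Errors and absence are ordinary values the compiler forces you to handle.
#[derive(Debug, PartialEq)]
enum LookupError {
    NotFound,
}

fn find_user(id: u32) -> Result<String, LookupError> {
    match id {
        1 => Ok(String::from("alice")),
        _ => Err(LookupError::NotFound),
    }
}

fn first_char(s: &str) -> Option<char> {
    s.chars().next() // None for the empty string, with no sentinel value
}

fn main() {
    assert_eq!(find_user(1), Ok(String::from("alice")));
    assert_eq!(find_user(2), Err(LookupError::NotFound));
    assert_eq!(first_char("rust"), Some('r'));
    assert_eq!(first_char(""), None);
}
```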
When Rust 1.0 was announced I had a look. I was very surprised because that language was very different from my previous look, when Rust still had GC and green threads (and a syntax with mystic sigils).
Rust 1.0 looked like it could be a good language, but coming from mainstream languages with a richer ecosystem -- such as Java/Scala or C# --, Rust lacked many common libs. For instance, a high-level approach to non-blocking IO, which is something I need when writing servers. There were other issues -- lexical lifetimes often meant one needed to jump through hoops to please the borrow checker. I thought Rust would need a lot more time to mature enough to be a serious contender in my PL list. The next year, the Zero-Cost Futures post by Aaron Turon showed up in my news feed. It seemed interesting, but it was only a plan. A few months later Tokio was born and, again, it seemed interesting. But there's no way it was ready, was there? I went back to business as usual. I kept hearing about Rust, but I had no intention to use it in the immediate future.
Last year, I was offered a full-time Rust job out of the blue (because of my experience in building distributed systems; I had to learn Rust as a part of the job). I was interested because, in general, I was looking for a change -- but I also wanted to make sure I wouldn't spend energy on a language that wouldn't be relevant in 10 years (at least, according to my intuition). I've had job offers in other emerging languages (or old but trendy FP languages) in the past, but I don't think they will become mainstream, and I want a language where 3rd-party libraries for every (modern) protocol, API, database, etc. are complete and safe (and that's more likely to happen with mainstream languages).
I evaluated Rust again to see how mature it was. I was very surprised at the speed of development of the language and of its ecosystem, especially compared to other PL communities I had been part of. The energy in Rust reminds me of Java's better days (but with a modern language). There are libraries for everything; the language and compiler are surprisingly stable and mature (YMMV). I believe this is a testament to Rust's productivity and reuse. Rust is not perfect, but it's trying to reach an interesting trade-off. It is already very usable considering how recent it is (obviously taking advantage of LLVM).
In my experience, projects with such hype are often just good at marketing. You try them, and they have a lot of issues that they (sometimes) fix later; I'd say it was the case for Cassandra, Docker, Kubernetes, TypeScript (and more). With Rust, I started almost immediately on edition 2018 and found the tooling, ecosystem, language and compiler to live up to its hype. Sure, compile times could be faster, the RLS/IDE support is still a work-in-progress (though usable), but I'd happily trade compile times for fewer bugs at runtime, and most of all, the trajectory looks great; coming soon: async/await, const generics, GATs, and better IDE.
So -- congratulations to everyone who had a part in making Rust happen, and happy birthday!
Congrats to the Rust Core Team!
A few caveats:
* Rust is getting really big and complicated. The basic borrow checking was pretty neat and accessible (except for hard cases), but a ton of other stuff has been added. Appreciate that getting through it all is difficult, and don't be discouraged. (I think discouragement is one of the biggest barriers to learning a programming topic, for no good reason.)
* Consider starting by writing command-line utilities you would like to have. If you instead decide to start with a GUI or full-screen terminal UI crate (package), you might find that they tend to use a lot of Rust features in their APIs (perhaps necessarily), and you might also have to work around complicated ownership & lifetimes because of the model of the crate. This is an unusual barrier.
* Rust is really a systems programming language. To fully appreciate Rust, you have to need that performance, and know how much more difficult it is to write correct code in C (it's even harder than most C programmers think). But for web development, Rust might still come in handy for high-performance backend work, and possibly later for full-stack (where maybe you won't have to write any JS bits on the frontend, because WASM).
A lot of web dev people start off by writing command-line tools. Later this year, Rust will also be significantly easier to write server-side web stuff in; it's a little tough to get into at the moment. You could also try front-end web stuff with WebAssembly! The Pi stuff is also a good choice.
There’s links to learn more about these topics on https://www.rust-lang.org/ that should help you get started!
Very nice to hear.
I remember all the talk a while back about the major bugs regarding the language.
Long answer: it depends whether you mean actual vulnerabilities, or soundness bugs
Known bugs that could affect the security of programs written in Rust get fixed ASAP. There was one serious bug in std's VecDeque that caused memory corruption. There was a more recent issue where, if you overrode the type_id method that wasn't supposed to be overridden and then used another method that relies on type_id being correct, you got crashy garbage. In C or C++ that'd be called garbage-in, garbage-out, and a bad programmer shooting themselves in the foot. In Rust it was considered a vulnerability.
Apart from that, there are known soundness bugs in the language/compiler/LLVM that could lead to undefined behavior, miscompilation, or otherwise weasel out of things that the language is meant to guarantee:
At this point these are mostly edge cases that you're unlikely to hit in real code, but if you really really want to make your program crash, Rust can't stop you.
However, I keep having to step back. And in this short time Rust has progressed a lot (and that isn't entirely a good thing either).
Rust might be the most liked language, but that is mostly a measure of how vocal its supporters are. It is also very underused for how liked it is. It has become a C-like Haskell, in that people like it a lot but don't use it for much that's important. Like Haskell, it might turn out to be a great research language but not be used much in practice. I was hoping to see a language that was much more practical.
I'm about to take another shot at Rust, and I decided this time I should just ignore all the dislike of `unsafe` and use it as much as I need to. I think I was making life more difficult for myself by trying to avoid it so much, but when you have a lot of pointers with multiple references between them, you kind of have to. Doubly-linked and mutually linked structures - where any pointer change can lead to a mutation - are Rust's Achilles heel. Unfortunately, you see that a lot in system-level programming. I write a lot of high-performance code and low-latency networking. My main reason to use Rust is performance and a language with fewer footguns than C++, but one that doesn't completely remove my ability to produce good code.
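For context, the safe workaround for a doubly-linked structure looks something like this deliberately minimal sketch - exactly the ceremony being complained about:

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// A safe doubly-linked node: a strong pointer forward and a weak
// pointer back, so the reference counts never form a cycle. The
// RefCells move Rust's aliasing checks to runtime.
struct Node {
    value: i32,
    next: Option<Rc<RefCell<Node>>>,
    prev: Option<Weak<RefCell<Node>>>,
}

fn main() {
    let first = Rc::new(RefCell::new(Node { value: 1, next: None, prev: None }));
    let second = Rc::new(RefCell::new(Node { value: 2, next: None, prev: None }));

    first.borrow_mut().next = Some(Rc::clone(&second));
    second.borrow_mut().prev = Some(Rc::downgrade(&first));

    // Walk forward, then follow the back-pointer home.
    let forward = first.borrow().next.clone().unwrap();
    assert_eq!(forward.borrow().value, 2);
    let back = forward.borrow().prev.clone().unwrap().upgrade().unwrap();
    assert_eq!(back.borrow().value, 1);
}
```

The common alternatives are raw pointers behind `unsafe`, or an arena of nodes addressed by indexes (with the stale-index risk mentioned upthread).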
However, I think Rust has moved too fast in the language department. It has changed, and continues to change, at a pace that prevents people who mainly use it as a tool for other things from becoming idiomatic writers of the language. The idioms change too quickly.
The networking stack for efficient non-blocking event-driven code still isn't where it needs to be. In Java, for example, the NIO abstractions are much better. C++ just turns you over to the C system API. Rust is a disaster in the middle, sort of having a socket abstraction and telling you to just use libc for the rest. This has been an issue for years. Tokio is the wrong abstraction at the wrong level for me, and the underlying mio library is kind of a mess with its token architecture and how it handles user-defined events. Hopefully it has improved in the last couple of years. It used to be slower than it needed to be.
"Slower than it needs to be" can describe a number of things in Rust, including the default hash implementation. Why would you use a secure hash for your default? If you really need that, you know you do, and 99.999% of the use cases don't. I often need small-key performance, which is basically the trade-off they decided against.
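For what it's worth, the default is per-`HashMap` rather than global, so it can be swapped; here's a sketch with a toy FNV-1a hasher standing in for real crates like fxhash or ahash:

```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

// A toy FNV-1a hasher: fast for small keys, but offers none of
// SipHash's protection against attacker-chosen (HashDoS) keys.
struct Fnv1a(u64);

impl Default for Fnv1a {
    fn default() -> Self {
        Fnv1a(0xcbf2_9ce4_8422_2325) // FNV-1a offset basis
    }
}

impl Hasher for Fnv1a {
    fn finish(&self) -> u64 {
        self.0
    }
    fn write(&mut self, bytes: &[u8]) {
        for &b in bytes {
            self.0 = (self.0 ^ u64::from(b)).wrapping_mul(0x100_0000_01b3); // FNV prime
        }
    }
}

// Same HashMap API, different hashing strategy.
type FnvMap<K, V> = HashMap<K, V, BuildHasherDefault<Fnv1a>>;

fn main() {
    let mut m: FnvMap<u32, &str> = FnvMap::default();
    m.insert(1, "one");
    m.insert(2, "two");
    assert_eq!(m.get(&1), Some(&"one"));
    assert_eq!(m.len(), 2);
}
```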
C++ has blown its complexity budget 100x over. Rust: hold my beer. In just a few years, Rust has caught up to C++ by that measure (again, not a good thing). I'd like the language to stabilize a bit and work on what is currently there instead of running off to add support for yet another programming paradigm (async and userland threads feel like a trip back in time, to 20 years ago).
Side note on docs formatting: reading the documentation is still incredibly difficult for me. It doesn't flow well, and the top of the page has no index - you basically just have to scroll down. The page width is fixed to 960px, so there are interior scroll bars on the code segments. There's no use of background color; it's mostly a big mess of black on white with some small indentation differences. Look at cppreference and cplusplus.com: both have a broken-down table of contents at the top, making it easier to find what you are looking for, and both make use of color effectively. And everything on the Rust docs starts off fully expanded: you have to fold what you don't want to see instead of expanding what you do (so the first thing I do on every page is immediately fold everything, then unfold the often-too-long context section). Seems backwards to me. Navigation on the other two sites is also much better once you get deep into the docs. You can't go to a Rust doc page and skim down a page or two looking for the call you want (e.g.: I need to find the mutating calls on a tree). On the C++ docs on both sites that's very easy; on the Rust docs it is tough. There is no "here are all the lookup methods" and "here are all the mutating methods" grouping.
> I'm about to take another shot at Rust, and I decided this time I should just ignore all the dislike of `unsafe` and just use it as much as I need to.
What it sounds like to me is that you struggled to learn Rust, and instead tried to write C or C++ in Rust. While they're similar languages, they're also very different. While it's true that these structures are used often in C and C++, they're used way less in Rust, and arguably, those kinds of structures are often bad for performance, not good. As always, it depends. It can be tough to communicate idiom, and to learn the "Rustic way" of doing things. This is true of every language, of course :)
> However, I think Rust has moved too fast in the language department. ... I'd like the language to stabilize a bit and work on what is currently there
This is actually the major theme of this year! https://blog.rust-lang.org/2019/04/23/roadmap.html
I disagree about overall complexity, but again, that's fairly subjective.
> Side note on docs formatting:
It is really, really hard to please everyone here. Some people love it, some people hate it. Sorry you hate it :/
> This is actually the major theme of this year! https://blog.rust-lang.org/2019/04/23/roadmap.html
Major theme of the year, but not major theme of the language. One year of stability followed by one year of working on the next breaking idiom change followed by one year where that change is executed doesn't give you a stable language.
Also consider the pace of compiler updates. With C++, I can be very productive with my distro-provided compiler. With Rust, the ecosystem jumps onto new compiler versions so fast that you basically have to use rustup or a rolling-release distro once you have a project with a medium number of crate dependencies (hundreds). Cargo further has no way of recognizing the MSRV of crates, and it updates the index without asking, so you can't even safely add or update dependencies without increasing the MSRV of your Cargo.lock.
All of this can change, and maybe the rate of modifications will slow down. But right now it rather points towards the opposite direction, with the recent introduction of editions.
Doesn't C++ have exactly this problem? It keeps adding new features, which change the language idioms. It actually seems much worse than Rust in this area, as Rust is mostly reinforcing the existing idioms, while C++ is almost like two different languages now (well, probably even more than two).
C++ is a different language in every company I worked for. Ha. Even a different language per team/project sometimes.
Tab vs spaces, different capitalization, different indentation style, exceptions vs no-exceptions, 0 is failure vs 1 is failure and then we like featureA, we don't like featureB, and so on.
But then again, the people behind Rust have made using the try! macro harder for the sake of a questionably useful keyword that could have been named differently as well. And now, in the 2018 edition, you need two more characters to invoke the try! macro, even though there are still cases where it's much better than ? and nothing is being done about it.
So it is very practical stuff for me :)
The biggest difficulty was absence of something analogous to ASP.net, so I had to invent my own... but even there it proved an excellent and practical choice, the macro system is very expressive and the strong typing gives a lot of confidence when refactoring.
Personally, I have moved from linked data structures to more cache friendly data structures, which means contiguous spaces. No jumping around. It's very liberating.
Replacing large linked lists with arrays is rarely an actual win. With an array, insertion and deletion become far more expensive, virtual memory is more likely to become fragmented or grow monotonically, and the cache misses avoided are almost certainly irrelevant to total performance.
Highly performant code tends to get complex. One way to ensure high performance and reduce complexity is to deal with values, not pointers; it is good for concurrency (no aliasing), excellent for cache locality, and the value is right there. Multi-versioned stable values work excellently, like RCU-locks in the Linux kernel.
Then there is the fact that mutation makes things even harder. So for example, you can easily implement a tree using Box to hold pointers to child nodes, but you'll then need a little bit of unsafe code to write a mutable iterator over the tree. (Even mutable iterators for slices require unsafe code under the hood.)
>I decided this time I should just ignore all the dislike of `unsafe` and just use it as much as I need to. I think I was making life more difficult on myself by trying to avoid it so much, but when you have a lot of pointers with multiple references between them, you kind of have to. Doubly linked and mutually linked structures - where any pointer change can lead to a mutation - are Rust's Achilles heel.
Sriram_malhar pointed out that doubly linked lists are usually not a good data structure to use anyway. I was pointing out that there are lots of more useful data structures that raise similar issues in Rust.
I don't think that using unsafe is "fine", necessarily. It is much better to implement a data structure on top of safe Rust if at all possible.