Kotlin: The Problem with null (arturdryomov.online)
128 points by bmc7505 a year ago | 107 comments

>Unfortunately, it is not really possible to change Kotlin behave the same way. Apple uses a bridging mechanism to connect Objective-C and Swift binaries, when Kotlin uses the same bytecode as Java. To make a simpler mental picture, imagine Objective-C and Swift being connected side-by-side and Kotlin and Java as a stack, where Kotlin is on top. I presume it would be pretty challenging to provide a proper compatibility with Java, transforming all nullable Kotlin values to Optional and vice-versa, especially in such tight areas like Java reflection.

This is not correct. The Kotlin compiler could treat every parameter and return value that doesn't have a nullability annotation as an implicitly unwrapped optional (Type!). It could even support all of the popular nullability annotations. There is no requirement to choose just one.

The beauty of IUO is that you can ignore the nullability and you have the same amount of safety you have today: if you touch a null value you get an exception (or abort in Swift's case). The benefit is you can insert a null check in one place which then propagates through the rest of your Kotlin (or Swift) code.

If I understand you correctly, you're describing the mechanism Ceylon uses:


I think this is pretty cool. I haven't used it in practice.

Yeah, agreed, that's not correct. Kotlin has to compile to Java bytecode, true, but it does not overlay Java like e.g. Groovy does. Kotlin and Java very much exist side-by-side in exactly the same way Swift and Objective-C do: they (can) target the same machine but have different language semantics.

The Chris Lattner quote at the end explains the fundamental difference. Kotlin is an interloper in a Java world - that includes the JVM, a VM designed to run Java.

Swift integrates and interoperates with, but is not based on, the Objective-C runtime. Apple also controls all the relevant bits and pieces and they can bridge, compile-to, shim, wrap, etc. however they want. They can pick the design approach and fiddle with all the parts to make them fit. This isn't a luxury Kotlin has.

That would have been true a decade ago. After the advent of .NET and the CLI's ability to support multiple languages, Sun started moving the JVM away from being Java-specific and including features to support other languages. Your criticism hasn't been correct for years.

It is not a criticism as much as it is a fact, no less true than ever. The JVM is designed to run Java. Other languages also compile to Java bytecode, and Sun and Oracle have made some helpful changes.

But the VM is OVERWHELMINGLY designed around the requirements of Java and to be performant for Java. Try compiling a non-OO language to fast bytecode without spending a lot of time considering what the analogous Java code would compile to.

Java bytecode is so close to Java, you can decompile it almost directly to readable Java source. Try that with JRuby or Clojure or Scala and see how close you land.

Kotlin can compile to Javascript too. Just saying.

I'm not sure I understand this, it seems you're responding to some perceived 'criticism' rather than things Lattner mentions. Yes, a few things have been tweaked in the JVM to make life easier for non-Java languages. Fundamentally, its semantics are closely tied to Java's (or vice versa or both or something!). You aren't going to write a (sanely performing) JVM language that has a different memory management model or value types or TCO - not until the JVM supports those things and even then, hopefully in a way that matches what you have in mind for your language. This isn't so much a criticism as a non-controversial fact.

Up until recently it would have been non controversial. With Graal/Truffle it's possible to run many languages on the JVM whilst bypassing bytecode entirely. Those language implementations usually run as fast or much faster than other runtimes. It includes LLVM bitcode that does manual memory management, which can run at least some benchmarks about as fast as gcc would.

> [Kotlin] does not overlay Java like e.g. Groovy does

Apache Groovy 1.x was dynamically typed only, and focused on scripting and glue code for the JVM and syntactic compat with Java, so could be said to "overlay Java". Since Groovy 2.0, however, it changed tack with static typing targeted at Android development, not keeping up with syntax changes in Java such as lambdas from Java 8, and tried to compete with Java instead of complementing it. It no longer "overlays Java", and in fact lost out to Kotlin and Scala in its effort to compete with Java. Groovy should have stuck to its knitting instead of changing direction every few years.

They could have gone hardcore and said any non-primitive, unannotated platform type is nullable, but that would have made interop really ugly. And all of the null checks would have muddied up code and added (admittedly minimal) runtime costs.

I want something in between Kotlin and Scala. I want option as a real type that is treated as a max-single-element collection that can be flattened with the same APIs as other collections. But I want no runtime penalty. People in Rust are so lucky to have zero cost abstractions for these things. I suppose I'll need better compile time support (even more than Scala macros) and whole program optimizations (i.e. cross JAR) to get zero cost optional on the JVM. Things like Scala Native use LLVM and surely Option things are inlined or otherwise optimized out.

I wrote a zero-allocation `Option`-style (monadic) data structure for Scala a while ago [1]. Unlike all the previous attempts, it supports the distinction between `None`, `Some(None)`, `Some(null)`, `Some(Some(None))`, etc., which is what allows it to remain monadic. The surprise is that it does not use `null` as the `None` value. The downside is that `toString()` is altered: `Some("hello").toString()` returns `hello`, not `Some(hello)`.

There was an experiment to use it as a replacement for the implementation of `scala.Option` in the dotty compiler code base [2], but it is inconclusive so far; it should be tried directly in the collections library.

[1] https://github.com/sjrd/scala-unboxed-option

[2] https://github.com/lampepfl/dotty/pull/3181
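The depth-counting trick can be sketched in plain Java. This is only an illustration of the idea; the `UOpt` name and API here are made up, not the actual scala-unboxed-option code:

```java
import java.util.NoSuchElementException;

// Unboxed-option sketch: some(x) is represented as x itself, so ordinary
// values cost no allocation. Only the None / Some(None) / Some(Some(None))
// chain needs a real object, tagged with its nesting depth.
final class UOpt {
    private static final class NoneChain {
        final int depth; // 0 = None, 1 = Some(None), 2 = Some(Some(None)), ...
        NoneChain(int depth) { this.depth = depth; }
    }

    static final Object NONE = new NoneChain(0);

    static Object some(Object x) {
        if (x instanceof NoneChain) {
            // Wrapping a None-chain must add one level so nesting stays sound.
            return new NoneChain(((NoneChain) x).depth + 1);
        }
        return x; // plain values, including null, are stored as themselves
    }

    static boolean isEmpty(Object o) {
        return o instanceof NoneChain && ((NoneChain) o).depth == 0;
    }

    // Unwrap one Some level; throws on None, like Option.get.
    static Object get(Object o) {
        if (o instanceof NoneChain) {
            int d = ((NoneChain) o).depth;
            if (d == 0) throw new NoSuchElementException("None.get");
            return d == 1 ? NONE : new NoneChain(d - 1);
        }
        return o;
    }
}
```

This also shows why `toString()` changes: `some("hello")` really is the string `"hello"`, so there is no wrapper to print.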

Just curious, if Some(hello).toString produces `hello`, what does None.toString() and Some(None).toString() and Some(Some(None)).toString() produce? (My guess is `None` for all)

I think it's impressive† that you got nesting to work, I'm really curious how you pulled that off for longs and doubles without incurring extra overhead.

† Though, given the magic you worked in getting Union types working in the ScalaJS facades I shouldn't be surprised.

Oh, primitive types do get one level of boxing (on the JVM): a `Double` becomes a `java.lang.Double`. But it doesn't become a `Some` with a `java.lang.Double` inside, so we saved one allocation anyway. It is not possible to remove that box without compiler support, and even then not in all cases (return values, for example), because `Double` contains a finite number of values (2^64), while `Option[Double]` has 2^64 + 1 values.

And in that implementation, `None.toString == "None"`, `Some(None).toString == "Some(None)"`, etc. Although that could be changed.

Seems like one could use the wasted digit of signed numbers to store options rather than have an asymmetric range (two's complement) or positive/negative zero... i.e. have a [ed: bit string] that indicates none/some?

It's no longer monadic in the presence of a pervasive `null` value. Such is the power of null - it breaks abstractions that have nothing to do with it.

It is monadic if the monad laws about it hold. `null` has nothing to do with it. In the context of my unboxed option, `null` is like `5`: one of many primitive values, uninterpreted by the abstraction, and therefore it does not break the abstraction.

If you think my unboxed option is not monadic, please provide a counter-example to one of the monad laws.

> They could have gone hardcore and said any non-primitive, unanotated platform type is nullable but that would have made interop really ugly.

Instead you just get a crash if it is null which is worse in my opinion.

> And all of the null checks would have muddied up code and added (admittedly minimal) runtime costs.

This already happens automatically for parameters to a Kotlin function; check out the Kotlin Intrinsics checks.
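For illustration, the generated parameter check amounts to something like this plain-Java sketch. The real compiler emits calls into `kotlin.jvm.internal.Intrinsics` (something like `checkNotNullParameter`); `Objects.requireNonNull` is the closest stand-in available in the JDK, and the `Greeter` class here is made up:

```java
import java.util.Objects;

class Greeter {
    // Rough Java analogue of what the Kotlin compiler inserts at the top of
    // a public function taking a non-nullable parameter: fail fast, before
    // the parameter is ever used.
    static String greet(String name) {
        Objects.requireNonNull(name, "name must not be null");
        return "Hello, " + name;
    }
}
```

Calling `Greeter.greet(null)` throws at the top of the method, before `name` is ever touched, which is exactly the early-crash behavior debated in this thread.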

> Instead you just get a crash if it is null which is worse in my opinion

Only in the same instances you would in Java-to-Java. So the ergonomics aren't improved or reduced. And really, this only helps on returns anyways. Marking every Java param nullable gets you nothing if the implementer didn't handle it well.

> This already happens automatically for parameters to a Kotlin function; check out the Kotlin Intrinsics checks.

Yup, and I don't like it. So even public Kotlin-to-Kotlin calls suffer. Haven't checked in a while, but I would like an option to use annotations only and skip those top-of-method checks.

> Only in the same instances you would in Java-to-Java. So the ergonomics aren't improved or reduced.

This isn't correct. For example, say you have a String that was passed as a param and you're shoving it into a JSONObject. In Java, if it's null, nothing bad happens (apart from shoving a null value into your JSON). In Kotlin, shoving a null into a JSONObject won't crash in the put operation; you'd get the crash as soon as the method is called and it does the Intrinsics check.

Relevant: https://discuss.kotlinlang.org/t/run-time-null-checks-and-pe...

I've measured the null-checks and other people have too... the runtime performance cost is negligible, so disabling them would get you nothing useful. If they did, Kotlin would have made the compiler flag to disable them (which exists) public... but until someone comes up with an actual reason other than "I feel like it's better" that won't change.

I have reasons. For example, I need to have as few instructions between a JNI call and the next line (I am starting a SingleStep JVMTI callback as part of my development on a fuzzer) and those intrinsics don't help. It's gotten to the point on my advanced JVM projects where I have to just drop into Java all the time to avoid this stuff, get the proper MethodHandle.invokeExact semantics, etc, etc.

I'm tired of being surprised by the bytecode that Kotlin generates. You could argue javac has some magic too, but not near as much and it's pretty well spelled out whereas you won't find the docs for all of these intrinsics.

It's even worse when things just get dismissed as "I feel like it's better" and then, when you provide reasons, people say they're not normal. Languages (and their proponents) should not treat the end user (i.e. the dev) like they are stupid, or should at least give them the option to turn things off. The trade-off of hiding this stuff is not worth it... it's more like the language people are the ones going on "I feel like it's better".

Null checks are effectively free on the JVM. One of the advantages of using null-as-real-null that is rarely discussed by functional programming fans is that null is well supported by the hardware. Just keep the bottom pages unmapped and an attempt to access a null pointer will trigger a hardware fault. If you don't do it that way then you have to do the null checks with branches in software, which bloats your code footprint and now with Spectre perhaps requires slower code too.

> Null checks are effectively free on the JVM

I don't want the bytecodes added. I have tracing I am doing and I need more predictable instructions. It's not just a performance issue.

Also, you should not assume cost of a JVM implementation of things based on your empirical evidence from a single, popular VM. For example, how does TeaVM translate it?

As for your point wrt options in Scala, there were some libraries that used marker values for none (e.g. NaN for double) and then used value classes to basically provide a zero-alloc option type. I've done my own internal versions for similar purposes. They work well, with the caveat that value classes in Scala have their own set of issues.
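A minimal Java sketch of that marker-value approach for doubles (the `DoubleOpt` name is made up; it assumes, as those libraries do, that a genuine NaN result never needs to be distinguished from None):

```java
// Zero-allocation "optional double" using NaN as the None marker, in the
// spirit of the marker-value libraries mentioned above. The trade-off: a
// real NaN value is indistinguishable from None.
final class DoubleOpt {
    static final double NONE = Double.NaN;

    static boolean isDefined(double v) { return !Double.isNaN(v); }

    static double getOrElse(double v, double fallback) {
        return Double.isNaN(v) ? fallback : v;
    }
}
```

Everything stays a primitive `double`, so no boxing or allocation happens on any path.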

For doubles you don't even need to take over NaN. There is a huge range of unused bit patterns in IEEE floats that could be used instead.

I don't think the JVM guarantees the bit pattern will be preserved when passing it around. I know this is the case for NaNs where I have experienced the bit pattern change [0].

0 - https://stackoverflow.com/questions/43129365/javas-math-rint...

It would be really depressing if Java normalized NaNs, but I wouldn't put it past some author thinking it was a good idea.

It's not that they normalize them per se, it's just that NaN bit patterns are mostly undefined behavior. I think the JVM spec (or JLS, I forget) implies that you can't really rely on the NaN bit patterns being retained. Double.longBitsToDouble mentions that some processors might not return a double in the same bit pattern as passed.
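The JDK actually exposes both behaviors side by side: `Double.doubleToLongBits` collapses every NaN to the canonical pattern `0x7ff8000000000000L`, while `Double.doubleToRawLongBits` reports the bits as-is (with the Javadoc caveat that the hardware may already have altered them). A small sketch:

```java
class NanBits {
    // The single canonical quiet-NaN pattern that doubleToLongBits returns
    // for every NaN input, regardless of payload.
    static final long CANONICAL_NAN = 0x7ff8000000000000L;

    // Round-trip a NaN bit pattern through a double and back via the
    // payload-collapsing API.
    static long cookedBits(long rawNanBits) {
        return Double.doubleToLongBits(Double.longBitsToDouble(rawNanBits));
    }

    public static void main(String[] args) {
        long payloadNan = 0x7ff8000000abcdefL; // NaN with a non-standard payload
        System.out.println(Long.toHexString(cookedBits(payloadNan)));
        // doubleToRawLongBits would preserve the payload on most hardware,
        // but the spec does not guarantee it.
    }
}
```

So any scheme that hides data in NaN payloads has to stick to the raw-bits APIs and still can't fully rely on the spec.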

Such as?

Reading up on the format [1] it seems that every value is defined, which includes a ton of possible NaN values [2].

[1] https://en.wikipedia.org/wiki/Double-precision_floating-poin...

[2] If you read the actual IEEE 754 spec, one suggested use for a NaN value is the address of the offending instruction(s); otherwise there is no defined pattern for NaNs in IEEE-754.

> it seems that every value is defined,

> there is no defined pattern for NaNs


I was referring to using a NaN value without a zeroed mantissa.


A NaN is defined as the maximum exponent (all ones) and a non-zero mantissa. By "no defined pattern for NaNs" I meant there was no fixed value for a single NaN.

I think you can make it feasible with a few features.

Not sure which of these Kotlin has, if any:

1. declaring that a type isn’t null with ! appended.

2. Suppression annotations settable at the file / class / function / line levels to disable the check

3. Facades / decorators that anyone can add, telling the compiler retrofitted additional typing information for existing libraries (the DefinitelyTyped / TypeScript approach)

ad 1: `!` in Kotlin actually means "platform type that could be nullable or not". When you apply `!!` to a value of that type, you assert that it is non-null.

Shameless little self promotion

"Comparing Optionals and Null in Swift, Scala, Ceylon and Kotlin"


Personally I like Monads better than the Kotlin solution. Best is probably the way Swift does it.

My issue with Monads (and why I like Kotlin and Swift nullable types) is that nullable types let the compiler infer nullability, as in the following code:

    fun f(a1: Int?, a2: Int?): Int {
        if (a1 == null || a2 == null) {
            return 0
        }
        // now you can do something like
        return a1 + a2
    }

In contrast, if it didn't, the code would look like:

    fn func1(q: Option<i32>, z: Option<i32>) -> i32 {
        if let Some(q_in) = q {
            if let Some(z_in) = z {
                return q_in + z_in;
            }
        }
        return 0;
    }

Note the deeply nested braces.

I know you have got a TON of replies demonstrating better ways to check if a value is null or not. I believe they are all missing the point. In a language with monads, `Option`s should not be part of the function signature unless it is important to the logic of that function. Your given example should look like:

    fn func1(a: i32, z: i32) -> i32 {
        a + z
    }
I know it's not 1 to 1, but the idea is there. You would then use a tiny bit of glue code to combine all the stuff you have to get what you want. For example, if you have 2 nullables as parameters, you use liftM2; if you have 1 nullable, you use liftM; or perhaps you just want to reduce a structure, so you reach for foldM, etc. If your monadic code has to constantly figure out what monad it is, you aren't buying yourself much and I could see why you don't find them valuable. And if the function explicitly needs an Option, then it must be important and must be taken into consideration by the caller. I just don't think they should force the caller to consider them where not needed.
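The same glue exists for `java.util.Optional` too, or is trivially written: a hand-rolled `liftM2` is one `flatMap` and one `map`. A sketch (the JDK does not ship this helper; `Lift` is a made-up name):

```java
import java.util.Optional;
import java.util.function.BinaryOperator;

class Lift {
    // Apply a plain two-argument function only when both optionals are
    // present; otherwise propagate empty. This is liftM2 specialized to
    // Optional and a single element type.
    static <T> Optional<T> liftM2(BinaryOperator<T> f, Optional<T> a, Optional<T> b) {
        return a.flatMap(x -> b.map(y -> f.apply(x, y)));
    }
}
```

`Lift.liftM2(Integer::sum, Optional.of(1), Optional.of(2)).orElse(0)` yields 3, while passing an empty optional yields the default 0, mirroring the Haskell `fromMaybe 0 (liftM2 (+) a b)` pattern.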

I wanted to mention I often see similar statements with almost the identical code comparison that you made. I believe it has to do with retraining oneself to think functionally instead of imperatively. I'm curious about your background.

For good measure, here's another example in Haskell:

    func1 a b = fromMaybe 0 (liftM2 (+) a b)
And an uglier, but fun, point-free version:

    func1 = (fromMaybe 0 .) . liftM2 (+)
I believe real-world examples would hold up better because the glue code would only be where needed.

> Option's should not be part of the function signature unless it is important to the logic of that function.

Yes! Absolutely. Very important observation. In fact in Haskell Maybe should not be part of the signature if you want to return Nothing whenever one of the arguments is Nothing.

> And an uglier, but fun, point-free version

Yikes, please don't! People might get the wrong idea about how good Haskell is written.

In Scala, you can use `for`, although I think your point remains valid for languages without any sort of monad support:

    def func1(q: Option[Int], z: Option[Int]): Int = {
        (for {
            qval <- q
            zval <- z
        } yield qval + zval).getOrElse(0)
    }

In languages like Haskell and Scala that support `do` / `for` comprehension sugar, this could be done without the nesting:

  fn :: Maybe Int -> Maybe Int -> Int
  fn q z = fromMaybe 0 $ do
      q_in <- q
      z_in <- z
      return (q_in + z_in)

In modern Haskell, we are more likely to use the Applicative instance for Maybe, e.g.

    fromMaybe 0 $ 
        (+) <$> xm <*> ym

Or just:

    fn (Just q) (Just z) = q + z
    fn _ _ = 0

Surely since you have monads you can just use the tools they give you?

    fn = liftM2 (+)

That doesn't do quite the same thing; the original example gets rid of the Maybe, defaulting to 0.

Wouldn't this let me call `fn true false` or `fn "hi" 2`? Or the second declaration takes the inferred type of the arguments from the first one?

This isn't two functions: it's actually one function defined by cases. You can think of it as equivalent to

    fn x y = case (x, y) of
      (Just q, Just z) -> q + z
      (_, _) -> 0
but the case becomes implicit in the repetition of the function name at the top level of indentation.

As those others wrote, you would probably use `for`.

I wrote a little bit about how to work with the Future Monad in "A Little Guide on Using Futures for Web Developers"


Most languages with monads have tools for working with them. In Haskell, for example, even without `liftM2` or `sequence`, you can simply do

    case (xm, ym) of
        (Just x, Just y) -> x + y
        _ -> 0

That's got nothing to do with monads though, you can do that in Swift or Rust which have a monadic option type but don't actually have monads:

    match (xm, ym) {
        (Some(x), Some(y)) => x + y,
        _ => 0,
    }

Oh, come on. Is yielding 0 there really the best approach? It's a copout. You want full referential transparency.

In Rust you can also write:

    if let (Some(q_in), Some(z_in)) = (q, z) {
        q_in + z_in
    } else {
        0
    }

In Swift you can also reference the enum types directly, like `var h:String? = .some("Hello")`. You're also missing `??` for Swift's version of default values.

At least for Android Kotlin programming, with the release of API level 27 + support libraries, it's a lot better. They have placed the annotations all over the place. Nullability annotations have good use cases in Java too.

I would think optional types, with everything else being non-nullable, is the way forward for Java too.

Well, the null issues in Java libraries have always been there. I don't really think it's the fault of Kotlin and its tooling that those bugs are not caught at all by the compiler, as all object types coming from Java are platform types in Kotlin if not annotated.

Maybe in time most of the necessary open source projects will be pure Kotlin ones, but for now I believe annotating Java code with null constraints is certainly a boon for everyone and should be encouraged.

I agree that the tooling is getting better. Some of the popular libraries are still figuring out the right patterns to apply to make them more "Kotlin friendly". Tools like the Checker Framework [1], JSR 308 [2], and compiler plugins like Traute [3] can also help with the safety issues on the Java side. Kotlin is beginning to understand these more, and some of the basic annotations are supported, but the tooling is only going to improve with all of the backing that Kotlin currently has.

[1]: https://checkerframework.org/

[2]: https://jcp.org/en/jsr/detail?id=308

[3]: https://github.com/denis-zhdanov/traute

>Turns out Swift does not have null pointers. At all!

Mostly true under the hood; optional value types are tagged unions (I didn't know this before reading this article and checking for myself), but optional reference types are still, as you'd expect, nullable pointers.[1]

[1] https://godbolt.org/g/QZk6wL

The java compiler might not know if a variable is safe to pass as null, but it is not unknowable. I think on every thread similar to this that I am on, I have plugged Coverity as a tool that can and will flag spots like this. That is, it won't just say "it is possible this will be a null pointer exception," instead it will say "setting this value to null will lead to this dereference of a null." It can look magical sometimes, because they will trace pretty deeply into your code.

Which is just my way of saying over and over again that tooling can improve that does not require a complete rewrite of your software.

One major difference is that Kotlin can tell you about nullability problems from simply analysing the method body, without going any further down the execution tree. This is a lot more reliable, and fast enough for real-time IDE error reporting, compared with doing symbolic execution and diving into every method call to figure out all the possible paths.

Same with Rust: it can determine multi threading issues from the surface layer, whereas in other languages in order to detect data races and contention you need some serious tracing analysis as well as doing a lot of runtime profiling.

Ah, please don't take my point as dismissing the new tools, either. Indeed, I would hope that a synergy between all tools leads to better tools for us all. I just have a pet peeve for the attitude of jumping rather quickly to "rewrite it all in a new language" and not realizing you can bring a lot of this into existing codebases.

The issue on the Java side is more one of compatibility than technical feasibility. IDEA and Eclipse have been able to infer potential null issues for a while now, but it's difficult to retrofit those features into a compiler that must also accept code written 15 years ago.

It sounds like you’re claiming that Coverity has a solution to the Halting Problem? That would indeed be magical.

Uh, no? Null pointers are far from the halting problem, though.

And to be clear, there are still limitations. It can't say if any value you pass into a function that is not part of the current compilation is safe. For somewhat obvious reasons.

But for a large large class of bugs, using advanced tooling can go a long way. As evidenced by the power of the advanced tooling new languages bring into the compilers. :)

The claim seems to be that the tool will always tell you absolutely whether a null-dereference will happen, and never say “maybe.” The Halting Problem is trivially reducible to this (just put an intentional null-deref in front of each halt).

Not necessarily the same point being made. At least, they are subtly different, to my understanding.

It can tell you that, "if this line is reached, and it was called from this path (with evidence of how this could happen), it will dereference a null." It is not saying that "running this program is guaranteed to give you a null dereference."

The halting problem is more in line with "this program will terminate on any possible inputs", which is more expansive. It is trivial to say that you can prove some programs won't terminate on a particular input. Question is if you can do it for all inputs, no?

Rice's theorem is a bit more direct here: C is Turing complete, some C programs dereference null pointers, and some don't. Therefore, the question of whether an arbitrary C program dereferences a null pointer is in general undecidable.

But I don't think the OP was claiming it was possible to do this perfectly, just possible to write very useful tools. You will always have either false positives or false negatives (or, I suppose, inputs where you just hang).

Turing completeness is the bane of static analysis, but that doesn't make it a fruitless endeavor.

Reminds me of how C compilers will bitch if a variable is unused. That's basically the same problem. And sometimes the compiler will flag something that won't happen because the code path is a phantom[1].

[1] Recent headline 'Phantom Code Paths Found Harmful'

I think you may be conflating this with the Halting Problem. That problem asks whether it can be determined that a program, given a set of inputs, will eventually halt. Attempting to use a null pointer can be detected deterministically at compile time, but that doesn't tell you whether your program will halt.

A tool that can reliably tell whether a given program will hit a null deference or not can also trivially be repurposed to solve the Halting Problem. Thus, no such perfect deref tool can exist. The best you can claim is that there’s a tool with few false positives and false negatives. But the original post seems to claim that there’s a perfect one.

No, it can't. You are just playing around with a problem you don't understand (null checks have nothing to do with the halting problem; it's always possible to know that a value will never be null in Java at compile time as long as you have all the source code and no dynamic libraries loaded at runtime)... unless you can actually show the proof of that, which would be really interesting to see.

Sure, for an arbitrary program, that is. One that is annotated to specify whether a function returns/accepts null is a subset of this that is solvable.

The halting problem is used as a means to stick one's head in the sand. The halting problem imo is fundamentally uninteresting. A question that's almost as useful and is answerable is whether a program might not halt. Using your mapping to nullability, this tries to answer whether a pointer might be null. That's all the tools are trying to do and the halting problem doesn't get in the way.

It doesn't sound like the OP is claiming that at all, Coverity and tools like it use a technique called Symbolic Execution [0]

[0]. https://en.wikipedia.org/wiki/Symbolic_execution

While the halting problem can be pretty important in some cases, it's not very relevant in most day to day code. If you're not sure if your code will ever terminate, that usually means that you're doing something wrong.

So sure, sometimes Coverity will fail because of the halting problem, and in those cases it can give an appropriate error message. Most of the time, though, it'll work just fine and be a very useful tool.

Only the experimental Coverity Hypercomputing branch ;-)

After coding enough Crystal, which instead of optionals has raw and anonymous sum types, I can say this: I don't see the point of a type system where nil is a special case. I don't want optionals, I want to have nil as a totally separate type.

Optionals are sum types, too! No special-casing there. It's just that Crystal has anonymous sum types, I guess, which can be a lot nicer to look at.

Optional is a special case of a sum type where one of the types is Null. If you have Null as a type whose only value is null, then an Optional is just a T|Null. No need for Optional to exist.

Raw being something like `var x : int | str`, and anonymous like `var x = rand() ? 1 : ''` (where int|str is inferred)? Or do you mean something else?

Yes, that is what I meant, yes.

I don't quite get the problem as stated in the article. I assume it's a contrived example and the real (kotlin) code rather looks like:

    val foo: T? = null
Here foo is defined as nullable, hence the problem. If you make it

    val foo: T = null
the kotlin compiler will bark on foo being null and you'll never have a runtime error.

I think the motivating example should be described better but maybe I misunderstand it.

> I don't quite get the problem as stated in the article.

> Here foo is defined as nullable, hence the problem.

How do you know it's "the problem"? The answer is that you don't until it blows up in your face, because this is a Java interop call.

And of course the local could come from an other call into Java which returns a nullable reference, or it could make complete sense for it to be nullable.

Remove part of your sentence, and I think you've answered your own question.

> How do you know it's "the problem"? [...] because this is a Java interop call.

If you're making platform calls, you know you have to deal with nulls (or at least, you should, in my opinion). So you declare returns nullable and avoid passing nulls unless that's documented to be okay.

Kotlin's Elvis operator makes this really easy:

    val a: String? = null
    val b: String = a ?: "alternative"

The problem isn't with `foo`, it's with how Kotlin's Java interop treats `just`.

Because Java doesn't have a language-level concept of non-nullability, Kotlin's interpretation of the type signature for `just` must accept nulls (sure enough, non-nullability is enforced at run time inside RxJava). Kotlin trains you to expect nullability to be a compile-time constraint, and has borderline frictionless interop with Java. Put the two together, and it's very easy to walk into a runtime-error trap.

I understand that. I still think the example isn't well chosen.


I think the point is different. The author is saying Java interop is fraught with runtime nullability issues that can't be caught at compile time for Java libs sans annotations. It has nothing to do with how it's set in Kotlin. The author says that with nullability annotations on the Java side, it'd bark at your first example too, which is the real point.

The author introduces his example with "Let’s take a look at a very short Kotlin + RxJava sample." But IMHO his sample describes a Java-only problem (and solution). He could have written a quite similar article without any mentioning of kotlin.

> He could have written a quite similar article without any mentioning of kotlin

Almost. The point about the language itself respecting the Java nullability annotations is apt.

Assume that there is a valid reason for the "foo" to be allowed as null, but that you wanted to guard against allowing calls to Observable.just to take in a null.

For example, "foo" is a variable at your boundary between the user and the main logic of the system, and the Observable is in the heart of your code and you expect that it should never be passed nulls.

Now, to your point, you could just make sure that you filter the "nullable" foo through a non-nullable "bar." Which is ultimately what you will do.

The article's point is that if the Observable part had been written in Kotlin, this would have likely been the default. Adding a marker to say "nullable" is how you have to do it in Kotlin. In Java, it is the opposite. You have to add a marker saying non-nullable.

At least, that is my reading.

I wrote a post a few years ago comparing nullable types to maybe/optional. https://www.lucidchart.com/techblog/2015/08/31/the-worst-mis...

A lot of it is a more specific case of sum types vs union types.

It seems like a good article but the grammar makes it difficult to read at times.

Aren't null pointers just a variation on the Maybe Monad?

Maybe is just another Optional type, but with a much weirder name, so that's pretty obvious. But this makes me wonder - does Haskell optimize `Maybe`-like types down to a null-pointer, like Rust does?

Dunno, but I'd have thought that you can make None look like a null pointer, but Some(p) needs to be tagged in some way.

Actually, having Some(x) be just a pointer to x seems to work fine without an explicit tag, as dragonwriter says below.

null pointers can be encoded as an option type[0] (that's basically what languages like Swift or Rust do, they eschew nullable pointers and wrap non-nullable pointers in an option type instead), but they are not because

1. you don't have the attending type-safety of knowing that some pointers can't be null

2. the compiler requiring checking for nullity before every pointer access would be horrendously unwieldy

[0] which can then be compiled back to a regular nullable pointer at runtime

You're forced to check for a Nothing value, whereas with a null pointer you can happily use it like a normal value.

No, because Option/Maybe can be nested; nulls can't.

Nulls can be nested if you are using a language with explicit pointer types.


  Int*** x;
Can be a lot like

  Maybe Maybe Maybe Int
It's just that in languages where a type like Integer really means "an integer or a null", the nullability doesn't nest, and no one wants to deal with naked pointers.

.online is a terrible TLD.
